
A saddlepoint of an $n \times n$ matrix $A$ is an entry of $A$ that is a maximum in its row and a minimum in its column. Knuth (1968) gave several different algorithms for finding a saddlepoint. The worst-case running time of these algorithms is $\Theta(n^2)$, and Llewellyn, Tovey, and Trick (1988) showed that this cannot be improved, as in the worst case all entries of $A$ may need to be queried. A strict saddlepoint of $A$ is an entry that is the strict maximum in its row and the strict minimum in its column. The strict saddlepoint (if it exists) is unique, and Bienstock, Chung, Fredman, Sch\"affer, Shor, and Suri (1991) showed that it can be found in time $O(n \log{n})$, where a dominant runtime contribution is sorting the diagonal of the matrix. This upper bound has not been improved since 1991. In this paper we show that the strict saddlepoint can be found in $O(n \log^{*}{n})$ time, where $\log^{*}$ denotes the very slowly growing iterated logarithm function, coming close to the lower bound of $\Omega(n)$. In fact, we can also compute, within the same runtime, the value of a non-strict saddlepoint, assuming one exists. Our algorithm is based on a simple recursive approach, a feasibility test inspired by searching in sorted matrices, and a relaxed notion of saddlepoint.
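To make the definition concrete, here is a minimal brute-force reference check in Python, not the paper's $O(n \log^{*}{n})$ algorithm: for each row, take its unique maximum and test whether it is also the unique minimum of its column. The function name and list-of-lists representation are illustrative choices, not taken from the paper.

```python
def strict_saddlepoint(A):
    """Return (i, j) of the strict saddlepoint of A, or None.

    Brute-force O(n^2) check: the strict saddlepoint must be the
    unique maximum of its row and the unique minimum of its column.
    """
    for i, row in enumerate(A):
        m = max(row)
        if row.count(m) != 1:
            continue  # row maximum is not strict
        j = row.index(m)
        col = [A[k][j] for k in range(len(A))]
        # m occurs in the column at row i; it is a strict column
        # minimum iff it is the minimum and occurs only once there.
        if m == min(col) and col.count(m) == 1:
            return (i, j)
    return None
```

For example, in $\begin{pmatrix}1 & 2\\ 3 & 4\end{pmatrix}$ the entry $2$ is the strict maximum of its row and the strict minimum of its column, while $\begin{pmatrix}2 & 1\\ 1 & 2\end{pmatrix}$ has no strict saddlepoint.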

Related content

Let $X = \{X_{u}\}_{u \in U}$ be a real-valued Gaussian process indexed by a set $U$. It can be thought of as an undirected graphical model with every random variable $X_{u}$ serving as a vertex. We characterize this graph in terms of the covariance of $X$ through its reproducing kernel property. Unlike other characterizations in the literature, our characterization does not restrict the index set $U$ to be finite or countable, and hence can be used to model the intrinsic dependence structure of stochastic processes in continuous time/space. Consequently, this characterization is not in terms of the zero entries of an inverse covariance. This poses novel challenges for the problem of recovery of the dependence structure from a sample of independent realizations of $X$, also known as structure estimation. We propose a methodology that circumvents these issues, by targeting the recovery of the underlying graph up to a finite resolution, which can be arbitrarily fine and is limited only by the available sample size. The recovery is shown to be consistent so long as the graph is sufficiently regular in an appropriate sense. We derive corresponding convergence rates and finite sample guarantees. Our methodology is illustrated by means of a simulation study and two data analyses.

For a set $Q$ of points in the plane and a real number $\delta \ge 0$, let $\mathbb{G}_\delta(Q)$ be the graph defined on $Q$ by connecting each pair of points at distance at most $\delta$. We consider the connectivity of $\mathbb{G}_\delta(Q)$ in the best-case scenario when the locations of a few of the points are uncertain, but for each uncertain point we know a line segment that contains it. More precisely, we consider the following optimization problem: given a set $P$ of $n-k$ points in the plane and a set $S$ of $k$ line segments in the plane, find the minimum $\delta\ge 0$ with the property that we can select one point $p_s\in s$ for each segment $s\in S$ such that the corresponding graph $\mathbb{G}_\delta ( P\cup \{ p_s\mid s\in S\})$ is connected. It is known that the problem is NP-hard. We provide an algorithm to exactly compute an optimal solution in $O(f(k) n \log n)$ time, for a computable function $f(\cdot)$. This implies that the problem is FPT when parameterized by $k$. The best previous algorithm uses $O((k!)^k k^{k+1}\cdot n^{2k})$ time and computes the solution only up to fixed precision.
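In the special case $k = 0$ (no uncertain points), the minimum connecting $\delta$ is simply the longest edge of a minimum spanning tree of the point set. The sketch below computes it with Prim's algorithm in $O(n^2)$ time; the paper's FPT algorithm additionally optimizes over the choice of one point per segment, which this sketch does not attempt.

```python
import math

def min_connect_delta(points):
    """Smallest delta such that G_delta(points) is connected, i.e. the
    longest edge of a minimum spanning tree (Prim's algorithm, O(n^2)).
    """
    n = len(points)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    in_tree = [False] * n
    best = [math.inf] * n   # cheapest connection of each vertex to the tree
    best[0] = 0.0
    delta = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=best.__getitem__)
        in_tree[u] = True
        delta = max(delta, best[u])  # bottleneck edge so far
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(points[u], points[v]))
    return delta
```

For collinear points at $x = 0, 1, 3$ the answer is $2$: with any smaller $\delta$, the last point is isolated.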

Given a collection of $m$ sets from a universe $\mathcal{U}$, the Maximum Set Coverage problem consists of finding $k$ sets whose union has largest cardinality. This problem is NP-hard, but the solution can be approximated by a polynomial-time algorithm up to a factor $1-1/e$. However, this algorithm does not scale well with the input size. In a streaming context, practical high-quality solutions are found, but with space complexity that scales linearly with respect to the size of the universe $|\mathcal{U}|$. However, one randomized streaming algorithm has been shown to produce a $1-1/e-\varepsilon$ approximation of the optimal solution with a space complexity that scales only poly-logarithmically with respect to $m$ and $|\mathcal{U}|$. In order to achieve such a low space complexity, the authors used a technique called subsampling, based on $k$-wise independent hash functions, and $F_0$-sketching. This article focuses on this sublinear-space algorithm and introduces methods to reduce the time cost of subsampling. Firstly, we give some optimizations that do not alter the space complexity, number of passes, or approximation quality of the original algorithm. In particular, we reanalyze the error bounds to show that the original independence factor of $\Omega(\varepsilon^{-2} k \log m)$ can be fine-tuned to $\Omega(k \log m)$. Secondly, we show that $F_0$-sketching can be replaced by a much simpler mechanism. Finally, our experimental results show that even a pairwise-independent hash-function sampler does not produce worse solutions than the original algorithm, while running faster by several orders of magnitude.

$\textbf{Background and aims}$: Artificial Intelligence (AI) Computer-Aided Detection (CADe) is commonly used for polyp detection, but data seen in clinical settings can differ from the data used in model training. Few studies evaluate how well CADe detectors perform on colonoscopies from countries not seen during training, and none are able to evaluate performance without collecting expensive and time-intensive labels. $\textbf{Methods}$: We trained a CADe polyp detector on Israeli colonoscopy videos (5004 videos, 1106 hours) and evaluated on Japanese videos (354 videos, 128 hours) by measuring the True Positive Rate (TPR) versus false alarms per minute (FAPM). We introduce a colonoscopy dissimilarity measure called "MAsked mediCal Embedding Distance" (MACE) to quantify differences between colonoscopies, without labels. We evaluated CADe on all Japan videos and on those with the highest MACE. $\textbf{Results}$: MACE correctly quantifies that narrow-band imaging (NBI) and chromoendoscopy (CE) frames are less similar to Israel data than Japan whitelight (bootstrapped z-test, |z| > 690, p < $10^{-8}$ for both). Despite differences in the data, CADe performance on Japan colonoscopies was non-inferior to Israel ones without additional training (TPR at 0.5 FAPM: 0.957 and 0.972 for Israel and Japan; TPR at 1.0 FAPM: 0.972 and 0.989 for Israel and Japan; superiority test t > 45.2, p < $10^{-8}$). Despite not being trained on NBI or CE, TPR on those subsets was non-inferior to Japan overall (non-inferiority test t > 47.3, p < $10^{-8}$, $\delta$ = 1.5% for both). $\textbf{Conclusion}$: Differences that prevent CADe detectors from performing well in non-medical settings do not degrade the performance of our AI CADe polyp detector when applied to data from a new country. MACE can help medical AI models internationalize by identifying the most "dissimilar" data on which to evaluate models.

We consider the problem of chance constrained optimization where it is sought to optimize a function and satisfy constraints, both of which are affected by uncertainties. Real-world instances of this problem are particularly challenging because of their inherent computational cost. To tackle such problems, we propose a new Bayesian optimization method. It applies to the situation where the uncertainty comes from some of the inputs, so that it becomes possible to define an acquisition criterion in the joint controlled-uncontrolled input space. The main contribution of this work is an acquisition criterion that accounts for both the average improvement in objective function and the constraint reliability. The criterion is derived following the Stepwise Uncertainty Reduction logic and its maximization provides both optimal controlled and uncontrolled parameters. Analytical expressions are given to efficiently calculate the criterion. Numerical studies on test functions are presented. It is found through experimental comparisons with alternative sampling criteria that the match between the sampling criterion and the problem contributes to the efficiency of the overall optimization. As a side result, an expression for the variance of the improvement is given.

We study the spectral properties of flipped Toeplitz matrices of the form $H_n(f)=Y_nT_n(f)$, where $T_n(f)$ is the $n\times n$ Toeplitz matrix generated by the function $f$ and $Y_n$ is the $n\times n$ exchange (or flip) matrix having $1$ on the main anti-diagonal and $0$ elsewhere. In particular, under suitable assumptions on $f$, we establish an alternating sign relationship between the eigenvalues of $H_n(f)$, the eigenvalues of $T_n(f)$, and the quasi-uniform samples of $f$. Moreover, after fine-tuning a few known theorems on Toeplitz matrices, we use them to provide localization results for the eigenvalues of $H_n(f)$. Our study is motivated by the convergence analysis of the minimal residual (MINRES) method for the solution of real non-symmetric Toeplitz linear systems of the form $T_n(f)\mathbf x=\mathbf b$ after pre-multiplication of both sides by $Y_n$, as suggested by Pestana and Wathen.
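The key structural fact behind this construction is easy to verify numerically: pre-multiplying a real Toeplitz matrix by the flip matrix $Y_n$ reverses its row order and yields a Hankel, hence symmetric, matrix, which is what makes MINRES applicable. A small pure-Python sketch (helper names are illustrative; `toeplitz` follows the usual first-column/first-row convention):

```python
def toeplitz(c, r):
    """Toeplitz matrix with first column c and first row r (r[0] is ignored;
    the (0, 0) entry is taken from c)."""
    n = len(c)
    return [[c[i - j] if i >= j else r[j - i] for j in range(n)]
            for i in range(n)]

def flipped(T):
    """H = Y T: the exchange matrix Y reverses the rows of T.
    For a real Toeplitz T, entry H[i][j] depends only on i + j,
    so H is Hankel and therefore symmetric."""
    return T[::-1]
```

Symmetry of $H_n = Y_n T_n$ is exactly the observation of Pestana and Wathen that motivates solving $Y_n T_n(f)\mathbf{x} = Y_n\mathbf{b}$ with MINRES.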

If $G$ is a graph, $A$ and $B$ its induced subgraphs, and $f\colon A\to B$ an isomorphism, we say that $f$ is a partial automorphism of $G$. In 1992, Hrushovski proved that graphs have the extension property for partial automorphisms (EPPA, also called the Hrushovski property), that is, for every finite graph $G$ there is a finite graph $H$, its EPPA-witness, such that $G$ is an induced subgraph of $H$ and every partial automorphism of $G$ extends to an automorphism of $H$. The EPPA number of a graph $G$, denoted by $\mathop{\mathrm{eppa}}\nolimits(G)$, is the smallest number of vertices of an EPPA-witness for $G$, and we put $\mathop{\mathrm{eppa}}\nolimits(n) = \max\{\mathop{\mathrm{eppa}}\nolimits(G) : \lvert G\rvert = n\}$. In this note we review the state of the area, prove several lower bounds (in particular, we show that $\mathop{\mathrm{eppa}}\nolimits(n)\geq \frac{2^n}{\sqrt{n}}$, thereby identifying the correct base of the exponential) and pose many open questions. We also briefly discuss EPPA numbers of hypergraphs, directed graphs, and $K_k$-free graphs.

We consider the Distinct Shortest Walks problem. Given two vertices $s$ and $t$ of a graph database $\mathcal{D}$ and a regular path query, enumerate all walks of minimal length from $s$ to $t$ that carry a label that conforms to the query. Usual theoretical solutions turn out to be inefficient when applied to graph models that are closer to real-life systems, in particular because edges may carry multiple labels. Indeed, known algorithms may repeat the same answer exponentially many times. We propose an efficient algorithm for multi-labelled graph databases. The preprocessing runs in $O(|\mathcal{D}|\times|\mathcal{A}|)$ and the delay between two consecutive outputs is in $O(\lambda\times|\mathcal{A}|)$, where $\mathcal{A}$ is a nondeterministic automaton representing the query and $\lambda$ is the minimal length. The algorithm can handle $\varepsilon$-transitions in $\mathcal{A}$ or queries given as regular expressions at no additional cost.
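The minimal length $\lambda$ itself can be computed by the standard product construction: run a BFS over pairs (graph vertex, automaton state). The sketch below illustrates this building block only, not the paper's enumeration algorithm with its delay guarantee; the dictionary-based encodings of the graph and NFA are illustrative choices.

```python
from collections import deque

def shortest_conforming_walk_length(edges, nfa, s, t, q0, accepting):
    """Length of a shortest walk from s to t whose label word is accepted
    by the NFA, via BFS on the product of the graph and the automaton.

    `edges` maps a vertex to a list of (label, neighbor) pairs (the same
    edge may appear under several labels); `nfa` maps (state, label) to a
    set of successor states. Returns None if no conforming walk exists.
    """
    start = (s, q0)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (v, q), d = queue.popleft()
        if v == t and q in accepting:
            return d
        for label, w in edges.get(v, []):
            for q2 in nfa.get((q, label), ()):
                if (w, q2) not in seen:
                    seen.add((w, q2))
                    queue.append(((w, q2), d + 1))
    return None
```

Because the BFS deduplicates product states rather than walks, a multi-labelled edge is explored once per useful automaton transition, which is the phenomenon the paper's enumeration algorithm must handle to avoid repeating answers.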

For a finite abstract simplicial complex $\Sigma$, an initial realization $\alpha$ in $\mathbb{E}^d$, and desired edge lengths $L$, we give practical sufficient conditions for the existence of a non-self-intersecting perturbation of $\alpha$ realizing the lengths $L$. We provide software to verify these conditions by computer and optionally assist in the creation of an initial realization from abstract simplicial data. Applications include proving the existence of a planar embedding of a graph with specified edge lengths or proving the existence of polyhedra (or higher-dimensional polytopes) with specified edge lengths.

The expressivity of Graph Neural Networks (GNNs) can be entirely characterized by appropriate fragments of first-order logic. Namely, any query of the two-variable fragment of graded modal logic (GC2) interpreted over labeled graphs can be expressed using a GNN whose size depends only on the depth of the query. As pointed out by [Barceló et al., 2020; Grohe, 2021], this description holds for a family of activation functions, leaving the possibility of a hierarchy of logics expressible by GNNs depending on the chosen activation function. In this article, we show that such a hierarchy indeed exists by proving that GC2 queries cannot be expressed by GNNs with polynomial activation functions. This implies a separation between polynomial and popular non-polynomial activations (such as Rectified Linear Units) and answers an open question formulated by [Grohe, 2021].
