
We present approximation algorithms for the Fault-tolerant $k$-Supplier with Outliers ($\mathsf{F}k\mathsf{SO}$) problem. This is a common generalization of two known problems -- $k$-Supplier with Outliers, and Fault-tolerant $k$-Supplier -- each of which generalizes the well-known $k$-Supplier problem. In the $k$-Supplier problem the goal is to serve $n$ clients $C$ by opening $k$ facilities from a set of possible facilities $F$; the objective is the maximum distance that any client must travel to reach an open facility. In $\mathsf{F}k\mathsf{SO}$, each client $v$ has a fault-tolerance $\ell_v$ and now requires $\ell_v$ facilities to serve it, so client $v$'s contribution to the objective function is its distance to the $\ell_v^{\text{th}}$ closest open facility. Furthermore, we are allowed to choose $m$ clients that we will serve, and only those clients contribute to the objective function, while the remaining $n-m$ are considered outliers. Our main result is a $\min\{4t-1,2^t+1\}$-approximation for the $\mathsf{F}k\mathsf{SO}$ problem, where $t$ is the number of distinct values of $\ell_v$ that appear in the instance. At $t=1$, i.e. in the case where the $\ell_v$'s are uniformly some $\ell$, this yields a $3$-approximation, improving upon the $11$-approximation given for the uniform case by Inamdar and Varadarajan [2020], who also introduced the problem. Our result for the uniform case matches the tight $3$-approximations that exist for $k$-Supplier, $k$-Supplier with Outliers, and Fault-tolerant $k$-Supplier. Our key technical contribution is an application of the round-or-cut schema to $\mathsf{F}k\mathsf{SO}$. Guided by an LP relaxation, we reduce to a simpler optimization problem, which we can solve to obtain distance bounds for the "round" step and valid inequalities for the "cut" step.
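
For concreteness, the following minimal Python sketch evaluates the $\mathsf{F}k\mathsf{SO}$ objective of a fixed set of open facilities (function and variable names are illustrative; this shows only the objective described above, not the approximation algorithm).

```python
import heapq

def fkso_objective(dist, open_facilities, ell, m):
    """Objective value of a Fault-tolerant k-Supplier with Outliers solution.

    dist[v][f]      -- distance from client v to facility f
    open_facilities -- the set of opened facilities (at most k of them)
    ell[v]          -- fault-tolerance of client v
    m               -- number of clients that must be served; the rest are outliers
    """
    costs = []
    for v, lv in ell.items():
        d_to_open = sorted(dist[v][f] for f in open_facilities)
        if len(d_to_open) < lv:
            continue  # v cannot be served by this solution
        costs.append(d_to_open[lv - 1])  # distance to the ell_v-th closest open facility
    if len(costs) < m:
        return float("inf")  # fewer than m clients can be served at all
    return heapq.nsmallest(m, costs)[-1]  # serve the m cheapest clients

# Tiny example: two open facilities, three clients, one allowed outlier (m = 2).
dist = {"a": {"f1": 1, "f2": 4}, "b": {"f1": 2, "f2": 2}, "c": {"f1": 9, "f2": 9}}
ell = {"a": 2, "b": 1, "c": 1}
print(fkso_objective(dist, {"f1", "f2"}, ell, m=2))  # client c is the outlier; objective = 4
```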

Related content

We present a new distance oracle in the fully dynamic setting: given a weighted undirected graph $G=(V,E)$ with $n$ vertices undergoing both edge insertions and deletions, and an arbitrary parameter $\epsilon$ where $1/\log^{c} n<\epsilon<1$ and $c>0$ is a small constant, we can deterministically maintain a data structure with $n^{\epsilon}$ worst-case update time that, given any pair of vertices $(u,v)$, returns a $2^{{\rm poly}(1/\epsilon)}$-approximate distance between $u$ and $v$ in ${\rm poly}(1/\epsilon)\log\log n$ query time. Our algorithm significantly advances the state of the art in two respects, both for fully dynamic and even for decremental algorithms. First, no existing algorithm with worst-case update time guarantees an $o(n)$-approximation while also achieving $n^{2-\Omega(1)}$ update time and $n^{o(1)}$ query time, whereas our algorithm offers a constant $O_{\epsilon}(1)$-approximation with $n^{\epsilon}$ update time and $O_{\epsilon}(\log \log n)$ query time. Second, even if amortized update time is allowed, ours is the first deterministic constant-approximation algorithm with $n^{1-\Omega(1)}$ update and query time. The best previous result in this direction is the recent deterministic distance oracle of Chuzhoy and Zhang [STOC 2023], which achieves an approximation of $(\log\log n)^{2^{O(1/\epsilon^{3})}}$ with amortized update time of $n^{\epsilon}$ and query time of $2^{{\rm poly}(1/\epsilon)}\log n\log\log n$. We obtain our result by dynamizing tools related to length-constrained expanders [Haeupler-R\"acke-Ghaffari, STOC 2022; Haeupler-Hershkowitz-Tan, 2023; Haeupler-Huebotter-Ghaffari, 2022]. Our technique completely bypasses the 40-year-old Even-Shiloach tree, which has remained the most pervasive tool in the area but is inherently amortized.

We study the approximability of the MaxCut problem in the presence of predictions. Specifically, we consider two models: in the noisy predictions model, for each vertex we are given its correct label in $\{-1,+1\}$ with some unknown probability $1/2 + \epsilon$, and the other (incorrect) label otherwise. In the more informative partial predictions model, for each vertex we are given its correct label with probability $\epsilon$ and no label otherwise. We assume only pairwise independence between vertices in both models. We show how these predictions can be used to improve on the worst-case approximation ratios for this problem. Specifically, we give an algorithm that achieves an $\alpha + \widetilde{\Omega}(\epsilon^4)$-approximation for the noisy predictions model, where $\alpha \approx 0.878$ is the MaxCut threshold. This result carries over to the partial predictions model, for which we additionally give a $\beta + \Omega(\epsilon)$-approximation, where $\beta \approx 0.858$ is the approximation ratio for MaxBisection given by Raghavendra and Tan. This answers a question posed by Ola Svensson in his plenary talk at SODA'23.
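
The two prediction models are easy to simulate; the sketch below (in Python, using full independence in place of the weaker pairwise-independence assumption, purely for illustration) generates predictions in both models. It is not the approximation algorithm itself.

```python
import numpy as np

def noisy_predictions(true_labels, eps, rng):
    """Noisy predictions: each vertex receives its correct label in {-1, +1}
    with probability 1/2 + eps, and the flipped label otherwise."""
    correct = rng.random(len(true_labels)) < 0.5 + eps
    return np.where(correct, true_labels, -true_labels)

def partial_predictions(true_labels, eps, rng):
    """Partial predictions: each vertex receives its correct label with
    probability eps, and no label (encoded as 0) otherwise."""
    revealed = rng.random(len(true_labels)) < eps
    return np.where(revealed, true_labels, 0)

rng = np.random.default_rng(0)
labels = np.array([+1, -1, -1, +1, +1])   # hypothetical ground-truth sides of a cut
print(noisy_predictions(labels, eps=0.2, rng=rng))
print(partial_predictions(labels, eps=0.2, rng=rng))
```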

We present an algorithm which can generate all pairwise non-isomorphic $K_2$-hypohamiltonian graphs, i.e. non-hamiltonian graphs in which the removal of any pair of adjacent vertices yields a hamiltonian graph, of a given order. We introduce new bounding criteria specifically designed for $K_2$-hypohamiltonian graphs, allowing us to improve upon earlier computational results. Specifically, we characterise the orders for which $K_2$-hypohamiltonian graphs exist and improve existing lower bounds on the orders of the smallest planar and the smallest bipartite $K_2$-hypohamiltonian graphs. Furthermore, we describe a new operation for creating $K_2$-hypohamiltonian graphs that preserves planarity under certain conditions, and use it to prove the existence of a planar $K_2$-hypohamiltonian graph of order $n$ for every integer $n\geq 134$. Additionally, motivated by a theorem of Thomassen on hypohamiltonian graphs, we show the existence of $K_2$-hypohamiltonian graphs with large maximum degree and size.
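
A brute-force check of the defining property can be sketched as follows (viable only for very small graphs; the generation algorithm in the paper is of course far more refined).

```python
from itertools import permutations

def is_hamiltonian(vertices, edges):
    """Brute-force Hamiltonian-cycle test; edges is a set of frozenset pairs."""
    vs = sorted(vertices)
    if len(vs) < 3:
        return False
    first, rest = vs[0], vs[1:]
    for perm in permutations(rest):
        cycle = (first,) + perm
        if all(frozenset((cycle[i], cycle[(i + 1) % len(cycle)])) in edges
               for i in range(len(cycle))):
            return True
    return False

def is_k2_hypohamiltonian(vertices, edges):
    """G is K2-hypohamiltonian iff G is non-hamiltonian and removing the two
    endpoints of any edge yields a hamiltonian graph."""
    vertices = set(vertices)
    edges = {frozenset(e) for e in edges}
    if is_hamiltonian(vertices, edges):
        return False
    for e in edges:
        u, v = tuple(e)
        remaining_vertices = vertices - {u, v}
        remaining_edges = {f for f in edges if u not in f and v not in f}
        if not is_hamiltonian(remaining_vertices, remaining_edges):
            return False
    return True
```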

Years ago Zeev Rudnick defined the ${\lambda}$-Poisson generic sequences as the infinite sequences of symbols in a finite alphabet where the number of occurrences of long words in the initial segments follows the Poisson distribution with parameter ${\lambda}$. Although almost all sequences, with respect to the uniform measure, are Poisson generic, no explicit instance has yet been given. In this note we give a construction of an explicit ${\lambda}$-Poisson generic sequence over any alphabet and for any positive ${\lambda}$, except in the case of the two-symbol alphabet, where it is required that ${\lambda}$ be less than or equal to the natural logarithm of $2$. Since ${\lambda}$-Poisson genericity implies Borel normality, the constructed sequences are Borel normal. The same construction provides explicit instances of Borel normal sequences that are not ${\lambda}$-Poisson generic.

We present a dynamic algorithm for maintaining a $(1+\epsilon)$-approximate maximum eigenvector and eigenvalue of a positive semi-definite matrix $A$ undergoing \emph{decreasing} updates, i.e., updates which may only decrease eigenvalues. Given a vector $v$ updating $A\gets A-vv^{\top}$, our algorithm takes $\tilde{O}(\mathrm{nnz}(v))$ amortized update time, i.e., polylogarithmic time per non-zero entry of the update vector. Our technique is based on a novel analysis of the influential power method in the dynamic setting. The two previous families of techniques have the following drawbacks: (1) algebraic techniques can maintain exact solutions, but their update time is at least polynomial per non-zero entry, and (2) sketching techniques admit polylogarithmic update time but suffer from a crude additive approximation. Our algorithm assumes an oblivious adversary. Interestingly, we show that any algorithm with polylogarithmic update time per non-zero entry that works against an adaptive adversary and satisfies an additional natural property would imply a breakthrough for checking psd-ness of matrices in $\tilde{O}(n^{2})$ time, instead of $O(n^{\omega})$ time.
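
As a point of reference, the static primitive being dynamized is the classical power method; the following sketch (not the paper's dynamic algorithm) returns an approximate top eigenpair of a symmetric PSD matrix.

```python
import numpy as np

def power_method(A, iters=200, tol=1e-9, seed=0):
    """Classical power method: approximate largest eigenvalue/eigenvector of a
    symmetric PSD matrix A. The paper analyzes this primitive under decreasing
    updates A <- A - v v^T; here we only show the static building block."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(iters):
        y = A @ x
        ny = np.linalg.norm(y)
        if ny < tol:
            return 0.0, x            # A is (numerically) zero in this direction
        x = y / ny
        new_lam = x @ A @ x          # Rayleigh quotient estimate of the top eigenvalue
        if abs(new_lam - lam) <= tol * max(new_lam, 1.0):
            return new_lam, x
        lam = new_lam
    return lam, x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
print(power_method(A)[0])            # ~3.618, the largest eigenvalue of A
```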

The $L_{\infty}$ star discrepancy is a very well-studied measure used to quantify the uniformity of a point set distribution. Constructing optimal point sets for this measure is considered a very hard problem in the discrepancy community; indeed, optimal point sets are, up to now, known only for $n\leq 6$ in dimension 2 and $n \leq 2$ in higher dimensions. In this paper we introduce mathematical programming formulations to construct point sets with as low an $L_{\infty}$ star discrepancy as possible. First, we present two models to construct optimal sets and show that there always exist optimal sets in which no two points share a coordinate. We then provide extensions of our models to other measures, such as the extreme and periodic discrepancies. For the $L_{\infty}$ star discrepancy, we are able to compute optimal point sets for up to 21 points in dimension 2 and for up to 8 points in dimension 3. For $d=2$ and $n\ge 7$ points, these point sets have a roughly 50% lower discrepancy than the current best point sets, and exhibit a very different structure.
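
For small instances, the $L_{\infty}$ star discrepancy can be evaluated exactly by enumerating the grid spanned by the point coordinates; the following naive sketch (exponential in the dimension, and unrelated to the mathematical programming models above) illustrates the quantity being minimized.

```python
import numpy as np
from itertools import product

def star_discrepancy_naive(points):
    """Naive exact L_infinity star discrepancy of a point set in [0,1)^d.

    The supremum over anchored boxes [0, y) is attained on the grid built from
    the point coordinates and 1 in each dimension; for each grid corner y we
    compare the box volume with both the open count (points strictly inside)
    and the closed count (points in the closed box)."""
    pts = np.asarray(points, dtype=float)
    n, d = pts.shape
    grids = [np.unique(np.concatenate([pts[:, j], [1.0]])) for j in range(d)]
    disc = 0.0
    for corner in product(*grids):
        y = np.array(corner)
        vol = float(np.prod(y))
        open_frac = np.sum(np.all(pts < y, axis=1)) / n
        closed_frac = np.sum(np.all(pts <= y, axis=1)) / n
        disc = max(disc, vol - open_frac, closed_frac - vol)
    return disc

print(star_discrepancy_naive([[0.25, 0.25], [0.75, 0.75]]))  # 0.4375
```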

Given the growing significance of reliable, trustworthy, and explainable machine learning, the requirement of uncertainty quantification for anomaly detection systems has become increasingly important. In this context, effectively controlling Type I error rates ($\alpha$) without compromising the statistical power ($1-\beta$) of these systems can build trust and reduce costs related to false discoveries, particularly when follow-up procedures are expensive. Leveraging the principles of conformal prediction emerges as a promising approach for providing such statistical guarantees by calibrating a model's uncertainty. This work introduces a novel framework for anomaly detection, termed cross-conformal anomaly detection, building upon well-known cross-conformal methods designed for prediction tasks. In doing so, it addresses a natural research gap by extending previous work on inductive conformal anomaly detection, which relies on the split-conformal approach for model calibration. Drawing on insights from conformal prediction, we demonstrate that the derived methods for calculating cross-conformal $p$-values strike a practical compromise between statistical efficiency (full-conformal) and computational efficiency (split-conformal) for uncertainty-quantified anomaly detection on benchmark datasets.
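
A minimal sketch of a cross-conformal $p$-value computation for anomaly detection follows (detector-agnostic, with illustrative names; the exact aggregation studied in the paper may differ in its details).

```python
import numpy as np

def cross_conformal_p_value(cal_scores_by_fold, test_score_by_fold):
    """Cross-conformal p-value for one test point.

    cal_scores_by_fold[k]  -- nonconformity scores of calibration fold k, produced by
                              a detector fitted on the remaining folds
    test_score_by_fold[k]  -- the test point's nonconformity score under that same detector

    p = (1 + #{calibration points at least as nonconforming as the test point}) / (n + 1).
    Large nonconformity means "more anomalous", so a small p-value flags an anomaly."""
    n = sum(len(s) for s in cal_scores_by_fold)
    exceed = sum(int(np.sum(np.asarray(s) >= t))
                 for s, t in zip(cal_scores_by_fold, test_score_by_fold))
    return (1 + exceed) / (n + 1)

# Toy usage: two calibration folds and one clearly anomalous test point.
cal = [np.array([0.10, 0.30, 0.20]), np.array([0.25, 0.15, 0.05])]
test = [0.9, 0.8]                           # the test point's score under each fold's detector
print(cross_conformal_p_value(cal, test))   # 1/7, the smallest attainable p-value here
```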

We consider the problem of the exact computation of the marginal eigenvalue distributions in the Laguerre and Jacobi $\beta$ ensembles. In the case $\beta=1$ this is a question of long standing in the mathematical statistics literature. A recursive procedure to accomplish this task is given for $\beta$ a positive integer and the parameter $\lambda_1$ a non-negative integer. This case is special due to a finite basis of elementary functions, with coefficients which are polynomials. In the Laguerre case with $\beta = 1$ and $\lambda_1 + 1/2$ a non-negative integer, some evidence is given of there again being a finite basis, now consisting of elementary functions and the error function multiplied by elementary functions. Moreover, from this it follows that the corresponding distributions in the fixed trace case permit a finite basis of power functions, as is also true for $\lambda_1$ a non-negative integer. The fixed trace case in this setting is relevant to quantum information theory and quantum transport problems, allowing in particular the exact determination of Landauer conductance distributions in a previously intractable parameter regime. Our findings also aid in analyzing zeros of the generating function for specific gap probabilities, supporting the validity of an associated large $N$ local central limit theorem.

Let $G$ be a simple graph with adjacency matrix $A(G)$, signless Laplacian matrix $Q(G)$, and degree diagonal matrix $D(G)$, and let $l(G)$ be the line graph of $G$. In 2017, Nikiforov defined the $A_\alpha$-matrix of $G$, $A_\alpha(G)$, as a convex combination of $A(G)$ and $D(G)$ as follows: $A_\alpha(G):=\alpha A(G)+(1-\alpha)D(G)$, where $\alpha\in[0,1]$. In this paper, we present some bounds for the eigenvalues of $A_\alpha(G)$ and for the largest and smallest eigenvalues of $A_\alpha(l(G))$. Extremal graphs attaining some of these bounds are characterized.
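
A short numerical illustration of the definition (using the convex combination exactly as stated above) is:

```python
import numpy as np

def a_alpha_matrix(A, alpha):
    """A_alpha-matrix of a graph, built from its adjacency matrix A using the
    combination stated above: A_alpha = alpha*A + (1 - alpha)*D, with D the
    degree diagonal matrix."""
    A = np.asarray(A, dtype=float)
    D = np.diag(A.sum(axis=1))
    return alpha * A + (1 - alpha) * D

# Eigenvalues of A_alpha for the path P_3 at alpha = 1/2.
P3 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])
print(np.linalg.eigvalsh(a_alpha_matrix(P3, 0.5)))
```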

We present a KE-tableau-based procedure for the main TBox and ABox reasoning tasks for the description logic $\mathcal{DL}\langle \mathsf{4LQS^{R,\!\times}}\rangle(\mathbf{D})$, in short $\mathcal{DL}_{\mathbf{D}}^{4,\!\times}$. The logic $\mathcal{DL}_{\mathbf{D}}^{4,\!\times}$, representable in the decidable multi-sorted quantified set-theoretic fragment $\mathsf{4LQS^R}$, combines the high scalability and efficiency of rule languages such as the Semantic Web Rule Language (SWRL) with the expressivity of description logics. Our algorithm is based on a variant of the KE-tableau system for sets of universally quantified clauses, where the KE-elimination rule is generalized in such a way as to incorporate the $\gamma$-rule. The novel system, called KE$^\gamma$-tableau, turns out to be an improvement of the system introduced in \cite{RR2017} and of the standard first-order KE-tableau \cite{dagostino94}. Suitable benchmark test sets executed on C++ implementations of the three mentioned systems show that the performance of the KE$^\gamma$-tableau-based reasoner is often up to about 400% better than that of the other two systems. This is a first step towards the construction of efficient reasoners for expressive OWL ontologies based on fragments of computable set theory.
