
We present a new approximation algorithm for the (metric) prize-collecting traveling salesperson problem (PCTSP). In PCTSP, as opposed to the classical traveling salesperson problem (TSP), one may omit a vertex of the input graph from the returned tour at the cost of a given vertex-dependent penalty, and the objective is to balance the length of the tour and the incurred penalties for omitted vertices by minimizing the sum of the two. We present an algorithm that achieves an approximation guarantee of $1.774$ with respect to the natural linear programming relaxation of the problem. This significantly reduces the gap between the approximability of classical TSP and PCTSP, beating the previously best known approximation factor of $1.915$. As a key ingredient of our improvement, we present a refined decomposition technique for solutions of the LP relaxation, and show how to leverage components of that decomposition as building blocks for our tours.
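For orientation, the natural LP relaxation referred to above is commonly written in the following form (rooted at a vertex $r$, with edge costs $c_e$ and penalties $\pi_v$); this is a standard formulation from the literature and may differ in minor details, e.g. whether explicit degree constraints are included, from the exact relaxation used in the paper:

$$
\begin{aligned}
\min\;& \sum_{e \in E} c_e\, x_e \;+\; \sum_{v \in V} \pi_v\, (1 - y_v) \\
\text{s.t.}\;& x(\delta(S)) \ge 2\, y_v && \text{for all } S \subseteq V \setminus \{r\},\ v \in S,\\
& y_r = 1,\quad 0 \le y_v \le 1,\quad x_e \ge 0,
\end{aligned}
$$

where $x(\delta(S))$ denotes the total $x$-value of edges crossing the cut between $S$ and $V \setminus S$. A tour that skips a vertex set $U$ corresponds to an integral solution with $y_v = 0$ exactly for $v \in U$, so the objective equals tour length plus incurred penalties.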

Related content

A common assumption of most clustering methods is that the training data and future data come from the same distribution. However, this assumption may not hold in many real-world scenarios. In this paper, we propose an information-theoretic importance-sampling-based approach for clustering problems (ITISC), which minimizes the worst-case expected distortion under a constraint on the distribution deviation. The distribution-deviation constraint can be converted into a constraint over a set of weight distributions centered on the uniform distribution derived from importance sampling. The objective of the proposed approach is to minimize the loss under maximum degradation; hence the resulting problem is a constrained minimax optimization problem, which can be reformulated as an unconstrained problem using the Lagrange method. The optimization problem can be solved either by an alternating optimization algorithm or by a general-purpose optimization routine available in commercial software. Experimental results on synthetic datasets and a real-world load-forecasting problem validate the effectiveness of the proposed model. Furthermore, we show that fuzzy c-means is a special case of ITISC with the logarithmic distortion, and this observation provides an interesting physical interpretation of the fuzzy exponent $m$.
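The abstract does not spell out the algorithm, but the ingredients it names (worst-case expected distortion, weight distributions centered at the uniform distribution, a Lagrangian reformulation, alternating optimization) admit a generic illustration. The sketch below is a guess at that general structure and is not ITISC itself: it alternates nearest-center assignment, an exponential (Lagrangian) reweighting that emphasizes high-distortion points, and a weighted center update; the temperature `tau` and the function name are hypothetical.

```python
import numpy as np

def robust_reweighted_kmeans(X, k, tau=1.0, n_iter=50, seed=0):
    """Toy minimax-style clustering: alternate hard assignments, an exponential
    (Lagrangian) reweighting of points, and weighted center updates.
    Illustration only; not the ITISC algorithm from the paper."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    weights = np.full(len(X), 1.0 / len(X))        # start at the uniform distribution
    for _ in range(n_iter):
        # (i) assignment: squared distance to the closest center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        distortion = d2[np.arange(len(X)), labels]
        # (ii) adversarial reweighting: maximizer of the Lagrangian of a
        #      divergence-constrained worst-case distribution around uniform
        w = np.exp((distortion - distortion.max()) / tau)   # shift for numerical stability
        weights = w / w.sum()
        # (iii) weighted center update
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = np.average(X[mask], axis=0, weights=weights[mask])
    return centers, labels, weights
```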

We present a new distribution-free conformal prediction algorithm for sequential data (e.g., time series), called \textit{sequential predictive conformal inference} (\texttt{SPCI}). We specifically account for the fact that time-series data are non-exchangeable, so that many existing conformal prediction algorithms are not applicable. The main idea is to adaptively re-estimate the conditional quantile of the non-conformity scores (e.g., prediction residuals) by exploiting the temporal dependence among them. More precisely, we cast the problem of constructing a conformal prediction interval as that of predicting the quantile of a future residual, given a user-specified point-prediction algorithm. Theoretically, we establish asymptotically valid conditional coverage by extending consistency analyses of quantile regression. Using simulation and real-data experiments, we demonstrate a significant reduction in interval width of \texttt{SPCI} compared to existing methods at the desired empirical coverage.
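The core step described above, re-estimating a conditional quantile of recent residuals to form the next prediction interval, can be sketched as follows. This is a simplified stand-in for SPCI, not the authors' implementation: it fits quantile gradient-boosting models on lagged residuals (scikit-learn's `GradientBoostingRegressor` with `loss="quantile"`), whereas the paper's quantile estimator and interval construction may differ in detail.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def spci_like_interval(residuals, alpha=0.1, n_lags=10):
    """Predict an interval for the next residual from its recent past.

    residuals : 1-D array of past prediction residuals y_t - yhat_t
                (needs a reasonably long history, len(residuals) >> n_lags).
    Returns (lo, hi); add these to the next point prediction yhat_{t+1}.
    Simplified sketch; the actual SPCI procedure may differ.
    """
    r = np.asarray(residuals, dtype=float)
    # Lagged design matrix: predict r_t from (r_{t-n_lags}, ..., r_{t-1}).
    X = np.stack([r[i:i + n_lags] for i in range(len(r) - n_lags)])
    y = r[n_lags:]
    bounds = []
    for q in (alpha / 2, 1 - alpha / 2):
        m = GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=200)
        m.fit(X, y)
        bounds.append(m.predict(r[-n_lags:].reshape(1, -1))[0])
    return min(bounds), max(bounds)
```

In a rolling deployment one would re-run this step at every time point as new residuals arrive, which is the "adaptive re-estimation" the abstract refers to.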

Over the last decade, approximating functions in infinite dimensions from samples has gained increasing attention in computational science and engineering, especially in computational uncertainty quantification. This is primarily due to the relevance of functions that arise as solutions of parametric differential equations in various fields, e.g., chemistry, economics, engineering, and physics. While acquiring accurate and reliable approximations of such functions is inherently difficult, current benchmark methods exploit the fact that such functions often belong to certain classes of holomorphic functions to obtain algebraic convergence rates in infinite dimensions with respect to the number of (potentially adaptive) samples $m$. Our work focuses on providing theoretical approximation guarantees for the class of $(\boldsymbol{b},\varepsilon)$-holomorphic functions, demonstrating that these algebraic rates are the best possible for Banach-valued functions in infinite dimensions. We establish lower bounds by reduction to a discrete problem, combined with the theory of $m$-widths, Gelfand widths and Kolmogorov widths. We study two cases, known and unknown anisotropy, in which the relative importance of the variables is known and unknown, respectively. A key conclusion of our paper is that in the latter setting, approximation from finite samples is impossible without some inherent ordering of the variables, even if the samples are chosen adaptively. Finally, in both cases we demonstrate near-optimal, non-adaptive (random) sampling and recovery strategies which achieve rates close to those of the lower bounds.
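For reference, the Kolmogorov and Gelfand widths mentioned above are standard quantities; for a subset $K$ of a Banach space $X$ they are usually defined (up to minor variations in convention) as

$$
d_m(K; X) \;=\; \inf_{\substack{X_m \subseteq X \\ \dim X_m \le m}} \; \sup_{f \in K} \; \inf_{g \in X_m} \|f - g\|_X,
\qquad
d^m(K; X) \;=\; \inf_{\substack{L^m \subseteq X \\ \operatorname{codim} L^m \le m}} \; \sup_{f \in K \cap L^m} \|f\|_X ,
$$

where the first infimum ranges over linear subspaces of dimension at most $m$ and the second over closed subspaces of codimension at most $m$. Roughly speaking, lower bounds on such widths constrain how well any recovery scheme based on $m$ pieces of information can perform, which is how they enter the lower-bound arguments above.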

We introduce a sum-of-squares SDP hierarchy that approximates the ground-state energy of quantum many-body problems from below, with a natural quantum-embedding interpretation. We establish connections between our approach and other variational methods for lower bounds, including variational embedding, the RDM method in quantum chemistry, and the Anderson bounds. Additionally, inspired by quantum information theory, we propose efficient strategies for optimizing cluster selection to tighten the SDP relaxations while staying within a computational budget. Numerical experiments are presented to demonstrate the effectiveness of our strategy. As a byproduct of our investigation, we find that quantum entanglement has the potential to capture the underlying graph of the many-body Hamiltonian.
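To orient readers on the "lower bound from an SDP" idea (not the cluster-based hierarchy introduced above): for a Hamiltonian given as an explicit Hermitian matrix $H$, the largest $\lambda$ with $H - \lambda I \succeq 0$ equals the ground-state energy, and finding it is a semidefinite program; relaxations of the kind summarized above instead impose positivity only on smaller moment/RDM-type matrices. Below is a minimal sketch of the exact toy version using the generic modeling package `cvxpy` (an assumption of this sketch; the abstract does not name any software).

```python
import numpy as np
import cvxpy as cp

# Toy real symmetric "Hamiltonian" given as an explicit matrix.
H = np.array([[ 1.0, -0.5,  0.0],
              [-0.5,  0.2, -0.3],
              [ 0.0, -0.3, -1.0]])

lam = cp.Variable()
# H - lam*I >> 0 certifies that lam is a lower bound on the ground-state energy;
# maximizing lam recovers it exactly in this explicit-matrix toy case.
prob = cp.Problem(cp.Maximize(lam), [H - lam * np.eye(3) >> 0])
prob.solve()  # uses any installed SDP-capable solver (e.g. SCS)

print("SDP lower bound:        ", lam.value)
print("exact ground-state energy:", np.linalg.eigvalsh(H).min())
```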

We prove new upper and lower bounds on the number of iterations the $k$-dimensional Weisfeiler-Leman algorithm ($k$-WL) requires until stabilization. For $k \geq 3$, we show that $k$-WL stabilizes after at most $O(kn^{k-1}\log n)$ iterations (where $n$ denotes the number of vertices of the input structures), obtaining the first improvement over the trivial upper bound of $n^{k}-1$ and extending a previous upper bound of $O(n \log n)$ for $k=2$ [Lichter et al., LICS 2019]. We complement our upper bounds by constructing $k$-ary relational structures on which $k$-WL requires at least $n^{\Omega(k)}$ iterations to stabilize. This improves over a previous lower bound of $n^{\Omega(k / \log k)}$ [Berkholz, Nordstr\"{o}m, LICS 2016]. We also investigate tradeoffs between the dimension and the iteration number of WL, and show that $d$-WL, where $d = \lceil\frac{3(k+1)}{2}\rceil$, can simulate the $k$-WL algorithm using only $O(k^2 \cdot n^{\lfloor k/2\rfloor + 1} \log n)$ many iterations, but still requires at least $n^{\Omega(k)}$ iterations for any $d$ (that is sufficiently smaller than $n$). The number of iterations required by $k$-WL to distinguish two structures corresponds to the quantifier rank of a sentence distinguishing them in the $(k + 1)$-variable fragment $C_{k+1}$ of first-order logic with counting quantifiers. Hence, our results also imply new upper and lower bounds on the quantifier rank required in the logic $C_{k+1}$, as well as tradeoffs between variable number and quantifier rank.
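To make "iterations until stabilization" concrete, below is a small, unoptimized sketch of $k$-WL that counts refinement rounds until the colouring of $k$-tuples stops changing. It follows the textbook formulation (atomic types as initial colours; refinement by the multiset, over all vertices $w$, of the colour vectors obtained by substituting $w$ into each coordinate). The paper works with general $k$-ary relational structures; plain undirected graphs suffice for illustration here, and the sketch costs $\Theta(n^{k+1})$ time per round, so it is for illustration only.

```python
from itertools import product

def k_wl_iterations(n, edges, k):
    """Run k-WL on the graph with vertices 0..n-1 and return the number of
    refinement rounds until the colouring of k-tuples stabilizes."""
    E = {frozenset(e) for e in edges}

    def atomic_type(t):
        # initial colour: equality pattern and adjacency pattern of the tuple
        return (tuple(t[i] == t[j] for i in range(k) for j in range(k)),
                tuple(frozenset((t[i], t[j])) in E for i in range(k) for j in range(k)))

    tuples = list(product(range(n), repeat=k))
    colour = {t: atomic_type(t) for t in tuples}
    rounds = 0
    while True:
        new = {}
        for t in tuples:
            # multiset over w of the colour vector obtained by substituting w
            # into each of the k coordinates of t
            sig = sorted(tuple(colour[t[:i] + (w,) + t[i + 1:]] for i in range(k))
                         for w in range(n))
            new[t] = (colour[t], tuple(sig))
        # the new colouring refines the old one, so the partitions coincide
        # exactly when the numbers of colour classes coincide
        if len(set(new.values())) == len(set(colour.values())):
            return rounds
        # rename colours to small integers to keep representations compact
        palette = {c: i for i, c in enumerate(sorted(set(new.values()), key=repr))}
        colour = {t: palette[new[t]] for t in tuples}
        rounds += 1
```

For example, `k_wl_iterations(6, [(0,1),(1,2),(2,3),(3,4),(4,5),(5,0)], 2)` runs 2-WL on a 6-cycle and reports how many rounds it needs there.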

This article introduces a randomized block Gram-Schmidt process (RBGS) for QR decomposition. RBGS extends the single-vector randomized Gram-Schmidt (RGS) algorithm and inherits its key characteristics, such as higher efficiency and stability at least matching that of any deterministic (block) Gram-Schmidt algorithm. Block algorithms offer superior performance, as they are based on BLAS-3 matrix-matrix operations and reduce communication cost when executed in parallel. Notably, our low-synchronization variant of RBGS can be implemented in a parallel environment using only one global reduction operation between processors per block. Moreover, block Gram-Schmidt orthogonalization is the key element of the block Arnoldi procedure for the construction of a Krylov basis, which in turn is used in GMRES, FOM and Rayleigh-Ritz methods for the solution of linear systems and clustered eigenvalue problems. In this article, we develop randomized versions of these methods based on RBGS and validate them on nontrivial numerical examples.
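Below is a minimal NumPy sketch of the flavour of algorithm described above: blocks are orthogonalized using projection coefficients computed in a randomly sketched space, yielding a basis whose sketch is orthonormal. This follows the general randomized Gram-Schmidt idea and is not the authors' low-synchronization variant; it omits re-orthogonalization and all stability safeguards, and assumes the sketch size comfortably exceeds the number of columns.

```python
import numpy as np

def randomized_block_gs(A, block_size, sketch_size, seed=0):
    """Block Gram-Schmidt with projections computed in a sketched space.

    Returns Q, R with A ~= Q @ R, where Theta @ Q has orthonormal columns
    (Q is "sketch-orthonormal" rather than exactly orthonormal).
    Simplified sketch of the RBGS idea, not the paper's algorithm.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    rng = np.random.default_rng(seed)
    Theta = rng.standard_normal((sketch_size, m)) / np.sqrt(sketch_size)  # sketching matrix
    Q = np.zeros((m, 0))
    SQ = np.zeros((sketch_size, 0))     # Theta @ Q, kept up to date
    R = np.zeros((n, n))
    for j in range(0, n, block_size):
        W = A[:, j:j + block_size].copy()
        b = W.shape[1]
        if Q.shape[1] > 0:
            # projection coefficients via least squares in the sketched space
            C, *_ = np.linalg.lstsq(SQ, Theta @ W, rcond=None)
            W -= Q @ C
            R[:Q.shape[1], j:j + b] = C
        # orthonormalize the sketched block and lift the transformation back
        Qs, Rs = np.linalg.qr(Theta @ W)
        R[j:j + b, j:j + b] = Rs
        Q = np.hstack([Q, W @ np.linalg.inv(Rs)])
        SQ = np.hstack([SQ, Qs])
    return Q, R
```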

In the literature on high-dimensional central limit theorems, there is a gap between results for a general limiting correlation matrix $\Sigma$ and the strongly non-degenerate case. For the general case, where $\Sigma$ may be degenerate, under certain light-tail conditions, when approximating a normalized sum of $n$ independent random vectors by the Gaussian distribution $N(0,\Sigma)$ in the multivariate Kolmogorov distance, the best known error rate has been $O(n^{-1/4})$, up to logarithmic factors in the dimension. For the strongly non-degenerate case, that is, when the minimum eigenvalue of $\Sigma$ is bounded away from 0, the error rate can be improved to $O(n^{-1/2})$ up to a $\log n$ factor. In this paper, we show that the $O(n^{-1/2})$ rate up to a $\log n$ factor can still be achieved in the degenerate case, provided that the minimum eigenvalue of the limiting correlation matrix of any three components is bounded away from 0. We prove our main results using Stein's method in conjunction with previously unexplored inequalities for the integrals of the first three derivatives of the standard Gaussian density over convex polytopes. These inequalities were previously known only for hyperrectangles. Our proof demonstrates the connection between the three-component condition and the third-moment Berry--Esseen bound.
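For concreteness, the multivariate Kolmogorov distance referred to above is, in the standard convention (stated here for orientation, with $W_n$ the normalized sum and $Z \sim N(0,\Sigma)$),

$$
\rho_n \;=\; \sup_{x \in \mathbb{R}^d} \bigl| \mathbb{P}(W_n \le x) - \mathbb{P}(Z \le x) \bigr|,
$$

where the inequality $W_n \le x$ is understood componentwise. The rates discussed above bound $\rho_n$: by $O(n^{-1/4})$ in general and by $O(n^{-1/2})$, up to a $\log n$ factor, under the three-component condition.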

Elitism, which constructs the new population by preserving the best solutions from the union of the old population and the newly generated solutions, has been the default way to perform the population update since its introduction into multi-objective evolutionary algorithms (MOEAs) in the late 1990s. In this paper, we take the opposite perspective and conduct the population update in MOEAs by simply discarding elitism. That is, we treat the newly generated solutions directly as the new population (so that all selection pressure comes from mating selection). We propose a simple non-elitist MOEA (called NE-MOEA) that only uses Pareto dominance sorting to compare solutions, without involving any diversity-related selection criterion. Preliminary experimental results show that NE-MOEA can compete with well-known elitist MOEAs (NSGA-II, SMS-EMOA and NSGA-III) on several combinatorial problems. Lastly, we discuss limitations of the proposed non-elitist algorithm and suggest possible future research directions.
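A compact sketch of the non-elitist loop described above: offspring replace the old population outright, and the only selection pressure comes from a dominance-based mating (tournament) selection. The dominance-count ranking below stands in for full non-dominated sorting, and `evaluate`/`vary` are placeholders for the problem encoding and variation operators; this illustrates the population-update scheme, not the authors' NE-MOEA implementation.

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization of objective vectors a, b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def dominance_rank(objs):
    """Rank = number of solutions dominating this one (0 = non-dominated)."""
    return [sum(dominates(o, p) for o in objs) for p in objs]

def non_elitist_moea(init_pop, evaluate, vary, n_generations=100):
    """Non-elitist population update: offspring become the next population directly.

    init_pop : list of candidate solutions
    evaluate : solution -> tuple of objective values (minimized)
    vary     : (parent1, parent2) -> offspring (crossover + mutation placeholder)
    """
    pop = list(init_pop)
    for _ in range(n_generations):
        objs = [evaluate(x) for x in pop]
        ranks = dominance_rank(objs)
        offspring = []
        while len(offspring) < len(pop):
            # binary tournament mating selection based on dominance rank only
            parents = []
            for _ in range(2):
                i, j = random.sample(range(len(pop)), 2)
                parents.append(pop[i] if ranks[i] <= ranks[j] else pop[j])
            offspring.append(vary(parents[0], parents[1]))
        pop = offspring      # no elitism: the old population is discarded
    return pop
```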

In Bayesian inference, the approximation of integrals of the form $\psi = \mathbb{E}_{F}[l(\mathbf{X})] = \int_{\mathcal{X}} l(\mathbf{x})\, dF(\mathbf{x})$ is a fundamental challenge. Such integrals are crucial for evidence estimation, which is important for various purposes, including model selection and numerical analysis. Existing strategies for evidence estimation can be classified into four categories: deterministic approximation, density estimation, importance sampling, and vertical representation (Llorente et al., 2020). In this paper, we show that the Riemann sum estimator due to Yakowitz (1978) can be used in the context of nested sampling (Skilling, 2006) to achieve an $O(n^{-4})$ rate of convergence, faster than the rate implied by the usual ergodic central limit theorem. We provide a brief overview of the literature on Riemann sum estimators and on the nested sampling algorithm and its connections to vertical-likelihood Monte Carlo. We give theoretical and numerical arguments showing how merging these two ideas can result in improved and more robust estimators of the evidence, especially in higher-dimensional spaces. We also briefly discuss the idea of simulating the Lorenz curve, which avoids the problem of intractable $\Lambda$ functions that are essential to the vertical representation and nested sampling.
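For context on the nested-sampling side of the argument, a bare-bones version of the evidence estimator is sketched below: live points are iteratively replaced, prior volumes shrink geometrically as $X_i \approx e^{-i/N}$, and the evidence $Z = \int L\, dX$ is accumulated by a Riemann-type sum over the resulting $(X_i, L_i)$ pairs. The paper's point, as summarized above, is that a Yakowitz-style Riemann sum over these ordered points can converge much faster; the code is only the standard baseline, with a crude prior-resampling step standing in for a proper constrained sampler.

```python
import numpy as np

def nested_sampling_evidence(log_likelihood, prior_sample, n_live=200, n_iter=2000, seed=0):
    """Bare-bones nested sampling estimate of the evidence Z = int L(theta) dPrior.

    log_likelihood : theta -> float
    prior_sample   : rng -> theta, draws from the prior
    The replacement step just redraws from the prior until the likelihood
    constraint is met (fine for toys, hopeless in high dimensions), and the
    accumulation is done in raw likelihood space (real code should work in logs).
    """
    rng = np.random.default_rng(seed)
    live = [prior_sample(rng) for _ in range(n_live)]
    live_logL = np.array([log_likelihood(t) for t in live])
    log_X_prev, Z = 0.0, 0.0
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(live_logL))
        L_star = live_logL[worst]
        log_X = -i / n_live                       # deterministic prior-volume schedule
        # Riemann-sum contribution of the discarded point: L_* * (X_{i-1} - X_i)
        Z += np.exp(L_star) * (np.exp(log_X_prev) - np.exp(log_X))
        log_X_prev = log_X
        # replace the worst live point by a prior draw with higher likelihood
        while True:
            theta = prior_sample(rng)
            logL = log_likelihood(theta)
            if logL > L_star:
                live[worst], live_logL[worst] = theta, logL
                break
    # remaining live points fill the final prior volume
    Z += np.exp(log_X_prev) * np.mean(np.exp(live_logL))
    return Z
```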

Simulation models of critical systems often have parameters that need to be calibrated using observed data. For expensive simulation models, calibration is done using an emulator of the simulation model, built from simulation output at different parameter settings. Intelligent, adaptive selection of the parameters at which to run the simulator can drastically improve the efficiency of the calibration process. This article proposes a sequential framework with a novel criterion for parameter selection that targets learning the posterior density of the parameters. The emergent behavior of this criterion is that exploration happens by selecting parameters in regions where the posterior is uncertain, while exploitation happens by selecting parameters in regions of high posterior density. The advantages of the proposed method are illustrated using several simulation experiments and a nuclear physics reaction model.
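The sequential loop described above can be pictured with the generic sketch below: a Gaussian-process emulator of the simulator is refit after each new run, and the next parameter is chosen by an acquisition function. The acquisition shown (plug-in posterior density combined with emulator uncertainty) is a placeholder chosen only to mimic the explore/exploit behavior the abstract describes, not the paper's novel criterion; `simulator`, `prior_logpdf`, and the candidate grid are assumptions of this sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def sequential_calibration(simulator, prior_logpdf, y_obs, noise_var,
                           theta_init, candidates, n_rounds=30):
    """Sequentially select simulation parameters to learn the posterior.

    simulator    : theta -> scalar model output (expensive)
    prior_logpdf : theta -> log prior density
    y_obs        : observed scalar data with observation variance noise_var
    theta_init   : (n0, d) array of initial design points
    candidates   : (m, d) array of candidate parameters to choose from
    """
    thetas = list(theta_init)
    sims = [simulator(t) for t in thetas]
    for _ in range(n_rounds):
        gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(thetas), np.array(sims))
        mu, sd = gp.predict(candidates, return_std=True)
        # plug-in unnormalized log posterior using the emulator mean
        log_post = (np.array([prior_logpdf(t) for t in candidates])
                    - 0.5 * (y_obs - mu) ** 2 / noise_var)
        # placeholder acquisition: favor high posterior density AND emulator uncertainty
        acq = log_post + np.log(sd + 1e-12)
        best = candidates[int(np.argmax(acq))]
        thetas.append(best)
        sims.append(simulator(best))
    return np.array(thetas), np.array(sims)
```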
