
In a widely studied class of multi-parametric optimization problems, the objective value of each solution is an affine function of real-valued parameters. The goal is then to provide an optimal solution set, i.e., a set containing an optimal solution for each non-parametric problem obtained by fixing a parameter vector. For many multi-parametric optimization problems, however, an optimal solution set of minimum cardinality can contain super-polynomially many solutions. Consequently, no polynomial-time exact algorithms can exist for these problems even if $\textsf{P}=\textsf{NP}$. We propose an approximation method that is applicable to a general class of multi-parametric optimization problems and outputs a set of solutions with cardinality polynomial in the instance size and the inverse of the approximation guarantee. This method lifts approximation algorithms for non-parametric optimization problems to their parametric versions and provides an approximation guarantee that is arbitrarily close to the approximation guarantee of the non-parametric approximation algorithm. If the non-parametric problem can be solved exactly in polynomial time or if an FPTAS is available, our algorithm is an FPTAS. Further, we show that, for any given approximation guarantee, the minimum cardinality of an approximation set is, in general, not $\ell$-approximable for any natural number $\ell$ less than or equal to the number of parameters. We also discuss applications of our results to classical multi-parametric combinatorial optimization problems; in particular, we obtain an FPTAS for the multi-parametric minimum $s$-$t$-cut problem, an FPTAS for the multi-parametric knapsack problem, and an approximation algorithm for the multi-parametric maximization of independence systems problem.

Related Content

Center-based clustering is a pivotal primitive for unsupervised learning and data analysis. A popular variant is undoubtedly the k-means problem, which, given a set $P$ of points from a metric space and a parameter $k<|P|$, requires determining a subset $S$ of $k$ centers minimizing the sum of all squared distances of points in $P$ from their closest center. A more general formulation, known as k-means with $z$ outliers, introduced to deal with noisy datasets, features a further parameter $z$ and allows up to $z$ points of $P$ (outliers) to be disregarded when computing the aforementioned sum. We present a distributed coreset-based 3-round approximation algorithm for k-means with $z$ outliers for general metric spaces, using MapReduce as a computational model. Our distributed algorithm requires sublinear local memory per reducer, and yields a solution whose approximation ratio is an additive term $O(\gamma)$ away from the one achievable by the best known sequential (possibly bicriteria) algorithm, where $\gamma$ can be made arbitrarily small. An important feature of our algorithm is that it obliviously adapts to the intrinsic complexity of the dataset, captured by the doubling dimension $D$ of the metric space. To the best of our knowledge, no previous distributed approaches were able to attain similar quality-performance tradeoffs for general metrics.
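To make the objective concrete, the following sketch evaluates the k-means-with-$z$-outliers cost and runs a simple sequential Lloyd-style heuristic (in the spirit of k-means--) that discards the $z$ currently farthest points in each round. This is only a toy Euclidean baseline for illustration, not the distributed MapReduce algorithm of the abstract; all function names are illustrative.

```python
import numpy as np

def cost_with_outliers(P, S, z):
    """k-means with z outliers objective: sum of squared distances to the
    closest center, after discarding the z farthest points."""
    d2 = ((P[:, None, :] - S[None, :, :]) ** 2).sum(-1).min(1)
    return np.sort(d2)[: len(P) - z].sum()

def kmeans_minus_minus(P, k, z, iters=20, rng=None):
    """Lloyd-style heuristic: in each round, drop the z currently farthest
    points, then recompute the centers from the remaining assignments."""
    rng = np.random.default_rng(rng)
    S = P[rng.choice(len(P), size=k, replace=False)].astype(float)
    for _ in range(iters):
        d2 = ((P[:, None, :] - S[None, :, :]) ** 2).sum(-1)
        labels, nearest = d2.argmin(1), d2.min(1)
        keep = np.argsort(nearest)[: len(P) - z]   # discard z outliers
        for j in range(k):
            pts = P[keep][labels[keep] == j]
            if len(pts) > 0:
                S[j] = pts.mean(axis=0)
    return S

# Two tight clusters plus two far-away outlier points
P = np.array([[0.0, 0], [0, 1], [1, 0],
              [10, 10], [11, 10], [10, 11],
              [100, 100], [-100, -100]])
centers = kmeans_minus_minus(P, k=2, z=2, rng=0)
```

With the two cluster means as centers, the objective ignores the two outliers and only accumulates the small within-cluster spread.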

We consider universal approximations of symmetric and anti-symmetric functions, which are important for applications in quantum physics, as well as other scientific and engineering computations. We give constructive approximations with explicit bounds on the number of parameters with respect to the dimension and the target accuracy $\epsilon$. While the approximation still suffers from the curse of dimensionality, to the best of our knowledge, these are the first results in the literature with explicit error bounds for functions with symmetry or anti-symmetry constraints.

Many applications, such as system identification, classification of time series, direct and inverse problems in partial differential equations, and uncertainty quantification, lead to the question of approximation of a non-linear operator between metric spaces $\mathfrak{X}$ and $\mathfrak{Y}$. We study the problem of determining the degree of approximation of such operators on a compact subset $K_\mathfrak{X}\subset \mathfrak{X}$ using a finite amount of information. If $\mathcal{F}: K_\mathfrak{X}\to K_\mathfrak{Y}$, a well established strategy to approximate $\mathcal{F}(F)$ for some $F\in K_\mathfrak{X}$ is to encode $F$ (respectively, $\mathcal{F}(F)$) in terms of a finite number $d$ (respectively $m$) of real numbers. Together with appropriate reconstruction algorithms (decoders), the problem reduces to the approximation of $m$ functions on a compact subset of a high dimensional Euclidean space $\mathbb{R}^d$, equivalently, the unit sphere $\mathbb{S}^d$ embedded in $\mathbb{R}^{d+1}$. The problem is challenging because $d$, $m$, as well as the complexity of the approximation on $\mathbb{S}^d$ are all large, and it is necessary to estimate the accuracy keeping track of the inter-dependence of all the approximations involved. In this paper, we establish constructive methods to do this efficiently; i.e., with the constants involved in the estimates on the approximation on $\mathbb{S}^d$ being $\mathcal{O}(d^{1/6})$. We study different smoothness classes for the operators, and also propose a method for approximation of $\mathcal{F}(F)$ using only information in a small neighborhood of $F$, resulting in an effective reduction in the number of parameters involved. To further mitigate the problem of the large number of parameters, we propose prefabricated networks, resulting in a substantially smaller number of effective parameters.

The fair $k$-median problem is an important clustering problem. The current best approximation ratio is 4.675 for this problem with 1-fair violation, proposed by Bercea et al. [APPROX-RANDOM'2019]. To the best of our knowledge, no approximation algorithm is available for the problem without any fair violation. In this paper, we consider the fair $k$-median problem in bounded doubling metrics and in general metrics. We provide the first QPTAS for the fair $k$-median problem in doubling metrics. Based on the split-tree decomposition of doubling metrics, we present a dynamic programming process to find the candidate centers, and apply a min-cost max-flow method to handle the assignment of clients. In particular, to overcome the difficulties caused by the fairness constraints, we construct an auxiliary graph and use minimum-weight perfect matching to compute part of the cost. For the fair $k$-median problem in general metrics, we present an approximation algorithm with ratio $O(\log k)$, based on embedding the given space into tree metrics and on dynamic programming. These two approximation algorithms are the first results for the corresponding fair $k$-median problems without any fair violation.

We propose an efficient numerical method for computing natural gradient descent directions with respect to a generic metric in the state space. Our technique relies on representing the natural gradient direction as a solution to a standard least-squares problem. Hence, instead of calculating, storing, or inverting the information matrix directly, we apply efficient methods from numerical linear algebra to solve this least-squares problem. We treat both scenarios where the derivative of the state variable with respect to the parameter is either explicitly known or implicitly given through constraints. We apply the QR decomposition to solve the least-squares problem in the former case and utilize the adjoint-state method to compute the natural gradient descent direction in the latter case. As a result, we can reliably compute several natural gradient descent directions, including the Wasserstein natural gradient, for large-scale parameter spaces with thousands of dimensions, which was previously believed to be out of reach. Finally, our numerical results shed light on the qualitative differences between the standard gradient descent method and various natural gradient descent methods based on different metric spaces in large-scale nonconvex optimization problems.
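The least-squares viewpoint can be sketched in a few lines. Assuming the Euclidean metric in the state space and an explicitly known Jacobian $J$ of the state with respect to the parameters, the natural gradient direction is the least-squares solution of $J\eta \approx v$, where $v$ is the state-space gradient, and can be obtained from a thin QR factorization without ever forming the information matrix $G = J^\top J$. The names below are illustrative, not the paper's implementation.

```python
import numpy as np

def natural_gradient(J, v):
    """Natural gradient direction as a least-squares solution.

    J : (n_state, n_param) Jacobian of the state w.r.t. the parameters
    v : (n_state,) gradient of the loss w.r.t. the state variable

    Solves min_eta ||J @ eta - v||_2 via a thin QR factorization instead of
    forming, storing, or inverting the information matrix G = J.T @ J.
    """
    Q, R = np.linalg.qr(J)              # J = Q R, Q has orthonormal columns
    return np.linalg.solve(R, Q.T @ v)  # back-substitute R eta = Q^T v

# Sanity check against the explicit normal-equations formula
rng = np.random.default_rng(0)
J = rng.standard_normal((50, 5))
v = rng.standard_normal(50)
eta = natural_gradient(J, v)
eta_ref = np.linalg.solve(J.T @ J, J.T @ v)
```

The QR route avoids the squared condition number incurred by the normal equations, which is what makes the approach viable for ill-conditioned, high-dimensional parameter spaces.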

It is often the case in Statistics that one needs to compute sums of infinite series, especially in marginalising over discrete latent variables. This has become more relevant with the popularization of gradient-based techniques (e.g. Hamiltonian Monte Carlo) in the Bayesian inference context, for which discrete latent variables are hard or impossible to deal with. For many commonly used infinite series, custom algorithms have been developed which exploit specific features of each problem. General techniques, suitable for a large class of problems with limited input from the user, are less established. We employ basic results from the theory of infinite series to investigate general, problem-agnostic algorithms to truncate infinite sums within an arbitrary tolerance $\varepsilon > 0$ and provide robust computational implementations with provable guarantees. We compare three tentative solutions to estimating the infinite sum of interest: (i) a "naive" approach that sums terms until the terms are below the threshold $\varepsilon$; (ii) a "bounding pair" strategy based on trapping the true value between two partial sums; and (iii) a "batch" strategy that computes the partial sums in regular intervals and stops when their difference is less than $\varepsilon$. We show under which conditions each strategy guarantees the truncated sum is within the required tolerance and compare the error achieved by each approach, as well as the number of function evaluations necessary for each one. A detailed discussion of numerical issues in practical implementations is also provided. The paper provides some theoretical discussion of a variety of statistical applications, including raw and factorial moments and count models with observation error. Finally, detailed illustrations in the form of noisy MCMC for Bayesian inference and maximum marginal likelihood estimation are presented.
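Strategies (i) and (iii) can be illustrated with a minimal sketch, here applied to the Poisson(2) probability series, which sums to 1. These simplified versions convey the stopping rules only; they are not the paper's implementations with provable guarantees, and the function names are illustrative.

```python
import math

def sum_naive(term, eps=1e-10, max_terms=100_000):
    """'Naive' strategy: add terms until the current term drops below eps.
    Reliable once the terms are decreasing; the neglected tail can still
    exceed eps for slowly decaying series."""
    total = 0.0
    for k in range(max_terms):
        t = term(k)
        total += t
        if t < eps:
            break
    return total

def sum_batch(term, batch=20, eps=1e-10, max_terms=100_000):
    """'Batch' strategy: accumulate partial sums in blocks of `batch` terms
    and stop once two consecutive checkpoints differ by less than eps."""
    total, prev, k = 0.0, math.inf, 0
    while k < max_terms:
        for _ in range(batch):
            total += term(k)
            k += 1
        if abs(total - prev) < eps:
            break
        prev = total
    return total

# Poisson(2) probabilities: sum_k e^{-2} 2^k / k! = 1
pois = lambda k: math.exp(-2.0) * 2.0 ** k / math.factorial(k)
```

Note that the naive rule would stop immediately on a series whose first term happens to be below $\varepsilon$, one of the failure modes the conditions in the paper rule out.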

Multi-material problems often exhibit complex geometries along with physical responses presenting large spatial gradients or discontinuities. In these cases, providing high-quality body-fitted finite element analysis meshes and obtaining accurate solutions remain challenging. Immersed boundary techniques provide elegant solutions for such problems. Enrichment methods alleviate the need for generating conforming analysis grids by capturing discontinuities within mesh elements. Additionally, increased accuracy of physical responses and geometry description can be achieved with higher-order approximation bases. In particular, using B-splines has become popular with the development of IsoGeometric Analysis. In this work, an eXtended IsoGeometric Analysis (XIGA) approach is proposed for multi-material problems. The computational domain geometry is described implicitly by level set functions. A novel generalized Heaviside enrichment strategy is employed to accommodate an arbitrary number of materials without artificially stiffening the physical response. Higher-order B-spline functions are used for both geometry representation and analysis. Boundary and interface conditions are enforced weakly via Nitsche's method, and a new face-oriented ghost stabilization methodology is used to mitigate numerical instabilities arising from small material integration subdomains. Two- and three-dimensional heat transfer and elasticity problems are solved to validate the approach. Numerical studies provide insight into the ability to handle multiple materials considering sharp-edged and curved interfaces, as well as the impact of higher-order bases and stabilization on the solution accuracy and conditioning.

The Lee-Carter model, a traditional and widely adopted mortality rate projection technique that represents the log mortality rate as a simple bilinear form $\log(m_{x,t})=a_x+b_xk_t$, has been extensively studied throughout the past 30 years. However, little attention has been paid to the performance of the model in the presence of outliers, particularly for the parameter estimation of $b_x$. In this paper, we propose a robust estimation method for the Lee-Carter model by formulating it as a probabilistic principal component analysis (PPCA) with multivariate $t$-distributions, together with an efficient expectation-maximization (EM) algorithm for implementation. The advantages of the method are threefold: it yields significantly more robust estimates of both $b_x$ and $k_t$, preserves the fundamental interpretation of $b_x$ as the first principal component as in the traditional approach, and can be flexibly integrated into other existing time series models for $k_t$. The parameter uncertainties are examined by adopting a standard residual bootstrap. A simulation study based on the Human Mortality Database shows superior performance of the proposed model compared to other conventional approaches.
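For reference, the classical (non-robust) fit that the PPCA formulation robustifies is the usual SVD estimation with the standard identification constraints $\sum_x b_x = 1$ and $\sum_t k_t = 0$. The sketch below is this textbook baseline, not the robust EM algorithm proposed in the paper.

```python
import numpy as np

def lee_carter_fit(log_m):
    """Classical SVD-based Lee-Carter fit.  log_m has shape (ages, years).
    Applies the usual constraints sum_x b_x = 1 and sum_t k_t = 0."""
    a = log_m.mean(axis=1)                      # a_x: average age profile
    Z = log_m - a[:, None]                      # centered log rates
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    b, k = U[:, 0], s[0] * Vt[0]                # first principal component
    scale = b.sum()                             # also fixes the SVD sign
    return a, b / scale, k * scale

# Parameters are recovered from data generated exactly by the bilinear form
ages, years = 5, 12
a = np.linspace(-4.0, -1.0, ages)
b = np.array([0.1, 0.15, 0.2, 0.25, 0.3])       # sums to 1
k = np.arange(years) - (years - 1) / 2.0        # sums to 0
a_hat, b_hat, k_hat = lee_carter_fit(a[:, None] + np.outer(b, k))
```

Because the first left singular vector is a least-squares construct, a single contaminated cell of `log_m` can distort the whole $b_x$ profile, which is precisely the sensitivity the heavy-tailed $t$-distribution formulation addresses.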

This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
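The sketch-and-solve idea can be conveyed with a generic randomized range-finder: compress the matrix through a random Gaussian test matrix, orthonormalize the sketch, and solve a small problem in the reduced space. This is a standard randomized-SVD sketch for illustration only; the paper's algorithms differ in detail (e.g., structure preservation and fixed-rank truncation), and the oversampling parameter below is an assumption.

```python
import numpy as np

def sketch_low_rank(A, r, p=5, seed=None):
    """Rank-r approximation of A from a random linear sketch.

    Sketch Y = A @ Omega with a Gaussian test matrix (oversampled by p),
    orthonormalize to get a range basis, project, and truncate the SVD
    of the small projected matrix.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.standard_normal((n, r + p))     # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)              # orthonormal range basis
    B = Q.T @ A                                 # small projected matrix
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U[:, :r]) * s[:r] @ Vt[:r]      # rank-r reconstruction

# An exactly rank-3 input is recovered up to floating-point error
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))
A_hat = sketch_low_rank(A, r=3, seed=2)
```

Only the $(r+p)$-column sketch and the small projected matrix are ever factored, which is what makes such methods attractive when $A$ itself is too large to revisit.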

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely: the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is strongly convex and smooth, strongly convex only, smooth only, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvements of the condition number.
