
In this paper we propose a modified Lie-type spectral splitting approximation for nonlinear Schrödinger equations with an external potential of quadratic type. We prove that the solution can be approximated by solving the linear problem and treating the nonlinear term separately, with a rigorous estimate of the remainder term. Furthermore, we show by means of numerical experiments that this modified approximation is more efficient than the standard one.
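The splitting idea is easy to state in code. Below is a minimal sketch, under illustrative assumptions (1D cubic NLS with a harmonic trap, split-step Fourier discretization, toy grid and initial datum), of the standard first-order Lie splitting that the paper's modified scheme improves upon; it is not the modified scheme itself.

import numpy as np

# Standard Lie splitting for i u_t = -1/2 u_xx + 1/2 x^2 u + |u|^2 u:
# advance the free Schroedinger flow in Fourier space, then the
# quadratic-potential phase, then the exactly solvable nonlinear phase.
N, L, dt, steps = 256, 16.0, 1e-3, 1000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2                       # quadratic external potential
u = np.exp(-x**2).astype(complex)    # Gaussian initial datum

for _ in range(steps):
    u = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(u))  # kinetic flow
    u = np.exp(-1j * dt * V) * u                                # potential flow
    u = np.exp(-1j * dt * np.abs(u)**2) * u                     # nonlinear flow

print("mass (conserved by each unitary substep):", (np.abs(u)**2).sum() * (L / N))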

Related Content

The study of statistical estimation without distributional assumptions on data values, but with knowledge of data collection methods, was recently introduced by Chen, Valiant and Valiant (NeurIPS 2020). In this framework, the goal is to design estimators that minimize the worst-case expected error. Here the expectation is over a known, randomized data collection process from some population, and the data values corresponding to each element of the population are assumed to be worst-case. Chen, Valiant and Valiant show that, when data values are $\ell_{\infty}$-normalized, there is a polynomial-time algorithm to compute an estimator for the mean with worst-case expected error within a factor $\frac{\pi}{2}$ of the optimum over the natural class of semilinear estimators. However, their algorithm is based on optimizing a somewhat complex concave objective function over a constrained set of positive semidefinite matrices, and thus does not come with explicit runtime guarantees beyond being polynomial in the input size. In this paper we design provably efficient algorithms for approximating the optimal semilinear estimator based on online convex optimization. In the setting where data values are $\ell_{\infty}$-normalized, our algorithm achieves a $\frac{\pi}{2}$-approximation by iteratively solving a sequence of standard SDPs. When data values are $\ell_2$-normalized, our algorithm iteratively computes the top eigenvector of a sequence of matrices, and does not lose any multiplicative approximation factor. We complement these positive results with a simple combinatorial condition which, if satisfied by a data collection process, implies that any (not necessarily semilinear) estimator for the mean has constant worst-case expected error.
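For the $\ell_2$-normalized setting, the abstract's algorithm repeatedly extracts the top eigenvector of a sequence of matrices. As a hedged illustration of that inner primitive only (the matrices themselves come from the paper's online convex optimization scheme, which is not reproduced here), a basic power iteration might look like this:

import numpy as np

def top_eigenvector(M, iters=500, tol=1e-12):
    """Power iteration for the leading eigenpair of a symmetric PSD matrix M."""
    v = np.random.default_rng(0).standard_normal(M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = M @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            v = w
            break
        v = w
    return v @ M @ v, v

# sanity check on a random PSD matrix
B = np.random.default_rng(1).standard_normal((50, 50))
M = B @ B.T
lam, v = top_eigenvector(M)
print(abs(lam - np.linalg.eigvalsh(M)[-1]))   # should be ~0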

We propose and analyze an unfitted finite element method for solving elliptic problems on domains with curved boundaries and interfaces. The approximation space on the whole domain is obtained by directly extending the finite element space defined on interior elements, so that no degrees of freedom are located in boundary/interface elements. The boundary/jump conditions are imposed weakly in the scheme. The method is shown to be stable without any mesh adjustment or special stabilization. Optimal convergence rates under the $L^2$ norm and the energy norm are derived. Numerical results in both two and three dimensions illustrate the accuracy and robustness of the method.
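One ingredient of such schemes, the weak imposition of boundary conditions, can be illustrated in isolation. Below is a hedged 1D toy using Nitsche-type terms for a Poisson problem with homogeneous Dirichlet data on a fitted mesh; the paper's direct-extension unfitted space and curved geometries are not reproduced, and the penalty value is an assumption.

import numpy as np

# Nitsche-type weak Dirichlet conditions for -u'' = f on (0,1), u(0)=u(1)=0,
# with P1 elements: consistency, symmetry, and penalty terms replace the
# usual strong elimination of boundary degrees of freedom.
n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
gamma = 10.0 / h                             # penalty; assumed large enough for stability
f = lambda t: np.pi**2 * np.sin(np.pi * t)   # manufactured problem: u = sin(pi x)

A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
for e in range(n):                           # interior stiffness and midpoint-rule load
    A[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    b[e:e + 2] += 0.5 * h * f(0.5 * (x[e] + x[e + 1]))

# Nitsche terms at x = 0 and x = 1 (outward normal derivatives of P1 functions)
A[0, 0] += -2.0 / h + gamma; A[0, 1] += 1.0 / h; A[1, 0] += 1.0 / h
A[n, n] += -2.0 / h + gamma; A[n, n - 1] += 1.0 / h; A[n - 1, n] += 1.0 / h

u = np.linalg.solve(A, b)
print("max nodal error:", np.max(np.abs(u - np.sin(np.pi * x))))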

In this paper we prove upper and lower bounds on the minimal spherical dispersion. In particular, we show that the inverse $N(\varepsilon,d)$ of the minimal spherical dispersion is, for fixed $\varepsilon>0$, linear in the dimension $d$ up to logarithmic terms. We also derive upper and lower bounds on the expected dispersion of points chosen independently and uniformly at random from the Euclidean unit sphere.
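As a hedged illustration of the random-points part of the statement, one can sample uniform points on the sphere (normalized Gaussians) and probe random cap centers to get a crude Monte Carlo lower bound on the dispersion of the sample. The cap-measure formula is the standard regularized incomplete beta expression, and all sizes below are illustrative.

import numpy as np
from scipy.special import betainc

def cap_measure(t, d):
    """Normalized surface measure of the cap {x in S^{d-1}: <x,c> >= t}."""
    s = max(0.0, 1.0 - t * t)                # clamp against roundoff
    half = 0.5 * betainc((d - 1) / 2.0, 0.5, s)
    return half if t >= 0 else 1.0 - half

rng = np.random.default_rng(0)
d, n, probes = 8, 200, 20000
P = rng.standard_normal((n, d)); P /= np.linalg.norm(P, axis=1, keepdims=True)
C = rng.standard_normal((probes, d)); C /= np.linalg.norm(C, axis=1, keepdims=True)

# for each probe center c, the largest empty cap centered at c has height
# t* = max_i <p_i, c>; the dispersion is at least the largest such cap measure
t_star = (C @ P.T).max(axis=1)
print("Monte Carlo lower bound on dispersion:",
      max(cap_measure(t, d) for t in t_star))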

The gradient noise of Stochastic Gradient Descent (SGD) is considered to play a key role in its properties (e.g., escaping low-potential points and regularization). Past research has indicated that the covariance of the minibatch SGD error plays a critical role in determining its regularization and its escape from low-potential points. However, how the distribution of this error influences the behavior of the algorithm remains largely unexplored. Motivated by recent research in this area, we prove universality results showing that noise classes with the same mean and covariance structure as minibatch SGD have similar properties. We mainly consider the Multiplicative Stochastic Gradient Descent (M-SGD) algorithm introduced by Wu et al., which allows a much more general noise class than minibatch SGD. We establish nonasymptotic bounds for the M-SGD algorithm, mainly with respect to the stochastic differential equation corresponding to minibatch SGD. We also show that, at any fixed point of the M-SGD algorithm, the M-SGD error is approximately a scaled Gaussian with mean $0$. Finally, we establish bounds on the convergence of the M-SGD algorithm in the strongly convex regime.
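The object of comparison can be made concrete on a toy problem. The sketch below contrasts minibatch SGD with a multiplicative-noise estimator whose random weights have the same mean as minibatch averaging, in the spirit of (but not identical to) the M-SGD noise class; the least-squares instance, step size, and multinomial weights are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def per_sample_grads(x):              # gradients of (a_i . x - y_i)^2, shape (n, d)
    return 2 * A * (A @ x - y)[:, None]

def minibatch_step(x, lr, b):         # classic minibatch SGD
    idx = rng.choice(n, size=b, replace=False)
    return x - lr * per_sample_grads(x)[idx].mean(axis=0)

def multiplicative_step(x, lr, b):    # random-weight (multiplicative-noise) surrogate
    w = rng.multinomial(b, np.ones(n) / n) / b   # E[w_i] = 1/n
    return x - lr * (w @ per_sample_grads(x))

x1 = x2 = np.zeros(d)
for _ in range(2000):
    x1 = minibatch_step(x1, 1e-3, 32)
    x2 = multiplicative_step(x2, 1e-3, 32)
loss = lambda x: np.mean((A @ x - y)**2)
print("minibatch loss:", loss(x1), " multiplicative-noise loss:", loss(x2))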

In online experimentation, trigger-dilute analysis is an approach to obtaining more precise estimates of intent-to-treat (ITT) effects when the intervention is only exposed, or "triggered", for a small subset of the population. Trigger-dilute analysis cannot be used when triggering is only partially observed. In this paper, we propose an unbiased ITT estimator with reduced variance for cases where triggering status is observed only in the treatment group. Our method is based on the efficiency-augmentation idea of CUPED and draws upon identification frameworks from the principal stratification and instrumental variables literature. The unbiasedness of our approach relies on a testable assumption that an augmentation term used for covariate adjustment equals zero in expectation. When this augmentation term fails a mean-zero test, we show how our estimator can incorporate in-experiment observations to reduce the augmentation's bias, at the cost of some of the variance reduction. This provides an explicit knob for trading off bias against variance. We demonstrate through simulations that our estimator can remain unbiased and achieve precision improvements as large as if triggering status were fully observed, in some cases outperforming trigger-dilute analysis.
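The CUPED-style augmentation the estimator builds on is simple to sketch. Below is a minimal, hedged toy of the general idea, subtracting a mean-zero covariate term from the outcome before taking the difference in means; the paper's partially observed triggering machinery and its mean-zero test are not reproduced, and the simulated model is an assumption.

import numpy as np

rng = np.random.default_rng(1)
n = 10000
x = rng.standard_normal(n)                         # pre-experiment covariate
t = rng.integers(0, 2, n)                          # randomized assignment
y = 0.5 * t + 2.0 * x + rng.standard_normal(n)     # outcome; true ITT effect = 0.5

naive = y[t == 1].mean() - y[t == 0].mean()

# the augmentation term theta * (x - mean(x)) has mean ~0, so subtracting it
# leaves the estimator (approximately) unbiased while shrinking variance
theta = np.cov(y, x)[0, 1] / np.cov(y, x)[1, 1]
y_adj = y - theta * (x - x.mean())
cuped = y_adj[t == 1].mean() - y_adj[t == 0].mean()

print("naive:", round(naive, 3), " CUPED-adjusted:", round(cuped, 3))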

In this article, we address a class of nonconvex, integer, nonlinear mathematical programs using dynamic programming. The program considered, whose properties are studied in this article, may be used to model the optimal liquidation of a single-asset portfolio, held in a very large quantity, in a low-volatility, perfect-memory market with few participants. In this context, the Portfolio Manager's selling actions convey information to market participants, which in turn lowers bid prices and further penalizes the liquidation proceeds we attempt to maximize. We show that the problem can be solved exactly by dynamic programming (DP) in polynomial time. However, exact resolution is only efficient for small instances. For medium and large instances, we introduce dedicated heuristics that provide good feasible solutions, and hence tight lower bounds for the initial problem. We also benchmark them against the commercial solver LocalSolver [7]. We are also interested in the continuous relaxation of the problem, which is nonconvex. First, we take continuous solutions obtained with the free solver NLopt [26] and transform them into good feasible solutions of the discrete problem. Second, we provide, under some convexity assumptions, an upper bound for the continuous relaxation, and hence for the initial (integer) problem. Numerical experiments confirm the quality of the proposed heuristics (lower bounds), which often reach the optimum, or prove very tight, for small and medium instances, with very fast CPU times. Our upper bound, however, is not tight.
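The exact DP is easy to sketch on a toy instance: the state is (period, shares remaining), the bid price is a deterministic function of what has been sold so far, and each period we choose an integer quantity to sell. The linear permanent-impact model and all numbers below are illustrative assumptions, not the paper's market model.

from functools import lru_cache

Q, T = 50, 5                 # shares to liquidate, number of trading periods
p0, impact = 100.0, 0.2      # initial bid price, permanent impact per share

@lru_cache(maxsize=None)
def best(t, remaining):
    price = p0 - impact * (Q - remaining)          # price reflects past sales
    if t == T:
        return 0.0 if remaining == 0 else float("-inf")   # must fully liquidate
    value = float("-inf")
    for q in range(remaining + 1):                 # integer quantity this period
        proceeds = q * (price - 0.5 * impact * q)  # walk down the book while selling
        value = max(value, proceeds + best(t + 1, remaining - q))
    return value

print("optimal liquidation proceeds:", round(best(0, Q), 2))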

This article presents a matheuristic algorithm for the single-source capacitated facility location problem (SSCFLP) and its variants: SSCFLP with K facilities (SSCKFLP), SSCFLP with contiguous service areas (CFLSAP), and SSCFLP with K facilities and contiguous service areas (CKFLSAP). The algorithm starts from an initial solution and iteratively improves it by exactly solving large-neighborhood-based subproblems. The performance of the algorithm is tested on five sets of SSCFLP benchmark instances. Among the 272 instances, 191 optimal solutions are found and 35 best-known solutions are updated. For the largest set of instances, with 300-1000 facilities and 300-1500 customers (Avella and Boccia 2009), the proposed algorithm outperforms existing methods in both solution quality and computational time. Furthermore, two sets of instances based on two geographic areas are generated to test the algorithm on SSCFLP and its variants. The solutions found by the proposed algorithm approximate the optimal solutions or lower bounds with average gaps of 0.07% for SSCFLP, 0.22% for CFLSAP, 0.04% for SSCKFLP, and 0.13% for CKFLSAP.
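The fix-and-reoptimize loop at the heart of such matheuristics can be sketched on a toy instance: keep most of the incumbent assignment fixed, free the customers of a small facility subset, and solve that restricted subproblem exactly (here by brute force over a two-facility neighborhood, where the paper would call an exact MIP solver). Opening costs, contiguity constraints, and realistic scale are omitted; everything below is an illustrative assumption.

import itertools, random
import numpy as np

rng = np.random.default_rng(0)
F, C = 6, 18                                   # facilities, customers
cap = rng.integers(30, 45, F)                  # capacities
demand = rng.integers(2, 8, C)
assign_cost = rng.integers(1, 30, (C, F))

cost = lambda a: sum(assign_cost[c, a[c]] for c in range(C))

def feasible(a):
    load = np.zeros(F)
    for c, f in enumerate(a):
        load[f] += demand[c]
    return (load <= cap).all()

# greedy incumbent: cheapest facility with enough residual capacity
assign, load = [], np.zeros(F)
for c in range(C):
    f = min((f for f in range(F) if load[f] + demand[c] <= cap[f]),
            key=lambda f: assign_cost[c, f])
    assign.append(f); load[f] += demand[c]

best, best_cost = list(assign), cost(assign)
r = random.Random(0)
for _ in range(200):
    f1, f2 = r.sample(range(F), 2)             # neighborhood: two facilities
    freed = [c for c in range(C) if best[c] in (f1, f2)]
    if not freed or len(freed) > 12:           # keep the brute force cheap
        continue
    for combo in itertools.product((f1, f2), repeat=len(freed)):  # exact sub-solve
        cand = list(best)
        for c, f in zip(freed, combo):
            cand[c] = f
        if feasible(cand) and cost(cand) < best_cost:
            best, best_cost = cand, cost(cand)
print("toy assignment cost:", best_cost)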

We propose a novel preconditioned inexact primal-dual interior point method for constrained convex quadratic programming problems. The algorithm invokes the preconditioned conjugate gradient method on a new reduced Schur-complement KKT system, in implicit form. In contrast to standard approaches, the Schur-complement formulation we consider enables reuse, across all interior point iterations, of the factorization of the KKT matrix with the rows and columns corresponding to inequality constraints excluded. Further, we present two new preconditioners for the resulting reduced system that alleviate the ill-conditioning associated with slack variables in primal-dual interior point methods. Each of the proposed preconditioners provably reduces the number of distinct eigenvalues of the coefficient matrix, and thus the CG iteration count. One preconditioner is efficient when the number of equality constraints is small, while the other is efficient when the number of remaining degrees of freedom is small. Numerical experiments with synthetic problems and problems from the Maros-M\'esz\'aros QP collection show that our preconditioned inexact interior point solvers are effective at improving conditioning and reducing cost. Across all test problems for which the direct method is not fastest, our preconditioned methods achieve a reduction in cost by a geometric mean of $1.432$ relative to the best alternative preconditioned method for each problem.
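The inner solve pattern, preconditioned CG applied to a Schur complement available only in implicit operator form, can be sketched as follows. The Jacobi preconditioner here is a placeholder standing in for the paper's two specialized preconditioners, and the random SPD blocks are illustrative assumptions.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, m = 200, 40
B = rng.standard_normal((n, n))
H = B @ B.T + n * np.eye(n)                 # SPD Hessian-like block
J = rng.standard_normal((m, n))             # constraint Jacobian
rhs = rng.standard_normal(m)

Hinv_JT = np.linalg.solve(H, J.T)           # one factorization, reused in every matvec
S_mv = lambda y: J @ (Hinv_JT @ y)          # implicit Schur complement S = J H^{-1} J^T
S = LinearOperator((m, m), matvec=S_mv)

diag_S = np.einsum('ij,ji->i', J, Hinv_JT)  # diag(S), for a Jacobi preconditioner
M = LinearOperator((m, m), matvec=lambda y: y / diag_S)

y, info = cg(S, rhs, M=M)
print("converged:", info == 0, " residual:", np.linalg.norm(S_mv(y) - rhs))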

We propose accelerated randomized coordinate descent algorithms for stochastic optimization and online learning. Our algorithms have significantly lower per-iteration complexity than the known accelerated gradient algorithms. The proposed algorithms for online learning achieve better regret than the known randomized online coordinate descent algorithms, while the proposed algorithms for stochastic optimization match the convergence rates of the best known randomized coordinate descent algorithms. We also present simulation results to demonstrate the performance of the proposed algorithms.
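For concreteness, here is a hedged sketch of one standard accelerated randomized coordinate descent scheme (in the spirit of Nesterov's method and the APPROX algorithm of Fercoq and Richtárik) on a least-squares toy; the paper's algorithms and their online-learning variants are not reproduced, and the partial derivative is computed from the full residual purely for brevity.

import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
L = (A**2).sum(axis=0)                  # coordinate-wise Lipschitz constants

x = np.zeros(n); z = x.copy()
theta = 1.0 / n
for _ in range(10000):
    y = (1.0 - theta) * x + theta * z   # momentum interpolation
    i = rng.integers(n)                 # uniformly sampled coordinate
    g = A[:, i] @ (A @ y - b)           # i-th partial derivative (toy: full residual)
    z[i] -= g / (n * theta * L[i])      # z-step scaled by 1/(n*theta)
    x = y; x[i] -= g / L[i]             # equals y + n*theta*(z_new - z_old)
    theta = 0.5 * (np.sqrt(theta**4 + 4 * theta**2) - theta**2)

opt = np.linalg.lstsq(A, b, rcond=None)[0]
print("objective gap:",
      0.5 * np.linalg.norm(A @ x - b)**2 - 0.5 * np.linalg.norm(A @ opt - b)**2)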

We consider the task of learning the parameters of a {\em single} component of a mixture model when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the original problem in full, where one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity compared to existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways to obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
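As a hedged toy of the problem setup (not the paper's matrix-based algorithm), the following uses a noisy hint vector correlated with one Gaussian component's mean to weight samples and estimate only that component's mean, without fitting the whole mixture. The mixture, hint model, and quantile threshold are illustrative assumptions, and the thresholding introduces a small bias; the point is only to show how side information narrows the search.

import numpy as np

rng = np.random.default_rng(0)
d, n = 30, 6000
means = 3.0 * rng.standard_normal((3, d))          # three well-separated components
labels = rng.integers(0, 3, n)
X = means[labels] + rng.standard_normal((n, d))

target = 0
hint = means[target] + 0.8 * rng.standard_normal(d)   # noisy side information

scores = X @ hint                                  # alignment with the hint
keep = scores > np.quantile(scores, 0.7)           # likely draws from the target
est = X[keep].mean(axis=0)
print("error with side information:", np.linalg.norm(est - means[target]))
print("error of global mean:       ", np.linalg.norm(X.mean(axis=0) - means[target]))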
