
The problem of approximating the Pareto front of a multiobjective optimization problem can be reformulated as the problem of finding a set that maximizes the hypervolume indicator. This paper establishes the analytical expression of the Hessian matrix of the mapping from a (fixed-size) collection of $n$ points in the $d$-dimensional decision space (or $m$-dimensional objective space) to the scalar hypervolume indicator value. To define the Hessian matrix, the input set is vectorized, and the matrix is derived by analytical differentiation of the mapping from a vectorized set to the hypervolume indicator. The Hessian matrix plays a crucial role in second-order methods, such as the Newton-Raphson optimization method, and it can be used for the verification of local optimal sets. So far, the full analytical expression had only been established and analyzed for the relatively simple bi-objective case. This paper derives the full expression for arbitrary dimensions ($m\geq2$ objective functions). For the practically important three-dimensional case, we also provide an asymptotically efficient algorithm with time complexity in $O(n\log n)$ for the exact computation of the Hessian matrix's non-zero entries. We establish a sharp bound of $12m-6$ on the number of non-zero entries. For the general $m$-dimensional case, a compact recursive analytical expression is established and its algorithmic implementation is discussed; moreover, this recursive expression implies some sparsity results for the general case. To validate and illustrate the analytically derived algorithms and results, we provide a few numerical examples using Python and Mathematica implementations. Open-source implementations of the algorithms and testing data are made available as a supplement to this paper.
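As a sanity check on the mapping described above (not the paper's analytical $O(n\log n)$ algorithm), the following minimal Python sketch vectorizes a bi-objective point set, computes its hypervolume with respect to a reference point, and approximates the Hessian of the resulting scalar map by central finite differences; all function names and constants are illustrative assumptions.

```python
import numpy as np

def hypervolume_2d(y_flat, ref):
    """Hypervolume (minimization) of a vectorized bi-objective point set.

    y_flat has the layout (y_11, y_12, ..., y_n1, y_n2); the points are assumed
    to be mutually non-dominated and to dominate the reference point ref.
    """
    Y = y_flat.reshape(-1, 2)
    Y = Y[np.argsort(Y[:, 0])]                 # sort by the first objective
    hv, x_right = 0.0, ref[0]
    for y1, y2 in Y[::-1]:                     # sweep from the largest y1 down
        hv += (x_right - y1) * (ref[1] - y2)
        x_right = y1
    return hv

def fd_hessian(f, x, h=1e-4):
    """Central finite-difference approximation of the Hessian of f at x."""
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.zeros(n), np.zeros(n)
            ei[i], ej[j] = h, h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

ref = np.array([1.0, 1.0])
y = np.array([0.2, 0.7, 0.5, 0.4, 0.8, 0.1])   # three non-dominated points, vectorized
H = fd_hessian(lambda v: hypervolume_2d(v, ref), y)
print(np.round(H, 3))                           # only a few entries are non-zero
```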

Related content

The Hessian matrix is the square matrix of the second-order partial derivatives of a multivariate function and describes the local curvature of that function. It was first introduced in the 19th century by the German mathematician Ludwig Otto Hesse, after whom it is named. The Hessian matrix is commonly used in Newton's method for solving optimization problems.
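For reference, the definition above and the Newton step that uses it can be written compactly; this is the standard textbook form, not notation specific to any of the papers listed here.

```latex
% Hessian of a twice-differentiable function f : R^d -> R, and the Newton step.
H_f(x) \;=\; \Bigl[\tfrac{\partial^2 f}{\partial x_i\,\partial x_j}(x)\Bigr]_{i,j=1}^{d},
\qquad
x_{k+1} \;=\; x_k - H_f(x_k)^{-1}\,\nabla f(x_k).
```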

We introduce a new algorithm and software for solving linear equations in symmetric diagonally dominant matrices with non-positive off-diagonal entries (SDDM matrices), including Laplacian matrices. We use preconditioned conjugate gradient (PCG) to solve the system of linear equations. Our preconditioner is a variant of the Approximate Cholesky factorization of Kyng and Sachdeva (FOCS 2016). Our factorization approach is simple: we eliminate matrix rows/columns one at a time and update the remaining matrix using sampling to approximate the outcome of complete Cholesky factorization. Unlike earlier approaches, our sampling always maintains connectivity in the remaining non-zero structure. Our algorithm comes with a tuning parameter that upper bounds the number of samples made per original entry. We implement our algorithm in Julia, providing two versions, AC and AC2, that respectively use 1 and 2 samples per original entry. We compare their single-threaded performance to that of current state-of-the-art solvers: Combinatorial Multigrid (CMG), BoomerAMG-preconditioned Krylov solvers from HyPre and PETSc, Lean Algebraic Multigrid (LAMG), and MATLAB's PCG with Incomplete Cholesky Factorization (ICC). Our evaluation uses a broad class of problems, including all large SDDM matrices from the SuiteSparse collection and diverse programmatically generated instances. Our experiments suggest that our algorithm attains a level of robustness and reliability not seen before in SDDM solvers, while retaining good performance across all instances. Our code and data are public, and we provide a tutorial on how to replicate our tests. We hope that others will adopt this suite of tests as a benchmark, which we refer to as SDDM2023. Our solver code is available at https://github.com/danspielman/Laplacians.jl/ and our benchmarking data and tutorial are available at https://rjkyng.github.io/SDDM2023/
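The solver itself and its AC/AC2 preconditioners are implemented in Julia (Laplacians.jl); purely as an illustration of the surrounding PCG workflow, the sketch below solves a small SDDM system in Python/SciPy with a simple Jacobi (diagonal) preconditioner standing in for an approximate Cholesky factorization. The test matrix, sizes, and preconditioner are illustrative assumptions, not part of the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# A small SDDM test matrix: the 1D discrete Laplacian with Dirichlet boundaries
# (symmetric, diagonally dominant, non-positive off-diagonal entries).
n = 2000
e = np.ones(n)
A = sp.diags([-e[:-1], 2.0 * e, -e[:-1]], offsets=[-1, 0, 1], format="csr")
b = np.random.default_rng(0).standard_normal(n)

# Jacobi (diagonal) preconditioner as a placeholder for an approximate Cholesky factor.
d_inv = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: d_inv * x)

x, info = cg(A, b, M=M, maxiter=10_000)
print("converged:", info == 0, "  residual norm:", np.linalg.norm(A @ x - b))
```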

This work is intended for researchers in the field of side-channel attacks, countermeasure analysis, and probing security. It reports on a formalization of simulatability in terms of categorical properties, which we think will provide a useful tool in the practitioner's toolbox. The formalization allowed us to revisit some existing definitions (such as probe isolating non-interference) in a simpler way that corresponds to the propagation of \textit{erase morphisms} in the diagrammatic language of PROP categories. From a theoretical perspective, we shed light on probabilistic definitions of simulatability and matrix-based spectral approaches. This could mean, in practice, that potentially better tools can be built. Readers will find a different, and perhaps less contrived, definition of simulatability, which could enable new forms of reasoning. This work does not cover any practical implementation of the proposed tools, which is left for future work.

In this work, we describe a generic approach to show convergence with high probability for both stochastic convex and non-convex optimization with sub-Gaussian noise. In previous works on convex optimization, either the convergence is only in expectation or the bound depends on the diameter of the domain. Instead, we show high probability convergence with bounds depending on the initial distance to the optimal solution. The algorithms use step sizes analogous to the standard settings and are universal to Lipschitz functions, smooth functions, and their linear combinations. The method can also be applied to the non-convex case. We demonstrate an $O((1+\sigma^{2}\log(1/\delta))/T+\sigma/\sqrt{T})$ convergence rate when the number of iterations $T$ is known and an $O((1+\sigma^{2}\log(T/\delta))/\sqrt{T})$ convergence rate when $T$ is unknown for SGD, where $1-\delta$ is the desired success probability. These bounds improve over existing bounds in the literature. Additionally, we demonstrate that our techniques can be used to obtain a high probability bound for AdaGrad-Norm (Ward et al., 2019) that removes the bounded-gradients assumption from previous works. Furthermore, our technique for AdaGrad-Norm extends to the standard per-coordinate AdaGrad algorithm (Duchi et al., 2011), providing the first noise-adapted high probability convergence for AdaGrad.
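A minimal sketch of the AdaGrad-Norm update analyzed above, assuming a toy quadratic objective and Gaussian gradient noise; the constants and objective are illustrative and the snippet is not tied to the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_f(x):
    """Gradient of the illustrative objective f(x) = 0.5 * ||x||^2."""
    return x

def adagrad_norm(x0, T=2000, eta=1.0, b0=1e-2, sigma=0.1):
    """AdaGrad-Norm: one scalar step size adapted to the accumulated gradient norm."""
    x, b2 = x0.copy(), b0 ** 2
    for _ in range(T):
        g = grad_f(x) + sigma * rng.standard_normal(x.shape)  # stochastic gradient
        b2 += g @ g                                           # accumulate squared norms
        x -= (eta / np.sqrt(b2)) * g                          # step size eta / b_t
    return x

x_T = adagrad_norm(np.full(10, 5.0))
print("final objective value:", 0.5 * x_T @ x_T)
```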

We introduce a computationally efficient variant of the model-based ensemble Kalman filter (EnKF). We propose two changes to the original formulation. First, we phrase the setup in terms of precision matrices instead of covariance matrices, and introduce a new prior for the precision matrix that ensures it is sparse. Second, we propose to split the state vector into several blocks and formulate an approximate updating procedure for each of these blocks. In a simulation example, we study the computational speedup and the approximation error resulting from the proposed approach. The speedup is substantial for high dimensional state vectors, allowing the proposed filter to be run on much larger problems than can be done with the original formulation. In the simulation example, the approximation error resulting from the introduced block updating is negligible compared to the Monte Carlo variability inherent in both the original and the proposed procedures.
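For orientation, the update being accelerated is the standard stochastic EnKF analysis step; the sketch below shows that baseline step in covariance form (not the precision-matrix, block-updating variant proposed in this work), with dimensions and noise levels chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_analysis(X_f, y, H, R):
    """One stochastic EnKF analysis step with perturbed observations.

    X_f: (d, N) forecast ensemble, y: (p,) observation,
    H: (p, d) observation operator, R: (p, p) observation error covariance.
    """
    N = X_f.shape[1]
    A = X_f - X_f.mean(axis=1, keepdims=True)       # ensemble anomalies
    P = A @ A.T / (N - 1)                           # sample forecast covariance
    S = H @ P @ H.T + R
    K = np.linalg.solve(S, H @ P).T                 # Kalman gain  P H^T S^{-1}
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X_f + K @ (Y - H @ X_f)                  # analysis ensemble

d, p, N = 20, 5, 50                                 # state dim, obs dim, ensemble size
H = np.eye(p, d)                                    # observe the first p state components
R = 0.1 * np.eye(p)
X_f = rng.standard_normal((d, N))
y = rng.standard_normal(p)
print(enkf_analysis(X_f, y, H, R).shape)            # (20, 50)
```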

Dual consistency, an important issue in developing dual-weighted residual error estimation for goal-oriented mesh adaptivity, is studied in this paper both theoretically and numerically. Based on the Newton-GMG solver, dual consistency is discussed in detail for solving the steady Euler equations. Theoretically, the primal and dual problems, as well as dual consistency, are studied in depth within the Petrov-Galerkin framework. It is found that dual consistency is important both for error estimation and for a stable convergence rate of the quantity of interest. Numerically, a boundary modification technique guarantees dual consistency for problems with general configurations. The advantage of enforcing dual consistency in the Newton-GMG framework is clearly observed in numerical experiments, in which an order of magnitude fewer mesh grids are needed to compute the quantity of interest compared with a dual-inconsistent implementation. Besides, the convergence behavior of the dual-consistent algorithm is stable, so the precision of the quantity of interest improves steadily under mesh refinement in this framework.

Elliptic interface problems whose solutions are $C^0$ continuous have been well studied over the past two decades. The well-known numerical methods include the strongly stable generalized finite element method (SGFEM) and the immersed FEM (IFEM). In this paper, we study numerically a larger class of elliptic interface problems whose solutions are discontinuous. A direct application of the existing methods fails immediately, as the approximate solution must lie in a larger space that admits discontinuous functions. We propose a class of high-order enriched unfitted FEMs to solve these problems with implicit or Robin-type interface jump conditions. We design new enrichment functions that capture the imposed discontinuity of the solution while keeping the condition number from growing fast. A linear enriched method in 1D was recently developed using one enrichment function; we generalize it to arbitrary degree using two simple discontinuous one-sided enrichment functions. The natural tensor-product extension to the 2D case is demonstrated. Optimal-order convergence in the $L^2$ and broken $H^1$ norms is established. We also establish superconvergence at all discretization nodes (including exact nodal values in special cases). Numerical examples are provided to confirm the theory. Finally, to demonstrate the efficiency of the method for practical problems, the enriched linear, quadratic, and cubic elements are applied to a multi-layer wall model for drug-eluting stents in which zero-flux jump conditions and implicit concentration interface conditions are both present.

This work considers the low-rank approximation of a matrix $A(t)$ depending on a parameter $t$ in a compact set $D \subset \mathbb{R}^d$. Application areas that give rise to such problems include computational statistics and dynamical systems. Randomized algorithms are an increasingly popular approach for performing low-rank approximation and they usually proceed by multiplying the matrix with random dimension reduction matrices (DRMs). Applying such algorithms directly to $A(t)$ would involve different, independent DRMs for every $t$, which is not only expensive but also leads to inherently non-smooth approximations. In this work, we propose to use constant DRMs, that is, $A(t)$ is multiplied with the same DRM for every $t$. The resulting parameter-dependent extensions of two popular randomized algorithms, the randomized singular value decomposition and the generalized Nystr\"{o}m method, are computationally attractive, especially when $A(t)$ admits an affine linear decomposition with respect to $t$. We perform a probabilistic analysis for both algorithms, deriving bounds on the expected value as well as failure probabilities for the $L^2$ approximation error when using Gaussian random DRMs. Both the theoretical results and the numerical experiments show that the use of constant DRMs does not impair their effectiveness; our methods reliably return quasi-best low-rank approximations.
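A minimal sketch of the idea, assuming an affine parameter dependence $A(t) = A_0 + t A_1$: the same Gaussian DRM $\Omega$ is reused in a standard randomized SVD for every value of $t$. The test matrices, rank, and oversampling values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def matrix_with_decay(m, n):
    """Random test matrix with geometrically decaying singular values."""
    U, _ = np.linalg.qr(rng.standard_normal((m, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return U @ np.diag(0.6 ** np.arange(n)) @ V.T

def rsvd_fixed_drm(A, Omega, rank):
    """Randomized SVD of A using a given (constant) dimension reduction matrix Omega."""
    Q, _ = np.linalg.qr(A @ Omega)                   # orthonormal basis for the range sketch
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U_small[:, :rank], s[:rank], Vt[:rank]

m, n, rank, oversample = 300, 200, 10, 5
A0, A1 = matrix_with_decay(m, n), matrix_with_decay(m, n)
Omega = rng.standard_normal((n, rank + oversample))  # one Gaussian DRM, reused for every t

for t in np.linspace(0.0, 1.0, 5):                   # A(t) = A0 + t*A1 is affine in t
    A_t = A0 + t * A1
    U, s, Vt = rsvd_fixed_drm(A_t, Omega, rank)
    err = np.linalg.norm(A_t - (U * s) @ Vt) / np.linalg.norm(A_t)
    print(f"t = {t:.2f}   relative Frobenius error = {err:.2e}")
```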

Stochastic gradient descent (SGD) is a scalable and memory-efficient optimization algorithm for large datasets and streaming data, which has drawn a great deal of attention and popularity. The applications of SGD-based estimators to statistical inference, such as interval estimation, have also achieved great success. However, most of the related works are based on i.i.d. observations or Markov chains. When the observations come from a mixing time series, how to conduct valid statistical inference remains unexplored. As a matter of fact, the general correlation among observations poses a challenge for interval estimation. Most existing methods may ignore this correlation and lead to invalid confidence intervals. In this paper, we propose a mini-batch SGD estimator for statistical inference when the data is $\phi$-mixing. The confidence intervals are constructed using an associated mini-batch bootstrap SGD procedure. Using the ``independent block'' trick from \cite{yu1994rates}, we show that the proposed estimator is asymptotically normal, and its limiting distribution can be effectively approximated by the bootstrap procedure. The proposed method is memory-efficient and easy to implement in practice. Simulation studies on synthetic data and an application to a real-world dataset confirm our theory.

This paper introduces a general framework for iterative optimization algorithms and establishes, under general assumptions, that their convergence is asymptotically geometric. We also prove that, under appropriate assumptions, the rate of convergence can be lower bounded. The convergence is then only geometric, and we provide the exact asymptotic convergence rate. This framework allows us to deal with constrained optimization and encompasses the Expectation Maximization algorithm and the mirror descent algorithm, as well as some variants such as the alpha-Expectation Maximization or the Mirror Prox algorithm. Furthermore, we establish sufficient conditions for the convergence of the Mirror Prox algorithm, under which the method converges systematically to the unique minimizer of a convex function on a convex compact set.
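As one concrete instance of the algorithms covered by such a framework, the sketch below runs classical mirror descent with the negative-entropy mirror map (exponentiated gradient) on the probability simplex; the objective and step size are illustrative and the snippet does not implement the paper's general framework.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, steps=500, eta=0.1):
    """Mirror descent with the negative-entropy mirror map on the probability simplex."""
    x = x0.copy()
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))   # multiplicative (exponentiated gradient) update
        x /= x.sum()                     # Bregman projection back onto the simplex
    return x

# Illustrative convex objective on the simplex: f(x) = 0.5 * ||x - c||^2, minimized at c.
c = np.array([0.7, 0.2, 0.1])
x_star = mirror_descent_simplex(lambda x: x - c, np.full(3, 1.0 / 3.0))
print(x_star)                            # close to c
```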

We prove a convergence theorem for U-statistics of degree two, where the data dimension $d$ is allowed to scale with sample size $n$. We find that the limiting distribution of a U-statistic undergoes a phase transition from the non-degenerate Gaussian limit to the degenerate limit, regardless of its degeneracy and depending only on a moment ratio. A surprising consequence is that a non-degenerate U-statistic in high dimensions can have a non-Gaussian limit with a larger variance and asymmetric distribution. Our bounds are valid for any finite $n$ and $d$, independent of individual eigenvalues of the underlying function, and dimension-independent under a mild assumption. As an application, we apply our theory to two popular kernel-based distribution tests, MMD and KSD, whose high-dimensional performance has been challenging to study. In a simple empirical setting, our results correctly predict how the test power at a fixed threshold scales with $d$ and the bandwidth.
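For concreteness, the following sketch computes the degree-two U-statistic estimate of the squared MMD with a Gaussian kernel for two high-dimensional samples; the bandwidth choice, sample sizes, and data are illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, bw):
    """Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 * bw^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def mmd2_ustat(X, Y, bw):
    """Degree-two U-statistic estimate of the squared MMD (equal sample sizes)."""
    n = X.shape[0]
    Kxx, Kyy, Kxy = (gaussian_kernel(A, B, bw) for A, B in ((X, X), (Y, Y), (X, Y)))
    for K in (Kxx, Kyy, Kxy):
        np.fill_diagonal(K, 0.0)          # U-statistic: drop the i = j terms
    return (Kxx.sum() + Kyy.sum() - 2.0 * Kxy.sum()) / (n * (n - 1))

rng = np.random.default_rng(4)
d, n = 50, 200                            # moderately high dimension
bw = np.sqrt(d)                           # simple bandwidth choice for illustration
X = rng.standard_normal((n, d))
Y_null = rng.standard_normal((n, d))      # same distribution as X
Y_alt = rng.standard_normal((n, d)) + 0.2 # mean-shifted alternative
print("MMD^2 under the null:", mmd2_ustat(X, Y_null, bw))
print("MMD^2 under the alternative:", mmd2_ustat(X, Y_alt, bw))
```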
