
Linear systems with a tensor product structure arise naturally when considering the discretization of Laplace-type differential equations or, more generally, multidimensional operators with separable coefficients. In this work, we focus on the numerical solution of linear systems of the form $$ \left(I\otimes \dots\otimes I \otimes A_1+\dots + A_d\otimes I \otimes\dots \otimes I\right)x=b,$$ where the matrices $A_t\in\mathbb R^{n\times n}$ are symmetric positive definite and belong to the class of hierarchically semiseparable matrices. We propose and analyze a nested divide-and-conquer scheme, based on the technology of low-rank updates, that attains the quasi-optimal computational cost $\mathcal O(n^d (\log(n) + \log(\kappa)^2 + \log(\kappa) \log(\epsilon^{-1})))$, where $\kappa$ is the condition number of the linear system and $\epsilon$ is the target accuracy. Our theoretical analysis highlights the role of inexactness in the nested calls of our algorithm and provides worst-case estimates for the amplification of the residual norm. The performance is validated on 2D and 3D case studies.
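
To make the Kronecker-sum structure concrete, here is a minimal NumPy sketch for $d = 2$ that forms the system densely and checks it against the classical diagonalization (Sylvester-equation) solve. It is purely illustrative and is not the paper's HSS-based divide-and-conquer algorithm, which never forms the $n^d \times n^d$ matrix; the sizes and matrices are made up for the example.

```python
# Dense illustration of (I x A1 + A2 x I) x = b for d = 2.
import numpy as np

n = 8
rng = np.random.default_rng(0)

def spd(n):
    """Random symmetric positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A1, A2 = spd(n), spd(n)
I = np.eye(n)
A = np.kron(I, A1) + np.kron(A2, I)      # Kronecker sum, size n^2 x n^2
b = rng.standard_normal(n * n)
x = np.linalg.solve(A, b)

# Equivalent solve via eigendecompositions: unfolding vec(X) = x turns
# the system into the Sylvester equation A1 X + X A2 = B, which the
# eigenbases of A1 and A2 reduce to a diagonal division.
d1, V1 = np.linalg.eigh(A1)
d2, V2 = np.linalg.eigh(A2)
C = V1.T @ b.reshape(n, n, order="F") @ V2
Y = C / (d1[:, None] + d2[None, :])
x2 = (V1 @ Y @ V2.T).reshape(-1, order="F")
print(np.linalg.norm(x - x2) / np.linalg.norm(x))  # ~ machine precision
```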

Related content

In this paper, we study the following nonlinear matrix decomposition (NMD) problem: given a sparse nonnegative matrix $X$, find a low-rank matrix $\Theta$ such that $X \approx f(\Theta)$, where $f$ is an element-wise nonlinear function. We focus on the case where $f(\cdot) = \max(0, \cdot)$, the rectified linear unit (ReLU) nonlinear activation. We refer to the corresponding problem as ReLU-NMD. We first provide a brief overview of the existing approaches that were developed to tackle ReLU-NMD. Then we introduce two new algorithms: (1) aggressive accelerated NMD (A-NMD), which uses an adaptive Nesterov extrapolation to accelerate an existing algorithm, and (2) three-block NMD (3B-NMD), which parametrizes $\Theta = WH$ and leads to a significant reduction in the computational cost. We also propose an effective initialization strategy based on the nuclear norm as a proxy for the rank function. We illustrate the effectiveness of the proposed algorithms (available on gitlab) on synthetic and real-world data sets.
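
As a point of reference for the model $X \approx \max(0, \Theta)$, the following sketch implements a simple latent-variable baseline for ReLU-NMD (not the paper's A-NMD or 3B-NMD): keep a latent matrix $Z$ that equals $X$ on its positive support and is nonpositive elsewhere, and alternate with a truncated-SVD rank-$r$ projection. The synthetic data and rank are chosen arbitrarily.

```python
import numpy as np

def relu_nmd_naive(X, r, iters=100, seed=0):
    """Naive alternating scheme for X ~ max(0, Theta), rank(Theta) <= r."""
    rng = np.random.default_rng(seed)
    pos = X > 0                      # support of the sparse data
    Z = X.copy()
    Theta = rng.standard_normal(X.shape)
    for _ in range(iters):
        # Rank-r projection of the latent matrix via truncated SVD.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Theta = (U[:, :r] * s[:r]) @ Vt[:r]
        # Re-impose the data on its support; elsewhere the latent
        # entries are free but must stay nonpositive.
        Z = np.where(pos, X, np.minimum(Theta, 0.0))
    return Theta

# Tiny synthetic test: data generated as max(0, WH) is recovered well.
rng = np.random.default_rng(1)
W, H = rng.standard_normal((50, 5)), rng.standard_normal((5, 40))
X = np.maximum(W @ H, 0.0)
Theta = relu_nmd_naive(X, r=5)
print(np.linalg.norm(np.maximum(Theta, 0) - X) / np.linalg.norm(X))
```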

Local search is a powerful heuristic in optimization and computer science, whose complexity has been studied in both the white box and black box models. In the black box model, we are given a graph $G = (V,E)$ and oracle access to a function $f : V \to \mathbb{R}$. The local search problem is to find a vertex $v$ that is a local minimum, i.e. with $f(v) \leq f(u)$ for all $(u,v) \in E$, using as few queries as possible. The query complexity is well understood on the grid and the hypercube, but much less is known beyond. We show that the query complexity of local search on $d$-regular expanders of constant degree is $\Omega\left(\frac{\sqrt{n}}{\log{n}}\right)$, where $n$ is the number of vertices. This matches, within a logarithmic factor, the upper bound of $O(\sqrt{n})$ for constant degree graphs from Aldous (1983), implying that steepest descent with a warm start is an essentially optimal algorithm for expanders. The best lower bound known from prior work was $\Omega\left(\frac{\sqrt[8]{n}}{\log{n}}\right)$, shown by Santha and Szegedy (2004) for quantum and randomized algorithms. We obtain this result by considering a broader framework of graph features such as vertex congestion and separation number. We show that for each graph, the randomized query complexity of local search is $\Omega\left(\frac{n^{1.5}}{g}\right)$, where $g$ is the vertex congestion of the graph, and $\Omega\left(\sqrt[4]{\frac{s}{\Delta}}\right)$, where $s$ is the separation number and $\Delta$ is the maximum degree. For the separation number, the previous bound was $\Omega\left(\sqrt[8]{\frac{s}{\Delta}} /\log{n}\right)$, given by Santha and Szegedy for quantum and randomized algorithms. We also show a variant of the relational adversary method from Aaronson (2006), which is asymptotically at least as strong as the original version for all randomized algorithms and strictly stronger for some problems.
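
Since the upper bound being matched is algorithmically simple, a short sketch may help: steepest descent with a warm start queries roughly $\sqrt{n}$ random vertices, starts from the best one, and then descends until a local minimum is reached. The graph encoding and the toy path-graph example are our own illustration, not code from the paper.

```python
import math
import random

def local_search(graph, f, seed=0):
    """graph: dict vertex -> list of neighbours; f: vertex -> value."""
    rng = random.Random(seed)
    V = list(graph)
    # Warm start: best of ~sqrt(n) uniformly random samples.
    samples = rng.sample(V, k=min(len(V), math.isqrt(len(V)) + 1))
    v = min(samples, key=f)
    # Steepest descent until v is a local minimum.
    while True:
        u = min(graph[v], key=f)
        if f(u) >= f(v):
            return v  # f(v) <= f(u) for all neighbours u
        v = u

# Example: path graph on 100 vertices, unique minimum at 42.
graph = {i: [j for j in (i - 1, i + 1) if 0 <= j < 100] for i in range(100)}
print(local_search(graph, lambda v: abs(v - 42)))  # -> 42
```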

Integrating evolutionary partial differential equations (PDEs) is an essential ingredient for studying the dynamics of the solutions. Indeed, simulations are at the core of scientific computing, but their mathematical reliability is often difficult to quantify, especially when one is interested in the output of a given simulation, rather than in the asymptotic regime where the discretization parameter tends to zero. In this paper we present a computer-assisted proof methodology to perform rigorous time integration for scalar semilinear parabolic PDEs with periodic boundary conditions. We formulate an equivalent zero-finding problem based on a variation of constants formula in Fourier space. Using Chebyshev interpolation and domain decomposition, we then finish the proof with a Newton--Kantorovich type argument. The final output of this procedure is a proof of existence of an orbit, together with guaranteed error bounds between this orbit and a numerically computed approximation. We illustrate the versatility of the approach with results for the Fisher equation, the Swift--Hohenberg equation, the Ohta--Kawasaki equation and the Kuramoto--Sivashinsky equation. We expect that this rigorous integrator can form the basis for studying boundary value problems for connecting orbits in partial differential equations.
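
For intuition about the variation of constants formula underpinning the method, here is a plain (non-rigorous) exponential Euler integrator for the Fisher equation $u_t = u_{xx} + u(1-u)$ with periodic boundary conditions, which discretizes $u(t) = e^{tL}u(0) + \int_0^t e^{(t-s)L} N(u(s))\,ds$ with $L = \partial_{xx}$ in Fourier space. The paper instead encloses this integral rigorously via Chebyshev interpolation and a Newton--Kantorovich argument; the grid size, time step, and initial data below are arbitrary.

```python
import numpy as np

n, T, steps = 128, 2.0, 2000
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)         # integer wavenumbers
L = -k**2                                 # Fourier symbol of d^2/dx^2
dt = T / steps
E = np.exp(dt * L)                        # e^{dt L}
phi1 = np.ones_like(L)                    # phi_1(z) = (e^z - 1)/z
nz = L != 0
phi1[nz] = (np.exp(dt * L[nz]) - 1.0) / (dt * L[nz])

u = 0.1 + 0.01 * np.cos(x)                # small positive initial data
uh = np.fft.fft(u)
for _ in range(steps):
    u = np.real(np.fft.ifft(uh))
    Nh = np.fft.fft(u * (1.0 - u))        # Fisher nonlinearity u(1 - u)
    uh = E * uh + dt * phi1 * Nh          # exponential Euler step
u = np.real(np.fft.ifft(uh))
print(u.min(), u.max())  # homogeneous mode follows logistic growth toward 1
```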

Under some regularity assumptions, we report an a priori error analysis of a discontinuous Galerkin (dG) scheme for the Poisson and Stokes flow problems in their dual mixed formulation. Both formulations satisfy a Babu\v{s}ka-Brezzi type condition within the space $H(\mathrm{div}) \times L^2$. It is well known that the lowest-order Crouzeix-Raviart element paired with piecewise constants satisfies such a condition on (broken) $H^1 \times L^2$ spaces. In the present article, we use this pair. The continuity of the normal component is weakly imposed by penalizing jumps of the broken $H(\mathrm{div})$ component. For the resulting methods, we prove well-posedness and convergence with constants independent of the data and mesh size. We report error estimates in the methods' natural norms and optimal local error estimates for the divergence error. In fact, our finite element solution shares, for each triangle, one degree of freedom with the Crouzeix-Raviart interpolant, and the divergence is locally the best approximation for any regularity. Numerical experiments support the findings and suggest that the other errors converge optimally even for lowest-regularity solutions and a crack problem, as long as the crack is resolved by the mesh.

In this paper, we build on the class of $f$-divergence induced coherent risk measures for portfolio optimization and derive the corresponding necessary optimality conditions formulated in CAPM format. We derive a new f-Beta, analogous to the Standard Beta, and extend it in the spirit of previous work on Drawdown Betas. The f-Beta evaluates portfolio performance under an optimally perturbed market probability measure, and this family of Beta metrics offers varying degrees of flexibility and interpretability. We conduct numerical experiments using selected stocks against a chosen S\&P 500 market index as the optimal portfolio to demonstrate the new perspectives provided by Hellinger-Beta compared with the Standard Beta and Drawdown Betas. In our experiments, the squared Hellinger distance is the particular choice of $f$-divergence function in the $f$-divergence induced risk measures and f-Betas. We calculate Hellinger-Beta metrics based on deviation measures and further extend this approach to drawdown measures, resulting in another new metric, which we term Hellinger-Drawdown Beta. We compare the resulting Hellinger-Beta values under various choices of the risk-aversion parameter to study their sensitivity to increasing stress levels.
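
To illustrate the mechanics of a Beta evaluated under a perturbed market measure (though not the paper's optimal Hellinger perturbation, which comes out of the risk-measure optimization), the following sketch compares a standard scenario-based Beta with one computed under an exponential tilt toward market losses. The returns, the form of the tilt, and the stress parameter theta are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
rm = rng.normal(0.0004, 0.01, 2000)         # market returns (synthetic)
rs = 0.8 * rm + rng.normal(0, 0.008, 2000)  # stock returns (synthetic)

def beta(rs, rm, w=None):
    """Beta = Cov_w(rs, rm) / Var_w(rm) under scenario weights w."""
    w = np.full(len(rm), 1.0 / len(rm)) if w is None else w / w.sum()
    mm, ms = w @ rm, w @ rs
    return (w @ ((rs - ms) * (rm - mm))) / (w @ (rm - mm) ** 2)

# Stress the measure: exponentially tilt scenario weights toward
# market losses (an illustrative, not optimal, perturbation).
theta = 50.0                                 # assumed stress level
w = np.exp(-theta * rm)
print(beta(rs, rm), beta(rs, rm, w))
```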

We describe a proof-of-concept development and application of a phase averaging technique to the nonlinear rotating shallow water equations on the sphere, discretised using compatible finite element methods. Phase averaging consists of averaging the nonlinearity over phase shifts in the exponential of the linear wave operator. Phase averaging aims to capture the slow dynamics in a solution that is smoother in time (in transformed variables), so that larger timesteps may be taken. We overcome the two key technical challenges that stand in the way of studying phase averaging and advancing its implementation: 1) we have developed a stable matrix exponential specific to finite elements, and 2) we have developed a parallel finite averaging procedure. Following Peddle et al (2019), we consider finite-width phase averaging windows, since the equations have a finite timescale separation. In our numerical implementation, the averaging integral is replaced by a Riemann sum, where each term can be evaluated in parallel. This creates an opportunity for parallelism in the timestepping method, which we use here to compute our solutions. Here, we focus on the stability and accuracy of the numerical solution. We confirm that there is an optimal averaging window, in agreement with theory. Critically, we observe that the combined time discretisation and averaging error is much smaller than the time discretisation error of a semi-implicit method applied to the same spatial discretisation. An evaluation of the parallel aspects will follow in later work.
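
The Riemann-sum form of the averaging is easy to sketch on a toy ODE $\dot u = Lu + N(u)$ with a skew (oscillatory) linear operator: average the conjugated nonlinearity $e^{-sL}N(e^{sL}u)$ over a finite window of phase shifts, with each term of the sum independent of the others (hence parallelizable). The dense matrix exponential below stands in for the paper's stable finite-element-specific exponential; the operator, nonlinearity, window width, and node count are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

L = np.array([[0.0, -10.0], [10.0, 0.0]])   # fast rotation (toy operator)

def N(u):
    """Toy quadratic nonlinearity."""
    return np.array([u[0] * u[1], -u[0] ** 2])

def averaged_N(u, T=0.5, M=16):
    """Riemann-sum phase average over the window [-T, T], M+1 nodes.
    Each term is independent, so they could be evaluated in parallel."""
    s_vals = np.linspace(-T, T, M + 1)
    terms = [expm(-s * L) @ N(expm(s * L) @ u) for s in s_vals]
    return sum(terms) / len(terms)

u = np.array([1.0, 0.0])
print(N(u), averaged_N(u))   # raw vs. phase-averaged nonlinearity
```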

Many problems in robotics, such as estimating the state from noisy sensor data or aligning two LiDAR point clouds, can be posed and solved as least-squares problems. Unfortunately, vanilla nonminimal solvers for least-squares problems are notoriously sensitive to outliers. As such, various robust loss functions have been proposed to reduce the sensitivity to outliers. Examples of loss functions include pseudo-Huber, Cauchy, and Geman-McClure. Recently, these loss functions have been generalized into a single loss function that enables the best loss function to be found adaptively based on the distribution of the residuals. However, even with the generalized robust loss function, most nonminimal solvers can only solve the problem locally given a prior state estimate, due to the nonconvexity of the problem. The first contribution of this paper is to combine graduated nonconvexity (GNC) with the generalized robust loss function to solve least-squares problems without a prior state estimate and without the need to specify a loss function. Moreover, existing loss functions, including the generalized loss function, are based on a Gaussian-like distribution. However, residuals are often defined as the squared norm of a multivariate error and distributed in a Chi-like fashion. The second contribution of this paper is to apply a norm-aware adaptive robust loss function within a GNC framework. This leads to additional robustness when compared with state-of-the-art methods. Simulations and experiments demonstrate that the proposed approach is more robust and yields faster convergence times compared to other GNC formulations.
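
For readers unfamiliar with GNC, the sketch below shows the annealing mechanics on a robust line-fitting problem, using the fixed Geman-McClure loss with an iteratively reweighted least-squares (IRLS) inner solver; the paper's contribution replaces this fixed loss with a generalized, norm-aware adaptive one. The surrogate parameter mu starts large (a near-convex surrogate) and is annealed toward the true loss; the data, inlier threshold c, and annealing factor are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, 100)
y[::10] += rng.uniform(3, 6, 10)             # gross outliers

A = np.column_stack([x, np.ones_like(x)])
c = 0.1                                      # assumed inlier threshold
theta = np.linalg.lstsq(A, y, rcond=None)[0]  # naive fit (biased by outliers)
r = y - A @ theta

mu = 2.0 * np.abs(r).max() ** 2 / c**2       # start with near-convex surrogate
while True:
    for _ in range(5):                       # inner IRLS iterations
        w = (mu * c**2 / (mu * c**2 + r**2)) ** 2   # Geman-McClure GNC weights
        sw = np.sqrt(w)
        theta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        r = y - A @ theta
    if mu <= 1.0:
        break
    mu = max(1.0, mu / 1.4)                  # anneal toward the true loss
print(theta)                                 # close to [2.0, 1.0]
```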

Boundary value problems based on the convection-diffusion equation arise naturally in models of fluid flow across a variety of engineering applications and design feasibility studies. Naturally, their efficient numerical solution has continued to be an interesting and active topic of research for decades. In the context of finite-element discretization of these boundary value problems, the Streamline Upwind Petrov-Galerkin (SUPG) technique yields accurate discretization in the singularly perturbed regime. In this paper, we propose efficient multigrid iterative solution methods for the resulting linear systems. In particular, we show that techniques from standard multigrid for anisotropic problems can be adapted to these discretizations on both tensor-product as well as semi-structured meshes. The resulting methods are demonstrated to be robust preconditioners for several standard flow benchmarks.
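
As background, the following is a generic geometric two-grid cycle on a 1D upwind convection-diffusion model problem, showing only the smoother/coarse-correction skeleton that such multigrid methods share. The paper's actual contribution, anisotropy-aware smoothers and transfers for SUPG discretizations on tensor-product and semi-structured 2D meshes, is not reproduced here; the problem sizes and diffusion parameter eps are arbitrary.

```python
import numpy as np

def A_mat(n, eps):
    """First-order upwind FD for -eps*u'' + u' on n interior points."""
    h = 1.0 / (n + 1)
    d = 2 * eps / h**2 + 1 / h              # diagonal
    lo = -eps / h**2 - 1 / h                # upwind (inflow) coupling
    up = -eps / h**2
    return (np.diag(d * np.ones(n)) + np.diag(lo * np.ones(n - 1), -1)
            + np.diag(up * np.ones(n - 1), 1))

def gauss_seidel(A, b, x, sweeps=3):
    for _ in range(sweeps):                  # lexicographic = downstream order
        for i in range(len(b)):
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

def restrict(r):                             # full weighting
    return 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])

def prolong(ec, n):                          # linear interpolation
    e = np.zeros(n)
    e[1:-1:2] = ec
    e[2:-2:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    return e

n, eps = 63, 1e-2
A, Ac = A_mat(n, eps), A_mat((n - 1) // 2, eps)
b, x = np.ones(n), np.zeros(n)
for cycle in range(8):
    x = gauss_seidel(A, b, x)                                 # pre-smooth
    x += prolong(np.linalg.solve(Ac, restrict(b - A @ x)), n) # coarse correction
    x = gauss_seidel(A, b, x)                                 # post-smooth
    print(cycle, np.linalg.norm(b - A @ x))                   # residual decreases
```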

Low-rank matrix estimation under heavy-tailed noise is challenging, both computationally and statistically. Convex approaches have been proven statistically optimal but suffer from high computational costs, especially since robust loss functions are usually non-smooth. More recently, computationally fast non-convex approaches via sub-gradient descent have been proposed, which, unfortunately, fail to deliver a statistically consistent estimator even under sub-Gaussian noise. In this paper, we introduce a novel Riemannian sub-gradient (RsGrad) algorithm which is not only computationally efficient, with linear convergence, but also statistically optimal, whether the noise is Gaussian or heavy-tailed. Convergence theory is established for a general framework, and specific applications to the absolute loss, Huber loss, and quantile loss are investigated. Compared with existing non-convex methods, ours reveals a surprising phenomenon of dual-phase convergence. In phase one, RsGrad behaves as in typical non-smooth optimization, requiring gradually decaying stepsizes. However, phase one only delivers a statistically sub-optimal estimator, as already observed in the existing literature. Interestingly, during phase two, RsGrad converges linearly as if minimizing a smooth and strongly convex objective function, so that a constant stepsize suffices. Underlying the phase-two convergence is the smoothing effect of random noise on the non-smooth robust losses in an area close, but not too close, to the truth. Lastly, RsGrad is applicable to low-rank tensor estimation under heavy-tailed noise, where a statistically optimal rate is attainable with the same phenomenon of dual-phase convergence, and a novel shrinkage-based second-order moment method is guaranteed to deliver a warm initialization. Numerical simulations confirm our theoretical discovery and showcase the superiority of RsGrad over prior methods.
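
A heavily simplified sketch of the flavor of algorithm discussed here: a rank-constrained sub-gradient method for the entrywise absolute loss, with retraction by truncated SVD. This is not the paper's RsGrad, which works on the low-rank manifold with a tangent-space projection and switches from decaying to constant stepsizes between the two phases; the quantile clipping used for initialization is a crude stand-in for the paper's shrinkage-based second-order moment method, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 60, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # ground truth
Y = M + rng.standard_t(df=1.5, size=(n, n))                    # heavy-tailed noise

def svd_r(Z, r):
    """Best rank-r approximation (retraction by truncated SVD)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

tau = np.quantile(np.abs(Y), 0.95)        # clip outliers before initializing
X = svd_r(np.clip(Y, -tau, tau), r)
print("init error:", np.linalg.norm(X - M) / np.linalg.norm(M))
for t in range(400):
    G = np.sign(X - Y)                    # sub-gradient of sum_ij |X_ij - Y_ij|
    G /= np.linalg.norm(G)                # normalized sub-gradient direction
    X = svd_r(X - 5.0 * 0.97**t * G, r)   # decaying stepsize + retraction
print("final error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```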

Let $G$ be a graph on $n$ vertices of maximum degree $\Delta$. We show that, for any $\delta > 0$, the down-up walk on independent sets of size $k \leq (1-\delta)\alpha_c(\Delta)n$ mixes in time $O_{\Delta,\delta}(k\log{n})$, thereby resolving a conjecture of Davies and Perkins in an optimal form. Here, $\alpha_{c}(\Delta)n$ is the NP-hardness threshold for the problem of counting independent sets of a given size in a graph on $n$ vertices of maximum degree $\Delta$. Our mixing time has optimal dependence on $k,n$ for the entire range of $k$; previously, even polynomial mixing was not known. In fact, for $k = \Omega_{\Delta}(n)$ in this range, we establish a log-Sobolev inequality with optimal constant $\Omega_{\Delta,\delta}(1/n)$. At the heart of our proof are three new ingredients, which may be of independent interest. The first is a method for lifting $\ell_\infty$-independence from a suitable distribution on the discrete cube -- in this case, the hard-core model -- to the slice by proving stability of an Edgeworth expansion using a multivariate zero-free region for the base distribution. The second is a generalization of the Lee-Yau induction to prove log-Sobolev inequalities for distributions on the slice with considerably less symmetry than the uniform distribution. The third is a sharp decomposition-type result which provides a lossless comparison between the Dirichlet form of the original Markov chain and that of the so-called projected chain in the presence of a contractive coupling.
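
The chain in question is easy to state in code: one down-up step removes a uniformly random vertex from the current independent set of size $k$, then adds back a uniformly random vertex that keeps the set independent (possibly the one just removed); the stationary distribution of this chain is uniform over independent sets of size $k$. The 8-cycle example below is our own minimal illustration.

```python
import random

def down_up_step(G, S, rng):
    """One down-up step. G: dict vertex -> set of neighbours; S: size-k set."""
    v = rng.choice(tuple(S))                  # "down": drop a random vertex
    S = S - {v}
    blocked = set().union(*(G[u] for u in S)) if S else set()
    candidates = [w for w in G if w not in S and w not in blocked]
    return S | {rng.choice(candidates)}       # "up": add a random legal vertex

# Example: sample independent sets of size 3 on the 8-cycle.
G = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}
rng = random.Random(0)
S = {0, 2, 4}
for _ in range(1000):
    S = down_up_step(G, S, rng)
print(S)
```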
