
This paper focuses on the analysis of conforming virtual element methods for general second-order linear elliptic problems with rough source terms and applies it to a Poisson inverse source problem with rough measurements. For the forward problem, when the source term belongs to $H^{-1}(\Omega)$, the right-hand side of the discrete approximation defined through polynomial projections is not meaningful even for the standard conforming virtual element method. The modified discrete scheme in this paper introduces a novel companion operator in the context of the conforming virtual element method and allows data in $H^{-1}(\Omega)$. This paper has {\it three} main contributions. The {\it first} contribution is the design of a conforming companion operator $J$ from the {\it conforming virtual element space} to the Sobolev space $V:=H^1_0(\Omega)$, a modified virtual element scheme, and an \textit{a priori} error estimate for the Poisson problem in best-approximation form without data oscillations. The {\it second} contribution is the extension of the \textit{a priori} analysis to general second-order elliptic problems with source term in $V^*$. The {\it third} contribution is an application of the companion operator to a Poisson inverse source problem when the measurements belong to $V^*$. Tikhonov regularization is used to regularize the ill-posed inverse problem, and the conforming virtual element method approximates the regularized problem given finite measurement data; error estimates are established for this discretisation. Numerical tests on different polygonal meshes, both for general second-order problems and for a Poisson inverse source problem with finite measurement data, verify the theoretical results.
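
For orientation, a schematic form of the modified scheme (the notation $a_h$, $V_h$, and $\Pi^0_k$ is illustrative, not quoted from the paper): the standard load term $(f,\Pi^0_k v_h)_{L^2(\Omega)}$ requires $f\in L^2(\Omega)$, whereas the companion operator $J:V_h\to V$ restores a well-defined duality pairing,
\[
\text{find } u_h \in V_h:\qquad a_h(u_h, v_h) \;=\; \langle f,\, J v_h \rangle_{V^{*}\times V} \qquad \text{for all } v_h \in V_h,
\]
which is meaningful for any $f\in H^{-1}(\Omega)=V^*$.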

Related content

Interior-point methods offer a highly versatile framework for convex optimization that is effective in theory and practice. A key notion in their theory is that of a self-concordant barrier. We give a suitable generalization of self-concordance to Riemannian manifolds and show that it gives the same structural results and guarantees as in the Euclidean setting, in particular local quadratic convergence of Newton's method. We analyze a path-following method for optimizing compatible objectives over a convex domain for which one has a self-concordant barrier, and obtain the standard complexity guarantees as in the Euclidean setting. We provide general constructions of barriers, and show that on the space of positive-definite matrices and other symmetric spaces, the squared distance to a point is self-concordant. To demonstrate the versatility of our framework, we give algorithms with state-of-the-art complexity guarantees for the general class of scaling and non-commutative optimization problems, which have been of much recent interest, and we provide the first algorithms for efficiently computing high-precision solutions to minimal enclosing ball and geometric median problems in nonpositive curvature.
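
For reference, the classical Euclidean definitions being generalized (the paper's Riemannian versions replace these derivatives with covariant derivatives along geodesics): a function $F$ on the interior of a convex set is self-concordant if
\[
\big| D^3 F(x)[h,h,h] \big| \;\le\; 2\,\big( D^2 F(x)[h,h] \big)^{3/2},
\]
and is a $\nu$-self-concordant barrier if, in addition, $\big(D F(x)[h]\big)^2 \le \nu\, D^2 F(x)[h,h]$ for all admissible $x$ and directions $h$; in the Euclidean theory, path-following with such a barrier reaches an $\varepsilon$-optimal point in $O(\sqrt{\nu}\,\log(1/\varepsilon))$ Newton steps.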

In this paper, we propose a robust low-order stabilization-free virtual element method on quadrilateral meshes for linear elasticity that is based on the stress-hybrid principle. We refer to this approach as the Stress-Hybrid Virtual Element Method (SH-VEM). In this method, the Hellinger-Reissner variational principle is adopted, wherein both the equilibrium equations and the strain-displacement relations are variationally enforced. We consider small-strain deformations of linear elastic solids in the compressible and near-incompressible regimes over quadrilateral (convex and nonconvex) meshes. Within an element, the displacement field is approximated as a linear combination of canonical shape functions that are \textit{virtual}. The stress field, similar to the stress-hybrid finite element method of Pian and Sumihara, is represented using a linear combination of symmetric tensor polynomials. A 5-parameter expansion of the stress field is used in each element, with stress transformation equations applied on distorted quadrilaterals. In the variational statement of the strain-displacement relations, the divergence theorem is invoked to express the stress coefficients in terms of the nodal displacements. This results in a formulation with solely the nodal displacements as unknowns. Numerical results are presented for several benchmark problems from linear elasticity. We show that SH-VEM is free of volumetric and shear locking, and it converges optimally in the $L^2$ norm and energy seminorm of the displacement field, and in the $L^2$ norm of the hydrostatic stress.
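
As a point of reference, the classical 5-parameter Pian-Sumihara stress expansion, written in terms of the natural-coordinate stress components of a quadrilateral with coordinates $(\xi,\eta)$, has the form (SH-VEM additionally applies stress transformation equations on distorted elements, and the exact basis choice may differ):
\[
\begin{pmatrix}\sigma_{11}\\ \sigma_{22}\\ \sigma_{12}\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 & \eta & 0\\
0 & 1 & 0 & 0 & \xi\\
0 & 0 & 1 & 0 & 0
\end{pmatrix}
\begin{pmatrix}\beta_1\\ \beta_2\\ \beta_3\\ \beta_4\\ \beta_5\end{pmatrix},
\]
so each element carries five stress parameters $\beta_1,\dots,\beta_5$ that are subsequently condensed out in favour of the nodal displacements.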

We develop novel theory and algorithms for computing an approximate solution to $Ax=b$, or to $A^TAx=A^Tb$, where $A$ is an $m \times n$ real matrix of arbitrary rank. First, we describe the {\it Triangle Algorithm} (TA), where given an ellipsoid $E_{A,\rho}=\{Ax: \Vert x \Vert \leq \rho\}$, in each iteration it either computes a successively improving approximation $b_k=Ax_k \in E_{A,\rho}$, or proves $b \not\in E_{A, \rho}$. We then extend TA for computing an approximate solution or minimum-norm solution. Next, we develop a dynamic version of TA, the {\it Centering Triangle Algorithm} (CTA), generating residuals $r_k=b - Ax_k$ via iterations of the simple formula $F_1(r)=r-(r^THr/r^TH^2r)Hr$, where $H=A$ when $A$ is symmetric positive semidefinite, and otherwise $H=AA^T$, which need not be computed explicitly. More generally, CTA extends to a family of iteration functions $F_t(r)$, $t=1, \dots, m$, satisfying the following. On the one hand, given $t \leq m$ and $r_0=b-Ax_0$, where $x_0=A^Tw_0$ with $w_0 \in \mathbb{R}^m$ arbitrary, for all $k \geq 1$, $r_k=F_t(r_{k-1})=b-Ax_k$ and $A^Tr_k$ converges to zero. Algorithmically, if $H$ is invertible with condition number $\kappa$, then in $k=O((\kappa/t) \ln \varepsilon^{-1})$ iterations $\Vert r_k \Vert \leq \varepsilon$. If $H$ is singular with $\kappa^+$ the ratio of its largest to smallest positive eigenvalues, then in $k =O(\kappa^+/(t\varepsilon))$ iterations either $\Vert r_k \Vert \leq \varepsilon$ or $\Vert A^T r_k\Vert= O(\sqrt{\varepsilon})$. If $N$ is the number of nonzero entries of $A$, each iteration takes $O(Nt+t^3)$ operations. On the other hand, given $r_0=b-Ax_0$, suppose its minimal polynomial with respect to $H$ has degree $s$. Then $Ax=b$ is solvable if and only if $F_{s}(r_0)=0$. Moreover, $A^TAx=A^Tb$ is solvable while $Ax=b$ is not if and only if $F_{s}(r_0) \neq 0$ but $A^T F_s(r_0)=0$. Additionally, $\{F_t(r_0)\}_{t=1}^s$ is computable in $O(Ns+s^3)$ operations.
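
A minimal NumPy sketch of the $t=1$ CTA iteration as stated above, in the general case $H=AA^T$ (for symmetric positive semidefinite $A$ one would instead take $H=A$); function and variable names are illustrative, not the authors' reference code:

import numpy as np

def cta_f1(A, b, x0=None, tol=1e-8, max_iter=10000):
    """Sketch of the first-order Centering Triangle Algorithm iteration
    F_1(r) = r - (r^T H r / r^T H^2 r) H r  with  H = A A^T,
    using only matrix-vector products with A and A^T (H is never formed)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    r = b - A @ x
    for _ in range(max_iter):
        Hr = A @ (A.T @ r)            # H r
        HHr = A @ (A.T @ Hr)          # H^2 r
        denom = r @ HHr
        if denom == 0.0:              # no further progress possible
            break
        alpha = (r @ Hr) / denom
        x = x + alpha * (A.T @ r)     # keeps r = b - A x consistent with F_1
        r = r - alpha * Hr            # r <- F_1(r)
        if np.linalg.norm(r) <= tol or np.linalg.norm(A.T @ r) <= tol:
            break
    return x, r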

In building practical applications of evolutionary computation (EC), two optimizations are essential. First, the parameters of the search method need to be tuned to the domain in order to balance exploration and exploitation effectively. Second, the search method needs to be distributed to take advantage of parallel computing resources. This paper presents BLADE (BLAnket Distributed Evolution) as an approach to achieving both goals simultaneously. BLADE uses blankets (i.e., masks on the genetic representation) to tune the evolutionary operators during the search, and implements the search through hub-and-spoke distribution. In the paper, (1) the blanket method is formalized for the (1 + 1)EA case as a Markov chain process. Its effectiveness is then demonstrated by analyzing dominant and subdominant eigenvalues of stochastic matrices, suggesting a generalizable theory; (2) the fitness-level theory is used to analyze the distribution method; and (3) these insights are verified experimentally on three benchmark problems, showing that both blankets and distribution lead to accelerated evolution. Moreover, a surprising synergy emerges between them: When combined with distribution, the blanket approach achieves more than $n$-fold speedup with $n$ clients in some cases. The work thus highlights the importance and potential of optimizing evolutionary computation in practical applications.
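
One illustrative reading of the blanket idea for the (1+1) EA case, with the blanket taken as a boolean mask that restricts which genes may mutate; this is a sketch of the concept, not BLADE's actual implementation, and the names are hypothetical:

import numpy as np

def one_plus_one_ea_with_blanket(fitness, n_bits, blanket, p_mut=None,
                                 generations=1000, rng=None):
    """(1+1) EA on bit strings where a 'blanket' (boolean mask) gates mutation."""
    rng = np.random.default_rng() if rng is None else rng
    p_mut = 1.0 / n_bits if p_mut is None else p_mut
    parent = rng.integers(0, 2, n_bits)
    best = fitness(parent)
    for _ in range(generations):
        flips = (rng.random(n_bits) < p_mut) & blanket   # mutate only unmasked genes
        child = np.where(flips, 1 - parent, parent)
        f = fitness(child)
        if f >= best:                                    # elitist replacement
            parent, best = child, f
    return parent, best

# Example: OneMax with a blanket covering the first half of the genome.
n = 40
blanket = np.arange(n) < n // 2
x, f = one_plus_one_ea_with_blanket(lambda s: s.sum(), n, blanket)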

PDE-constrained inverse problems are some of the most challenging and computationally demanding problems in computational science today. Fine meshes that are required to accurately compute the PDE solution introduce an enormous number of parameters and require large-scale computing resources (more processors and more memory) to solve such systems in a reasonable time. For inverse problems constrained by time-dependent PDEs, the adjoint method that is often employed to efficiently compute gradients and higher-order derivatives requires solving a time-reversed, so-called adjoint PDE that depends on the forward PDE solution at each timestep. This necessitates the storage of a high-dimensional forward solution vector at every timestep. Such a procedure quickly exhausts the available memory resources. Several approaches that trade additional computation for a reduced memory footprint have been proposed to mitigate the memory bottleneck, including checkpointing and compression strategies. In this work, we propose a close-to-ideal scalable compression approach using autoencoders to eliminate the need for checkpointing and substantial memory storage, thereby reducing both the time-to-solution and memory requirements. We compare our approach with checkpointing and an off-the-shelf compression approach on an earth-scale ill-posed seismic inverse problem. The results verify the expected close-to-ideal speedup for both the gradient and Hessian-vector product using the proposed autoencoder compression approach. To highlight the usefulness of the proposed approach, we combine autoencoder compression with the data-informed active subspace (DIAS) prior, showing how the DIAS method can be affordably extended to large-scale problems without the need for checkpointing and large memory.
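
A structural sketch of the compression pattern being described, assuming a pre-trained autoencoder with encode/decode mappings (step_forward, step_adjoint, encode, and decode are placeholders, not the paper's code): during the forward sweep only low-dimensional latent codes are stored, and the adjoint sweep decodes them on demand instead of reading checkpointed full states.

def forward_sweep(step_forward, u0, n_steps, encode):
    """Advance the forward PDE and store compressed latent codes of each state."""
    latents, u = [], u0
    for k in range(n_steps):
        u = step_forward(u, k)
        latents.append(encode(u))        # store low-dimensional code, not u itself
    return latents, u

def adjoint_sweep(step_adjoint, lam_T, latents, decode):
    """March the adjoint backward, reconstructing forward states from the codes."""
    lam = lam_T
    for k in reversed(range(len(latents))):
        u_k = decode(latents[k])          # approximate forward state at step k
        lam = step_adjoint(lam, u_k, k)   # adjoint step needs the forward state
    return lam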

The Langevin algorithms are frequently used to sample posterior distributions in Bayesian inference. In many practical problems, however, the posterior distributions often contain non-differentiable components, posing challenges for the standard Langevin algorithms, as they require evaluating the gradient of the energy function at each iteration. To this end, a popular remedy is to utilize the proximity operator, and as a result one needs to solve a proximity subproblem in each iteration. The conventional practice is to solve each subproblem accurately, which can be exceedingly expensive since a subproblem must be solved at every iteration. We propose an approximate primal-dual fixed-point algorithm for solving the subproblem, which only seeks an approximate solution of the subproblem and therefore reduces the computational cost considerably. We provide theoretical analysis of the proposed method and also demonstrate its performance with numerical examples.
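
A generic sketch of a proximal Langevin update in which the proximity subproblem is solved only approximately; the inner solver here is a plain placeholder passed in by the user, not the primal-dual fixed-point scheme analysed in the paper, and the update form shown is one common pattern rather than the paper's exact algorithm.

import numpy as np

def approx_prox_langevin(grad_f, approx_prox_g, x0, step, n_samples, rng=None):
    """x_{k+1} = prox_{step*g}( x_k - step*grad_f(x_k) + sqrt(2*step)*xi ),
    with the prox of the non-differentiable part g evaluated inexactly."""
    rng = np.random.default_rng() if rng is None else rng
    x, samples = np.asarray(x0, dtype=float), []
    for _ in range(n_samples):
        noise = rng.standard_normal(x.shape)
        y = x - step * grad_f(x) + np.sqrt(2.0 * step) * noise
        x = approx_prox_g(y, step)        # inexact proximity operator
        samples.append(x.copy())
    return np.array(samples)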

We focus on learning unknown dynamics from data using ODE-nets templated on implicit numerical initial value problem solvers. First, we perform Inverse Modified error analysis of the ODE-nets using unrolled implicit schemes for ease of interpretation. It is shown that training an ODE-net using an unrolled implicit scheme returns a close approximation of an Inverse Modified Differential Equation (IMDE). In addition, we establish a theoretical basis for hyper-parameter selection when training such ODE-nets, whereas current strategies usually treat numerical integration of ODE-nets as a black box. We thus formulate an adaptive algorithm which monitors the level of error and adapts the number of (unrolled) implicit solution iterations during the training process, so that the error of the unrolled approximation is less than the current learning loss. This helps accelerate training, while maintaining accuracy. Several numerical experiments are performed to demonstrate the advantages of the proposed algorithm compared to nonadaptive unrollings, and validate the theoretical analysis. We also note that this approach naturally allows for incorporating partially known physical terms in the equations, giving rise to what is termed ``gray box'' identification.
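
An illustrative PyTorch sketch of the adaptive idea for a single unrolled implicit Euler step: fixed-point sweeps are unrolled until the residual drops below the current training loss (a stopping rule in the spirit of, but not identical to, the paper's adaptive algorithm; f denotes the learned vector field).

import torch

def unrolled_implicit_euler(f, y0, h, current_loss, max_unroll=50):
    """Unroll Picard iterations y <- y0 + h*f(y) for implicit Euler, adapting
    the number of sweeps to the current learning loss."""
    y = y0 + h * f(y0)                        # explicit Euler predictor
    for _ in range(max_unroll):
        y_next = y0 + h * f(y)                # one fixed-point sweep
        residual = (y_next - y).norm()
        y = y_next
        if residual < current_loss:           # unrolling error below learning loss
            break
    return y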

Cochran's $Q$ statistic is routinely used for testing heterogeneity in meta-analysis. Its expected value (under an incorrect null distribution) is part of several popular estimators of the between-study variance, $\tau^2$. Those applications generally do not account for the studies' use of estimated variances in the inverse-variance weights that define $Q$ (more explicitly, $Q_{IV}$). Importantly, those weights make approximating the distribution of $Q_{IV}$ rather complicated. As an alternative, we investigate a $Q$ statistic, $Q_F$, whose constant weights use only the studies' arm-level sample sizes. For the log odds ratio, log relative risk, and risk difference as measures of effect, our simulations study approximations to the distributions of $Q_F$ and $Q_{IV}$ as the basis for tests of heterogeneity. We present the results in 132 figures, 153 pages in total.
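
A small sketch of the two weighting schemes; the generic weighted $Q$ is standard, while the effective-sample-size weights shown for $Q_F$ are one plausible choice built from arm-level sample sizes, not necessarily the exact weights used in the paper.

import numpy as np

def cochran_q(effects, weights):
    """Generic weighted Q: Q = sum_i w_i (y_i - ybar_w)^2, ybar_w the weighted mean."""
    effects, weights = np.asarray(effects, float), np.asarray(weights, float)
    ybar = np.sum(weights * effects) / np.sum(weights)
    return np.sum(weights * (effects - ybar) ** 2)

def q_iv(effects, variances):
    """Q_IV: inverse-variance weights from estimated within-study variances."""
    return cochran_q(effects, 1.0 / np.asarray(variances, float))

def q_f(effects, n_treatment, n_control):
    """Q_F: constant weights from arm-level sample sizes (illustrative choice)."""
    n1, n2 = np.asarray(n_treatment, float), np.asarray(n_control, float)
    return cochran_q(effects, n1 * n2 / (n1 + n2))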

There is a growing literature on the study of large-width properties of deep Gaussian neural networks (NNs), i.e. deep NNs with Gaussian-distributed parameters or weights, and Gaussian stochastic processes. Motivated by empirical and theoretical studies showing the potential of replacing Gaussian distributions with Stable distributions, namely distributions with heavy tails, in this paper we investigate large-width properties of deep Stable NNs, i.e. deep NNs with Stable-distributed parameters. For sub-linear activation functions, a recent work has characterized the infinitely wide limit of a suitably rescaled deep Stable NN in terms of a Stable stochastic process, both under the assumption of a ``joint growth'' and under the assumption of a ``sequential growth'' of the width over the NN's layers. Here, assuming a ``sequential growth'' of the width, we extend such a characterization to a general class of activation functions, which includes sub-linear, asymptotically linear and super-linear functions. As a novelty with respect to previous works, our results rely on the use of a generalized central limit theorem for heavy-tailed distributions, which allows for an interesting unified treatment of infinitely wide limits for deep Stable NNs. Our study shows that the scaling of Stable NNs and the stability of their infinitely wide limits may depend on the choice of the activation function, bringing out a critical difference with respect to the Gaussian setting.
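
A small illustrative sample of the setting: a one-hidden-layer NN with symmetric alpha-Stable weights and the $n^{-1/\alpha}$ rescaling commonly used for sub-linear activations (the abstract notes that the appropriate scaling can depend on the activation, so this choice is only indicative).

import numpy as np
from scipy.stats import levy_stable

def wide_stable_layer_output(x, width, alpha=1.5, seed=0):
    """Output of a width-n hidden layer with alpha-Stable weights, scaled by n^(-1/alpha)."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    W1 = levy_stable.rvs(alpha, 0.0, size=(width, d), random_state=rng)
    W2 = levy_stable.rvs(alpha, 0.0, size=width, random_state=rng)
    hidden = np.tanh(W1 @ x)                       # tanh: a sub-linear activation
    return width ** (-1.0 / alpha) * (W2 @ hidden)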

In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments of matrix decomposition, favoring a (block) LU decomposition: the factorization of a matrix into the product of lower and upper triangular matrices. Matrix decomposition has since become a core technology in machine learning, largely due to the development of the back-propagation algorithm for fitting neural networks. The sole aim of this survey is to give a self-contained introduction to concepts and mathematical tools in numerical linear algebra and matrix analysis in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. However, we clearly realize our inability to cover all the useful and interesting results concerning matrix decomposition, given the paucity of scope to present this discussion, e.g., a separate analysis of the Euclidean space, Hermitian space, Hilbert space, and the complex domain. We refer the reader to literature in the field of linear algebra for a more detailed introduction to the related fields.
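
A minimal illustration of the LU factorization mentioned above (Doolittle-style, without pivoting, so it assumes all leading principal minors are nonzero; library routines such as scipy.linalg.lu add partial pivoting for stability):

import numpy as np

def lu_doolittle(A):
    """Minimal LU factorization A = L @ U without pivoting (Doolittle scheme)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]                     # k-th row of U
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]   # k-th column of L
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
L, U = lu_doolittle(A)
assert np.allclose(L @ U, A)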
