
Nonlinear space-fractional problems often admit multiple stationary solutions, which can be much more complicated than those of the corresponding integer-order problems. In this paper, we systematically compute the solution landscapes of nonlinear constant- and variable-order space-fractional problems. A fast approximation algorithm is developed to handle the variable-order spectral fractional Laplacian by approximating the variable-indexing Fourier modes, and it is then combined with saddle dynamics to construct the solution landscape of a variable-order space-fractional phase field model. Numerical experiments are performed to substantiate the accuracy and efficiency of the fast approximation algorithm and to elucidate essential features of the stationary solutions of the space-fractional phase field model. Furthermore, we demonstrate that the solution landscapes of spectral fractional Laplacian problems can be reconfigured by varying the diffusion coefficients in the corresponding integer-order problems.
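As a minimal illustration of the spectral building block behind such a fast algorithm (not the paper's variable-order scheme itself), the sketch below applies a constant-order spectral fractional Laplacian on a one-dimensional periodic grid via the FFT; the grid size, the order $s$, and the test function are illustrative assumptions.

```python
import numpy as np

def spectral_fractional_laplacian(u, s, L=2.0 * np.pi):
    """Apply the constant-order spectral fractional Laplacian (-Delta)^s to
    samples of a periodic function on [0, L): the Fourier symbol is |k|^(2s)."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)     # wavenumbers k = 2*pi*m/L
    return np.real(np.fft.ifft(np.abs(k) ** (2 * s) * np.fft.fft(u)))

# Sanity check on an eigenfunction: (-Delta)^s sin(3x) = 3^(2s) sin(3x).
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
s = 0.7
err = spectral_fractional_laplacian(np.sin(3 * x), s) - 3 ** (2 * s) * np.sin(3 * x)
print(np.max(np.abs(err)))                           # ~1e-12
```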

Related Content

We propose new Markov Chain Monte Carlo algorithms to sample probability distributions on submanifolds, which generalize previous methods by allowing the use of set-valued maps in the proposal step of the MCMC algorithms. The motivation for this generalization is that the numerical solvers used to project proposed moves to the submanifold of interest may find several solutions. We show that the new algorithms indeed sample the target probability measure correctly, thanks to a carefully enforced reversibility property. We demonstrate the benefits of the new MCMC algorithms on illustrative numerical examples.
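For context, a minimal sketch of the baseline that such algorithms generalize, reversible projection MCMC on a submanifold, is given below for the unit circle; the constraint, target density, and step size are illustrative assumptions, and the single-root Newton projection stands in for the set-valued maps considered in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def q(z):                                   # constraint defining the circle
    return z @ z - 1.0

def project(z, n, tol=1e-12, max_iter=50):
    """Newton solve for a with q(z + a*n) = 0. For a general constraint this
    solver may fail or return one of several roots -- the set-valued case
    the generalized algorithms are designed to handle."""
    a = 0.0
    for _ in range(max_iter):
        r = q(z + a * n)
        if abs(r) < tol:
            return a, True
        a -= r / (2.0 * (z + a * n) @ n)    # dq/da = 2 (z + a*n) . n
    return a, False

def step(x, sigma=0.5, log_f=lambda z: -2.0 * z[0] ** 2):
    nx = x / np.linalg.norm(x)              # unit normal to the circle at x
    xi = sigma * rng.standard_normal(2)
    v = xi - (xi @ nx) * nx                 # tangential proposal move
    a, ok = project(x + v, nx)
    if not ok:
        return x                            # projection failed: reject
    y = x + v + a * nx
    # Reversibility check: the reverse projection from y must recover x.
    ny = y / np.linalg.norm(y)
    v_rev = (x - y) - ((x - y) @ ny) * ny   # tangential part of reverse move
    a_rev, ok = project(y + v_rev, ny)
    if not ok or np.linalg.norm(y + v_rev + a_rev * ny - x) > 1e-8:
        return x
    # Metropolis ratio with Gaussian densities of the tangential moves.
    log_alpha = log_f(y) - log_f(x) + (v @ v - v_rev @ v_rev) / (2.0 * sigma**2)
    return y if np.log(rng.random()) < log_alpha else x

x = np.array([1.0, 0.0])
samples = np.empty((20000, 2))
for i in range(20000):
    x = step(x)
    samples[i] = x
print(np.mean(samples[:, 0] ** 2))          # sample average of x0^2 under target
```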

Chernoff bounds are a powerful application of the Markov inequality to produce strong bounds on the tails of probability distributions. They are often used to bound the tail probabilities of sums of Poisson trials, or in regression to produce conservative confidence intervals for the parameters of such trials. The bounds provide expressions for the tail probabilities that can be inverted, for a given probability/confidence, to yield tail intervals. The inversions involve solving transcendental equations, and it is often convenient to substitute approximations that can be solved exactly, e.g., via the quadratic formula. In this paper we introduce approximations for the Chernoff bounds whose inversion can be solved exactly with a quadratic equation, yet which are closer approximations than those adopted previously.
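As a concrete example of such an inversion (using a standard quadratic-invertible Chernoff bound, not the paper's new approximations), the sketch below inverts the upper-tail bound $P(X \ge (1+\delta)\mu) \le e^{-\mu\delta^2/(2+\delta)}$ for a target tail probability via the quadratic formula.

```python
import numpy as np

def upper_tail_delta(mu, p):
    """Invert the Chernoff-style upper-tail bound
        P(X >= (1+delta)*mu) <= exp(-mu*delta^2 / (2+delta)),  delta > 0,
    for a target tail probability p. Setting the bound equal to p and
    writing L = ln(1/p) gives mu*delta^2 - L*delta - 2L = 0, a quadratic
    solved exactly by taking the positive root."""
    L = np.log(1.0 / p)
    return (L + np.sqrt(L * L + 8.0 * mu * L)) / (2.0 * mu)

# Example: one-sided 99% upper confidence limit for a count of Poisson
# trials with mean mu = 100.
mu, p = 100.0, 0.01
delta = upper_tail_delta(mu, p)
print((1.0 + delta) * mu)   # conservative upper limit on the count, ~133
```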

In this paper, we derive improved a priori error estimates for families of hybridizable interior penalty discontinuous Galerkin (H-IP) methods using a variable penalty for second-order elliptic problems. The strategy is to use a penalization function of the form $\mathcal{O}(1/h^{1+\delta})$, where $h$ denotes the mesh size and $\delta$ is a user-chosen parameter. We then quantify its direct impact on the convergence analysis, namely on the (strong) consistency, discrete coercivity, and boundedness (with $h^{\delta}$-dependency), and we derive updated error estimates in both the discrete energy- and $L^{2}$-norms. The novelty of the error analysis lies specifically in the use of conforming interpolants of the exact solution. All theoretical results are supported by numerical evidence.

The paradigm of differentiable programming has significantly enhanced the scope of machine learning via the judicious use of gradient-based optimization. However, standard differentiable programming methods (such as autodiff) typically require that the machine learning models be differentiable, limiting their applicability. Our goal in this paper is to use a new, principled approach to extend gradient-based optimization to functions well modeled by splines, which encompass a large family of piecewise polynomial models. We derive the form of the (weak) Jacobian of such functions and show that it exhibits a block-sparse structure that can be computed implicitly and efficiently. Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis.
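As a minimal one-dimensional illustration of this sparsity (the paper handles general spline families; the helper names here are hypothetical), the sketch below assembles the sparse Jacobian of a piecewise-linear spline layer, in which each output sample depends on at most two coefficients.

```python
import numpy as np
from scipy.sparse import csr_matrix

def linear_spline_jacobian(x, knots):
    """Jacobian dy/dc of y(x_j) = sum_i c_i * phi_i(x_j) for the piecewise-
    linear hat basis phi_i on `knots`. Each row has at most two nonzeros,
    a 1-D instance of the block-sparse structure discussed above."""
    idx = np.clip(np.searchsorted(knots, x) - 1, 0, knots.size - 2)
    t = (x - knots[idx]) / (knots[idx + 1] - knots[idx])   # local coordinate
    rows = np.repeat(np.arange(x.size), 2)
    cols = np.stack([idx, idx + 1], axis=1).ravel()
    vals = np.stack([1.0 - t, t], axis=1).ravel()
    return csr_matrix((vals, (rows, cols)), shape=(x.size, knots.size))

knots = np.linspace(0.0, 1.0, 6)
x = np.random.default_rng(0).uniform(0.0, 1.0, 100)
J = linear_spline_jacobian(x, knots)
c = knots ** 2                       # spline coefficients (the "weights")
y = J @ c                            # forward pass: evaluation is linear in c
grad_c = J.T @ np.ones(x.size)       # backward pass for the loss sum(y)
print(J.shape, J.nnz)                # (100, 6) with only 200 stored entries
```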

In this paper, we report an important finding about nonconforming immersed finite element (IFE) methods that use the integral values on edges as degrees of freedom for solving elliptic interface problems. We show that those IFE methods without penalties are not guaranteed to converge optimally if the tangential derivative of the exact solution and the jump of the coefficient are not zero on the interface. A nontrivial counterexample is provided to support our theoretical analysis. To recover the optimal convergence rates, we develop a new nonconforming IFE method with additional terms locally on interface edges. The new method is parameter-free, which removes the limitation of the conventional partially penalized IFE method. We show that the IFE basis functions are unisolvent on arbitrary triangles, a case not previously considered in the literature. Furthermore, departing from multipoint Taylor expansions, we derive the optimal approximation capabilities of both the Crouzeix-Raviart and the rotated-$Q_1$ IFE spaces via a unified approach that handles the case of variable coefficients easily. Finally, optimal error estimates in both the $H^1$- and $L^2$-norms are proved and confirmed with numerical experiments.

We consider a model of energy minimization arising in the study of the mechanical behavior caused by cell contraction within a fibrous biological medium. The macroscopic model is based on the theory of non-rank-one convex nonlinear elasticity for phase transitions. We study appropriate numerical approximations based on the discontinuous Galerkin treatment of higher gradients, used successfully in numerical simulations of experiments. We show that the discrete minimizers converge in the limit to minimizers of the continuous problem. This is achieved by employing the theory of $\Gamma$-convergence of the approximate energy functionals to the continuous model as the discretization parameter tends to zero. The analysis is involved because the numerical approximations are defined in spaces with lower regularity than the space in which the minimizers of the continuous variational problem are sought. This fact leads to the development of a new approach to $\Gamma$-convergence, appropriate for discontinuous finite element discretizations, which can be applied to quite general energy minimization problems. Furthermore, the adoption of exponential terms penalising the interpenetration of matter requires a new framework based on Orlicz spaces for discontinuous Galerkin methods, which is also developed in this paper.

Traditional finite element approaches are well known to introduce spurious oscillations when applied to advection-dominated problems. We explore alleviating this issue from the perspective of a generalized finite element formulation, which enables stabilization through an enrichment process. The present work uses solution-tailored enrichments for the numerical solution of the one-dimensional, unsteady Burgers equation. In particular, generalizable exponential and hyperbolic-tangent enrichments effectively capture steep local boundary-layer and shock features. Results show natural alleviation of the oscillations and smooth numerical solutions on coarse grids. Additionally, significantly improved error levels are observed compared to Lagrangian finite element methods.
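The sketch below illustrates the effect of such an enrichment under stated assumptions: an assumed viscosity, a coarse mesh, a known layer location, and a least-squares fit standing in for the full unsteady Galerkin solve.

```python
import numpy as np

nu = 0.01                                    # viscosity: layer width ~ O(nu)
target = lambda z: -np.tanh(z / (2.0 * nu))  # steady viscous-shock profile

nodes = np.linspace(-1.0, 1.0, 9)            # deliberately coarse mesh
x = np.linspace(-1.0, 1.0, 2001)             # fine evaluation points

# Lagrangian hat basis: column i interpolates the i-th nodal unit vector.
hats = np.stack([np.interp(x, nodes, e) for e in np.eye(nodes.size)], axis=1)

# Partition-of-unity enrichment: multiply each hat by a tanh shock profile
# centred at the (assumed known) layer location x = 0.
enriched = hats * np.tanh(x / (2.0 * nu))[:, None]

u = target(x)
for name, B in [("hats only", hats), ("hats + tanh", np.hstack([hats, enriched]))]:
    coef = np.linalg.lstsq(B, u, rcond=None)[0]
    print(f"{name:12s} max error = {np.max(np.abs(B @ coef - u)):.2e}")
```

Because the hats form a partition of unity, the enriched space contains the tanh profile exactly, so the enriched fit resolves the layer to machine precision where the coarse Lagrangian basis alone cannot.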

In this paper, we study a general low-rank matrix recovery problem with linear measurements corrupted by noise. The objective is to understand under which conditions on the restricted isometry property (RIP) of the problem local search methods can find the ground truth with a small error. By analyzing the landscape of the non-convex problem, we first propose a global guarantee on the maximum distance between an arbitrary local minimizer and the ground truth under the assumption that the RIP constant is smaller than $1/2$. We show that this distance shrinks to zero as the noise intensity decreases. Our new guarantee is sharp in terms of the RIP constant and is much stronger than the existing results. We then present a local guarantee for problems with an arbitrary RIP constant, which states that any local minimizer is either considerably close to the ground truth or far away from it. Next, we prove the strict saddle property, which guarantees the global convergence of the perturbed gradient descent method in polynomial time. The developed results demonstrate how the noise intensity and the RIP constant of the problem affect the landscape of the problem.
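A minimal sketch of perturbed gradient descent on a noiseless instance of this problem is given below; the problem sizes, step size, and perturbation schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, m = 10, 2, 200                         # dimension, rank, #measurements
U_star = rng.standard_normal((n, r)) / np.sqrt(n)
M_star = U_star @ U_star.T                   # ground-truth low-rank matrix
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2.0         # symmetric sensing matrices
b = np.einsum('kij,ij->k', A, M_star)        # noiseless measurements

def grad(U):
    """Gradient of f(U) = (1/2m) * sum_k (<A_k, U U^T> - b_k)^2."""
    res = np.einsum('kij,ij->k', A, U @ U.T) - b
    return 2.0 * np.einsum('k,kij,jr->ir', res, A, U) / m

U = np.zeros((n, r))                         # start exactly at the saddle U = 0
eta = 0.05
for it in range(20000):
    g = grad(U)
    # Perturbed gradient descent: when the gradient nearly vanishes during
    # the exploration phase, inject noise to escape strict saddle points.
    if it < 15000 and np.linalg.norm(g) < 1e-5:
        U = U + 0.01 * rng.standard_normal((n, r))
    U = U - eta * g
print(np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star))  # ~0
```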

Many representative graph neural networks, e.g., GPR-GNN and ChebyNet, approximate graph convolutions with graph spectral filters. However, existing work either applies predefined filter weights or learns them without necessary constraints, which may lead to oversimplified or ill-posed filters. To overcome these issues, we propose $\textit{BernNet}$, a novel graph neural network with theoretical support that provides a simple but effective scheme for designing and learning arbitrary graph spectral filters. In particular, for any filter over the normalized Laplacian spectrum of a graph, our BernNet estimates it by an order-$K$ Bernstein polynomial approximation and designs its spectral property by setting the coefficients of the Bernstein basis. Moreover, we can learn the coefficients (and the corresponding filter weights) based on observed graphs and their associated signals and thus achieve the BernNet specialized for the data. Our experiments demonstrate that BernNet can learn arbitrary spectral filters, including complicated band-rejection and comb filters, and it achieves superior performance in real-world graph modeling tasks.
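A minimal sketch of the Bernstein-basis filtering step (with fixed, hand-set coefficients rather than learned ones) is given below; the graph and the target frequency response are illustrative assumptions.

```python
import numpy as np
from math import comb

def bernstein_filter(L, x, theta):
    """Apply a Bernstein-basis spectral filter in propagation form,
        y = sum_k theta_k * 2^{-K} * C(K, k) * (2I - L)^{K-k} L^k x,
    where L is the normalized graph Laplacian (spectrum in [0, 2]) and
    theta_k >= 0 samples the desired response h at lambda = 2k/K."""
    K = len(theta) - 1
    y = np.zeros_like(x)
    for k in range(K + 1):
        v = x.copy()
        for _ in range(k):
            v = L @ v                        # apply L^k
        for _ in range(K - k):
            v = 2.0 * v - L @ v              # apply (2I - L)^{K-k}
        y += theta[k] * comb(K, k) / 2 ** K * v
    return y

# Tiny check on a 5-node path graph with the linear response h(lam) = lam/2:
# Bernstein operators reproduce affine functions exactly, so a filtered
# eigenvector must come back scaled by exactly lam/2.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
d = A.sum(axis=1)
L = np.eye(5) - A / np.sqrt(np.outer(d, d)) # normalized Laplacian
K = 4
theta = np.arange(K + 1) / K                # theta_k = h(2k/K) = k/K
lam, V = np.linalg.eigh(L)
phi = V[:, -1]
print(np.max(np.abs(bernstein_filter(L, phi, theta) - (lam[-1] / 2) * phi)))
```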

In this paper, we study from a theoretical perspective how powerful graph neural networks (GNNs) can be for learning approximation algorithms for combinatorial problems. To this end, we first establish a new class of GNNs that can solve a strictly wider variety of problems than existing GNNs. Then, we bridge the gap between GNN theory and the theory of distributed local algorithms to show that the most powerful GNN can learn approximation algorithms for the minimum dominating set problem and the minimum vertex cover problem with certain approximation ratios, and that no GNN can achieve better ratios. This paper is the first to elucidate the approximation ratios of GNNs for combinatorial problems. Furthermore, we prove that adding coloring or weak-coloring to each node feature improves these approximation ratios, indicating that preprocessing and feature engineering theoretically strengthen model capabilities.
