
In this paper we develop finite difference schemes for elliptic problems with piecewise continuous coefficients that have (possibly huge) jumps across fixed internal interfaces. In contrast to problems involving a single smooth non-intersecting interface, which have been studied extensively, very few papers address elliptic interface problems with intersecting interfaces of coefficient jumps. It is well known that if the values of the permeability in the four subregions around a point of intersection of two such internal interfaces are all different, the solution has a point singularity that significantly affects the accuracy of the approximation in the vicinity of the intersection point. In the present paper we propose a fourth-order 9-point finite difference scheme on uniform Cartesian meshes for an elliptic problem whose coefficient is piecewise constant in four rectangular subdomains of the overall two-dimensional rectangular domain. Moreover, for the special case when the intersection point of the two lines of coefficient jumps is a grid point, such a compact scheme, involving relatively simple formulas for the computation of the stencil coefficients, can even reach sixth-order accuracy. Furthermore, we show that the resulting linear system for this special case has an M-matrix, and we prove the theoretical sixth-order convergence rate using the discrete maximum principle. Our numerical experiments demonstrate the fourth-order (for the general case) and sixth-order (for the special case) accuracy of the proposed schemes. For the general case, we also derive a compact third-order finite difference scheme that likewise yields a linear system with an M-matrix, and, using the discrete maximum principle, we prove its third-order convergence rate for the general elliptic cross-interface problem.
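
For readers who want to experiment with the setting, the sketch below assembles a standard second-order 5-point finite difference discretization of -div(a grad u) = f with a coefficient that is piecewise constant in the four quadrants around the cross point, using harmonic averaging at cell edges. It is only a low-order baseline for the cross-interface configuration, not the fourth- or sixth-order compact schemes proposed in the paper, and the coefficient values are hypothetical.

```python
# Illustrative sketch only (not the paper's fourth/sixth-order compact scheme):
# a second-order 5-point discretization of -div(a grad u) = f on the unit square,
# with a piecewise-constant coefficient taking four different values in the four
# quadrants around the cross point (0.5, 0.5), harmonic averaging at cell edges,
# and homogeneous Dirichlet boundary conditions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 32                        # interior grid points per direction
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")

a1, a2, a3, a4 = 1.0, 10.0, 100.0, 1000.0     # hypothetical permeability values
def coeff(xx, yy):
    return np.where(xx < 0.5, np.where(yy < 0.5, a1, a2),
                              np.where(yy < 0.5, a3, a4))

def harm(p, q):               # harmonic mean gives the coefficient on a cell edge
    return 2.0 * p * q / (p + q)

A = sp.lil_matrix((n * n, n * n))
f = np.ones(n * n)
idx = lambda i, j: i * n + j
for i in range(n):
    for j in range(n):
        c = coeff(X[i, j], Y[i, j])
        a_w = harm(c, coeff(X[i, j] - h, Y[i, j]))
        a_e = harm(c, coeff(X[i, j] + h, Y[i, j]))
        a_s = harm(c, coeff(X[i, j], Y[i, j] - h))
        a_n = harm(c, coeff(X[i, j], Y[i, j] + h))
        A[idx(i, j), idx(i, j)] = (a_w + a_e + a_s + a_n) / h**2
        if i > 0:     A[idx(i, j), idx(i - 1, j)] = -a_w / h**2
        if i < n - 1: A[idx(i, j), idx(i + 1, j)] = -a_e / h**2
        if j > 0:     A[idx(i, j), idx(i, j - 1)] = -a_s / h**2
        if j < n - 1: A[idx(i, j), idx(i, j + 1)] = -a_n / h**2

u = spla.spsolve(A.tocsr(), f)  # approximate solution on the interior grid
print("max of the discrete solution:", u.max())
```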

Related Content

Our objective is to compute the derivatives of data corrupted by noise. This is a challenging task, since even small amounts of noise can produce significant errors in the computation, mainly because the randomness of the noise introduces high-frequency fluctuations. To overcome this challenge, we suggest an approach that approximates the data by eliminating the high-frequency terms from the Fourier expansion of the given data with respect to the polynomial-exponential basis. This truncation regularizes the problem, while the use of the polynomial-exponential basis ensures accuracy in the computation. We demonstrate the effectiveness of our approach through numerical examples in one and two dimensions.
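
A minimal sketch of the underlying idea in one dimension, substituting an ordinary trigonometric Fourier basis (via the FFT) for the paper's polynomial-exponential basis: expand the noisy samples, discard the high-frequency modes, and differentiate the truncated expansion. The cutoff and noise level below are arbitrary.

```python
# Hedged sketch: differentiate noisy periodic data by truncating its FFT.
# The paper uses a polynomial-exponential basis; a plain Fourier basis is
# substituted here purely for illustration.
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
clean = np.sin(3 * x)
noisy = clean + 0.05 * np.random.randn(n)

coeffs = np.fft.fft(noisy)
k = np.fft.fftfreq(n, d=(x[1] - x[0]) / (2.0 * np.pi))   # integer wavenumbers

cutoff = 10                        # keep only the low-frequency modes
coeffs[np.abs(k) > cutoff] = 0.0

deriv = np.fft.ifft(1j * k * coeffs).real   # derivative of the truncated expansion
error = np.max(np.abs(deriv - 3 * np.cos(3 * x)))
print(f"max error of the regularized derivative: {error:.3e}")
```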

In this paper, we propose a two-level block preconditioned Jacobi-Davidson (BPJD) method for efficiently solving discrete eigenvalue problems resulting from finite element approximations of $2m$th ($m = 1, 2$) order symmetric elliptic eigenvalue problems. Our method is particularly effective for computing the first several eigenpairs, including multiple and clustered eigenvalues with their corresponding eigenfunctions. The method is highly parallelizable, built on a new and efficient preconditioner constructed from an overlapping domain decomposition (DD). It only requires solving a couple of small-scale parallel subproblems and one quite small eigenvalue problem per iteration. Our theoretical analysis reveals that the convergence rate of the method is bounded by $c(H)(1-C\frac{\delta^{2m-1}}{H^{2m-1}})^{2}$, where $H$ is the diameter of the subdomains and $\delta$ is the overlap size among subdomains. The constant $C$ is independent of the mesh size $h$ and the internal gaps among the target eigenvalues, demonstrating that our method is optimal and cluster robust. Meanwhile, the $H$-dependent constant $c(H)$ decreases monotonically to $1$ as $H \to 0$, which means that more subdomains lead to a better convergence rate. Numerical results supporting our theory are given.
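
The two-level BPJD preconditioner itself does not fit in a short snippet, but the hedged sketch below conveys the flavor of domain-decomposition preconditioning for eigenvalue problems: the smallest eigenpairs of a 1D discrete Laplacian are computed with LOBPCG, preconditioned by a block-Jacobi (non-overlapping subdomain) approximate inverse. The paper's method additionally uses overlap, a coarse level, and a Jacobi-Davidson correction equation; the sizes below are arbitrary.

```python
# Illustrative sketch, not the BPJD method of the paper: smallest eigenpairs of a
# 1D Laplacian via LOBPCG with a block-Jacobi (subdomain-wise) preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, nblocks = 400, 8
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Block-Jacobi preconditioner: invert A restricted to each (non-overlapping) block.
bounds = np.linspace(0, n, nblocks + 1, dtype=int)
blocks = [np.linalg.inv(A[s:e, s:e].toarray()) for s, e in zip(bounds[:-1], bounds[1:])]

def apply_prec(r):
    z = np.zeros_like(r)
    for (s, e), Binv in zip(zip(bounds[:-1], bounds[1:]), blocks):
        z[s:e] = Binv @ r[s:e]
    return z

M = spla.LinearOperator((n, n), matvec=apply_prec)

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))                  # 4 wanted eigenpairs
vals, vecs = spla.lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)
print("smallest eigenvalues:", np.sort(vals))
```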

This paper derives a CUR-type factorization for tensors in the Tucker format based on a new variant of the discrete empirical interpolation method known as L-DEIM. This novel sampling technique allows us to construct an efficient algorithm for computing the structure-preserving decomposition, which significantly reduces the computational cost. For large-scale datasets, we combine the L-DEIM procedure with random sampling to further improve efficiency. Moreover, we propose randomized algorithms for computing a hybrid decomposition, which yields interpretable factorizations and a smaller approximation error than the tensor CUR factorization. We provide a comprehensive analysis of the probabilistic errors associated with our proposed algorithms and present numerical results that demonstrate the effectiveness of our methods.
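
For context, here is a hedged sketch of the classical DEIM index selection that L-DEIM extends: given an orthonormal basis, greedily pick one row index per basis vector from the residual of interpolating on the previously chosen indices; such indices drive the row and column sampling in CUR-type factorizations. The matrix sizes and rank below are arbitrary, and this is the standard DEIM, not the L-DEIM variant.

```python
# Hedged sketch of standard DEIM index selection (not the L-DEIM variant).
import numpy as np

def deim_indices(U):
    """Greedy DEIM row selection for an n-by-k basis matrix U."""
    n, k = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, k):
        # Interpolate the j-th column on the indices chosen so far ...
        c = np.linalg.solve(U[np.ix_(idx, range(j))], U[idx, j])
        # ... and pick the row where the interpolation residual is largest.
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Example: select rows of a random low-rank matrix via its left singular vectors.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 60))
U, _, _ = np.linalg.svd(A, full_matrices=False)
print("DEIM-selected row indices:", deim_indices(U[:, :5]))
```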

Interior-point methods offer a highly versatile framework for convex optimization that is effective in theory and practice. A key notion in their theory is that of a self-concordant barrier. We give a suitable generalization of self-concordance to Riemannian manifolds and show that it gives the same structural results and guarantees as in the Euclidean setting, in particular local quadratic convergence of Newton's method. We analyze a path-following method for optimizing compatible objectives over a convex domain for which one has a self-concordant barrier, and obtain the standard complexity guarantees as in the Euclidean setting. We provide general constructions of barriers, and show that on the space of positive-definite matrices and other symmetric spaces, the squared distance to a point is self-concordant. To demonstrate the versatility of our framework, we give algorithms with state-of-the-art complexity guarantees for the general class of scaling and non-commutative optimization problems, which have been of much recent interest, and we provide the first algorithms for efficiently computing high-precision solutions to minimum enclosing ball and geometric median problems in nonpositive curvature.
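
As a flat-space reference point for the path-following machinery generalized here, the hedged sketch below follows the central path of a small linear program using the standard logarithmic barrier and damped Newton steps; the Riemannian constructions of the paper are not reflected in it, and the example problem and parameters are made up.

```python
# Hedged sketch: Euclidean log-barrier path following for  min c^T x  s.t.  A x <= b.
# The paper generalizes this machinery (self-concordance, Newton, path following)
# to Riemannian manifolds; this snippet is only the classical flat-space prototype.
import numpy as np

def barrier_lp(c, A, b, x, t=1.0, mu=4.0, outer=20, inner=50):
    for _ in range(outer):
        for _ in range(inner):                  # damped Newton on t*c^T x + barrier
            s = b - A @ x                       # slacks, must stay positive
            grad = t * c + A.T @ (1.0 / s)
            H = A.T @ np.diag(1.0 / s**2) @ A
            step = np.linalg.solve(H, -grad)
            lam2 = float(-grad @ step)          # Newton decrement squared
            if lam2 < 1e-10:
                break
            x = x + step / (1.0 + np.sqrt(lam2))  # damped step keeps x strictly feasible
        t *= mu                                 # tighten the barrier
    return x

# Tiny example: minimize x + y over the box [0, 1]^2 (optimum at the origin).
A = np.array([[1.0, 0], [-1, 0], [0, 1], [0, -1]])
b = np.array([1.0, 0, 1, 0])
print(barrier_lp(np.array([1.0, 1.0]), A, b, x=np.array([0.5, 0.5])))
```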

We provide an interior point method based on quasi-Newton iterations, which only requires first-order access to a strongly self-concordant barrier function. To achieve this, we extend the techniques of Dunagan-Harvey [STOC '07] to maintain a preconditioner, while using only first-order information. We measure the quality of this preconditioner in terms of its relative eccentricity with respect to the unknown Hessian matrix, and we generalize these techniques to convex functions with a slowly-changing Hessian. We combine this with an interior point method to show that, given first-order access to an appropriate barrier function for a convex set $K$, we can solve well-conditioned linear optimization problems over $K$ to $\varepsilon$ precision in time $\widetilde{O}\left(\left(\mathcal{T}+n^{2}\right)\sqrt{n\nu}\log\left(1/\varepsilon\right)\right)$, where $\nu$ is the self-concordance parameter of the barrier function, and $\mathcal{T}$ is the time required to make a gradient query. As a consequence we show that: $\bullet$ Linear optimization over $n$-dimensional convex sets can be solved in time $\widetilde{O}\left(\left(\mathcal{T}n+n^{3}\right)\log\left(1/\varepsilon\right)\right)$. This parallels the running time achieved by state-of-the-art algorithms for cutting plane methods, when replacing separation oracles with first-order oracles for an appropriate barrier function. $\bullet$ We can solve semidefinite programs involving $m\geq n$ matrices in $\mathbb{R}^{n\times n}$ in time $\widetilde{O}\left(mn^{4}+m^{1.25}n^{3.5}\log\left(1/\varepsilon\right)\right)$, improving over the state-of-the-art algorithms, in the case where $m=\Omega\left(n^{\frac{3.5}{\omega-1.25}}\right)$. Along the way we develop a host of tools allowing us to control the evolution of our potential functions, using techniques from matrix analysis and Schur convexity.
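
The preconditioner maintenance of the paper is specialized, but its starting point, building curvature information from gradients alone, is the classical quasi-Newton idea. The hedged sketch below shows a plain BFGS update recovering an approximation to the Hessian of a quadratic from gradient differences; the test matrix and step size are arbitrary, and this is not the paper's scheme.

```python
# Hedged sketch: the classical BFGS update, the textbook way to build a Hessian
# approximation from first-order (gradient) information only; the paper instead
# maintains a preconditioner with controlled eccentricity inside an interior point method.
import numpy as np

def bfgs_update(B, s, y):
    """Update the Hessian approximation B given step s and gradient change y."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# Example: approximate the Hessian of f(x) = 0.5 * x^T A x from gradient differences.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x
B = np.eye(2)
x = np.array([1.0, 1.0])
for _ in range(30):
    step = np.linalg.solve(B, -grad(x))        # quasi-Newton direction
    s = 0.1 * step                             # fixed damping, purely illustrative
    y = grad(x + s) - grad(x)
    B = bfgs_update(B, s, y)
    x = x + s
print("Hessian approximation built from gradients:\n", B)
```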

In this paper we study the variational method and integral equation methods for a conical diffraction problem for imperfectly conducting gratings modeled by the impedance boundary value problem of the Helmholtz equation in periodic structures. We justify the strong ellipticity of the sesquilinear form corresponding to the variational formulation and prove the uniqueness of solutions at any frequency. Convergence of the finite element method using the transparent boundary condition (Dirichlet-to-Neumann mapping) is verified. The boundary integral equation method is also discussed.
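
A hedged one-dimensional analogue of the variational ingredients involved (far simpler than the quasi-periodic conical diffraction setting of the paper): a P1 finite element discretization of the Helmholtz equation with an impedance-type boundary condition at one end, checked against the exact plane-wave solution. The wavenumber and mesh size are arbitrary.

```python
# Hedged sketch: 1D Helmholtz FEM with a Dirichlet condition at x = 0 and an
# impedance (radiation-type) condition u'(1) = i*k*u(1); exact solution exp(i*k*x).
import numpy as np

k, n = 6.0, 400                       # wavenumber and number of elements
h = 1.0 / n
# P1 stiffness and mass matrices on a uniform mesh of [0, 1]
main_K = np.full(n + 1, 2.0 / h); main_K[[0, -1]] = 1.0 / h
main_M = np.full(n + 1, 4.0 * h / 6); main_M[[0, -1]] = 2.0 * h / 6
K = np.diag(main_K) + np.diag(np.full(n, -1.0 / h), 1) + np.diag(np.full(n, -1.0 / h), -1)
M = np.diag(main_M) + np.diag(np.full(n, h / 6), 1) + np.diag(np.full(n, h / 6), -1)

A = (K - k**2 * M).astype(complex)
A[-1, -1] -= 1j * k                   # impedance condition contributes -i*k*u(1)*v(1)
b = np.zeros(n + 1, dtype=complex)
A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 1.0   # Dirichlet data u(0) = 1

u = np.linalg.solve(A, b)
x = np.linspace(0.0, 1.0, n + 1)
print("max error vs. exact exp(i*k*x):", np.max(np.abs(u - np.exp(1j * k * x))))
```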

We obtain new upper and lower bounds on the number of unit perimeter triangles spanned by points in the plane. We also establish improved bounds in the special case where the point set is a section of the integer grid.
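
For small point sets, the quantity studied here can be explored directly; the hedged brute-force sketch below counts unit-perimeter triples up to a numerical tolerance, using a scaled section of the integer grid as an example. The grid size and scaling are arbitrary.

```python
# Hedged sketch: naive O(n^3) count of unit-perimeter triangles in a point set.
import itertools
import math

def unit_perimeter_triangles(points, tol=1e-9):
    count = 0
    for p, q, r in itertools.combinations(points, 3):
        perimeter = math.dist(p, q) + math.dist(q, r) + math.dist(r, p)
        if abs(perimeter - 1.0) < tol:
            count += 1
    return count

# Example on a scaled section of the integer grid (cf. the special case above):
# e.g. the 3-4-5 right triangle scaled by 1/12 has perimeter exactly 1.
grid = [(i / 12, j / 12) for i in range(5) for j in range(5)]
print(unit_perimeter_triangles(grid, tol=1e-12))
```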

Nonsmooth composite optimization with orthogonality constraints has a broad spectrum of applications in statistical learning and data science. However, this problem is generally challenging to solve due to its non-convex and non-smooth nature. Existing solutions are limited by one or more of the following restrictions: (i) they are full gradient methods that require high computational costs in each iteration; (ii) they are not capable of solving general nonsmooth composite problems; (iii) they are infeasible methods and can only achieve feasibility of the solution at the limit point; (iv) they lack rigorous convergence guarantees; (v) they only obtain weak optimality of critical points. In this paper, we propose \textit{\textbf{OBCD}}, a new Block Coordinate Descent method for solving general nonsmooth composite problems under Orthogonality constraints. \textit{\textbf{OBCD}} is a feasible method with a low computational footprint per iteration. In each iteration, our algorithm updates $k$ rows of the solution matrix ($k\geq2$ is a parameter) so as to preserve the constraints, and then solves a small-sized nonsmooth composite optimization problem under orthogonality constraints either exactly or approximately. We demonstrate that any exact block-$k$ stationary point is always an approximate block-$k$ stationary point, which is equivalent to the critical stationary point. We are particularly interested in the case $k=2$, where the resulting subproblem reduces to a one-dimensional nonconvex problem. We propose a breakpoint searching method and a fifth-order iterative method to solve this problem efficiently and effectively. We also propose two novel greedy strategies for finding a good working set to further accelerate the convergence of \textit{\textbf{OBCD}}. Finally, we conduct extensive experiments on several tasks to demonstrate the superiority of our approach.
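
To illustrate the $k=2$ case in a hedged way: rotating two rows of the solution matrix by a $2\times 2$ rotation preserves $X^\top X = I$, so each subproblem is one-dimensional in the rotation angle. The sketch below minimizes a smooth toy objective by a crude grid search over that angle; the paper's breakpoint-searching and fifth-order subproblem solvers, greedy working-set strategies, and the general nonsmooth setting are not reproduced, and the objective is made up.

```python
# Hedged sketch of a k = 2 block-coordinate step under orthogonality constraints:
# rotating two rows of X by a 2x2 rotation keeps X^T X = I, so each subproblem is
# one-dimensional in the angle. A crude grid search stands in for the paper's
# breakpoint-searching / fifth-order subproblem solvers.
import numpy as np

rng = np.random.default_rng(2)
n, r = 20, 4
B = rng.standard_normal((n, r))
X, _ = np.linalg.qr(rng.standard_normal((n, r)))      # feasible start: X^T X = I

def f(X):                                             # smooth toy objective
    return 0.5 * np.linalg.norm(X - B) ** 2

def rot(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

angles = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
for sweep in range(20):
    i, j = rng.choice(n, size=2, replace=False)       # random working pair of rows
    best_theta, best_val = 0.0, f(X)
    for theta in angles:
        Xc = X.copy()
        Xc[[i, j], :] = rot(theta) @ X[[i, j], :]
        if f(Xc) < best_val:
            best_theta, best_val = theta, f(Xc)
    X[[i, j], :] = rot(best_theta) @ X[[i, j], :]     # accept the best rotation

print("objective:", f(X), " feasibility error:", np.linalg.norm(X.T @ X - np.eye(r)))
```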

Backward Stochastic Differential Equations (BSDEs) have been widely employed in various areas of the social and natural sciences, such as the pricing and hedging of financial derivatives, stochastic optimal control problems, optimal stopping problems and gene expression. Most BSDEs cannot be solved analytically, so numerical methods must be applied to approximate their solutions. A variety of numerical methods have been proposed over the past few decades, with many more currently being developed. For the most part, they exist in a complex and scattered manner, each requiring its own assumptions and conditions. The aim of the present work is thus to systematically survey various numerical methods for BSDEs and, in particular, to compare and categorize them for further developments and improvements. To achieve this goal, we focus primarily on the core features of each method, drawing on an extensive collection of 333 references: the main assumptions, the numerical algorithm itself, key convergence properties, and advantages and disadvantages. In doing so, we provide up-to-date coverage of numerical methods for BSDEs, with insightful summaries of each and a useful comparison and categorization.
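
As one concrete instance of the kind of scheme surveyed, the hedged sketch below implements a basic explicit backward Euler discretization with least-squares Monte Carlo regression for the conditional expectations, applied to a toy BSDE with a linear driver whose value at time zero is known in closed form. It stands in for no particular method from the survey, and the driver, terminal condition, and discretization parameters are made up.

```python
# Hedged sketch (illustrative only): explicit backward Euler time stepping for a BSDE
#   dY = -f(Y) dt + Z dW,  Y_T = g(W_T),  with the linear driver f(y) = -r*y,
# using least-squares Monte Carlo (polynomial regression) for conditional expectations.
# For this driver, Y_0 = exp(-r*T) * E[g(W_T)] is known in closed form.
import numpy as np

rng = np.random.default_rng(3)
T, N, M, r = 1.0, 50, 50_000, 0.1
dt = T / N
g = lambda w: w ** 2                                  # terminal condition

dW = np.sqrt(dt) * rng.standard_normal((M, N))
W = np.cumsum(dW, axis=1)                             # W[:, k] is W at time t_{k+1}

Y = g(W[:, -1])                                       # Y at the terminal time t_N = T
for i in range(N - 1, 0, -1):
    w = W[:, i - 1]                                   # Brownian state at time t_i
    # Z_{t_i} ~ E[Y_{t_{i+1}} dW_i | F_{t_i}] / dt (unused by this driver, shown for completeness)
    Z = np.polyval(np.polyfit(w, Y * dW[:, i] / dt, 4), w)
    ey = np.polyval(np.polyfit(w, Y, 4), w)           # E[Y_{t_{i+1}} | F_{t_i}] via regression
    Y = ey - r * ey * dt                              # explicit step with f(y) = -r*y
Y0 = np.mean(Y) - r * np.mean(Y) * dt                 # final step to t_0: plain average
print("numerical Y_0:", Y0, "   exact:", np.exp(-r * T) * T)
```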

Interval Markov Decision Processes (IMDPs) are finite-state uncertain Markov models in which the transition probabilities belong to intervals. Recently, there has been a surge of research on employing IMDPs as abstractions of stochastic systems for control synthesis. However, due to the absence of synthesis algorithms for IMDPs with continuous action spaces, the action space is assumed discrete a priori, which is a restrictive assumption for many applications. Motivated by this, we introduce continuous-action IMDPs (caIMDPs), where the bounds on the transition probabilities are functions of the action variables, and study value iteration for maximizing expected cumulative rewards. Specifically, we decompose the max-min problem associated with value iteration into $|\mathcal{Q}|$ max problems, where $|\mathcal{Q}|$ is the number of states of the caIMDP. Then, exploiting the simple form of these max problems, we identify cases where value iteration over caIMDPs can be solved efficiently (e.g., with linear or convex programming). We also gain other interesting insights: e.g., in certain cases where the action set $\mathcal{A}$ is a polytope, synthesis over a discrete-action IMDP whose actions are the vertices of $\mathcal{A}$ is sufficient for optimality. We demonstrate our results on a numerical example. Finally, we include a short discussion on employing caIMDPs as abstractions for control synthesis.
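
A hedged sketch of the discrete-action baseline that this work starts from: robust value iteration over an IMDP, where the inner minimization over interval-constrained transition probabilities is solved greedily by sorting successor values and pushing probability mass toward the worst successors first. The action-dependent interval bounds of caIMDPs are not modeled, and the rewards and intervals below are made up.

```python
# Hedged sketch: robust (pessimistic) value iteration for a discrete-action IMDP.
# The inner minimization over interval-constrained transition probabilities is
# solved greedily by sorting successor values; the paper's caIMDPs replace the
# finite action set with a continuous one and interval bounds that depend on the action.
import numpy as np

def worst_case_expectation(values, low, up):
    """min_p sum(p * values)  s.t.  low <= p <= up, sum(p) = 1  (greedy, exact)."""
    p = low.copy()
    budget = 1.0 - p.sum()
    for s in np.argsort(values):                 # fill the cheapest successors first
        add = min(budget, up[s] - low[s])
        p[s] += add
        budget -= add
    return float(p @ values)

def robust_value_iteration(R, low, up, gamma=0.9, iters=200):
    """R[s, a]: reward; low/up[s, a, s']: interval transition bounds."""
    nS, nA = R.shape
    V = np.zeros(nS)
    for _ in range(iters):
        Q = np.array([[R[s, a] + gamma * worst_case_expectation(V, low[s, a], up[s, a])
                       for a in range(nA)] for s in range(nS)])
        V = Q.max(axis=1)
    return V

# Tiny 2-state, 2-action example with hypothetical interval bounds.
R = np.array([[0.0, 1.0], [2.0, 0.0]])
low = np.full((2, 2, 2), 0.3)
up = np.full((2, 2, 2), 0.7)
print(robust_value_iteration(R, low, up))
```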
