
The purpose of this paper is to develop a unified a posteriori method for verifying the positivity of solutions of elliptic boundary value problems by assuming neither $H^2$-regularity nor $L^{\infty}$-error estimation, but only $H^1_0$-error estimation. In [J. Comput. Appl. Math., Vol. 370 (2020), 112647], we proposed two approaches to verify the positivity of solutions of several semilinear elliptic boundary value problems. However, some cases require $L^{\infty}$-error estimation and therefore have narrow applicability. In this paper, we extend one of the approaches and combine it with a priori error bounds for Laplacian eigenvalues to obtain a unified method with wide applicability. We describe how to evaluate the constants required to verify the positivity of desired solutions. We apply our method to several problems, including those to which the previous method is not applicable.

Related content

CASES: International Conference on Compilers, Architectures, and Synthesis for Embedded Systems. Explanation: an international conference on compilers, architectures, and synthesis for embedded systems. Publisher: ACM.

Recently, Physics-Informed Neural Networks (PINNs) have gained increasing popularity for solving partial differential equations, given that they escape the curse of dimensionality. We first view PINNs as an underdetermined point-matching collocation method; we then expose the connection between Galerkin Least Squares (GALS) and PINNs to develop an a priori error estimate in the context of elliptic problems. In particular, techniques from least-squares finite elements and Rademacher complexity analysis are used to obtain the above-mentioned error estimate.
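As a concrete illustration of the collocation viewpoint, the following minimal sketch trains a PINN on the 1D Poisson problem $-u''(x) = \pi^2 \sin(\pi x)$ on $(0,1)$ with homogeneous Dirichlet data, whose exact solution is $\sin(\pi x)$. The network size, learning rate, and numbers of collocation points are illustrative assumptions, not choices made in the paper.

```python
# Minimal PINN sketch: point-matching collocation for -u'' = pi^2 sin(pi x)
# on (0, 1) with u(0) = u(1) = 0; exact solution u(x) = sin(pi x).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_in = torch.rand(128, 1, requires_grad=True)   # interior collocation points
x_bc = torch.tensor([[0.0], [1.0]])             # boundary points

for step in range(5000):
    opt.zero_grad()
    u = net(x_in)
    du = torch.autograd.grad(u, x_in, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x_in, torch.ones_like(du), create_graph=True)[0]
    f = torch.pi**2 * torch.sin(torch.pi * x_in)
    loss_pde = ((-d2u - f) ** 2).mean()         # PDE residual at collocation points
    loss_bc = (net(x_bc) ** 2).mean()           # Dirichlet boundary penalty
    (loss_pde + loss_bc).backward()
    opt.step()
```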

We propose and explore a new, general-purpose method for the implicit time integration of elastica. Key to our approach is the use of a mixed variational principle; in turn, its finite element discretization leads to an efficient alternating projections solver with a superset of the desirable properties of many previous fast solution strategies. This framework fits a range of elastic constitutive models and remains stable across a wide span of timestep sizes and material parameters (including problems that are quasi-static and approximately rigid). It is efficient to evaluate and easily applicable to volume, surface, and rod models. We demonstrate the efficacy of our approach on a number of simulated examples across all three codomains.
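For intuition about alternating local/global solvers of this kind, here is a minimal projective-dynamics-style sketch (one of the earlier fast strategies such methods generalize, not the paper's mixed formulation) for a pinned chain of springs: a local step projects each spring onto its rest length, and a global step solves a constant linear system. All parameters are placeholders.

```python
# Alternating projections implicit step for a hanging spring chain (sketch).
import numpy as np

n, h, k = 10, 1e-2, 1e4
mass, rest = 1.0, 0.1
x = np.stack([np.zeros(n), -rest * np.arange(n)], axis=1)  # (n, 2) positions
v = np.zeros_like(x)
gravity = np.array([0.0, -9.8])
edges = [(i, i + 1) for i in range(n - 1)]

G = np.zeros((len(edges), n))            # edge-difference operator
for e, (a, b) in enumerate(edges):
    G[e, a], G[e, b] = -1.0, 1.0

A = (mass / h**2) * np.eye(n) + k * G.T @ G  # constant global matrix
A[0, :] = 0.0
A[0, 0] = 1.0                                # pin particle 0 (crude Dirichlet row)

def step(x, v):
    y = x + h * v + h**2 * gravity           # inertial prediction
    z = y.copy()
    for _ in range(10):                      # alternating projections
        d = G @ z                            # current edge vectors, (n_edges, 2)
        d *= rest / np.linalg.norm(d, axis=1, keepdims=True)  # local: rest length
        rhs = (mass / h**2) * y + k * G.T @ d
        rhs[0] = x[0]                        # keep the pinned particle fixed
        z = np.linalg.solve(A, rhs)          # global: linear solve
    return z, (z - x) / h

for _ in range(100):
    x, v = step(x, v)
```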

Applying the concept of S-convergence, based on averaging in the spirit of the Strong Law of Large Numbers, we study the vanishing viscosity solutions of the Euler system. We show how to efficiently compute a viscosity solution of the Euler system as the S-limit of numerical solutions obtained by the Viscosity Finite Volume method. Theoretical results are illustrated by numerical simulations of the Kelvin--Helmholtz instability problem.
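A minimal sketch of the averaging underlying S-convergence: given a sequence of approximate solutions interpolated onto a common grid, the S-limit candidate is their running Cesàro mean. The data below are random placeholders standing in for finite volume snapshots.

```python
# Cesaro-type averages of a sequence of approximate solutions (sketch).
import numpy as np

snapshots = np.random.default_rng(0).normal(size=(50, 256))  # u_n on a common grid
cesaro = np.cumsum(snapshots, axis=0) / np.arange(1, 51)[:, None]
# cesaro[N-1] is the mean of the first N approximations; its stabilization
# as N grows is the numerical signature of S-convergence.
```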

Approximate Bayesian Computation (ABC) enables statistical inference in complex models that are easy to simulate from but whose likelihoods are difficult to calculate. ABC constructs a kernel-type approximation to the posterior distribution through an accept/reject mechanism which compares summary statistics of real and simulated data. To obviate the need for summary statistics, we directly compare empirical distributions with a Kullback-Leibler (KL) divergence estimator obtained via classification. In particular, we blend flexible machine learning classifiers within ABC to automate fake/real data comparisons. We consider the traditional accept/reject kernel as well as an exponential weighting scheme which does not require the ABC acceptance threshold. Our theoretical results show that the rate at which our ABC posterior distributions concentrate around the true parameter depends on the estimation error of the classifier. We derive limiting posterior shape results and find that, with a properly scaled exponential kernel, asymptotic normality holds. We demonstrate the usefulness of our approach on simulated examples as well as real data in the context of stock volatility estimation.
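The density-ratio trick behind classifier-based KL estimation can be sketched in a few lines: a classifier trained to separate real from simulated data has log-odds approximating $\log p_{\mathrm{real}}(x)/p_{\mathrm{fake}}(x)$, so averaging the log-odds over the real sample estimates the KL divergence, which then enters an exponential weight $\exp(-\mathrm{KL}/\varepsilon)$. The Gaussian model, prior range, and logistic-regression classifier below are illustrative assumptions, not the paper's setup.

```python
# Classifier-based KL estimate inside an exponentially weighted ABC loop (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x_obs = rng.normal(loc=1.0, scale=1.0, size=(500, 1))   # "real" data

def simulate(theta, n=500):
    return rng.normal(loc=theta, scale=1.0, size=(n, 1))

def kl_via_classifier(x_real, x_fake):
    # With balanced samples, the classifier's log-odds estimate
    # log p_real(x)/p_fake(x); averaging over the real sample gives
    # a KL(real || fake) estimate (density-ratio trick).
    X = np.vstack([x_real, x_fake])
    y = np.r_[np.ones(len(x_real)), np.zeros(len(x_fake))]
    clf = LogisticRegression().fit(X, y)
    return max(clf.decision_function(x_real).mean(), 0.0)

eps = 0.05
thetas = rng.uniform(-3, 3, size=200)                    # prior draws
weights = np.array([np.exp(-kl_via_classifier(x_obs, simulate(t)) / eps)
                    for t in thetas])
posterior_mean = (weights * thetas).sum() / weights.sum()
```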

We analyse a class of time discretizations for solving the Gross-Pitaevskii equation at low regularity on an arbitrary Lipschitz domain $\Omega \subset \mathbb{R}^d$, $d \le 3$, with a non-smooth potential. We show that these schemes, together with their optimal local error structure, allow for convergence under lower regularity assumptions on both the solution and the potential than is required by classical methods, such as splitting or exponential integrator methods. Moreover, we show convergence in the case of periodic boundary conditions in any fractional positive Sobolev space $H^{r}$, $r \ge 0$, beyond the more typical $L^2$-error analysis.
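For reference, here is a minimal sketch of a classical first-order Lie splitting step for the periodic 1D Gross-Pitaevskii equation $i\,\partial_t u = -\partial_{xx} u + V u + |u|^2 u$, i.e., the kind of baseline the low-regularity schemes are compared against, not the paper's integrator; the potential and initial datum are placeholders.

```python
# Lie splitting for the periodic 1D Gross-Pitaevskii equation (sketch).
import numpy as np

N, tau = 256, 1e-3
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)             # integer Fourier modes
V = np.cos(x)                                # smooth placeholder potential
u = np.exp(1j * x) / np.sqrt(2 * np.pi)      # placeholder initial datum

def lie_step(u):
    # (1) exact flow of i u_t = -u_xx, diagonal in Fourier space
    u = np.fft.ifft(np.exp(-1j * tau * k**2) * np.fft.fft(u))
    # (2) exact flow of i u_t = (V + |u|^2) u; |u| is pointwise invariant
    return u * np.exp(-1j * tau * (V + np.abs(u) ** 2))

for _ in range(1000):
    u = lie_step(u)
```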

Casting nonlocal problems in variational form and discretizing them with the finite element (FE) method facilitates the use of nonlocal vector calculus to prove well-posedness, convergence, and stability of such schemes. Employing an FE method also facilitates meshing of complicated domain geometries and coupling with FE methods for local problems. However, nonlocal weak problems involve the computation of a double-integral, which is computationally expensive and presents several challenges. In particular, the inner integral of the variational form associated with the stiffness matrix is defined over the intersections of FE mesh elements with a ball of radius $\delta$, where $\delta$ is the range of nonlocal interaction. Identifying and parameterizing these intersections is a nontrivial computational geometry problem. In this work, we propose a quadrature technique where the inner integration is performed using quadrature points distributed over the full ball, without regard for how it intersects elements, and weights are computed based on the generalized moving least squares method. Thus, as opposed to all previously employed methods, our technique does not require element-by-element integration and fully circumvents the computation of element-ball intersections. This paper considers one- and two-dimensional implementations of piecewise linear continuous FE approximations, focusing on the case where the element size $h$ and the nonlocal radius $\delta$ are proportional, as is typical of practical computations. When boundary conditions are treated carefully and the outer integral of the variational form is computed accurately, the proposed method is asymptotically compatible in the limit of $h \sim \delta \to 0$, featuring at least first-order convergence in $L^2$ for all dimensions, using both uniform and nonuniform grids.
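The moment-matching idea behind such moving-least-squares quadrature can be sketched in one dimension, where the ball is simply an interval: one asks for minimal-norm weights at scattered points that integrate all polynomials up to a fixed degree exactly over the full ball, with no element-ball intersection geometry. The point count, degree, and ball below are illustrative, and the unweighted minimal-norm solve stands in for the full GMLS construction.

```python
# Moment-matching quadrature weights over a full 1D ball (sketch).
import numpy as np

rng = np.random.default_rng(0)
c, delta, p = 0.3, 0.1, 3
pts = c + delta * (2 * rng.random(20) - 1)        # quadrature points in B

# Underdetermined system: sum_i w_i * x_i^m = integral of x^m over B, m = 0..p.
P = np.vander(pts, p + 1, increasing=True).T      # (p+1, n_pts)
moments = np.array([((c + delta) ** (m + 1) - (c - delta) ** (m + 1)) / (m + 1)
                    for m in range(p + 1)])
w, *_ = np.linalg.lstsq(P, moments, rcond=None)   # minimal-norm weights

# Sanity check: the rule is exact for x^2 by construction.
assert np.isclose(w @ pts**2, moments[2])
```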

We study the numerical approximation by space-time finite element methods of a multi-physics system coupling hyperbolic elastodynamics with parabolic transport and modelling poro- and thermoelasticity. The equations are rewritten as a first-order system in time. Discretizations by continuous Galerkin methods in space and time with inf-sup stable pairs of finite elements for the spatial approximation of the unknowns are investigated. Optimal order error estimates of energy-type are proven. Superconvergence at the time nodes is addressed briefly. The error analysis can be extended to discontinuous and enriched Galerkin space discretizations. The error estimates are confirmed by numerical experiments.
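As a small illustration of continuous Galerkin time discretization of a first-order system: for a linear autonomous system $u' = Au$, the lowest-order scheme cG(1) (piecewise-linear trial and piecewise-constant test functions in time) reduces to the Crank-Nicolson rule. The sketch below applies it to a 1D finite-difference Laplacian standing in for the spatial discretization; the setting is illustrative and far simpler than the coupled hyperbolic-parabolic system of the paper.

```python
# cG(1) time stepping (= Crank-Nicolson for linear systems) for u' = A u (sketch).
import numpy as np

m, tau, steps = 50, 1e-3, 200
hx = 1.0 / (m + 1)
A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / hx**2       # 1D Dirichlet Laplacian

u = np.sin(np.pi * np.linspace(0, 1, m + 2)[1:-1])  # interior initial values
I = np.eye(m)
lhs = I - 0.5 * tau * A                              # factor once in practice
for _ in range(steps):
    # cG(1): u_{n+1} - u_n = tau * A * (u_n + u_{n+1}) / 2
    u = np.linalg.solve(lhs, (I + 0.5 * tau * A) @ u)
```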

Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning now provides advice for almost every aspect of daily life that is often more consistent than what humans can offer. However, despite this achievement, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academic and industrial areas. This paper provides a review of the most essential topics of HPO. The first section introduces the key hyper-parameters related to model training and structure and discusses their importance and methods for defining their value ranges. Then, the research focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. This study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art searching algorithms, feasibility with major deep learning frameworks, and extensibility for new modules designed by users. The paper concludes with problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
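As a concrete example of the simplest baseline such reviews cover, here is a minimal random search sketch over a mixed hyper-parameter space; the search space, trial budget, and the stubbed objective are placeholders for a real training-and-validation routine.

```python
# Random search over a mixed hyper-parameter space (sketch).
import random

random.seed(0)
space = {
    "lr": lambda: 10 ** random.uniform(-5, -1),        # log-uniform learning rate
    "batch_size": lambda: random.choice([32, 64, 128, 256]),
    "num_layers": lambda: random.randint(2, 8),
}

def train_and_validate(config):
    # Placeholder objective; in practice this trains a network and returns
    # its validation error for the given configuration.
    return (config["lr"] - 1e-3) ** 2 + 0.01 * config["num_layers"]

best_cfg, best_err = None, float("inf")
for trial in range(50):
    cfg = {name: sample() for name, sample in space.items()}
    err = train_and_validate(cfg)
    if err < best_err:
        best_cfg, best_err = cfg, err
print(best_cfg, best_err)
```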

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded-norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst-case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
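To make the bounding idea concrete, here is a minimal sketch of the crudest such relaxation, interval bound propagation through a small ReLU network under an $\ell_\infty$ input perturbation; this is a much looser technique than the paper's Lagrangian relaxation, and the weights and specification below are random placeholders.

```python
# Interval bound propagation through a two-layer ReLU network (sketch).
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

def interval_affine(lo, hi, W, b):
    # Exact interval image of an affine map: split W by sign.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

x0, eps = np.zeros(4), 0.1
lo, hi = x0 - eps, x0 + eps
lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)    # ReLU is monotone
lo, hi = interval_affine(lo, hi, W2, b2)
# Example specification "output 0 always exceeds output 1" is certified
# whenever the worst case over the output box already satisfies it:
certified = lo[0] > hi[1]
```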

Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear program formulation is required. Only with this novel formulation, which accounts for the structure of the request graphs, is the computation of convex combinations of valid mappings enabled. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notions of extraction orders and extraction width and show that our algorithms have exponential runtime in the maximal extraction width of the request graphs. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show that (i) computing extraction orders of minimal width is NP-hard and (ii) computing decomposable LP solutions is in general NP-hard, even when restricting request graphs to planar ones.
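The randomized rounding pattern at the core of such algorithms can be sketched abstractly: decompose the LP solution into a convex combination of valid mappings, then sample whole mappings and keep one that respects the substrate capacities. The decomposition and capacities below are toy placeholders, not the paper's LP machinery, and only node capacities are checked.

```python
# Randomized rounding over a convex combination of valid mappings (sketch).
import random

random.seed(0)
substrate_capacity = {"s1": 2, "s2": 1}
# Decomposition of one request's fractional solution: (probability, mapping),
# where a mapping assigns request nodes to substrate nodes.
decomposition = [
    (0.6, {"a": "s1", "b": "s2"}),
    (0.4, {"a": "s1", "b": "s1"}),
]

def sample_mapping(decomposition):
    r, acc = random.random(), 0.0
    for prob, mapping in decomposition:
        acc += prob
        if r <= acc:
            return mapping
    return decomposition[-1][1]

for attempt in range(100):
    mapping = sample_mapping(decomposition)
    load = {}
    for node in mapping.values():
        load[node] = load.get(node, 0) + 1
    if all(load[n] <= substrate_capacity[n] for n in load):
        break   # feasible embedding found; otherwise resample
```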
