
In this paper, we derive explicit second-order necessary and sufficient optimality conditions for a local minimizer of an optimal control problem governed by a quasilinear second-order partial differential equation with a piecewise smooth but not differentiable nonlinearity in the leading term. The key argument rests on the analysis of level sets of the state. Specifically, we show that if a function vanishes on the boundary and its gradient is different from zero on a level set, then this set decomposes into finitely many simple closed curves. Moreover, the level sets depend continuously on the functions defining them. We also prove continuity of integrals over the level sets. In particular, Green's first identity is shown to be applicable on an open set determined by two functions with nonvanishing gradients. In the second part of this paper, the explicit sufficient second-order conditions will be used to derive error estimates for a finite-element discretization of the control problem.
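For reference, a minimal statement of Green's first identity on a bounded domain $\Omega$ with sufficiently regular boundary $\partial\Omega$ and outward normal $n$ (the point of the abstract is that this identity remains applicable when $\Omega$ is the open set bounded by level sets of two functions with nonvanishing gradients):
\[
\int_\Omega \nabla u \cdot \nabla v \,dx \;=\; \int_{\partial\Omega} v\,\frac{\partial u}{\partial n}\,ds \;-\; \int_\Omega v\,\Delta u \,dx .
\]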

Related content

In this paper we consider the finite element approximation of Maxwell's problem and analyse the prescription of essential boundary conditions in a weak sense using Nitsche's method. To avoid indefiniteness of the problem, the original equations are augmented with the gradient of a scalar field that allows one to impose the zero divergence of the magnetic induction, even if the exact solution for this scalar field is zero. Two finite element approximations are considered, namely, one in which the approximation spaces are assumed to satisfy the appropriate inf-sup condition that renders the standard Galerkin method stable, and another augmented and stabilised one that permits the use of finite element interpolations of arbitrary order. Stability and convergence results are provided for the two finite element formulations considered.
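To fix ideas, here is the standard Nitsche formulation for weakly imposing the Dirichlet condition $u = g$ on $\partial\Omega$ for the Poisson model problem $-\Delta u = f$ (a generic illustration with penalty parameter $\gamma$ and mesh size $h$, not the paper's augmented Maxwell formulation): find $u_h \in V_h$ such that, for all $v_h \in V_h$,
\[
\int_\Omega \nabla u_h \cdot \nabla v_h \,dx - \int_{\partial\Omega} \partial_n u_h\, v_h \,ds - \int_{\partial\Omega} \partial_n v_h\, u_h \,ds + \frac{\gamma}{h}\int_{\partial\Omega} u_h\, v_h \,ds
\;=\; \int_\Omega f\, v_h \,dx - \int_{\partial\Omega} \partial_n v_h\, g \,ds + \frac{\gamma}{h}\int_{\partial\Omega} g\, v_h \,ds .
\]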

In this paper, we propose a modification of an acoustic-transport operator splitting Lagrange-projection method for simulating compressible flows with gravity. The original method involves two steps that respectively account for acoustic and transport effects. Our work proposes a simple modification of the transport step, and the resulting modified scheme turns out to be a flux-splitting method. This new numerical method is less computationally expensive, more memory-efficient, and easier to implement than the original one. We prove stability properties for this new scheme by showing that, under classical CFL conditions, the method preserves the positivity of mass and energy and is entropy satisfying. The flexible flux-splitting structure of the method enables straightforward extensions to multi-dimensional (in space) problems and to high-order discretizations, which are presented in this work. We also propose an interpretation of the flux-splitting solver as a relaxation approximation. Both the stability and the accuracy of the new method are tested against one-dimensional and two-dimensional numerical experiments that involve highly compressible flows and low-Mach regimes.
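The mechanism behind positivity preservation under a CFL condition can be seen on a much simpler model. The sketch below (our own toy example, not the authors' Lagrange-projection scheme) applies a first-order upwind update to 1D linear advection: under the CFL restriction the update is a convex combination of cell averages, so nonnegative data stay nonnegative.

```python
# Toy illustration (not the authors' scheme): first-order upwind update for
# 1D linear advection u_t + a u_x = 0, a > 0, on a periodic grid. Under the
# CFL condition a*dt/dx <= 1 the update is a convex combination of cell
# averages, hence positivity preserving.
import numpy as np

def upwind_step(u, a, dt, dx):
    nu = a * dt / dx                               # CFL number
    assert 0.0 <= nu <= 1.0, "CFL condition violated"
    return (1.0 - nu) * u + nu * np.roll(u, 1)     # convex combination

# usage: advect a nonnegative bump for a few steps
dx, a = 0.01, 1.0
x = np.arange(0.0, 1.0, dx)
u = np.exp(-200.0 * (x - 0.3) ** 2)                # nonnegative initial data
dt = 0.9 * dx / a                                  # respects the CFL condition
for _ in range(50):
    u = upwind_step(u, a, dt, dx)
assert u.min() >= 0.0                              # positivity is preserved
```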

In this article, we introduce mixture representations for likelihood ratio ordered distributions. Essentially, the ratio of two probability densities, or mass functions, is monotone if and only if one can be expressed as a mixture of one-sided truncations of the other. To illustrate the practical value of the mixture representations, we address the problem of density estimation for likelihood ratio ordered distributions. In particular, we propose a nonparametric Bayesian solution which takes advantage of the mixture representations. The prior distribution is constructed from Dirichlet process mixtures and has large support on the space of pairs of densities satisfying the monotone ratio constraint. Posterior consistency holds under reasonable conditions on the prior specification and the true unknown densities. To our knowledge, this is the first posterior consistency result in the literature on order-constrained inference. With a simple modification to the prior distribution, we can test the equality of two distributions against the alternative of likelihood ratio ordering. We develop a Markov chain Monte Carlo algorithm for posterior inference and demonstrate the method in a biomedical application.
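To make the representation concrete in one direction (our notation, for the density case with $g/f$ nondecreasing): if
\[
g(x) \;=\; \int f(x)\,\frac{\mathbf{1}\{x \ge t\}}{1 - F(t)}\,\mu(dt),
\]
where $F$ is the distribution function of $f$ and $\mu$ is a mixing distribution over truncation points $t$ with $F(t) < 1$, then each mixture component is a one-sided truncation of $f$ and the ratio $g(x)/f(x) = \int \mathbf{1}\{x \ge t\}\,(1 - F(t))^{-1}\,\mu(dt)$ is nondecreasing in $x$; the converse, and its use for Bayesian estimation, is the subject of the paper.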

Generative diffusion models have achieved spectacular performance in many areas of generative modeling. While the fundamental ideas behind these models come from non-equilibrium physics, in this paper we show that many aspects of these models can be understood using the tools of equilibrium statistical mechanics. Using this reformulation, we show that generative diffusion models undergo second-order phase transitions corresponding to symmetry breaking phenomena. We argue that this leads to a form of instability that lies at the heart of their generative capabilities and that can be described by a set of mean field critical exponents. We conclude by analyzing recent work connecting diffusion models and associative memory networks in view of this thermodynamic formulation.
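For readers unfamiliar with the terminology, a textbook mean-field example of the kind of symmetry-breaking transition invoked here (a generic illustration, not the specific free energy derived in the paper) is the Landau free energy
\[
F(m) \;=\; \frac{a}{2}\,(T - T_c)\,m^2 + \frac{b}{4}\,m^4, \qquad a, b > 0,
\]
whose minimizer is $m = 0$ for $T > T_c$ and $m = \pm\sqrt{a\,(T_c - T)/b}$ for $T < T_c$: the symmetry $m \mapsto -m$ is spontaneously broken at a second-order transition, with the mean-field critical exponent $\beta = 1/2$.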

In this paper we consider partial integro-differential equations (PIDEs) with gradient-independent Lipschitz continuous nonlinearities and prove that deep neural networks with ReLU activation function can approximate solutions of such semilinear PIDEs without the curse of dimensionality, in the sense that the required number of parameters in the deep neural networks increases at most polynomially in both the dimension $ d $ of the corresponding PIDE and the reciprocal of the prescribed accuracy $\epsilon $.
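Spelled out in symbols (the constant and exponent names here are ours), the polynomial-growth claim means that there exist constants $C, p, q > 0$, independent of $d$ and $\epsilon$, such that the number of network parameters $\mathcal{N}(d,\epsilon)$ needed to achieve accuracy $\epsilon$ in dimension $d$ satisfies
\[
\mathcal{N}(d, \epsilon) \;\le\; C\, d^{\,p}\, \epsilon^{-q}.
\]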

When multiple self-adaptive systems share the same environment and have common goals, they may coordinate their adaptations at runtime to avoid conflicts and to satisfy their goals. There are two approaches to coordination. (1) Logically centralized, where a supervisor has complete control over the individual self-adaptive systems. Such an approach is infeasible when the systems have different owners or administrative domains. (2) Logically decentralized, where coordination is achieved through direct interactions. Because the individual systems have control over the information they share, decentralized coordination accommodates multiple administrative domains. However, existing techniques do not account simultaneously for both local concerns, e.g., preferences, and shared concerns, e.g., conflicts, which may lead to goals not being achieved as expected. Our idea to address this shortcoming is to express both types of concerns within the same constraint optimization problem. We propose CoADAPT, a decentralized coordination technique introducing two types of constraints: preference constraints, expressing local concerns, and consistency constraints, expressing shared concerns. At runtime, the problem is solved in a decentralized way using distributed constraint optimization algorithms implemented by each self-adaptive system. As a first step in realizing CoADAPT, we focus in this work on the coordination of adaptation planning strategies, traditionally addressed only with centralized techniques. We show the feasibility of CoADAPT in an exemplar from cloud computing and experimentally analyze its scalability.
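The sketch below is a hypothetical, deliberately centralized toy version of the two constraint types described above, solved by enumeration for clarity; CoADAPT itself solves such problems in a decentralized way with distributed constraint optimization (DCOP) algorithms, and the strategies and costs here are made up.

```python
# Two self-adaptive systems each choose a planning strategy. Preference
# constraints are local costs; the consistency constraint is a shared rule.
from itertools import product

strategies = ["scale_out", "scale_up"]          # candidate planning strategies

# Preference constraints: each system's local cost for each strategy.
preference = {
    "system_A": {"scale_out": 1.0, "scale_up": 3.0},
    "system_B": {"scale_out": 2.0, "scale_up": 1.0},
}

# Consistency constraint: both scaling out at once would exceed shared capacity.
def consistent(choice):
    return not (choice["system_A"] == "scale_out" and choice["system_B"] == "scale_out")

best = None
for a, b in product(strategies, repeat=2):
    choice = {"system_A": a, "system_B": b}
    if not consistent(choice):
        continue
    cost = sum(preference[s][choice[s]] for s in choice)
    if best is None or cost < best[0]:
        best = (cost, choice)

print(best)   # minimal total preference cost among consistent assignments
```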

This paper presents two methods for approximating a proper subset of the entries of a Hessian using only function evaluations. These approximations are obtained using the techniques called \emph{generalized simplex Hessian} and \emph{generalized centered simplex Hessian}. We show how to choose the matrices of directions involved in these two techniques depending on which entries of the Hessian are of interest. We discuss the number of function evaluations required in each case and develop a general formula to approximate all order-$P$ partial derivatives. Since only function evaluations are required by the methods discussed in this paper, they are suitable for use in derivative-free optimization methods.
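As a minimal sketch of the "function evaluations only" idea (this is the classical central-difference stencil, not the paper's generalized simplex Hessian), a single selected Hessian entry can be approximated as follows.

```python
# Approximate one Hessian entry d^2 f / dx_i dx_j from function values only,
# using a classical central-difference stencil: no derivative information is
# needed, only four evaluations of f.
import numpy as np

def hessian_entry(f, x, i, j, h=1e-5):
    x = np.asarray(x, dtype=float)
    ei = np.zeros_like(x); ei[i] = h
    ej = np.zeros_like(x); ej[j] = h
    return (f(x + ei + ej) - f(x + ei - ej)
            - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)

# usage: f(x) = x0^2 * x1 has d^2 f / dx0 dx1 = 2*x0
f = lambda x: x[0] ** 2 * x[1]
print(hessian_entry(f, [1.5, -2.0], 0, 1))   # approximately 3.0
```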

In this paper we consider the numerical approximation of infinite horizon problems via the dynamic programming approach. The value function of the problem solves a Hamilton-Jacobi-Bellman (HJB) equation that is approximated by a fully discrete method. It is known that the numerical problem is difficult to handle due to the so-called curse of dimensionality. To mitigate this issue we reduce the order of the problem by means of a new proper orthogonal decomposition (POD) method based on time derivatives. We carry out the error analysis of the method using recently proved optimal bounds for the fully discrete approximations. Moreover, the use of snapshots based on time derivatives allows us to bound some terms of the error that could not be bounded in a standard POD approach. Some numerical experiments show the good performance of the method in practice.
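A generic POD construction is sketched below under our own assumptions (it is not the paper's specific method or error analysis): the snapshot set is augmented with difference quotients, which stand in for the time-derivative snapshots mentioned above, and the reduced basis is obtained by truncating the SVD.

```python
# Generic POD sketch: solution snapshots plus difference-quotient snapshots,
# reduced basis from the truncated SVD of the augmented snapshot matrix.
import numpy as np

def pod_basis(snapshots, dt, rank):
    # snapshots: array of shape (n_dofs, n_times), columns y(t_0), ..., y(t_m)
    dq = (snapshots[:, 1:] - snapshots[:, :-1]) / dt      # difference quotients
    S = np.hstack([snapshots, dq])                         # augmented snapshot set
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :rank], sigma                              # POD modes, singular values

# usage with synthetic snapshot data
rng = np.random.default_rng(0)
Y = rng.standard_normal((200, 40))        # 200 dofs, 40 time levels
basis, sigma = pod_basis(Y, dt=0.05, rank=10)
Y_reduced = basis.T @ Y                   # reduced coordinates
Y_approx = basis @ Y_reduced              # projection back to full space
```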

In this paper, we propose a novel p-branch-and-bound method for solving two-stage stochastic programming problems whose deterministic equivalents are represented by non-convex mixed-integer quadratically constrained quadratic programming (MIQCQP) models. The precision of the solution generated by the p-branch-and-bound method can be arbitrarily adjusted by altering the value of the precision factor p. The proposed method combines two key techniques. The first one, named p-Lagrangian decomposition, generates a mixed-integer relaxation of a dual problem with a separable structure for a primal non-convex MIQCQP problem. The second one is a version of the classical dual decomposition approach that is applied to solve the Lagrangian dual problem and ensures that integrality and non-anticipativity conditions are met in the optimal solution. The p-branch-and-bound method's efficiency has been tested on randomly generated instances, where it demonstrated superior performance over the commercial solver Gurobi. This paper also presents a comparative analysis of the p-branch-and-bound method's efficiency when two alternative solution methods are used as a subroutine for the dual problems: the proximal bundle method and Frank-Wolfe progressive hedging. The latter algorithm relies on interpolating linearisation steps, similar to those taken in the Frank-Wolfe method, as an inner loop within classic progressive hedging.
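The following toy sketch (our own example, far simpler than the p-Lagrangian setting above) illustrates dual decomposition with a relaxed non-anticipativity constraint: two scenario subproblems $\min_x (x - a_s)^2$ are coupled by the first-stage requirement $x_1 = x_2$, which is moved into the objective with a multiplier and enforced by subgradient ascent on the dual.

```python
# Dual decomposition on a toy two-scenario problem: relax x_1 = x_2 with a
# multiplier, solve each scenario subproblem in closed form, update the
# multiplier with a subgradient step.
a = [1.0, 5.0]          # scenario data
lam = 0.0               # multiplier for the constraint x_1 - x_2 = 0
step = 0.2

for it in range(200):
    x1 = a[0] - lam / 2.0            # argmin of (x - a_1)^2 + lam * x
    x2 = a[1] + lam / 2.0            # argmin of (x - a_2)^2 - lam * x
    lam += step * (x1 - x2)          # subgradient ascent on the dual

print(x1, x2)   # both approach the implementable solution (a_1 + a_2)/2 = 3.0
```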

Feature attribution is a fundamental task in both machine learning and data analysis, which involves determining the contribution of individual features or variables to a model's output. This process helps identify the most important features for predicting an outcome. The history of feature attribution methods can be traced back to Generalized Additive Models (GAMs), which extend linear regression models by incorporating non-linear relationships between dependent and independent variables. In recent years, gradient-based methods and surrogate models have been applied to unravel complex Artificial Intelligence (AI) systems, but these methods have limitations. GAMs tend to achieve lower accuracy, gradient-based methods can be difficult to interpret, and surrogate models often suffer from stability and fidelity issues. Furthermore, most existing methods do not consider users' contexts, which can significantly influence their preferences. To address these limitations and advance the current state-of-the-art, we define a novel feature attribution framework called Context-Aware Feature Attribution Through Argumentation (CA-FATA). Our framework harnesses the power of argumentation by treating each feature as an argument that can either support, attack or neutralize a prediction. Additionally, CA-FATA formulates feature attribution as an argumentation procedure, and each computation has explicit semantics, which makes it inherently interpretable. CA-FATA also easily integrates side information, such as users' contexts, resulting in more accurate predictions.
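A hypothetical sketch of the framing described above is shown below; it is not CA-FATA's actual semantics. Each feature becomes an argument whose signed contribution, here taken from a linear model, determines whether it supports, attacks, or is neutral toward the prediction; the feature names, weights, and threshold are made up.

```python
# Toy argumentation-style attribution: classify each feature's contribution
# as supporting, attacking, or neutral toward the prediction.
weights  = {"age": 0.8, "income": -0.5, "tenure": 0.02}
instance = {"age": 1.2, "income": 2.0, "tenure": 0.5}
threshold = 0.05        # below this magnitude, the argument is treated as neutral

attributions = {}
for feature, w in weights.items():
    contribution = w * instance[feature]
    if contribution > threshold:
        role = "supports"
    elif contribution < -threshold:
        role = "attacks"
    else:
        role = "neutral"
    attributions[feature] = (contribution, role)

for feature, (c, role) in attributions.items():
    print(f"{feature}: contribution {c:+.2f} -> {role} the prediction")
```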
