
The central problem in electronic structure theory is the computation of the eigenvalues of the electronic Hamiltonian -- an unbounded, self-adjoint operator acting on a Hilbert space of antisymmetric functions. Coupled cluster (CC) methods, which are based on a non-linear parameterisation of the sought-after eigenfunction and result in non-linear systems of equations, are the method of choice for high accuracy quantum chemical simulations but their numerical analysis is underdeveloped. The existing numerical analysis relies on a local, strong monotonicity property of the CC function that is valid only in a perturbative regime, i.e., when the sought-after ground state CC solution is sufficiently close to zero. In this article, we introduce a new well-posedness analysis for the single reference coupled cluster method based on the invertibility of the CC derivative. Under the minimal assumption that the sought-after eigenfunction is intermediately normalisable and the associated eigenvalue is isolated and non-degenerate, we prove that the continuous (infinite-dimensional) CC equations are always locally well-posed. Under the same minimal assumptions and provided that the discretisation is fine enough, we prove that the discrete Full-CC equations are locally well-posed, and we derive residual-based error estimates with guaranteed positive constants. Preliminary numerical experiments indicate that the constants that appear in our estimates are a significant improvement over those obtained from the local monotonicity approach.
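
For orientation, the single-reference CC setting referred to above is built on the standard exponential ansatz; the following is only a generic sketch of that well-known parameterisation (with reference determinant $\Phi_0$, cluster operator $T$, and excited determinants $\Phi_\mu$), not a restatement of the specific functional-analytic formulation analysed in the article.

\begin{align*}
  \Psi &= e^{T}\Phi_0, \qquad \langle \Phi_0, \Psi \rangle = 1 \quad \text{(intermediate normalisation)},\\
  0 &= \langle \Phi_\mu,\, e^{-T} H e^{T}\, \Phi_0 \rangle \quad \text{for all excited determinants } \Phi_\mu \quad \text{(CC equations)},\\
  E_{\mathrm{CC}} &= \langle \Phi_0,\, e^{-T} H e^{T}\, \Phi_0 \rangle \quad \text{(CC energy)}.
\end{align*}

In this standard formulation, the CC function is the map sending the cluster amplitudes in $T$ to the left-hand sides of the CC equations, and it is the derivative of this map whose invertibility drives the well-posedness analysis.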

Related Content

We consider identification and inference about a counterfactual outcome mean when there is unmeasured confounding, using tools from proximal causal inference (Miao et al. [2018], Tchetgen Tchetgen et al. [2020]). Proximal causal inference requires the existence of solutions to at least one of two integral equations. We motivate the existence of solutions to both integral equations by demonstrating that, assuming a solution to one of them exists, $\sqrt{n}$-estimability of a linear functional of that solution (such as its mean) requires the existence of a solution to the other integral equation. Solutions to the integral equations may not be unique, which complicates estimation and inference. We construct a consistent estimator for the solution set of one of the integral equations and then adapt the theory of extremum estimators to find, from the estimated set, a consistent estimator for a uniquely defined solution. A debiased estimator for the counterfactual mean is shown to be $\sqrt{n}$-consistent, regular, and asymptotically locally semiparametric efficient under additional regularity conditions.
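
As a hedged sketch of the setup (following the general proximal causal inference framework of the cited works, with treatment $A$, outcome $Y$, measured covariates $X$, and treatment- and outcome-inducing proxies $Z$ and $W$; the notation here is illustrative rather than taken from the paper), the two integral equations are typically stated as confounding-bridge equations:

\begin{align*}
  \mathbb{E}\left[ Y \mid Z, A=a, X \right] &= \mathbb{E}\left[ h(W, a, X) \mid Z, A=a, X \right] \quad \text{(outcome bridge $h$)},\\
  \mathbb{E}\left[ q(Z, a, X) \mid W, A=a, X \right] &= \frac{1}{\Pr(A=a \mid W, X)} \quad \text{(treatment bridge $q$)},
\end{align*}

and, given a solution $h$ to the first equation, the counterfactual outcome mean is identified as $\mathbb{E}[Y^{a}] = \mathbb{E}[h(W, a, X)]$.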

Blind source separation (BSS) aims to recover an unobserved signal $S$ from its mixture $X=f(S)$ under the condition that the mixing transformation $f$ is invertible but unknown. As this is a basic problem with many practical applications, a fundamental issue is to understand how solutions to this problem behave when their supporting statistical prior assumptions are violated. In the classical context of linear mixtures, we present a general framework for analysing such violations and quantifying their impact on the blind recovery of $S$ from $X$. Modelling $S$ as a multidimensional stochastic process, we introduce an informative topology on the space of possible causes underlying a mixture $X$, and show that the behaviour of a generic BSS solution in response to general deviations from its defining structural assumptions can be profitably analysed in the form of explicit continuity guarantees with respect to this topology. This allows for a flexible and convenient quantification of general model uncertainty scenarios and amounts to the first comprehensive robustness framework for BSS. Our approach is entirely constructive, and we demonstrate its utility with novel theoretical guarantees for a number of statistical applications.
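
For concreteness, the classical linear-mixture instance of this problem (a standard formulation, recalled here only for orientation) observes

\[ X_t = A\,S_t, \qquad A \in \mathbb{R}^{d \times d} \ \text{invertible but unknown}, \]

and any BSS solution can recover $S$ at best up to the usual permutation and scaling indeterminacies, i.e. up to $\hat{S}_t = P \Lambda S_t$ for a permutation matrix $P$ and an invertible diagonal matrix $\Lambda$; the robustness question addressed above is how such a recovery degrades when the prior assumptions singling out this solution hold only approximately.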

In this paper, we show an $L^p$-resolvent estimate for the finite element approximation of the Stokes operator for $p \in \left( \frac{2N}{N+2}, \frac{2N}{N-2} \right)$, where $N \ge 2$ is the dimension of the domain. We expect that this estimate can be applied to error estimates for finite element approximations of the non-stationary Navier--Stokes equations, since studies in this direction have been successful in the numerical analysis of nonlinear parabolic equations. To derive the resolvent estimate, we introduce the solution of the Stokes resolvent problem with a discrete external force. We then obtain a local energy error estimate by means of a novel localization technique and establish global $L^p$-type error estimates. The restriction on $p$ is caused by the treatment of the lower-order terms appearing in the local energy error estimate. Our result may be a breakthrough in the $L^p$-theory of finite element methods for the non-stationary Navier--Stokes equations.
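
For orientation, a discrete resolvent estimate of this type prototypically takes the following generic shape (the precise resolvent set, norms, and constants are those established in the paper; this display is only meant to fix ideas), with $A_h$ the discrete Stokes operator and $C$ independent of the resolvent parameter $\lambda$ and of the mesh size $h$:

\[ \left\| (\lambda + A_h)^{-1} f_h \right\|_{L^p(\Omega)} \le \frac{C}{|\lambda|} \left\| f_h \right\|_{L^p(\Omega)}, \qquad \lambda \in \Sigma_{\theta} := \{ z \in \mathbb{C} \setminus \{0\} : |\arg z| \le \pi - \theta \}. \]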

Computing accurate splines of degree greater than three is still a challenging task in today's applications. In this type of interpolation, high-order derivatives are needed on the given mesh. As these derivatives are rarely known and are often not easy to approximate accurately, high-degree splines are difficult to obtain using standard approaches. In Beaudoin (1998), Beaudoin and Beauchemin (2003), and Pepin et al. (2019), a new method to compute spline approximations of low or high degree from equidistant interpolation nodes based on the discrete Fourier transform is analyzed. The accuracy of this method depends greatly on the accuracy of the boundary conditions. An algorithm for the computation of the boundary conditions can be found in Beaudoin (1998) and Beaudoin and Beauchemin (2003). However, this algorithm lacks robustness since the approximation of the boundary conditions is strongly dependent on the choice of $\theta$ arbitrary parameters, $\theta$ being the degree of the spline. The goal of this paper is therefore to propose two new robust algorithms, independent of arbitrary parameters, for the computation of the boundary conditions in order to obtain accurate splines of any degree. Numerical results are presented to show the efficiency of these new approaches.
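
The sensitivity to boundary conditions mentioned above is easy to reproduce with standard tools. The following minimal Python sketch (using SciPy's cubic splines, not the DFT-based method discussed above; the test function is arbitrary) compares a spline supplied with exact derivative boundary conditions against the generic "natural" choice, and the boundary-condition-aware spline is markedly more accurate.

```python
# Minimal illustration (not the DFT-based method of the paper): spline accuracy
# depends strongly on the boundary conditions supplied at the end nodes.
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 1.0, 11)                    # equidistant interpolation nodes
f = lambda t: np.exp(t) * np.sin(5.0 * t)        # arbitrary smooth test function
df = lambda t: np.exp(t) * (np.sin(5.0 * t) + 5.0 * np.cos(5.0 * t))

# Exact first-derivative boundary conditions versus the generic "natural" choice.
s_exact = CubicSpline(x, f(x), bc_type=((1, df(0.0)), (1, df(1.0))))
s_natural = CubicSpline(x, f(x), bc_type="natural")

t = np.linspace(0.0, 1.0, 1001)
print("max error, exact derivative BCs:", np.max(np.abs(s_exact(t) - f(t))))
print("max error, natural BCs:        ", np.max(np.abs(s_natural(t) - f(t))))
```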

New emerging technologies powered by Artificial Intelligence (AI) have the potential to disruptively transform our societies for the better. In particular, data-driven learning approaches (i.e., Machine Learning (ML)) have been a true revolution in the advancement of multiple technologies across various application domains. At the same time, however, there is growing concern about certain intrinsic characteristics of these methodologies that carry potential risks to both safety and fundamental rights. Although there are mechanisms in the adoption process to minimize these risks (e.g., safety regulations), they do not exclude the possibility of harm occurring, and if harm does occur, victims should be able to seek compensation. Liability regimes will therefore play a key role in ensuring basic protection for victims using or interacting with these systems. However, the same characteristics that make AI systems inherently risky, such as lack of causality, opacity, unpredictability, or their self- and continuous-learning capabilities, may lead to considerable difficulties when it comes to proving causation. This paper presents three case studies, together with the methodology used to develop them, that illustrate these difficulties. Specifically, we address the cases of cleaning robots, delivery drones, and robots in education. The outcome of the proposed analysis suggests the need to revise liability regimes to alleviate the burden of proof on victims in cases involving AI technologies.

We provide a unified operational framework for the study of causality, non-locality and contextuality, in a fully device-independent and theory-independent setting. We define causaltopes, our chosen portmanteau of "causal polytopes", for arbitrary spaces of input histories and arbitrary choices of input contexts. We show that causaltopes are obtained by slicing simpler polytopes of conditional probability distributions with a set of causality equations, which we fully characterise. We provide efficient linear programs to compute the maximal component of an empirical model supported by any given sub-causaltope, as well as the associated causal fraction. We introduce a notion of causal separability relative to arbitrary causal constraints. We provide efficient linear programs to compute the maximal causally separable component of an empirical model, and hence its causally separable fraction, as the component jointly supported by certain sub-causaltopes. We study causal fractions and causal separability for several novel examples, including a selection of quantum switches with entangled or contextual control. In the process, we demonstrate the existence of "causal contextuality", a phenomenon where causal inseparability is clearly correlated to, or even directly implied by, non-locality and contextuality.
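
As a purely generic sketch of how such a "maximal component" computation can be posed as a linear program (assuming the sub-causaltope is presented as a polytope $\{p \ge 0 : Ap = b\}$; the constraint data $A$, $b$ and the vectorisation of empirical models are placeholders here, not the paper's constructions), one maximises the weight $\lambda$ of a component $q = \lambda p$ of the empirical model $e$ lying in the cone over the polytope:

```python
# Generic LP sketch: maximal component of an empirical model e supported by a
# polytope P = {p >= 0 : A p = b}. Here A, b are placeholder constraint data;
# in the paper they would encode the causality equations of a sub-causaltope.
import numpy as np
from scipy.optimize import linprog

def maximal_component(e, A, b):
    """Maximise lam such that q = lam * p for some p in P and 0 <= q <= e.
    Decision variables are (q, lam); returns (lam, q)."""
    d = e.size
    c = np.concatenate([np.zeros(d), [-1.0]])          # minimise -lam
    A_eq = np.hstack([A, -b.reshape(-1, 1)])           # A q - lam * b = 0
    b_eq = np.zeros(A.shape[0])
    bounds = [(0.0, ek) for ek in e] + [(0.0, 1.0)]    # 0 <= q <= e, 0 <= lam <= 1
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[-1], res.x[:d]                        # (fraction, supported component)
```

The returned fraction then plays the role of the causal fraction associated with the chosen sub-causaltope.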

When solving compressible multi-material flow problems, an unresolved challenge is the computation of advective fluxes across material interfaces that separate drastically different thermodynamic states and relations. A popular idea in this regard is to locally construct bimaterial Riemann problems and to apply their exact solutions in flux computation. For general equations of state, however, finding the exact solution of a Riemann problem is expensive, as it requires nested loops. Multiplied by the large number of Riemann problems constructed during a simulation, the computational cost often becomes prohibitive. The work presented in this paper aims to accelerate the solution of bimaterial Riemann problems without introducing approximations or offline precomputation tasks. The basic idea is to exploit some special properties of the Riemann problem equations and to recycle previous solutions as much as possible. Following this idea, four acceleration methods are developed: (1) a change of integration variable through rarefaction fans, (2) storing and reusing integration trajectory data, (3) step size adaptation, and (4) constructing an R-tree on the fly to generate initial guesses. The performance of these acceleration methods is assessed using four example problems in underwater explosion, laser-induced cavitation, and hypervelocity impact. These problems exhibit strong shock waves, large interface deformation, contact of multiple (>2) interfaces, and interaction between gases and condensed matter. In these challenging cases, the solution of bimaterial Riemann problems is accelerated by 37 to 83 times. As a result, the total cost of advective flux computation, which includes the exact Riemann problem solution at material interfaces and the numerical flux calculation over the entire computational domain, is accelerated by 18 to 79 times.
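
For context (a schematic of the classical exact solution procedure, not of the paper's specific algorithms), the exact solution of a bimaterial Riemann problem with left and right states $(\rho_K, u_K, p_K)$, $K \in \{L, R\}$, reduces to a root-finding problem for the interface pressure $p^{*}$:

\begin{align*}
  f_{L}(p^{*}) + f_{R}(p^{*}) + (u_R - u_L) &= 0, \\
  u^{*} &= \tfrac{1}{2}\left(u_L + u_R\right) + \tfrac{1}{2}\left( f_{R}(p^{*}) - f_{L}(p^{*}) \right),
\end{align*}

where each $f_K$ connects the star state on side $K$ to its initial state by a shock relation or, through a rarefaction fan, by integration along an isentrope. For a general equation of state the rarefaction branch must be integrated numerically, so the outer iteration on $p^{*}$ wraps an inner numerical integration; this is the nested loop whose cost the four acceleration methods above target.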

This paper addresses different aspects of "coupled" model descriptions in computational electromagnetics, including domain decomposition, multiscale problems, multiple or hybrid discrete field formulations, and multi-physics problems. Theoretical issues of accuracy, stability, and numerical efficiency of the resulting formulations are addressed, along with advantages and disadvantages of the various approaches. Examples of multi-method, multi-domain, multi-formulation, and multi-physics coupled formulations stem from numerical testing of electromagnetic compatibility in complex scenarios, from numerical dosimetry of biological organisms in electromagnetic exposure situations, and from simulations of large systems in electromagnetic power transmission.

We propose a new randomized method for solving systems of nonlinear equations, which can find sparse solutions or solutions under certain simple constraints. The scheme only uses gradients of component functions and Bregman projections onto the solution space of a Newton equation. In the special case of Euclidean projections, the method is known as the nonlinear Kaczmarz method. Furthermore, if the component functions are nonnegative, we are in the setting of optimization under the interpolation assumption, and the method reduces to SGD with the recently proposed stochastic Polyak step size. For general Bregman projections, our method is a stochastic mirror descent with a novel adaptive step size. We prove that in the convex setting each iteration of our method results in a smaller Bregman distance to exact solutions than the standard Polyak step. Our generalization to Bregman projections comes at the price that a convex one-dimensional optimization problem needs to be solved in each iteration. This can typically be done with globalized Newton iterations. Convergence is proved in two classical settings of nonlinearity: for convex nonnegative functions and locally for functions which fulfill the tangential cone condition. Finally, we show examples in which the proposed method outperforms similar methods with the same memory requirements.
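
A minimal sketch of the Euclidean special case mentioned above, the nonlinear Kaczmarz iteration with its Polyak-type step, is given below (generic code, not the paper's implementation; the Bregman variant replaces this Euclidean projection by a Bregman projection with an adaptive step size).

```python
# Minimal sketch of the Euclidean special case: nonlinear Kaczmarz, i.e. a
# projection of the current iterate onto the solution space of the linearised
# (Newton) equation of one randomly sampled component f_i.
import numpy as np

def nonlinear_kaczmarz(f, grad_f, x0, n_components, n_iters=2000, seed=0, eps=1e-12):
    """f(i, x) -> residual of component i at x; grad_f(i, x) -> its gradient."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        i = rng.integers(n_components)
        r, g = f(i, x), grad_f(i, x)
        ng2 = float(np.dot(g, g))
        if ng2 > eps:          # Polyak-type step: project onto {z : f_i(x) + <g, z - x> = 0}
            x -= (r / ng2) * g
    return x

# Toy system: x1^2 + x2^2 = 1 and x1 - x2 = 0; one solution is (1, 1)/sqrt(2).
fs = [lambda x: x[0]**2 + x[1]**2 - 1.0, lambda x: x[0] - x[1]]
gs = [lambda x: np.array([2.0 * x[0], 2.0 * x[1]]), lambda x: np.array([1.0, -1.0])]
x_hat = nonlinear_kaczmarz(lambda i, x: fs[i](x), lambda i, x: gs[i](x),
                           x0=[1.0, 0.2], n_components=2)
print(x_hat)   # expected to be close to [0.7071, 0.7071]
```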

Recently, recovering an unknown signal from quadratic measurements has gained popularity because it includes many interesting applications as special cases, such as phase retrieval, fusion frame phase retrieval, and positive operator-valued measures. In this paper, by employing the least squares approach to reconstruct the signal, we establish a non-asymptotic statistical property showing that the gap between the estimator and the true signal vanishes in the noiseless case and, in the noisy case, is bounded by an error rate of $O(\sqrt{p\log(1+2n)/n})$, where $n$ and $p$ are the number of measurements and the dimension of the signal, respectively. We develop a gradient regularized Newton method (GRNM) to solve the least squares problem and prove that it converges to a unique local minimum at a superlinear rate under certain mild conditions. In addition to these deterministic results, GRNM can reconstruct the true signal exactly in the noiseless case and achieve the above error rate with high probability in the noisy case. Numerical experiments demonstrate that GRNM performs well in terms of recovery accuracy, computational speed, and recovery capability.
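
Schematically (with illustrative notation not taken from the paper, and with phase retrieval corresponding to rank-one measurement matrices $A_i = a_i a_i^{\top}$), the quadratic-measurement model and the least squares estimator discussed above read

\begin{align*}
  y_i &= x^{\top} A_i\, x + \varepsilon_i, \qquad i = 1, \dots, n, \qquad x \in \mathbb{R}^{p},\\
  \hat{x} &\in \arg\min_{z \in \mathbb{R}^{p}} \ \frac{1}{n} \sum_{i=1}^{n} \left( z^{\top} A_i\, z - y_i \right)^{2},
\end{align*}

and, since $x$ and $-x$ produce the same measurements, the "gap" above is naturally measured up to sign, vanishing when $\varepsilon_i \equiv 0$ and bounded by $O(\sqrt{p\log(1+2n)/n})$ otherwise.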
