
We propose a globally convergent computational technique, which we call the "reduced dimensional method", for the nonlinear inverse problem of reconstructing the zero-order coefficient in a parabolic equation from partial boundary data. We first use a polynomial-exponential basis to approximate the inverse problem by a system of 1D nonlinear equations, and then employ a Picard iteration based on the quasi-reversibility method and a Carleman weight function. We rigorously prove that the sequence generated by this iteration converges to the true solution of the 1D system without requiring a good initial guess of the true solution. The key tool in the proof is a Carleman estimate. We also present some numerical examples.
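
As a hedged illustration of the generic Picard (fixed-point) structure underlying such schemes, the sketch below iterates on a toy 1D nonlinear boundary-value problem with finite differences; the paper's quasi-reversibility regularization and Carleman weight are deliberately omitted, and all parameter choices are illustrative.

```python
import numpy as np

# Toy Picard iteration for u'' = sin(u) + f on (0, 1), u(0) = u(1) = 0.
# Each step freezes the nonlinearity at the previous iterate and solves a
# linear problem; successive iterates contract because the solution operator
# of u'' is strongly smoothing.
n = 99                                  # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.cos(np.pi * x)                   # arbitrary source term for the demo

# Second-difference matrix with homogeneous Dirichlet boundary conditions.
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

u = np.zeros(n)                         # deliberately crude initial guess
for k in range(50):
    u_new = np.linalg.solve(A, np.sin(u) + f)   # one linear solve per step
    step = np.max(np.abs(u_new - u))
    u = u_new
    if step < 1e-12:
        print(f"converged after {k + 1} iterations")
        break
```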

Related Content

We show that the maximum likelihood estimator for mixtures of elliptically symmetric distributions consistently estimates its population version, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. When $P$ is a mixture of sufficiently well-separated but nonparametric distributions, we show that the components of the population version of the estimator correspond to the well-separated components of $P$. This provides theoretical justification for using such estimators for cluster analysis when $P$ has well-separated subpopulations, even if these subpopulations differ from what the mixture model assumes.
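
A minimal illustration of this message, assuming a Gaussian mixture MLE (a special case of an elliptically symmetric mixture) fitted to synthetic non-Gaussian data: the fitted component means should still track the well-separated subpopulations even though the model is misspecified.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two well-separated subpopulations drawn from uniform squares, so the
# Gaussian mixture model is misspecified, yet the fitted component means
# should line up with the subpopulation centres.
rng = np.random.default_rng(0)
blob1 = rng.uniform(-1.0, 1.0, size=(500, 2))                    # centre (0, 0)
blob2 = rng.uniform(-1.0, 1.0, size=(500, 2)) + np.array([10.0, 10.0])
X = np.vstack([blob1, blob2])

gm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(X)
print(gm.means_)    # approx. (0, 0) and (10, 10) despite the misspecification
```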

This paper studies two hybrid discontinuous Galerkin (HDG) discretizations for the velocity-density formulation of the compressible Stokes equations with respect to several desired structural properties, namely provable convergence, the preservation of non-negativity and mass constraints for the density, and gradient-robustness. The latter property dramatically enhances the accuracy in well-balanced situations, such as the hydrostatic balance, where the pressure gradient balances the gravity force. One of the studied schemes employs an H(div)-conforming velocity ansatz space, which ensures all of the mentioned properties, while a fully discontinuous method is shown to satisfy all properties except gradient-robustness. Higher-order schemes for both variants are also presented and compared in three numerical benchmark problems. The final example demonstrates the importance of gradient-robustness also for non-hydrostatic well-balanced states of the compressible Navier-Stokes equations.

In this paper, we develop an arbitrary-order locking-free enriched Galerkin (EG) method for the linear elasticity problem using the stress-displacement formulation in both two and three dimensions. The method is based on the mixed discontinuous Galerkin method in [30], but with a different stress approximation space that enriches the arbitrary-order continuous Galerkin space with certain piecewise symmetric-matrix-valued polynomials. We prove that the method is well-posed and provide a parameter-robust error estimate, which confirms the locking-free property of the EG method. We present numerical examples in two and three dimensions to demonstrate the effectiveness of the proposed method.

Solutions to many important partial differential equations satisfy bounds constraints, but approximations computed by finite element or finite difference methods typically fail to respect the same conditions. Chang and Nakshatrala enforce such bounds in finite element methods through the solution of variational inequalities rather than linear variational problems. Here, we provide a theoretical justification for this method, including higher-order discretizations. We prove an abstract best approximation result for the linear variational inequality and estimates showing that bounds-constrained polynomials provide comparable approximation power to standard spaces. For any unconstrained approximation to a function, there exists a constrained approximation which is comparable in the $W^{1,p}$ norm. In practice, one cannot efficiently represent and manipulate the entire family of bounds-constrained polynomials, but applying bounds constraints to the coefficients of a polynomial in the Bernstein basis guarantees those constraints on the polynomial. Although our theoretical results do not guarantee high accuracy for this subset of bounds-constrained polynomials, numerical results indicate optimal orders of accuracy for smooth solutions and sharp resolution of features in convection-diffusion problems, all subject to bounds constraints.
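
The coefficient-clamping device mentioned above is easy to demonstrate. A minimal sketch, with an arbitrary cubic chosen for illustration: since the Bernstein basis functions on $[0,1]$ are non-negative and sum to one, clamping the coefficients guarantees the same bounds pointwise on the polynomial.

```python
import numpy as np
from math import comb

# Clamping the Bernstein coefficients of a polynomial to [lo, hi] yields a
# polynomial satisfying those bounds everywhere on [0, 1], because each point
# value is a convex combination of the coefficients.
def bernstein_eval(coeffs, x):
    n = len(coeffs) - 1
    basis = np.array([comb(n, k) * x**k * (1 - x)**(n - k)
                      for k in range(n + 1)])
    return coeffs @ basis

coeffs = np.array([-0.3, 1.4, 0.2, 0.9])    # degree-3 example, violates [0, 1]
clamped = np.clip(coeffs, 0.0, 1.0)         # enforce bounds on coefficients

x = np.linspace(0.0, 1.0, 1001)
p = bernstein_eval(clamped, x)
assert p.min() >= 0.0 and p.max() <= 1.0    # bounds now hold pointwise
```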

Learning tasks play an increasingly prominent role in quantum information and computation. They range from fundamental problems such as state discrimination and metrology, through the framework of quantum probably approximately correct (PAC) learning, to the recently proposed shadow variants of state tomography. However, the many directions of quantum learning theory have so far evolved separately. We propose a general mathematical formalism for describing quantum learning by training on classical-quantum data and then testing how well the learned hypothesis generalizes to new data. In this framework, we prove bounds on the expected generalization error of a quantum learner in terms of classical and quantum information-theoretic quantities measuring how strongly the learner's hypothesis depends on the specific data seen during training. To achieve this, we use tools from quantum optimal transport and quantum concentration inequalities to establish non-commutative versions of decoupling lemmas that underlie recent information-theoretic generalization bounds for classical machine learning. Our framework encompasses and gives intuitively accessible generalization bounds for a variety of quantum learning scenarios such as quantum state discrimination, PAC learning quantum states, quantum parameter estimation, and quantumly PAC learning classical functions. Our work thereby lays a foundation for a unifying quantum information-theoretic perspective on quantum learning.
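
As a hedged worked example of one scenario the framework covers, binary state discrimination, the snippet below evaluates the classical Helstrom bound for two example qubit states; this is a textbook fact, not the paper's generalization bound.

```python
import numpy as np

# The optimal success probability for discriminating two equiprobable states
# rho and sigma is the Helstrom bound:
#   p_opt = 1/2 + (1/4) * || rho - sigma ||_1.
def trace_norm(M):
    return np.abs(np.linalg.eigvalsh(M)).sum()  # Hermitian M: sum |eigenvalues|

rho = np.array([[1.0, 0.0], [0.0, 0.0]])              # |0><0|
sigma = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])      # |+><+|
p_opt = 0.5 + 0.25 * trace_norm(rho - sigma)
print(p_opt)   # ~0.8536, i.e. 1/2 + sqrt(2)/4
```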

We consider the general problem of Bayesian binary regression, and we introduce a new class of distributions, the Perturbed Unified Skew-Normal (henceforth pSUN), which generalizes the Unified Skew-Normal (SUN) class. We show that the new class is conjugate to any binary regression model, provided that the link function may be expressed as a scale mixture of Gaussian densities. We discuss in detail the popular logit case, and we show that, when a logistic regression model is combined with a Gaussian prior, posterior summaries such as cumulants and normalizing constants can be easily obtained through an importance sampling approach, opening the way to straightforward variable selection procedures. For more general priors, the proposed methodology is based on a simple Gibbs sampler algorithm. We also show that, in the $p > n$ case, the proposed methodology offers better performance, in terms of both mixing and accuracy, than existing methods. We illustrate the performance through several simulation studies and two data analyses.
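
A minimal sketch of the importance-sampling idea for the logit case, with the Gaussian prior as proposal and the logistic likelihood as weight; the pSUN machinery itself is not reproduced here, and the synthetic data and prior scale are illustrative.

```python
import numpy as np

# Importance sampling for logistic regression with a Gaussian prior:
# draw from the prior, weight by the likelihood, and estimate posterior
# moments and the normalizing constant (marginal likelihood).
rng = np.random.default_rng(1)
n, p = 100, 2
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

S = 50_000
betas = rng.normal(scale=2.0, size=(S, p))       # draws from the N(0, 4I) prior
eta = betas @ X.T                                # (S, n) linear predictors
loglik = (y * eta - np.logaddexp(0.0, eta)).sum(axis=1)
w = np.exp(loglik - loglik.max())                # numerically stable weights

post_mean = (w[:, None] * betas).sum(axis=0) / w.sum()
log_evidence = np.log(w.mean()) + loglik.max()   # marginal likelihood estimate
print(post_mean, log_evidence)
```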

Generalized linear models (GLMs) are routinely used for modeling relationships between a response variable and a set of covariates. The simple form of a GLM comes with easy interpretability, but also leads to concerns about model misspecification impacting inferential conclusions. A popular semi-parametric solution adopted in the frequentist literature is quasi-likelihood, which improves robustness by only requiring correct specification of the first two moments. We develop a robust approach to Bayesian inference in GLMs through quasi-posterior distributions. We show that quasi-posteriors provide a coherent generalized Bayes inference method, while also approximating so-called coarsened posteriors. In so doing, we obtain new insights into the choice of coarsening parameter. Asymptotically, the quasi-posterior converges in total variation to a normal distribution and has important connections with the loss-likelihood bootstrap posterior. We demonstrate that it is also well-calibrated in terms of frequentist coverage. Moreover, the loss-scale parameter has a clear interpretation as a dispersion, and this leads to a consolidated method of moments estimator.
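
A hedged sketch of the quasi-posterior idea, assuming Wedderburn's quasi-likelihood for a log-linear mean model with variance function $V(\mu) = \mu$ and a random-walk Metropolis sampler; the choice and calibration of the loss-scale parameter follow the paper, not this toy.

```python
import numpy as np

# Quasi-posterior ~ exp(w * quasi-log-likelihood) * prior, sampled with
# random-walk Metropolis for a quasi-Poisson mean model mu = exp(b0 + b1*x).
rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = rng.poisson(np.exp(0.5 + 0.8 * x)).astype(float)

def quasi_loglik(beta):                 # V(mu) = mu  =>  Q = sum(y log mu - mu)
    mu = np.exp(beta[0] + beta[1] * x)
    return np.sum(y * np.log(mu) - mu)

w = 1.0                                 # loss scale (dispersion weight)
beta = np.zeros(2)
cur = w * quasi_loglik(beta) - 0.5 * beta @ beta / 100.0   # N(0, 100I) prior
samples = []
for _ in range(20_000):
    prop = beta + 0.05 * rng.normal(size=2)
    new = w * quasi_loglik(prop) - 0.5 * prop @ prop / 100.0
    if np.log(rng.uniform()) < new - cur:      # Metropolis accept/reject
        beta, cur = prop, new
    samples.append(beta.copy())
print(np.mean(samples[5_000:], axis=0))        # roughly (0.5, 0.8)
```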

Causal representation learning algorithms discover lower-dimensional representations of data that admit a decipherable interpretation of cause and effect. Since achieving such interpretable representations is challenging, many causal learning algorithms incorporate prior information, such as (linear) structural causal models, interventional data, or weak supervision. Unfortunately, in exploratory causal representation learning, such prior information may not be available or warranted. Alternatively, scientific datasets often have multiple modalities or physics-based constraints, and the use of such scientific, multimodal data has been shown to improve disentanglement in fully unsupervised settings. Consequently, we introduce a causal representation learning algorithm (causalPIMA) that can use multimodal data and known physics to discover important features with causal relationships. Our algorithm utilizes a new differentiable parametrization to learn a directed acyclic graph (DAG) together with a latent space of a variational autoencoder in an end-to-end differentiable framework via a single, tractable evidence lower bound loss function. We place a Gaussian mixture prior on the latent space and identify each of the mixture components with an outcome of the DAG nodes; this novel identification enables feature discovery with causal relationships. Tested on a synthetic dataset and a scientific dataset, our results demonstrate the capability of learning an interpretable causal structure while simultaneously discovering key features in a fully unsupervised setting.
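
Since the abstract does not spell out the new parametrization, the sketch below instead illustrates a standard differentiable DAG characterization, the NOTEARS acyclicity function of Zheng et al. (2018), which can likewise be driven to zero inside an end-to-end differentiable loss such as an ELBO.

```python
import numpy as np
from scipy.linalg import expm

# NOTEARS-style smooth acyclicity penalty: h(W) = tr(exp(W * W)) - d is zero
# iff the weighted adjacency matrix W encodes an acyclic graph (elementwise
# square keeps entries non-negative, so cycles contribute positive trace).
def acyclicity(W):
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

W_dag = np.array([[0.0, 1.5, 0.0],      # edges 0 -> 1 -> 2: acyclic
                  [0.0, 0.0, 2.0],
                  [0.0, 0.0, 0.0]])
W_cyc = W_dag.copy()
W_cyc[2, 0] = 0.7                       # adding 2 -> 0 creates a cycle
print(acyclicity(W_dag))                # ~0.0
print(acyclicity(W_cyc))                # > 0
```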

Neyman (1923/1990) introduced the randomization model, which contains the notion of potential outcomes used to define causal effects, together with a framework for large-sample inference based on the design of the experiment. However, the existing theory for this framework is far from complete, especially when the number of treatment levels diverges and the treatment group sizes vary. We provide a unified discussion of statistical inference under the randomization model with general treatment group sizes. We formulate the estimator in terms of a linear permutational statistic and use results based on Stein's method to derive various Berry--Esseen bounds on the linear and quadratic functions of the estimator. These new Berry--Esseen bounds serve as the basis for design-based causal inference with possibly diverging treatment levels and a diverging number of causal parameters of interest. We also fill an important gap by proposing novel variance estimators for experiments with possibly many treatment levels without replications. Equipped with the newly developed results, design-based causal inference in general settings becomes more convenient and enjoys stronger theoretical guarantees.
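
For orientation, a minimal sketch of the textbook design-based quantities in a completely randomized experiment with several arms, point estimates of contrasts and the conservative Neyman variance estimate; the paper's Berry--Esseen bounds and replication-free variance estimators go well beyond this setup.

```python
import numpy as np

# Completely randomized experiment with K arms: arm means, a pairwise
# contrast, and the conservative Neyman variance estimate s_k^2 / n_k.
rng = np.random.default_rng(3)
K, n_per = 4, 25
y = [rng.normal(loc=k, size=n_per) for k in range(K)]    # outcomes by arm

means = np.array([yk.mean() for yk in y])
vars_ = np.array([yk.var(ddof=1) / len(yk) for yk in y]) # Neyman components

tau_hat = means[1] - means[0]          # estimated contrast, arm 1 vs arm 0
se_hat = np.sqrt(vars_[1] + vars_[0])  # conservative standard error
print(tau_hat, se_hat)
```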

We consider the problem of obtaining effective representations for the solutions of linear, vector-valued stochastic differential equations (SDEs) driven by non-Gaussian pure-jump L\'evy processes, and we show how such representations lead to efficient simulation methods. The processes considered constitute a broad class of models that find application across the physical and biological sciences, mathematics, finance and engineering. Motivated by important relevant problems in statistical inference, we derive new, generalised shot-noise simulation methods whenever a normal variance-mean (NVM) mixture representation exists for the driving L\'evy process, including the generalised hyperbolic, normal-Gamma, and normal tempered stable cases. Simple, explicit conditions are identified for the convergence of the residual of a truncated shot-noise representation to a Brownian motion in the case of the pure L\'evy process, and to a Brownian-driven SDE in the case of the L\'evy-driven SDE. These results provide Gaussian approximations to the small jumps of the process under the NVM representation. The resulting representations are of particular importance in state inference and parameter estimation for L\'evy-driven SDE models, since the resulting conditionally Gaussian structures can be readily incorporated into latent variable inference methods such as Markov chain Monte Carlo (MCMC), Expectation-Maximisation (EM), and sequential Monte Carlo.
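
A hedged sketch of the NVM shot-noise construction, assuming Bondesson's series representation for a gamma subordinator and a fixed truncation level; the residual Gaussian correction analysed in the abstract is omitted, and all parameters are illustrative.

```python
import numpy as np

# Truncated shot-noise construction of an NVM (here: variance-gamma) path.
# Subordinator jumps follow Bondesson's series for a gamma process with
# shape alpha and rate beta on [0, T]; each jump z_i at time u_i contributes
# mu*z_i + sigma*sqrt(z_i)*eps_i to the NVM process.
rng = np.random.default_rng(4)
alpha, beta, mu, sigma, T, N = 2.0, 1.0, 0.1, 0.5, 1.0, 5_000

gammas = np.cumsum(rng.exponential(size=N))     # unit-rate Poisson epochs
z = rng.exponential(size=N) * np.exp(-gammas / (alpha * T)) / beta  # jump sizes
u = rng.uniform(0.0, T, size=N)                 # uniform jump times
eps = rng.normal(size=N)                        # Gaussian marks for NVM mixing

t_grid = np.linspace(0.0, T, 500)
# NVM path value at t: sum the contributions of all jumps occurring by t.
X = np.array([np.sum((u <= t) * (mu * z + sigma * np.sqrt(z) * eps))
              for t in t_grid])
print(X[-1])    # one realization of the variance-gamma value at time T
```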
