
Verification solutions for uncertainty quantification (UQ) are presented for time-dependent transport problems in which the scattering ratio $c$ is uncertain. The method of polynomial chaos expansions is employed for quick and accurate calculation of the quantities of interest, and uncollided solutions are used to treat part of the uncertainty calculation analytically. We find that approximately six moments in the polynomial expansion are required to represent the solutions to these problems accurately. Additionally, the results show that if the uncertainty interval spans $c=1$, so that it is uncertain whether the system is multiplying, the confidence interval grows in time. Finally, since the QoI is a strictly increasing function of $c$, its percentile values are known exactly and can be used to verify the accuracy of the expansion. These results can be used to test UQ methods for time-dependent transport problems.
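
As a minimal sketch of the approach, the snippet below builds a Legendre polynomial chaos expansion for a hypothetical strictly increasing QoI (a toy stand-in, not the transport solution itself), with the scattering ratio $c$ uniform on an interval, and uses the monotonicity to read off exact percentiles; the interval, time value, and QoI are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

# Toy strictly increasing QoI of the uncertain scattering ratio c
# (a hypothetical stand-in for the transport solution at fixed x and t).
t = 5.0
qoi = lambda c: np.exp((c - 1.0) * t)

# c ~ Uniform(0.8, 1.2); map to xi ~ Uniform(-1, 1) for a Legendre PCE.
a, b = 0.8, 1.2
to_c = lambda xi: 0.5 * (a + b) + 0.5 * (b - a) * xi

# Project onto Legendre polynomials P_0..P_N by Gauss-Legendre quadrature:
# c_k = (2k+1)/2 * \int_{-1}^{1} qoi(c(xi)) P_k(xi) dxi.
N = 6
xi_q, w_q = legendre.leggauss(32)
coeffs = np.array([
    (2 * k + 1) / 2.0
    * np.sum(w_q * qoi(to_c(xi_q)) * legendre.Legendre.basis(k)(xi_q))
    for k in range(N + 1)
])
pce = legendre.Legendre(coeffs)

# Mean and variance follow from orthogonality: E[P_k(xi)^2] = 1/(2k+1)
# under the uniform density on [-1, 1].
mean = coeffs[0]
var = np.sum(coeffs[1:] ** 2 / (2.0 * np.arange(1, N + 1) + 1.0))

# Monotonicity: the 95th percentile of the QoI is the QoI evaluated at
# the 95th percentile of c, giving an exact check on the expansion.
p95_exact = qoi(to_c(2 * 0.95 - 1))
```

With this toy QoI the degree-6 truncation already matches the exact mean, variance, and percentile values to well below plotting accuracy, mirroring the roughly-six-moments observation above.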

Related content

Vecchia approximation has been widely used to accurately scale Gaussian-process (GP) inference to large datasets, by expressing the joint density as a product of conditional densities with small conditioning sets. We study fixed-domain asymptotic properties of Vecchia-based GP inference for a large class of covariance functions (including Mat\'ern covariances) with boundary conditioning. In this setting, we establish that consistency and asymptotic normality of maximum exact-likelihood estimators imply those of maximum Vecchia-likelihood estimators, and that exact GP prediction can be approximated accurately by Vecchia GP prediction, given that the size of conditioning sets grows polylogarithmically with the data size. Hence, Vecchia-based inference with quasilinear complexity is asymptotically equivalent to exact GP inference with cubic complexity. This also provides a general new result on the screening effect. Our findings are illustrated by numerical experiments, which also show that Vecchia approximation can be more accurate than alternative approaches such as covariance tapering and reduced-rank approximations.
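
As an illustration of the construction (not the paper's method or code), the sketch below evaluates a Vecchia log-likelihood with small conditioning sets and compares it to the exact GP log-likelihood. The exponential (Matérn-1/2) covariance in 1D is Markov, so conditioning on even a single nearest previous point in sorted order already reproduces the exact likelihood, a toy instance of the screening effect; the kernel, ordering, and sizes are illustrative choices.

```python
import numpy as np

def expcov(x, y, ell=0.3):
    # Exponential covariance (Matern with smoothness 1/2).
    return np.exp(-np.abs(x[:, None] - y[None, :]) / ell)

def exact_loglik(x, yobs):
    # Exact GP log-likelihood, O(n^3) via Cholesky.
    L = np.linalg.cholesky(expcov(x, x))
    alpha = np.linalg.solve(L, yobs)
    return (-0.5 * alpha @ alpha - np.sum(np.log(np.diag(L)))
            - 0.5 * len(x) * np.log(2.0 * np.pi))

def vecchia_loglik(x, yobs, m):
    # Vecchia: product of conditionals, each conditioning on (at most)
    # the m previous points in a coordinate ordering (sorted, in 1D).
    order = np.argsort(x)
    xs, ys = x[order], yobs[order]
    ll = 0.0
    for i in range(len(xs)):
        c = slice(max(0, i - m), i)
        kii = expcov(xs[i:i + 1], xs[i:i + 1])[0, 0]
        if i == 0:
            mu, var = 0.0, kii
        else:
            kic = expcov(xs[i:i + 1], xs[c]).ravel()
            w = np.linalg.solve(expcov(xs[c], xs[c]), kic)
            mu, var = w @ ys[c], kii - w @ kic
        ll += -0.5 * np.log(2.0 * np.pi * var) - 0.5 * (ys[i] - mu) ** 2 / var
    return ll

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 100)
y = np.linalg.cholesky(expcov(x, x)) @ rng.standard_normal(100)
```

The full likelihood costs $O(n^3)$, while the Vecchia factorization costs $O(n m^3)$, which is the complexity gap the asymptotic results above quantify.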

Partitioned methods for coupled problems rely on data transfers between subdomains to synchronize the subdomain equations and enable their independent solution. By treating each subproblem as a separate entity, these methods enable code reuse, increase concurrency and provide a convenient framework for plug-and-play multiphysics simulations. However, accuracy and stability of partitioned methods depend critically on the type of information exchanged between the subproblems. The exchange mechanisms can vary from minimally intrusive remap across interfaces to more accurate but also more intrusive and expensive estimates of the necessary information based on monolithic formulations of the coupled system. These transfer mechanisms are separated by accuracy, performance and intrusiveness gaps that tend to limit the scope of the resulting partitioned methods to specific simulation scenarios. Data-driven system identification techniques provide an opportunity to close these gaps by enabling the construction of accurate, computationally efficient and minimally intrusive data transfer surrogates. This approach shifts the principal computational burden to an offline phase, leaving the application of the surrogate as the sole additional cost during the online simulation phase. In this paper we formulate and demonstrate such a \emph{dynamic flux surrogate-based} partitioned method for a model advection-diffusion transmission problem by using Dynamic Mode Decomposition (DMD) to learn the dynamics of the interface flux from data. The accuracy of the resulting DMD flux surrogate is comparable to that of a dual Schur complement reconstruction, yet its application cost is significantly lower. Numerical results confirm the attractive properties of the new partitioned approach.
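
A minimal sketch of the offline DMD step, with a synthetic linear system standing in for interface-flux snapshot data (the dimensions, dynamics, and rank are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def dmd_fit(X, Y, r):
    # Exact DMD: learn a rank-r linear operator with Y ~= A X from
    # snapshot pairs (columns of X advance one step to columns of Y).
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atil = U.T @ Y @ Vh.T / s          # reduced r x r operator
    return U, Atil

def dmd_step(U, Atil, g):
    # Apply the flux surrogate: advance one time step in reduced coords.
    return U @ (Atil @ (U.T @ g))

# Synthetic "interface flux" snapshots from a known low-dimensional
# linear system (hypothetical stand-in for offline monolithic data).
rng = np.random.default_rng(1)
d, K, r = 20, 60, 5
B, _ = np.linalg.qr(rng.standard_normal((d, r)))
M = 0.95 * np.linalg.qr(rng.standard_normal((r, r)))[0]   # stable dynamics
g = B @ rng.standard_normal(r)
snaps = [g]
for _ in range(K):
    g = B @ (M @ (B.T @ g))
    snaps.append(g)
S = np.array(snaps).T                  # d x (K+1) snapshot matrix

U, Atil = dmd_fit(S[:, :-1], S[:, 1:], r)
pred = dmd_step(U, Atil, S[:, -2])     # surrogate one-step prediction
```

Online, `dmd_step` replaces the expensive interface reconstruction: its cost is two tall-skinny matrix-vector products plus one small $r \times r$ product per coupling step.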

We consider finite element approximations to the optimal constant for the Hardy inequality with exponent $p=2$ in bounded domains of dimension $n=1$ or $n \geq 3$. For finite element spaces of piecewise linear and continuous functions on a mesh of size $h$, we prove that the approximate Hardy constant converges to the optimal Hardy constant at a rate proportional to $1/| \log h |^2$. This result holds in dimension $n=1$, in any dimension $n \geq 3$ if the domain is the unit ball and the finite element discretization exploits the rotational symmetry of the problem, and in dimension $n=3$ for general finite element discretizations of the unit ball. In the first two cases, our estimates show excellent quantitative agreement with values of the discrete Hardy constant obtained computationally.
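
In dimension $n=1$ the discrete Hardy constant can be computed directly as the smallest eigenvalue of a generalized eigenproblem. The sketch below assembles the piecewise-linear stiffness matrix and the $1/x^2$-weighted mass matrix on a uniform mesh of $(0,1)$ with $u_h(0)=0$; the mesh size and quadrature order are illustrative choices, not those of the paper's experiments.

```python
import numpy as np

# Piecewise-linear FE approximation of the 1D Hardy constant
#   C_h = min over u_h in V_h, u_h(0)=0 of
#         \int_0^1 u_h'^2 dx / \int_0^1 u_h^2 / x^2 dx,
# i.e. the smallest eigenvalue of A v = lambda B v, where A is the
# stiffness matrix and B the mass matrix weighted by 1/x^2.
n = 200                              # number of elements; h = 1/n
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)

# Stiffness matrix over dofs at nodes 1..n (u(0) = 0, u(1) free).
A = np.zeros((n, n))
ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
for k in range(n):
    idx = [k - 1, k]                 # dof i corresponds to node i+1
    for a in range(2):
        for b in range(2):
            if idx[a] >= 0 and idx[b] >= 0:
                A[idx[a], idx[b]] += ke[a, b]

# Weighted mass matrix via per-element Gauss quadrature (the weight
# 1/x^2 is integrable against hats vanishing at 0).
gx, gw = np.polynomial.legendre.leggauss(10)
B = np.zeros((n, n))
for k in range(n):
    xl, xr = nodes[k], nodes[k + 1]
    xq = 0.5 * (xl + xr) + 0.5 * h * gx
    wq = 0.5 * h * gw
    psi = np.stack([(xr - xq) / h, (xq - xl) / h])
    idx = [k - 1, k]
    for a in range(2):
        for b in range(2):
            if idx[a] >= 0 and idx[b] >= 0:
                B[idx[a], idx[b]] += np.sum(wq * psi[a] * psi[b] / xq**2)

# Smallest generalized eigenvalue via Cholesky reduction of B.
L = np.linalg.cholesky(B)
Mred = np.linalg.solve(L, np.linalg.solve(L, A).T).T
C_h = np.linalg.eigvalsh(Mred)[0]    # decreases toward 1/4 like 1/|log h|^2
```

Consistent with the $1/|\log h|^2$ rate, `C_h` remains visibly above the optimal constant $1/4$ even on fine meshes.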

We consider quantum and classical versions of a multi-party function computation problem with $n$ players, where players $2, \dots, n$ need to communicate appropriate information to player 1 so that a "generalized" inner product function with an appropriate promise can be calculated. The communication complexity of a protocol is the total number of bits that need to be communicated. When $n$ is prime and for our chosen function, we exhibit a quantum protocol (with complexity $(n-1) \log n$ bits) and a classical protocol (with complexity $(n-1)^2 \log n^2$ bits). In the quantum protocol, the players have access to entangled qudits, but the communication is still classical. Furthermore, we present an integer linear programming formulation for determining a lower bound on the classical communication complexity. This demonstrates that our quantum protocol is strictly better than classical protocols.

We present a subspace method to solve large-scale trace ratio problems. This method is matrix-free, only needing the action of the two matrices in the trace ratio. At each iteration, a smaller trace ratio problem is addressed in the search subspace. Additionally, our algorithm is endowed with a restarting strategy that ensures the monotonicity of the trace ratio value throughout the iterations. We also investigate the behavior of the approximate solution from a theoretical viewpoint, extending existing results on Ritz values and vectors, as the angle between the search subspace and the exact solution to the trace ratio approaches zero. In the context of multigroup classification, numerical experiments show that the new subspace method tends to be more efficient than iterative approaches that need a (partial) eigenvalue decomposition in every step.
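
The subspace method itself is not reproduced here, but the problem it targets can be sketched on a small dense instance with the classical trace ratio fixed-point iteration, where each step requires a full eigendecomposition of $A - \rho B$ (exactly the per-step cost the subspace method aims to avoid); the matrices and sizes below are illustrative.

```python
import numpy as np

def trace_ratio(A, B, p, iters=50, tol=1e-12):
    # Maximize tr(V^T A V) / tr(V^T B V) over V with orthonormal columns.
    # Classical fixed point: V_{k+1} = top-p eigenvectors of A - rho_k B;
    # the ratio rho_k increases monotonically to the optimum.
    n = A.shape[0]
    V = np.eye(n)[:, :p]
    rho = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
    for _ in range(iters):
        _, Q = np.linalg.eigh(A - rho * B)
        V = Q[:, -p:]                       # eigenvectors of largest eigvals
        new = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        if abs(new - rho) < tol:
            rho = new
            break
        rho = new
    return rho, V

rng = np.random.default_rng(0)
n, p = 30, 3
X = rng.standard_normal((n, n)); A = X @ X.T        # symmetric PSD
Y = rng.standard_normal((n, n)); B = Y @ Y.T + n * np.eye(n)  # SPD
rho, V = trace_ratio(A, B, p)
```

At the optimum, the sum of the $p$ largest eigenvalues of $A - \rho B$ vanishes, which gives a cheap convergence check.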

When extending inferences from a randomized trial to a new target population, an assumption of transportability of difference effect measures (e.g., conditional average treatment effects) -- or even stronger assumptions of transportability in expectation or distribution of potential outcomes -- is invoked to identify the marginal causal mean difference in the target population. However, many clinical investigators believe that relative effect measures conditional on covariates, such as conditional risk ratios and mean ratios, are more likely to be ``transportable'' across populations compared with difference effect measures. Here, we examine the identification and estimation of the marginal counterfactual mean difference and ratio under a transportability assumption for conditional relative effect measures. We obtain identification results for two scenarios that often arise in practice when individuals in the target population (1) only have access to the control treatment, or (2) have access to the control and other treatments but not necessarily the experimental treatment evaluated in the trial. We then propose multiply robust and nonparametric efficient estimators that allow for the use of data-adaptive methods (e.g., machine learning techniques) to model the nuisance parameters. We examine the performance of the methods in simulation studies and illustrate their use with data from two trials of paliperidone for patients with schizophrenia. We conclude that the proposed methods are attractive when background knowledge suggests that the transportability assumption for conditional relative effect measures is more plausible than alternative assumptions.
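
A toy simulation of scenario (1), where target individuals only have access to the control treatment, illustrates the plug-in logic under ratio transportability: the conditional mean ratio estimated in the trial is combined with a control-outcome regression fitted in the target. The data-generating process, quadratic regression fits, and sample sizes are all illustrative assumptions, not the paper's multiply robust estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# --- Trial: both arms observed; X ~ Uniform(0, 2). ---
x_tr = rng.uniform(0, 2, n)
mu0 = lambda x: 1.0 + 0.5 * x          # control mean in the trial
ratio = lambda x: 1.5 + 0.2 * x        # transportable conditional mean ratio
y0 = mu0(x_tr) + 0.3 * rng.standard_normal(n)
y1 = mu0(x_tr) * ratio(x_tr) + 0.3 * rng.standard_normal(n)

# --- Target: only control observed; shifted covariates and a *different*
#     control mean, so only the ratio transports. ---
x_tg = rng.uniform(0.5, 2, n)
nu0 = lambda x: 2.0 + 0.3 * x
y0_tg = nu0(x_tg) + 0.3 * rng.standard_normal(n)

# Plug-in estimator: fit flexible (here quadratic) outcome regressions,
# form the ratio from the trial, and average ratio * control mean over
# the target sample to identify the target mean under experimental
# treatment, E[Y(1)].
fit = lambda x, y: np.poly1d(np.polyfit(x, y, 2))
m0, m1, v0 = fit(x_tr, y0), fit(x_tr, y1), fit(x_tg, y0_tg)
psi_hat = np.mean(m1(x_tg) / m0(x_tg) * v0(x_tg))
```

Note that a difference-transportability estimator, `np.mean((m1 - m0)(x_tg) + v0(x_tg))`, would be biased here because the control means deliberately differ between populations while the ratio does not.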

In social choice theory with ordinal preferences, a voting method satisfies the axiom of positive involvement if adding to a preference profile a voter who ranks an alternative uniquely first cannot cause that alternative to go from winning to losing. In this note, we prove a new impossibility theorem concerning this axiom: there is no ordinal voting method satisfying positive involvement that also satisfies the Condorcet winner and loser criteria, resolvability, and a common invariance property for Condorcet methods, namely that the choice of winners depends only on the ordering of majority margins by size.
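
The axiom is easy to state operationally. The sketch below checks positive involvement for a single added voter under the Minimax (margins) method, a standard Condorcet method used here purely as an illustrative example; nothing in the theorem depends on this particular rule.

```python
def margins(profile, cands):
    # margin[a][b] = (# voters preferring a to b) - (# preferring b to a)
    m = {a: {b: 0 for b in cands} for a in cands}
    for ranking in profile:
        for i, a in enumerate(ranking):
            for b in ranking[i + 1:]:
                m[a][b] += 1
                m[b][a] -= 1
    return m

def minimax_winners(profile, cands):
    # Minimax: winners minimize their worst pairwise majority defeat.
    m = margins(profile, cands)
    score = {a: max(m[b][a] for b in cands if b != a) for a in cands}
    best = min(score.values())
    return {a for a in cands if score[a] == best}

def positive_involvement_ok(profile, cands, new_voter):
    # The axiom: a new voter ranking x uniquely first must not turn x
    # from a winner into a loser.
    x = new_voter[0]
    before = minimax_winners(profile, cands)
    after = minimax_winners(profile + [new_voter], cands)
    return (x not in before) or (x in after)

cands = ("A", "B", "C")
# A majority cycle A -> B -> C -> A; Minimax elects A.
profile = 3 * [("A", "B", "C")] + 2 * [("B", "C", "A")] + 2 * [("C", "A", "B")]
```

A checker like this verifies single instances only; the impossibility theorem asserts that no rule with all the listed properties can pass such checks on every profile.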

Flow-based models are widely used in generative tasks, including normalizing flows, where a neural network transports a data distribution $P$ to a normal distribution. This work develops a flow-based model that transports $P$ to an arbitrary distribution $Q$, where both distributions are accessible only via finite samples. We propose to learn the dynamic optimal transport between $P$ and $Q$ by training a flow neural network. The model is trained to find an invertible transport map between $P$ and $Q$ that minimizes the transport cost. The trained optimal transport flow subsequently allows for performing many downstream tasks, including infinitesimal density ratio estimation (DRE) and distribution interpolation in the latent space for generative models. The effectiveness of the proposed model on high-dimensional data is demonstrated by strong empirical performance on high-dimensional DRE, comparisons against OT baselines, and image-to-image translation.
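
The neural flow itself requires training machinery, but in one dimension the dynamic OT it approximates has a closed form: the quadratic-cost OT map is the monotone rearrangement pairing sorted samples, and displacement interpolation traces the transport path between the two sample sets. The sketch below uses this as an illustrative ground truth (the Gaussians and sample size are arbitrary choices, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
P = rng.normal(-2.0, 1.0, n)       # samples from P
Q = rng.normal(3.0, 0.5, n)        # samples from Q

# In 1D, the OT map for quadratic cost pairs sorted samples
# (monotone rearrangement): T(x_(i)) = y_(i).
xs, ys = np.sort(P), np.sort(Q)

# Dynamic OT / displacement interpolation: x_t = (1 - t) x + t T(x)
# is the path a trained OT flow should trace from P to Q.
interp = lambda t: (1 - t) * xs + t * ys

# Transport cost W_2^2 ~= mean squared displacement of the paired samples.
w2sq = np.mean((xs - ys) ** 2)
```

For these two Gaussians the exact cost is $W_2^2 = (\mu_Q-\mu_P)^2 + (\sigma_Q-\sigma_P)^2 = 25.25$, so the empirical estimate gives a quick sanity check of the pairing.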

For a sequence of random structures with $n$-element domains over a relational signature, we define its first order (FO) complexity as a certain subset in the Banach space $\ell^{\infty}/c_0$. The well-known FO zero-one law and FO convergence law correspond to FO complexities equal to $\{0,1\}$ and a subset of $\mathbb{R}$, respectively. We present a hierarchy of FO complexity classes, introduce a stochastic FO reduction that allows complexity results to be transferred between different random structures, and use this tool to deduce several new logical limit laws for binomial random structures. Finally, we introduce a conditional distribution on graphs, subject to an FO sentence $\varphi$, that generalises certain well-known random graph models, exhibit instances of this distribution for every complexity class, and prove that the set of all $\varphi$ for which the 0--1 law holds is not recursively enumerable.
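
As a concrete instance of the FO zero-one law, the sketch below estimates the probability that the binomial random graph $G(n, 1/2)$ satisfies the FO-expressible sentence "there exist three pairwise adjacent vertices"; the empirical frequency approaches 1 as $n$ grows. The sizes and trial counts are illustrative.

```python
import random

def has_triangle(n, p, rng):
    # Sample G(n, p) and check the FO sentence
    # exists x, y, z: (x ~ y) and (y ~ z) and (x ~ z).
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            adj[i][j] = adj[j][i] = rng.random() < p
    return any(adj[i][j] and adj[j][k] and adj[i][k]
               for i in range(n)
               for j in range(i + 1, n)
               for k in range(j + 1, n))

rng = random.Random(0)
# Empirical satisfaction frequency over 200 sampled graphs per size.
freq = {n: sum(has_triangle(n, 0.5, rng) for _ in range(200)) / 200
        for n in (4, 8, 16)}
```

By the zero-one law for $G(n, 1/2)$, every FO sentence has limiting probability 0 or 1; this sentence has limit 1, and already at $n = 16$ a triangle is almost never absent.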

A Gaussian process is proposed as a model for the posterior distribution of the local predictive ability of a model or expert, conditional on a vector of covariates, from historical predictions in the form of log predictive scores. Assuming Gaussian expert predictions and a Gaussian data generating process, a linear transformation of the predictive score follows a noncentral chi-squared distribution with one degree of freedom. Motivated by this, we develop a noncentral chi-squared Gaussian process regression to flexibly model local predictive ability, with the posterior distribution of the latent GP function and kernel hyperparameters sampled by Hamiltonian Monte Carlo. We show that a cube-root transformation of the log scores is approximately Gaussian with homoscedastic variance, which makes it possible to estimate the model much faster by marginalizing the latent GP function analytically. Linear pools based on learned local predictive ability are applied to predict daily bike usage in Washington DC.
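
A minimal sketch of the fast variant: simulate scores whose linear transform is noncentral $\chi^2_1$, apply the cube-root (Wilson-Hilferty-type) transform to make them approximately Gaussian with near-constant variance, and run plain GP regression with the latent function marginalized analytically. The kernel, hyperparameters, and the sinusoidal "local ability" signal are illustrative assumptions, not the paper's model for the bike-sharing data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = rng.uniform(0, 2 * np.pi, n)
lam = 2.0 + 1.5 * np.sin(x)            # covariate-dependent noncentrality

# A linear transform of the score is noncentral chi-squared, 1 dof:
# s = (Z + sqrt(lam))^2 with Z ~ N(0, 1).
s = (rng.standard_normal(n) + np.sqrt(lam)) ** 2

# Cube-root transform: approximately Gaussian, roughly homoscedastic,
# so conjugate GP regression applies.
z = s ** (1.0 / 3.0)

def sqexp(a, b, ell=1.0):
    # Squared-exponential kernel (illustrative choice).
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# GP regression posterior mean with the latent function marginalized
# analytically (Gaussian likelihood on the transformed scores z).
noise = 0.3
K = sqexp(x, x) + noise * np.eye(n)
xg = np.linspace(0, 2 * np.pi, 100)
post_mean = sqexp(xg, x) @ np.linalg.solve(K, z - z.mean()) + z.mean()
```

The posterior mean of the transformed scores tracks the underlying sinusoidal ability signal, at a cost of one Cholesky solve rather than a Hamiltonian Monte Carlo run over the latent function.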
