
Optimal transport (OT) theory and the related $p$-Wasserstein distance ($W_p$, $p\geq 1$) are widely applied in statistics and machine learning. Despite their popularity, inference based on these tools is sensitive to outliers and can perform poorly when the underlying model has heavy tails. To cope with these issues, we introduce a new class of procedures. (i) We consider a robust version of the primal OT problem (ROBOT) and show that it defines the {robust Wasserstein distance}, $W^{(\lambda)}$, which depends on a tuning parameter $\lambda > 0$. (ii) We illustrate the link between $W_1$ and $W^{(\lambda)}$ and study its key measure-theoretic aspects. (iii) We derive concentration inequalities for $W^{(\lambda)}$. (iv) We use $W^{(\lambda)}$ to define minimum distance estimators, provide their statistical guarantees, and illustrate how to apply the concentration inequalities to the selection of $\lambda$. (v) We derive the {dual} form of the ROBOT and illustrate its applicability to machine learning problems (generative adversarial networks and domain adaptation). Numerical exercises provide evidence of the benefits yielded by our methods.
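One way to build intuition for outlier-robust OT is a toy discrete experiment with a truncated ground cost, a known device for capping the influence any single outlier can exert on the transport plan. This is only a sketch of the idea, not the paper's $W^{(\lambda)}$; the truncation level $2\lambda$ and the tiny LP solver below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot(a, b, C):
    """Solve min <P, C> s.t. P @ 1 = a, P.T @ 1 = b, P >= 0 as a linear program."""
    n, m = C.shape
    A_eq = []
    for i in range(n):                 # row-sum constraints: P @ 1 = a
        row = np.zeros((n, m)); row[i, :] = 1.0; A_eq.append(row.ravel())
    for j in range(m):                 # column-sum constraints: P.T @ 1 = b
        col = np.zeros((n, m)); col[:, j] = 1.0; A_eq.append(col.ravel())
    res = linprog(C.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([a, b]), bounds=(0, None))
    return res.fun

# Two 1-D empirical measures; the last source point (x = 100) is an outlier.
x = np.array([0.0, 1.0, 2.0, 100.0])
y = np.array([0.0, 1.0, 2.0, 3.0])
a = np.full(4, 0.25)
b = np.full(4, 0.25)
C = np.abs(x[:, None] - y[None, :])    # W1 ground cost |x - y|

lam = 2.0
w1 = discrete_ot(a, b, C)                              # plain W1: dominated by the outlier
w_robust = discrete_ot(a, b, np.minimum(C, 2 * lam))   # truncated-cost surrogate
print(w1, w_robust)                                    # 24.25 vs 1.0
```

The outlier contributes $0.25 \times 97$ to the plain $W_1$ but at most $0.25 \times 2\lambda$ after truncation, so the robust value stays close to the distance between the uncontaminated parts of the measures.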

Related content

It is often claimed that the theory of function levels proposed by Frege in Grundgesetze der Arithmetik anticipates the hierarchy of types that underlies Church's simple theory of types. This claim roughly states that Frege presupposes a type of functions in the sense of simple type theory in the expository language of Grundgesetze. However, this view makes it hard to accommodate function names of two arguments and to view functions as incomplete entities. I propose and defend an alternative interpretation that renders first-level function names in Grundgesetze as simple type-theoretic open terms rather than as closed terms of a function type. This interpretation offers a still unhistorical but more faithful type-theoretic approximation of Frege's theory of levels, and it can be naturally extended to accommodate second-level functions. It is made possible by two key observations: that Frege's Roman markers behave essentially like open terms, and that Frege lacks a clear criterion for distinguishing between Roman markers and function names.

We consider the numerical approximation of variational problems with orthotropic growth, that is, those in which the integrand depends strongly on the coordinate directions, with possibly different growth in each direction. Under realistic regularity assumptions we derive optimal error estimates. These estimates depend on the existence of an orthotropically stable interpolation operator. Over certain meshes we construct an orthotropically stable interpolant that is also a projection. Numerical experiments illustrate and explore the limits of our theory.

An initial-boundary value problem with a Caputo time derivative of fractional order $\alpha\in(0,1)$ is considered, solutions of which typically exhibit singular behaviour at the initial time. For this problem, we give a simple and general numerical-stability analysis using barrier functions, which yields sharp pointwise-in-time error bounds on quasi-graded temporal meshes with an arbitrary degree of grading. L1-type and Alikhanov-type discretizations in time are considered. In particular, these results imply that milder (compared to the optimal) grading yields optimal convergence rates in positive time. Both semi-discretizations in time and full discretizations are addressed. The theoretical findings are illustrated by numerical experiments.
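To make the setting concrete, here is a minimal sketch of the classical L1 discretization on a graded mesh $t_j = T(j/N)^r$, applied to the scalar test problem $D_t^\alpha u = -u$, $u(0)=1$ (whose exact solution is the Mittag-Leffler function $E_\alpha(-t^\alpha)$). This is the standard textbook scheme, not the paper's analysis; the parameter choices are illustrative:

```python
import numpy as np
from math import gamma

def l1_solve(alpha, T=1.0, N=64, r=2.0, lam=1.0):
    """Implicit L1 scheme on the graded mesh t_j = T*(j/N)**r for the
    Caputo test problem  D_t^alpha u = -lam*u,  u(0) = 1."""
    t = T * (np.arange(N + 1) / N) ** r
    u = np.zeros(N + 1)
    u[0] = 1.0
    g = gamma(2.0 - alpha)
    for n in range(1, N + 1):
        # L1 weights a_k multiplying the differences (u^k - u^{k-1})
        k = np.arange(1, n + 1)
        a = ((t[n] - t[k - 1]) ** (1 - alpha) - (t[n] - t[k]) ** (1 - alpha)) \
            / (g * (t[k] - t[k - 1]))
        # a[-1] multiplies u^n; move the known history to the right-hand side
        rhs = a[-1] * u[n - 1] - np.dot(a[:-1], u[1:n] - u[0:n - 1])
        u[n] = rhs / (a[-1] + lam)
    return t, u

t, u = l1_solve(alpha=0.5)
print(u[-1])   # should be close to E_{1/2}(-1) ~ 0.4276 at t = 1
```

The mesh grading concentrates time steps near $t=0$, where the solution has the initial-layer singularity the abstract refers to.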

We develop a theory for the representation of opaque solids as volumetric models. Starting from a stochastic representation of opaque solids as random indicator functions, we prove the conditions under which such solids can be modeled using exponential volumetric transport. We also derive expressions for the volumetric attenuation coefficient as a functional of the probability distributions of the underlying indicator functions. We generalize our theory to account for isotropic and anisotropic scattering at different parts of the solid, and for representations of opaque solids as implicit surfaces. We derive our volumetric representation from first principles, which ensures that it satisfies physical constraints such as reciprocity and reversibility. We use our theory to explain, compare, and correct previous volumetric representations, as well as propose meaningful extensions that lead to improved performance in 3D reconstruction tasks.
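The exponential volumetric transport model at the core of such representations relates the attenuation coefficient $\sigma_t$ to the transmittance $T(t)=\exp(-\int_0^t \sigma_t(s)\,ds)$ along a ray. A minimal numerical sketch (homogeneous toy medium; the function name and trapezoid discretization are our illustrative choices, not the paper's implementation):

```python
import numpy as np

def transmittance(sigma, ts):
    """Exponential transmittance T(t) = exp(-integral_0^t sigma(s) ds)
    along a ray, with the optical depth computed by the trapezoid rule."""
    tau = np.concatenate([
        [0.0],
        np.cumsum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(ts)),
    ])
    return np.exp(-tau)

ts = np.linspace(0.0, 2.0, 201)
sigma = np.full_like(ts, 1.5)   # homogeneous medium with sigma_t = 1.5
T = transmittance(sigma, ts)
print(T[-1])                    # exp(-1.5 * 2.0) = exp(-3) ~ 0.0498
```

For a homogeneous medium the optical depth is just $\sigma_t$ times the distance, which gives a quick sanity check on the quadrature.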

Busy-waiting is an important, low-level synchronization pattern that is used to implement higher-level abstractions for synchronization. Its termination depends on cooperation by other threads as well as a fair thread scheduler. We present a general approach for modularly verifying busy-waiting concurrent programs based on higher-order separation logic. The approach combines two strands of prior work. First, the Jacobs and Piessens (2011) higher-order-programming perspective for verifying concurrent modules. Second, the Reinhard and Jacobs (2021) ghost signals approach to verify busy-waiting. The latter uses classical specifications for synchronization constructs where the module creates and discharges obligations. Such specifications, however, fix particular client patterns and would in general require "obligation transfer" to handle more intricate wait dependencies. This precludes clients from performing lock handoffs, an important mechanism to control (un)fairness in the design of locks. Our contribution -- inspired by D'Osualdo, Sutherland, Farzan and Gardner (2021)'s TaDA Live -- is to require the client to create and discharge obligations as necessary to satisfy the module's liveness requirements. However, instead of building these liveness requirements into the logic, we express them by having the module's operations take auxiliary code as arguments whose job it is to generate the call permissions the module needs for its busy-waiting. In the paper we present specifications and proofs in Iris. We validated our approach by developing a (non-foundational) machine-checked proof of a cohort lock -- to the best of our knowledge the first of its kind -- using an encoding of our approach in the VeriFast program verifier for C and Java. This fair lock is implemented on top of another fair lock module and involves lock handoff, thus exercising the asserted contributions.

The Galerkin method is often employed for numerical integration of evolutionary equations, such as the Navier-Stokes equation or the magnetic induction equation. Application of the method requires solving an equation of the form $P(Av-f)=0$ at each time step, where $v$ is an element of a finite-dimensional space $V$ with a basis satisfying boundary conditions, $P$ is the orthogonal projection on this space and $A$ is a linear operator. Usually the coefficients of $v$ expanded in the basis are found by calculating the matrix of $PA$ acting on $V$ and solving the respective system of linear equations. For physically realistic boundary conditions (such as the no-slip boundary conditions for the velocity, or for a dielectric outside the fluid volume for the magnetic field) the basis is often not orthogonal and solving the problem can be computationally demanding. We propose an algorithm giving an opportunity to reduce the computational cost for such a problem. Suppose there exists a space $W$ that contains $V$, the difference between the dimensions of $W$ and $V$ is small relative to the dimension of $V$, and solving the problem $P(Aw-f)=0$, where $w$ is an element of $W$, requires less operations than solving the original problem. The equation $P(Av-f)=0$ is then solved in two steps: we solve the problem $P(Aw-f)=0$ in $W$, find a correction $h=v-w$ that belongs to a complement to $V$ in $W$, and obtain the solution $w+h$. When the dimension of the complement is small the proposed algorithm is more efficient than the traditional one.
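The traditional approach described above, assembling the matrix of $PA$ on $V$ and solving the resulting linear system, can be sketched in a few lines. The random SPD operator `A` and toy non-orthogonal basis `B` below are illustrative assumptions, not the fluid-dynamical setting of the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 8
R = rng.standard_normal((n, n))
A = R @ R.T + n * np.eye(n)          # toy SPD linear operator
f = rng.standard_normal(n)
B = rng.standard_normal((n, k))      # non-orthogonal basis of the subspace V

# Galerkin condition P(Av - f) = 0 with v = B @ c  <=>  (B^T A B) c = B^T f
c = np.linalg.solve(B.T @ A @ B, B.T @ f)
v = B @ c

# The residual A v - f is orthogonal to V, i.e. B^T (A v - f) = 0.
print(np.abs(B.T @ (A @ v - f)).max())
```

When the basis is not orthogonal, forming and factoring $B^T A B$ is the dominant cost; the proposed algorithm aims to replace most of this work by a cheaper solve in the larger space $W$ plus a small correction.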

Many interesting physical problems described by systems of hyperbolic conservation laws are stiff, and thus impose a very small time-step because of the restrictive CFL stability condition. In this case, one can exploit the superior stability properties of implicit time integration, which allows one to choose the time-step from accuracy requirements alone and thus avoid the use of small time-steps. We discuss an efficient framework for devising high order implicit schemes for stiff hyperbolic systems without tailoring it to a specific problem. The nonlinearity of high order schemes, due to the space- and time-limiting procedures that control nonphysical oscillations, makes implicit time integration difficult, e.g.~because the discrete system is nonlinear even for linear problems. This nonlinearity of the scheme is circumvented as proposed in (Puppo et al., Comm.~Appl.~Math.~\& Comput., 2023) for scalar conservation laws, where a first order implicit predictor is computed to freeze the nonlinear coefficients of the essentially non-oscillatory space reconstruction, and also to assist limiting in time. In addition, we propose a novel conservative flux-centered a-posteriori time-limiting procedure using numerical entropy indicators to detect troubled cells. The numerical tests involve classical and artificially devised stiff problems using the Euler system of gas dynamics.
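The benefit of implicit time integration for stiff problems can be seen already in the simplest setting: a first-order backward-Euler upwind scheme for linear advection, which remains stable at CFL numbers far above 1. This is a toy illustration of the stability argument, not the paper's high order scheme:

```python
import numpy as np

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)     # smooth initial pulse
a, dx = 1.0, 1.0 / n
dt = 5 * dx / a                          # CFL number = 5: explicit upwind would blow up
nu = a * dt / dx

# Backward-Euler upwind (a > 0, periodic):
#   (1 + nu) u_j^{n+1} - nu u_{j-1}^{n+1} = u_j^n
I = np.eye(n)
M = (1 + nu) * I - nu * np.roll(I, -1, axis=1)

u = u0.copy()
for _ in range(20):
    u = np.linalg.solve(M, u)

# Unconditional stability (max principle) and exact conservation of mass:
print(u.max() <= u0.max(), abs(u.sum() - u0.sum()) < 1e-8)
```

Each column of `M` sums to 1, so the total mass is conserved exactly, and diagonal dominance gives a discrete maximum principle at any CFL number; the price paid is a linear solve per step and first-order numerical dissipation.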

As in many fields of medical research, survival analysis has witnessed a growing interest in the application of deep learning techniques to model complex, high-dimensional, heterogeneous, incomplete, and censored medical data. Current methods often make assumptions about the relations between data that may not be valid in practice. In response, we introduce SAVAE (Survival Analysis Variational Autoencoder), a novel approach based on Variational Autoencoders. SAVAE contributes significantly to the field by introducing a tailored ELBO formulation for survival analysis, supporting various parametric distributions for covariates and survival time (as long as the log-likelihood is differentiable). It offers a general method that consistently performs well on various metrics, demonstrating robustness and stability through different experiments. Our proposal effectively estimates time-to-event, accounting for censoring, covariate interactions, and time-varying risk associations. We validate our model in diverse datasets, including genomic, clinical, and demographic data, with varying levels of censoring. This approach demonstrates competitive performance compared to state-of-the-art techniques, as assessed by the Concordance Index and the Integrated Brier Score. SAVAE also offers an interpretable model that parametrically models covariates and time. Moreover, its generative architecture facilitates further applications such as clustering, data imputation, and the generation of synthetic patient data through latent space inference from survival data.
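The censoring-aware likelihood that parametric survival models of this kind maximize has the standard form $\delta \log f(t) + (1-\delta)\log S(t)$, where $\delta$ is the event indicator and $S$ the survival function. A sketch for a Weibull time distribution, one of the parametric families such a model can support (the function name and parameterization are ours, not SAVAE's API):

```python
import numpy as np

def weibull_censored_loglik(t, delta, k, lam):
    """Right-censored log-likelihood  sum_i [delta_i*log f(t_i) + (1-delta_i)*log S(t_i)]
    for a Weibull(shape k, scale lam):
      S(t) = exp(-(t/lam)^k),  f(t) = (k/lam)*(t/lam)^(k-1)*S(t)."""
    z = t / lam
    log_S = -z ** k
    log_f = np.log(k / lam) + (k - 1) * np.log(z) + log_S
    return np.sum(delta * log_f + (1 - delta) * log_S)

# One observed event at t=1 and one right-censored subject at t=2,
# under an exponential model (Weibull with k = 1, scale 2).
t = np.array([1.0, 2.0])
delta = np.array([1.0, 0.0])
print(weibull_censored_loglik(t, delta, k=1.0, lam=2.0))   # log(0.5) - 1.5
```

Because this expression is differentiable in $(k, \lambda)$, it can serve as the reconstruction term of an ELBO and be optimized by gradient descent, which is the differentiability requirement the abstract mentions.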

As a crossover frontier of physics and mechanics, quantum computing is showing great potential in computational mechanics. However, quantum hardware noise remains a critical barrier to accurate simulation results given the limitations of current hardware. In this paper, we integrate error-mitigated quantum computing into data-driven computational mechanics, where the zero-noise extrapolation (ZNE) technique is employed to improve the accuracy of quantum computing. Numerical examples, including a multiscale simulation of a composite L-shaped beam, are conducted with the quantum computer simulator Qpanda, and the results validate the effectiveness of the proposed method. We believe this work presents a promising step towards harnessing the power of quantum computing in computational mechanics.
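The core of ZNE is simple: measure an expectation value at several deliberately amplified noise levels, fit a model through the measurements, and extrapolate to the zero-noise limit. A minimal Richardson-extrapolation sketch with a synthetic exponential noise model (the decay rate 0.2 and the sample scales are made-up toy values):

```python
import numpy as np

def zne_richardson(noise_scales, expectations):
    """Zero-noise extrapolation: interpolate the measured expectation values
    with a polynomial in the noise scale and evaluate it at scale 0."""
    coeffs = np.polyfit(noise_scales, expectations, deg=len(noise_scales) - 1)
    return np.polyval(coeffs, 0.0)

# Toy noise model: E(s) = E_ideal * exp(-0.2 * s), sampled at scales 1, 2, 3
ideal = 1.0
scales = np.array([1.0, 2.0, 3.0])
noisy = ideal * np.exp(-0.2 * scales)

est = zne_richardson(scales, noisy)
print(est)   # much closer to 1.0 than the raw value noisy[0] ~ 0.819
```

In practice the amplified-noise values come from running the same circuit with folded gates or stretched pulses; only the extrapolation step is shown here.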

Deep neural networks (DNNs) often fail silently with over-confident predictions on out-of-distribution (OOD) samples, posing risks in real-world deployments. Existing techniques predominantly emphasize either the feature representation space or the gradient norms computed with respect to DNN parameters, yet they overlook the intricate gradient distribution and the topology of classification regions. To address this gap, we introduce GRadient-aware Out-Of-Distribution detection in interpolated manifolds (GROOD), a novel framework that relies on the discriminative power of gradient space to distinguish between in-distribution (ID) and OOD samples. To build this space, GROOD relies on class prototypes together with a prototype that specifically captures OOD characteristics. Uniquely, our approach incorporates a targeted mix-up operation at an early intermediate layer of the DNN to refine the separation of gradient spaces between ID and OOD samples. We quantify OOD detection efficacy using the distance to the nearest neighbor gradients derived from the training set, yielding a robust OOD score. Experimental evaluations substantiate that the introduction of targeted input mix-up amplifies the separation between ID and OOD in the gradient space, yielding impressive results across diverse datasets. Notably, when benchmarked against ImageNet-1k, GROOD surpasses the established robustness of state-of-the-art baselines. Through this work, we establish the utility of leveraging gradient spaces and class prototypes for enhanced OOD detection for DNNs in image classification.
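The scoring principle, distance to the nearest training-set point in some representation space, can be illustrated without a trained network. Below, synthetic Gaussian features stand in for the gradient representations (this is a generic nearest-neighbor score, not GROOD's prototype-based construction):

```python
import numpy as np

def knn_ood_score(train_feats, query_feats):
    """OOD score = Euclidean distance to the nearest training sample in a
    representation space (larger score => more likely out-of-distribution)."""
    d = np.linalg.norm(query_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    return d.min(axis=1)

rng = np.random.default_rng(0)
id_train = rng.normal(0.0, 1.0, size=(500, 8))    # in-distribution training features
id_query = rng.normal(0.0, 1.0, size=(50, 8))     # held-out ID queries
ood_query = rng.normal(6.0, 1.0, size=(50, 8))    # shifted cluster: OOD queries

s_id = knn_ood_score(id_train, id_query)
s_ood = knn_ood_score(id_train, ood_query)
print(s_id.mean() < s_ood.mean())   # True: OOD samples sit far from all training points
```

Thresholding such a score gives a detector; the quality of the underlying representation space (features, gradients, or prototype-relative gradients as in the abstract) is what determines how well the two score distributions separate.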
