
We formulate a uniform tail bound for empirical processes indexed by a class of functions, in terms of the individual deviations of the functions rather than the worst-case deviation in the considered class. The tail bound is established by introducing an initial ``deflation'' step to the standard generic chaining argument. The resulting tail bound is the sum of the complexity of the ``deflated function class'' in terms of a generalization of Talagrand's $\gamma$ functional, and the deviation of the function instance, both of which are formulated based on the natural seminorm induced by the corresponding Cram\'{e}r functions. Leveraging another less demanding natural seminorm, we also show similar bounds, though with implicit dependence on the sample size, in the more general case where finite exponential moments cannot be assumed. We also provide approximations of the tail bounds in terms of the more prevalent Orlicz norms or their ``incomplete'' versions under suitable moment conditions.
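
For orientation, the complexity measure being generalized is Talagrand's $\gamma_\alpha$ functional. In its standard form (stated here with a generic metric $d$, which the paper replaces by the Cram\'{e}r-induced seminorm):

    \gamma_\alpha(T, d) = \inf_{(T_n)} \sup_{t \in T} \sum_{n \ge 0} 2^{n/\alpha}\, d(t, T_n),

where the infimum runs over admissible sequences of subsets $T_n \subseteq T$ with $|T_0| = 1$ and $|T_n| \le 2^{2^n}$.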

Related Content

We consider quantum circuit models where the gates are drawn from arbitrary gate ensembles given by probabilistic distributions over certain gate sets and circuit architectures, which we call stochastic quantum circuits. Of main interest in this work is the speed of convergence of stochastic circuits with different gate ensembles and circuit architectures to unitary $t$-designs. A key motivation for this theory is the varying preference for different gates and circuit architectures in different practical scenarios. In particular, it provides a versatile framework for devising efficient circuits for implementing $t$-designs and relevant applications including random circuit and scrambling experiments, as well as for benchmarking the performance of gates and circuit architectures. We examine various important settings in depth. A key aspect of our study is an "ironed gadget" model, which allows us to systematically evaluate and compare the convergence efficiency of entangling gates and circuit architectures. Particularly notable results include i) gadgets of two-qubit gates with KAK coefficients $\left(\frac{\pi}{4}-\frac{1}{8}\arccos(\frac{1}{5}),\frac{\pi}{8},\frac{1}{8}\arccos(\frac{1}{5})\right)$ (which we call $\chi$ gates) directly form exact 2- and 3-designs; ii) the iSWAP gate family achieves the best efficiency for convergence to 2-designs for generic many-body circuits, even outperforming the Haar-random gate, under mild conjectures supported by numerical evidence; iii) iSWAP + complete graph achieves the best efficiency for convergence to 2-designs among all graph circuits. A variety of numerical results are provided to complement our analysis. We also derive robustness guarantees for our analysis against gate perturbations. Additionally, we provide a cursory analysis of gates with higher locality and find that the Margolus gate outperforms various other well-known gates.
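
As a concrete illustration of the KAK parametrization behind the $\chi$ gates, the sketch below builds the nonlocal core $\exp(i(c_1 XX + c_2 YY + c_3 ZZ))$ of a two-qubit gate from its KAK coefficients. This is the standard Cartan-decomposition construction, not the paper's gadget or design analysis:

    # Minimal sketch: the entangling core of a two-qubit gate from its KAK
    # coefficients, instantiated with the chi-gate coefficients quoted above.
    import numpy as np
    from scipy.linalg import expm

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def kak_core(c1, c2, c3):
        """exp(i (c1 X@X + c2 Y@Y + c3 Z@Z)), the nonlocal part of the
        KAK (Cartan) decomposition of a two-qubit unitary."""
        H = c1 * np.kron(X, X) + c2 * np.kron(Y, Y) + c3 * np.kron(Z, Z)
        return expm(1j * H)

    theta = np.arccos(1 / 5) / 8
    chi = kak_core(np.pi / 4 - theta, np.pi / 8, theta)
    assert np.allclose(chi @ chi.conj().T, np.eye(4))  # sanity check: unitary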

Generalized linear mixed models (GLMMs) are a widely used tool in statistical analysis. The main bottleneck of many computational approaches lies in the inversion of the high-dimensional precision matrices associated with the random effects. Such matrices are typically sparse; however, the sparsity pattern resembles a multipartite random graph, which does not lend itself well to default sparse linear algebra techniques. Notably, we show that, for typical GLMMs, the Cholesky factor is dense even when the original precision matrix is sparse. We thus turn to approximate iterative techniques, in particular to the conjugate gradient (CG) method. We combine a detailed analysis of the spectrum of said precision matrices with results from random graph theory to show that CG-based methods applied to high-dimensional GLMMs typically achieve a fixed approximation error with a total cost that scales linearly with the number of parameters and observations. Numerical illustrations with both real and simulated data confirm the theoretical findings, while at the same time illustrating situations, such as nested structures, where CG-based methods struggle.
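
The computational point can be sketched with a generic sparse, symmetric positive definite matrix standing in for a GLMM precision (the paper's actual multipartite structure is not reproduced here): CG touches the matrix only through matrix-vector products, so each iteration costs on the order of the number of nonzeros, with no Cholesky fill-in:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    rng = np.random.default_rng(0)
    n = 2000
    A = sp.random(n, n, density=5 / n, random_state=rng)
    # Symmetrize and shift the diagonal to make the matrix positive definite.
    Q = sp.csr_matrix(A + A.T + sp.eye(n) * (abs(A).sum(axis=1).max() * 2 + 1))

    b = rng.standard_normal(n)
    x, info = cg(Q, b)          # each iteration: one sparse mat-vec, O(nnz)
    assert info == 0
    print("residual norm:", np.linalg.norm(Q @ x - b))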

Statistical learning under distribution shift is challenging when neither prior knowledge nor fully accessible data from the target distribution is available. Distributionally robust learning (DRL) aims to control the worst-case statistical performance within an uncertainty set of candidate distributions, but how to properly specify the set remains difficult. To enable distributional robustness without being overly conservative, in this paper we propose a shape-constrained approach to DRL, which incorporates prior information about the way in which the unknown target distribution differs from its estimate. More specifically, we assume the unknown density ratio between the target distribution and its estimate is isotonic with respect to some partial order. At the population level, we provide a solution to the shape-constrained optimization problem that does not involve the isotonic constraint. At the sample level, we provide consistency results for an empirical estimator of the target in a range of different settings. Empirical studies on both synthetic and real data examples demonstrate the improved accuracy of the proposed shape-constrained approach.
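
As a toy illustration of the shape constraint (a hypothetical setup, not the paper's estimator): for a scalar covariate ordered by $\le$, a noisy pointwise density-ratio estimate can be projected onto monotone functions via isotonic regression, and the result used to reweight an empirical risk:

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(0, 1, 500))
    raw_ratio = np.exp(x) * rng.lognormal(sigma=0.3, size=x.size)  # noisy pointwise estimates

    iso = IsotonicRegression(increasing=True)
    r_hat = iso.fit_transform(x, raw_ratio)   # monotone projection (PAVA)
    r_hat /= r_hat.mean()                     # renormalize to a density ratio

    losses = (x - 0.5) ** 2                   # placeholder losses
    print("reweighted risk:", np.mean(r_hat * losses))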

In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.
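
For reference, the classical notion that the article extends from the pointwise order to general preorders: a functional $\Phi$ is maxitive if

    \Phi(f \vee g) = \Phi(f) \vee \Phi(g) \qquad \text{for all } f, g,

with the worst-case evaluation $\Phi(f) = \sup_x f(x)$ as the prototypical example.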

We propose a novel, highly efficient, second-order accurate, long-time unconditionally stable numerical scheme for a class of finite-dimensional nonlinear models that are of importance in geophysical fluid dynamics. The scheme is highly efficient in the sense that only a (fixed) symmetric positive definite linear problem (with varying right-hand sides) is involved at each time-step. The solutions to the scheme are uniformly bounded for all time. We show that the scheme is able to capture the long-time dynamics of the underlying geophysical model, with the global attractors as well as the invariant measures of the scheme converging to those of the original model as the step size approaches zero. In our numerical experiments, we take an indirect approach, using long-term statistics to approximate the invariant measures. Our results suggest that the convergence rate of the long-term statistics, as a function of terminal time, is approximately first order in the Jensen-Shannon metric and half order in the $L^1$ metric. This implies that very long simulations are needed to capture a few significant digits of the long-term statistics (climate) correctly. Nevertheless, the second-order scheme's performance remains superior to that of the first-order one, requiring significantly less time to reach a small neighborhood of statistical equilibrium for a given step size.
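
The convergence-rate comparison can be mimicked on a toy ergodic model (an Ornstein-Uhlenbeck process here, purely illustrative and unrelated to the geophysical scheme): compare empirical histograms at increasing terminal times against a long reference run under both metrics:

    import numpy as np
    from scipy.spatial.distance import jensenshannon

    def trajectory(T, dt=1e-2, seed=0):
        """Euler-Maruyama path of the toy SDE dx = -x dt + sqrt(2) dW."""
        rng = np.random.default_rng(seed)
        n = int(T / dt)
        x = np.empty(n)
        x[0] = 0.0
        for k in range(n - 1):
            x[k + 1] = x[k] - x[k] * dt + np.sqrt(2 * dt) * rng.standard_normal()
        return x

    bins = np.linspace(-4, 4, 81)
    width = bins[1] - bins[0]
    ref, _ = np.histogram(trajectory(10_000.0), bins=bins, density=True)
    for T in (100.0, 1_000.0, 10_000.0):
        h, _ = np.histogram(trajectory(T, seed=1), bins=bins, density=True)
        js = jensenshannon(ref, h)              # Jensen-Shannon metric
        l1 = np.abs(ref - h).sum() * width      # L^1 metric
        print(f"T={T:8.0f}  JS={js:.3f}  L1={l1:.3f}")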

The gradient bounds of generalized barycentric coordinates play an essential role in the $H^1$ norm approximation error estimate of generalized barycentric interpolations. Similarly, the $H^k$ norm, $k>1$, estimate needs upper bounds of high-order derivatives, which are not available in the literature. In this paper, we derive such upper bounds for the Wachspress generalized barycentric coordinates on simple convex $d$-dimensional polytopes, $d\ge 1$. The result can be used to prove optimal convergence for Wachspress-based polytopal finite element approximation of, for example, fourth-order elliptic equations. Another contribution of this paper is to compare various shape-regularity conditions for simple convex polytopes, and to clarify their relations using knowledge from convex geometry.
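
For concreteness, here is the $d = 2$ case of the coordinates in question (a standard construction; the paper's contribution is bounds on their high-order derivatives, not this evaluation). For a convex polygon with counter-clockwise vertices $v_i$, the Wachspress weights are $w_i = C_i / (A_{i-1} A_i)$, where $C_i$ is the area of the corner triangle at $v_i$ and $A_i$ the area of the triangle spanned by the query point and edge $i$:

    import numpy as np

    def tri_area(a, b, c):
        """Twice the signed area of triangle (a, b, c)."""
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def wachspress(vertices, x):
        """Wachspress coordinates of x inside a convex polygon (CCW vertices)."""
        v = np.asarray(vertices, dtype=float)
        n = len(v)
        w = np.empty(n)
        for i in range(n):
            prev, cur, nxt = v[i - 1], v[i], v[(i + 1) % n]
            C = tri_area(prev, cur, nxt)       # corner triangle at v_i
            w[i] = C / (tri_area(x, prev, cur) * tri_area(x, cur, nxt))
        return w / w.sum()

    square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])
    lam = wachspress(square, np.array([0.25, 0.5]))
    print(lam, lam @ square)   # partition of unity and linear precision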

We present a scalable Bayesian framework for the analysis of confocal fluorescence spectroscopy data, addressing key limitations in traditional fluorescence correlation spectroscopy methods. Our framework captures molecular motion, microscope optics, and photon detection with high fidelity, enabling statistical inference of molecule trajectories from raw photon count data; a superresolution parameter further enhances trajectory estimation beyond the native time resolution of data acquisition. To handle the high dimensionality of the arising posterior distribution, we develop a family of Hamiltonian Monte Carlo (HMC) algorithms that leverages the characteristics inherent to spectroscopy data analysis. Here, due to the highly coupled correlation structure of the target posterior distribution, HMC requires the numerical solution of a stiff ordinary differential equation containing a two-scale discrete Laplacian. By considering the spectral properties of this operator, we derive a CFL-type stability condition for the standard St\"ormer-Verlet integrator used in HMC. To circumvent this instability, we introduce a semi-implicit (IMEX) method that treats the stiff and non-stiff parts differently, while leveraging the sparse structure of the discrete Laplacian for computational efficiency. Detailed numerical experiments demonstrate that this method improves upon fully explicit approaches, allowing larger HMC step sizes while maintaining second-order accuracy in position and energy. Our framework provides a foundation for extensions to more complex models such as surface-constrained molecular motion or motion with multiple diffusion modes.
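
For context, here is the integrator whose stability is at issue, in its generic form (a standard Störmer-Verlet/leapfrog HMC step; the paper's IMEX variant and two-scale Laplacian are not reproduced):

    import numpy as np

    def leapfrog(q, p, grad_U, eps, n_steps):
        """Stormer-Verlet integration of H(q, p) = U(q) + |p|^2 / 2, as used
        in HMC proposals. Fully explicit, so the step size eps is limited by
        a CFL-type bound tied to the stiffest curvature of U."""
        p = p - 0.5 * eps * grad_U(q)          # initial half kick
        for _ in range(n_steps - 1):
            q = q + eps * p                    # drift
            p = p - eps * grad_U(q)            # full kick
        q = q + eps * p
        p = p - 0.5 * eps * grad_U(q)          # final half kick
        return q, p

    # Toy usage on a standard Gaussian target, U(q) = |q|^2 / 2.
    q, p = np.ones(3), np.zeros(3)
    q, p = leapfrog(q, p, lambda q: q, eps=0.1, n_steps=20)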

We propose to condition a generative model on the uncertainty of a given image classifier in order to analyze and explain the classifier's behavior. Preliminary experiments on synthetic data and a corrupted version of the MNIST dataset illustrate the idea.
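
One plausible choice of conditioning signal (an assumption for illustration; the abstract does not specify the uncertainty measure) is the predictive entropy of the classifier's softmax output:

    import numpy as np

    def predictive_entropy(logits):
        """Entropy of the softmax distribution, a common classifier
        uncertainty score (higher = more uncertain)."""
        z = logits - logits.max(axis=-1, keepdims=True)   # stable softmax
        p = np.exp(z)
        p /= p.sum(axis=-1, keepdims=True)
        return -(p * np.log(p + 1e-12)).sum(axis=-1)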

Point processes are widely used statistical models for uncovering the temporal patterns in dependent event data. In many applications, the event times cannot be observed exactly, calling for the incorporation of time uncertainty into the modeling of point process data. In this work, we introduce a framework to model time-uncertain point processes, possibly on a network. We start by deriving the formulation in the continuous-time setting under a few assumptions motivated by application scenarios. After imposing a time grid, we obtain a discrete-time model that facilitates inference and can be fit by first-order optimization methods such as gradient descent or variational inequality (VI) methods using batch-based stochastic gradient descent (SGD). A parameter recovery guarantee is proved for VI inference at an $O(1/k)$ convergence rate using $k$ SGD steps. Our framework handles non-stationary processes by modeling the influence kernel as a matrix (or a tensor on a network), and it covers stationary processes, such as the classical Hawkes process, as a special case. We experimentally show that the proposed approach outperforms previous generalized linear model (GLM) baselines on simulated and real data and reveals meaningful causal relations on a Sepsis-associated Derangements dataset.
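
To make the discrete-time formulation concrete, here is a stationary scalar special case with made-up parameters (the time-uncertainty mechanism and VI inference are not reproduced): counts on a grid with intensity $\mu + \sum_j k_j x_{t-j}$, fit by projected gradient descent on the Poisson negative log-likelihood:

    import numpy as np

    rng = np.random.default_rng(2)
    T, lag = 500, 10
    mu_true = 0.2
    k_true = 0.5 * np.exp(-np.arange(1, lag + 1) / 3.0)

    # Simulate discrete-time self-exciting counts.
    x = np.zeros(T)
    for t in range(T):
        hist = x[max(t - lag, 0):t][::-1]              # x[t-1], ..., x[t-lag]
        x[t] = rng.poisson(mu_true + k_true[:len(hist)] @ hist)

    def grad_nll(mu, k):
        """Gradient of the Poisson negative log-likelihood."""
        g_mu, g_k = 0.0, np.zeros(lag)
        for t in range(lag, T):
            hist = x[t - lag:t][::-1]
            lam = mu + k @ hist
            w = 1.0 - x[t] / lam
            g_mu += w
            g_k += w * hist
        return g_mu, g_k

    mu, k = 0.1, np.zeros(lag)
    for _ in range(200):                               # projected gradient descent
        g_mu, g_k = grad_nll(mu, k)
        mu = max(mu - 1e-4 * g_mu, 1e-6)
        k = np.clip(k - 1e-4 * g_k, 0.0, None)
    print("estimated mu:", round(mu, 3))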

We develop a Markov model of curling matches, parametrised by the probability of winning an end and the probability distribution of scoring ends. In practical applications, these end-winning probabilities can be estimated econometrically, and are shown to depend on which team holds the hammer, as well as the offensive and defensive strengths of the respective teams. Using a maximum entropy argument, based on the idea of characteristic scoring patterns in curling, we predict that the points distribution of scoring ends should follow a constrained geometric distribution. We provide analytical results detailing when it is optimal to blank the end in preference to scoring one point and losing possession of the hammer. Statistical and simulation analysis of international curling matches is also performed.
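
The model's ingredients can be sketched in a small simulation (parameter values are invented, not estimates from the paper): hammer-dependent end outcomes, blank ends that retain the hammer, and a truncated geometric distribution for points in a scoring end:

    import numpy as np

    rng = np.random.default_rng(3)

    def play_end(hammer, p_hammer_scores=0.6, p_blank=0.1, q=0.5, max_pts=8):
        """One end: returns (scoring team or None, points, next hammer)."""
        if rng.random() < p_blank:
            return None, 0, hammer                 # blank end keeps the hammer
        scorer = hammer if rng.random() < p_hammer_scores else 1 - hammer
        pts = min(rng.geometric(q), max_pts)       # constrained geometric score
        return scorer, pts, 1 - scorer             # scoring team concedes hammer

    score, hammer = [0, 0], 0
    for _ in range(10):                            # a ten-end match
        scorer, pts, hammer = play_end(hammer)
        if scorer is not None:
            score[scorer] += pts
    print("final score:", score)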
