
Gaussian processes (GPs) are widely used tools in spatial statistics and machine learning, and the formulae for the mean function and covariance kernel of a GP $v$ that is the image of another GP $u$ under a linear transformation $T$ acting on the sample paths of $u$ are well known, almost to the point of being folklore. However, these formulae are often used without rigorous attention to technical details, particularly when $T$ is an unbounded operator such as a differential operator, which is common in several modern applications. This note provides a self-contained proof of the claimed formulae for the case of a closed, densely defined operator $T$ acting on the sample paths of a square-integrable stochastic process. Our proof technique relies upon Hille's theorem for the Bochner integral of a Banach-valued random variable.
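To make the folklore formulae concrete, the following minimal sketch (our own illustration, not taken from the note) applies $T = \mathrm{d}/\mathrm{d}x$ to a centered GP with squared-exponential kernel, for which $m_v = T m_u = 0$ and $k_v(s,t) = \partial_s \partial_t k_u(s,t)$ has a closed form; a finite-difference check confirms the transformed kernel.

```python
# Minimal sketch (our illustration): the transformed kernel k_v = d/ds d/dt k_u
# for T = d/dx applied to a centered GP with squared-exponential kernel k_u.
import numpy as np

ell = 0.5  # length scale (illustrative choice)

def k_u(s, t):
    return np.exp(-(s - t) ** 2 / (2 * ell**2))

def k_v(s, t):
    # closed form of d/ds d/dt k_u for the squared-exponential kernel
    r = s - t
    return (1.0 - r**2 / ell**2) / ell**2 * np.exp(-(r**2) / (2 * ell**2))

# Finite-difference check of the mixed partial at (s, t) = (0.3, 0.1)
h = 1e-5
fd = (k_u(0.3 + h, 0.1 + h) - k_u(0.3 + h, 0.1 - h)
      - k_u(0.3 - h, 0.1 + h) + k_u(0.3 - h, 0.1 - h)) / (4 * h**2)
print(fd, k_v(0.3, 0.1))  # approximately equal

# A draw from v = Tu using the transformed kernel
x = np.linspace(0.0, 1.0, 200)
K = k_v(x[:, None], x[None, :]) + 1e-10 * np.eye(x.size)  # jitter for stability
v = np.random.default_rng(0).multivariate_normal(np.zeros(x.size), K)
```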

Related Content

Processing is the name of an open-source programming language and of the integrated development environment (IDE) that accompanies it. Processing is used in the electronic arts and visual design communities to teach the fundamentals of programming, and it has been employed in a large number of new-media and interactive art works.

There are many physical processes whose mathematical formulations have inherent discontinuities. This paper is motivated by the specific case of collisions between two rigid or deformable bodies and the intrinsic nature of that discontinuity. The impulsive response to a collision is discontinuous against the absence of any response when no collision occurs, which causes difficulties for the differentiability-requiring numerical approaches typical in machine learning, inverse problems, and control. We demonstrate, theoretically and numerically, that the derivative of the collision time with respect to the parameters becomes infinite as one approaches the barrier separating colliding from not colliding, and we use lifting to complexify the solution space so that solutions on the other side of the barrier are directly attainable as precise values. Subsequently, we mollify the barrier posed by the unbounded derivatives, so that one can tunnel back and forth in a smooth and reliable fashion, facilitating the use of standard numerical approaches. Moreover, we illustrate that standard approaches fail in numerous ways, mostly due to a lack of understanding of the mathematical nature of the problem (e.g., typical backpropagation applies many rules of differentiation but ignores L'Hôpital's rule).
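As a concrete toy instance of this blow-up (ours, not the paper's benchmark), consider a point mass launched upward with speed $v$ toward a ceiling at height $H$: a collision occurs iff $v \ge \sqrt{2gH}$, the derivative of the collision time diverges like $\varepsilon^{-1/2}$ as $v$ approaches the grazing speed from above, and for $v$ below it the collision time becomes complex, which is exactly the regime that lifting makes accessible.

```python
# Toy collide/no-collide barrier (our example): ball vs. ceiling at height H.
import numpy as np

g, H = 9.81, 1.0
v_star = np.sqrt(2 * g * H)  # grazing speed: the barrier

def t_c(v):
    # first root of H = v t - g t^2 / 2 (collision time)
    return (v - np.sqrt(v**2 - 2 * g * H)) / g

def dtc_dv(v):
    return (1.0 - v / np.sqrt(v**2 - 2 * g * H)) / g

for eps in [1e-1, 1e-2, 1e-4, 1e-8]:
    v = v_star + eps
    print(f"eps={eps:.0e}  t_c={t_c(v):.6f}  dt_c/dv={dtc_dv(v):.3e}")
# t_c stays finite but dt_c/dv diverges like eps**-0.5; for v < v_star the
# square root (and hence t_c) is complex, the "other side of the barrier".
```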

This work focuses on developing methods for approximating the solution operators of a class of parametric partial differential equations via neural operators. Neural operators pose several challenges, including the generation of appropriate training data, cost-accuracy trade-offs, and nontrivial hyperparameter tuning. The unpredictability of the accuracy of neural operators impacts their use in downstream problems of inference, optimization, and control. A framework is proposed, based on a linear variational problem, that gives a correction to the prediction furnished by a neural operator; the operator associated with this corrector problem is referred to as the corrector operator. Numerical results involving a nonlinear diffusion model in two dimensions with PCANet-type neural operators show an almost two-order-of-magnitude increase in the accuracy of approximations when neural operators are corrected using the proposed scheme. Further, topology optimization involving a nonlinear diffusion model is considered to highlight the limitations of neural operators and the efficacy of the correction scheme. Optimizers with neural operator surrogates are seen to make significant errors (as high as 80 percent). However, the errors are much lower (below 7 percent) when the neural operators are corrected following the proposed method.
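The following sketch conveys the flavor of such a residual-based correction on a stand-in problem of our own choosing (a 1D nonlinear reaction-diffusion equation discretized by finite differences, with a perturbed exact solution playing the role of the neural operator prediction): the corrector solves one linear problem obtained by linearizing the residual at the surrogate prediction. It is not the paper's PCANet setup.

```python
# Minimal corrector sketch (our assumptions): -u'' + u^3 = f on (0,1),
# u(0) = u(1) = 0, central finite differences, one linearized solve.
import numpy as np

n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# FD Laplacian with Dirichlet conditions: A u ~ -u''
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u_true = np.sin(np.pi * x)
f = (np.pi**2) * u_true + u_true**3            # manufactured right-hand side

u_no = u_true + 0.05 * np.sin(3 * np.pi * x)   # stand-in for a neural-operator prediction

def residual(u):
    return A @ u + u**3 - f

# Corrector: one linearized (Newton-type) solve J(u_no) * delta = -R(u_no)
J = A + np.diag(3 * u_no**2)
delta = np.linalg.solve(J, -residual(u_no))
u_corr = u_no + delta

err = lambda u: np.linalg.norm(u - u_true) / np.linalg.norm(u_true)
print(f"surrogate error {err(u_no):.2e} -> corrected error {err(u_corr):.2e}")
```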

Multiscale partial differential equations (PDEs) arise in various applications, and several schemes have been developed to solve them efficiently. Homogenization theory is a powerful methodology that eliminates the small-scale dependence, resulting in simplified equations that are computationally tractable. In the field of continuum mechanics, homogenization is crucial for deriving constitutive laws that incorporate microscale physics in order to formulate balance laws for the macroscopic quantities of interest. However, obtaining homogenized constitutive laws is often challenging, as they do not in general have an analytic form and can exhibit phenomena not present on the microscale. In response, data-driven learning of the constitutive law has been proposed as appropriate for this task. Yet a major challenge in data-driven learning approaches for this problem has remained unexplored: the impact of discontinuities and corner interfaces in the underlying material. These discontinuities in the coefficients affect the smoothness of the solutions of the underlying equations. Given the prevalence of discontinuous materials in continuum mechanics applications, it is important to address the challenge of learning in this context; in particular, to develop underpinning theory that establishes the reliability of data-driven methods in this scientific domain. The paper addresses this unexplored challenge by investigating the learnability of homogenized constitutive laws for elliptic operators in the presence of such complexities. Approximation theory is presented, and numerical experiments are performed which validate the theory for the solution operator defined by the cell problem arising in homogenization for elliptic PDEs.
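In one dimension the cell problem can be solved in closed form, which makes for a compact illustration of homogenization with a discontinuous coefficient (the two-phase coefficient below is our assumed toy example, not the paper's): the homogenized coefficient is the harmonic mean of $a(y)$ over the unit cell.

```python
# 1D homogenization with a discontinuous periodic coefficient (our toy example):
# the cell problem (a (1 + chi'))' = 0 forces a(y)(1 + chi'(y)) = const, and the
# homogenized coefficient is the harmonic mean of a(y).
import numpy as np

a1, a2, theta = 1.0, 10.0, 0.5                 # phase values and volume fraction

# Closed form in 1D: a_hom = harmonic mean of a(y)
a_hom = 1.0 / (theta / a1 + (1 - theta) / a2)

# Same answer from a quadrature version of the cell-problem solution
m = 10000
y = (np.arange(m) + 0.5) / m
a = np.where(y < theta, a1, a2)                # discontinuous two-phase coefficient
flux = 1.0 / np.mean(1.0 / a)
print(a_hom, flux)                             # both approximately 1.818
```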

The Plackett--Luce model is a popular approach for ranking data analysis, where a utility vector is employed to determine the probability of each outcome based on Luce's choice axiom. In this paper, we investigate the asymptotic theory of utility vector estimation by maximizing different types of likelihood, such as the full-, marginal-, and quasi-likelihood. We provide a rank-matching interpretation for the estimating equations of these estimators and analyze their asymptotic behavior as the number of items being compared tends to infinity. In particular, we establish the uniform consistency of these estimators under conditions characterized by the topology of the underlying comparison graph sequence and demonstrate that the proposed conditions are sharp for common sampling scenarios such as the nonuniform random hypergraph model and the hypergraph stochastic block model; we also obtain the asymptotic normality of these estimators and discuss the trade-off between statistical efficiency and computational complexity for practical uncertainty quantification. Both results allow for nonuniform and inhomogeneous comparison graphs with varying edge sizes and different asymptotic orders of edge probabilities. We verify our theoretical findings by conducting detailed numerical experiments.
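For a concrete sense of likelihood-based utility estimation in this model, the sketch below fits the pairwise (edge size two) special case of the Plackett--Luce model, i.e. the Bradley--Terry model, by maximum likelihood using Hunter's MM algorithm on simulated data; it illustrates the setting, not the estimators analyzed in the paper.

```python
# MM fitting of Bradley--Terry utilities (the edge-size-2 Plackett--Luce case)
# on simulated pairwise comparisons; our illustrative sketch.
import numpy as np

rng = np.random.default_rng(1)
n = 8
w_true = np.exp(rng.normal(size=n))

# Simulate comparisons: P(i beats j) = w_i / (w_i + w_j)
wins = np.zeros((n, n))
for _ in range(5000):
    i, j = rng.choice(n, size=2, replace=False)
    if rng.random() < w_true[i] / (w_true[i] + w_true[j]):
        wins[i, j] += 1
    else:
        wins[j, i] += 1

comps = wins + wins.T
w = np.ones(n)
for _ in range(200):                           # Hunter's MM iterations
    denom = (comps / (w[:, None] + w[None, :])).sum(axis=1)
    w = wins.sum(axis=1) / denom
    w /= w.sum()                               # fix the scale invariance

print(np.corrcoef(np.log(w), np.log(w_true))[0, 1])  # close to 1
```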

Let $A$ be an $n\times n$ real matrix. The piecewise linear equation system $z-A\vert z\vert =b$ is called an absolute value equation (AVE). It is well known to be equivalent to the linear complementarity problem (LCP). Unique solvability of the AVE is characterized in terms of a generalized Perron root called the sign-real spectral radius of $A$. For mere, possibly non-unique, solvability no such characterization exists. We narrow this gap in the theory: we define the concept of the aligned spectrum of $A$ and prove, under some mild genericity assumptions on $A$, that the mapping degree of the piecewise linear function $F_A:\mathbb{R}^n\to\mathbb{R}^n\,, z\mapsto z-A\lvert z\rvert$ is congruent to $(k+1)\bmod 2$, where $k$ is the number of aligned values of $A$ that are larger than $1$. We also derive an exact, but more technical, formula for the degree of $F_A$ in terms of the aligned spectrum. Finally, we derive the analogous quantities and results for the LCP.
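For context, a standard way to solve an AVE numerically (not this paper's contribution) is the generalized Newton iteration of Mangasarian, which exploits $\lvert z\rvert = D(z)z$ with $D(z)=\mathrm{diag}(\mathrm{sign}(z))$. When the singular values of $A$ are below $1$, the AVE is uniquely solvable, and in practice the iteration then converges in a few steps (the known sufficient conditions for convergence are stronger).

```python
# Generalized Newton iteration for the AVE z - A|z| = b (Mangasarian-style);
# illustrative sketch with ||A||_2 < 1 so the AVE is uniquely solvable.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
A *= 0.5 / np.linalg.norm(A, 2)        # rescale so the singular values are < 1
b = rng.normal(size=n)

z = b.copy()
for _ in range(50):
    D = np.diag(np.sign(z))            # generalized Jacobian of |z|
    z_new = np.linalg.solve(np.eye(n) - A @ D, b)
    if np.allclose(z_new, z):
        break
    z = z_new

print(np.linalg.norm(z - A @ np.abs(z) - b))   # residual approximately 0
```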

Ordinary and partial differential equations (DEs) are used extensively in scientific and mathematical domains to model physical systems. Current literature has focused primarily on deep neural network (DNN) based methods for solving a specific DE or a family of DEs. Research communities with a history of using DE models may view DNN-based differential equation solvers (DNN-DEs) as a faster and more transferable alternative to current numerical methods. However, there is a lack of systematic surveys detailing the use of DNN-DE methods across physical application domains, and of a generalized taxonomy to guide future research. This paper surveys and classifies previous works and provides an educational tutorial for senior practitioners, professionals, and graduate students in engineering and computer science. First, we propose a taxonomy to navigate the domains of DE systems studied under the umbrella of DNN-DE. Second, we examine the theory and performance of the Physics-Informed Neural Network (PINN) to demonstrate how this influential DNN-DE architecture mathematically solves a system of equations. Third, to reinforce the key ideas of solving and discovering DEs with DNNs, we provide a tutorial using DeepXDE, a Python package for developing PINNs, to solve and discover a classic DE, the linear transport equation.
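As a taste of such a tutorial, here is a minimal DeepXDE script for the forward problem of the linear transport equation $u_t + c\,u_x = 0$; the architecture, sampling counts, and training budget are arbitrary illustrative choices, and the API names follow recent DeepXDE releases (older versions use, e.g., `model.train(epochs=...)`).

```python
import deepxde as dde
import numpy as np

c = 1.0  # constant advection speed (assumed)

def pde(x, u):
    # x[:, 0:1] is space, x[:, 1:2] is time; residual of u_t + c u_x = 0
    du_x = dde.grad.jacobian(u, x, i=0, j=0)
    du_t = dde.grad.jacobian(u, x, i=0, j=1)
    return du_t + c * du_x

geom = dde.geometry.Interval(0, 1)
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

exact = lambda x: np.sin(2 * np.pi * (x[:, 0:1] - c * x[:, 1:2]))  # reference solution
ic = dde.icbc.IC(geomtime, exact, lambda _, on_initial: on_initial)
bc = dde.icbc.DirichletBC(  # inflow boundary x = 0
    geomtime, exact, lambda x, on_boundary: on_boundary and np.isclose(x[0], 0.0)
)

data = dde.data.TimePDE(geomtime, pde, [ic, bc],
                        num_domain=2000, num_boundary=100, num_initial=200)
net = dde.nn.FNN([2] + [32] * 3 + [1], "tanh", "Glorot normal")
model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
model.train(iterations=10000)
```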

The idea of an optimal test statistic in the context of simultaneous hypothesis testing was given by Sun and Cai (2009); it is the conditional probability of a hypothesis being null given the data. Since no simplified expression of the statistic is available, it is impossible to implement the optimal test in more general dependence setups. This note simplifies the expression of the optimal test statistic of Sun and Cai (2009) under the multivariate normal model. We consider the model of Xie et al. (2011), where the test statistics are generated from a multivariate normal distribution conditionally on the unobserved states of the hypotheses, and the states are i.i.d. Bernoulli random variables. While the equivalence of the LFDR and the optimal test statistic was established only under the very stringent conditions of Xie et al. (2016), the expression obtained in this paper is valid for any covariance matrix and for any fixed $0<p<1$. The optimal procedure is implemented with the help of this expression, and its performance is compared with the Benjamini--Hochberg method and the marginal procedure.
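In the special case of independent test statistics (identity covariance), the optimal statistic reduces to the familiar two-group local FDR, and the step-up rule that rejects the smallest values while their running mean stays below the target level takes only a few lines; the sketch below uses this independent special case with parameters of our choosing and does not reproduce the general-covariance expression derived in the note.

```python
# Two-group local FDR and its step-up rejection rule in the independent
# special case (identity covariance, i.i.d. Bernoulli states); our sketch.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
m, p, mu, alpha = 2000, 0.8, 2.5, 0.1          # p = P(null), mu = alternative mean
null = rng.random(m) < p
z = rng.normal(loc=np.where(null, 0.0, mu))

# lfdr_i = P(H_i null | z_i) under the two-group normal mixture
lfdr = p * norm.pdf(z) / (p * norm.pdf(z) + (1 - p) * norm.pdf(z - mu))

order = np.argsort(lfdr)
running = np.cumsum(lfdr[order]) / np.arange(1, m + 1)
k = np.max(np.nonzero(running <= alpha)[0], initial=-1) + 1   # number rejected
reject = np.zeros(m, bool)
reject[order[:k]] = True
print("rejections:", k, " realized FDP:", null[reject].mean() if k else 0.0)
```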

Most algorithms that construct bases of finite-dimensional vector spaces return basis vectors which, apart from orthogonality, exhibit no special properties. While every basis suffices to define the vector space, not all bases are equally suited to unraveling the properties of the problem to be solved. In this paper a normal form for bases of finite-dimensional vector spaces is introduced which may prove very useful for understanding the structure of the problem in which the basis appears, as a step towards its solution. This normal form may be viewed as a new normal form for matrices of full column rank.
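The abstract does not spell out the proposed normal form; purely for orientation, the sketch below computes a classical canonical form for full-column-rank matrices, the reduced column echelon form (the RREF of the transpose, transposed back), into which any basis of a given column space can be brought.

```python
# Reduced column echelon form as a classical canonical basis of a column
# space; for orientation only, not the normal form proposed in the paper.
import sympy as sp

B = sp.Matrix([[1, 2], [3, 4], [5, 6]])   # basis vectors as columns, full column rank
R, _ = B.T.rref()                          # row-reduce the transpose
N = R.T                                    # columns of N span the same space as B
sp.pprint(N)
```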

In the case where the dimension of the data grows at the same rate as the sample size, we prove a central limit theorem for the difference between a linear spectral statistic of the sample covariance matrix and a linear spectral statistic of the matrix obtained from the sample covariance matrix by deleting a column and the corresponding row. Unlike previous works, we require neither that the population covariance matrix be diagonal nor that moments of all orders exist. Our proof methodology incorporates subtle enhancements of existing strategies, which meet the challenges introduced by determining the mean and covariance structure of the difference of two such eigenvalue statistics. Moreover, we establish the asymptotic independence of the difference-type spectral statistic and the usual linear spectral statistic of sample covariance matrices.
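A quick simulation conveys the object of study: with $f = \log$ (our illustrative choice, well defined since $p < n$), the difference of linear spectral statistics is $\operatorname{tr} f(S_n) - \operatorname{tr} f(S_{n,-1})$, where $S_{n,-1}$ deletes one row and the corresponding column; its empirical distribution over repetitions is approximately Gaussian, consistent with a CLT of this type.

```python
# Monte Carlo sketch (our illustration): difference of linear spectral
# statistics, f(x) = log x, under deletion of one row/column.
import numpy as np

rng = np.random.default_rng(3)
p, n, reps = 100, 200, 500
stats = np.empty(reps)
for r in range(reps):
    X = rng.standard_normal((p, n))
    S = X @ X.T / n                     # sample covariance (identity population)
    stats[r] = np.linalg.slogdet(S)[1] - np.linalg.slogdet(S[1:, 1:])[1]

print(stats.mean(), stats.std())        # roughly Gaussian fluctuations
```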

Reliable probabilistic primality tests are fundamental in public-key cryptography. In adversarial scenarios, a composite with a high probability of passing a specific primality test could be chosen deliberately; in such cases, we need worst-case error estimates for the test. However, in many scenarios the numbers are chosen at random and thus have a significantly smaller error probability, so average-case error estimates are of interest. In this paper, we establish such bounds for the strong Lucas primality test, for which only worst-case, but no average-case, error bounds are currently available. This allows us to use the test with more confidence. We examine an algorithm that draws odd $k$-bit integers uniformly and independently, runs $t$ independent iterations of the strong Lucas test with randomly chosen parameters, and outputs the first number that passes all $t$ consecutive rounds. We obtain numerical upper bounds on the probability of returning a composite. Furthermore, we consider a modified version of this algorithm that excludes integers divisible by small primes, resulting in improved bounds. Additionally, we classify the numbers that contribute most to our estimate.
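A sketch of the sampling procedure analyzed above is given below; it relies on SymPy's `is_strong_lucas_prp`, which fixes the Lucas parameters by Selfridge's Method A (making repeated rounds redundant), whereas the analyzed algorithm re-draws the parameters at random in each of the $t$ rounds. The small-prime sieve corresponds to the modified version mentioned in the abstract.

```python
# Sampling sketch: draw odd k-bit integers uniformly until one passes t
# strong Lucas rounds; Selfridge parameters stand in for random ones.
import random
from sympy.ntheory.primetest import is_strong_lucas_prp

def sample_probable_prime(k: int, t: int) -> int:
    """Draw odd k-bit integers uniformly until one passes t Lucas rounds."""
    while True:
        n = random.getrandbits(k - 1) | (1 << (k - 1)) | 1  # odd, exactly k bits
        if any(n % q == 0 for q in (3, 5, 7, 11)):  # small-prime exclusion
            continue
        # With Selfridge parameters the test is deterministic per n, so the
        # t rounds below stand in for t independent random-parameter rounds.
        if all(is_strong_lucas_prp(n) for _ in range(t)):
            return n

print(sample_probable_prime(64, 3))
```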
