
We present here a large collection of harmonic and quadratic harmonic sums that can be useful in applied questions, e.g., probabilistic ones. We find closed-form formulae that we were unable to locate in the literature.
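The paper's specific formulae are not reproduced above, so the following Python sketch only illustrates the objects involved: it computes harmonic and quadratic harmonic sums exactly with rational arithmetic and verifies two classical closed-form identities of the same flavor.

```python
from fractions import Fraction

def H(n):
    """Harmonic number H_n = sum_{k=1}^n 1/k, computed exactly."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

def H2(n):
    """Quadratic harmonic sum H_n^(2) = sum_{k=1}^n 1/k^2."""
    return sum(Fraction(1, k * k) for k in range(1, n + 1))

n = 20

# Classical closed form: sum_{k=1}^n H_k = (n+1) H_n - n.
assert sum(H(k) for k in range(1, n + 1)) == (n + 1) * H(n) - n

# A quadratic example: sum_{k=1}^n H_k / k = (H_n^2 + H_n^(2)) / 2.
assert sum(H(k) / k for k in range(1, n + 1)) == (H(n)**2 + H2(n)) / 2
```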

Related Content

Motivated by the important statistical role of sparsity, the paper uncovers four reparametrizations for covariance matrices in which sparsity is associated with conditional independence graphs in a notional Gaussian model. The intimate relationship between the Iwasawa decomposition of the general linear group and the open cone of positive definite matrices allows a unifying perspective. Specifically, the positive definite cone can be reconstructed without loss or redundancy from the exponential map applied to four Lie subalgebras determined by the Iwasawa decomposition of the general linear group. This accords geometric interpretations to the reparametrizations and the corresponding notion of sparsity. Conditions that ensure legitimacy of the reparametrizations for statistical models are identified. While the focus of this work is on understanding population-level structure, there are strong methodological implications. In particular, since the population-level sparsity manifests in a vector space, imposition of sparsity on relevant sample quantities produces a covariance estimate that respects the positive definite cone constraint.
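The paper's four Iwasawa-based reparametrizations are not spelled out above, so the following Python sketch illustrates only the general principle, using a hypothetical Cholesky-style parametrization: sparsity is imposed in an unconstrained factor living in a vector space, and the map back to the precision matrix is positive definite by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# A lower-triangular factor L lives (up to the log of its diagonal)
# in a vector space: zeroing entries never leaves the domain.
p = 5
L = np.tril(rng.normal(size=(p, p)))
np.fill_diagonal(L, np.exp(rng.normal(size=p)))  # positive diagonal
L[3, 1] = L[4, 2] = 0.0                          # impose sparsity freely

# Map back: Omega = L L^T is positive definite by construction.
Omega = L @ L.T                  # precision (inverse covariance) matrix
Sigma = np.linalg.inv(Omega)     # the implied covariance matrix
assert np.all(np.linalg.eigvalsh(Omega) > 0)

# In a Gaussian model, Omega[i, j] == 0 encodes conditional
# independence of variables i and j given the remaining ones.
```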

In this paper, we study the Boltzmann equation with uncertainties and prove that the spectral convergence of the semi-discretized numerical system holds in a combined velocity and random space, where the Fourier-spectral method is applied for approximation in the velocity space whereas the generalized polynomial chaos (gPC)-based stochastic Galerkin (SG) method is employed to discretize the random variable. Our proof is based on a delicate energy estimate for showing the well-posedness of the numerical solution as well as a rigorous control of its negative part in our well-designed functional space that involves high-order derivatives of both the velocity and random variables. This paper rigorously justifies the statement proposed in [Remark 4.4, J. Hu and S. Jin, J. Comput. Phys., 315 (2016), pp. 150-168].
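A full Boltzmann solver is beyond a short snippet, so here is a minimal, hedged sketch of the gPC-based stochastic Galerkin idea on a toy ODE with an uncertain decay rate, du/dt = -k(z)u with z ~ U(-1,1); the Legendre basis and the Galerkin projection play the same roles as in the random-space discretization described above.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

K = 8                                  # gPC order
x, w = leggauss(32)                    # Gauss-Legendre nodes/weights on [-1, 1]
P = legvander(x, K)                    # P[q, i] = P_i(x_q)
k = 1.0 + 0.5 * x                      # uncertain rate k(z) = 1 + z/2

# Galerkin matrix A[i, j] = E[k P_i P_j] / E[P_i^2]  (density of z is 1/2).
E_kPP = 0.5 * (P * w[:, None]).T @ (k[:, None] * P)
norms = 1.0 / (2 * np.arange(K + 1) + 1)             # E[P_i^2] = 1/(2i+1)
A = E_kPP / norms[:, None]

# Semi-discrete SG system: du/dt = -A u, with u(0, z) = 1.
u = np.zeros(K + 1); u[0] = 1.0
dt, T = 1e-3, 1.0
for _ in range(int(T / dt)):           # forward Euler, for brevity
    u = u - dt * (A @ u)

# Compare the gPC mean E[u(T, z)] = u_0(T) with the exact value.
exact_mean = 0.5 * np.sum(w * np.exp(-k * T))
print(u[0], exact_mean)                # agree up to O(dt) + spectral error
```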

We prove explicit uniform two-sided bounds for the phase functions of Bessel functions and of their derivatives. As a consequence, we obtain new enclosures for the zeros of Bessel functions and their derivatives in terms of inverse values of some elementary functions. These bounds are valid, with a few exceptions, for all zeros and all Bessel functions with non-negative indices. We provide numerical evidence showing that our bounds either improve or closely match the best previously known ones.
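The explicit two-sided bounds of the paper are not reproduced above; as a purely illustrative stand-in, the following sketch compares SciPy's computed zeros of J_0 with McMahon's classical asymptotic expansion, the kind of elementary enclosure that such results refine.

```python
import numpy as np
from scipy.special import jn_zeros

# McMahon's classical expansion for the k-th positive zero of J_0:
# j_{0,k} ~ b + 1/(8b) - 31/(384 b^3),  with b = (k - 1/4) * pi.
k = np.arange(1, 11)
b = (k - 0.25) * np.pi
mcmahon = b + 1.0 / (8 * b) - 31.0 / (384 * b**3)

exact = jn_zeros(0, 10)
print(np.max(np.abs(exact - mcmahon)))   # ~2e-3 at k = 1, shrinking fast in k
```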

We introduce a novel continual learning method based on multifidelity deep neural networks. This method learns the correlation between the output of previously trained models and the desired output of the model on the current training dataset, limiting catastrophic forgetting. On its own, the multifidelity continual learning method shows robust results that limit forgetting across several datasets. Additionally, we show that the multifidelity method can be combined with existing continual learning methods, including replay and memory aware synapses, to further limit catastrophic forgetting. The proposed continual learning method is especially suited for physical problems where the data satisfy the same physical laws on each domain, or for physics-informed neural networks, because in these cases we expect a strong correlation between the output of the previous model and that of the model on the current training domain.
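As a hedged illustration of the core mechanism (not the authors' exact architecture), the following PyTorch sketch freezes a model f_prev trained on a previous task and fits a correction network g that learns the correlation between f_prev's output and the new task's target, so the new predictor is y(x) = g(x, f_prev(x)).

```python
import torch
import torch.nn as nn

def mlp(din, dout, width=32):
    return nn.Sequential(nn.Linear(din, width), nn.Tanh(),
                         nn.Linear(width, dout))

# f_prev: model already trained on the previous task (frozen here;
# in practice it would be loaded from a checkpoint).
f_prev = mlp(1, 1)
for p in f_prev.parameters():
    p.requires_grad_(False)

# Correction network g maps (x, f_prev(x)) to the new task's output.
g = mlp(2, 1)
opt = torch.optim.Adam(g.parameters(), lr=1e-3)

x = torch.linspace(-1, 1, 128).unsqueeze(1)   # new task's inputs
y = torch.sin(3 * x)                          # hypothetical new target

for _ in range(2000):
    opt.zero_grad()
    y_hat = g(torch.cat([x, f_prev(x)], dim=1))
    loss = ((y_hat - y) ** 2).mean()
    loss.backward()
    opt.step()
```

Since f_prev is frozen, its behavior on earlier tasks is preserved by construction, which is one way the approach limits forgetting.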

In a recent study, Kumar and Lopez-Pamies (J. Mech. Phys. Solids 150: 104359, 2021) have provided a complete quantitative explanation of the famed poker-chip experiments of Gent and Lindley (Proc. R. Soc. Lond. Ser. A 249: 195--205, 1959) on natural rubber. In a nutshell, making use of the fracture theory of Kumar, Francfort, and Lopez-Pamies (J. Mech. Phys. Solids 112: 523--551, 2018), they have shown that the nucleation of cracks in poker-chip experiments in natural rubber is governed by the strength -- in particular, the hydrostatic strength -- of the rubber, while the propagation of the nucleated cracks is governed by the Griffith competition between the bulk elastic energy of the rubber and its intrinsic fracture energy. The main objective of this paper is to extend the theoretical study of the poker-chip experiment by Kumar and Lopez-Pamies to synthetic elastomers that, as opposed to natural rubber: ($i$) may feature a hydrostatic strength that is larger than their uniaxial and biaxial tensile strengths and ($ii$) do not exhibit strain-induced crystallization. A parametric study, together with direct comparisons with recent poker-chip experiments on a silicone elastomer, shows that these two different material characteristics have a profound impact on where and when cracks nucleate, as well as on where and when they propagate. In conjunction with the results put forth earlier for natural rubber, the results presented in this paper provide a complete description and explanation of the poker-chip experiments on elastomers at large. As a second objective, this paper also introduces a new fully explicit constitutive prescription for the driving force that describes the material strength in the fracture theory of Kumar, Francfort, and Lopez-Pamies.

We address the problem of constructing approximations based on orthogonal polynomials that preserve an arbitrary set of moments of a given function without losing the spectral convergence property. To this aim, we compute the constrained polynomial of best approximation for a generic basis of orthogonal polynomials. The construction is entirely general and allows us to derive structure-preserving numerical methods for partial differential equations that require the conservation of some moments of the solution, typically representing relevant physical quantities of the problem. These properties are essential to capture with high accuracy the long-time behavior of the solution. We illustrate the generality and performance of the present approach with several numerical applications to Fokker-Planck equations.
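A minimal sketch of this kind of construction, under the assumption of an orthonormal Legendre basis on [-1, 1]: project the function, then apply the minimum-norm coefficient correction enforcing the prescribed moments, which are linear constraints on the coefficients.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

x, w = leggauss(64)                    # quadrature on [-1, 1]
K, M = 10, 3                           # polynomial degree, moments preserved

# Orthonormal Legendre basis: phi_k = sqrt((2k+1)/2) * P_k.
Phi = legvander(x, K) * np.sqrt((2 * np.arange(K + 1) + 1) / 2)

f = np.exp(-3 * x**2)                  # function to approximate
c_unc = Phi.T @ (w * f)                # unconstrained L2 projection

# Constraint matrix: G[j, k] = integral of x^j * phi_k(x).
G = np.array([Phi.T @ (w * x**j) for j in range(M)])
m = np.array([np.sum(w * x**j * f) for j in range(M)])   # target moments

# Minimum-norm correction enforcing G c = m (Lagrange multipliers).
c = c_unc + G.T @ np.linalg.solve(G @ G.T, m - G @ c_unc)

p = Phi @ c                            # moment-preserving approximation
print(np.max(np.abs([np.sum(w * x**j * p) - m[j] for j in range(M)])))
```

By Parseval, the Euclidean distance between coefficient vectors equals the L2 distance between the polynomials, so the corrected coefficients give the constrained polynomial of best approximation; the printed residual confirms the first M moments are matched to quadrature accuracy.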

Complex interval arithmetic is a powerful tool for the analysis of computational errors. The naturally arising rectangular, polar, and circular (together called primitive) interval types are not closed under simple arithmetic operations, and their use yields overly relaxed bounds. The polygonal type, introduced later, allows arbitrarily precise representation of the above operations at a higher computational cost. We propose the polyarcular interval type as an effective extension of the previous types. The polyarcular interval can represent all primitive intervals and most of their arithmetic combinations precisely, and it has an approximation capability competing with that of the polygonal interval. In particular, in antenna tolerance analysis it can achieve perfect accuracy at a lower computational cost than the polygonal type, which we show in a relevant case study. In this paper, we present a rigorous analysis of the arithmetic properties of all five interval types, involving a new algebro-geometric method of boundary analysis.
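A toy demonstration of why the rectangular type is not closed under arithmetic: even multiplication by a single complex scalar rotates a box, and re-hulling the result inflates the enclosure (the wrapping effect). Products of two boxes additionally produce curved boundary pieces, which is what motivates richer representations such as the polygonal and polyarcular types.

```python
import numpy as np

# Rectangular complex interval: [re_lo, re_hi] + i*[im_lo, im_hi],
# stored via two opposite corners lo and hi.
def rect_mul_hull(lo, hi, w):
    """Rectangular hull of {w * z : z in the box with corners lo, hi}.
    Scalar multiplication is linear, so corners map to corners."""
    corners = np.array([lo, complex(hi.real, lo.imag),
                        complex(lo.real, hi.imag), hi]) * w
    return (complex(corners.real.min(), corners.imag.min()),
            complex(corners.real.max(), corners.imag.max()))

lo, hi = 1 + 1j, 2 + 2j          # a unit box away from the origin
w = np.exp(1j * np.pi / 4)       # rotation by 45 degrees
new_lo, new_hi = rect_mul_hull(lo, hi, w)

exact_area = 1.0                 # rotation preserves the box's area
hull_area = (new_hi.real - new_lo.real) * (new_hi.imag - new_lo.imag)
print(hull_area / exact_area)    # = 2: a 100% over-estimate from one product
```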

The Fourier-Galerkin spectral method is a popular approach for the numerical approximation of the deterministic Boltzmann equation, and its spectral accuracy has been rigorously proved. In this paper, we show that this spectral convergence also holds for the Boltzmann equation with uncertainties arising from both the collision kernel and the initial condition. Our proof is based on newly established spaces and norms that are carefully designed to account for the velocity variable and the random variables, together with their high regularity. This theoretical result provides a solid foundation for future work on the convergence of the fully discretized system, where both the velocity and random variables are discretized simultaneously.
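The collision operator itself is out of scope for a snippet, but the following hedged sketch demonstrates on a toy problem the spectral accuracy referred to above: the L2 error of an N-mode Fourier-Galerkin truncation of a smooth periodic function decays faster than any power of N (here exponentially, since the function is analytic).

```python
import numpy as np

f = lambda v: np.exp(np.cos(v))           # smooth, 2*pi-periodic "profile"

M = 2048                                  # fine reference grid
v = 2 * np.pi * np.arange(M) / M
F = np.fft.fft(f(v)) / M                  # Fourier coefficients

for N in (4, 8, 16, 32):
    Fc = F.copy()
    Fc[N:M - N + 1] = 0                   # keep modes |k| < N (Galerkin truncation)
    err = np.sqrt(np.mean(np.abs(np.fft.ifft(Fc * M) - f(v))**2))
    print(N, err)                         # error drops by orders of magnitude
```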

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
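The paper's prediction-based bounds are not reproduced above; for orientation, the classical input-output mutual information bound of Xu and Raginsky (2017), which this line of work refines, reads as follows (a standard statement, not the paper's result):

```latex
% For a loss that is \sigma-sub-Gaussian under the data distribution,
% a training set S of n i.i.d. samples, and learned weights W:
\[
  \bigl|\mathbb{E}\,[\,\mathrm{gen}(S, W)\,]\bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}\, I(S; W)}{n}},
\]
% where I(S; W) is the mutual information between the sample and the
% weights output by the training algorithm.
```

Replacing the weights W with the model's predictions keeps the information measure finite for deterministic training algorithms, which is exactly challenge (a) above.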

When and why can a neural network be successfully trained? This article provides an overview of optimization algorithms and theory for training neural networks. First, we discuss the issue of gradient explosion/vanishing and the more general issue of undesirable spectrum, and then discuss practical solutions including careful initialization and normalization methods. Second, we review generic optimization methods used in training neural networks, such as SGD, adaptive gradient methods and distributed methods, and theoretical results for these algorithms. Third, we review existing research on the global issues of neural network training, including results on bad local minima, mode connectivity, the lottery ticket hypothesis, and infinite-width analysis.
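As a hedged illustration of the initialization issue mentioned above, the following numpy sketch propagates activations through a deep ReLU stack: a naive small fixed scale makes the signal vanish, whereas He initialization (variance 2/fan-in) keeps its magnitude stable.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_std(scale_fn, depth=50, width=256):
    """Propagate a random input through a deep ReLU stack and
    report the activation standard deviation at the last layer."""
    h = rng.normal(size=(width,))
    for _ in range(depth):
        W = rng.normal(size=(width, width)) * scale_fn(width)
        h = np.maximum(W @ h, 0.0)       # ReLU
    return h.std()

naive = forward_std(lambda n: 0.01)                # tiny fixed scale
he = forward_std(lambda n: np.sqrt(2.0 / n))       # He initialization
print(f"naive: {naive:.3e}  he: {he:.3e}")         # vanishing vs. stable
```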
