Symmetry is a cornerstone of much of mathematics, and many probability distributions possess symmetries characterized by their invariance to a collection of group actions. Many mathematical and statistical methods thus rely on such symmetry holding, and ostensibly fail when it is broken. This work considers the conditions under which a sequence of probability measures asymptotically gains such symmetry, i.e. invariance to a collection of group actions. In view of the many symmetries of the Gaussian distribution, this work effectively proposes a non-parametric type of central limit theorem: a Lipschitz function of a high-dimensional random vector is asymptotically invariant to the actions of certain compact topological groups. Applications include a partial law of the iterated logarithm for uniformly random points in an $\ell_p^n$-ball and an asymptotic equivalence between classical parametric statistical tests and their randomization counterparts, even when invariance assumptions are violated.
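A minimal numerical sketch of the last application, under assumptions of our own choosing (mean-zero but deliberately non-sign-symmetric data): the classical one-sample t-test is compared with its sign-flip randomization counterpart, where the group acting is $\{-1,+1\}^n$ by coordinate sign changes. The result described above suggests the two p-values should agree for large $n$ even though exact sign-symmetry fails.

```python
# Hedged illustration: parametric t-test vs. its sign-flip randomization
# counterpart on data that are mean-zero but NOT sign-symmetric.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
x = rng.exponential(scale=1.0, size=n) - 1.0   # mean zero, skewed (asymmetric)

# Classical parametric test
t_stat, p_param = stats.ttest_1samp(x, popmean=0.0)

# Randomization test over the sign-change group {-1, +1}^n
n_perm = 10_000
t_obs = abs(x.mean() / (x.std(ddof=1) / np.sqrt(n)))
flips = rng.choice([-1.0, 1.0], size=(n_perm, n))
xs = flips * x
t_perm = np.abs(xs.mean(axis=1) / (xs.std(axis=1, ddof=1) / np.sqrt(n)))
p_rand = (1 + np.sum(t_perm >= t_obs)) / (1 + n_perm)

print(f"parametric p = {p_param:.4f}, randomization p = {p_rand:.4f}")
```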

Related Content

Distributed quantum computing, particularly distributed quantum machine learning, has gained substantial prominence for its capacity to harness the collective power of distributed quantum resources, transcending the limitations of individual quantum nodes. Meanwhile, privacy within distributed computing protocols remains a significant challenge, particularly in standard classical federated learning (FL) scenarios, where the data of participating clients is susceptible to leakage via gradient inversion attacks by the server. This paper presents innovative quantum protocols with quantum communication designed to address the FL problem, strengthen privacy measures, and optimize communication efficiency. In contrast to previous works that leverage expressive variational quantum circuits or differential privacy techniques, we conceal gradient information using quantum states and propose two distinct FL protocols, one based on private inner-product estimation and the other on incremental learning. These protocols offer substantial advancements in privacy preservation at low communication cost. They forge a path toward efficient quantum communication-assisted FL protocols and contribute to secure distributed quantum machine learning, addressing critical privacy concerns in the quantum computing era.
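For contrast with the classical setting the paper improves upon, here is a purely classical secure-aggregation sketch (a standard classical technique, explicitly not the paper's quantum protocol): pairwise-cancelling masks already hide individual gradients from the server, which recovers only their sum. The quantum protocols target the same concealment using quantum states and lower communication.

```python
# Classical contrast sketch (NOT the paper's quantum protocol): pairwise-
# cancelling masks let the server recover only the SUM of client gradients.
import numpy as np

rng = np.random.default_rng(1)
d, n_clients = 8, 4
grads = rng.normal(size=(n_clients, d))          # true per-client gradients

# Each pair (i, j), i < j, shares a random mask; i adds it, j subtracts it.
masked = grads.copy()
for i in range(n_clients):
    for j in range(i + 1, n_clients):
        m = rng.normal(size=d)
        masked[i] += m
        masked[j] -= m

server_sum = masked.sum(axis=0)                  # masks cancel in the sum
assert np.allclose(server_sum, grads.sum(axis=0))
print("server sees only the aggregate:", server_sum[:3])
```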

This work puts forth low-complexity Riemannian subspace descent algorithms for the minimization of functions over the symmetric positive definite (SPD) manifold. Unlike existing Riemannian gradient descent variants, the proposed approach utilizes carefully chosen subspaces that allow each update to be written as a product of the Cholesky factor of the iterate and a sparse matrix. The resulting updates avoid costly matrix operations such as matrix exponentiation and dense matrix multiplication, which are required by almost all other Riemannian optimization algorithms on the SPD manifold. We further identify a broad class of functions, arising in diverse applications such as kernel matrix learning, covariance estimation of Gaussian distributions, maximum likelihood parameter estimation of elliptically contoured distributions, and parameter estimation in Gaussian mixture models, over which the Riemannian gradients can be calculated efficiently. The proposed uni-directional and multi-directional Riemannian subspace descent variants incur per-iteration complexities of $\mathcal{O}(n)$ and $\mathcal{O}(n^2)$, respectively, compared with the $\mathcal{O}(n^3)$ or higher complexity incurred by all existing Riemannian gradient descent variants. The superior runtime and low per-iteration complexity of the proposed algorithms are also demonstrated via numerical tests on large-scale covariance estimation and matrix square root problems.
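A schematic sketch of the kind of update the abstract describes, with details assumed for illustration (the indices and step size below are not the authors' rule): multiplying the Cholesky factor by the sparse elementary matrix $I + t\, e_i e_j^T$ with $j < i$ keeps it lower-triangular with positive diagonal, so $X = LL^T$ stays SPD, and the update touches a single column, i.e. $\mathcal{O}(n)$ work with no matrix exponential or dense multiplication.

```python
# Schematic sketch (assumed update form, not the authors' exact algorithm).
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.normal(size=(n, n))
L = np.linalg.cholesky(A @ A.T + n * np.eye(n))  # Cholesky factor of an SPD iterate

def elementary_update(L, i, j, t):
    """L <- L @ (I + t e_i e_j^T), j < i: add t * (column i of L) to column j."""
    L = L.copy()
    L[:, j] += t * L[:, i]                       # O(n) work, triangularity preserved
    return L

L_new = elementary_update(L, i=4, j=1, t=0.3)
X_new = L_new @ L_new.T
assert np.all(np.linalg.eigvalsh(X_new) > 0)     # iterate is still SPD
print("min eigenvalue:", np.linalg.eigvalsh(X_new).min())
```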

We extend the use of piecewise orthogonal collocation to the computation of periodic solutions of renewal equations, which are particularly important in modeling population dynamics. We prove convergence through a rigorous error analysis. Finally, we present numerical experiments confirming the theoretical results, together with a couple of applications with a view toward bifurcation analysis.
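For concreteness, a prototypical problem of this class, in our own (hedged) notation: the unknown enters only through its history over a bounded delay interval, and one seeks a periodic solution together with its unknown period $T$.

```latex
% Prototype renewal equation with periodicity condition (illustrative form):
\begin{aligned}
  x(t) &= \int_{0}^{\bar\tau} K\bigl(s,\, x(t-s)\bigr)\,\mathrm{d}s,
      \qquad t \in \mathbb{R},\\
  x(t+T) &= x(t) \quad \text{for all } t, \qquad T > 0 \text{ unknown}.
\end{aligned}
```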

PDDSparse is a new hybrid parallelisation scheme for solving large-scale elliptic boundary value problems on supercomputers, which can be described as a Feynman-Kac formula for domain decomposition. At its core lies a sparse stochastic linear system for the solutions on the interfaces, whose entries are generated via Monte Carlo simulations. Assuming small statistical errors, we show that the random system matrix ${\tilde G}(\omega)$ is close to a nonsingular M-matrix $G$, i.e. ${\tilde G}(\omega)+E=G$ with $\|E\|/\|G\|$ small. Using nonstandard arguments, we bound $\|G^{-1}\|$ and the condition number of $G$, showing that both grow moderately with the degrees of freedom of the discretisation. Moreover, the truncated Neumann series of $G^{-1}$ -- which is straightforward to calculate -- is the basis for an excellent preconditioner for ${\tilde G}(\omega)$. These findings are supported by numerical evidence.
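A hedged sketch of the preconditioning idea on synthetic data (matrix sizes, the Jacobi split, and the perturbation level are our illustrative assumptions): with $G = D - N$, the Neumann series $G^{-1} = \sum_{k\ge 0}(D^{-1}N)^k D^{-1}$ converges for a diagonally dominant M-matrix, and truncating after a few terms yields a cheap approximate inverse to precondition $\tilde G(\omega)$.

```python
# Truncated Neumann-series preconditioner for a perturbed M-matrix (sketch).
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(3)
n, m_terms = 200, 4
N = np.abs(rng.normal(size=(n, n))) * 0.01
np.fill_diagonal(N, 0.0)
G = np.diag(N.sum(axis=1) + 1.0) - N           # diagonally dominant M-matrix
G_tilde = G + 1e-3 * rng.normal(size=(n, n))   # 'statistical error' perturbation

d_inv = 1.0 / np.diag(G)

def neumann_apply(r):
    """Apply sum_{k=0}^{m} (D^{-1} N)^k D^{-1} to the vector r."""
    z = d_inv * r
    acc = z.copy()
    for _ in range(m_terms):
        z = d_inv * (N @ z)
        acc += z
    return acc

M = LinearOperator((n, n), matvec=neumann_apply)
b = rng.normal(size=n)
x, info = gmres(G_tilde, b, M=M)
print("gmres converged:", info == 0)
```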

While many phenomena in physics and engineering are formally high-dimensional, their long-time dynamics often live on a lower-dimensional manifold. The present work introduces an autoencoder framework that combines the implicit regularization of internal linear layers with $L_2$ regularization (weight decay) to automatically estimate the underlying dimensionality of a data set, produce an orthogonal manifold coordinate system, and provide the mapping functions between the ambient space and manifold space, allowing for out-of-sample projections. We validate the framework's ability to estimate the manifold dimension for a series of datasets from dynamical systems of varying complexity, and compare it with other state-of-the-art estimators. We analyze the training dynamics of the network to glean insight into the mechanism of low-rank learning and find that the implicit regularizing layers collectively compound the low-rank representation and even self-correct during training. Analysis of the gradient descent dynamics for this architecture in the linear case reveals the role of the internal linear layers in producing faster decay of a "collective weight variable" incorporating all layers, and the role of weight decay in breaking degeneracies, thus driving convergence along directions in which no decay would occur in its absence. We show that this framework can be naturally extended to state-space modeling and forecasting by generating a data-driven dynamic model of a spatiotemporally chaotic partial differential equation using only the manifold coordinates. Finally, we demonstrate that the framework is robust to hyperparameter choices.
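A minimal sketch of the architectural idea under stated assumptions (toy data on a circle, a hypothetical layer layout, not the paper's exact network): a stack of purely linear layers at the code, trained with weight decay, biases the latent states toward low rank, and the latent singular-value spectrum then suggests the dimension estimate.

```python
# Hypothetical mini-version of the idea: linear inner layers + weight decay.
import torch
import torch.nn as nn

torch.manual_seed(0)
# toy data: a circle (needs 2 linear latent coordinates) embedded in 10-D
theta = torch.rand(2048, 1) * 2 * torch.pi
X = torch.cat([torch.cos(theta), torch.sin(theta)], dim=1) @ torch.randn(2, 10)

latent = 6
encoder = nn.Sequential(nn.Linear(10, 64), nn.GELU(), nn.Linear(64, latent))
inner = nn.Sequential(*[nn.Linear(latent, latent, bias=False) for _ in range(3)])
decoder = nn.Sequential(nn.Linear(latent, 64), nn.GELU(), nn.Linear(64, 10))
model = nn.Sequential(encoder, inner, decoder)

opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = ((model(X) - X) ** 2).mean()          # reconstruction loss
    loss.backward()
    opt.step()

with torch.no_grad():
    Z = inner(encoder(X))                        # latent states after linear stack
    s = torch.linalg.svdvals(Z - Z.mean(0))
print("normalized latent singular values:", (s / s[0]).round(decimals=3))
# a sharp drop after the 2nd value would suggest an estimated dimension of 2
```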

The finite volume method (FVM) is a widely used mesh-based technique, renowned for its computational efficiency and accuracy, but it bears significant drawbacks, particularly in mesh generation and in handling complex boundary interfaces or conditions. The smoothed particle hydrodynamics (SPH) method, a popular meshless alternative, inherently circumvents mesh generation and yields smoother numerical outcomes, but at the expense of computational efficiency. Numerous researchers have therefore strategically combined the strengths of both methods to investigate complex flow phenomena, and this synergy has yielded accurate and computationally efficient results. However, algorithms that weakly couple the two methods tend to be intricate, raising issues of versatility, implementation, and mutual adaptation to hardware and coding structures. Thus, achieving a robust and strong coupling of FVM and SPH in a unified framework is imperative. Because the two methods employ different boundary algorithms, as in Wang's work, the crucial step in establishing a strong coupling within a unified SPH framework lies in incorporating the FVM boundary algorithm into the Eulerian SPH method. In this paper, we propose a straightforward boundary algorithm for the Eulerian SPH method, algorithmically equivalent to that of FVM and grounded in the principle of zero-order consistency. Several numerical examples, including fully and weakly compressible flows with various boundary conditions in the Eulerian SPH method, validate the stability and accuracy of the proposed algorithm.
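To make the governing principle concrete, here is a generic 1-D illustration of zero-order consistency, i.e. exact reproduction of constants, $\sum_j W_{ij} V_j = 1$ (the standard Shepard correction, sketched under our own assumptions; it is not the paper's FVM-equivalent boundary algorithm): near a wall the raw kernel sum is deficient, and renormalizing restores the constant.

```python
# Zero-order consistency near a boundary: raw kernel sum vs. Shepard correction.
import numpy as np

def cubic_spline_W(r, h):
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                      # 1-D cubic-spline normalization
    w = np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
         np.where(q < 2, 0.25 * (2 - q) ** 3, 0.0))
    return sigma * w

h, dx = 0.02, 0.01
x = np.arange(0.0, 1.0 + 1e-12, dx)              # particles on [0, 1]
f = np.ones_like(x)                              # a constant field

xi = 0.005                                       # evaluation point near the wall
W = cubic_spline_W(x - xi, h)
raw = np.sum(f * W * dx)                         # truncated kernel support: raw < 1
shepard = raw / np.sum(W * dx)                   # zero-order-consistent estimate
print(f"raw = {raw:.4f}, Shepard-corrected = {shepard:.4f}")   # corrected ~ 1.0
```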

Quadratization of polynomial and nonpolynomial systems of ordinary differential equations (ODEs) is advantageous in a variety of disciplines, such as systems theory, fluid mechanics, chemical reaction modeling and mathematical analysis. A quadratization reveals new variables and structures of a model, which may be easier to analyze, simulate, and control, and provides a convenient parametrization for learning. This paper presents novel theory, algorithms, and software capabilities for the quadratization of non-autonomous ODEs. We provide existence results, depending on the regularity of the input function, for the cases in which a quadratic-bilinear system can be obtained through quadratization. We further develop existence results and an algorithm that generalizes quadratization to systems of arbitrary dimension that retain their nonlinear structure as the dimension grows. For such systems, we provide dimension-agnostic quadratization. An example is semi-discretized PDEs, where the nonlinear terms remain symbolically identical as the discretization size increases. As an important aspect for the practical adoption of this research, we extend the capabilities of the QBee software to both non-autonomous systems of ODEs and ODEs of arbitrary dimension. We present several examples of ODEs previously reported in the literature for which our new algorithms find quadratized ODE systems of lower dimension than the previously reported lifting transformations. We further highlight an important application area of quadratization: reduced-order model learning. This area can benefit significantly from working in the optimal lifting variables, where quadratic models provide a direct parametrization of the model and avoid additional hyperreduction of the nonlinear terms. A solar wind example highlights these advantages.
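A worked toy example of quadratization (hand-derived, independent of QBee's API): for $\dot x = x^3$, introducing the lifting variable $y = x^2$ closes the system into the quadratic form $\dot x = xy$, $\dot y = 2x\dot x = 2y^2$. The sympy check below verifies the substitution.

```python
# Verify a hand-derived quadratization of x' = x^3 via the lift y = x^2.
import sympy as sp

x = sp.Symbol('x')
y = x**2                               # lifting variable
x_dot = x**3                           # original right-hand side
y_dot = sp.diff(y, x) * x_dot          # chain rule: y' = 2x * x'

# both right-hand sides are quadratic in the lifted variables (x, y)
assert sp.simplify(x_dot - x * y) == 0         # x' = x*y
assert sp.simplify(y_dot - 2 * y**2) == 0      # y' = 2*y^2
print("quadratization verified: x' = x*y, y' = 2*y**2")
```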

The incompressibility method is a counting argument in the framework of algorithmic complexity that permits discovering properties satisfied by most objects of a class. This paper gives a preliminary insight into the Kolmogorov complexity of groupoids and some algebras. The incompressibility method shows that almost all groupoids are asymmetric and simple: only trivial or constant homomorphisms are possible. However, highly random groupoids admit subgroupoids with interesting restrictions that reveal intrinsic structural properties. We also study the issue of algebraic varieties and ask which equational identities allow randomness.
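The counting fact behind the method is worth recording (a standard result of Kolmogorov complexity theory, stated here in our notation):

```latex
% Since there are at most 2^{n-c} - 1 programs of length < n - c, fewer than
% 2^{n-c} of the 2^n strings of length n can have complexity below n - c, so
\#\{\, x \in \{0,1\}^n : K(x) \ge n - c \,\} \;\ge\; 2^n \bigl(1 - 2^{-c}\bigr),
```

so a property that holds for every $c$-incompressible encoding holds for almost all groupoids of a given size.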

This paper explores an iterative coupling approach for solving linear thermo-poroelasticity problems, applied as a high-fidelity finite element discretization during the training of projection-based reduced order models. One of the main challenges in addressing coupled multi-physics problems is their complexity and computational expense. In this study, we introduce a decoupled iterative solution approach, integrated with reduced order modeling, aimed at improving the efficiency of the computational algorithm. The iterative coupling technique we employ builds upon the established fixed-stress splitting scheme that has been extensively investigated for Biot's poroelasticity. Leveraging solutions derived from this iterative scheme, the reduced order model employs an additional Galerkin projection onto a reduced basis space spanned by a small number of modes obtained through proper orthogonal decomposition. The effectiveness of the proposed algorithm is demonstrated through numerical experiments showcasing its computational efficiency.
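A generic POD-Galerkin sketch on synthetic snapshots, illustrating only the projection step (not the fixed-stress iteration or the thermo-poroelastic operators): the dominant left singular vectors of a snapshot matrix form the reduced basis, onto which a linear full-order operator is Galerkin-projected.

```python
# Generic POD-Galerkin reduction with hypothetical snapshot data.
import numpy as np

rng = np.random.default_rng(4)
n, n_snap, r = 400, 60, 8
modes = np.linalg.qr(rng.normal(size=(n, 10)))[0]    # hidden low-rank structure
S = modes @ rng.normal(size=(10, n_snap))            # snapshot matrix (n x n_snap)

U, s, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :r]                                         # POD basis: first r modes
energy = s[:r].sum() / s.sum()

A = rng.normal(size=(n, n))                          # stand-in full-order operator
A_r = V.T @ A @ V                                    # Galerkin-projected operator
b_r = V.T @ rng.normal(size=n)                       # reduced right-hand side
u_r = np.linalg.solve(A_r, b_r)                      # r x r solve instead of n x n
u_approx = V @ u_r                                   # lift back to the full space
print(f"captured singular-value energy: {energy:.3f}, reduced dim: {r}")
```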

Regularization is a long-standing challenge for ill-posed linear inverse problems; a prototype is the Fredholm integral equation of the first kind with additive Gaussian measurement noise. We introduce a new RKHS regularization adaptive to the measurement data and the underlying linear operator. This RKHS arises naturally in a variational approach, and its closure is the function space in which the true solution can be identified. We also introduce a small-noise analysis to compare regularization norms via sharp convergence rates in the small-noise limit. Our analysis shows that the RKHS and $L^2$ regularizers yield the same convergence rate when their optimal hyper-parameters are selected using the true solution, with the RKHS regularizer attaining a smaller multiplicative constant. In computational practice, however, the RKHS regularizer significantly outperforms the $L^2$- and $\ell^2$-regularizers in producing consistently converging estimators as the noise level decays or the observation mesh is refined.
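A baseline sketch of the comparison point, not the paper's RKHS method: a discretized first-kind Fredholm equation with a Gaussian kernel (our choice of kernel, noise level, and regularization parameter, purely illustrative), solved with the classical $\ell^2$ (Tikhonov) regularizer $\hat x_\lambda = \arg\min_x \|Ax - b\|^2 + \lambda \|x\|^2$.

```python
# Tikhonov (l2) baseline for a discretized first-kind Fredholm equation.
import numpy as np

rng = np.random.default_rng(5)
n = 200
t = np.linspace(0, 1, n)
# Gaussian kernel, scaled by the quadrature weight dt
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.03**2)) * (t[1] - t[0])
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-4 * rng.normal(size=n)           # noisy measurements

lam = 1e-6
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```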
