
The problem of approximate joint diagonalization of a collection of matrices arises in a number of diverse engineering and signal processing problems. This problem is usually cast as an optimization problem, and the main goal of this publication is to provide a theoretical study of the corresponding cost functional. As our main result, we prove that this functional tends to infinity in the vicinity of rank-deficient matrices with probability one, thereby proving that the optimization problem is well posed. Second, we provide unified expressions for its higher-order derivatives in multilinear form, and explicit expressions for the gradient and the Hessian of the functional in standard form, thereby opening the way for new and improved numerical schemes for the solution of the joint diagonalization problem. A special section is devoted to the important case of self-adjoint matrices.
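The blow-up near rank-deficient matrices can be illustrated with one widely used joint-diagonalization criterion, Pham's log-likelihood functional (the paper's exact cost functional may differ): for symmetric positive-definite target matrices, the criterion is nonnegative and diverges as the diagonalizer approaches singularity. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def pham_cost(B, Cs):
    # Pham's criterion: sum_k [ log det diag(B C_k B^T) - log det(B C_k B^T) ].
    # Nonnegative for SPD C_k (Hadamard's inequality); the -log det term
    # tends to +infinity as B approaches a rank-deficient matrix.
    J = 0.0
    for C in Cs:
        M = B @ C @ B.T
        _, logdet = np.linalg.slogdet(M)
        J += np.sum(np.log(np.diag(M))) - logdet
    return J

# a few random SPD matrices to be jointly diagonalized
Cs = []
for _ in range(3):
    A = rng.standard_normal((4, 4))
    Cs.append(A @ A.T + 4 * np.eye(4))

B_good = np.eye(4)
B_bad = np.eye(4)
B_bad[1] = B_bad[0] + 1e-6 * rng.standard_normal(4)  # nearly rank-deficient

print(pham_cost(B_good, Cs), pham_cost(B_bad, Cs))
```

Comparing the two values shows the cost exploding as two rows of `B` become nearly parallel, in line with the well-posedness result stated above.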


This paper deals with the application of probabilistic time integration methods to semi-explicit partial differential-algebraic equations of parabolic type and their semi-discrete counterparts, namely semi-explicit differential-algebraic equations of index 2. The proposed methods iteratively construct a probability distribution over the solution of deterministic problems, enhancing the information obtained from the numerical simulation. Within this paper, we examine the efficacy of the randomized versions of the implicit Euler method, the midpoint scheme, and exponential integrators of first and second order. By demonstrating the consistency and convergence properties of these solvers, we illustrate their utility in capturing the sensitivity of the solution to numerical errors. Our analysis establishes the theoretical validity of randomized time integration for constrained systems and offers insights into the calibration of probabilistic integrators for practical applications.
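The basic idea of a randomized one-step method can be sketched on a scalar test ODE: perturb each deterministic step with Gaussian noise whose scale matches the local error order, so the resulting ensemble spread reflects sensitivity to numerical error. This is a common additive-perturbation construction for unconstrained ODEs, not the paper's DAE schemes; the step-size exponent 3/2 corresponds to a first-order method.

```python
import numpy as np

def randomized_implicit_euler(y0, lam, T, N, scale, rng, samples=200):
    # Implicit Euler for y' = -lam * y with an additive Gaussian
    # perturbation of size O(h^{3/2}) after each step (order p = 1).
    h = T / N
    ys = np.full(samples, y0, dtype=float)
    for _ in range(N):
        ys = ys / (1 + lam * h)                              # deterministic step
        ys += scale * h**1.5 * rng.standard_normal(samples)  # probabilistic perturbation
    return ys

rng = np.random.default_rng(1)
ens = randomized_implicit_euler(1.0, 2.0, 1.0, 200, 1.0, rng)
print(ens.mean(), ens.std(), np.exp(-2.0))
```

The ensemble mean tracks the exact solution $e^{-\lambda T}$, while the ensemble standard deviation gives a cheap, sample-based error indicator.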

We give a streamlined short proof of Newman's theorem in communication complexity by applying the classical and the approximate Carath\'eodory's theorems.

We present accurate and mathematically consistent formulations of a diffuse-interface model for two-phase flow problems involving rapid evaporation. The model addresses challenges including discontinuities in the density field by several orders of magnitude, leading to high velocity and pressure jumps across the liquid-vapor interface, along with dynamically changing interface topologies. To this end, we integrate an incompressible Navier-Stokes solver combined with a conservative level-set formulation and a regularized, i.e., diffuse, representation of discontinuities into a matrix-free adaptive finite element framework. The achievements are three-fold: First, we propose mathematically consistent definitions for the level-set transport velocity in the diffuse interface region by extrapolating the velocity from the liquid or gas phase. They exhibit superior prediction accuracy for the evaporated mass and the resulting interface dynamics compared to a local velocity evaluation, especially for strongly curved interfaces. Second, we show that accurate prediction of the evaporation-induced pressure jump requires a consistent, namely a reciprocal, density interpolation across the interface, which satisfies local mass conservation. Third, the combination of diffuse interface models for evaporation with standard Stokes-type constitutive relations for viscous flows leads to significant pressure artifacts in the diffuse interface region. To mitigate these, we propose to introduce a correction term for such constitutive model types. Through selected analytical and numerical examples, the aforementioned properties are validated. The presented model promises new insights into the simulation-based prediction of melt-vapor interactions in thermal multiphase flows such as in laser-based powder bed fusion of metals.
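The difference between a naive and a reciprocal density interpolation across a diffuse interface can be seen with a one-line comparison. A minimal sketch, assuming a regularized indicator $H \in [0,1]$ (liquid at $H=0$, vapor at $H=1$) and an illustrative 1000:1 density ratio; the paper's full interpolation operators are of course embedded in the finite element model:

```python
import numpy as np

def density_arithmetic(H, rho_l, rho_v):
    # naive arithmetic interpolation across the diffuse interface
    return (1 - H) * rho_l + H * rho_v

def density_reciprocal(H, rho_l, rho_v):
    # reciprocal (harmonic) interpolation: interpolate 1/rho, which is
    # the consistent choice for mass-conserving velocity/pressure jumps
    return 1.0 / ((1 - H) / rho_l + H / rho_v)

H = np.linspace(0.0, 1.0, 5)
print(density_arithmetic(H, 1000.0, 1.0))
print(density_reciprocal(H, 1000.0, 1.0))
```

At the interface midpoint ($H=0.5$) the arithmetic rule gives a density near 500, while the reciprocal rule gives a value near 2, i.e. close to the light phase; this is the behavior needed to reproduce the evaporation-induced pressure jump correctly.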

Thermal states play a fundamental role in various areas of physics, and they are becoming increasingly important in quantum information science, with applications related to semi-definite programming, quantum Boltzmann machine learning, Hamiltonian learning, and the related task of estimating the parameters of a Hamiltonian. Here we establish formulas underlying the basic geometry of parameterized thermal states, and we delineate quantum algorithms for estimating the values of these formulas. More specifically, we prove formulas for the Fisher--Bures and Kubo--Mori information matrices of parameterized thermal states, and our quantum algorithms for estimating their matrix elements involve a combination of classical sampling, Hamiltonian simulation, and the Hadamard test. These results have applications in developing a natural gradient descent algorithm for quantum Boltzmann machine learning, which takes into account the geometry of thermal states, and in establishing fundamental limitations on the ability to estimate the parameters of a Hamiltonian, when given access to thermal-state samples. For the latter task, and for the special case of estimating a single parameter, we sketch an algorithm that realizes a measurement that is asymptotically optimal for the estimation task. We finally stress that the natural gradient descent algorithm developed here can be used for any machine learning problem that employs the quantum Boltzmann machine ansatz.
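For thermal families of the exponential form $\rho(\theta) = e^{-H(\theta)}/Z(\theta)$ with $H(\theta) = \sum_i \theta_i H_i$, the Kubo--Mori information matrix coincides with the Hessian of $\ln Z(\theta)$. The following sketch checks this identity numerically for a hypothetical single-qubit family $H(\theta) = \theta_1 \sigma_z + \theta_2 \sigma_x$ (a classical finite-difference check, not the quantum estimation algorithm of the paper):

```python
import numpy as np

# Pauli matrices for an illustrative two-parameter Hamiltonian family
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def log_partition(theta):
    # ln Z(theta) = ln Tr exp(-H(theta)), via the eigenvalues of -H(theta)
    Hm = theta[0] * Z + theta[1] * X
    w = np.linalg.eigvalsh(-Hm)
    return np.log(np.sum(np.exp(w)))

def km_info(theta, eps=1e-4):
    # Kubo-Mori information matrix as the finite-difference Hessian of ln Z
    theta = np.asarray(theta, dtype=float)
    n = len(theta)
    I = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            I[i, j] = (log_partition(theta + e_i + e_j) - log_partition(theta + e_i)
                       - log_partition(theta + e_j) + log_partition(theta)) / eps**2
    return I

I = km_info([0.5, 0.0])
print(I)
```

At $\theta = (0.5, 0)$, where $\ln Z = \ln(2\cosh\sqrt{\theta_1^2+\theta_2^2})$, the diagonal entries are $1-\tanh^2(0.5)$ and $\tanh(0.5)/0.5$, which the finite differences reproduce; it is this metric that the natural gradient descent mentioned above would precondition with.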

This paper investigates the estimation of the interaction function for a class of McKean-Vlasov stochastic differential equations. The estimation is based on observations of the associated particle system at time $T$, considering the scenario where both the time horizon $T$ and the number of particles $N$ tend to infinity. Our proposed method recovers polynomial rates of convergence for the resulting estimator. This is achieved under the assumption of exponentially decaying tails for the interaction function. Additionally, we conduct a thorough analysis of the transform of the associated invariant density as a complex function, providing essential insights for our main results.
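The observation model behind such estimators is the $N$-particle approximation of the McKean-Vlasov dynamics. A minimal Euler-Maruyama sketch of an interacting particle system, with an illustrative confining potential $V(x)=x^2/2$ and a hypothetical interaction function $\phi(r)=\sin r$ standing in for the unknown interaction to be estimated:

```python
import numpy as np

def simulate_particles(N, T, dt, phi, rng):
    # Euler-Maruyama for dX_i = -V'(X_i) dt - (1/N) sum_j phi(X_i - X_j) dt + sqrt(2) dW_i
    # with V(x) = x^2 / 2 (illustrative choice)
    X = rng.standard_normal(N)
    for _ in range(int(T / dt)):
        interaction = phi(X[:, None] - X[None, :]).mean(axis=1)  # mean-field drift
        X = X - (X + interaction) * dt + np.sqrt(2 * dt) * rng.standard_normal(N)
    return X

rng = np.random.default_rng(2)
phi = lambda r: np.sin(r)  # hypothetical attractive interaction
X_T = simulate_particles(300, 3.0, 0.01, phi, rng)
print(X_T.mean(), X_T.std())
```

The estimator studied in the paper would be built from exactly this kind of time-$T$ snapshot of the particle positions, with both $N$ and $T$ large.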

Radial basis functions (RBFs) play an important role in function interpolation, in particular on arbitrary sets of interpolation nodes. The accuracy of the interpolation depends on a parameter called the shape parameter. There are many approaches in the literature on how to appropriately choose it so as to increase the accuracy of interpolation while avoiding instability issues. However, finding the optimal shape parameter value in general remains a challenge. In this work, we present a novel approach to determine the shape parameter in RBFs. First, we construct an optimisation problem to obtain a shape parameter that leads to an interpolation matrix with bounded condition number; then, we introduce a data-driven method that controls the condition of the interpolation matrix to avoid numerically unstable interpolations, while keeping a very good accuracy. In addition, a fall-back procedure is proposed to enforce a strict upper bound on the condition number, as well as a learning strategy to improve the performance of the data-driven method by learning from previously run simulations. We present numerical test cases to assess the performance of the proposed methods in interpolation tasks and in an RBF-based finite difference (RBF-FD) method, in one and two space dimensions.
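The trade-off being controlled here is that flat (small shape parameter) Gaussian RBFs are accurate but produce nearly singular interpolation matrices. A naive sketch of condition-number control, assuming Gaussian RBFs in 1D and a simple geometric search (the paper's optimisation-based and data-driven selectors are considerably more refined):

```python
import numpy as np

def rbf_matrix(x, eps):
    # Gaussian RBF interpolation matrix A_ij = exp(-(eps * |x_i - x_j|)^2)
    r = np.abs(x[:, None] - x[None, :])
    return np.exp(-((eps * r) ** 2))

def shape_for_condition(x, kappa_max, eps0=1.0, factor=1.5, max_iter=100):
    # Increase the shape parameter geometrically until cond(A) <= kappa_max:
    # flatter RBFs (small eps) are ill-conditioned, sharper ones better conditioned.
    eps = eps0
    for _ in range(max_iter):
        if np.linalg.cond(rbf_matrix(x, eps)) <= kappa_max:
            return eps
        eps *= factor
    return eps

x = np.linspace(0.0, 1.0, 15)
eps = shape_for_condition(x, 1e8)
print(eps, np.linalg.cond(rbf_matrix(x, eps)))
```

In practice one wants the smallest such shape parameter (for accuracy) that still respects the condition-number bound, which is precisely what motivates the optimisation formulation above.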

Given large data sets and sufficient compute, is it beneficial to design neural architectures for the structure and symmetries of each problem? Or is it more efficient to learn them from data? We study empirically how equivariant and non-equivariant networks scale with compute and training samples. Focusing on a benchmark problem of rigid-body interactions and on general-purpose transformer architectures, we perform a series of experiments, varying the model size, training steps, and dataset size. We find evidence for three conclusions. First, equivariance improves data efficiency, but training non-equivariant models with data augmentation can close this gap given sufficient epochs. Second, scaling with compute follows a power law, with equivariant models outperforming non-equivariant ones at each tested compute budget. Finally, the optimal allocation of a compute budget onto model size and training duration differs between equivariant and non-equivariant models.
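A compute scaling law of the form $\mathrm{loss} = a \cdot C^{-b}$ is typically fitted by linear regression in log-log space. A sketch on synthetic data (the paper's actual measurements and exponents are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic loss-vs-compute data following loss = a * C^(-b), b = 0.1,
# with small multiplicative noise (purely illustrative values)
C = np.logspace(15, 19, 8)          # compute budgets (FLOPs, illustrative)
loss = 2.0 * C ** (-0.1) * np.exp(rng.normal(0.0, 0.01, C.size))

# A power law is linear in log-log space: log loss = log a - b * log C
slope, log_a = np.polyfit(np.log(C), np.log(loss), 1)
b_hat = -slope
print(b_hat)
```

Fitting separate exponents for equivariant and non-equivariant runs is what allows the per-budget comparison described in the abstract.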

Although quantile regression has emerged as a powerful tool for understanding various quantiles of a response variable conditioned on a set of covariates, the development of quantile regression for count responses has received far less attention. This paper proposes a new Bayesian approach to quantile regression for count data, which provides a more flexible and interpretable alternative to the existing approaches. The proposed approach associates a continuous latent variable with the discrete response and nonparametrically estimates the joint distribution of the latent variable and a set of covariates. Then, by regressing the estimated continuous conditional quantile on the covariates, the posterior distributions of the covariate effects on the conditional quantiles are obtained through general Bayesian updating via simple optimization. Simulation studies and real data analysis demonstrate that the proposed method overcomes the existing limitations and enhances quantile estimation and interpretation of variable relationships, making it a valuable tool for practitioners handling count data.
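The role of the continuous latent variable can be illustrated with the classical jittering device: adding Uniform(0,1) noise to the counts yields a continuous response on which standard pinball-loss quantile regression is well defined. This is only an illustration of the latent-variable idea; the paper's Bayesian nonparametric construction and general Bayesian updating are different.

```python
import numpy as np
from scipy.optimize import minimize

def jitter(y, rng):
    # continuous latent variable z = y + u, u ~ Uniform(0, 1)
    return y + rng.uniform(0.0, 1.0, size=y.shape)

def pinball(beta, X, z, tau):
    # check loss (pinball loss) for quantile level tau
    r = z - X @ beta
    return np.mean(np.maximum(tau * r, (tau - 1) * r))

rng = np.random.default_rng(3)
n = 2000
x = rng.uniform(0.0, 2.0, n)
y = rng.poisson(np.exp(0.5 + 0.7 * x))       # synthetic count responses
X = np.column_stack([np.ones(n), x])
z = jitter(y.astype(float), rng)

# median (tau = 0.5) regression on the jittered latent response
beta_hat = minimize(pinball, np.zeros(2), args=(X, z, 0.5),
                    method="Nelder-Mead").x
print(beta_hat)
```

On this synthetic Poisson example the fitted median slope is clearly positive, recovering the direction of the covariate effect that a count-quantile method must capture.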

We propose an implementable, neural network-based structure preserving probabilistic numerical approximation for a generalized obstacle problem describing the value of a zero-sum differential game of optimal stopping with asymmetric information. The target solution depends on three variables: the time, the spatial (or state) variable, and a variable from a standard $(I-1)$-simplex which represents the probabilities with which the $I$ possible configurations of the game are played. The proposed numerical approximation preserves the convexity of the continuous solution as well as the lower and upper obstacle bounds. We show convergence of the fully-discrete scheme to the unique viscosity solution of the continuous problem and present a range of numerical studies to demonstrate its applicability.

This article presents a new algorithm to compute all the roots of two families of polynomials that are of interest for the Mandelbrot set $\mathcal{M}$: the roots of those polynomials are respectively the parameters $c\in\mathcal{M}$ associated with periodic critical dynamics for $f_c(z)=z^2+c$ (hyperbolic centers) or with pre-periodic dynamics (Misiurewicz-Thurston parameters). The algorithm is based on the computation of discrete level lines that provide excellent starting points for the Newton method. In practice, we observe that these polynomials can be split in time that is linear in the degree. This article is paired with a code library [Mandel] that implements this algorithm. Using this library and about 723 000 core-hours on the HPC center Rom\'eo (Reims), we have successfully found all hyperbolic centers of period $\leq 41$ and all Misiurewicz-Thurston parameters whose period and pre-period sum to $\leq 35$. Concretely, this task involves splitting a tera-polynomial, i.e., a polynomial of degree $\sim10^{12}$, which is orders of magnitude ahead of the previous state of the art. It also involves dealing with the certifiability of our numerical results, which is an issue that we address in detail, both mathematically and along the production chain. The certified database is available to the scientific community. For the smaller periods that can be represented using only hardware arithmetic (floating points FP80), the implementation of our algorithm can split the corresponding polynomials of degree $\sim10^{9}$ in less than one core-day. We complement these benchmarks with a statistical analysis of the separation of the roots, which confirms that no other polynomial in these families can be split without using higher precision arithmetic.
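The polynomials in question never need to be expanded: $p_n(c)$ and its derivative are evaluated by the critical-orbit recursion, so each Newton step costs $O(n)$. A minimal sketch of Newton's method on $p_n$ from a given starting point (the level-line machinery that produces good starting points for huge degrees is the article's contribution and is not reproduced here):

```python
def newton_center(n, c0, iters=60):
    # Hyperbolic centers of period n are roots of p_n(c), where
    # p_1(c) = c and p_{k+1}(c) = p_k(c)^2 + c (critical orbit of z^2 + c).
    c = complex(c0)
    for _ in range(iters):
        p, dp = c, 1.0 + 0.0j            # p_1 = c, p_1' = 1
        for _ in range(n - 1):
            p, dp = p * p + c, 2.0 * p * dp + 1.0   # p' by the chain rule
        if dp == 0:
            break
        c -= p / dp                       # Newton step on p_n
    return c

# period 2: p_2(c) = c^2 + c, whose period-2 center is c = -1
print(newton_center(2, -0.8 + 0.1j))
```

From a reasonable starting point the iteration converges quadratically; the article's level lines are what supply such starting points simultaneously for all $\sim 10^{12}$ roots.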
