
We prove that the native space of a Wu function is a dense subspace of a Sobolev space. An explicit characterization of the native spaces of Wu functions is given. Three definitions of Wu functions are introduced and proven to be equivalent. Based on these new equivalent definitions and the so-called $f$-form tricks, we can generalize the Wu functions to the even-dimensional spaces $\R^{2k}$, while the original Wu functions are only defined in the odd-dimensional spaces $\R^{2k+1}$. Such functions in even-dimensional spaces are referred to as the `missing Wu functions'. Furthermore, we can generalize the Wu functions to `fractional'-dimensional spaces. We call all these Wu functions the generalized Wu functions. The closed forms of the generalized Wu functions are given in terms of hypergeometric functions. Finally, we prove that the Wu functions and the missing Wu functions can be written as linear combinations of the generalized Wendland functions.
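To make the final object concrete, here is a minimal sketch that evaluates a generalized Wendland function by numerical quadrature of one common integral form from the literature; the normalization and parameter names are assumptions for illustration and may differ from the paper's conventions.

```python
# A minimal sketch, assuming one common integral form of the generalized
# Wendland function; the paper's exact normalization may differ:
#   phi_{mu,alpha}(r) = 1/(2^(alpha-1) Gamma(alpha))
#                       * int_r^1 u (u^2 - r^2)^(alpha-1) (1 - u)^mu du
# for 0 <= r < 1, and phi = 0 for r >= 1 (compact support).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def generalized_wendland(r, mu, alpha):
    if r >= 1.0:
        return 0.0
    integrand = lambda u: u * (u**2 - r**2)**(alpha - 1) * (1 - u)**mu
    val, _ = quad(integrand, r, 1.0)
    return val / (2**(alpha - 1) * gamma(alpha))

# Sample on [0, 1]; for suitable (mu, alpha) these are positive definite
# compactly supported kernels.
rs = np.linspace(0.0, 1.0, 5)
print([round(generalized_wendland(r, mu=4.0, alpha=1.0), 6) for r in rs])
```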

Related Content

Penalizing complexity (PC) priors form a principled framework for designing priors that reduce model complexity. PC priors penalize the Kullback-Leibler divergence (KLD) between the distribution induced by a ``simple'' base model and that of a more complex model. However, in many common cases, it is impossible to construct a prior in this way because the KLD is infinite. Various approximations are used to mitigate this problem, but the resulting priors then fail to follow the design principles. We propose a new class of priors, the Wasserstein complexity penalization (WCP) priors, by replacing the KLD with the Wasserstein distance in the PC prior framework. These priors avoid the infinite model distance issue and can be derived by following the principles exactly, making them more interpretable. Furthermore, we propose principles and recipes for constructing joint WCP priors for multiple parameters and show that they can be obtained, either analytically or numerically, for a general class of models. The methods are illustrated through several examples for which PC priors have previously been applied.
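As a toy illustration of the recipe (an assumption for exposition, not the paper's derivation): take the base model to be the point mass at zero and the flexible model $N(0, \sigma^2)$. For Gaussians the 2-Wasserstein distance has a closed form, and here $d(\sigma) = W_2(N(0,\sigma^2), \delta_0) = \sigma$, so placing an exponential prior on the distance and changing variables yields an explicit prior on $\sigma$.

```python
# A minimal sketch of the WCP idea on a toy model: exponential prior on
# the Wasserstein distance to the base model, then change of variables.
# Here d(sigma) = W2(N(0, sigma^2), delta_0) = sigma, so the induced
# prior is pi(sigma) = theta * exp(-theta * sigma). This illustrates the
# principle only; it is not one of the paper's worked examples.
import numpy as np

def wcp_prior_sigma(sigma, theta):
    d = sigma                      # Wasserstein distance to the base model
    dd_dsigma = 1.0                # Jacobian of the distance map
    return theta * np.exp(-theta * d) * dd_dsigma

# Calibrate theta from a tail statement P(sigma > U) = a, as in PC priors.
U, a = 3.0, 0.05
theta = -np.log(a) / U
print(wcp_prior_sigma(np.array([0.1, 1.0, 3.0]), theta))
```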

The problem of recovering a moment-determinate multivariate function $f$ via its moment sequence is studied. Under mild conditions on $f$, the pointwise and $L_1$ rates of convergence for the proposed constructions are established. The cases where $f$ is the indicator function of a set or represents a discrete probability mass function are also investigated. Calculations of the approximants and simulation studies are conducted to graphically illustrate the behavior of the approximations in several simple examples. Analytical and simulated errors of the proposed approximations are recorded in Tables 1-3.
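For readers new to moment-based recovery, the sketch below shows a standard (one-dimensional) construction of this kind, a truncated Legendre expansion whose coefficients are computed exactly from the power moments; it is not the paper's approximant, just the generic mechanism.

```python
# A minimal sketch, not the paper's construction: approximate f on [-1, 1]
# from its moments m_k = int x^k f(x) dx via a truncated Legendre series,
# f ~ sum_n c_n P_n, with c_n = (2n+1)/2 * int P_n f computed exactly
# from the moments because each P_n is a polynomial in x.
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial.legendre import Legendre

def legendre_from_moments(moments, x):
    fx = np.zeros_like(x, dtype=float)
    for n in range(len(moments)):
        a = Legendre.basis(n).convert(kind=Polynomial).coef  # P_n in powers of x
        c = 0.5 * (2 * n + 1) * sum(a[k] * moments[k] for k in range(len(a)))
        fx += c * Legendre.basis(n)(x)
    return fx

# Example: f(x) = (3/2) x^2 on [-1, 1] has exact moments
# m_k = (3/2) (1 + (-1)^k) / (k + 3).
m = [(3 / 2) * (1 + (-1) ** k) / (k + 3) for k in range(6)]
x = np.linspace(-1, 1, 5)
print(legendre_from_moments(m, x))   # close to 1.5 * x**2
```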

We consider the coupled system of the Landau--Lifshitz--Gilbert equation and the conservation of linear momentum law to describe magnetic processes in ferromagnetic materials, including magnetoelastic effects in the small-strain regime. For this nonlinear system of time-dependent partial differential equations, we present a decoupled integrator based on first-order finite elements in space and an implicit one-step method in time. We prove unconditional convergence of the sequence of discrete approximations towards a weak solution of the system as the mesh size and the time-step size go to zero. In contrast to previous numerical works on this problem, we prove for our method a discrete energy law that mimics that of the continuous problem and, passing to the limit, yields an energy inequality satisfied by weak solutions. Moreover, our method does not employ a nodal projection to impose the unit length constraint on the discrete magnetisation, so that the stability of the method does not require weakly acute meshes. Furthermore, our integrator and its analysis hold for a more general setting, including body forces and traction, as well as a more general representation of the magnetostrain. Numerical experiments underpin the theory and showcase the applicability of the scheme for the simulation of dynamical processes involving magnetoelastic materials at submicrometer length scales.
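The projection-free idea can be seen already on a single spin, as in the toy sketch below (an illustration only, not the paper's finite element scheme): the update direction is built from cross products with $m$ and is therefore tangent to $m$, so no renormalization is applied and $|m|^2$ changes only at second order in the time step.

```python
# A toy single-spin illustration of the projection-free update (not the
# paper's FEM integrator): the LLG right-hand side v is orthogonal to m
# (it is a combination of cross products with m), so the explicit update
# m <- m + k v never renormalizes, and |m|^2 grows only by k^2 |v|^2.
import numpy as np

alpha = 0.1                              # Gilbert damping
h = np.array([0.0, 0.0, 1.0])            # constant effective field

def llg_rhs(m):
    return -np.cross(m, h) - alpha * np.cross(m, np.cross(m, h))

m, k = np.array([1.0, 0.0, 0.0]), 1e-3
for _ in range(10_000):
    v = llg_rhs(m)                       # tangent direction: v . m = 0
    m = m + k * v                        # no nodal projection applied
print(np.linalg.norm(m))                 # remains close to 1 for small k
```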

We consider the approximation of weakly $T$-coercive operators. The main property needed to ensure convergence is the regularity of the approximation (in the vocabulary of discrete approximation schemes). In a previous work, the existence of discrete operators $T_n$ that converge to $T$ in a discrete norm was shown to be sufficient to obtain regularity. Although this framework proved useful for many applications, for some instances the former assumption is too strong. Thus, in the present article we report a weaker criterion for which the discrete operators $T_n$ only have to converge pointwise, but in addition a weak $T$-coercivity condition has to be satisfied on the discrete level. We apply the new framework to prove the convergence of certain $H^1$-conforming finite element discretizations of the damped time-harmonic Galbrun's equation, which is used to model the oscillations of stars. A main ingredient in the latter analysis is the uniformly stable invertibility of the divergence operator on certain spaces, which is related to the topic of divergence-free elements for the Stokes equation.

This work puts forth low-complexity Riemannian subspace descent algorithms for the minimization of functions over the symmetric positive definite (SPD) manifold. Different from the existing Riemannian gradient descent variants, the proposed approach utilizes carefully chosen subspaces that allow the update to be written as a product of the Cholesky factor of the iterate and a sparse matrix. The resulting updates avoid costly matrix operations such as matrix exponentiation and dense matrix multiplication, which are generally required in almost all other Riemannian optimization algorithms on the SPD manifold. We further identify a broad class of functions, arising in diverse applications such as kernel matrix learning, covariance estimation of Gaussian distributions, maximum likelihood parameter estimation of elliptically contoured distributions, and parameter estimation in Gaussian mixture models, over which the Riemannian gradients can be calculated efficiently. The proposed uni-directional and multi-directional Riemannian subspace descent variants incur per-iteration complexities of $\O(n)$ and $\O(n^2)$, respectively, as compared to the $\O(n^3)$ or higher complexity incurred by all existing Riemannian gradient descent variants. The superior runtime and low per-iteration complexity of the proposed algorithms are also demonstrated via numerical tests on large-scale covariance estimation and matrix square root problems.
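The structural trick behind the complexity claim can be sketched as follows (an illustration under assumed conventions, not the paper's descent algorithm): right-multiplying the Cholesky factor by a sparse elementary matrix updates one column in $O(n)$ work while provably preserving positive definiteness.

```python
# An illustrative sketch of the structural trick, not the paper's method:
# if X = L L^T with L lower triangular, right-multiplying L by the sparse
# matrix M = I + t e_i e_j^T (with i >= j, and 1 + t > 0 when i == j)
# keeps L lower triangular with positive diagonal, so the new iterate
# X' = (L M)(L M)^T stays SPD. The update touches a single column of L,
# i.e. O(n) work, versus O(n^3) for expm-based Riemannian updates.
import numpy as np

def sparse_cholesky_update(L, i, j, t):
    L = L.copy()
    L[:, j] += t * L[:, i]          # column op: L @ (I + t e_i e_j^T)
    return L

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
L = np.linalg.cholesky(A @ A.T + 5 * np.eye(5))
L2 = sparse_cholesky_update(L, i=3, j=1, t=0.7)
print(np.all(np.linalg.eigvalsh(L2 @ L2.T) > 0))   # True: still SPD
```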

A linear code is said to be self-orthogonal if it is contained in its dual. Self-orthogonal codes are of interest because of their important applications, such as the construction of linear complementary dual (LCD) codes and quantum codes. In this paper, we construct several new families of ternary self-orthogonal codes by employing weakly regular plateaued functions. Their parameters and weight distributions are completely determined. We then apply these self-orthogonal codes to construct several new families of ternary LCD codes. As a consequence, we obtain many (almost) optimal ternary self-orthogonal codes and LCD codes.
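For concreteness, the defining property is easy to test: a ternary linear code with generator matrix $G$ is self-orthogonal exactly when $GG^T \equiv 0 \pmod 3$. The sketch below checks this for the classical $[4,2,3]$ tetracode; it illustrates the definition only, not the paper's plateaued-function constructions.

```python
# Basic self-orthogonality test over F_3 (not the paper's construction):
# a linear code is self-orthogonal iff its generator matrix satisfies
# G G^T = 0 (mod 3). The [4, 2, 3] tetracode below is in fact self-dual.
import numpy as np

def is_self_orthogonal(G, q=3):
    return not np.any((G @ G.T) % q)

G_tetracode = np.array([[1, 0, 1, 1],
                        [0, 1, 1, 2]])    # 2 == -1 mod 3
print(is_self_orthogonal(G_tetracode))    # True
```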

We introduce a family of identities that express general linear non-unitary evolution operators as a linear combination of unitary evolution operators, each solving a Hamiltonian simulation problem. This formulation can exponentially enhance the accuracy of the recently introduced linear combination of Hamiltonian simulation (LCHS) method [An, Liu, and Lin, Physical Review Letters, 2023]. For the first time, this approach enables quantum algorithms to solve linear differential equations with both optimal state preparation cost and near-optimal scaling in matrix queries on all parameters.
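The first-order identity from the cited LCHS paper can be checked numerically: for $A = L + iH$ with $L$ Hermitian positive semidefinite and $H$ Hermitian, $e^{-At} = \int_{\mathbb{R}} \frac{1}{\pi(1+k^2)}\, e^{-i(kL+H)t}\, dk$. The sketch below (test matrices and quadrature grid are illustrative choices) makes the motivation visible: the Cauchy weight decays only like $1/k^2$, so a wide, fine grid is needed, which is the slow convergence the improved kernels address.

```python
# Numerical check of the first-order LCHS identity:
#   exp(-A t) = int_R 1/(pi (1 + k^2)) exp(-i (k L + H) t) dk,
# for A = L + iH, L Hermitian PSD, H Hermitian. Truncation of the slowly
# decaying Cauchy weight limits the accuracy of this simple quadrature.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n, t = 4, 1.0
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
L = B @ B.conj().T / n                       # Hermitian, positive semidefinite
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (C + C.conj().T) / 2                     # Hermitian
A = L + 1j * H

ks, dk = np.linspace(-300, 300, 24001, retstep=True)
approx = sum(dk / (np.pi * (1 + k**2)) * expm(-1j * (k * L + H) * t)
             for k in ks)
print(np.linalg.norm(approx - expm(-A * t)))  # small; limited by truncation
```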

While many phenomena in physics and engineering are formally high-dimensional, their long-time dynamics often live on a lower-dimensional manifold. The present work introduces an autoencoder framework that combines the implicit regularization of internal linear layers with explicit $L_2$ regularization (weight decay) to automatically estimate the underlying dimensionality of a data set, produce an orthogonal manifold coordinate system, and provide the mapping functions between the ambient space and manifold space, allowing for out-of-sample projections. We validate our framework's ability to estimate the manifold dimension for a series of datasets from dynamical systems of varying complexities and compare to other state-of-the-art estimators. We analyze the training dynamics of the network to glean insight into the mechanism of low-rank learning and find that the implicit regularizing layers collectively compound the low-rank representation and even self-correct during training. Analysis of the gradient descent dynamics for this architecture in the linear case reveals the role of the internal linear layers in leading to faster decay of a "collective weight variable" incorporating all layers, and the role of weight decay in breaking degeneracies and thus driving convergence along directions in which no decay would occur in its absence. We show that this framework can be naturally extended for applications of state-space modeling and forecasting by generating a data-driven dynamic model of a spatiotemporally chaotic partial differential equation using only the manifold coordinates. Finally, we demonstrate that our framework is robust to hyperparameter choices.
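A minimal PyTorch sketch of the mechanism is given below; the architecture, data, and hyperparameters are placeholders, not the paper's setup. Stacking extra linear layers at the bottleneck and training with weight decay biases the latent representation toward low rank, so the manifold dimension can be read off from the latent singular value spectrum.

```python
# A minimal sketch of the mechanism (placeholder architecture and
# hyperparameters, not the paper's): internal linear layers plus weight
# decay implicitly drive the latent covariance toward low rank.
import torch
import torch.nn as nn

ambient, latent, hidden = 10, 10, 64
encoder = nn.Sequential(
    nn.Linear(ambient, hidden), nn.ReLU(),
    nn.Linear(hidden, latent),
    nn.Linear(latent, latent),       # internal linear layers: the source
    nn.Linear(latent, latent),       # of the implicit rank regularization
)
decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                        nn.Linear(hidden, ambient))

# Data on a 3-dimensional manifold embedded in R^10 (here a linear one).
X = torch.randn(2048, 3) @ torch.randn(3, ambient)

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()),
                       lr=1e-3, weight_decay=1e-4)   # explicit L2 term
for _ in range(2000):
    opt.zero_grad()
    loss = ((decoder(encoder(X)) - X) ** 2).mean()
    loss.backward()
    opt.step()

Z = encoder(X).detach()
print(torch.linalg.svdvals(Z - Z.mean(0)))  # expect a sharp drop after ~3
```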

We study the stability and sensitivity of an absorbing layer for the Boltzmann equation by examining the Bhatnagar-Gross-Krook (BGK) approximation and using the perfectly matched layer (PML) technique. To ensure stability, we discard some parameters in the model and calculate the total sensitivity indices of the remaining parameters using the ANOVA expansion of multivariate functions. We conduct extensive numerical experiments to study stability and compute the total sensitivity indices, which allow us to identify the essential parameters of the model.
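Total sensitivity indices of this kind are routinely estimated by Monte Carlo. The sketch below uses a standard estimator (Jansen's formula) on the Ishigami test function; the paper applies the ANOVA-based machinery to the BGK/PML model parameters, not to this toy.

```python
# A standard Monte Carlo estimator (Jansen's formula) for total Sobol
# indices, S_Ti = E[(f(A) - f(AB_i))^2] / (2 Var(Y)), shown on the
# Ishigami test function. Illustrative only; not the paper's model.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2
            + b * x[:, 2]**4 * np.sin(x[:, 0]))

rng = np.random.default_rng(0)
N, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
fA = ishigami(A)
var = np.var(np.concatenate([fA, ishigami(B)]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # resample only parameter i
    S_Ti = np.mean((fA - ishigami(ABi))**2) / (2 * var)
    print(f"S_T{i+1} ~ {S_Ti:.3f}")          # ~0.56, ~0.44, ~0.24
```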

Many imaging science tasks can be modeled as a discrete linear inverse problem. Solving linear inverse problems is often challenging, with ill-conditioned operators and potentially non-unique solutions. Embedding prior knowledge, such as smoothness, into the solution can overcome these challenges. In this work, we encode prior knowledge using a non-negative patch dictionary, which effectively learns a basis from a training set of natural images. In this dictionary basis, we desire solutions that are non-negative and sparse (i.e., contain many zero entries). With these constraints, standard methods for solving discrete linear inverse problems are not directly applicable. One such approach is the modified residual norm steepest descent (MRNSD), which produces non-negative solutions but does not induce sparsity. In this paper, we provide two methods based on MRNSD that promote sparsity. In our first method, we add an $\ell_1$-regularization term with a new, optimal step size. In our second method, we propose a new non-negative, sparsity-promoting mapping of the solution. We compare the performance of our proposed methods in a number of numerical experiments, including deblurring, image completion, computed tomography, and super-resolution. Our results show that these methods effectively solve discrete linear inverse problems with non-negativity and sparsity constraints.
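To fix ideas, the sketch below shows an MRNSD-type iteration with an $\ell_1$ penalty folded into the gradient; the exact line-search step here follows from the classical MRNSD derivation (for $x \ge 0$ the penalty is linear along the search direction), whereas the paper's new optimal step size and sparsity-promoting mapping are not reproduced.

```python
# A minimal sketch of an l1-penalized MRNSD iteration (illustrative; the
# paper's new step-size rule and mapping are not reproduced here). For
# x >= 0 the gradient of lam * ||x||_1 is simply lam, and the exact
# minimizer of the penalized objective along d = -x * g is
# alpha = sum(x g^2) / ||A d||^2, capped to keep x non-negative.
import numpy as np

def mrnsd_l1(A, b, lam=1e-2, iters=200):
    x = np.ones(A.shape[1])                  # strictly positive start
    for _ in range(iters):
        g = A.T @ (A @ x - b) + lam          # grad of 0.5||Ax-b||^2 + lam||x||_1
        d = -x * g                           # scaled steepest descent direction
        Ad = A @ d
        alpha = (g @ (x * g)) / (Ad @ Ad)    # exact line-search minimizer
        neg = d < 0
        if np.any(neg):                      # stay inside the boundary x >= 0
            alpha = min(alpha, 0.9 * np.min(-x[neg] / d[neg]))
        x = x + alpha * d
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))
x_true = np.zeros(40); x_true[[3, 17, 29]] = [2.0, 1.0, 3.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)
x = mrnsd_l1(A, b)
print(np.sum(x > 1e-3), x[[3, 17, 29]])      # approximately sparse solution
```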
