
We study integration and $L^2$-approximation of functions of infinitely many variables in the following setting: The underlying function space is the countably infinite tensor product of univariate Hermite spaces and the probability measure is the corresponding product of the standard normal distribution. The maximal domain of the functions from this tensor product space is necessarily a proper subset of the sequence space $\mathbb{R}^\mathbb{N}$. We establish upper and lower bounds for the minimal worst case errors under general assumptions; these bounds do match for tensor products of well-studied Hermite spaces of functions with finite or with infinite smoothness. In the proofs we employ embedding results, and the upper bounds are attained constructively with the help of multivariate decomposition methods.
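
For orientation, the error notions invoked above are the standard worst-case quantities of information-based complexity; in notation of our own choosing (not taken verbatim from the paper), for integration they read

\[
e(A_n, F) \;=\; \sup_{f \in F,\ \|f\|_F \le 1} \Big| \int f \,\mathrm{d}\mu - A_n(f) \Big|,
\qquad
e(n, F) \;=\; \inf_{A_n} e(A_n, F),
\]

where $\mu$ is the countable product of standard normal distributions and the infimum is over algorithms $A_n$ using at most $n$ function values; the upper and lower bounds above concern the decay of $e(n, F)$ as $n$ grows.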

Related content

We examine the behaviour of the Laplace and saddlepoint approximations in the high-dimensional setting, where the dimension of the model is allowed to increase with the number of observations. Approximations to the joint density, the marginal posterior density and the conditional density are considered. Our results show that, under the mildest assumptions on the model, the error of the joint density approximation is $O(p^4/n)$ for both the Laplace and the saddlepoint approximation when $p = o(n^{1/4})$, and improves to $O(p^3/n)$ when $p = o(n^{1/3})$ under additional assumptions on the second derivative of the log-likelihood. Stronger results are obtained for the approximation to the marginal posterior density.
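
For reference, the classical approximation these rates refine is the textbook Laplace formula (stated here in our notation): for smooth $h$ with a nondegenerate minimizer $\hat\theta \in \mathbb{R}^p$,

\[
\int_{\mathbb{R}^p} e^{-n h(\theta)} \,\mathrm{d}\theta
\;=\;
e^{-n h(\hat\theta)} \Big(\frac{2\pi}{n}\Big)^{p/2} \big|\det \nabla^2 h(\hat\theta)\big|^{-1/2} \,\big(1 + \text{error}\big),
\]

and the results above quantify how the error term behaves when the dimension $p$ is allowed to grow with $n$.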

Operator convex functions defined on the positive half-line play a prominent role in the theory of quantum information, where they are used to define quantum $f$-divergences. Such functions admit integral representations in terms of rational functions. Obtaining high-quality rational approximants of operator convex functions is particularly useful for solving optimization problems involving quantum $f$-divergences using semidefinite programming. In this paper we study the quality of rational approximations of operator convex (and operator monotone) functions. Our main theoretical results are precise global bounds on the error of local Pad\'e-like approximants, as well as minimax approximants, with respect to different weight functions. While the error of Pad\'e-like approximants depends inverse polynomially on the degree of the approximant, the error of minimax approximants has root exponential dependence and we give detailed estimates of the exponents in both cases. We also explain how minimax approximants can be obtained in practice using the differential correction algorithm.
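
As a concrete instance of such integral representations (a standard identity, not specific to this paper), the operator concave function $\log$ satisfies

\[
\log x \;=\; \int_0^\infty \Big( \frac{1}{1+t} - \frac{1}{x+t} \Big) \,\mathrm{d}t, \qquad x > 0,
\]

so discretizing the integral with any quadrature rule immediately produces rational approximants of $\log$; representations of this kind are what underlie the Pad\'e-like and minimax approximants analyzed here.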

Second-order polynomials generalize classical first-order ones in allowing for additional variables that range over functions rather than values. We are motivated by their applications in higher-order computational complexity theory, extending for example classical classes like P or PSPACE to operators in Analysis [doi:10.1137/S0097539794263452, doi:10.1145/2189778.2189780]. The degree subclassifies ordinary polynomial growth into linear, quadratic, cubic etc. In order to similarly classify second-order polynomials, we define their degree to be an 'arctic' first-order polynomial (namely a term/expression over the variable $D$ and the operations $+$, $\cdot$ and $\max$). This degree turns out to transform as nicely under (now two kinds of) polynomial composition as the ordinary one. We also establish a normal form and semantic uniqueness for second-order polynomials. Then we define the degree of a third-order polynomial to be an arctic second-order polynomial, and establish its transformation under three kinds of composition.
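
To fix intuition, a purely illustrative example of ours: a second-order polynomial may combine a function variable $L$ with a value variable $n$, as in

\[
P(L, n) \;=\; L(n \cdot n) \cdot n \;+\; L(L(n)) \;+\; 3,
\]

while an 'arctic' first-order polynomial is a term over the variable $D$ built from $+$, $\cdot$ and $\max$, such as $\max(D \cdot D,\, D + 1)$; the degree defined in the paper maps terms of the first kind to terms of the second.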

We study the problem of identification of a linear dynamical system from a single trajectory, driven by isotropic Gaussian excitations. In stark contrast with previously reported results, the Ordinary Least Squares (OLS) estimator, even for a \emph{stable} dynamical system, has non-vanishing error in \emph{high dimensions}; this stems from the fact that realizations of non-diagonalizable dynamics can have strong \emph{spatial correlations} and a variance of order $O(e^{n})$, where $n$ is the dimension of the underlying state space. Employing the \emph{concentration of measure phenomenon}, in particular a tensorization of \emph{Talagrand's inequality} for random dynamical systems, we show that an observed trajectory of length $N$ can have a variance of order $O(e^{nN})$. Consequently, some or most of the $n$ distances between an $N$-dimensional random vector and an $(n-1)$-dimensional hyperplane in $\mathbb{R}^{N}$ can be close to zero with positive probability, and these estimates become stronger in high dimensions and with more iterations via \emph{isoperimetry}. The \emph{negative second moment identity}, combined with these distance estimates, gives control of all the singular values of the \emph{random matrix} of data, revealing limitations of OLS for stable non-diagonalizable and for explosive diagonalizable systems.
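
The failure mode is easy to probe numerically. Below is a minimal sketch (our toy setup, not the paper's experiments) that runs OLS identification on a single trajectory of a stable but non-diagonalizable system, a Jordan block, where the error can remain large even though all eigenvalues lie inside the unit disk:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 200
# Stable but non-diagonalizable dynamics: Jordan block with eigenvalue 0.9.
A = 0.9 * np.eye(n) + np.diag(np.ones(n - 1), k=1)

# Single trajectory x_{t+1} = A x_t + w_t with isotropic Gaussian noise.
X = np.zeros((n, N + 1))
for t in range(N):
    X[:, t + 1] = A @ X[:, t] + rng.standard_normal(n)

# OLS estimator: A_hat = (sum_t x_{t+1} x_t^T)(sum_t x_t x_t^T)^{-1}.
Y, Z = X[:, 1:], X[:, :-1]
A_hat = (Y @ Z.T) @ np.linalg.pinv(Z @ Z.T)
print("operator-norm error of OLS:", np.linalg.norm(A_hat - A, 2))
```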

The model-X knockoffs framework provides a flexible tool for achieving finite-sample false discovery rate (FDR) control in variable selection in arbitrary dimensions without assuming any dependence structure of the response on covariates. It also completely bypasses the use of conventional p-values, making it especially appealing in high-dimensional nonlinear models. Existing works have focused on the setting of independent and identically distributed observations. Yet time series data is prevalent in practical applications in various fields such as economics and social sciences. This motivates the study of model-X knockoffs inference for time series data. In this paper, we make an initial attempt to establish the theoretical and methodological foundation for the model-X knockoffs inference for time series data. We suggest the method of time series knockoffs inference (TSKI) by exploiting the ideas of subsampling and e-values to address the difficulty caused by the serial dependence. We also generalize the robust knockoffs inference to the time series setting and relax the assumption of known covariate distribution required by model-X knockoffs, because such an assumption is overly stringent for time series data. We establish sufficient conditions under which TSKI achieves the asymptotic FDR control. Our technical analysis reveals the effects of serial dependence and unknown covariate distribution on the FDR control. We conduct power analysis of TSKI using the Lasso coefficient difference knockoff statistic under linear time series models. The finite-sample performance of TSKI is illustrated with several simulation examples and an economic inflation study.
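
For context, the generic knockoffs+ selection rule that such pipelines build on (the standard Barber--Cand\`es filter; TSKI's subsampling and e-value layers are not reproduced here) can be sketched as:

```python
import numpy as np

def knockoff_select(W, q=0.1):
    """Knockoffs+ filter: given knockoff statistics W (one per variable,
    large positive = evidence for a signal), return the indices selected
    at nominal FDR level q."""
    candidates = np.sort(np.abs(W[W != 0]))
    for t in candidates:
        # Estimated false discovery proportion at threshold t.
        fdp_hat = (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return np.flatnonzero(W >= t)
    return np.array([], dtype=int)
```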

Continuous normalizing flows are widely used in generative tasks, where a flow network transports from a data distribution $P$ to a normal distribution. A flow model that can transport from $P$ to an arbitrary $Q$, where both $P$ and $Q$ are accessible via finite samples, would be of interest for various applications, particularly in the recently developed telescoping density ratio estimation (DRE), which calls for the construction of intermediate densities to bridge between $P$ and $Q$. In this work, we propose such a ``Q-malizing flow'' by a neural-ODE model which is trained to transport invertibly from $P$ to $Q$ (and vice versa) from empirical samples and is regularized by minimizing the transport cost. The trained flow model allows us to perform infinitesimal DRE along the time-parametrized $\log$-density by training an additional continuous-time flow network using a classification loss, which estimates the time-partial derivative of the $\log$-density. Integrating the time-score network along time provides a telescopic DRE between $P$ and $Q$ that is more stable than a one-step DRE. The effectiveness of the proposed model is empirically demonstrated on mutual information estimation from high-dimensional data and energy-based generative models of image data.
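
The telescoping identity that motivates the construction (standard in telescoping DRE, stated in our notation): given intermediate densities $\rho_{t_0} = p, \rho_{t_1}, \dots, \rho_{t_K} = q$ bridging $P$ and $Q$,

\[
\log \frac{q(x)}{p(x)} \;=\; \sum_{k=1}^{K} \log \frac{\rho_{t_k}(x)}{\rho_{t_{k-1}}(x)},
\]

and in the continuum limit the sum becomes an integral of the time-partial derivative $\partial_t \log \rho_t(x)$, which is exactly what the additional time-score network is trained to estimate.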

For a given elliptic curve $E$ over a finite local ring, we denote by $E^{\infty}$ its subgroup at infinity. Every point $P \in E^{\infty}$ can be described solely in terms of its $x$-coordinate $P_x$, which can therefore be used to parameterize all its multiples $nP$. We refer to the coefficient of $(P_x)^i$ in the parameterization of $(nP)_x$ as the $i$-th multiplication polynomial. We show that this coefficient is a degree-$i$ rational polynomial in $n$ without a constant term. We also prove that no primes greater than $i$ may appear in the denominators of its terms. As a consequence, for every finite field $\mathbb{F}_q$ and any $k\in\mathbb{N}^*$, we prescribe the group structure of a generic elliptic curve defined over $\mathbb{F}_q[X]/(X^k)$, and we show that the ECDLP on $E^{\infty}$ may be solved efficiently.
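
In symbols (our notation, paraphrasing the statements above): for $P \in E^{\infty}$ there are multiplication polynomials $\psi_i$ with

\[
(nP)_x \;=\; \sum_{i \ge 1} \psi_i(n)\,(P_x)^i,
\qquad \psi_i \in \mathbb{Q}[n], \quad \deg \psi_i = i, \quad \psi_i(0) = 0,
\]

and no prime greater than $i$ appears in the denominator of any coefficient of $\psi_i$.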

Generalized approximate message passing (GAMP) is a computationally efficient algorithm for estimating an unknown signal $w_0\in\mathbb{R}^N$ from a random linear measurement $y= Xw_0 + \epsilon\in\mathbb{R}^M$, where $X\in\mathbb{R}^{M\times N}$ is a known measurement matrix and $\epsilon$ is the noise vector. The salient feature of GAMP is that it can provide an unbiased estimator $\hat{r}^{\rm G}\sim\mathcal{N}(w_0, \hat{s}^2I_N)$, which can be used for various hypothesis-testing methods. In this study, we consider the bootstrap average of an unbiased estimator of GAMP for the elastic net. By numerically analyzing the state evolution of \emph{approximate message passing with resampling}, which has been proposed for computing bootstrap statistics of the elastic net estimator, we investigate when bootstrap averaging reduces the variance of the unbiased estimator, as well as the effect of optimizing the size of each bootstrap sample and the hyperparameter of the elastic net regularization, in the asymptotic setting $M, N\to\infty, M/N\to\alpha\in(0,\infty)$. The results indicate that bootstrap averaging effectively reduces the variance of the unbiased estimator when the actual data generation process is inconsistent with the sparsity assumption of the regularization and the sample size is small. Furthermore, we find that when $w_0$ is less sparse and the data size is small, the system undergoes a phase transition. The phase transition indicates the existence of the region where the ensemble average of unbiased estimators of GAMP for the elastic net minimization problem yields the unbiased estimator with the minimum variance.
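
For reference, the elastic net estimator being bootstrapped has the standard form (up to the paper's choice of scaling)

\[
\hat{w} \;=\; \operatorname*{arg\,min}_{w \in \mathbb{R}^N} \; \frac{1}{2}\,\|y - Xw\|_2^2 \;+\; \lambda_1 \|w\|_1 \;+\; \frac{\lambda_2}{2}\,\|w\|_2^2,
\]

and the bootstrap average in question is $\bar{r} = B^{-1} \sum_{b=1}^{B} \hat{r}^{\rm G}_b$, where $\hat{r}^{\rm G}_b$ denotes the GAMP unbiased estimator computed on the $b$-th resample.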

We prove a universal approximation property (UAP) for a class of ODENet and a class of ResNet, which are simplified mathematical models for deep learning systems with skip connections. The UAP can be stated as follows. Let $n$ and $m$ be the dimensions of the input and output data, and assume $m\leq n$. Then we show that an ODENet of width $n+m$ with any non-polynomial continuous activation function can approximate any continuous function on a compact subset of $\mathbb{R}^n$. We also show that ResNet has the same property as the depth tends to infinity. Furthermore, we derive the gradient of a loss function explicitly with respect to a certain tuning variable. We use this to construct a learning algorithm for ODENet. To demonstrate the usefulness of this algorithm, we apply it to a regression problem, a binary classification problem, and a multinomial classification problem on MNIST.
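
A minimal numerical sketch of the architecture in question (our simplification: explicit Euler integration of a width-$(n+m)$ ODENet with a $\tanh$ activation; the paper's construction and learning algorithm are more specific):

```python
import numpy as np

def odenet_forward(x, W, b, V, m, T=1.0, steps=100):
    """Embed x in R^{n+m}, flow along z' = V tanh(W z + b) by explicit
    Euler for time T, and read the output off the last m coordinates."""
    z = np.concatenate([x, np.zeros(m)])
    h = T / steps
    for _ in range(steps):
        z = z + h * (V @ np.tanh(W @ z + b))
    return z[-m:]

# Example shapes: n = 3 inputs, m = 1 output, width n + m = 4.
rng = np.random.default_rng(0)
n, m = 3, 1
W, b, V = rng.standard_normal((4, 4)), rng.standard_normal(4), rng.standard_normal((4, 4))
print(odenet_forward(rng.standard_normal(n), W, b, V, m))
```

Note that the Euler update $z \leftarrow z + h\,V\tanh(Wz + b)$ is exactly a residual block, which is the sense in which ResNet inherits the UAP as the depth tends to infinity.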

Following up on a previous analysis of graph embeddings, we generalize and expand some results to the general setting of vector symbolic architectures (VSA) and hyperdimensional computing (HDC). Importantly, we explore the mathematical relationship between superposition, orthogonality, and the tensor product. We establish the tensor product representation as the central representation, with a suite of unique properties. These include being the most general and expressive representation, as well as the most compressed representation that admits errorless unbinding and detection.
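
A small numerical sketch of the tensor-product representation (our toy example, with random Gaussian hypervectors standing in for near-orthogonal codes):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 256
# Role and filler hypervectors; independent Gaussians are nearly orthogonal.
a, b = rng.standard_normal(d), rng.standard_normal(d)
x, y = rng.standard_normal(d), rng.standard_normal(d)

# Bind each role to its filler via the tensor (outer) product, then superpose.
M = np.outer(a, x) + np.outer(b, y)

# Unbind role a: the cross-term vanishes exactly when a is orthogonal to b,
# which is the "errorless unbinding" enjoyed by the tensor product.
x_hat = (a @ M) / (a @ a)
cos = (x_hat @ x) / (np.linalg.norm(x_hat) * np.linalg.norm(x))
print("cosine similarity of recovered filler:", cos)
```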
