
Generalized sampling consists in the recovery of a function $f$ from the samples of the responses of a collection of linear shift-invariant systems to the input $f$. The reconstructed function is typically a member of a finitely generated integer-shift-invariant space that can reproduce polynomials up to a given degree $M$. While this property allows for an approximation power of order $(M+1)$, it comes with a tradeoff on the length of the support of the basis functions. Specifically, we prove that the sum of the lengths of the supports of the generators is at least $(M+1)$. Following this result, we introduce the notion of a shortest basis of degree $M$, which is motivated by our desire to minimize computational costs. We then demonstrate that any basis of shortest support generates a Riesz basis. Finally, we introduce a recursive algorithm to construct the shortest-support basis for any multi-spline space; it generalizes both polynomial and Hermite B-splines. This framework paves the way for novel applications such as fast derivative sampling with arbitrarily high approximation power.
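
The lower bound is attained by the classical cardinal B-spline of degree $M$: a single generator whose support has length exactly $M+1$. The following Python/SciPy sketch is an illustration only (not taken from the paper); it checks the support length and the degree-zero (partition-of-unity) reproduction for the choice $M=3$.

import numpy as np
from scipy.interpolate import BSpline

M = 3                                    # spline degree; approximation order M + 1
knots = np.arange(M + 2)                 # 0, 1, ..., M+1: support of length M+1
b = BSpline.basis_element(knots, extrapolate=False)
print("support length:", knots[-1] - knots[0])          # 4, matching the bound M + 1

# Degree-0 reproduction (partition of unity): integer shifts of b sum to 1.
x = np.linspace(0.0, 1.0, 5, endpoint=False)
shifts = np.arange(-(M + 1), 1)          # shifts whose supports cover [0, 1)
total = sum(np.nan_to_num(b(x - k)) for k in shifts)
print("partition of unity:", np.allclose(total, 1.0))   # True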

Related Content

We consider Gaussian processes that can be decomposed into a smooth mean function and a stationary autocorrelated noise process, and we develop a fully automatic nonparametric method for the simultaneous estimation of the mean and auto-covariance functions of such processes. Our empirical Bayes approach is data-driven, numerically efficient, and allows for the construction of confidence sets for the mean function. Performance is demonstrated in simulations and in a real-data analysis. The method is implemented in the R package eBsc that accompanies the paper.

Partially linear additive models generalize linear models: some covariates are assumed to have a linear relation with the response, while each of the remaining covariates enters the model through an unknown univariate smooth function. The harmful effect of outliers, either in the residuals or in the covariates involved in the linear component, has been described for partially linear models, that is, when only one nonparametric component is present in the model. When dealing with additive components, the problem of providing reliable estimators in the presence of atypical data is of practical importance, motivating the need for robust procedures. Hence, we propose a family of robust estimators for partially linear additive models by combining $B$-splines with robust linear regression estimators. We obtain consistency results, rates of convergence, and asymptotic normality for the linear components under mild assumptions. A Monte Carlo study is carried out to compare the performance of the robust proposal with its classical counterpart under different models and contamination schemes. The numerical experiments show the advantage of the proposed methodology for finite samples. We also illustrate the usefulness of the proposed approach on a real data set.
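
As a rough illustration of the general recipe only (not the estimators studied in the paper), one can expand the nonparametric covariate in a B-spline basis and fit the combined design with a robust linear fit. The sketch below uses scikit-learn's SplineTransformer and HuberRegressor as stand-ins for the paper's robust procedure; the simulated model and outlier scheme are likewise hypothetical.

import numpy as np
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)
n = 500
z = rng.uniform(0, 1, n)                           # covariate entering linearly
t = rng.uniform(0, 1, n)                           # covariate entering nonparametrically
y = 2.0 * z + np.sin(2 * np.pi * t) + rng.normal(0, 0.3, n)
y[rng.choice(n, size=25, replace=False)] += 8.0    # a few gross outliers in the response

# Expand t in a cubic B-spline basis and append the linear covariate z.
spline = SplineTransformer(degree=3, n_knots=10, include_bias=False)
X = np.column_stack([z, spline.fit_transform(t[:, None])])

# Robust linear fit on the combined design (Huber loss instead of least squares).
fit = HuberRegressor(alpha=1e-4, max_iter=500).fit(X, y)
print("estimated linear coefficient for z:", fit.coef_[0])   # close to 2 despite outliers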

The Chebyshev or $\ell_{\infty}$ estimator is an unconventional alternative to ordinary least squares for solving linear regressions. It is defined as the minimizer of the $\ell_{\infty}$ objective function \begin{align*} \hat{\boldsymbol{\beta}} := \arg\min_{\boldsymbol{\beta}} \|\boldsymbol{Y} - \mathbf{X}\boldsymbol{\beta}\|_{\infty}. \end{align*} The asymptotic distribution of the Chebyshev estimator under a fixed number of covariates was recently studied (Knight, 2020), yet finite-sample guarantees and generalizations to high-dimensional settings remain open. In this paper, we develop non-asymptotic upper bounds on the estimation error $\|\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}^*\|_2$ for a Chebyshev estimator $\hat{\boldsymbol{\beta}}$, in a regression setting with uniformly distributed noise $\varepsilon_i\sim U([-a,a])$ where $a$ is either known or unknown. With relatively mild assumptions on the (random) design matrix $\mathbf{X}$, we can bound the error rate by $\frac{C_p}{n}$ with high probability, for some constant $C_p$ depending on the dimension $p$ and the law of the design. Furthermore, we illustrate that there exist designs for which the Chebyshev estimator is (nearly) minimax optimal. In addition, we show that "Chebyshev's LASSO" has advantages over the regular LASSO in high-dimensional settings, provided that the noise is uniform. Specifically, we argue that it achieves a much faster rate of estimation under certain assumptions on the growth rate of the sparsity level and the ambient dimension with respect to the sample size.
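
For a concrete sense of the estimator, the $\ell_{\infty}$ minimization is a linear program in $(\boldsymbol{\beta}, t)$: minimize $t$ subject to $|y_i - \mathbf{x}_i^\top \boldsymbol{\beta}| \le t$ for all $i$. The sketch below (not the paper's code; the simulated design is hypothetical) solves it with SciPy's LP solver.

import numpy as np
from scipy.optimize import linprog

def chebyshev_regression(X, y):
    """L_inf (Chebyshev) regression: argmin_beta ||y - X beta||_inf via an LP."""
    n, p = X.shape
    # Variables: (beta_1, ..., beta_p, t); minimize t subject to |y - X beta| <= t.
    c = np.r_[np.zeros(p), 1.0]
    A_ub = np.block([[ X, -np.ones((n, 1))],
                     [-X, -np.ones((n, 1))]])
    b_ub = np.r_[y, -y]
    bounds = [(None, None)] * p + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_star = np.array([1.0, -2.0, 0.5])
y = X @ beta_star + rng.uniform(-0.1, 0.1, n)   # uniform noise, as in the paper's setting
print(chebyshev_regression(X, y))               # close to beta_star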

We consider a nonlocal evolution equation representing the continuum limit of a large ensemble of interacting particles on graphs forced by noise. The two principal ingredients of the continuum model are a nonlocal term and a Q-Wiener process, which describe the interactions among the particles in the network and the stochastic forcing, respectively. The network connectivity is given by a square-integrable function called a graphon. We prove that the initial value problem for the continuum model is well-posed. Further, we construct a semidiscrete (discrete in space, continuous in time) scheme and a fully discrete scheme for the nonlocal model. The former is obtained by a discontinuous Galerkin method, and the latter is based on further discretizing time using the Euler-Maruyama method. We prove convergence and estimate the rate of convergence in each case. For the semidiscrete scheme, the rate-of-convergence estimate is expressed in terms of the regularity of the graphon, the Q-Wiener process, and the initial data. We work in generalized Lipschitz spaces, which allows us to treat models with data of lower regularity. This is important for applications, as many interesting types of connectivity, including small-world and power-law, are expressed by graphons that are not smooth. The error analysis of the fully discrete scheme, on the other hand, reveals that for some models common in applied science, one has a higher speed of convergence than that predicted by the standard estimates for the Euler-Maruyama method. The rate-of-convergence analysis is supplemented with detailed numerical experiments, which are consistent with our analytical results. As a by-product, this work presents a rigorous justification for taking the continuum limit for a large class of interacting dynamical systems on graphs subject to noise.
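
For the time discretization alone, Euler-Maruyama replaces the stochastic integral over each step by a Gaussian increment. The toy sketch below (a finite particle system with a smooth stand-in kernel; it is not the paper's discontinuous Galerkin scheme and ignores its generalized Lipschitz-space setting) illustrates the stepping rule.

import numpy as np

def euler_maruyama(f, sigma, u0, T, n_steps, rng):
    # Time stepping for du = f(u) dt + sigma dW_t with i.i.d. Gaussian increments.
    dt = T / n_steps
    u = np.array(u0, dtype=float)
    path = [u.copy()]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=u.shape)   # Brownian increment
        u = u + f(u) * dt + sigma * dW
        path.append(u.copy())
    return np.array(path)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
W = np.exp(-np.abs(x[:, None] - x[None, :]))              # smooth stand-in "graphon"
drift = lambda u: (W @ u) / len(u) - u * W.mean(axis=1)   # nonlocal coupling term
path = euler_maruyama(drift, sigma=0.05, u0=np.sin(2 * np.pi * x),
                      T=1.0, n_steps=200, rng=rng)
print(path.shape)                                         # (201, 50)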

Non-overlapping codes are a set of codewords in $\bigcup_{n \ge 2} \mathbb{Z}_q^n$, where $\mathbb{Z}_q = \{0,1,\dots,q-1\}$, such that the prefix of each codeword is not a suffix of any codeword in the set, including itself; and, for variable-length codes, a codeword does not contain any other codeword as a subword. In this paper, we investigate a generic method to generalize binary codes to $q$-ary codes for $q > 2$, and we analyze this generalization on the two constructions given by Levenshtein (also by Gilbert; Chee, Kiah, Purkayastha, and Wang) and by Bilotta, respectively. The generalization of the former construction gives large non-expandable fixed-length non-overlapping codes whose size can be explicitly determined; the generalization of the latter construction is the first attempt to generate $q$-ary variable-length non-overlapping codes. More importantly, this generic method allows us to utilize the generating function approach to analyze the cardinality of the underlying $q$-ary non-overlapping codes. The generating function approach not only enables us to derive new results, e.g., recurrence relations on their cardinalities, new combinatorial interpretations for the constructions, and the limit superior of their cardinalities in some special cases, but also greatly simplifies the arguments for these results. Furthermore, we give an exact formula for the number of fixed-length words that do not contain any codeword of a variable-length non-overlapping code as a subword. This solves an open problem posed by Bilotta and induces a recursive upper bound on the maximum size of variable-length non-overlapping codes.
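
For intuition about the defining property, the brute-force checker below (an illustration, not one of the constructions analyzed in the paper) tests whether a given set of $q$-ary words, written as strings, is non-overlapping in the above sense; the example words are hypothetical.

from itertools import product

def is_non_overlapping(code):
    # A set of words is non-overlapping if no nonempty proper prefix of a codeword
    # equals a proper suffix of any codeword (including itself), and no codeword
    # contains a different codeword as a subword.
    code = list(code)
    for u, v in product(code, repeat=2):
        for k in range(1, min(len(u), len(v))):
            if u[:k] == v[-k:]:
                return False
        if u != v and u in v:
            return False
    return True

print(is_non_overlapping(["2110", "2100"]))          # True: a ternary fixed-length example
print(is_non_overlapping(["2110", "2100", "0121"]))  # False: prefix "0" is a suffix of "2110"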

Rate-splitting multiple access (RSMA) is a general multiple access scheme for downlink multi-antenna systems, embracing both classical spatial division multiple access and more recent non-orthogonal multiple access. Finding a linear precoding strategy that maximizes the sum spectral efficiency of RSMA is a challenging yet significant problem. In this paper, we put forth a novel precoder design framework that jointly finds the linear precoders for the common and private messages of RSMA. Our approach is first to approximate the non-smooth minimum function in the sum spectral efficiency of RSMA using a LogSumExp technique. Then, we reformulate the sum spectral efficiency maximization problem as a log-sum of Rayleigh quotients to convert it into a tractable non-convex optimization problem. By interpreting the first-order optimality condition of the reformulated problem as an eigenvector-dependent nonlinear eigenvalue problem, we reveal that a leading eigenvector is a locally optimal solution. To find the leading eigenvector, we propose a computationally efficient algorithm inspired by the power iteration method. Simulation results show that the proposed RSMA transmission strategy provides significant improvement in the sum spectral efficiency compared to state-of-the-art RSMA transmission methods, while requiring considerably less computational complexity.
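
The LogSumExp step can be made concrete through the standard smooth surrogate $\min_i x_i \approx -\tfrac{1}{\alpha}\log\sum_i e^{-\alpha x_i}$, whose gap to the true minimum is at most $\log(K)/\alpha$ for $K$ terms. The sketch below illustrates only this generic smoothing; the exact surrogate and smoothing parameter used in the paper may differ.

import numpy as np

def smooth_min(x, alpha=50.0):
    """LogSumExp surrogate for min(x): -(1/alpha) * log(sum(exp(-alpha * x))).

    The gap to the true minimum is at most log(len(x)) / alpha, so a larger alpha
    tightens the approximation at the cost of a stiffer (but smooth) objective.
    """
    x = np.asarray(x, dtype=float)
    # Shift by the minimum for numerical stability before exponentiating.
    m = x.min()
    return m - np.log(np.exp(-alpha * (x - m)).sum()) / alpha

rates = np.array([2.1, 1.4, 3.0])       # e.g., per-user common-stream rates (bits/s/Hz)
print(min(rates), smooth_min(rates))    # 1.4 vs. a smooth value slightly below 1.4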

In this paper, we analyse the singular values of a large $p\times n$ data matrix $\mathbf{X}_n= (\mathbf{x}_{n1},\ldots,\mathbf{x}_{nn})$ whose columns $\mathbf{x}_{nj}$ are independent $p$-dimensional vectors, possibly with different distributions. Such data matrices are common in high-dimensional statistics. Under the key assumptions that the covariance matrices $\mathbf{\Sigma}_{nj}=\text{Cov}(\mathbf{x}_{nj})$ are asymptotically simultaneously diagonalizable and that their spectra converge appropriately, we establish a limiting distribution for the singular values of $\mathbf{X}_n$ when both the dimension $p$ and the sample size $n$ grow to infinity at comparable rates. The matrix model includes, and goes beyond, many existing works on different types of sample covariance matrices, including the weighted sample covariance matrix, the Gram matrix model, and the sample covariance matrix of linear time series models. Furthermore, we develop two applications of our general approach. First, we obtain the existence and uniqueness of a new limiting spectral distribution of realized covariance matrices for a multi-dimensional diffusion process with anisotropic time-varying co-volatility processes. Second, we derive the limiting spectral distribution of the singular values of the data matrix for a recent matrix-valued auto-regressive model. Finally, for a generalized finite mixture model, the limiting spectral distribution of the singular values of the data matrix is obtained.
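
A quick simulation conveys the setting: independent columns with different (here diagonal, hence simultaneously diagonalizable) covariances, $p$ and $n$ of comparable size, and the empirical spectrum of the normalized data matrix. This is an illustrative sketch with made-up covariances, not a computation from the paper.

import numpy as np

rng = np.random.default_rng(2)
p, n = 400, 1200
sd1 = np.ones(p)                                      # covariance of the first column group
sd2 = np.where(np.arange(p) < p // 2, 2.0, 0.5)       # a different diagonal covariance
X = np.column_stack([rng.normal(0.0, sd1 if j < n // 2 else sd2, size=p)
                     for j in range(n)])
sv = np.linalg.svd(X / np.sqrt(n), compute_uv=False)  # singular values of X_n / sqrt(n)
hist, edges = np.histogram(sv ** 2, bins=30, density=True)
print(hist[:5])                                       # first bins of the empirical spectral density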

In this paper, we investigate dynamic resource scheduling (i.e., joint user, subchannel, and power scheduling) for downlink multi-channel non-orthogonal multiple access (MC-NOMA) systems over time-varying fading channels. Specifically, we address the weighted average sum rate maximization problem with quality-of-service (QoS) constraints. To facilitate fast resource scheduling, we focus on developing a very low-complexity algorithm. To this end, by leveraging Lagrangian duality and stochastic optimization theory, we first develop an opportunistic MC-NOMA scheduling algorithm whereby the original problem is decomposed into a series of subproblems, one for each time slot. Accordingly, resource scheduling works in an online manner by solving one subproblem per time slot, making it more applicable to practical systems. Then, we further develop a heuristic joint subchannel assignment and power allocation (Joint-SAPA) algorithm with very low computational complexity, called Joint-SAPA-LCC, to solve each subproblem. Finally, through simulation, we show that our Joint-SAPA-LCC algorithm provides performance comparable to existing Joint-SAPA algorithms while requiring much lower computational complexity. We also demonstrate that our opportunistic MC-NOMA scheduling algorithm, in which the Joint-SAPA-LCC algorithm is embedded, works well while satisfying the given QoS requirements.

In the estimation of the mean matrix of a multivariate normal distribution, generalized Bayes estimators with closed forms are provided, and sufficient conditions for their minimaxity are derived relative to both matrix and scalar quadratic loss functions. Generalized Bayes estimators of the covariance matrix are also given in closed form, and their dominance properties are discussed under the Stein loss function.

We present a generalization of the Cauchy/Lorentzian, Geman-McClure, Welsch/Leclerc, generalized Charbonnier, Charbonnier/pseudo-Huber/L1-L2, and L2 loss functions. By introducing robustness as a continuous parameter, our loss function allows algorithms built around robust loss minimization to be generalized, which improves performance on basic vision tasks such as registration and clustering. Interpreting our loss as the negative log of a univariate density yields a general probability distribution that includes the normal and Cauchy distributions as special cases. This probabilistic interpretation enables the training of neural networks in which the robustness of the loss automatically adapts itself during training, which improves performance on learning-based tasks such as generative image synthesis and unsupervised monocular depth estimation, without requiring any manual parameter tuning.
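
This loss family is commonly written with a shape parameter $\alpha$ and scale $c$ as $\rho(x,\alpha,c) = \frac{|\alpha-2|}{\alpha}\big[\big(\frac{(x/c)^2}{|\alpha-2|}+1\big)^{\alpha/2}-1\big]$, with the named losses recovered at particular values or limits of $\alpha$. The sketch below restates this commonly cited form; consult the paper for the exact parameterization and the adaptive (learned-$\alpha$) variant.

import numpy as np

def general_robust_loss(x, alpha, c=1.0, eps=1e-6):
    # Shape-parameterized robust loss: alpha = 2 gives L2, alpha = 1 Charbonnier /
    # pseudo-Huber, alpha -> 0 Cauchy / Lorentzian, alpha = -2 Geman-McClure,
    # alpha -> -inf Welsch / Leclerc.
    z = (x / c) ** 2
    if abs(alpha) < eps:                       # Cauchy / Lorentzian limit
        return np.log(0.5 * z + 1.0)
    if abs(alpha - 2.0) < eps:                 # quadratic (L2) case
        return 0.5 * z
    if np.isneginf(alpha):                     # Welsch / Leclerc limit
        return 1.0 - np.exp(-0.5 * z)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)

x = np.linspace(-3.0, 3.0, 7)
print(general_robust_loss(x, alpha=2.0))       # 0.5 * x**2: quadratic
print(general_robust_loss(x, alpha=-2.0))      # Geman-McClure: bounded as |x| grows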
