
We study the problem of recovering an unknown signal $\boldsymbol x$ given measurements obtained from a generalized linear model with a Gaussian sensing matrix. Two popular solutions are based on a linear estimator $\hat{\boldsymbol x}^{\rm L}$ and a spectral estimator $\hat{\boldsymbol x}^{\rm s}$. The former is a data-dependent linear combination of the columns of the measurement matrix, and its analysis is quite simple. The latter is the principal eigenvector of a data-dependent matrix, and a recent line of work has studied its performance. In this paper, we show how to optimally combine $\hat{\boldsymbol x}^{\rm L}$ and $\hat{\boldsymbol x}^{\rm s}$. At the heart of our analysis is the exact characterization of the joint empirical distribution of $(\boldsymbol x, \hat{\boldsymbol x}^{\rm L}, \hat{\boldsymbol x}^{\rm s})$ in the high-dimensional limit. This allows us to compute the Bayes-optimal combination of $\hat{\boldsymbol x}^{\rm L}$ and $\hat{\boldsymbol x}^{\rm s}$, given the limiting distribution of the signal $\boldsymbol x$. When the distribution of the signal is Gaussian, the Bayes-optimal combination has the form $\theta\hat{\boldsymbol x}^{\rm L}+\hat{\boldsymbol x}^{\rm s}$, and we derive the optimal combination coefficient $\theta$. In order to establish the limiting distribution of $(\boldsymbol x, \hat{\boldsymbol x}^{\rm L}, \hat{\boldsymbol x}^{\rm s})$, we design and analyze an Approximate Message Passing (AMP) algorithm whose iterates give $\hat{\boldsymbol x}^{\rm L}$ and approach $\hat{\boldsymbol x}^{\rm s}$. Numerical simulations demonstrate the improvement of the proposed combination over each of the two methods considered separately.
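As a concrete illustration, here is a minimal numpy sketch of the two base estimators and their linear combination, assuming measurements $y_i$ generated from Gaussian sensing vectors $\boldsymbol a_i$ (the rows of `A`); the preprocessing function `preprocess` and the coefficient `theta` are illustrative placeholders, not the paper's derived choices.

```python
import numpy as np

def linear_and_spectral(A, y, preprocess=np.tanh):
    """Linear and spectral estimators for a GLM (hedged sketch).

    A : (n, d) Gaussian sensing matrix with rows a_i; y : (n,) measurements.
    `preprocess` stands in for the preprocessing function applied to y when
    building the spectral matrix; the best choice depends on the model.
    """
    n, d = A.shape
    # Linear estimator: data-dependent linear combination of the sensing vectors.
    x_lin = A.T @ y / n
    # Spectral estimator: top eigenvector of D = (1/n) sum_i T(y_i) a_i a_i^T.
    T = preprocess(y)
    D = (A.T * T) @ A / n
    eigvals, eigvecs = np.linalg.eigh(D)   # eigenvalues in ascending order
    x_spec = eigvecs[:, -1]                # principal eigenvector
    return x_lin, x_spec

def combined_estimator(x_lin, x_spec, theta):
    # For a Gaussian signal, the Bayes-optimal combination has this form;
    # the paper derives the optimal theta from the limiting joint distribution.
    return theta * x_lin + x_spec
```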

Related Content

We develop new adaptive algorithms for variational inequalities with monotone operators, which capture many problems of interest, notably convex optimization and convex-concave saddle point problems. Our algorithms automatically adapt to unknown problem parameters such as the smoothness and the norm of the operator, and the variance of the stochastic evaluation oracle. We show that our algorithms are universal and simultaneously achieve the optimal convergence rates in the non-smooth, smooth, and stochastic settings. The convergence guarantees of our algorithms improve over existing adaptive methods by an $\Omega(\sqrt{\ln T})$ factor, matching the optimal non-adaptive algorithms. Additionally, prior works require that the optimization domain is bounded. In this work, we remove this restriction and give algorithms for unbounded domains that are adaptive and universal. Our general proof techniques can be used for many variants of the algorithm using one or two operator evaluations per iteration. The classical methods based on the ExtraGradient/MirrorProx algorithm require two operator evaluations per iteration, which is the dominant factor in the running time in many settings.
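The following is a minimal sketch, under assumptions of my own choosing, of the general template: an extragradient update (two operator evaluations per iteration) with an AdaGrad-style step size that adapts to the observed operator magnitudes. It illustrates the idea, not the paper's exact algorithm.

```python
import numpy as np

def adaptive_extragradient(F, z0, num_iters=1000, D=1.0, eps=1e-12):
    """Extragradient with an AdaGrad-style step size (illustrative sketch).

    F : a monotone operator, e.g. F(z) = (grad_x f(x, y), -grad_y f(x, y))
    for a convex-concave saddle-point problem. D is a diameter-like constant.
    """
    z = z0.astype(float).copy()
    sum_sq = 0.0
    z_avg, total_weight = np.zeros_like(z), 0.0
    for _ in range(num_iters):
        g = F(z)
        sum_sq += np.dot(g, g)
        eta = D / np.sqrt(sum_sq + eps)   # step size adapts to observed norms
        z_half = z - eta * g              # extrapolation (first evaluation)
        z = z - eta * F(z_half)           # update (second evaluation)
        z_avg += eta * z_half             # step-size-weighted averaging
        total_weight += eta
    return z_avg / total_weight
```

For instance, `F = lambda z: np.array([z[1], -z[0]])` encodes the bilinear saddle-point problem $\min_x \max_y xy$, whose solution is the origin.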

The Erd\H{o}s distinct distance problem is a ubiquitous problem in discrete geometry. Less well known is Erd\H{o}s' distinct angle problem, the problem of finding the minimum number of distinct angles between $n$ non-collinear points in the plane. The standard problem is already well understood. However, it admits many of the same variants as the distinct distance problem, many of which are unstudied. We provide upper and lower bounds on a broad class of distinct angle problems. We show that the number of distinct angles formed by $n$ points in general position is $O(n^{\log_2(7)})$, providing the first non-trivial bound for this quantity. We introduce a new class of asymptotically optimal point configurations with no four cocircular points. Then, we analyze the sensitivity of asymptotically optimal point sets to perturbation, yielding a much broader class of asymptotically optimal configurations. In higher dimensions, we show that a variant of Lenz's construction admits fewer distinct angles than the optimal configurations in two dimensions. We also show that the minimum size of a maximal subset of $n$ points in general position admitting only unique angles is $\Omega(n^{1/5})$ and $O(n^{\log_2(7)/3})$. Finally, we provide bounds on the partite variants of the standard distinct angle problem.
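For small instances, the quantity in question can be computed by brute force; here is a hedged sketch (assuming pairwise-distinct points, with a crude tolerance-based notion of angle equality).

```python
import numpy as np
from itertools import permutations

def count_distinct_angles(points, tol=1e-9):
    """Count distinct angles formed by a planar point set (O(n^3) brute force).

    Each ordered triple (a, b, c) of distinct points defines the angle at the
    vertex b between the rays b->a and b->c; angles are bucketed by `tol`.
    """
    angles = set()
    for a, b, c in permutations(points, 3):
        u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        theta = np.arccos(np.clip(cos, -1.0, 1.0))
        angles.add(round(theta / tol))   # tolerance bucket
    return len(angles)
```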

In this paper we analyze a mixed displacement-pseudostress formulation for the elasticity eigenvalue problem. We propose a finite element method that approximates the pseudostress tensor with Raviart-Thomas elements and the displacement with piecewise polynomials. With the aid of the classical theory of compact operators, we prove that our method is convergent and does not introduce spurious modes. We also obtain error estimates for the proposed method. Finally, we report numerical tests supporting the theoretical results.
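Once the Raviart-Thomas / piecewise-polynomial discretization is assembled (which requires a finite element library and is omitted here), the discrete problem is a generalized matrix eigenvalue problem. A hedged sketch with placeholder matrices:

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import eigsh

# Placeholder "stiffness" and "mass" matrices standing in for the assembled
# mixed displacement-pseudostress discretization; in practice these come
# from the finite element assembly.
n = 200
B = sparse_random(n, n, density=0.05, random_state=0)
K = B @ B.T + 10 * identity(n)   # symmetric positive definite stand-in
M = identity(n, format="csc")

# Eigenvalues of K u = lambda M u nearest zero approximate the lowest modes;
# absence of spurious modes is checked under mesh refinement.
vals, vecs = eigsh(K, k=5, M=M, sigma=0.0, which="LM")
print(np.sort(vals))
```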

I consider the estimation of the average treatment effect (ATE), in a population that can be divided into $G$ groups, and such that one has unbiased and uncorrelated estimators of the conditional average treatment effect (CATE) in each group. These conditions are for instance met in stratified randomized experiments. I assume that the outcome is homoscedastic, and that each CATE is bounded in absolute value by $B$ standard deviations of the outcome, for some known constant $B$. I derive, across all linear combinations of the CATEs' estimators, the estimator of the ATE with the lowest worst-case mean-squared error. This estimator assigns a weight equal to group $g$'s share in the population to the most precisely estimated CATEs, and a weight proportional to one over the estimator's variance to the least precisely estimated CATEs. Given $B$, this optimal estimator is feasible: the weights only depend on known quantities. I then allow the estimators to have positive covariances, known up to the outcome's variance. This condition is met by differences-in-differences estimators in staggered adoption designs, if potential outcomes are homoscedastic and uncorrelated. Under those assumptions, I show that the minimax estimator is still feasible and can easily be computed. In realistic numerical examples, the minimax estimator can lead to substantial precision and worst-case MSE gains relative to the unbiased estimator.
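A hedged sketch of the weighting shape described above: each group keeps its population share if its CATE is precisely estimated, and otherwise gets a weight proportional to the inverse of its estimator's variance. The cutoff `c` is a placeholder; the paper derives its optimal value from $B$ and the variances.

```python
import numpy as np

def minimax_weights(shares, variances, c):
    """Population share for precisely estimated groups (small variance),
    c / variance for imprecisely estimated ones (illustrative rule)."""
    shares = np.asarray(shares, dtype=float)
    variances = np.asarray(variances, dtype=float)
    return np.minimum(shares, c / variances)

def minimax_ate(cate_estimates, shares, variances, c):
    w = minimax_weights(shares, variances, c)
    return np.dot(w, np.asarray(cate_estimates, dtype=float))
```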

We study approximation of multivariate periodic functions from Besov and Triebel--Lizorkin spaces of dominating mixed smoothness by the Smolyak algorithm constructed using a special class of quasi-interpolation operators of Kantorovich-type. These operators are defined similarly to the classical sampling operators, by replacing samples with the average values of a function on small intervals (or, more generally, with sampled values of a convolution of a given function with an appropriate kernel). In this paper, we estimate the rate of convergence of the corresponding Smolyak algorithm in the $L_q$-norm for functions from the Besov spaces $\mathbf{B}_{p,\theta}^s(\mathbb{T}^d)$ and the Triebel--Lizorkin spaces $\mathbf{F}_{p,\theta}^s(\mathbb{T}^d)$ for all $s>0$ and admissible $1\le p,\theta\le \infty$, as well as provide analogues of the Littlewood--Paley-type characterizations of these spaces in terms of families of quasi-interpolation operators.
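In one dimension, the averaged samples behind a Kantorovich-type operator can be sketched as follows; the generating function `kernel` and the dyadic grid are illustrative assumptions, not the paper's specific operators.

```python
import numpy as np
from scipy.integrate import quad

def kantorovich_coefficients(f, j, k_range):
    """Replace point samples f(k / 2^j) with averages of f over the dyadic
    intervals [k/2^j, (k+1)/2^j], as in Kantorovich-type operators."""
    h = 2.0 ** (-j)
    coeffs = {}
    for k in k_range:
        integral, _ = quad(f, k * h, (k + 1) * h)
        coeffs[k] = integral / h
    return coeffs

def quasi_interpolant(f, j, x, kernel, k_range):
    # Q_j f(x) = sum_k (averaged sample)_k * kernel(2^j * x - k)
    c = kantorovich_coefficients(f, j, k_range)
    return sum(c[k] * kernel(2.0 ** j * x - k) for k in k_range)
```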

Functional data analysis is a fast-evolving branch of statistics. Estimation procedures for the popular functional linear model either suffer from a lack of robustness or are computationally burdensome. To address these shortcomings, a flexible family of penalized lower-rank estimators based on a bounded loss function is proposed. The proposed class of estimators is shown to be consistent and can attain high rates of convergence with respect to prediction error under weak regularity conditions. These results can be generalized to higher dimensions under similar assumptions. The finite-sample performance of the proposed family of estimators is investigated by a Monte Carlo study, which shows that these estimators reach high efficiency while offering protection against outliers. The proposed estimators compare favorably to existing approaches, both robust and non-robust alternatives. The good performance of the method is also illustrated on a complex real dataset.
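To make the ingredients concrete, here is a hedged sketch of a lower-rank robust fit: discretized curves are projected onto a few principal components, and the scores are fit by iteratively reweighted least squares under Tukey's bounded bisquare loss. This illustrates the general idea, not the paper's estimator or penalty.

```python
import numpy as np

def tukey_weights(r, c=4.685):
    """IRLS weights from Tukey's bisquare, a bounded loss: residuals larger
    than c (in robust-scale units) get weight zero."""
    return np.where(np.abs(r) <= c, (1.0 - (r / c) ** 2) ** 2, 0.0)

def robust_lowrank_fit(X, y, rank=5, n_iter=50):
    """X: (n, T) discretized curves, y: (n,) scalar responses."""
    Xc = X - X.mean(axis=0)
    yc = y - np.median(y)                       # crude robust centering
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    S = Xc @ Vt[:rank].T                        # principal component scores
    beta = np.zeros(rank)
    for _ in range(n_iter):
        r = yc - S @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12   # MAD residual scale
        w = tukey_weights(r / scale)
        beta = np.linalg.solve(S.T @ (w[:, None] * S), S.T @ (w * yc))
    return beta, Vt[:rank]                      # slope scores and basis
```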

Semi-functional linear regression models postulate a linear relationship between a scalar response and a functional covariate, and also include a non-parametric component involving a univariate explanatory variable. It is of practical importance to obtain estimators for these models that are robust against high-leverage outliers, which are generally difficult to identify and may cause serious damage to least squares and Huber-type $M$-estimators. For that reason, robust estimators for semi-functional linear regression models are constructed by combining $B$-splines, used to approximate both the functional regression parameter and the nonparametric component, with robust regression estimators based on a bounded loss function and a preliminary residual scale estimator. Consistency and rates of convergence for the proposed estimators are derived under mild regularity conditions. The reported numerical experiments show the advantage of the proposed methodology over the classical least squares and Huber-type $M$-estimators for finite samples. The analysis of real examples illustrates that the robust estimators provide better predictions for non-outlying points than the classical ones, and that when potential outliers are removed from the training and test sets, both methods behave very similarly.
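A hedged sketch of the B-spline building blocks: one basis matrix for approximating the functional slope (via quadrature against the basis) and one for the nonparametric component in the scalar covariate $u$. Grids, basis sizes, and the quadrature rule are illustrative; a robust fit such as the IRLS sketch above can then be run on the stacked design.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(u, n_basis=10, degree=3):
    """Design matrix of a B-spline basis on [0, 1] evaluated at points u."""
    inner = np.linspace(0.0, 1.0, n_basis - degree + 1)
    knots = np.concatenate([np.zeros(degree), inner, np.ones(degree)])
    return BSpline.design_matrix(u, knots, degree).toarray()

def semi_functional_design(X_curves, t_grid, u, n_basis=10):
    """Stack the functional block (approximate integrals of the curves
    against the basis) with the nonparametric block in u.
    Both t_grid and u are assumed to lie in [0, 1]."""
    Phi = bspline_design(t_grid, n_basis)   # basis evaluated on the time grid
    dt = t_grid[1] - t_grid[0]
    Z_fun = X_curves @ Phi * dt             # Riemann quadrature of integrals
    Z_np = bspline_design(u, n_basis)
    return np.hstack([Z_fun, Z_np])
```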

We prove necessary density conditions for sampling in spectral subspaces of a second order uniformly elliptic differential operator on $\mathbb{R}^d$ with slowly oscillating symbol. For constant coefficient operators, these are precisely Landau's necessary density conditions for bandlimited functions, but for more general elliptic differential operators it has been unknown whether such a critical density even exists. Our results prove the existence of a suitable critical sampling density and compute it in terms of the geometry defined by the elliptic operator. In dimension 1, functions in a spectral subspace can be interpreted as functions with variable bandwidth, and we obtain a new critical density for variable bandwidth. The methods combine the spectral theory and the regularity theory of elliptic partial differential operators, some elements of limit operators, certain compactifications of $\mathbb{R}^d$, and the theory of reproducing kernel Hilbert spaces.

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
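The key local step in DRS admits a compact sketch: the non-smooth objective $f$ is replaced by the Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$, whose gradient $\nabla f_\gamma(x) = \mathbb{E}[\partial f(x + \gamma Z)]$ can be estimated by Monte Carlo from subgradients at perturbed points. Parameter names and sample counts here are illustrative.

```python
import numpy as np

def smoothed_gradient(subgrad_f, x, gamma=0.1, n_samples=16, seed=None):
    """Monte Carlo estimate of grad f_gamma(x), where
    f_gamma(x) = E[f(x + gamma * Z)] with Z standard Gaussian.
    `subgrad_f` returns a subgradient of the non-smooth f at a point."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_samples, x.shape[0]))
    return np.mean([subgrad_f(x + gamma * z) for z in Z], axis=0)
```

In a decentralized setting, each computing unit can average such estimates over its own samples before communicating, which loosely mirrors how the local smoothing interacts with the network.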

We consider the task of learning the parameters of a {\em single} component of a mixture model, for the case when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the overall original problem, where one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and lower computational complexity than existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
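As a hedged illustration (not the paper's algorithm) of how a matrix-based method can exploit side information, assume a weight $s_i$ per sample believed to correlate with membership in the target component; reweighting the second-moment matrix then makes the target direction dominant.

```python
import numpy as np

def search_component_direction(X, side_weights):
    """X: (n, d) samples; side_weights: (n,) nonnegative scores that are
    larger for samples more likely to belong to the target component.
    Returns the top eigenvector of the reweighted second-moment matrix,
    which aligns with the target component's mean direction when the side
    information is informative (a sign ambiguity remains)."""
    w = np.asarray(side_weights, dtype=float)
    w = w / w.sum()
    M = (X * w[:, None]).T @ X        # sum_i w_i x_i x_i^T
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -1]
```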
