
We study the problem of estimating the left and right singular subspaces for a collection of heterogeneous random graphs with a shared common structure. We analyze an algorithm that first estimates the orthogonal projection matrices corresponding to these subspaces for each individual graph, then computes the average of the projection matrices, and finally returns the matrices whose columns are the eigenvectors corresponding to the $d$ largest eigenvalues of these sample averages. We show that the algorithm yields estimates of the left and right singular vectors whose row-wise fluctuations are normally distributed around the rows of the true singular vectors. We then consider a two-sample hypothesis test for the null hypothesis that two graphs have the same edge probability matrices against the alternative hypothesis that their edge probability matrices differ. Using the limiting distributions for the singular subspaces, we present a test statistic whose distribution converges to a central $\chi^2$ (resp. non-central $\chi^2$) distribution under the null (resp. alternative) hypothesis. Finally, we adapt the theoretical analysis for multiple networks to the setting of distributed PCA; in particular, we derive normal approximations for the rows of the estimated eigenvectors obtained by distributed PCA when the data exhibit a spiked covariance structure.
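A minimal numpy sketch of the averaging step described above; the function name and the use of a dense per-graph SVD are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def average_singular_subspaces(adjacency_list, d):
    """Average the per-graph projection matrices and re-extract subspaces.

    For each (possibly rectangular) adjacency matrix A, compute its top-d
    singular vectors, form the corresponding orthogonal projections, average
    the projections across graphs, and return the top-d eigenvectors of the
    two averages as the common left/right singular subspace estimates.
    """
    n, m = adjacency_list[0].shape
    P_left, P_right = np.zeros((n, n)), np.zeros((m, m))
    for A in adjacency_list:
        U, _, Vt = np.linalg.svd(A, full_matrices=False)
        U_d, V_d = U[:, :d], Vt[:d, :].T
        P_left += U_d @ U_d.T
        P_right += V_d @ V_d.T
    P_left /= len(adjacency_list)
    P_right /= len(adjacency_list)
    # np.linalg.eigh returns eigenvalues in ascending order; take the last d.
    _, V_l = np.linalg.eigh(P_left)
    _, V_r = np.linalg.eigh(P_right)
    return V_l[:, -d:], V_r[:, -d:]
```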

Related content

In this paper, we focus on solving a distributed convex aggregative optimization problem over a network, where each agent has its own cost function that depends not only on its own decision variable but also on an aggregate of all agents' decision variables. The decision variables are constrained to a feasible set. To minimize the sum of the cost functions when each agent knows only its local cost function, we propose a distributed Frank-Wolfe algorithm based on gradient tracking, in which each node maintains two estimates: an estimate of the sum of the agents' decision variables and an estimate of the gradient of the global function. The algorithm is projection-free and only requires solving a linear optimization problem to obtain a search direction at each step. We show the convergence of the proposed algorithm for convex and smooth objective functions over a time-varying network. Finally, we demonstrate the convergence and computational efficiency of the proposed algorithm via numerical simulations.
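A schematic sketch of one such gradient-tracking Frank-Wolfe update, under simplifying assumptions not made in the paper: the aggregate is the plain sum of the decision variables, the feasible set is a box (so the linear oracle has a closed form), and the mixing matrix `W` is static and doubly stochastic rather than time-varying.

```python
import numpy as np

def lmo_box(g, lo, hi):
    # Linear minimization oracle over the box [lo, hi]^p:
    # argmin_{s in box} <g, s> is attained coordinatewise at a corner.
    return np.where(g >= 0.0, lo, hi)

def distributed_fw_aggregative(grad_x, grad_u, W, x0, lo, hi, n_iters):
    """Schematic distributed Frank-Wolfe with aggregate and gradient tracking.

    grad_x(i, x_i, u): gradient of f_i w.r.t. its own decision variable.
    grad_u(i, x_i, u): gradient of f_i w.r.t. the aggregate variable.
    W: doubly stochastic mixing matrix (static here); x0: (n, p) initial points.
    """
    n, _ = x0.shape
    x = x0.copy()
    u = x.copy()  # local estimates of the aggregate of the decision variables
    y = np.stack([grad_u(i, x[i], u[i]) for i in range(n)])  # tracked gradients
    for k in range(n_iters):
        gamma = 2.0 / (k + 2.0)  # standard Frank-Wolfe step size
        direction = np.stack([grad_x(i, x[i], u[i]) + y[i] for i in range(n)])
        s = np.stack([lmo_box(direction[i], lo, hi) for i in range(n)])
        g_old = np.stack([grad_u(i, x[i], u[i]) for i in range(n)])
        x_new = x + gamma * (s - x)
        u = W @ u + (x_new - x)  # dynamic average consensus on the aggregate
        g_new = np.stack([grad_u(i, x_new[i], u[i]) for i in range(n)])
        y = W @ y + g_new - g_old  # track the average aggregate-gradient
        x = x_new
    return x
```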

Motivated by several delay-bounded mission-critical applications, optimizing the end-to-end reliability of multi-hop networks subject to end-to-end delay constraints on the packets is an important problem. In this direction, Automatic Repeat Request (ARQ) based strategies have recently been proposed, in which a certain total number of ARQs (which captures the end-to-end delay) is distributed across the nodes so that the end-to-end reliability is optimized. Although such strategies provide fine control for trading end-to-end delay against end-to-end reliability, their performance degrades in slowly-varying channel conditions. To address this drawback, we propose a Chase Combining Hybrid ARQ (CC-HARQ) based multi-hop network and study the same problem of distributing a total number of ARQs so as to optimize the end-to-end reliability. Towards solving the problem, we first show that the objective function of the optimization problem is intractable due to the presence of Marcum-Q functions. We therefore propose an approximation of the objective function and then prove a set of necessary and sufficient conditions on the near-optimal ARQ distribution. Subsequently, we propose a low-complexity algorithm to solve the problem for any network size. We show that CC-HARQ based strategies are particularly appealing in slow-fading channels, where the existing ARQ strategies fail.
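The allocation step can be illustrated with a generic greedy heuristic: hand out the ARQ budget one attempt at a time to the hop with the largest marginal gain in log-reliability. The per-hop success model `hop_success_prob` is a placeholder; the paper instead derives near-optimality conditions for its approximated Chase-combining outage expressions.

```python
import numpy as np

def allocate_arqs(hop_success_prob, n_hops, total_arqs):
    """Greedily distribute a total ARQ budget across hops.

    hop_success_prob(h, q): probability that hop h delivers the packet
    within q attempts -- a placeholder for the approximated Chase-combining
    expressions. Each hop gets at least one attempt; every remaining ARQ
    goes to the hop with the largest gain in log end-to-end reliability.
    """
    q = np.ones(n_hops, dtype=int)
    for _ in range(total_arqs - n_hops):
        gains = [
            np.log(hop_success_prob(h, q[h] + 1))
            - np.log(hop_success_prob(h, q[h]))
            for h in range(n_hops)
        ]
        q[int(np.argmax(gains))] += 1
    return q  # end-to-end reliability: prod over h of hop_success_prob(h, q[h])
```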

The Receiver Operating Characteristic (ROC) curve is a useful tool that measures the discriminating power of a continuous variable, or the accuracy of a pharmaceutical or medical test, in distinguishing between two conditions or classes. In certain situations, the practitioner may be able to measure covariates related to the diagnostic variable that can increase the discriminating power of the ROC curve. To protect against the presence of atypical data among the observations, we introduce a procedure to obtain robust estimators of the ROC curve in the presence of covariates. The proposal focuses on a semiparametric approach that fits a location-scale regression model to the diagnostic variable and considers empirical estimators of the distributions of the regression residuals. Robust parametric estimators are combined with adaptive weighted empirical distribution estimators to down-weight the influence of outliers. The uniform consistency of the proposal is derived under mild assumptions. A Monte Carlo study compares the performance of the proposed robust estimators with the classical ones, in both clean and contaminated samples. A real data set is also analysed.
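A minimal sketch of the covariate-specific ROC construction from residual distributions, using a Huber-loss location fit as a stand-in for the paper's robust location-scale estimators and ignoring the scale component for brevity; all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

def covariate_roc(X_h, y_h, X_d, y_d, x0, grid):
    """Covariate-specific ROC via location regression fits per group.

    X_h, y_h: covariates and diagnostic values for the healthy group;
    X_d, y_d: the same for the diseased group; x0: covariate value of
    interest; grid: array of false positive rates t in (0, 1).
    Returns the true positive rate at each t, i.e. the ROC curve at x0.
    """
    fit_h = HuberRegressor().fit(X_h, y_h)
    fit_d = HuberRegressor().fit(X_d, y_d)
    res_h = y_h - fit_h.predict(X_h)   # healthy residuals
    res_d = y_d - fit_d.predict(X_d)   # diseased residuals
    x0 = x0.reshape(1, -1)
    shift = fit_d.predict(x0) - fit_h.predict(x0)   # mu_D(x0) - mu_H(x0)
    # ROC(t) = 1 - F_D( F_H^{-1}(1 - t) + mu_H(x0) - mu_D(x0) )
    thresholds = np.quantile(res_h, 1.0 - grid)
    return np.array([(res_d > c - shift).mean() for c in thresholds])
```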

We establish in this work approximation results of deep neural networks for smooth functions measured in Sobolev norms, motivated by recent developments in numerical solvers for partial differential equations using deep neural networks. Our approximation results are nonasymptotic in the sense that the error bounds are explicitly characterized in terms of both the width and the depth of the networks simultaneously, with all involved constants explicitly determined. Namely, for $f\in C^s([0,1]^d)$, we show that deep ReLU networks of width $\mathcal{O}(N\log{N})$ and of depth $\mathcal{O}(L\log{L})$ can achieve a nonasymptotic approximation rate of $\mathcal{O}(N^{-2(s-1)/d}L^{-2(s-1)/d})$ with respect to the $\mathcal{W}^{1,p}([0,1]^d)$ norm for $p\in[1,\infty)$. If either the ReLU function or its square is applied as the activation function to construct deep neural networks of width $\mathcal{O}(N\log{N})$ and of depth $\mathcal{O}(L\log{L})$ to approximate $f\in C^s([0,1]^d)$, the approximation rate is $\mathcal{O}(N^{-2(s-n)/d}L^{-2(s-n)/d})$ with respect to the $\mathcal{W}^{n,p}([0,1]^d)$ norm for $p\in[1,\infty)$.
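To make the rate concrete, here is one instantiation of the second bound (a direct substitution into the stated formula, not an additional result):

```latex
% Take d = 2, s = 3, and measure the error in the W^{1,p} norm (n = 1).
% Then 2(s - n)/d = 2(3 - 1)/2 = 2, so networks of width O(N log N) and
% depth O(L log L) achieve
\|f - \phi\|_{\mathcal{W}^{1,p}([0,1]^2)} = \mathcal{O}\!\left(N^{-2}L^{-2}\right).
```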

We propose \textbf{JAWS}, a series of wrapper methods for distribution-free uncertainty quantification tasks under covariate shift, centered on our core method \textbf{JAW}, the \textbf{JA}ckknife+ \textbf{W}eighted with likelihood-ratio weights. JAWS also includes computationally efficient \textbf{A}pproximations of JAW using higher-order influence functions: \textbf{JAWA}. Theoretically, we show that JAW relaxes the jackknife+'s assumption of data exchangeability to achieve the same finite-sample coverage guarantee even under covariate shift. JAWA further approaches the JAW guarantee in the limit of either the sample size or the influence function order, under mild assumptions. Moreover, we propose a general approach for repurposing any distribution-free uncertainty quantification method and its guarantees to the task of risk assessment: a task that estimates the probability that the true label lies within a user-specified interval. We then propose \textbf{JAW-R} and \textbf{JAWA-R} as the repurposed versions of the proposed methods for \textbf{R}isk assessment. Practically, the JAWS methods outperform state-of-the-art predictive inference baselines on a variety of biased real-world data sets, for both interval-generation and risk-assessment auditing tasks.
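A rough sketch of a jackknife+-style interval with likelihood-ratio weights, to show the shape of the computation; the helper names (`loo_models`, `lr_weight`) are hypothetical, and the exact weighted-quantile construction and its coverage guarantee are as specified in the paper, not here.

```python
import numpy as np

def weighted_quantile(values, weights, q):
    """Quantile of the discrete distribution putting `weights` on `values`."""
    order = np.argsort(values)
    v, w = np.asarray(values, dtype=float)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return v[min(np.searchsorted(cdf, q), len(v) - 1)]

def jaw_interval(loo_models, X, y, x_test, lr_weight, alpha=0.1):
    """Sketch of a jackknife+-style interval with likelihood-ratio weights.

    loo_models[i]: model fitted with training point i held out (hypothetical
    helper); lr_weight(x): estimated test/train covariate density ratio.
    """
    n = len(y)
    x_test = x_test.reshape(1, -1)
    resid = np.array(
        [abs(y[i] - loo_models[i].predict(X[i : i + 1])[0]) for i in range(n)]
    )
    mu = np.array([loo_models[i].predict(x_test)[0] for i in range(n)])
    w = np.array([lr_weight(X[i]) for i in range(n)])
    w_full = np.append(w, lr_weight(x_test[0]))  # weight on the test point
    lo = weighted_quantile(np.append(mu - resid, -np.inf), w_full, alpha)
    hi = weighted_quantile(np.append(mu + resid, np.inf), w_full, 1.0 - alpha)
    return lo, hi
```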

Suppose we are given an integer $k \leq n$ and $n$ boxes labeled $1,\ldots, n$ by an adversary, each containing a number chosen from an unknown distribution. We have to choose an order in which to sequentially open these boxes, and each time we open the next box in this order, we learn its number. If we reject the number in a box, the box cannot be recalled. Our goal is to accept the $k$ largest of these numbers, without necessarily opening all boxes. This is the free order multiple-choice secretary problem. Free order variants have been studied extensively for the secretary and prophet problems. Kesselheim, Kleinberg, and Niazadeh (KKN; STOC'15) initiated the study of randomness-efficient algorithms (with the cheapest order in terms of random bits used) for free order secretary problems. We present an algorithm for the free order multiple-choice secretary problem that is simultaneously optimal in its competitive ratio and in the amount of randomness it uses. That is, we construct a distribution on orders with optimal entropy $\Theta(\log\log n)$ such that a deterministic multiple-threshold algorithm is $1-O(\sqrt{\log k/k})$-competitive. This improves on the previous best construction by KKN, whose competitive ratio is $1 - O(1/k^{1/3}) - o(1)$, in three ways: our competitive ratio is (near-)optimal for the multiple-choice secretary problem; it holds for exponentially larger values of the parameter $k$; and our algorithm is a simple deterministic multiple-threshold algorithm, whereas that of KKN is randomized. We also prove a corresponding lower bound on the entropy of optimal solutions for the multiple-choice secretary problem, matching the entropy of our algorithm; no such lower bound was previously known. We obtain our algorithmic results with a host of new techniques, with which we also significantly improve the previous results of KKN on constructing entropy-optimal distributions for the classic free order secretary problem.
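For intuition, here is the textbook single-threshold baseline for the multiple-choice ($k$-)secretary problem that multiple-threshold algorithms refine: observe half the stream, set a threshold near the $(k/2)$-th largest sampled value, then accept values above it. It assumes the stream is already in (random) order and is not the paper's entropy-optimal construction.

```python
def k_secretary_single_threshold(stream, k):
    """Single-threshold baseline for the k-secretary problem.

    stream: values in the (assumed random) order in which boxes are opened.
    Observes the first half without accepting, sets the threshold at roughly
    the (k/2)-th largest observed value, then accepts up to k later values
    exceeding it.
    """
    n = len(stream)
    m = n // 2
    sample = sorted(stream[:m], reverse=True)
    threshold = sample[min(max(k // 2, 1), m) - 1]
    accepted = []
    for v in stream[m:]:
        if v > threshold and len(accepted) < k:
            accepted.append(v)
    return accepted
```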

When learning disconnected distributions, generative adversarial networks (GANs) are known to face model misspecification. Indeed, a continuous mapping from a unimodal latent distribution to a disconnected one is impossible, so GANs necessarily generate samples outside the support of the target distribution. This raises a fundamental question: what is the latent space partition that minimizes the measure of these areas? Building on a recent result from geometric measure theory, we prove that an optimal GAN must structure its latent space as a 'simplicial cluster' - a Voronoi partition whose cells are convex cones - when the dimension of the latent space is larger than the number of modes. In this configuration, each Voronoi cell maps to a distinct mode of the data. We derive both an upper and a lower bound on the optimal precision of GANs learning disconnected manifolds. Interestingly, these two bounds have the same order of decrease: $\sqrt{\log m}$, where $m$ is the number of modes. Finally, we perform several experiments to exhibit the geometry of the latent space, and show experimentally that GANs learn a latent geometry with properties similar to the theoretical one.
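A small illustration of a conic Voronoi partition of the latent space: each sample is assigned to the mode whose direction has the largest inner product with it, so every cell is an intersection of halfspaces through the origin and hence a convex cone. The mode directions below are arbitrary, purely for illustration.

```python
import numpy as np

def simplicial_cluster_partition(z, directions):
    """Assign latent samples to conic Voronoi cells.

    z: (n, d) latent samples; directions: (m, d) unit mode directions.
    Each sample goes to the mode direction with the largest inner product,
    so each cell {z : <z, c_i> >= <z, c_j> for all j} is a convex cone.
    """
    return np.argmax(z @ directions.T, axis=1)

# Illustration: a Gaussian latent space in d = 5 split among m = 3 modes.
rng = np.random.default_rng(0)
dirs = rng.standard_normal((3, 5))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
labels = simplicial_cluster_partition(rng.standard_normal((1000, 5)), dirs)
```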

Causal inference for extreme events has many potential applications in fields such as climate science, medicine, and economics. We study the extremal quantile treatment effect of a binary treatment on a continuous, heavy-tailed outcome. Existing methods are limited to the case where the quantile of interest lies within the range of the observations. For applications in risk assessment, however, the most relevant cases concern extremal quantiles beyond the data range. We introduce an estimator of the extremal quantile treatment effect that relies on an asymptotic tail approximation and uses a new causal Hill estimator for the extreme value indices of the potential outcome distributions. We establish asymptotic normality of the estimators and propose a consistent variance estimator to enable valid statistical inference. We illustrate the performance of our method in simulation studies, and apply it to a real data set to estimate the extremal quantile treatment effect of college education on wage.
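The tail-extrapolation ingredient can be sketched with the classical Hill estimator and a Weissman-type quantile extrapolation, applied separately to each treatment arm; the paper's causal Hill estimator additionally adjusts for the treatment assignment mechanism, which this sketch omits.

```python
import numpy as np

def hill_extremal_quantile(x, k, p):
    """Hill estimator plus Weissman-type extrapolation of an extreme quantile.

    x: positive heavy-tailed sample; k: number of upper order statistics;
    p: small exceedance probability, possibly below 1/len(x).
    Returns (estimate of Q(1 - p), estimated extreme value index).
    """
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    threshold = xs[n - k - 1]  # the (n - k)-th order statistic
    gamma = np.mean(np.log(xs[n - k :]) - np.log(threshold))  # Hill estimator
    # Weissman extrapolation: Q(1 - p) ~ X_(n-k) * (k / (n p))^gamma.
    return threshold * (k / (n * p)) ** gamma, gamma

# The extremal quantile treatment effect is then the difference of the two
# extrapolated quantiles, one per treatment arm.
```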

Inference of the marginal probability distribution is defined as the calculation of the probability of a subset of the variables and is relevant for handling missing data and hidden variables. While inference of the marginal probability distribution is crucial for various problems in machine learning and statistics, its exact computation is generally not feasible for categorical variables in Bayesian networks due to the NP-hardness of this task. We develop a divide-and-conquer approach using the graphical properties of Bayesian networks to split the computation of the marginal probability distribution into sub-calculations of lower dimensionality, thus reducing the overall computational complexity. Exploiting this property, we present an efficient and scalable algorithm for calculating the marginal probability distribution for categorical variables. The novel method is compared against state-of-the-art approximate inference methods in a benchmarking study, where it displays superior performance. As an immediate application, we demonstrate how our method can be used to classify incomplete data against Bayesian networks and use this approach for identifying the cancer subtype of kidney cancer patient samples.
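As a brute-force reference point for what the divide-and-conquer method avoids, here is an exact marginal computed by contracting all conditional probability tables at once (the einsum index letters cap this toy version at 26 variables); the paper's algorithm instead splits the network into sub-calculations of lower dimensionality before any such contraction.

```python
import numpy as np

def marginal(factors, query_vars):
    """Exact marginal of query_vars in a small discrete Bayesian network.

    factors: list of (table, var_indices) pairs, where `table` has one axis
    per variable in var_indices. Contracts everything in one einsum call and
    sums out the non-query variables.
    """
    letters = "abcdefghijklmnopqrstuvwxyz"
    inputs = ",".join("".join(letters[v] for v in idx) for _, idx in factors)
    output = "".join(letters[v] for v in query_vars)
    joint = np.einsum(f"{inputs}->{output}", *[t for t, _ in factors])
    return joint / joint.sum()

# Toy example: X0 -> X1, both binary; marginalize out X0 to get P(X1).
p_x0 = np.array([0.7, 0.3])
p_x1_given_x0 = np.array([[0.9, 0.1], [0.2, 0.8]])  # rows indexed by X0
print(marginal([(p_x0, [0]), (p_x1_given_x0, [0, 1])], [1]))  # [0.69 0.31]
```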

Image segmentation is still an open problem, especially when the intensities of the objects of interest overlap due to the presence of intensity inhomogeneity (also known as a bias field). To segment images with intensity inhomogeneities, a bias correction embedded level set model is proposed, in which Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated by a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite the energy as the data term of the proposed model. As in popular level set methods, a regularization term and an arc length term are also included to regularize and smooth the level set function, respectively. The proposed model is then extended to multichannel and multiphase patterns to segment colourful images and images with multiple objects, respectively. It has been extensively tested on both synthetic and real images that are widely used in the literature, as well as on the public BrainWeb and IBSR datasets. Experimental results and comparisons with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy.
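The bias-estimation step can be sketched as a least-squares fit over an orthogonal polynomial basis; the sketch below uses 2-D Legendre polynomials on the log-image as an illustrative choice of orthogonal primary functions, and omits the alternation with the level set evolution.

```python
import numpy as np
from numpy.polynomial import legendre

def estimate_bias_field(image, mask, degree=3):
    """Least-squares bias field from an orthogonal polynomial basis.

    Models the smooth multiplicative bias (on the log scale, where it is
    additive) as a combination of 2-D Legendre polynomials, fitted over the
    pixels selected by the boolean `mask`.
    """
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    u = 2.0 * xx / (w - 1) - 1.0  # rescale coordinates to [-1, 1]
    v = 2.0 * yy / (h - 1) - 1.0
    basis = np.stack([
        legendre.legval(u, np.eye(degree + 1)[i]) *
        legendre.legval(v, np.eye(degree + 1)[j])
        for i in range(degree + 1) for j in range(degree + 1)
    ], axis=-1)                    # (h, w, n_basis) design tensor
    A = basis[mask]                # rows: masked pixels, columns: basis terms
    b = np.log(image[mask] + 1e-8)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.exp(basis @ coef)    # estimated multiplicative bias field
```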
