This work concerns recovering an analysis-sparse vector, i.e., a vector that is sparse in some transform domain, from under-sampled measurements. In real-world applications, there often exist random analysis-sparse vectors whose distribution in the analysis domain is known. To exploit this information, a weighted $\ell_1$ analysis minimization is often considered. Choosing the weights in this setting is, however, challenging and non-trivial. In this work, we provide an analytical method for choosing suitable weights. Specifically, we first obtain a tight upper-bound expression for the expected number of required measurements. This bound depends on two critical parameters, the support distribution and the expected sign pattern in the analysis domain, both of which are accessible in advance. We then calculate near-optimal weights by minimizing this expression with respect to the weights. Our strategy works in both noiseless and noisy settings. Numerical results demonstrate the superiority of the proposed method: weighted $\ell_1$ analysis minimization with our near-optimal weighting needs considerably fewer measurements than its regular $\ell_1$ analysis counterpart.
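
For reference, the underlying optimization has a standard form. The display below is a generic sketch under assumed notation (a measurement matrix $\mathbf{A}$, measurements $\mathbf{y}$, an analysis operator $\mathbf{\Omega}$, a diagonal weight matrix $\mathbf{W} = \mathrm{diag}(\mathbf{w})$ with non-negative weights, and a noise level $\eta$), not the paper's exact statement:

$$ \min_{\mathbf{x}} \; \|\mathbf{W}\mathbf{\Omega}\mathbf{x}\|_1 \quad \text{s.t.} \quad \|\mathbf{A}\mathbf{x} - \mathbf{y}\|_2 \le \eta. $$

Setting $\eta = 0$ recovers the noiseless setting, and $\mathbf{W} = \mathbf{I}$ recovers the regular (unweighted) $\ell_1$ analysis problem.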

Related Content

This paper studies the prediction of a target $\mathbf{z}$ from a pair of random variables $(\mathbf{x},\mathbf{y})$, where the ground-truth predictor is additive: $\mathbb{E}[\mathbf{z} \mid \mathbf{x},\mathbf{y}] = f_\star(\mathbf{x}) +g_{\star}(\mathbf{y})$. We study the performance of empirical risk minimization (ERM) over functions $f+g$, $f \in \mathcal{F}$ and $g \in \mathcal{G}$, fit on a given training distribution but evaluated on a test distribution that exhibits covariate shift. We show that, when the class $\mathcal{F}$ is "simpler" than $\mathcal{G}$ (measured, e.g., in terms of its metric entropy), our predictor is more resilient to \emph{heterogeneous covariate shifts} in which the shift in $\mathbf{x}$ is much greater than that in $\mathbf{y}$. These results rely on a novel H\"older-style inequality for the Dudley integral which may be of independent interest. Moreover, we corroborate our theoretical findings with experiments demonstrating improved resilience to shifts in "simpler" features across numerous domains.
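
For concreteness, one plausible instantiation of this ERM problem, assuming the square loss and $n$ training samples (the loss choice is our assumption, not stated above), is

$$ (\hat f, \hat g) \in \operatorname*{arg\,min}_{f \in \mathcal{F},\, g \in \mathcal{G}} \; \frac{1}{n} \sum_{i=1}^{n} \big( f(\mathbf{x}_i) + g(\mathbf{y}_i) - \mathbf{z}_i \big)^2, $$

with training pairs $(\mathbf{x}_i, \mathbf{y}_i, \mathbf{z}_i)$ drawn from the training distribution and the risk of $\hat f + \hat g$ evaluated under the shifted test distribution.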

In this paper, we consider a status update system in which an access point collects measurements from multiple sensors that monitor a common physical process, fuses them, and transmits the aggregated sample to the destination over an erasure channel. Under a typical information fusion scheme, the distortion of the fused sample is inversely proportional to the number of measurements received. Our goal is to minimize the long-term average age while satisfying average energy and general age-based distortion requirements. Specifically, we focus on the setting in which the distortion requirement is stricter when the age of the update is older. We show that the optimal policy is a mixture of two stationary, deterministic, threshold-based policies, each of which is optimal for a parameterized problem that minimizes the weighted sum of the age and energy under the distortion constraint. We then analytically derive the associated optimal average age-cost function and characterize its performance in the large-threshold regime; these results shed light on the tradeoff among the age, the energy, and the distortion of the samples. We also develop a closed-form solution for the special case in which the distortion requirement is independent of the age, arguably the most important setting for practical applications.
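
One plausible formalization of the objective (the notation here is ours, not the paper's): writing $\Delta(t)$ for the instantaneous age under policy $\pi$, $\bar{E}(\pi)$ for the long-term average energy, and $D_{\max}(\cdot)$ for an age-dependent distortion budget,

$$ \min_{\pi} \; \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E}\!\left[ \int_0^T \Delta(t)\, \mathrm{d}t \right] \quad \text{s.t.} \quad \bar{E}(\pi) \le E_{\max}, \qquad d \le D_{\max}(\Delta), $$

where $d$ is the distortion of a delivered update of age $\Delta$; taking $D_{\max}$ non-increasing captures the requirement that older updates must be less distorted.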

We consider a reinforcement learning setting in which the deployment environment is different from the training environment. Applying a robust Markov decision process formulation, we extend the distributionally robust $Q$-learning framework studied in Liu et al. [2022]. Further, we improve the design and analysis of their multi-level Monte Carlo estimator. Assuming access to a simulator, we prove that the worst-case expected sample complexity of our algorithm to learn the optimal robust $Q$-function within an $\epsilon$ error in the sup norm is upper bounded by $\tilde O(|S||A|(1-\gamma)^{-5}\epsilon^{-2}p_{\wedge}^{-6}\delta^{-4})$, where $\gamma$ is the discount rate, $p_{\wedge}$ is the minimal non-zero support probability of the transition kernels, and $\delta$ is the uncertainty size. This is the first sample complexity result for the model-free robust RL problem. Simulation studies further validate our theoretical results.
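
The quantity being learned is the fixed point of the distributionally robust Bellman operator, which in a standard formulation (a sketch; $\mathcal{P}_\delta(s,a)$ denotes an uncertainty set of size $\delta$ around the nominal transition kernel) reads

$$ Q^\star(s, a) = r(s, a) + \gamma \inf_{p \in \mathcal{P}_\delta(s,a)} \mathbb{E}_{s' \sim p}\!\left[ \max_{a' \in A} Q^\star(s', a') \right]. $$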

Generative networks have enjoyed great empirical success in distribution learning. Many existing experiments have demonstrated that generative networks can generate high-dimensional complex data from a low-dimensional, easy-to-sample distribution. However, this phenomenon cannot be justified by existing theories. The widely held manifold hypothesis speculates that real-world data sets, such as natural images and signals, exhibit low-dimensional geometric structures. In this paper, we take such low-dimensional data structures into consideration by assuming that data distributions are supported on a low-dimensional manifold. We prove statistical guarantees for generative networks under the Wasserstein-1 loss. We show that the Wasserstein-1 loss converges to zero at a fast rate depending on the intrinsic dimension instead of the ambient data dimension. Our theory leverages the low-dimensional geometric structures in data sets and justifies the practical power of generative networks. We require no smoothness assumptions on the data distribution, which is desirable in practice.
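
For reference, the Wasserstein-1 distance between the data distribution $\mu$ and the generated distribution $\nu$ admits the Kantorovich-Rubinstein dual form

$$ W_1(\mu, \nu) = \sup_{\|f\|_{\mathrm{Lip}} \le 1} \; \mathbb{E}_{X \sim \mu}[f(X)] - \mathbb{E}_{Y \sim \nu}[f(Y)], $$

and the rate at which this loss vanishes is governed, in the setting above, by the intrinsic dimension of the supporting manifold rather than the ambient dimension.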

Out-of-distribution (OOD) detection is concerned with identifying data points that do not belong to the same distribution as the model's training data. For the safe deployment of predictive models in a real-world environment, it is critical to avoid making confident predictions on OOD inputs, as doing so can lead to potentially dangerous consequences. However, OOD detection largely remains an under-explored area in the audio (and speech) domain, despite the fact that audio is a central modality for many tasks, such as speaker diarization, automatic speech recognition, and sound event detection. To address this, we propose to leverage the feature space of the model with deep k-nearest neighbors to detect OOD samples. We show that this simple and flexible method effectively detects OOD inputs across a broad category of audio (and speech) datasets. Specifically, it improves the false positive rate (FPR@TPR95) by 17% and the AUROC score by 7% relative to prior techniques.
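
A minimal sketch of the deep k-nearest-neighbor scoring idea follows; the feature extractor is assumed to be given, and the choice of $k$, the L2 normalization, and the sklearn utilities are illustrative assumptions rather than specifics from the paper.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def knn_ood_scores(train_feats, test_feats, k=50):
        # Score each test sample by its distance to the k-th nearest
        # training feature. Larger distances suggest the sample lies
        # farther from the training distribution, i.e. is more likely OOD.
        train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
        test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
        dists, _ = NearestNeighbors(n_neighbors=k).fit(train).kneighbors(test)
        return dists[:, -1]

    def ood_threshold(id_scores, tpr=0.95):
        # Threshold calibrated so that `tpr` of in-distribution samples
        # fall below it (matching the FPR@TPR95 operating point).
        return np.quantile(id_scores, tpr)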

Owing to the success of pre-trained language models (PLMs), existing PLM-based summarization models show powerful generative capability. However, these models are trained on general-purpose summarization datasets, so the generated summaries often fail to satisfy the needs of different readers. To generate summaries with specified topics, many efforts have been devoted to topic-focused summarization. However, these works generate a summary guided only by a prompt comprising topic words. Despite their success, these methods still ignore the disturbance from sentences on non-relevant topics and conduct cross-interaction between tokens only via an attention module. To address this issue, we propose a topic-arc recognition objective and a topic-selective graph network. The topic-arc recognition objective is used during model training and endows the model with the capability to discriminate topics. The topic-selective graph network then conducts topic-guided cross-interaction on sentences based on the results of topic-arc recognition. In our experiments, we conduct extensive evaluations on the NEWTS and COVIDET datasets. The results show that our method achieves state-of-the-art performance.
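
The architecture is not specified here in enough detail to reproduce, so the following is a hypothetical illustration (names and the threshold are ours) of the general idea of topic-guided sentence-level cross-interaction, in which sentences judged non-relevant to the topic are masked out of the attention:

    import numpy as np

    def topic_selective_mixing(sent_feats, topic_scores, tau=0.5):
        # Hypothetical sketch: restrict sentence-to-sentence interaction
        # to sentences whose predicted topic relevance exceeds a threshold,
        # suppressing the disturbance from non-relevant sentences.
        keep = topic_scores >= tau                  # (n_sent,) boolean mask
        logits = sent_feats @ sent_feats.T          # raw pairwise affinities
        logits[:, ~keep] = -np.inf                  # block non-relevant sources
        weights = np.exp(logits - logits.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        return weights @ sent_feats                 # topic-guided aggregation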

Time-varying stochastic optimization problems frequently arise in machine learning practice (e.g., gradual domain shift, object tracking, and strategic classification). Although most such problems are solved in discrete time, the underlying process is often continuous in nature. We exploit this underlying continuity by developing predictor-corrector algorithms for time-varying stochastic optimization. We provide error bounds for the iterates, both with exact and with noisy access to queries of the relevant derivatives of the loss function. Furthermore, we show (theoretically, and empirically in several examples) that our method outperforms non-predictor-corrector methods that do not exploit the underlying continuous process.
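
A generic predictor-corrector step for $\min_x f(x, t)$ is sketched below (the standard scheme, not necessarily the exact algorithm analyzed here): the predictor tracks the drifting minimizer via the implicit-function relation $\dot{x}^\star(t) = -\left(\nabla^2_{xx} f\right)^{-1} \nabla_{tx} f$, and the corrector refines with gradient steps at the new time.

    import numpy as np

    def predictor_corrector_step(x, grad_x, grad_tx, hess_xx, t, h, eta, n_corr=5):
        # Predictor: first-order tracking of the time-varying minimizer
        # over a time increment h.
        x = x - h * np.linalg.solve(hess_xx(x, t), grad_tx(x, t))
        # Corrector: a few (possibly noisy) gradient steps at time t + h.
        for _ in range(n_corr):
            x = x - eta * grad_x(x, t + h)
        return x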

Prophet inequalities for reward maximization are fundamental to optimal stopping theory, with extensive applications to mechanism design and online optimization. We study the \emph{cost minimization} counterpart of the classical prophet inequality: a decision maker faces a sequence of costs $X_1, X_2, \dots, X_n$ drawn from known distributions in an online manner and \emph{must} ``stop'' at some point and take the last cost seen. The goal is to compete with a ``prophet'' who can see the realizations of all $X_i$'s upfront and always select the minimum, obtaining a cost of $\mathbb{E}[\min_i X_i]$. If the $X_i$'s are not identically distributed, no strategy can achieve a bounded approximation, even for random arrival order and $n = 2$. This leads us to consider the case where the $X_i$'s are independent and identically distributed (I.I.D.). For the I.I.D. case, we show that if the distribution satisfies a mild condition, the optimal stopping strategy achieves a (distribution-dependent) constant-factor approximation to the prophet's cost. Moreover, for MHR distributions, this constant is at most $2$. All our results are tight. We also exhibit an example distribution that does not satisfy the condition and for which the competitive ratio of any algorithm is infinite. Turning our attention to single-threshold strategies, we design a threshold that achieves an $O(\mathrm{polylog}\, n)$-factor approximation, where the exponent in the logarithmic factor is a distribution-dependent constant, and we show a matching lower bound. Finally, we note that our results can be used to design approximately optimal posted-price-style mechanisms for procurement auctions, which may be of independent interest. Our techniques utilize the \emph{hazard rate} of the distribution in a novel way, allowing for a fine-grained analysis that could find further applications in prophet inequalities.
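
As an illustration of the single-threshold idea, here is a small simulation for I.I.D. Exp(1) costs (an MHR distribution); the particular threshold below, accepting any cost in the cheapest $1/n$ quantile, is a heuristic choice for illustration rather than the threshold constructed in the paper.

    import numpy as np

    def run_threshold(costs, tau):
        # Accept the first cost at or below tau; if none qualifies,
        # the decision maker must take the last cost seen.
        for x in costs[:-1]:
            if x <= tau:
                return x
        return costs[-1]

    rng = np.random.default_rng(0)
    n, trials = 20, 100_000
    draws = rng.exponential(size=(trials, n))
    tau = -np.log(1 - 1.0 / n)          # the 1/n quantile of Exp(1)
    alg = np.mean([run_threshold(row, tau) for row in draws])
    prophet = draws.min(axis=1).mean()  # prophet's expected minimum cost
    print(alg / prophet)                # empirical competitive ratio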

Estimating the Shannon entropy of a discrete distribution from which we have only observed a small sample is challenging. Estimating other information-theoretic metrics, such as the Kullback-Leibler divergence between two sparsely sampled discrete distributions, is even harder. Existing approaches to address these problems have shortcomings: they are biased, heuristic, work only for some distributions, and/or cannot be applied to all information-theoretic metrics. Here, we propose a fast, semi-analytical estimator for sparsely sampled distributions that is efficient, precise, and general. Its derivation is grounded in probabilistic considerations and uses a hierarchical Bayesian approach to extract as much information as possible from the few observations available. Our approach provides estimates of the Shannon entropy with precision at least comparable to the state of the art, and most often better. It can also be used to obtain accurate estimates of any other information-theoretic metric, including the notoriously challenging Kullback-Leibler divergence. Here, again, our approach performs consistently better than existing estimators.
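
A standard building block for such hierarchical Bayesian estimators is the closed-form posterior mean of the entropy under a Dirichlet prior (a known result, though not necessarily the exact estimator proposed here): with a symmetric Dirichlet prior of concentration $\alpha$ over $K$ outcomes and observed counts $n_1, \dots, n_K$ with $N = \sum_i n_i$,

$$ \mathbb{E}[H \mid n_1, \dots, n_K] = \psi(N + K\alpha + 1) - \sum_{i=1}^{K} \frac{n_i + \alpha}{N + K\alpha}\, \psi(n_i + \alpha + 1), $$

where $\psi$ is the digamma function; a hierarchical approach then averages such estimates over $\alpha$ (and possibly $K$) under a hyperprior.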

In recent years, Face Image Quality Assessment (FIQA) has become an indispensable part of face recognition systems, guaranteeing stable and reliable recognition performance in unconstrained scenarios. For this purpose, an FIQA method should consider both the intrinsic properties and the recognizability of the face image. Most previous works estimate the sample-wise embedding uncertainty or pair-wise similarity as the quality score, which considers only partial intra-class information. However, these methods ignore valuable inter-class information, which is essential for estimating the recognizability of a face image. In this work, we argue that a high-quality face image should be similar to its intra-class samples and dissimilar to its inter-class samples. Accordingly, we propose a novel unsupervised FIQA method that incorporates the Similarity Distribution Distance for Face Image Quality Assessment (SDD-FIQA). Our method generates quality pseudo-labels by calculating the Wasserstein Distance (WD) between the intra-class and inter-class similarity distributions. With these quality pseudo-labels, we can train a regression network for quality prediction. Extensive experiments on benchmark datasets demonstrate that the proposed SDD-FIQA surpasses the state of the art by an impressive margin. Meanwhile, our method shows good generalization across different recognition systems.
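
A minimal sketch of the pseudo-label computation described above (function and variable names are ours, and scipy's one-dimensional Wasserstein distance stands in for the paper's WD computation):

    import numpy as np
    from scipy.stats import wasserstein_distance

    def sdd_quality_pseudolabel(feat, intra_feats, inter_feats):
        # Quality pseudo-label sketch: the Wasserstein distance between a
        # sample's intra-class and inter-class cosine-similarity
        # distributions. A high-quality embedding sits close to its own
        # class and far from others, so a larger distance between the two
        # distributions indicates higher quality.
        f = feat / np.linalg.norm(feat)
        intra = intra_feats @ f / np.linalg.norm(intra_feats, axis=1)
        inter = inter_feats @ f / np.linalg.norm(inter_feats, axis=1)
        return wasserstein_distance(intra, inter)

These pseudo-labels can then serve as regression targets for the quality-prediction network.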
