
We study the problem of list-decodable Gaussian covariance estimation. Given a multiset $T$ of $n$ points in $\mathbb R^d$ such that an unknown $\alpha<1/2$ fraction of points in $T$ are i.i.d. samples from an unknown Gaussian $\mathcal{N}(\mu, \Sigma)$, the goal is to output a list of $O(1/\alpha)$ hypotheses at least one of which is close to $\Sigma$ in relative Frobenius norm. Our main result is a $\mathrm{poly}(d,1/\alpha)$ sample and time algorithm for this task that guarantees relative Frobenius norm error of $\mathrm{poly}(1/\alpha)$. Importantly, our algorithm relies purely on spectral techniques. As a corollary, we obtain an efficient spectral algorithm for robust partial clustering of Gaussian mixture models (GMMs) -- a key ingredient in the recent work of [BDJ+22] on robustly learning arbitrary GMMs. Combined with the other components of [BDJ+22], our new method yields the first Sum-of-Squares-free algorithm for robustly learning GMMs. At the technical level, we develop a novel multi-filtering method for list-decodable covariance estimation that may be useful in other settings.
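For concreteness, a common way to formalize the recovery guarantee in relative Frobenius norm (our notation; the paper's exact normalization and constants may differ) is that the output list $\{\widehat{\Sigma}_1,\dots,\widehat{\Sigma}_m\}$ with $m = O(1/\alpha)$ satisfies

$$\min_{i \le m} \bigl\| \Sigma^{-1/2}\, \widehat{\Sigma}_i\, \Sigma^{-1/2} - I_d \bigr\|_F \;\le\; \mathrm{poly}(1/\alpha),$$

i.e., at least one hypothesis is close to the true covariance after normalizing by $\Sigma$.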

Related Content

In this work we propose tailored model order reduction for varying boundary optimal control problems governed by parametric partial differential equations. By varying boundary control, we mean that a specific parameter changes where the boundary control acts on the system. This peculiar formulation can benefit from model order reduction: fast and reliable simulations of this model are of great use in many applied fields, such as geophysics and energy engineering. However, varying boundary control leads to very complicated and diversified parametric behaviour of the state and adjoint variables. The state solution, for example, may exhibit transport phenomena as the boundary control parameter changes. Moreover, the problem loses its affine structure. It is well known that classical model order reduction techniques fail in this setting, both in accuracy and in efficiency. Thus, we propose reduced approaches inspired by those used when dealing with wave-like phenomena. Specifically, we compare standard proper orthogonal decomposition with two tailored strategies: geometric recasting and local proper orthogonal decomposition. Geometric recasting solves the optimization system in a reference domain, simplifying the problem at hand and avoiding hyper-reduction, while local proper orthogonal decomposition builds local bases to increase the accuracy of the reduced solution in very general settings (where geometric recasting is unfeasible). We compare the various approaches on two numerical experiments based on geometries of increasing complexity.
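As a minimal sketch of the standard proper orthogonal decomposition step referenced above, the following Python snippet builds a POD basis from a snapshot matrix via a truncated SVD. The snapshot data, tolerance, and variable names are hypothetical placeholders; the tailored strategies (geometric recasting, local POD) are not reproduced here.

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, energy_tol: float = 1e-4) -> np.ndarray:
    """Compute a POD basis from a snapshot matrix.

    snapshots : (n_dofs, n_snapshots) array; each column is a high-fidelity
                solution for one boundary-control parameter value.
    energy_tol: fraction of "energy" (squared singular values) allowed to be
                discarded when truncating the basis.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - energy_tol)) + 1
    return U[:, :r]  # columns span the reduced space

# Hypothetical usage: snapshots collected for several boundary-control parameters.
S = np.random.rand(500, 40)          # placeholder snapshot matrix
V = pod_basis(S, energy_tol=1e-3)    # reduced basis
S_reduced = V.T @ S                  # coordinates of the snapshots in the POD space
```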

This paper presents a Bayesian reformulation of covariate-assisted principal (CAP) regression of Zhao et al. (2021), which aims to identify components in the covariance of a response signal that are associated with covariates in a regression framework. We introduce a geometric formulation and reparameterization of the individual covariance matrices in their tangent space. By mapping the covariance matrices to the tangent space, we leverage Euclidean geometry to perform posterior inference. This approach enables joint estimation of all parameters and uncertainty quantification within a unified framework, fusing dimension reduction for covariance matrices with regression model estimation. We validate the proposed method through simulation studies and apply it to analyze associations between covariates and brain functional connectivity, using data from the Human Connectome Project.
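A minimal sketch of the tangent-space mapping described above, assuming an affine-invariant matrix-logarithm map at a common base point; the paper's exact base point, geometry, and parameterization may differ, and the regression/posterior step is not reproduced.

```python
import numpy as np

def _sym_apply(mat: np.ndarray, fun) -> np.ndarray:
    """Apply a scalar function to the eigenvalues of a symmetric (SPD) matrix."""
    w, V = np.linalg.eigh(mat)
    return V @ np.diag(fun(w)) @ V.T

def log_map(sigma: np.ndarray, base: np.ndarray) -> np.ndarray:
    """Tangent-space coordinates of an SPD matrix `sigma` at base point `base`:
    Log_B(S) = B^{1/2} log(B^{-1/2} S B^{-1/2}) B^{1/2}."""
    b_half = _sym_apply(base, np.sqrt)
    b_half_inv = _sym_apply(base, lambda w: 1.0 / np.sqrt(w))
    return b_half @ _sym_apply(b_half_inv @ sigma @ b_half_inv, np.log) @ b_half

# Hypothetical usage: map individual covariance matrices to the tangent space
# at their (Euclidean) mean, then run ordinary regression / posterior inference
# on the vectorized tangent coordinates.
covs = [np.cov(np.random.randn(100, 5), rowvar=False) for _ in range(10)]
base = np.mean(covs, axis=0)
tangent = [log_map(C, base) for C in covs]
```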

In contrast to classical reinforcement learning, distributional reinforcement learning algorithms aim to learn the distribution of returns rather than their expected value. Since the nature of the return distribution is generally unknown a priori or arbitrarily complex, a common approach finds approximations within a set of representable, parametric distributions. Typically, this involves a projection of the unconstrained distribution onto the set of simplified distributions. We argue that this projection step entails a strong inductive bias when coupled with neural networks and gradient descent, thereby profoundly impacting the generalization behavior of learned models. In order to facilitate reliable uncertainty estimation through diversity, this work studies the combination of several different projections and representations in a distributional ensemble. We establish theoretical properties of such projection ensembles and derive an algorithm that uses ensemble disagreement, measured by the average $1$-Wasserstein distance, as a bonus for deep exploration. We evaluate our algorithm on the behavior suite benchmark and find that diverse projection ensembles lead to significant performance improvements over existing methods on a wide variety of tasks with the most pronounced gains in directed exploration problems.
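One plausible reading of the disagreement bonus described above is the average pairwise $1$-Wasserstein distance between the return distributions predicted by the ensemble members. The sketch below uses hypothetical sample/quantile vectors to stand in for those predictions; it is not the paper's implementation.

```python
import itertools
import numpy as np
from scipy.stats import wasserstein_distance

def disagreement_bonus(member_returns: list[np.ndarray]) -> float:
    """Average pairwise 1-Wasserstein distance between the return distributions
    predicted by the ensemble members, each given as a 1-D array of samples
    (or quantile locations)."""
    pairs = list(itertools.combinations(member_returns, 2))
    return float(np.mean([wasserstein_distance(p, q) for p, q in pairs]))

# Hypothetical usage with three ensemble members for a single state-action pair;
# the resulting bonus is added to the reward to drive deep exploration.
members = [np.random.randn(51) * s + m for s, m in [(1.0, 0.0), (1.5, 0.2), (0.8, -0.1)]]
bonus = disagreement_bonus(members)
```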

Unsupervised hashing has received extensive research focus in the past decade, typically aiming to preserve a predefined metric (i.e., the Euclidean metric) in the Hamming space. To this end, the encoding functions of existing hashing methods are typically quasi-isometric, devoted to reducing the quantization loss from the target metric space to the discrete Hamming space. However, directly minimizing this error is problematic, since the two metric spaces are heterogeneous and the quasi-isometric mapping is non-linear. The former leads to inconsistent feature distributions, while the latter leads to problematic optimization issues. In this paper, we propose a novel unsupervised hashing method, termed Sparsity-Induced Generative Adversarial Hashing (SiGAH), to encode large-scale high-dimensional features into binary codes, which addresses both problems through a generative adversarial training framework. Instead of minimizing the quantization loss, our key innovation lies in enforcing the learned Hamming space to have a data distribution similar to that of the target metric space via a generative model. In particular, we formulate a ReLU-based neural network as a generator to output binary codes and an MSE-loss-based auto-encoder network as a discriminator, upon which generative adversarial learning is carried out to train the hash functions. Furthermore, to generate synthetic features from the hash codes, a compressed sensing procedure is introduced into the generative model, which enforces the reconstruction boundary of the binary codes to be consistent with that of the original features. Finally, the generative adversarial framework can be trained via the Adam optimizer. Experimental results on four benchmarks, i.e., Tiny100K, GIST1M, Deep1M, and MNIST, show that the proposed SiGAH has superior performance over state-of-the-art approaches.
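A minimal PyTorch sketch of the two networks described above (ReLU-based generator producing relaxed binary codes, MSE-loss auto-encoder acting as discriminator). Layer sizes, the tanh relaxation, and the training loop are assumptions for illustration; the compressed-sensing component and SiGAH's actual architecture are not reproduced.

```python
import torch
import torch.nn as nn

class HashGenerator(nn.Module):
    """ReLU-based network that maps features to (relaxed) binary codes."""
    def __init__(self, feat_dim: int = 512, code_bits: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, code_bits),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # tanh relaxation during training; sign() at inference yields {-1, +1} codes
        return torch.tanh(self.net(x))

class AEDiscriminator(nn.Module):
    """MSE-loss auto-encoder acting as the discriminator: real features should be
    reconstructed with lower error than synthetic ones."""
    def __init__(self, feat_dim: int = 512, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        recon = self.decoder(self.encoder(x))
        return ((recon - x) ** 2).mean(dim=1)  # per-sample reconstruction (MSE) score

# Both networks would be trained adversarially with the Adam optimizer,
# e.g. torch.optim.Adam(generator.parameters(), lr=1e-4).
generator = HashGenerator()
discriminator = AEDiscriminator()
```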

Repeated measurements are common in many fields, where random variables are observed repeatedly across different subjects. Such data have an underlying hierarchical structure, and it is of interest to learn the covariance/correlation at different levels. Most existing methods for sparse covariance/correlation matrix estimation assume independent samples; ignoring the underlying hierarchical structure and within-subject correlation leads to erroneous scientific conclusions. In this paper, we study the problem of sparse and positive-definite estimation of between-subject and within-subject covariance/correlation matrices for repeated measurements. Our estimators are solutions to convex optimization problems that can be solved efficiently. We establish estimation error rates for the proposed estimators and demonstrate their favorable performance through theoretical analysis and comprehensive simulation studies. We further apply our methods to construct between-subject and within-subject covariance graphs of clinical variables from hemodialysis patients.
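The abstract does not specify the convex estimators themselves; as a generic illustration of sparse, positive-definite covariance estimation, the sketch below soft-thresholds the off-diagonal entries of a sample covariance and clips eigenvalues to restore positive definiteness. This is a standard baseline, not the authors' method, and the split into between-subject and within-subject levels is not reproduced.

```python
import numpy as np

def sparse_pd_covariance(X: np.ndarray, lam: float, eps: float = 1e-4) -> np.ndarray:
    """Soft-threshold the off-diagonal entries of the sample covariance of X
    (rows = observations) and clip eigenvalues to keep the estimate positive definite."""
    S = np.cov(X, rowvar=False)
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)   # soft-thresholding
    np.fill_diagonal(T, np.diag(S))                     # leave the diagonal untouched
    w, V = np.linalg.eigh(T)
    return V @ np.diag(np.maximum(w, eps)) @ V.T        # eigenvalue clipping -> PD

# Hypothetical usage on within-subject residuals pooled across subjects.
X = np.random.randn(200, 10)
Sigma_hat = sparse_pd_covariance(X, lam=0.1)
```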

Ensuring the security of networked systems is a significant problem, considering the susceptibility of modern infrastructures and technologies to adversarial interference. A central component of this problem is how defensive resources should be allocated to mitigate the severity of potential attacks on the system. In this paper, we consider this in the context of a General Lotto game, where a defender and an attacker deploy resources on the nodes of a network, and the objective is to secure as many links as possible. The defender secures a link only if it out-competes the attacker on both of its associated nodes. For bipartite networks, we completely characterize equilibrium payoffs and strategies for both the defender and attacker. Surprisingly, the resulting payoffs are the same for any bipartite graph. On arbitrary network structures, we provide lower and upper bounds on the defender's max-min value. Notably, the equilibrium payoff from bipartite networks serves as the lower bound. These results suggest that more connected networks are easier to defend against attacks. We confirm these findings with simulations that compute deterministic allocation strategies on large random networks. This also highlights the importance of randomization in the equilibrium strategies.
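The link-securing rule stated above is easy to make concrete. The sketch below counts the links a deterministic defender allocation secures against a deterministic attacker allocation; the example graph and allocation vectors are hypothetical, and treating ties as a loss for the defender (strict inequality) is an assumption.

```python
import numpy as np

def secured_links(edges, defender, attacker):
    """A link (u, v) is secured iff the defender out-competes the attacker
    on BOTH endpoints u and v."""
    return sum(
        1 for u, v in edges
        if defender[u] > attacker[u] and defender[v] > attacker[v]
    )

# Hypothetical example: a 4-node path graph with unit-budget allocations.
edges = [(0, 1), (1, 2), (2, 3)]
defender = np.array([0.4, 0.3, 0.2, 0.1])   # defender's allocation over nodes
attacker = np.array([0.1, 0.4, 0.1, 0.4])   # attacker's allocation over nodes
print(secured_links(edges, defender, attacker))  # -> 0: attacker wins nodes 1 and 3
```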

Researchers have integrated exploration techniques into multi-agent reinforcement learning (MARL) algorithms, drawing on their remarkable success in deep reinforcement learning. Nonetheless, exploration in MARL presents a more substantial challenge, as agents need to coordinate their efforts in order to achieve comprehensive state coverage. Reaching a unanimous agreement on which kinds of states warrant exploration can be a struggle for agents in this context. We introduce \textbf{M}ulti-agent \textbf{E}xploration based on \textbf{S}ub-state \textbf{E}ntropy (MESE) to address this limitation. This novel approach incentivizes agents to explore states cooperatively by directing them to achieve consensus via an extra team reward. The additional reward is computed from the novelty of the current sub-state that merits cooperative exploration. MESE employs a conditioned entropy approach to select the sub-state, using particle-based entropy estimation to calculate the entropy. MESE is a plug-and-play module that can be seamlessly integrated into most existing MARL algorithms, which makes it a highly effective tool for reinforcement learning. Our experiments demonstrate that MESE can substantially improve MAPPO's performance on various tasks in the StarCraft multi-agent challenge (SMAC).
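A minimal sketch of a particle-based (k-nearest-neighbor) entropy estimate over sub-state particles, of the kind such a team reward could be derived from. The k-NN form, constants, and variable names are assumptions; MESE's conditioned sub-state selection is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def particle_entropy(substates: np.ndarray, k: int = 5) -> float:
    """Non-parametric (particle-based) entropy estimate: the average log distance
    to the k-th nearest neighbor, up to additive/multiplicative constants."""
    tree = cKDTree(substates)
    # query k+1 neighbors because the nearest neighbor of each point is itself
    dists, _ = tree.query(substates, k=k + 1)
    return float(np.mean(np.log(dists[:, -1] + 1e-8)))

# Hypothetical usage: entropy of the selected sub-state across a batch of transitions;
# a bonus proportional to this novelty term is then added to the team reward.
batch = np.random.randn(1024, 6)   # 1024 sub-state particles of dimension 6
bonus = particle_entropy(batch, k=5)
```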

As the use of machine learning models in real-world high-stakes decision settings continues to grow, it is highly important that we are able to audit and control for any potential fairness violations these models may exhibit towards certain groups. To do so, one naturally requires access to sensitive attributes, such as demographics, gender, or other potentially sensitive features that determine group membership. Unfortunately, in many settings, this information is often unavailable. In this work we study the well-known \emph{equalized odds} (EOD) definition of fairness. In a setting without sensitive attributes, we first provide tight and computable upper bounds for the EOD violation of a predictor. These bounds precisely reflect the worst possible EOD violation. Second, we demonstrate how one can provably control the worst-case EOD by a new post-processing correction method. Our results characterize when directly controlling for EOD with respect to the predicted sensitive attributes is -- and when it is not -- optimal for controlling worst-case EOD. Our results hold under assumptions that are milder than previous works, and we illustrate these results with experiments on synthetic and real datasets.
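For reference, a minimal sketch of the standard equalized-odds violation for binary labels and two observed groups (the largest gap in group-conditional true-positive or false-positive rates). The setting without sensitive attributes, and the paper's bounds and post-processing method, are not reproduced; the data below are simulated placeholders.

```python
import numpy as np

def eod_violation(y_true: np.ndarray, y_pred: np.ndarray, group: np.ndarray) -> float:
    """Equalized-odds violation for binary labels/predictions and two groups:
    the largest gap between group-conditional positive rates, taken over y in {0, 1}
    (y = 1 gives the TPR gap, y = 0 the FPR gap)."""
    gaps = []
    for y in (0, 1):
        rates = [y_pred[(group == g) & (y_true == y)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Hypothetical usage with simulated predictions.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = (rng.random(1000) < 0.5 + 0.1 * group).astype(int)
print(eod_violation(y_true, y_pred, group))
```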

Understanding causality should be a core requirement of any attempt to build real impact through AI. Due to the inherent unobservability of counterfactuals, large randomised controlled trials (RCTs) are the standard for causal inference. But large experiments are generically expensive, and randomisation carries its own costs, e.g. when suboptimal decisions are trialled. Recent work has proposed more sample-efficient alternatives to RCTs, but these are not adaptable to the downstream application for which the causal effect is sought. In this work, we develop a task-specific approach to experimental design and derive sampling strategies customised to particular downstream applications. Across a range of important tasks, real-world datasets, and sample sizes, our method outperforms other benchmarks, e.g. requiring an order-of-magnitude less data to match RCT performance on targeted marketing tasks.

Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. E.g., we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule on MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and learns better representations in a vision domain and for dialogue generation.
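The decomposition described above rests on the chain rule for mutual information; in our notation, the two-subview special case reads

$$I(X; Y_1, Y_2) \;=\; I(X; Y_1) \;+\; I(X; Y_2 \mid Y_1),$$

so each term measures a smaller chunk of the total MI and can be approximated with its own contrastive lower bound.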
