
For distributions over discrete product spaces $\prod_{i=1}^n \Omega_i'$, Glauber dynamics is a Markov chain that, at each step, resamples a random coordinate conditioned on the other coordinates. We show that $k$-Glauber dynamics, which resamples a random subset of $k$ coordinates, mixes $k$ times faster in $\chi^2$-divergence and, assuming approximate tensorization of entropy, $k$ times faster in KL-divergence. We apply this to Ising models $\mu_{J,h}(x)\propto \exp(\frac{1}{2}\left\langle x,Jx \right\rangle + \langle h,x\rangle)$ with $\|J\|<1-c$ (the regime where fast mixing is known), where we show that each step of $\widetilde O(n/\|J\|_F)$-Glauber dynamics can be implemented efficiently in parallel, resulting in a parallel sampling algorithm with running time $\widetilde O(\|J\|_F) = \widetilde O(\sqrt n)$.
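To make the chain concrete, here is a minimal NumPy sketch of single-site Glauber dynamics and a brute-force $k$-Glauber step for an Ising model with a symmetric coupling matrix $J$ (zero diagonal) and external field $h$. It illustrates the dynamics being analyzed, not the paper's parallel implementation; the function names and the toy parameters are our own, and the joint block resampling enumerates $2^k$ states, so it is only feasible for small $k$.

```python
import numpy as np
from itertools import product

def glauber_step(x, J, h, rng):
    # Resample one uniformly random coordinate of x in {-1,+1}^n from its
    # conditional distribution under mu(x) ~ exp(0.5 <x, Jx> + <h, x>).
    # Assumes J is symmetric with zero diagonal.
    i = rng.integers(len(x))
    field = J[i] @ x + h[i]                       # local field at coordinate i
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))   # P(x_i = +1 | rest)
    x[i] = 1 if rng.random() < p_plus else -1
    return x

def k_glauber_step(x, J, h, k, rng):
    # Resample a uniformly random set S of k coordinates jointly from the
    # conditional distribution given the rest (brute force over 2^k states).
    S = rng.choice(len(x), size=k, replace=False)
    configs = list(product([-1, 1], repeat=k))
    logw = []
    for vals in configs:
        y = x.copy()
        y[S] = vals
        logw.append(0.5 * y @ J @ y + h @ y)      # unnormalized log-probability
    w = np.exp(np.array(logw) - max(logw))
    x[S] = configs[rng.choice(len(configs), p=w / w.sum())]
    return x

# Example: a 20-spin Ising model in the fast-mixing regime (||J|| well below 1).
rng = np.random.default_rng(0)
n = 20
J = rng.normal(scale=0.1 / np.sqrt(n), size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.1, size=n)
x = rng.choice([-1, 1], size=n)
for _ in range(1000):
    x = k_glauber_step(x, J, h, k=3, rng=rng)
```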

Related Content

Existing contrastive learning methods rely on pairwise sample contrast $z_x^\top z_{x'}$ to learn data representations, but the learned features often lack clear interpretability from a human perspective. Theoretically, such features lack identifiability, and different initializations may lead to totally different features. In this paper, we study a new method named tri-factor contrastive learning (triCL) that involves a 3-factor contrast in the form of $z_x^\top S z_{x'}$, where $S=\text{diag}(s_1,\dots,s_k)$ is a learnable diagonal matrix that automatically captures the importance of each feature. We show that by this simple extension, triCL can not only obtain identifiable features that eliminate randomness but also obtain more interpretable features that are ordered according to the importance matrix $S$. We show that features with high importance have good interpretability by capturing common classwise features, and that they obtain superior performance when evaluated for image retrieval using a few features. The proposed triCL objective is general and can be applied to different contrastive learning methods like SimCLR and CLIP. We believe that it is a better alternative to existing 2-factor contrastive learning by improving its identifiability and interpretability with minimal overhead. Code is available at //github.com/PKU-ML/Tri-factor-Contrastive-Learning.
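As an illustration of the 3-factor contrast, the PyTorch sketch below scores pairs of views by $z_x^\top S z_{x'}$ with a learnable positive diagonal $S$ inside a standard InfoNCE-style loss. The class name and the exponential parameterization of $S$ are our own illustrative assumptions; the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn.functional as F

class TriFactorContrast(torch.nn.Module):
    # Sketch of a 3-factor contrast z_x^T S z_{x'} with a learnable diagonal
    # importance matrix S = diag(s_1, ..., s_k), kept positive via an exp
    # parameterization (an assumption of this sketch, not the paper's choice).
    def __init__(self, dim, temperature=0.5):
        super().__init__()
        self.log_s = torch.nn.Parameter(torch.zeros(dim))
        self.temperature = temperature

    def forward(self, z1, z2):
        # z1, z2: (batch, dim) embeddings of two augmented views.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        s = torch.exp(self.log_s)
        logits = (z1 * s) @ z2.t() / self.temperature   # entry (i, j) = z_i^T S z_j
        labels = torch.arange(z1.size(0), device=z1.device)
        # InfoNCE-style loss: matching views are positives, all others negatives.
        return F.cross_entropy(logits, labels)
```

After training, sorting coordinates by the learned diagonal entries $s_i$ gives the importance ordering used for interpretability and few-feature retrieval.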

We construct and analyze finite element approximations of the Einstein tensor in dimension $N \ge 3$. We focus on the setting where a smooth Riemannian metric tensor $g$ on a polyhedral domain $\Omega \subset \mathbb{R}^N$ has been approximated by a piecewise polynomial metric $g_h$ on a simplicial triangulation $\mathcal{T}$ of $\Omega$ having maximum element diameter $h$. We assume that $g_h$ possesses single-valued tangential-tangential components on every codimension-1 simplex in $\mathcal{T}$. Such a metric is not classically differentiable in general, but it turns out that one can still attribute meaning to its Einstein curvature in a distributional sense. We study the convergence of the distributional Einstein curvature of $g_h$ to the Einstein curvature of $g$ under refinement of the triangulation. We show that in the $H^{-2}(\Omega)$-norm, this convergence takes place at a rate of $O(h^{r+1})$ when $g_h$ is an optimal-order interpolant of $g$ that is piecewise polynomial of degree $r \ge 1$. We provide numerical evidence to support this claim.

This paper studies the prediction of a target $\mathbf{z}$ from a pair of random variables $(\mathbf{x},\mathbf{y})$, where the ground-truth predictor is additive $\mathbb{E}[\mathbf{z} \mid \mathbf{x},\mathbf{y}] = f_\star(\mathbf{x}) +g_{\star}(\mathbf{y})$. We study the performance of empirical risk minimization (ERM) over functions $f+g$, $f \in F$ and $g \in G$, fit on a given training distribution, but evaluated on a test distribution which exhibits covariate shift. We show that, when the class $F$ is "simpler" than $G$ (measured, e.g., in terms of its metric entropy), our predictor is more resilient to heterogeneous covariate shifts in which the shift in $\mathbf{x}$ is much greater than that in $\mathbf{y}$. Our analysis proceeds by demonstrating that ERM behaves qualitatively similarly to orthogonal machine learning: the rate at which ERM recovers the $f$-component of the predictor has only a lower-order dependence on the complexity of the class $G$, adjusted for partial non-identifiability introduced by the additive structure. These results rely on a novel H\"older style inequality for the Dudley integral which may be of independent interest. Moreover, we corroborate our theoretical findings with experiments demonstrating improved resilience to shifts in "simpler" features across numerous domains.
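A toy NumPy illustration of the setup (not the paper's experiments): an additive predictor is fit by least squares over a simple linear class for $\mathbf{x}$ and a richer random-feature class for $\mathbf{y}$, then evaluated under a shift that is much larger in $\mathbf{x}$ than in $\mathbf{y}$. The ground-truth functions, feature maps, and distributions are placeholder choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_x, d_y = 2000, 2, 5

def f_star(X):                      # simple (linear) ground-truth component
    return X @ np.array([1.0, -2.0])

def g_star(Y):                      # more complex ground-truth component
    return np.sin(Y).sum(axis=1)

# Training data.
X, Y = rng.normal(size=(n, d_x)), rng.normal(size=(n, d_y))
Z = f_star(X) + g_star(Y) + 0.1 * rng.normal(size=n)

# Additive ERM over F + G: linear features for x ("simple" class F),
# random cosine features for y ("richer" class G).
W = rng.normal(size=(d_y, 200))
b = rng.uniform(0, 2 * np.pi, size=200)

def features(X, Y):
    return np.hstack([X, np.cos(Y @ W + b)])

theta, *_ = np.linalg.lstsq(features(X, Y), Z, rcond=None)

# Heterogeneous covariate shift: large shift in x, small shift in y.
X_test = rng.normal(loc=3.0, size=(n, d_x))
Y_test = rng.normal(loc=0.2, size=(n, d_y))
pred = features(X_test, Y_test) @ theta
mse = np.mean((pred - (f_star(X_test) + g_star(Y_test))) ** 2)
print("test MSE under heterogeneous covariate shift:", mse)
```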

Preference-based reinforcement learning (PbRL) is an approach that enables RL agents to learn from preferences, which is particularly useful when formulating a reward function is challenging. Existing PbRL methods generally involve a two-step procedure: they first learn a reward model based on given preference data and then employ off-the-shelf reinforcement learning algorithms using the learned reward model. However, obtaining an accurate reward model solely from preference information, especially when the preferences come from human teachers, can be difficult. Instead, we propose a PbRL algorithm that directly learns from preferences without requiring any reward modeling. To achieve this, we adopt a contrastive learning framework to design a novel policy scoring metric that assigns a high score to policies that align with the given preferences. We apply our algorithm to offline RL tasks with actual human preference labels and show that our algorithm outperforms or is on par with existing PbRL methods. Notably, on high-dimensional control tasks, our algorithm surpasses offline RL methods that learn with ground-truth reward information. Finally, we show that our algorithm can be successfully applied to fine-tune large language models.

We consider the problem of testing and learning quantum $k$-juntas: $n$-qubit unitary matrices which act non-trivially on just $k$ of the $n$ qubits and as the identity on the rest. As our main algorithmic results, we give (a) a $\widetilde{O}(\sqrt{k})$-query quantum algorithm that can distinguish quantum $k$-juntas from unitary matrices that are "far" from every quantum $k$-junta; and (b) a $O(4^k)$-query algorithm to learn quantum $k$-juntas. We complement our upper bounds for testing quantum $k$-juntas and learning quantum $k$-juntas with near-matching lower bounds of $\Omega(\sqrt{k})$ and $\Omega(\frac{4^k}{k})$, respectively. Our techniques are Fourier-analytic and make use of a notion of influence of qubits on unitaries.

We propose a novel and practical privacy notion called $f$-Membership Inference Privacy ($f$-MIP), which explicitly considers the capabilities of realistic adversaries under the membership inference attack threat model. Consequently, $f$-MIP offers interpretable privacy guarantees and improved utility (e.g., better classification accuracy). In particular, we derive a parametric family of $f$-MIP guarantees that we refer to as $\mu$-Gaussian Membership Inference Privacy ($\mu$-GMIP) by theoretically analyzing likelihood ratio-based membership inference attacks on stochastic gradient descent (SGD). Our analysis highlights that models trained with standard SGD already offer an elementary level of MIP. Additionally, we show how $f$-MIP can be amplified by adding noise to gradient updates. Our analysis further yields an analytical membership inference attack that offers two distinct advantages over previous approaches. First, unlike existing state-of-the-art attacks that require training hundreds of shadow models, our attack does not require any shadow model. Second, our analytical attack enables straightforward auditing of our privacy notion $f$-MIP. Finally, we quantify how various hyperparameters (e.g., batch size, number of model parameters) and specific data characteristics determine an attacker's ability to accurately infer a point's membership in the training set. We demonstrate the effectiveness of our method on models trained on vision and tabular datasets.
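The noise-amplification mechanism mentioned above can be sketched as an SGD step with gradient clipping and Gaussian noise added to the update. This is a simplified per-batch version with placeholder values of the clipping threshold and noise scale, not values calibrated to a particular $\mu$-GMIP guarantee, and the function names are our own.

```python
import torch

def noisy_sgd_step(model, loss_fn, batch, lr=0.1, clip=1.0, sigma=0.5):
    # One SGD step with gradient clipping and Gaussian noise added to the update.
    # Simplified per-batch sketch; clip and sigma are placeholder values, not
    # calibrated to a specific privacy guarantee.
    x, y = batch
    model.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        total_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in model.parameters()))
        scale = (clip / (total_norm + 1e-12)).clamp(max=1.0)   # clip global grad norm
        for p in model.parameters():
            noise = sigma * clip * torch.randn_like(p)
            p -= lr * (p.grad * scale + noise)

# Usage on a toy classifier.
model = torch.nn.Linear(10, 2)
batch = (torch.randn(32, 10), torch.randint(0, 2, (32,)))
noisy_sgd_step(model, torch.nn.functional.cross_entropy, batch)
```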

In the Trivially Perfect Editing problem one is given an undirected graph $G = (V,E)$ and an integer $k$ and seeks to add or delete at most $k$ edges in $G$ to obtain a trivially perfect graph. In a recent work, Dumas, Perez and Todinca [Algorithmica 2023] proved that this problem admits a kernel with $O(k^3)$ vertices. This result heavily relies on the fact that the size of trivially perfect modules can be bounded by $O(k^2)$ as shown by Drange and Pilipczuk [Algorithmica 2018]. To obtain their cubic vertex-kernel, Dumas, Perez and Todinca [Algorithmica 2023] then showed that a more intricate structure, so-called \emph{comb}, can be reduced to $O(k^2)$ vertices. In this work we show that the bound can be improved to $O(k)$ for both aforementioned structures and thus obtain a kernel with $O(k^2)$ vertices. Our approach relies on the straightforward yet powerful observation that any large enough structure contains unaffected vertices whose neighborhood remains unchanged by an editing of size $k$, implying strong structural properties.

Optimizing strongly convex functions subject to linear constraints is a fundamental problem with numerous applications. In this work, we propose a block (accelerated) randomized Bregman-Kaczmarz method that only uses a block of constraints in each iteration to tackle this problem. We consider a dual formulation of this problem in order to deal with the linear constraints in an efficient way. Using convex tools, we show that the corresponding dual function satisfies the Polyak-Lojasiewicz (PL) property, provided that the primal objective function is strongly convex and additionally satisfies some other mild assumptions. However, adapting the existing theory on coordinate descent methods to our dual formulation can only give us sublinear convergence results in the dual space. In order to obtain convergence results in a criterion corresponding to the primal (original) problem, we transfer our algorithm to the primal space, which combined with the PL property allows us to obtain linear convergence rates. More specifically, we provide a theoretical analysis of the convergence of our proposed method under different assumptions on the objective and demonstrate in numerical experiments its superior efficiency and speedup compared to existing methods for the same problem.
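For intuition, in the quadratic (least-Euclidean-norm) special case the primal iteration reduces to a block randomized Kaczmarz method for the linear constraints $Ax = b$. The NumPy sketch below shows that Euclidean case only, under our own choice of block size and iteration count; the Bregman variant studied in the paper replaces the Euclidean projection with a Bregman projection induced by the strongly convex objective.

```python
import numpy as np

def block_randomized_kaczmarz(A, b, block_size=10, iters=500, seed=0):
    # Minimal sketch: at each iteration, pick a random block S of constraints
    # and project the current iterate onto the affine set {y : A_S y = b_S}.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        S = rng.choice(m, size=block_size, replace=False)
        A_S, b_S = A[S], b[S]
        # Euclidean projection: x <- x - A_S^T (A_S A_S^T)^{-1} (A_S x - b_S).
        x -= A_S.T @ np.linalg.lstsq(A_S @ A_S.T, A_S @ x - b_S, rcond=None)[0]
    return x

# Example: a consistent underdetermined system.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 500))
b = A @ rng.normal(size=500)
x = block_randomized_kaczmarz(A, b)
print("constraint residual:", np.linalg.norm(A @ x - b))
```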

It is known that B\'{e}zier curves and surfaces may have multiple representations by different control polygons. The polygons may have different numbers of control points and may even be disjoint. To the best of our knowledge, Pekerman et al. (2005) were the first to address the problem of testing two parametric polynomial curves for coincidence. Their approach is based on reduction of the input curves into canonical irreducible form. They claimed that their approach can be extended to testing tensor product surfaces but gave no further detail. In this paper we develop a new technique and provide a comprehensive solution to the problem of testing tensor product B\'ezier surfaces for coincidence. In (Vlachkova, 2017) a subdivision-based algorithm for testing B\'ezier curves for coincidence was proposed, and a partial solution to the problem of testing tensor product B\'ezier surfaces was presented. Namely, the case where the irreducible surfaces are of the same degree $(n,m)$, $n,m \in \mathbb{N}$, was resolved under a certain additional condition. The other cases, where one of the surfaces is of degree $(n,m)$ and the other is of degree either $(n,n+m)$, or $(n+m,m)$, or $(n+m,n+m)$, remained open. We have implemented our algorithm for testing tensor product B\'ezier surfaces for coincidence using Mathematica. Experimental results and their analysis are presented.
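Subdivision-based coincidence tests repeatedly split the curves (or surfaces) via the de Casteljau algorithm and compare the resulting control nets. The sketch below shows only the basic curve subdivision step; for tensor product surfaces it is applied along rows and then columns of the control net. This is a generic building block, stated under our own conventions, and not the surface algorithm developed in the paper.

```python
import numpy as np

def de_casteljau_subdivide(P, t=0.5):
    # Split a Bezier curve with control points P (shape (n+1, d)) at parameter t.
    # Returns the control points of the two sub-curves on [0, t] and [t, 1].
    P = np.asarray(P, dtype=float)
    left, right = [P[0]], [P[-1]]
    Q = P
    while len(Q) > 1:
        Q = (1.0 - t) * Q[:-1] + t * Q[1:]   # one level of the de Casteljau scheme
        left.append(Q[0])
        right.append(Q[-1])
    return np.array(left), np.array(right)[::-1]

# Example: subdivide a planar cubic curve at its midpoint.
P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
L, R = de_casteljau_subdivide(P)
```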

Message passing Graph Neural Networks (GNNs) provide a powerful modeling framework for relational data. However, the expressive power of existing GNNs is upper-bounded by the 1-Weisfeiler-Lehman (1-WL) graph isomorphism test, which means such GNNs cannot predict node clustering coefficients or shortest path distances, and cannot differentiate between different d-regular graphs. Here we develop a class of message passing GNNs, named Identity-aware Graph Neural Networks (ID-GNNs), with greater expressive power than the 1-WL test. ID-GNN offers a minimal but powerful solution to the limitations of existing GNNs. ID-GNN extends existing GNN architectures by inductively considering nodes' identities during message passing. To embed a given node, ID-GNN first extracts the ego network centered at the node, then conducts rounds of heterogeneous message passing, where different sets of parameters are applied to the center node than to the other surrounding nodes in the ego network. We further propose a simplified but faster version of ID-GNN that injects node identity information as augmented node features. Altogether, both versions of ID-GNN are general extensions of message passing GNNs, and experiments show that transforming existing GNNs to ID-GNNs yields on average a 40% accuracy improvement on challenging node, edge, and graph property prediction tasks; a 3% accuracy improvement on node and graph classification benchmarks; and a 15% ROC AUC improvement on real-world link prediction tasks. Additionally, ID-GNNs demonstrate improved or comparable performance over other task-specific graph networks.
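A minimal sketch of the simplified ("fast") variant described above, which injects identity information as augmented node features: for each node we append the number of length-$k$ closed walks through it, read off the diagonal of $A^k$, before running an ordinary message passing GNN. The function and variable names are our own, and this only covers the feature augmentation, not the full ID-GNN architecture.

```python
import numpy as np

def identity_augmented_features(A, X, K=3):
    # Append, for each node, the number of length-k closed walks through it
    # (k = 1..K), read off the diagonal of A^k, to its feature vector.
    # A: (n, n) adjacency matrix, X: (n, d) node feature matrix.
    counts = []
    Ak = np.eye(A.shape[0])
    for _ in range(K):
        Ak = Ak @ A
        counts.append(np.diag(Ak))
    return np.hstack([X, np.stack(counts, axis=1)])

# Example: a 4-cycle with constant node features.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.ones((4, 1))
print(identity_augmented_features(A, X))
```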
