
In this paper, we generalize log-Sobolev inequalities to R\'enyi--Sobolev inequalities by replacing the entropy with the two-parameter entropy, a generalized version of entropy closely related to R\'enyi divergences. We derive the sharp, nonlinear, dimension-free version of these inequalities. Interestingly, the resultant inequalities exhibit a phase transition depending on the parameters. We then connect R\'enyi--Sobolev inequalities to spectral graph theory. Our proofs are based on an information-theoretic characterization of the R\'enyi--Sobolev inequalities, as well as the method of types.
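For reference, here is a brief sketch of the classical objects being generalized (these are the standard textbook definitions; the paper's two-parameter entropy itself is not reproduced here). The entropy functional and the log-Sobolev inequality with constant $C$ read
$$\operatorname{Ent}_\mu(f^2) = \mathbb{E}_\mu\!\left[f^2 \log f^2\right] - \mathbb{E}_\mu\!\left[f^2\right] \log \mathbb{E}_\mu\!\left[f^2\right], \qquad \operatorname{Ent}_\mu(f^2) \le 2C\, \mathbb{E}_\mu\!\left[|\nabla f|^2\right],$$
while the R\'enyi divergence of order $\alpha \in (0,1) \cup (1,\infty)$ is
$$D_\alpha(P\,\|\,Q) = \frac{1}{\alpha-1} \log \mathbb{E}_Q\!\left[\left(\frac{\mathrm{d}P}{\mathrm{d}Q}\right)^{\alpha}\right].$$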

Related Content

Annotation of discourse relations is a notoriously difficult task, especially for non-expert annotators. In this paper, we investigate novice annotators' uncertainty in annotating discourse relations in spoken conversational data. We find that dialogue context (single turn, pair of turns within a speaker, and pair of turns across speakers) is a significant predictor of confidence scores. We compute distributed representations of discourse relations from co-occurrence statistics that incorporate information about confidence scores and dialogue context. We perform a hierarchical clustering analysis using these representations and show that weighting discourse relation representations with information about confidence and dialogue context coherently models our annotators' uncertainty about discourse relation labels.
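A minimal sketch (toy labels and scores, not the authors' code or data) of the pipeline just described: confidence-weighted co-occurrence representations of discourse relations over dialogue contexts, followed by hierarchical clustering.

```python
# Toy illustration: confidence-weighted co-occurrence representations
# of discourse relations, then agglomerative clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

relations = ["elaboration", "contrast", "cause", "acknowledgement"]
contexts = ["single_turn", "within_speaker", "across_speakers"]

# Each annotation: (relation, dialogue context, confidence in [0, 1]).
annotations = [
    ("elaboration", "single_turn", 0.9),
    ("elaboration", "within_speaker", 0.7),
    ("contrast", "across_speakers", 0.4),
    ("cause", "within_speaker", 0.8),
    ("acknowledgement", "across_speakers", 0.6),
    ("contrast", "single_turn", 0.5),
]

# Confidence-weighted relation-by-context co-occurrence counts.
rep = np.zeros((len(relations), len(contexts)))
for rel, ctx, conf in annotations:
    rep[relations.index(rel), contexts.index(ctx)] += conf

# Normalize rows so each relation is a distribution over contexts.
rep /= np.maximum(rep.sum(axis=1, keepdims=True), 1e-12)

# Agglomerative clustering of the relation representations.
Z = linkage(rep, method="average", metric="cosine")
for rel, lab in zip(relations, fcluster(Z, t=2, criterion="maxclust")):
    print(f"{rel}: cluster {lab}")
```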

We develop a new method for selecting the penalty parameter for $\ell_1$-penalized M-estimators in high dimensions, which we refer to as bootstrapping after cross-validation. We derive rates of convergence for the corresponding $\ell_1$-penalized M-estimator and also for the post-$\ell_1$-penalized M-estimator, which refits the non-zero parameters of the former estimator without penalty in the criterion function. We demonstrate via simulations that our method is not dominated by cross-validation in terms of estimation errors and outperforms cross-validation in terms of inference. As an illustration, we revisit Fryer Jr (2019), who investigated racial differences in police use of force, and confirm his findings.
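The post-$\ell_1$ refit step is concrete enough to sketch. Below, plain cross-validation stands in for the paper's bootstrap-after-cross-validation choice of the penalty (which is not reproduced here); the data are synthetic.

```python
# Sketch: l1-penalized estimation with a CV-chosen penalty, followed by
# the post-l1 refit of the selected coordinates without penalty.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, d, s = 200, 50, 5                       # samples, dimension, sparsity
X = rng.standard_normal((n, d))
beta = np.zeros(d)
beta[:s] = 1.0                             # true non-zero coefficients
y = X @ beta + rng.standard_normal(n)

# Step 1: l1-penalized M-estimator (squared loss), penalty chosen by CV.
lasso = LassoCV(cv=5).fit(X, y)
support = np.flatnonzero(lasso.coef_)

# Step 2: post-l1 estimator -- refit selected coordinates without penalty.
post = LinearRegression().fit(X[:, support], y)
beta_post = np.zeros(d)
beta_post[support] = post.coef_

print("selected coordinates:", support)
print("post-refit estimation error:", np.linalg.norm(beta_post - beta))
```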

In this paper, we study the spatial bandwidth of line-of-sight (LOS) channels with linear large-scale antenna arrays (LSAAs) in 3D space. We provide approximations to the spatial bandwidth at the center of the receiving array of the form $C R^{-B}$, where $R$ is the radial distance and $C$ and $B$ are direction-dependent and piecewise constant in $R$. The approximations are valid in the entire radiative region, that is, for $R$ greater than a few wavelengths. When the length of the receiving array is small relative to $R$, the product of the array length and the spatial bandwidth provides an estimate of the available spatial degrees of freedom (DOF) in the channel. In a case study, we apply these approximations to the evaluation of spatial multiplexing regions under random orientation conditions. We demonstrate the goodness-of-fit of the approximations and obtain some interesting findings about the DOF performance of the channel under 3D and 2D orientation restrictions: e.g., under some conditions, it is better to constrain the receiving-array orientation to be uniform over the unit circle in the 2D ground plane than uniform over the 3D unit sphere.
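A toy illustration of this DOF estimate; the values of $C$, $B$, and the array length below are hypothetical, as the paper's direction-dependent constants are not reproduced here.

```python
# Toy DOF estimate: spatial bandwidth ~ C * R**(-B), and
# DOF ~ array_length * bandwidth when the array is short relative to R.
import numpy as np

def spatial_bandwidth(R, C=2.0, B=1.0):
    """Approximate spatial bandwidth C * R**(-B); in the paper's model,
    C and B are direction-dependent and piecewise constant in R."""
    return C * R ** (-B)

array_length = 0.5                 # receiving-array length, hypothetical
for R in (2.0, 10.0, 50.0):        # distances within the radiative region
    W = spatial_bandwidth(R)
    print(f"R={R:5.1f}  bandwidth={W:.4f}  estimated DOF={array_length * W:.4f}")
```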

Past work has shown that the Bernstein-von Mises theorem, on the asymptotic normality of posterior distributions, holds if the parameter dimension $d$ grows more slowly than the cube root of the sample size $n$. Here, we prove the first Bernstein-von Mises theorem in the regime $d^2\ll n$. We establish this result for 1) exponential families and 2) logistic regression with Gaussian design. The proof builds on our recent work on the accuracy of the Laplace approximation to posterior distributions, in which we showed that the approximation error in TV distance scales as $d/\sqrt n$.
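A minimal sketch of the Laplace approximation underlying this analysis: a Gaussian centered at the MAP, with covariance equal to the inverse Hessian of the negative log-posterior. The data below are toy logistic-regression data with Gaussian design, and a flat prior is assumed for simplicity.

```python
# Laplace approximation for a toy logistic-regression posterior.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, d = 500, 3
X = rng.standard_normal((n, d))            # Gaussian design
theta_true = np.array([1.0, -0.5, 0.25])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ theta_true))).astype(float)

def neg_log_post(theta):
    z = X @ theta
    # Negative log-likelihood of logistic regression (flat prior).
    return np.sum(np.logaddexp(0.0, z) - y * z)

theta_map = minimize(neg_log_post, np.zeros(d), method="BFGS").x

# Hessian of the negative log-posterior at the MAP.
p = 1.0 / (1.0 + np.exp(-(X @ theta_map)))
H = X.T @ (X * (p * (1.0 - p))[:, None])
cov = np.linalg.inv(H)                     # Laplace (Gaussian) covariance

print("MAP estimate:", np.round(theta_map, 3))
print("Laplace posterior sd:", np.round(np.sqrt(np.diag(cov)), 3))
```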

In this paper, we introduce the concept of a Density-Balanced Subset in a matroid, in which independent sets can be sampled so as to guarantee that (i) each element has the same probability of being sampled, and (ii) those events are negatively correlated. These Density-Balanced Subsets are subsets of the ground set of a matroid to which the traditional notion of uniform random sampling extends. We then provide an application of this concept to the Matroid-Constrained Maximum Coverage problem. In this problem, given a matroid $\mathcal{M} = (V, \mathcal{I})$ of rank $k$ on a ground set $V$ and a coverage function $f$ on $V$, the goal is to find an independent set $S \in \mathcal{I}$ maximizing $f(S)$. This problem is an important special case of the much-studied problem of submodular function maximization subject to a matroid constraint; it is also a generalization of the maximum $k$-cover problem in a graph. Assuming that the coverage function has a bounded frequency $\mu$ (i.e., any element of the underlying universe of the coverage function appears in at most $\mu$ sets), we design a procedure, parameterized by an integer $\rho$, to extract in polynomial time an approximate kernel of size $\rho \cdot k$ that is guaranteed to contain a $1 - (\mu - 1)/\rho$ approximation of the optimal solution. This procedure can then be used to obtain a Fixed-Parameter Tractable Approximation Scheme (FPT-AS) providing a $1 - \varepsilon$ approximation in time $(\mu/\varepsilon)^{O(k)} \cdot |V|^{O(1)}$. This generalizes and improves the results of [Manurangsi, 2019] and [Huang and Sellier, 2022], providing the first FPT-AS that works on an arbitrary matroid. Moreover, because of its simplicity, the kernel construction can be performed in the streaming setting.
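For intuition, here is a minimal sketch of a kernel in the spirit of [Manurangsi, 2019] for the special case of a uniform matroid, i.e., the classical maximum $k$-cover problem: keep the $\rho \cdot k$ sets of largest individual coverage, then optimize inside the kernel (greedily here, for illustration). The paper's matroid-general, frequency-$\mu$ construction is more involved and is not reproduced.

```python
# Kernel sketch for maximum k-cover (uniform-matroid special case).
def kernel_max_cover(sets, k, rho):
    # Kernel: indices of the rho*k sets with the largest sizes.
    kernel = sorted(range(len(sets)), key=lambda i: -len(sets[i]))[: rho * k]
    covered, chosen = set(), []
    for _ in range(k):                       # greedy inside the kernel
        best = max(kernel, key=lambda i: len(sets[i] - covered))
        kernel.remove(best)
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}, {2, 5}]
print(kernel_max_cover(sets, k=2, rho=2))   # picks {1,2,3} and {4,5,6}
```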

In this paper, we study the Bayesian multi-task variable selection problem, where the goal is to select activated variables for multiple related data sets simultaneously. Our proposed method generalizes the spike-and-slab prior to multiple data sets, and we prove its posterior consistency in high-dimensional regimes. To calculate the posterior distribution, we propose a novel variational Bayes algorithm based on the recently developed "sum of single effects" model of Wang et al. (2020). Finally, motivated by differential gene network analysis in biology, we extend our method to joint learning of multiple directed acyclic graphical models. Both simulation studies and real gene expression data analysis are conducted to show the effectiveness of the proposed method.
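A minimal generative sketch of the multi-task spike-and-slab idea, with hypothetical hyperparameters (this is not the paper's exact prior or its variational algorithm): a shared inclusion indicator activates the same variables across related tasks, while effect sizes are task-specific.

```python
# Toy multi-task spike-and-slab generative model.
import numpy as np

rng = np.random.default_rng(2)
d, n_tasks, pi, slab_sd = 20, 3, 0.15, 1.0

gamma = rng.random(d) < pi                  # shared spike/slab indicators
betas = np.where(gamma[None, :],            # task-specific slab effects
                 rng.normal(0.0, slab_sd, (n_tasks, d)),
                 0.0)

print("activated variables:", np.flatnonzero(gamma))
print("per-task non-zero coefficients:\n", np.round(betas, 2))
```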

The problem of learning one task with samples from another task has received much recent interest. In this paper, we ask a fundamental question: when is combining data from two tasks better than learning one task alone? Intuitively, the transfer effect from one task to another depends on dataset shifts such as sample sizes and covariance matrices. However, quantifying such a transfer effect is challenging, since we need to compare the risks of joint learning and single-task learning, and the comparative advantage of one over the other depends on the exact kind of dataset shift between the two tasks. This paper uses random matrix theory to tackle this challenge in a linear regression setting with two tasks. We give precise asymptotics for the excess risks of some commonly used estimators in the high-dimensional regime, where the sample sizes increase proportionally with the feature dimension at fixed ratios. The precise asymptotics are given as a function of the sample sizes and covariate/model shifts, and can be used to study transfer effects: in a random-effects model, we give conditions that determine positive and negative transfer between learning two tasks and single-task learning; the conditions reveal intricate relations between dataset shifts and transfer effects. Simulations justify the validity of the asymptotics in finite dimensions. Our analysis examines several functions of two different sample covariance matrices, revealing some estimates that generalize classical results in the random matrix theory literature, which may be of independent interest.
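A toy finite-dimensional simulation of the question posed above (the shift sizes and ridge penalty are hypothetical, not the paper's estimators): compare single-task and pooled ridge regression for task 1 as the model shift between the two tasks grows, illustrating positive vs. negative transfer.

```python
# Single-task vs. pooled ridge regression under increasing model shift.
import numpy as np

rng = np.random.default_rng(3)
n1, n2, d, lam = 100, 100, 50, 1.0

def ridge(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

beta1 = rng.standard_normal(d) / np.sqrt(d)
for shift in (0.0, 0.5, 2.0):              # magnitude of the model shift
    beta2 = beta1 + shift * rng.standard_normal(d) / np.sqrt(d)
    X1, X2 = rng.standard_normal((n1, d)), rng.standard_normal((n2, d))
    y1 = X1 @ beta1 + rng.standard_normal(n1)
    y2 = X2 @ beta2 + rng.standard_normal(n2)

    single = ridge(X1, y1, lam)
    pooled = ridge(np.vstack([X1, X2]), np.concatenate([y1, y2]), lam)

    # Excess risk for task 1 under identity covariance: ||est - beta1||^2.
    print(f"shift={shift:3.1f}  single={np.sum((single - beta1)**2):.3f}"
          f"  pooled={np.sum((pooled - beta1)**2):.3f}")
```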

In this paper, we describe a new algorithm called Preferential Attachment k-class Classifier (PreAttacK) for detecting fake accounts in a social network. Recently, several algorithms have obtained high accuracy on this problem. However, they have done so by relying on information about fake accounts' friendships or the content they share with others--the very things we seek to prevent. PreAttacK represents a significant departure from these approaches. We provide some of the first detailed distributional analyses of how new fake (and real) accounts first attempt to request friends after joining a major network (Facebook). We show that even before a new account has made friends or shared content, these initial friend-request behaviors evoke a natural multi-class extension of the canonical Preferential Attachment model of social network growth. We use this model to derive a new algorithm, PreAttacK. We prove that in relevant problem instances, PreAttacK near-optimally approximates the posterior probability that a new account is fake under this multi-class Preferential Attachment model of new accounts' (not-yet-answered) friend requests. These are the first provable guarantees for fake account detection that apply to new users and that do not require strong homophily assumptions. This principled approach also makes PreAttacK the only algorithm with provable guarantees that obtains state-of-the-art performance on new users on the global Facebook network, where it converges to AUC = 0.9 after new users send and receive a total of just 20 not-yet-answered friend requests. For comparison, state-of-the-art benchmarks do not obtain this AUC even after observing additional data on new users' first 100 friend requests. Thus, unlike mainstream algorithms, PreAttacK converges before the median new fake account has made a single friendship (accepted friend request) with a human.
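A minimal Bayes-rule sketch of scoring a new account under a two-class preferential-attachment-style model; the bucket probabilities and prior below are hypothetical, not the fitted PreAttacK model. Each class has its own distribution over the degree buckets of friend-request targets, and the posterior is updated per observed (unanswered) request.

```python
# Toy two-class posterior over a new account being fake, from the degree
# buckets of its not-yet-answered friend-request targets.
import numpy as np

# Class-conditional probabilities of targeting a (low, mid, high)-degree
# account; toy values in which fakes prefer high-degree targets.
p_target = {
    "fake": np.array([0.1, 0.3, 0.6]),
    "real": np.array([0.5, 0.4, 0.1]),
}
prior_fake = 0.05

def posterior_fake(requests):
    """requests: bucket indices (0=low, 1=mid, 2=high) of the new
    account's not-yet-answered friend requests."""
    log_odds = np.log(prior_fake / (1.0 - prior_fake))
    for b in requests:
        log_odds += np.log(p_target["fake"][b] / p_target["real"][b])
    return 1.0 / (1.0 + np.exp(-log_odds))

# e.g., 20 requests, mostly toward high-degree accounts.
print(posterior_fake([2] * 15 + [1] * 5))
```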

Philosophical research in AI has hitherto largely focused on the ethics of AI. In this paper we, an ethicist of belief and a machine learning scientist, suggest that we need to pursue a novel area of philosophical research in AI - the epistemology of AI, and in particular an ethics of belief for AI. Here we take the ethics of belief, a field that has been defined in various ways, to refer to a sub-field within epistemology. This subfield is concerned with the study of possible moral, practical, and other non-alethic dimensions of belief. And in this paper, we will primarily be concerned with the normative question within the ethics of belief regarding what agents - both human and artificial - ought to believe, rather than with descriptive questions concerning whether certain beliefs meet various evaluative standards such as being true, being justified or warranted, constituting knowledge, and so on. We suggest four topics in extant work in the ethics of (human) belief that can be applied to an ethics of AI belief: doxastic wronging by AI; morally owed beliefs; pragmatic and moral encroachment on AI beliefs; and moral responsibility for AI beliefs. We also indicate two relatively nascent areas of philosophical research that haven't yet been generally recognized as ethics of AI belief research, but that do fall within this field of research in virtue of investigating various moral and practical dimensions of belief: the epistemic and ethical decolonization of AI; and epistemic injustice in AI.

Multimodality Representation Learning, as a technique for learning to embed information from different modalities and their correlations, has achieved remarkable success in a variety of applications, such as Visual Question Answering (VQA), Natural Language for Visual Reasoning (NLVR), and Vision-Language Retrieval (VLR). Among these applications, cross-modal interaction and complementary information from different modalities are crucial for advanced models to perform any multimodal task, e.g., to understand, recognize, retrieve, or generate optimally. Researchers have proposed diverse methods to address these tasks, and different variants of transformer-based architectures have performed extraordinarily well across multiple modalities. This survey presents a comprehensive review of the literature on the evolution and enhancement of deep learning multimodal architectures that handle textual, visual, and audio features for diverse cross-modal and modern multimodal tasks. This study summarizes (i) recent task-specific deep learning methodologies, (ii) pretraining types and multimodal pretraining objectives, (iii) state-of-the-art pretrained multimodal approaches and unifying architectures, and (iv) multimodal task categories and possible future improvements that can be devised for better multimodal learning. Moreover, we prepare a dataset section for new researchers that covers most of the benchmarks for pretraining and finetuning. Finally, major challenges, gaps, and potential research topics are explored. A constantly updated paper list related to our survey is maintained at //github.com/marslanm/multimodality-representation-learning.
