
Recent theoretical developments in coset coding theory have provided continuous-valued functions which give the equivocation and maximum likelihood (ML) decoding probability of coset secrecy codes. In this work, we develop a method for incorporating these functions, along with a complex set of constraints, into a gradient descent optimization algorithm. The algorithm employs a movement cost function and a trigonometric update step to ensure that the continuous-valued code definition vector ultimately reaches a value which yields a realizable coset code. We use this algorithm to produce coset codes with blocklengths up to a few thousand and compare them against published codes, including both short-blocklength and capacity-achieving constructions. For most code sizes, codes generated using gradient descent outperformed all others, especially the capacity-achieving constructions, which performed significantly worse than randomly generated codes at short blocklengths.
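
As a rough sketch of the optimization idea (not the authors' exact algorithm), the snippet below runs gradient descent on a placeholder objective while a trigonometric penalty, whose gradient vanishes exactly at integer points, drives each coordinate of the code definition vector toward a realizable 0/1 value. The objective `leakage`, the penalty schedule `lam_max`, and the final rounding step are all illustrative assumptions.

```python
import numpy as np

def leakage(x):
    # Placeholder smooth surrogate for the equivocation/ML-decoding
    # objective; the paper's actual coset-code functions are not reproduced.
    return np.sum((x - 0.3) ** 2)

def leakage_grad(x):
    return 2.0 * (x - 0.3)

def optimize_code_vector(n=64, steps=2000, lr=0.05, lam_max=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=n)  # continuous code definition vector
    for t in range(steps):
        lam = lam_max * t / steps  # gradually enforce realizability
        # d/dx sin^2(pi x) = pi sin(2 pi x); the penalty is zero exactly
        # at integers, pushing coordinates toward realizable 0/1 values.
        pen_grad = lam * np.pi * np.sin(2.0 * np.pi * x)
        x -= lr * (leakage_grad(x) + pen_grad)
    return np.round(x).astype(int)  # snap to the nearest realizable code

print(optimize_code_vector()[:16])
```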

Related content

We study the problem of fairly allocating indivisible goods among agents equipped with {\em leveled} valuation functions. Such preferences, which have been studied before in the economics and fair division literature, capture a simple and intuitive economic behavior: larger bundles are always preferred to smaller ones. We provide a fine-grained analysis for various subclasses of leveled valuations, focusing on two extensively studied notions of fairness, (approximate) MMS and EFX. In particular, we present a general positive result, showing the existence of $2/3$-MMS allocations under valuations that are both leveled and submodular. We also show how some of our ideas can be used beyond the class of leveled valuations; for the case of two submodular (not necessarily leveled) agents, we show that there always exists a $2/3$-MMS allocation, complementing a recent impossibility result. Then, we switch to the case of subadditive and fractionally subadditive leveled agents, where we are able to show tight (lower and upper) bounds of $1/2$ on the approximation factor of MMS. Moreover, we show the existence of exact EFX allocations under general leveled valuations via a simple protocol that additionally satisfies several natural economic properties. Finally, we take a mechanism design approach and propose protocols that are both truthful and approximately fair under leveled valuations.
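
For concreteness, here is a small sketch of the EFX condition itself rather than the paper's allocation protocol: an allocation is EFX if no agent envies another agent's bundle after the removal of any single good from that bundle. The leveled valuation used below (bundle size dominating item identity) is an illustrative placeholder.

```python
def is_efx(allocation, valuations):
    """Check envy-freeness up to any good (EFX).

    allocation: dict agent -> frozenset of goods
    valuations: dict agent -> function(set of goods) -> float
    """
    for i, bundle_i in allocation.items():
        v = valuations[i]
        for j, bundle_j in allocation.items():
            if i == j:
                continue
            for g in bundle_j:
                # i must not envy j's bundle with any single good removed
                if v(bundle_i) < v(bundle_j - {g}):
                    return False
    return True

# Illustrative leveled valuation: larger bundles are always preferred.
def leveled(bundle):
    return len(bundle) + 0.01 * sum(bundle)  # size dominates item identity

alloc = {0: frozenset({1, 4}), 1: frozenset({2, 3})}
vals = {0: leveled, 1: leveled}
print(is_efx(alloc, vals))  # True for this balanced split
```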

Submodular optimization is a fundamental problem with many applications in machine learning, often involving decision-making over datasets with sensitive attributes such as gender or age. In such settings, it is often desirable to produce a diverse solution set that is fairly distributed with respect to these attributes. Motivated by this, we initiate the study of Fair Submodular Cover (FSC): given a ground set $U$, a monotone submodular function $f:2^U\to\mathbb{R}_{\ge 0}$, and a threshold $\tau$, the goal is to find a balanced subset $S\subseteq U$ of minimum cardinality such that $f(S)\ge\tau$. We first introduce discrete algorithms for FSC that achieve a bicriteria approximation ratio of $(\frac{1}{\epsilon}, 1-O(\epsilon))$. We then present a continuous algorithm that achieves a $(\ln\frac{1}{\epsilon}, 1-O(\epsilon))$-bicriteria approximation ratio, which matches the best approximation guarantee of submodular cover without a fairness constraint. Finally, we complement our theoretical results with a number of empirical evaluations that demonstrate the effectiveness of our algorithms on instances of maximum coverage.
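
A minimal greedy sketch of the FSC setting, not the paper's bicriteria algorithms: elements carry a group attribute, and each step adds the best-marginal-gain element from a currently least-represented group until $f(S)\ge\tau$. The coverage function and group labels below are illustrative.

```python
def fair_greedy_cover(universe, groups, f, tau):
    """Greedy heuristic for Fair Submodular Cover.

    universe: iterable of elements
    groups:   dict element -> group label
    f:        monotone submodular set function, f(set) -> float
    tau:      coverage threshold
    """
    S = set()
    while f(S) < tau:
        counts = {g: 0 for g in set(groups.values())}
        for e in S:
            counts[groups[e]] += 1
        remaining = [e for e in universe if e not in S]
        if not remaining:
            break
        # restrict to elements from the least-represented group(s)
        min_count = min(counts[groups[e]] for e in remaining)
        candidates = [e for e in remaining if counts[groups[e]] == min_count]
        # among those, pick the candidate with the largest marginal gain
        best = max(candidates, key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# Illustrative coverage instance: f counts the items covered by chosen sets.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"d", "e"}}
grp = {1: "x", 2: "y", 3: "x", 4: "y"}
cover = lambda S: len(set().union(*[sets[e] for e in S]))
print(fair_greedy_cover(sets, grp, cover, tau=5))  # e.g. {1, 2, 4}
```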

End-to-end (E2E) approaches to keyword search (KWS) are considerably simpler, in terms of training and indexing complexity, than approaches which use the output of automatic speech recognition (ASR) systems. This simplification, however, has drawbacks due to the loss of modularity. In particular, whereas ASR-based KWS systems can benefit from external unpaired text via a language model, current formulations of E2E KWS systems have no such mechanism. Therefore, in this paper, we propose a multitask training objective which allows unpaired text to be integrated into E2E KWS without complicating indexing and search. In addition to training an E2E KWS model to retrieve text queries from spoken documents, we jointly train it to retrieve text queries from masked written documents. We show empirically that this approach can effectively leverage unpaired text for KWS, with significant improvements in search performance across a wide variety of languages. Our analysis indicates that these improvements are achieved because the proposed method improves document representations for words in the unpaired text. Finally, we show that the proposed method can be used for domain adaptation in settings where in-domain paired data is scarce or nonexistent.
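
The multitask objective can be sketched abstractly; the encoders, masking rate, and loss weighting below are placeholders rather than the paper's architecture. Alongside the usual loss matching text queries to spoken documents, the same retrieval loss is applied to masked written documents, letting unpaired text shape the document representations.

```python
import torch
import torch.nn.functional as F

def mask_tokens(token_ids, mask_id, p=0.3):
    """Randomly replace a fraction p of tokens with a mask token."""
    mask = torch.rand(token_ids.shape) < p
    return torch.where(mask, torch.full_like(token_ids, mask_id), token_ids)

def multitask_loss(query_emb, spoken_doc_emb, text_doc_emb, labels, alpha=1.0):
    """Joint retrieval objective over spoken and masked written documents.

    query_emb:       (B, D) embeddings of text queries
    spoken_doc_emb:  (B, D) embeddings of spoken documents
    text_doc_emb:    (B, D) embeddings of masked written documents
    labels:          (B,) index of the matching document for each query
    """
    sim_audio = query_emb @ spoken_doc_emb.T   # query vs. spoken docs
    sim_text = query_emb @ text_doc_emb.T      # query vs. masked text docs
    loss_audio = F.cross_entropy(sim_audio, labels)
    loss_text = F.cross_entropy(sim_text, labels)
    return loss_audio + alpha * loss_text

# Toy shapes only; real encoders would produce these embeddings.
B, D = 4, 16
q = torch.randn(B, D)
print(mask_tokens(torch.arange(12), mask_id=0))
print(multitask_loss(q, torch.randn(B, D), torch.randn(B, D),
                     torch.arange(B)).item())
```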

The problem of an optimal mapping between Hilbert spaces $IN$ and $OUT$, based on a series of density matrix mapping measurements $\rho^{(l)} \to \varrho^{(l)}$, $l=1\dots M$, is formulated as an optimization problem maximizing the total fidelity $\mathcal{F}=\sum_{l=1}^{M} \omega^{(l)} F\left(\varrho^{(l)},\sum_s B_s \rho^{(l)} B^{\dagger}_s\right)$ subject to probability-preservation constraints on the Kraus operators $B_s$. When $F(\varrho,\sigma)$ has a form in which the total fidelity can be represented as a quadratic form with a superoperator, $\mathcal{F}=\sum_s\left\langle B_s\middle|S\middle| B_s \right\rangle$ (either exactly or as an approximation), an iterative algorithm is developed to find the global maximum. The result comprises $N_s$ operators $B_s$ that collectively form an $IN$ to $OUT$ quantum channel $A^{OUT}=\sum_s B_s A^{IN} B_s^{\dagger}$. The work introduces two important generalizations of unitary learning: 1. $IN$/$OUT$ states are represented as density matrices. 2. The mapping itself is formulated as a general quantum channel. This marks a crucial advancement from the commonly studied unitary mapping of pure states $\phi_l=\mathcal{U} \psi_l$ to a general quantum channel, which allows us to distinguish a probabilistic mixture of states from their superposition. An application of the approach is demonstrated on unitary learning of the density matrix mapping $\varrho^{(l)}=\mathcal{U} \rho^{(l)} \mathcal{U}^{\dagger}$, in which case a fidelity quadratic in $\mathcal{U}$ can be constructed by considering the $\sqrt{\rho^{(l)}} \to \sqrt{\varrho^{(l)}}$ mapping, and on a general quantum channel of Kraus rank $N_s$, where the fidelity quadratic in $B_s$ is an approximation -- the quantum channel is then built as a hierarchy of unitary mappings. The approach can be applied to study decoherence effects, spontaneous coherence, synchronization, etc.
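
As a small numerical illustration, the sketch below applies a Kraus-operator channel and scores the mapping with a Hilbert-Schmidt overlap, a simple quadratic surrogate standing in for the paper's fidelity form; the operators, states, and weights are random placeholders.

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """A_OUT = sum_s B_s rho B_s^dagger."""
    return sum(B @ rho @ B.conj().T for B in kraus_ops)

def hs_overlap(sigma, target):
    # Hilbert-Schmidt overlap Tr(sigma target): a simple quadratic
    # surrogate, NOT the exact fidelity used in the paper.
    return np.real(np.trace(sigma @ target))

def total_score(pairs, kraus_ops, weights=None):
    """Weighted sum of overlaps over the measured (rho, varrho) pairs."""
    weights = weights or [1.0] * len(pairs)
    return sum(w * hs_overlap(apply_channel(rho, kraus_ops), target)
               for w, (rho, target) in zip(weights, pairs))

def random_density(n, rng):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(0)
n = 2
pairs = [(random_density(n, rng), random_density(n, rng)) for _ in range(3)]
identity_channel = [np.eye(n)]  # trivially trace-preserving
print(total_score(pairs, identity_channel))
```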

Most curriculum learning methods require an approach to sort the data samples by difficulty, which is often cumbersome to perform. In this work, we propose a novel curriculum learning approach termed Learning Rate Curriculum (LeRaC), which uses a different learning rate for each layer of a neural network to create a data-agnostic curriculum during the initial training epochs. More specifically, LeRaC assigns higher learning rates to neural layers closer to the input, gradually decreasing the learning rates as the layers are placed farther away from the input. The learning rates increase at various paces during the first training iterations, until they all reach the same value. From this point on, the neural model is trained as usual. This creates a model-level curriculum learning strategy that does not require sorting the examples by difficulty and is compatible with any neural network, yielding higher performance regardless of the architecture. We conduct comprehensive experiments on 12 data sets from the computer vision (CIFAR-10, CIFAR-100, Tiny ImageNet, ImageNet-200, Food-101, UTKFace, PASCAL VOC), language (BoolQ, QNLI, RTE) and audio (ESC-50, CREMA-D) domains, considering various convolutional (ResNet-18, Wide-ResNet-50, DenseNet-121, YOLOv5), recurrent (LSTM) and transformer (CvT, BERT, SepTr) architectures. We compare our approach with the conventional training regime, as well as with Curriculum by Smoothing (CBS), a state-of-the-art data-agnostic curriculum learning approach. Unlike CBS, our performance improvements over the standard training regime are consistent across all data sets and models. Furthermore, we significantly surpass CBS in terms of training time (LeRaC adds no cost over the standard training regime). Our code is freely available at: https://github.com/CroitoruAlin/LeRaC.
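
A minimal PyTorch sketch of the layer-wise schedule as described in the abstract; the initial rates, the decay factor, and the linear ramp are illustrative choices, not the paper's exact schedule.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 10))

eta = 1e-3    # common target learning rate
decay = 0.1   # each deeper layer starts 10x lower (illustrative choice)
layers = [m for m in model if isinstance(m, nn.Linear)]

# Layers closer to the input start at higher learning rates.
param_groups = [{"params": layer.parameters(),
                 "lr": eta * decay ** depth}
                for depth, layer in enumerate(layers)]
optimizer = torch.optim.SGD(param_groups, lr=eta)

warmup_iters = 100
for step in range(warmup_iters):
    # Linearly ramp every group's rate up to the shared value eta;
    # after the ramp, training proceeds as usual.
    for depth, group in enumerate(optimizer.param_groups):
        start = eta * decay ** depth
        group["lr"] = start + (eta - start) * (step + 1) / warmup_iters
    # ... forward/backward/optimizer.step() would go here ...
```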

We explore methods to reduce the impact of unobserved confounders on the causal mediation analysis of high-dimensional mediators with spatially smooth structures, such as brain imaging data. The key approach is to incorporate the latent individual effects, which influence the structured mediators, as unobserved confounders in the outcome model, thereby potentially debiasing the mediation effects. We develop the BAyesian Structured Mediation analysis with Unobserved confounders (BASMU) framework and establish its model identifiability conditions. Theoretical analysis is conducted on the asymptotic bias of the Natural Indirect Effect (NIE) and the Natural Direct Effect (NDE) when the unobserved confounders are omitted from the mediation analysis. For BASMU, we propose a two-stage estimation algorithm to mitigate the impact of these unobserved confounders on estimating the mediation effect. Extensive simulations demonstrate that BASMU substantially reduces the bias in various scenarios. We apply BASMU to the analysis of fMRI data in the Adolescent Brain Cognitive Development (ABCD) study, focusing on four brain regions previously reported to exhibit meaningful mediation effects. Compared with an existing image mediation analysis method, BASMU identifies two to four times more voxels with significant mediation effects, with the estimated NIE increased by 41% and the NDE decreased by 26%.
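
A toy simulation (not BASMU itself) of the bias the framework targets: an unobserved individual effect that drives both the mediator and the outcome inflates the naive indirect-effect estimate, while adjusting for it recovers the truth. All coefficients and noise scales below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.standard_normal(n)   # exposure
u = rng.standard_normal(n)   # unobserved individual effect (confounder)
m = 0.5 * x + u + 0.1 * rng.standard_normal(n)          # mediator
y = 0.8 * m + 0.3 * x + u + 0.1 * rng.standard_normal(n)  # outcome

def ols(target, *cols):
    """Least-squares coefficients for target ~ cols + intercept."""
    X = np.column_stack(cols + (np.ones(len(target)),))
    return np.linalg.lstsq(X, target, rcond=None)[0]

# Naive mediation analysis: regress m on x, and y on (m, x), ignoring u.
a_hat = ols(m, x)[0]
b_naive = ols(y, m, x)[0]
print("naive NIE:", a_hat * b_naive, "(true NIE: 0.4)")

# An oracle that adjusts for u recovers the truth; BASMU instead infers
# a latent individual effect from the structured mediator.
b_adj = ols(y, m, x, u)[0]
print("adjusted NIE:", a_hat * b_adj)
```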

A wide range of symbolic analysis and optimization problems can be formalized using polyhedra. Sub-classes of polyhedra, also known as sub-polyhedral domains, are sought for their lower space and time complexity. We introduce the Strided Difference Bound Matrix (SDBM) domain, which represents a sweet spot in the context of optimizing compilers. Its expressiveness and efficient algorithms are particularly well suited to the construction of machine learning compilers. We present decision algorithms, abstract domain operators and computational complexity proofs for SDBM. We also conduct an empirical study with the MLIR compiler framework to validate the domain's practical applicability. We characterize a sub-class of SDBMs that frequently occurs in practice, and demonstrate even faster algorithms on this sub-class.
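
For background, the classic difference bound matrix on which SDBM builds can be sketched as follows; this shows only the base domain with Floyd-Warshall closure, while the stride extension and the faster algorithms are the paper's contribution and are not reproduced here.

```python
import numpy as np

INF = float("inf")

def dbm_close(D):
    """Tighten a difference bound matrix where D[i][j] encodes x_j - x_i <= D[i][j].

    Uses Floyd-Warshall shortest paths on the constraint graph; a negative
    diagonal entry after closure means the system is infeasible.
    """
    n = len(D)
    D = np.array(D, dtype=float)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i, j] = min(D[i, j], D[i, k] + D[k, j])
    feasible = all(D[i, i] >= 0 for i in range(n))
    return D, feasible

# x1 - x0 <= 3 and x2 - x1 <= 4; closure derives x2 - x0 <= 7.
D = [[0, 3, INF],
     [INF, 0, 4],
     [INF, INF, 0]]
closed, ok = dbm_close(D)
print(closed[0, 2], ok)  # 7.0 True
```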

For a given function of user data, a querier must recover, with at least a prescribed probability, the value of the function from a user-provided query response. Subject to this requirement, the user forms the query response so as to minimize the probability that the querier, based on the query response, correctly guesses a list of prescribed size to which the data value belongs. We obtain a general converse upper bound for maximum list privacy. This bound is shown to be tight for the case of a binary-valued function through an explicit achievability scheme that involves an add-noise query response.
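
A toy illustration of the add-noise idea for a binary-valued function; the noise model and parameters are illustrative, not the paper's exact scheme. Releasing the function value flipped with probability $1-\rho$ meets a recovery requirement of $\rho$ while revealing strictly less about the underlying data.

```python
import random

def noisy_response(x, f, rho):
    """Release f(x) flipped with probability 1 - rho, so the querier
    recovers the true function value with probability exactly rho."""
    value = f(x)
    return value if random.random() < rho else 1 - value

# Toy check: parity of 3-bit data, required recovery probability 0.9.
f = lambda x: x % 2
rho = 0.9
random.seed(0)
trials = 100_000
hits = sum(noisy_response(x, f, rho) == f(x)
           for x in (random.randrange(8) for _ in range(trials)))
print(hits / trials)  # empirically close to 0.9
```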

A considerable literature has recently grown up around the theme of Graph Convolutional Networks (GCNs). How to effectively leverage the rich structural information in complex graphs, such as knowledge graphs with heterogeneous types of entities and relations, is a primary open challenge in the field. Most GCN methods are either restricted to graphs with a homogeneous type of edges (e.g., citation links only), or focus on representation learning for nodes only instead of jointly propagating and updating the embeddings of both nodes and edges for target-driven objectives. This paper addresses these limitations by proposing a novel framework, the Knowledge Embedding based Graph Convolutional Network (KE-GCN), which combines the power of GCNs in graph-based belief propagation with the strengths of advanced knowledge embedding (a.k.a. knowledge graph embedding) methods, and goes beyond both. Our theoretical analysis shows that KE-GCN offers an elegant unification of several well-known GCN methods as specific cases, with a new perspective on graph convolution. Experimental results on benchmark datasets show the advantageous performance of KE-GCN over strong baseline methods in the tasks of knowledge graph alignment and entity classification.
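
A schematic of the joint propagation step, simplified by assuming a TransE-style scoring function (the paper's framework is more general): the gradients of the triple score with respect to entity and relation embeddings serve as the aggregated messages that update both.

```python
import numpy as np

def ke_gcn_layer(nodes, rels, triples, step=0.5):
    """One simplified propagation step over (head, relation, tail) triples.

    With a TransE-style score s = -||h + r - t||^2, the gradient of s with
    respect to each embedding becomes that embedding's aggregated message,
    so node AND relation embeddings are updated jointly.
    """
    node_msg = {v: np.zeros_like(e) for v, e in nodes.items()}
    rel_msg = {r: np.zeros_like(e) for r, e in rels.items()}
    for h, r, t in triples:
        diff = nodes[h] + rels[r] - nodes[t]
        node_msg[h] += -2 * diff   # ds/dh
        node_msg[t] += 2 * diff    # ds/dt
        rel_msg[r] += -2 * diff    # ds/dr
    new_nodes = {v: np.tanh(e + step * node_msg[v]) for v, e in nodes.items()}
    new_rels = {r: np.tanh(e + step * rel_msg[r]) for r, e in rels.items()}
    return new_nodes, new_rels

rng = np.random.default_rng(0)
nodes = {v: rng.standard_normal(4) for v in ["a", "b"]}
rels = {"likes": rng.standard_normal(4)}
print(ke_gcn_layer(nodes, rels, [("a", "likes", "b")])[0]["a"])
```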

We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results.
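
The lattice construction step can be illustrated independently of the LSTM; the lexicon and the Latin-character example below are toy placeholders. Every lexicon word matching a span of the character sequence becomes an extra path feeding the cell at the word's end position.

```python
def build_lattice(chars, lexicon, max_len=4):
    """Find all lexicon words over character spans.

    Returns a dict mapping each end index to the list of
    (start, word) pairs feeding that position's cell.
    """
    lattice = {i: [] for i in range(len(chars))}
    for start in range(len(chars)):
        for end in range(start + 1, min(start + max_len, len(chars)) + 1):
            word = "".join(chars[start:end])
            if word in lexicon:
                lattice[end - 1].append((start, word))
    return lattice

# Toy example with Latin "characters" standing in for Chinese ones.
chars = list("riverbank")
lexicon = {"river", "bank", "riverbank", "an"}
for end, words in build_lattice(chars, lexicon, max_len=9).items():
    if words:
        print(end, words)
# 4 [(0, 'river')]
# 7 [(6, 'an')]
# 8 [(5, 'bank'), (0, 'riverbank')]
```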
