We show that any one-round algorithm that computes a minimum spanning tree (MST) in the unicast congested clique must use a link bandwidth of $\Omega(\log^3 n)$ bits in the worst case. Consequently, computing an MST under the standard assumption of $O(\log n)$-size messages requires at least $2$ rounds. This is the first round complexity lower bound in the unicast congested clique for a problem where the output size is small, i.e., $O(n\log n)$ bits. Our lower bound holds as long as every edge of the MST is output by an incident node. To the best of our knowledge, all prior lower bounds for the unicast congested clique either considered problems with large output sizes (e.g., triangle enumeration) or required every node to learn the entire output.

Related Content

In the search for a logic capturing polynomial time the most promising candidates are Choiceless Polynomial Time (CPT) and rank logic. Rank logic extends fixed-point logic with counting by a rank operator over prime fields. We show that the isomorphism problem for CFI graphs over $\mathbb{Z}_{2^i}$ cannot be defined in rank logic, even if the base graph is totally ordered. However, CPT can define this isomorphism problem. We thereby separate rank logic from CPT and in particular from polynomial time.

Knowledge Graph (KG)-to-Text Generation has seen recent improvements in generating fluent and informative sentences that describe a given KG. KGs are widespread across many domains and encode important entity-relation information, while text simplification aims to reduce the complexity of a text while preserving its meaning. We therefore propose KGSimple, a novel approach to unsupervised text simplification that applies KG-established techniques to construct a simplified KG path and generate a concise text preserving the original input's meaning. Through an iterative, sampling-based, KG-first approach, our model simplifies text starting from a KG by learning to keep important information while harnessing KG-to-text generation to produce fluent and descriptive sentences. We evaluate various settings of the KGSimple model on currently available KG-to-text datasets, demonstrating its effectiveness compared to unsupervised text simplification models that start from a given complex text. Our code is available on GitHub.
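
As a rough illustration of what an iterative, sampling-based, KG-first loop can look like, here is a self-contained toy sketch; the triple-dropping operation, the content-preservation score, and the string verbalizer are placeholder assumptions for illustration, not KGSimple's actual components.

```python
import random

# Toy sketch of an iterative, sampling-based, KG-first simplification loop
# (illustrative only; the operations and scores are stand-ins, not the paper's).

def score(triples, important_entities, length_penalty=0.5):
    """Reward keeping triples that mention important entities; penalize graph size."""
    kept = sum(1 for h, _, t in triples if h in important_entities or t in important_entities)
    return kept - length_penalty * len(triples)

def kg_to_text(triples):
    """Stand-in verbalizer: in a real pipeline this would be a KG-to-text generator."""
    return " ".join(f"{h} {r} {t}." for h, r, t in triples)

def simplify(triples, important_entities, n_iters=50, seed=0):
    rng = random.Random(seed)
    current = list(triples)
    for _ in range(n_iters):
        if len(current) <= 1:
            break
        candidate = list(current)
        candidate.pop(rng.randrange(len(candidate)))      # sampled simplification op: drop one triple
        if score(candidate, important_entities) >= score(current, important_entities):
            current = candidate                           # accept only content-preserving simplifications
    return kg_to_text(current)

kg = [("Ada_Lovelace", "born_in", "London"),
      ("Ada_Lovelace", "field", "mathematics"),
      ("London", "located_in", "England")]
print(simplify(kg, important_entities={"Ada_Lovelace"}))  # drops the peripheral triple, keeps the rest
```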

Past work has shown that the Bernstein-von Mises theorem, on the asymptotic normality of posterior distributions, holds if the parameter dimension $d$ grows more slowly than the cube root of the sample size $n$. Here, we prove the first Bernstein-von Mises theorem in the regime $d^2\ll n$. We establish this result for 1) exponential families and 2) logistic regression with Gaussian design. The proof builds on our recent work on the accuracy of the Laplace approximation to posterior distributions, in which we showed that the approximation error in TV distance scales as $d/\sqrt n$.
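
In hedged, schematic form (a paraphrase of the regime described above, not the paper's exact theorem), the Laplace approximation bound and the resulting regime read as
\[
\mathrm{TV}\Big(\Pi_n(\cdot \mid X_{1:n}),\ \mathcal{N}\big(\hat\theta_n, \hat\Sigma_n\big)\Big) \;\lesssim\; \frac{d}{\sqrt{n}},
\]
where $\Pi_n$ is the posterior and $\mathcal{N}(\hat\theta_n, \hat\Sigma_n)$ its Gaussian (Laplace) approximation; the right-hand side vanishes precisely when $d^2 \ll n$, which is the regime in which the new Bernstein-von Mises theorem is established.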

Programming-by-example (PBE) systems aim to alleviate the burden of programming. However, user-specified examples are often ambiguous, leaving multiple programs that satisfy the specification. Consequently, in most prior work, users have had to provide additional examples, particularly negative ones, to further constrain the search over compatible programs. Recent work resolves this ambiguity by modeling program synthesis tasks as pragmatic communication, showing promising results on a graphics domain in a rudimentary user study. We adapt pragmatic reasoning to a sub-domain of regular expressions and rigorously study its usability as a means of communication both with and without the ability to provide negative examples. Our user study (N=30) demonstrates that, with a pragmatic synthesizer, end users can more successfully communicate a target regex using positive examples alone (95%) than with a non-pragmatic synthesizer (51%). Furthermore, users communicate more efficiently (57% fewer examples) with a pragmatic synthesizer than with a non-pragmatic one.
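
To make the pragmatic-communication view concrete, the following is a minimal Rational Speech Acts (RSA)-style sketch over a toy hypothesis space; the candidate regexes, examples, and uniform priors are illustrative assumptions rather than the model or regex sub-domain used in the paper.

```python
import re

# Minimal RSA-style sketch of a "pragmatic" synthesizer over a toy space of
# regexes and positive examples (illustrative assumptions, not the paper's model).

HYPOTHESES = {"digits": r"^\d+$", "letters": r"^[a-z]+$", "anything": r"^.+$"}
EXAMPLES = ["2024", "abc", "a1"]

def consistent(h, e):
    return re.fullmatch(HYPOTHESES[h], e) is not None

def literal_listener(e):
    """L0: uniform over hypotheses consistent with the example."""
    ok = [h for h in HYPOTHESES if consistent(h, e)]
    return {h: (1 / len(ok) if h in ok else 0.0) for h in HYPOTHESES}

def speaker(h):
    """S1: prefers examples that make the literal listener guess h."""
    scores = {e: literal_listener(e)[h] for e in EXAMPLES if consistent(h, e)}
    z = sum(scores.values()) or 1.0
    return {e: s / z for e, s in scores.items()}

def pragmatic_listener(e):
    """L1: infers h from how a pragmatic speaker would have chosen e."""
    scores = {h: speaker(h).get(e, 0.0) for h in HYPOTHESES}
    z = sum(scores.values()) or 1.0
    return {h: s / z for h, s in scores.items()}

# "2024" is consistent with both "digits" and "anything", but the pragmatic
# listener favors "digits": a speaker intending "anything" had less reason to pick it.
print(pragmatic_listener("2024"))
```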

We show that the essential properties of entropy (monotonicity, additivity, and subadditivity) are consequences of entropy being a monoidal natural transformation from the under category functor $-/\mathsf{LProb}_{\rho}$ (where $\mathsf{LProb}_{\rho}$ is the category of $\ell_{\rho}$ discrete probability spaces) to $\Delta_{\mathbb{R}}$. Moreover, the Shannon entropy can be characterized as the universal monoidal natural transformation from $-/\mathsf{LProb}_{\rho}$ to the category of "strongly regularly ordered" vector spaces (a reflective subcategory of the lax-slice 2-category over $\mathsf{MonCat}_{\ell}$ in the 2-category of monoidal categories), providing a succinct characterization of Shannon entropy as a reflection arrow. We can likewise define entropy for every category with a monoidal structure on its under categories (e.g. the category of finite abelian groups, the category of finite inhabited sets, the category of finite dimensional vector spaces, and the augmented simplex category) via the reflection arrow to the reflective subcategory of strongly regularly ordered vector spaces. This implies that all these entropies over different categories are components of a single natural transformation (the unit of the idempotent monad), allowing us to connect these entropies in a natural manner. We also provide a universal characterization of the conditional Shannon entropy based on the chain rule which, unlike the characterization of information loss by Baez, Fritz and Leinster, does not require any continuity assumption.
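
For orientation, the classical facts being recovered categorically are, in standard notation (textbook statements stated here only for reference, not the paper's categorical formulation):
\[
H(p) = -\sum_i p_i \log p_i, \qquad H(X,Y) = H(X) + H(Y \mid X) \quad \text{(chain rule)},
\]
\[
H(p \otimes q) = H(p) + H(q) \ \text{(additivity)}, \qquad H(X,Y) \le H(X) + H(Y) \ \text{(subadditivity)}, \qquad H(f(X)) \le H(X) \ \text{(monotonicity)}.
\]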

Large Language Models (LLMs) have shown excellent generalization capabilities, which has led to the development of numerous models. These models introduce new architectures or tweak existing ones, refine training strategies, increase context length, use higher-quality training data, and increase training time in order to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey paper comprehensively analyses LLM architectures and their categorization, training strategies, training datasets, and performance evaluations, and discusses future research directions. Moreover, the paper discusses the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to regularly update this paper by incorporating new sections and featuring the latest LLM models.

Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields because of their effectiveness in learning over graphs. To scale GNN training to large-scale and ever-growing graphs, the most promising solution is distributed training, which distributes the training workload across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training are still only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating the various optimization techniques it employs. First, distributed GNN training is classified into several categories according to its workflows; their computational patterns and communication patterns, as well as the optimization techniques proposed by recent work, are then introduced. Second, the software frameworks and hardware platforms for distributed GNN training are introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.

Feature attribution methods are popular in interpretable machine learning. These methods compute an attribution for each input feature to represent its importance, but there is no consensus on the definition of "attribution", leading to many competing methods with little systematic evaluation, complicated in particular by the lack of ground-truth attributions. To address this, we propose a dataset modification procedure that induces such ground truth. Using this procedure, we evaluate three common methods: saliency maps, rationales, and attention. We identify several deficiencies and add new perspectives to the growing body of evidence questioning the correctness and reliability of these methods when applied to datasets in the wild. We further discuss possible avenues for remedy and recommend that new attribution methods be tested against ground truth before deployment. The code is available at \url{//github.com/YilunZhou/feature-attribution-evaluation}.
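
One generic way such a modification procedure could induce ground truth (purely illustrative; not necessarily the paper's exact construction) is to plant a marker token that alone determines the new label, so any faithful attribution must concentrate importance on that token.

```python
import random

# Illustrative sketch of inducing ground-truth attribution by dataset modification:
# the planted marker fully determines the (new) label, so its position is the
# ground-truth attribution by construction. Not necessarily the paper's procedure.

def plant_marker(texts, marker="watermark", seed=0):
    rng = random.Random(seed)
    modified, labels, ground_truth = [], [], []
    for text in texts:
        tokens = text.split()
        has_marker = rng.random() < 0.5
        if has_marker:
            tokens.insert(rng.randrange(len(tokens) + 1), marker)
        modified.append(" ".join(tokens))
        labels.append(int(has_marker))                       # label determined only by the marker
        ground_truth.append([t == marker for t in tokens])   # per-token ground-truth importance mask
    return modified, labels, ground_truth

texts = ["the movie was great", "plot felt slow and dull"]
x, y, gt = plant_marker(texts)
print(list(zip(x, y, gt)))
```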

Non-convex optimization is ubiquitous in modern machine learning. Researchers devise non-convex objective functions and optimize them using off-the-shelf optimizers such as stochastic gradient descent and its variants, which leverage local geometry and update iteratively. Even though minimizing non-convex functions is NP-hard in the worst case, optimization quality in practice is often not an issue: optimizers are largely believed to find approximate global minima. Researchers hypothesize a unified explanation for this intriguing phenomenon: most of the local minima of practically used objectives are approximately global minima. We rigorously formalize this hypothesis for concrete instances of machine learning problems.
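
As a toy illustration of this hypothesis (the objective and code below are illustrative, not taken from the paper), consider the non-convex rank-one factorization loss $f(x) = \|xx^\top - M\|_F^2$ with $M = vv^\top$: it has two symmetric minimizers $\pm v$, yet gradient descent from random starts consistently reaches the globally minimal loss of zero.

```python
import numpy as np

# f(x) = ||x x^T - M||_F^2 with M = v v^T is non-convex (both v and -v minimize it),
# yet gradient descent from random initializations reaches the global minimum value 0.
# Illustrative toy example, not an objective studied in the paper.

rng = np.random.default_rng(0)
v = rng.normal(size=5)
M = np.outer(v, v)

def loss(x):
    return np.linalg.norm(np.outer(x, x) - M) ** 2

def grad(x):
    return 4.0 * (np.outer(x, x) - M) @ x

final_losses = []
for _ in range(10):
    x = rng.normal(size=5)
    for _ in range(5000):
        x -= 0.01 * grad(x)          # plain gradient descent
    final_losses.append(loss(x))

print([round(l, 6) for l in final_losses])  # all runs end near 0: local minima found are global here
```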

Compared with the cheap addition operation, multiplication has much higher computational complexity. The widely used convolutions in deep neural networks are exactly cross-correlations that measure the similarity between input features and convolution filters, which involves massive multiplications between floating-point values. In this paper, we present adder networks (AdderNets) that trade these massive multiplications in deep neural networks, especially convolutional neural networks (CNNs), for much cheaper additions to reduce computation costs. In AdderNets, we take the $\ell_1$-norm distance between the filters and the input feature as the output response. The influence of this new similarity measure on the optimization of neural networks is thoroughly analyzed. To achieve better performance, we develop a special back-propagation approach for AdderNets by investigating the full-precision gradient. We then propose an adaptive learning rate strategy to enhance the training procedure of AdderNets according to the magnitude of each neuron's gradient. As a result, the proposed AdderNets achieve 74.9% Top-1 accuracy and 91.7% Top-5 accuracy using ResNet-50 on the ImageNet dataset without any multiplications in the convolution layers.
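
A minimal sketch of the forward computation described above, assuming a single-channel input with stride 1 and no padding; the paper's full-precision backward pass and adaptive learning rate strategy are not reproduced here.

```python
import numpy as np

# Adder-layer forward pass: the response is the negative l1 distance between each
# filter and each input patch, instead of a cross-correlation (sum of products).
# Single channel, stride 1, no padding; backward pass not shown.

def adder_forward(x, filters):
    """x: (H, W) input; filters: (K, k, k). Returns responses of shape (K, H-k+1, W-k+1)."""
    K, k, _ = filters.shape
    H, W = x.shape
    out = np.empty((K, H - k + 1, W - k + 1))
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            patch = x[i:i + k, j:j + k]
            # -sum |patch - filter| plays the role of the similarity score
            out[:, i, j] = -np.abs(patch[None] - filters).sum(axis=(1, 2))
    return out

x = np.random.default_rng(0).normal(size=(6, 6))
w = np.random.default_rng(1).normal(size=(4, 3, 3))
print(adder_forward(x, w).shape)  # (4, 4, 4)
```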
