
Recent work has shown that methods like SAM, which either explicitly or implicitly penalize second-order information, can improve generalization in deep learning. Seemingly similar methods like weight noise and gradient penalties often fail to provide such benefits. We show that these differences can be explained by the structure of the Hessian of the loss. First, we show that a common decomposition of the Hessian can be quantitatively interpreted as separating feature exploitation from feature exploration. The feature-exploration component, which can be described by the Nonlinear Modeling Error matrix (NME), is commonly neglected in the literature since it vanishes at interpolation. Our work shows that the NME is in fact important, as it can explain why gradient penalties are sensitive to the choice of activation function. Using this insight, we design interventions to improve performance. We also provide evidence that challenges the long-held equivalence of weight noise and gradient penalties. This equivalence relies on the assumption that the NME can be ignored, which we find does not hold for modern networks, since they involve significant feature learning. We find that regularizing feature exploitation but not feature exploration yields performance similar to gradient penalties.
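For context, the Hessian decomposition referred to above is the standard Gauss--Newton splitting; in notation chosen here for illustration (not taken verbatim from the paper), for a loss $L(\theta) = \sum_i \ell(f(x_i;\theta))$ with network outputs $z_i = f(x_i;\theta)$ and Jacobians $J_i = \partial f(x_i;\theta)/\partial \theta$,
\[
\nabla^2_\theta L \;=\; \underbrace{\sum_i J_i^{\top}\, \nabla^2_{z}\ell(z_i)\, J_i}_{\text{Gauss--Newton term (feature exploitation)}} \;+\; \underbrace{\sum_i \sum_k \frac{\partial \ell(z_i)}{\partial z_{k}}\, \nabla^2_\theta f_k(x_i;\theta)}_{\text{NME (feature exploration)}}.
\]
The second term is weighted by the residuals $\partial \ell/\partial z$, which is why it vanishes at interpolation, and it involves second derivatives of the network, which is where the choice of activation function enters.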

Related content

The future development of an AI scientist, a tool capable of integrating a variety of experimental data and generating testable hypotheses, holds immense potential. So far, bespoke machine learning models have been created to specialize in singular scientific tasks but otherwise lack the flexibility of a general-purpose model. Here, we show that a general-purpose large language model, chatGPT 3.5-turbo, can be fine-tuned to learn the structural biophysics of DNA. We find that both fine-tuning models to return chain-of-thought responses and chaining together models fine-tuned for subtasks enhance the ability to analyze and design DNA sequences and their structures.
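As background for the kind of structural rule such fine-tuned models must internalize (the snippet below is standard Watson--Crick pairing written in Python for illustration, not the paper's code or prompts), two DNA strands hybridize when one is the reverse complement of the other:

# Watson-Crick base pairing: A-T and G-C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def can_hybridize(seq_a: str, seq_b: str) -> bool:
    # Two single strands form a duplex when one is the reverse complement of the other.
    return reverse_complement(seq_a) == seq_b

print(can_hybridize("ATGCC", "GGCAT"))  # True: GGCAT is the reverse complement of ATGCC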

This paper investigates goal-oriented communication for remote estimation of multiple Markov sources in resource-constrained networks. An agent decides the update times of the sources and transmits the packets to a remote destination over an unreliable channel with delay. The destination is tasked with source reconstruction for actuation. We utilize the metric \textit{cost of actuation error} (CAE) to capture state-dependent actuation costs. We aim for a sampling policy that minimizes the long-term average CAE subject to an average resource constraint. We formulate this problem as an average-cost constrained Markov Decision Process (CMDP) and relax it into an unconstrained problem using \textit{Lyapunov drift} techniques. We then propose a low-complexity \textit{drift-plus-penalty} (DPP) policy for systems with known source/channel statistics and a Lyapunov optimization-based deep reinforcement learning (LO-DRL) policy for unknown environments. Our policies significantly reduce the number of uninformative transmissions by exploiting the timing of important information.
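To sketch the drift-plus-penalty idea in its generic Lyapunov-optimization form (the symbols below are illustrative and not the paper's exact formulation): maintain a virtual queue $Z(t)$ that tracks cumulative over-use of the resource budget, and at each slot choose the action $a(t)$ greedily as
\[
Z(t+1) = \max\{Z(t) + c(a(t)) - c_{\max},\, 0\}, \qquad a(t) \in \arg\min_{a}\Big[ V \cdot \mathrm{CAE}\big(s(t),a\big) + Z(t)\, c(a) \Big],
\]
where $c(a)$ is the resource consumed by action $a$, $c_{\max}$ is the average resource budget, and $V$ trades off CAE minimization against constraint satisfaction.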

It has been proposed that, when processing a stream of events, humans divide their experience into segments governed by inferred latent causes (LCs) to support context-dependent learning. However, when shared structure is present across contexts, it is still unclear how the "splitting" of LCs and the learning of shared structure can be achieved simultaneously. Here, we present the Latent Cause Network (LCNet), a neural network model of LC inference. Through learning, it naturally stores structure that is shared across tasks in the network weights. Additionally, it represents context-specific structure using a context module, controlled by a Bayesian nonparametric inference algorithm, which assigns a unique context vector to each inferred LC. Across three simulations, we found that LCNet could 1) extract shared structure across LCs in a function-learning task while avoiding catastrophic interference, 2) capture human data on curriculum effects in schema learning, and 3) infer the underlying event structure when processing naturalistic videos of daily events. Overall, these results demonstrate a computationally feasible approach to reconciling shared structure and context-specific structure in a model of LCs that scales from laboratory settings to naturalistic settings.
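A rough sketch of the nonparametric context-assignment step described above (a Chinese-restaurant-process-style prior combined with per-cause likelihoods; the interface below is illustrative and not LCNet's actual implementation):

import numpy as np

def assign_latent_cause(counts, alpha, log_likelihoods):
    """Choose a latent cause (LC) for the current observation.

    counts: number of past observations assigned to each existing LC.
    alpha: concentration parameter controlling how readily a new LC is created.
    log_likelihoods: log p(observation | LC k) for each existing LC, plus one
        final entry for a brand-new LC (e.g. under its prior predictive).
    """
    n = sum(counts)
    log_prior = np.log(np.array(list(counts) + [alpha]) / (n + alpha))
    log_post = log_prior + np.asarray(log_likelihoods)
    log_post -= log_post.max()            # numerical stability
    probs = np.exp(log_post)
    probs /= probs.sum()
    return int(np.argmax(probs))          # index == len(counts) means "open a new LC"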

Hex-dominant mesh generation has received significant attention in recent research due to its superior robustness compared to pure hex-mesh generation techniques. In this work, we introduce the first structure for analyzing hex-dominant meshes. This structure builds on the base complex of pure hex-meshes but incorporates the non-hex elements for a more comprehensive and complete representation. We provide its definition and describe its construction steps. Based on this structure, we present an extraction and categorization of sheets using advanced graph matching techniques to handle the non-hex elements. This enables us to develop an enhanced visual analysis of the structure of any hex-dominant mesh. We apply this structure-based visual analysis to compare hex-dominant meshes generated by different methods and study their advantages and disadvantages. This complements the standard quality metric based on the percentage of non-hex elements in hex-dominant meshes. Moreover, we propose a strategy to extract a cleaned (optimized) valence-based singularity graph wireframe to analyze the structure of both the mesh and its sheets. Our results demonstrate that the proposed hybrid base complex provides a coarse representation of mesh elements, and the proposed valence singularity graph wireframe provides a better internal visualization of hex-dominant meshes.
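For reference, the standard quality metric mentioned above, the non-hex element percentage, is simply the fraction of cells that are not hexahedra; a minimal sketch, assuming each cell is given as a list of vertex indices (8 for a hex, fewer for tets, pyramids, and prisms):

def non_hex_percentage(cells):
    # cells: iterable of cells, each a list of vertex indices.
    # Any cell without exactly 8 vertices is counted as a non-hex element.
    cells = list(cells)
    non_hex = sum(1 for cell in cells if len(cell) != 8)
    return 100.0 * non_hex / len(cells)

# Example: three hexes and one tet -> 25.0
print(non_hex_percentage([list(range(8)), list(range(1, 9)), list(range(2, 10)), [0, 1, 2, 3]]))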

We study Langevin-type algorithms for sampling from Gibbs distributions whose potentials are dissipative and whose weak gradients have finite moduli of continuity, not necessarily convergent to zero. Our main result is a non-asymptotic upper bound on the 2-Wasserstein distance between a Gibbs distribution and the law of general Langevin-type algorithms, based on the Liptser--Shiryaev theory and Poincar\'{e} inequalities. We apply this bound to show that the Langevin Monte Carlo algorithm can approximate Gibbs distributions with arbitrary accuracy if the potentials are dissipative and their gradients are uniformly continuous. We also propose Langevin-type algorithms with spherical smoothing for distributions whose potentials are not convex or continuously differentiable.
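For concreteness, the basic Langevin Monte Carlo iteration analyzed in this setting is the unadjusted Langevin algorithm; a minimal sketch (the step size, potential, and horizon below are illustrative):

import numpy as np

def langevin_monte_carlo(grad_U, x0, step, n_steps, rng=None):
    # Unadjusted Langevin algorithm targeting the Gibbs density proportional to exp(-U(x)):
    #   x <- x - step * grad_U(x) + sqrt(2 * step) * N(0, I)
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Example: standard Gaussian target, U(x) = ||x||^2 / 2, so grad_U(x) = x.
chain = langevin_monte_carlo(lambda x: x, x0=np.zeros(2), step=0.1, n_steps=1000)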

Quantum entanglement is a fundamental property of quantum mechanics and plays a crucial role in quantum computation and information. We study entanglement through the lens of computational complexity by considering quantum generalizations of the class NP with multiple unentangled quantum proofs, the so-called QMA(2) and its variants. The complexity of QMA(2) is a longstanding open problem, and only the trivial bounds QMA $\subseteq$ QMA(2) $\subseteq$ NEXP are known. In this work, we study the power of unentangled quantum proofs with non-negative amplitudes, a class which we denote $\text{QMA}^+(2)$. In this setting, we are able to design proof verification protocols that use logarithmic-size quantum proofs and have a constant probability gap in distinguishing yes from no instances. In particular, we design global protocols for small set expansion, unique games, and PCP verification. As a consequence, we obtain NP $\subseteq \text{QMA}^+_{\log}(2)$ with a constant gap. By virtue of the new constant gap, we are able to ``scale up'' this result to $\text{QMA}^+(2)$, obtaining the full characterization $\text{QMA}^+(2)$=NEXP by establishing stronger explicitness properties of the PCP for NEXP. One key novelty of these protocols is the manipulation of quantum proofs in a global and coherent way, yielding constant gaps. Previous protocols (only available for general amplitudes) are either local, having vanishingly small gaps, or treat the quantum proofs as classical probability distributions, requiring polynomially many proofs and thereby not implying non-trivial bounds on QMA(2). Finally, we show that QMA(2) is equal to $\text{QMA}^+(2)$ provided the gap of the latter is a sufficiently large constant. In particular, if $\text{QMA}^+(2)$ admits gap amplification, then QMA(2)=NEXP.

A simple, recently observed generalization of the classical Singleton bound to list decoding asserts that rate $R$ codes are not list-decodable with list size $L$ beyond an error fraction $\frac{L}{L+1} (1-R)$ (the Singleton bound being the case $L=1$, i.e., unique decoding). We prove that in order to approach this bound for any fixed $L >1$, one needs exponential alphabets. Specifically, for every $L>1$ and $R\in(0,1)$, if a rate $R$ code can be list-of-$L$ decoded up to error fraction $\frac{L}{L+1} (1-R -\varepsilon)$, then its alphabet must have size at least $\exp(\Omega_{L,R}(1/\varepsilon))$. This is in sharp contrast to the situation for unique decoding, where certain families of rate $R$ algebraic-geometry (AG) codes over an alphabet of size $O(1/\varepsilon^2)$ are unique-decodable up to error fraction $(1-R-\varepsilon)/2$. Our bounds hold even for subconstant $\varepsilon\ge 1/n$, implying that any code exactly achieving the $L$-th generalized Singleton bound requires alphabet size $2^{\Omega_{L,R}(n)}$. Previously this was known only for $L=2$, under the additional assumptions that the code is both linear and MDS. Our lower bound is tight up to constant factors in the exponent: with high probability, random codes (or, as shown recently, even random linear codes) over $\exp(O_L(1/\varepsilon))$-sized alphabets can be list-of-$L$ decoded up to error fraction $\frac{L}{L+1} (1-R -\varepsilon)$.
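As a quick numerical illustration of the thresholds above: for rate $R = 1/2$, unique decoding ($L = 1$) tops out at error fraction $\frac{1}{2}(1-R) = 1/4$, while list sizes $L = 2$ and $L = 3$ allow
\[
\frac{2}{3}(1-R) = \frac{1}{3} \qquad \text{and} \qquad \frac{3}{4}(1-R) = \frac{3}{8},
\]
approaching $1-R = 1/2$ as $L$ grows; the result above says that getting within $\varepsilon$ of these thresholds for any fixed $L > 1$ forces the alphabet size to be exponential in $1/\varepsilon$.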

Fully decentralized learning is gaining momentum for training AI models at the Internet's edge, addressing infrastructure challenges and privacy concerns. In a decentralized machine learning system, data is distributed across multiple nodes, with each node training a local model based on its respective dataset. The local models are then shared and combined to form a global model capable of making accurate predictions on new data. Our exploration focuses on how different types of network structures influence the spreading of knowledge - the process by which nodes incorporate insights gained from learning patterns in data available on other nodes across the network. Specifically, this study investigates the intricate interplay between network structure and learning performance using three network topologies and six data distribution methods. These methods consider different vertex properties, including degree centrality, betweenness centrality, and clustering coefficient, along with whether nodes exhibit high or low values of these metrics. Our findings underscore the significance of global centrality metrics (degree, betweenness) in correlating with learning performance, while local clustering proves less predictive. We highlight the challenges in transferring knowledge from peripheral to central nodes, attributed to a dilution effect during model aggregation. Additionally, we observe that central nodes exert a pull effect, facilitating the spread of knowledge. In examining degree distribution, hubs in Barabási-Albert networks positively impact learning for central nodes but exacerbate dilution when knowledge originates from peripheral nodes. Finally, we demonstrate the formidable challenge of knowledge circulation outside of segregated communities.
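A minimal sketch of how the vertex properties used in the data distribution methods could be computed (networkx is assumed; the selection rule at the end is illustrative, not the paper's exact procedure):

import networkx as nx

# One of the studied topologies: a Barabasi-Albert (scale-free) network.
G = nx.barabasi_albert_graph(n=100, m=2, seed=0)

degree = nx.degree_centrality(G)            # global metric: degree centrality
betweenness = nx.betweenness_centrality(G)  # global metric: betweenness centrality
clustering = nx.clustering(G)               # local metric: clustering coefficient

# Example data-distribution rule: place a given class of data on the k most central
# nodes, or on the k most peripheral ones to study peripheral-to-central knowledge flow.
k = 10
most_central = sorted(G.nodes, key=lambda v: degree[v], reverse=True)[:k]
most_peripheral = sorted(G.nodes, key=lambda v: degree[v])[:k]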

Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in Natural Language Processing (NLP). However, though PLMs with huge numbers of parameters can effectively capture rich knowledge from massive training text and benefit downstream tasks at the fine-tuning stage, they still have limitations such as poor reasoning ability due to the lack of external knowledge. Research has been dedicated to incorporating knowledge into PLMs to tackle these issues. In this paper, we present a comprehensive review of Knowledge-Enhanced Pre-trained Language Models (KE-PLMs) to provide a clear insight into this thriving field. We introduce appropriate taxonomies for Natural Language Understanding (NLU) and Natural Language Generation (NLG), respectively, to highlight these two main tasks of NLP. For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG), and rule knowledge. The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods. Finally, we point out some promising future directions for KE-PLMs.

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.
