
We study the computational complexity of computing Bayes-Nash equilibria in first-price auctions with discrete value distributions and discrete bidding space, under general subjective beliefs. It is known that such auctions do not always have pure equilibria. In this paper, we prove that the problem of deciding their existence is NP-complete, even for approximate equilibria. On the other hand, it can be shown that mixed equilibria are guaranteed to exist; however, their computational complexity has not been studied before. We establish the PPAD-completeness of computing a mixed equilibrium and we complement this with an efficient algorithm for finding symmetric approximate equilibria in the special case of iid priors. En route to these results, we develop a computational equivalence framework between continuous and discrete first-price auctions, which may be of independent interest and allows us to transfer existing positive and negative results from one setting to the other. Finally, we show that correlated equilibria of the auction can be computed in polynomial time.
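To make the discrete model concrete, the following minimal sketch computes a bidder's expected utility and a best response over a finite bid grid in a two-bidder first-price auction. The value, bid grid, opponent distribution, and tie-breaking rule are illustrative assumptions, not the paper's construction.

```python
def expected_utility(my_value, my_bid, opp_bid_dist, tie_break=0.5):
    """Expected utility of bidding `my_bid` with value `my_value` in a
    two-bidder discrete first-price auction, given the opponent's mixed
    bid distribution {bid: probability}. Ties are won with probability
    `tie_break`. A toy illustration of the discrete model only."""
    u = 0.0
    for b, p in opp_bid_dist.items():
        if my_bid > b:                      # outright win: pay own bid
            u += p * (my_value - my_bid)
        elif my_bid == b:                   # tie: win with prob. tie_break
            u += p * tie_break * (my_value - my_bid)
    return u

# Best response over a discrete bid grid (all numbers hypothetical):
opp = {0.0: 0.25, 0.25: 0.5, 0.5: 0.25}
bids = [0.0, 0.25, 0.5, 0.75]
best = max(bids, key=lambda b: expected_utility(1.0, b, opp))
```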

Related Content

Deep learning solutions are instrumental in cybersecurity, harnessing their ability to analyze vast datasets, identify complex patterns, and detect anomalies. However, malevolent actors can exploit these same capabilities to orchestrate sophisticated attacks, posing significant challenges to defenders and to traditional security measures. Adversarial attacks, particularly those targeting vulnerabilities in deep learning models, present a nuanced and substantial threat to cybersecurity. Our study examines adversarial learning threats such as Data Poisoning, Test Time Evasion, and Reverse Engineering as they affect Network Intrusion Detection Systems, exploring both the mechanics of these attacks and their countermeasures to deepen understanding of network security challenges under adversarial conditions. The intersection of adversarial attacks and defenses on network traffic data, coupled with advances in machine learning and deep learning techniques, remains a relatively underexplored domain. Our research lays the groundwork for strengthening defense mechanisms against the breaches of network security and privacy that adversarial attacks can cause. Through in-depth analysis, we identify domain-specific research gaps, such as the scarcity of real-life attack data and the limited evaluation of AI-based solutions on network traffic. Our focus on these challenges aims to stimulate future research toward the development of resilient network defense strategies.
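As a concrete illustration of a Test Time Evasion attack, the sketch below implements the classic one-step FGSM perturbation against a differentiable classifier. The model, inputs, and budget `epsilon` are placeholders; real NIDS evasion must additionally respect the feature-space constraints of valid network traffic, which this sketch ignores.

```python
import torch
import torch.nn as nn

def fgsm_evasion(model, x, y, epsilon=0.05):
    """One-step test-time evasion (FGSM): perturb input features in the
    direction that increases the classifier's loss. `model` is any
    differentiable intrusion-detection classifier; illustrative only."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each feature by epsilon in the gradient-sign direction.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```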

This study explores the role of pretesting when integrated with conversational AI tools, specifically ChatGPT, in enhancing learning outcomes. Drawing on existing research, which demonstrates the benefits of pretesting in memory activation and retention, this experiment extends these insights into the context of digital learning environments. A randomized true experimental design was used. Participants were divided into two groups: one engaged in pretesting before using ChatGPT for a problem-solving task involving chi-square analysis, while the control group accessed ChatGPT immediately. The results indicate that the pretest group significantly outperformed the no-pretest group in a subsequent test, which suggests that pretesting enhances the retention of complex material. This study contributes to the field by demonstrating that pretesting can augment the learning process in technology-assisted environments by preparing the memory and promoting active engagement with the material. The findings also suggest that learning strategies like pretesting retain their relevance in the context of rapidly evolving AI technologies. Further research and practical implications are presented.
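For readers unfamiliar with the kind of task given to participants, a chi-square analysis of a contingency table takes only a few lines; the counts below are hypothetical and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table of observed counts: rows are
# pretest / no-pretest groups, columns are pass / fail on the final test.
observed = np.array([[34, 16],
                     [22, 28]])
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```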

Population protocols are a model of distributed computation in which an arbitrary number of indistinguishable finite-state agents interact in pairs to decide some property of their initial configuration. We investigate the behaviour of population protocols under adversarial faults that cause agents to silently crash and no longer interact with other agents. As a starting point, we consider the property ``the number of agents exceeds a given threshold $t$'', represented by the predicate $x \geq t$, and show that the standard protocol for $x \geq t$ is very fragile: a single crash in a computation with $x:=2t-1$ agents can already cause the protocol to answer incorrectly that $x \geq t$ does not hold. However, a slightly less known protocol is robust: for any number $t' \geq t$ of agents, at least $t' - t+1$ crashes must occur for the protocol to answer that the property does not hold. We formally define robustness for arbitrary population protocols, and investigate whether every predicate computable by population protocols has a robust protocol. Angluin et al. proved in 2007 that population protocols decide exactly the Presburger predicates, which can be represented as Boolean combinations of threshold predicates of the form $\sum_{i=1}^n a_i \cdot x_i \geq t$ for $a_1, \ldots, a_n, t \in \mathbb{Z}$ and modulo predicates of the form $\sum_{i=1}^n a_i \cdot x_i \bmod m \geq t$ for $a_1, \ldots, a_n, m, t \in \mathbb{N}$. We design robust protocols for all threshold and modulo predicates. We also show that, unfortunately, the techniques in the literature that construct a protocol for a Boolean combination of predicates given protocols for the conjuncts do not preserve robustness. So the question remains open.
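A hedged sketch of how one folklore threshold protocol can be simulated: each agent starts with value 1, meetings merge values with a cap at $t$, and reaching the cap releases an absorbing "yes" signal. This illustrative variant is an assumption and not necessarily either of the exact protocols analyzed in the paper.

```python
import random

def simulate_threshold(n_agents, t, steps=100_000, seed=0):
    """Simulate a folklore population protocol for the predicate x >= t,
    where x is the number of agents. Returns True iff the agents reach
    consensus on "yes" (state t) within the given number of meetings."""
    rng = random.Random(seed)
    state = [1] * n_agents           # every agent starts with value 1
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)   # a random meeting
        a, b = state[i], state[j]
        if a == t or b == t:         # absorbing "yes" signal spreads
            state[i] = state[j] = t
        else:                        # merge values, capped at t
            state[i], state[j] = min(a + b, t), max(0, a + b - t)
    return all(s == t for s in state)

print(simulate_threshold(5, 3))      # 5 agents >= 3: expect True
print(simulate_threshold(2, 3))      # 2 agents  < 3: expect False
```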

With the increasing availability of high-dimensional data, analysts often rely on exploratory data analysis to understand complex data sets. A key approach to exploring such data is dimensionality reduction, which embeds high-dimensional data in two dimensions to enable visual exploration. However, popular embedding techniques, such as t-SNE and UMAP, typically assume that data points are independent. When this assumption is violated, as in time-series data, the resulting visualizations may fail to reveal important temporal patterns and trends. To address this, we propose a formal extension to existing dimensionality reduction methods that incorporates two temporal loss terms that explicitly highlight temporal progression in the embedded visualizations. Through a series of experiments on both synthetic and real-world datasets, we demonstrate that our approach effectively uncovers temporal patterns and improves the interpretability of the visualizations. Furthermore, the method improves temporal coherence while preserving the fidelity of the embeddings, providing a robust tool for dynamic data analysis.
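One natural form such a temporal loss term could take is a smoothness penalty on the embeddings of consecutive time points; the sketch below is an illustrative assumption, not the paper's exact formulation, and would be added to the base embedding objective (e.g., of t-SNE or UMAP).

```python
import numpy as np

def temporal_smoothness_loss(Y, lam=1.0):
    """A plausible temporal loss term: penalize large jumps between the
    2-D embeddings of consecutive time points. Y has shape (T, 2) and is
    ordered by time; lam weights the term against the base objective."""
    diffs = Y[1:] - Y[:-1]           # displacement between steps
    return lam * np.sum(diffs ** 2)
```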

The study of biomechanics during locomotion provides valuable insights into the effects of varying conditions on specific movement patterns. This research focuses on examining the influence of different shoe parameters on walking biomechanics, aiming to understand their impact on gait patterns. To achieve this, various methodologies are explored to estimate human body biomechanics, including computer vision techniques and wearable devices equipped with advanced sensors. Given privacy considerations and the need for robust, accurate measurements, this study employs wearable devices with Inertial Measurement Unit (IMU) sensors. These devices offer a non-invasive, precise, and high-resolution approach to capturing biomechanical data during locomotion. Raw sensor data collected from wearable devices is processed using an Extended Kalman Filter to reduce noise and extract meaningful information. This includes calculating joint angles throughout the gait cycle, enabling a detailed analysis of movement dynamics. The analysis identifies correlations between shoe parameters and key gait characteristics, such as stability, mobility, step time, and propulsion forces. The findings provide deeper insights into how footwear design influences walking efficiency and biomechanics. This study paves the way for advancements in footwear technology and contributes to the development of personalized solutions for enhancing gait performance and mobility.
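As a simplified illustration of the filtering step, the toy Kalman update below fuses a gyroscope rate with an accelerometer-derived angle. The actual pipeline uses a full Extended Kalman Filter over a richer state, and all parameter values here are placeholders.

```python
def kalman_angle_step(x, P, gyro_rate, accel_angle, dt, q=1e-4, r=1e-2):
    """One predict/update step of a minimal scalar Kalman filter for a
    joint angle: integrate the gyro rate, then correct with the
    accelerometer-derived angle. A toy stand-in for the study's EKF.
    x: angle estimate (rad); P: its variance; q, r: noise variances."""
    # Predict: propagate the angle with the gyroscope rate.
    x_pred = x + gyro_rate * dt
    P_pred = P + q
    # Update: blend in the accelerometer measurement via the Kalman gain.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (accel_angle - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new
```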

Federated averaging (FedAvg) is the most fundamental algorithm in federated learning (FL). Previous theoretical results assert that the convergence and generalization of FedAvg degrade under client heterogeneity. However, recent empirical results show that FedAvg can perform well in many real-world heterogeneous tasks. These results reveal an inconsistency between FL theory and practice that is not fully explained. In this paper, based on rigorous convergence analysis, we show that commonly used heterogeneity measures contribute to this inconsistency. Furthermore, we introduce a new measure, \textit{client consensus dynamics}, and prove that \textit{FedAvg can effectively handle client heterogeneity when an appropriate aggregation strategy is used}. Building on this theoretical insight, we present a simple and effective FedAvg variant termed FedAWARE. Extensive experiments on three datasets and two modern neural network architectures demonstrate that FedAWARE ensures faster convergence and better generalization in heterogeneous client settings. Moreover, our results show that FedAWARE can significantly enhance the generalization performance of advanced FL algorithms when used as a plug-in module.
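For reference, the standard FedAvg server step that FedAWARE builds on is a dataset-size-weighted average of client models; FedAWARE's adaptive aggregation would replace these fixed weights and is not shown here.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Standard FedAvg aggregation: average the clients' parameter
    vectors, weighted by the size of each client's local dataset.
    client_params: list of flat numpy parameter vectors."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()                     # normalize to sum to 1
    return sum(w * p for w, p in zip(weights, client_params))
```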

We study Out-of-Distribution (OOD) generalization in machine learning and propose a general framework that establishes information-theoretic generalization bounds. Our framework interpolates freely between Integral Probability Metric (IPM) and $f$-divergence, which naturally recovers some known results (including Wasserstein- and KL-based bounds), as well as yields new generalization bounds. Additionally, we show that our framework admits an optimal transport interpretation. When evaluated on two concrete examples, the proposed bounds either strictly improve upon existing bounds in some cases or match the best existing OOD generalization bounds. Moreover, by focusing on $f$-divergence and combining it with the Conditional Mutual Information (CMI) methods, we derive a family of CMI-based generalization bounds, which include the state-of-the-art ICIMI bound as a special instance. Finally, leveraging these findings, we analyze the generalization of the Stochastic Gradient Langevin Dynamics (SGLD) algorithm, showing that our derived generalization bounds outperform existing information-theoretic generalization bounds in certain scenarios.
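As one fixed point of reference, a classical mutual-information bound of the KL type that such frameworks recover (Xu and Raginsky, 2017) reads as follows; it is an in-distribution instance, shown here only to fix notation for what a "KL-bound" looks like.

```latex
% For a loss function that is \sigma-sub-Gaussian under the data
% distribution, the expected generalization gap satisfies
\[
  \bigl| \mathbb{E}[\mathrm{gen}(W, S)] \bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)},
\]
% where $W$ is the learned hypothesis, $S$ is the $n$-sample training
% set, and $I(W;S)$ is their mutual information.
```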

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
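The workhorse underlying most of the surveyed methods is uniform affine quantization; the minimal sketch below shows the core scale/zero-point scheme, while real implementations add per-channel scales, calibration, and more careful rounding.

```python
import numpy as np

def uniform_quantize(x, num_bits=4):
    """Uniform affine (asymmetric) quantization: map real values to
    integers in [0, 2^b - 1] via a scale and zero-point, then map back
    (dequantize) to obtain the low-precision approximation of x."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)  # assumes x is not constant
    zero_point = np.round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale              # dequantized approximation
```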

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.

We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.
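Shared span representations of this kind are often built from the contextual embeddings of a span's endpoints; the sketch below shows one common endpoint-concatenation choice as an illustration, not SciIE's exact architecture.

```python
import torch

def span_representation(token_embs, start, end):
    """One common shared span representation: concatenate the contextual
    embeddings of the span's first and last tokens, so all tasks
    (entities, relations, coreference) score the same span vectors.
    token_embs: tensor of shape (seq_len, dim)."""
    return torch.cat([token_embs[start], token_embs[end]], dim=-1)
```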
