
We investigate the role of constraints in the computational complexity of min-max optimization. The work of Daskalakis, Skoulakis, and Zampetakis [2021] was the first to study min-max optimization through the lens of computational complexity, showing that min-max problems with nonconvex-nonconcave objectives are PPAD-hard. However, their proof hinges on the presence of joint constraints between the maximizing and minimizing players. The main goal of this paper is to understand the role of these constraints in min-max optimization. The first contribution of this paper is a fundamentally new proof of their main result, which improves it in multiple directions: it holds for degree 2 polynomials, it is essentially tight in the parameters, and it is much simpler than previous approaches, clearly highlighting the role of constraints in the hardness of the problem. Second, we show that with general constraints (i.e., the min player and max player have different constraints), even convex-concave min-max optimization becomes PPAD-hard. Along the way, we also provide PPAD-membership of a general problem related to quasi-variational inequalities, which has applications beyond our problem.
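
Schematically (in our notation, not necessarily the paper's exact formalism), the contrast is between independent and joint constraints,

$$
\min_{x \in X}\, \max_{y \in Y}\, f(x, y)
\qquad \text{vs.} \qquad
\min_{x}\, \max_{y \,:\, (x, y) \in C}\, f(x, y),
$$

and, most generally, constraints under which each player's feasible set depends on the other's action, $x \in X(y)$ and $y \in Y(x)$, which is the regime tied to quasi-variational inequalities.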

Related Content

We investigate the numerical solution of multiscale transport equations using Physics-Informed Neural Networks (PINNs) with ReLU activation functions. To this end, we study the analogy between PINNs and Least-Squares Finite Elements (LSFE), which lies in their shared approach of reformulating the PDE solution as the minimization of a quadratic functional. We prove that in the diffusive regime the correct limit is not reached, in agreement with known results for first-order LSFE. A diffusive scaling is introduced that can be applied to overcome this, again in full agreement with theoretical results for LSFE. We provide numerical results in slab geometry that support our theoretical findings.
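
The shared structure can be written schematically as the quadratic residual functional

$$
J(u) \;=\; \big\| \mathcal{L}u - f \big\|_{L^2(\Omega)}^2 \;+\; \big\| u - g \big\|_{L^2(\Gamma_-)}^2,
$$

assuming a linear transport operator $\mathcal{L}$, source $f$, and inflow data $g$ on the inflow boundary $\Gamma_-$ (this mirrors the standard first-order least-squares setup rather than the paper's exact functional). PINNs minimize a sampled (collocation) version of $J$ over a neural-network ansatz, while LSFE minimizes $J$ over a finite element space.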

With the advent of large language models (LLMs) in artificial intelligence (AI), the field of software engineering (SE) has also witnessed a paradigm shift. These models, by leveraging the power of deep learning and massive amounts of data, have demonstrated an unprecedented capacity to understand, generate, and operate on programming languages. They can assist developers across a broad spectrum of software development activities, encompassing software design, automated programming, and maintenance, potentially reducing enormous human effort. Integrating LLMs within the SE landscape (LLM4SE) has become a burgeoning trend, necessitating exploration of this emergent landscape's challenges and opportunities. This paper aims to revisit the software development life cycle (SDLC) under LLMs and to highlight the challenges and opportunities of the new paradigm. The paper first summarizes the overall process of LLM4SE and then elaborates on the current challenges based on a thorough discussion. The discussion was held among more than 20 participants from academia and industry, specializing in fields such as software engineering and artificial intelligence. Specifically, we distill 26 key challenges across seven aspects: software requirement & design, coding assistance, testing code generation, code review, code maintenance, software vulnerability management, and data, training, and evaluation. We hope the identified challenges will benefit future research in the LLM4SE field.

In this paper, we investigate the cumulative distribution functions (CDFs) of the maximum and minimum of multivariate Poisson distributions under three dependence structures, namely the common-shock, comonotonic-shock, and thinning-dependence models. In particular, we formulate the definition of a thinning-dependent multivariate Poisson distribution based on Wang and Yuen (2005). We derive explicit CDFs of the maximum and minimum of the multivariate Poisson random vectors and conduct asymptotic analyses of them. Our results reveal substantial differences among the three dependence structures for the multivariate Poisson distribution and may suggest an alternative method for studying the dependence of other multivariate distributions. We further provide numerical examples demonstrating the obtained results.
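
To make the common-shock case concrete, the following sketch evaluates $P(\max_i X_i \le m)$ for $X_i = Y_i + Z$, with independent $Y_i \sim \mathrm{Poi}(\lambda_i)$ and a shared shock $Z \sim \mathrm{Poi}(\theta)$, by conditioning on $Z$. This uses the standard common-shock construction and our own notation, not necessarily the paper's.

```python
# Sketch: CDF of the maximum of a common-shock multivariate Poisson
# vector X_i = Y_i + Z, with Y_i ~ Poi(lam_i) independent and
# Z ~ Poi(theta) a shared shock. Parameter names are illustrative.
from scipy.stats import poisson

def cdf_max_common_shock(m, lams, theta):
    """P(max_i X_i <= m), obtained by conditioning on the shock Z."""
    total = 0.0
    for k in range(m + 1):  # Z > m already forces the maximum above m
        pz = poisson.pmf(k, theta)
        prod = 1.0
        for lam in lams:
            prod *= poisson.cdf(m - k, lam)  # P(Y_i <= m - k)
        total += pz * prod
    return total

print(cdf_max_common_shock(5, [1.0, 2.0], 0.5))
```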

This study explores the role of pretesting when integrated with conversational AI tools, specifically ChatGPT, in enhancing learning outcomes. Drawing on existing research demonstrating the benefits of pretesting for memory activation and retention, this experiment extends these insights to digital learning environments. A randomized true-experimental design was employed. Participants were divided into two groups: one engaged in pretesting before using ChatGPT for a problem-solving task involving chi-square analysis, while the control group accessed ChatGPT immediately. The results indicate that the pretest group significantly outperformed the no-pretest group on a subsequent test, suggesting that pretesting enhances the retention of complex material. This study contributes to the field by demonstrating that pretesting can augment the learning process in technology-assisted environments by preparing the memory and promoting active engagement with the material. The findings also suggest that learning strategies like pretesting retain their relevance in the context of rapidly evolving AI technologies. Further research directions and practical implications are presented.
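
For context, a computation of the following kind is representative of the chi-square problem-solving task (the contingency table below is invented for illustration and is not from the study's materials):

```python
# Chi-square test of independence on a hypothetical 2x2 table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 10],
                  [22, 18]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")
```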

Random walks on expanders play a crucial role in Markov Chain Monte Carlo algorithms, derandomization, graph theory, and distributed computing. A desirable property is that they are rapidly mixing, which is equivalent to having a spectral gap $\gamma$ (asymptotically) bounded away from $0$. Our work has two main strands. First, we establish a dichotomy for the robustness of mixing times on edge-weighted $d$-regular graphs (i.e., reversible Markov chains) subject to a Lipschitz condition, which bounds the ratio of adjacent weights by $\beta \geq 1$. If $\beta$ is sufficiently small, then $\gamma \asymp 1$ and the mixing time is logarithmic in $n$. On the other hand, if $\beta \geq 2d$, there is an edge-weighting such that $\gamma$ is polynomially small in $n$, i.e., $\gamma = 1/\mathrm{poly}(n)$. Second, we apply our robustness result to a time-dependent version of the so-called $\varepsilon$-biased random walk, as introduced by Azar et al. [Combinatorica 1996]. We show that, for any constant $\varepsilon > 0$, a bias strategy can be chosen adaptively so that the $\varepsilon$-biased random walk covers any bounded-degree regular expander in $\Theta(n)$ expected time, improving the previous best bound of $O(n \log \log n)$. We also prove the first non-trivial lower bound on the cover time of the $\varepsilon$-biased random walk, showing that, on bounded-degree regular expanders, it is $\omega(n)$ whenever $\varepsilon = o(1)$. We establish this by controlling how much the probability of arbitrary events can be "boosted" by a time-dependent bias strategy.
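
For intuition, here is a minimal simulation sketch of the $\varepsilon$-biased walk in the model of Azar et al.: at each step, with probability $\varepsilon$ the controller picks the next vertex, and otherwise a uniformly random neighbor is taken. The greedy "prefer an unvisited neighbor" strategy and all parameter values are illustrative choices, not the adaptive strategy constructed in the paper.

```python
# Simulate the cover time of an eps-biased random walk on a random
# d-regular graph (an expander with high probability).
import random
import networkx as nx

def biased_cover_time(n=200, d=4, eps=0.2, seed=0):
    rng = random.Random(seed)
    G = nx.random_regular_graph(d, n, seed=seed)
    visited, v, steps = {0}, 0, 0
    while len(visited) < n:
        nbrs = list(G[v])
        if rng.random() < eps:
            # Controller's move: greedily prefer an unvisited neighbor.
            unvisited = [u for u in nbrs if u not in visited]
            v = rng.choice(unvisited or nbrs)
        else:
            v = rng.choice(nbrs)  # ordinary random-walk step
        visited.add(v)
        steps += 1
    return steps

print(biased_cover_time())
```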

We propose a practical tool for evaluating and comparing the accuracy of finite difference methods (FDMs) for the Helmholtz equation. The tool, based on Fourier analysis, makes it easy to derive wavenumber-explicit orders of convergence, and it can also be used in rigorous proofs. It fills the gap between dispersion analysis and the actual error in the presence of a source term.
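
To illustrate the kind of computation such Fourier analysis systematizes, the sketch below performs the textbook dispersion analysis of the standard 3-point scheme for the 1D model problem $u'' + k^2 u = 0$: inserting the discrete plane wave $u_j = e^{i k_h j h}$ yields $\cos(k_h h) = 1 - k^2 h^2/2$. This is a hand-worked special case in the spirit of the tool, not the tool itself.

```python
# Compare the discrete wavenumber of the 3-point scheme for
# u'' + k^2 u = 0 with the exact wavenumber k (1D dispersion analysis).
import numpy as np

def discrete_wavenumber(k, h):
    # From (u_{j+1} - 2 u_j + u_{j-1})/h^2 + k^2 u_j = 0 with
    # u_j = exp(i * kh * j * h):  cos(kh * h) = 1 - (k h)^2 / 2.
    return np.arccos(1.0 - (k * h) ** 2 / 2.0) / h

k = 10.0
for h in [0.1, 0.05, 0.025]:
    kh = discrete_wavenumber(k, h)
    print(f"h = {h}: relative phase error = {abs(kh - k) / k:.2e}")
```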

This survey presents an in-depth exploration of knowledge distillation (KD) techniques within the realm of Large Language Models (LLMs), spotlighting the pivotal role of KD in transferring sophisticated capabilities from proprietary giants such as GPT-4 to accessible, open-source models like LLaMA and Mistral. Amidst the evolving AI landscape, this work elucidates the critical disparities between proprietary and open-source LLMs, demonstrating how KD serves as an essential conduit for imbuing the latter with the former's advanced functionalities and nuanced understandings. Our survey is meticulously structured around three foundational pillars: algorithm, skill, and verticalization -- providing a comprehensive examination of KD mechanisms, the enhancement of specific cognitive abilities, and their practical implications across diverse fields. Crucially, the survey navigates the intricate interplay between data augmentation (DA) and KD, illustrating how DA emerges as a powerful paradigm within the KD framework to bolster LLMs' performance. By leveraging DA to generate context-rich, skill-specific training data, KD transcends traditional boundaries, enabling open-source models to approximate the contextual adeptness, ethical alignment, and deep semantic insights characteristic of their proprietary counterparts. This work aims to provide an insightful guide for researchers and practitioners, offering a detailed overview of current methodologies in knowledge distillation and proposing future research directions. By bridging the gap between proprietary and open-source LLMs, this survey underscores the potential for more accessible, efficient, and sustainable AI solutions, fostering a more inclusive and equitable landscape in AI advancements. An associated Github repository is available at //github.com/Tebmer/Awesome-Knowledge-Distillation-of-LLMs.
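
As a concrete anchor for the algorithmic pillar, here is a minimal sketch of the classic logit-distillation loss of Hinton et al., which underlies many of the surveyed KD methods; LLM-oriented KD builds sequence-level and DA-driven variants on top of it. The tensor shapes and the temperature are illustrative values.

```python
# Temperature-softened KL divergence between teacher and student logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable to the
    # hard-label cross-entropy loss (Hinton et al., 2015).
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

student = torch.randn(8, 32000)  # hypothetical (batch, vocab) logits
teacher = torch.randn(8, 32000)
print(distillation_loss(student, teacher).item())
```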

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs on low-power devices with limited compute resources. Recent research improves DNN models by reducing their memory requirements, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. For each category, we analyze the accuracy, advantages, and disadvantages of its techniques, along with potential solutions to their problems. We also discuss new evaluation metrics as a guideline for future research.
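
As a toy illustration of category (1), the sketch below applies uniform symmetric post-training quantization of a weight tensor to int8; the tensor size and the symmetric-scaling choice are assumptions made for illustration, not a method from any one surveyed paper.

```python
# Uniform symmetric int8 quantization of a weight matrix.
import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0  # map [-max|w|, max|w|] to [-127, 127]
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
print(f"mean abs quantization error: {np.abs(w - dequantize(q, s)).mean():.5f}")
```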

Within the rapidly developing Internet of Things (IoT), numerous and diverse physical devices, Edge devices, Cloud infrastructure, and their quality-of-service (QoS) requirements need to be represented within a unified specification in order to enable rapid IoT application development, monitoring, and dynamic reconfiguration. However, heterogeneities among different configuration knowledge representation models pose limitations on the acquisition, discovery, and curation of configuration knowledge for coordinated IoT applications. This paper proposes a unified data model to represent IoT resource configuration knowledge artifacts. It also proposes IoT-CANE (Context-Aware recommendatioN systEm) to facilitate incremental knowledge acquisition and declarative, context-driven knowledge recommendation.
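
A hypothetical sketch of what such a unified configuration-knowledge record might look like (all class and field names here are invented for illustration and are not IoT-CANE's actual schema):

```python
# Toy unified data model spanning device, Edge, and Cloud resources.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QoSRequirement:
    max_latency_ms: float
    min_availability: float  # e.g. 0.999

@dataclass
class IoTResource:
    resource_id: str
    tier: str  # "device", "edge", or "cloud"
    capabilities: List[str] = field(default_factory=list)
    qos: Optional[QoSRequirement] = None

sensor = IoTResource("cam-01", "device", ["video", "motion-detect"],
                     QoSRequirement(max_latency_ms=50.0, min_availability=0.99))
print(sensor)
```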
