
We survey recent progress on efficient algorithms for approximately diagonalizing a square complex matrix in the models of rational (variable precision) and finite (floating point) arithmetic. This question has been studied across several research communities for decades, but many mysteries remain. We present several open problems which we hope will be of broad interest.
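As a quick illustration of the floating-point side of the problem, the following minimal sketch (ours, not taken from the survey; numpy's `eig` stands in for any finite-arithmetic eigensolver) approximately diagonalizes a random complex matrix and measures how far the computed factorization is from an exact one:

```python
# Minimal sketch: approximate diagonalization in floating-point arithmetic,
# checking the relative backward error ||A - V D V^{-1}|| / ||A|| that
# finite-precision eigensolvers aim to keep small.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

eigvals, V = np.linalg.eig(A)      # floating-point (approximate) diagonalization
D = np.diag(eigvals)
residual = np.linalg.norm(A - V @ D @ np.linalg.inv(V)) / np.linalg.norm(A)
print(f"relative backward error: {residual:.2e}")
```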

Related content

Tensor networks are an important concept and technique in many research areas, such as quantum computation and machine learning. We study the exponential complexity of contracting tensor networks on two special graph structures: planar graphs and finite element graphs. We prove that every finite element graph has an edge separator of size $O(d\sqrt{\max\{\Delta,d\}N})$. Furthermore, building on the $2^{O(\sqrt{\Delta N})}$ time algorithm \cite{fastcounting} for planar Boolean tensor network contraction, we develop a $2^{O(d\sqrt{\max\{\Delta,d\}N})}$ time algorithm for contracting a tensor network consisting of $N$ Boolean tensors whose underlying graph is a finite element graph with maximum degree $\Delta$ and no face with more than $d$ boundary edges in the planar skeleton. We use two methods to accelerate these exponential algorithms by converting high-dimensional tensors into low-dimensional ones. First, we construct an $O(k)$ size planar gadget for any Boolean symmetric tensor of dimension $k$, where the gadget consists only of Boolean tensors of dimension at most $5$. Second, we decompose any tensor into a series of vectors (unary functions) according to its \emph{CP decomposition} \cite{tensor-rank}. We also prove a sub-exponential time lower bound for contracting tensor networks under the counting \emph{Exponential Time Hypothesis} (\#ETH).
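To make the contraction operation concrete, here is a hedged toy example (ours, not the paper's algorithm): a Boolean tensor network on a triangle whose full contraction counts independent sets, carried out with `numpy.einsum`:

```python
# Toy Boolean tensor network: each vertex holds an equality tensor, each edge
# a NAND tensor forbidding both endpoints to be 1; contracting the whole
# network counts the independent sets of the triangle.
import numpy as np

EQ = np.array([[1, 0], [0, 1]])    # degree-2 equality tensor at each vertex
NAND = np.array([[1, 1], [1, 0]])  # edge tensor: 0 iff both endpoints are 1

# Vertices u=(a,f), v=(b,c), w=(d,e); edges uv=(a,b), vw=(c,d), wu=(e,f).
count = np.einsum("af,bc,de,ab,cd,ef->", EQ, EQ, EQ, NAND, NAND, NAND)
print(count)  # 4: the empty set and the three singletons
```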

The hardware computing landscape is changing. What used to be distributed systems can now be found on a chip, with highly configurable, diverse, specialized and general purpose units. Such Systems-on-a-Chip (SoCs) control today's cyber-physical systems and are the building blocks of critical infrastructures. They are deployed in harsh environments and connected to cyberspace, which exposes them to both accidental faults and targeted cyberattacks. This comes on top of the changing fault landscape that continued technology scaling, emerging devices, and novel application scenarios will bring. In this paper, we discuss how the very features that cause many of the imminent and emerging security and resilience challenges, namely distribution, parallelism, reconfigurability, and heterogeneity, also open avenues for their cure through SoC replication, diversity, rejuvenation, adaptation, and hybridization. We show how to leverage these techniques at different levels across the entire SoC hardware/software stack, and call for more research on the topic.

Online computation is a concept for modeling uncertainty in which not all information about a problem instance is known in advance. An online algorithm receives requests that reveal the instance piecewise and has to respond with irrevocable decisions. Often, an adversary is assumed that constructs the instance knowing the deterministic behavior of the algorithm; thus, the adversary can tailor the input to any online algorithm. From a game-theoretical point of view, the adversary and the online algorithm are players in an asymmetric two-player game. To overcome this asymmetry, the online algorithm is equipped with an isomorphic copy of the graph, referred to as an unlabeled map. Applying this game-theoretical perspective to online graph problems whose solution is a subset of the vertices, we analyze the complexity of the resulting online vertex subset games. To this end, we introduce a framework for reducing from TQBF to online vertex subset games, based on gadget reductions from 3-SATISFIABILITY to the corresponding offline problem. We further identify a set of rules for extending the 3-SATISFIABILITY reduction and provide schemes for additional gadgets that ensure these rules are fulfilled. By extending the gadget reduction of a vertex subset problem with these additional gadgets, we obtain a reduction for the corresponding online vertex subset game. Finally, we provide example reductions for the online vertex subset games based on VERTEX COVER, INDEPENDENT SET, and DOMINATING SET, proving that they are PSPACE-complete. This establishes that the online versions, with a map, of NP-complete vertex subset problems form a large class of PSPACE-complete problems.
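For intuition on the asymmetry between the two players, consider the following toy sketch (our illustration, not the paper's framework) of an online VERTEX COVER game, where the adversary reveals edges one at a time and the algorithm must commit irrevocably:

```python
# Toy online VERTEX COVER: taking both endpoints of any uncovered edge is a
# classic 2-competitive strategy; the adversary, knowing the algorithm, can
# still force a cover twice the offline optimum.
def online_vertex_cover(edge_stream):
    cover = set()
    for u, v in edge_stream:           # each request reveals one edge
        if u not in cover and v not in cover:
            cover.update((u, v))       # irrevocable decision
    return cover

# An adversarially chosen star: the algorithm pays 2, the optimum is {0}.
print(online_vertex_cover([(0, 1), (0, 2), (0, 3)]))  # {0, 1}
```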

The Graph Exploration problem asks a searcher to explore an unknown environment. The environment is modeled as a graph, where the searcher needs to visit each vertex beginning at some vertex $s$. Furthermore, Treasure Hunt problems are a variation of Graph Exploration, in which the searcher needs to find a hidden treasure, which is located at some vertex $t$. In these online problems, any online algorithm performs poorly because it has too little knowledge about the instance to react adequately to the requests of the adversary. Thus, the impact of a priori knowledge is of interest. In graph problems, one form of a priori knowledge is a map of the graph. We survey the graph exploration and treasure hunt problem with an unlabeled map, which is an isomorphic copy of the graph, that is provided to the searcher. We formulate decision variants of both problems by interpreting the online problems as a game between the online algorithm (the searcher) and the adversary. The map, however, is not controllable by the adversary. The question is, whether the searcher is able to explore the graph fully or find the treasure for all possible decisions of the adversary. We prove the PSPACE-completeness of these games, whereby we analyze the variations which ask for the mere existence of a tour through the graph or path to the treasure and the variations that include costs. Additionally, we analyze the complexity of related problems that relax the path constraint, allowing multiple visits of vertices or edges, or have additional constraints, like requiring to visit specific edges.
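As a small illustration of the exploration cost model (our sketch, under the assumption that the searcher learns a vertex's neighborhood only upon arrival and pays one unit per traversed edge), a depth-first exploration of an unknown graph looks as follows:

```python
# DFS exploration of an unknown graph: neighbors(v) is the local view
# revealed when v is visited; backtracking edges are paid for as well.
def explore(start, neighbors):
    visited, cost = {start}, 0
    def dfs(v):
        nonlocal cost
        for u in neighbors(v):
            if u not in visited:
                visited.add(u)
                cost += 1              # walk to u
                dfs(u)
                cost += 1              # backtrack to v
    dfs(start)
    return visited, cost

adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
print(explore(0, adj.__getitem__))     # all 4 vertices at cost 2*(n-1) on a tree
```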

The volume function $V(t)$ of a compact set $S \subset \mathbb{R}^d$ is the Lebesgue measure of the set of points within distance at most $t$ of $S$. According to some classical results in geometric measure theory, the volume function turns out to be a polynomial, at least on a finite interval, under a quite intuitive, easy-to-interpret sufficient condition (called ``positive reach'') which can be seen as an extension of the notion of convexity. However, many other simple sets that do not fulfill the positive reach condition also have a polynomial volume function. To our knowledge, there is no general, simple geometric description of such sets. Still, the polynomial character of $V(t)$ has some relevant consequences, since the polynomial coefficients carry useful geometric information. In particular, the constant term is the volume of $S$ and the first-order coefficient is the boundary measure (in Minkowski's sense). This paper focuses on sets whose volume function is polynomial on some interval starting at zero, whose length (which we call the ``polynomial reach'') might be unknown. Our main goal is to approximate this polynomial reach by statistical means, using only a large enough random sample of points inside $S$. The practical motivation is simple: once the value of the polynomial reach, or rather a lower bound for it, is approximately known, the polynomial coefficients can be estimated from the sample points using standard methods of polynomial approximation. As a result, we get a quite general method to estimate the volume and boundary measure of the set, relying only on an inner sample of points and not requiring any smoothing parameter. This paper explores the theoretical and practical aspects of this idea.
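For a concrete instance, the unit disk has positive reach, so its volume function is the Steiner polynomial $V(t) = \pi + 2\pi t + \pi t^2$; the following sketch (a worked example, not the paper's estimator) recovers area and perimeter from Monte Carlo estimates of $V(t)$:

```python
# For the unit disk S in R^2, fit a quadratic to Monte Carlo estimates of
# V(t); the constant term approximates the area (pi) and the linear
# coefficient the boundary measure (2*pi).
import numpy as np

rng = np.random.default_rng(1)
ts = np.linspace(0.0, 1.0, 20)
V = []
for t in ts:
    R = 1.0 + t                                   # t-dilation of the unit disk
    pts = rng.uniform(-R, R, size=(200_000, 2))
    inside = (pts ** 2).sum(axis=1) <= R ** 2
    V.append(inside.mean() * (2 * R) ** 2)        # hit fraction times box area

c2, c1, c0 = np.polyfit(ts, V, deg=2)             # highest degree first
print(c0, c1)   # approx 3.14 (area) and 6.28 (perimeter)
```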

We study the computational complexity of a robust version of the problem of testing two univariate C-finite functions for eventual inequality at large times. Specifically, working in the bit-model of real computation, we consider the eventual inequality testing problem for real functions that are specified by homogeneous linear Cauchy problems with arbitrary real coefficients and initial values. In order to assign a well-defined computational complexity to this problem, we develop a natural notion of polynomial-time decidability of subsets of computable metric spaces, which extends our recently introduced notion of maximal partial decidability. We show that eventual inequality of C-finite functions is polynomial-time decidable in this sense.
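For intuition only, here is a naive numeric sketch (ours; it cannot certify eventual inequality the way the paper's decision procedure does) that unrolls two C-finite sequences, the discrete analogue of the functions considered here, and inspects their difference on a finite horizon:

```python
# A C-finite sequence satisfies a linear recurrence with constant
# coefficients; dominant characteristic roots govern eventual comparisons.
def cfinite(coeffs, init, n):
    """Unroll x_k = coeffs[0]*x_{k-1} + ... + coeffs[d-1]*x_{k-d}."""
    x = list(init)
    while len(x) < n:
        x.append(sum(c * v for c, v in zip(coeffs, reversed(x[-len(coeffs):]))))
    return x

f = cfinite([1, 1], [1, 1], 60)        # Fibonacci: dominant root ~1.618
g = cfinite([2], [1], 60)              # 2^k: dominant root 2
print(all(fi < gi for fi, gi in zip(f[5:], g[5:])))  # True on this horizon
```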

This paper presents a comprehensive and practical guide for practitioners and end-users working with Large Language Models (LLMs) in their downstream natural language processing (NLP) tasks. We provide discussions and insights into the usage of LLMs from the perspectives of models, data, and downstream tasks. First, we offer an introduction and brief summary of current GPT- and BERT-style LLMs. Then, we discuss the influence of pre-training data, training data, and test data. Most importantly, we provide a detailed discussion of the use and non-use cases of large language models for various NLP tasks, such as knowledge-intensive tasks, traditional natural language understanding tasks, natural language generation tasks, emergent abilities, and considerations for specific tasks. We present various use cases and non-use cases to illustrate the practical applications and limitations of LLMs in real-world scenarios. We also examine the importance of data and the specific challenges associated with each NLP task. Furthermore, we explore the impact of spurious biases on LLMs and delve into other essential considerations, such as efficiency, cost, and latency, to ensure a comprehensive understanding of deploying LLMs in practice. This guide aims to provide researchers and practitioners with valuable insights and best practices for working with LLMs, thereby enabling the successful implementation of these models in a wide range of NLP tasks. A curated and regularly updated list of practical guide resources for LLMs can be found at \url{https://github.com/Mooler0410/LLMsPracticalGuide}.
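As a minimal hands-on illustration of the two model families the guide distinguishes (our sketch, using the public Hugging Face `transformers` pipelines with standard checkpoints, not code from the paper):

```python
from transformers import pipeline

# GPT-style (decoder-only): open-ended generation.
generator = pipeline("text-generation", model="gpt2")
print(generator("Large language models are", max_new_tokens=20)[0]["generated_text"])

# BERT-style (encoder-only): masked-token prediction, suited to understanding tasks.
filler = pipeline("fill-mask", model="bert-base-uncased")
print(filler("Large language models are [MASK].")[0]["token_str"])
```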

Model complexity is a fundamental problem in deep learning. In this paper, we provide a systematic overview of the latest studies on model complexity in deep learning. The model complexity of deep learning can be categorized into expressive capacity and effective model complexity. We review the existing studies on these two categories along four important factors: model framework, model size, optimization process, and data complexity. We also discuss the applications of deep learning model complexity, including understanding model generalization capability, model optimization, and model selection and design. We conclude by proposing several interesting future directions.
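As a tiny illustration of one of the four factors, model size (our sketch, assuming PyTorch; the survey's treatment is far broader), the number of trainable parameters of a network can be measured directly:

```python
# Model size as a crude complexity proxy: count trainable parameters.
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(n_params)  # 784*256 + 256 + 256*10 + 10 = 203530
```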

Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. The paper then focuses on the major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. Next, it reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, their compatibility with major deep learning frameworks, and their extensibility for new modules designed by users. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison of the optimization algorithms, and prominent approaches for model evaluation under limited computational resources.
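The simplest optimization algorithm such reviews cover is random search; here is a minimal hedged sketch (ours, with an illustrative search space) using scikit-learn:

```python
# Random-search HPO: sample configurations, score each by cross-validation,
# keep the best. The search space below is illustrative only.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

best = (-np.inf, None)
for _ in range(10):                                  # trial budget
    cfg = {
        "hidden_layer_sizes": (int(rng.integers(16, 129)),),
        "alpha": 10.0 ** rng.uniform(-5, -1),        # L2 penalty, log-uniform
        "learning_rate_init": 10.0 ** rng.uniform(-4, -1),
    }
    model = MLPClassifier(max_iter=200, random_state=0, **cfg)
    score = cross_val_score(model, X, y, cv=3).mean()
    best = max(best, (score, cfg), key=lambda t: t[0])

print(best)
```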

Deep learning has penetrated all aspects of our lives and brought us great convenience. However, the process of building a high-quality deep learning system for a specific task is not only time-consuming but also requires lots of resources and relies on human expertise, which hinders the development of deep learning in both industry and academia. To alleviate this problem, a growing number of research projects focus on automated machine learning (AutoML). In this paper, we provide a comprehensive and up-to-date study of the state of the art in AutoML. First, we introduce AutoML techniques in detail, following the machine learning pipeline. Then we summarize existing Neural Architecture Search (NAS) research, one of the most popular topics in AutoML, and compare the models generated by NAS algorithms with human-designed models. Finally, we present several open problems for future research.
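To convey the NAS idea in miniature (our toy sketch; real NAS search spaces and strategies are far richer), one can randomly sample depths and widths of a fully connected network and keep the best-scoring architecture:

```python
# NAS reduced to random search over architectures (depth and layer widths),
# scored by cross-validation on a small benchmark dataset.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

candidates = [
    tuple(int(rng.integers(16, 129)) for _ in range(rng.integers(1, 4)))
    for _ in range(8)                       # 8 sampled architectures
]
scores = {
    arch: cross_val_score(
        MLPClassifier(hidden_layer_sizes=arch, max_iter=200, random_state=0),
        X, y, cv=3).mean()
    for arch in candidates
}
print(max(scores, key=scores.get))          # best-found architecture
```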
