In this paper, we study quantitative properties of quantum programs. Properties of interest include (positive) almost-sure termination and expected runtime or expected cost, for instance the expected number of applications of a given quantum gate. After studying the completeness of these problems in the arithmetical hierarchy over the Clifford+T fragment of quantum mechanics, we express them using a variation of a quantum pre-expectation transformer, a weakest-precondition-based technique that allows these quantitative properties to be computed symbolically. Under a smooth restriction, namely to polynomials of bounded degree over a real closed field, we show that the quantitative problem, which consists in finding an upper bound on the pre-expectation, can be decided in time double-exponential in the size of the program, thus providing, despite its high complexity, one of the first decidability results on the analysis and verification of quantum programs. Finally, we sketch how the latter can be turned into an efficient synthesis method.
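As a toy illustration of one quantity studied here, the expected number of loop iterations, consider a repeat-until-success loop whose body succeeds with probability p each round, so its expected iteration count is 1/p. The following Monte Carlo sketch (plain classical Python; all names are hypothetical and this is not the paper's symbolic pre-expectation method) estimates that expectation:

```python
import random

def expected_iterations(p=0.5, trials=100_000, seed=0):
    """Monte Carlo estimate of the expected number of iterations of a
    repeat-until-success loop that succeeds with probability p per round."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        rounds = 1
        while rng.random() >= p:  # failure: repeat the loop body
            rounds += 1
        total += rounds
    return total / trials
```

For p = 1/2, as for a loop guarded by measuring a |+> state, the estimate converges to 2; the pre-expectation transformer would derive this bound symbolically rather than by sampling.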

In this paper, we explore how geometric structures can be grown exponentially fast. The studied processes start from an initial shape and apply a sequence of centralized growth operations to grow other shapes. We focus on the case where the initial shape is just a single node. A technical challenge in growing shapes that fast is the need to avoid collisions caused when the shape breaks, stretches, or self-intersects. We identify a parameter $k$, representing the number of turning points within specific parts of a shape. We prove that, if edges can only be formed when generating new nodes and cannot be deleted, trees having $O(k)$ turning points on every root-to-leaf path can be grown in $O(k\log n)$ time steps and spirals with $O(\log n)$ turning points can be grown in $O(\log n)$ time steps, $n$ being the size of the final shape. For this case, we also show that the maximum number of turning points in a root-to-leaf path of a tree is a lower bound on the number of time steps to grow the tree and that there exists a class of paths such that any path in the class with $\Omega(k)$ turning points requires $\Omega(k\log k)$ time steps to be grown. If nodes can additionally be connected as soon as they become adjacent, we prove that if a shape $S$ has a spanning tree with $O(k)$ turning points on every root-to-leaf path, then the adjacency closure of $S$ can be grown in $O(k \log n)$ time steps. In the strongest model that we study, where edges can be deleted and neighbors can be handed over to newly generated nodes, we obtain a universal algorithm: for any shape $S$ it gives a process that grows $S$ from a single node exponentially fast.
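The headline rate, growing a shape of size n in a logarithmic number of time steps, rests on every existing node generating one new node per step, so the size can double each step. A minimal counting sketch (illustrative only, ignoring the geometry and collision avoidance the paper actually handles):

```python
def growth_steps(n):
    """Number of time steps to grow from a single node to at least n nodes
    when every node generates one new node per step (size doubles)."""
    size, steps = 1, 0
    while size < n:
        size *= 2  # every node spawns one new node in parallel
        steps += 1
    return steps
```

This gives ceil(log2 n) steps; the paper's contribution is achieving rates of this order while keeping the growing shape connected and collision-free.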

This paper explores a modern predictive uncertainty estimation approach, called evidential deep learning (EDL), in which a single neural network model is trained to learn a meta distribution over the predictive distribution by minimizing a specific objective function. Despite the strong empirical performance of these methods, recent studies by Bengs et al. identify a fundamental pitfall of the existing methods: the learned epistemic uncertainty may not vanish even in the infinite-sample limit. We corroborate the observation by providing a unifying view of a class of widely used objectives from the literature. Our analysis reveals that the EDL methods essentially train a meta distribution by minimizing a certain divergence measure between the distribution and a sample-size-independent target distribution, resulting in spurious epistemic uncertainty. Grounded in theoretical principles, we propose learning a consistent target distribution by modeling it with a mixture of Dirichlet distributions and learning via variational inference. Afterward, a final meta distribution model distills the learned uncertainty from the target model. Experimental results across various uncertainty-based downstream tasks demonstrate the superiority of our proposed method, and illustrate the practical implications arising from the consistency and inconsistency of learned epistemic uncertainty.
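The consistency desideratum, epistemic uncertainty vanishing as evidence accumulates, can be illustrated with the closed-form moments of a Dirichlet meta distribution: scaling the concentration parameters (pseudo-counts) leaves the predictive mean fixed while the variance around it shrinks toward zero. A small stdlib-only sketch (illustrative of the desideratum, not the paper's variational method):

```python
def dirichlet_mean_and_var(alpha):
    """Per-class mean and variance of p ~ Dirichlet(alpha)."""
    a0 = sum(alpha)
    mean = [a / a0 for a in alpha]
    # Var[p_k] = alpha_k * (a0 - alpha_k) / (a0^2 * (a0 + 1))
    var = [a * (a0 - a) / (a0 ** 2 * (a0 + 1)) for a in alpha]
    return mean, var

# More evidence = larger pseudo-counts: same predictive mean, lower spread.
few_mean, few_var = dirichlet_mean_and_var([2.0, 1.0, 1.0])
lots_mean, lots_var = dirichlet_mean_and_var([200.0, 100.0, 100.0])
```

The pitfall the paper corroborates is that existing EDL objectives target a distribution whose spread does not shrink with sample size in this way.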

We analyze the capabilities of Transformer language models at learning discrete algorithms. To this end, we introduce two new tasks that demand the composition of several discrete sub-tasks. Both when training LLaMA models from scratch and when prompting GPT-4 and Gemini, we measure how well the models learn compositions of learned primitives. We observe that the compositional capabilities of state-of-the-art Transformer language models are very limited, and scale worse in terms of samples than relearning all sub-tasks for a new algorithmic composition. We also present a theorem in complexity theory showing that gradient descent on memorizing feedforward models can be exponentially data-inefficient.

In the present paper, we prove a new theorem resulting in an update formula for linear regression model residuals that computes the exact k-fold cross-validation residuals for any choice of cross-validation strategy without model refitting. The required matrix inversions are limited by the cross-validation segment sizes and can be executed with high efficiency in parallel. The well-known formula for leave-one-out cross-validation follows as a special case of the theorem. In situations where the cross-validation segments consist of small groups of repeated measurements, we suggest a heuristic strategy for fast serial approximations of the cross-validated residuals and the associated Predicted Residual Sum of Squares (PRESS) statistic. We also suggest strategies for efficient estimation of the minimum PRESS value and the full PRESS function over a selected interval of regularisation values. The computational effectiveness of the parameter selection for Ridge and Tikhonov regression modelling resulting from our theoretical findings and heuristic arguments is demonstrated in several applications with real and highly multivariate datasets.
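The leave-one-out special case mentioned above has the classical closed form e_i / (1 - h_ii), where h_ii are the diagonals of the hat matrix. A short NumPy sketch (with made-up toy data, not taken from the paper) recovers the exact leave-one-out residuals without any refitting:

```python
import numpy as np

# Hypothetical toy data: intercept plus one predictor.
X = np.column_stack([np.ones(6), np.arange(6.0)])
y = np.array([0.1, 1.2, 1.9, 3.1, 3.9, 5.2])

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y   # full-data least-squares fit
H = X @ XtX_inv @ X.T      # hat matrix
e = y - X @ beta           # ordinary residuals

# Exact leave-one-out residuals from the full fit alone:
e_loo = e / (1.0 - np.diag(H))
press = float(e_loo @ e_loo)  # PRESS statistic
```

Each e_loo[i] equals the prediction error of a model refitted without observation i; the paper's theorem generalizes this identity to arbitrary k-fold segmentations.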

This book delves into the burgeoning field of quantum resource theories, a novel and vibrant area of research within quantum information science that seeks to unify diverse quantum phenomena under a single framework. By recognizing various attributes of physical systems as "resources," this approach offers a fresh perspective on quantum phenomena, transforming our understanding and application of concepts such as quantum entanglement, coherence, and more. With a focus on the pedagogical, the book aims to equip readers with the advanced mathematical tools and physical principles needed to navigate and contribute to this rapidly evolving field. It covers a wide range of topics, from the foundational aspects of quantum mechanics and quantum information to detailed explorations of specific resource theories, including entanglement, asymmetry, and thermodynamics. Through rigorous mathematical exposition and a unique axiomatic approach, the book provides deep insights into the operational and conceptual frameworks that underpin quantum resource theories, making it an invaluable resource for graduate students, early-career researchers, and anyone interested in the cutting-edge developments in quantum information science.

In this critical survey, we analyze typical claims on the relationship between explainable AI (XAI) and fairness to disentangle the multidimensional relationship between these two concepts. Based on a systematic literature review and a subsequent qualitative content analysis, we identify seven archetypal claims from 175 papers on the alleged fairness benefits of XAI. We present crucial caveats with respect to these claims and provide an entry point for future discussions around the potentials and limitations of XAI for specific fairness desiderata. Importantly, we notice that claims are often (i) vague and simplistic, (ii) lacking normative grounding, or (iii) poorly aligned with the actual capabilities of XAI. We encourage conceiving of XAI not as an ethical panacea but as one of many tools to approach the multidimensional, sociotechnical challenge of algorithmic fairness. Moreover, when making a claim about XAI and fairness, we emphasize the need to be more specific about what kind of XAI method is used, which fairness desideratum it refers to, how exactly it enables fairness, and who is the stakeholder that benefits from XAI.

In this paper, we prove the first Bayesian regret bounds for Thompson Sampling in reinforcement learning in a multitude of settings. We simplify the learning problem using a discrete set of surrogate environments, and present a refined analysis of the information ratio using posterior consistency. This leads to an upper bound of order $\widetilde{O}(H\sqrt{d_{l_1}T})$ in the time-inhomogeneous reinforcement learning problem, where $H$ is the episode length and $d_{l_1}$ is the Kolmogorov $l_1$-dimension of the space of environments. We then find concrete bounds on $d_{l_1}$ in a variety of settings, such as tabular, linear, and finite mixtures, and discuss how our results are either the first of their kind or improve the state-of-the-art.

Technology enables a more sustainable and universally accessible educational model. However, technology has brought a paradox into students' lives: it helps them engage in learning activities, but it is also a source of distraction. During the academic year 2021-2022, the authors conducted a study focusing on classroom distractions. One of the objectives was to identify the main digital distractions from the point of view of students. The study was carried out at an engineering school, where technology is fully integrated into the classroom and into the academic routines of teachers and students. Discussions and surveys, complemented by a statistical study based on bivariate correlations, were conducted with participating students (n = 105). Students considered digital distractions to have a significant impact on their performance in lab sessions, a performance they mainly self-assessed as improvable. Contrary to other contemporary research, the results were not influenced by the students' year of study, as the issue is important regardless of their backgrounds. Professors should implement strategies to raise students' awareness of the significant negative effects of digital distractions on their performance, as well as to develop students' self-control skills. This is of vital importance if the use of technology is to be sustainable in the long term.

In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing a rapid spread and wide adoption within recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
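As a concrete instance of the uniform affine ("asymmetric") schemes such surveys cover, the following sketch maps floats onto an unsigned integer grid via a scale and zero point (an illustrative implementation written for this summary, not code from the article):

```python
def quantize(xs, num_bits=8):
    """Uniform affine quantization of a list of floats to unsigned integers."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / (qmax - qmin) or 1.0   # guard against constant input
    zero_point = round(qmin - lo / scale)      # integer grid point for real 0.0
    q = [min(qmax, max(qmin, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map quantized integers back to (approximate) real values."""
    return [(v - zero_point) * scale for v in q]
```

The round-trip error of each value is bounded by the scale, which is exactly the accuracy-versus-bit-width trade-off the survey formalizes: halving the bit width roughly doubles the step size, and with it the worst-case quantization error.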
