Deep neural networks (DNNs) are increasingly being deployed to perform safety-critical tasks. The opacity of DNNs, which prevents humans from reasoning about them, presents new safety and security challenges. To address these challenges, the verification community has begun developing techniques for rigorously analyzing DNNs, with numerous verification algorithms proposed in recent years. While a significant amount of work has gone into developing these verification algorithms, little work has been devoted to rigorously studying the computability and complexity of the underlying theoretical problems. Here, we seek to help bridge this gap. We focus on two kinds of DNNs: those that employ piecewise-linear activation functions (e.g., ReLU), and those that employ piecewise-smooth activation functions (e.g., Sigmoids). We prove the following two theorems: 1) the decidability of verifying DNNs with piecewise-smooth activation functions is equivalent to a well-known open problem formulated by Tarski; and 2) the DNN verification problem for any quantifier-free linear arithmetic specification can be reduced to the DNN reachability problem, whose approximation is NP-complete. These results answer two fundamental questions about the computability and complexity of DNN verification and about how they are affected by the network's activation functions and error tolerance, and they could help guide future efforts in developing DNN verification tools.
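
For intuition about the reachability problem in the second theorem, below is a minimal sketch (NumPy only; the two-layer network and its weights are invented for illustration) of interval bound propagation, a common sound-but-incomplete way to over-approximate the set of outputs a ReLU network can reach from a box of inputs:

```python
import numpy as np

def interval_bounds(layers, lo, hi):
    """Propagate an input box [lo, hi] through affine + ReLU layers and
    return a sound over-approximation of the reachable output set."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:                      # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    return lo, hi

# Toy 2-2-1 ReLU network (weights chosen arbitrarily for illustration).
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])),
          (np.array([[1.0, 1.0]]), np.array([0.5]))]
lo, hi = interval_bounds(layers, np.array([0.0, 0.0]), np.array([1.0, 1.0]))
print(lo, hi)  # any output outside [lo, hi] is provably unreachable
```

Exact reachability would additionally require case-splitting over the active/inactive phases of the ReLUs, which is where the combinatorial hardness enters.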

Related content

Many applications rely on solving time-dependent partial differential equations (PDEs) that include second derivatives. Summation-by-parts (SBP) operators are crucial for developing stable, high-order accurate numerical methodologies for such problems. Conventionally, SBP operators are tailored to the assumption that polynomials accurately approximate the solution, and SBP operators should thus be exact for them. However, this assumption falls short for a range of problems for which other approximation spaces are better suited. We recently addressed this issue and developed a theory for first-derivative SBP operators based on general function spaces, coined function-space SBP (FSBP) operators. In this paper, we extend the innovation of FSBP operators to accommodate second derivatives. The developed second-derivative FSBP operators maintain the desired mimetic properties of existing polynomial SBP operators while allowing for greater flexibility by being applicable to a broader range of function spaces. We establish the existence of these operators and detail a straightforward methodology for constructing them. By exploring various function spaces, including trigonometric, exponential, and radial basis functions, we illustrate the versatility of our approach. We showcase the superior performance of these non-polynomial FSBP operators over traditional polynomial-based operators for a suite of one- and two-dimensional problems, encompassing a boundary layer problem and the viscous Burgers' equation. The work presented here opens up possibilities for using second-derivative SBP operators based on suitable function spaces, paving the way for a wide range of applications in the future.
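
For reference, in the polynomial SBP literature the first- and second-derivative operators are often written in the following form (the paper's FSBP notation may differ slightly; here $P$ is the norm matrix, $B = \mathrm{diag}(-1, 0, \ldots, 0, 1)$, and $D_b$ approximates the first derivative at the boundary nodes):

$$ D_1 = P^{-1} Q, \qquad P = P^{\top} > 0, \qquad Q + Q^{\top} = B, $$
$$ D_2 = P^{-1}\left( -M + B D_b \right), \qquad M = M^{\top} \succeq 0, $$

so that $u^{\top} P D_2 v = -u^{\top} M v + u^{\top} B D_b v$ mimics the integration-by-parts identity $\int_a^b u v'' \, dx = -\int_a^b u' v' \, dx + \left[ u v' \right]_a^b$. The FSBP generalization keeps these mimetic properties but requires exactness on a general function space rather than on polynomials.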

Despite significant effort, the quantum machine learning community has only demonstrated quantum learning advantages for artificial cryptography-inspired datasets when dealing with classical data. In this paper we address the challenge of finding learning problems where quantum learning algorithms can achieve a provable exponential speedup over classical learning algorithms. We reflect on computational learning theory concepts related to this question and discuss how subtle differences in definitions can result in significantly different requirements and tasks for the learner to meet and solve. We examine existing learning problems with provable quantum speedups and find that they largely rely on the classical hardness of evaluating the function that generates the data, rather than identifying it. To address this, we present two new learning separations where the classical difficulty primarily lies in identifying the function generating the data. Furthermore, we explore computational hardness assumptions that can be leveraged to prove quantum speedups in scenarios where data is quantum-generated, which implies likely quantum advantages in a plethora of more natural settings (e.g., in condensed matter and high energy physics). We also discuss the limitations of the classical shadow paradigm in the context of learning separations, and how physically-motivated settings such as characterizing phases of matter and Hamiltonian learning fit in the computational learning framework.

Nurmuhammad et al. developed Sinc-Nystr\"{o}m methods for initial value problems in which solutions exhibit exponential decay end behavior. In the methods, the Single-Exponential (SE) transformation or the Double-Exponential (DE) transformation is combined with the Sinc approximation. Hara and Okayama improved those transformations so that a better convergence rate could be attained, which was afterward supported by theoretical error analyses. However, due to a special function included in the basis functions, the methods have a drawback for computation. To address this issue, Okayama and Hara proposed Sinc-collocation methods, which do not include any special function in the basis functions. This study gives error analyses for the methods.
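
For context, all of the methods above build on the Sinc approximation over an equispaced grid, composed with a conformal map $\psi$ from the real line onto the interval of interest (the specific SE and DE maps used by Hara and Okayama are defined in their papers):

$$ f(\psi(x)) \approx \sum_{j=-N}^{N} f(\psi(jh))\, S(j,h)(x), \qquad S(j,h)(x) = \frac{\sin\!\left( \pi (x - jh)/h \right)}{\pi (x - jh)/h}, $$

where, roughly speaking, a suitable SE transformation yields errors of order $\mathrm{O}\!\left(\exp(-c\sqrt{N})\right)$ and a DE transformation improves this to $\mathrm{O}\!\left(\exp(-c N / \log N)\right)$ for appropriately chosen mesh sizes $h$.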

Theoretical studies on transfer learning or domain adaptation have so far focused on situations with a known hypothesis class or model; however, in practice, some amount of model selection is usually involved, often appearing under the umbrella term of hyperparameter tuning: for example, one may think of the problem of tuning for the right neural network architecture towards a target task while leveraging data from a related source task. Now, in addition to the usual tradeoffs between approximation and estimation errors involved in model selection, this problem brings in a new complexity term, namely, the transfer distance between source and target distributions, which is known to vary with the choice of hypothesis class. We present a first study of this problem, focusing on classification; in particular, the analysis reveals some remarkable phenomena: adaptive rates, i.e., those achievable with no distributional information, can be arbitrarily slower than oracle rates, i.e., those achievable given knowledge of these transfer distances.

Verification and safety assessment of neural network controlled systems (NNCSs) is an emerging challenge. To provide guarantees, verification tools must efficiently capture the interplay between the neural network and the physical system within the control loop. In this paper, a compositional approach focused on inclusion-preserving, long-term symbolic dependency modeling is proposed for the analysis of NNCSs. First, the matrix structure of symbolic zonotopes is exploited to efficiently abstract the input/output mapping of the loop elements through (inclusion-preserving) affine symbolic expressions, thus maintaining linear dependencies between interacting blocks. Then, two further extensions are studied: symbolic polynotopes are used to abstract the behaviour of the loop elements by means of polynomial symbolic expressions and dependencies, and an original input partitioning algorithm takes advantage of symbol preservation to assess the sensitivity of the computed approximation to some input directions. The approach is evaluated on different numerical examples and benchmarks. A good trade-off between low conservatism and computational efficiency is obtained.
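
To make the affine-abstraction idea concrete, here is a minimal sketch (NumPy only; the class, its method names, and the example matrices are invented for illustration and are not the paper's data structures) of propagating a zonotope with named symbols exactly through an affine map; the symbol identifiers are what allow dependencies to be preserved when several blocks share the same uncertainty sources:

```python
import numpy as np

class Zonotope:
    """Set {c + G @ xi : xi in [-1, 1]^p}; ids name the symbols so that
    dependencies shared between interacting blocks can be tracked."""
    def __init__(self, c, G, ids):
        self.c, self.G, self.ids = np.asarray(c, float), np.asarray(G, float), list(ids)

    def affine(self, W, b):
        # Exact (inclusion-preserving) image under x -> W x + b;
        # the symbols and their identifiers are carried along unchanged.
        return Zonotope(W @ self.c + b, W @ self.G, self.ids)

    def interval(self):
        # Box enclosure, obtained by bounding each symbol in [-1, 1].
        r = np.abs(self.G).sum(axis=1)
        return self.c - r, self.c + r

# Unit box around the origin in R^2, tracked by symbols s0, s1.
z = Zonotope([0.0, 0.0], np.eye(2), ["s0", "s1"])
z2 = z.affine(np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0]))
print(z2.interval())  # enclosure of the image; s0, s1 are still tracked
```

Nonlinear loop elements such as the network's activations would, in the paper's approach, be abstracted by inclusion-preserving affine (or polynomial) expressions over these same symbols.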

The convexification numerical method, with a rigorously established global convergence property, is constructed for a problem for the Mean Field Games System of the second order. This is the problem of the retrospective analysis of a game of infinitely many rational players. In addition to the traditional initial and terminal conditions, one extra terminal condition is assumed to be known. Carleman estimates and a Carleman Weight Function play the key role. Numerical experiments demonstrate good performance for complicated functions. Various versions of the convexification have been actively used by this research team for a number of years to numerically solve coefficient inverse problems.
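
For orientation, a standard form of the second-order Mean Field Games system is given below (sign conventions, coupling terms, and prescribed data vary across papers; the retrospective problem above supplements the usual data with one extra terminal condition):

$$ \begin{cases} -\partial_t u - \nu \Delta u + H(x, \nabla u) = F(x, m), \\ \partial_t m - \nu \Delta m - \operatorname{div}\!\left( m\, \nabla_p H(x, \nabla u) \right) = 0, \end{cases} \qquad m(x, 0) = m_0(x), \quad u(x, T) = u_T(x), $$

where $u$ is the value function of a representative player, $m$ is the density of players, $H$ is the Hamiltonian, and $\nu > 0$ is the viscosity.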

We derive optimality conditions for the optimum sample allocation problem, formulated as the determination of the fixed strata sample sizes that minimize the total cost of the survey under an assumed level of the variance of the stratified estimator and one-sided upper bounds imposed on the sample sizes in strata. In this context, we assume that the variance function is of a generic form that includes, as a special case, the variance of the stratified $\pi$ estimator of the population total under stratified simple random sampling without replacement. The optimality conditions mentioned above are derived with the use of convex optimization theory and the Karush-Kuhn-Tucker conditions. Based on the established optimality conditions, we give a formal proof that an existing procedure, termed here LRNA, solves the allocation problem considered. We formulate the LRNA in such a way that it also provides the solution to the classical optimum allocation problem (i.e., minimization of the estimator's variance under a fixed total cost) under one-sided lower bounds imposed on the sample sizes in strata. From this standpoint, the LRNA can be considered a counterpart of the popular recursive Neyman allocation procedure that is used to solve the classical problem of optimum sample allocation but with one-sided upper bounds. A ready-to-use R implementation of the LRNA is available through our package stratallo, which is published on the Comprehensive R Archive Network (CRAN) package repository.
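
In notation common to this literature (the paper's generic variance form may be written differently), the allocation problem being solved can be stated as

$$ \min_{n_1, \ldots, n_H} \; \sum_{h=1}^{H} c_h n_h \quad \text{subject to} \quad \sum_{h=1}^{H} \frac{A_h^2}{n_h} - A_0 \le V, \qquad 0 < n_h \le M_h, \quad h = 1, \ldots, H, $$

where $c_h$ is the per-unit cost in stratum $h$, $M_h$ is the upper bound on the stratum sample size, and, for the stratified $\pi$ estimator under simple random sampling without replacement, $A_h = N_h S_h$ and $A_0 = \sum_{h} N_h S_h^2$, with $N_h$ and $S_h$ the stratum size and stratum standard deviation.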

In many numerical simulations, stochastic gradient descent (SGD) type optimization methods perform very effectively in the training of deep neural networks (DNNs), but to this day it remains an open research problem to provide a mathematical convergence analysis that rigorously explains the success of SGD type optimization methods in the training of DNNs. In this work we study SGD type optimization methods in the training of fully-connected feedforward DNNs with rectified linear unit (ReLU) activation. We first establish general regularity properties for the risk functions and their generalized gradient functions appearing in the training of such DNNs and, thereafter, we investigate the plain vanilla SGD optimization method in the training of such DNNs under the assumption that the target function under consideration is a constant function. Specifically, we prove that if the learning rates (the step sizes of the SGD optimization method) are sufficiently small but not $L^1$-summable and if the target function is a constant function, then the expectation of the risk of the considered SGD process converges to zero in the training of such DNNs as the number of SGD steps increases to infinity.
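
As a purely numerical illustration of the setting (not of the proof), the sketch below trains a one-hidden-layer ReLU network on a constant target function with plain vanilla SGD and step sizes $\gamma_n = 0.05\, n^{-0.6}$, which are small but not $L^1$-summable; the architecture, constants, and input distribution are invented for this example:

```python
import numpy as np

rng = np.random.default_rng(0)
d, width, target = 2, 16, 1.0            # constant target function f(x) = 1

# One-hidden-layer ReLU network with parameters (W1, b1, w2, b2).
W1 = rng.normal(0, 1 / np.sqrt(d), (width, d)); b1 = np.zeros(width)
w2 = rng.normal(0, 1 / np.sqrt(width), width);  b2 = 0.0

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)
    return w2 @ h + b2, h

for n in range(1, 20001):
    x = rng.uniform(-1.0, 1.0, d)         # fresh training input
    y, h = forward(x)
    err = y - target                       # residual of the squared loss
    gamma = 0.05 / n**0.6                  # small, not l^1-summable step sizes
    # Plain vanilla SGD step with the (generalized) gradient of the loss.
    mask = (h > 0.0).astype(float)
    gW1 = np.outer(err * w2 * mask, x); gb1 = err * w2 * mask
    gw2 = err * h;                      gb2 = err
    W1 -= gamma * gW1; b1 -= gamma * gb1
    w2 -= gamma * gw2; b2 -= gamma * gb2

# Monte Carlo estimate of the risk E[(f_theta(X) - 1)^2] after training.
X = rng.uniform(-1.0, 1.0, (5000, d))
risk = np.mean([(forward(x)[0] - target) ** 2 for x in X])
print(f"estimated risk: {risk:.2e}")
```

In this toy run the estimated risk should come out small, in line with the convergence statement above, though the sketch is of course no substitute for the analysis.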

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
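
As one self-contained illustration of the criticality idea (this numerical experiment is ours, not the book's, and uses the common convention of i.i.d. Gaussian weights with variance $C_W/\text{fan-in}$): propagating a random input through a deep ReLU network at initialization keeps the activations at a roughly constant scale at the critical value $C_W = 2$, while they shrink or grow exponentially with depth away from it, which is the vanishing/exploding behaviour that tuning to criticality avoids.

```python
import numpy as np

def activation_norms(depth, width, C_W, rng):
    """Propagate a random input through a deep ReLU MLP at initialization,
    with weight variance C_W / fan_in, and record per-layer activation norms."""
    x = rng.normal(size=width)
    norms = []
    for _ in range(depth):
        W = rng.normal(0.0, np.sqrt(C_W / width), (width, width))
        x = np.maximum(W @ x, 0.0)
        norms.append(np.linalg.norm(x) / np.sqrt(width))
    return norms

rng = np.random.default_rng(0)
for C_W in (1.5, 2.0, 2.5):   # C_W = 2 is the critical value for ReLU
    n = activation_norms(depth=50, width=500, C_W=C_W, rng=rng)
    print(f"C_W={C_W}: layer 1 norm {n[0]:.2f}, layer 50 norm {n[-1]:.2e}")
```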

In recent years, there has been an exponential growth in the number of complex documents and texts, which requires a deeper understanding of machine learning methods to accurately classify text in many applications. Many machine learning approaches have achieved remarkable results in natural language processing. The success of these learning algorithms relies on their capacity to understand complex models and non-linear relationships within data. However, finding suitable structures, architectures, and techniques for text classification is a challenge for researchers. In this paper, a brief overview of text classification algorithms is given. This overview covers different text feature extraction methods, dimensionality reduction methods, existing algorithms and techniques, and evaluation methods. Finally, the limitations of each technique and their applications to real-world problems are discussed.
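
As one concrete (and deliberately simple) instance of the pipeline structure such surveys cover — feature extraction, dimensionality reduction, a classifier, and an evaluation step — the following sketch uses scikit-learn and the 20 Newsgroups dataset; it is one of many possible configurations, not one prescribed by the paper:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Feature extraction (TF-IDF) -> dimensionality reduction (LSA) -> classifier.
train = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
test = fetch_20newsgroups(subset="test", categories=["sci.space", "rec.autos"])

clf = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),
    ("svd", TruncatedSVD(n_components=100, random_state=0)),
    ("logreg", LogisticRegression(max_iter=1000)),
])
clf.fit(train.data, train.target)

# Evaluation: per-class precision, recall, and F1 on the held-out test split.
print(classification_report(test.target, clf.predict(test.data),
                            target_names=test.target_names))
```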
