
We introduce a rigorous framework for stochastic cell transmission models for general traffic networks. The performance of traffic systems is evaluated based on preference functionals and acceptable designs. The numerical implementation combines simulation, Gaussian process regression, and a stochastic exploration procedure. The approach is illustrated in two case studies.
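As a rough illustration of how simulation, Gaussian process regression, and stochastic exploration can fit together, the sketch below fits a GP surrogate to noisy simulated performance scores and picks the next design by posterior uncertainty; the performance function, design space, and exploration rule are hypothetical stand-ins, not the paper's model.

```python
# Minimal sketch (not the paper's implementation): evaluate a noisy,
# simulated performance measure of a traffic design parameter, fit a
# Gaussian process surrogate, and pick the next design to explore.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def simulate_performance(design, n_rep=20):
    """Hypothetical stand-in for a stochastic cell-transmission simulation:
    returns a Monte Carlo estimate of a performance functional."""
    samples = -(design - 0.6) ** 2 + 0.05 * rng.standard_normal(n_rep)
    return samples.mean()

designs = np.linspace(0.0, 1.0, 8)
scores = np.array([simulate_performance(d) for d in designs])

gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-4))
gp.fit(designs.reshape(-1, 1), scores)

# Stochastic exploration step: sample candidate designs and prefer high
# posterior uncertainty (pure exploration; the paper's acceptability
# criterion would enter here instead).
cand = rng.uniform(0.0, 1.0, size=(256, 1))
mean, std = gp.predict(cand, return_std=True)
print(f"next design to simulate: {cand[np.argmax(std), 0]:.3f}")
```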

Related content

We consider minimizing functions for which it is expensive to compute the (possibly stochastic) gradient. Such functions are prevalent in reinforcement learning, imitation learning and adversarial training. Our target optimization framework uses the (expensive) gradient computation to construct surrogate functions in a \emph{target space} (e.g. the logits output by a linear model for classification) that can be minimized efficiently. This allows for multiple parameter updates to the model, amortizing the cost of gradient computation. In the full-batch setting, we prove that our surrogate is a global upper-bound on the loss, and can be (locally) minimized using a black-box optimization algorithm. We prove that the resulting majorization-minimization algorithm ensures convergence to a stationary point of the loss. Next, we instantiate our framework in the stochastic setting and propose the $SSO$ algorithm, which can be viewed as projected stochastic gradient descent in the target space. This connection enables us to prove theoretical guarantees for $SSO$ when minimizing convex functions. Our framework allows the use of standard stochastic optimization algorithms to construct surrogates which can be minimized by any deterministic optimization method. To evaluate our framework, we consider a suite of supervised learning and imitation learning problems. Our experiments indicate the benefits of target optimization and the effectiveness of $SSO$.
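The following sketch illustrates the target-space idea on full-batch logistic regression (our reading of the general recipe, not the authors' code): one expensive gradient in the space of logits defines a quadratic surrogate that upper-bounds the loss, which is then minimized over the parameters with several cheap steps.

```python
# Illustrative target-space surrogate for full-batch logistic regression.
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.standard_normal((n, d))
y = (X @ rng.standard_normal(d) > 0).astype(float)

def loss_and_grad_in_targets(z):
    """Mean logistic loss and its gradient w.r.t. the targets z = X @ w
    (this plays the role of the 'expensive' gradient computation)."""
    p = 1.0 / (1.0 + np.exp(-z))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return loss, (p - y) / n

w = np.zeros(d)
L = 0.25  # smoothness constant of the logistic loss in the target space
for outer in range(30):
    z_t = X @ w
    loss, g = loss_and_grad_in_targets(z_t)       # one expensive call
    # Quadratic surrogate loss + g.(z - z_t) + (L/2n)||z - z_t||^2 is an
    # upper bound in the full-batch setting; minimize it over w with
    # several cheap steps, amortizing the expensive gradient.
    for inner in range(10):
        z = X @ w
        w -= 0.5 * X.T @ (g + (L / n) * (z - z_t))
print("final loss:", loss_and_grad_in_targets(X @ w)[0])
```

In the stochastic setting, the abstract's SSO algorithm would replace the inner loop with projected stochastic gradient steps in the target space.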

Binary spatio-temporal data are common in many application areas. Such data can be considered from many perspectives, including via deterministic or stochastic cellular automata (CA), where local rules govern the transition probabilities that describe the evolution of the 0 and 1 states across space and time. One implementation of a stochastic cellular automaton for such data is a spatio-temporal generalized linear model (or mixed model), with the local-rule covariates included in the transformed mean response. However, in real-world applications we seldom have a complete understanding of the local rules, and it is helpful to augment the transformed linear predictor with a latent spatio-temporal dynamic process. Here, we demonstrate for the first time that an echo state network (ESN) latent process can be used to enhance the local-rule covariates. We implement this in a hierarchical Bayesian framework with regularized horseshoe priors on the ESN output weight matrices, which extends the ESN literature as well. Finally, we gain added expressiveness from the ESNs by considering an ensemble of ESN reservoirs, which we accommodate through model averaging; this is also new to the ESN literature. We demonstrate our methodology on a simulated process in which we assume we do not know all of the local CA rules, as well as on a fire-evolution data set and on data describing the spread of raccoon rabies in Connecticut, USA.
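A minimal sketch of the ESN building block (not the hierarchical Bayesian model itself): a fixed random reservoir turns the covariate history into latent states that could augment the local-rule covariates in the linear predictor.

```python
# Minimal echo state network sketch: generate latent covariates from a
# fixed random reservoir driven by the input history.
import numpy as np

rng = np.random.default_rng(2)
n_input, n_res, T = 4, 50, 300

# Fixed random reservoir weights, rescaled to spectral radius < 1 for the
# echo state property.
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal((n_res, n_input))

u = rng.standard_normal((T, n_input))       # e.g. local-rule covariates
h = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    h = np.tanh(W @ h + W_in @ u[t])        # leak-free reservoir update
    states[t] = h

# 'states' would enter the transformed linear predictor alongside the
# local-rule covariates; the paper instead learns the output weights with
# regularized horseshoe priors and averages over an ensemble of reservoirs.
print(states.shape)
```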

We investigate the propagation of acoustic singular surfaces, specifically linear shock waves and nonlinear acceleration waves, in a class of inhomogeneous gases whose ambient mass density varies exponentially. Employing the mathematical tools of singular surface theory, we first determine the evolution of both the jump amplitudes and the locations/velocities of their associated wave-fronts, along with a variety of related analytical results. We then turn to what have become known as Krylov subspace spectral (KSS) methods to numerically simulate the evolution of the full waveforms under consideration. These simulations are not only quite efficient, since KSS allows the use of `large' CFL numbers, but also quite accurate, capturing theoretically predicted features of the solution profiles more faithfully than other time-stepping methods, since KSS customizes the computation of the solution components corresponding to the different frequencies involved. The presentation concludes with a listing of possible acoustics-related follow-on studies.
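For intuition only: acceleration-wave amplitudes classically satisfy a Bernoulli-type transport equation da/dt = mu(t) a - beta a^2, where finite-time blow-up signals shock formation. The sketch below integrates such an equation with a hypothetical, exponentially decaying coefficient mu(t) standing in for the density stratification; the actual coefficients in the paper's setting differ.

```python
# Schematic acceleration-wave amplitude evolution (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

beta = 1.0                      # hypothetical nonlinearity coefficient

def mu(t):
    # hypothetical stand-in for the effect of an exponentially varying
    # ambient density on the wave-front
    return 0.5 * np.exp(-0.3 * t)

def rhs(t, a):
    return [mu(t) * a[0] - beta * a[0] ** 2]

def blowup(t, a):               # stop integration when the amplitude diverges
    return abs(a[0]) - 50.0
blowup.terminal = True

for a0 in (0.2, -0.2):          # sign of the initial jump decides the fate
    sol = solve_ivp(rhs, (0.0, 10.0), [a0], events=blowup, max_step=0.01)
    fate = "blow-up (shock forms)" if sol.t[-1] < 10.0 else "decay"
    print(f"a(0)={a0:+.1f}: t_end={sol.t[-1]:.2f}, "
          f"a_end={sol.y[0, -1]:+.2f} -> {fate}")
```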

Many real-world tasks include some kind of parameter estimation, i.e., the determination of a parameter encoded in a probability distribution. Often, such probability distributions arise from stochastic processes. For a stationary stochastic process with temporal correlations, the random variables that constitute it are identically distributed but not independent. This is the case, for instance, for quantum continuous measurements. In this paper we prove two fundamental results concerning the estimation of parameters encoded in a memoryful stochastic process. First, we show that for processes with finite Markov order, the Fisher information is always asymptotically linear in the number of outcomes and determined by the conditional distribution at the process's Markov order. Second, we show by suitable examples that correlations do not necessarily enhance metrological precision. In fact, unlike for entropic information quantities, nothing can in general be said about the sub- or super-additivity of the joint Fisher information in the presence of correlations. We discuss how the type of correlations in the process affects the scaling. We then apply these results to the case of thermometry on a spin chain.
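The linear scaling of the Fisher information can be checked numerically on the simplest memoryful example, a two-state Markov chain (Markov order 1) with a hypothetical "stay" probability theta; the per-outcome information settles at the conditional one-step value 1/(theta(1-theta)).

```python
# Numerical illustration (ours, not from the paper): the Fisher information
# of n outcomes from a parametrized Markov chain grows linearly in n.
import numpy as np

rng = np.random.default_rng(3)

def sample_chain(theta, n):
    # stay probability theta in both states (hypothetical parametrization)
    x = [0]
    for _ in range(n - 1):
        x.append(x[-1] if rng.random() < theta else 1 - x[-1])
    return np.array(x)

def score(x, theta):
    # d/dtheta log p(x): each transition contributes 1/theta (stay) or
    # -1/(1-theta) (switch); the initial state does not depend on theta.
    stays = np.sum(x[1:] == x[:-1])
    switches = len(x) - 1 - stays
    return stays / theta - switches / (1 - theta)

theta = 0.7
for n in (10, 100, 1000):
    s2 = np.mean([score(sample_chain(theta, n), theta) ** 2
                  for _ in range(2000)])
    print(f"n={n:5d}  Fisher info ~ {s2:9.2f}  per step ~ {s2 / (n - 1):.3f}")
# the per-step value should approach 1/(theta*(1-theta)) = 4.76
```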

We consider the problem of continuous-time policy evaluation, which consists of learning, through observations, the value function associated with an uncontrolled continuous-time stochastic dynamics and a reward function. We propose two original variants of the well-known TD(0) method using vanishing time steps: one is model-free and the other is model-based. For both methods, we prove theoretical convergence rates that we subsequently verify through numerical simulations. Alternatively, these methods can be interpreted as novel reinforcement learning approaches for approximating solutions of linear partial differential equations (PDEs) or linear backward stochastic differential equations (BSDEs).
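A minimal model-free sketch of TD(0) with a small time step dt, on a toy Ornstein-Uhlenbeck process with quadratic reward (our illustration; the paper's schemes and step-size schedules differ).

```python
# Model-free TD(0) with time step dt for the discounted value function
# V(x) = E[ integral e^{-beta t} r(X_t) dt ], dX = -X dt + dW, r(x) = x^2,
# using linear function approximation V(x) ~ phi(x) @ w.
import numpy as np

rng = np.random.default_rng(4)
beta, dt, alpha = 1.0, 0.01, 0.05

def phi(x):                       # simple polynomial features (our choice)
    return np.array([1.0, x, x * x])

def reward(x):
    return x * x

w = np.zeros(3)
x = 0.0
for step in range(200_000):
    # one Euler-Maruyama transition serves as the "observation"
    x_next = x - x * dt + np.sqrt(dt) * rng.standard_normal()
    # TD(0) with vanishing time step: discounted one-step target
    td_error = reward(x) * dt + np.exp(-beta * dt) * phi(x_next) @ w - phi(x) @ w
    w += alpha * td_error * phi(x)
    x = x_next

# For this toy problem one can check analytically that V(x) = (1 + x^2)/3,
# so w should approach roughly (1/3, 0, 1/3).
print("estimated coefficients:", np.round(w, 3))
```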

Available corpora for Argument Mining differ along several axes, and one of the key differences is the presence (or absence) of discourse markers to signal argumentative content. Exploring effective ways to use discourse markers has received wide attention in various discourse parsing tasks, from which it is well known that discourse markers are strong indicators of discourse relations. To improve the robustness of Argument Mining systems across different genres, we propose to automatically augment a given text with discourse markers such that all relations are explicitly signaled. Our analysis reveals that popular language models taken out-of-the-box fail on this task; however, when fine-tuned on a new heterogeneous dataset that we construct (including synthetic and real examples), they perform considerably better. We demonstrate the impact of our approach on a downstream Argument Mining task evaluated on different corpora, showing that language models can be trained to automatically fill in discourse markers, improving the performance of the downstream model in some, but not all, cases. Our proposed approach can further be employed as an assistive tool for better discourse understanding.
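As a quick probe of the out-of-the-box behaviour the abstract refers to, one can ask a masked language model to fill a marker slot between two argumentative clauses; the model choice and example sentence here are ours.

```python
# Probe an off-the-shelf masked LM on discourse-marker insertion.
# Requires the 'transformers' package; roberta-base uses the <mask> token.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")
text = ("The new policy reduces emissions. <mask>, "
        "it creates jobs in the renewable sector.")
for cand in fill(text, top_k=3):
    print(f"{cand['token_str']!r:15} score={cand['score']:.3f}")
```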

Representational drift refers to changes in neural activity over time that are accompanied by stable task performance. Despite being observed in the brain and in artificial networks, the mechanisms of drift and its implications are not fully understood. Motivated by recent experimental findings of stimulus-dependent drift in the piriform cortex, we use theory and simulations to study this phenomenon in a two-layer linear feedforward network. Specifically, in a continual online learning scenario, we study the drift induced by the noise inherent in Stochastic Gradient Descent (SGD). By decomposing the learning dynamics into the normal and tangent spaces of the minimum-loss manifold, we show that the former corresponds to finite-variance fluctuations, while the latter can be viewed as an effective diffusion process on the manifold. We analytically compute the fluctuation and diffusion coefficients for the stimulus representations in the hidden layer as functions of the network parameters and the input distribution. Further, consistent with experiments, we show that the drift rate is slower for a more frequently presented stimulus. Overall, our analysis yields a theoretical framework for a better understanding of the drift phenomenon in biological and artificial neural networks.
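The setting can be reproduced in a few lines (an illustrative simulation, not the paper's exact setup): an overparameterized two-layer linear network trained online with label noise keeps its input-output map stable while the hidden code of a fixed probe stimulus diffuses along the zero-loss manifold.

```python
# Two-layer linear network under online SGD: stable map, drifting code.
import numpy as np

rng = np.random.default_rng(5)
d_in, d_hid, lr, sigma = 10, 20, 0.01, 0.1
W_target = rng.standard_normal((1, d_in))
W1 = 0.3 * rng.standard_normal((d_hid, d_in))
W2 = 0.3 * rng.standard_normal((1, d_hid))
probe = rng.standard_normal(d_in)            # fixed stimulus to track
h_ref = None

for step in range(100_001):
    x = rng.standard_normal(d_in)
    y = W_target @ x + sigma * rng.standard_normal(1)  # label noise keeps SGD noisy
    h = W1 @ x
    err = W2 @ h - y
    W2 -= lr * np.outer(err, h)              # SGD on squared error
    W1 -= lr * np.outer(W2.T @ err, x)
    if step == 20_000:
        h_ref = W1 @ probe                   # snapshot after convergence
    if step > 20_000 and step % 20_000 == 0:
        drift = np.linalg.norm(W1 @ probe - h_ref)
        gap = np.linalg.norm(W2 @ W1 - W_target)
        print(f"step {step}: hidden-code drift={drift:.3f}  |W2W1 - W*|={gap:.3f}")
```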

The proliferation of mobile devices has led to the collection of large amounts of population data. This situation has prompted the need to utilize this rich, multidimensional data in practical applications. In response to this trend, we integrate functional data analysis (FDA) and factor analysis to address the challenge of predicting hourly population changes across various districts in Tokyo. Specifically, by assuming a Gaussian process, we avoid directly parameterizing the large covariance matrix of a multivariate normal distribution. In addition, the data are dependent both over time and spatially across districts. To capture these characteristics, we introduce a Bayesian factor model, which models a small number of common factors as time series and expresses the spatial structure through the factor loading matrices. Furthermore, the factor loading matrices are made identifiable and sparse to ensure the interpretability of the model. We also propose a Bayesian shrinkage method as a systematic approach to factor selection. Through numerical experiments and data analysis, we investigate the predictive accuracy and interpretability of the proposed method, and conclude that its flexibility allows the incorporation of additional time-series features, thereby improving accuracy.
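A toy version of the modelling idea (simulation only, not the paper's estimator): population series in D districts driven by K &lt;&lt; D common temporal factors through a sparse loading matrix, so that temporal structure lives in the factors and spatial structure in the loadings.

```python
# Toy sparse factor model for hourly district populations (illustrative).
import numpy as np

rng = np.random.default_rng(6)
D, K, T = 30, 3, 168                     # districts, factors, one week of hours

Lambda = rng.standard_normal((D, K))
Lambda[rng.random((D, K)) < 0.6] = 0.0   # sparsity aids interpretability

hours = np.arange(T)
f = np.stack([np.sin(2 * np.pi * hours / 24),        # daily commute cycle
              np.cos(2 * np.pi * hours / 24),
              (hours % 168 < 120).astype(float)])    # weekday vs weekend
y = Lambda @ f + 0.1 * rng.standard_normal((D, T))

# A quick point estimate of the factor space via SVD (the paper instead
# places shrinkage priors on Lambda and samples the posterior).
U, s, Vt = np.linalg.svd(y, full_matrices=False)
print("share of variance in top 3 factors:", (s[:3] ** 2).sum() / (s ** 2).sum())
```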

The numerical evaluation of statistics plays a crucial role in statistical physics and its applied fields. The statistics of a stochastic differential equation with Gaussian white noise can be evaluated via the corresponding backward Kolmogorov equation. Importantly, there is no need to obtain the solution of the backward Kolmogorov equation on the whole domain; it is enough to evaluate the solution at a single point, namely the initial coordinate of the stochastic differential equation. For this purpose, an algorithm based on combinatorics has recently been developed. In this paper, we discuss a higher-order approximation of the resolvent and propose an algorithm based on a second-order approximation; the proposed algorithm exhibits second-order convergence. Furthermore, the convergence properties of these algorithms naturally lead to extrapolation methods, which yield more accurate values at lower computational cost. The proposed method is demonstrated on the Ornstein-Uhlenbeck process and the noisy van der Pol system.
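The extrapolation idea is easy to demonstrate on the Ornstein-Uhlenbeck example, here using weak Euler-Maruyama estimates in place of the paper's resolvent-based algorithm: combining estimates at step sizes h and h/2 cancels the leading O(h) error term.

```python
# Richardson extrapolation of weak Euler-Maruyama estimates for the
# Ornstein-Uhlenbeck process dX = -X dt + dW (illustrative substitute
# for the paper's resolvent-based scheme).
import numpy as np

rng = np.random.default_rng(7)
x0, T, n_paths = 1.0, 1.0, 400_000

def weak_estimate(h):
    """Monte Carlo estimate of E[X_T^2] with Euler-Maruyama step h."""
    n = int(round(T / h))
    x = np.full(n_paths, x0)
    for _ in range(n):
        x += -x * h + np.sqrt(h) * rng.standard_normal(n_paths)
    return np.mean(x ** 2)

exact = x0 ** 2 * np.exp(-2 * T) + 0.5 * (1 - np.exp(-2 * T))  # known OU moment
e1, e2 = weak_estimate(0.1), weak_estimate(0.05)
print(f"h=0.10 : error {abs(e1 - exact):.5f}")
print(f"h=0.05 : error {abs(e2 - exact):.5f}")
print(f"extrap : error {abs(2 * e2 - e1 - exact):.5f}")  # Richardson: 2E(h/2)-E(h)
```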

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16; in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of current research in quantization for Neural Networks and to have organized it in a way that eases the evaluation of future research in this area.
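As a concrete example of the basic building block behind many of the surveyed methods, the sketch below applies uniform affine quantization to a random weight tensor at 8 and 4 bits and measures the round-trip error.

```python
# Uniform affine quantization: map floats to b-bit integers with a scale
# and zero-point, then dequantize and inspect the error.
import numpy as np

def quantize(x, bits=8):
    qmin, qmax = 0, 2 ** bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = round(-x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(8)
w = rng.standard_normal(10_000).astype(np.float32)   # stand-in weight tensor
for bits in (8, 4):
    q, s, z = quantize(w, bits)
    err = np.abs(dequantize(q, s, z) - w).max()
    print(f"{bits}-bit: max abs error {err:.4f} (scale {s:.4f})")
```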
