
Based on a novel generalized second law of thermodynamics, we demonstrate that feedback control allows more net work to be extracted than control without measurements. The generalized second law asserts that the total entropy production of a closed system is bounded below by the dissipation of correlations, which we simply call co-dissipation. Accordingly, co-dissipation is entropy production that cannot be converted into work. For control without measurement, co-dissipation is caused by the loss of internal correlations among subsystems. Feedback control, on the other hand, can make the co-dissipation vanish, so it can in principle extract work from the loss of internal correlations; this is its fundamental advantage. Moreover, co-dissipation behaves like heat dissipation in that both represent irreversible, unusable entropy production. Hence, the generalized second law implies that the system's entropy production is bounded by the sum of two types of dissipation: heat and correlation. The generalized second law is derived by summing the entropy productions of all subsystems. We develop a technique for this computation, in which the entropy productions are summed along the sequence of graphs representing the dependencies among subsystems. Furthermore, the positivity of co-dissipation is guaranteed by a purely information-theoretic fact, the data processing inequality, which may shed light on the relation between thermodynamics and information theory.
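
In a schematic notation of our own (the paper's symbols may differ), the bound and the information-theoretic origin of its positivity can be summarized as
$$
\Sigma_{\mathrm{tot}} \;\ge\; \sigma_{\mathrm{co}} \;\ge\; 0,
\qquad
\sigma_{\mathrm{co}} \;=\; I_{\mathrm{before}} - I_{\mathrm{after}} \;\ge\; 0
\quad\text{since}\quad I(X;Z) \le I(X;Y) \;\text{ for a Markov chain } X \to Y \to Z,
$$
where $\Sigma_{\mathrm{tot}}$ is the total entropy production, $\sigma_{\mathrm{co}}$ the co-dissipation expressed as a loss of mutual information, and the last inequality is the data processing inequality applied to the chain formed by the subsystems' dynamics.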

Related Content

We present a novel generative modeling method called diffusion normalizing flow, based on stochastic differential equations (SDEs). The algorithm consists of two neural SDEs: a forward SDE that gradually adds noise to transform the data into Gaussian random noise, and a backward SDE that gradually removes the noise to sample from the data distribution. By jointly training the two neural SDEs to minimize a common cost function that quantifies the difference between the two, the backward SDE converges to a diffusion process that starts with a Gaussian distribution and ends with the desired data distribution. Our method is closely related to normalizing flows and diffusion probabilistic models and can be viewed as a combination of the two. Compared with normalizing flows, diffusion normalizing flow is able to learn distributions with sharp boundaries. Compared with diffusion probabilistic models, diffusion normalizing flow requires fewer discretization steps and thus has better sampling efficiency. Our algorithm demonstrates competitive performance on both high-dimensional data density estimation and image generation tasks.
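
As a minimal sketch of the two-SDE structure (the drift and score functions below are toy stand-ins for the learned networks, and the constant noise schedule is our simplification, not the paper's):

```python
import numpy as np

# forward:  dx = f(x, t) dt + g(t) dW            (data -> noise)
# backward: dx = [f(x, t) - g(t)^2 s(x, t)] dt + g(t) dW_bar   (noise -> data)

def g(t):
    return 1.0  # constant diffusion coefficient, for simplicity

def forward_step(x, t, f, dt, rng):
    """One Euler-Maruyama step of the noising SDE."""
    return x + f(x, t) * dt + g(t) * np.sqrt(dt) * rng.standard_normal(x.shape)

def backward_step(x, t, f, score, dt, rng):
    """One Euler-Maruyama step of the denoising (reverse-time) SDE."""
    drift = f(x, t) - g(t) ** 2 * score(x, t)
    return x - drift * dt + g(t) * np.sqrt(dt) * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
f = lambda x, t: -0.5 * x   # stand-in for the learned forward drift network
score = lambda x, t: -x     # stand-in for the learned score/backward network
x = rng.standard_normal(8)  # start the backward pass from Gaussian noise
for k in range(100):
    x = backward_step(x, 1.0 - k / 100, f, score, dt=0.01, rng=rng)
```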

This paper proposes a novel active Simultaneous Localization and Mapping (SLAM) method with continuous trajectory optimization over a stochastic robot dynamics model. The problem is formulated as a stochastic optimal control problem over the continuous robot kinematic model, minimizing a cost function that involves the covariance matrix of the landmark states. We tackle the problem by separately obtaining an open-loop control sequence subject to deterministic dynamics via iterative Covariance Regulation (iCR) and a closed-loop feedback control under stochastic robot and covariance dynamics via a Linear Quadratic Regulator (LQR). The proposed optimization method captures the coupling between localization and mapping in predicting uncertainty evolution and synthesizes highly informative sensing trajectories. We demonstrate its performance in active landmark-based SLAM using relative-position measurements with a limited field of view.
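
The closed-loop piece is a standard finite-horizon LQR; a generic sketch follows, where the matrices are a hypothetical linearized robot model, not the paper's:

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, T):
    """Backward Riccati recursion returning time-varying feedback gains."""
    P, gains = Qf, []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

# Hypothetical planar double-integrator stand-in for the linearized robot model.
dt = 0.1
A = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])
K = finite_horizon_lqr(A, B, np.eye(4), 0.1 * np.eye(2), 10 * np.eye(4), T=50)
# Closed loop: u_t = u_bar_t - K[t] @ (x_t - x_bar_t), correcting deviations
# from the open-loop reference trajectory (x_bar, u_bar) found by iCR.
```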

The discretization of robust quadratic optimal control problems under uncertainty using the finite element method and the stochastic collocation method leads to large saddle-point systems, which are fully coupled across the random realizations. Despite its relevance for numerous engineering problems, the solution of such systems is notoriously challenging. In this manuscript, we study efficient preconditioners for all-at-once approaches using both an algebraic and an operator preconditioning framework. We show in particular that for values of the regularization parameter not too small, the saddle-point system can be efficiently solved by preconditioning in parallel all the state and adjoint equations. For small values of the regularization parameter, robustness can be recovered by the additional solution of a small linear system, which however couples all realizations. A mean approximation and a Chebyshev semi-iterative method are investigated to solve this reduced system. Our analysis considers a random elliptic partial differential equation whose diffusion coefficient $\kappa(x,\omega)$ is modeled as an almost surely continuous and positive random field, though not necessarily uniformly bounded and coercive. We further provide estimates on the dependence of the preconditioned system on the variance of the random field. Such estimates involve either the first or second moment of the random variables $1/\min_{x\in \overline{D}} \kappa(x,\omega)$ and $\max_{x\in \overline{D}}\kappa(x,\omega)$, where $D$ is the spatial domain. The theoretical results are confirmed by numerical experiments, and implementation details are further addressed.
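
The "parallel across realizations" structure can be pictured with a toy block-diagonal preconditioner: one solve per collocation point, each independent of the others. All matrices below are random SPD stand-ins, not actual finite element or collocation matrices, and the real preconditioner would use state/adjoint solves per realization:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(1)
n, N = 20, 8                          # dofs per realization, nb. of realizations
blocks = []
for _ in range(N):
    G = rng.standard_normal((n, n))
    blocks.append(G @ G.T + n * np.eye(n))   # SPD stand-in for one realization

coupling = 0.1 * np.ones((N, N)) / N  # weak symmetric coupling across realizations

def matvec(x):
    X = x.reshape(N, n)
    Y = np.stack([blocks[i] @ X[i] for i in range(N)])
    return (Y + coupling @ X).ravel()  # the full system couples all realizations

inv_blocks = [np.linalg.inv(Bk) for Bk in blocks]
def precond(x):                        # embarrassingly parallel block solves
    X = x.reshape(N, n)
    return np.stack([inv_blocks[i] @ X[i] for i in range(N)]).ravel()

Aop = LinearOperator((n * N, n * N), matvec=matvec)
Mop = LinearOperator((n * N, n * N), matvec=precond)
b = rng.standard_normal(n * N)
x, info = minres(Aop, b, M=Mop)
assert info == 0                       # converged
```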

Much recent interest has focused on designing optimization algorithms by discretizing an associated optimization flow, i.e., a system of ordinary differential equations (ODEs) whose trajectories solve an associated optimization problem. Such a design approach poses an important problem: how to find a principled methodology for designing and discretizing appropriate ODEs. This paper aims to provide a solution to this problem through the use of contraction theory. We first introduce general mathematical results that explain how contraction theory guarantees the stability of the implicit and explicit Euler integration methods. Then, we propose a novel system of ODEs, namely the Accelerated-Contracting-Nesterov flow, and use contraction theory to establish that it is an optimization flow with exponential convergence rate, from which the linear convergence rate of its associated optimization algorithm follows immediately. Remarkably, a simple explicit Euler discretization of this flow corresponds to the Nesterov acceleration method. Finally, we show how our approach leads to performance guarantees in the design of optimization algorithms for time-varying optimization problems.
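
To illustrate the flow-to-algorithm pattern (not the paper's specific flow), here is the Nesterov-style update that an explicit-Euler-type discretization of an accelerated flow produces, run on a toy quadratic with hypothetical step and momentum parameters:

```python
import numpy as np

# Toy objective f(x) = 0.5 x' Q x, minimizer at the origin.
Q = np.diag([1.0, 10.0])
grad = lambda x: Q @ x

s, beta = 0.09, 0.6                  # step size and momentum (hypothetical)
x, y = np.ones(2), np.ones(2)
for _ in range(50):
    x_next = y - s * grad(y)         # gradient step at the extrapolated point
    y = x_next + beta * (x_next - x) # momentum / extrapolation step
    x = x_next
print(x)                             # approaches the minimizer 0
```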

We study the number of down-steps between pairs of up-steps in $k_t$-Dyck paths, a generalization of Dyck paths consisting of steps $\{(1, k), (1, -1)\}$ such that the path stays (weakly) above the line $y=-t$. Results are proved bijectively and by means of generating functions, and lead to several interesting identities as well as links to other combinatorial structures. In particular, there is a connection between $k_t$-Dyck paths and perforation patterns for punctured convolutional codes (binary matrices) used in coding theory. Surprisingly, upon restriction to usual Dyck paths this yields a new combinatorial interpretation of Catalan numbers.
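
A brute-force enumeration makes the objects concrete; the conventions below (paths from height 0 back to height 0 with $n$ up-steps) are our assumption for illustration. For $k = 1$, $t = 0$ the count recovers the Catalan numbers, consistent with the Dyck-path specialization mentioned above:

```python
from functools import lru_cache

def count_paths(n, k, t):
    """Count paths with n up-steps (1, k) and n*k down-steps (1, -1)
    that start and end at height 0 and stay weakly above y = -t."""
    @lru_cache(maxsize=None)
    def rec(ups_left, downs_left, h):
        if ups_left == 0 and downs_left == 0:
            return 1 if h == 0 else 0
        total = 0
        if ups_left > 0:
            total += rec(ups_left - 1, downs_left, h + k)
        if downs_left > 0 and h - 1 >= -t:
            total += rec(ups_left, downs_left - 1, h - 1)
        return total
    return rec(n, n * k, 0)

# k = 1, t = 0 gives ordinary Dyck paths, counted by Catalan numbers:
print([count_paths(n, 1, 0) for n in range(1, 7)])  # [1, 2, 5, 14, 42, 132]
```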

We investigate variational principles for the approximation of quantum dynamics that apply to approximation manifolds whose tangent spaces are not complex linear. The first one, dating back to McLachlan (1964), minimizes the residual of the time-dependent Schr\"odinger equation, while the second one, originating from the lecture notes of Kramer--Saraceno (1981), imposes the stationarity of an action functional. We characterize both principles in terms of metric and symplectic orthogonality conditions, consider their conservation properties, and derive an elementary a posteriori error estimate. As an application, we revisit the time-dependent Hartree approximation and frozen Gaussian wave packets.
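
In our notation (with $\hbar = 1$, approximation manifold $\mathcal{M}$, and tangent space $T_u\mathcal{M}$ at $u$), the two principles can be stated schematically as
$$
\text{McLachlan:}\quad \dot u \;=\; \operatorname*{arg\,min}_{v \in T_u\mathcal{M}} \big\| v + \mathrm{i}Hu \big\|
\;\Longleftrightarrow\;
\operatorname{Re}\,\langle v,\, \dot u + \mathrm{i}Hu \rangle = 0 \quad \forall\, v \in T_u\mathcal{M},
$$
$$
\text{Kramer--Saraceno:}\quad \delta \int \langle u,\, \mathrm{i}\dot u - Hu \rangle \,\mathrm{d}t = 0
\;\Longleftrightarrow\;
\operatorname{Im}\,\langle v,\, \dot u + \mathrm{i}Hu \rangle = 0 \quad \forall\, v \in T_u\mathcal{M},
$$
the first being a metric (real-part) orthogonality condition and the second a symplectic (imaginary-part) one; the two coincide whenever $T_u\mathcal{M}$ is complex linear.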

Approximate linear programs (ALPs) are well-known models based on value function approximations (VFAs) to obtain policies and lower bounds on the optimal policy cost of discounted-cost Markov decision processes (MDPs). Formulating an ALP requires (i) basis functions, the linear combination of which defines the VFA, and (ii) a state-relevance distribution, which determines the relative importance of different states in the ALP objective for the purpose of minimizing VFA error. Both these choices are typically heuristic: basis function selection relies on domain knowledge while the state-relevance distribution is specified using the frequency of states visited by a heuristic policy. We propose a self-guided sequence of ALPs that embeds random basis functions obtained via inexpensive sampling and uses the known VFA from the previous iteration to guide VFA computation in the current iteration. Self-guided ALPs mitigate the need for domain knowledge during basis function selection as well as the impact of the initial choice of the state-relevance distribution, thus significantly reducing the ALP implementation burden. We establish high-probability error bounds on the VFAs from this sequence and show that a worst-case measure of policy performance is improved. We find that these favorable implementation and theoretical properties translate to encouraging numerical results on perishable inventory control and options pricing applications, where self-guided ALP policies improve upon policies from problem-specific methods. More broadly, our research takes a meaningful step toward application-agnostic policies and bounds for MDPs.
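
A toy rendering of the basic ALP ingredients, with random cosine basis functions in the spirit of inexpensive sampling (all MDP data and the basis construction below are synthetic illustrations, not the paper's self-guided scheme):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, A, m, gamma = 30, 3, 8, 0.95
c = rng.random((S, A))                                     # stage costs
P = rng.random((S, A, S)); P /= P.sum(-1, keepdims=True)   # transitions

states = np.linspace(0, 1, S)[:, None]
W, b = rng.normal(size=(m, 1)), rng.uniform(0, 2 * np.pi, m)
Phi = np.cos(states @ W.T + b)       # random basis: phi_j(s) = cos(w_j s + b_j)

nu = np.full(S, 1.0 / S)             # state-relevance distribution
# ALP for a cost-minimizing MDP: max nu' Phi theta
#   s.t. (Phi theta)(s) <= c(s,a) + gamma * sum_s' P(s'|s,a) (Phi theta)(s')
A_ub, b_ub = [], []
for s in range(S):
    for a in range(A):
        A_ub.append(Phi[s] - gamma * P[s, a] @ Phi)
        b_ub.append(c[s, a])
res = linprog(-(nu @ Phi), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * m)
theta = res.x                        # VFA weights; Phi @ theta lower-bounds V*
```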

The complexity of several logics, such as Presburger arithmetic, dependence logics and ambient logics, can only be characterised in terms of alternating Turing machines. Despite being quite natural, the presence of alternation can sometimes cause neat ideas to be obfuscated inside heavy technical machinery. In these notes, we propose two problems on deterministic machines that can be used to prove lower bounds with respect to the computational class $k$AExp$_{\text{pol}}$, that is, the class of all problems solvable by an alternating Turing machine running in $k$-exponential time and performing a polynomial number of alternations with respect to the input size. The first problem, called the $k$AExp$_{\text{pol}}$-prenex TM problem, is a problem about deterministic Turing machines. The second problem, called the $k$-exp alternating multi-tiling problem, is analogous to the first one, but on tiling systems. Both problems are natural extensions of the TM alternation problem and the alternating multi-tiling problem proved AExp$_{\text{pol}}$-complete by L. Bozzelli, A. Molinari, A. Montanari and A. Peron in [GandALF, pp. 31-45, 2017]. The proofs presented in these notes follow the elegant exposition in A. Molinari's PhD thesis to extend these results from the case $k = 1$ to arbitrary $k$.

The recently introduced generative adversarial network (GAN) has shown numerous promising results in generating realistic samples. The essential task of a GAN is to control the features of samples generated from a random distribution. While current GAN structures, such as conditional GAN, successfully generate samples with desired major features, they often fail to produce detailed features that bring specific differences among samples. To overcome this limitation, we propose a controllable GAN (ControlGAN) structure. By separating the feature classifier from the discriminator, the generator of ControlGAN is designed to learn to generate synthetic samples with specific detailed features. Evaluated on multiple image datasets, ControlGAN generates improved samples with well-controlled features. Furthermore, we demonstrate that ControlGAN can generate intermediate and opposite features for interpolated and extrapolated input labels that are not used in the training process. This implies that ControlGAN can significantly contribute to the variety of generated samples.
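
The key structural point is that the label classifier is a network separate from the real/fake discriminator. A minimal PyTorch sketch of the generator objective under that split follows; the architectures and unit loss weights are simplified stand-ins, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

z_dim, n_classes, x_dim = 64, 10, 784

G = nn.Sequential(nn.Linear(z_dim + n_classes, 256), nn.ReLU(), nn.Linear(256, x_dim))
D = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 1))          # real/fake
C = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, n_classes))  # labels

bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

z = torch.randn(32, z_dim)
y = torch.randint(0, n_classes, (32,))
y_onehot = torch.nn.functional.one_hot(y, n_classes).float()
fake = G(torch.cat([z, y_onehot], dim=1))

# Generator objective: fool the discriminator AND satisfy the separate classifier.
loss_G = bce(D(fake), torch.ones(32, 1)) + ce(C(fake), y)
loss_G.backward()
```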

Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Because of the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear program formulation is required: only this formulation, which accounts for the structure of the request graphs, enables the computation of convex combinations of valid mappings. To capture this structure, we introduce the graph-theoretic notions of extraction orders and extraction width and show that our algorithms have runtime exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show (i) that computing extraction orders of minimal width is NP-hard and (ii) that computing decomposable LP solutions is in general NP-hard, even when request graphs are restricted to planar ones.
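
The rounding step itself is simple once the LP solution is decomposed into convex combinations of valid mappings. A hypothetical sketch follows; the data structures and the single-pass resource check are our illustration, not the paper's full procedure, which repeats rounding until the approximation guarantees hold:

```python
import random

def randomized_round(decompositions, capacities, rng=random.Random(0)):
    """decompositions: {request: [(prob, mapping, load), ...]} with probs
    summing to at most 1 per request; load: {resource: demand}."""
    embedded, residual = {}, dict(capacities)
    for req, combo in decompositions.items():
        r, acc, chosen = rng.random(), 0.0, None
        for prob, mapping, load in combo:       # sample one mapping per request
            acc += prob
            if r <= acc:
                chosen = (mapping, load)
                break
        # Keep the sampled mapping only if the residual capacities allow it.
        if chosen and all(residual[k] >= v for k, v in chosen[1].items()):
            embedded[req] = chosen[0]
            for k, v in chosen[1].items():
                residual[k] -= v
    return embedded
```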
