
Devising optimal interventions for constraining stochastic systems is a challenging endeavour that has to confront the interplay between randomness and nonlinearity. Existing methods for identifying the necessary dynamical adjustments resort either to space-discretised solutions of the ensuing partial differential equations or to iterative stochastic path-sampling schemes. Yet both approaches become computationally demanding as the system dimension increases. Here, we propose a generally applicable and practically feasible non-iterative methodology for obtaining optimal dynamical interventions for diffusive nonlinear systems. We estimate the necessary controls from an interacting particle approximation to the logarithmic gradient of two forward probability flows evolved following deterministic particle dynamics. Applied to several biologically inspired models, we show that our method provides the necessary optimal controls in settings with terminal-, transient-, or generalised collective-state constraints and arbitrary system dynamics.
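The key primitive above is estimating a logarithmic gradient (score) from particles alone. A minimal sketch of that idea, using a plain Gaussian kernel density estimate rather than the paper's interacting-particle estimator: for particles drawn from N(0, 1) the exact score is -x, and the KDE-based score converges to -x / (1 + h^2) for bandwidth h. All names and parameters here are our own illustrative choices.

```python
import numpy as np

# Toy sketch (ours, not the paper's estimator): estimate the score
# grad log p of a distribution from particles via a Gaussian KDE.
rng = np.random.default_rng(0)
n, h = 50_000, 0.2
particles = rng.standard_normal(n)      # samples from N(0, 1)

def kde_score(x):
    # grad log p_hat(x) for p_hat = (1/n) sum_j N(x; x_j, h^2)
    w = np.exp(-0.5 * ((x - particles) / h) ** 2)
    return np.sum(w * (particles - x)) / (h ** 2 * np.sum(w))

grid = np.linspace(-1.0, 1.0, 21)
est = np.array([kde_score(x) for x in grid])
err = np.max(np.abs(est - (-grid / (1 + h ** 2))))
print(err)  # small estimation error on this grid
```

With such a score estimate in hand, a probability flow can be evolved with purely deterministic particle dynamics, which is the ingredient the abstract's method builds on.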

Related content

Many modern datasets, from areas such as neuroimaging and geostatistics, come in the form of a random sample of tensor-valued data which can be understood as noisy observations of a smooth multidimensional random function. Most of the traditional techniques from functional data analysis are plagued by the curse of dimensionality and quickly become intractable as the dimension of the domain increases. In this paper, we propose a framework for learning continuous representations from a sample of multidimensional functional data that is immune to several manifestations of the curse. These representations are constructed using a set of separable basis functions that are defined to be optimally adapted to the data. We show that the resulting estimation problem can be solved efficiently by the tensor decomposition of a carefully defined reduction transformation of the observed data. Roughness-based regularization is incorporated using a class of differential operator-based penalties. Relevant theoretical properties are also established. The advantages of our method over competing methods are demonstrated in a simulation study. We conclude with a real data application in neuroimaging.
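The data-adapted separable bases above are found via a tensor decomposition; the following toy sketch (ours, with synthetic data) shows the principle in the simplest case, where for a single 2D function sampled on a grid the best separable (rank-1) basis in the least-squares sense is given by the leading singular vectors of the sampled matrix.

```python
import numpy as np

# Illustrative only: a separable-plus-noise surface f(s,t) = sin(pi s) cos(pi t),
# and its optimal separable approximation from the SVD.
rng = np.random.default_rng(0)
s = np.linspace(0, 1, 50)
t = np.linspace(0, 1, 60)
F = np.outer(np.sin(np.pi * s), np.cos(np.pi * t)) \
    + 0.01 * rng.standard_normal((50, 60))

U, sing, Vt = np.linalg.svd(F, full_matrices=False)
rank1 = sing[0] * np.outer(U[:, 0], Vt[0])   # best separable approximation

rel_err = np.linalg.norm(F - rank1) / np.linalg.norm(F)
print(rel_err)  # small: the surface is essentially separable
```

The framework in the paper generalises this picture to random samples of noisy multidimensional functions, with roughness penalties added; the SVD step here is only the simplest instance of the tensor decompositions involved.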

We study the parameterized complexity of computing the tree-partition-width, a graph parameter equivalent to treewidth on graphs of bounded maximum degree. On one hand, we can obtain approximations of the tree-partition-width efficiently: we show that there is an algorithm that, given an $n$-vertex graph $G$ and an integer $k$, constructs a tree-partition of width $O(k^7)$ for $G$ or reports that $G$ has tree-partition-width more than $k$, in time $k^{O(1)}n^2$. We can improve on the approximation factor or the dependence on $n$ by sacrificing the dependence on $k$. On the other hand, we show that the problem of computing tree-partition-width exactly is XALP-complete, which implies that it is $W[t]$-hard for all $t$. We deduce XALP-completeness of the problem of computing the domino treewidth. Finally, we adapt some known results on the parameter tree-partition-width and the topological minor relation, and use them to compare tree-partition-width to tree-cut width.
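To fix terminology, a short self-contained check (our own worked example, not from the paper) of a width-2 tree-partition of the 6-cycle: a tree-partition is a partition of $V(G)$ into bags together with a tree on the bags such that every edge of $G$ joins vertices in the same bag or in adjacent bags, and its width is the maximum bag size.

```python
# Verify a width-2 tree-partition of the cycle C6 with bags A-B-C on a path.
cycle_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
bags = {"A": {0, 1}, "B": {2, 5}, "C": {3, 4}}
tree_edges = [("A", "B"), ("B", "C")]          # the tree on the bags: A - B - C

bag_of = {v: name for name, bag in bags.items() for v in bag}
# Allowed bag pairs: same bag, or bags joined by a tree edge.
adjacent = {frozenset(e) for e in tree_edges} | {frozenset((b,)) for b in bags}

def is_valid_tree_partition():
    # Every vertex lies in exactly one bag (a partition of {0,...,5}) ...
    if sorted(bag_of) != list(range(6)):
        return False
    # ... and every edge stays within a bag or crosses a tree edge.
    return all(frozenset((bag_of[u], bag_of[v])) in adjacent
               for u, v in cycle_edges)

width = max(len(b) for b in bags.values())
print(is_valid_tree_partition(), width)  # True 2
```

The algorithmic results above concern finding such partitions of near-optimal width, which is far harder than verifying one.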

Equilibrium properties in statistical physics are obtained by computing averages with respect to Boltzmann-Gibbs measures, sampled in practice using ergodic dynamics such as the Langevin dynamics. Some quantities however cannot be computed by simply sampling the Boltzmann-Gibbs measure, in particular transport coefficients, which relate the current of some physical quantity of interest to the forcing needed to induce it. For instance, a temperature difference induces an energy current, the proportionality factor between these two quantities being the thermal conductivity. From an abstract point of view, transport coefficients can also be considered as some form of sensitivity analysis with respect to an added forcing to the baseline dynamics. There are various numerical techniques to estimate transport coefficients, which all suffer from large errors, in particular large statistical errors. This contribution reviews the most popular methods, namely the Green-Kubo approach where the transport coefficient is expressed as some time-integrated correlation function, and the approach based on long-time averages of the stochastic dynamics perturbed by an external driving (so-called nonequilibrium molecular dynamics). In each case, the various sources of errors are made precise, in particular the bias related to the time discretization of the underlying continuous dynamics, and the variance of the associated Monte Carlo estimators. Some recent alternative techniques to estimate transport coefficients are also discussed.
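A minimal sketch of the Green-Kubo approach in its simplest setting (our illustration, not from the review): the mobility of an underdamped Langevin particle with unit mass, $dv = -\gamma v\,dt + \sqrt{2\gamma k_B T}\,dW$, is exactly $1/\gamma$, and Green-Kubo expresses it as the time-integrated velocity autocorrelation, $\mu = \beta \int_0^\infty \mathbb{E}[v(t)v(0)]\,dt$. Both the time-discretization bias and the Monte Carlo variance mentioned above are visible in such an estimator.

```python
import numpy as np

# Green-Kubo estimate of the mobility of an Ornstein-Uhlenbeck velocity
# process, averaged over many independent trajectories started at equilibrium.
rng = np.random.default_rng(1)
gamma, kT, dt, n_steps, n_traj = 1.0, 1.0, 0.01, 800, 20_000

v0 = rng.normal(0.0, np.sqrt(kT), n_traj)   # stationary initial velocities
v = v0.copy()
acf = np.empty(n_steps + 1)
acf[0] = np.mean(v0 * v0)
for n in range(1, n_steps + 1):
    # Euler-Maruyama step (source of the O(dt) discretization bias)
    v += -gamma * v * dt + np.sqrt(2 * gamma * kT * dt) * rng.standard_normal(n_traj)
    acf[n] = np.mean(v0 * v)                # E[v(t) v(0)] at t = n dt

# Trapezoidal time integration of the autocorrelation; beta = 1/kT.
mu_gk = (acf.sum() - 0.5 * (acf[0] + acf[-1])) * dt / kT
print(mu_gk)  # close to the exact mobility 1/gamma = 1.0
```

Truncating the time integral and the slow decay of the statistical error with the number of trajectories are exactly the practical difficulties the review analyses.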

We consider deterministic algorithms for the well-known hidden subgroup problem ($\mathsf{HSP}$): for a finite group $G$ and a finite set $X$, given a function $f:G \to X$ and the promise that for any $g_1, g_2 \in G$, $f(g_1) = f(g_2)$ iff $g_1H=g_2H$ for a subgroup $H \le G$, the goal of the decision version is to determine whether $H$ is trivial, and the goal of the identification version is to identify $H$. An algorithm for the problem should query $f(g)$ for $g\in G$ as few times as possible. Nayak asked whether there exist deterministic algorithms with $O(\sqrt{\frac{|G|}{|H|}})$ query complexity for $\mathsf{HSP}$. We answer this question by proving the following results, which also extend the main results of Ref. [30], since here the algorithms do not rely on any prior knowledge of $H$. (i) When $G$ is a general finite Abelian group, there exists an algorithm with $O(\sqrt{\frac{|G|}{|H|}})$ queries to decide the triviality of $H$, and an algorithm to identify $H$ with $O(\sqrt{\frac{|G|}{|H|}\log |H|}+\log |H|)$ queries. (ii) In general there is no deterministic algorithm for the identification version of $\mathsf{HSP}$ with query complexity $O(\sqrt{\frac{|G|}{|H|}})$, since there exists an instance of $\mathsf{HSP}$ that needs $\omega(\sqrt{\frac{|G|}{|H|}})$ queries to identify $H$. (Here $f(x)=\omega(g(x))$ means that for every positive constant $C$ there exists a positive constant $N$ such that $f(x)\ge C\cdot g(x)$ for all $x>N$, i.e., $g$ is a strict asymptotic lower bound for $f$.) On the other hand, there exist instances of $\mathsf{HSP}$ with query complexity far smaller than $O(\sqrt{\frac{|G|}{|H|}})$, namely $O(\log \frac{|G|}{|H|})$ or even $O(1)$.
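A concrete toy instance of the Abelian case (our illustration, not the paper's algorithm): take $G = \mathbb{Z}_N$ under addition, hidden subgroup $H = d\mathbb{Z}_N$ for a divisor $d$ of $N$, and oracle $f(g) = g \bmod d$, which is constant exactly on the cosets $g + H$.

```python
# Verify the HSP promise and recover H by exhaustive querying on a toy
# instance G = Z_24, H = 6Z_24, f(g) = g mod 6.
N, d = 24, 6
G = range(N)
H = {h for h in G if h % d == 0}       # the hidden subgroup dZ_N

def f(g):                               # the oracle hiding H
    return g % d

# The HSP promise: f(g1) == f(g2)  iff  g1 + H == g2 + H, i.e. g1 - g2 in H.
for g1 in G:
    for g2 in G:
        same_coset = (g1 - g2) % N in H
        assert (f(g1) == f(g2)) == same_coset

# Querying every group element and keeping those agreeing with f(0)
# recovers H exactly with |G| queries; the results above concern how few
# queries (down to roughly sqrt(|G|/|H|)) can suffice instead.
recovered = {g for g in G if f(g) == f(0)}
print(recovered == H)  # True
```

The deterministic query-complexity question is precisely how much of this exhaustive search can be avoided without randomness or quantum queries.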

We consider the problem of nonlinear stochastic optimal control. This problem is thought to be fundamentally intractable owing to Bellman's infamous "curse of dimensionality". We present a result that shows that repeatedly solving an open-loop deterministic problem from the current state, similar to Model Predictive Control (MPC), results in a feedback policy that is within $O(\epsilon^4)$ of the true global stochastic optimal policy. Furthermore, empirical results show that solving the Stochastic Dynamic Programming (DP) problem is highly susceptible to noise, even when tractable, and in practice, the MPC-type feedback law offers superior performance even for stochastic systems.
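The "repeated open-loop" feedback law is easy to illustrate on a hypothetical toy plant (ours, not the paper's setting): a noisy scalar linear system $x_{t+1} = a x_t + u_t + w_t$ with quadratic costs, where at each step the deterministic open-loop problem from the current state (noise set to zero) is solved over a finite horizon and only its first input is applied.

```python
import numpy as np

a, q, r, horizon = 1.2, 1.0, 1.0, 20   # unstable plant, quadratic costs

def first_open_loop_input(x0):
    # Backward Riccati recursion for min sum q x_t^2 + r u_t^2 subject to
    # the deterministic model x_{t+1} = a x_t + u_t; after `horizon`
    # backward steps, K is the gain of the plan's first stage.
    P = q
    for _ in range(horizon):
        K = a * P / (r + P)
        P = q + r * K * K + P * (a - K) ** 2   # Joseph-form update
    return -K * x0                             # first input of the open plan

rng = np.random.default_rng(2)
x, cost = 5.0, 0.0
for t in range(200):
    u = first_open_loop_input(x)               # re-plan from the current state
    cost += q * x * x + r * u * u
    x = a * x + u + 0.1 * rng.standard_normal()  # the true stochastic plant

print(abs(x))  # the closed loop is stabilised despite a > 1
```

In this linear-quadratic toy, certainty equivalence makes the MPC law exactly optimal; the abstract's point is that for nonlinear stochastic systems the same recipe remains near-optimal to fourth order in the noise scale.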

In this work, we propose a model for the orientation of non-spherical particles arising in multiphase turbulent flows. The model addresses the macroscopic scale used in CFD codes that employ turbulence models for the fluid phase. It consists of a stochastic version of the Jeffery equation that can be incorporated into a statistical Lagrangian description of the particles suspended in the flow. For use in this context, we propose and analyse a numerical scheme based on the well-known splitting algorithm, decoupling the orientation dynamics into their two main contributions: stretching and rotation. We detail the implementation in an open-source CFD software package. We analyse the weak and strong convergence both of the global scheme and of its sub-parts. The splitting technique then yields a highly efficient hybrid algorithm coupling purely probabilistic and deterministic numerical schemes. Various test cases were implemented and compared with analytic predictions of the model to validate the scheme for use in a CFD code.
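A deterministic skeleton of such a splitting (our sketch, without the stochastic terms) for a Jeffery-type orientation equation $\dot p = Wp + \lambda(Sp - (p^\top S p)p)$, $|p| = 1$, in a 2D simple shear: the rotation substep $p \leftarrow e^{\Delta t\, W} p$ is exact and norm-preserving, and the stretching substep is the normalised flow of $\dot q = \lambda S q$.

```python
import numpy as np

gamma, lam, dt, n_steps = 1.0, 0.9, 1e-3, 1000
grad_u = np.array([[0.0, gamma], [0.0, 0.0]])   # simple shear velocity gradient
S = 0.5 * (grad_u + grad_u.T)                   # strain rate (symmetric part)
W = 0.5 * (grad_u - grad_u.T)                   # vorticity (antisymmetric part)

# Exact 2x2 exponentials of the two substep generators.
w = W[0, 1] * dt                                # rotation angle per step
R = np.array([[np.cos(w), np.sin(w)], [-np.sin(w), np.cos(w)]])
s = lam * S[0, 1] * dt                          # stretching increment per step
E = np.array([[np.cosh(s), np.sinh(s)], [np.sinh(s), np.cosh(s)]])

p = np.array([1.0, 0.0])
for _ in range(n_steps):
    p = R @ p                                   # rotation substep
    q = E @ p                                   # stretching substep ...
    p = q / np.linalg.norm(q)                   # ... projected back to |p| = 1

# Reference: explicit Euler on the full Jeffery right-hand side.
ref = np.array([1.0, 0.0])
for _ in range(n_steps):
    rhs = W @ ref + lam * (S @ ref - (ref @ S @ ref) * ref)
    ref = (ref + dt * rhs) / np.linalg.norm(ref + dt * rhs)

gap = np.linalg.norm(p - ref)
print(gap)  # the two first-order schemes agree closely
```

The paper's scheme adds the stochastic contributions on top of this structure, which is what makes the hybrid probabilistic/deterministic coupling efficient.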

This text presents an introduction to an emerging paradigm in control of dynamical systems and differentiable reinforcement learning called online nonstochastic control. The new approach applies techniques from online convex optimization and convex relaxations to obtain new methods with provable guarantees for classical settings in optimal and robust control. The primary distinction between online nonstochastic control and other frameworks is the objective. In optimal control, robust control, and other control methodologies that assume stochastic noise, the goal is to perform comparably to an offline optimal strategy. In online nonstochastic control, both the cost functions as well as the perturbations from the assumed dynamical model are chosen by an adversary. Thus the optimal policy is not defined a priori. Rather, the target is to attain low regret against the best policy in hindsight from a benchmark class of policies. This objective suggests the use of the decision making framework of online convex optimization as an algorithmic methodology. The resulting methods are based on iterative mathematical optimization algorithms, and are accompanied by finite-time regret and computational complexity guarantees.
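The algorithmic workhorse named above, online convex optimization, can be sketched in a few lines (our toy, not a full nonstochastic controller): online gradient descent faces adversarially chosen convex losses $f_t(x) = (x - z_t)^2$ and its regret is measured against the best fixed decision in hindsight.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 5000
z = np.sign(rng.standard_normal(T))    # adversarial-looking +/-1 targets

x, losses = 0.0, []
for t in range(1, T + 1):
    losses.append((x - z[t - 1]) ** 2)           # suffer the loss first
    grad = 2 * (x - z[t - 1])                    # then observe the gradient
    x = np.clip(x - grad / np.sqrt(t), -1.0, 1.0)  # OGD, eta_t ~ 1/sqrt(t)

best_fixed = np.mean(z)                # minimiser of sum_t (x - z_t)^2
regret = sum(losses) - np.sum((best_fixed - z) ** 2)
print(regret / T)  # average regret is small, consistent with O(sqrt(T)) regret
```

In online nonstochastic control the decision variable is a policy parameterisation rather than a scalar, and the losses come from adversarial costs and disturbances, but the regret-minimisation template is the same.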

Stochastic restoration algorithms make it possible to explore the space of solutions that correspond to the degraded input. In this paper we reveal additional fundamental advantages of stochastic methods over deterministic ones, which further motivate their use. First, we prove that any restoration algorithm that attains perfect perceptual quality and whose outputs are consistent with the input must be a posterior sampler, and is thus required to be stochastic. Second, we illustrate that while deterministic restoration algorithms may attain high perceptual quality, this can be achieved only by filling up the space of all possible source images using an extremely sensitive mapping, which makes them highly vulnerable to adversarial attacks. Indeed, we show that enforcing deterministic models to be robust to such attacks profoundly hinders their perceptual quality, while robustifying stochastic models hardly influences their perceptual quality, and improves their output variability. These findings provide a motivation to foster progress in stochastic restoration methods, paving the way to better recovery algorithms.
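The first claim has a transparent Gaussian analogue (our toy, not from the paper): for $x \sim N(0, \sigma_x^2)$ and $y = x + n$ with $n \sim N(0, \sigma_n^2)$, the deterministic MMSE restorer shrinks the output distribution relative to the source, while a posterior sampler reproduces the source distribution exactly, i.e. attains perfect perceptual quality, and is necessarily stochastic.

```python
import numpy as np

rng = np.random.default_rng(4)
s2x, s2n, n_samp = 1.0, 0.5, 200_000

x = rng.normal(0.0, np.sqrt(s2x), n_samp)          # clean sources
y = x + rng.normal(0.0, np.sqrt(s2n), n_samp)      # degraded observations

c = s2x / (s2x + s2n)                              # posterior mean coefficient
x_mmse = c * y                                     # deterministic restoration
# Posterior sampler: x | y ~ N(c y, c * s2n), so add the missing variance.
x_post = c * y + rng.normal(0.0, np.sqrt(c * s2n), n_samp)

print(np.var(x_mmse), np.var(x_post), np.var(x))
# var(x_mmse) = c * s2x < s2x, while var(x_post) matches var(x) = s2x.
```

The over-smoothed, lower-variance MMSE output is the one-dimensional shadow of the perception-distortion trade-off that the paper studies for image restoration.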

Over three decades of scientific endeavors to realize programmable matter, a substance that can change its physical properties based on user input or responses to its environment, there have been many advances in both the engineering of modular robotic systems and the corresponding algorithmic theory of collective behavior. However, while the design of modular robots routinely addresses the challenges of realistic three-dimensional (3D) space, algorithmic theory remains largely focused on 2D abstractions such as planes and planar graphs. In this work, we formalize the 3D geometric space variant for the canonical amoebot model of programmable matter, using the face-centered cubic (FCC) lattice to represent space and define local spatial orientations. We then give a distributed algorithm for leader election in connected, contractible 2D or 3D geometric amoebot systems that deterministically elects exactly one leader in $\mathcal{O}(n)$ rounds under an unfair sequential adversary, where $n$ is the number of amoebots in the system. We then demonstrate how this algorithm can be transformed using the concurrency control framework for amoebot algorithms (DISC 2021) to obtain the first known amoebot algorithm, both in 2D and 3D space, to solve leader election under an unfair asynchronous adversary.

A common technique to verify complex logic specifications for dynamical systems is the construction of symbolic abstractions: simpler, finite-state models whose behaviour mimics that of the system of interest. Typically, abstractions are constructed exploiting an accurate knowledge of the underlying model: in real-life applications, this may be a costly assumption. By sampling random $\ell$-step trajectories of an unknown system, we build an abstraction based on the notion of $\ell$-completeness. We newly define the notion of probabilistic behavioural inclusion, and provide probably approximately correct (PAC) guarantees that this abstraction includes all behaviours of the concrete system, for finite and infinite time horizon, leveraging scenario theory for non-convex problems. Our method is then tested on several numerical benchmarks.
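The sampling idea can be sketched on a hypothetical system of our own (not one of the paper's benchmarks, and without its PAC machinery): sample random $\ell$-step trajectories of an "unknown" system, record the observed sequences of partition cells as the behaviours of a data-driven abstraction, then check empirically how often a fresh trajectory exhibits a behaviour the abstraction missed.

```python
import numpy as np

rng = np.random.default_rng(5)
ell, n_train, n_test = 3, 5000, 1000

def step(x):                          # the "unknown" dynamics on [-1, 1]
    return 0.9 * x + 0.1 * np.sin(3 * x)

def cell(x):                          # a coarse 4-cell state partition
    return int(np.clip(np.floor(2 * (x + 1)), 0, 3))

def sample_behaviour():
    # One random ell-step trajectory, reported as its cell sequence.
    x = rng.uniform(-1.0, 1.0)
    cells = []
    for _ in range(ell + 1):
        cells.append(cell(x))
        x = step(x)
    return tuple(cells)

behaviours = {sample_behaviour() for _ in range(n_train)}   # the abstraction
unseen = sum(sample_behaviour() not in behaviours for _ in range(n_test))
print(unseen / n_test)  # small: the abstraction covers almost all behaviours
```

The paper's contribution is to replace this purely empirical check with formal PAC guarantees of behavioural inclusion over finite and infinite horizons.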
