
Definite integrals of holonomic functions with parameters satisfy holonomic systems of linear partial differential equations. When the parameters are restricted to a one-dimensional curve, the system reduces to a linear ordinary differential equation (ODE) along that curve in the parameter space, and the integral can be evaluated by solving this linear ODE numerically. This approach to the numerical evaluation of definite integrals is called the holonomic gradient method (HGM), and it is useful for evaluating several normalizing constants in statistics. We discuss and compare methods for solving linear ODEs to evaluate normalizing constants.
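
As a toy illustration of the HGM pipeline (our example, not one from the abstract): the von Mises normalizing constant $c(\kappa) = \int_0^{2\pi} e^{\kappa\cos\theta}\,d\theta = 2\pi I_0(\kappa)$ is holonomic, since $I_0$ satisfies $\kappa f'' + f' - \kappa f = 0$. A minimal sketch, assuming SciPy, that integrates the resulting first-order system along the curve $\kappa \in [\kappa_0, \kappa_1]$:

```python
# Toy HGM example (not from the paper): evaluate the von Mises normalizing
# constant c(k) = 2*pi*I_0(k) by integrating the ODE for f = I_0,
# k f'' + f' - k f = 0, rewritten as a first-order system for F = (f, f').
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import iv  # modified Bessel function, for initial values only

def pfaffian(k, F):
    """f'' = f - f'/k, so d/dk (f, f') = (f', f - f'/k)."""
    f, df = F
    return [df, f - df / k]

k0, k1 = 1.0, 10.0                 # move along the curve k in [k0, k1]
F0 = [iv(0, k0), iv(1, k0)]        # exact initial values; I_0'(k) = I_1(k)
sol = solve_ivp(pfaffian, (k0, k1), F0, rtol=1e-10, atol=1e-12)

approx = 2 * np.pi * sol.y[0, -1]  # HGM value of c(k1)
exact = 2 * np.pi * iv(0, k1)      # reference value
print(approx, exact)               # the two agree to high accuracy
```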

Related content

Multiway data analysis aims to uncover patterns in data structured as multi-indexed arrays, and the covariance of such data plays a crucial role in various machine learning applications. However, the intrinsically high dimension of multiway covariance presents significant challenges. To address these challenges, factorized covariance models have been proposed that rely on a separability assumption: the multiway covariance can be accurately expressed as a sum of Kronecker products of mode-wise covariances. This paper is concerned with the accuracy of such separable models for representing multiway covariances. We reduce the question of whether a given covariance can be represented as a separable multiway covariance to an equivalent question about the separability of quantum states. Based on this equivalence, we establish that generic multiway covariances tend not to be separable. Moreover, we show that determining the best separable approximation of a generic covariance is NP-hard. Our results suggest that factorized covariance models might not accurately approximate covariances without additional assumptions ensuring separability. To balance these negative results, we propose an iterative Frank-Wolfe algorithm for computing Kronecker-separable covariance approximations with some additional side information. We establish an oracle complexity bound and empirically observe consistent convergence to a separable limit point, often close to the "best" separable approximation. These results suggest that practical methods may be able to find Kronecker-separable approximations of covariances, despite the worst-case NP-hardness results.
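
To make the single-term Kronecker building block concrete, here is a sketch of the classical Van Loan-Pitsianis nearest-Kronecker-product step (a standard technique, not the paper's Frank-Wolfe algorithm): rearranging $C \approx A \otimes B$ turns the problem into a rank-1 approximation. It ignores the positive-semidefiniteness constraints that a separable covariance approximation must respect, so it only illustrates the structure.

```python
# Best single Kronecker-product approximation in Frobenius norm:
# rearrange C (pq x pq) block-wise so that C ~ A kron B becomes a rank-1
# problem, then read A and B off the leading singular pair (Van Loan-Pitsianis).
import numpy as np

def nearest_kronecker(C, p, q):
    """Return A (p x p), B (q x q) minimizing ||C - A kron B||_F."""
    # Row i*p+j of R is the flattened (i, j) block of C, so R = vec(A) vec(B)^T.
    R = np.stack([C[i*q:(i+1)*q, j*q:(j+1)*q].reshape(-1)
                  for i in range(p) for j in range(p)])
    U, s, Vt = np.linalg.svd(R)
    A = np.sqrt(s[0]) * U[:, 0].reshape(p, p)
    B = np.sqrt(s[0]) * Vt[0].reshape(q, q)
    return A, B

rng = np.random.default_rng(0)
p, q = 3, 4
A0, B0 = rng.standard_normal((p, p)), rng.standard_normal((q, q))
C = np.kron(A0, B0) + 1e-3 * rng.standard_normal((p*q, p*q))
A, B = nearest_kronecker(C, p, q)
print(np.linalg.norm(C - np.kron(A, B)))  # small residual
```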

In stochastic gradient descent (SGD) for sequential simulations, such as neural stochastic differential equations, the multilevel Monte Carlo (MLMC) method is known to offer better theoretical computational complexity than the naive Monte Carlo approach. In practice, however, MLMC scales poorly on massively parallel computing platforms such as modern GPUs, because its parallel complexity is as large as that of the naive Monte Carlo method. To cope with this issue, we propose the delayed MLMC gradient estimator, which drastically reduces the parallel complexity of MLMC by recycling previously computed gradient components from earlier steps of SGD. The proposed estimator provably reduces the average parallel complexity per iteration at the cost of a slightly worse per-iteration convergence rate. In our numerical experiments, we use a deep hedging example to demonstrate the superior parallel complexity of our method compared to standard MLMC in SGD.
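
A speculative sketch of the recycling idea as we read the abstract (the refresh schedule, level count, and toy objective below are our assumptions, not the paper's estimator): the level-$l$ correction term of the MLMC telescoping sum is recomputed only occasionally and cached in between, so expensive fine levels rarely contribute to the per-iteration parallel work.

```python
# Hedged sketch of a "delayed" multilevel gradient: the level-l correction is
# refreshed only every 2**l SGD steps and a stale cached value is reused in
# between.  level_gradient is a stand-in for an MC estimate of the level-l
# difference E[G_l] - E[G_{l-1}]; here the objective is f(theta) = theta^2/2.
import numpy as np

L = 4                                  # number of MLMC levels
cache = [None] * (L + 1)               # last computed level-wise gradients

def level_gradient(theta, l, rng):
    noise = rng.standard_normal() * 2.0**(-l)   # level-decaying variance
    return theta + noise if l == 0 else noise   # corrections average to ~0 here

rng = np.random.default_rng(0)
theta, lr = 5.0, 0.1
for t in range(200):
    for l in range(L + 1):
        if cache[l] is None or t % 2**l == 0:   # refresh level l only occasionally
            cache[l] = level_gradient(theta, l, rng)
    theta -= lr * sum(cache)           # SGD step with the (partly stale) sum
print(theta)                           # close to the minimizer 0
```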

We propose efficient algorithms for enumerating the notorious combinatorial structures of maximal planar graphs, called canonical orderings and Schnyder woods, and the related classical graph drawings by de Fraysseix, Pach, and Pollack [Combinatorica, 1990] and by Schnyder [SODA, 1990], called canonical drawings and Schnyder drawings, respectively. To this aim (i) we devise an algorithm for enumerating special $e$-bipolar orientations of maximal planar graphs, called canonical orientations; (ii) we establish bijections between canonical orientations and canonical drawings, and between canonical orientations and Schnyder drawings; and (iii) we exploit the known correspondence between canonical orientations and canonical orderings, and the known bijection between canonical orientations and Schnyder woods. All our enumeration algorithms have $O(n)$ setup time, space usage, and delay between any two consecutively listed outputs, for an $n$-vertex maximal planar graph.
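
To make the enumerated objects concrete, here is a brute-force baseline (our illustration; the paper's algorithms achieve $O(n)$ delay, whereas this check of all $2^{|E|}$ orientations is exponential) that lists the $st$-bipolar orientations of a small graph: acyclic orientations with unique source $s$ and unique sink $t$.

```python
# Brute-force enumeration of st-bipolar orientations of a small graph:
# acyclic orientations with unique source s and unique sink t.
from itertools import product

def is_acyclic(vertices, arcs):
    """Kahn's algorithm: repeatedly delete in-degree-0 vertices."""
    indeg = {v: 0 for v in vertices}
    for _, v in arcs:
        indeg[v] += 1
    queue = [v for v in vertices if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for a, b in arcs:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(vertices)

def bipolar_orientations(vertices, edges, s, t):
    for flips in product((False, True), repeat=len(edges)):
        arcs = [(v, u) if f else (u, v) for (u, v), f in zip(edges, flips)]
        indeg = {v: sum(1 for _, b in arcs if b == v) for v in vertices}
        outdeg = {v: sum(1 for a, _ in arcs if a == v) for v in vertices}
        sources = [v for v in vertices if indeg[v] == 0]
        sinks = [v for v in vertices if outdeg[v] == 0]
        if sources == [s] and sinks == [t] and is_acyclic(vertices, arcs):
            yield arcs

# K4 (a maximal planar graph) with poles s=0, t=3:
V = [0, 1, 2, 3]
E = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(sum(1 for _ in bipolar_orientations(V, E, 0, 3)))
```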

Local fields, and fields complete with respect to a discrete valuation, are essential objects in commutative algebra, with applications to number theory and algebraic geometry. We formalize in Lean the basic theory of discretely valued fields. In particular, we prove that the unit ball with respect to a discrete valuation on a field is a discrete valuation ring and, conversely, that the adic valuation on the field of fractions of a discrete valuation ring is discrete. We define finite extensions of valuations and of discrete valuation rings, and prove some global-to-local results. Building on this general theory, we formalize the abstract definition and some fundamental properties of local fields. As an application, we show that finite extensions of the field $\mathbb{Q}_p$ of $p$-adic numbers and of the field $\mathbb{F}_p(\!(X)\!)$ of Laurent series over $\mathbb{F}_p$ are local fields.
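
A toy Lean 4 sketch of the central objects (assuming mathlib for `Field`, `Set`, and `ℤ`); this is not the paper's formalization nor mathlib's own valuation API, only an illustration of the definitions involved:

```lean
import Mathlib

/-- A discrete valuation on a field `K`: additive on products, satisfying the
ultrametric inequality, and attaining the value 1 (so its image is all of `ℤ`). -/
structure DiscreteVal (K : Type*) [Field K] where
  v : K → ℤ
  map_mul : ∀ x y, x ≠ 0 → y ≠ 0 → v (x * y) = v x + v y
  ultrametric : ∀ x y, x ≠ 0 → y ≠ 0 → x + y ≠ 0 → min (v x) (v y) ≤ v (x + y)
  exists_uniformizer : ∃ p : K, p ≠ 0 ∧ v p = 1

/-- The unit ball of a discrete valuation: `0` together with the elements of
nonnegative valuation.  The paper proves (in mathlib's language) that this
subring is a discrete valuation ring. -/
def unitBall {K : Type*} [Field K] (w : DiscreteVal K) : Set K :=
  {x | x = 0 ∨ 0 ≤ w.v x}
```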

We consider relational semantics (R-models) for the Lambek calculus extended with intersection and explicit constants for zero and unit. For its variant without constants, under a restriction which disallows empty antecedents, Andreka and Mikulas (1994) prove strong completeness. We show that strong completeness fails without this restriction but, on the other hand, prove weak completeness for a non-standard interpretation of the constants. For the standard interpretation, even weak completeness fails. The weak completeness result extends to an infinitary setting, for so-called iterative divisions (Kleene star under division). We also prove strong completeness results for product-free fragments.
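
For reference, the relational interpretation standard in the literature (the notation here is ours, not quoted from the paper) reads formulas as binary relations over a base set $W$:

\[
\begin{aligned}
  A \cdot B &= \{ (x,z) \mid \exists y\;((x,y) \in A \text{ and } (y,z) \in B) \},\\
  A \backslash C &= \{ (y,z) \mid \forall x\;((x,y) \in A \Rightarrow (x,z) \in C) \},\\
  C / B &= \{ (x,y) \mid \forall z\;((y,z) \in B \Rightarrow (x,z) \in C) \},\\
  A \wedge B &= A \cap B,
\end{aligned}
\]

with the standard interpretation of the constants taking $\mathbf{1}$ to be the identity relation on $W$ and $\mathbf{0}$ the empty relation; it is for this reading that even weak completeness fails.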

Neural abstractions have recently been introduced as formal approximations of complex, nonlinear dynamical models. They comprise a neural ODE and a certified upper bound on the error between the abstract neural network and the concrete dynamical model. So far, neural abstractions have exclusively been obtained as neural networks consisting entirely of ReLU activation functions, resulting in neural ODE models that have piecewise affine dynamics and can be equivalently interpreted as linear hybrid automata. In this work, we observe that the utility of an abstraction depends on its use: some scenarios might require coarse abstractions that are easier to analyse, whereas others might require more complex, refined abstractions. We therefore consider neural abstractions of alternative shapes, namely either piecewise constant or nonlinear non-polynomial (specifically, obtained via sigmoidal activations). We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics. Empirically, we demonstrate the trade-off that these different neural abstraction templates exhibit vis-a-vis their precision and synthesis time, as well as the time required for their safety verification (done via reachability computation). We improve existing synthesis techniques to enable abstraction of higher-dimensional models, and additionally discuss the abstraction of complex neural ODEs to improve the efficiency of reachability analysis for these models.
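
A minimal sketch of the sigmoidal template, assuming PyTorch (the architecture and step size are our illustrative choices; the certified error bound between abstraction and concrete dynamics, which is the paper's key ingredient, is omitted): a one-hidden-layer network with sigmoid activations serves as the right-hand side of a neural ODE.

```python
# A nonlinear non-polynomial (sigmoidal) neural abstraction: a small network
# defines the vector field dx/dt = N(x), integrated here with forward Euler.
import torch

class SigmoidalField(torch.nn.Module):
    def __init__(self, dim=2, hidden=16):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden),
            torch.nn.Sigmoid(),          # the nonlinear non-polynomial template
            torch.nn.Linear(hidden, dim),
        )

    def forward(self, x):
        return self.net(x)               # dx/dt = N(x)

f = SigmoidalField()
x, dt = torch.tensor([1.0, 0.0]), 0.01
for _ in range(100):                     # simulate the abstract model
    x = x + dt * f(x)
print(x)
```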

To plan the trajectories of a large and heterogeneous swarm, sequential or synchronous distributed methods usually become intractable due to the lack of global connectivity and clock synchronization. Moreover, existing asynchronous distributed schemes usually require recheck-like mechanisms instead of inherently accounting for the other agents' moving tendencies. To this end, we propose a novel asynchronous protocol that allocates the agents' drivable space in a distributed way, by which each agent can replan its trajectory on its own timetable. Properties such as collision avoidance and recursive feasibility are shown theoretically, and a lower bound on the protocol's update interval is provided. Comprehensive simulations and comparisons with five state-of-the-art methods validate the effectiveness of our method and illustrate improvements in both completion time and moving distance. Finally, hardware experiments are carried out in which 8 heterogeneous unmanned ground vehicles with onboard computation navigate cluttered scenarios with high agility.

We derive a family of efficient constrained-dynamics algorithms by formulating an equivalent linear quadratic regulator (LQR) problem using Gauss's principle of least constraint and solving it via dynamic programming. Our approach builds upon the pioneering (but largely unknown) $O(n + m^2d + m^3)$ solver of Popov and Vereshchagin (PV), where $n$, $m$ and $d$ are the number of joints, the number of constraints and the depth of the kinematic tree, respectively. We provide an expository derivation of the original PV solver and extend it to floating-base kinematic trees with constraints allowed on any link. We make new connections between the LQR's dual Hessian and the inverse operational-space inertia matrix (OSIM), permitting efficient OSIM computation, which we further accelerate using the matrix inversion lemma. By generalizing the elimination ordering and accounting for MuJoCo-type soft constraints, we derive two original $O(n + m)$-complexity solvers. Our numerical results indicate that significant simulation speed-ups can be achieved for high-dimensional robots such as quadrupeds and humanoids, as our algorithms scale better than the widely used $O(nd^2 + m^2d + d^2m)$ LTL algorithm of Featherstone. The derivation through the LQR-constrained-dynamics connection makes our algorithms accessible to a wider audience and enables cross-fertilization of software and research results between the two fields.
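
Gauss's principle states that the constrained acceleration minimizes $\frac{1}{2}(a - a_{\mathrm{free}})^\top M (a - a_{\mathrm{free}})$ subject to $J a = c$. A dense NumPy sketch of that problem (our illustration: it solves the same optimization via an $O(n^3)$ KKT system, whereas the PV-style solvers reach $O(n + m)$ by dynamic programming over the kinematic tree):

```python
# Gauss's principle of least constraint as a dense KKT solve:
# minimize (1/2)(a - a_free)^T M (a - a_free)  subject to  J a = c.
import numpy as np

def constrained_accel(M, a_free, J, c):
    n, m = M.shape[0], J.shape[0]
    # KKT system:  [ M  J^T ] [  a     ]   [ M a_free ]
    #              [ J   0  ] [ -lmbda ] = [ c        ]
    K = np.block([[M, J.T], [J, np.zeros((m, m))]])
    rhs = np.concatenate([M @ a_free, c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], -sol[n:]            # accelerations, constraint forces

rng = np.random.default_rng(0)
n, m = 6, 2
L = rng.standard_normal((n, n))
M = L @ L.T + n * np.eye(n)             # SPD joint-space inertia matrix
a_free = rng.standard_normal(n)
J, c = rng.standard_normal((m, n)), rng.standard_normal(m)
a, lam = constrained_accel(M, a_free, J, c)
print(np.allclose(J @ a, c))            # the constraint holds
```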

We present a result under which certain functions of covariance matrices are maximized at multiples of the identity matrix. This is used to show that experimental designs that are optimal under an assumption of independent observations can be minimax, in broad classes of correlation structures.

The inherently diverse and uncertain nature of trajectories presents a formidable challenge in modeling them accurately. Motion prediction systems must effectively learn spatial and temporal information from the past in order to forecast the future trajectories of agents. Many existing methods capture temporal features via separate components within stacked models. This paper introduces a novel framework, called Temporal Waypoint Dropping (TWD), that promotes explicit temporal learning through a waypoint-dropping technique. Learning through waypoint dropping can compel the model to improve its understanding of temporal correlations among agents, leading to a significant enhancement in trajectory prediction. Trajectory prediction methods often operate under the assumption that observed waypoint sequences are complete, disregarding real-world scenarios where missing values may occur and degrade performance. Moreover, these models frequently exhibit a bias towards particular waypoint sequences when making predictions. TWD effectively addresses both issues: it incorporates stochastic and fixed processes that regularize the observed past trajectories by strategically dropping waypoints based on their temporal sequence. Through extensive experiments, we demonstrate the effectiveness of TWD in forcing the model to learn complex temporal correlations among agents. Our approach can complement existing trajectory prediction methods to enhance their prediction accuracy. We evaluate the proposed method on three datasets: NBA SportVU, ETH-UCY, and TrajNet++.
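
To illustrate the mechanism, here is a hedged sketch of waypoint dropping as a training-time transform on an observed trajectory (the masking probability, fixed index set, and NaN encoding of missing waypoints below are our stand-ins; the exact stochastic and fixed schedules are the paper's):

```python
# Waypoint dropping on an observed trajectory of shape (T, 2): the stochastic
# variant masks a random subset of waypoints, the fixed variant always masks
# the same indices.  Dropped waypoints become missing values (NaN).
import numpy as np

def drop_waypoints(traj, p=0.2, fixed_idx=None, rng=None):
    traj = traj.copy()
    if fixed_idx is not None:                 # fixed process
        mask = np.zeros(len(traj), dtype=bool)
        mask[list(fixed_idx)] = True
    else:                                     # stochastic process
        rng = rng or np.random.default_rng()
        mask = rng.random(len(traj)) < p
    traj[mask] = np.nan                       # mark dropped waypoints as missing
    return traj, mask

obs = np.cumsum(np.random.default_rng(0).standard_normal((8, 2)), axis=0)
dropped, mask = drop_waypoints(obs, p=0.25, rng=np.random.default_rng(1))
print(mask)                                   # which waypoints were dropped
```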
