
Simulation of realistic classical mechanical systems is of great importance to many areas of engineering such as robotics, dynamics of rotating machinery, and control theory. In this work, we develop quantum algorithms to estimate quantities of interest such as the kinetic energy in a given classical mechanical system in the presence of friction or damping as well as forcing or source terms, which makes the algorithm of practical interest. We show that for such systems, the quantum algorithm scales polynomially with the logarithm of the dimension of the system. We cast this problem in terms of Hamilton's equations of motion (equivalent to the first variation of the Lagrangian) and solve them using quantum algorithms for differential equations. We then consider the hardness of estimating the kinetic energy of a damped coupled oscillator system. We show that estimating the kinetic energy of this system at a given time to within additive precision is BQP-hard when the strength of the damping term is bounded by an inverse polynomial in the number of qubits. We then consider the problem of designing optimal control of classical systems, which can be cast as the second variation of the Lagrangian. In this direction, we first consider the Riccati equation, which is a nonlinear differential equation ubiquitous in control theory. We give an efficient quantum algorithm to solve the Riccati differential equation well into the nonlinear regime. To our knowledge, this is the first example of any nonlinear differential equation that can be solved when the strength of the nonlinearity is asymptotically greater than the amount of dissipation. We then show how to use this algorithm to solve the linear quadratic regulator problem, which is an example of the Hamilton-Jacobi-Bellman equation.
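For orientation, the objects named above can be written in a standard textbook form (conventional notation, not necessarily the paper's exact formulation): the damped, forced coupled-oscillator equations of motion, obtained by adding damping and forcing to the conservative Hamiltonian flow, and the continuous-time matrix Riccati equation arising in the linear quadratic regulator problem, with terminal condition $P(T) = Q_f$:

$$\dot{x} = M^{-1} p, \qquad \dot{p} = -Kx - \gamma p + f(t),$$
$$-\dot{P}(t) = A^{\top}P(t) + P(t)A - P(t)BR^{-1}B^{\top}P(t) + Q.$$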

Related Content

Utilizing robotic systems in the construction industry is gaining popularity due to advantages in build time, precision, and efficiency. In this paper, we introduce a system that allows the coordination of multiple manipulator robots for construction activities. As a case study, we chose robotic brick wall assembly. By utilizing a multi-robot system in which arm manipulators collaborate with each other, the entirety of a potentially long wall can be assembled simultaneously. However, the reduction in overall bricklaying time depends on minimizing the time required by each individual manipulator. In this paper, we run simulations with various placements of the material and the robots' bases, as well as different robot configurations, to determine the optimal positions of the robots and material and the best robot configuration. The simulation results provide users with insights into how to find the best placement of robots and raw materials for brick wall assembly.
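As a rough illustration of the placement search described above (a toy model, not the paper's simulator; brick_slots, candidate_bases, and candidate_piles are invented names and geometries), one can brute-force candidate base and material-pile positions and score each layout by the busiest robot's total travel, which bounds the parallel assembly time:

from itertools import combinations
import math

brick_slots = [(float(x), 0.0) for x in range(10)]            # wall positions (assumed)
candidate_bases = [(float(x), 1.5) for x in range(0, 10, 2)]  # possible base poses
candidate_piles = [(float(x), 3.0) for x in range(0, 10, 3)]  # possible material piles

def layout_cost(bases, pile):
    # Assign each brick slot to the nearest base; a brick's cost is approximated
    # by the arm swing from pile to slot: dist(base, pile) + dist(base, slot).
    per_robot = {b: [] for b in bases}
    for s in brick_slots:
        per_robot[min(bases, key=lambda b: math.dist(b, s))].append(s)
    # The busiest robot bounds the parallel assembly time.
    return max(sum(math.dist(b, pile) + math.dist(b, s) for s in slots)
               for b, slots in per_robot.items())

layouts = [(bases, pile) for bases in combinations(candidate_bases, 2)
           for pile in candidate_piles]
best = min(layouts, key=lambda bp: layout_cost(*bp))
print("best bases and pile:", best, "cost:", round(layout_cost(*best), 2))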

This paper focuses on the numerical scheme for delay-type stochastic McKean-Vlasov equations (DSMVEs) driven by fractional Brownian motion with Hurst parameter $H\in (0,1/2)\cup (1/2,1)$. The existence and uniqueness of the solutions to such DSMVEs whose drift coefficients contain polynomial delay terms are proved by exploiting the Banach fixed point theorem. Then the propagation of chaos between the interacting particle system and the non-interacting system in the $\mathcal{L}^p$ sense is shown. We find that even if the delay term satisfies the polynomial growth condition, the unmodified classical Euler-Maruyama scheme can still approximate the corresponding interacting particle system without particle corruption. The convergence rates are revealed for $H\in (0,1/2)\cup (1/2,1)$. Finally, as an example that closely fits the original equation, a stochastic opinion dynamics model with both extrinsic memory and intrinsic memory is simulated to illustrate the plausibility of the theoretical result.
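A minimal sketch of the flavor of this scheme (the cubic delay term, mean-field interaction, and constant initial segment below are assumptions for illustration, not the paper's model): the unmodified Euler-Maruyama recursion applied to an interacting particle system, driven by fBm paths generated exactly via a Cholesky factorization of the fBm covariance.

import numpy as np

rng = np.random.default_rng(0)
H, T, N, M, tau = 0.7, 1.0, 200, 50, 0.1   # Hurst index, horizon, steps, particles, delay
dt = T / N
t = dt * np.arange(1, N + 1)

# Exact fBm sample paths on the grid: Cov(B_H(t_i), B_H(t_j)).
cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))
L = np.linalg.cholesky(cov + 1e-12 * np.eye(N))
B = L @ rng.standard_normal((N, M))                 # fBm paths, one column per particle
dB = np.diff(np.vstack([np.zeros(M), B]), axis=0)   # fBm increments

lag = int(round(tau / dt))
X = np.zeros((N + 1, M))
X[0] = 1.0                                          # constant initial segment
for n in range(N):
    X_delay = X[max(n - lag, 0)]                    # X(t - tau), frozen initial past
    drift = (X[n].mean() - X[n]) - X_delay ** 3     # mean-field term + polynomial delay term
    X[n + 1] = X[n] + drift * dt + 0.3 * dB[n]      # unmodified Euler-Maruyama step

print("empirical mean and variance at T:", X[-1].mean(), X[-1].var())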

Earth system models (ESMs) are fundamental for understanding Earth's complex climate system. However, the computational demands and storage requirements of ESM simulations limit their utility. For the newly published CESM2-LENS2 data, which suffer from this issue, we propose a novel stochastic generator (SG) as a practical complement to the CESM2, capable of rapidly producing emulations closely mirroring the training simulations. Our SG leverages the spherical harmonic transformation (SHT) to shift from spatial to spectral domains, enabling efficient low-rank approximations that significantly reduce computational and storage costs. By accounting for axial symmetry and retaining distinct ranks for land and ocean regions, our SG captures intricate non-stationary spatial dependencies. Additionally, a modified Tukey g-and-h (TGH) transformation accommodates non-Gaussianity in high-temporal-resolution data. We apply the proposed SG to generate emulations for surface temperature simulations from the CESM2-LENS2 data across various scales, marking the first attempt at reproducing daily data. These emulations are then meticulously validated against the training simulations. This work offers a promising complementary pathway for efficient climate modeling and analysis while overcoming computational and storage limitations.
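Two ingredients mentioned above in their standard forms (the paper's modified TGH transform and SHT-based construction may differ in detail; the matrix A below is a random stand-in, not CESM2-LENS2 data): the Tukey g-and-h transform, which injects skewness and heavy tails into Gaussian noise, and a truncated-SVD compression analogous in spirit to retaining a small number of ranks.

import numpy as np

def tukey_gh(z, xi=0.0, omega=1.0, g=0.3, h=0.1):
    # Standard Tukey g-and-h transform of a standard normal variate z.
    core = np.expm1(g * z) / g if g != 0 else z
    return xi + omega * core * np.exp(0.5 * h * z ** 2)

# Non-Gaussian emulation of a single grid cell: transform Gaussian noise.
z = np.random.default_rng(1).standard_normal(10_000)
sample = tukey_gh(z)
print("skewness proxy:", np.mean(((sample - sample.mean()) / sample.std()) ** 3))

# Low-rank compression of a (time x spectral-coefficient) matrix via truncated SVD.
A = np.random.default_rng(2).standard_normal((365, 500))   # stand-in coefficient matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 20
A_lowrank = (U[:, :r] * s[:r]) @ Vt[:r]
print("relative error of rank-20 approximation:",
      np.linalg.norm(A - A_lowrank) / np.linalg.norm(A))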

Probabilistic graphical models are widely used to model complex systems under uncertainty. Traditionally, Gaussian directed graphical models are applied for the analysis of large networks with continuous variables, as they provide conditional and marginal distributions in closed form, simplifying the inferential task. The Gaussianity and linearity assumptions are often adequate, yet they can lead to poor performance in some practical applications. In this paper, we model each variable in a graph $G$ as a polynomial regression of its parents to capture complex relationships between individual variables, and we work with a utility function of polynomial form. We develop a message-passing algorithm to propagate information throughout the network solely in terms of moments, which enables the expected utility scores to be calculated exactly. Our propagation method scales well and allows inference to be performed using a finite number of expectations. We illustrate how the proposed methodology works with examples and in applications to decision problems in energy planning and real-time clinical decision support.
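An illustrative one-edge version of the idea (not the paper's algorithm; the normal parent and quadratic regression are assumptions): with a polynomial regression $Y = p(X) + \varepsilon$, the mean and variance of the child follow exactly from a finite number of raw moments of the parent.

import numpy as np

def raw_moments_normal(mu, sigma2, k):
    # Raw moments E[X^0..X^k] of a Normal(mu, sigma2) parent, via the standard recursion.
    m = [1.0, mu]
    for n in range(2, k + 1):
        m.append(mu * m[n - 1] + (n - 1) * sigma2 * m[n - 2])
    return np.array(m[:k + 1])

def child_moments(coeffs, parent_moments, noise_var):
    # coeffs = [a0, a1, ...] for p(X) = a0 + a1*X + ...; returns (E[Y], Var[Y]).
    mean = float(np.dot(coeffs, parent_moments[:len(coeffs)]))
    sq = np.convolve(coeffs, coeffs)                 # coefficients of p(X)^2
    second = float(np.dot(sq, parent_moments[:len(sq)])) + noise_var
    return mean, second - mean ** 2

mX = raw_moments_normal(mu=1.0, sigma2=0.5, k=4)     # enough moments for a quadratic child
print(child_moments([0.5, 2.0, -1.0], mX, noise_var=0.1))   # Y = 0.5 + 2X - X^2 + eps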

The statistical basis for conventional full-waveform inversion (FWI) approaches is commonly associated with Gaussian statistics. However, errors are rarely Gaussian in non-linear problems like FWI. In this work, we investigate the applicability of a new objective function for FWI based on the graph-space optimal transport and the $\kappa$-generalized Gaussian probability distribution. In particular, we demonstrate that the proposed objective function is robust in mitigating two critical problems in FWI: cycle-skipping issues and non-Gaussian errors. The results reveal that our proposal can mitigate the negative influence of cycle-skipping ambiguity and non-Gaussian noise and reduce the computational runtime for computing the transport plan associated with optimal transport theory.
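For reference, one common form of the $\kappa$-generalized (Kaniadakis) exponential and the associated $\kappa$-Gaussian from the $\kappa$-statistics literature is given below; the paper's exact parameterization and normalization may differ:

$$\exp_\kappa(u) = \left(\sqrt{1+\kappa^2 u^2} + \kappa u\right)^{1/\kappa}, \qquad \lim_{\kappa \to 0}\exp_\kappa(u) = e^{u}, \qquad f_\kappa(x) \propto \exp_\kappa\!\left(-\beta x^2\right),$$

so that tails heavier than Gaussian are obtained for $\kappa \neq 0$, which is what makes such a misfit more tolerant of non-Gaussian errors.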

This research conducts a thorough reevaluation of seismic fragility curves by utilizing ordinal regression models, moving away from the commonly used log-normal distribution function known for its simplicity. It explores the nuanced differences and interrelations among various ordinal regression approaches, including Cumulative, Sequential, and Adjacent Category models, alongside their enhanced versions that incorporate category-specific effects and variance heterogeneity. The study applies these methodologies to empirical bridge damage data from the 2008 Wenchuan earthquake, using both frequentist and Bayesian inference methods, and conducts model diagnostics using surrogate residuals. The analysis covers eleven models, from basic to those with heteroscedastic extensions and category-specific effects. Through rigorous leave-one-out cross-validation, the Sequential model with category-specific effects emerges as the most effective. The findings underscore a notable divergence in damage probability predictions between this model and conventional Cumulative probit models, advocating for a substantial transition towards more adaptable fragility curve modeling techniques that enhance the precision of seismic risk assessments. In conclusion, this research not only readdresses the challenge of fitting seismic fragility curves but also advances methodological standards and expands the scope of seismic fragility analysis. It advocates for ongoing innovation and critical reevaluation of conventional methods to advance the predictive accuracy and applicability of seismic fragility models within the performance-based earthquake engineering domain.
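For comparison, a conventional cumulative probit fragility fit of the kind the paper benchmarks against can be sketched with statsmodels' OrderedModel on synthetic data (the damage states, intensity measure, and data below are invented, not the Wenchuan bridge dataset):

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 400
log_im = rng.normal(0.0, 1.0, n)                      # log intensity measure (assumed)
latent = 1.2 * log_im + rng.normal(0.0, 1.0, n)       # latent damage propensity
damage = np.digitize(latent, [-1.0, 0.5, 2.0])        # four ordered damage states 0..3
y = pd.Series(pd.Categorical(damage, categories=[0, 1, 2, 3], ordered=True))

model = OrderedModel(y, pd.DataFrame({"log_im": log_im}), distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.params)

# Fragility-style summary: exceedance probabilities over a grid of intensities.
grid = pd.DataFrame({"log_im": np.linspace(-2.0, 2.0, 5)})
probs = np.asarray(result.predict(grid))              # per-category probabilities
print(1.0 - probs.cumsum(axis=1)[:, :-1])             # P(damage >= k) for k = 1, 2, 3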

Learning to Optimize (L2O) stands at the intersection of traditional optimization and machine learning, utilizing the capabilities of machine learning to enhance conventional optimization techniques. As real-world optimization problems frequently share common structures, L2O provides a tool to exploit these structures for better or faster solutions. This tutorial dives deep into L2O techniques, introducing how to accelerate optimization algorithms, promptly estimate the solutions, or even reshape the optimization problem itself, making it more adaptive to real-world applications. By considering the prerequisites for successful applications of L2O and the structure of the optimization problems at hand, this tutorial provides a comprehensive guide for practitioners and researchers alike.
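A deliberately tiny instance of the L2O idea (an invented example, not taken from the tutorial): treat the gradient step size as the learned component, fit it on a sampled family of structurally similar quadratic problems, and reuse it on fresh instances from the same family.

import numpy as np

rng = np.random.default_rng(0)

def sample_problem(d=10):
    # Quadratics f(x) = 0.5 x'Ax - b'x sharing a common eigenvalue range.
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    A = Q @ np.diag(rng.uniform(1.0, 10.0, d)) @ Q.T
    return A, rng.standard_normal(d)

def loss_after_k_steps(alpha, problems, k=20):
    total = 0.0
    for A, b in problems:
        x = np.zeros(len(b))
        for _ in range(k):
            x -= alpha * (A @ x - b)              # plain gradient descent step
        total += 0.5 * x @ A @ x - b @ x
    return total / len(problems)

train = [sample_problem() for _ in range(50)]
alphas = np.linspace(0.01, 0.2, 40)
alpha_star = alphas[np.argmin([loss_after_k_steps(a, train) for a in alphas])]

test = [sample_problem() for _ in range(20)]
print("learned step size:", alpha_star, "test loss:", loss_after_k_steps(alpha_star, test))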

For a set of robots (or agents) moving in a graph, two properties are highly desirable: confidentiality (i.e., a message between two agents must not pass through any intermediate agent) and efficiency (i.e., messages are delivered through shortest paths). These properties can be obtained if the \textsc{Geodesic Mutual Visibility} (GMV, for short) problem is solved: oblivious robots move along the edges of the graph, without collisions, to occupy some vertices that guarantee they become pairwise geodesic mutually visible. This means there is a shortest path (i.e., a ``geodesic'') between each pair of robots along which no other robots reside. In this work, we optimally solve GMV on finite hexagonal grids $G_k$. This, in turn, requires first solving a graph combinatorial problem, i.e., determining the maximum number of mutually visible vertices in $G_k$.
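A small general-graph check of the GMV property (illustrative only; it uses a square grid via networkx rather than the paper's hexagonal grids $G_k$): a placement is geodesic mutually visible if, for every pair of robots, removing the remaining robots still leaves a path of the original shortest-path length.

import networkx as nx
from itertools import combinations

def is_gmv(G, robots):
    robots = set(robots)
    for u, v in combinations(robots, 2):
        d = nx.shortest_path_length(G, u, v)
        # Remove the *other* robots and see whether a geodesic of the same length survives.
        H = G.subgraph(set(G.nodes) - (robots - {u, v}))
        if not nx.has_path(H, u, v) or nx.shortest_path_length(H, u, v) != d:
            return False
    return True

G = nx.grid_2d_graph(4, 4)                            # stand-in for a hexagonal grid
print(is_gmv(G, [(0, 0), (0, 3), (3, 0), (3, 3)]))    # corners: True
print(is_gmv(G, [(0, 0), (0, 1), (0, 2)]))            # collinear: the middle robot blocks, False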

This work presents several new results concerning the analysis of the convergence of binary, univariate, and linear subdivision schemes, all related to the {\it contractivity factor} of a convergent scheme. First, we prove that a convergent scheme cannot have a contractivity factor lower than one half. Since a lower contractivity factor implies faster convergence of the scheme, schemes with contractivity factor $\frac{1}{2}$, such as those generating spline functions, have the optimal convergence rate. Additionally, we provide further insights and conditions for the convergence of linear schemes and demonstrate their applicability in an improved algorithm for determining the convergence of such subdivision schemes.
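A quick numerical illustration of the optimal case (Chaikin's corner-cutting scheme, a spline-generating scheme with mask $\{\frac{1}{4}, \frac{3}{4}, \frac{3}{4}, \frac{1}{4}\}$): the maximum difference between consecutive control points shrinks by exactly one half per refinement level, i.e., the contractivity factor is $\frac{1}{2}$.

import numpy as np

def chaikin(p):
    # Each edge of the control polygon produces two new points (corner cutting).
    q = np.empty(2 * (len(p) - 1))
    q[0::2] = 0.75 * p[:-1] + 0.25 * p[1:]
    q[1::2] = 0.25 * p[:-1] + 0.75 * p[1:]
    return q

p = np.array([0.0, 3.0, 1.0, 4.0, 2.0])
for level in range(5):
    d = np.max(np.abs(np.diff(p)))                   # contractivity is measured on differences
    print(f"level {level}: max |difference| = {d:.6f}")
    p = chaikin(p)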

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
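For orientation, one widely cited output-based bound that this family of results improves upon is the Xu-Raginsky mutual-information bound: assuming the loss is $\sigma$-subgaussian, the training set $S$ has $n$ samples, and $W$ is the learned hypothesis,

$$\bigl|\mathbb{E}[\mathrm{gen}(S, W)]\bigr| \le \sqrt{\tfrac{2\sigma^2}{n}\, I(W; S)},$$

which can be vacuous for deterministic algorithms with continuous outputs, one of the issues that prediction-based bounds address.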
