
In this paper, we study dimension reduction techniques for large-scale controlled stochastic differential equations (SDEs). The drift of the considered SDEs contains a polynomial term satisfying a one-sided growth condition. Such nonlinearities in high-dimensional settings occur, e.g., when stochastic reaction-diffusion equations are discretized in space. We provide a brief discussion of existence, uniqueness and stability of solutions. (Almost) stability is then the basis for the new concepts of Gramians that we introduce and study in this work. With the help of these Gramians, dominant subspaces are identified, leading to a balancing-related, highly accurate reduced-order SDE. We provide an algebraic error criterion and an error analysis of the proposed model reduction schemes. The paper is concluded by applying our method to spatially discretized reaction-diffusion equations.
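
To make the balancing idea behind such a reduction concrete, the sketch below performs classical square-root balanced truncation of a linear system: it solves for Gramians, balances their factors, and projects onto the dominant subspace indicated by the singular values. The linear Lyapunov-equation Gramians, the random test system and the reduced order r=6 are assumptions of this illustration; the paper's Gramians for SDEs with polynomial drift generalize this construction.

```python
# Minimal sketch of square-root balanced truncation for a *linear* system, used here
# only to illustrate the Gramian-based balancing idea; the paper's Gramians for SDEs
# with polynomial drift generalize this construction.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def psd_factor(M, eps=1e-12):
    """Return F with M ~= F @ F.T for a symmetric positive semidefinite M."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return V * np.sqrt(np.clip(w, eps, None))

def balanced_reduction(A, B, C, r):
    """Project (A, B, C) onto the r-dimensional dominant balanced subspace."""
    # Reachability and observability Gramians: A P + P A^T + B B^T = 0, and dually for Q.
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    Fp, Fq = psd_factor(P), psd_factor(Q)
    U, s, Vt = np.linalg.svd(Fq.T @ Fp)               # singular values rank the balanced states
    Sr = np.diag(s[:r] ** -0.5)
    W = Fq @ U[:, :r] @ Sr                            # left projection
    V = Fp @ Vt[:r, :].T @ Sr                         # right projection (W.T @ V = I_r)
    return W.T @ A @ V, W.T @ B, C @ V, s

# Toy usage: reduce a stable random linear system from dimension 40 to 6.
rng = np.random.default_rng(0)
n = 40
A = rng.standard_normal((n, n)) - 10 * np.eye(n)      # shifted to be stable
B, C = rng.standard_normal((n, 2)), rng.standard_normal((1, n))
Ar, Br, Cr, hsv = balanced_reduction(A, B, C, r=6)
print(hsv[:8])                                        # Hankel-type values motivate the choice of r
```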

Related Content

There is extensive literature on perceiving road structures by fusing various sensor inputs, such as lidar point clouds and camera images, using deep neural nets. Leveraging the latest advances in neural architectures (such as transformers) and the bird's-eye-view (BEV) representation, road cognition accuracy keeps improving. However, how to cognize the ``road'' for automated vehicles where there are no well-defined ``roads'' remains an open problem. For example, finding paths inside intersections without HD maps is hard, since there is neither an explicit definition of ``roads'' nor explicit features such as lane markings. The idea of this paper comes from a proverb: it becomes a way when people walk on it. Although there are no ``roads'' from sensor readings, there are ``roads'' from the tracks of other vehicles. In this paper, we propose FlowMap, a path generation framework for automated vehicles based on traffic flows. FlowMap is built by extending our previous work RoadMap, a light-weight semantic map, with an additional traffic flow layer. A path generation algorithm on traffic flow fields (TFFs) is proposed to generate human-like paths. The proposed framework is validated using real-world driving data and is shown to generate paths for highly complicated intersections without using HD maps.
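
As a rough illustration of the "roads from tracks" idea (not the FlowMap implementation), the sketch below accumulates observed vehicle tracks into a grid-based traffic flow field and generates a path by repeatedly stepping along the locally averaged motion direction. The grid resolution, step length and the synthetic diagonal tracks are assumptions of this example.

```python
# Toy sketch (not the FlowMap implementation): build a grid-based traffic flow field
# (TFF) from observed vehicle tracks and follow the locally averaged motion direction
# to generate a path. Grid size, step length and the synthetic tracks are assumptions.
import numpy as np

def build_flow_field(tracks, grid_res=1.0, size=50):
    """Accumulate per-cell mean motion vectors from a list of (N_i, 2) track arrays."""
    flow = np.zeros((size, size, 2))
    count = np.zeros((size, size))
    for track in tracks:
        deltas = np.diff(track, axis=0)
        cells = (track[:-1] / grid_res).astype(int)
        for (i, j), d in zip(cells, deltas):
            if 0 <= i < size and 0 <= j < size:
                flow[i, j] += d
                count[i, j] += 1
    mask = count > 0
    flow[mask] /= count[mask][:, None]                 # mean motion direction per cell
    return flow

def follow_flow(flow, start, steps=80, step_len=0.8, grid_res=1.0):
    """Integrate a path by repeatedly stepping along the local flow direction."""
    path = [np.asarray(start, dtype=float)]
    for _ in range(steps):
        i, j = (path[-1] / grid_res).astype(int)
        if not (0 <= i < flow.shape[0] and 0 <= j < flow.shape[1]):
            break                                      # left the mapped area
        v = flow[i, j]
        if np.linalg.norm(v) < 1e-6:
            break                                      # no observed traffic here
        path.append(path[-1] + step_len * v / np.linalg.norm(v))
    return np.array(path)

# Synthetic usage: three tracks sweeping diagonally through an unmarked intersection.
tracks = [np.stack([np.linspace(5, 45, 60), np.linspace(5, 45, 60) + o], axis=1)
          for o in (-1.0, 0.0, 1.0)]
field = build_flow_field(tracks)
print(follow_flow(field, start=(6.0, 6.0))[:5])        # first few waypoints of the generated path
```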

Optimal model reduction for large-scale linear dynamical systems is studied. In contrast to most existing works, the systems under consideration are not required to be stable, neither in discrete nor in continuous time. As a consequence, the underlying rational transfer functions are allowed to have poles in general domains in the complex plane. In particular, this covers the case of specific conservative partial differential equations such as the linear Schr\"odinger and the undamped linear wave equation with spectra on the imaginary axis. By an appropriate modification of the classical continuous-time Hardy space $\mathcal{H}_2$, a new $\mathcal{H}_2$-like optimal model reduction problem is introduced and first-order optimality conditions are derived. As in the classical $\mathcal{H}_2$ case, these conditions exhibit a rational Hermite interpolation structure, for which an iterative model reduction algorithm is proposed. Numerical examples demonstrate the effectiveness of the new method.
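
For context, the sketch below runs the classical $\mathcal{H}_2$ interpolation fixed-point iteration (IRKA-style) on a symmetric, stable toy system, where the two-sided projection collapses to a one-sided rational Krylov projection and the interpolation points are the mirrored reduced poles. The symmetric stable test system and the stopping tolerance are assumptions; the paper's modified Hardy-space setting for poles off the open left half-plane is not reproduced here.

```python
# Classical H2 interpolation fixed point (IRKA-style) on a symmetric, stable toy
# system; for A = A^T and C = B^T the two-sided projection collapses to a one-sided
# rational Krylov projection. This is the classical stable setting only, not the
# paper's modified Hardy-space framework for poles off the open left half-plane.
import numpy as np

def h2_interpolation(A, B, r, iters=100, tol=1e-8):
    n = A.shape[0]
    shifts = np.linspace(0.1, 1.0, r)                   # initial interpolation points
    for _ in range(iters):
        V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, B).ravel()
                             for s in shifts])           # (s_i I - A)^{-1} B directions
        V, _ = np.linalg.qr(V)
        Ar = V.T @ A @ V                                 # Galerkin-projected dynamics
        new_shifts = np.sort(-np.linalg.eigvalsh(Ar))    # mirror the reduced poles
        if np.max(np.abs(new_shifts - shifts)) < tol:    # Hermite interpolation fixed point
            break
        shifts = new_shifts
    return Ar, V.T @ B, V

rng = np.random.default_rng(1)
M = rng.standard_normal((60, 60))
A = -(M @ M.T) - np.eye(60)                              # symmetric, stable test system
B = rng.standard_normal((60, 1))
Ar, Br, V = h2_interpolation(A, B, r=4)
print(np.linalg.eigvalsh(Ar))                            # reduced poles (all in the left half-plane)
```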

This paper develops projection-free algorithms for online convex optimization with stochastic constraints. We design an online primal-dual projection-free framework that can take any projection-free algorithm developed for online convex optimization without long-term constraints. With this general template, we deduce sublinear regret and constraint violation bounds for various settings. Moreover, for the case where the loss and constraint functions are smooth, we develop a primal-dual conditional gradient method that achieves $O(\sqrt{T})$ regret and $O(T^{3/4})$ constraint violations. Furthermore, for the setting where the loss and constraint functions are stochastic and strong duality holds for the associated offline stochastic optimization problem, we prove that the constraint violation can be reduced to have the same asymptotic growth as the regret.
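
The toy loop below illustrates the primal-dual projection-free template in its simplest form: each round takes a conditional-gradient (Frank-Wolfe) step on a Lagrangian-like gradient using a linear minimization oracle over the decision set, and updates a dual variable with the observed constraint value. The quadratic losses, the Euclidean-ball decision set, the noise levels and the step sizes are assumptions of this sketch, not the paper's algorithm.

```python
# Toy primal-dual conditional-gradient loop: a linear minimization oracle (LMO)
# replaces projections, and a dual variable tracks the (noisy) constraint value.
# Losses, decision set, noise and step sizes are assumptions of this sketch.
import numpy as np

def lmo_ball(g, radius=1.0):
    """argmin_{||v|| <= radius} <g, v> over the Euclidean ball."""
    norm = np.linalg.norm(g)
    return -radius * g / norm if norm > 0 else np.zeros_like(g)

rng = np.random.default_rng(0)
d, T = 5, 2000
x, lam = np.zeros(d), 0.0
target = rng.standard_normal(d) / np.sqrt(d)

for t in range(1, T + 1):
    grad_loss = x - target + 0.1 * rng.standard_normal(d)   # noisy gradient of 0.5||x - target||^2
    c_val = x.sum() - 0.5 + 0.05 * rng.standard_normal()     # noisy constraint value, want c(x) <= 0
    grad_c = np.ones(d)
    v = lmo_ball(grad_loss + lam * grad_c)                   # Frank-Wolfe step on the Lagrangian
    gamma = 2.0 / (t + 2)
    x = (1 - gamma) * x + gamma * v                          # stays in the ball, no projection needed
    lam = max(0.0, lam + c_val / np.sqrt(T))                 # dual ascent on the constraint

print("final x:", np.round(x, 3), "constraint value:", round(float(x.sum() - 0.5), 3))
```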

Decision making in advanced driver assistance systems generally involves the estimated trajectories of the surrounding objects. Multiple object tracking refers to the process of estimating these trajectories in real time, leveraging sensors to detect the objects. This paper deals with devising attacks on object tracking in automated vehicles. The vehicle is assumed to have a detection-based object tracking system that relies on multiple sensors and uses an estimator such as a Kalman filter for sensor fusion and state estimation. The attack goal is to modify the object's state estimated by the victim vehicle so as to put the vehicle in an unsafe situation. This goal is achieved by judiciously perturbing some or all of the sensor outputs corresponding to the object of interest over a desired horizon. A stochastic model predictive control (SMPC) problem is formulated to compute the sequence of perturbations, whereby hard constraints on the perturbations and probabilistic chance constraints on the object's state are imposed. The chance constraints ensure that some desired conditions for a successful attack are satisfied with a prespecified probability. Reasonable assumptions are then made to obtain a computationally tractable linear SMPC program. The approach is demonstrated on an adaptive cruise control system in a simulation environment, where successful sequential attacks are generated, leading the victim vehicle into dangerous driving situations, including collisions.
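
To see why a linear estimator makes such attacks tractable, the drastically simplified sketch below uses a 1-D constant-velocity Kalman filter: since the estimate is linear in the measurement, a bounded perturbation delta shifts it by K @ delta, and a greedy attacker can pick delta at each step to pull the estimated gap toward a target value. The model, the perturbation bound and the attacker goal are assumptions; the paper's SMPC formulation with chance constraints is not reproduced here.

```python
# Drastically simplified stand-in for the SMPC attack: a 1-D constant-velocity Kalman
# filter is linear in the measurement, so a bounded perturbation delta shifts the
# estimate by K @ delta; a greedy attacker picks delta each step to pull the estimated
# gap toward a target. Model, bounds and attacker goal are assumptions of this sketch.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])        # state: [gap to lead vehicle, relative speed]
H = np.array([[1.0, 0.0]])                   # gap-only measurement
Q, R = 0.01 * np.eye(2), np.array([[0.04]])

rng = np.random.default_rng(0)
x_true = np.array([30.0, 0.0])               # true gap stays at 30 m
x_hat, P = x_true.copy(), np.eye(2)
delta_max, target_gap = 5.0, 34.0            # per-step perturbation bound; make the gap look larger

for k in range(100):
    x_true = F @ x_true
    z = H @ x_true + np.sqrt(R[0, 0]) * rng.standard_normal(1)
    x_pred, P_pred = F @ x_hat, F @ P @ F.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # Attacker: choose a bounded perturbation that moves the updated estimate toward the target.
    desired_shift = target_gap - (x_pred + K @ (z - H @ x_pred))[0]
    delta = float(np.clip(desired_shift / K[0, 0], -delta_max, delta_max))
    # Victim: standard Kalman update on the perturbed measurement.
    x_hat = x_pred + K @ (z + delta - H @ x_pred)
    P = (np.eye(2) - K @ H) @ P_pred

print("true gap:", round(float(x_true[0]), 2), "estimated gap:", round(float(x_hat[0]), 2))
```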

We are concerned with the arithmetic of solutions to ordinary or partial nonlinear differential equations which are algebraic in the indeterminates and their derivatives. We call these solutions D-algebraic functions, and their equations are algebraic (ordinary or partial) differential equations (ADEs). The general purpose is to find ADEs whose solutions are specified rational expressions of solutions to given ADEs. For univariate D-algebraic functions, we show how to derive an ADE whose order is bounded by the sum of the orders of the given algebraic ODEs. In the multivariate case, we prove that this cannot be done with algebraic PDEs, and introduce a general algorithm for these computations. Using our accompanying Maple software, we discuss applications in physics, statistics, and symbolic integration.

The Internet of Things (IoT) is integrating the Internet and smart devices in almost every domain, such as home automation, e-healthcare systems, vehicular networks, industrial control and military applications. In these sectors, sensory data, which is collected from multiple sources and managed through intermediate processing by multiple nodes, is used for decision-making processes. Ensuring data integrity and keeping track of data provenance is a core requirement in such a highly dynamic context, since data provenance is an important tool for the assurance of data trustworthiness. Dealing with such requirements is challenging due to the limited computational and energy resources in IoT networks. This requires addressing several challenges such as processing overhead, secure provenance, bandwidth consumption and storage efficiency. In this paper, we propose ZIRCON, a novel zero-watermarking approach to establish end-to-end data trustworthiness in an IoT network. In ZIRCON, provenance information is stored in a tamper-proof centralized network database through watermarks generated at the source node before transmission. We provide an extensive security analysis showing the resilience of our scheme against passive and active attacks. We also compare our scheme with existing works based on performance metrics such as computational time, energy utilization and cost analysis. The results show that ZIRCON is robust against several attacks, lightweight, storage efficient, and better in energy utilization and bandwidth consumption compared to prior art.
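
A minimal sketch of the zero-watermarking principle follows (not ZIRCON's exact construction): the source node derives a watermark from the data and its provenance record without modifying the data, registers it with a trusted database, and a verifier later recomputes and compares it. The HMAC-SHA256 construction, the in-memory registry and the field layout are assumptions of this illustration.

```python
# Minimal zero-watermarking sketch (not ZIRCON's exact construction): the watermark is
# derived from the data and its provenance record, stored in a trusted registry, and
# recomputed for verification; the data itself is never modified. HMAC-SHA256 and the
# in-memory registry are assumptions of this illustration.
import hashlib
import hmac
import json

REGISTRY = {}  # stand-in for the tamper-proof centralized network database

def generate_watermark(node_key: bytes, packet_id: str, data: bytes, provenance: list) -> str:
    record = json.dumps({"id": packet_id, "prov": provenance}, sort_keys=True).encode()
    return hmac.new(node_key, data + record, hashlib.sha256).hexdigest()

def register(node_key: bytes, packet_id: str, data: bytes, provenance: list) -> None:
    """Source node: store the watermark before transmission."""
    REGISTRY[packet_id] = generate_watermark(node_key, packet_id, data, provenance)

def verify(node_key: bytes, packet_id: str, data: bytes, provenance: list) -> bool:
    """Receiver: recompute the watermark and compare against the registry."""
    expected = REGISTRY.get(packet_id)
    return expected is not None and hmac.compare_digest(
        expected, generate_watermark(node_key, packet_id, data, provenance))

key = b"per-node secret shared with the registry"
register(key, "pkt-001", b"23.5C", ["sensor-7", "gateway-2"])
print(verify(key, "pkt-001", b"23.5C", ["sensor-7", "gateway-2"]))   # True: intact
print(verify(key, "pkt-001", b"99.9C", ["sensor-7", "gateway-2"]))   # False: data tampered
```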

We consider a certifiable object pose estimation problem, where -- given a partial point cloud of an object -- the goal is not only to estimate the object pose, but also to provide a certificate of correctness for the resulting estimate. Our first contribution is a general theory of certification for end-to-end perception models. In particular, we introduce the notion of $\zeta$-correctness, which bounds the distance between an estimate and the ground truth. We show that $\zeta$-correctness can be assessed by implementing two certificates: (i) a certificate of observable correctness, which asserts whether the model output is consistent with the input data and prior information, and (ii) a certificate of non-degeneracy, which asserts whether the input data is sufficient to compute a unique estimate. Our second contribution is to apply this theory and design a new learning-based certifiable pose estimator. We propose C-3PO, a semantic-keypoint-based pose estimation model, augmented with the two certificates, to solve the certifiable pose estimation problem. C-3PO also includes a keypoint corrector, implemented as a differentiable optimization layer, that can correct large detection errors (e.g. due to the sim-to-real gap). Our third contribution is a novel self-supervised training approach that uses our certificate of observable correctness to provide the supervisory signal to C-3PO during training. In each training iteration, the model trains only on the observably correct input-output pairs. As training progresses, the fraction of observably correct input-output pairs grows, eventually reaching nearly 100% in many cases. Our experiments show that (i) standard semantic-keypoint-based methods outperform more recent alternatives, (ii) C-3PO further improves performance and significantly outperforms all the baselines, and (iii) C-3PO's certificates are able to discern correct pose estimates.
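
The snippet below sketches one plausible form of an observable-correctness check: the pose estimate is certified only if every input point lies close to the CAD model rendered at that pose (a one-sided Chamfer-style residual test). The threshold, the brute-force nearest-neighbour search and the toy cube data are assumptions, not C-3PO's exact certificate.

```python
# Plausible form of an observable-correctness check (not C-3PO's exact certificate):
# certify the pose only if every input point lies within eps of the CAD model rendered
# at the estimated pose (a one-sided Chamfer-style residual). Threshold, brute-force
# nearest neighbours and the toy cube data are assumptions of this sketch.
import numpy as np

def observably_correct(input_pc, model_pc, R, t, eps=0.02):
    """input_pc: (N, 3) partial point cloud; model_pc: (M, 3) CAD model points."""
    posed_model = model_pc @ R.T + t                          # model at the estimated pose
    d = np.linalg.norm(input_pc[:, None, :] - posed_model[None, :, :], axis=-1)
    residual = d.min(axis=1)                                  # each input point's distance to the model
    return bool(np.max(residual) < eps), residual

# Toy usage: a noisy partial view of a point-sampled cube and the ground-truth pose.
rng = np.random.default_rng(0)
model = rng.uniform(-0.5, 0.5, size=(500, 3))
R_est, t_est = np.eye(3), np.array([0.1, 0.0, 0.3])
partial = model[:200] @ R_est.T + t_est + 0.002 * rng.standard_normal((200, 3))
ok, _ = observably_correct(partial, model, R_est, t_est)
print("certified:", ok)
```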

Discrete Differential Equations (DDEs) are functional equations that relate polynomially a power series $F(t,u)$ in $t$ with polynomial coefficients in a "catalytic" variable $u$ and the specializations, say at $u=1$, of $F(t,u)$ and of some of its partial derivatives in $u$. DDEs occur frequently in combinatorics, especially in map enumeration. If a DDE is of fixed-point type, then its solution $F(t,u)$ is unique, and a general result by Popescu (1986) implies that $F(t,u)$ is an algebraic power series. Constructive proofs of algebraicity for solutions of fixed-point type DDEs were proposed by Bousquet-M\'elou and Jehanne (2006). Bostan et al. (2022) initiated a systematic algorithmic study of such DDEs of order 1. We generalize this study to DDEs of arbitrary order. First, we propose nontrivial extensions of algorithms based on polynomial elimination and on the guess-and-prove paradigm. Second, we design two brand-new algorithms that exploit the special structure of the underlying polynomial systems. Last but not least, we report on implementations that are able to solve highly challenging DDEs with a combinatorial origin.

Foundation models have achieved great advances in multi-task learning with a unified interface of unimodal and multimodal tasks. However, the potential of such multi-task learners has not been exploited during transfer learning. In this work, we present a universal parameter-efficient transfer learning method, termed Predict-Interpolate Tuning ($\pi$-Tuning), for vision, language, and vision-language tasks. It aggregates the parameters of lightweight task-specific experts learned from similar tasks to aid the target downstream task. The task similarities are predicted in a unified modality-independent space, yielding a scalable graph that captures task relationships. $\pi$-Tuning has several appealing benefits. First, it flexibly explores both intra- and inter-modal transferability between similar tasks to improve the accuracy and robustness of transfer learning, especially in data-scarce scenarios. Second, it offers a systematic solution for transfer learning with multi-task prediction-and-then-interpolation, compatible with diverse types of parameter-efficient experts, such as prompts and adapters. Third, an extensive study of task-level mutual benefits on 14 unimodal and 6 multimodal datasets shows that $\pi$-Tuning surpasses fine-tuning and other parameter-efficient transfer learning methods in both full-shot and low-shot regimes. The task graph also enables an in-depth, interpretable analysis of task transferability across modalities.
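
The sketch below illustrates the prediction-and-then-interpolation step in its simplest form: task similarities between a target task and the source experts are predicted in a shared embedding space, and the parameters of the most similar lightweight experts are interpolated with softmax weights. The cosine-similarity embeddings, the top-k selection and the flat parameter vectors are assumptions of this toy example, not $\pi$-Tuning's exact procedure.

```python
# Toy prediction-and-then-interpolation step: rank source experts by predicted task
# similarity in a shared embedding space and merge the top-k expert parameters with
# softmax weights. The cosine similarities, top-k selection and flat parameter
# vectors are assumptions of this example, not pi-Tuning's exact procedure.
import numpy as np

def interpolate_experts(expert_params, task_emb, target_emb, k=2, temperature=0.1):
    """expert_params: name -> flat parameter vector; task_emb: name -> task embedding."""
    names = list(expert_params)
    sims = np.array([task_emb[n] @ target_emb /
                     (np.linalg.norm(task_emb[n]) * np.linalg.norm(target_emb))
                     for n in names])
    top = np.argsort(sims)[-k:]                               # k most similar source tasks
    logits = sims[top] / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    merged = sum(w * expert_params[names[i]] for w, i in zip(weights, top))
    return merged, {names[i]: float(w) for w, i in zip(weights, top)}

rng = np.random.default_rng(0)
experts = {f"task{i}": rng.standard_normal(8) for i in range(4)}      # e.g., adapter weights
embeddings = {f"task{i}": rng.standard_normal(16) for i in range(4)}
target = embeddings["task2"] + 0.1 * rng.standard_normal(16)          # target resembles task2
merged, weights = interpolate_experts(experts, embeddings, target)
print(weights)                                                        # task2 should dominate
```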

Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference. We study the impact of model size in this setting, focusing on Transformer models for NLP tasks that are limited by compute: self-supervised pretraining and high-resource machine translation. We first show that even though smaller Transformer models execute faster per iteration, wider and deeper models converge in significantly fewer steps. Moreover, this acceleration in convergence typically outpaces the additional computational overhead of using larger models. Therefore, the most compute-efficient training strategy is to counterintuitively train extremely large models but stop after a small number of iterations. This leads to an apparent trade-off between the training efficiency of large Transformer models and the inference efficiency of small Transformer models. However, we show that large models are more robust to compression techniques such as quantization and pruning than small models. Consequently, one can get the best of both worlds: heavily compressed, large models achieve higher accuracy than lightly compressed, small models.
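
For reference, the sketch below applies the two compression steps discussed here, unstructured magnitude pruning and uniform 8-bit quantization, to a single weight matrix. The matrix shape, the sparsity level and the symmetric per-tensor quantizer are assumptions of this toy example.

```python
# Toy versions of the two compression steps studied here, applied to one weight
# matrix: unstructured magnitude pruning and symmetric 8-bit quantization. Matrix
# shape, sparsity level and the per-tensor quantizer are assumptions of this sketch.
import numpy as np

def magnitude_prune(W, sparsity=0.6):
    """Zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= threshold, W, 0.0)

def quantize_int8(W):
    """Symmetric per-tensor 8-bit quantization; returns codes and the dequantized weights."""
    scale = np.abs(W).max() / 127.0
    q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return q, q.astype(np.float32) * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((768, 768)).astype(np.float32)
W_pruned = magnitude_prune(W)
codes, W_deq = quantize_int8(W_pruned)
print("sparsity:", round(float((W_pruned == 0).mean()), 2),
      "quantization MSE:", float(np.mean((W_pruned - W_deq) ** 2)))
```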
