
We investigate time-adaptive Magnus-type integrators for the numerical approximation of a Mott transistor. The rapidly attenuating electromagnetic field calls for an adaptive choice of the time steps. As a basis for step selection, asymptotically correct defect-based estimators of the local error are employed. We analyze the error of the numerical approximation in the presence of the nonsmooth external potential and demonstrate the advantages of the adaptive approach.
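As an illustration of the adaptive idea, the sketch below applies a simple order-two exponential midpoint (Magnus-type) step to a linear system y'(t) = A(t) y(t) and adapts the step size from a local error estimate. This is a minimal sketch only: the error is estimated by step doubling rather than the defect-based estimator analyzed in the paper, and the rapidly decaying matrix A(t) is an invented stand-in for the physical problem.

```python
import numpy as np
from scipy.linalg import expm

def magnus_midpoint_step(A, t, y, h):
    """One step of the exponential midpoint rule (an order-2 Magnus-type method)."""
    return expm(h * A(t + 0.5 * h)) @ y

def adaptive_integrate(A, y0, t0, t1, tol=1e-8, h0=1e-2, order=2):
    """Adaptive time stepping; local error estimated by step doubling (stand-in
    for the defect-based estimator of the paper)."""
    t, y, h = t0, np.asarray(y0, dtype=complex), h0
    while t < t1:
        h = min(h, t1 - t)
        y_big = magnus_midpoint_step(A, t, y, h)
        y_half = magnus_midpoint_step(A, t, y, 0.5 * h)
        y_small = magnus_midpoint_step(A, t + 0.5 * h, y_half, 0.5 * h)
        err = np.linalg.norm(y_small - y_big)
        if err <= tol:                      # accept the step
            t, y = t + h, y_small
        # standard step-size controller with safety factor and growth limits
        h *= min(5.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))))
    return y

# illustrative example: i y' = H(t) y with a rapidly decaying field term
H = lambda t: -1j * np.array([[np.exp(-10 * t), 1.0], [1.0, -np.exp(-10 * t)]])
psi = adaptive_integrate(H, np.array([1.0, 0.0]), 0.0, 1.0)
```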

Related Content

Label smoothing (LS) is an emerging learning paradigm that uses a positively weighted average of the hard training labels and uniformly distributed soft labels. It was shown that LS serves as a regularizer for training data with hard labels and therefore improves the generalization of the model. It was later reported that LS even helps improve robustness when learning with noisy labels. However, we observe that the advantage of LS vanishes in the high-label-noise regime. Intuitively, this is due to the increased entropy of $\mathbb{P}(\text{noisy label}|X)$ when the noise rate is high; further applying LS then tends to "oversmooth" the estimated posterior. We proceed to discover that several learning-with-noisy-labels solutions in the literature instead relate more closely to negative/not label smoothing (NLS), which acts counter to LS and is defined as combining the hard and soft labels with a negative weight. We provide an understanding of the properties of LS and NLS when learning with noisy labels. Among other established properties, we theoretically show that NLS is more beneficial when the label noise rates are high. We provide extensive experimental results on multiple benchmarks to support our findings.
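A minimal sketch of the LS/NLS objective, assuming the target is the usual (possibly negative-weight) combination of the one-hot label and the uniform distribution; the paper's exact formulation and any clipping of negative targets may differ.

```python
import numpy as np

def smoothed_cross_entropy(logits, labels, smooth_rate, num_classes):
    """Cross-entropy against target = (1 - r) * one_hot + r * uniform.
    r > 0 gives ordinary label smoothing (LS); r < 0 gives negative
    label smoothing (NLS), pushing the target away from uniform."""
    one_hot = np.eye(num_classes)[labels]
    target = (1.0 - smooth_rate) * one_hot + smooth_rate / num_classes
    z = logits - logits.max(axis=1, keepdims=True)           # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(target * log_probs).sum(axis=1).mean()

logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
labels = np.array([0, 1])
print(smoothed_cross_entropy(logits, labels,  0.2, 3))   # LS
print(smoothed_cross_entropy(logits, labels, -0.2, 3))   # NLS
```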

The transport of traffic flow can be modeled by the advection equation. Finite difference and finite volume methods have been used to numerically solve this hyperbolic equation on a mesh. Advection has also been modeled discretely on directed graphs using the graph advection operator [4, 18]. In this paper, we first show that this graph advection operator can be reformulated as a finite difference scheme. We then propose the Directed Graph Advection Matérn Gaussian Process (DGAMGP) model, which incorporates the dynamics of the graph advection operator into the kernel of a trainable Matérn Gaussian process to effectively model traffic flow and its uncertainty as an advective process on a directed graph.
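For intuition, the following sketch builds a graph advection operator under one common convention, L = D_out − Aᵀ, and steps du/dt = −Lu with explicit Euler, which is exactly a finite-difference-style update; the paper's precise operator definition and discretization may differ.

```python
import numpy as np

def graph_advection_operator(A):
    """Graph advection operator L = D_out - A^T for a weighted directed graph.
    A[i, j] is the weight of edge i -> j; with du/dt = -L u, mass flows along
    edge directions and the total sum(u) is conserved (1^T L = 0)."""
    D_out = np.diag(A.sum(axis=1))
    return D_out - A.T

# directed 3-cycle: 0 -> 1 -> 2 -> 0
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
L = graph_advection_operator(A)

# explicit Euler in time gives a finite-difference-style update on the graph
u, dt = np.array([1.0, 0.0, 0.0]), 0.1
for _ in range(10):
    u = u - dt * L @ u
print(u, u.sum())   # total mass remains 1
```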

The discovery of structure from time series data is a key problem in fields of study working with complex systems. Most identifiability results and learning algorithms assume the underlying dynamics to be discrete in time. Comparatively few explicitly define dependencies over infinitesimal intervals of time, independently of the scale of observation and of the regularity of sampling. In this paper, we consider score-based structure learning for the study of dynamical systems. We prove that, for vector fields parameterized by a large class of neural networks, least-squares optimization with adaptive regularization schemes consistently recovers directed graphs of local independencies in systems of stochastic differential equations. Using this insight, we propose a score-based learning algorithm based on penalized Neural Ordinary Differential Equations (modelling the mean process), which we show to be applicable to the general setting of irregularly sampled multivariate time series and to outperform the state of the art across a range of dynamical systems.
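A rough sketch of the idea under simplifying assumptions: each coordinate of the vector field is a small MLP, a group-lasso penalty on the first-layer input columns encodes the recovered directed graph, and the ODE is unrolled with a single explicit Euler step per sampling interval instead of a full Neural ODE solver. The architecture, penalty weight, and training loop are illustrative, not the paper's algorithm.

```python
import torch
import torch.nn as nn

class VectorField(nn.Module):
    """One MLP per output coordinate, so a group-lasso penalty on the
    first-layer input columns encodes which inputs each coordinate uses
    (the candidate directed graph of local independencies)."""
    def __init__(self, d, hidden=32):
        super().__init__()
        self.nets = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.Tanh(), nn.Linear(hidden, 1))
             for _ in range(d)])

    def forward(self, x):
        return torch.cat([net(x) for net in self.nets], dim=-1)

    def group_lasso(self):
        # sum of norms of the first-layer input columns of every coordinate net
        return sum(net[0].weight.norm(dim=0).sum() for net in self.nets)

def fit_mean_process(X, t, lam=1e-2, epochs=500):
    """X: (T, d) trajectory sampled (possibly irregularly) at times t: (T,).
    Least-squares fit of the drift with one explicit Euler step per interval."""
    f = VectorField(X.shape[1])
    opt = torch.optim.Adam(f.parameters(), lr=1e-2)
    dt = (t[1:] - t[:-1]).unsqueeze(-1)
    for _ in range(epochs):
        pred = X[:-1] + dt * f(X[:-1])           # one-step Euler prediction
        loss = ((pred - X[1:]) ** 2).mean() + lam * f.group_lasso()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return f
```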

We consider the approximation of the inverse square root of regularly accretive operators in Hilbert spaces. The approximation is of rational type and comes from the use of the Gauss-Legendre rule applied to a special integral formulation of the problem. We derive sharp error estimates, based on the use of the numerical range, and provide some numerical experiments. For practical purposes, the finite dimensional case is also considered. In this setting, the convergence is shown to be of exponential type.
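As a minimal illustration, the sketch below approximates the inverse square root of a positive definite matrix by applying the Gauss-Legendre rule to one standard integral representation, $A^{-1/2} = \tfrac{2}{\pi}\int_0^\infty (t^2 I + A)^{-1}\,dt$; the paper works with a tailored formulation for regularly accretive operators, so this only conveys the flavor of the rational approximation.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def inv_sqrt_gauss_legendre(A, n=20):
    """Rational approximation of A^{-1/2} from
       A^{-1/2} = (2/pi) * int_0^inf (t^2 I + A)^{-1} dt,
    using an n-point Gauss-Legendre rule after the substitution
    t = x / (1 - x), which maps (0, 1) onto (0, inf)."""
    d = A.shape[0]
    x, w = leggauss(n)              # nodes and weights on [-1, 1]
    x, w = 0.5 * (x + 1.0), 0.5 * w # rescale to [0, 1]
    S, I = np.zeros_like(A, dtype=float), np.eye(d)
    for xi, wi in zip(x, w):
        t = xi / (1.0 - xi)
        jac = 1.0 / (1.0 - xi) ** 2  # dt/dx
        S += wi * jac * np.linalg.inv(t * t * I + A)
    return (2.0 / np.pi) * S

# sanity check on a symmetric positive definite matrix: A^{-1/2} A A^{-1/2} = I
A = np.array([[4.0, 1.0], [1.0, 3.0]])
X = inv_sqrt_gauss_legendre(A, n=40)
print(np.linalg.norm(X @ A @ X - np.eye(2)))   # should be small
```

The exponential-type convergence is visible numerically: after the substitution the integrand becomes $(x^2 I + (1-x)^2 A)^{-1}$, which is analytic on the integration interval for positive definite A.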

We consider the problem of estimating channel fading coefficients (modeled as a correlated Gaussian vector) via Downlink (DL) training and Uplink (UL) feedback in wideband FDD massive MIMO systems. Using rate-distortion theory, we derive optimal bounds on the achievable channel state estimation error in terms of the number of training pilots in DL ($\beta_{tr}$) and feedback dimension in UL ($\beta_{fb}$), with random, spatially isotropic pilots. It is shown that when the number of training pilots exceeds the channel covariance rank ($r$), the optimal rate-distortion feedback strategy achieves an estimation error decay of $\Theta(\mathrm{SNR}^{-\alpha})$ in estimating the channel state, where $\alpha = \min(\beta_{fb}/r, 1)$ is the so-called quality scaling exponent. We also discuss an "analog" feedback strategy, showing that it can achieve the optimal quality scaling exponent for a wide range of training and feedback dimensions with no channel covariance knowledge and simple signal processing at the user side. Our findings are supported by numerical simulations comparing various strategies in terms of channel state mean squared error and achievable ergodic sum-rate in DL with zero-forcing precoding.
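For concreteness, the predicted error decay can be tabulated directly from the stated formula; the covariance rank and feedback dimensions below are illustrative values, not those of the paper's simulations.

```python
import numpy as np

def quality_scaling_exponent(beta_fb, r):
    """alpha = min(beta_fb / r, 1), assuming the number of training pilots
    exceeds the channel covariance rank r."""
    return min(beta_fb / r, 1.0)

# predicted estimation-error scaling Theta(SNR^{-alpha}) for a rank-8 channel
snr = 10.0 ** (np.arange(0, 31, 10) / 10.0)      # 0, 10, 20, 30 dB
for beta_fb in (2, 4, 8, 16):
    alpha = quality_scaling_exponent(beta_fb, r=8)
    print(f"beta_fb={beta_fb:2d}  alpha={alpha:.2f}  error ~ {snr ** (-alpha)}")
```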

We introduce conservative integrators for the long-term integration of piecewise smooth systems with transversal dynamics and piecewise smooth conserved quantities. In essence, for a piecewise smooth dynamical system with piecewise defined conserved quantities whose trajectories cross its interface transversally, we combine Mannshardt's transition scheme and the Discrete Multiplier Method to obtain conservative integrators that preserve conserved quantities up to machine precision while maintaining the order of accuracy. We prove that the order of accuracy of the conservative integrators is preserved after crossing the interface in the case of a codimension-one number of conserved quantities. Numerical examples illustrate the preservation of the accuracy order and of the conserved quantities across the interface.
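The sketch below illustrates only the transition idea, in the spirit of Mannshardt's scheme: when a step crosses the interface, the crossing is located within the step so that the local order is not lost, and the step is completed on the other side. The piecewise oscillator is a made-up example, and the conservative correction of the Discrete Multiplier Method is not included.

```python
import numpy as np

# Piecewise-smooth oscillator: the stiffness jumps across the interface x = 0;
# on each side the local energy E = v^2/2 + k x^2/2 is conserved.
def k_of(x):
    return 1.0 if x >= 0.0 else 4.0

def rhs(y):
    x, v = y
    return np.array([v, -k_of(x) * x])

def rk4_step(y, h):
    k1 = rhs(y); k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2); k4 = rhs(y + h * k3)
    return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def step_with_transition(y, h, tol=1e-13):
    """If the step crosses x = 0, bisect on the substep length to land on the
    interface, then finish the remainder of the step on the other side."""
    y_new = rk4_step(y, h)
    if np.sign(y_new[0]) == np.sign(y[0]) or y[0] == 0.0:
        return y_new
    lo, hi = 0.0, h
    while hi - lo > tol * h:
        mid = 0.5 * (lo + hi)
        if np.sign(rk4_step(y, mid)[0]) == np.sign(y[0]):
            lo = mid
        else:
            hi = mid
    y_iface = rk4_step(y, hi)            # state (essentially) at the interface
    return rk4_step(y_iface, h - hi)     # finish the step on the new side

y, h = np.array([1.0, 0.0]), 1e-3
for _ in range(5000):
    y = step_with_transition(y, h)
```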

Most advanced control systems use sensor-based feedback for robust control. Tilt angle estimation is a key feedback signal for stabilizing many robotics and mechatronics applications. The tilt angle cannot be measured directly when the system under consideration is not attached to a stationary frame; in such systems it is usually estimated through indirect measurements. The precision of this estimation depends on the measurements, so the system can become expensive and complicated as the precision requirement increases. This research aims to develop a novel and economical method for estimating the tilt angle with a relatively simple system while maintaining precision. The method uses a pendulum as an inertial measurement sensor and estimates the tilt angle from the pendulum dynamics and parameter estimation models. Further, algorithms of varying complexity and accuracy are developed to allow customization for different applications. Furthermore, the developed algorithms are validated by experimental testing. The method focuses on developing algorithms that reduce the input measurement error in the Kalman filter.
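A generic sketch of the Kalman-filter stage is given below: a two-state filter (tilt angle and rate bias) driven by a rate measurement in the prediction and by an indirectly obtained angle measurement in the update, where the pendulum-derived angle would stand in for the measurement. The noise covariances and the synthetic signals are illustrative, not values from the study.

```python
import numpy as np

def tilt_kalman(angle_meas, rate_meas, dt, q_angle=1e-4, q_bias=1e-6, r_meas=1e-2):
    """Minimal 1-D tilt Kalman filter. State: [tilt angle, rate bias]."""
    x, P = np.zeros(2), np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    Q = np.diag([q_angle, q_bias])
    H = np.array([[1.0, 0.0]])
    est = []
    for z, omega in zip(angle_meas, rate_meas):
        # predict with the measured rate (bias-corrected by the state)
        x = F @ x + np.array([dt * omega, 0.0])
        P = F @ P @ F.T + Q
        # update with the (indirect) angle measurement
        S = H @ P @ H.T + r_meas
        K = (P @ H.T) / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)

# synthetic example: slowly varying tilt with noisy rate and angle measurements
t = np.arange(0.0, 10.0, 0.01)
theta = 0.3 * np.sin(0.5 * t)
rate = 0.15 * np.cos(0.5 * t) + np.random.normal(0, 0.02, t.size)
meas = theta + np.random.normal(0, 0.1, t.size)
theta_hat = tilt_kalman(meas, rate, dt=0.01)
```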

How to obtain good value estimates is one of the key problems in Reinforcement Learning (RL). Current value estimation methods, such as DDPG and TD3, suffer from unnecessary overestimation or underestimation bias. In this paper, we explore the potential of double actors, which has long been neglected, for better value function estimation in the continuous-action setting. First, we uncover and demonstrate the bias-alleviation property of double actors by building double actors upon a single critic and upon double critics, to handle the overestimation bias in DDPG and the underestimation bias in TD3, respectively. Next, we find, interestingly, that double actors help improve the exploration ability of the agent. Finally, to mitigate the uncertainty of the value estimates from double critics, we further propose to regularize the critic networks under the double-actors architecture, which gives rise to the Double Actors Regularized Critics (DARC) algorithm. Extensive experimental results on challenging continuous control tasks show that DARC significantly outperforms state-of-the-art methods with higher sample efficiency.
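To convey the flavor of double actors with regularized critics, the sketch below lets each target critic score the actions proposed by both target actors and keep the larger value (counteracting underestimation), and then pulls the two critic targets toward each other with a weight nu. This is an illustrative sketch of the idea only, not the exact DARC update rule; all names and the mixing scheme are assumptions.

```python
import torch

def double_actor_targets(critic1_t, critic2_t, actor1_t, actor2_t,
                         next_obs, reward, done, gamma=0.99, nu=0.1):
    """Illustrative target computation with double actors and softly
    regularized (coupled) double critics."""
    with torch.no_grad():
        a1, a2 = actor1_t(next_obs), actor2_t(next_obs)
        # each critic evaluates both actors' proposals and keeps the larger value
        q1 = torch.max(critic1_t(next_obs, a1), critic1_t(next_obs, a2))
        q2 = torch.max(critic2_t(next_obs, a1), critic2_t(next_obs, a2))
        # convex combination pulls the two critic estimates together
        q1_reg = (1 - nu) * q1 + nu * q2
        q2_reg = (1 - nu) * q2 + nu * q1
        target1 = reward + gamma * (1.0 - done) * q1_reg
        target2 = reward + gamma * (1.0 - done) * q2_reg
    return target1, target2
```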

We present R-LINS, a lightweight robocentric lidar-inertial state estimator that estimates robot ego-motion using a 6-axis IMU and a 3D lidar in a tightly coupled scheme. To achieve robustness and computational efficiency even in challenging environments, an iterated error-state Kalman filter (ESKF) is designed that recursively corrects the state by repeatedly generating new corresponding feature pairs. Moreover, a novel robocentric formulation is adopted, in which the state estimator is formulated with respect to a moving local frame rather than a fixed global frame as in standard world-centric lidar-inertial odometry (LIO), in order to prevent filter divergence and lower the computational cost. To validate generalizability and long-term practicality, extensive experiments are performed in indoor and outdoor scenarios. The results indicate that R-LINS outperforms lidar-only and loosely coupled algorithms, and achieves performance competitive with state-of-the-art LIO with close to an order-of-magnitude improvement in speed.
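The core relinearization loop of an iterated Kalman update can be sketched generically as below. In a lidar-inertial pipeline the measurement and its Jacobian would be rebuilt from re-associated feature pairs at every iteration, and the state would live in an error-state, robocentric formulation; here they are fixed callables for simplicity, so this is only a sketch of the iteration, not the R-LINS filter.

```python
import numpy as np

def iterated_kalman_update(x, P, z, h, H_jac, R, iters=5):
    """Generic iterated (Gauss-Newton style) Kalman measurement update:
    relinearize the measurement model h around the refined estimate."""
    x_i = x.copy()
    for _ in range(iters):
        H = H_jac(x_i)                              # Jacobian at current iterate
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        # standard IEKF iterate: correct relative to the prior mean x
        x_i = x + K @ (z - h(x_i) - H @ (x - x_i))
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_i, P_new
```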

This paper presents a safety-aware learning framework that employs an adaptive model learning method together with barrier certificates for systems with possibly nonstationary agent dynamics. To extract the dynamic structure of the model, we use a sparse optimization technique; the resulting model is then used in combination with control barrier certificates, which constrain feedback controllers only when safety is about to be violated. Under some mild assumptions, solutions to the constrained feedback-controller optimization are guaranteed to be globally optimal, and monotonic improvement of the feedback controller is thus ensured. In addition, we reformulate the (action-)value function approximation so that any kernel-based nonlinear function estimation method becomes applicable. We then employ a state-of-the-art kernel adaptive filtering technique for the (action-)value function approximation. The resulting framework is verified experimentally on a brushbot, whose dynamics are unknown and highly complex.
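The "constrain only when safety is about to be violated" behavior can be illustrated with a minimal control-barrier-function safety filter for control-affine dynamics x' = f(x) + g(x)u with barrier h(x) >= 0: the nominal control is returned unchanged whenever it already satisfies the barrier condition, and is otherwise projected onto the safe half-space. This is a generic sketch under these assumptions, not the paper's learned-model pipeline.

```python
import numpy as np

def cbf_safety_filter(u_nom, f, g, grad_h, h_val, alpha=1.0):
    """Solve  min ||u - u_nom||^2  s.t.  grad_h·f + (grad_h·g) u + alpha*h >= 0.
    With a single affine constraint the QP reduces to the closed-form
    projection below; u_nom is only modified when the constraint is active."""
    a = grad_h @ g                 # constraint row: a·u + b >= 0
    b = grad_h @ f + alpha * h_val
    if a @ u_nom + b >= 0.0:       # nominal control is already safe
        return u_nom
    # project u_nom onto the half-space {u : a·u + b >= 0}
    return u_nom - (a @ u_nom + b) / (a @ a) * a

# toy example: 1-D integrator x' = u, keep x <= 1 via the barrier h(x) = 1 - x
x, u_nom = 0.9, np.array([2.0])
u_safe = cbf_safety_filter(u_nom, f=np.array([0.0]), g=np.array([[1.0]]),
                           grad_h=np.array([-1.0]), h_val=1.0 - x)
print(u_safe)   # nominal control is scaled back just enough to stay safe
```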
