An asymptotic preserving and energy stable scheme for the Euler-Poisson system under the quasineutral scaling is designed and analysed. Correction terms are introduced in the convective fluxes and the electrostatic potential, which lead to the dissipation of mechanical energy and to entropy stability. The resolution of the fully-discrete scheme, semi-implicit in time and finite volume in space, involves two steps: the solution of an elliptic problem for the potential and an explicit evaluation of the density and velocity. The proposed scheme possesses several physically relevant attributes, such as entropy stability and consistency with the weak formulation of the continuous Euler-Poisson system. The AP property of the scheme, i.e. the boundedness of the mesh parameters with respect to the Debye length and its consistency with the quasineutral limit system, is shown. The results of numerical case studies are presented to substantiate the robustness and efficiency of the proposed method.
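
For context, one common non-dimensional form of the Euler-Poisson system under the quasineutral scaling reads as follows; the notation and sign conventions here are illustrative and may differ from the paper's:

$$
\partial_t \rho + \nabla\cdot(\rho u) = 0, \qquad
\partial_t(\rho u) + \nabla\cdot(\rho u \otimes u) + \nabla p(\rho) = -\rho \nabla\phi, \qquad
-\lambda^2 \Delta\phi = \rho - 1,
$$

where $\lambda$ is the scaled Debye length. Formally letting $\lambda \to 0$ enforces quasineutrality $\rho \to 1$ and degenerates the Poisson equation, which is precisely why a scheme whose mesh parameters stay bounded as $\lambda \to 0$ (the AP property) is valuable.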

Related Content

A new feature that enables seamless switching between iOS 8 and OS X Yosemite. > Apple products have always been designed to work together beautifully. But now they may really surprise you. With iOS 8 and OS X Yosemite, you’ll be able to do more wonderful things than ever before.

Generalized metric spaces are obtained by weakening the requirements (e.g., symmetry) on the distance function and by allowing it to take values in structures (e.g., quantales) that are more general than the set of non-negative real numbers. Quantale-valued metric spaces have gained prominence due to their use in quantitative reasoning on programs/systems, and for defining various notions of behavioral metrics. We investigate imprecision and robustness in the framework of quantale-valued metric spaces, when the quantale is continuous. In particular, we study the relation between the robust topology, which captures robustness of analyses, and the Hausdorff-Smyth hemi-metric. To this end, we define a preorder-enriched monad $\mathsf{P}_S$, called the Hausdorff-Smyth monad, and when $Q$ is a continuous quantale and $X$ is a $Q$-metric space, we relate the topology induced by the metric on $\mathsf{P}_S(X)$ with the robust topology on the powerset $\mathsf{P}(X)$ defined in terms of the metric on $X$.
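
For orientation, in the classical real-valued case the Hausdorff-Smyth hemi-metric on subsets is the directed Hausdorff distance (up to the choice of direction),

$$
d_{HS}(A, B) \;=\; \sup_{b \in B}\, \inf_{a \in A} d(a, b),
$$

which is asymmetric and may vanish on distinct sets, hence a hemi-metric rather than a metric; in the quantale-valued setting, the supremum and infimum are taken in the complete lattice underlying $Q$. This is only the standard pattern; the paper's precise definition on $\mathsf{P}_S(X)$ may differ.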

The acoustic variability of noisy and reverberant speech mixtures is influenced by multiple factors, such as the spectro-temporal characteristics of the target speaker and the interfering noise, the signal-to-noise ratio (SNR) and the room characteristics. This large variability poses a major challenge for learning-based speech enhancement systems, since a mismatch between the training and testing conditions can substantially reduce the performance of the system. Generalization to unseen conditions is typically assessed by testing the system with a new speech, noise or binaural room impulse response (BRIR) database different from the one used during training. However, the difficulty of the speech enhancement task can change across databases, which can substantially influence the results. The present study introduces a generalization assessment framework that employs a reference model trained on the test condition as a proxy for the difficulty of that condition. This makes it possible to disentangle the effect of the change in task difficulty from the effect of dealing with new data, and thus to define a new measure of generalization performance termed the generalization gap. The procedure is repeated in a cross-validation fashion by cycling through multiple speech, noise, and BRIR databases to accurately estimate the generalization gap. The proposed framework is applied to evaluate the generalization potential of a feedforward neural network (FFNN), Conv-TasNet, DCCRN and MANNER. We find that for all models, the performance degrades the most in speech mismatches, while good noise and room generalization can be achieved by training on multiple databases. Moreover, while recent models show higher performance in matched conditions, their performance substantially decreases in mismatched conditions and can become inferior to that of the FFNN-based system.
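
As a sketch of how such a measure could be computed (all names here are hypothetical, and the paper's exact definition may differ), the generalization gap contrasts the evaluated model with the reference model trained on the test condition:

```python
import numpy as np

def generalization_gap(eval_scores, ref_scores):
    """Generalization gap of a model under a test-condition mismatch.

    eval_scores: per-utterance scores (e.g., PESQ or SNR improvement) of the
        model trained on a *different* database than the test one.
    ref_scores: scores of a reference model trained directly on the test
        condition, used as a proxy for the difficulty of that condition.

    Returns the relative performance drop attributable to the mismatch
    itself, with the task-difficulty effect factored out.
    """
    eval_mean, ref_mean = np.mean(eval_scores), np.mean(ref_scores)
    return (ref_mean - eval_mean) / ref_mean  # fraction of reference performance lost
```

Averaging this quantity while cycling through speech, noise, and BRIR databases in a cross-validation fashion, as described above, factors out database-specific task difficulty.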

Bulk power grids are moving towards 100% renewable energy sources, and the overall dynamics of power system operations and electricity markets are changing. Electricity markets not only dispatch resources economically but also take into account various controllable actions, such as renewable curtailment, transmission congestion mitigation, and energy storage optimization, to ensure grid reliability. As a result, price formation in electricity markets has become quite complex. Traditional root cause analysis and statistical approaches are rendered inapplicable for analyzing and inferring the main drivers behind price formation in the modern grid and markets with variable renewable energy (VRE). In this paper, we propose a machine learning-based analysis framework to deconstruct the primary drivers of price spike events in modern electricity markets with high renewable energy. The outcomes can be utilized for various critical aspects of market design, renewable dispatch and curtailment, operations, and cyber-security applications. The framework can be applied to any ISO or market data; however, in this paper, it is applied to open-source, publicly available datasets from the California Independent System Operator (CAISO) and ISO New England (ISO-NE).
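
A minimal sketch of one plausible instantiation of such a framework, assuming a scikit-learn style pipeline; the file name, feature names, and spike definition below are hypothetical, not taken from the paper:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical CAISO/ISO-NE style features; column names are illustrative only.
df = pd.read_csv("market_data.csv")
features = ["load_forecast_error", "solar_curtailment_mw", "wind_ramp_mw",
            "congestion_shadow_price", "storage_soc", "net_import_mw"]
X = df[features]
y = (df["lmp"] > df["lmp"].quantile(0.95)).astype(int)  # label price-spike intervals

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

# Rank candidate drivers by how much shuffling each feature hurts spike prediction.
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

The ranked importances would then serve as a first-pass deconstruction of the drivers behind each spike event; the paper's actual framework may use different models and attribution methods.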

For Prognostics and Health Management (PHM) of Lithium-ion (Li-ion) batteries, many models have been established to characterize their degradation process. The existing empirical or physical models can reveal important information regarding the degradation dynamics. However, there are no general and flexible methods to fuse the information represented by those models. Physics-Informed Neural Network (PINN) is an efficient tool to fuse empirical or physical dynamic models with data-driven models. To take full advantage of various information sources, we propose a model fusion scheme based on PINN. It is implemented by developing a semi-empirical semi-physical Partial Differential Equation (PDE) to model the degradation dynamics of Li-ion batteries. When there is little prior knowledge about the dynamics, we leverage the data-driven Deep Hidden Physics Model (DeepHPM) to discover the underlying governing dynamic models. The uncovered dynamics information is then fused with that mined by the surrogate neural network in the PINN framework. Moreover, an uncertainty-based adaptive weighting method is employed to balance the multiple learning tasks when training the PINN. The proposed methods are verified on a public dataset of Lithium Iron Phosphate (LFP)/graphite batteries.
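
A minimal PyTorch sketch of the general pattern, assuming a toy first-order degradation residual in place of the learned DeepHPM dynamics and Kendall-style uncertainty weighting for the task balance; all specifics are illustrative, not the authors' exact model:

```python
import torch
import torch.nn as nn

class BatteryPINN(nn.Module):
    """Sketch of a PINN for a degradation state u(t); the PDE form is a placeholder."""
    def __init__(self):
        super().__init__()
        self.u_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
        # Learnable log-variances for uncertainty-based adaptive task weighting.
        self.log_var_data = nn.Parameter(torch.zeros(()))
        self.log_var_pde = nn.Parameter(torch.zeros(()))

    def forward(self, t):
        return self.u_net(t)

    def loss(self, t, u_obs):
        t = t.requires_grad_(True)
        u = self.u_net(t)
        du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        # Toy residual du/dt + k*u = 0; DeepHPM would instead learn this term.
        residual = du_dt + 0.01 * u
        l_data = ((u - u_obs) ** 2).mean()
        l_pde = (residual ** 2).mean()
        # Each task is weighted by exp(-log_var), with log_var as a regularizer.
        return (torch.exp(-self.log_var_data) * l_data + self.log_var_data
                + torch.exp(-self.log_var_pde) * l_pde + self.log_var_pde)
```

Minimizing this composite loss jointly fits the observations and the (learned or assumed) degradation dynamics, with the weighting between the two adapted automatically during training.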

The motions of mechanisms can be described in terms of screw coordinates by means of an exponential mapping. The product of exponentials (POE) describes the configuration of a chain of bodies connected by lower-pair joints. The kinematics is thus given in terms of joint screws. The POE serves to express loop constraints for mechanisms as well as the forward kinematics of serial manipulators. Besides the compact formulations, the POE gives rise to purely algebraic relations for derivatives with respect to the joint variables. It is known that the partial derivatives of the instantaneous joint screws (columns of the geometric Jacobian) are determined by Lie brackets of the joint screws. Less well-known is that derivatives of arbitrary order can be compactly expressed by Lie brackets. This has significance for the higher-order forward/inverse kinematics and dynamics of robots and multibody systems. Various relations were reported but are scattered in the literature and insufficiently recognized. This paper aims to provide a comprehensive overview of the relevant relations. Its original contributions are closed-form and recursive relations for higher-order derivatives and Taylor expansions of various kinematic relations. Their application to the kinematic control and dynamics of robotic manipulators and multibody systems is discussed.
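
For reference, the POE formula and the classical first-order relation read, in one common convention (these are standard results, not specific to this paper),

$$
C(\mathbf{q}) \;=\; \prod_{i=1}^{n} \exp\!\big(\widehat{Y}_i\, q_i\big), \qquad
\frac{\partial J_j}{\partial q_i} \;=\; [J_i, J_j], \quad i < j,
$$

where $Y_i$ are the joint screw coordinates in the reference configuration, $J_j$ the instantaneous joint screws (columns of the geometric Jacobian), and $[\cdot,\cdot]$ the Lie bracket on $se(3)$; the higher-order derivatives discussed in the paper arise from nested brackets of the $J_i$.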

The regular variation model for multivariate extremes decomposes the joint distribution of the extremes in polar coordinates, in terms of the angles and the norm of the random vector, as the product of two independent densities: the angular (spectral) measure and the density of the norm. The support of the angular measure is the surface of a unit hypersphere, and the density of the norm corresponds to a Pareto density. The dependence structure is determined by the angular measure on the hypersphere, and directions with high probability characterize the dependence structure among the elements of the random vector of extreme values. Previous applications of the regular variation model have not considered a probabilistic model for the angular density, and no statistical tests were applied. In this paper, circular and spherical distributions based on nonnegative trigonometric sums are proposed as flexible probabilistic models for the spectral measure that allow the application of statistical tests to make inferences about the dependence structure among extreme values. The proposed methodology is applied to real datasets from finance.
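
Schematically, multivariate regular variation states that, for a suitable norm and a Borel set $B$ on the unit hypersphere,

$$
\frac{\Pr\big(\|X\| > t r,\; X/\|X\| \in B\big)}{\Pr\big(\|X\| > t\big)}
\;\xrightarrow[t \to \infty]{}\; r^{-\alpha}\, H(B), \qquad r \ge 1,
$$

so the norm is asymptotically Pareto with tail index $\alpha$ and independent of the angle, whose limiting law is the angular (spectral) measure $H$ that is modelled here with nonnegative trigonometric sums.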

The circular uniform distribution on the unit circle is closed under summation; that is, the sum of independent circular uniformly distributed random variables is also circular uniformly distributed. In this study, it is shown that a family of circular distributions based on nonnegative trigonometric sums (NNTS) is also closed under summation. Given the flexibility of NNTS circular distributions in modelling multimodality and skewness, they are good candidates for constructing tests for circular uniformity capable of detecting different deviations from the null hypothesis. The circular uniform distribution is a member of the NNTS family, but in the NNTS parameter space it corresponds to a point on the boundary of the parameter space, implying that the regularity conditions are not satisfied when the parameters are estimated by the maximum likelihood method. Two NNTS tests for circular uniformity were developed by considering the standardised maximum likelihood estimator and the generalised likelihood ratio. Given the nonregularity, the critical values of the proposed NNTS circular uniformity tests were obtained via simulation and interpolated for any sample size by fitting regression models. The validity of the proposed tests was evaluated by generating NNTS models close to the circular uniformity null hypothesis.
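
In one common normalisation (following Fernández-Durán), an NNTS circular density has the form

$$
f(\theta) \;=\; \frac{1}{2\pi}\,\Bigg|\sum_{k=0}^{M} c_k\, e^{ik\theta}\Bigg|^2,
\qquad \sum_{k=0}^{M} |c_k|^2 = 1, \quad \theta \in [0, 2\pi),
$$

and circular uniformity is recovered when $c_k = 0$ for all $k \ge 1$ (so $|c_0| = 1$), which in the authors' parameterisation lies on the boundary of the parameter space, breaking the standard maximum likelihood regularity conditions under the null.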

Dense SLAM based on monocular cameras has immense application value in the field of AR/VR, especially when it is performed on a mobile device. In this paper, we propose a novel method that integrates a light-weight depth completion network into a sparse SLAM system using a multi-basis depth representation, so that dense mapping can be performed online even on a mobile phone. Specifically, we present a multi-basis depth completion network, called BBC-Net, optimized for the characteristics of traditional sparse SLAM systems. BBC-Net predicts multiple balanced bases and a confidence map from a monocular image with sparse points generated by off-the-shelf keypoint-based SLAM systems. The final depth is a linear combination of the predicted depth bases and can be optimized by tuning the corresponding weights. To seamlessly incorporate the weights into traditional SLAM optimization and to ensure efficiency and robustness, we design a set of depth weight factors, which makes our network a versatile plug-in module that facilitates easy integration into various existing sparse SLAM systems and significantly enhances global depth consistency through bundle adjustment. To verify the portability of our method, we integrate BBC-Net into two representative SLAM systems. The experimental results on various datasets show that the proposed method achieves better performance in monocular dense mapping than the state-of-the-art methods. We provide an online demo running on a mobile phone, which verifies the efficiency and mapping quality of the proposed method in real-world scenarios.
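
To make the multi-basis idea concrete, here is a minimal NumPy sketch (function and variable names are hypothetical) of fitting the combination weights to the sparse SLAM points; in the actual system, the weights would instead enter bundle adjustment through the depth weight factors rather than a one-shot least squares:

```python
import numpy as np

def fuse_depth_bases(bases, confidence, sparse_uv, sparse_depth):
    """Combine predicted depth bases into a dense map (illustrative sketch).

    bases: (K, H, W) multi-basis depth predictions from the completion network.
    confidence: (H, W) per-pixel confidence map.
    sparse_uv: (N, 2) integer pixel coordinates of SLAM keypoints.
    sparse_depth: (N,) triangulated depths at those keypoints.
    """
    # Sample each basis at the sparse keypoints: A[n, k] = basis_k(u_n, v_n).
    A = np.stack([b[sparse_uv[:, 1], sparse_uv[:, 0]] for b in bases], axis=1)
    w_conf = confidence[sparse_uv[:, 1], sparse_uv[:, 0]]
    # Confidence-weighted least squares for the K combination weights.
    w, *_ = np.linalg.lstsq(A * w_conf[:, None], sparse_depth * w_conf, rcond=None)
    # Dense depth = linear combination of the bases with the fitted weights.
    return np.tensordot(w, bases, axes=1)
```

The point of the representation is that only the few scalar weights need to be optimized per keyframe, which keeps the dense map consistent with the sparse geometry at negligible cost.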

The dominant NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g., sentiment classification, span-prediction based question answering or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information. Moreover, it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing, and to propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then proceed to take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we show that these approaches yield more robust models on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution to this topic takes inspiration from information geometry to derive a new gradient update rule which alleviates catastrophic forgetting issues during adaptation.
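
Schematically, distributionally robust optimization replaces the usual expected risk with a worst case over an uncertainty set $\mathcal{Q}$ of distributions,

$$
\min_{\theta}\; \sup_{q \in \mathcal{Q}}\; \mathbb{E}_{(x, y) \sim q}\big[\ell(x, y; \theta)\big],
$$

and the parametric reformulations referred to above make the inner supremum tractable by restricting $q$ to a parametric family, e.g. a learned reweighting of the training distribution; this is the generic template, not the thesis's exact formulation.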

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch can lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
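
Adversarial domain classifiers of this kind are commonly implemented with a gradient reversal layer in the style of Ganin & Lempitsky; a minimal PyTorch sketch of an image-level component (illustrative only, not the authors' exact architecture) follows:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class ImageLevelDomainClassifier(nn.Module):
    """Predicts source vs. target domain from backbone feature maps."""
    def __init__(self, in_ch=512):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(in_ch, 256, 1), nn.ReLU(),
                                  nn.Conv2d(256, 1, 1))

    def forward(self, feat, lam=1.0):
        # Reversed gradients push the backbone toward domain-invariant features
        # while the head itself learns to discriminate the domains.
        return self.head(GradReverse.apply(feat, lam))
```

Training the head with a binary domain loss while the reversed gradients flow into the detector backbone is what realizes the minimax game that the H-divergence analysis motivates.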
