
To assess the integrity of the developing nervous system, the Prechtl general movement assessment (GMA) is recognized for its clinical value in diagnosing neurological impairments in early infancy. GMA has been increasingly augmented through machine learning approaches aiming to scale up its application, reduce the costs of training human assessors, and further standardize the classification of spontaneous motor patterns. Available deep learning tools, all of which are based on single sensor modalities, however, remain considerably inferior to well-trained human assessors. These approaches are hardly comparable, as all models are designed, trained, and evaluated on proprietary, siloed data sets. With this study we propose a sensor fusion approach for assessing fidgety movements (FMs). FMs were recorded from 51 typically developing participants. We compared three different sensor modalities (pressure, inertial, and visual sensors). Various combinations and two sensor fusion approaches (late and early fusion) for infant movement classification were tested to evaluate whether a multi-sensor system outperforms single-modality assessments. Convolutional neural network (CNN) architectures were used to classify movement patterns. The performance of the three-sensor fusion (classification accuracy of 94.5%) was significantly higher than that of any single modality evaluated. We show that the sensor fusion approach is a promising avenue for automated classification of infant motor patterns. The development of a robust sensor fusion system may significantly enhance AI-based early recognition of neurofunctions, ultimately facilitating automated early detection of neurodevelopmental conditions.
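
As an illustration only, the following is a minimal late-fusion sketch in PyTorch: one small 1-D CNN per sensor stream, with per-modality class probabilities averaged at the end. The channel counts, window length, and the averaging fusion rule are placeholder assumptions and do not reflect the recording setup or architecture used in the study.

```python
# Hypothetical late-fusion sketch (PyTorch). Channel counts, window length,
# and the fusion rule are illustrative assumptions, not the study's exact setup.
import torch
import torch.nn as nn

class ModalityCNN(nn.Module):
    """1-D CNN encoder for one sensor stream (e.g., pressure, inertial, or pose)."""
    def __init__(self, in_channels: int, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

class LateFusion(nn.Module):
    """Average per-modality class probabilities (a simple late-fusion rule)."""
    def __init__(self, channel_counts, n_classes: int = 2):
        super().__init__()
        self.branches = nn.ModuleList(ModalityCNN(c, n_classes) for c in channel_counts)

    def forward(self, streams):        # list of per-modality tensors
        probs = [branch(x).softmax(dim=-1) for branch, x in zip(self.branches, streams)]
        return torch.stack(probs).mean(dim=0)

model = LateFusion(channel_counts=[16, 6, 34])   # assumed pressure, IMU, pose channels
dummy = [torch.randn(8, c, 512) for c in (16, 6, 34)]
print(model(dummy).shape)                        # -> torch.Size([8, 2])
```

An early-fusion variant would instead concatenate the streams along the channel dimension and feed a single CNN; the late-fusion form above keeps each modality's branch independent, which makes single-modality and fused evaluations directly comparable.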

Related content

A sensor (transducer) is a detection device that senses the quantity being measured and converts the sensed information, according to a defined rule, into an electrical signal or another required form of output, in order to meet requirements for the transmission, processing, storage, display, recording, and control of that information.

The distinctive feature of a polynomial parametric speed makes polynomial Pythagorean-hodograph (PH) curves attractive for the design of accurate and efficient application algorithms. We propose a robust path-following scheme for the construction of smooth spatial motions by exploiting PH spline curves. In order to cover a general configuration setting, we present a guidance law suitable both for fully-actuated vehicles and for the more common under-actuated vehicles, which cannot control all of their degrees of freedom. The robustness of the guidance law is enhanced by also accounting for the influence of wind or currents in the equations of motion. A selection of numerical experiments validates the effectiveness of the control strategy when $C^1$ spatial PH quintic interpolants are suitably considered for both kinematic and dynamic simulations.
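
To make the path-following idea concrete, here is a minimal sketch, not the paper's guidance law: an under-actuated planar unicycle (forward speed fixed, only heading actuated) steers toward a lookahead point on a generic parametric path. The path, gains, and lookahead distance are illustrative assumptions, and a simple sinusoid stands in for a PH quintic spline.

```python
# Minimal path-following sketch (not the paper's guidance law): a planar
# under-actuated unicycle steers toward a lookahead point on a parametric path.
import numpy as np

def path(s):                       # illustrative smooth path, not a PH quintic
    return np.array([s, 0.5 * np.sin(s)])

def simulate(T=20.0, dt=0.01, v=1.0, k_heading=2.0, lookahead=0.5):
    x = np.array([0.0, 1.0])       # start off the path
    theta, s = 0.0, 0.0
    for _ in range(int(T / dt)):
        target = path(s + lookahead)               # lookahead point on the path
        delta = target - x
        desired = np.arctan2(delta[1], delta[0])   # desired heading angle
        err = np.arctan2(np.sin(desired - theta), np.cos(desired - theta))
        theta += k_heading * err * dt              # only heading is actuated
        x = x + v * dt * np.array([np.cos(theta), np.sin(theta)])
        s += v * dt                                # advance the path parameter
    return x, path(s)

pos, ref = simulate()
print("final tracking error:", np.linalg.norm(pos - ref))
```

A PH parameterization would mainly change how `path(s)` and its arc length are evaluated (exactly, via the polynomial parametric speed), not the structure of the guidance loop.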

Federated learning (FL) is a collaborative and privacy-preserving Machine Learning paradigm, allowing the development of robust models without the need to centralise sensitive data. A critical challenge in FL lies in fairly and accurately allocating contributions from diverse participants. Inaccurate allocation can undermine trust, lead to unfair compensation, and thus leave participants with little incentive to join or actively contribute to the federation. Various remuneration strategies have been proposed to date, including auction-based approaches and Shapley-value-based methods, the latter offering a means to quantify the contribution of each participant. However, little to no work has studied the stability of these contribution evaluation methods. In this paper, we focus on calculating contributions using gradient-based model reconstruction techniques with Shapley values. We first show that baseline Shapley values do not accurately reflect clients' contributions, leading to unstable reward allocations amongst participants in a cross-silo federation. We then introduce \textsc{FedRandom}, a new method that mitigates these shortcomings with additional data samplings, and show its efficacy at increasing the stability of contribution evaluation in federated learning.
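
For reference, the sketch below shows the exact Shapley value computed over all client coalitions; `evaluate` is a stand-in for whatever utility the federation uses (in the paper's setting, a gradient-based model-reconstruction score on validation data), and the toy utility here is an invented set-coverage example. Exact enumeration is exponential in the number of clients, which is why sampling-based approximations are used in practice.

```python
# Illustrative exact Shapley computation over client coalitions; `evaluate` is a
# placeholder utility, not the paper's gradient-based reconstruction metric.
from itertools import combinations
from math import factorial

def shapley(clients, evaluate):
    """evaluate(frozenset_of_clients) -> utility (e.g., validation accuracy)."""
    n = len(clients)
    values = {c: 0.0 for c in clients}
    for c in clients:
        others = [o for o in clients if o != c]
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                values[c] += weight * (evaluate(s | {c}) - evaluate(s))
    return values

# Toy utility: clients A and B hold redundant data; C holds a unique item.
data = {"A": {1, 2}, "B": {1, 2}, "C": {3}}
evaluate = lambda s: len(set().union(*(data[c] for c in s))) if s else 0
print(shapley(list(data), evaluate))   # A and B split their shared credit; C keeps its own
```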

A well-established approach to proving progress properties such as deadlock-freedom and termination is to associate obligations with threads. For example, in most existing work the proof rule for lock acquisition prescribes a standard usage protocol by burdening the acquiring thread with an obligation to release the lock. The fact that the obligation creation is hardcoded into the acquire operation, however, rules out non-standard clients, e.g., where the release happens in a different thread. We overcome this limitation by instead having the blocking operations take the obligation creation operations required for the specific client scenario as arguments. We dub this simple instance of higher-order programming with auxiliary code Sassy. To illustrate Sassy, we extend HeapLang, a simple, higher-order, concurrent programming language, with erasable code and state. The resulting language gets stuck if no progress is made. Consequently, we can apply standard safety separation logic to compositionally reason about termination in a fine-grained concurrent setting. We validated Sassy by developing (non-foundational) machine-checked proofs of representative locks -- an unfair Spinlock (competitive succession), a fair Ticketlock (direct handoff succession) and the hierarchically constructed Cohortlock that is starvation-free if the underlying locks are starvation-free -- against our specifications using an encoding of the approach in the VeriFast program verifier for C and Java.
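
The following Python snippet is only meant to illustrate the "non-standard client" pattern that motivates parameterizing obligation creation: the lock is acquired in one thread and released in another. (Python's `threading.Lock`, unlike `RLock`, permits release from a different thread.) It is not the paper's HeapLang/VeriFast encoding, and the proof-theoretic machinery is of course absent.

```python
# A "non-standard client" in the abstract's sense: the lock is acquired in one
# thread and the release obligation is discharged in another thread.
import threading, time

lock = threading.Lock()
handed_over = threading.Event()

def acquirer():
    lock.acquire()          # this thread takes the lock ...
    handed_over.set()       # ... and hands the release duty to someone else

def releaser():
    handed_over.wait()
    time.sleep(0.1)         # simulate work done while the lock is held
    lock.release()          # a *different* thread releases the lock

t1, t2 = threading.Thread(target=acquirer), threading.Thread(target=releaser)
t1.start(); t2.start()
t1.join(); t2.join()
print("lock free again:", not lock.locked())
```

A proof rule that hard-codes "the acquiring thread must release" cannot give this program a specification, whereas passing the obligation-creation step as an argument to the blocking operation can.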

Filtering is concerned with online estimation of the state of a dynamical system from partial and noisy observations. In applications where the state is high dimensional, ensemble Kalman filters are often the method of choice. This paper establishes long-time accuracy of ensemble Kalman filters. We introduce conditions on the dynamics and the observations under which the estimation error remains small in the long-time horizon. Our theory covers a wide class of partially-observed chaotic dynamical systems, which includes the Navier-Stokes equations and Lorenz models. In addition, we prove long-time accuracy of ensemble Kalman filters with surrogate dynamics, thus validating the use of machine-learned forecast models in ensemble data assimilation.
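
For readers unfamiliar with the method, the sketch below implements the standard perturbed-observation ensemble Kalman filter analysis step in NumPy. The state dimension, observation operator, and noise levels are illustrative and are not taken from the paper.

```python
# Minimal ensemble Kalman filter analysis step (perturbed-observation form);
# shapes and noise levels are illustrative, not those used in the paper.
import numpy as np

def enkf_analysis(ensemble, y, H, R, rng):
    """ensemble: (n_state, n_members); y: (n_obs,); H: (n_obs, n_state); R: (n_obs, n_obs)."""
    n_state, m = ensemble.shape
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    Y = H @ ensemble
    Y_anom = Y - Y.mean(axis=1, keepdims=True)
    P_yy = Y_anom @ Y_anom.T / (m - 1) + R                 # innovation covariance
    P_xy = A @ Y_anom.T / (m - 1)                          # cross covariance
    K = P_xy @ np.linalg.solve(P_yy, np.eye(len(y)))       # Kalman gain
    y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
    return ensemble + K @ (y_pert - Y)                     # analysis ensemble

rng = np.random.default_rng(0)
ens = rng.normal(size=(3, 50))                             # 3-dim state, 50 members
H = np.eye(2, 3); R = 0.1 * np.eye(2)                      # observe the first two components
y = np.array([1.0, -1.0])
print(enkf_analysis(ens, y, H, R, rng).mean(axis=1))       # posterior mean pulled toward y
```

In a surrogate-dynamics setting, the forecast step that produces `ensemble` would come from a machine-learned model rather than the true dynamics; the analysis step itself is unchanged.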

In order to determine an optimal plan for a complex task, one often deals with dynamic and hierarchical relationships between several entities. Traditionally, such problems are tackled with optimal control, which relies on the optimization of cost functions; instead, a recent biologically-motivated proposal casts planning and control as an inference process. Active inference assumes that action and perception are two complementary aspects of life whereby the role of the former is to fulfill the predictions inferred by the latter. In this study, we present a solution, based on active inference, for complex control tasks. The proposed architecture exploits hybrid (discrete and continuous) processing, and it is based on three features: the representation of potential body configurations related to the objects of interest; the use of hierarchical relationships that enable the agent to flexibly expand its body schema for tool use; and the definition of potential trajectories related to the agent's intentions, used to infer and plan with dynamic elements at different temporal scales. We evaluate this deep hybrid model on a habitual task: reaching a moving object after having picked up a moving tool. We show that the model can tackle the presented task under different conditions. This study extends past work on planning as inference and advances an alternative direction to optimal control.


This paper addresses the problem of adaptively controlling the bias parameter in nonlinear opinion dynamics (NOD) to allocate agents into groups of arbitrary sizes for the purpose of maximizing collective rewards. In previous work, an algorithm based on the coupling of NOD with a multi-objective behavior optimization was successfully deployed as part of a multi-robot system in an autonomous task allocation field experiment. Motivated by the field results, in this paper we propose and analyze a new task allocation model that synthesizes NOD with an evolutionary game framework. We prove sufficient conditions under which it is possible to control the opinion state in the group to a desired allocation of agents between two tasks through an adaptive bias using decentralized feedback. We then verify the theoretical results with a simulation study of a collaborative evolutionary division of labor game.
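
As a rough illustration of the mechanism, the sketch below simulates one commonly used NOD form and adapts a shared bias with a simple integral-type feedback on the allocation error. The parameters, the heterogeneous predispositions, and this particular adaptation law are illustrative assumptions and are not the adaptive law or conditions analyzed in the paper.

```python
# Sketch of a common nonlinear-opinion-dynamics (NOD) form with a simple
# integral-type bias feedback that nudges the group split between two tasks
# toward a target fraction; parameters and the adaptation law are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, dt, steps = 20, 0.01, 3000
target = 0.7                                   # desired fraction on task 1 (z_i > 0)
d, u, alpha, gamma, k_b = 1.0, 1.0, 0.5, 0.2, 0.5

z = 0.1 * rng.normal(size=N)                   # opinion states; sign encodes the chosen task
theta = np.linspace(-1.0, 1.0, N)              # heterogeneous predispositions across agents
b = 0.0                                        # shared bias parameter, adapted online
A = (np.ones((N, N)) - np.eye(N)) / (N - 1)    # all-to-all averaging network

for _ in range(steps):
    frac = np.mean(z > 0)                      # current allocation to task 1
    b += k_b * (target - frac) * dt            # bias feedback on the allocation error
    z += dt * (-d * z + u * np.tanh(alpha * z + gamma * (A @ z)) + b + theta)

print(f"fraction on task 1: {np.mean(z > 0):.2f} (target {target})")
```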

Statistical learning under distribution shift is challenging when neither prior knowledge nor fully accessible data from the target distribution is available. Distributionally robust learning (DRL) aims to control the worst-case statistical performance within an uncertainty set of candidate distributions, but how to properly specify the set remains challenging. To enable distributional robustness without being overly conservative, in this paper, we propose a shape-constrained approach to DRL, which incorporates prior information about the way in which the unknown target distribution differs from its estimate. More specifically, we assume the unknown density ratio between the target distribution and its estimate is isotonic with respect to some partial order. At the population level, we provide a solution to the shape-constrained optimization problem that does not involve the isotonic constraint. At the sample level, we provide consistency results for an empirical estimator of the target in a range of different settings. Empirical studies on both synthetic and real data examples demonstrate the improved accuracy of the proposed shape-constrained approach.
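
To convey the flavor of the uncertainty set, the sketch below computes a finite-sample worst-case reweighted loss over weights that are non-decreasing in a given score (a discrete isotonic density-ratio constraint) and bounded above. This is a simplified illustration of the monotone-ratio idea, with an added bound `B`; it is not the paper's population-level solution or empirical estimator.

```python
# Worst-case reweighted empirical loss over weights that are non-decreasing in a
# score (discrete isotonic constraint), mean-one, and bounded by B -- solved as an LP.
import numpy as np
from scipy.optimize import linprog

def worst_case_risk(losses, scores, B=3.0):
    order = np.argsort(scores)                 # impose monotonicity along this order
    l = np.asarray(losses, float)[order]
    n = len(l)
    A_ub = np.zeros((n - 1, n))                # w[i] - w[i+1] <= 0  (w non-decreasing)
    A_ub[np.arange(n - 1), np.arange(n - 1)] = 1.0
    A_ub[np.arange(n - 1), np.arange(1, n)] = -1.0
    res = linprog(-l / n,                      # maximize mean(w * l)
                  A_ub=A_ub, b_ub=np.zeros(n - 1),
                  A_eq=np.ones((1, n)), b_eq=[n],      # weights average to one
                  bounds=[(0.0, B)] * n, method="highs")
    return -res.fun

rng = np.random.default_rng(0)
scores = rng.uniform(size=200)
losses = (scores - 0.3) ** 2 + 0.05 * rng.normal(size=200)
print("empirical risk:            ", losses.mean())
print("worst-case (isotonic) risk:", worst_case_risk(losses, scores))
```

The worst case is always at least the plain empirical risk (uniform weights are feasible), but it is far less conservative than allowing arbitrary bounded reweightings, since the weights must respect the assumed monotone ordering.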

We propose a procedure for the numerical approximation of invariance equations arising in the moment matching technique associated with reduced-order modeling of high-dimensional dynamical systems. The Galerkin residual method is employed to find an approximate solution to the invariance equation using a Newton iteration on the coefficients of a monomial basis expansion of the solution. These solutions to the invariance equations can then be used to construct reduced-order models. We assess the ability of the method to solve the invariance PDE system as well as to achieve moment matching and recover a system's steady-state behaviour for linear and nonlinear signal generators with system dynamics up to $n=1000$ dimensions.
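
As a toy illustration of the Galerkin/Newton construction, the sketch below solves a scalar invariance-type equation $\pi'(\omega)\,s(\omega) = f(\pi(\omega), l(\omega))$ with $s(\omega) = -\omega$, $f(x,u) = -2.5x + u + 0.1x^2$, $l(\omega) = \omega^2$ on $[-1,1]$, using Newton iteration on the coefficients of a monomial expansion and Gauss quadrature for the Galerkin projections. The test equation, basis degree, and quadrature are illustrative choices, not the paper's systems.

```python
# Toy Galerkin/Newton solve of a scalar invariance-type equation with a
# monomial basis; the test problem and numerical choices are illustrative.
import numpy as np

deg = 6                                                 # basis 1, w, ..., w^deg
nodes, weights = np.polynomial.legendre.leggauss(24)    # Gauss quadrature on [-1, 1]
powers = nodes[None, :] ** np.arange(deg + 1)[:, None]  # (deg+1, n_quad)

def residual(c):
    """Galerkin residuals <R(w), w^j> of  pi'(w)(-w) - (-2.5*pi + w^2 + 0.1*pi^2)."""
    pi = c @ powers
    dpi = (c[1:] * np.arange(1, deg + 1)) @ powers[:deg, :]
    R = dpi * (-nodes) - (-2.5 * pi + nodes**2 + 0.1 * pi**2)
    return powers @ (weights * R)                       # project onto each w^j

def jacobian(c, eps=1e-7):
    base, cols = residual(c), []
    for k in range(len(c)):
        dc = np.zeros_like(c); dc[k] = eps
        cols.append((residual(c + dc) - base) / eps)
    return np.column_stack(cols)

c = np.zeros(deg + 1)                                   # Newton on the coefficients
for _ in range(20):
    r = residual(c)
    if np.linalg.norm(r) < 1e-12:
        break
    c -= np.linalg.solve(jacobian(c), r)

print("coefficients:", np.round(c, 4))                  # low-order even terms dominate
print("residual norm:", np.linalg.norm(residual(c)))
```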

The Evidential regression network (ENet) estimates a continuous target and its predictive uncertainty without costly Bayesian model averaging. However, the target may be inaccurately predicted due to the gradient shrinkage problem of the ENet's original loss function, the negative log marginal likelihood (NLL) loss. In this paper, the objective is to improve the prediction accuracy of the ENet while maintaining its efficient uncertainty estimation by resolving the gradient shrinkage problem. A multi-task learning (MTL) framework, referred to as MT-ENet, is proposed to accomplish this aim. Within the MTL framework, we define the Lipschitz-modified mean squared error (MSE) loss as an additional loss and add it to the existing NLL loss. The Lipschitz-modified MSE loss is designed to mitigate gradient conflict with the NLL loss by dynamically adjusting its Lipschitz constant. By doing so, the Lipschitz MSE loss does not disturb the uncertainty estimation of the NLL loss. The MT-ENet enhances the predictive accuracy of the ENet without losing uncertainty estimation capability on synthetic datasets and real-world benchmarks, including drug-target affinity (DTA) regression. Furthermore, the MT-ENet shows remarkable calibration and out-of-distribution detection capability on the DTA benchmarks.
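
The sketch below conveys the two-loss idea: the standard Normal-Inverse-Gamma (NIG) evidential NLL plus an MSE term that switches to a linear branch (Huber-style) once the error exceeds a dynamically computed cap, so its gradient stays bounded and cannot overwhelm the NLL. The specific cap used here, derived from the predicted variance, is a placeholder assumption; the paper's exact Lipschitz constant is defined differently from the NIG parameters.

```python
# Hedged sketch: NIG evidential NLL + a gradient-capped MSE term (PyTorch).
# The cap below is a placeholder, not the paper's exact Lipschitz constant.
import torch

def nig_nll(y, gamma, nu, alpha, beta):
    """Standard NIG negative log marginal likelihood used in deep evidential regression."""
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * torch.log(torch.pi / nu)
            - alpha * torch.log(omega)
            + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega)
            + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))

def capped_mse(y, gamma, nu, alpha, beta):
    """MSE that becomes linear once the squared error exceeds a cap, so its
    gradient magnitude stays bounded (illustrative cap from the predicted variance)."""
    cap = (beta / (nu * (alpha - 1.0))).detach()      # placeholder Lipschitz-type bound
    err2 = (y - gamma) ** 2
    linear = 2.0 * torch.sqrt(cap) * (y - gamma).abs() - cap
    return torch.where(err2 <= cap, err2, linear)

y = torch.tensor([0.3, 2.0])
gamma = torch.tensor([0.0, 0.0], requires_grad=True)   # predicted mean (network output in practice)
nu, alpha, beta = torch.tensor([1.0, 1.0]), torch.tensor([2.0, 2.0]), torch.tensor([1.0, 1.0])

loss = (nig_nll(y, gamma, nu, alpha, beta) + capped_mse(y, gamma, nu, alpha, beta)).mean()
loss.backward()
print(gamma.grad)     # the large-error sample's MSE contribution is capped
```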

Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, and endeavours to extend this knowledge without targeting the original task result in catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task-incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern 1) a taxonomy and extensive overview of the state-of-the-art, 2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner, and 3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny ImageNet, the large-scale unbalanced iNaturalist dataset, and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare methods in terms of required memory, computation time, and storage.
