Predictive simulation of human motion could provide insight into optimal movement techniques. In repetitive or long-duration tasks, these simulations must predict fatigue-induced adaptation. However, most studies minimize cost-function terms related to actuator activations, assuming that this also minimizes fatigue. An additional modeling layer that accounts for the prior use of the muscles is needed to reveal adaptive strategies in response to decreased force-production capability. Here, we propose interfacing Xia's three-compartment fatigue dynamics model with rigid-body dynamics. A stabilization invariant was added to Xia's model. We simulated the maximal number of repetitions of a dumbbell biceps curl as an optimal control problem (OCP) using direct multiple shooting. We explored three cost functions (minimizing torque, fatigue, or both) and two OCP formulations (full-horizon and sliding-horizon approaches). We found that Xia's model, modified with the stabilization invariant (set to 10 or 5), was well suited to direct multiple shooting. Sliding-horizon OCPs achieved 20 to 21 repetitions. The kinematic strategy slowly deviated from a plausible dumbbell-lifting motion to a swinging strategy as the onset of fatigue increasingly compromised the ability to keep the arm vertical. In full-horizon OCPs, the latter kinematic strategy was used over the whole motion, resulting in 32 repetitions. We showed that sliding-horizon OCPs revealed a reactive strategy to fatigue when only torque was included in the cost function, whereas an anticipatory strategy was revealed when the fatigue term was included. Overall, the proposed approach has the potential to be a valuable tool for optimizing performance and helping reduce fatigue-related injuries in a variety of fields.
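As a concrete illustration of the fatigue layer, the sketch below integrates the three-compartment dynamics with a stabilization term that pulls the compartment sum back to 100%; the rate constants, the proportional drive, and the exact form of the stabilization term are our assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of Xia's three-compartment fatigue model. Compartments
# (as % of maximal force): m_a (active), m_f (fatigued), m_r (resting).
# F, R, LD, LR are illustrative rates; the stabilization term enforcing
# m_a + m_f + m_r = 100 is our reading of the abstract's "stabilization
# invariant" (whose weight S the abstract tests at 10 and 5).

F, R = 0.01, 0.002      # fatigue and recovery rates (1/s)
LD = LR = 10.0          # activation/deactivation drive rates (1/s)
S = 10.0                # stabilization weight (assumed)
TL = 30.0               # target load (% of maximal force)

def drive(m_a, m_r):
    """Proportional activation-deactivation drive C(t)."""
    if m_a < TL:
        return LD * min(TL - m_a, m_r)   # recruit from the resting pool
    return LR * (TL - m_a)               # deactivate surplus

def rhs(t, y):
    m_a, m_f, m_r = y
    c = drive(m_a, m_r)
    defect = 100.0 - (m_a + m_f + m_r)   # should stay ~0
    return [c - F * m_a,
            F * m_a - R * m_f,
            -c + R * m_f + S * defect]   # stabilization pulls the sum to 100

sol = solve_ivp(rhs, (0.0, 600.0), [0.0, 0.0, 100.0], max_step=0.1)
print("final (m_a, m_f, m_r):", sol.y[:, -1])
```

Summing the three right-hand sides leaves only the term S * defect, so the compartment total decays exponentially back to 100% under numerical drift, which is what makes the formulation usable inside direct multiple shooting.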
Numerical simulation of fluids plays an essential role in modeling many physical phenomena; it enables technological advancements, contributes to sustainable practices, and expands our understanding of various natural and engineered systems. Calculating heat transfer in fluid flow through simple flat channels is a relatively easy task for various simulation methods. However, once the channel geometry becomes more complex, numerical simulations become a bottleneck in optimizing wall geometries. We present a combination of accurate numerical simulations of arbitrary, flat, and non-flat channels with machine learning models predicting the drag coefficient and Stanton number. We show that convolutional neural networks (CNNs) can accurately predict the target properties at a fraction of the cost of numerical simulations. We use the CNN models in a virtual high-throughput screening approach to explore a large number of possible, randomly generated wall architectures. Data augmentation was applied to the existing geometry data, adding newly generated training samples that share the same heat-transfer parameters, to improve the models' generalization. The general approach is not only applicable to the simple flow setup presented here but can be extended to more complex tasks, such as multiphase or even reactive unit operations in chemical engineering.
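For illustration, a minimal CNN of this kind could look as follows; the architecture, input resolution, and the flip-based augmentation (physics-preserving only for a symmetric flow setup) are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the paper's architecture): a small CNN mapping a
# wall-geometry height map to two scalars, drag coefficient and Stanton number.
class ChannelCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 2),  # [drag coefficient, Stanton number]
        )

    def forward(self, x):
        return self.head(self.features(x))

# One augmentation that leaves the targets unchanged for a symmetric setup:
# a spanwise flip of the height map yields the same drag and heat transfer.
def augment(height_map: torch.Tensor) -> torch.Tensor:
    return torch.flip(height_map, dims=[-1])

model = ChannelCNN()
x = torch.randn(8, 1, 64, 64)   # batch of synthetic geometries
print(model(x).shape)           # torch.Size([8, 2])
```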
Stochastic inversion problems are typically encountered when one wants to quantify the uncertainty affecting the inputs of computer models. They consist of estimating input distributions from noisy, observable outputs, and such problems are increasingly examined in Bayesian contexts where the targeted inputs are affected by stochastic uncertainties. In this regard, a stochastic input can be qualified as meaningful if it explains most of the output uncertainty. While such inverse problems are characterized by identifiability conditions, signal-to-noise constraints that formalize this meaningfulness should be accounted for within the definition of the model, prior to inference. This article investigates the possibility of forcing a solution to be meaningful in the context of parametric uncertainty quantification, through the tools of global sensitivity analysis and information theory (variance, entropy, Fisher information). Such forcings mainly take the form of constraints placed on the input covariance and can be made explicit by considering linear or linearizable models. Simulated experiments indicate that, when injected into the modeling process, these constraints can limit the influence of measurement or process noise on the estimation of the input distribution, and they suggest promising extensions to a fully nonlinear framework, for example through the use of linear Gaussian mixtures.
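For a linear model $Y = HX + U$, the meaningfulness requirement can be written as a lower bound on the share of output variance explained by the input; the sketch below checks such a constraint for illustrative matrices and an illustrative threshold.

```python
import numpy as np

# Sketch of a "meaningfulness" (signal-to-noise) constraint for a linear
# model Y = H X + U, with X ~ N(m, C) the uncertain input and U ~ N(0, Sigma)
# the noise. All matrices and the threshold rho below are illustrative.
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 3))        # linear(ized) model
C = np.diag([0.5, 1.0, 0.2])           # candidate input covariance
Sigma = 0.1 * np.eye(5)                # noise covariance
rho = 0.9                              # required share of explained variance

signal = np.trace(H @ C @ H.T)         # output variance due to the input
noise = np.trace(Sigma)
print("explained share:", signal / (signal + noise))
print("constraint satisfied:", signal / (signal + noise) >= rho)
```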
The main computational cost per iteration of adaptive cubic regularization methods for solving large-scale nonconvex problems is the computation of the step $s_k$, which requires an approximate minimizer of the cubic model. We propose a new approach in which this minimizer is sought in a low-dimensional subspace that, in contrast to classical approaches, is reused for a number of iterations. A regularized Newton step to correct $s_k$ is also incorporated whenever needed. We show that our method increases efficiency while preserving the worst-case complexity of classical cubic regularization methods. We also explore the use of rational Krylov subspaces for the subspace minimization, to overcome some of the issues encountered when using polynomial Krylov subspaces. We provide several experimental results illustrating the gains of the new approach compared to classical implementations.
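The subspace step itself is easy to illustrate: project the cubic model onto an orthonormal basis $V$ and minimize the small reduced problem, as in the sketch below (synthetic data, generic solver; the paper's Krylov construction and reuse strategy are not reproduced here).

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the subspace step in cubic regularization: the cubic model
#   m(s) = g's + 0.5 s'Bs + (sigma/3) ||s||^3
# is minimized over s = V y, with V an orthonormal basis of a (reused)
# low-dimensional subspace; since V is orthonormal, ||V y|| = ||y||.
rng = np.random.default_rng(1)
n, k, sigma = 100, 5, 1.0
B = rng.standard_normal((n, n)); B = 0.5 * (B + B.T)   # symmetric "Hessian"
g = rng.standard_normal(n)

V, _ = np.linalg.qr(rng.standard_normal((n, k)))       # orthonormal basis
Bk, gk = V.T @ B @ V, V.T @ g                          # reduced model

def cubic_model(y):
    return gk @ y + 0.5 * y @ Bk @ y + (sigma / 3) * np.linalg.norm(y) ** 3

y = minimize(cubic_model, np.zeros(k)).x
s = V @ y                                              # step in the full space
print("model decrease:", cubic_model(y))
```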
Hardware implementations of Spiking Neural Networks (SNNs) represent a promising approach to edge computing for applications that require low power and low latency and that cannot resort to external cloud-based computing services. However, most solutions proposed so far either support only relatively small networks or take up significant hardware resources to implement large networks. Realizing large-scale and scalable SNNs requires an efficient asynchronous communication and routing fabric that enables the design of multi-core architectures. In particular, the core interface that manages inter-core spike communication is a crucial component, as it represents the Power-Performance-Area (PPA) bottleneck, especially in the arbitration architecture and the routing memory. In this paper, we present an arbitration mechanism, with the corresponding asynchronous encoding pipeline circuits, based on hierarchical arbiter trees. The proposed scheme reduces latency by more than 70% in sparse-event mode compared to state-of-the-art arbitration architectures, with lower area cost. The routing memory makes use of an asynchronous Content Addressable Memory (CAM) with Current Sensing Completion Detection (CSCD), which saves approximately 46% in energy and achieves a 40% increase in throughput compared to conventional asynchronous CAM using configurable delay lines, at the cost of only a slight increase in area. In addition, because they radically reduce the core-interface resources in multi-core neuromorphic processors, the arbitration and CAM architectures we propose can also be applied to a wide range of general asynchronous circuits and systems.
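To convey the idea of hierarchical arbitration, the following behavioral sketch models a binary arbiter tree in software; it is only a topological illustration, not the asynchronous circuit design proposed in the paper.

```python
# Behavioral sketch of a hierarchical (binary-tree) arbiter of the kind used
# for inter-core spike arbitration: each internal node is a 2-input arbiter
# that forwards one of its children's pending requests, alternating on ties
# (mimicking a fair mutex element).
toggles = {}  # per-node state used to alternate grants

def arbitrate(requests, lo=0, hi=None):
    """Return the index of the granted request line, or None if none pending."""
    if hi is None:
        hi = len(requests)
    if hi - lo == 1:                       # leaf: a single request line
        return lo if requests[lo] else None
    mid = (lo + hi) // 2
    left = arbitrate(requests, lo, mid)
    right = arbitrate(requests, mid, hi)
    if left is None or right is None:      # at most one side requests
        return left if right is None else right
    flip = toggles.get((lo, hi), False)    # both sides request: alternate
    toggles[(lo, hi)] = not flip
    return right if flip else left

print(arbitrate([0, 1, 0, 1]))  # -> 1, then 3 on the next identical call
```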
Computer-based simulations of non-invasive cardiac electrical outputs, such as electrocardiograms and body surface potential maps, usually entail severe computational costs due to the need to capture fine-scale processes and to the complexity of the heart-torso morphology. In this work, we model cardiac electrical outputs by employing a coupled model consisting of a reaction-diffusion model on the heart - either the bidomain model or the more efficient pseudo-bidomain model - and an elliptic model in the torso. We then solve the coupled problem with a segregated, staggered-in-time numerical scheme that allows for an independent and infrequent solution in the torso region. To further reduce the computational load, the main novelty of this work is the introduction of an interpolation method at the interface between the heart and torso domains, which enables the use of non-conforming meshes and the application of the numerical framework to realistic cardiac and torso geometries. The reliability and efficiency of the proposed scheme are tested against the corresponding state-of-the-art bidomain-torso model. Furthermore, the investigation of the interface interpolation method provides insights into the influence of the torso spatial discretization and of the geometrical non-conformity on the simulation results and their clinical relevance.
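The interface step can be pictured as transferring potentials between two point clouds that do not share nodes; the sketch below uses radial-basis-function interpolation as one plausible choice of operator (the paper's operator may differ).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sketch of interface interpolation between non-conforming meshes: the
# extracellular potential computed on heart-surface nodes is transferred to
# the (coarser, non-matching) torso-surface nodes. Node sets and the sampled
# field below are synthetic stand-ins.
rng = np.random.default_rng(2)
heart_nodes = rng.uniform(size=(500, 3))           # fine interface mesh
torso_nodes = rng.uniform(size=(120, 3))           # coarse, non-conforming mesh
u_heart = np.sin(heart_nodes @ np.array([3.0, 1.0, 2.0]))  # potential samples

interp = RBFInterpolator(heart_nodes, u_heart, neighbors=20)
u_torso = interp(torso_nodes)                      # boundary data for the torso solve
print(u_torso.shape)                               # (120,)
```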
The use of air traffic management (ATM) simulators for planning and operations can be challenging due to their modelling complexity. This paper presents XALM (eXplainable Active Learning Metamodel), a three-step framework integrating active learning and SHAP (SHapley Additive exPlanations) values into simulation metamodels to support ATM decision-making. XALM efficiently uncovers hidden relationships among input and output variables in ATM simulators, which are usually of interest in policy analysis. Our experiments show that XALM's predictive performance is comparable to that of an XGBoost metamodel while requiring fewer simulations. Additionally, XALM exhibits superior explanatory capabilities compared to non-active-learning metamodels. Using the `Mercury' (flight and passenger) ATM simulator, XALM is applied to a real-world scenario at Paris Charles de Gaulle airport, extending an arrival manager's range and scope by analysing six variables. This case study illustrates XALM's effectiveness in enhancing simulation interpretability and in understanding variable interactions. By addressing computational challenges and improving explainability, XALM complements traditional simulation-based analyses. Lastly, we discuss two practical approaches for further reducing the computational burden of the metamodelling: we introduce a stopping criterion for active learning based on the inherent uncertainty of the metamodel, and we show how the simulations used for the metamodel can be reused across key performance indicators, thus decreasing the overall number of simulations needed.
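The stopping criterion can be sketched with a generic Gaussian-process metamodel whose predictive uncertainty drives both the sampling and the termination; the toy simulator, tolerance, and kernel below are illustrative assumptions, not Mercury or XALM itself.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Sketch of an active-learning loop with an uncertainty-based stopping
# criterion, in the spirit of XALM. `simulator` stands in for an expensive
# ATM simulator run on a single input variable.
def simulator(x):
    return np.sin(3 * x).ravel() + 0.1 * x.ravel() ** 2

rng = np.random.default_rng(3)
pool = np.linspace(0, 3, 200).reshape(-1, 1)        # candidate inputs
X = pool[rng.choice(len(pool), 5, replace=False)]   # initial design
y = simulator(X)
gp = GaussianProcessRegressor(normalize_y=True)

for it in range(50):
    gp.fit(X, y)
    _, std = gp.predict(pool, return_std=True)
    if std.max() < 0.05:                            # stopping criterion
        print(f"converged after {len(X)} simulations")
        break
    x_new = pool[[std.argmax()]]                    # most uncertain candidate
    X = np.vstack([X, x_new])
    y = np.append(y, simulator(x_new))
```

Because the metamodel is fit once and queried everywhere, the same set of simulations can subsequently be reused to train metamodels for other key performance indicators, as the abstract notes.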
A physics-informed machine learning model, in the form of a multi-output Gaussian process, is formulated using the Euler-Bernoulli beam equation. Given appropriate datasets, the model can be used to regress the analytical value of the structure's bending stiffness, interpolate responses, and make probabilistic inferences on latent physical quantities. The developed model is applied to a numerically simulated cantilever beam, where the regressed bending stiffness is evaluated and the influence of measurement noise on the prediction quality is investigated. Further, the regressed probabilistic stiffness distribution is used in a structural health monitoring context, where the Mahalanobis distance is employed to reason about the possible location and extent of damage in the structural system. To validate the developed framework, an experiment is conducted and measured heterogeneous datasets are used to update the assumed analytical structural model.
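The damage-indication step is straightforward to illustrate: new stiffness estimates are scored by their Mahalanobis distance from the healthy-state distribution, as in the sketch below (all numbers synthetic).

```python
import numpy as np

# Sketch of the damage-indication step: given the regressed (probabilistic)
# bending-stiffness estimate, the Mahalanobis distance of new estimates from
# the healthy-state distribution flags possible damage.
mu_healthy = np.array([2.1e6])          # mean EI from the healthy state (N m^2)
cov_healthy = np.array([[1.0e10]])      # covariance of the healthy estimate

def mahalanobis(x, mu, cov):
    d = x - mu
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

for ei in [2.09e6, 1.8e6]:              # second value mimics a stiffness loss
    print(ei, "->", mahalanobis(np.array([ei]), mu_healthy, cov_healthy))
```

The first estimate lies about 0.1 standard deviations from the healthy mean, the second about 3, so only the latter would be flagged as potential damage.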
We consider a Celestial Mechanics model: the spin-orbit problem with a dissipative tidal torque, which is a singular perturbation of a conservative system. The goal of this paper is to show that it is possible to compute quasi-periodic attractors accurately and reliably for parameter values extremely close to the breakdown, and therefore to obtain information on the mathematical phenomena at breakdown. The method we use incorporates, at the same time, numerical and rigorous improvements. Among them: (i) the formalism is based on studying the time-one map of the spin-orbit problem (which reduces the dimensionality of the problem) and has mathematical advantages; (ii) very accurate integration of the ODE (high-order Taylor methods implemented with extended precision) for the map and its jets; (iii) a very efficient KAM method for maps, which computes the attractor and its tangent spaces (a quadratically convergent step with low storage requirements and low operation count); (iv) the algorithms are backed by a rigorous a-posteriori KAM theorem, which establishes that if the algorithm produces an approximate solution of the functional equation with reasonable condition numbers, then there is a true solution nearby; and (v) the continuation algorithm is guaranteed to reach arbitrarily close to the border of existence if it is given enough computer resources. As a byproduct of the accuracy that we maintain until breakdown, we study several scale-invariant observables of the tori that are used in the renormalization group analysis in infinite-dimensional spaces. In contrast with previously studied simple models, the behavior at breakdown of the spin-orbit problem does not satisfy standard scaling relations, which implies that the spin-orbit problem is not described by a hyperbolic fixed point of a renormalization operator.
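The time-one-map viewpoint of item (i) can be illustrated with a simplified dissipative spin-orbit equation integrated over one forcing period; the model form and parameters below are illustrative, and the paper's high-order Taylor integration in extended precision is replaced by an off-the-shelf double-precision integrator.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the time-one (stroboscopic) map for a simplified dissipative
# spin-orbit model,  x'' + eta*(x' - nu) + eps*sin(2x - 2t) = 0,
# integrated over one forcing period 2*pi.
eta, nu, eps = 0.1, 1.2, 0.02           # illustrative parameter values

def rhs(t, z):
    x, v = z
    return [v, -eta * (v - nu) - eps * np.sin(2 * x - 2 * t)]

def time_one_map(x, v):
    sol = solve_ivp(rhs, (0.0, 2 * np.pi), [x, v], rtol=1e-12, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]

# Iterating the map reveals the quasi-periodic attractor numerically.
z = (0.0, nu)
for _ in range(200):
    z = time_one_map(*z)
print("point near the attractor:", z)
```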
Partially linear additive models generalize linear models: they model the relation between a response variable and covariates by assuming that some covariates enter linearly while each of the others enters through an unknown univariate smooth function. The harmful effect of outliers, either in the residuals or in the covariates involved in the linear component, has been described for partially linear models, that is, when only one nonparametric component is involved. When dealing with additive components, the problem of providing reliable estimators when atypical data arise is of practical importance, motivating the need for robust procedures. Hence, we propose a family of robust estimators for partially linear additive models that combines $B$-splines with robust linear regression estimators. We obtain consistency results, rates of convergence, and asymptotic normality for the linear components under mild assumptions. A Monte Carlo study is carried out to compare the performance of the robust proposal with its classical counterpart under different models and contamination schemes. The numerical experiments show the advantage of the proposed methodology for finite samples. We also illustrate the usefulness of the proposed approach on a real data set.
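A minimal instance of the estimation idea, with a Huber loss standing in for the paper's robust regression estimators, is sketched below; the data, knot count, and tuning are illustrative.

```python
import numpy as np
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import HuberRegressor

# Sketch: approximate the additive component by a B-spline expansion and fit
# the combined design with a robust (here Huber) linear regression.
rng = np.random.default_rng(4)
n = 300
x = rng.standard_normal((n, 1))                    # linear covariate
t = rng.uniform(0, 1, (n, 1))                      # nonparametric covariate
y = 2.0 * x.ravel() + np.sin(2 * np.pi * t.ravel()) + 0.1 * rng.standard_normal(n)
y[:15] += 8.0                                      # a few gross outliers

basis = SplineTransformer(degree=3, n_knots=8).fit_transform(t)
design = np.hstack([x, basis])
fit = HuberRegressor(max_iter=500).fit(design, y)
print("estimated linear coefficient:", fit.coef_[0])  # near 2 despite outliers
```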
Penalized $M$-estimators for logistic regression models have previously been studied for fixed dimension in order to obtain sparse statistical models and perform automatic variable selection. In this paper, we derive asymptotic results for penalized $M$-estimators when the dimension $p$ grows to infinity with the sample size $n$. Specifically, we obtain consistency and rates of convergence for some choices of the penalty function. Moreover, we prove that these estimators consistently select variables, with probability tending to one, and we derive their asymptotic distribution.
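The simplest instance of such an estimator, an $\ell_1$-penalized logistic likelihood, already exhibits the variable-selection behavior; the sketch below is illustrative and does not implement the paper's more general robust $M$-losses and penalties.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of penalized estimation for sparse logistic regression: an l1
# penalty on the maximum-likelihood M-estimator drives irrelevant
# coefficients to zero (variable selection).
rng = np.random.default_rng(5)
n, p = 200, 50                                     # p can grow with n
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]    # only 3 active variables
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print("selected variables:", np.flatnonzero(fit.coef_))
```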