
The locality of solution features in cardiac electrophysiology simulations calls for adaptive methods. Due to the overhead incurred by established mesh refinement and coarsening, however, such approaches have so far failed to accelerate the computations. Here we investigate a different route to spatial adaptivity, based on nested subset selection for algebraic degrees of freedom in spectral deferred correction methods. This combination of algebraic adaptivity and iterative solvers for higher-order collocation time stepping realizes a multirate integration with minimal overhead, yielding moderate but significant speedups in both monodomain and cell-by-cell models of cardiac excitation, as demonstrated on four numerical examples.
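The core ingredient is an SDC sweep in which the correction is restricted to an adaptively chosen subset of degrees of freedom. Below is a minimal sketch of that idea for a generic ODE system u' = f(u); the interface (f, nodes, S, tol) and the residual-based selection rule are illustrative assumptions, not the paper's actual scheme.

    import numpy as np

    def sdc_sweep_adaptive(f, u, dt, nodes, S, tol=1e-8):
        """One explicit SDC sweep with corrections restricted to active DoFs.

        f     : right-hand side, f(u) -> array of shape (ndof,)
        u     : current iterate at the M collocation nodes, shape (M, ndof)
        dt    : time step
        nodes : M collocation nodes in [0, 1]
        S     : (M, M) spectral integration matrix; row m integrates 0 -> nodes[m]
        """
        M, ndof = u.shape
        fu = np.array([f(u[m]) for m in range(M)])
        # Residual of the collocation system at each node.
        res = u[0] + dt * (S @ fu) - u
        # Algebraic adaptivity: only DoFs with a large residual are corrected.
        active = np.max(np.abs(res), axis=0) > tol
        u_new = u.copy()
        for m in range(1, M):
            tau = dt * (nodes[m] - nodes[m - 1])
            incr = (tau * (f(u_new[m - 1]) - fu[m - 1])
                    + dt * ((S[m] - S[m - 1]) @ fu))
            u_new[m, active] = u_new[m - 1, active] + incr[active]
        return u_new, active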

Related content

Network meta-analysis (NMA) combines evidence from multiple trials to compare the effectiveness of a set of interventions. In public health research, interventions are often complex, made up of multiple components or features. This makes it difficult to define a common set of interventions on which to perform the analysis. One approach to this problem is component network meta-analysis (CNMA) which uses a meta-regression framework to define each intervention as a subset of components whose individual effects combine additively. In this paper, we are motivated by a systematic review of complex interventions to prevent obesity in children. Due to considerable heterogeneity across the trials, these interventions cannot be expressed as a subset of components but instead are coded against a framework of characteristic features. To analyse these data, we develop a bespoke CNMA-inspired model that allows us to identify the most important features of interventions. We define a meta-regression model with covariates on three levels: intervention, study, and follow-up time, as well as flexible interaction terms. By specifying different regression structures for trials with and without a control arm, we relax the assumption from previous CNMA models that a control arm is the absence of intervention components. Furthermore, we derive a correlation structure that accounts for trials with multiple intervention arms and multiple follow-up times. Although our model was developed for the specifics of the obesity data set, it has wider applicability to any set of complex interventions that can be coded according to a set of shared features.
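For context, the additive assumption made by standard CNMA, which this paper relaxes, can be written schematically as (notation is ours, not the paper's):

    \theta_{k} \;=\; \sum_{c=1}^{C} x_{kc}\,\beta_c ,
    \qquad
    x_{kc} =
    \begin{cases}
    1 & \text{if intervention } k \text{ includes component } c,\\
    0 & \text{otherwise},
    \end{cases}

where \theta_k is the relative effect of intervention k versus a control arm and \beta_c is the additive contribution of component c. The model developed above replaces the binary component indicators with intervention-, study-, and follow-up-level covariates plus interaction terms.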

This research explores the reliability of deep learning, specifically Long Short-Term Memory (LSTM) networks, for estimating the Hurst parameter of fractional stochastic processes. The study focuses on three types of processes: fractional Brownian motion (fBm), the fractional Ornstein-Uhlenbeck (fOU) process, and linear fractional stable motion (lfsm). The work involves fast generation of extensive datasets for fBm and fOU so that the LSTM network can be trained on a large volume of data in a feasible time. The study analyses the accuracy of the LSTM network's Hurst parameter estimates in terms of performance measures such as RMSE, MAE, MRE, and quantiles of the absolute and relative errors. It finds that the LSTM outperforms traditional statistical methods for fBm and fOU processes; however, it has limited accuracy on lfsm processes. The research also examines the effect of training length and evaluation sequence length on the LSTM's performance. The methodology is then applied to Li-ion battery degradation data, estimating the Hurst parameter and obtaining confidence bounds for the estimate. The study concludes that while deep learning methods show promise for parameter estimation of fractional processes, their effectiveness is contingent on the process type and the quality of the training data.
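As a rough illustration of this pipeline, the sketch below simulates exact fBm paths and trains a small LSTM to regress the Hurst parameter; the architecture, sequence length, and training budget are toy choices of ours, not the paper's.

    import numpy as np
    import torch
    import torch.nn as nn

    def fbm_path(n, H, rng):
        """Exact fBm sample of length n via Cholesky of the fGn covariance."""
        k = np.arange(n)
        gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                       + np.abs(k - 1) ** (2 * H))
        cov = gamma[np.abs(k[:, None] - k[None, :])]
        L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
        return np.cumsum(L @ rng.standard_normal(n))

    class HurstLSTM(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                         # x: (batch, seq_len, 1)
            _, (h, _) = self.lstm(x)
            return torch.sigmoid(self.head(h[-1])).squeeze(-1)  # H in (0, 1)

    rng = np.random.default_rng(0)
    model = HurstLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):                  # toy budget; the paper trains far longer
        H = rng.uniform(0.05, 0.95, size=32)
        paths = np.stack([np.diff(fbm_path(129, h, rng)) for h in H])  # increments
        x = torch.tensor(paths, dtype=torch.float32).unsqueeze(-1)
        loss = nn.functional.mse_loss(model(x), torch.tensor(H, dtype=torch.float32))
        opt.zero_grad(); loss.backward(); opt.step()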

Fully explicit stabilized multirate (mRKC) methods are well suited for the numerical solution of large multiscale systems of stiff ordinary differential equations thanks to their improved stability properties. To demonstrate their efficiency for the numerical solution of stiff, multiscale, nonlinear parabolic PDEs, we apply mRKC methods to the monodomain equation from cardiac electrophysiology. In doing so, we propose an improved version, specifically tailored to the monodomain model, which leads to the explicit exponential multirate stabilized (emRKC) method. Several numerical experiments are conducted to evaluate the efficiency of both mRKC and emRKC, taking into account different finite element meshes (structured and unstructured) and realistic ionic models. The new emRKC method typically outperforms a standard implicit-explicit baseline method for cardiac electrophysiology. Code profiling and strong scalability results further demonstrate that emRKC is faster and inherently parallel without sacrificing accuracy.
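To convey the two building blocks that emRKC combines, the sketch below implements a damped first-order Runge-Kutta-Chebyshev step (for the stiff diffusion term) and a Rush-Larsen-type exponential update (for the gating variables of the ionic model). The actual mRKC/emRKC schemes are multirate variants with nested stabilized stages, so this is only indicative.

    import numpy as np

    def cheb(s, w0):
        """T_j(w0) and T'_j(w0) for j = 0..s via the three-term recursion."""
        T, dT = [1.0, w0], [0.0, 1.0]
        for j in range(2, s + 1):
            T.append(2 * w0 * T[-1] - T[-2])
            dT.append(2 * T[j - 1] + 2 * w0 * dT[-1] - dT[-2])
        return T, dT

    def rkc1_step(f, u, dt, s, eps=0.05):
        """One damped first-order RKC step with s stages for u' = f(u)."""
        w0 = 1 + eps / s ** 2
        T, dT = cheb(s, w0)
        w1, b = T[s] / dT[s], [1.0 / Tj for Tj in T]
        y_prev2, y_prev = u, u + b[1] * w1 * dt * f(u)
        for j in range(2, s + 1):
            mu, nu = 2 * w0 * b[j] / b[j - 1], -b[j] / b[j - 2]
            mut = 2 * w1 * b[j] / b[j - 1]
            y = mu * y_prev + nu * y_prev2 + mut * dt * f(y_prev)
            y_prev2, y_prev = y_prev, y
        return y_prev

    def rush_larsen(g, v, dt, g_inf, tau):
        """Exact exponential update for a gating ODE g' = (g_inf(v) - g) / tau(v)."""
        return g_inf(v) + (g - g_inf(v)) * np.exp(-dt / tau(v))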

Sensory perception originates from the responses of sensory neurons, which react to a collection of sensory signals linked to various physical attributes of a single perceptual object. Unraveling how the brain extracts perceptual information from these neuronal responses is a pivotal challenge in both computational neuroscience and machine learning. Here we introduce a statistical mechanical theory in which perceptual information is first encoded in the correlated variability of sensory neurons and then reformatted into the firing rates of downstream neurons. Applying this theory, we illustrate the encoding of motion direction using neural covariance and demonstrate high-fidelity direction recovery by spiking neural networks. Networks trained under this theory also show enhanced performance in classifying natural images, achieving higher accuracy and faster inference speed. Our results challenge the traditional view of neural covariance as a secondary factor in neural coding, highlighting its potential influence on brain function.
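A toy version of the idea, our construction rather than the paper's model, encodes a direction purely in the orientation of the response covariance and recovers it from trial-to-trial fluctuations alone:

    import numpy as np

    rng = np.random.default_rng(1)

    def cov_of(theta, gain=3.0):
        """2x2 response covariance elongated along direction theta."""
        u = np.array([np.cos(theta), np.sin(theta)])
        return np.eye(2) + gain * np.outer(u, u)

    theta_true = 0.8
    # Zero-mean responses: all stimulus information sits in the covariance.
    trials = rng.multivariate_normal(np.zeros(2), cov_of(theta_true), size=500)
    C_hat = trials.T @ trials / len(trials)        # empirical covariance

    # Decode the direction as the principal eigenvector of C_hat.
    _, evecs = np.linalg.eigh(C_hat)
    v = evecs[:, -1]
    theta_hat = np.arctan2(v[1], v[0]) % np.pi
    print(f"true {theta_true:.3f}  decoded {theta_hat:.3f}")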

We present a deep learning method for accurately localizing the center of a single corneal reflection (CR) in an eye image. Unlike previous approaches, we use a convolutional neural network (CNN) trained solely on simulated data. Using only simulated data has the benefit of completely sidestepping the time-consuming manual annotation that is required for supervised training on real eye images. To systematically evaluate the accuracy of our method, we first tested it on images with simulated CRs placed on different backgrounds and embedded in varying levels of noise. Second, we tested the method on high-quality videos captured from real eyes. Our method outperformed state-of-the-art algorithmic methods on real eye images, reducing the spatial precision error by 35%, and performed on par with the state of the art on simulated images in terms of spatial accuracy. We conclude that our method provides precise CR center localization and offers a solution to the data availability problem, one of the important common roadblocks in the development of deep learning models for gaze estimation. Due to its superior CR center localization and ease of application, our method has the potential to improve the accuracy and precision of CR-based eye trackers.
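The appeal of simulation-only training is that labels are exact by construction. The sketch below shows the kind of training-sample generator described above, a small Gaussian spot at a random sub-pixel location on a noisy background; the parameter ranges are illustrative, and the paper's simulator is more elaborate.

    import numpy as np

    def simulate_cr(size=64, rng=np.random.default_rng()):
        """One training sample: noisy image patch and exact CR center label."""
        cx, cy = rng.uniform(8, size - 8, size=2)       # sub-pixel ground truth
        sigma = rng.uniform(1.0, 3.0)                   # spot radius (pixels)
        amp = rng.uniform(0.4, 1.0)                     # spot intensity
        y, x = np.mgrid[0:size, 0:size]
        img = np.full((size, size), rng.uniform(0.0, 0.3))        # background level
        img += amp * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
        img += rng.normal(0, rng.uniform(0.01, 0.1), img.shape)   # sensor noise
        return np.clip(img, 0, 1).astype(np.float32), np.array([cx, cy])

A CNN regressing (cx, cy) from img can then be trained on unlimited such samples with no manual annotation.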

This paper explores the impact of biologically plausible neuron models on the performance of Spiking Neural Networks (SNNs) for regression tasks. While SNNs are widely recognized for classification tasks, their application to Scientific Machine Learning and regression remains underexplored. We focus on the membrane component of SNNs, comparing four neuron models: Leaky Integrate-and-Fire, FitzHugh-Nagumo, Izhikevich, and Hodgkin-Huxley. We investigate their effect on SNN accuracy and efficiency for function regression tasks, using Euler and fourth-order Runge-Kutta approximation schemes. We show how more biologically plausible neuron models improve the accuracy of SNNs while reducing the number of spikes in the system. The latter represents an energetic gain on actual neuromorphic chips, since it directly reflects the amount of energy required for the computations.
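As a sketch of this comparison, here are two of the membrane models written as ODE right-hand sides together with the two integrators; the parameter values are textbook defaults, not the paper's settings.

    import numpy as np

    def lif(v, I, tau=10.0, v_rest=-65.0, R=1.0):
        """Leaky integrate-and-fire membrane equation (sub-threshold part)."""
        return (-(v - v_rest) + R * I) / tau

    def fitzhugh_nagumo(state, I, a=0.7, b=0.8, tau=12.5):
        """FitzHugh-Nagumo: fast voltage v and slow recovery variable w."""
        v, w = state
        return np.array([v - v ** 3 / 3 - w + I, (v + a - b * w) / tau])

    def euler_step(f, y, I, dt):
        return y + dt * f(y, I)

    def rk4_step(f, y, I, dt):
        k1 = f(y, I)
        k2 = f(y + 0.5 * dt * k1, I)
        k3 = f(y + 0.5 * dt * k2, I)
        k4 = f(y + dt * k3, I)
        return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6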

Computational modeling of charged species transport has enabled the analysis, design, and optimization of a diverse array of electrochemical and electrokinetic devices. These systems are represented by the Poisson-Nernst-Planck (PNP) equations coupled with the Navier-Stokes (NS) equation. Direct numerical simulation (DNS) to accurately capture the spatio-temporal variation of ion concentration and current flux remains challenging due to (a) the small critical dimension of the diffuse charge layer (DCL), (b) stiff coupling due to fast charge relaxation times, large advective effects, and steep gradients close to boundaries, and (c) the complex geometries exhibited by electrochemical devices. In the current study, we address these challenges by presenting a direct numerical simulation framework that incorporates (a) a variational multiscale (VMS) treatment, (b) a block-iterative strategy in conjunction with semi-implicit (for NS) and implicit (for PNP) time integrators, and (c) octree-based adaptive mesh refinement. The VMS formulation provides numerical stabilization critical for capturing the electro-convective flows often observed in engineered devices. The block-iterative strategy alleviates the difficulty of the non-linear coupling between the NS and PNP equations and allows the use of tailored numerical schemes separately for the NS and PNP equations. The carefully designed second-order, hybrid implicit methods circumvent the harsh timestep requirements of explicit time steppers, thus enabling simulations over longer time horizons. Finally, the octree-based meshing allows efficient and targeted spatial resolution of the DCL. These features are incorporated into a massively parallel computational framework, enabling the simulation of realistic engineering electrochemical devices. The numerical framework is illustrated using several challenging canonical examples.
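In a standard dimensional form, the coupled PNP-NS system reads

    \begin{aligned}
    \frac{\partial c_i}{\partial t} + \mathbf{u}\cdot\nabla c_i
      &= \nabla\cdot\!\left(D_i \nabla c_i + \frac{z_i e D_i}{k_B T}\, c_i \nabla \phi\right),\\
    -\nabla\cdot(\epsilon \nabla \phi) &= \sum_i z_i e\, c_i,\\
    \rho\!\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
      &= -\nabla p + \mu \nabla^2 \mathbf{u} - \Big(\sum_i z_i e\, c_i\Big)\nabla\phi,
    \qquad \nabla\cdot\mathbf{u} = 0,
    \end{aligned}

where c_i, z_i, and D_i are the concentration, valence, and diffusivity of species i, \phi is the electric potential, and \mathbf{u}, p are the fluid velocity and pressure. The last momentum term is the electric body force that drives the electro-convective flows mentioned above; the paper's non-dimensionalization and weak form may differ.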

This paper presents a regularized recursive identification algorithm with simultaneous on-line estimation of both the model parameters and the algorithm's hyperparameters. A new kernel is proposed to facilitate the algorithm development. The performance of this novel scheme is compared with that of the recursive least-squares algorithm in simulation.
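A minimal sketch of the regularized recursive least-squares idea, with the regularization entering through a kernel-based prior covariance. Since the paper's new kernel is not given here, the classical DC kernel stands in, and the hyperparameters are held fixed rather than estimated on-line as in the paper.

    import numpy as np

    def dc_kernel(n, c=1.0, lam=0.9, rho=0.5):
        """Classical DC kernel, a common prior for impulse-response estimation."""
        i = np.arange(1, n + 1)
        return (c * rho ** np.abs(i[:, None] - i[None, :])
                  * lam ** ((i[:, None] + i[None, :]) / 2))

    class RegularizedRLS:
        def __init__(self, K, noise_var=1.0):
            self.theta = np.zeros(len(K))
            self.P = K.copy()            # prior covariance = kernel matrix
            self.r2 = noise_var

        def update(self, phi, y):
            Pphi = self.P @ phi
            g = Pphi / (self.r2 + phi @ Pphi)          # gain vector
            self.theta += g * (y - phi @ self.theta)   # parameter update
            self.P -= np.outer(g, Pphi)                # covariance update
            return self.theta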

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints such as computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system, sharing tasks across many agents; however, this in turn increases the resource cost of communication and synchronisation. In this paper we present four algorithms to address these problems. Their combination enables each agent to improve its task-allocation strategy through reinforcement learning, while varying how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum within the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
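Purely as an illustration of one ingredient, not the paper's four algorithms, the sketch below shows an agent that allocates subtasks by value learning and widens or narrows its exploration according to how well its recent allocations have performed; all names and update rules are our assumptions.

    import random
    from collections import defaultdict

    class AllocatorAgent:
        def __init__(self, peers, alpha=0.1, window=50):
            self.q = defaultdict(float)     # (subtask_type, peer) -> estimated value
            self.peers, self.alpha, self.window = peers, alpha, window
            self.recent = []                # sliding window of observed rewards

        def epsilon(self):
            # Explore widely until enough evidence has accumulated, then
            # narrow exploration as recent rewards (assumed in [0, 1]) improve.
            if len(self.recent) < 10:
                return 1.0
            mean = sum(self.recent) / len(self.recent)
            return max(0.05, 1.0 - mean)

        def choose(self, subtask):
            if random.random() < self.epsilon():
                return random.choice(self.peers)
            return max(self.peers, key=lambda p: self.q[(subtask, p)])

        def learn(self, subtask, peer, reward):
            key = (subtask, peer)
            self.q[key] += self.alpha * (reward - self.q[key])
            self.recent = (self.recent + [reward])[-self.window:]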

Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafted features or class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on a more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital, comprising 150 CT scans and targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance on small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: //github.com/holgerroth/3Dunet_abdomen_cascade.
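The coarse-to-fine cascade at inference time can be sketched as follows; coarse_fcn and fine_fcn stand in for the trained 3D FCNs, and the margin value is illustrative.

    import numpy as np

    def bounding_box(mask, margin=8):
        """Tight bounding box of a boolean mask, padded by `margin` voxels."""
        idx = np.argwhere(mask)
        if idx.size == 0:                  # coarse stage found nothing
            return tuple(slice(0, s) for s in mask.shape)
        lo = np.maximum(idx.min(axis=0) - margin, 0)
        hi = np.minimum(idx.max(axis=0) + margin + 1, mask.shape)
        return tuple(slice(l, h) for l, h in zip(lo, hi))

    def cascade_predict(volume, coarse_fcn, fine_fcn):
        # Stage 1: coarse candidate region over the whole volume.
        coarse_mask = coarse_fcn(volume) > 0.5
        roi = bounding_box(coarse_mask)
        # Stage 2: the fine FCN classifies only the ~10% of voxels in the ROI.
        out = np.zeros_like(volume, dtype=np.float32)
        out[roi] = fine_fcn(volume[roi])
        return out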
