
A general class of hybrid models has been introduced recently, gathering the advantages of multiscale descriptions. Concerning biological applications, this particular coupled structure fits collective cell migration and pattern formation scenarios. In this context, cells are modelled as discrete entities whose dynamics are given by ODEs, while the chemical signal influencing the motion is treated as a continuous signal that solves a diffusion equation. From the analytical point of view, this class of models has been proved to have a mean-field limit in the Wasserstein distance towards a system coupling a Vlasov-type equation with the chemoattractant equation. Moreover, a pressureless nonlocal Euler-type system has been derived for these models, rigorously equivalent to the Vlasov one for monokinetic initial data. In the present paper, we present a numerical study of the solutions to the Vlasov and Euler systems, exploring general settings for initial data, far from the monokinetic ones.
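
To make the coupled structure concrete, the following is a minimal one-dimensional sketch of such a hybrid model: discrete cells evolve through second-order ODEs whose drift follows the gradient of a chemoattractant, while the chemoattractant solves a diffusion equation with the cells acting as sources. The specific coupling terms, parameters, and the explicit Euler discretisation are illustrative assumptions, not the model studied in the paper.

```python
import numpy as np

# Minimal 1D sketch of the hybrid model: N discrete cells (positions x, velocities v)
# coupled to a continuous chemoattractant c(t, y) on a periodic grid.
# All coefficients and coupling terms are illustrative assumptions.
L, M, dt = 10.0, 200, 1e-3           # domain length, grid points, time step
grid = np.linspace(0.0, L, M, endpoint=False)
dy = L / M

N = 50                                # number of cells
rng = np.random.default_rng(0)
x = rng.uniform(0, L, N)              # cell positions
v = rng.normal(0, 1.0, N)             # cell velocities (non-monokinetic initial data)
c = np.zeros(M)                       # chemoattractant concentration

D, alpha, beta, chi = 0.1, 1.0, 1.0, 1.0   # diffusion, decay, secretion, sensitivity

def laplacian(u, h):
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2

for step in range(1000):
    idx = (x / dy).astype(int) % M

    # chemoattractant: diffusion - decay + secretion by the cells (nearest-node deposition)
    source = np.zeros(M)
    np.add.at(source, idx, beta / dy)
    c += dt * (D * laplacian(c, dy) - alpha * c + source)

    # cells: second-order ODEs, velocity relaxing towards the local chemical gradient
    grad_c = (np.roll(c, -1) - np.roll(c, 1)) / (2 * dy)
    v += dt * (chi * grad_c[idx] - v)
    x = (x + dt * v) % L
```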


The incidence of vertebral fragility fracture is increased by the presence of preexisting pathologies such as metastatic disease. Computational tools could support fracture prediction and, consequently, the choice of the best medical treatment. However, validation is required before these tools can be used in clinical practice. To address this need, in this study subject-specific homogenized finite element models of single vertebrae were generated from micro-CT images for both healthy and metastatic vertebrae and validated against experimental data. In more detail, spine segments were tested under compression and imaged with micro-CT. The displacement field was extracted for each vertebra individually using the digital volume correlation full-field technique. Homogenized finite element models of each vertebra were then built from the micro-CT images, applying boundary conditions consistent with the experimental displacements at the endplates. Numerical and experimental displacement and strain fields were finally compared. In addition, the outcomes of a micro-CT-based homogenized model were compared to those of a clinical-CT-based model. Good agreement between experimental and computational displacement fields was found for both healthy and metastatic vertebrae. The comparison between micro-CT-based and clinical-CT-based outcomes showed strong correlations. Furthermore, the models were able to qualitatively identify the regions which experimentally showed the highest strain concentration. In conclusion, the combination of an experimental full-field technique and in-silico modelling allowed the development of a promising pipeline for the validation of fracture risk predictors, although further improvements in both fields are needed for a better quantitative analysis of the post-yield behaviour of the vertebra.
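
A hedged sketch of the final comparison step: given corresponding experimental (DVC) and numerical (FE) displacement values, compute the regression slope, intercept, R² and RMSE typically reported in such validation studies. The array names and simulated values are placeholders.

```python
import numpy as np

def compare_fields(u_exp, u_fe):
    """Linear regression of FE-predicted vs DVC-measured displacements.

    u_exp, u_fe: 1D arrays of corresponding displacement components (e.g. in mm),
    flattened over all measurement points. Returns slope, intercept, R^2 and RMSE.
    """
    slope, intercept = np.polyfit(u_exp, u_fe, 1)
    residuals = u_fe - (slope * u_exp + intercept)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((u_fe - u_fe.mean())**2)
    rmse = np.sqrt(np.mean((u_fe - u_exp)**2))
    return slope, intercept, r2, rmse

# Placeholder data standing in for DVC measurements and FE predictions.
rng = np.random.default_rng(1)
u_exp = rng.normal(0.0, 0.05, 500)                  # measured displacements [mm]
u_fe = 0.95 * u_exp + rng.normal(0.0, 0.005, 500)   # model predictions [mm]
print(compare_fields(u_exp, u_fe))
```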

Mesh-based Graph Neural Networks (GNNs) have recently shown the ability to simulate complex multiphysics problems with accelerated performance times. However, mesh-based GNNs require a large number of message-passing (MP) steps and suffer from over-smoothing for problems involving very fine meshes. In this work, we develop a multiscale mesh-based GNN framework that mimics a conventional iterative multigrid solver, coupled with adaptive mesh refinement (AMR), to mitigate the challenges of conventional mesh-based GNNs. We use the framework to accelerate phase field (PF) fracture problems involving coupled partial differential equations with a near-singular operator due to the near-zero modulus inside the crack. We define the initial graph representation using all mesh resolution levels. We perform a series of downsampling steps using Transformer MP GNNs to reach the coarsest graph, followed by upsampling steps to return to the original graph. We use skip connectors from the embeddings generated during coarsening to prevent over-smoothing. We use Transfer Learning (TL) to significantly reduce the size of the training datasets needed to simulate different crack configurations and loading conditions. The trained framework showed accelerated simulation times while maintaining high accuracy for all cases compared to the physics-based PF fracture model. Finally, this work provides a new approach to accelerate a variety of mesh-based engineering multiphysics problems.
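
As a rough illustration of the coarsen-then-refine pass with skip connections described above, the following PyTorch sketch runs message passing on a hierarchy of graphs and fuses embeddings saved during coarsening on the way back up. The simple mean-aggregation messages, the pooling matrices, and all sizes are assumptions for illustration; the paper uses Transformer MP blocks and AMR-derived mesh hierarchies.

```python
import torch
import torch.nn as nn

class MPLayer(nn.Module):
    """One message-passing step with mean aggregation over neighbours
    (a simple stand-in for the Transformer MP blocks used in the paper)."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h, edge_index):
        src, dst = edge_index                                    # (2, E) long tensor
        m = self.msg(torch.cat([h[src], h[dst]], dim=-1))        # messages along edges
        agg = torch.zeros_like(h).index_add_(0, dst, m)
        deg = torch.zeros(h.size(0), 1).index_add_(0, dst, torch.ones(dst.size(0), 1)).clamp(min=1)
        return self.upd(torch.cat([h, agg / deg], dim=-1))

class GraphUNet(nn.Module):
    """Coarsen with message passing at every mesh level, then refine back up,
    fusing skip connections saved on the way down to limit over-smoothing."""
    def __init__(self, dim, levels):
        super().__init__()
        self.down = nn.ModuleList(MPLayer(dim) for _ in range(levels))
        self.up = nn.ModuleList(MPLayer(dim) for _ in range(levels))
        self.fuse = nn.ModuleList(nn.Linear(2 * dim, dim) for _ in range(levels))

    def forward(self, h, edges, pools):
        # edges[k]: edge_index of mesh level k (0 = finest);
        # pools[k]: (n_coarse x n_fine) assignment matrix from level k to k+1
        skips = []
        for k in range(len(self.down)):
            h = self.down[k](h, edges[k])
            skips.append(h)
            if k < len(pools):
                h = pools[k] @ h                                 # restrict to coarser level
        for k in range(len(self.up) - 1, -1, -1):
            if k < len(pools):
                h = pools[k].t() @ h                             # prolong to finer level
            h = self.fuse[k](torch.cat([h, skips[k]], dim=-1))   # skip connection
            h = self.up[k](h, edges[k])
        return h

# toy usage: two mesh levels with 6 and 3 nodes
h = torch.randn(6, 16)
edges0 = torch.tensor([[0, 1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 0]])
edges1 = torch.tensor([[0, 1, 2], [1, 2, 0]])
pool01 = torch.zeros(3, 6); pool01[0, :2] = 0.5; pool01[1, 2:4] = 0.5; pool01[2, 4:] = 0.5
net = GraphUNet(dim=16, levels=2)
print(net(h, [edges0, edges1], [pool01]).shape)   # -> torch.Size([6, 16])
```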

Inner products of neural network feature maps arise in a wide variety of machine learning frameworks as a method of modeling relations between inputs. This work studies the approximation properties of inner products of neural networks. It is shown that the inner product of a multi-layer perceptron with itself is a universal approximator for symmetric positive-definite relation functions. In the case of asymmetric relation functions, it is shown that the inner product of two different multi-layer perceptrons is a universal approximator. In both cases, a bound is obtained on the number of neurons required to achieve a given accuracy of approximation. In the symmetric case, the function class can be identified with kernels of reproducing kernel Hilbert spaces, whereas in the asymmetric case the function class can be identified with kernels of reproducing kernel Banach spaces. Finally, these approximation results are applied to analyzing the attention mechanism underlying Transformers, showing that any retrieval mechanism defined by an abstract preorder can be approximated by attention through its inner product relations. This result uses the Debreu representation theorem from economics to represent preference relations in terms of utility functions.
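
A minimal sketch of the object being studied: an asymmetric relation function approximated by the inner product of feature maps from two different multi-layer perceptrons, $\langle \phi(x), \psi(y) \rangle$ (the symmetric case ties $\psi = \phi$). The network widths, the toy target relation, and the training loop are illustrative assumptions.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_hidden, d_out):
    return nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                         nn.Linear(d_hidden, d_out))

d, k = 2, 32
phi, psi = mlp(d, 64, k), mlp(d, 64, k)   # two different MLPs -> asymmetric relation
opt = torch.optim.Adam(list(phi.parameters()) + list(psi.parameters()), lr=1e-3)

def target_relation(x, y):
    # an asymmetric toy relation r(x, y) to approximate (illustrative)
    return torch.sin(3.0 * x[:, :1]) * torch.cos(y[:, 1:2]) + 0.5 * x[:, 1:2] * y[:, :1]

for step in range(2000):
    x, y = torch.randn(256, d), torch.randn(256, d)
    pred = (phi(x) * psi(y)).sum(dim=-1, keepdim=True)   # inner product <phi(x), psi(y)>
    loss = nn.functional.mse_loss(pred, target_relation(x, y))
    opt.zero_grad(); loss.backward(); opt.step()
```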

Consistency models, which were proposed to mitigate the high computational overhead during the sampling phase of diffusion models, facilitate single-step sampling while attaining state-of-the-art empirical performance. When integrated into the training phase, consistency models attempt to train a sequence of consistency functions capable of mapping any point at any time step of the diffusion process to its starting point. Despite the empirical success, a comprehensive theoretical understanding of consistency training remains elusive. This paper takes a first step towards establishing theoretical underpinnings for consistency models. We demonstrate that, in order to generate samples within $\varepsilon$ proximity to the target in distribution (measured by some Wasserstein metric), it suffices for the number of steps in consistency learning to exceed the order of $d^{5/2}/\varepsilon$, with $d$ the data dimension. Our theory offers rigorous insights into the validity and efficacy of consistency models, illuminating their utility in downstream inference tasks.
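
The mapping property described above can be sketched as follows: a network is trained so that two adjacent points on the same noising trajectory are mapped to (approximately) the same output, with the coarser-time prediction produced by a slowly updated teacher copy and detached from the gradient. The noise schedule, network, EMA rate, and data below are placeholders, not the construction analysed in the paper.

```python
import copy
import torch
import torch.nn as nn

d, N = 8, 100                              # data dimension, number of discretisation steps
sigmas = torch.linspace(0.01, 10.0, N)     # illustrative noise schedule

class ConsistencyNet(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d + 1, 128), nn.SiLU(), nn.Linear(128, d))
    def forward(self, x, sigma):
        return self.net(torch.cat([x, sigma.expand(x.size(0), 1)], dim=-1))

model = ConsistencyNet(d)
target = copy.deepcopy(model)              # EMA "teacher"; updated without gradients
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(1000):
    x0 = torch.randn(64, d)                # placeholder for real data samples
    n = torch.randint(0, N - 1, (1,)).item()
    z = torch.randn_like(x0)
    x_hi = x0 + sigmas[n + 1] * z          # adjacent points on the same trajectory
    x_lo = x0 + sigmas[n] * z
    loss = nn.functional.mse_loss(model(x_hi, sigmas[n + 1]),
                                  target(x_lo, sigmas[n]).detach())
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                  # EMA update of the teacher
        for p_t, p in zip(target.parameters(), model.parameters()):
            p_t.mul_(0.999).add_(p, alpha=0.001)
```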

Neural operators have been explored as surrogate models for simulating physical systems to overcome the limitations of traditional partial differential equation (PDE) solvers. However, most existing operator learning methods assume that the data originate from a single physical mechanism, limiting their applicability and performance in more realistic scenarios. To this end, we propose the Physical Invariant Attention Neural Operator (PIANO) to decipher and integrate the physical invariants (PI) for operator learning from PDE series with various physical mechanisms. PIANO employs self-supervised learning to extract physical knowledge and attention mechanisms to integrate it into dynamic convolutional layers. Compared to existing techniques, PIANO can reduce the relative error by 13.6\%-82.2\% on PDE forecasting tasks across varying coefficients, forces, or boundary conditions. Additionally, various downstream tasks reveal that the PI embeddings deciphered by PIANO align well with the underlying invariants in the PDE systems, verifying the physical significance of PIANO. The source code will be publicly available at: //github.com/optray/PIANO.
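
The official implementation is at the repository above; the sketch below only illustrates the general idea of conditioning convolution kernels on a physical-invariant embedding through attention over a bank of kernels. The module design, sizes, and the placeholder PI encoder are assumptions, not the PIANO architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv1d(nn.Module):
    """Convolution whose kernel is an attention-weighted mixture of a kernel bank,
    conditioned on a physical-invariant (PI) embedding. Illustrative sketch only."""
    def __init__(self, channels, kernel_size, n_kernels, pi_dim):
        super().__init__()
        self.bank = nn.Parameter(torch.randn(n_kernels, channels, channels, kernel_size) * 0.02)
        self.attn = nn.Linear(pi_dim, n_kernels)
        self.padding = kernel_size // 2

    def forward(self, u, pi_emb):
        # u: (batch, channels, length) field snapshot; pi_emb: (batch, pi_dim) PI embedding
        w = F.softmax(self.attn(pi_emb), dim=-1)                  # (batch, n_kernels)
        kernels = torch.einsum('bk,kocl->bocl', w, self.bank)     # per-sample kernel
        out = [F.conv1d(u[i:i+1], kernels[i], padding=self.padding) for i in range(u.size(0))]
        return torch.cat(out, dim=0)

# usage sketch: a PI encoder (trained self-supervised in the paper) produces pi_emb
pi_encoder = nn.Sequential(nn.Linear(64, 128), nn.GELU(), nn.Linear(128, 16))
layer = DynamicConv1d(channels=4, kernel_size=5, n_kernels=8, pi_dim=16)
u = torch.randn(2, 4, 64)                       # two PDE snapshots (placeholders)
pi_emb = pi_encoder(u.mean(dim=1))              # placeholder for the self-supervised embedding
print(layer(u, pi_emb).shape)                   # -> torch.Size([2, 4, 64])
```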

The evaluation of text-generative vision-language models is a challenging yet crucial endeavor. By addressing the limitations of existing Visual Question Answering (VQA) benchmarks and proposing innovative evaluation methodologies, our research seeks to advance our understanding of these models' capabilities. We propose a novel VQA benchmark based on well-known visual classification datasets that allows a granular evaluation of text-generative vision-language models and their comparison with discriminative vision-language models. To improve the assessment of coarse answers on fine-grained classification tasks, we suggest using the semantic hierarchy of the label space to ask automatically generated follow-up questions about the ground-truth category. Finally, we compare traditional NLP and LLM-based metrics for the problem of evaluating model predictions given ground-truth answers. We perform a human evaluation study on which we base our choice of the final metric. We apply our benchmark to a suite of vision-language models and show a detailed comparison of their abilities on object, action, and attribute classification. Our contributions aim to lay the foundation for more precise and meaningful assessments, facilitating targeted progress in the exciting field of vision-language modeling.
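
A small sketch of how hierarchy-guided follow-up questions could work: when a prediction is a coarse-but-correct ancestor of the ground-truth label, a narrower question is asked among that node's children. The toy hierarchy and question template are placeholders, not the benchmark's actual prompts.

```python
# Sketch of hierarchy-guided follow-up questions: if a model answers with an ancestor
# of the ground-truth label, ask a narrower question among that node's children.
HIERARCHY = {
    "animal": ["dog", "bird"],
    "dog": ["labrador", "poodle"],
    "bird": ["sparrow", "eagle"],
}

def ancestors(label, hierarchy):
    return [parent for parent, children in hierarchy.items() if label in children]

def follow_up(predicted, ground_truth, hierarchy):
    """If the prediction is a coarse but correct ancestor of the ground truth,
    return a follow-up question restricted to its children; otherwise None."""
    node, chain = ground_truth, [ground_truth]
    while (parents := ancestors(node, hierarchy)):
        node = parents[0]
        chain.append(node)
    if predicted in chain[1:]:                   # coarse-but-correct answer
        options = ", ".join(hierarchy[predicted])
        return f"Which kind of {predicted} is shown: {options}?"
    return None

print(follow_up("dog", "poodle", HIERARCHY))
# -> "Which kind of dog is shown: labrador, poodle?"
```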

A recurring problem in software development is incorrect decision making on the techniques, methods and tools to be used. Mostly, these decisions are based on developers' perceptions about them. A factor influencing people's perceptions is past experience, but it is not the only one. In this research, we aim to discover how well the perceptions of the defect detection effectiveness of different techniques match their real effectiveness in the absence of prior experience. To do this, we conduct an empirical study plus a replication. During the original study, we conduct a controlled experiment with students applying two testing techniques and a code review technique. At the end of the experiment, they take a survey to find out which technique they perceive to be most effective. The results show that participants' perceptions are wrong and that this mismatch is costly in terms of quality. In order to gain further insight into the results, we replicate the controlled experiment and extend the survey to include questions about participants' opinions on the techniques and programs. The results of the replicated study confirm the findings of the original study and suggest that participants' perceptions might be based not on their opinions about complexity or preferences for techniques but on how well they think that they have applied the techniques.

When generating in-silico clinical electrophysiological outputs, such as electrocardiograms (ECGs) and body surface potential maps (BSPMs), mathematical models have relied on a single physics, i.e. cardiac electrophysiology (EP), neglecting the role of heart motion. Since the heart is the most powerful source of electrical activity in the human body, its motion dynamically shifts the position of the principal electrical sources in the torso, influencing the electrical potential distribution and potentially altering the EP outputs. In this work, we propose a computational model for the simulation of ECGs and BSPMs obtained by coupling a cardiac electromechanical model with a model for the propagation of the EP signal in the torso, thanks to a flexible numerical approach that simulates the torso domain deformation induced by the myocardial displacement. Our model accounts for the major mechano-electrical feedbacks, along with unidirectional displacement and potential couplings from the heart to the surrounding body. For the numerical discretization, we employ a versatile intergrid transfer operator that allows different Finite Element spaces to be used in the cardiac and torso domains. Our numerical results are obtained on a realistic 3D biventricular-torso geometry and cover both sinus rhythm and ventricular tachycardia (VT), solving both the electromechanical-torso model in dynamical domains and the classical electrophysiology-torso model in static domains. By comparing standard 12-lead ECGs and BSPMs, we highlight the non-negligible effects of myocardial contraction on the EP outputs, especially in pathological conditions such as VT.
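
As a toy illustration of an intergrid transfer operator, the sketch below builds a piecewise-linear interpolation matrix between two non-matching 1D grids; the operator used in the paper acts between Finite Element spaces on 3D cardiac and torso meshes, but the transfer-matrix idea is analogous.

```python
import numpy as np

def linear_transfer_matrix(src_nodes, dst_nodes):
    """Intergrid transfer by piecewise-linear interpolation in 1D (illustrative only)."""
    T = np.zeros((len(dst_nodes), len(src_nodes)))
    for i, x in enumerate(dst_nodes):
        j = np.searchsorted(src_nodes, x)
        j = min(max(j, 1), len(src_nodes) - 1)
        x0, x1 = src_nodes[j - 1], src_nodes[j]
        w = (x - x0) / (x1 - x0)
        T[i, j - 1], T[i, j] = 1.0 - w, w
    return T

coarse = np.linspace(0.0, 1.0, 5)       # e.g. torso-side nodes (placeholder grid)
fine = np.linspace(0.0, 1.0, 11)        # e.g. heart-side nodes (placeholder grid)
T = linear_transfer_matrix(coarse, fine)
u_coarse = np.sin(np.pi * coarse)
u_fine = T @ u_coarse                   # field transferred to the finer grid
```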

Computationally efficient surrogates for parametrized physical models play a crucial role in science and engineering. Operator learning provides data-driven surrogates that map between function spaces. However, instead of full-field measurements, the available data are often only finite-dimensional parametrizations of model inputs or finite observables of model outputs. Building on Fourier Neural Operators, this paper introduces the Fourier Neural Mappings (FNMs) framework, which is able to accommodate such finite-dimensional inputs and outputs. The paper develops universal approximation theorems for the method. Moreover, in many applications the underlying parameter-to-observable (PtO) map is defined implicitly through an infinite-dimensional operator, such as the solution operator of a partial differential equation. A natural question is whether it is more data-efficient to learn the PtO map end-to-end or to first learn the solution operator and subsequently compute the observable from the full-field solution. A theoretical analysis of Bayesian nonparametric regression of linear functionals, which is of independent interest, suggests that the end-to-end approach can actually have worse sample complexity. Extending beyond the theory, numerical results for the FNM approximation of three nonlinear PtO maps demonstrate the benefits of the operator learning perspective that this paper adopts.
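
A rough sketch of the input/output structure such a framework has to accommodate: a finite parameter vector is lifted to a function on a grid, processed by a spectral (Fourier-layer) core, and reduced to a finite vector of observables. The layer sizes and the single-layer core are illustrative assumptions, not the FNM architecture.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Single Fourier layer: multiply the lowest `modes` Fourier coefficients by learned
    weights (the standard FNO building block, used here as a stand-in for the core)."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        self.weight = nn.Parameter(torch.randn(channels, channels, modes, dtype=torch.cfloat) * 0.02)
    def forward(self, u):                                     # u: (batch, channels, grid)
        U = torch.fft.rfft(u, dim=-1)
        out = torch.zeros_like(U)
        out[..., :self.modes] = torch.einsum('bim,iom->bom', U[..., :self.modes], self.weight)
        return torch.fft.irfft(out, n=u.size(-1), dim=-1)

class FiniteInOutOperator(nn.Module):
    """Finite parameter vector -> function on a grid -> finite vector of observables."""
    def __init__(self, p_dim, q_dim, grid, channels=16, modes=12):
        super().__init__()
        self.lift = nn.Linear(p_dim, channels * grid)         # encode parameters as a field
        self.core = SpectralConv1d(channels, modes)
        self.act = nn.GELU()
        self.readout = nn.Linear(channels * grid, q_dim)      # functional of the output field
        self.channels, self.grid = channels, grid
    def forward(self, p):
        u = self.lift(p).view(-1, self.channels, self.grid)
        u = self.act(self.core(u))
        return self.readout(u.flatten(1))

model = FiniteInOutOperator(p_dim=5, q_dim=3, grid=64)
print(model(torch.randn(8, 5)).shape)       # -> torch.Size([8, 3])
```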

Generalized linear models (GLMs) form one of the most popular classes of models in statistics. The gamma variant is used, for instance, in actuarial science for modelling claim amounts in insurance. A flaw of GLMs is that they are not robust against outliers (i.e., against erroneous or extreme data points). A difference between the trend in the bulk of the data and the outliers thus yields skewed inference and predictions. To address this problem, robust methods have been introduced. The most commonly applied robust method is frequentist and consists of an estimator derived from a modification of the derivative of the log-likelihood. We propose an alternative approach which is modelling-based and thus fundamentally different. It allows for an understanding and interpretation of the modelling, and it can be applied in both frequentist and Bayesian statistical analyses. The approach possesses appealing theoretical and empirical properties.
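
For context, a standard (non-robust) gamma GLM with a log link can be fitted as below using statsmodels; the simulated claim amounts and injected outliers are placeholders, and the paper's robust, modelling-based approach is not implemented here.

```python
import numpy as np
import statsmodels.api as sm

# Simulated claim-amount data for a standard (non-robust) gamma GLM with log link.
# This illustrates the baseline model being robustified, not the paper's approach.
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 1, n)
mu = np.exp(1.0 + 2.0 * x)                # true mean on the log scale
shape = 2.0
y = rng.gamma(shape, mu / shape)          # gamma responses with mean mu

# A few gross outliers, the kind of contamination robust methods guard against.
y[:5] *= 20.0

X = sm.add_constant(x)
model = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log()))
print(model.fit().params)                 # intercept and slope, skewed by the outliers
```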
