Positive semidefinite (PSD) matrices are indispensable in many fields of science. A similarity measurement for such matrices is usually an essential ingredient in the mathematical modelling of a scientific problem. This paper proposes a unified framework to construct similarity measurements for PSD matrices. The framework is obtained by exploring the fiber bundle structure of the cone of PSD matrices and generalizing the idea of the point-set distance previously developed for linear subspaces and positive definite (PD) matrices. The framework demonstrates both theoretical advantages and computational convenience: (1) We prove that the similarity measurement constructed by the framework can be recognized either as the cost of a parallel transport or as the length of a quasi-geodesic curve. (2) We extend commonly used divergences for equidimensional PD matrices to the non-equidimensional case. Examples include the Kullback-Leibler divergence, the Bhattacharyya divergence and the R\'enyi divergence. We prove that these extensions enjoy the same consistency property as their counterparts for the geodesic distance. (3) We apply our geometric framework to further extend those in (2) to similarity measurements for arbitrary PSD matrices. We also provide simple formulae to compute these similarity measurements in most situations.
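Among the divergences named above, the Kullback-Leibler divergence between equidimensional PD matrices has a well-known closed form when the matrices are viewed as covariances of zero-mean Gaussians. A minimal sketch of this classical baseline (not the paper's non-equidimensional extension; the test matrices are illustrative):

```python
import numpy as np

def kl_divergence_pd(A, B):
    """KL divergence between zero-mean Gaussians N(0, A) and N(0, B):
    0.5 * (tr(B^{-1}A) - d + log(det B / det A)), for d x d PD matrices."""
    d = A.shape[0]
    Binv_A = np.linalg.solve(B, A)
    _, logdet = np.linalg.slogdet(Binv_A)  # log det(B^{-1}A) = -log(det B / det A)
    return 0.5 * (np.trace(Binv_A) - d - logdet)

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.eye(2)
print(kl_divergence_pd(A, A))       # 0 for identical matrices
print(kl_divergence_pd(A, B) > 0)   # True: divergence is nonnegative
```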

When generating in-silico clinical electrophysiological outputs, such as electrocardiograms (ECGs) and body surface potential maps (BSPMs), mathematical models have relied on a single physics, i.e. the cardiac electrophysiology (EP), neglecting the role of heart motion. Since the heart is the most powerful source of electrical activity in the human body, its motion dynamically shifts the position of the principal electrical sources in the torso, influencing the electrical potential distribution and potentially altering the EP outputs. In this work, we propose a computational model for the simulation of ECGs and BSPMs that couples a cardiac electromechanical model with a model of EP signal propagation in the torso, using a flexible numerical approach that simulates the torso domain deformation induced by the myocardial displacement. Our model accounts for the major mechano-electrical feedbacks, along with unidirectional displacement and potential couplings from the heart to the surrounding body. For the numerical discretization, we employ a versatile intergrid transfer operator that allows different Finite Element spaces to be used in the cardiac and torso domains. Our numerical results are obtained on a realistic 3D biventricular-torso geometry and cover both sinus rhythm and ventricular tachycardia (VT), solving both the electromechanical-torso model in dynamic domains and the classical electrophysiology-torso model in static domains. By comparing standard 12-lead ECGs and BSPMs, we highlight the non-negligible effects of myocardial contraction on the EP outputs, especially in pathological conditions such as VT.
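As a much-simplified illustration of an intergrid transfer operator, the sketch below builds the interpolation matrix from a 1D piecewise-linear (P1) finite element space on a coarse mesh to point values on a finer, non-matching mesh. The paper's operator acts between 3D cardiac and torso FE spaces, so the setup here is purely illustrative:

```python
import numpy as np

def p1_interpolation_matrix(coarse_nodes, fine_nodes):
    """Interpolation matrix P from a piecewise-linear (P1) finite element
    space on `coarse_nodes` to point values at `fine_nodes` (1D)."""
    nc, nf = len(coarse_nodes), len(fine_nodes)
    P = np.zeros((nf, nc))
    for i, x in enumerate(fine_nodes):
        # locate the coarse element containing x, clamped to [0, nc-2]
        j = np.searchsorted(coarse_nodes, x, side="right") - 1
        j = min(max(j, 0), nc - 2)
        h = coarse_nodes[j + 1] - coarse_nodes[j]
        t = (x - coarse_nodes[j]) / h
        P[i, j], P[i, j + 1] = 1.0 - t, t   # hat-function weights
    return P

coarse = np.linspace(0.0, 1.0, 5)
fine = np.linspace(0.0, 1.0, 9)
P = p1_interpolation_matrix(coarse, fine)
# A linear function is reproduced exactly by P1 interpolation:
u_coarse = 2.0 * coarse + 1.0
print(np.allclose(P @ u_coarse, 2.0 * fine + 1.0))  # True
```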

The design of particle simulation methods for collisional plasma physics has always represented a challenge due to the unbounded total collisional cross section, which prevents a natural extension of the classical Direct Simulation Monte Carlo (DSMC) method devised for the Boltzmann equation. One way to overcome this problem is to design Monte Carlo algorithms that are robust in the so-called grazing collision limit. In the first part of this manuscript, we focus on the construction of collision algorithms for the Landau-Fokker-Planck equation that are based on the grazing collision asymptotics and avoid the use of iterative solvers. Subsequently, we discuss problems involving uncertainties and show how to develop a stochastic Galerkin projection of the particle dynamics that recovers spectral accuracy for smooth solutions in the random space. Several classical numerical tests are reported to validate the present approach.
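A minimal sketch of a grazing-limit Monte Carlo collision step (illustrative, not the paper's algorithm): random particle pairs have their relative velocity rotated by a small Gaussian angle, which conserves momentum and kinetic energy exactly while producing small-angle, Fokker-Planck-like diffusion in velocity space:

```python
import numpy as np

rng = np.random.default_rng(0)

def grazing_collision_step(v, eps):
    """One Monte Carlo collision step in the grazing limit (2D velocities).
    Particles are paired at random; each pair's relative velocity is rotated
    by a small Gaussian angle of variance eps. The center-of-mass velocity
    and |g| are unchanged, so momentum and kinetic energy are conserved."""
    n = v.shape[0]
    idx = rng.permutation(n)
    for a, b in zip(idx[::2], idx[1::2]):
        V = 0.5 * (v[a] + v[b])        # center-of-mass velocity
        g = v[a] - v[b]                # relative velocity
        theta = rng.normal(0.0, np.sqrt(eps))
        c, s = np.cos(theta), np.sin(theta)
        g_new = np.array([c * g[0] - s * g[1], s * g[0] + c * g[1]])
        v[a], v[b] = V + 0.5 * g_new, V - 0.5 * g_new
    return v

v = rng.normal(size=(1000, 2))
p0, e0 = v.sum(axis=0), (v ** 2).sum()
v = grazing_collision_step(v, eps=1e-2)
print(np.allclose(v.sum(axis=0), p0), np.isclose((v ** 2).sum(), e0))
```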

The deformed energy method has been shown to be a good option for the dimensional synthesis of mechanisms. In this paper, the introduction of some new features to this approach is proposed. First, constraints fixing the dimensions of certain links are introduced in the error function of the synthesis problem. Second, requirements on the distances between specific nodes are included in the error function for the analysis of the deformed position problem. Both the overall synthesis error function and the inner analysis error function are optimized using a Sequential Quadratic Programming (SQP) approach. This also reduces the probability of branch or circuit defects. In the case of the inner function, analytical derivatives are used, while in the synthesis optimization approximate derivatives have been introduced. Furthermore, constraints are analyzed under two formulations: the Euclidean distance, and an alternative approach that uses its square. The latter approach is often used in kinematics and simplifies the computation of derivatives. Some examples are provided to show the convergence order of the error function and the fulfilment of the constraints in both formulations under different topological situations or achieved energy levels.
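The interplay between the two constraint formulations can be illustrated on a toy problem with SciPy's SLSQP implementation of sequential quadratic programming. The "energy", the fixed node `a`, and the link length `L` below are illustrative placeholders, not the paper's synthesis formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy analogue: place a node x in the plane to minimize a quadratic "energy"
# subject to a fixed-link-length constraint |x - a| = L, written either as
# the Euclidean distance or as its square (the latter is smooth everywhere,
# which simplifies the derivatives used by the SQP iterations).
a, L = np.array([1.0, 0.0]), 2.0
target = np.array([4.0, 1.0])

energy = lambda x: np.sum((x - target) ** 2)

con_dist = {"type": "eq", "fun": lambda x: np.linalg.norm(x - a) - L}
con_sq = {"type": "eq", "fun": lambda x: np.sum((x - a) ** 2) - L ** 2}

x0 = np.array([2.0, 2.0])
for con in (con_dist, con_sq):
    res = minimize(energy, x0, method="SLSQP", constraints=[con])
    print(res.x, np.linalg.norm(res.x - a))  # constraint |x - a| = 2 holds
```

Both formulations converge to the same constrained minimizer here; the squared form avoids the non-differentiability of the norm at `x = a`.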

The method of the lower deformation energy has been successfully used for the synthesis of mechanisms for quite a while. It has been shown to be a versatile yet powerful method for assisting in the design of mechanisms. Until now, most implementations of this method have used the dimensions of the mechanism as the synthesis variables, which has some advantages and some drawbacks. For example, the assembly configuration is not taken into account in the optimization process, which means that the same initial configuration is used when computing the deformed positions in each synthesis point. This translates into a reduction of the total search space. A possible solution to this problem is the use of a set of initial coordinates as the synthesis variables, which has been successfully applied to other methods. This also has some additional advantages, such as the fact that any generated mechanism can be assembled. Another advantage is that the fixed joint locations are included in the optimization at no additional cost. However, the change from dimensions to initial coordinates requires a reformulation of the optimization problem if the derivatives are to be obtained analytically. This paper tackles this reformulation, along with a proper comparison of both alternatives using sequential quadratic programming methods. To this end, some examples are developed and studied.

Objective: Prediction models are popular in medical research and practice. By predicting an outcome of interest for specific patients, these models may help inform difficult treatment decisions, and they are often hailed as the poster children for personalized, data-driven healthcare. Many prediction models are deployed for decision support based on their prediction accuracy in validation studies. We investigate whether this is a safe and valid approach. Materials and Methods: We show that using prediction models for decision making can lead to harmful decisions, even when the predictions exhibit good discrimination after deployment. These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worsened outcomes of these patients do not invalidate the predictive power of the model. Results: Our main result is a formal characterization of a set of such prediction models. We then show that models that are well calibrated before and after deployment are useless for decision making, because their deployment did not change the data distribution. Discussion: Our results point to the need to revise standard practices for the validation, deployment and evaluation of prediction models used in medical decisions. Conclusion: Outcome prediction models can yield harmful self-fulfilling prophecies when used for decision making; a new perspective on prediction model development, deployment and monitoring is needed.
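A toy simulation (with made-up numbers, not from the paper) illustrates the mechanism: a policy that withholds an effective treatment from predicted high-risk patients worsens exactly those patients' outcomes, so the model retains good post-deployment discrimination while causing harm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Patients have a latent severity s; the model predicts risk from s.
# Deployment policy: treat only predicted low-risk patients. Outcomes
# worsen for the untreated high-risk group, which *preserves* the
# model's discrimination even though following it caused harm.
n = 4000
s = rng.uniform(size=n)
pred_risk = s                           # a "perfect" pre-deployment model
treated = pred_risk < 0.5               # policy: withhold treatment if high-risk
p_bad = np.where(treated, 0.2 * s, s)   # treatment cuts risk to 20%
outcome = rng.uniform(size=n) < p_bad   # True = bad outcome

# Discrimination (AUC, computed pairwise) stays high after deployment:
pos, neg = pred_risk[outcome], pred_risk[~outcome]
auc = (pos[:, None] > neg[None, :]).mean()
# Excess bad outcomes among the untreated vs. their treated counterfactual:
harm = outcome[~treated].mean() - 0.2 * s[~treated].mean()
print(round(auc, 2), harm > 0)
```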

The methodological contribution in this paper is motivated by biomechanical studies where the data characterizing human movement are waveform curves representing joint measures such as flexion angles, velocity, acceleration, and so on. In many cases, the aim is to detect differences in gait patterns when several independent samples of subjects walk or run under different conditions (repeated measures). Classic kinematic studies often analyse discrete summaries of the sample curves, discarding important information and providing biased results. As the sample data are obviously curves, a Functional Data Analysis approach is proposed to solve the problem of testing the equality of the mean curves of a functional variable observed on several independent groups under different treatments or time periods. A novel approach to Functional Analysis of Variance (FANOVA) for repeated measures that takes into account the complete curves is introduced. By assuming a basis expansion for each sample curve, the two-way FANOVA problem is reduced to Multivariate ANOVA (MANOVA) for the multivariate response of basis coefficients. Then, two different approaches for MANOVA with repeated measures are considered. In addition, an extensive simulation study is developed to check their performance. Finally, two applications with gait data are developed.
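The reduction from curves to basis coefficients can be sketched as follows; for brevity this uses a quadratic polynomial basis and a two-group Hotelling T^2 statistic as a stand-in for the full repeated-measures MANOVA, and the simulated gait-like curves are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Expand each sample curve in a small basis, then compare groups through
# the multivariate basis coefficients.
t = np.linspace(0, 1, 100)
basis = np.vstack([np.ones_like(t), t, t ** 2]).T   # quadratic basis (illustrative)

def coefs(curves):
    # least-squares basis coefficients, one row per curve
    return np.linalg.lstsq(basis, curves.T, rcond=None)[0].T

g1 = np.sin(np.pi * t) + 0.1 * rng.normal(size=(30, 100))
g2 = np.sin(np.pi * t) + 0.3 * t + 0.1 * rng.normal(size=(30, 100))  # shifted trend

c1, c2 = coefs(g1), coefs(g2)
n1, n2 = len(c1), len(c2)
diff = c1.mean(axis=0) - c2.mean(axis=0)
Sp = ((n1 - 1) * np.cov(c1.T) + (n2 - 1) * np.cov(c2.T)) / (n1 + n2 - 2)
T2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(Sp, diff)
print(T2 > 10)  # a large T^2 flags a difference between the mean curves
```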

We have developed an efficient and unconditionally energy-stable method for simulating droplet formation dynamics. Our approach involves a novel time-marching scheme based on the scalar auxiliary variable technique, specifically designed for solving the Cahn-Hilliard-Navier-Stokes phase field model with variable density and viscosity. We have successfully applied this method to simulate droplet formation in scenarios where a Newtonian fluid is injected through a vertical tube into another immiscible Newtonian fluid. To tackle the challenges posed by nonhomogeneous Dirichlet boundary conditions at the tube entrance, we have introduced additional nonlocal auxiliary variables and associated ordinary differential equations, which effectively eliminate the influence of the boundary terms. Moreover, we have incorporated stabilization terms into the scheme to enhance its numerical effectiveness. Notably, the resulting scheme is fully decoupled, requiring the solution of only linear systems at each time step. We have also demonstrated the energy-decay property of the scheme, with suitable modifications. To assess the accuracy and stability of our algorithm, we have conducted extensive numerical simulations. Additionally, we have examined the dynamics of droplet formation and explored the impact of dimensionless parameters on the process. Overall, our work presents a refined method for simulating droplet formation dynamics, offering improved efficiency, energy stability, and accuracy.
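The core scalar-auxiliary-variable (SAV) mechanics, and the decay of the modified energy, can be sketched on a much simpler problem: the 1D Allen-Cahn gradient flow. The first-order scheme and parameters below are illustrative and do not reproduce the paper's fully decoupled Cahn-Hilliard-Navier-Stokes scheme:

```python
import numpy as np

# Gradient flow u_t = -(A u + E1'(u)) with A SPD (here A = -eps^2 * Laplacian)
# and nonlinear energy E1. SAV introduces r = sqrt(E1(u) + C0); the first-order
# scheme is linear in (u^{n+1}, r^{n+1}) and decays 0.5 u'Au + r^2 unconditionally.
N, eps, dt, C0 = 128, 0.05, 0.01, 1.0
x = np.linspace(0.0, 1.0, N, endpoint=False)
h = x[1] - x[0]

lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1))
lap[0, -1] = lap[-1, 0] = 1.0            # periodic boundary
A = -eps ** 2 * lap / h ** 2             # symmetric positive semidefinite

E1 = lambda u: 0.25 * np.sum((u ** 2 - 1) ** 2)   # double-well energy
grad_E1 = lambda u: u ** 3 - u

u = 0.1 * np.cos(2 * np.pi * x)
r = np.sqrt(E1(u) + C0)                  # scalar auxiliary variable
M = np.eye(N) + dt * A
mod_energy = lambda u, r: 0.5 * u @ A @ u + r ** 2

E_hist = [mod_energy(u, r)]
for _ in range(200):
    b = grad_E1(u) / np.sqrt(E1(u) + C0)
    u1 = np.linalg.solve(M, u)           # only linear solves per step
    u2 = np.linalg.solve(M, b)
    r = (r + 0.5 * b @ (u1 - u)) / (1.0 + 0.5 * dt * b @ u2)
    u = u1 - dt * r * u2
    E_hist.append(mod_energy(u, r))

# Modified energy is non-increasing at every step:
print(all(e2 <= e1 + 1e-9 for e1, e2 in zip(E_hist, E_hist[1:])))  # True
```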

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for the precise prediction of an explainee's inferences conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
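The comparison step can be sketched directly: under Shepard's law, similarity decays exponentially with distance in the similarity space, and the explainee's predicted label is the one whose hypothetical human explanation is most similar to the AI's. The feature vectors and the scale `c` below are illustrative, not from the paper:

```python
import numpy as np

def shepard_similarity(x, y, c=1.0):
    """Shepard's universal law of generalization: s = exp(-c * d(x, y))."""
    return np.exp(-c * np.linalg.norm(x - y))

ai_explanation = np.array([0.9, 0.1, 0.0])     # AI's saliency-derived features
human_explanations = {                          # what a human would highlight per class
    "cat": np.array([1.0, 0.0, 0.0]),
    "dog": np.array([0.0, 1.0, 0.0]),
}
sims = {k: shepard_similarity(ai_explanation, v) for k, v in human_explanations.items()}
Z = sum(sims.values())
probs = {k: s / Z for k, s in sims.items()}     # explainee's predicted AI label
print(max(probs, key=probs.get))                # "cat": closest human explanation
```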

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.

Graph representation learning for hypergraphs can be used to extract patterns among the higher-order interactions that are critically important in many real-world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic across learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention-based graph neural network called Hyper-SAGNN that is applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms state-of-the-art methods on traditional tasks while also achieving strong performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
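The static/dynamic embedding idea behind Hyper-SAGNN can be sketched in a few lines of numpy. The weights below are random placeholders rather than trained parameters, and details such as attention masking and the exact scoring head are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each node in a candidate hyperedge gets a "static" embedding (independent
# of the tuple) and a "dynamic" embedding produced by self-attention over
# the tuple; the edge score aggregates the distances between the two.
d = 8
W_static = rng.normal(size=(d, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def hyperedge_score(X):
    """X: (n, d) features of the n nodes in a candidate hyperedge;
    n may differ between hyperedges (variable hyperedge size)."""
    static = np.tanh(X @ W_static)
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d))        # tuple-dependent mixing
    dynamic = np.tanh(attn @ V)
    s = ((dynamic - static) ** 2).mean()        # per-node static/dynamic gap
    return 1.0 / (1.0 + np.exp(-s))             # squash to a probability

print(hyperedge_score(rng.normal(size=(3, d))))  # works for a triple...
print(hyperedge_score(rng.normal(size=(5, d))))  # ...and a size-5 hyperedge
```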
