
Manifold-valued functional data analysis (FDA) has recently become an active area of research, motivated by the rising availability of trajectories or longitudinal data observed on non-linear manifolds. The challenges of analyzing such data arise from many aspects, including infinite dimensionality and nonlinearity, as well as time-domain or phase variability. In this paper, we study the amplitude part of manifold-valued functions on $\mathbb{S}^2$, which is invariant to random time warping or re-parameterization. Utilizing the nice geometry of $\mathbb{S}^2$, we develop a set of efficient and accurate tools for temporal alignment of functions, geodesic computation, and sample mean calculation. At their core, these tools rely on gradient descent algorithms with carefully derived gradients. We show the advantages of these newly developed tools over their competitors through extensive simulations and real data, and demonstrate the importance of considering the amplitude part of functions, rather than mixing it with phase variability, in manifold-valued FDA.
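To make the kind of intrinsic gradient-descent computation on $\mathbb{S}^2$ concrete, here is a minimal sketch of a Karcher (Fréchet) mean of points on the sphere using the exponential and logarithm maps. This is a standard building block, not the paper's algorithm; the function names (`log_map`, `exp_map`, `frechet_mean`) and the toy data are ours.

```python
import numpy as np

def log_map(p, q):
    """Riemannian log map on S^2: tangent vector at p pointing toward q, with norm d(p, q)."""
    cos_theta = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros(3)
    v = q - cos_theta * p                  # component of q orthogonal to p
    return theta * v / np.linalg.norm(v)

def exp_map(p, v):
    """Riemannian exp map on S^2: move from p along the tangent vector v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * v / nv

def frechet_mean(points, step=1.0, tol=1e-10, max_iter=100):
    """Gradient descent for the Frechet (Karcher) mean of points on S^2."""
    mu = points[0]
    for _ in range(max_iter):
        grad = np.mean([log_map(mu, q) for q in points], axis=0)
        if np.linalg.norm(grad) < tol:
            break
        mu = exp_map(mu, step * grad)      # move along the (negative) Riemannian gradient
    return mu

# toy usage: three points clustered near the north pole
pts = np.array([[0.1, 0.0, 1.0], [0.0, 0.1, 1.0], [-0.1, -0.05, 1.0]])
pts = pts / np.linalg.norm(pts, axis=1, keepdims=True)
print(frechet_mean(pts))
```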

This new edition of the TOOLS conference series revives a tradition of 50 conferences held from 1989 to 2012. TOOLS began as "Technology of Object-Oriented Languages and Systems" and later grew to cover all innovative aspects of software technology. Many of today's most important software concepts were first introduced there. TOOLS 50+1, held in 2019 near Kazan, Russia, continued the series in the same innovative spirit, with its enthusiasm for all things software, its combination of scientific rigor and industrial applicability, and its openness to all trends and communities in the field.

The liver has a unique blood supply system and plays an important role in the human blood circulatory system. Hemodynamic problems related to the liver therefore form an important part of clinical diagnosis and treatment. Although estimating the parameters of these hemodynamic models is essential to the study of liver models, the limitations of medical measurement methods and the ethical constraints on clinical studies make it impossible to measure the parameters of hepatic blood vessels directly. Furthermore, as an important part of the systemic circulation, the liver should be studied in conjunction with the other blood vessels. In this article, we present an innovative method to determine the parameters of an individual liver within the human blood circulation using non-invasive clinical measurements. The method consists of a 1-D blood flow model of human arteries and veins, a 0-D model reflecting the peripheral resistance of capillaries, and a lumped-parameter circuit model for the human liver. We apply the finite element method to these fluid-mechanics models in a numerical study based on non-invasive blood-related measurements of 33 individuals. The estimated blood vessel characteristics and liver model parameters are verified from the perspective of Stroke Volume Variation, which shows the effectiveness of our estimation method.
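The following sketch illustrates the general idea of fitting a lumped-parameter (0-D) circuit model to non-invasive measurements by least squares. It uses a simple two-element Windkessel in place of the paper's coupled 1-D/0-D liver model; the parameter names, values, and synthetic data are purely illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_windkessel(R, C, t, inflow):
    """Forward-Euler simulation of a 2-element Windkessel: C * dP/dt = Q(t) - P/R."""
    P = np.zeros_like(t)
    dt = t[1] - t[0]
    for k in range(len(t) - 1):
        P[k + 1] = P[k] + dt * (inflow[k] - P[k] / R) / C
    return P

# synthetic "measured" pressure generated from known parameters (stand-in for clinical data)
t = np.linspace(0.0, 5.0, 2000)
inflow = 80.0 + 40.0 * np.maximum(np.sin(2 * np.pi * t), 0.0)        # pulsatile inflow, mL/s
P_obs = simulate_windkessel(R=1.1, C=1.4, t=t, inflow=inflow)
P_obs += np.random.default_rng(0).normal(0.0, 0.5, size=t.shape)     # measurement noise

def residuals(theta):
    R, C = theta
    return simulate_windkessel(R, C, t, inflow) - P_obs

fit = least_squares(residuals, x0=[0.5, 0.5], bounds=([1e-3, 1e-3], [10.0, 10.0]))
print("estimated R, C:", fit.x)
```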

We extend and analyze the energy-based discontinuous Galerkin method for second order wave equations on staggered and structured meshes. By combining spatial staggering with local time-stepping near boundaries, the method overcomes the typical numerical stiffness associated with high order piecewise polynomial approximations. In one space dimension with periodic boundary conditions and suitably chosen numerical fluxes, we prove bounds on the spatial operators that establish stability for CFL numbers $c \frac {\Delta t}{h} < C$ independent of order when stability-enhanced explicit time-stepping schemes of matching order are used. For problems on bounded domains and in higher dimensions we demonstrate numerically that one can march explicitly with large time steps at high order temporal and spatial accuracy.
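As a rough illustration of how a CFL condition of the form $c \frac{\Delta t}{h} < C$ follows from a bound on the spatial operator, the sketch below estimates the spectral radius of a simple periodic finite-difference operator (a stand-in for the paper's DG operator) and the corresponding leapfrog stability limit; the stencil and numbers are ours.

```python
import numpy as np

def periodic_second_difference(n, h):
    """Standard 3-point periodic second-derivative matrix (a stand-in for a DG spatial operator)."""
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, -1] = L[-1, 0] = 1.0
    return L / h**2

c, n = 1.0, 200
h = 1.0 / n
A = c**2 * periodic_second_difference(n, h)        # semi-discrete system u_tt = A u
rho = np.abs(np.linalg.eigvalsh(A)).max()          # spectral radius of the spatial operator
dt_max = 2.0 / np.sqrt(rho)                        # leapfrog stability limit for u_tt = A u
print("CFL number c*dt/h at the stability limit:", c * dt_max / h)   # ~1 for this stencil
```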

We present a general methodology to construct triplewise independent sequences of random variables having a common but arbitrary marginal distribution $F$ (satisfying very mild conditions). For two specific sequences, we obtain in closed form the asymptotic distribution of the sample mean. It is non-Gaussian (and depends on the specific choice of $F$). This allows us to illustrate the extent of the 'failure' of the classical central limit theorem (CLT) under triplewise independence. Our methodology is simple and can also be used to create, for any integer $K$, new $K$-tuplewise independent sequences that are not mutually independent. For $K \geq 4$, it appears that the sequences created using our methodology do verify a CLT, and we explain heuristically why this is the case.
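For readers who want to see the phenomenon numerically, the sketch below uses the classical *pairwise*-independent Rademacher construction (all products $U_iU_j$, $i<j$), not the paper's triplewise construction, to show how the standardized sample mean can fail to be Gaussian when only limited independence holds; here the limit is $(Z^2-1)/\sqrt{2}$, which is visibly skewed.

```python
import numpy as np

def pairwise_independent_sample(m, rng):
    """Classical pairwise-independent construction: all products U_i * U_j (i < j) of Rademacher U's."""
    u = rng.choice([-1.0, 1.0], size=m)
    i, j = np.triu_indices(m, k=1)
    return u[i] * u[j]               # n = m*(m-1)/2 variables, each with mean 0 and variance 1

rng = np.random.default_rng(0)
m = 60                               # gives n = 1770 variables per replication
stats = []
for _ in range(20000):
    x = pairwise_independent_sample(m, rng)
    stats.append(np.sqrt(x.size) * x.mean())   # standardized sample mean (mu = 0, sigma = 1)
stats = np.array(stats)

# The CLT would predict N(0,1); the actual limit here is (Z^2 - 1)/sqrt(2), which is skewed.
print("skewness:", ((stats - stats.mean()) ** 3).mean() / stats.std() ** 3)
```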

We propose a new penalty, named the springback penalty, for constructing models to recover an unknown signal from incomplete and inaccurate measurements. Mathematically, the springback penalty is a weakly convex function, and it retains various theoretical and computational advantages of both the benchmark convex $\ell_1$ penalty and many of its non-convex surrogates that have been well studied in the literature. For the recovery model using the springback penalty, we establish exact and stable recovery theory for sparse and nearly sparse signals, respectively, and derive an easily implementable difference-of-convex algorithm. In particular, we show its theoretical superiority over some existing models, with a sharper recovery bound in scenarios where the measurement noise is large or the number of measurements is limited, and demonstrate its numerical robustness regardless of the coherence of the sensing matrix. Because of its theoretical recovery guarantee under severely limited or noisy measurements, its computational tractability, and its numerical robustness for ill-conditioned sensing matrices, the springback penalty is particularly favorable for scenarios where the incomplete and inaccurate measurements are collected by coherence-hidden or -static sensing hardware.
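The sketch below shows the generic structure of a difference-of-convex algorithm (DCA) for such a weakly convex penalty, *assuming* a DC decomposition of the form $\|x\|_1 - \frac{\alpha}{2}\|x\|_2^2$; the exact form of the springback penalty may differ, and the solver names and parameters are illustrative only.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, linear_shift, x0, n_iter=200):
    """Proximal gradient for 0.5*||Ax - b||^2 + <linear_shift, x> + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the smooth part
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + linear_shift
        x = soft_threshold(x - grad / L, lam / L)
    return x

def dca(A, b, lam=0.1, alpha=0.3, n_outer=30):
    """DCA, assuming the penalty has the DC form ||x||_1 - (alpha/2)*||x||_2^2 (alpha small)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        # linearize the concave part -lam*(alpha/2)*||x||^2 at the current iterate
        x = ista(A, b, lam, linear_shift=-lam * alpha * x, x0=x)
    return x

# toy compressed-sensing instance
rng = np.random.default_rng(0)
n, p, s = 60, 200, 8
A = rng.normal(size=(n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, s, replace=False)] = rng.normal(size=s)
b = A @ x_true + 0.01 * rng.normal(size=n)
print("recovery error:", np.linalg.norm(dca(A, b) - x_true))
```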

We analyse an energy minimisation problem recently proposed for modelling smectic-A liquid crystals. The optimality conditions give a coupled nonlinear system of partial differential equations, with a second-order equation for the tensor-valued nematic order parameter $\mathbf{Q}$ and a fourth-order equation for the scalar-valued smectic density variation $u$. Our two main results are a proof of the existence of solutions to the minimisation problem, and the derivation of a priori error estimates for its discretisation using the $\mathcal{C}^0$ interior penalty method. More specifically, optimal rates in the $H^1$ and $L^2$ norms are obtained for $\mathbf{Q}$, while optimal rates in a mesh-dependent norm and $L^2$ norm are obtained for $u$. Numerical experiments confirm the rates of convergence.
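For orientation, a standard $\mathcal{C}^0$ interior penalty bilinear form for a model fourth-order (biharmonic-type) term reads as follows; the paper's actual form for the smectic density equation will contain additional lower-order and coupling terms, and the notation here is generic.
\[
a_h(u,v)=\sum_{K\in\mathcal{T}_h}\int_K D^2u : D^2v\,dx
+\sum_{e\in\mathcal{E}_h}\int_e\Big(\{\partial^2_{nn}u\}\,[\partial_n v]+\{\partial^2_{nn}v\}\,[\partial_n u]\Big)\,ds
+\sum_{e\in\mathcal{E}_h}\frac{\sigma}{h_e}\int_e[\partial_n u]\,[\partial_n v]\,ds,
\]
where $\{\cdot\}$ and $[\cdot]$ denote the average and jump of normal derivatives across an interior edge $e$, $h_e$ is the edge length, and $\sigma>0$ is the penalty parameter; the mesh-dependent norm in which the rates for $u$ are stated is typically induced by this form.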

Synthetic control methods are commonly used to estimate the treatment effect on a single treated unit in panel data settings. A synthetic control (SC) is a weighted average of control units built to match the treated unit's pre-treatment outcome trajectory, with weights typically estimated by regressing pre-treatment outcomes of the treated unit on those of the control units. However, it has been established that such regression estimators can fail to be consistent. In this paper, we introduce a proximal causal inference framework to formalize identification and inference for both the SC weights and the treatment effect on the treated. We show that control units previously perceived as unusable can be repurposed to consistently estimate the SC weights. We also propose to view the difference in the post-treatment outcomes between the treated unit and the SC as a time series, which opens the door to a rich literature on time-series analysis for treatment effect estimation. We further extend the traditional linear model to accommodate general nonlinear models allowing for binary and count outcomes, which are understudied in the SC literature. We illustrate our proposed methods with simulation studies and an application to the evaluation of the 1990 German reunification.
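For reference, the sketch below implements the classical SC estimator described above (simplex-constrained regression of the treated unit's pre-treatment outcomes on those of the controls), i.e., the estimator the paper argues can be inconsistent, not the proposed proximal estimator; the toy panel and function names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(Y0_pre, y1_pre):
    """Classical synthetic-control weights: simplex-constrained least squares on pre-treatment outcomes.
    Y0_pre: (T_pre, J) control outcomes; y1_pre: (T_pre,) treated outcomes."""
    J = Y0_pre.shape[1]
    objective = lambda w: np.sum((y1_pre - Y0_pre @ w) ** 2)
    constraints = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    res = minimize(objective, x0=np.full(J, 1.0 / J), bounds=[(0.0, 1.0)] * J,
                   constraints=constraints)
    return res.x

# toy panel: 40 pre-treatment and 10 post-treatment periods, 5 control units
rng = np.random.default_rng(0)
T_pre, T_post, J = 40, 10, 5
Y0 = rng.normal(size=(T_pre + T_post, J)).cumsum(axis=0)
true_w = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
y1 = Y0 @ true_w + rng.normal(scale=0.1, size=T_pre + T_post)
y1[T_pre:] += 2.0                                   # treatment effect of 2 after period T_pre

w_hat = sc_weights(Y0[:T_pre], y1[:T_pre])
gap = y1[T_pre:] - Y0[T_pre:] @ w_hat               # post-treatment gap, viewed as a time series
print("weights:", np.round(w_hat, 2), " average effect estimate:", gap.mean())
```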

The assignment flow, recently introduced in J. Math. Imaging and Vision 58/2 (2017), constitutes a high-dimensional dynamical system that evolves on an elementary statistical manifold and performs contextual labeling (classification) of data given in any metric space. Vertices of a given graph index the data points and define a system of neighborhoods. These neighborhoods, together with nonnegative weight parameters, define regularization of the evolution of label assignments to data points through geometric averaging induced by the affine e-connection of information geometry. From the viewpoint of evolutionary game dynamics, the assignment flow may be characterized as a large system of replicator equations that are coupled by geometric averaging. This paper establishes conditions on the weight parameters that guarantee convergence of the continuous-time assignment flow to integral assignments (labelings), up to a negligible subset of situations that will not be encountered when working with real data in practice. Furthermore, we classify attractors of the flow and quantify the corresponding basins of attraction. This provides convergence guarantees for the assignment flow, which are extended to the discrete-time assignment flow obtained by applying a Runge-Kutta-Munthe-Kaas scheme for numerical geometric integration of the continuous-time flow. Several counter-examples illustrate that violating the conditions may entail unfavorable behavior of the assignment flow regarding contextual data classification.
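To give a feel for the building block, the sketch below integrates a single replicator equation on the probability simplex with a multiplicative (softmax) update, which is the exact flow for a constant fitness vector and the first-order geometric step underlying Lie-group integrators of this kind; the actual assignment flow couples many such equations through geometric averaging over neighborhoods, which is not reproduced here.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def replicator_step(w, f, dt):
    """One geometric step of the replicator equation w' = w * (f - <w, f>) on the simplex;
    equivalent to the multiplicative update w_new ∝ w * exp(dt * f)."""
    return softmax(np.log(w) + dt * f)

# toy example: 4 labels, fixed fitness/similarity vector favoring label 2
w = np.full(4, 0.25)
f = np.array([0.1, 0.3, 1.0, 0.2])
for _ in range(200):
    w = replicator_step(w, f, dt=0.1)
print(np.round(w, 3))      # converges toward the vertex of the best-scoring label (an integral assignment)
```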

We introduce and analyze various Regularized Combined Field Integral Equation (CFIER) formulations of time-harmonic Navier equations in media with piecewise constant material properties. These formulations can be derived systematically starting from suitable coercive approximations of Dirichlet-to-Neumann (DtN) operators, and we present a periodic pseudodifferential calculus framework within which the well-posedness of CFIER formulations can be established. We also use the DtN approximations to derive and analyze Optimized Schwarz (OS) methods for the solution of elastodynamics transmission problems. The pseudodifferential calculus we develop in this paper relies on careful singularity splittings of the kernels of Navier boundary integral operators, which are also the basis of high-order Nystr\"om quadratures for their discretizations. Based on these high-order discretizations, we investigate the rate of convergence of iterative solvers applied to CFIER and OS formulations of scattering and transmission problems. We present a variety of numerical results illustrating that the CFIER methodology leads to important computational savings over the classical CFIE one whenever iterative solvers are used for the solution of the ensuing discretized boundary integral equations. Finally, we show that the OS methods are competitive in the high-frequency, high-contrast regime.

New Vapnik-Chervonenkis type concentration inequalities are derived for the empirical distribution of an independent random sample. The focus is on the maximal deviation over classes of Borel sets within a low-probability region. The constants are explicit, enabling numerical comparisons.
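For context, one textbook version of the classical Vapnik-Chervonenkis inequality that such results refine reads (constants vary across references):
\[
\mathbb{P}\Big(\sup_{A\in\mathcal{A}}\big|P_n(A)-P(A)\big|>\varepsilon\Big)\;\le\;8\,S_{\mathcal{A}}(n)\,e^{-n\varepsilon^2/32},\qquad \varepsilon>0,
\]
where $P_n$ is the empirical measure of the $n$-sample, $P$ the underlying distribution, and $S_{\mathcal{A}}(n)$ the $n$-th shatter coefficient of the class $\mathcal{A}$.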

Packet replication and elimination functions are used by time-sensitive networks (as in the context of IEEE TSN and IETF DetNet) to increase reliability. Packets are replicated onto redundant paths by a replication function; the paths later merge again, and an elimination function removes the duplicates. This redundancy scheme affects the timing behavior of time-sensitive networks and raises many challenges for timing analysis. Replication can increase burstiness along the paths of the replicates and induce packet mis-ordering, which may increase the delays in the traversed bridges or routers. The induced packet mis-ordering can also negatively affect the interactions between the redundancy and scheduling mechanisms, such as traffic regulators (per-flow regulators and interleaved regulators, as implemented by TSN asynchronous traffic shaping). Using the network calculus framework, we provide a worst-case timing analysis method for time-sensitive networks that implement redundancy mechanisms in the general case, i.e., at end devices and/or intermediate nodes. We first provide a network calculus toolbox for bounding the burstiness increase and the amount of reordering caused by the elimination of duplicate packets. We then analyze the interactions with traffic regulators and show that their shaping-for-free property does not hold when they are placed after a packet elimination function. We provide a bound on the delay penalty when per-flow regulators are used and prove that the penalty is not bounded with interleaved regulators. Finally, we use an industrial use case to show the applicability and the benefits of our findings.
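As a reminder of how burstiness propagates in the network calculus setting used here, the sketch below evaluates the textbook single-node bounds for a token-bucket flow crossing a rate-latency server; these are not the paper's bounds for replication/elimination functions, and the numbers are illustrative.

```python
def output_burstiness(b, r, R, T):
    """Token-bucket flow (burst b, rate r) through a rate-latency server (rate R, latency T):
    the classical output arrival curve has burstiness b + r*T (requires r <= R)."""
    assert r <= R, "stability requires the arrival rate not to exceed the service rate"
    return b + r * T

def delay_bound(b, r, R, T):
    """Classical worst-case delay bound for the same single-node setting: T + b/R."""
    assert r <= R
    return T + b / R

# toy numbers: 2 Mb burst, 10 Mb/s flow through a 100 Mb/s server with 0.5 ms latency
print("output burstiness [bits]:", output_burstiness(b=2e6, r=10e6, R=100e6, T=0.5e-3))
print("delay bound [s]:", delay_bound(b=2e6, r=10e6, R=100e6, T=0.5e-3))
```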
