
A significant part of modern topological data analysis is concerned with the design and study of algebraic invariants of poset representations -- often referred to as multi-parameter persistence modules. One such invariant is the minimal rank decomposition, which encodes the ranks of all the structure morphisms of the persistence module by a single ordered pair of rectangle-decomposable modules, interpreted as a signed barcode. This signed barcode generalizes the concept of persistence barcode from one-parameter persistence to any number of parameters, raising the question of its bottleneck stability. We show in this paper that the minimal rank decomposition is not stable under the natural notion of signed bottleneck matching between signed barcodes. We remedy this by turning our focus to the rank exact decomposition, a related signed barcode induced by the minimal projective resolution of the module relative to the so-called rank exact structure, which we prove to be bottleneck stable under signed matchings. As part of our proof, we obtain two intermediate results of independent interest: we compute the global dimension of the rank exact structure on the category of finitely presentable multi-parameter persistence modules, and we prove a bottleneck stability result for hook-decomposable modules. We also give a bound for the size of the rank exact decomposition that is polynomial in the size of the usual minimal projective resolution, we prove a universality result for the dissimilarity function induced by the notion of signed matching, and we compute, in the two-parameter case, the global dimension of a different exact structure related to the upsets of the indexing poset. This set of results combines concepts from topological data analysis and from the representation theory of posets, and we believe it to be relevant to both areas.
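For orientation, the invariant underlying both decompositions can be stated compactly. The rank invariant of a persistence module $M$ records, for every comparable pair $s \le t$ in the indexing poset,
\[
\mathrm{Rk}(M)(s \le t) \;=\; \operatorname{rank}\bigl(M_s \to M_t\bigr),
\]
and a minimal rank decomposition is an ordered pair $(R^{+}, R^{-})$ of rectangle-decomposable modules, with disjoint multisets of rectangle summands, such that
\[
\mathrm{Rk}(M) \;=\; \mathrm{Rk}(R^{+}) - \mathrm{Rk}(R^{-});
\]
the rectangles of $R^{+}$ and $R^{-}$, read with positive and negative signs, form the signed barcode. (This is the standard formulation from the literature on rank decompositions, not notation taken verbatim from the paper.)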

Related content

One of the key elements of probabilistic seismic risk assessment studies is the fragility curve, which represents the conditional probability of failure of a mechanical structure for a given scalar measure derived from seismic ground motion. For many structures of interest, estimating these curves is a daunting task because of the limited amount of data available; in our framework, these data are moreover only binary, i.e., they describe the structure as being in either a failure or a non-failure state. A large number of methods described in the literature tackle this challenging framework through parametric log-normal models. Bayesian approaches, on the other hand, allow model parameters to be learned more efficiently. However, the impact of the choice of the prior distribution on the posterior distribution cannot be readily neglected and, consequently, neither can its impact on any resulting estimation. This paper proposes a comprehensive study of this parametric Bayesian estimation problem for limited and binary data. Using the reference prior theory as a cornerstone, this study develops an objective approach to choosing the prior. This approach leads to the Jeffreys prior, which is derived for this problem for the first time. The posterior distribution is proven to be proper with the Jeffreys prior but improper with some traditional priors found in the literature. With the Jeffreys prior, the posterior distribution is also shown to vanish at the boundaries of the parameters' domain, which means that sampling the posterior distribution of the parameters does not result in anomalously small or large values. Therefore, the use of the Jeffreys prior does not result in degenerate fragility curves such as unit-step functions, and leads to more robust credibility intervals. The numerical results obtained from different case studies, including an industrial example, illustrate the theoretical predictions.
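As a concrete illustration of the parametric setting, the following Python sketch encodes the standard log-normal fragility model and the binary-data log-likelihood on which any Bayesian treatment, with the Jeffreys prior or a traditional one, would be built. The parameter names, distributions, and synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def fragility(a, alpha, beta):
    """Log-normal fragility curve: P(failure | IM = a) = Phi(ln(a/alpha) / beta)."""
    return norm.cdf(np.log(a / alpha) / beta)

def log_likelihood(alpha, beta, im, failed):
    """Binary-data log-likelihood: each observation is (intensity measure, 0/1 flag)."""
    p = fragility(im, alpha, beta)
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard against log(0)
    return np.sum(failed * np.log(p) + (1 - failed) * np.log(1 - p))

# Illustrative synthetic data: 30 binary observations (the limited-data regime).
rng = np.random.default_rng(0)
im = rng.lognormal(mean=0.0, sigma=0.5, size=30)                 # intensity measures
failed = (rng.random(30) < fragility(im, 1.0, 0.4)).astype(float)

print(log_likelihood(1.0, 0.4, im, failed))
```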

We introduce a high-dimensional cubical complex, for any dimension $t>0$, and apply it to the design of quantum locally testable codes. Our complex is a natural generalization of the constructions by Panteleev and Kalachev and by Dinur et al. of a square complex (the case $t=2$), which have been applied to the design of classical locally testable codes (LTC) and quantum low-density parity check codes (qLDPC), respectively. We turn the geometric (cubical) complex into a chain complex by relying on constant-sized local codes $h_1,\ldots,h_t$ as gadgets. A recent result of Panteleev and Kalachev on the existence of tuples of codes that are product expanding enables us to prove lower bounds on the cycle and co-cycle expansion of our chain complex. For $t=4$ our construction gives a new family of "almost-good" quantum LTCs -- with constant relative rate, inverse-polylogarithmic relative distance and soundness, and constant-size parity checks. Both the distance of the quantum code and its local testability are proven directly from the cycle and co-cycle expansion of our chain complex.
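The general mechanism turning a chain complex into a quantum code is the standard CSS construction (stated here in generic notation, not the paper's): a three-term chain complex over $\mathbb{F}_2$,
\[
C_2 \xrightarrow{\;\partial_2\;} C_1 \xrightarrow{\;\partial_1\;} C_0, \qquad \partial_1 \partial_2 = 0,
\]
defines a quantum CSS code with qubits indexed by a basis of $C_1$ and the two types of Pauli checks given by the rows of $\partial_2^{\mathsf{T}}$ and of $\partial_1$; the relation $\partial_1 \partial_2 = 0$ guarantees that the stabilizers commute. The minimal weights of non-trivial cycles and co-cycles control the code distance, which is why cycle and co-cycle expansion of the chain complex yields both the distance and the local testability bounds.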

Baker and Norine initiated the study of graph divisors as a graph-theoretic analogue of the Riemann-Roch theory for Riemann surfaces. One of the key concepts of graph divisor theory is the {\it rank} of a divisor on a graph. The importance of the rank is well illustrated by Baker's {\it Specialization lemma}, stating that the dimension of a linear system can only go up under specialization from curves to graphs, leading to a fruitful interaction between divisors on graphs and curves. Due to its decisive role, determining the rank is a central problem in graph divisor theory. Kiss and T\'othm\'eresz reformulated the problem using chip-firing games, and showed that computing the rank of a divisor on a graph is NP-hard via a reduction from the Minimum Feedback Arc Set problem. In this paper, we strengthen their result by establishing a connection between chip-firing games and the Minimum Target Set Selection problem. As a corollary, we show that the rank is hard to approximate to within a factor of $O(2^{\log^{1-\varepsilon}n})$ for any $\varepsilon > 0$ unless $P=NP$. Furthermore, assuming the Planted Dense Subgraph Conjecture, the rank is hard to approximate to within a factor of $O(n^{1/4-\varepsilon})$ for any $\varepsilon>0$.
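To make the chip-firing formulation concrete, here is a short Python sketch of the basic set-firing move together with Dhar's burning algorithm, the standard polynomial-time subroutine for computing $q$-reduced divisors; it decides, for instance, whether a divisor is equivalent to an effective one, whereas computing the rank itself is, by the results above, hard even to approximate. The graph encoding and names are illustrative.

```python
import numpy as np

def dhar_reduce(A, d, q):
    """Compute the q-reduced divisor equivalent to d via Dhar's burning algorithm.

    A: symmetric integer adjacency matrix (A[u, v] = number of edges between u and v),
    d: chip counts per vertex (assumed non-negative away from q),
    q: index of the distinguished vertex.
    """
    d = np.array(d, dtype=int)
    n = len(d)
    while True:
        # Burning process: a fire starts at q; vertex v burns once the number
        # of edges to already-burnt vertices exceeds its chip count d[v].
        burnt = np.zeros(n, dtype=bool)
        burnt[q] = True
        changed = True
        while changed:
            changed = False
            for v in range(n):
                if not burnt[v] and A[v, burnt].sum() > d[v]:
                    burnt[v] = changed = True
        if burnt.all():
            return d  # everything burnt: d is q-reduced
        # Legal set-firing: every unburnt vertex fires once, simultaneously.
        for v in np.flatnonzero(~burnt):
            d[v] -= A[v].sum()   # v sends one chip along each incident edge
            d += A[v]            # neighbours receive them (A[v, v] = 0)

# Example: cycle graph C_4 with 2 chips on vertex 1, reduced with respect to q = 0.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
print(dhar_reduce(A, [0, 2, 0, 0], q=0))  # -> [1 0 1 0]
```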

Dynamical low-rank (DLR) approximation has gained interest in recent years as a viable solution to the curse of dimensionality in the numerical solution of kinetic equations, including the Boltzmann and Vlasov equations. These methods include the projector-splitting and Basis-update & Galerkin (BUG) DLR integrators, and have shown promise in greatly improving the computational efficiency of kinetic solutions. However, this often comes at the cost of conservation of charge, current, and energy. In this work we show how a novel macro-micro decomposition may be used to separate the distribution function into two components, one of which carries the conserved quantities and the other of which is orthogonal to them. We apply DLR approximation to the latter, and thereby achieve a clean and extensible approach to a conservative DLR scheme which retains the computational advantages of the base scheme. Moreover, our approach requires no change to the mechanics of the DLR approximation, so it is compatible with both the BUG family of integrators and the projector-splitting integrator, which we use here. We describe a first-order integrator which can exactly conserve charge and either current or energy, as well as an integrator which exactly conserves charge and energy and exhibits second-order accuracy on our test problems. To highlight the flexibility of the proposed macro-micro decomposition, we implement a pair of velocity space discretizations, and verify the claimed accuracy and conservation properties on a suite of plasma benchmark problems.
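For readers unfamiliar with the base scheme, the following Python sketch implements one first-order projector-splitting (KSL) step for a generic matrix differential equation $\dot{A} = F(A)$ kept at fixed rank $r$, in the spirit of Lubich and Oseledets; it shows the integrator that the paper's macro-micro decomposition wraps, but none of the conservative machinery, and all names and the test problem are illustrative.

```python
import numpy as np

def ksl_step(F, U, S, V, h):
    """One first-order projector-splitting (KSL) step for dA/dt = F(A),
    with A approximated as U @ S @ V.T at fixed rank r. Each substep uses a
    single explicit Euler step, so the overall step is first order in h."""
    # K-step: evolve K = U S with V frozen, then re-orthogonalize.
    K = U @ S
    K = K + h * F(K @ V.T) @ V
    U1, S_hat = np.linalg.qr(K)
    # S-step: evolve S *backwards* in time (the hallmark minus sign of KSL).
    S_tilde = S_hat - h * U1.T @ F(U1 @ S_hat @ V.T) @ V
    # L-step: evolve L = V S^T with U frozen, then re-orthogonalize.
    L = V @ S_tilde.T
    L = L + h * F(U1 @ L.T).T @ U1
    V1, S1T = np.linalg.qr(L)
    return U1, S1T.T, V1

# Illustrative test: linear problem dA/dt = B A with a random generator B.
rng = np.random.default_rng(1)
n, r, h = 40, 5, 1e-3
B = rng.standard_normal((n, n)) / np.sqrt(n)
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = np.diag(np.logspace(0, -4, r))
U, S, V = ksl_step(lambda A: B @ A, U, S, V, h)
print(U.shape, S.shape, V.shape)  # (40, 5) (5, 5) (40, 5)
```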

The growth pattern of an invasive cell-to-cell propagation (the successive coronas) on the square grid is a tilted square; on the triangular and hexagonal grids, it is a hexagon. Remarkably, on the aperiodic structure of Penrose tilings, this cell-to-cell diffusion process tends, in the limit, to a regular decagon. In this article we generalize this result to any regular multigrid dual tiling by defining the characteristic polygon of a multigrid and of its dual tiling. Exploiting this elegant duality allows us to fully understand why such a surprising phenomenon, the emergence of highly regular polygonal shapes from aperiodic underlying structures, occurs.
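For context, a regular multigrid in the sense of de Bruijn is the superposition of $n$ families of evenly spaced parallel lines,
\[
G_j \;=\; \bigl\{\, x \in \mathbb{R}^2 \;:\; \langle x, v_j \rangle + \gamma_j \in \mathbb{Z} \,\bigr\}, \qquad v_j = \Bigl(\cos\tfrac{2\pi j}{n},\, \sin\tfrac{2\pi j}{n}\Bigr), \quad j = 0, \dots, n-1,
\]
with offsets $\gamma_j \in \mathbb{R}$; the dual tiling places a rhombus at each intersection of two grid lines, and for $n = 5$ this construction recovers the Penrose tilings. (This is the standard definition; the characteristic polygon itself is the notion introduced in the article.)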

In this work we consider the Allen--Cahn equation, a prototypical model problem in nonlinear dynamics that exhibits bifurcations corresponding to variations of a deterministic bifurcation parameter. Going beyond the state of the art, we introduce a random coefficient function in the linear reaction part of the equation, thereby accounting for random, spatially-heterogeneous effects. Importantly, we assume a spatially constant, deterministic mean value of the random coefficient. We show that this mean value is in fact a bifurcation parameter in the Allen--Cahn equation with random coefficients. Moreover, we show that the bifurcation points and bifurcation curves become random objects. We consider two distinct modelling situations: (i) for a spatially homogeneous coefficient we derive analytical expressions for the distribution of the bifurcation points and show that the bifurcation curves are random shifts of a fixed reference curve; (ii) for a spatially heterogeneous coefficient we employ a generalized polynomial chaos expansion to approximate the statistical properties of the random bifurcation points and bifurcation curves. We present numerical examples in 1D physical space, where we combine the popular software package Continuation Core and Toolboxes (CoCo) for numerical continuation and the Sparse Grids Matlab Kit for the polynomial chaos expansion. Our exposition addresses both dynamical systems and uncertainty quantification, highlighting how analytical and numerical tools from both areas can be combined efficiently for the challenging uncertainty quantification analysis of bifurcations in random differential equations.
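A toy version of modelling situation (i) fits in a few lines of Python: if the random coefficient enters as a spatially constant shift $\xi$ in the linear reaction term, the bifurcation points off the trivial branch are deterministic eigenvalues shifted by $\xi$, so their distribution can be sampled directly. The specific equation, domain, and distribution below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy setting: u_t = u_xx + (lam + xi) * u - u**3 on (0, pi) with Dirichlet BCs.
# The trivial branch u = 0 bifurcates where lam + xi hits an eigenvalue k**2 of
# -d^2/dx^2, i.e. at the random points lam_k(xi) = k**2 - xi: each bifurcation
# point is a random shift of the deterministic reference value k**2.

rng = np.random.default_rng(42)
xi = rng.normal(loc=0.0, scale=0.3, size=100_000)  # samples of the random coefficient

for k in (1, 2, 3):
    lam_k = k**2 - xi
    print(f"k={k}: mean bifurcation point {lam_k.mean():.3f}, "
          f"std {lam_k.std():.3f} (reference {k**2})")
```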

This paper shows how to use the shooting method, a classical numerical algorithm for solving boundary value problems, to compute the Riemannian distance on the Stiefel manifold $ \mathrm{St}(n,p) $, the set of $ n \times p $ matrices with orthonormal columns. The proposed method is a shooting method in the sense of the classical shooting methods for boundary value problems (see, e.g., Stoer and Bulirsch, 1991); its main feature is an approximate formula for the Fr\'{e}chet derivative of the geodesic involved in the shooting procedure. Numerical experiments demonstrate the algorithm's accuracy and performance. Comparisons with existing state-of-the-art algorithms for the same problem show that our method is competitive and even outperforms several of them in many cases.
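To fix ideas, here is a naive single-shooting sketch in Python for the same problem: it uses the closed-form geodesic of the canonical metric (Edelman, Arias and Smith, 1998) and a generic nonlinear least-squares solver with finite-difference derivatives in place of the paper's approximate Fréchet derivative formula. It is a baseline illustration under these stated assumptions, not the proposed algorithm.

```python
import numpy as np
from scipy.linalg import expm, qr
from scipy.optimize import least_squares

def geodesic_endpoint(X, H, t=1.0):
    """Endpoint at time t of the canonical-metric Stiefel geodesic from X with
    initial velocity H (Edelman, Arias & Smith 1998)."""
    n, p = X.shape
    A = X.T @ H                     # skew-symmetric for a tangent vector H
    Q, R = qr(H - X @ A, mode='economic')
    M = np.block([[A, -R.T], [R, np.zeros((p, p))]])
    E = expm(t * M)[:, :p]
    return X @ E[:p] + Q @ E[p:]

def project_tangent(X, H):
    """Project an arbitrary matrix onto the tangent space at X (X^T H skew)."""
    A = X.T @ H
    return H - X @ (A + A.T) / 2

def shoot(X, Y):
    """Naive single shooting: find a tangent H at X whose geodesic hits Y at t = 1."""
    n, p = X.shape
    def residual(h):
        H = project_tangent(X, h.reshape(n, p))
        return (geodesic_endpoint(X, H) - Y).ravel()
    h0 = project_tangent(X, Y - X).ravel()   # crude initial guess
    sol = least_squares(residual, h0)        # finite-difference Jacobian
    return project_tangent(X, sol.x.reshape(n, p))

# Check on St(6, 2): recover the endpoint of a geodesic with a known velocity.
rng = np.random.default_rng(3)
X, _ = np.linalg.qr(rng.standard_normal((6, 2)))
H_true = project_tangent(X, 0.3 * rng.standard_normal((6, 2)))
Y = geodesic_endpoint(X, H_true)
H = shoot(X, Y)
print(np.linalg.norm(geodesic_endpoint(X, H) - Y))  # ~0 if the shot converged
```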

Mediation analysis aims to decipher the underlying causal mechanisms between an exposure, an outcome, and intermediate variables called mediators. Initially developed for a fixed-time mediator and outcome, it has been extended to the framework of longitudinal data by discretizing the assessment times of the mediator and outcome. Yet, the processes at play in longitudinal studies are usually defined in continuous time and measured at irregular, subject-specific visits. This is the case in dementia research, where cerebral and cognitive changes measured at planned visits in cohorts are of interest. We thus propose a methodology to estimate the causal mechanisms between a time-fixed exposure ($X$), a mediator process ($\mathcal{M}_t$) and an outcome process ($\mathcal{Y}_t$), both measured repeatedly over time, in the presence of a time-dependent confounding process ($\mathcal{L}_t$). We consider three types of causal estimands, natural effects, path-specific effects and randomized interventional analogues to natural effects, and provide identifiability assumptions. We employ a dynamic multivariate model based on differential equations for their estimation. The performance of the methods is explored in simulations, and we illustrate the methodology in two real-world examples motivated by the 3C cerebral aging study, assessing: (1) the effect of educational level on functional dependency through depressive symptomatology and cognitive functioning, and (2) the effect of a genetic factor on cognitive functioning, potentially mediated by vascular brain lesions and confounded by neurodegeneration.
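For readers new to the causal vocabulary, these estimands generalize the classical time-fixed decomposition of a total effect into natural direct and indirect components; in the standard (time-fixed) notation, with $Y(x, m)$ and $M(x)$ the potential outcome and mediator,
\[
\underbrace{\mathbb{E}\bigl[Y(1, M(1))\bigr] - \mathbb{E}\bigl[Y(0, M(0))\bigr]}_{\text{total effect}}
= \underbrace{\mathbb{E}\bigl[Y(1, M(0))\bigr] - \mathbb{E}\bigl[Y(0, M(0))\bigr]}_{\text{natural direct effect}}
+ \underbrace{\mathbb{E}\bigl[Y(1, M(1))\bigr] - \mathbb{E}\bigl[Y(1, M(0))\bigr]}_{\text{natural indirect effect}}.
\]
The paper's contribution is to define and identify analogues of such quantities when $\mathcal{M}_t$ and $\mathcal{Y}_t$ are continuous-time processes.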

This work introduces a new general approach for the numerical analysis of stable equilibria of second-order mean field game systems in cases where the uniqueness of solutions may fail. For the sake of simplicity, we focus on a simple stationary case. We propose an abstract framework to study these solutions by reformulating the mean field game system as an abstract equation in a Banach space. In this context, stable equilibria turn out to be regular solutions to this equation, meaning that the linearized system is well-posed. We provide three applications of this property: we study the sensitivity analysis of stable solutions, establish error estimates for their finite element approximations, and prove the local convergence of Newton's method in infinite dimensions.
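Concretely, writing the stationary system as an abstract equation $F(u) = 0$ between Banach spaces, a stable equilibrium $\bar{u}$ is a regular solution: $DF(\bar{u})$ is an isomorphism. Newton's method then takes its usual form,
\[
u_{k+1} \;=\; u_k \;-\; DF(u_k)^{-1} F(u_k),
\]
and its local convergence near $\bar{u}$ is exactly what the well-posedness of the linearized system buys. (The display is the generic Newton iteration, not notation copied from the paper.)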

Functional data analysis is an important research field in statistics which treats data as random functions drawn from some infinite-dimensional functional space, and functional principal component analysis (FPCA) based on eigen-decomposition plays a central role in data reduction and representation. After nearly three decades of research, a key problem remains unsolved, namely, the perturbation analysis of the covariance operator for a diverging number of eigencomponents obtained from noisy and discretely observed data. This problem is fundamental for studying models and methods based on FPCA, yet there has been no substantial progress since the result of Hall, M\"uller and Wang (2006) for a fixed number of eigenfunction estimates. In this work, we aim to establish a unified theory for this problem, obtaining upper bounds for eigenfunctions with diverging indices in both the $\mathcal{L}^2$ and supremum norms, and deriving the asymptotic distributions of eigenvalues for a wide range of sampling schemes. Our results reveal when the $\mathcal{L}^{2}$ bound of eigenfunction estimates with diverging indices is minimax optimal, as if the curves were fully observed, and describe the transition of convergence rates from nonparametric to parametric regimes in connection with sparse or dense sampling. We also develop a double truncation technique to handle the uniform convergence of the estimated covariance and eigenfunctions. The technical arguments in this work are useful for handling perturbation series with noisy and discretely observed functional data and can be applied to models involving inverse problems that use FPCA as regularization, such as functional linear regression.
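The estimation pipeline that this perturbation analysis targets can be sketched in a few lines of Python: simulate noisy, discretely observed curves on a common grid, form the empirical covariance, and eigendecompose it. The true eigenfunctions, noise level, and dense common grid below are illustrative assumptions; the paper's theory quantifies how such eigenfunction and eigenvalue estimates behave as the number of retained components diverges.

```python
import numpy as np

rng = np.random.default_rng(7)
n_curves, n_grid = 200, 101
t = np.linspace(0, 1, n_grid)
dt = t[1] - t[0]

# True model: two Fourier eigenfunctions with eigenvalues 4 and 1, plus noise.
phi = np.stack([np.sqrt(2) * np.sin(np.pi * t), np.sqrt(2) * np.sin(2 * np.pi * t)])
scores = rng.standard_normal((n_curves, 2)) * np.sqrt([4.0, 1.0])
X = scores @ phi + 0.5 * rng.standard_normal((n_curves, n_grid))

# Empirical covariance on the grid; its diagonal is inflated by the
# measurement-error variance, which practical estimators smooth away.
C = np.cov(X, rowvar=False)

# Eigendecomposition, scaled by the grid spacing so that the eigenvalues
# approximate those of the covariance (integral) operator.
evals, evecs = np.linalg.eigh(C * dt)
evals, evecs = evals[::-1], evecs[:, ::-1] / np.sqrt(dt)  # descending, L2-normalized

print("top eigenvalues:", np.round(evals[:3], 2))  # roughly 4, 1, then a noise floor
print("alignment with true phi_1:",
      round(abs(np.sum(evecs[:, 0] * phi[0]) * dt), 3))   # near 1 if well estimated
```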
