
We propose a definition of persistent Stiefel-Whitney classes of vector bundle filtrations. It relies on seeing vector bundles as subsets of some Euclidean spaces. The usual \v{C}ech filtration of such a subset can be endowed with a vector bundle structure, which we call a \v{C}ech bundle filtration. We show that this construction is stable and consistent. When the dataset is a finite sample of a line bundle, we implement an effective algorithm to compute its persistent Stiefel-Whitney classes. In order to use simplicial approximation techniques in practice, we develop a notion of weak simplicial approximation. As a theoretical example, we give an in-depth study of the normal bundle of the circle, which reduces to understanding the persistent cohomology of the $(1,2)$ torus knot. We illustrate our method on several datasets inspired by image analysis.
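As a rough illustration of the starting point (not the authors' implementation), the \v{C}ech filtration of a finite sample can be approximated by the 1-skeleton of the Vietoris-Rips complex using only the standard library; the nesting of complexes as the scale grows is what makes it a filtration:

```python
from itertools import combinations
from math import dist, cos, sin, pi

def rips_edges(points, t):
    """Edges of the Vietoris-Rips complex at scale t: pairs of points
    whose radius-t balls overlap, i.e. whose distance is <= 2*t."""
    return [(i, j) for i, j in combinations(range(len(points)), 2)
            if dist(points[i], points[j]) <= 2 * t]

# A finite sample of the circle, the base space of the normal-bundle example.
sample = [(cos(2 * pi * k / 8), sin(2 * pi * k / 8)) for k in range(8)]

# Growing the scale only adds simplices, never removes them: a filtration.
assert rips_edges(sample, 0.2) == []
assert len(rips_edges(sample, 0.5)) == 8   # exactly the 8 nearest-neighbour edges
assert set(rips_edges(sample, 0.2)) <= set(rips_edges(sample, 0.5))
```

The paper's construction additionally equips each complex in the filtration with vector bundle data; the sketch above only shows the underlying simplicial filtration.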


In the graphical calculus of planar string diagrams, equality is generated by exchange moves, which swap the heights of adjacent vertices. We show that left- and right-handed exchanges each give strongly normalizing rewrite strategies for connected string diagrams. We use this result to give a linear-time solution to the equivalence problem in the connected case, and a quadratic solution in the general case. We also give a stronger proof of the Joyal-Street coherence theorem, settling Selinger's conjecture on recumbent isotopy.
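A toy model (not the paper's formalism): represent a diagram as a vertical list of vertices, each touching a set of wires, where adjacent vertices on disjoint wires may exchange heights. Always moving the leftmost vertex downward removes one inversion per step, which sketches why a handed exchange strategy is strongly normalizing:

```python
def normalize(diagram):
    """One-handed exchange strategy (a sketch): repeatedly swap adjacent
    vertices acting on disjoint wire sets when the later one lies strictly
    to the left, moving it earlier.  Each swap removes one inversion, so
    the rewrite terminates in a unique normal form for this toy model."""
    d = list(diagram)
    changed = True
    while changed:
        changed = False
        for i in range(len(d) - 1):
            a, b = d[i], d[i + 1]
            if not (set(a) & set(b)) and min(b) < min(a):
                d[i], d[i + 1] = b, a
                changed = True
    return d

# Two height-interleavings of the same pair of vertices reach one normal form.
assert normalize([{2, 3}, {0, 1}]) == normalize([{0, 1}, {2, 3}])
```

Deciding equivalence in this toy model reduces to comparing normal forms, mirroring how the paper's normalization result yields its equivalence algorithm.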

Two words are $k$-binomially equivalent if each word of length at most $k$ occurs as a subword, or scattered factor, the same number of times in both words. The $k$-binomial complexity of an infinite word maps the natural number $n$ to the number of $k$-binomial equivalence classes represented by its factors of length $n$. Inspired by questions raised by Lejeune, we study the relationships between the $k$- and $(k+1)$-binomial complexities, as well as the link with the usual factor complexity. We show that pure morphic words obtained by iterating a Parikh-collinear morphism, i.e., a morphism mapping all words to words with bounded abelian complexity, have bounded $k$-binomial complexity. In particular, we study the properties of the image of a Sturmian word under an iterate of the Thue-Morse morphism.
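The number of scattered-factor occurrences is computable by a standard dynamic program, which gives a direct (if brute-force) check of $k$-binomial equivalence on finite words; a sketch:

```python
from itertools import product

def binomial(word, sub):
    """Number of occurrences of `sub` as a scattered factor (subword)
    of `word`, via the standard dynamic program."""
    counts = [1] + [0] * len(sub)      # counts[j] = occurrences of sub[:j]
    for ch in word:
        for j in range(len(sub), 0, -1):
            if sub[j - 1] == ch:
                counts[j] += counts[j - 1]
    return counts[-1]

def k_binomially_equivalent(u, v, k):
    """u and v are k-binomially equivalent iff every word of length <= k
    occurs the same number of times as a subword of each."""
    alphabet = sorted(set(u) | set(v))
    return all(binomial(u, "".join(w)) == binomial(v, "".join(w))
               for n in range(1, k + 1)
               for w in product(alphabet, repeat=n))

assert binomial("aabb", "ab") == 4                     # 2 a's before each b
assert k_binomially_equivalent("aabb", "abab", 1)      # abelian equivalent
assert not k_binomially_equivalent("aabb", "abab", 2)  # but not 2-binomially
```

The check is exponential in $k$; the paper's interest is in the asymptotic complexity function, not in this equivalence test itself.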

This paper devises a novel lowest-order conforming virtual element method (VEM) for planar linear elasticity with the pure displacement/traction boundary condition. The main trick is to view a generic polygon $K$ as a new one $\widetilde{K}$ with additional vertices consisting of interior points on edges of $K$, so that the discrete admissible space is taken as the $V_1$ type virtual element space related to the partition $\{\widetilde{K}\}$ instead of $\{K\}$. The method is shown to be uniformly convergent with the optimal rates both in $H^1$ and $L^2$ norms with respect to the Lam\'{e} constant $\lambda$. Numerical tests are presented to illustrate the good performance of the proposed VEM and confirm the theoretical results.

We present a four-field Virtual Element discretization for the time-dependent resistive Magnetohydrodynamics equations in three space dimensions, focusing on the semi-discrete formulation. The proposed method employs general polyhedral meshes and guarantees velocity and magnetic fields that are divergence free up to machine precision. We provide a full convergence analysis under suitable regularity assumptions, which is validated by some numerical tests.

We establish how the coefficients of a sparse polynomial system influence the sum (or the trace) of its zeros. As an application, we develop numerical tests for verifying whether a set of solutions to a sparse system is complete. These algorithms extend the classical trace test in numerical algebraic geometry. Our results rely both on an analysis of the structure of sparse resultants and on an extension of Esterov's results on monodromy groups of sparse systems.
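A univariate toy analogue of the trace test (Vieta's formulas, not the sparse-resultant machinery of the paper): translate the system linearly in a parameter $t$ and check that the sum of the tracked solutions is affine-linear in $t$; a proper subset of the solutions fails the test:

```python
from math import sqrt

def roots(b, c):
    """Real roots of x^2 + b*x + c (assumes a positive discriminant)."""
    d = sqrt(b * b - 4 * c)
    return ((-b - d) / 2, (-b + d) / 2)

# The pencil x^2 - 3x + (2 + t): the system translated linearly in t.
ts = [0.0, 0.1, 0.2]
full = [sum(roots(-3, 2 + t)) for t in ts]   # trace of ALL solutions
part = [roots(-3, 2 + t)[0] for t in ts]     # trace of one solution only

second_diff = lambda v: v[0] - 2 * v[1] + v[2]
# Complete solution set: the trace is affine-linear in t (by Vieta, it is
# constantly 3 here), so the second difference vanishes.
assert abs(second_diff(full)) < 1e-9
# Incomplete set: the partial trace is nonlinear in t, failing the test.
assert abs(second_diff(part)) > 1e-4
```

The same linearity-versus-nonlinearity dichotomy is what the paper's tests exploit for genuinely sparse multivariate systems.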

We present an algorithm for computing the barcode of the image of a morphism in persistent homology induced by an inclusion of filtered finite-dimensional chain complexes. The algorithm makes use of the clearing optimization and can be applied to inclusion-induced maps in persistent absolute homology and persistent relative cohomology for filtrations of pairs of simplicial complexes. It forms the basis for our implementation for Vietoris-Rips complexes in the framework of the software Ripser.
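For orientation, the textbook persistence column reduction over $\mathbb{Z}/2$ that such algorithms build on can be sketched as follows (clearing and the image computation are omitted; this is only the baseline):

```python
def reduce_boundary(columns):
    """Standard persistence column reduction over Z/2.
    columns[j] = set of row indices in the boundary of simplex j, with
    simplices indexed in filtration order.  Returns the (birth, death)
    pairs and the essential (unpaired) simplices."""
    reduced = [set(c) for c in columns]
    low_of = {}                              # pivot row -> column index
    for j, col in enumerate(reduced):
        while col and max(col) in low_of:    # pivot collision: add columns
            col ^= reduced[low_of[max(col)]]
        if col:
            low_of[max(col)] = j
    pairs = sorted((r, c) for r, c in low_of.items())
    paired = {i for p in pairs for i in p}
    essential = [j for j, c in enumerate(reduced) if not c and j not in paired]
    return pairs, essential

# Filtration of the boundary of a triangle:
# 0,1,2 are vertices; 3={0,1}, 4={0,2}, 5={1,2} are edges.
cols = [set(), set(), set(), {0, 1}, {0, 2}, {1, 2}]
pairs, essential = reduce_boundary(cols)
assert pairs == [(1, 3), (2, 4)]   # vertices 1,2 die when their edges enter
assert essential == [0, 5]         # one H0 class and one H1 cycle survive
```

Clearing skips columns that are guaranteed to reduce to zero; the paper's contribution is extending such optimizations to inclusion-induced image barcodes.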

We consider the problem of minimizing a differentiable function with locally Lipschitz continuous gradient on the real determinantal variety, and present a first-order algorithm designed to find stationary points of that problem. This algorithm applies steps of steepest descent with backtracking line search on the variety, as proposed by Schneider and Uschmajew (2015), while taking the numerical rank into account to perform suitable rank reductions. We prove that this algorithm produces sequences of iterates whose accumulation points are stationary, and therefore avoids the so-called apocalypses described by Levin, Kileel, and Boumal (2021).
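The backtracking (Armijo) line search used in such descent steps can be sketched in one variable; this is a generic illustration, not the variety-aware algorithm of the paper:

```python
def armijo_step(f, grad, x, alpha0=1.0, beta=0.5, sigma=1e-4):
    """One steepest-descent step with backtracking line search: halve the
    step length until the Armijo sufficient-decrease condition
    f(x - a*g) <= f(x) - sigma * a * |g|^2 holds."""
    g = grad(x)
    alpha = alpha0
    while f(x - alpha * g) > f(x) - sigma * alpha * g * g:
        alpha *= beta
    return x - alpha * g

f = lambda x: (x - 2.0) ** 2       # toy smooth objective
grad = lambda x: 2.0 * (x - 2.0)

x = 0.0
for _ in range(50):
    x = armijo_step(f, grad, x)
assert abs(x - 2.0) < 1e-6         # converges to the stationary point
```

On the determinantal variety the step is additionally projected back to the variety, and the rank-reduction safeguard is what rules out convergence to non-stationary limit points.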

In the group testing problem, the goal is to identify a subset of defective items within a larger set of items based on tests whose outcomes indicate whether any defective item is present. This problem is relevant in areas such as medical testing, DNA sequencing, and communications. In this paper, we study a doubly-regular design in which the number of tests-per-item and the number of items-per-test are fixed. We analyze the performance of this test design alongside the Definite Defectives (DD) decoding algorithm in several settings, namely, (i) the sub-linear regime $k=o(n)$ with exact recovery, (ii) the linear regime $k=\Theta(n)$ with approximate recovery, and (iii) the size-constrained setting, where the number of items per test is constrained. Under setting (i), we show that our design, together with the DD algorithm, matches an existing achievability result for the DD algorithm with the near-constant tests-per-item design, which is known to be asymptotically optimal in broad scaling regimes. Under setting (ii), we provide novel approximate recovery bounds that complement a hardness result regarding exact recovery. Lastly, under setting (iii), we improve on the best known upper and lower bounds in scaling regimes where the maximum allowed test size grows with the total number of items.
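The DD decoder itself is simple to state; a sketch on a toy instance (the test design and defective set below are chosen purely for illustration):

```python
def definite_defectives(tests, outcomes, n):
    """Definite Defectives (DD) decoding.
    Step 1: every item in a negative test is definitely non-defective.
    Step 2: declare an item defective iff some positive test contains no
    other possibly-defective item (so it alone can explain the outcome)."""
    nondef = set()
    for t, positive in zip(tests, outcomes):
        if not positive:
            nondef |= set(t)
    possible = set(range(n)) - nondef
    defective = set()
    for t, positive in zip(tests, outcomes):
        if positive:
            rest = set(t) & possible
            if len(rest) == 1:
                defective |= rest
    return defective

# 5 items, items 1 and 4 defective; a test is positive iff it hits a defective.
tests = [[0, 1], [1, 2], [2, 3], [3, 4], [0, 4]]
truth = {1, 4}
outcomes = [bool(set(t) & truth) for t in tests]
assert definite_defectives(tests, outcomes, 5) == {1, 4}
```

DD never produces false positives by construction; the paper's analysis quantifies when, under the doubly-regular design, it also recovers all (or almost all) defectives.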

We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $\ell_1/\ell_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order (or topological order) than separate estimations. Moreover, the joint estimator is able to recover non-identifiable DAGs by estimating them together with some identifiable DAGs. Lastly, our analysis shows the consistency of union support recovery of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer is the same as the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
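The union-sparsity effect of an $\ell_1/\ell_2$ penalty comes from its proximal operator, which shrinks each across-task group of coefficients in Euclidean norm and zeroes small groups out entirely; a generic sketch (not the paper's iterative algorithm):

```python
from math import sqrt

def group_prox(groups, lam):
    """Proximal step for the l1/l2 (group lasso) penalty: each group is
    scaled by max(0, 1 - lam/||g||), so a group with small norm is set to
    zero in ALL tasks at once, inducing a shared union support."""
    out = []
    for g in groups:
        norm = sqrt(sum(x * x for x in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out.append([scale * x for x in g])
    return out

# One coefficient per task, grouped across K = 2 tasks per candidate edge.
shrunk = group_prox([[3.0, 4.0], [0.1, 0.1]], lam=1.0)
assert all(abs(a - b) < 1e-12 for a, b in zip(shrunk[0], [2.4, 3.2]))
assert shrunk[1] == [0.0, 0.0]     # weak edge removed in both tasks jointly
```

This joint thresholding across tasks is why leveraging related DAGs can improve support recovery over $K$ separate lasso-type estimations.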

Reward is the driving force for reinforcement-learning agents. This paper is dedicated to understanding the expressivity of reward as a way to capture tasks that we would want an agent to perform. We frame this study around three new abstract notions of "task" that might be desirable: (1) a set of acceptable behaviors, (2) a partial ordering over behaviors, or (3) a partial ordering over trajectories. Our main results prove that while reward can express many of these tasks, there exist instances of each task type that no Markov reward function can capture. We then provide a set of polynomial-time algorithms that construct a Markov reward function that allows an agent to optimize tasks of each of these three types, and correctly determine when no such reward function exists. We conclude with an empirical study that corroborates and illustrates our theoretical findings.
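A degenerate illustration of inexpressibility, far simpler than the paper's constructions: in a one-state, undiscounted setting, the return of a trajectory is an order-blind sum of per-action rewards, so no Markov state-action reward can strictly prefer the trajectory "ab" to "ba":

```python
from itertools import product

def ret(traj, r):
    """Undiscounted return of an action trajectory in a one-state MDP
    under a Markov (state-action) reward function r."""
    return sum(r[a] for a in traj)

# Task (a partial ordering over trajectories): "ab" strictly better than "ba".
# Since the return is a sum, r(a)+r(b) == r(b)+r(a) for EVERY reward, so a
# coarse search over candidate rewards finds no witness.
grid = [x / 2 for x in range(-4, 5)]
expressible = any(
    ret("ab", {"a": ra, "b": rb}) > ret("ba", {"a": ra, "b": rb})
    for ra, rb in product(grid, repeat=2)
)
assert not expressible
```

With discounting or state augmentation the two returns separate, which is the flavor of question the paper's polynomial-time existence checks settle in general.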
