
Numerically solving multi-marginal optimal transport (MMOT) problems is computationally prohibitive, even for moderate-scale instances involving $l\ge4$ marginals with support sizes of $N\ge1000$. The cost in MMOT is represented as a tensor with $N^l$ elements, so even accessing each element once incurs a significant computational burden. In fact, many algorithms require direct computation of tensor-vector products, leading to a computational complexity of $O(N^l)$ or higher. In this paper, inspired by our previous work [$Comm. \ Math. \ Sci.$, 20 (2022), pp. 2053 - 2057], we observe that the costly tensor-vector products in the Sinkhorn algorithm can be computed recursively by separating summations and applying dynamic programming. Based on this idea, we propose a fast tensor-vector product algorithm for the MMOT problem with $L^1$ cost, reducing the computational cost of the entropy-regularized solution to $O(N)$. Numerical experiments confirm the efficiency of this method, which can be several orders of magnitude faster than the original Sinkhorn algorithm.
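
Below is a minimal illustrative sketch of the separated-summation idea for a chain-structured cost $c(x_1,\dots,x_l)=\sum_i |x_{i+1}-x_i|$, where the Gibbs kernel factorizes into pairwise kernels and the tensor-vector products inside one Sinkhorn sweep reduce to matrix-vector products computed by dynamic programming in $O(lN^2)$. The paper's actual recursion, cost structure, and $O(N)$ scaling are not reproduced here; all names and parameters are illustrative assumptions.

```python
import numpy as np

def sinkhorn_chain(marginals, grid, eps=0.1, n_iter=100):
    """Sinkhorn iterations for an MMOT problem with chain-structured cost.

    marginals: list of l probability vectors of length N
    grid: 1-D numpy array of the N shared support points
    """
    l, N = len(marginals), len(grid)
    # pairwise Gibbs kernel shared by consecutive marginals
    K = np.exp(-np.abs(grid[:, None] - grid[None, :]) / eps)  # (N, N)
    u = [np.ones(N) for _ in range(l)]
    for _ in range(n_iter):
        for k in range(l):
            # forward partial product: message flowing from marginals 0..k-1
            phi = np.ones(N)
            for i in range(k):
                phi = K.T @ (u[i] * phi)
            # backward partial product: message flowing from marginals l-1..k+1
            psi = np.ones(N)
            for i in range(l - 1, k, -1):
                psi = K @ (u[i] * psi)
            # scaling update for the k-th marginal
            u[k] = marginals[k] / (phi * psi)
    return u
```

Each update touches only the pairwise kernel $K$, never the full $N^l$ cost tensor; a careful implementation would additionally cache the partial products $\phi$ and $\psi$ across $k$ instead of recomputing them.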

Related content

In this work we propose a discretization of the second boundary condition for the Monge-Ampère equation arising in geometric optics and optimal transport. The discretization we propose is the natural generalization of the popular Oliker-Prussner method proposed in 1988. For the discretization of the differential operator, we use a discrete analogue of the subdifferential. Existence, uniqueness and stability of the solutions to the discrete problem are established, and convergence results to the continuous problem are given.
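
As a one-dimensional illustration of the discrete subdifferential (the paper treats the genuinely multi-dimensional problem with the second boundary condition, which this sketch does not), the measure of the subdifferential of a convex piecewise-linear function at an interior node is the jump in slopes there, and the discrete Monge-Ampère equation asks this jump to match the mass of the right-hand side near that node. The function names and the midpoint dual cells below are illustrative assumptions.

```python
import numpy as np

def subdifferential_measure(x, u):
    """Length of the subdifferential of the piecewise-linear interpolant
    of (x, u) at each interior node, i.e. the jump in slopes."""
    slopes = np.diff(u) / np.diff(x)   # slope on each interval
    return np.diff(slopes)             # slope jump at interior nodes

def monge_ampere_residual(x, u, f):
    """Residual of the discrete equation |∂u(x_i)| = ∫ f over the dual cell
    around x_i, using midpoint dual cells at interior nodes."""
    cell = 0.5 * (x[2:] - x[:-2])      # dual cell widths at interior nodes
    return subdifferential_measure(x, u) - f(x[1:-1]) * cell
```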

In relational verification, judicious alignment of computational steps facilitates proofs of relations between programs using simple relational assertions. Relational Hoare logics (RHL) provide compositional rules that embody various alignments of executions. Seemingly more flexible alignments can be expressed in terms of product automata based on program transition relations. A single degenerate alignment rule (self-composition), atop a complete Hoare logic, yields an RHL for $\forall\forall$ properties that is complete in the ordinary logical sense (Cook'78). The notion of alignment completeness was previously proposed as a more satisfactory measure, and some rules were shown to be alignment complete with respect to a few ad hoc forms of alignment automata. This paper proves alignment completeness with respect to a general class of $\forall\forall$ alignment automata, for an RHL comprising standard rules together with a rule of semantics-preserving rewrites based on Kleene algebra with tests. A new logic for $\forall\exists$ properties is introduced and shown to be alignment complete. The $\forall\forall$ and $\forall\exists$ automata are shown to be semantically complete. Thus the logics are both complete in the ordinary sense. Recent work by D'Osualdo et al. highlights the importance of completeness relative to assumptions (which we term entailment completeness), and presents $\forall\forall$ examples seemingly beyond the scope of RHLs. Additional rules enable these examples to be proved in our RHL, shedding light on the open problem of entailment completeness.
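
The degenerate alignment mentioned above, self-composition, can be pictured concretely: two renamed copies of the program are run sequentially, turning a $\forall\forall$ (2-safety) property into an ordinary assertion about a single run. The toy program and the monotonicity property below are illustrative assumptions, not examples from the paper.

```python
def prog(x):
    # the program of which two copies are composed
    return 2 * x + 1

def self_composed(x1, x2):
    y1 = prog(x1)   # first copy runs to completion...
    y2 = prog(x2)   # ...then the second copy runs; no step-by-step alignment
    # relational postcondition: precondition x1 <= x2 implies y1 <= y2
    assert not (x1 <= x2) or (y1 <= y2)
    return y1, y2
```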

We consider a non-linear Bayesian data assimilation model for the periodic two-dimensional Navier-Stokes equations with initial condition modelled by a Gaussian process prior. We show that if the system is updated with sufficiently many discrete noisy measurements of the velocity field, then the posterior distribution eventually concentrates near the ground truth solution of the time evolution equation, and in particular that the initial condition is recovered consistently by the posterior mean vector field. We further show that the convergence rate cannot, in general, be faster than inverse-logarithmic in the sample size, and we describe specific conditions on the initial condition under which faster rates are possible. In the proofs we provide an explicit quantitative estimate for backward uniqueness of solutions of the two-dimensional Navier-Stokes equations.

The "Harmony Lemma", as formulated by Sangiorgi & Walker, establishes the equivalence between the labelled transition semantics and the reduction semantics in the $\pi$-calculus. Despite being a widely known and accepted result for the standard $\pi$-calculus, this assertion has never been rigorously proven, formally or informally. Hence, its validity may not be immediately apparent when considering extensions of the $\pi$-calculus. Contributing to the second challenge of the Concurrent Calculi Formalization Benchmark -- a set of challenges tackling the main issues related to the mechanization of concurrent systems -- we present a formalization of this result for the fragment of the $\pi$-calculus examined in the Benchmark. Our formalization is implemented in Beluga and draws inspiration from the HOAS formalization of the LTS semantics popularized by Honsell et al. In passing, we introduce a couple of useful encoding techniques for handling telescopes and lexicographic induction.

We consider a prototypical problem of Bayesian inference for a structured spiked model: a low-rank signal is corrupted by additive noise. While both information-theoretic and algorithmic limits are well understood when the noise is a Gaussian Wigner matrix, the more realistic case of structured noise still proves to be challenging. To capture the structure while maintaining mathematical tractability, a line of work has focused on rotationally invariant noise. However, existing studies either provide sub-optimal algorithms or are limited to special cases of noise ensembles. In this paper, using tools from statistical physics (replica method) and random matrix theory (generalized spherical integrals) we establish the first characterization of the information-theoretic limits for a noise matrix drawn from a general trace ensemble. Remarkably, our analysis unveils the asymptotic equivalence between the rotationally invariant model and a surrogate Gaussian one. Finally, we show how to saturate the predicted statistical limits using an efficient algorithm inspired by the theory of adaptive Thouless-Anderson-Palmer (TAP) equations.

We present a novel data-driven strategy for choosing the hyperparameter $k$ in the $k$-NN regression estimator without using any hold-out data. We treat the problem of choosing the hyperparameter as an iterative procedure (over $k$) and propose an easily implemented strategy based on the idea of early stopping and the minimum discrepancy principle. This model selection strategy is proven to be minimax-optimal over some smoothness function classes, for instance, the class of Lipschitz functions on a bounded domain. The novel method often improves statistical performance on artificial and real-world data sets in comparison to other model selection strategies, such as the hold-out method, 5-fold cross-validation, and the AIC criterion. The novelty of the strategy comes from reducing the computational time of the model selection procedure while preserving the statistical (minimax) optimality of the resulting estimator. More precisely, given a sample of size $n$, if one has to choose $k$ among $\left\{ 1, \ldots, n \right\}$, and $\left\{ f^1, \ldots, f^n \right\}$ are the corresponding estimators of the regression function, the minimum discrepancy principle requires computing only a fraction of these estimators, which is not the case for generalized cross-validation, Akaike's AIC criterion, or the Lepskii principle.
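
A minimal sketch of a discrepancy-style stopping rule for $k$-NN regression is given below, assuming a known noise level $\sigma$, a one-dimensional design, and a scan from the most to the least smoothed estimator; the paper's exact minimum discrepancy criterion, scan order, and constants are not reproduced.

```python
import numpy as np

def choose_k_discrepancy(X, y, sigma, tau=1.0):
    """Pick k by a discrepancy rule: stop at the first k (scanning from most
    to least smoothing) whose empirical residual reaches the noise level."""
    n = len(y)
    d = np.abs(X[:, None] - X[None, :])     # pairwise distances (1-D design)
    order = np.argsort(d, axis=1)           # neighbors of each point, nearest first
    for k in range(n, 0, -1):               # from most smoothed (k = n) downwards
        f_k = y[order[:, :k]].mean(axis=1)  # k-NN fit at the design points
        if np.mean((y - f_k) ** 2) <= tau * sigma ** 2:
            return k                        # residual has dropped to the noise level
    return 1
```

Because the scan stops at the first $k$ whose residual reaches the noise level, only a fraction of the $n$ candidate estimators is ever computed, which is the computational saving the abstract refers to.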

We prove the following type of discrete entropy monotonicity for sums of isotropic, log-concave, independent and identically distributed random vectors $X_1,\dots,X_{n+1}$ on $\mathbb{Z}^d$: $$ H(X_1+\cdots+X_{n+1}) \geq H(X_1+\cdots+X_{n}) + \frac{d}{2}\log{\Bigl(\frac{n+1}{n}\Bigr)} +o(1), $$ where $o(1)$ vanishes as $H(X_1) \to \infty$. Moreover, for the $o(1)$-term, we obtain a rate of convergence $ O\Bigl({H(X_1)}{e^{-\frac{1}{d}H(X_1)}}\Bigr)$, where the implied constants depend on $d$ and $n$. This generalizes to $\mathbb{Z}^d$ the one-dimensional result of the second named author (2023). As in dimension one, our strategy is to establish that the discrete entropy $H(X_1+\cdots+X_{n})$ is close to the differential (continuous) entropy $h(X_1+U_1+\cdots+X_{n}+U_{n})$, where $U_1,\dots, U_n$ are independent and identically distributed uniform random vectors on $[0,1]^d$ and to apply the theorem of Artstein, Ball, Barthe and Naor (2004) on the monotonicity of differential entropy. In fact, we show this result under more general assumptions than log-concavity, which are preserved up to constants under convolution. In order to show that log-concave distributions satisfy our assumptions in dimension $d\ge2$, more involved tools from convex geometry are needed because a suitable position is required. We show that, for a log-concave function on $\mathbb{R}^d$ in isotropic position, its integral, barycenter and covariance matrix are close to their discrete counterparts. Moreover, in the log-concave case, we weaken the isotropicity assumption to what we call almost isotropicity. One of our technical tools is a discrete analogue to the upper bound on the isotropic constant of a log-concave function, which extends to dimensions $d\ge1$ a result of Bobkov, Marsiglietti and Melbourne (2022).

Fast encoding and decoding of codes have always been an important topic in coding theory as well as complexity theory. Although encoding is easier than decoding in general, designing an encoding algorithm for codes of length $N$ with quasi-linear complexity $O(N\log N)$ is not an easy task. Despite the fact that algebraic geometry codes were discovered in the early 1980s, encoding algorithms for algebraic geometry codes with quasi-linear complexity $O(N\log N)$ have not been found, except for the simplest algebraic geometry codes -- Reed-Solomon codes. The best-known encoding algorithm for algebraic geometry codes based on a class of plane curves has quasi-linear complexity at least $O(N\log^2 N)$. In this paper, we design an encoding algorithm for algebraic geometry codes with quasi-linear complexity $O(N\log N)$. Our algorithm works well for a large class of algebraic geometry codes based on both plane and non-plane curves. The main idea of this paper is to generalize the divide-and-conquer method from the fast Fourier transform over finite fields to algebraic curves. Suppose we consider encoding of algebraic geometry codes based on an algebraic curve ${\mathcal X}$ over $\mathbb{F}_q$. We first consider a tower of Galois coverings ${\mathcal X}={\mathcal X}_0\rightarrow{\mathcal X}_1\rightarrow\cdots\rightarrow{\mathcal X}_r$ over a finite field $\mathbb{F}_q$, i.e., their function field tower $\mathbb{F}_q({\mathcal X}_0)\supsetneq\mathbb{F}_q({\mathcal X}_{1})\supsetneq\cdots \supsetneq\mathbb{F}_q({\mathcal X}_r)$ satisfies that each extension $\mathbb{F}_q({\mathcal X}_{i-1})/\mathbb{F}_q({\mathcal X}_i)$ is a Galois extension and the extension degree $[\mathbb{F}_q({\mathcal X}_{i-1}):\mathbb{F}_q({\mathcal X}_i)]$ is a constant. Then encoding of an algebraic geometry code based on ${\mathcal X}$ is reduced to the encoding of an algebraic geometry code based on ${\mathcal X}_r$.
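
For reference, the divide-and-conquer principle being generalized is the one behind the radix-2 fast Fourier transform over a finite field: a polynomial is evaluated on a multiplicative group of $2$-power order in $O(N\log N)$ operations by recursing on its even and odd coefficient halves. This is the Reed-Solomon (genus-zero) case only; the curve-based recursion over the tower ${\mathcal X}_0\rightarrow\cdots\rightarrow{\mathcal X}_r$ is not reproduced in this sketch.

```python
def fft_eval(coeffs, omega, mod):
    """Evaluate sum_j coeffs[j] * x^j at 1, omega, ..., omega^(N-1) modulo the
    prime mod. Requires len(coeffs) to be a power of two and omega to be a
    primitive len(coeffs)-th root of unity modulo mod."""
    n = len(coeffs)
    if n == 1:
        return coeffs[:]
    # recurse on even- and odd-indexed coefficients with the squared root of unity
    even = fft_eval(coeffs[0::2], omega * omega % mod, mod)
    odd = fft_eval(coeffs[1::2], omega * omega % mod, mod)
    out, w = [0] * n, 1
    for i in range(n // 2):
        t = w * odd[i] % mod
        out[i] = (even[i] + t) % mod          # value at omega^i
        out[i + n // 2] = (even[i] - t) % mod  # value at omega^(i + n/2)
        w = w * omega % mod
    return out
```

For example, `fft_eval([1, 2, 3, 4], omega=4, mod=17)` evaluates $1+2x+3x^2+4x^3$ at the powers of $4$ modulo $17$, since $4$ is a primitive fourth root of unity modulo $17$.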

We propose an extremely versatile approach to address a large family of matrix nearness problems, possibly with additional linear constraints. Our method is based on splitting a matrix nearness problem into two nested optimization problems, of which the inner one can be solved either exactly or cheaply, while the outer one can be recast as an unconstrained optimization task over a smooth real Riemannian manifold. We observe that this paradigm applies to many matrix nearness problems of practical interest appearing in the literature, thus revealing that they are equivalent in this sense to a Riemannian optimization problem. We also show that the objective function to be minimized on the Riemannian manifold can be discontinuous, thus requiring regularization techniques, and we give conditions for this to happen. Finally, we demonstrate the practical applicability of our method by implementing it for a number of matrix nearness problems that are relevant for applications and are currently considered very demanding in practice. Extensive numerical experiments demonstrate that our method often greatly outperforms its predecessors, including algorithms specifically designed for those particular problems.
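
The nested paradigm can be illustrated on a classical special case, the distance to singularity: for a fixed unit vector $v$ the inner problem (the smallest-norm perturbation $\Delta$ with $(A+\Delta)v=0$) has the closed-form solution $\Delta=-(Av)v^T$ with Frobenius norm $\|Av\|$, and the outer problem minimizes $\|Av\|$ over the unit sphere, a smooth Riemannian manifold. The sketch below uses naive projected gradient descent and illustrative parameters; it is not the paper's method.

```python
import numpy as np

def nearest_singular_matrix(A, steps=2000, seed=0):
    """Approximate the singular matrix nearest to A in the Frobenius norm."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    lr = 0.9 / np.linalg.norm(A, 2) ** 2       # step below the gradient's Lipschitz bound
    for _ in range(steps):
        grad = A.T @ (A @ v)                   # Euclidean gradient of 0.5 * ||A v||^2
        grad -= (grad @ v) * v                 # project onto the tangent space at v
        v -= lr * grad
        v /= np.linalg.norm(v)                 # retract back to the sphere
    delta = -np.outer(A @ v, v)                # inner problem solved in closed form
    return A + delta, np.linalg.norm(A @ v)    # nearest singular matrix and distance
```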

A new, more efficient numerical method for the single-degree-of-freedom (SDOF) problem is presented. Its construction is based on the weak form of the equation of motion, as obtained in Part I of the paper, using piecewise polynomial interpolation functions. The approximation rate can be arbitrarily high, proportional to the degree of the interpolation functions, limited only by numerical instability. Moreover, the mechanical energy of the system is conserved. Consequently, all significant drawbacks of existing algorithms, such as the limitations imposed by the Dahlquist barrier theorem and the need to introduce numerical damping, are overcome.
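
As a reference point for the energy-conservation claim, the sketch below implements the classical implicit midpoint rule for the undamped linear SDOF oscillator $m\ddot u + ku = 0$, which conserves the quadratic mechanical energy up to roundoff; it is not the weak-form scheme of the paper, and all parameters are illustrative.

```python
import numpy as np

def midpoint_sdof(u0, v0, k, m, h, steps):
    """Implicit midpoint integration of m*u'' + k*u = 0 from (u0, v0)."""
    A = np.array([[0.0, 1.0], [-k / m, 0.0]])   # first-order system z' = A z, z = (u, v)
    I = np.eye(2)
    step = np.linalg.solve(I - 0.5 * h * A, I + 0.5 * h * A)  # one-step midpoint map
    z = np.array([u0, v0], dtype=float)
    energy = []
    for _ in range(steps):
        z = step @ z
        energy.append(0.5 * m * z[1] ** 2 + 0.5 * k * z[0] ** 2)  # conserved up to roundoff
    return z, energy
```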
