
We consider additive Schwarz methods for boundary value problems involving the $p$-Laplacian. While existing theoretical estimates suggest a sublinear convergence rate for these methods, empirical evidence from numerical experiments demonstrates a linear convergence rate. In this paper, we narrow the gap between these theoretical and empirical results by presenting a novel convergence analysis. First, we develop a new convergence theory for additive Schwarz methods written in terms of a quasi-norm. This quasi-norm behaves like the Bregman distance of the convex energy functional associated with the problem. Second, we provide a quasi-norm version of the Poincar\'{e}--Friedrichs inequality, which plays a crucial role in deriving a quasi-norm stable decomposition in a two-level domain decomposition setting. Using these key elements, we establish linear convergence of the methods.
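
For orientation, a brief sketch in our own generic notation (not taken from the paper) of the standard objects involved: the $p$-Laplacian problem is the minimization of the convex energy
\[
F(u) = \frac{1}{p} \int_\Omega |\nabla u|^p \, dx - \int_\Omega f u \, dx, \qquad u \in W_0^{1,p}(\Omega),
\]
and the Bregman distance of $F$, which the quasi-norm in the analysis is said to behave like, is
\[
D_F(u, v) = F(u) - F(v) - \langle F'(v),\, u - v \rangle .
\]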

Related content

We incorporate strong negation into the theory of computable functionals TCF, a common extension of Plotkin's PCF and G\"{o}del's system $\mathbf{T}$, by defining simultaneously the strong negation $A^{\mathbf{N}}$ of a formula $A$ and the strong negation $P^{\mathbf{N}}$ of a predicate $P$ in TCF. As a special case of the latter, we obtain the strong negation of an inductive and of a coinductive predicate of TCF. We prove appropriate versions of Ex falso quodlibet and of double negation elimination for strong negation in TCF. We introduce the so-called tight formulas of TCF, i.e., formulas implied by the weak negation of their strong negation, as well as the relative tight formulas. We present various case studies and examples, which reveal the naturality of our definition of strong negation in TCF and justify the use of TCF as a formal system for a large part of Bishop-style constructive mathematics.
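
For readers unfamiliar with strong negation, definitions of this kind are typically given on compound formulas by the usual Nelson-style clauses; the following is a standard sketch, not a quotation of the paper's definition:
\[
(A \wedge B)^{\mathbf{N}} := A^{\mathbf{N}} \vee B^{\mathbf{N}}, \quad
(A \vee B)^{\mathbf{N}} := A^{\mathbf{N}} \wedge B^{\mathbf{N}}, \quad
(A \to B)^{\mathbf{N}} := A \wedge B^{\mathbf{N}},
\]
\[
(\forall x\, A)^{\mathbf{N}} := \exists x\, A^{\mathbf{N}}, \qquad
(\exists x\, A)^{\mathbf{N}} := \forall x\, A^{\mathbf{N}}.
\]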

This paper studies the convergence of a spatial semidiscretization of a three-dimensional stochastic Allen-Cahn equation with multiplicative noise. For non-smooth initial values, the regularity of the mild solution is investigated, and an error estimate is derived in the spatial $L^2$-norm. For smooth initial values, two error estimates in general spatial $L^q$-norms are established.
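
As a point of reference, equations of this type have the generic form (our formulation, with the standard double-well nonlinearity assumed)
\[
du = \bigl(\Delta u + u - u^3\bigr)\, dt + G(u)\, dW(t), \qquad u(0) = u_0,
\]
where $W$ is a cylindrical Wiener process and the multiplicative noise operator $G$ satisfies suitable Lipschitz-type conditions.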

The problems of optimal recovery of univariate functions and their derivatives are studied. To solve these problems, two variants of the truncation method are constructed, which are order-optimal both in accuracy and in the amount of Galerkin information involved. For numerical summation, it is established how the parameters characterizing the problem affect its stability.
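
Schematically, and as our generic description of the truncation approach rather than the paper's specific construction: given noisy Galerkin (Fourier) coefficients $c_k^\delta$ with $|c_k^\delta - c_k| \le \delta$, one approximates the target function by the partial sum
\[
f_{N(\delta)}^\delta = \sum_{k=1}^{N(\delta)} c_k^\delta\, \varphi_k,
\]
where the truncation level $N(\delta)$ is chosen to balance the truncation error against the propagated data error.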

Numerically solving multi-marginal optimal transport (MMOT) problems is computationally prohibitive, even for moderate-scale instances involving $l\ge4$ marginals with support sizes of $N\ge1000$. The cost in MMOT is represented as a tensor with $N^l$ elements, so even accessing each element once incurs a significant computational burden. In fact, many algorithms require direct computation of tensor-vector products, leading to a computational complexity of $O(N^l)$ or beyond. In this paper, inspired by our previous work [Comm. Math. Sci., 20 (2022), pp. 2053-2057], we observe that the costly tensor-vector products in the Sinkhorn algorithm can be computed by a recursive process that separates the summations via dynamic programming. Based on this idea, we propose a fast tensor-vector product algorithm for the MMOT problem with $L^1$ cost, reducing the computational cost of the entropy-regularized solution to $O(N)$. Numerical experiments confirm the high performance of this method, which can be several orders of magnitude faster than the original Sinkhorn algorithm.
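
To illustrate the separation-of-summations idea, here is a minimal sketch under the assumption of a chain-structured $L^1$ cost $c(x_1,\dots,x_l)=\sum_i |x_i-x_{i+1}|$. The function and variable names are ours, not the paper's, and this sketch only attains $O(lN^2)$ per product, whereas the paper's method exploits the $L^1$ structure further to reach $O(N)$.

import numpy as np

# Sketch only: for a chain-structured cost the Gibbs kernel factorizes as a
# product of pairwise kernels K_i(x_i, x_{i+1}) = exp(-|x_i - x_{i+1}| / eps),
# so the marginal sums needed by multi-marginal Sinkhorn can be computed by
# forward/backward recursions (dynamic programming) instead of touching all
# N^l tensor entries.
def chain_marginals(points, u, eps):
    """points: list of l 1-D arrays of support points; u: list of l positive
    scaling vectors.  Returns, for each j, the vector
    s_j(x_j) = sum over the other variables of prod_i K_i * prod_{i != j} u_i."""
    l = len(points)
    K = [np.exp(-np.abs(points[i][:, None] - points[i + 1][None, :]) / eps)
         for i in range(l - 1)]
    alpha = [np.ones_like(points[0])]          # forward messages
    for i in range(l - 1):
        alpha.append(K[i].T @ (alpha[i] * u[i]))
    beta = [np.ones_like(points[-1])]          # backward messages
    for i in range(l - 2, -1, -1):
        beta.insert(0, K[i] @ (beta[0] * u[i + 1]))
    return [alpha[j] * beta[j] for j in range(l)]

# Example: l = 4 marginals on N = 200 grid points each.
grids = [np.linspace(0.0, 1.0, 200) for _ in range(4)]
scalings = [np.ones(200) for _ in range(4)]
marginal_sums = chain_marginals(grids, scalings, eps=0.05)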

We consider the equivalence between the two main categorical models for the type-theoretical operation of context comprehension, namely P. Dybjer's categories with families and B. Jacobs' comprehension categories, and generalise it to the non-discrete case. The classical equivalence can be summarised by the slogan "terms as sections". By recognising "terms as coalgebras", we show how to use the structure-semantics adjunction to prove that a 2-category of comprehension categories is biequivalent to a 2-category of (non-discrete) categories with families. The biequivalence restricts to the classical one proved by Hofmann in the discrete case. It also provides a framework in which to compare the different morphisms of these structures that have appeared in the literature, which vary in the degree to which they preserve the relevant structure. We consider in particular morphisms defined by Clairambault-Dybjer, Jacobs, Larrea, and Uemura.
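
To recall the slogan in symbols, in standard categories-with-families notation (not specific to this paper): a term corresponds to a section of the projection $\mathrm{p}_A : \Gamma.A \to \Gamma$,
\[
M \in \mathrm{Tm}(\Gamma, A)
\quad\longleftrightarrow\quad
\overline{M} : \Gamma \to \Gamma.A
\ \text{ with } \
\mathrm{p}_A \circ \overline{M} = \mathrm{id}_\Gamma .
\]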

This paper is a significant step forward in understanding dependency equilibria within the framework of real algebraic geometry, encompassing both pure and mixed equilibria. We start by breaking down the concept for a general audience, using concrete examples to illustrate the main results. In alignment with Spohn's original definition of dependency equilibria, we propose three alternative definitions, allowing for a comprehensive algebro-geometric study of all dependency equilibria. We give a sufficient condition for the existence of a pure dependency equilibrium and show that every Nash equilibrium lies on the Spohn variety, the algebraic model for dependency equilibria. For generic games, the set of real points of the Spohn variety is Zariski dense. Furthermore, every Nash equilibrium in this case is a dependency equilibrium. Finally, we present a detailed analysis of the geometric structure of dependency equilibria for $(2\times2)$-games.

In many statistical modeling problems, such as classification and regression, it is common to encounter sparse and blocky coefficients. The sparse fused Lasso is specifically designed to recover such sparse, blocky structured features, especially when the design matrix is ultrahigh-dimensional, i.e., the number of features vastly exceeds the number of samples. The quantile loss is a well-known robust loss function widely used in statistical modeling. In this paper, we propose a new sparse fused Lasso classification model and develop a unified multi-block linearized alternating direction method of multipliers algorithm that effectively selects sparse and blocky features for both regression and classification. We prove that the algorithm converges at a linear rate. Moreover, the algorithm has a significant advantage over existing methods for solving ultrahigh-dimensional sparse fused Lasso regression and classification models due to its lower time complexity, and it can easily be extended to various existing fused Lasso models. Finally, we present numerical results for several synthetic and real-world examples, which demonstrate the robustness, scalability, and accuracy of the proposed classification model and algorithm.
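
For concreteness, a generic form of a quantile-loss sparse fused Lasso problem reads (our notation; the classification variant proposed in the paper may differ in details):
\[
\min_{\beta \in \mathbb{R}^p} \ \frac{1}{n} \sum_{i=1}^n \rho_\tau\bigl(y_i - x_i^\top \beta\bigr)
+ \lambda_1 \|\beta\|_1
+ \lambda_2 \sum_{j=2}^{p} |\beta_j - \beta_{j-1}|,
\qquad \rho_\tau(r) = r\bigl(\tau - \mathbf{1}\{r < 0\}\bigr),
\]
where the $\ell_1$ term promotes sparsity and the fusion term promotes blocky (piecewise-constant) coefficients.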

Sequences of parametrized Lyapunov equations arise in many application settings. Moreover, the solutions of such equations are often intermediate steps in an overall procedure whose main goal is the computation of $\text{trace}(EX)$, where $X$ denotes the solution of a Lyapunov equation and $E$ is a given matrix. We are interested in problems where the parameter dependency of the coefficient matrix is encoded as a low-rank modification of a fixed \emph{seed} matrix. We propose two novel numerical procedures that fully exploit this common structure. The first builds upon the Sherman-Morrison-Woodbury (SMW) formula and recycling Krylov techniques, and is well suited for small-dimensional problems as it makes use of dense numerical linear algebra tools. The second algorithm can instead address large-scale problems by relying on state-of-the-art projection techniques based on the extended Krylov subspace. We test the new algorithms on several problems arising in the study of damped vibrational systems and in the analysis of output synchronization problems for multi-agent systems. Our results show that the proposed algorithms are superior to state-of-the-art techniques, as they remarkably speed up the computation of accurate solutions.
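
For reference, the Sherman-Morrison-Woodbury formula underlying the first procedure is, for suitably sized invertible matrices $A$ and $C$,
\[
(A + U C V)^{-1} = A^{-1} - A^{-1} U \bigl(C^{-1} + V A^{-1} U\bigr)^{-1} V A^{-1},
\]
so a solve with a low-rank-modified matrix reduces to solves with the fixed seed matrix plus a small dense correction, which is the kind of structure that recycled Krylov solves can exploit.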

We present and analyze a simple numerical method that diagonalizes a complex normal matrix $A$ by diagonalizing the Hermitian matrix obtained from a random linear combination of the Hermitian and skew-Hermitian parts of $A$.
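
A minimal numerical sketch of this idea (our own illustrative code, not the authors' implementation): a normal $A$ shares an orthonormal eigenbasis with its Hermitian part $H=(A+A^*)/2$ and with $K=(A-A^*)/(2i)$, so diagonalizing a random real combination $c_1 H + c_2 K$ with a Hermitian eigensolver recovers, with probability one, an eigenbasis of $A$.

import numpy as np

# Sketch only: eigendecompose a complex normal matrix A via a Hermitian
# eigensolver applied to a random real combination of its Hermitian part and
# its skew-Hermitian part divided by i (both Hermitian, both sharing A's
# orthonormal eigenbasis).
def diagonalize_normal(A, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    H = (A + A.conj().T) / 2       # Hermitian part
    K = (A - A.conj().T) / 2j      # skew-Hermitian part divided by i
    c = rng.standard_normal(2)
    _, Q = np.linalg.eigh(c[0] * H + c[1] * K)   # orthonormal eigenvectors
    lam = np.diag(Q.conj().T @ A @ Q).copy()     # eigenvalues of A
    return lam, Q

# Quick check on a random normal matrix (unitary conjugate of a diagonal).
rng = np.random.default_rng(0)
n = 6
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
A = U @ np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n)) @ U.conj().T
lam, Q = diagonalize_normal(A, rng)
print(np.linalg.norm(A - Q @ np.diag(lam) @ Q.conj().T))   # ~ machine precision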

The problem of combining p-values is an old and fundamental one, and the classic assumption of independence is often violated or unverifiable in many applications. There are many well-known rules that can combine a set of arbitrarily dependent p-values (for the same hypothesis) into a single p-value. We show that essentially all these existing rules can be strictly improved when the p-values are exchangeable, or when external randomization is allowed (or both). For example, we derive randomized and/or exchangeable improvements of well-known rules like "twice the median" and "twice the average", as well as geometric and harmonic means. Exchangeable p-values are often produced one at a time (for example, under repeated tests involving data splitting), and our rules can combine them sequentially as they are produced, stopping when the combined p-values stabilize. Our work also improves rules for combining arbitrarily dependent p-values, since the latter become exchangeable if they are presented to the analyst in a random order. The main technical advance is to show that all existing combination rules can be obtained by calibrating the p-values to e-values (using an $\alpha$-dependent calibrator), averaging those e-values, converting to a level-$\alpha$ test using Markov's inequality, and finally obtaining p-values by combining this family of tests; the improvements are delivered via recent randomized and exchangeable variants of Markov's inequality.
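
To illustrate the pipeline described above in schematic form (a generic instance, not the paper's exact construction): each p-value $p_i$ is calibrated to an e-value by a calibrator such as $f_\kappa(p) = \kappa p^{\kappa - 1}$ with $\kappa \in (0,1)$; e-values may be averaged under arbitrary dependence, and Markov's inequality converts the average back into a test,
\[
e_i = f_\kappa(p_i), \qquad
\bar e = \frac{1}{K} \sum_{i=1}^{K} e_i, \qquad
\Pr\bigl(\bar e \ge 1/\alpha\bigr) \le \alpha \ \text{ under the null},
\]
so that $\min(1, 1/\bar e)$ is a valid combined p-value; the improvements in the paper come from sharper (randomized or exchangeable) versions of this final Markov step.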
