
We give a comprehensive description of Wasserstein gradient flows of maximum mean discrepancy (MMD) functionals $\mathcal F_\nu := \text{MMD}_K^2(\cdot, \nu)$ towards given target measures $\nu$ on the real line, where we focus on the negative distance kernel $K(x,y) := -|x-y|$. In one dimension, the Wasserstein-2 space can be isometrically embedded into the cone $\mathcal C(0,1) \subset L_2(0,1)$ of quantile functions, leading to a characterization of Wasserstein gradient flows via the solution of an associated Cauchy problem on $L_2(0,1)$. Based on the construction of an appropriate counterpart of $\mathcal F_\nu$ on $L_2(0,1)$ and its subdifferential, we provide a solution of the Cauchy problem. For discrete target measures $\nu$, this results in a piecewise linear solution formula. We prove invariance and smoothing properties of the flow on subsets of $\mathcal C(0,1)$. For certain $\mathcal F_\nu$-flows, this implies that initial point measures instantly become absolutely continuous and stay so over time. Finally, we illustrate the behavior of the flow by various numerical examples using an implicit Euler scheme and demonstrate differences from the explicit Euler scheme, which is easier to compute but comes with limited convergence guarantees.
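
For illustration, the following is a minimal sketch of an explicit Euler step for a particle discretization of the MMD flow with the negative distance kernel in one dimension; it is not the paper's implementation, and the step size, particle counts, and gradient scaling are illustrative assumptions.

```python
import numpy as np

def mmd_flow_explicit_euler(x0, y, tau=1e-2, n_steps=500):
    """Explicit Euler sketch for a particle discretization of the MMD flow
    towards a discrete target nu, using the negative distance kernel
    K(x, y) = -|x - y| in one dimension."""
    x = np.asarray(x0, dtype=float).copy()  # particles approximating the flowing measure
    y = np.asarray(y, dtype=float)          # samples of the target measure nu
    N, M = len(x), len(y)
    for _ in range(n_steps):
        # d/dx K(x, z) = -sign(x - z) for the negative distance kernel
        interaction = -np.sign(x[:, None] - x[None, :]).sum(axis=1) / N
        attraction = -np.sign(x[:, None] - y[None, :]).sum(axis=1) / M
        grad = 2.0 * (interaction - attraction)  # gradient of F_nu at the empirical measure
        x -= tau * grad                          # explicit Euler step
    return x

# Example: flow 50 equispaced particles towards a two-point target measure.
x_final = mmd_flow_explicit_euler(np.linspace(-2.0, 2.0, 50), np.array([-1.0, 1.0]))
```

The implicit Euler scheme studied in the paper instead solves a proximal problem in each step; the explicit step above corresponds to the easier-to-compute variant mentioned at the end of the abstract.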

Related Content

We introduce the concept of an imprecise Markov semigroup $\mathbf{Q}$. It is a tool that allows us to represent ambiguity around both the initial and the transition probabilities of a Markov process via a compact collection of plausible Markov semigroups, each associated with a (different, plausible) Markov process. We use techniques from geometry, functional analysis, and (high-dimensional) probability to study the ergodic behavior of $\mathbf{Q}$. We show that, if the initial distribution of the Markov processes associated with the elements of $\mathbf{Q}$ is known and invariant, then, under some conditions that also involve the geometry of the state space, the ambiguity around their transition probabilities eventually fades. We call this property ergodicity of the imprecise Markov semigroup, and we relate it to the classical notion of ergodicity. We prove ergodicity both when the state space is Euclidean or a Riemannian manifold, and when it is an arbitrary measurable space. The importance of our findings for the fields of machine learning and computer vision is also discussed.

Order-invariant first-order logic is an extension of first-order logic (FO) where formulae can make use of a linear order on the structures, under the proviso that they are order-invariant, i.e., that their truth value is the same for all linear orders. We continue the study of the two-variable fragment of order-invariant first-order logic initiated by Zeume and Harwath, and study its complexity and expressive power. We first establish coNExpTime-completeness of the problem of deciding whether a given two-variable formula is order-invariant, which tightens the coN2ExpTime upper bound of Zeume and Harwath and significantly simplifies their proof. Second, we address the question of whether every property expressible in order-invariant two-variable logic is also expressible in first-order logic without the use of a linear order. While we were not able to provide a satisfactory answer to the question, we suspect that the answer is ``no''. To justify our claim, we present a class of finite tree-like structures (of unbounded degree) in which a relaxed variant of order-invariant two-variable FO expresses properties that are not definable in plain FO. On the other hand, we show that if one restricts attention to classes of structures of bounded degree, then the expressive power of order-invariant two-variable FO is contained within that of FO.

A property $\Pi$ on a finite set $U$ is \emph{monotone} if for every $X \subseteq U$ satisfying $\Pi$, every superset $Y \subseteq U$ of $X$ also satisfies $\Pi$. Many combinatorial properties can be seen as monotone properties. The problem of finding a minimum subset of $U$ satisfying $\Pi$ is a central problem in combinatorial optimization. Although many approximate/exact algorithms have been developed to solve this kind of problem for numerous properties, a solution obtained by these algorithms is often unsuitable for real-world applications due to the difficulty of building accurate mathematical models of real-world problems. A promising approach to overcome this difficulty is to \emph{enumerate} multiple small solutions rather than to \emph{find} a single small solution. To this end, given a weight function $w: U \to \mathbb N$ and an integer $k$, we devise algorithms that \emph{approximately} enumerate all minimal subsets of $U$ with weight at most $k$ satisfying $\Pi$ for various monotone properties $\Pi$, where "approximate enumeration" means that the algorithms output all minimal subsets satisfying $\Pi$ whose weight is at most $k$, and may also output some minimal subsets satisfying $\Pi$ whose weight exceeds $k$ but is at most $ck$ for some constant $c \ge 1$. These algorithms allow us to efficiently enumerate minimal vertex covers, minimal dominating sets in bounded-degree graphs, minimal feedback vertex sets, minimal hitting sets in bounded-rank hypergraphs, etc., of weight at most $k$ with constant approximation factors.
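
As a concrete illustration of the definitions only (not of the paper's algorithms), the brute-force sketch below enumerates all minimal vertex covers of weight at most $k$ in a tiny graph; vertex cover is a monotone property because every superset of a vertex cover is again a vertex cover. The graph, weights, and function names are made up for the example.

```python
from itertools import combinations

def is_vertex_cover(edges, subset):
    """Monotone property: every edge has an endpoint in `subset`."""
    return all(u in subset or v in subset for u, v in edges)

def minimal_vertex_covers_up_to_weight(vertices, edges, weight, k):
    """Brute-force enumeration of all minimal vertex covers of weight <= k.
    (The paper's algorithms avoid brute force and allow a factor-c slack
    on the weight bound; this loop only illustrates the definitions.)"""
    results = []
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            s = set(subset)
            if not is_vertex_cover(edges, s):
                continue
            # Minimality: removing any single vertex must break the property.
            if any(is_vertex_cover(edges, s - {v}) for v in s):
                continue
            if sum(weight[v] for v in s) <= k:
                results.append(s)
    return results

# Example: a path on 4 vertices with unit weights and k = 2.
vertices = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (3, 4)]
print(minimal_vertex_covers_up_to_weight(vertices, edges, {v: 1 for v in vertices}, k=2))
```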

We study Clustered Planarity with Linear Saturators, the problem of augmenting an $n$-vertex planar graph whose vertices are partitioned into independent sets (called clusters) with paths, one for each cluster, that connect all the vertices in each cluster while maintaining planarity. We show that the problem can be solved in time $2^{O(n)}$ for both the variable and the fixed embedding case. Moreover, we show that it can be solved in subexponential time $2^{O(\sqrt{n}\log n)}$ in the fixed embedding case if, additionally, the input graph is connected. The latter time complexity is tight under the Exponential-Time Hypothesis. We also show that $n$ can be replaced with the vertex cover number of the input graph by providing a linear (resp. polynomial) kernel for the variable-embedding (resp. fixed-embedding) case; these results contrast with the NP-hardness of the problem on graphs of bounded treewidth (and even on trees). Finally, we complement known lower bounds for the problem by showing that Clustered Planarity with Linear Saturators is NP-hard even when the number of clusters is at most $3$, thus excluding the algorithmic use of the number of clusters as a parameter.

In a large cyber-physical system, a temporal inconsistency of an output value can arise if there is a non-negligible delay between the instant when a sensor value is acquired from the environment and the instant when a setpoint, based on this sensor value, is used in the environment. Such a temporal inconsistency can be the cause of a critical malfunction of the cyber-physical system. This paper presents a solution to this temporal consistency problem that can best be implemented in a time-triggered architecture (TTA). In a TTA, the instants of sensor value acquisition, setpoint calculation, and actuation on the environment are statically configured, and the cyber-physical system implements software and hardware mechanisms to execute the respective actions tightly at these configured instants.
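
As a rough sketch of the idea only, the snippet below releases the three actions at statically configured offsets within a fixed round, which bounds the sensor-to-actuation delay by construction; the round length, offsets, and task names are illustrative assumptions, not values from the paper.

```python
import time

# Statically configured instants (offsets in milliseconds within one round); illustrative only.
ROUND_MS = 10
SCHEDULE = [
    (0, "acquire_sensor"),    # read the sensor value from the environment
    (3, "compute_setpoint"),  # derive the setpoint from the acquired value
    (6, "actuate"),           # apply the setpoint to the environment
]

def dispatcher(actions, n_rounds=3):
    """Minimal time-triggered dispatcher: each action is released exactly at its
    configured offset, so the age of the sensor value at actuation is fixed."""
    start = time.monotonic()
    for r in range(n_rounds):
        for offset_ms, name in SCHEDULE:
            release = start + (r * ROUND_MS + offset_ms) / 1000.0
            time.sleep(max(0.0, release - time.monotonic()))
            actions[name](r)

dispatcher({
    "acquire_sensor": lambda r: print(f"round {r}: acquire sensor value"),
    "compute_setpoint": lambda r: print(f"round {r}: compute setpoint"),
    "actuate": lambda r: print(f"round {r}: actuate"),
})
```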

In the general pattern formation (GPF) problem, a swarm of simple, autonomous, disoriented robots must form a given pattern. The robots' simplicity implies a strong limitation: when the initial configuration is rotationally symmetric, only patterns with a similar symmetry can be formed [Yamashita, Suzuki; TCS 2010]. The only known algorithm to form large patterns with limited visibility and without memory requires the robots to start in a near-gathering (a swarm of constant diameter) [Hahn et al.; SAND 2024]. However, not only do we not know of any near-gathering algorithm guaranteed to preserve symmetry, but most natural gathering strategies trivially increase symmetries [Castenow et al.; OPODIS 2022]. Thus, we study near-gathering without changing the swarm's rotational symmetry for disoriented, oblivious robots with limited visibility (the OBLOT-model, see [Flocchini et al.; 2019]). We introduce a technique based on the theory of dynamical systems to analyze how a given algorithm affects symmetry and provide sufficient conditions for symmetry preservation. Until now, it was unknown whether the considered OBLOT-model allows for any non-trivial algorithm that always preserves symmetry. Our first result shows that a variant of Go-to-the-Average always preserves symmetry but may sometimes lead to multiple, unconnected near-gathering clusters. Our second result is a symmetry-preserving near-gathering algorithm that works on swarms with a convex boundary (the outer boundary of the unit disc graph) and without holes (circles of diameter 1 inside the boundary without any robots).
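
To make the setting concrete, here is a minimal sketch of one fully synchronous round of the basic Go-to-the-Average rule under unit-range visibility; the paper analyzes a variant of this rule, and the example swarm below is purely illustrative.

```python
import numpy as np

def go_to_the_average_round(positions, viewing_range=1.0):
    """One fully synchronous round: every robot moves to the centroid of all
    robots it can see (those within the viewing range, including itself)."""
    positions = np.asarray(positions, dtype=float)
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    visible = dists <= viewing_range  # adjacency of the unit disc graph
    return np.array([positions[visible[i]].mean(axis=0) for i in range(len(positions))])

# Example: four robots forming a rotationally symmetric square of side 0.5.
swarm = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])
print(go_to_the_average_round(swarm))
```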

In this paper, we extend the work of Liesen et al. (2002), which analyzes how the condition number of an orthonormal matrix Q changes when a column is added ([Q, c]), particularly focusing on the perpendicularity of c to the span of Q. Their result, presented in Theorem 2.3 of Liesen et al. (2002), assumes exact arithmetic and orthonormality of Q, which is a strong assumption when applying these results to numerical methods such as QR factorization algorithms. In our work, we address this gap by deriving bounds on the condition number increase for a matrix B without assuming perfect orthonormality, even when a column is not perfectly orthogonal to the span of B. This framework allows us to analyze QR factorization methods where orthogonalization is imperfect and subject to Gaussian noise. We also provide results on the performance of orthogonal projection and least squares under Gaussian noise, further supporting the development of this theory.
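A small numerical experiment in the spirit of this setting compares the condition number of a nearly orthonormal matrix B before and after appending a noisily orthogonalized column; the matrix sizes, noise level, and use of numpy.linalg.cond are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, sigma = 200, 10, 1e-6   # illustrative sizes and Gaussian noise level

# B: nearly orthonormal basis from a QR factorization, then slightly perturbed.
B, _ = np.linalg.qr(rng.standard_normal((m, n)))
B += sigma * rng.standard_normal(B.shape)

# c: a new column orthogonalized against B by projection, with Gaussian noise
# modeling imperfect orthogonalization.
v = rng.standard_normal(m)
c = v - B @ (B.T @ v)                 # orthogonal projection onto span(B)^perp
c /= np.linalg.norm(c)
c += sigma * rng.standard_normal(m)   # imperfect orthogonalization

print("cond(B)      =", np.linalg.cond(B))
print("cond([B, c]) =", np.linalg.cond(np.column_stack([B, c])))
```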

We consider two symmetry metrics used to detect partisan gerrymandering: the Mean-Median Difference (MM) and Partisan Bias (PB). To lay the groundwork for our main results, we first assert that the foundation of a partisan gerrymander is to draw a map so that the preferred party wins an extreme number of seats, and that both the Mean-Median Difference and Partisan Bias have been used to detect partisan gerrymandering. We then provide both a theoretical and an empirical analysis of the Mean-Median Difference and Partisan Bias. In our theoretical analysis, we consider vote-share, seat-share pairs (V, S) for which one can construct election data having vote share V and seat share S with turnout equal in each district, and we calculate the range of values that MM and PB can achieve on such constructed election data. In the process, we find the range of vote-share, seat-share pairs (V, S) for which there is constructed election data with vote share V, seat share S, and MM = 0, and we see that the corresponding range for PB is the same set of (V, S) pairs. We also show how the set of such (V, S) pairs allowing for MM = 0 (and PB = 0) changes when turnout in each district is allowed to differ. From this theoretical analysis, we give examples of how these two metrics can fail to detect when a map has an extreme number of districts won. Because these examples are constructed, we follow them with our empirical study, in which we show, on 18 different U.S. maps, that these two metrics are unable to detect when a map has an extreme number of districts won.
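
For concreteness, the sketch below computes the two metrics from a vector of district vote shares, using one common definition of each: MM is the median minus the mean of the preferred party's district vote shares, and PB is the deviation from 50% of the seat share that party would win after a uniform swing to a tied statewide vote, assuming equal turnout in every district. The 5-district vote shares are made up for the example.

```python
import numpy as np

def mean_median_difference(vote_shares):
    """MM: median district vote share minus mean district vote share
    for the party of interest (one common definition)."""
    v = np.asarray(vote_shares, dtype=float)
    return np.median(v) - np.mean(v)

def partisan_bias(vote_shares):
    """PB: after a uniform swing that moves the mean district vote share to
    exactly 0.5, the party's seat share minus 0.5 (one common definition,
    assuming equal turnout in every district)."""
    v = np.asarray(vote_shares, dtype=float)
    swung = v + (0.5 - v.mean())      # uniform swing to a tied election
    return np.mean(swung > 0.5) - 0.5

# Hypothetical 5-district map with statewide vote share V = 0.54.
shares = [0.48, 0.51, 0.53, 0.58, 0.60]
print(mean_median_difference(shares), partisan_bias(shares))
```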

We study the question of whether submodular functions of random variables satisfying various notions of negative dependence satisfy Chernoff-like concentration inequalities. We prove such a concentration inequality for the lower tail when the random variables satisfy negative association or negative regression, partially resolving an open problem raised by Qiu and Singla [QS22]. Previous work showed such concentration results for random variables that come from specific dependent-rounding algorithms (Chekuri, Vondrak, and Zenklusen [CVZ10] and Harvey and Olver [HO14]). We discuss some applications of our results to combinatorial optimization and beyond. We also show applications to the concentration of read-$k$ families [Gav+15] under certain forms of negative dependence, and we give a simplified proof of the entropy-method approach of [Gav+15].

The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models.
