
We propose a new algorithm for variance reduction when estimating $f(X_T)$, where $X$ is the solution to some stochastic differential equation and $f$ is a test function. The new estimator is $(f(X^1_T) + f(X^2_T))/2$, where $X^1$ and $X^2$ have the same marginal law as $X$ but are pathwise correlated so as to reduce the variance. The optimal correlation function $\rho$ is approximated by a deep neural network and is calibrated along the trajectories of $(X^1, X^2)$ by policy gradient and reinforcement learning techniques. Finding an optimal coupling given the marginal laws has links with maximum optimal transport.
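
For reference, the classical antithetic estimator is recovered when $\rho \equiv -1$. The numpy sketch below implements the averaged two-path estimator in a simplified illustrative setting (geometric Brownian motion, call-type payoff, constant correlation); in the paper $\rho$ is state-dependent and learned by a neural network, so everything here is a stand-in rather than the proposed method.

```python
import numpy as np

def coupled_estimator(n_paths=100_000, n_steps=100, T=1.0,
                      x0=1.0, mu=0.05, sigma=0.2, rho=-1.0, seed=0):
    """Monte Carlo estimate of E[f(X_T)] with the averaged estimator
    (f(X^1_T) + f(X^2_T)) / 2, where X^1 and X^2 share the marginal law of X
    but are driven by Brownian increments with correlation rho.

    Illustrative assumptions: X is a geometric Brownian motion, f(x) = max(x-1, 0),
    and rho is a constant (rho = -1 gives antithetic paths); in the paper rho is a
    state-dependent function approximated by a neural network."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    f = lambda x: np.maximum(x - 1.0, 0.0)
    x1 = np.full(n_paths, x0)
    x2 = np.full(n_paths, x0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z_ind = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * z_ind    # correlated driving noise
        x1 *= 1.0 + mu * dt + sigma * np.sqrt(dt) * z1   # Euler step for X^1
        x2 *= 1.0 + mu * dt + sigma * np.sqrt(dt) * z2   # X^2: same marginal law
    est = 0.5 * (f(x1) + f(x2))
    return est.mean(), est.std(ddof=1) / np.sqrt(n_paths)

print(coupled_estimator())   # (estimate, standard error)
```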

Related content

Let $A$ and $B$ be sets of vertices in a graph $G$. Menger's theorem states that for every positive integer $k$, either there exists a collection of $k$ vertex-disjoint paths between $A$ and $B$, or $A$ can be separated from $B$ by a set of at most $k-1$ vertices. Let $\Delta$ be the maximum degree of $G$. We show that there exists a function $f(\Delta) = (\Delta+1)^{\Delta^2+1}$ such that for every positive integer $k$, either there exists a collection of $k$ vertex-disjoint and pairwise anticomplete paths between $A$ and $B$, or $A$ can be separated from $B$ by a set of at most $k \cdot f(\Delta)$ vertices. We also show that the result can be generalized from bounded-degree graphs to graphs excluding a topological minor. On the negative side, we show that no such relation holds on graphs that have degeneracy 2 and arbitrarily large girth, even when $k = 2$. Similar results were obtained independently and concurrently by Hendrey, Norin, Steiner, and Turcotte [arXiv:2309.07905].
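
For orientation, the classical Menger duality between two vertex sets (without the anticomplete requirement) can be checked with off-the-shelf flow-based routines. The sketch below uses networkx and the standard super-source/super-sink construction; it illustrates only the baseline theorem, not the paper's result, and the example graph is arbitrary.

```python
import networkx as nx

def menger_between_sets(G, A, B):
    """Vertex-disjoint A-B paths and an A-B separator, via classical Menger duality."""
    H = G.copy()
    s, t = "_source", "_sink"                       # auxiliary terminals (assumed not in G)
    H.add_edges_from((s, a) for a in A)
    H.add_edges_from((b, t) for b in B)
    paths = list(nx.node_disjoint_paths(H, s, t))   # internally disjoint s-t paths
    cut = nx.minimum_node_cut(H, s, t)              # minimum s-t vertex separator
    return [p[1:-1] for p in paths], cut            # strip the auxiliary terminals

G = nx.grid_2d_graph(4, 4)
A = [(0, j) for j in range(4)]
B = [(3, j) for j in range(4)]
paths, cut = menger_between_sets(G, A, B)
print(len(paths), "disjoint paths; separator of size", len(cut))
```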

The restoration lemma is a classic result by Afek, Bremler-Barr, Kaplan, Cohen, and Merritt [PODC '01], which relates the structure of shortest paths in a graph $G$ before and after some edges in the graph fail. Their work shows that, after one edge failure, any replacement shortest path avoiding this failing edge can be partitioned into two pre-failure shortest paths. More generally, this implies an additive tradeoff between fault tolerance and subpath count: for any $f, k$, we can partition any $f$-edge-failure replacement shortest path into $k+1$ subpaths which are each an $(f-k)$-edge-failure replacement shortest path. This generalized result has found applications in routing, graph algorithms, fault tolerant network design, and more. Our main result improves this to a multiplicative tradeoff between fault tolerance and subpath count. We show that for all $f, k$, any $f$-edge-failure replacement path can be partitioned into $O(k)$ subpaths that are each an $(f/k)$-edge-failure replacement path. We also show an asymptotically matching lower bound. In particular, our results imply that the original restoration lemma is exactly tight in the case $k=1$, but can be significantly improved for larger $k$. We also show an extension of this result to weighted input graphs, and we give efficient algorithms that compute path decompositions satisfying our improved restoration lemmas.
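
The single-failure case of the restoration lemma is easy to verify experimentally. The toy networkx sketch below computes a replacement shortest path after one edge failure and searches for a split vertex at which it decomposes into two pre-failure shortest paths; the graph and the failed edge are illustrative choices, not taken from the paper.

```python
import networkx as nx

def check_restoration(G, s, t, failed_edge):
    """Return a split vertex v such that the replacement s-t shortest path avoiding
    failed_edge is a pre-failure shortest s-v path followed by a pre-failure
    shortest v-t path (the classical single-failure restoration lemma)."""
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    H = G.copy()
    H.remove_edge(*failed_edge)
    repl = nx.dijkstra_path(H, s, t, weight="weight")    # replacement shortest path
    w = [0.0]                                            # prefix weights along repl
    for u, v in zip(repl, repl[1:]):
        w.append(w[-1] + G[u][v].get("weight", 1))
    total = w[-1]
    for i, v in enumerate(repl):                         # try every split vertex
        if w[i] == dist[s][v] and total - w[i] == dist[v][t]:
            return v
    return None

G = nx.cycle_graph(6)                                    # unweighted 6-cycle
print(check_restoration(G, 0, 3, (2, 3)))                # prints a valid split vertex
```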

Most dynamics functions are not well-aligned to task requirements. Controllers, therefore, often invert the dynamics and reshape it into something more useful. The learning community has found that these controllers, such as Operational Space Control (OSC), can offer important inductive biases for training. However, OSC only captures straight-line end-effector motion. There's a lot more behavior we could and should be packing into these systems. Earlier work [15][16][19] developed a theory that generalized these ideas and constructed a broad and flexible class of second-order dynamical systems which was simultaneously expressive enough to capture substantial behavior (such as that listed above), and maintained the types of stability properties that make OSC and controllers like it a good foundation for policy design and learning. This paper, motivated by the empirical success of the types of fabrics used in [20], reformulates the theory of fabrics into a form that's more general and easier to apply to policy learning problems. We focus on the stability properties that make fabrics a good foundation for policy synthesis. Fabrics create a fundamentally stable medium within which a policy can operate; they influence the system's behavior without preventing it from achieving tasks within its constraints. When a fabric is geometric (path consistent), we can interpret the fabric as forming a road network of paths that the system wants to follow at constant speed absent a forcing policy, giving geometric intuition to its role as a prior. The policy operating over the geometric fabric acts to modulate speed and steers the system from one road to the next as it accomplishes its task. We reformulate the theory of fabrics here rigorously and develop theoretical results characterizing system behavior and illuminating how to design these systems, while also emphasizing intuition throughout.
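
For readers unfamiliar with the OSC baseline referred to above, the following toy sketch shows the straight-line end-effector behaviour that task-space PD control induces on a planar two-link arm. It is a kinematic caricature with illustrative link lengths and gains, and it is not the fabrics framework developed in the paper.

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q, l1=1.0, l2=1.0):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q, qd = np.array([0.3, 1.2]), np.zeros(2)
goal, dt, kp, kd = np.array([1.2, 0.8]), 0.01, 25.0, 10.0
for _ in range(500):
    J = jacobian(q)
    a_task = kp * (goal - fk(q)) - kd * (J @ qd)   # task-space PD acceleration
    qdd = np.linalg.pinv(J) @ a_task               # pull back to joint space (J'qd term omitted)
    qd += dt * qdd                                 # semi-implicit Euler integration
    q += dt * qd
print(fk(q), "~", goal)                            # end effector converges to the goal
```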

$\textit{De Novo}$ genome assembly is one of the most important tasks in computational biology. ELBA is the state-of-the-art distributed-memory parallel algorithm for the overlap detection and layout simplification steps of $\textit{De Novo}$ genome assembly, but it has a performance bottleneck in pairwise alignment. In this work, we introduce three GPU schedulers for ELBA to accommodate multiple MPI processes and multiple GPUs. The GPU schedulers enable multiple MPI processes to perform computation on GPUs in a round-robin fashion. Both strong and weak scaling experiments show that the three schedulers significantly improve the performance of the baseline, while there is a trade-off between parallelism and GPU-scheduler overhead. For the best-performing implementation, the one-to-one scheduler achieves a $\sim$7-8$\times$ speed-up using 25 MPI processes compared with the baseline vanilla ELBA GPU scheduler.
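
The round-robin mapping itself is simple to state; the hypothetical mpi4py sketch below shows the idea of binding each MPI rank to one of a node's GPUs by a modulo rule, so several ranks can share a device. ELBA's actual schedulers live inside its GPU alignment pipeline; the variable names, environment variables, and the assumption of contiguous ranks per node are all illustrative.

```python
import os
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

gpus_per_node = int(os.environ.get("GPUS_PER_NODE", "4"))  # assumed node layout
local_rank = rank % gpus_per_node                          # assumes contiguous ranks per node
gpu_id = local_rank                                        # round-robin / one-to-one mapping

# Bind this rank to its GPU before any CUDA context is created.
os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
print(f"rank {rank} -> GPU {gpu_id}")
```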

Lawvere showed that generalised metric spaces are categories enriched over $[0, \infty]$, the quantale of the positive extended reals. The statement of enrichment is a quantitative analogue of being a preorder. Towards seeking a logic for quantitative metric reasoning, we investigate three $[0,\infty]$-valued propositional logics over the Lawvere quantale. The basic logical connectives shared by all three logics are those that can be interpreted in any quantale, viz. finite conjunctions and disjunctions, tensor (addition for the Lawvere quantale) and linear implication (here a truncated subtraction); to these we add, in turn, the constant $1$ to express integer values, and scalar multiplication by a non-negative real to express general affine combinations. Quantitative equational logic can be interpreted in the third logic if we allow inference systems instead of axiomatic systems. For each of these logics we develop a natural deduction system which we prove to be decidably complete w.r.t. the quantale-valued semantics. The heart of the completeness proof makes use of the Motzkin transposition theorem. Consistency is also decidable; the proof makes use of Fourier-Motzkin elimination of linear inequalities. Strong completeness does not hold in general, even (as is known) for theories over finitely many propositional variables; indeed even an approximate form of strong completeness in the sense of Pavelka or Ben Yaacov -- provability up to arbitrary precision -- does not hold. However, we can show it for theories axiomatized by a (not necessarily finite) set of judgements in normal form over a finite set of propositional variables when we restrict to models that do not map variables to $\infty$; the proof uses Hurwicz's general form of Farkas' Lemma.
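
For concreteness, one standard $[0,\infty]$-valued reading of these connectives takes $0$ as truth, so the quantale order is the reverse of the numerical order; writing $a$ and $b$ for the values of the two subformulas (the notation below is illustrative, not the paper's exact presentation):
\[
a \otimes b = a + b, \qquad a \multimap b = \max(b - a,\, 0), \qquad a \wedge b = \max(a, b), \qquad a \vee b = \min(a, b),
\]
with the conventions for $\infty$ understood for the tensor and truncated subtraction.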

Clones of operations of arity $\omega$ (referred to as $\omega$-operations) have been employed by Neumann to represent varieties of infinitary algebras defined by operations of at most arity $\omega$. More recently, clone algebras have been introduced to study clones of functions, including $\omega$-operations, within the framework of one-sorted universal algebra. Additionally, polymorphisms of arity $\omega$, which are $\omega$-operations preserving the relations of a given first-order structure, have recently been used to establish model theory results with applications in the field of the complexity of constraint satisfaction problems (CSPs). In this paper, we undertake a topological and algebraic study of polymorphisms of arity $\omega$ and their corresponding invariant relations. Given a Boolean ideal $X$ on the set $A^\omega$, we propose a method to endow the set of $\omega$-operations on $A$ with a topology, which we refer to as the $X$-topology. Notably, the topology of pointwise convergence can be retrieved as a special case of this approach. Polymorphisms and invariant relations are then defined parametrically, with respect to the $X$-topology. We characterise the $X$-closed clones of $\omega$-operations in terms of $Pol^\omega$-$Inv^\omega$ and present a method to relate $Inv^\omega$-$Pol^\omega$ to the classical (finitary) $Inv$-$Pol$.
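
As a reminder of the objects involved, the finitary $Pol$-$Inv$ Galois connection lifts to arity $\omega$ as follows; the paper refines these notions parametrically with respect to the $X$-topology, so the unparametrized form below is only for orientation:
\[
Pol^{\omega}(R) = \{\, f\colon A^{\omega} \to A \mid f \text{ preserves } R \,\}, \qquad Inv^{\omega}(F) = \{\, R \mid \text{every } f \in F \text{ preserves } R \,\},
\]
where an $\omega$-operation $f$ preserves an $m$-ary relation $R$ if, whenever the tuples $(a_i^{1},\dots,a_i^{m})$ lie in $R$ for all $i<\omega$, so does $\bigl(f(a_0^{1},a_1^{1},\dots),\dots,f(a_0^{m},a_1^{m},\dots)\bigr)$.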

The integer complexity $f(n)$ of a positive integer $n$ is defined as the minimum number of 1's needed to represent $n$, using additions, multiplications and parentheses. We present two simple and faster algorithms for computing the integer complexity: 1) A near-optimal $O(N\mathop{\mathrm{polylog}} N)$-time algorithm for computing the integer complexity of all $n\leq N$, improving the previous $O(N^{1.223})$ one [Cordwell et al., 2017]. 2) The first sublinear-time algorithm for computing the integer complexity of a single $n$, with running time $O(n^{0.6154})$. The previous algorithms for computing a single $f(n)$ require computing all $f(1),\dots,f(n)$.
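
For small $N$ the defining recursion can be evaluated directly. The quadratic-time Python baseline below is only a correctness reference for the definition and is far from both the near-optimal $O(N\mathop{\mathrm{polylog}} N)$ algorithm and the sublinear-time single-$n$ algorithm of the paper.

```python
from math import isqrt

def integer_complexity_upto(N):
    """f[n] = minimum number of 1's to build n from +, * and parentheses,
    computed by the straightforward DP over additive and multiplicative splits
    (quadratic time overall; for reference only)."""
    f = [0] * (N + 1)
    f[1] = 1
    for n in range(2, N + 1):
        best = min(f[a] + f[n - a] for a in range(1, n // 2 + 1))  # additive splits
        for d in range(2, isqrt(n) + 1):                           # multiplicative splits
            if n % d == 0:
                best = min(best, f[d] + f[n // d])
        f[n] = best
    return f

print(integer_complexity_upto(12)[1:])   # [1, 2, 3, 4, 5, 5, 6, 6, 6, 7, 8, 7]
```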

At the core of the quest for a logic for PTime is a mismatch between algorithms making arbitrary choices and isomorphism-invariant logics. One approach to overcome this problem is witnessed symmetric choice. It allows for choices from definable orbits which are certified by definable witnessing automorphisms. We consider the extension of fixed-point logic with counting (IFPC) with witnessed symmetric choice (IFPC+WSC) and a further extension with an interpretation operator (IFPC+WSC+I). The latter operator evaluates a subformula in the structure defined by an interpretation. This structure possibly has other automorphisms exploitable by the WSC-operator. For similar extensions of pure fixed-point logic (IFP) it is known that IFP+WSC+I simulates counting, which IFP+WSC fails to do. For IFPC+WSC it is unknown whether the interpretation operator increases expressiveness and thus allows studying the relation between WSC and interpretations beyond counting. We separate IFPC+WSC from IFPC+WSC+I by showing that IFPC+WSC is not closed under FO-interpretations. By the same argument, we answer an open question of Dawar and Richerby regarding non-witnessed symmetric choice in IFP. Additionally, we prove that nesting WSC-operators increases the expressiveness using the so-called CFI graphs. We show that if IFPC+WSC+I canonizes a particular class of base graphs, then it also canonizes the corresponding CFI graphs. This differs from various other logics, where CFI graphs provide difficult instances.

In this paper, we present distributed fault-tolerant algorithms that approximate the centroid of a set of $n$ data points in $\mathbb{R}^d$. Our work falls into the broader area of approximate multidimensional Byzantine agreement. The standard approach used in existing algorithms is to agree on a vector inside the convex hull of all correct vectors. This strategy dismisses many possibly correct data points. As a result, the algorithm does not necessarily agree on a representative value. To find better convergence strategies for the algorithms, we use the novel concept of defining an approximation of the centroid in the presence of Byzantine adversaries. We show that the standard agreement algorithms do not allow us to compute an approximation of the centroid better than $2d$ in the synchronous case. We investigate the trade-off between the quality of the approximation, the resilience of the algorithm, and the validity of the solution in order to design better approximation algorithms. For the synchronous case, we show that it is possible to achieve an optimal approximation of the centroid with up to $t<n/(d+1)$ Byzantine data points. This approach, however, does not give any guarantee on the validity of the solution. Therefore, we develop a second approach that reaches a $2\sqrt{d}$-approximation of the centroid, while satisfying the standard validity condition for agreement protocols. We are even able to restrict the validity condition to agreement inside the box of correct data points, while achieving optimal resilience of $t< n/3$. For the asynchronous case, we can adapt all three algorithms to reach the same approximation results (up to a constant factor). Our results suggest that it is reasonable to study the trade-off between validity conditions and the quality of the solution.
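
As a purely local (non-distributed) illustration of the aggregation issue, the numpy toy below compares the plain mean with a coordinate-wise trimmed mean when a few points are adversarial. It is not any of the paper's agreement protocols; all numbers and the placement of the Byzantine points are made up.

```python
import numpy as np

# With up to t Byzantine points among the inputs, the plain centroid can be dragged
# arbitrarily far, while a coordinate-wise trimmed mean stays inside the per-coordinate
# range (box) of the correct points.
rng = np.random.default_rng(1)
correct = rng.normal(size=(10, 3))               # n - t correct data points in R^3
byzantine = np.full((2, 3), 1e6)                 # t = 2 adversarial outliers
pts = np.vstack([correct, byzantine])

t = 2
trimmed = np.mean(np.sort(pts, axis=0)[t:-t], axis=0)   # drop t extremes per coordinate
print("centroid of correct points:   ", correct.mean(axis=0))
print("plain mean (ruined):          ", pts.mean(axis=0))
print("coordinate-wise trimmed mean: ", trimmed)
```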

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This allows us to relax both the optimal value and the solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
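
A minimal numpy sketch of the smoothing step, applied to the DTW recursion (where the relevant operator is a min rather than a max, smoothed here by a negative log-sum-exp), is given below. The regulariser $\gamma$ and the squared-Euclidean cost are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def soft_min(args, gamma):
    """Smoothed min: -gamma * logsumexp(-args / gamma), computed stably."""
    a = -np.asarray(args, dtype=float) / gamma
    m = a.max()
    return -gamma * (m + np.log(np.exp(a - m).sum()))

def soft_dtw(x, y, gamma=1.0):
    """DTW recursion with the hard min over predecessor cells replaced by soft_min,
    making the alignment cost a differentiable function of the inputs."""
    n, m = len(x), len(y)
    D = (x[:, None] - y[None, :]) ** 2                   # pairwise cost matrix
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            R[i, j] = D[i - 1, j - 1] + soft_min(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[n, m]

x = np.array([0.0, 1.0, 2.0, 1.0])
y = np.array([0.0, 2.0, 1.0])
print(soft_dtw(x, y, gamma=0.1))   # close to the hard DTW cost for small gamma
```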
