
We study streaming algorithms for the fundamental geometric problem of computing the cost of the Euclidean Minimum Spanning Tree (MST) on an $n$-point set $X \subset \mathbb{R}^d$. In the streaming model, the points in $X$ can be added and removed arbitrarily, and the goal is to maintain an approximation in small space. In low dimensions, $(1+\epsilon)$ approximations are possible in sublinear space [Frahling, Indyk, Sohler, SoCG '05]. However, for high dimensional spaces the best known approximation for this problem was $\tilde{O}(\log n)$, due to [Chen, Jayaram, Levi, Waingarten, STOC '22], improving on the prior $O(\log^2 n)$ bound due to [Indyk, STOC '04] and [Andoni, Indyk, Krauthgamer, SODA '08]. In this paper, we break the logarithmic barrier, and give the first constant factor sublinear space approximation to Euclidean MST. For any $\epsilon \leq 1$, our algorithm achieves an $\tilde{O}(\epsilon^{-2})$ approximation in $n^{O(\epsilon)}$ space. We complement this by proving that any single-pass algorithm which obtains a better than $1.10$-approximation must use $\Omega(\sqrt{n})$ space, demonstrating that $(1+\epsilon)$ approximations are not possible in high dimensions, and that our algorithm is tight up to a constant. Nevertheless, we demonstrate that $(1+\epsilon)$ approximations are possible in sublinear space with $O(1/\epsilon)$ passes over the stream. More generally, for any $\alpha \geq 2$, we give an $\alpha$-pass streaming algorithm which achieves a $(1+O(\frac{\log \alpha + 1}{ \alpha \epsilon}))$ approximation in $n^{O(\epsilon)} d^{O(1)}$ space. Our streaming algorithms are linear sketches, and therefore extend to the massively parallel computation (MPC) model. Thus, our results imply the first $(1+\epsilon)$-approximation to Euclidean MST in a constant number of rounds in the MPC model.
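For reference, the quantity these streaming algorithms approximate can be computed exactly offline when the whole point set fits in memory. The following minimal Python baseline (our illustration, not the paper's algorithm) computes the exact Euclidean MST cost:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree

    def euclidean_mst_cost(X):
        """Exact MST cost of an n-point set X in R^d (offline baseline).

        The streaming algorithms discussed above approximate this value
        without ever storing all of X.
        """
        D = squareform(pdist(X))  # dense n x n Euclidean distance matrix
        return minimum_spanning_tree(D).sum()

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))  # n = 100 points in d = 20 dimensions
    print(euclidean_mst_cost(X))

The exact computation stores all $\Theta(nd)$ coordinates (and here $\Theta(n^2)$ distances); the point of the paper is to approximate the same value in $n^{O(\epsilon)}$ space over a dynamic stream.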

Related content

We study axiomatic foundations for different classes of constant-function automated market makers (CFMMs). We focus particularly on separability and on different invariance properties under scaling. Our main results are an axiomatic characterization of a natural generalization of constant product market makers (CPMMs), popular in decentralized finance, on the one hand, and a characterization of the Logarithmic Scoring Rule Market Makers (LMSR), popular in prediction markets, on the other hand. The first class is characterized by the combination of independence and scale invariance, whereas the second is characterized by the combination of independence and translation invariance. The two classes are therefore distinguished by a different invariance property that is motivated by different interpretations of the numéraire in the two applications. However, both are pinned down by the same separability property. Moreover, we characterize the CPMM as an extremal point within the class of scale invariant, independent, symmetric AMMs with non-concentrated liquidity provision. Our results add to a formal analysis of mechanisms that are currently used for decentralized exchanges and connect the most popular class of DeFi AMMs to the most popular class of prediction market AMMs.
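The two characterized classes correspond to familiar closed-form mechanisms, and the invariance properties can be checked numerically. The sketch below is our illustration using the textbook constant product swap and the standard LMSR cost $C(q) = b \log \sum_i e^{q_i/b}$, not the paper's axiomatic construction:

    import math

    def cpmm_swap_out(x_reserve, y_reserve, dx):
        """Constant product market maker (x * y = k): y paid out for dx of x."""
        k = x_reserve * y_reserve
        return y_reserve - k / (x_reserve + dx)

    def lmsr_cost(q, b):
        """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
        m = max(qi / b for qi in q)  # stabilize the log-sum-exp
        return b * (m + math.log(sum(math.exp(qi / b - m) for qi in q)))

    # Scale invariance (CPMM): doubling reserves and trade size doubles the output.
    print(cpmm_swap_out(100.0, 100.0, 10.0), cpmm_swap_out(200.0, 200.0, 20.0))

    # Translation invariance (LMSR): shifting all share quantities by c shifts cost by c.
    b = 5.0
    print(lmsr_cost([10.0, 3.0], b) - lmsr_cost([7.0, 0.0], b))  # prints 3.0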

A Peskun ordering between two samplers, implying a dominance of one over the other, is known among the Markov chain Monte Carlo community for being a remarkably strong result, but it is also known for being one that is notably difficult to establish. Indeed, one has to prove that the probability of reaching a state $\mathbf{y}$ from a state $\mathbf{x}$ under one sampler is greater than or equal to the corresponding probability under the other sampler, and this must hold for all pairs $(\mathbf{x}, \mathbf{y})$ such that $\mathbf{x} \neq \mathbf{y}$. We provide in this paper a weaker version that does not require an inequality between the probabilities for all these states: essentially, the dominance holds asymptotically, as a varying parameter grows without bound, as long as the states for which the inequality holds belong to a set that concentrates the probability mass. The weak ordering turns out to be useful to compare lifted samplers for partially-ordered discrete state-spaces with their Metropolis--Hastings counterparts. An analysis in great generality yields a qualitative conclusion: they asymptotically perform better in certain situations (and we are able to identify them), but not necessarily in others (and the reasons why are made clear). A thorough study in a specific context of graphical-model simulation is also conducted.
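To make the object of comparison concrete, here is a one-dimensional toy lifted sampler (our sketch; the paper treats general partially-ordered discrete state-spaces). The state carries a direction that is deterministically reversed on rejection, which is precisely what distinguishes it from its Metropolis--Hastings counterpart:

    import math
    import random

    def lifted_mh_step(x, v, log_pi):
        """One step of a lifted sampler on the integers, direction v in {-1, +1}.

        A rejected move flips the direction instead of resampling it, suppressing
        the diffusive backtracking of plain Metropolis--Hastings.
        """
        y = x + v
        if random.random() < math.exp(min(0.0, log_pi(y) - log_pi(x))):
            return y, v    # accept: keep moving in the same direction
        return x, -v       # reject: stay put, reverse direction

    log_pi = lambda x: -0.5 * (x / 10.0) ** 2  # toy discrete Gaussian-like target
    x, v = 0, 1
    samples = []
    for _ in range(10_000):
        x, v = lifted_mh_step(x, v, log_pi)
        samples.append(x)

A weak Peskun-type ordering of the kind proved in the paper then compares transition probabilities of such a chain with those of the reversible one, but only on a set where the target mass concentrates.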

A tree search algorithm called successive cancellation ordered search (SCOS) is proposed for $\boldsymbol{G}_N$-coset codes that implements maximum-likelihood (ML) decoding with an adaptive complexity for transmission over binary-input AWGN channels. Unlike bit-flip decoders, no outer code is needed to terminate decoding; therefore, SCOS also applies to $\boldsymbol{G}_N$-coset codes modified with dynamic frozen bits. The average complexity is close to that of successive cancellation (SC) decoding at practical frame error rates (FERs) for codes with a wide range of rates and lengths up to $512$ bits, which perform within $0.25$ dB or less of the random coding union bound and outperform Reed--Muller codes under ML decoding by up to $0.5$ dB. Simulations illustrate simultaneous gains for SCOS over SC-Fano, SC stack (SCS) and SC list (SCL) decoding in FER and average complexity in various SNR regimes. SCOS is further extended by forcing it to look for candidates satisfying a threshold on the likelihood, thereby outperforming basic SCOS under complexity constraints. The modified SCOS enables strong error-detection capability without the need for an outer code. In particular, the $(128, 64)$ PAC code under modified SCOS provides gains in overall and undetected FER compared to CRC-aided polar codes under SCL/dynamic SC flip decoding at high SNR.
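Schematically, such a decoder is a best-first search over the binary tree of input-bit decisions: partial paths are kept in a priority queue ordered by a path metric, and the most promising path is extended first. The skeleton below is only a sketch of this generic ordering idea; branch_metric and is_frozen are hypothetical placeholders for the channel- and code-specific details (LLR-based metrics, frozen or dynamic frozen bits), and the actual SCOS algorithm refines how the metric is computed and bounded.

    import heapq

    def ordered_tree_search(n, branch_metric, is_frozen):
        """Best-first search over length-n binary decision paths.

        Assuming nonnegative branch metrics, the first complete path popped
        from the queue minimizes the accumulated metric (Dijkstra-style).
        """
        heap = [(0.0, ())]  # (accumulated metric, partial path u_1..u_i)
        while heap:
            cost, path = heapq.heappop(heap)
            if len(path) == n:
                return path  # first full-length path is metric-optimal
            i = len(path)
            bits = (0,) if is_frozen(i) else (0, 1)
            for b in bits:
                heapq.heappush(heap, (cost + branch_metric(path, b), path + (b,)))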

We study the numerical approximation by space-time finite element methods of a multi-physics system coupling hyperbolic elastodynamics with parabolic transport and modeling poro- and thermoelasticity. The equations are rewritten as a first-order system in time. Discretizations by continuous Galerkin methods in time and inf-sup stable pairs of finite element spaces for the spatial variables are investigated. Optimal order error estimates are proved by an analysis in weighted norms that capture the energy of the system's unknowns. A further important ingredient and challenge of the analysis is the control of the coupling terms. The techniques developed here can be generalized to other families of Galerkin space discretizations and advanced models. The error estimates are confirmed by numerical experiments, also for higher order piecewise polynomials in time and space. The latter lead to algebraic systems with complex block structure and pose a particular challenge for the design of iterative solvers. An efficient solution technique is referenced.
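As one concrete instance of the first-order rewriting (shown here for a Biot-type poroelasticity system; the notation is illustrative rather than the paper's), introducing the velocity $v = \partial_t u$ gives

\[
\partial_t u = v, \qquad \rho\, \partial_t v - \nabla \cdot \bigl(\boldsymbol{\sigma}(u) - \alpha\, p\, I\bigr) = f, \qquad c_0\, \partial_t p + \alpha\, \nabla \cdot v - \nabla \cdot (K \nabla p) = g,
\]

so the hyperbolic pair $(u, v)$ and the parabolic pressure $p$ are coupled through the terms $\alpha\, p\, I$ and $\alpha\, \nabla \cdot v$; these are exactly the kind of coupling terms that the weighted-norm energy analysis has to control.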

We present a novel framework based on semi-bounded spatial operators for analyzing and discretizing initial boundary value problems on moving and deforming domains. This development extends an existing framework for well-posed problems and energy stable discretizations from stationary domains to the general case including arbitrary mesh motion. In particular, we show that an energy estimate derived in the physical coordinate system is equivalent to a semi-bounded property with respect to a stationary reference domain. The continuous analysis leading up to this result is based on a skew-symmetric splitting of the material time derivative, and thus relies on the property of integration-by-parts. Following this, a mimetic energy stable arbitrary Lagrangian-Eulerian framework for semi-discretization is formulated, based on approximating the material time derivative in a way consistent with discrete summation-by-parts. Thanks to the semi-bounded property, a method-of-lines approach using standard explicit or implicit time integration schemes can be applied to march the system forward in time. The same type of stability arguments applies as for the corresponding stationary domain problem, without regard to additional properties such as discrete geometric conservation. As an additional bonus, we demonstrate that discrete geometric conservation, in the sense of exact free-stream preservation, can still be achieved in an automatic way with the new framework. However, we stress that this is not necessary for stability.
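In the simplest scalar transport setting (an illustration; the framework covers general systems), with domain velocity $a$, the skew-symmetric splitting of the material time derivative reads

\[
u_t + a \cdot \nabla u = u_t + \tfrac{1}{2}\bigl(a \cdot \nabla u + \nabla \cdot (a u)\bigr) - \tfrac{1}{2}(\nabla \cdot a)\, u,
\]

and integration-by-parts shows that the bracketed skew-symmetric part contributes only boundary terms to the energy rate, $\int_\Omega u \bigl(a \cdot \nabla u + \nabla \cdot (a u)\bigr)\, dx = \oint_{\partial\Omega} u^2\, a \cdot n\, ds$, which is the structure that summation-by-parts operators mimic discretely.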

The development of efficient sampling algorithms catering to non-Euclidean geometries has been a challenging endeavor, as discretization techniques which succeed in the Euclidean setting do not readily carry over to more general settings. We develop a non-Euclidean analog of the recent proximal sampler of [LST21], which naturally induces regularization by an object known as the log-Laplace transform (LLT) of a density. We prove new mathematical properties (with an algorithmic flavor) of the LLT, such as strong convexity-smoothness duality and an isoperimetric inequality, which are used to prove a mixing time bound for our proximal sampler matching [LST21] under a warm start. As our main application, we show our warm-started sampler improves the value oracle complexity of differentially private convex optimization in $\ell_p$ and Schatten-$p$ norms for $p \in [1, 2]$ to match the Euclidean setting [GLL22], while retaining state-of-the-art excess risk bounds [GLLST23]. We find our investigation of the LLT to be a promising proof-of-concept of its utility as a tool for designing samplers, and outline directions for future exploration.
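For orientation, the Euclidean proximal sampler of [LST21] that this work generalizes alternates two conditional draws with step size $\eta$,

\[
y \mid x \sim \mathcal{N}(x, \eta I), \qquad x \mid y \;\sim\; \pi(x)\, e^{-\|x - y\|^2 / (2\eta)} \big/ Z(y),
\]

and the non-Euclidean analog replaces the Gaussian coupling by one induced by the log-Laplace transform of a density $\mu$, which we would write as $L_\mu(y) = \log \int e^{\langle y, x \rangle}\, \mathrm{d}\mu(x)$ (a standard rendering; the paper's normalization may differ). The strong convexity-smoothness duality of the LLT is what lets the mixing analysis of [LST21] carry over.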

We propose a new learning framework that captures the tiered structure of many real-world user-interaction applications, where the users can be divided into two groups based on their differing tolerance for exploration risk and should be treated separately. In this setting, we simultaneously maintain two policies $\pi^{\text{O}}$ and $\pi^{\text{E}}$: $\pi^{\text{O}}$ ("O" for "online") interacts with more risk-tolerant users from the first tier and minimizes regret by balancing exploration and exploitation as usual, while $\pi^{\text{E}}$ ("E" for "exploit") exclusively focuses on exploitation for risk-averse users from the second tier, utilizing the data collected so far. An important question is whether such a separation yields advantages over the standard online setting (i.e., $\pi^{\text{E}}=\pi^{\text{O}}$) for the risk-averse users. We consider the gap-independent and gap-dependent settings separately. For the former, we prove that the separation is indeed not beneficial from a minimax perspective. For the latter, we show that if Pessimistic Value Iteration is chosen as the exploitation algorithm to produce $\pi^{\text{E}}$, we can achieve a constant regret for risk-averse users independent of the number of episodes $K$, which is in sharp contrast to the $\Omega(\log K)$ regret for any online RL algorithm in the same setting, while the regret of $\pi^{\text{O}}$ (almost) maintains its online regret optimality and does not need to compromise for the success of $\pi^{\text{E}}$.
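A minimal tabular rendition of the exploitation side (our sketch of Pessimistic Value Iteration, not the paper's exact algorithm; $\beta$ and the bonus shape are illustrative choices):

    import numpy as np

    def pessimistic_value_iteration(P_hat, r_hat, counts, H, beta):
        """Pessimistic value iteration on an estimated tabular MDP (a sketch).

        P_hat: (S, A, S) estimated transitions; r_hat, counts: (S, A).
        Q-values are lowered by a count-based uncertainty bonus, so the greedy
        exploit policy pi_E avoids under-explored actions instead of trying them.
        """
        S, A = r_hat.shape
        bonus = beta / np.sqrt(np.maximum(counts, 1))
        V = np.zeros(S)
        for _ in range(H):  # simplified stationary backward pass
            Q = np.clip(r_hat - bonus + P_hat @ V, 0.0, None)  # shape (S, A)
            V = Q.max(axis=1)
        return Q.argmax(axis=1)  # deterministic exploit policy pi_E

The data behind P_hat, r_hat and counts is what $\pi^{\text{O}}$ collects from the risk-tolerant tier, which is why $\pi^{\text{E}}$ can exploit without doing any exploration of its own.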

This paper brings mathematical tools to bear on the study of package dependencies in software systems. We introduce structures known as Dependency Structures with Choice (DSC) that provide a mathematical account of such dependencies, inspired by the definition of general event structures in the study of concurrency. We equip DSCs with a particular notion of morphism and show that the category of DSCs is isomorphic to the category of antimatroids. We study the exactness properties of these equivalent categories, and show that they are finitely complete and have finite coproducts, but not all coequalizers. Further, we construct a functor from a category of DSCs equipped with a certain subclass of morphisms to the opposite of the category of finite distributive lattices, making use of a simple finite characterization of the Bruns-Lakser completion. Finally, we give a formal account of package versions and a mathematical account of package version-bound policies.
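The antimatroid side of the correspondence is easy to experiment with: a finite family of feasible sets is an antimatroid precisely when it is accessible and closed under union. The sketch below checks the two axioms directly (the dependency example and its encoding are our own illustration):

    from itertools import combinations

    def is_antimatroid(feasible):
        """Check the antimatroid axioms for a family of frozensets incl. frozenset()."""
        F = set(feasible)
        # Accessibility: every nonempty feasible set can shed some element feasibly.
        accessible = all(any(S - {x} in F for x in S) for S in F if S)
        # Closure under union.
        union_closed = all(S | T in F for S, T in combinations(F, 2))
        return accessible and union_closed

    # Dependencies "b needs a, c needs a" induce these feasible install states:
    F = [frozenset(s) for s in [(), ("a",), ("a", "b"), ("a", "c"), ("a", "b", "c")]]
    print(is_antimatroid(F))  # True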

We study exact algorithms for Euclidean TSP in $\mathbb{R}^d$. In the early 1990s, algorithms with $n^{O(\sqrt{n})}$ running time were presented for the planar case, and some years later an algorithm with $n^{O(n^{1-1/d})}$ running time was presented for any $d\geq 2$. Despite significant interest in subexponential exact algorithms over the past decade, there has been no progress on Euclidean TSP, except for a lower bound stating that the problem admits no $2^{O(n^{1-1/d-\epsilon})}$ algorithm unless ETH fails. Up to constant factors in the exponent, we settle the complexity of Euclidean TSP by giving a $2^{O(n^{1-1/d})}$ algorithm and by showing that a $2^{o(n^{1-1/d})}$ algorithm does not exist unless ETH fails.
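For contrast, the classical dimension-oblivious baseline is the Held--Karp dynamic program, exact in $O(2^n n^2)$ time; a compact implementation follows (a reference baseline only, not the paper's $2^{O(n^{1-1/d})}$ algorithm):

    import math
    from itertools import combinations

    def held_karp(points):
        """Exact TSP tour length via the Held--Karp DP over vertex subsets."""
        n = len(points)
        d = lambda i, j: math.dist(points[i], points[j])
        # C[(S, j)]: cheapest path over vertex bitmask S, from vertex 0 to j.
        C = {(1 | (1 << j), j): d(0, j) for j in range(1, n)}
        for size in range(3, n + 1):
            for subset in combinations(range(1, n), size - 1):
                S = 1
                for j in subset:
                    S |= 1 << j
                for j in subset:
                    C[(S, j)] = min(C[(S ^ (1 << j), k)] + d(k, j)
                                    for k in subset if k != j)
        full = (1 << n) - 1
        return min(C[(full, j)] + d(j, 0) for j in range(1, n))

    print(held_karp([(0, 0), (0, 1), (2, 1), (2, 0)]))  # 6.0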

Classic algorithms and machine learning systems like neural networks are both abundant in everyday life. While classic computer science algorithms are suitable for precise execution of exactly defined tasks such as finding the shortest path in a large graph, neural networks allow learning from data to predict the most likely answer in more complex tasks such as image classification, which cannot be reduced to an exact algorithm. To get the best of both worlds, this thesis explores combining both concepts, leading to more robust, better-performing, more interpretable, more computationally efficient, and more data-efficient architectures. The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm. When integrating an algorithm into a neural architecture, it is important that the algorithm is differentiable such that the architecture can be trained end-to-end and gradients can be propagated back through the algorithm in a meaningful way. To make algorithms differentiable, this thesis proposes a general method for continuously relaxing algorithms by perturbing variables and approximating the expectation value in closed form, i.e., without sampling. In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable renderers, and differentiable logic gate networks. Finally, this thesis presents alternative training strategies for learning with algorithms.
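A minimal instance of the perturb-and-integrate relaxation principle (our illustrative example with a Gaussian perturbation; the thesis develops the idea for full algorithms such as sorting networks): perturbing the input of a hard comparison and taking the expectation in closed form yields a smooth, trainable surrogate.

    import math

    def heaviside(x):
        """Hard step: gradient is zero almost everywhere, useless for training."""
        return 1.0 if x > 0 else 0.0

    def smoothed_heaviside(x, sigma=1.0):
        """Closed-form expectation of the step under Gaussian input noise:

            E_{eps ~ N(0, sigma^2)}[ H(x + eps) ] = Phi(x / sigma),

        the Gaussian CDF -- smooth in x, so gradients flow through the comparison.
        No sampling is needed: the expectation is evaluated in closed form.
        """
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

    print(heaviside(0.3), smoothed_heaviside(0.3, sigma=0.5))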
