
We prove that a polynomial fraction of the set of $k$-component forests in the $m \times n$ grid graph have equal numbers of vertices in each component, for any constant $k$. This resolves a conjecture of Charikar, Liu, Liu, and Vuong, and establishes the first provably polynomial-time algorithm for (exactly or approximately) sampling balanced grid graph partitions according to the spanning tree distribution, which weights each $k$-partition according to the product, across its $k$ pieces, of the number of spanning trees of each piece. Our result follows from a careful analysis of the probability that a uniformly random spanning tree of the grid can be cut into balanced pieces. Beyond grids, we show that for a broad family of lattice-like graphs, we achieve balance to within any multiplicative $(1 \pm \varepsilon)$ factor with constant probability, and up to an additive constant with polynomial probability. More generally, we show that, with constant probability, components derived from uniform spanning trees can approximate any given partition of a planar region specified by Jordan curves. These results imply polynomial-time algorithms for sampling approximately balanced tree-weighted partitions for lattice-like graphs. Our results have applications to understanding political districting, where there is an underlying graph of indivisible geographic units that must be partitioned into $k$ population-balanced connected subgraphs. In this setting, tree-weighted partitions have interesting geometric properties, and this has stimulated significant effort to develop methods to sample them.
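
For intuition, here is a minimal sketch (using networkx, which is an assumption on our part and not part of the paper) of the basic experiment underlying the analysis: sample a uniformly random spanning tree of the grid and check whether deleting a single edge splits it into two equal-size components, i.e., the $k = 2$ case.

```python
# Minimal sketch, for intuition only: estimate the probability that a
# uniformly random spanning tree of the m x n grid can be cut into two
# equal-size components by removing one edge (the k = 2 case).
# Assumes networkx >= 2.6 for random_spanning_tree; not the paper's code.
import networkx as nx

def can_be_split_evenly(tree, n_vertices):
    """True if deleting some single edge of `tree` leaves two components
    with exactly n_vertices / 2 vertices each."""
    for u, v in list(tree.edges()):
        tree.remove_edge(u, v)
        side = len(nx.node_connected_component(tree, u))
        tree.add_edge(u, v)
        if 2 * side == n_vertices:
            return True
    return False

def estimate_balance_probability(m, n, trials=200):
    grid = nx.grid_2d_graph(m, n)
    hits = 0
    for _ in range(trials):
        tree = nx.random_spanning_tree(grid)  # uniform over spanning trees
        if can_be_split_evenly(tree, m * n):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    print(estimate_balance_probability(6, 6))
```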

Related content

Prior interpretability research studying narrow distributions has preliminarily identified self-repair, a phenomenon in which, if components in large language models are ablated, later components change their behavior to compensate. Our work builds on this literature, demonstrating that self-repair occurs across a variety of model families and sizes when ablating individual attention heads on the full training distribution. We further show that on the full training distribution self-repair is imperfect, as the original direct effect of the head is not fully restored, and noisy, since the degree of self-repair varies significantly across different prompts (sometimes overcorrecting beyond the original effect). We highlight two mechanisms that contribute to self-repair: changes in the final LayerNorm scaling factor (which can repair up to 30% of the direct effect) and sparse sets of neurons implementing Anti-Erasure. We additionally discuss the implications of these results for interpretability practitioners and close with a more speculative discussion of the mystery of why self-repair occurs in these models at all, highlighting evidence for the Iterative Inference hypothesis in language models, a framework that predicts self-repair.
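
As a hedged sketch of the kind of measurement involved (TransformerLens is our assumption here; the abstract does not name any tooling), one can zero-ablate a single attention head, compare the resulting drop in the correct-token logit to the head's direct effect computed with the final LayerNorm scale frozen at its clean-run value, and read any shortfall between the two as downstream compensation.

```python
# Hedged sketch (TransformerLens and the chosen head are assumptions):
# compare a head's direct effect on the correct-token logit with the actual
# logit change under zero-ablation; the gap is a crude self-repair signal.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")   # any small model would do
layer, head = 9, 6                                  # arbitrary head, for illustration
prompt, answer = "The Eiffel Tower is in", " Paris"

tokens = model.to_tokens(prompt)
answer_id = model.to_single_token(answer)
clean_logits, cache = model.run_with_cache(tokens)

# Direct effect: head output projected through W_O and the unembedding,
# with the final LayerNorm scale frozen to its clean-run value.
z_clean = cache["z", layer][0, -1, head]            # [d_head]
head_out = z_clean @ model.W_O[layer, head]         # [d_model]
ln_scale = cache["ln_final.hook_scale"][0, -1]      # frozen LN scale
direct_effect = (head_out / ln_scale) @ model.W_U[:, answer_id]

# Actual effect of zero-ablating the head.
def zero_head(z, hook):
    z[:, :, head, :] = 0.0
    return z

ablated_logits = model.run_with_hooks(
    tokens, fwd_hooks=[(f"blocks.{layer}.attn.hook_z", zero_head)]
)
logit_drop = clean_logits[0, -1, answer_id] - ablated_logits[0, -1, answer_id]

# With no downstream compensation, logit_drop would be close to direct_effect;
# self-repair shows up as logit_drop being noticeably smaller.
print(float(direct_effect), float(logit_drop))
```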

Due to M\"{u}ller's theorem, the Kolmogorov complexity of a string was shown to be equal to its quantum Kolmogorov complexity. Thus there are no benefits to using quantum mechanics to compress classical information. The quantitative amount of information in classical sources is invariant to the physical model used. These consequences make this theorem arguably the most important result in the intersection of algorithmic information theory and physics. The original proof is quite extensive. This paper contains two simple proofs of this theorem.

We study dynamic $(1-\epsilon)$-approximate rounding of fractional matchings -- a key ingredient in numerous breakthroughs in the dynamic graph algorithms literature. Our first contribution is a surprisingly simple deterministic rounding algorithm in bipartite graphs with amortized update time $O(\epsilon^{-1} \log^2 (\epsilon^{-1} \cdot n))$, matching an (unconditional) recourse lower bound of $\Omega(\epsilon^{-1})$ up to logarithmic factors. Moreover, this algorithm's update time improves provided the minimum (non-zero) weight in the fractional matching is lower bounded throughout. Combining this algorithm with novel dynamic \emph{partial rounding} algorithms to increase this minimum weight, we obtain several algorithms that improve this dependence on $n$. For example, we give a high-probability randomized algorithm with $\tilde{O}(\epsilon^{-1}\cdot (\log\log n)^2)$-update time against adaptive adversaries. (We use Soft-Oh notation, $\tilde{O}$, to suppress polylogarithmic factors in the argument, i.e., $\tilde{O}(f)=O(f\cdot \mathrm{poly}(\log f))$.) Using our rounding algorithms, we also round known $(1-\epsilon)$-decremental fractional bipartite matching algorithms with no asymptotic overhead, thus improving on state-of-the-art algorithms for the decremental bipartite matching problem. Further, we provide extensions of our results to general graphs and to maintaining almost-maximal matchings.
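
For concreteness, the guarantee being maintained dynamically is (in the standard sense we assume here): given a fractional matching $x$, i.e., $x_e \ge 0$ with $\sum_{e \ni v} x_e \le 1$ for every vertex $v$, the rounding algorithm maintains an integral matching $M$ with $|M| \ge (1-\epsilon)\sum_e x_e$ while $x$ undergoes updates.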

Martin-L\"{o}f type theory $\mathbf{MLTT}$ was extended by Setzer with the so-called Mahlo universe types. This extension is called $\mathbf{MLM}$ and was introduced to develop a variant of $\mathbf{MLTT}$ equipped with an analogue of a large cardinal. Another instance of constructive systems extended with an analogue of a large set was formulated in the context of Aczel's constructive set theory: $\mathbf{CZF}$. Rathjen, Griffor and Palmgren extended $\mathbf{CZF}$ with inaccessible sets of all transfinite orders. It is unknown whether this extension of $\mathbf{CZF}$ is directly interpretable by Mahlo universes. In particular, how to construct the transfinite hierarchy of inaccessible sets using the reflection property of the Mahlo universe in $\mathbf{MLM}$ is not well understood. We extend $\mathbf{MLM}$ further by adding the accessibility predicate to it and show that the above extension of $\mathbf{CZF}$ is directly interpretable in $\mathbf{MLM}$ using the accessibility predicate.

The crossing number of a graph $G$ is the minimum number of crossings in a drawing of $G$ in the plane. A rectilinear drawing of a graph $G$ represents vertices of $G$ by a set of points in the plane and represents each edge of $G$ by a straight-line segment connecting its two endpoints. The rectilinear crossing number of $G$ is the minimum number of crossings in a rectilinear drawing of $G$. By the crossing lemma, the crossing number of an $n$-vertex graph $G$ can be $O(n)$ only if $|E(G)|\in O(n)$. Graphs of bounded genus and bounded degree (B\"{o}r\"{o}czky, Pach and T\'{o}th, 2006) and in fact all bounded degree proper minor-closed families (Wood and Telle, 2007) have been shown to admit linear crossing number, with a tight $\Theta(\Delta n)$ bound shown by Dujmovi\'c, Kawarabayashi, Mohar and Wood, 2008. Much less is known about the rectilinear crossing number, which is not bounded by any function of the crossing number. We prove that graphs that exclude a single-crossing graph as a minor have rectilinear crossing number $O(\Delta n)$. This dependence on $n$ and $\Delta$ is best possible. A single-crossing graph is a graph whose crossing number is at most one. Thus the result applies to $K_5$-minor-free graphs, for example. It also applies to bounded treewidth graphs, since each family of bounded treewidth graphs excludes some fixed planar graph as a minor. Prior to our work, the only bounded degree minor-closed families known to have linear rectilinear crossing number were bounded degree graphs of bounded treewidth (Wood and Telle, 2007) and bounded degree $K_{3,3}$-minor-free graphs (Dujmovi\'c, Kawarabayashi, Mohar and Wood, 2008). In the case of bounded treewidth graphs, our $O(\Delta n)$ result is again tight and improves on the previous best known bound of $O(\Delta^2 n)$ by Wood and Telle, 2007 (obtained for convex geometric drawings).

We present a parallel algorithm for the $(1-\epsilon)$-approximate maximum flow problem in capacitated, undirected graphs with $n$ vertices and $m$ edges, achieving $O(\epsilon^{-3}\text{polylog} n)$ depth and $O(m \epsilon^{-3} \text{polylog} n)$ work in the PRAM model. Although near-linear time sequential algorithms for this problem have been known for almost a decade, no parallel algorithms that simultaneously achieved polylogarithmic depth and near-linear work were known. At the heart of our result is a polylogarithmic depth, near-linear work recursive algorithm for computing congestion approximators. Our algorithm involves a recursive step to obtain a low-quality congestion approximator followed by a "boosting" step to improve its quality which prevents a multiplicative blow-up in error. Similar to Peng [SODA'16], our boosting step builds upon the hierarchical decomposition scheme of R\"acke, Shah, and T\"aubig [SODA'14]. A direct implementation of this approach, however, leads only to an algorithm with $n^{o(1)}$ depth and $m^{1+o(1)}$ work. To get around this, we introduce a new hierarchical decomposition scheme, in which we only need to solve maximum flows on subgraphs obtained by contracting vertices, as opposed to vertex-induced subgraphs used in R\"acke, Shah, and T\"aubig [SODA'14]. In particular, we are able to directly extract congestion approximators for the subgraphs from a congestion approximator for the entire graph, thereby avoiding additional recursion on those subgraphs. Along the way, we also develop a parallel flow-decomposition algorithm that is crucial to achieving polylogarithmic depth and may be of independent interest.
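
For context, by a congestion approximator we mean, in the standard Sherman-style sense assumed here, a matrix $R$ such that every demand vector $b$ summing to zero satisfies $\|Rb\|_\infty \le \mathrm{opt}(b) \le \alpha\,\|Rb\|_\infty$, where $\mathrm{opt}(b)$ is the minimum congestion over routings of $b$ in the graph; the quality $\alpha$ is what the boosting step improves.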

Although reachable sets of large-scale linear systems can be computed quickly, current methods are not yet widely applied by practitioners. The main reason for this is probably that current approaches are not push-button capable and still require manually setting crucial parameters, such as time step sizes and the accuracy of the set representation used -- choosing these settings requires expert knowledge. We present a generic framework to automatically find near-optimal parameters for reachability analysis of linear systems given a user-defined accuracy. To limit the computational overhead as much as possible, our methods tune all relevant parameters during runtime. We evaluate our approach on benchmarks from the ARCH competition as well as on random examples. Our results show that our new framework verifies the selected benchmarks faster than manually tuned parameters and is an order of magnitude faster than genetic algorithms.
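
Purely as an illustration of tuning a single parameter at runtime against a user-defined accuracy (this is not the paper's framework; the error bound below is the standard first-order bloating term from zonotope-based reachability):

```python
# Illustrative only: pick a time step for linear reachability x' = A x by
# halving dt until a standard first-order over-approximation error bound
# falls below the user-defined accuracy. Not the framework from the paper.
import numpy as np

def tune_time_step(A, x0_radius, accuracy, dt_init):
    """Return the largest dt (obtained by halving dt_init) whose one-step
    bloating term (exp(||A|| dt) - 1 - ||A|| dt) * sup_{x in X0} ||x||
    is below `accuracy`."""
    norm_A = np.linalg.norm(A, ord=np.inf)
    dt = dt_init
    while (np.exp(norm_A * dt) - 1.0 - norm_A * dt) * x0_radius > accuracy:
        dt *= 0.5
    return dt

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
print(tune_time_step(A, x0_radius=1.0, accuracy=1e-3, dt_init=0.1))
```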

The distance of a graph from being triangle-free is a fundamental graph parameter, counting the number of edges that need to be removed from a graph in order for it to become triangle-free. Its corresponding computational problem is the classic minimum triangle edge transversal problem, and its normalized value is the baseline for triangle-freeness testing algorithms. While triangle-freeness testing has been successfully studied in the distributed setting, to the best of our knowledge the distributed complexity of computing the distance itself is unknown, despite the problem being well studied in the centralized setting. This work addresses the computation of the minimum triangle edge transversal in distributed networks. We show with a simple warm-up construction that this is a global task, requiring $\Omega(D)$ rounds even in the $\mathsf{LOCAL}$ model with unbounded messages, where $D$ is the diameter of the network. However, we show that approximating this value can be done much faster. A $(1+\epsilon)$-approximation can be obtained in $\text{poly}\log{n}$ rounds, where $n$ is the size of the network graph. Moreover, faster approximations can be obtained, at the cost of increasing the approximation factor to roughly 3, by a reduction to the minimum hypergraph vertex cover problem. With a time overhead of the maximum degree $\Delta$, this can also be applied to the $\mathsf{CONGEST}$ model, in which messages are bounded. Our key technical contribution is proving that computing an exact solution is ``as hard as it gets'' in $\mathsf{CONGEST}$, requiring a near-quadratic number of rounds. Because this is an edge selection problem, as opposed to the node selection problems addressed by previous lower bounds, major challenges arise in constructing the lower bound, requiring us to develop novel ingredients.
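
To make the reduction concrete, here is a small centralized sketch (not the distributed algorithm): each triangle becomes a hyperedge over its three edges, so a vertex cover of this 3-uniform hypergraph is exactly a triangle edge transversal, and the greedy rule of taking all three edges of any still-uncovered triangle yields a 3-approximation.

```python
# Centralized sketch of the reduction: triangles become hyperedges over their
# three graph edges; greedily covering any uncovered triangle with all three
# of its edges gives a 3-approximate triangle edge transversal.
from itertools import combinations

def greedy_triangle_edge_transversal(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Each triangle is a hyperedge consisting of its three (sorted) edges.
    triangles = [
        (tuple(sorted((a, b))), tuple(sorted((b, c))), tuple(sorted((a, c))))
        for a, b, c in combinations(sorted(vertices), 3)
        if b in adj[a] and c in adj[a] and c in adj[b]
    ]
    cover = set()
    for tri in triangles:
        if not any(e in cover for e in tri):   # triangle still uncovered
            cover.update(tri)                  # take all three of its edges
    return cover

edges = [(1, 2), (2, 3), (1, 3), (2, 4), (3, 4)]
print(greedy_triangle_edge_transversal([1, 2, 3, 4], edges))
```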

The Riemann problem for first-order hyperbolic systems of partial differential equations is of fundamental importance for both theoretical and numerical purposes. Many approximate solvers have been developed for such systems; exact solution algorithms have received less attention because computation of the exact solution typically requires iterative solution of algebraic equations. Iterative algorithms may be less computationally efficient or might fail to converge in some cases. We investigate the achievable efficiency of robust iterative Riemann solvers for relatively simple systems, focusing on the shallow water and Euler equations. We consider a range of initial guesses and iterative schemes applied to an ensemble of test Riemann problems. For the shallow water equations, we find that Newton's method with a simple modification converges quickly and reliably. For the Euler equations we obtain similar results; however, when the required precision is high, a combination of Ostrowski and Newton iterations converges faster. These solvers are slower than standard approximate solvers like Roe and HLLE, but come within a factor of two in speed. We also provide a preliminary comparison of the accuracy of a finite volume discretization using an exact solver versus standard approximate solvers.
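
To illustrate the shallow water case, here is a minimal sketch of the standard depth-function Newton iteration (the usual textbook formulation, e.g. following Toro; the positivity guard below is our own simplification and not necessarily the modification referred to above):

```python
# Minimal sketch of Newton's method for the shallow water Riemann problem
# (textbook depth-function formulation; the positivity guard is a
# simplification, not necessarily the paper's modification). g is gravity.
import numpy as np

g = 9.81

def f_and_df(h, hk):
    """Depth-function term (and derivative) connecting the star state to the
    left or right state with depth hk: rarefaction if h <= hk, else shock."""
    if h <= hk:                        # rarefaction branch
        return 2.0 * (np.sqrt(g * h) - np.sqrt(g * hk)), np.sqrt(g / h)
    a = 0.5 * g * (h + hk) / (h * hk)  # shock branch
    sa = np.sqrt(a)
    return (h - hk) * sa, sa - (h - hk) * g / (4.0 * h * h * sa)

def star_depth(hl, ul, hr, ur, tol=1e-12, max_iter=50):
    """Solve phi(h) = f(h; hl) + f(h; hr) + (ur - ul) = 0 for the star depth."""
    cl, cr = np.sqrt(g * hl), np.sqrt(g * hr)
    h = (0.5 * (cl + cr) + 0.25 * (ul - ur)) ** 2 / g   # two-rarefaction guess
    for _ in range(max_iter):
        fl, dfl = f_and_df(h, hl)
        fr, dfr = f_and_df(h, hr)
        h_new = max(h - (fl + fr + (ur - ul)) / (dfl + dfr), 0.5 * h)  # stay positive
        if abs(h_new - h) < tol * h:
            return h_new
        h = h_new
    return h

# Dam-break example: deeper water on the left, shallower on the right.
print(star_depth(hl=2.0, ul=0.0, hr=1.0, ur=0.0))
```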

When constructing parametric models to predict the cost of future claims, several important details have to be taken into account: (i) models should be designed to accommodate deductibles, policy limits, and coinsurance factors, (ii) parameters should be estimated robustly to control the influence of outliers on model predictions, and (iii) all point predictions should be augmented with estimates of their uncertainty. The methodology proposed in this paper provides a framework for addressing all these aspects simultaneously. Using payment-per-payment and payment-per-loss variables, we construct the adaptive version of the method of winsorized moments (MWM) estimators for the parameters of the truncated and censored lognormal distribution. Further, the asymptotic distributional properties of this approach are derived and compared with those of the maximum likelihood estimator (MLE) and the method of trimmed moments (MTM) estimators, the latter being a primary competitor to MWM. Moreover, the theoretical results are validated with extensive simulation studies and risk measure sensitivity analysis. Finally, the practical performance of these methods is illustrated using the well-studied data set of 1500 U.S. indemnity losses. With this real data set, it is also demonstrated that composite models do not provide much improvement in the quality of predictive models compared to a stand-alone fitted distribution, especially for truncated and censored sample data.
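
As a small illustration of the winsorized-moments building block (illustrative only; this is not the adaptive MWM for truncated and censored data developed in the paper): winsorize the log-losses at chosen proportions and compute the resulting first two moments, which MWM-type estimators then match to their theoretical counterparts under the lognormal model.

```python
# Illustrative building block only (not the paper's adaptive MWM for truncated
# and censored data): sample winsorized moments of log-losses.
import numpy as np
from scipy.stats import mstats

def winsorized_log_moments(losses, lower=0.05, upper=0.05):
    """Winsorize the log-losses at the given proportions and return the
    winsorized mean and standard deviation; MWM matches such sample
    quantities to their theoretical counterparts under a lognormal model."""
    logs = np.log(np.asarray(losses, dtype=float))
    w = np.asarray(mstats.winsorize(logs, limits=(lower, upper)))
    return float(np.mean(w)), float(np.std(w, ddof=1))

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=9.0, sigma=1.5, size=1000)
losses[:5] *= 50.0                      # inject a few gross outliers
print(winsorized_log_moments(losses))
```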
