
We prove that there is a randomized polynomial-time algorithm that, given an $n$-vertex edge-weighted graph $G$ excluding a fixed minor $Q$ and an accuracy parameter $\varepsilon>0$, constructs an edge-weighted graph~$H$ and an embedding $\eta\colon V(G)\to V(H)$ with the following properties:
* For any constant-size $Q$, the treewidth of $H$ is polynomial in $\varepsilon^{-1}$, $\log n$, and the logarithm of the stretch of the distance metric in $G$.
* The expected multiplicative distortion is $(1+\varepsilon)$: for every pair of vertices $u,v$ of $G$, we have $\mathrm{dist}_H(\eta(u),\eta(v))\geq \mathrm{dist}_G(u,v)$ always, and $\mathrm{Exp}[\mathrm{dist}_H(\eta(u),\eta(v))]\leq (1+\varepsilon)\,\mathrm{dist}_G(u,v)$.
Our embedding is the first to achieve polylogarithmic treewidth of the host graph and comes close to the lower bound of Carroll and Goel, who showed that any embedding of a planar graph with $\mathcal{O}(1)$ expected distortion requires the host graph to have treewidth $\Omega(\log n)$. It also provides a unified framework for obtaining randomized quasi-polynomial-time approximation schemes for a variety of problems, including network design, clustering, and routing problems, in minor-free metrics where the optimization goal is the sum of selected distances. Applications include the capacitated vehicle routing problem and capacitated clustering problems.

Related content

We present a novel stabilized isogeometric formulation for the Stokes problem, where the geometry of interest is obtained via overlapping NURBS (non-uniform rational B-spline) patches, i.e., one patch on top of another in an arbitrary but predefined hierarchical order. All the visible regions constitute the computational domain, whereas independent patches are coupled through visible interfaces using Nitsche's formulation. Such a geometric representation inevitably involves trimming, which may yield trimmed elements of extremely small measures (referred to as bad elements) and thus lead to instability issues. Motivated by the minimal stabilization method that rigorously guarantees stability for trimmed geometries [1], in this work we generalize it to the Stokes problem on overlapping patches. Central to our method is the distinct treatment of the pressure and velocity spaces: stabilization for velocity is carried out for the flux terms on interfaces, whereas pressure is stabilized in all the bad elements. We provide a priori error estimates with a comprehensive theoretical study. Through a suite of numerical tests, we first show that optimal convergence rates are achieved, consistent with our theoretical findings. Second, we show that the proposed stabilization method improves the accuracy of the pressure by several orders of magnitude compared to the results without stabilization. Finally, we also demonstrate the flexibility and efficiency of the proposed method in capturing local features in the solution field.
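For orientation, a generic symmetric Nitsche coupling term for the Stokes problem across an interface $\Gamma$ takes the following textbook-style form; this is for illustration only, and the exact weights, signs, and stabilization terms of the formulation above may differ. Here $[\cdot]$ denotes the jump and $\{\cdot\}$ a (weighted) average across $\Gamma$, $\nu$ the viscosity, $h$ a local mesh size, and $\gamma>0$ a penalty parameter:
$$
a_\Gamma\big((\mathbf{u},p);(\mathbf{v},q)\big)
= -\int_\Gamma \{\nu\,\partial_{\mathbf n}\mathbf{u} - p\,\mathbf n\}\cdot[\mathbf{v}]\,\mathrm{d}s
  \;-\;\int_\Gamma \{\nu\,\partial_{\mathbf n}\mathbf{v} - q\,\mathbf n\}\cdot[\mathbf{u}]\,\mathrm{d}s
  \;+\;\frac{\gamma\,\nu}{h}\int_\Gamma [\mathbf{u}]\cdot[\mathbf{v}]\,\mathrm{d}s .
$$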

Understanding how convolutional neural networks (CNNs) can efficiently learn high-dimensional functions remains a fundamental challenge. A popular belief is that these models harness the local and hierarchical structure of natural data such as images. Yet, we lack a quantitative understanding of how such structure affects performance, e.g., the rate of decay of the generalisation error with the number of training samples. In this paper, we study infinitely-wide deep CNNs in the kernel regime. First, we show that the spectrum of the corresponding kernel inherits the hierarchical structure of the network, and we characterise its asymptotics. Then, we use this result together with generalisation bounds to prove that deep CNNs adapt to the spatial scale of the target function. In particular, we find that if the target function depends on low-dimensional subsets of adjacent input variables, then the decay of the error is controlled by the effective dimensionality of these subsets. Conversely, if the target function depends on the full set of input variables, then the error decay is controlled by the input dimension. We conclude by computing the generalisation error of a deep CNN trained on the output of another deep CNN with randomly-initialised parameters. Interestingly, we find that, despite their hierarchical structure, the functions generated by infinitely-wide deep CNNs are too rich to be efficiently learnable in high dimension.
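Schematically (our paraphrase of the scaling described above, not the paper's exact exponents), the learning curves take a power-law form
$$
\epsilon(P) \;\sim\; P^{-\beta},
$$
where $P$ is the number of training samples and the exponent $\beta$ is governed by the effective dimensionality $d_{\mathrm{eff}}$ of the relevant subsets of adjacent input variables when the target is local, degrading to a dependence on the full input dimension $d$ when the target depends on all input variables.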

We revisit the recent breakthrough result of Gkatzelis et al. on (single-winner) metric voting, which showed that the optimal distortion of 3 can be achieved by a mechanism called Plurality Matching. The rule picks an arbitrary candidate for whom a certain candidate-specific bipartite graph contains a perfect matching, and thus it is not neutral (i.e., symmetric with respect to candidates). Subsequently, a much simpler rule called Plurality Veto was shown to achieve distortion 3 as well. This rule only constructs such a matching implicitly, but the winner depends on the order in which voters are processed, and thus it is not anonymous (i.e., symmetric with respect to voters). We provide an intuitive interpretation of this matching by generalizing the classical notion of the (proportional) veto core in social choice theory. This interpretation opens up a number of immediate consequences. Previous methods for electing a candidate from the veto core can be interpreted simply as matching algorithms. Different election methods realize different matchings, in turn leading to different sets of candidates as winners. For a broad generalization of the veto core, we show that the generalized veto core is equal to the set of candidates who can emerge as winners under a natural class of matching algorithms reminiscent of Serial Dictatorship. Extending these matching algorithms into continuous time, we obtain a highly practical voting rule with optimal distortion 3, which is also intuitive and easy to explain: Each candidate starts off with public support equal to his plurality score. From time 0 to 1, every voter continuously brings down, at rate 1, the support of her bottom choice among not-yet-eliminated candidates. A candidate is eliminated if he is opposed by a voter after his support reaches 0. On top of being anonymous and neutral, this rule satisfies many other axioms desirable in practice.
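The continuous-time rule above admits a simple event-driven simulation. The following is a minimal sketch under our own modelling assumptions (every voter submits a complete ranking over a common candidate set; the function name and tie handling are ours, not the authors'):

```python
# Event-driven sketch of the continuous-time veto rule described above.
# prefs[v] is voter v's ranking, best to worst, over a common candidate set.

def continuous_veto_winner(prefs):
    candidates = set(prefs[0])
    # Initial support = plurality score.
    support = {c: 0.0 for c in candidates}
    for ranking in prefs:
        support[ranking[0]] += 1
    active = set(candidates)   # not-yet-eliminated candidates
    t = 0.0
    while len(active) > 1 and t < 1.0:
        # Each voter opposes her bottom choice among active candidates.
        rate = {c: 0 for c in active}
        for ranking in prefs:
            bottom = next(c for c in reversed(ranking) if c in active)
            rate[bottom] += 1
        # Advance time to the first moment some active candidate's support hits 0.
        dt = min((support[c] / rate[c] for c in active if rate[c] > 0),
                 default=1.0 - t)
        dt = min(dt, 1.0 - t)
        for c in active:
            support[c] -= rate[c] * dt
        t += dt
        # Eliminate candidates that are still opposed once their support is exhausted.
        exhausted = {c for c in active if rate[c] > 0 and support[c] <= 1e-12}
        if exhausted == active:   # simultaneous exhaustion: break the tie arbitrarily
            break
        active -= exhausted
    return max(active, key=lambda c: support[c])
```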

A long line of work in the past two decades or so established close connections between several different pseudorandom objects and applications. These connections essentially show that an asymptotically optimal construction of one central object will lead to asymptotically optimal solutions to all the others. However, despite considerable effort, previous works can get close but still lack one final step to achieve truly asymptotically optimal constructions. In this paper we provide the last missing link, thus simultaneously achieving explicit, asymptotically optimal constructions and solutions for various well studied extractors and applications, that have been the subjects of long lines of research. Our results include:
* Asymptotically optimal seeded non-malleable extractors, which in turn give two-source extractors for asymptotically optimal min-entropy of $O(\log n)$, explicit constructions of $K$-Ramsey graphs on $N$ vertices with $K=\log^{O(1)} N$, and truly optimal privacy amplification protocols with an active adversary.
* Two-source non-malleable extractors and affine non-malleable extractors for some linear min-entropy with exponentially small error, which in turn give the first explicit construction of non-malleable codes against $2$-split-state tampering and affine tampering with constant rate and \emph{exponentially} small error.
* Explicit extractors for affine sources, sumset sources, interleaved sources, and small-space sources that achieve asymptotically optimal min-entropy of $O(\log n)$ or $2s+O(\log n)$ (for space-$s$ sources).
* An explicit function that requires strongly linear read-once branching programs of size $2^{n-O(\log n)}$, which is optimal up to the constant in $O(\cdot)$. Previously, even for standard read-once branching programs, the best known size lower bound for an explicit function was $2^{n-O(\log^2 n)}$.

By using the notion of $d$-embedding $\Gamma$ of a (canonical) subgeometry $\Sigma$ and of exterior set with respect to the $h$-secant variety $\Omega_{h}(\mathcal{A})$ of a subset $\mathcal{A}$, $ 0 \leq h \leq n-1$, in the finite projective space $\mathrm{PG}(n-1,q^n)$, $n \geq 3$, in this article we construct a class of non-linear $(n,n,q;d)$-MRD codes for any $ 2 \leq d \leq n-1$. A code $\mathcal{C}_{\sigma,T}$ of this class, where $1\in T \subset \mathbb{F}_q^*$ and $\sigma$ is a generator of $\mathrm{Gal}(\mathbb{F}_{q^n}|\mathbb{F}_q)$, arises from a cone of $\mathrm{PG}(n-1,q^n)$ with vertex an $(n-d-2)$-dimensional subspace over a maximum exterior set $\mathcal{E}$ with respect to $\Omega_{d-2}(\Gamma)$. We prove that the codes introduced in [Cossidente, A., Marino, G., Pavese, F.: Non-linear maximum rank distance codes. Des. Codes Cryptogr. 79, 597--609 (2016); Durante, N., Siciliano, A.: Non-linear maximum rank distance codes in the cyclic model for the field reduction of finite geometries. Electron. J. Comb. (2017); Donati, G., Durante, N.: A generalization of the normal rational curve in $\mathrm{PG}(d,q^n)$ and its associated non-linear MRD codes. Des. Codes Cryptogr. 86, 1175--1184 (2018)] are appropriate punctured ones of $\mathcal{C}_{\sigma,T}$, and we completely settle the equivalence issue for this class, showing that $\mathcal{C}_{\sigma,T}$ is neither equivalent nor adjointly equivalent to the non-linear MRD code $\mathcal{C}_{n,k,\sigma,I}$, $I \subseteq \mathbb{F}_q$, obtained in [Otal, K., \"Ozbudak, F.: Some new non-additive maximum rank distance codes. Finite Fields and Their Applications 50, 293--303 (2018)].

Behavior trees represent a modular way to create an overall controller from a set of sub-controllers solving different sub-problems. These sub-controllers can be created in different ways, such as classical model-based control or reinforcement learning (RL). If each sub-controller satisfies the preconditions of the next sub-controller, the overall controller will achieve the overall goal. However, even if all sub-controllers are locally optimal in achieving the preconditions of the next, with respect to some performance metric such as completion time, the overall controller might be far from optimal with respect to the same performance metric. In this paper we show how the performance of the overall controller can be improved if we use approximations of value functions to inform the design of a sub-controller about the needs of the next one. We also show how, under certain assumptions, this leads to a globally optimal controller when the procedure is applied to all sub-controllers. Finally, this result also holds when some of the sub-controllers are already given, i.e., if we are constrained to use some existing sub-controllers, the overall controller will be globally optimal given this constraint.
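To make the idea concrete, here is a minimal sketch (our own notation, not the paper's algorithm): the sub-controllers are designed right-to-left, each one optimized against its local cost plus an approximate value function of the next sub-controller evaluated at the handover state.

```python
# Illustrative backward pass over the sub-problems of a behavior tree.
# designers[i](next_value) -> (controller_i, value_i), where the design
# objective for sub-problem i is assumed to be
#   (local cost to satisfy sub-goal i's postcondition) + next_value(handover state).

def design_subcontrollers(designers):
    """Return the sub-controllers in behavior-tree execution order."""
    controllers = []
    next_value = lambda state: 0.0          # beyond the last sub-goal there is no further cost
    for design in reversed(designers):      # design right-to-left
        controller, value_fn = design(next_value)
        controllers.append(controller)
        next_value = value_fn               # informs the design of the preceding sub-controller
    return list(reversed(controllers))
```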

We establish quantitative compactness estimates for finite difference schemes used to solve nonlinear conservation laws. These equations involve a flux function $f(k(x,t),u)$, where the coefficient $k(x,t)$ is $BV$-regular and may exhibit discontinuities along curves in the $(x,t)$ plane. Our approach, which is technically elementary, relies on a discrete interaction estimate and the existence of one strictly convex entropy. While the details are specifically outlined for the Lax-Friedrichs scheme, the same framework can be applied to other difference schemes. Notably, our compactness estimates are new even in the homogeneous case ($k\equiv 1$).
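To fix ideas, here is a minimal generic sketch of the Lax-Friedrichs scheme for $u_t + f(k(x,t),u)_x = 0$ with a piecewise constant coefficient $k$; this is our own illustration (names, flux, and boundary handling are assumptions), not the authors' implementation or test case.

```python
import numpy as np

# One explicit Lax-Friedrichs step for u_t + f(k(x,t), u)_x = 0 with the
# coefficient k sampled on the grid; periodic boundaries for simplicity.

def lax_friedrichs_step(u, x, t, dt, dx, f, k):
    flux = f(k(x, t), u)                          # F_j^n = f(k(x_j, t_n), u_j^n)
    u_left, u_right = np.roll(u, 1), np.roll(u, -1)
    flux_left, flux_right = np.roll(flux, 1), np.roll(flux, -1)
    return 0.5 * (u_left + u_right) - dt / (2.0 * dx) * (flux_right - flux_left)

# Example: Burgers-type flux f(k, u) = k * u**2 / 2 with a jump in k at x = 0.
x = np.linspace(-1.0, 1.0, 400, endpoint=False)
dx = x[1] - x[0]
u = np.where(x < 0.0, 1.0, 0.0)                   # Riemann initial data
f = lambda kk, uu: 0.5 * kk * uu**2
k = lambda xx, tt: np.where(xx < 0.0, 1.0, 2.0)   # BV coefficient with a discontinuity
t, dt = 0.0, 0.4 * dx                             # CFL-type time-step restriction
while t < 0.3:
    u = lax_friedrichs_step(u, x, t, dt, dx, f, k)
    t += dt
```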

A piecewise linear function can be described in different forms: as an arbitrarily nested expression of $\min$- and $\max$-functions, as a difference of two convex piecewise linear functions, or as a linear combination of maxima of affine-linear functions. In this paper, we provide two main results: first, we show that every piecewise linear function in $n$ variables can be represented as a linear combination of $\max$-functions with at most $n+1$ arguments, and we give an algorithm for computing such a representation. Moreover, these arguments are contained in the finite set of affine-linear functions that coincide with the given function on some open set. Second, we prove that the piecewise linear function $\max(0, x_{1}, \ldots, x_{n})$ cannot be represented as a linear combination of maxima of fewer than $n+1$ affine-linear arguments. This was conjectured by Wang and Sun in 2005 in a paper on representations of piecewise linear functions as linear combinations of maxima.
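As a small illustration of the first result in the univariate case $n=1$ (our example, not taken from the paper):
$$
|x| \;=\; \max(0,\,x) + \max(0,\,-x),
\qquad
\min(x,1) \;=\; x + 1 - \max(x,\,1),
$$
where each term is a $\max$ of at most $n+1 = 2$ affine-linear arguments.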

We study the fundamental problem of finding the best string to represent a given set, in the form of the Closest String problem: Given a set $X \subseteq \Sigma^d$ of $n$ strings, find the string $x^*$ minimizing the radius of the smallest Hamming ball around $x^*$ that encloses all the strings in $X$. In this paper, we investigate whether the Closest String problem admits algorithms that are faster than the trivial exhaustive search algorithm. We obtain the following results for the two natural versions of the problem:
$\bullet$ In the continuous Closest String problem, the goal is to find the solution string $x^*$ anywhere in $\Sigma^d$. For binary strings, the exhaustive search algorithm runs in time $O(2^d \mathrm{poly}(nd))$ and we prove that it cannot be improved to time $O(2^{(1-\epsilon) d} \mathrm{poly}(nd))$, for any $\epsilon > 0$, unless the Strong Exponential Time Hypothesis fails.
$\bullet$ In the discrete Closest String problem, $x^*$ is required to be in the input set $X$. While this problem is clearly in polynomial time, its fine-grained complexity has been pinpointed to be quadratic time $n^{2 \pm o(1)}$ whenever the dimension is $\omega(\log n) < d < n^{o(1)}$. We complement this known hardness result with new algorithms, proving essentially that whenever $d$ falls out of this hard range, the discrete Closest String problem can be solved faster than exhaustive search. In the small-$d$ regime, our algorithm is based on a novel application of the inclusion-exclusion principle.
Interestingly, all of our results apply (and some are even stronger) to the natural dual of the Closest String problem, called the \emph{Remotest String} problem, where the task is to find a string maximizing the Hamming distance to all the strings in $X$.
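For concreteness, a minimal sketch of the trivial $O(2^d \mathrm{poly}(nd))$ exhaustive search for the continuous binary case discussed above (our own illustration; function name and example are assumptions):

```python
from itertools import product

# Brute-force continuous Closest String over the binary alphabet: try every
# candidate center in {0,1}^d and keep the one minimizing the largest Hamming
# distance to the input strings. Runs in O(2^d * n * d) time.

def closest_binary_string(X):
    d = len(X[0])
    best_center, best_radius = None, d + 1
    for center in product((0, 1), repeat=d):
        radius = max(sum(a != b for a, b in zip(center, x)) for x in X)
        if radius < best_radius:
            best_center, best_radius = center, radius
    return best_center, best_radius

# Example: the optimal center of {000, 011, 101} has radius 1 (e.g. 001).
print(closest_binary_string([(0, 0, 0), (0, 1, 1), (1, 0, 1)]))
```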

The paper revisits the robust $s$-$t$ path problem, one of the most fundamental problems in robust optimization. In the problem, we are given a directed graph with $n$ vertices and $k$ distinct cost functions (scenarios) defined over edges, and aim to choose an $s$-$t$ path whose total cost remains as small as possible no matter which scenario is realized. With the view of each cost function being associated with an agent, our goal is to find a common $s$-$t$ path minimizing the maximum objective among all agents, and thus create a fair solution for them. The problem is hard to approximate within $o(\log k)$ by any quasi-polynomial-time algorithm unless $\mathrm{NP} \subseteq \mathrm{DTIME}(n^{\mathrm{poly}\log n})$, and the best approximation ratio known to date is $\widetilde{O}(\sqrt{n})$, which is based on the natural flow linear program. A longstanding open question is whether we can achieve a polylogarithmic approximation even when a quasi-polynomial running time is allowed. We give the first polylogarithmic approximation for the robust $s$-$t$ path problem since it was proposed more than two decades ago. In particular, we introduce an $O(\log n \log k)$-approximation algorithm running in quasi-polynomial time. The algorithm is built on a novel linear program formulation for a decision-tree-type structure, which enables us to get rid of the $\Omega(\max\{k,\sqrt{n}\})$ integrality gap of the natural flow LP. Further, we also consider some well-known graph classes, e.g., graphs with bounded treewidth, and show that a polylogarithmic approximation can be achieved in polynomial time on these graphs. We hope the techniques proposed in this paper can offer new insights into the robust $s$-$t$ path problem and related problems in robust optimization.
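In symbols, with $\mathcal{P}_{st}$ denoting the set of $s$-$t$ paths and $c_i$ the cost function of scenario $i$, the objective described above reads
$$
\min_{P \in \mathcal{P}_{st}} \; \max_{i \in \{1,\dots,k\}} \; \sum_{e \in P} c_i(e).
$$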
