
The Erd\H{o}s distinct distance problem is a ubiquitous problem in discrete geometry. Less well known is Erd\H{o}s' distinct angle problem, the problem of finding the minimum number of distinct angles between $n$ non-collinear points in the plane. The standard problem is already well understood. However, it admits many of the same variants as the distinct distance problem, many of which are unstudied. We provide upper and lower bounds on a broad class of distinct angle problems. We show that the number of distinct angles formed by $n$ points in general position is $O(n^{\log_2(7)})$, providing the first non-trivial bound for this quantity. We introduce a new class of asymptotically optimal point configurations with no four cocircular points. Then, we analyze the sensitivity of asymptotically optimal point sets to perturbation, yielding a much broader class of asymptotically optimal configurations. In higher dimensions we show that a variant of Lenz's construction admits fewer distinct angles than the optimal configurations in two dimensions. We also show that the minimum size of a maximal subset of $n$ points in general position admitting only unique angles is $\Omega(n^{1/5})$ and $O(n^{\log_2(7)/3})$. Finally, we provide bounds on the partite variants of the standard distinct angle problem.
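To make the quantity being bounded concrete, here is a minimal brute-force sketch (not one of the paper's constructions) that counts the distinct angles determined by a point set; the function names and floating-point tolerance are our own, and exact arithmetic would be needed for a rigorous count. Points on a circle illustrate why cocircularity matters: by the inscribed angle theorem, the regular $n$-gon determines only $n-2$ distinct angles.

```python
import itertools
import math

def distinct_angles(points, tol=1e-9):
    """Brute-force count of distinct angles: for every ordered triple
    (p, q, r) of distinct points, record the angle at vertex p between
    rays pq and pr, then group values up to a tolerance."""
    angles = []
    for p, q, r in itertools.permutations(points, 3):
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
        angles.append(math.acos(max(-1.0, min(1.0, cos))))
    angles.sort()
    count, prev = 0, None
    for a in angles:
        if prev is None or a - prev > tol:
            count += 1
            prev = a
    return count

# Vertices of the regular n-gon: by the inscribed angle theorem every
# angle is a multiple of pi/n, so only n - 2 distinct values occur.
n = 8
gon = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
       for k in range(n)]
print(distinct_angles(gon))  # 6
```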

Related content

This paper considers the numerical treatment of the time-dependent Gross-Pitaevskii equation. In order to conserve the time invariants of the equation as accurately as possible, we propose a Crank-Nicolson-type time discretization that is combined with a suitable generalized finite element discretization in space. The space discretization is based on the technique of Localized Orthogonal Decompositions (LOD) and allows us to capture the time invariants with an accuracy of order $\mathcal{O}(H^6)$ with respect to the chosen mesh size $H$. This accuracy is preserved thanks to the conservation properties of the time-stepping method. Furthermore, we prove that the resulting scheme approximates the exact solution in the $L^{\infty}(L^2)$-norm with order $\mathcal{O}(\tau^2 + H^4)$, where $\tau$ denotes the step size. The computational efficiency of the method is demonstrated in numerical experiments for a benchmark problem with a known exact solution.
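As a point of reference for the time discretization (the spatial LOD discretization is the paper's actual contribution and is not reproduced here), the following is a minimal sketch of a mass-conserving Crank-Nicolson step for a 1D Gross-Pitaevskii equation $i u_t = -u_{xx} + V u + \beta |u|^2 u$ on a periodic grid, with the cubic term handled by a fixed-point iteration. The grid sizes, parameters, and standard finite-difference Laplacian are illustrative assumptions.

```python
import numpy as np

def cn_step(u, dx, tau, V, beta, iters=25):
    """One Crank-Nicolson step for i u_t = -u_xx + V u + beta|u|^2 u
    on a periodic grid, using the averaged density
    rho = (|u^{n+1}|^2 + |u^n|^2)/2. The cubic term is resolved by a
    fixed-point iteration (adequate for small tau)."""
    n = u.size
    # standard periodic finite-difference Laplacian; the paper's LOD
    # method would replace this with a generalized FE operator
    L = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
         + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / dx**2
    I = np.eye(n)
    w = u.copy()  # iterate for u^{n+1}
    for _ in range(iters):
        rho = 0.5 * (np.abs(w) ** 2 + np.abs(u) ** 2)
        H = -L + np.diag(V + beta * rho)  # Hermitian for real V, rho
        # solve  i (w - u)/tau = H (w + u)/2  for w
        w = np.linalg.solve(1j * I / tau - 0.5 * H,
                            (1j * I / tau + 0.5 * H) @ u)
    return w

# illustrative data: harmonic trap, Gaussian initial state
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
dx = x[1] - x[0]
u0 = np.exp(-10 * (x - np.pi) ** 2).astype(complex)
u1 = cn_step(u0, dx, tau=1e-3, V=0.5 * (x - np.pi) ** 2, beta=1.0)
print(np.sum(np.abs(u0) ** 2) * dx, np.sum(np.abs(u1) ** 2) * dx)  # equal
```

Because the operator $H$ is Hermitian at every iteration, taking the inner product of the update equation with $w + u$ shows $\|w\|^2 = \|u\|^2$, so the discrete mass is conserved exactly (up to roundoff), mirroring the conservation properties the abstract emphasizes.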

The framework of distribution testing is currently ubiquitous in the field of property testing. In this model, the input is a probability distribution accessible via independently drawn samples from an oracle. The testing task is to distinguish a distribution that satisfies some property from a distribution that is far from satisfying it in the $\ell_1$ distance. The task of tolerant testing imposes a further restriction: distributions that are close to satisfying the property must also be accepted. This work focuses on the connection between the sample complexities of non-tolerant ("traditional") testing of distributions and tolerant testing thereof. When we limit our scope to label-invariant (symmetric) properties of distributions, we prove that the gap between the two is at most quadratic. On the other hand, the property of being the uniform distribution is indeed known to have an almost-quadratic gap. When moving to general, not necessarily label-invariant properties, the situation is more complicated, and we show some partial results. We show that if a property requires the distributions to be non-concentrated, then it cannot be non-tolerantly tested with $o(\sqrt{n})$ samples, where $n$ denotes the universe size. This immediately implies at most a quadratic gap, because a distribution can be learned (and hence tolerantly tested against any property) using $\mathcal{O}(n)$ samples. Being non-concentrated is a strong requirement on the property, and we also prove a close-to-linear lower bound against tolerant tests of such properties. To provide evidence for other general cases (where the properties are not necessarily label-invariant), we show that if an input distribution is very concentrated, in the sense that it is mostly supported on a subset of size $s$ of the universe, then it can be learned using only $\mathcal{O}(s)$ samples. The learning procedure adapts to the input and works without knowing $s$ in advance.
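As a toy illustration of the sampling-oracle model (and of non-tolerant testing specifically), here is the classic collision-based uniformity tester; it is not one of the testers studied in this work, and the sample count and threshold below are illustrative rather than analyzed constants.

```python
import math
import random
from collections import Counter

def collision_rate(samples):
    """Fraction of colliding pairs; equals 1/n in expectation for the
    uniform distribution on [n], and is larger for skewed ones."""
    m = len(samples)
    counts = Counter(samples)
    return sum(c * (c - 1) // 2 for c in counts.values()) / (m * (m - 1) // 2)

def toy_uniformity_test(sample_oracle, n, m, slack=1.5):
    """Accept iff the empirical collision rate is close to 1/n.
    `slack` and `m` are illustrative, and repeated runs may
    occasionally err."""
    samples = [sample_oracle() for _ in range(m)]
    return collision_rate(samples) <= slack / n

n = 1000
m = 8 * math.isqrt(n)  # ~ sqrt(n) samples, the known uniformity regime
uniform = lambda: random.randrange(n)
skewed = lambda: random.randrange(n // 2)  # uniform on half the universe
print(toy_uniformity_test(uniform, n, m))  # True (w.h.p.)
print(toy_uniformity_test(skewed, n, m))   # False (w.h.p.)
```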

List-decodable codes have been an active topic in theoretical computer science since the seminal papers of M. Sudan and V. Guruswami in 1997-1998. List-decodable codes have also been considered in the rank-metric, subspace-metric, cover-metric, pair-metric and insdel-metric settings. In this paper we show that the rates, list-decoding radii and list sizes of list-decodable codes are closely related to the classical topic of covering codes. We prove new, simple, but strong general upper bounds for list-decodable codes in general finite metric spaces, based on various covering codes of finite metric spaces. These covering-code upper bounds apply even when the volumes of the balls depend on their centers, not only in the case where they depend on the radius alone. Consequently, any good upper bound on the covering radius or on the size of a covering code implies a good upper bound on the size of list-decodable codes. Our results give exponential improvements on the recent generalized Singleton upper bound of STOC 2020 for Hamming-metric list-decodable codes when the code lengths are large. Even in the list-size $L=1$ case, our covering-code upper bounds give highly non-trivial upper bounds on the sizes of codes with a given minimum distance. We also give a generalized Singleton upper bound for average-radius list-decodable codes. The asymptotic forms of our covering-code bounds partially recover the Blinovsky bound and the combinatorial bound of Guruswami-H{\aa}stad-Sudan-Zuckerman in the Hamming-metric setting. We further suggest studying combinatorial covering list-decodable codes as a natural generalization of combinatorial list-decodable codes. We apply our general covering-code upper bounds to list-decodable rank-metric codes, list-decodable subspace codes, list-decodable insertion codes and list-decodable deletion codes, obtaining new and better results on the non-list-decodability of rank-metric codes and subspace codes.
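The basic covering-code arithmetic underlying such bounds can be seen in its simplest Hamming-metric form; the sketch below is textbook material (ball volumes and the sphere-covering lower bound), not the refined bounds of this paper.

```python
from math import comb

def hamming_ball(n, r, q=2):
    """Number of words of [q]^n within Hamming distance r of a center
    (independent of the center in this metric)."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def sphere_covering_lower_bound(n, r, q=2):
    """Any radius-r covering code of [q]^n has at least this many
    codewords: the balls around codewords must cover all q^n words."""
    return -(-q ** n // hamming_ball(n, r, q))  # ceiling division

# covering {0,1}^10 with radius-2 balls takes at least 19 codewords
print(sphere_covering_lower_bound(10, 2))  # 19
```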

In the Online Machine Covering problem, jobs, defined by their sizes, arrive one by one and have to be assigned to $m$ parallel and identical machines, with the goal of maximizing the load of the least-loaded machine. In this work, we study the Machine Covering problem in the recently popular random-order model. Here no extra resources are present; instead, the adversary is weakened in that it can only decide upon the input set, while the jobs are revealed uniformly at random. This model is particularly relevant to Machine Covering, where lower bounds are usually associated with highly structured input sequences. We first analyze Graham's Greedy strategy in this context and establish that its competitive ratio decreases slightly to $\Theta\left(\frac{m}{\log(m)}\right)$, which is asymptotically tight. Then, as our main result, we present an improved $\tilde{O}(\sqrt[4]{m})$-competitive algorithm for the problem. This result is achieved by exploiting the extra information coming from the random order of the jobs, using sampling techniques to devise an improved mechanism to distinguish jobs that are relatively large from small ones. We complement this result with a first lower bound, showing that no algorithm can have a competitive ratio of $O\left(\frac{\log(m)}{\log\log(m)}\right)$ in the random-order model. This lower bound is achieved by studying a novel variant of the Secretary problem, which could be of independent interest.
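For concreteness, here is Graham's Greedy strategy in a minimal form: each arriving job is assigned to the currently least-loaded machine, with a random shuffle playing the role of the random-order arrivals. The job sizes below are an arbitrary illustration.

```python
import heapq
import random

def greedy_machine_covering(jobs, m):
    """Graham's greedy rule: each arriving job goes to the currently
    least-loaded machine. Returns the covering objective, i.e. the
    load of the least-loaded machine at the end."""
    loads = [0] * m
    heapq.heapify(loads)
    for job in jobs:
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + job)
    return min(loads)

# random-order model: the adversary fixes the multiset of job sizes,
# but the arrival order is a uniformly random permutation
jobs = [10] * 5 + [1] * 50
random.shuffle(jobs)
print(greedy_machine_covering(jobs, m=5))
```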

Consider a system of $m$ polynomial equations $\{p_i(x) = b_i\}_{i \leq m}$ of degree $D\geq 2$ in the $n$-dimensional variable $x \in \mathbb{R}^n$, where every coefficient of each $p_i$, and each $b_i$, is chosen independently at random from some continuous distribution. We study the basic question of determining the smallest $m$ -- the algorithmic threshold -- for which efficient algorithms can find refutations (i.e. certificates of unsatisfiability) for such systems. This setting generalizes problems such as refuting random SAT instances, low-rank matrix sensing, and certifying pseudo-randomness of Goldreich's candidate generators and their generalizations. We show that for every $d \in \mathbb{N}$, the $(n+m)^{O(d)}$-time canonical sum-of-squares (SoS) relaxation refutes such a system with high probability whenever $m \geq O(n) \cdot (\frac{n}{d})^{D-1}$. We prove a lower bound in the restricted low-degree polynomial model of computation which suggests that this trade-off between SoS degree and the number of equations is nearly tight for all $d$. We also confirm the predictions of this lower bound in a limited setting by showing a lower bound on the canonical degree-$4$ sum-of-squares relaxation for refuting random quadratic polynomials. Together, our results provide evidence for an algorithmic threshold for the problem at $m \gtrsim \widetilde{O}(n) \cdot n^{(1-\delta)(D-1)}$ for $2^{n^{\delta}}$-time algorithms, for all $\delta$.
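A sketch of the random input model only (the SoS refutation itself is far beyond a few lines): degree-$D$ polynomials with i.i.d. Gaussian coefficients on all monomials, which is one instance of the continuous coefficient distributions the setting allows; the monomial indexing is our own convention.

```python
import itertools
import random

def random_polynomial(n, D):
    """A random n-variate polynomial of degree D: one i.i.d. standard
    Gaussian coefficient per monomial of degree <= D (one instance of
    the continuous distributions the setting allows)."""
    monomials = []
    for d in range(D + 1):
        monomials += itertools.combinations_with_replacement(range(n), d)
    return {mon: random.gauss(0, 1) for mon in monomials}

def evaluate(poly, x):
    total = 0.0
    for mon, coeff in poly.items():
        term = coeff
        for i in mon:  # mon lists variable indices with multiplicity
            term *= x[i]
        total += term
    return total

# a random system {p_i(x) = b_i}, i <= m, the input to the refuter
n, D, m = 5, 2, 30
system = [(random_polynomial(n, D), random.gauss(0, 1)) for _ in range(m)]
x = [random.gauss(0, 1) for _ in range(n)]
print(evaluate(system[0][0], x) - system[0][1])  # residual of equation 1
```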

We study the problem of algorithmically optimizing the Hamiltonian $H_N$ of a spherical or Ising mixed $p$-spin glass. The maximum asymptotic value $\mathsf{OPT}$ of $H_N/N$ is characterized by a variational principle known as the Parisi formula, proved first by Talagrand and in more generality by Panchenko. Recently developed approximate message passing algorithms efficiently optimize $H_N/N$ up to a value $\mathsf{ALG}$ given by an extended Parisi formula, which minimizes over a larger space of functional order parameters. These two objectives are equal for spin glasses exhibiting a no overlap gap property. However, $\mathsf{ALG} < \mathsf{OPT}$ can also occur, and no efficient algorithm producing an objective value exceeding $\mathsf{ALG}$ is known. We prove that for mixed even $p$-spin models, no algorithm satisfying an overlap concentration property can produce an objective larger than $\mathsf{ALG}$ with non-negligible probability. This property holds for all algorithms with suitably Lipschitz dependence on the disorder coefficients of $H_N$. It encompasses natural formulations of gradient descent, approximate message passing, and Langevin dynamics run for bounded time and in particular includes the algorithms achieving $\mathsf{ALG}$ mentioned above. To prove this result, we substantially generalize the overlap gap property framework introduced by Gamarnik and Sudan to arbitrary ultrametric forbidden structures of solutions.
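To make the algorithm class concrete, here is the simplest member of the family: projected gradient ascent on a spherical 2-spin Hamiltonian. Run for boundedly many steps it depends Lipschitz-ly on the disorder coefficients, so it falls under the theorem; it is emphatically not the message-passing scheme achieving $\mathsf{ALG}$, and the normalizations below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500

# 2-spin disorder: a symmetric matrix of centered Gaussian couplings
J = rng.normal(size=(N, N)) / np.sqrt(N)
J = (J + J.T) / 2

def H(sigma):
    """Spherical 2-spin Hamiltonian H_N(sigma) = <sigma, J sigma>/2,
    with sigma constrained to the sphere of radius sqrt(N)."""
    return sigma @ J @ sigma / 2

sigma = rng.normal(size=N)
sigma *= np.sqrt(N) / np.linalg.norm(sigma)  # start on the sphere

eta = 0.1
for _ in range(2000):
    sigma = sigma + eta * (J @ sigma)            # Euclidean ascent step
    sigma *= np.sqrt(N) / np.linalg.norm(sigma)  # project to the sphere

# the iterate aligns with the top eigenvector, so H/N -> lambda_max(J)/2
print(H(sigma) / N)
```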

This short note presents a "proof" of $P\neq NP$. The quotation marks indicate that we do not know whether the proof is correct: we are confused because we do not know where we may have made a mistake.

We show that it is provable in PA that there is an arithmetically definable sequence $\{\phi_{n}:n \in \omega\}$ of $\Pi^{0}_{2}$-sentences such that:

- PRA+$\{\phi_{n}:n \in \omega\}$ is $\Pi^{0}_{2}$-sound and $\Pi^{0}_{1}$-complete;
- the length of $\phi_{n}$ is bounded above by a polynomial function of $n$ with positive leading coefficient;
- PRA+$\phi_{n+1}$ always proves the 1-consistency of PRA+$\phi_{n}$.

The growth in logical strength is in some sense "as fast as possible": the total general recursive functions whose totality is asserted by the true $\Pi^{0}_{2}$-sentences in the sequence are cofinal, in terms of growth rate, in the set of all total general recursive functions. We then develop an argument that makes use of a sequence of sentences constructed by an application of the diagonal lemma; these sentences are generalisations, in a broad sense, of Hugh Woodin's "Tower of Hanoi" construction, as outlined in his essay of that name in Chapter 18 of the anthology "Truth in Mathematics". The argument establishes that it is provable in PA that $P \neq NP$. We indicate how to pull the argument all the way down into EFA.

We study population protocols, a model of distributed computing appropriate for modeling well-mixed chemical reaction networks and other physical systems where agents exchange information in pairwise interactions but have no control over their schedule of interaction partners. The well-studied *majority* problem is that of determining, in an initial population of $n$ agents, each with one of two opinions $\mathsf{A}$ or $\mathsf{B}$, whether there are more $\mathsf{A}$'s, more $\mathsf{B}$'s, or a tie. A *stable* protocol solves this problem with probability 1 by eventually entering a configuration in which all agents agree on a correct consensus decision of $\mathsf{A}$, $\mathsf{B}$, or $\mathsf{T}$, from which the consensus cannot change. We describe a protocol that solves this problem using $O(\log n)$ states ($\log \log n + O(1)$ bits of memory) and optimal expected time $O(\log n)$. The state complexity of $O(\log n)$ is known to be optimal for the class of polylogarithmic-time stable protocols that are "output dominant" and "monotone". These are two natural constraints satisfied by our protocol, making it simultaneously time- and state-optimal for that class. We introduce a key technique called a "fixed resolution clock" to achieve partial synchronization. Our protocol is *nonuniform*: the transition function has the value $\left \lceil {\log n} \right \rceil$ encoded in it. We show that the protocol can be modified to be uniform, while increasing the state complexity to $\Theta(\log n \log \log n)$.
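To make the interaction model concrete, the following simulates a uniformly random pairwise scheduler on the classic 3-state approximate-majority protocol of Angluin, Aspnes, and Eisenstat; this is *not* the $O(\log n)$-state stable protocol of the paper (in particular it can err on close margins and does not detect ties).

```python
import random

def approximate_majority(n_a, n_b, seed=0):
    """Three-state approximate majority (states 'A', 'B', blank '_')
    under a uniformly random pairwise scheduler: an initiator erases
    an opposing responder's opinion, and recruits blank responders."""
    rng = random.Random(seed)
    agents = ['A'] * n_a + ['B'] * n_b
    n = len(agents)
    interactions = 0
    while len(set(agents)) > 1:         # run until consensus
        i, j = rng.sample(range(n), 2)  # ordered pair: initiator, responder
        a, b = agents[i], agents[j]
        if a != '_' and b != '_' and a != b:
            agents[j] = '_'             # conflicting opinion erased
        elif a != '_' and b == '_':
            agents[j] = a               # blank adopts initiator's opinion
        interactions += 1
    return agents[0], interactions

opinion, steps = approximate_majority(60, 40)
print(opinion, steps / 100)  # consensus opinion and ~parallel time
```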

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of the local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm, called multi-step primal-dual (MSPD), together with its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate, even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS), based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
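The smoothing idea behind DRS can be isolated in a few lines: replace $f$ by its Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x + \gamma u)]$, whose gradient admits an unbiased estimate from function values alone. The sketch below is this generic estimator with an arbitrary step-size schedule, not the paper's distributed primal-dual scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def smoothed_grad(f, x, gamma, k=100):
    """Unbiased Monte Carlo estimate of the gradient of the Gaussian
    smoothing f_gamma(x) = E[f(x + gamma u)], u ~ N(0, I): uses
    grad f_gamma(x) = E[(f(x + gamma u) - f(x)) u] / gamma, which
    needs only function values, so f may be non-smooth."""
    g = np.zeros_like(x)
    for _ in range(k):
        u = rng.normal(size=x.size)
        g += (f(x + gamma * u) - f(x)) / gamma * u
    return g / k

# minimize the non-smooth ||x||_1 through its smoothed surrogate
f = lambda x: np.abs(x).sum()
x = rng.normal(size=10)
for t in range(1, 201):
    x = x - 0.5 / np.sqrt(t) * smoothed_grad(f, x, gamma=0.1)
print(f(x))  # drifts toward the minimum value 0
```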
