We study a geometric facility location problem under imprecision. Given $n$ unit intervals in the real line, each with one of $k$ colors, the goal is to place one point in each interval such that the resulting \emph{minimum color-spanning interval} is as large as possible. A minimum color-spanning interval is an interval of minimum size that contains at least one point from a given interval of each color. We prove that if the input intervals are pairwise disjoint, the problem can be solved in $O(n)$ time, even for intervals of arbitrary length. For overlapping intervals, the problem becomes much more difficult. Nevertheless, we show that it can be solved in $O(n \log^2 n)$ time when $k=2$, by exploiting several structural properties of candidate solutions, combined with a number of advanced algorithmic techniques. Interestingly, this shows a sharp contrast with the 2-dimensional version of the problem, recently shown to be NP-hard.
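For fixed, already-placed points, the minimum color-spanning interval itself can be computed with a standard two-pointer sweep. The following sketch shows only that subroutine (illustrative background, not the paper's algorithm, which optimizes the placement of the points inside their intervals):

```python
from collections import defaultdict

def min_color_spanning_interval(points):
    """Smallest interval containing at least one point of each color,
    for FIXED points given as (coordinate, color) pairs.
    Two-pointer sweep: O(n log n) for the sort, then O(n).
    Returns (length, left endpoint, right endpoint)."""
    pts = sorted(points)
    k = len({c for _, c in points})
    count = defaultdict(int)     # points of each color inside the window
    distinct = 0                 # colors currently covered by the window
    best = (float("inf"), None, None)
    left = 0
    for x_r, c_r in pts:
        count[c_r] += 1
        if count[c_r] == 1:
            distinct += 1
        while distinct == k:     # shrink from the left while all colors covered
            x_l, c_l = pts[left]
            if x_r - x_l < best[0]:
                best = (x_r - x_l, x_l, x_r)
            count[c_l] -= 1
            if count[c_l] == 0:
                distinct -= 1
            left += 1
    return best
```

For example, with points of colors a, b, c at 0(a), 1(b), 2(a), 2.5(b), 3(c), the minimum color-spanning interval is [2, 3], of length 1.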

A permutation code is a nonlinear code whose codewords are permutations of a set of symbols. We consider the use of permutation codes over the deletion channel, under the symbol-invariant error model, meaning that the values of the symbols that are not removed are unaffected by the deletion. In 1992, Levenshtein gave a construction of perfect single-deletion-correcting permutation codes that attain the maximum code size. Furthermore, he showed in the same paper that the set of all permutations of a given length can be partitioned into permutation codes so constructed. His construction relies on the binary Varshamov-Tenengolts codes. In this paper we give an independent and more direct proof of Levenshtein's result that does not depend on the Varshamov-Tenengolts codes. Using the new approach, we devise efficient encoding and decoding algorithms that correct one deletion.
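As background, the binary Varshamov-Tenengolts code $VT_a(n)$ consists of the words $x_1\dots x_n$ with $\sum_i i\,x_i \equiv a \pmod{n+1}$ and corrects a single deletion. A minimal sketch of this ingredient with a brute-force decoder (illustration of VT codes only, not Levenshtein's permutation construction or the paper's new algorithms):

```python
from itertools import product

def vt_syndrome(x):
    """Varshamov-Tenengolts syndrome: sum of i * x_i, positions 1-indexed."""
    return sum(i * b for i, b in enumerate(x, start=1))

def vt_code(n, a):
    """All binary words of length n in VT_a(n), i.e. syndrome == a (mod n+1)."""
    return [x for x in product((0, 1), repeat=n)
            if vt_syndrome(x) % (n + 1) == a]

def correct_one_deletion(received, n, a):
    """Recover the codeword of VT_a(n) from which one bit was deleted,
    by brute force over reinsertions; the single-deletion-correcting
    property of VT codes guarantees the candidate is unique."""
    candidates = set()
    for pos in range(n):
        for bit in (0, 1):
            word = received[:pos] + (bit,) + received[pos:]
            if vt_syndrome(word) % (n + 1) == a:
                candidates.add(word)
    assert len(candidates) == 1, "VT codes correct exactly one deletion"
    return candidates.pop()
```

For instance, $VT_0(4) = \{0000, 0110, 1001, 1111\}$, and deleting the first bit of 1001 yields 001, from which the decoder recovers 1001.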

For a subfamily ${F}\subseteq 2^{[n]}$ of the Boolean lattice, consider the graph $G_{F}$ on ${F}$ based on the pairwise inclusion relations among its members. Given a positive integer $t$, how large can ${F}$ be before $G_{F}$ must contain some component of order greater than $t$? For $t=1$, this question was answered exactly almost a century ago by Sperner: the size of a middle layer of the Boolean lattice. For $t=2^n$, this question is trivial. We are interested in what happens between these two extremes. For $t=2^{g}$ with $g=g(n)$ being any integer function that satisfies $g(n)=o(n/\log n)$ as $n\to\infty$, we give an asymptotically sharp answer to the above question: not much larger than the size of a middle layer. This constitutes a nontrivial generalisation of Sperner's theorem. We do so by a reduction to a Tur\'an-type problem for rainbow cycles in properly edge-coloured graphs. Among other results, we also give a sharp answer to the question of how large ${F}$ can be before $G_{F}$ must be connected.

This paper gives a self-contained introduction to the Hilbert projective metric $\mathcal{H}$ and its fundamental properties, with a particular focus on the space of probability measures. We start by defining the Hilbert pseudo-metric on convex cones, focusing mainly on dual formulations of $\mathcal{H}$. We show that linear operators on convex cones contract in the distance given by the hyperbolic tangent of $\mathcal{H}$, which in particular implies Birkhoff's classical contraction result for $\mathcal{H}$. Turning to spaces of probability measures, where $\mathcal{H}$ is a metric, we analyse the dual formulation of $\mathcal{H}$ in the general setting, and explore the geometry of the probability simplex under $\mathcal{H}$ in the special case of discrete probability measures. Throughout, we compare $\mathcal{H}$ with other distances between probability measures. In particular, we show how convergence in $\mathcal{H}$ implies convergence in total variation, $p$-Wasserstein distance, and any $f$-divergence. Furthermore, we derive a novel sharp bound for the total variation between two probability measures in terms of their Hilbert distance.
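For strictly positive discrete distributions, the Hilbert distance has the simple closed form $\mathcal{H}(p,q)=\log\max_i (p_i/q_i) - \log\min_i (p_i/q_i)$. A small sketch comparing it with total variation (the sharp bound derived in the paper is not reproduced here):

```python
import math

def hilbert_distance(p, q):
    """Hilbert projective distance between strictly positive vectors:
    H(p, q) = log(max_i p_i/q_i) - log(min_i p_i/q_i).
    Scale-invariant in each argument, hence a genuine metric on
    (normalized) discrete probability distributions."""
    ratios = [pi / qi for pi, qi in zip(p, q)]
    return math.log(max(ratios)) - math.log(min(ratios))

def total_variation(p, q):
    """Total variation distance between discrete probability distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))
```

For example, for $p=(1/2,1/2)$ and $q=(1/4,3/4)$ one gets $\mathcal{H}(p,q)=\log 3$ while the total variation is $1/4$; rescaling $q$ by a positive constant leaves $\mathcal{H}$ unchanged, reflecting its projective nature.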

We present a novel variational derivation of the Maxwell-GLM system. This system augments the vacuum Maxwell equations with two supplementary acoustic subsystems via a generalized Lagrangian multiplier (GLM) approach; it was originally introduced by Munz et al. for purely numerical purposes, in order to treat the divergence constraints on the magnetic and electric fields within general-purpose, non-structure-preserving numerical schemes for hyperbolic PDE. Among the many mathematically interesting features of the model are: i) its symmetric hyperbolicity, ii) the extra conservation law for the total energy density and, most importantly, iii) the very peculiar combination of the basic differential operators, since both curl-curl and div-grad combinations are mixed within this kind of system. A similar mixture of Maxwell-type and acoustic-type subsystems has also recently been put forward by Buchman et al. in the context of a reformulation of the Einstein field equations of general relativity in terms of tetrads. This motivates our interest in this class of PDE: the system is very interesting from a mathematical point of view in its own right and can therefore serve as a useful prototype for the development of new structure-preserving numerical methods. Up to now, to the best of our knowledge, there exists neither a rigorous variational derivation of this class of hyperbolic PDE systems, nor exactly energy-conserving and asymptotic-preserving schemes for them. The objectives of this paper are to derive the Maxwell-GLM system from an underlying variational principle, to show its consistency with Hamiltonian mechanics and special relativity, to extend it to the general nonlinear case, and to develop new exactly energy-conserving and asymptotic-preserving finite volume schemes for its discretization.

The Poisson compound decision problem is a classical problem in statistics, for which parametric and nonparametric empirical Bayes methodologies are available to estimate the Poisson means in static or batch domains. In this paper, we consider the Poisson compound decision problem in a streaming or online domain. By relying on a quasi-Bayesian approach, often referred to as Newton's algorithm, we obtain sequential estimates of the Poisson means that are easy to evaluate, computationally efficient, and have a constant computational cost as data increase, which is desirable for streaming data. Large-sample asymptotic properties of the proposed estimates are investigated, also providing frequentist guarantees in terms of a regret analysis. We validate our methodology empirically, on both synthetic and real data, comparing it against the most popular alternatives.
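A minimal grid-based sketch of Newton's quasi-Bayesian recursion for a Poisson mixture, assuming an illustrative fixed grid and step sizes $\alpha_n = 1/(n+1)$ (the paper's exact specification and guarantees may differ):

```python
import math

def poisson_pmf(x, lam):
    """Poisson probability mass function."""
    return math.exp(-lam) * lam ** x / math.factorial(x)

class NewtonPoisson:
    """Quasi-Bayesian (Newton's algorithm) estimate of a Poisson mixing
    distribution supported on a fixed grid. Each observation triggers one
    O(m) convex-combination update of the grid weights, so the per-step
    cost is constant in the number of observations, as suited to
    streaming data. Grid and step sizes here are illustrative choices."""

    def __init__(self, grid):
        self.grid = list(grid)
        m = len(self.grid)
        self.w = [1.0 / m] * m   # uniform initial guess for the mixing law
        self.n = 0

    def update(self, x):
        like = [poisson_pmf(x, t) for t in self.grid]
        z = sum(wi * li for wi, li in zip(self.w, like))
        post = [wi * li / z for wi, li in zip(self.w, like)]
        self.n += 1
        a = 1.0 / (self.n + 1)   # step size alpha_n
        self.w = [(1 - a) * wi + a * pi for pi, wi in zip(post, self.w)]

    def posterior_mean(self, x):
        """Empirical-Bayes estimate E[theta | x] under the current weights."""
        like = [poisson_pmf(x, t) for t in self.grid]
        num = sum(t * wi * li for t, wi, li in zip(self.grid, self.w, like))
        den = sum(wi * li for wi, li in zip(self.w, like))
        return num / den
```

Because the update is a convex combination of the current weights and the one-step posterior, the weights remain a probability vector after every observation.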

We study the numerical approximation of advection-diffusion equations with highly oscillatory coefficients and possibly dominant advection terms by means of the Multiscale Finite Element Method. The latter is a now-classical finite element type method that performs a Galerkin approximation on a problem-dependent basis set, itself pre-computed in an offline stage. The approach is implemented here using basis functions that locally resolve both the diffusion and the advection terms. Variants with additional bubble functions and possibly weak inter-element continuity are proposed. Some theoretical arguments and a comprehensive set of numerical experiments allow us to investigate and compare the stability and the accuracy of the approaches. The best approach constructed is shown to be adequate in both the diffusion- and advection-dominated regimes, and does not rely on an auxiliary stabilization parameter that would have to be properly adjusted.

We study the design of effort-maximizing grading schemes for agents with private abilities. Assuming agents derive value from the information their grade reveals about their ability, we find that more informative grading schemes induce more competitive contests. In the contest framework, we investigate the effect of manipulating individual prizes and increasing competition on expected effort, identifying conditions on ability distributions and cost functions under which these transformations may encourage or discourage effort. Our results suggest that more informative grading schemes encourage effort when agents of moderate ability are highly likely, and discourage effort when such agents are unlikely.

We first present a simple recursive algorithm that generates cyclic rotation Gray codes for stamp foldings and semi-meanders, where consecutive strings differ by a stamp rotation. These are the first known Gray codes for stamp foldings and semi-meanders, and we thus solve an open problem posed by Sawada and Li in [Electron. J. Comb. 19(2), 2012]. We then introduce an iterative algorithm that generates the same rotation Gray codes. The recursive and iterative algorithms generate stamp foldings and semi-meanders in constant amortized time and $O(n)$ amortized time per string, respectively, using a linear amount of memory.

Popular word embedding methods such as GloVe and Word2Vec are related to the factorization of the pointwise mutual information (PMI) matrix. In this paper, we link correspondence analysis (CA) to the factorization of the PMI matrix. CA is a dimensionality reduction method based on the singular value decomposition (SVD), and we show that CA is mathematically close to the weighted factorization of the PMI matrix. In addition, we present variants of CA that turn out to be successful in the factorization of the word-context matrix, namely CA applied to a matrix whose entries undergo a square-root transformation (ROOT-CA) or a root-root transformation (ROOTROOT-CA). While this study focuses on traditional static word embedding methods, we also include, to extend the contribution of this paper, a comparison of these traditional methods with the transformer-based encoder BERT, i.e. contextual word embeddings. An empirical comparison among the CA- and PMI-based methods as well as BERT shows that the overall results of ROOT-CA and ROOTROOT-CA are slightly better than those of the PMI-based methods and are competitive with BERT.
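A minimal sketch of the shared computational core, building a PMI matrix from co-occurrence counts and factorizing it by truncated SVD (the CA-specific marginal weighting and the ROOT transformations are omitted; the function names are illustrative):

```python
import numpy as np

def pmi_matrix(counts, eps=1e-12):
    """Pointwise mutual information from a word-context co-occurrence
    matrix: log( P(w, c) / (P(w) P(c)) ), with eps guarding zeros."""
    total = counts.sum()
    p_wc = counts / total                      # joint probabilities
    p_w = p_wc.sum(axis=1, keepdims=True)      # word marginals, column vector
    p_c = p_wc.sum(axis=0, keepdims=True)      # context marginals, row vector
    return np.log((p_wc + eps) / (p_w @ p_c + eps))

def svd_embeddings(m, dim):
    """Rank-`dim` word embeddings via truncated SVD: rows of U_k * S_k."""
    u, s, _ = np.linalg.svd(m, full_matrices=False)
    return u[:, :dim] * s[:dim]
```

Word-context pairs that co-occur more often than independence predicts get positive PMI entries, and the left singular vectors scaled by the singular values serve as the word vectors.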

We explore the theoretical possibility of learning $d$-dimensional targets with $W$-parameter models by gradient flow (GF) when $W<d$. Our main result shows that if the targets are described by a particular $d$-dimensional probability distribution, then there exist models with as few as two parameters that can learn the targets with arbitrarily high success probability. On the other hand, we show that for $W<d$ there is necessarily a large subset of GF-non-learnable targets. In particular, the set of learnable targets is not dense in $\mathbb R^d$, and any subset of $\mathbb R^d$ homeomorphic to the $W$-dimensional sphere contains non-learnable targets. Finally, we observe that the model in our main theorem on almost guaranteed two-parameter learning is constructed using a hierarchical procedure and as a result is not expressible by a single elementary function. We show that this limitation is essential in the sense that most models written in terms of elementary functions cannot achieve the learnability demonstrated in this theorem.
