
We show that, for all $k\geq 1$, there exists a $k$-uniform $3^+$-free binary morphism. Furthermore, we revisit an old result of Currie and Rampersad on $3$-free binary morphisms and reprove it in a conceptually simpler (but computationally more intensive) way. Our proofs use the theorem-prover Walnut as an essential tool.

Related content

We say that a (multi)graph $G = (V,E)$ has geometric thickness $t$ if there exist a straight-line drawing $\varphi : V \rightarrow \mathbb{R}^2$ and a $t$-coloring of its edges such that no two edges sharing a point in their relative interior have the same color. The Geometric Thickness problem asks whether a given multigraph has geometric thickness at most $t$. This problem was shown to be NP-hard for $t=2$ [Durocher, Gethner, and Mondal, CG 2016]. In this paper, we settle the computational complexity of Geometric Thickness by showing that it is $\exists \mathbb{R}$-complete already for thickness $57$. Moreover, our reduction shows that the problem is $\exists \mathbb{R}$-complete for $8280$-planar graphs, where a graph is $k$-planar if it admits a topological drawing with at most $k$ crossings per edge. In the course of our paper, we answer previously open questions on geometric thickness and on other related problems; in particular, we show that simultaneous graph embedding of $58$ edge-disjoint graphs and pseudo-segment stretchability with chromatic number $57$ are $\exists \mathbb{R}$-complete.
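
As a concrete illustration of the two drawing parameters (this example is ours, not the paper's): a graph has geometric thickness $1$ if and only if it is planar, since by Fáry's theorem every planar graph admits a crossing-free straight-line drawing, in which a single edge color suffices; likewise, a graph is $0$-planar exactly when it is planar, because a topological drawing with at most $0$ crossings per edge is crossing-free.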

Forty years ago, Conway and Sloane proposed using the highly symmetrical Coxeter-Todd lattice $K_{12}$ for quantization, and estimated its second moment. Since then, all published lists have identified $K_{12}$ as the best 12-dimensional lattice quantizer. Surprisingly, $K_{12}$ is not optimal: we construct two new 12-dimensional lattices with lower normalized second moments. The new lattices are obtained by gluing together 6-dimensional lattices.
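
For reference, the figure of merit here is the normalized second moment: for an $n$-dimensional lattice $\Lambda$ with Voronoi cell $\Omega$ of volume $V$, it is the dimensionless, scale-invariant quantity
$$G(\Lambda) = \frac{1}{n\,V^{1+2/n}} \int_{\Omega} \|x\|^{2}\, \mathrm{d}x,$$
and "lower" means lower $G$. This definition is standard background and is not restated in the abstract.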

A simple graph on $n$ vertices may contain many maximum cliques. But how many can it potentially contain? We define prime and composite graphs, and we show that if $n \ge 15$, then the graphs with the maximum number of maximum cliques have to be composite. Moreover, we show an edge bound from which we prove that if any factor $G_i$ of a composite graph has $\omega(G_i) \ge 5$, then the graph cannot have the maximum number of maximum cliques. Using this, we show that the graph that contains $3^{\lfloor n/3 \rfloor}c$ maximum cliques has the maximum number of maximum cliques on $n$ vertices, where $c\in\{1,\frac{4}{3},2\}$ depends on $n \bmod 3$.
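
The closed-form count is easy to evaluate. Below is a minimal Python sketch; the assignment of $c$ to the residue of $n$ modulo $3$ (namely $c=1,\frac{4}{3},2$ for $n\equiv 0,1,2$) follows the classical Moon--Moser pattern and is our assumption, since the abstract does not spell out the correspondence.

def max_clique_bound(n: int) -> int:
    # 3**floor(n/3) * c, with c in {1, 4/3, 2} chosen by n mod 3
    # (assumed mapping: c = 1, 4/3, 2 for n % 3 == 0, 1, 2).
    q, r = divmod(n, 3)
    if r == 0:
        return 3 ** q            # c = 1
    if r == 1:
        return 4 * 3 ** (q - 1)  # c = 4/3, i.e. (4/3) * 3**q
    return 2 * 3 ** q            # c = 2

print(max_clique_bound(15), max_clique_bound(16), max_clique_bound(17))  # 243 324 486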

For a fixed integer $k\ge 2$, a $k$-community structure in an undirected graph is a partition of its vertex set into $k$ sets called communities, each of size at least two, such that every vertex of the graph has proportionally at least as many neighbours in its own community as in any other community. In this paper, we give a necessary and sufficient condition for a forest on $n$ vertices to admit a $k$-community structure. Furthermore, we provide an $O(n^{2})$-time algorithm that computes such a $k$-community structure in a forest, if it exists. These results extend a result of [Bazgan et al., Structural and algorithmic properties of $2$-community structure, Algorithmica, 80(6):1890-1908, 2018]. We also show that if communities are allowed to have size one, then every forest with $n \geq k\geq 2$ vertices admits a $k$-community structure that can be found in time $O(n^{2})$. We then consider threshold graphs and show that a connected threshold graph admits a $2$-community structure if and only if it is not isomorphic to a star; moreover, if such a $2$-community structure exists, we explain how to obtain it in linear time. We further describe two infinite families of disconnected threshold graphs, each containing exactly one isolated vertex, that do not admit any $2$-community structure. Finally, we present a new infinite family of connected graphs, with both even and odd numbers of vertices, that admit no $2$-community structure even if communities are allowed to have size one.
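
The proportionality condition lends itself to a direct check. Below is a minimal Python sketch, assuming the usual formalization (as in Bazgan et al.) in which a vertex $v$ in community $C_i$ must satisfy $|N(v)\cap C_i|/(|C_i|-1) \geq |N(v)\cap C_j|/|C_j|$ for every other community $C_j$; the abstract states the condition only informally, so this normalization is an assumption.

def is_k_community_structure(adj, parts):
    # adj: dict mapping each vertex to its set of neighbours.
    # parts: list of vertex sets (the communities), each of size >= 2.
    if any(len(c) < 2 for c in parts):
        return False
    for i, ci in enumerate(parts):
        for v in ci:
            own = len(adj[v] & ci) / (len(ci) - 1)
            for j, cj in enumerate(parts):
                if j != i and own < len(adj[v] & cj) / len(cj):
                    return False
    return True

# Example: the path a-b-c-d split into {a, b} and {c, d}.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(is_k_community_structure(adj, [{"a", "b"}, {"c", "d"}]))  # True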

We describe a novel algorithm for solving general parametric (nonlinear) eigenvalue problems. Our method has two steps: first, high-accuracy solutions of non-parametric versions of the problem are gathered at some values of the parameters; these are then combined to obtain global approximations of the parametric eigenvalues. To gather the non-parametric data, we use non-intrusive contour-integration-based methods, which, however, cannot track eigenvalues that migrate into or out of the contour as the parameter changes. Special strategies are described for performing the combination-over-parameter step despite having only partial information on such migrating eigenvalues. Moreover, we devote special attention to the approximation of eigenvalues that undergo bifurcations. Finally, we propose an adaptive strategy that allows one to apply our method effectively even without any a priori information on the behavior of the sought-after eigenvalues. Numerical tests show that our algorithm can achieve remarkably high approximation accuracy.
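
To make the two-step structure concrete, here is a simplified Python sketch (not the authors' algorithm): a dense symmetric eigensolver stands in for the contour-integration-based solver, a plain polynomial fit per eigenvalue branch stands in for the combination-over-parameter strategies, and migrating or bifurcating eigenvalues are not handled; the parametric matrix A(p) is a made-up example.

import numpy as np

def A(p):
    # Hypothetical symmetric parametric matrix, for illustration only.
    return np.array([[2.0 + p, 1.0], [1.0, 1.0 - p]])

samples = np.linspace(0.0, 1.0, 11)                      # step 1: sample parameters
eigs = np.array([np.linalg.eigvalsh(A(p)) for p in samples])  # ascending per sample

degree = 4
fits = [np.polyfit(samples, eigs[:, k], degree)          # step 2: combine over p
        for k in range(eigs.shape[1])]

p_new = 0.37                                             # evaluate the surrogate
approx = [np.polyval(c, p_new) for c in fits]
exact = np.linalg.eigvalsh(A(p_new))
print(approx, exact)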

In this paper, we construct $3$-designs using extended quadratic residue codes over $\mathbb{F}_q$ and their dual codes. As a corollary, we obtain $3$-designs that follow neither from the transitivity argument nor from the Assmus--Mattson theorem.

We classify the {\it Boolean degree $1$ functions} of $k$-spaces in a vector space of dimension $n$ (also known as {\it Cameron-Liebler classes}) over the field with $q$ elements for $n \geq n_0(k, q)$, a problem going back to work by Cameron and Liebler from 1982. This also implies that two-intersecting sets with respect to $k$-spaces do not exist for $n \geq n_0(k, q)$. Our main ingredient is Ramsey theory for geometric lattices.

Given a natural number $k\ge 2$, we consider the $k$-submodular cover problem ($k$-SC). The objective is to find a minimum-cost subset of a ground set $\mathcal{X}$ subject to the value of a $k$-submodular utility function $g$ being at least a certain predetermined value $\tau$. For this problem, we design a bicriteria algorithm whose cost is at most $O(1/\epsilon)$ times the optimal value, while the utility is at least $(1-\epsilon)\tau/r$, where $r$ depends on the monotonicity of $g$.

Sample selection models represent a common methodology for correcting bias induced by data missing not at random. It is well known that these models are not empirically identifiable without exclusion restrictions; in other words, they require some variables that are predictive of missingness but do not affect the outcome of interest. The drive to establish this requirement often leads to the inclusion of irrelevant variables in the model. A recent proposal uses adaptive LASSO to circumvent this problem, but its performance depends on the so-called covariance assumption, which can be violated in small to moderate samples. Additionally, there are as yet no tools for post-selection inference in this model. To address these challenges, we propose two families of spike-and-slab priors to conduct Bayesian variable selection in sample selection models. These prior structures allow for constructing a Gibbs sampler with tractable conditionals, which is scalable to the dimensions of practical interest. We illustrate the performance of the proposed methodology through a simulation study and present a comparison against adaptive LASSO and stepwise selection. We also provide two applications using publicly available real data. An implementation and code to reproduce the results in this paper can be found at //github.com/adam-iqbal/selection-spike-slab
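
For orientation, a spike-and-slab prior on a regression coefficient $\beta_j$ typically takes the generic form
$$\beta_j \mid \gamma_j \sim \gamma_j\,\mathcal{N}(0,\tau_1^2) + (1-\gamma_j)\,\mathcal{N}(0,\tau_0^2), \qquad \gamma_j \sim \mathrm{Bernoulli}(\theta),$$
with $\tau_0^2 \ll \tau_1^2$ (a continuous spike) or with a point mass at zero in place of the spike component, so that the posterior of $\gamma_j$ drives variable selection. This is the standard construction and is not necessarily either of the two prior families proposed in the paper.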

In recent years, object detection has experienced impressive progress. Despite these improvements, there is still a significant gap between detection performance on small and on large objects. We analyze the current state-of-the-art model, Mask-RCNN, on a challenging dataset, MS COCO. We show that the overlap between small ground-truth objects and the predicted anchors is much lower than the expected IoU threshold. We conjecture this is due to two factors: (1) only a few images contain small objects, and (2) small objects do not appear often enough even within the images that contain them. We therefore propose to oversample images containing small objects and to augment each of those images by copy-pasting small objects many times. This allows us to trade off the quality of the detector on large objects against that on small objects. We evaluate different pasting augmentation strategies and ultimately achieve a 9.7\% relative improvement on instance segmentation and 7.1\% on object detection of small objects, compared to the current state-of-the-art method on MS COCO.
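
A minimal Python sketch of the copy-paste step follows (illustrative only; the paper evaluates several pasting strategies, and the function name, overlap-avoidance rule, and parameters below are our own simplifications; a real pipeline must also update the ground-truth boxes/masks of the pasted copies).

import random
import numpy as np

def paste_small_objects(image, masks, n_copies=3, max_tries=20):
    # image: H x W (x C) array; masks: list of H x W boolean masks,
    # one per small object. Each object is copied n_copies times to
    # random locations that do not overlap existing or pasted objects.
    h, w = image.shape[:2]
    occupied = np.zeros((h, w), dtype=bool)
    for m in masks:
        occupied |= m
    out = image.copy()
    for m in masks:
        ys, xs = np.nonzero(m)
        if len(ys) == 0:
            continue
        oh, ow = ys.max() - ys.min() + 1, xs.max() - xs.min() + 1
        patch = image[ys.min():ys.min() + oh, xs.min():xs.min() + ow].copy()
        patch_mask = m[ys.min():ys.min() + oh, xs.min():xs.min() + ow]
        for _ in range(n_copies):
            for _ in range(max_tries):
                y0 = random.randint(0, h - oh)
                x0 = random.randint(0, w - ow)
                if not (occupied[y0:y0 + oh, x0:x0 + ow] & patch_mask).any():
                    out[y0:y0 + oh, x0:x0 + ow][patch_mask] = patch[patch_mask]
                    occupied[y0:y0 + oh, x0:x0 + ow] |= patch_mask
                    break
    return out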
