
We consider the problem of untangling a given (non-planar) straight-line circular drawing $\delta_G$ of an outerplanar graph $G=(V, E)$ into a planar straight-line circular drawing by shifting a minimum number of vertices to a new position on the circle. For an outerplanar graph $G$, such a crossing-free circular drawing always exists, and we define the circular shifting number shift$(\delta_G)$ as the minimum number of vertices that must be shifted to resolve all crossings of $\delta_G$. We show that the problem Circular Untangling, asking whether shift$(\delta_G) \le K$ for a given integer $K$, is NP-complete. For $n$-vertex outerplanar graphs, we obtain a tight upper bound of shift$(\delta_G) \le n - \lfloor\sqrt{n-2}\rfloor - 2$. Based on these results, we study Circular Untangling for almost-planar circular drawings, in which a single edge is involved in all crossings. In this case, we provide a tight upper bound shift$(\delta_G) \le \lfloor \frac{n}{2} \rfloor - 1$ and present a constructive polynomial-time algorithm to compute the circular shifting number of almost-planar drawings.
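To make the objective concrete, here is an illustrative Python sketch (not from the paper): two chords of a circle cross exactly when their endpoints interleave in the circular order, so the crossings of a straight-line circular drawing can be counted directly from the vertex positions.

```python
def chords_cross(a, b, c, d):
    """Return True iff chords (a, b) and (c, d) of a circle cross,
    where a, b, c, d are distinct positions in the circular order."""
    def between(x, lo, hi):
        # x lies on the open clockwise arc from lo to hi (handles wraparound)
        return (lo < x < hi) if lo < hi else (x > lo or x < hi)
    # The chords cross iff exactly one of c, d separates a from b.
    return between(c, a, b) != between(d, a, b)

def count_crossings(positions, edges):
    """Count pairwise crossings of a circular drawing;
    `positions` maps each vertex to its index on the circle."""
    crossings = 0
    for i, (u, v) in enumerate(edges):
        for (x, y) in edges[i + 1:]:
            if {u, v} & {x, y}:
                continue  # edges sharing a vertex never cross
            if chords_cross(positions[u], positions[v],
                            positions[x], positions[y]):
                crossings += 1
    return crossings
```

A drawing is planar exactly when this count is zero; the shifting number asks how few vertex positions must change to reach that state.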

Related Content

An old result of M\"uller and R\"odl states that a countable graph $G$ has a subgraph whose vertices all have infinite degree if and only if for any vertex labeling of $G$ by positive integers, an infinite increasing path can be found. They asked whether an analogous equivalence holds for edge labelings, which Reiterman answered in the affirmative. Recently, Arman, Elliott, and R\"odl extended this problem to linear $k$-uniform hypergraphs $H$ and generalized the original equivalence for vertex labelings. They asked whether Reiterman's result for edge labelings can similarly be extended. We confirm this for the case where $H$ admits finitely many $\beta$-cycles involving any fixed vertex.

The incompleteness of speech inputs severely degrades the performance of all related speech signal processing applications. Although many studies have addressed this issue, they controlled the data-missing conditions by simulation with self-defined masking lengths or sizes, and the masking definitions differ across these experimental settings. This paper presents a novel intermittent speech recovery (ISR) system for real-world self-powered intermittent devices. Three stages are applied to the ISR system for speech reconstruction: interpolation, enhancement, and combination. The experimental results show that our recovery system increases speech quality by up to 591.7% while increasing speech intelligibility by up to 80.5%. Most importantly, the proposed ISR system improves WER scores by up to 52.6%. These promising results not only confirm the effectiveness of the reconstruction but also encourage the use of such battery-free wearable/IoT devices.
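As an illustration of the first stage only, the following minimal Python sketch performs mask-aware linear interpolation of a waveform. The paper's actual interpolation, enhancement, and combination methods are not reproduced here; the boolean `mask` flagging surviving samples is an invented convention for this sketch.

```python
import numpy as np

def interpolate_gaps(signal: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Linearly interpolate samples flagged as missing (mask == False).
    A stand-in for an ISR-style interpolation stage; the paper's actual
    method may differ."""
    idx = np.arange(len(signal))
    known = mask.astype(bool)
    out = signal.copy()
    out[~known] = np.interp(idx[~known], idx[known], signal[known])
    return out

# Example: an intermittent power outage drops a segment of the waveform.
x = np.sin(np.linspace(0, 4 * np.pi, 200))
mask = np.ones_like(x, dtype=bool)
mask[50:70] = False
recovered = interpolate_gaps(x, mask)
```

Enhancement (e.g., a denoising model) and combination would then operate on such interpolated segments.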

Following Zhang and Grossi~(AAAI 2021), we study in more depth a variant of weighted voting games in which agents' weights are induced by a transitive support structure. This class of simple games is notably well suited to studying the relative importance of agents in the liquid democracy framework. We first propose a pseudo-polynomial-time algorithm to compute the Banzhaf and Shapley-Shubik indices for this class of games. Then, we study a bribery problem in which one tries to maximize/minimize the voting power/weight of a given agent by changing the support structure under a budget constraint. We show that these problems are computationally hard and provide several parameterized complexity results.
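For the classical weighted voting game $[q; w_1, \dots, w_n]$ (without the support structure), the pseudo-polynomial computation of raw Banzhaf indices is the standard subset-sum counting DP. The sketch below illustrates that baseline only; it is not the paper's algorithm, which additionally derives the weights from the transitive support structure.

```python
def banzhaf_indices(weights, quota):
    """Raw Banzhaf indices of the weighted voting game [quota; weights]
    via the standard pseudo-polynomial subset-sum DP, O(n^2 * W) time."""
    n, total = len(weights), sum(weights)
    indices = []
    for i in range(n):
        # count[s] = number of coalitions of the OTHER players with weight s
        count = [0] * (total + 1)
        count[0] = 1
        for j, w in enumerate(weights):
            if j == i:
                continue
            for s in range(total - w, -1, -1):  # 0/1-knapsack counting
                count[s + w] += count[s]
        # i is critical for coalitions S with w(S) < quota <= w(S) + w_i
        swings = sum(count[s] for s in range(max(0, quota - weights[i]), quota))
        indices.append(swings / 2 ** (n - 1))
    return indices

print(banzhaf_indices([4, 3, 2, 1], quota=6))
```

The support-induced variant replaces `weights` by weights computed from the delegation graph before running such a DP.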

To assess whether there is some signal in a big database, aggregate tests for the global null hypothesis of no effect are routinely applied in practice before more specialized analysis is carried out. Although a plethora of aggregate tests is available, each test has its strengths but also its blind spots. In a Gaussian sequence model, we study whether it is possible to obtain a test with substantially better consistency properties than the likelihood ratio (i.e., Euclidean norm based) test. We establish an impossibility result, showing that in the high-dimensional framework we consider, the set of alternatives for which a test may improve upon the likelihood ratio test -- that is, its superconsistency points -- is always asymptotically negligible in a relative volume sense.
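For concreteness, the norm-based likelihood ratio test in the Gaussian sequence model $X = \theta + \varepsilon$, $\varepsilon \sim N(0, I_n)$, rejects $H_0: \theta = 0$ when $\|X\|^2$ exceeds a $\chi^2_n$ quantile. A minimal sketch (our illustration, not the paper's code):

```python
import numpy as np
from scipy.stats import chi2

def norm_based_test(x: np.ndarray, alpha: float = 0.05) -> bool:
    """Euclidean-norm (likelihood ratio) test of H0: theta = 0 in the
    Gaussian sequence model X = theta + N(0, I_n).
    Returns True if H0 is rejected at level alpha."""
    n = len(x)
    return float(x @ x) > chi2.ppf(1 - alpha, df=n)

# Example: a dense alternative with small per-coordinate signal.
rng = np.random.default_rng(0)
n = 1000
x = 0.1 + rng.standard_normal(n)
print(norm_based_test(x))
```

The paper's impossibility result concerns how little room any competing test has to beat this baseline in consistency terms.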

We consider the problem of finding a near ground state of a $p$-spin model with Rademacher couplings by means of a low-depth circuit. As a direct extension of the authors' recent work [Gamarnik, Jagannath, Wein 2020], we establish that any poly-size $n$-output circuit that produces a spin assignment with objective value within a certain constant factor of optimality must have depth at least $\log n/(2\log\log n)$ as $n$ grows. This is stronger than the known state-of-the-art bounds of the form $\Omega(\log n/(k(n)\log\log n))$ for similar combinatorial optimization problems, where $k(n)$ depends on the optimality value. For example, for the largest-clique problem, $k(n)$ corresponds to the square of the size of the clique [Rossman 2010]. At the same time, our results are not directly comparable, since in our case the circuits are required to produce a solution itself rather than solve the associated decision problem. As in our earlier work, the approach is based on the overlap gap property (OGP) exhibited by random $p$-spin models, but the derivation of the circuit lower bound further relies on standard facts from Fourier analysis on the Boolean cube, in particular the Linial-Mansour-Nisan Theorem. To the best of our knowledge, this is the first instance in which methods from spin glass theory have ramifications for circuit complexity.
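To fix notation, a $p$-spin objective with Rademacher couplings assigns to each $\sigma \in \{-1,+1\}^n$ the value $H(\sigma) = n^{-(p+1)/2} \sum_{i_1,\dots,i_p} J_{i_1 \dots i_p}\, \sigma_{i_1} \cdots \sigma_{i_p}$; the normalization used here is one common convention and may differ from the paper's. A small Python sketch that evaluates this objective:

```python
import numpy as np

def p_spin_objective(J: np.ndarray, sigma: np.ndarray) -> float:
    """H(sigma) = n^{-(p+1)/2} * sum J[i1,..,ip] * sigma[i1] * ... * sigma[ip].
    J is an order-p coupling tensor of shape (n,)*p; the normalization is
    one common convention, assumed here for illustration."""
    p, n = J.ndim, len(sigma)
    val = J
    for _ in range(p):
        val = val @ sigma  # contract one tensor axis per step
    return float(val) / n ** ((p + 1) / 2)

# Example: random Rademacher couplings and a random spin assignment.
rng = np.random.default_rng(0)
n, p = 10, 3
J = rng.choice([-1.0, 1.0], size=(n,) * p)
sigma = rng.choice([-1, 1], size=n)
print(p_spin_objective(J, sigma))
```

The lower bound says no shallow circuit can map the couplings $J$ to a $\sigma$ that nearly maximizes this value.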

Continuous determinantal point processes (DPPs) are a class of repulsive point processes on $\mathbb{R}^d$ with many statistical applications. Although an explicit expression of their density is known, it is too complicated to be used directly for maximum likelihood estimation. In the stationary case, an approximation using Fourier series has been suggested, but it is limited to rectangular observation windows and no theoretical results support it. In this contribution, we investigate a different way to approximate the likelihood by looking at its asymptotic behaviour when the observation window grows towards $\mathbb{R}^d$. This new approximation is not limited to rectangular windows, is faster to compute than the previous one, does not require any tuning parameter, and is supported by theoretical justifications. It moreover provides an explicit formula for estimating the asymptotic variance of the associated estimator. The performance is assessed in a simulation study on standard parametric models on $\mathbb{R}^d$ and compares favourably to common alternative estimation methods for continuous DPPs.

We study the Bahadur efficiency of several weighted L2-type goodness-of-fit tests based on the empirical characteristic function. The methods considered are for normality and exponentiality testing, and for testing goodness-of-fit to the logistic distribution. Our results are helpful in deciding which specific test a potential practitioner should apply. For the celebrated BHEP and energy tests for normality we obtain novel efficiency results, with some of them in the multivariate case, while in the case of the logistic distribution this is the first time that efficiencies are computed for any composite goodness-of-fit test.
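For concreteness, the BHEP statistic admits a well-known closed form: it integrates the squared distance between the empirical characteristic function of the scaled residuals and that of the standard normal against a Gaussian weight with smoothing parameter $\beta$. The following sketch implements that closed form; it is our illustration, and the defaults ($\beta = 1$, biased covariance estimate) are conventional choices, not prescriptions from the paper.

```python
import numpy as np

def bhep_statistic(X: np.ndarray, beta: float = 1.0) -> float:
    """BHEP statistic for multivariate normality:
    n * int | psi_n(t) - exp(-|t|^2/2) |^2 phi_beta(t) dt,
    where psi_n is the empirical c.f. of the scaled residuals and
    phi_beta the N(0, beta^2 I_d) density. X: (n, d) data matrix."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    S = np.cov(X, rowvar=False, bias=True).reshape(d, d)  # 1/n convention
    L = np.linalg.cholesky(np.linalg.inv(S))
    Y = Xc @ L  # scaled residuals: sample covariance of Y is I_d
    D2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # |Y_j - Y_k|^2
    term1 = np.exp(-beta ** 2 * D2 / 2).sum() / n
    term2 = 2 * (1 + beta ** 2) ** (-d / 2) * np.exp(
        -beta ** 2 * (Y ** 2).sum(1) / (2 * (1 + beta ** 2))).sum()
    term3 = n * (1 + 2 * beta ** 2) ** (-d / 2)
    return float(term1 - term2 + term3)
```

Large values of the statistic indicate departure from normality; Bahadur efficiency compares how fast the resulting p-values decay under fixed alternatives.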

For the approximation and simulation of twofold iterated stochastic integrals and the corresponding L\'{e}vy areas w.r.t. a multi-dimensional Wiener process, we review four algorithms based on a Fourier series approach. In particular, the very efficient algorithm due to Wiktorsson and a newly proposed algorithm due to Mrongowius and R\"ossler are considered. To put recent advances into context, we analyse the four Fourier-based algorithms in a unified framework to highlight differences and similarities in their derivation. A comparison of theoretical properties is complemented by a numerical simulation that reveals the order of convergence of each algorithm. Further, concrete instructions are given for choosing the optimal algorithm and parameters for the simulation of solutions of stochastic (partial) differential equations. Additionally, we provide advice for an efficient implementation of the considered algorithms and have incorporated these insights into an open-source toolbox that is freely available for both the Julia and MATLAB programming languages. The performance of this toolbox is analysed by comparing it to some existing implementations, where we observe a significant speed-up.
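The toolbox itself is the reference implementation. Purely to fix the object being approximated, the sketch below computes the L\'{e}vy areas $A_{ij} = \frac{1}{2}\int_0^h (W_i \, dW_j - W_j \, dW_i)$ by brute-force refinement of the Wiener path on a fine grid; this is a slow baseline and deliberately not one of the four Fourier-based algorithms reviewed, which converge far more efficiently.

```python
import numpy as np

def levy_area_bruteforce(m: int, h: float, steps: int = 10_000, seed: int = 0):
    """Grid-based approximation of the m x m antisymmetric Levy-area
    matrix A_ij = (1/2) int_0^h (W_i dW_j - W_j dW_i) on [0, h].
    Brute-force reference only, not a Fourier-series method."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(scale=np.sqrt(h / steps), size=(steps, m))
    # Left-endpoint path values W(t_0)=0, ..., W(t_{steps-1}).
    W = np.vstack([np.zeros(m), np.cumsum(dW, axis=0)])[:-1]
    I = W.T @ dW              # Ito sums: I[i,j] ~ int W_i dW_j
    return 0.5 * (I - I.T)    # antisymmetric part = Levy areas

print(levy_area_bruteforce(m=2, h=1.0))
```

The Fourier-based algorithms achieve comparable accuracy with far fewer random variables, which is what the reviewed comparison quantifies.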

In this paper, we present a comprehensive review of the imbalance problems in object detection. To analyze the problems in a systematic manner, we introduce a problem-based taxonomy. Following this taxonomy, we discuss each problem in depth and present a unifying yet critical perspective on the solutions in the literature. In addition, we identify major open issues regarding the existing imbalance problems as well as imbalance problems that have not been discussed before. Moreover, in order to keep our review up to date, we provide an accompanying webpage that catalogs papers addressing imbalance problems according to our problem-based taxonomy. Researchers can track newer studies on this webpage, available at //github.com/kemaloksuz/ObjectDetectionImbalance .

Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is not well understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear programming formulation is required: only this formulation enables the computation of convex combinations of valid mappings, as it accounts for the structure of the request graphs. Accordingly, to capture this structure, we introduce the graph-theoretic notions of extraction orders and extraction width, and show that our algorithms have runtime exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show that (i) computing extraction orders of minimal width is NP-hard and (ii) computing decomposable LP solutions is in general NP-hard, even when restricting request graphs to planar ones.
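To make the mapping terminology concrete, here is a minimal sketch (not the paper's algorithm, and all names are illustrative) that only verifies whether candidate embeddings of request graphs jointly respect substrate node and edge capacities; the paper's randomized-rounding algorithms compute such mappings rather than merely check them.

```python
def is_valid_embedding(node_cap, edge_cap, requests):
    """Check that mapping all requests simultaneously respects capacities.
      node_cap: {substrate_node: capacity}
      edge_cap: {(u, v): capacity} for substrate edges
      requests: list of (node_demand, node_map, edge_paths), where
                node_demand maps request node -> demand,
                node_map maps request node -> substrate node,
                edge_paths maps request edge -> list of substrate edges."""
    node_load = {v: 0 for v in node_cap}
    edge_load = {e: 0 for e in edge_cap}
    for node_demand, node_map, edge_paths in requests:
        for r_node, s_node in node_map.items():
            node_load[s_node] += node_demand[r_node]
        for path in edge_paths.values():
            for e in path:
                edge_load[e] += 1  # unit edge demand, for simplicity
    return (all(node_load[v] <= node_cap[v] for v in node_cap) and
            all(edge_load[e] <= edge_cap[e] for e in edge_cap))
```

Profit maximization then selects a capacity-respecting subset of requests of maximal value, while cost minimization embeds all requests as cheaply as possible.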
