An obstacle representation of a graph $G$ consists of a set of pairwise disjoint simply-connected closed regions and a one-to-one mapping of the vertices of $G$ to points such that two vertices are adjacent in $G$ if and only if the line segment connecting the two corresponding points does not intersect any obstacle. The obstacle number of a graph is the smallest number of obstacles in an obstacle representation of the graph in the plane such that all obstacles are simple polygons. It is known that the obstacle number of each $n$-vertex graph is $O(n \log n)$ [Balko, Cibulka, and Valtr, 2018] and that there are $n$-vertex graphs whose obstacle number is $\Omega(n/(\log\log n)^2)$ [Dujmovi\'c and Morin, 2015]. We improve this lower bound to $\Omega(n/\log\log n)$ for simple polygons and to $\Omega(n)$ for convex polygons. To obtain these stronger bounds, we improve known estimates on the number of $n$-vertex graphs with bounded obstacle number, confirming a conjecture of Dujmovi\'c and Morin. We also show that if the drawing of some $n$-vertex graph is given as part of the input, then for some drawings $\Omega(n^2)$ obstacles are required to turn it into an obstacle representation of the graph. Our bounds are asymptotically tight in several instances. We complement these combinatorial bounds with two complexity results. First, we show that computing the obstacle number of a graph $G$ is fixed-parameter tractable in the vertex cover number of $G$. Second, we show that, given a graph $G$ and a simple polygon $P$, it is NP-hard to decide whether $G$ admits an obstacle representation using $P$ as the only obstacle.
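As a minimal illustration of the definition (not taken from the paper), the visibility test underlying an obstacle representation reduces to segment-edge crossing checks when both points lie outside every obstacle: a segment with both endpoints outside a closed polygon meets the polygon if and only if it crosses its boundary. A sketch in Python, assuming general position (no segment through a polygon vertex):

```python
def ccw(a, b, c):
    # Sign of the cross product (b - a) x (c - a).
    return (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])

def segments_cross(p, q, r, s):
    # Proper intersection test for segments pq and rs (general position).
    return ccw(p, q, r) * ccw(p, q, s) < 0 and ccw(r, s, p) * ccw(r, s, q) < 0

def segment_blocked(p, q, obstacle):
    # obstacle: polygon vertices in order; both endpoints are assumed to lie
    # outside the obstacle, so blocking means crossing some boundary edge.
    n = len(obstacle)
    return any(segments_cross(p, q, obstacle[i], obstacle[(i + 1) % n])
               for i in range(n))

# Two points separated by a unit-square obstacle:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(segment_blocked((-1, 0.5), (2, 0.5), square))  # True: sight line blocked
print(segment_blocked((-1, 2.0), (2, 2.0), square))  # False: passes above
```

Two vertices would then be adjacent exactly when `segment_blocked` is false for every obstacle.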
This paper presents the error analysis of numerical methods on graded meshes for stochastic Volterra equations with weakly singular kernels. We first prove a novel regularity estimate for the exact solution by analyzing the associated convolution structure. This reveals that the exact solution exhibits an initial singularity in the sense that its H\"older continuity exponent on any neighborhood of $t=0$ is lower than that on every compact subset of $(0,T]$. Motivated by the initial singularity, we then construct the Euler--Maruyama method, the fast Euler--Maruyama method, and the Milstein method on graded meshes. By establishing their pointwise-in-time error estimates, we determine the mesh grading exponents that attain the optimal uniform-in-time convergence orders, which improve those of the uniform-mesh case. Numerical experiments are finally reported to confirm the sharpness of the theoretical findings.
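The graded-mesh construction can be sketched as follows. The scalar model, drift, and diffusion below are illustrative stand-ins (not the paper's equations), and the kernel is evaluated at the left endpoint of each subinterval, which may differ from the schemes analyzed in the paper:

```python
import math
import random

def graded_mesh(N, T, r):
    # t_j = T * (j/N)**r clusters nodes near t = 0, where the solution's
    # Hoelder continuity exponent is lowest (the initial singularity).
    return [T * (j / N) ** r for j in range(N + 1)]

def euler_maruyama_volterra(N=200, T=1.0, r=2.0, alpha=0.75, x0=1.0, seed=0):
    # Hypothetical scalar stochastic Volterra equation (illustrative only):
    #   X(t) = x0 + int_0^t (t-s)^(alpha-1) b(X(s)) ds
    #             + int_0^t (t-s)^(alpha-1) sigma(X(s)) dW(s)
    b = lambda x: -x       # drift: illustrative choice
    sig = lambda x: 0.3    # diffusion: illustrative choice (additive noise)
    rng = random.Random(seed)
    t = graded_mesh(N, T, r)
    dW = [rng.gauss(0.0, math.sqrt(t[j + 1] - t[j])) for j in range(N)]
    X = [x0]
    for n in range(1, N + 1):
        s = x0
        for j in range(n):  # full convolution sum: O(N^2) overall
            k = (t[n] - t[j]) ** (alpha - 1)  # weakly singular kernel
            s += k * b(X[j]) * (t[j + 1] - t[j]) + k * sig(X[j]) * dW[j]
        X.append(s)
    return t, X

t, X = euler_maruyama_volterra()
print(len(X), X[-1])
```

The plain scheme above re-evaluates the full convolution at every step; the "fast" Euler--Maruyama variant in the paper presumably reduces this quadratic cost.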
Deep feedforward architectures for learning classification tasks with $(2^n \times 2^n)$ inputs typically contain at most $n$ $(2 \times 2)$ max-pooling (MP) operators along the architecture. Here we show, using the CIFAR-10 database, that pooling decisions adjacent to the last convolutional layer significantly enhance accuracies. In particular, the average accuracies of the advanced-VGG architectures with $m$ layers (A-VGGm) are 0.936, 0.940, 0.954, 0.955, and 0.955 for $m=6$, 8, 14, 13, and 16, respectively. The results indicate that the accuracy of A-VGG8 is superior to that of VGG16, and that the accuracies of A-VGG13 and A-VGG16 are equal and comparable to that of Wide-ResNet16. In addition, replacing the three fully connected (FC) layers with a single FC layer (A-VGG6 and A-VGG14), or with several linear-activation FC layers, yielded similar accuracies. These significantly enhanced accuracies stem from training the most influential input-output routes, in comparison with the inferior routes selected after multiple MP decisions along the deep architecture. In addition, accuracies are sensitive to the order of the non-commutative MP and average-pooling operators adjacent to the output layer, which varies the number and location of training routes. The results call for a reexamination of previously proposed deep architectures and their accuracies using the proposed pooling strategy adjacent to the output layer.
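The sensitivity to operator order reflects the fact that $(2 \times 2)$ max pooling and average pooling do not commute. A tiny pure-Python check (illustrative, unrelated to the A-VGG code):

```python
def pool2x2(grid, op):
    # Apply a 2x2 pooling operator with stride 2 to a 2D list of numbers.
    h, w = len(grid), len(grid[0])
    return [[op([grid[i][j], grid[i][j+1], grid[i+1][j], grid[i+1][j+1]])
             for j in range(0, w, 2)] for i in range(0, h, 2)]

mp = lambda v: max(v)            # max pooling
ap = lambda v: sum(v) / len(v)   # average pooling

x = [[ 1,  2,  3,  4],
     [ 5,  6,  7,  8],
     [ 9, 10, 11, 12],
     [13, 14, 15, 16]]

mp_then_ap = pool2x2(pool2x2(x, mp), ap)  # 4x4 -> 2x2 -> 1x1
ap_then_mp = pool2x2(pool2x2(x, ap), mp)
print(mp_then_ap, ap_then_mp)  # [[11.0]] vs [[13.5]]: the operators do not commute
```

Different orders select different input-output routes, which is why the placement of the pooling decisions adjacent to the output layer matters.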
We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that, at each iteration, sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, for which the multigrid method is used to solve the linear optimality system directly; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits very good performance and robustness with respect to all parameters of interest.
In 1972, Mykkeltveit proved that the maximum number of vertex-disjoint cycles in the de Bruijn graph of order $n$ is attained by the pure cycling register rule, as conjectured by Golomb. We generalize this result to the tensor product of the de Bruijn graph of order $n$ and a simple cycle of size $k$, when $n$ divides $k$ or vice versa. We also develop counting formulae for a large family of cycling register rules, including the linear register rules proposed by Golomb.
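Mykkeltveit's theorem identifies this maximum with the number of cycles of the pure cycling register (PCR), i.e. the number of binary necklaces of length $n$. A small illustrative sketch (not from the paper): the PCR rule rotates each $n$-bit string, and its orbits partition the $2^n$ vertices of the de Bruijn graph into vertex-disjoint cycles, whose count is given by Burnside's lemma.

```python
from math import gcd

def pcr_cycle_count(n):
    # Number of cycles of the pure cycling register on binary n-strings:
    # orbits of rotation on {0,1}^n, i.e. binary necklaces of length n.
    # Burnside's lemma: (1/n) * sum_k 2^gcd(k, n) over rotations k.
    return sum(2 ** gcd(k, n) for k in range(n)) // n

def pcr_cycles(n):
    # Explicitly partition the 2^n vertices of the de Bruijn graph of
    # order n into vertex-disjoint cycles via the PCR rule: rotate left.
    seen, cycles = set(), []
    for v in range(2 ** n):
        if v in seen:
            continue
        cyc, u = [], v
        while u not in seen:
            seen.add(u)
            cyc.append(u)
            u = ((u << 1) | (u >> (n - 1))) & ((1 << n) - 1)  # rotate left
        cycles.append(cyc)
    return cycles

print(pcr_cycle_count(4), len(pcr_cycles(4)))  # 6 6
```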
In this article we provide a constant factor approximation for the $(p,3)$-flexible graph connectivity problem, improving on the previous best known $O(p)$-approximation.
Whether or not the Kronecker coefficients of the symmetric group count some set of combinatorial objects is a longstanding open question. In this work we show that a given Kronecker coefficient is proportional to the rank of a projector that can be measured efficiently using a quantum computer. In other words, a Kronecker coefficient counts the dimension of the vector space spanned by the accepting witnesses of a QMA verifier, where QMA is the quantum analogue of NP. This implies that approximating the Kronecker coefficients to within a given relative error is not harder than a certain natural class of quantum approximate counting problems that captures the complexity of estimating thermal properties of quantum many-body systems. A second consequence is that deciding positivity of Kronecker coefficients is contained in QMA, complementing a recent NP-hardness result of Ikenmeyer, Mulmuley, and Walter. We obtain similar results for the related problem of approximating row sums of the character table of the symmetric group. Finally, we discuss an efficient quantum algorithm that approximates normalized Kronecker coefficients to inverse-polynomial additive error.
We establish optimal error bounds on time-splitting methods for the nonlinear Schr\"odinger equation with low regularity potential and typical power-type nonlinearity $ f(\rho) = \rho^\sigma $, where $ \rho:=|\psi|^2 $ is the density with $ \psi $ the wave function and $ \sigma > 0 $ the exponent of the nonlinearity. For the first-order Lie--Trotter time-splitting method, an optimal $ L^2 $-norm error bound is proved for $L^\infty$-potential and $ \sigma > 0 $, and an optimal $H^1$-norm error bound is obtained for $ W^{1, 4} $-potential and $ \sigma \geq 1/2 $. For the second-order Strang time-splitting method, an optimal $ L^2 $-norm error bound is established for $H^2$-potential and $ \sigma \geq 1 $, and an optimal $H^1$-norm error bound is proved for $H^3$-potential and $ \sigma \geq 3/2 $. Compared with the error estimates of time-splitting methods in the literature, our optimal error bounds either improve the convergence rates under the same regularity assumptions or significantly relax the regularity requirements on the potential and nonlinearity needed for optimal convergence orders. A key ingredient in our proof is a new technique called \textit{regularity compensation oscillation} (RCO), in which low frequency modes are analyzed by phase cancellation and high frequency modes are estimated by the regularity of the solution. Extensive numerical results are reported to confirm our error estimates and to demonstrate that they are sharp.
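The splitting idea can be illustrated on a toy ODE in place of the Schr\"odinger equation: compose the exact flows of the two parts of $u' = -u + u^2$ and observe the first-order convergence of the Lie--Trotter composition. The model problem and step counts are our illustrative choices, not the paper's setting.

```python
import math

def lie_trotter(u0, T, N):
    # Lie-Trotter splitting for u' = -u + u**2: at each step, apply the
    # exact flow of u' = u**2, then the exact flow of u' = -u.
    h = T / N
    u = u0
    for _ in range(N):
        u = u / (1 - h * u)    # nonlinear substep (exact flow over h)
        u = u * math.exp(-h)   # linear substep (exact flow over h)
    return u

def exact(u0, T):
    # Closed-form solution of the logistic-type ODE u' = -u + u**2.
    return u0 / (u0 + (1 - u0) * math.exp(T))

u0, T = 0.5, 1.0
errs = [abs(lie_trotter(u0, T, N) - exact(u0, T)) for N in (50, 100, 200)]
ratios = [errs[i] / errs[i + 1] for i in range(2)]
print(ratios)  # both close to 2: halving h halves the error (first order)
```

A Strang composition (half-step, full-step, half-step) would show error ratios near 4 instead, reflecting its second-order accuracy.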
Introduction: Oblique Target-rotation in the context of exploratory factor analysis is a relevant method for the investigation of the oblique independent clusters model. It has been argued that minimizing single cross-loadings by means of target rotation may lead to large effects of sampling error on the target-rotated factor solutions. Method: In order to minimize the effects of sampling error on the results of Target-rotation, we propose to compute the mean cross-loadings for each block of salient loadings of the independent clusters model and to perform target rotation for the block-wise mean cross-loadings. The resulting transformation matrix is then applied to the complete unrotated loading matrix in order to produce mean Target-rotated factors. Results: A simulation study based on correlated independent factor models revealed that mean oblique Target-rotation resulted in a smaller negative bias of factor inter-correlations than conventional Target-rotation based on single loadings, especially when the sample size was small and the number of factors was large. An empirical example revealed that the similarity of Target-rotated factors computed for small subsamples to the Target-rotated factors of the total sample was more pronounced for mean Target-rotation than for conventional Target-rotation. Discussion: Mean Target-rotation can be recommended in the context of oblique independent factor models, especially for small samples. An R script and an SPSS script for this form of Target-rotation are provided in the Appendix.
By a semi-Lagrangian change of coordinates, the hydrostatic Euler equations describing free-surface sheared flows are rewritten as a system of quasilinear equations, whose stability conditions can be determined by analyzing its hyperbolic structure. This new system can be written as a quasilinear system in time and the horizontal variables and involves no vertical derivatives. However, the coefficients in front of the horizontal derivatives include an integral operator acting on the new vertical variable. The spectrum of these operators is studied in detail; in particular, it contains a continuous part. Riemann invariants are then determined as quantities conserved along the characteristic curves. Examples of solutions are provided, in particular stationary solutions and solutions blowing up in finite time. Finally, we propose an exact multi-layer $\mathbb{P}_0$-discretization, which could be used to solve this semi-Lagrangian system numerically, and analyze the eigenvalues of the corresponding discretized operator to investigate the hyperbolic nature of the approximated system.
A palindromic substring $T[i.. j]$ of a string $T$ is said to be a shortest unique palindromic substring (SUPS) in $T$ for an interval $[p, q]$ if $T[i.. j]$ is a shortest palindromic substring such that $T[i.. j]$ occurs only once in $T$ and $[i, j]$ contains $[p, q]$. The SUPS problem is, given a string $T$ of length $n$, to construct a data structure that can compute all the SUPSs for any given query interval. It is known that any SUPS query can be answered in $O(\alpha)$ time after $O(n)$-time preprocessing, where $\alpha$ is the number of SUPSs to output [Inoue et al., 2018]. In this paper, we first show that $\alpha$ is at most $4$ and that this upper bound is tight. We also show that the total sum of the lengths of the minimal unique palindromic substrings of a string $T$, which are strongly related to SUPSs, is $O(n)$. Then, we present the first $O(n)$-bit data structures that can answer any SUPS query in constant time. We also present an algorithm for the SUPS problem over a sliding window that answers any query in $O(\log\log W)$ time and updates the data structures in amortized $O(\log\sigma + \log\log W)$ time, where $W$ is the size of the window and $\sigma$ is the alphabet size. Furthermore, we consider the SUPS problem in the after-edit model and present an efficient algorithm: one that uses $O(n)$ time for preprocessing and answers any $k$ SUPS queries in $O(\log n\log\log n + k\log\log n)$ time after a single-character substitution. Finally, as a by-product, we propose a fully dynamic data structure for range minimum queries (RmQs) under the constraint that the width of each query range is poly-logarithmic. This constrained RmQ data structure can answer such a query in constant time and supports a single-element edit operation in amortized constant time.
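For concreteness, a naive reference computation of all SUPSs for a query interval (cubic time, for illustration only; not the paper's data structure) can be sketched as:

```python
def sups(T, p, q):
    # All SUPSs of T for the interval [p, q] (0-indexed, inclusive):
    # shortest palindromic substrings T[i..j] with i <= p <= q <= j
    # that occur exactly once in T.
    n = len(T)

    def occurrences(s):
        # Overlap-aware occurrence count (str.count misses overlaps).
        return sum(1 for k in range(n - len(s) + 1) if T[k:k + len(s)] == s)

    cand = []
    for i in range(p + 1):
        for j in range(q, n):
            s = T[i:j + 1]
            if s == s[::-1] and occurrences(s) == 1:
                cand.append((i, j))
    if not cand:
        return []
    m = min(j - i for i, j in cand)
    return [(i, j) for i, j in cand if j - i == m]

print(sups("abaaba", 2, 2))  # [(2, 3)]: "aa" is the unique shortest answer
```

Running this brute force over many intervals and strings is a quick way to sanity-check the $\alpha \le 4$ bound on small inputs.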