We show that for each reduced odd latin square of even order there exists at least one map whose image is a reduced even latin square of the same order, and we prove that this map is injective. As a consequence, the number of even latin squares of even order is bounded from below by the number of odd latin squares of the same order. This gives a positive answer to the Alon-Tarsi conjecture on even latin squares.
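For the parity convention at stake here (the standard one in work on the Alon-Tarsi conjecture), a latin square is even or odd according to whether the product of the signs of all its rows and columns, each read as a permutation, is $+1$ or $-1$. A minimal Python sketch of this sign computation (our illustration; function names are ours):

```python
import numpy as np

def perm_sign(p):
    """Sign of a permutation of {0, ..., n-1} via its cycle decomposition."""
    visited = [False] * len(p)
    sign = 1
    for i in range(len(p)):
        if visited[i]:
            continue
        j, length = i, 0
        while not visited[j]:
            visited[j] = True
            j = int(p[j])
            length += 1
        if length % 2 == 0:  # each even-length cycle flips the sign
            sign = -sign
    return sign

def latin_square_sign(L):
    """+1 if L is an even latin square, -1 if it is odd."""
    L = np.asarray(L)
    s = 1
    for perm in list(L) + list(L.T):
        s *= perm_sign(perm)
    return s

# The reduced cyclic square of order 3: every row and column is an even
# permutation (the identity or a 3-cycle), so the square is even.
print(latin_square_sign([[0, 1, 2], [1, 2, 0], [2, 0, 1]]))  # 1
```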
Complexity classes defined by modifying the acceptance condition of NP computations have been extensively studied. For example, the class UP, which contains decision problems solvable by non-deterministic polynomial-time Turing machines (NPTMs) with at most one accepting path -- equivalently, NP problems with at most one solution -- has played a significant role in cryptography, since $\mathrm{P} \neq \mathrm{UP}$ is equivalent to the existence of one-way functions. In this paper, we define and examine variants of several such classes in which the acceptance condition concerns the total number of computation paths of an NPTM instead of the number of accepting ones. This direction reflects the relationship between the counting classes #P and TotP, which are the classes of functions that count the number of accepting paths and the total number of paths of NPTMs, respectively. The former is the well-studied class of counting versions of NP problems, introduced by Valiant (1979). The latter contains all self-reducible counting problems in #P whose decision version is in P, among them prominent #P-complete problems such as Non-negative Permanent, #PerfMatch, and #Dnf-Sat, and thus plays a significant role in the study of approximable counting problems. We show that almost all classes introduced in this work coincide with their '# accepting paths'-definable counterparts. As a result, we present a novel family of problems, defined via TotP-complete problems under parsimonious reductions, that are complete for the classes $\oplus\mathrm{P}$, $\mathrm{Mod}_k\mathrm{P}$, $\mathrm{SPP}$, $\mathrm{WPP}$, $\mathrm{C}_{=}\mathrm{P}$, and $\mathrm{PP}$.
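As a toy illustration of the distinction this builds on (ours, not from the paper): a #P-style quantity counts the accepting paths of a branching computation, whereas a TotP-style quantity counts all of its paths.

```python
def count_paths(formula, assignment, vars_left):
    """Explore the full binary branching tree over the remaining
    variables; a leaf 'accepts' iff the assignment satisfies formula.
    Returns (total number of paths, number of accepting paths)."""
    if not vars_left:
        return 1, (1 if formula(assignment) else 0)
    v, rest = vars_left[0], vars_left[1:]
    t0, a0 = count_paths(formula, {**assignment, v: False}, rest)
    t1, a1 = count_paths(formula, {**assignment, v: True}, rest)
    return t0 + t1, a0 + a1

# phi(x, y) = x OR y: 3 of the 4 assignments satisfy it.
phi = lambda a: a["x"] or a["y"]
print(count_paths(phi, {}, ["x", "y"]))  # (4, 3): #P counts the 3 accepting
# paths; a TotP-style machine would instead be engineered so that its
# *total* number of paths encodes the value 3.
```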
Variational regularization is commonly used to solve linear inverse problems; it augments a data fidelity term with a regularizer, which promotes a priori information and is weighted by a regularization parameter. Selection of an appropriate regularization parameter is critical, with different choices leading to very different reconstructions. Existing strategies such as the discrepancy principle and the L-curve can be used to determine a suitable parameter value, but in recent years a supervised machine learning approach called bilevel learning has been employed. Bilevel learning is a powerful framework to determine optimal parameters, and involves solving a nested optimisation problem. While previous strategies enjoy various theoretical results, the well-posedness of bilevel learning in this setting is still a developing field. One necessary property is positivity of the determined regularization parameter. In this work, we provide a new condition that better characterises positivity of optimal regularization parameters than the existing theory. Numerical results verify and explore this new condition for both small- and large-dimensional problems.
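A minimal sketch of the nested structure (our illustration, with Tikhonov regularization as a stand-in regularizer and a grid search as a stand-in upper-level solver; all names are ours):

```python
import numpy as np

def lower_level(A, y, alpha):
    """Tikhonov-regularized reconstruction: argmin_x ||Ax-y||^2 + alpha*||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def upper_level(A, training_pairs, alphas):
    """Bilevel learning by grid search: pick the regularization parameter
    minimizing the reconstruction error over (ground truth, data) pairs."""
    def loss(alpha):
        return np.mean([np.sum((lower_level(A, y, alpha) - x_true) ** 2)
                        for x_true, y in training_pairs])
    return min(alphas, key=loss)

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
x_true = rng.standard_normal(20)
pairs = [(x_true, A @ x_true + 0.1 * rng.standard_normal(30))]
print(upper_level(A, pairs, np.logspace(-4, 1, 50)))
```

Positivity of the parameter returned by the upper level is precisely the well-posedness property that the new condition characterises.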
Maximal regularity for the Stokes operator plays a crucial role in the theory of the non-stationary Navier--Stokes equations. In this paper, we consider the finite element semi-discretization of the non-stationary Stokes problem and establish the discrete counterpart of maximal regularity in $L^q$ for $q \in \left( \frac{2N}{N+2}, \frac{2N}{N-2} \right)$. For the proof, we introduce a temporally regularized Green's function; with the aid of this notion, we establish discrete maximal regularity without recourse to the Gaussian estimate. As an application, we present $L^p(0,T;L^q(\Omega))$-type error estimates for the approximation of the non-stationary Stokes problem.
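Schematically, with $u_h$ the semi-discrete velocity, $A_h$ the discrete Stokes operator, and $f$ the forcing term (the precise statement, including the constant $C$, is as established in the paper), the discrete maximal regularity estimate takes the form
$$
\|\partial_t u_h\|_{L^p(0,T;L^q(\Omega))} + \|A_h u_h\|_{L^p(0,T;L^q(\Omega))} \leq C \, \|f\|_{L^p(0,T;L^q(\Omega))}, \qquad q \in \left( \tfrac{2N}{N+2}, \tfrac{2N}{N-2} \right).
$$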
We establish globally optimal solutions to a class of fractional optimization problems over a class of constraint sets with the following key characteristics: 1) The numerator and the denominator of the objective function are both convex, semi-algebraic, Lipschitz continuous and differentiable with Lipschitz continuous gradients on the constraint set. 2) The constraint set is closed, convex and semi-algebraic. Compared with Dinkelbach's approach, the novelty of our method is threefold: 1) Dinkelbach's approach must solve a concave maximization problem in each iteration, for which a solution is nontrivial to obtain, whereas ours performs only a single proximal gradient step per iteration. 2) Dinkelbach's approach requires a point at which the numerator is nonnegative in order to start the algorithm, whereas ours does not, making it applicable to a much wider class of problems. 3) Dinkelbach's approach requires a closed and bounded constraint set, whereas ours needs only closedness, not boundedness. Therefore, our approach is viable for many more practical models, such as optimizing the Sharpe ratio (SR) or the information ratio in mathematical finance. Numerical experiments show that our approach achieves the ground-truth solutions in two simple examples. For real-world financial data, it outperforms several existing approaches for SR maximization.
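For contrast, here is the classical Dinkelbach iteration in a minimal Python sketch, on a toy one-dimensional instance where the inner subproblem (the per-iteration cost our approach avoids) is solved by brute-force grid search:

```python
import numpy as np

def dinkelbach(f, g, argmax_aux, x0, tol=1e-10, max_iter=100):
    """Classical Dinkelbach scheme for max f(x)/g(x) with g > 0:
    argmax_aux(lmbda) must return a maximizer of f(x) - lmbda * g(x)."""
    x = x0
    for _ in range(max_iter):
        lmbda = f(x) / g(x)
        x = argmax_aux(lmbda)
        if abs(f(x) - lmbda * g(x)) < tol:
            break
    return x

# Toy instance: maximize (1 + x) / (1 + x^2) over [0, 2].
grid = np.linspace(0.0, 2.0, 20001)
f = lambda x: 1.0 + x
g = lambda x: 1.0 + x ** 2
argmax_aux = lambda lm: grid[np.argmax(f(grid) - lm * g(grid))]
print(dinkelbach(f, g, argmax_aux, x0=0.0))  # ~0.4142, i.e. sqrt(2) - 1
```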
This paper considers a machine-learning-inspired approach to hypothesis testing known as classifier/classification-accuracy testing ($\mathsf{CAT}$). In $\mathsf{CAT}$, one first trains a classifier by feeding it labeled synthetic samples generated by the null and alternative distributions; the classifier is then used to predict labels for the actual data samples. This method is widely used in practice when the null and alternative are specified only via simulators (as in many scientific experiments). We study goodness-of-fit, two-sample ($\mathsf{TS}$) and likelihood-free hypothesis testing ($\mathsf{LFHT}$), and show that $\mathsf{CAT}$ achieves (near-)minimax optimal sample complexity in both the dependence on the total-variation ($\mathsf{TV}$) separation $\epsilon$ and the probability of error $\delta$ in a variety of non-parametric settings, including discrete distributions, $d$-dimensional distributions with a smooth density, and the Gaussian sequence model. In particular, we settle the high-probability sample complexity of $\mathsf{LFHT}$ for each class. As another highlight, we recover the minimax optimal complexity of $\mathsf{TS}$ over discrete distributions, which was recently established by Diakonikolas et al. (2021). The corresponding $\mathsf{CAT}$ simply compares empirical frequencies in the first half of the data, and rejects the null when the classification accuracy on the second half is better than random.
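Following that last description, a minimal Python sketch of the discrete two-sample $\mathsf{CAT}$ (our illustration; the $1/\sqrt{n}$ rejection threshold below is a heuristic stand-in for the paper's calibrated test):

```python
import numpy as np

def cat_two_sample(x, y, alphabet_size):
    """Build a plug-in classifier from empirical frequencies on the first
    half of each sample, then reject H0: P = Q if its accuracy on the
    second half beats random guessing by a margin."""
    half = len(x) // 2
    px = np.bincount(x[:half], minlength=alphabet_size) / half
    qy = np.bincount(y[:half], minlength=alphabet_size) / half
    predict_x = px >= qy  # call a symbol 'from P' iff it is more frequent in x
    acc = 0.5 * (predict_x[x[half:]].mean() + (~predict_x)[y[half:]].mean())
    return acc > 0.5 + 1.0 / np.sqrt(len(x))  # heuristic threshold

rng = np.random.default_rng(1)
print(cat_two_sample(rng.integers(0, 10, 2000), rng.integers(0, 10, 2000), 10),  # expect False
      cat_two_sample(rng.integers(0, 10, 2000), rng.integers(0, 5, 2000), 10))   # expect True
```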
We examine the problem of variance components testing in general mixed effects models using the likelihood ratio test. We account for the presence of nuisance parameters, i.e. the fact that some untested variances might also be equal to zero. Two main issues arise in this context, leading to a non-regular setting. First, under the null hypothesis the true parameter value lies on the boundary of the parameter space; moreover, due to the presence of nuisance parameters, the exact location of these boundary points is unknown, which prevents the use of classical asymptotic theory of maximum likelihood estimation. Second, in the specific context of nonlinear mixed effects models, the Fisher information matrix is singular at the true parameter value. We address these two points by proposing a shrunken parametric bootstrap procedure, which is straightforward to apply even for nonlinear models. We show that the procedure is consistent, solving both the boundary and the singularity issues, and we provide a verifiable criterion for the applicability of our theoretical results. We show through a simulation study that, compared to the asymptotic approach, our procedure has better small-sample performance and is more robust to the presence of nuisance parameters. A real-data application is also provided.
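A minimal sketch of the bootstrap mechanism (ours; the shrinkage step, which pulls near-zero nuisance variance estimates toward the boundary before simulating, and the mixed effects model fits are abstracted away behind the callables):

```python
import numpy as np

def bootstrap_lrt_pvalue(loglik_null, loglik_full, simulate_null, data, B=999):
    """Parametric-bootstrap p-value for a likelihood ratio test: the null
    distribution of the statistic is approximated by recomputing it on
    datasets simulated from the fitted null model, sidestepping the
    non-standard boundary asymptotics."""
    stat = lambda d: 2.0 * (loglik_full(d) - loglik_null(d))
    observed = stat(data)
    boot = np.array([stat(simulate_null()) for _ in range(B)])
    return (1.0 + np.sum(boot >= observed)) / (B + 1.0)

# Toy stand-in model (N(mu, 1), H0: mu = 0) just to exercise the machinery;
# the paper's setting replaces this with a (non)linear mixed effects fit.
rng = np.random.default_rng(2)
data = rng.normal(0.3, 1.0, size=50)
ll = lambda d, mu: -0.5 * np.sum((d - mu) ** 2)  # log-likelihood up to a constant
print(bootstrap_lrt_pvalue(lambda d: ll(d, 0.0),
                           lambda d: ll(d, d.mean()),
                           lambda: rng.normal(0.0, 1.0, size=50),
                           data))
```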
In this article we develop a feasible version of the assumption-lean tests in Liu et al. 20 that can falsify an analyst's justification for the validity of a reported nominal $(1 - \alpha)$ Wald confidence interval (CI) centered at a double machine learning (DML) estimator for any member of the class of doubly robust (DR) functionals studied by Rotnitzky et al. 21. The class of DR functionals is broad and of central importance in economics and biostatistics. It strictly includes both (i) the class of mean-square continuous functionals that can be written as an expectation of an affine functional of a conditional expectation studied by Chernozhukov et al. 22 and (ii) the class of functionals studied by Robins et al. 08. The present state-of-the-art estimators for DR functionals $\psi$ are DML estimators $\hat{\psi}_{1}$, whose bias depends on the product of the rates at which two nuisance functions $b$ and $p$ are estimated. Most commonly, an analyst justifies the validity of her Wald CIs by proving that, under her complexity-reducing assumptions, the Cauchy-Schwarz (CS) upper bound for the bias of $\hat{\psi}_{1}$ is $o (n^{- 1 / 2})$. Thus, if our test rejects the hypothesis $H_{0}$ that the CS upper bound is $o (n^{- 1 / 2})$, we will have falsified the analyst's justification for the validity of her Wald CIs. In this work, we exhibit a valid assumption-lean falsification test of $H_{0}$ that does not rely on complexity-reducing assumptions on $b$, $p$, or their estimates $\hat{b}$, $\hat{p}$. Simulation experiments demonstrate how the proposed assumption-lean test can be used in practice. An unavoidable limitation of our methodology is that no assumption-lean test of $H_{0}$, including ours, can be a consistent test; thus failure of our test to reject is not meaningful evidence in favor of $H_{0}$.
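For concreteness, here is the object under scrutiny rather than the falsification test itself: a cross-fit DML estimator $\hat{\psi}_{1}$ for the average treatment effect, a canonical DR functional, with random forests as stand-ins for the nuisance estimates $\hat{b}, \hat{p}$ (a sketch under these assumptions, not the paper's code):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def dml_ate(X, A, Y, n_folds=2):
    """Cross-fit AIPW/DML estimate of the average treatment effect; its
    bias is controlled by the product of the estimation errors of the
    outcome model (b) and the propensity model (p)."""
    psi = np.zeros(len(Y))
    for train, test in KFold(n_folds, shuffle=True, random_state=0).split(X):
        b1 = RandomForestRegressor().fit(X[train][A[train] == 1], Y[train][A[train] == 1])
        b0 = RandomForestRegressor().fit(X[train][A[train] == 0], Y[train][A[train] == 0])
        e = (RandomForestClassifier().fit(X[train], A[train])
             .predict_proba(X[test])[:, 1].clip(0.01, 0.99))
        m1, m0 = b1.predict(X[test]), b0.predict(X[test])
        psi[test] = (m1 - m0 + A[test] * (Y[test] - m1) / e
                     - (1 - A[test]) * (Y[test] - m0) / (1 - e))
    return psi.mean(), psi.std() / np.sqrt(len(Y))

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 3))
A = (rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
Y = X[:, 0] + 2.0 * A + rng.standard_normal(500)
print(dml_ate(X, A, Y))  # point estimate near 2, with its standard error
```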
A novel positive dependence property is introduced, called positive measure inducing (PMI for short), which is fulfilled by numerous copula classes, including Gaussian, Fr\'echet, Farlie-Gumbel-Morgenstern and Frank copulas; it is conjectured that even all positive quadrant dependent Archimedean copulas meet this property. From a geometric viewpoint, a PMI copula concentrates more mass near the main diagonal than near the opposite diagonal. A striking feature of PMI copulas is that they impose an ordering on a certain class of copula-induced measures of concordance, the latter originating in Edwards et al. (2004) and including Spearman's rho $\rho$ and Gini's gamma $\gamma$, leading to numerous new inequalities such as $3 \gamma \geq 2 \rho$. The measures of concordance within this class are estimated using (classical) empirical copulas and the intrinsic construction via empirical checkerboard copulas, and the estimators' asymptotic behaviour is determined. Building upon the presented inequalities, asymptotic tests are constructed that have the potential to detect whether the underlying dependence structure of a given sample is PMI, which in turn can be used to exclude certain copula families from model building. The excellent performance of the tests is demonstrated in a simulation study and by means of a real-data example.
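A minimal Python sketch of rank-based sample versions of two measures in this class (standard estimators, shown here only to make the inequality $3 \gamma \geq 2 \rho$ concrete; the paper's checkerboard-based estimators and tests are more refined):

```python
import numpy as np

def ranks(x):
    """Ranks 1..n (ties are irrelevant for continuous data)."""
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(1, len(x) + 1)
    return r

def spearman_rho(x, y):
    return np.corrcoef(ranks(x), ranks(y))[0, 1]

def gini_gamma(x, y):
    n, r, s = len(x), ranks(x), ranks(y)
    return (np.sum(np.abs(r + s - n - 1)) - np.sum(np.abs(r - s))) / (n * n // 2)

# A Gaussian copula sample -- a PMI family per the abstract -- so the
# inequality 3*gamma >= 2*rho is expected to hold.
rng = np.random.default_rng(3)
z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=5000)
g, p = gini_gamma(z[:, 0], z[:, 1]), spearman_rho(z[:, 0], z[:, 1])
print(3 * g >= 2 * p, 3 * g, 2 * p)
```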
In this paper, we introduce the following new concept in graph drawing. The task is to find a small collection of drawings of a graph such that, together, they satisfy some property that is useful for graph visualization. We propose investigating the property that every edge is uncrossed in at least one drawing of the collection, and we call such a collection uncrossed. This property is motivated by the quintessential problem of the crossing number, which asks for a drawing of a graph in the plane with the minimum number of edge crossings. Indeed, if we are allowed to visualize only one drawing, then the one minimizing the number of crossings is probably the neatest for a first orientation. However, a collection of drawings, each highlighting a different aspect of the graph without any crossings, could shed even more light on the graph's structure. We propose two definitions. First, the uncrossed number minimizes the number of drawings in a collection satisfying the uncrossed property. Second, the uncrossed crossing number minimizes the total number of crossings in a collection satisfying the uncrossed property. For both definitions, we establish initial results. We prove that computing the uncrossed crossing number is NP-hard, but that it admits an FPT algorithm parameterized by the solution size.
We consider Shor's quantum factoring algorithm in the setting of noisy quantum gates. Under a generic model of random noise for (controlled) rotation gates, we prove that the algorithm fails to factor integers of the form $pq$, with $p$ and $q$ drawn from a well-defined set of primes of positive density, whenever the noise exceeds a level that is vanishingly small in $n$, the number of bits of the integer to be factored. We further prove that, with probability $1 - o(1)$ over random prime pairs $(p,q)$, Shor's factoring algorithm does not factor numbers of the form $pq$ in the presence of the same level of random noise.
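A crude numerical proxy for the scaling at issue (our toy model, not the paper's analysis): if each of the $\Theta(n^2)$ controlled rotations in the $n$-qubit quantum Fourier transform picks up an independent angular error of standard deviation $\sigma$, the accumulated phase error grows like $\sigma n$, so $\sigma$ must shrink with $n$ for the interference pattern of period finding to survive.

```python
import numpy as np

def rms_phase_error(n, sigma, trials=1000, seed=5):
    """RMS of the summed angular error over the n(n-1)/2 controlled
    rotations of an n-qubit QFT, each perturbed by N(0, sigma^2) noise;
    this concentrates around sigma * sqrt(n(n-1)/2)."""
    rng = np.random.default_rng(seed)
    m = n * (n - 1) // 2
    errors = rng.normal(0.0, sigma, size=(trials, m)).sum(axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

for n in (8, 32, 128):
    print(n, rms_phase_error(n, sigma=1e-3))  # grows roughly linearly in n
```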