This paper presents the first systematic study of the fundamental problem of seeking the optimal cell average decomposition (OCAD), which arises in constructing efficient high-order bound-preserving (BP) numerical methods within the Zhang--Shu framework. Since it was proposed in 2010, the Zhang--Shu framework has attracted extensive attention and has been applied to developing many high-order BP discontinuous Galerkin and finite volume schemes for various hyperbolic equations. An essential ingredient in the framework is the decomposition of the cell averages of the numerical solution into a convex combination of the solution values at certain quadrature points. The classic CAD originally proposed by Zhang and Shu has been widely used over the past decade. However, feasible CADs are not unique, and different CADs affect the theoretical BP CFL condition and thus the computational cost. Zhang and Shu only verified, for the 1D $\mathbb P^2$ and $\mathbb P^3$ spaces, that their classic CAD based on the Gauss--Lobatto quadrature is optimal in the sense of achieving the mildest BP CFL conditions. In this paper, we establish a general theory for studying the OCAD problem on Cartesian meshes in 1D and 2D. We rigorously prove that the classic CAD is optimal for the general 1D $\mathbb P^k$ spaces and the general 2D $\mathbb Q^k$ spaces of arbitrary $k$. For the widely used 2D $\mathbb P^k$ spaces, the classic CAD is not optimal; we establish a general approach to find the genuine OCAD and propose a more practical quasi-optimal CAD, both of which yield much milder BP CFL conditions than the classic CAD. As a result, our OCAD and quasi-optimal CAD notably improve the efficiency of high-order BP schemes for a large class of hyperbolic and convection-dominated equations, at the small cost of a slight, local modification to the implementation code.
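To make the decomposition concrete, here is a minimal sketch of the classic 1D CAD in a common formulation (the notation is ours, not necessarily the paper's). For $p \in \mathbb P^k$ on a cell $I_i$ of width $\Delta x$, the $N$-point Gauss--Lobatto rule with $2N - 3 \geq k$ is exact for the cell average, giving the convex combination
\[
\bar p_i = \frac{1}{\Delta x}\int_{I_i} p(x)\,\mathrm{d}x = \sum_{j=1}^{N} \hat\omega_j\, p\bigl(\hat x_j^{(i)}\bigr), \qquad \hat\omega_j > 0,\quad \sum_{j=1}^{N} \hat\omega_j = 1,
\]
where $\hat x_j^{(i)}$ are the Gauss--Lobatto nodes of $I_i$. Enforcing the bounds at these nodes yields, in typical statements of the framework, a BP CFL condition whose constant scales with the first weight $\hat\omega_1 = \tfrac{1}{N(N-1)}$ (e.g. $\hat\omega_1 = \tfrac16$ for $N = 3$, which covers $\mathbb P^2$ and $\mathbb P^3$); a different feasible CAD changes this constant, which is precisely what the OCAD problem optimizes.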
In recent years, many connections have been made between minimal codes, a classical object in coding theory, and other remarkable structures in finite geometry and combinatorics. One of the main problems related to minimal codes is to give lower and upper bounds on the length $m(k,q)$ of the shortest minimal codes of a given dimension $k$ over the finite field $\mathbb{F}_q$. It has recently been proved that $m(k, q) \geq (q+1)(k-1)$. In this note, we prove that $\liminf_{k \rightarrow \infty} \frac{m(k, q)}{k} \geq q + \varepsilon(q)$, where $\varepsilon$ is an increasing function such that $1.52 < \varepsilon(2) \leq \varepsilon(q) \leq \sqrt{2} + \frac{1}{2}$. Hence, the previously known lower bound is not tight for large enough $k$. We then focus on the binary case and prove some structural results on minimal codes of length $3(k-1)$. As a byproduct, we show that the bound is not tight when $k \equiv 5 \pmod 8$, as well as for other small values of $k$.
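To spell out why the bound $m(k,q) \geq (q+1)(k-1)$ cannot remain tight, compare the two statements for $q = 2$ (a one-line computation from the constants above):
\[
\liminf_{k\to\infty} \frac{m(k,2)}{k} \;\geq\; 2 + \varepsilon(2) \;>\; 3.52, \qquad\text{while}\qquad \frac{(2+1)(k-1)}{k} = \frac{3(k-1)}{k} \longrightarrow 3,
\]
so for all sufficiently large $k$ one has $m(k,2) > 3(k-1)$, and the old bound is strict in the binary case.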
Multi-material design optimization problems can, after discretization, be solved by the iterative solution of simpler sub-problems which approximate the original problem at an expansion point to first order. In particular, models constructed from convex separable first-order approximations have a long and successful tradition in the design optimization community and have led to powerful optimization tools such as the prominently used method of moving asymptotes (MMA). In this paper, we introduce several new separable approximations to a model problem and examine them in terms of accuracy and speed of evaluation. The models can, in general, be nonconvex and are based on the Sherman--Morrison--Woodbury matrix identity on the one hand, and on the mathematical concept of topological derivatives on the other. We show a surprising relation between two models originating from these two concepts, which at first sight appear very different. Numerical experiments show a high level of accuracy for two of our proposed models, whose evaluation can also be performed efficiently once enough data has been precomputed in an offline phase. Additionally, it is demonstrated that suboptimal decisions can be avoided using our most accurate models.
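For reference, the matrix identity behind the first family of models is the standard Sherman--Morrison--Woodbury formula (stated here in its usual form; how it is applied to the discretized design problem is a modeling step we do not reproduce):
\[
(A + UCV)^{-1} = A^{-1} - A^{-1}U\bigl(C^{-1} + V A^{-1} U\bigr)^{-1} V A^{-1},
\]
valid whenever $A$ and $C^{-1} + V A^{-1} U$ are invertible. A plausible reading of the offline/online split above is that a local material change enters the system matrix as a low-rank update $UCV$, so the perturbed solve can be recovered cheaply from factorizations of $A$ prepared in advance.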
In this paper, we study error bounds for {\em Bayesian quadrature} (BQ), with an emphasis on noisy settings, randomized algorithms, and average-case performance measures. We seek to approximate the integral of functions in a {\em Reproducing Kernel Hilbert Space} (RKHS), particularly focusing on the Mat\'ern-$\nu$ and squared exponential (SE) kernels, with samples from the function potentially being corrupted by Gaussian noise. We provide a two-step meta-algorithm that serves as a general tool for relating the average-case quadrature error with the $L^2$-function approximation error. When specialized to the Mat\'ern kernel, we recover an existing near-optimal error rate while avoiding the existing method of repeatedly sampling points. When specialized to other settings, we obtain new average-case results for settings including the SE kernel with noise and the Mat\'ern kernel with misspecification. Finally, we present algorithm-independent lower bounds that have greater generality and/or give distinct proofs compared to existing ones.
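For concreteness, the two kernel families appear in the literature in (up to conventions) the following standard forms, with length-scale $\ell > 0$:
\[
k_{\mathrm{SE}}(x,x') = \exp\Bigl(-\frac{\|x-x'\|^2}{2\ell^2}\Bigr), \qquad
k_{\nu}(x,x') = \frac{2^{1-\nu}}{\Gamma(\nu)}\Bigl(\frac{\sqrt{2\nu}\,\|x-x'\|}{\ell}\Bigr)^{\nu} K_{\nu}\Bigl(\frac{\sqrt{2\nu}\,\|x-x'\|}{\ell}\Bigr),
\]
where $K_\nu$ is the modified Bessel function of the second kind; the smoothness parameter $\nu$ governs the regularity of the RKHS, which is what drives the attainable error rates.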
In this paper, we construct general machinery for proving Sum-of-Squares lower bounds on certification problems by generalizing the techniques used by Barak et al. [FOCS 2016] to prove Sum-of-Squares lower bounds for planted clique. Using this machinery, we prove degree $n^{\epsilon}$ Sum-of-Squares lower bounds for tensor PCA, the Wishart model of sparse PCA, and a variant of planted clique which we call planted slightly denser subgraph.
We propose in this paper efficient first- and second-order time-stepping schemes for the time-dependent Navier-Stokes-Nernst-Planck-Poisson equations. The proposed schemes are constructed using an auxiliary-variable reformulation and a sophisticated treatment of the terms coupling the different equations. By introducing a dynamic equation for the auxiliary variable and reformulating the original equations into an equivalent system, we construct first- and second-order semi-implicit linearized schemes for the underlying problem. The main advantages of the proposed method are: (1) the schemes are unconditionally stable in the sense that a discrete energy decays during time stepping; (2) the concentration components of the discrete solution preserve positivity and mass conservation; (3) a careful implementation shows that the proposed schemes can be realized very efficiently, with computational complexity close to that of a semi-implicit scheme. Some numerical examples are presented to demonstrate the accuracy and performance of the proposed method. To the best of our knowledge, this is the first second-order method that satisfies all the above properties for the Navier-Stokes-Nernst-Planck-Poisson equations.
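One common construction of this type is the scalar auxiliary variable (SAV) approach, sketched here only generically (this need not be the paper's exact reformulation). Writing $E_1(u)$ for the nonlinear part of the free energy and choosing $C > 0$ so that $E_1 + C > 0$, one sets
\[
r(t) = \sqrt{E_1(u) + C}, \qquad \frac{\mathrm{d}r}{\mathrm{d}t} = \frac{1}{2\sqrt{E_1(u) + C}}\Bigl\langle \frac{\delta E_1}{\delta u},\, \partial_t u \Bigr\rangle,
\]
and replaces the nonlinear term by $\frac{r}{\sqrt{E_1(u)+C}}\frac{\delta E_1}{\delta u}$ in the PDE. Treating $r$ semi-implicitly then yields linearized schemes for which a modified discrete energy decays unconditionally, matching property (1) above.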
Learning causal relationships between variables is a fundamental task in causal inference, and directed acyclic graphs (DAGs) are a popular choice for representing these relationships. As one can recover a causal graph only up to its Markov equivalence class from observations, interventions are often used for the recovery task. Interventions are costly in general, and it is important to design algorithms that minimize the number of interventions performed. In this work, we study the problem of identifying the smallest set of interventions required to learn the causal relationships for a subset of edges (target edges). Under the assumptions of faithfulness, causal sufficiency, and ideal interventions, we study this problem in two settings: when the underlying ground-truth causal graph is known (subset verification) and when it is unknown (subset search). For the subset verification problem, we provide an efficient algorithm to compute a minimum-sized interventional set; we further extend these results to bounded-size non-atomic interventions and node-dependent interventional costs. For the subset search problem, we show that in the worst case no algorithm (even with adaptivity or randomization) can achieve an approximation ratio, measured against the subset verification number, that is asymptotically better than the size of a vertex cover of the target edges. This result is surprising, as there exists a logarithmic approximation algorithm for the search problem when we wish to recover the whole causal graph. To obtain our results, we prove several interesting structural properties of interventional causal graphs that we believe have applications beyond the subset verification/search problems studied here.
The nonconvex formulation of the matrix completion problem has received significant attention in recent years due to its affordable complexity compared to the convex formulation. Gradient descent (GD) is the simplest, yet efficient, baseline algorithm for solving nonconvex optimization problems. The success of GD combined with random initialization has been witnessed in many problems, in both theory and practice. However, previous works on matrix completion require either careful initialization or regularizers to prove the convergence of GD. In this work, we study rank-1 symmetric matrix completion and prove that GD converges to the ground truth when a small random initialization is used. We show that within a logarithmic number of iterations, the trajectory enters the region where local convergence occurs. We provide an upper bound on the initialization size that is sufficient to guarantee convergence, and we show that a larger initialization can be used as more samples become available. We observe that the implicit regularization effect of GD plays a critical role in the analysis: along the entire trajectory, it prevents each entry from becoming much larger than the others.
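The analyzed setting is easy to reproduce numerically. Below is a small self-contained toy implementation (our own, with arbitrary hyperparameters): rank-1 symmetric matrix completion $M = x^\ast (x^\ast)^\top$ observed on a random symmetric entry set, solved by plain gradient descent from a small random initialization.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 100, 0.3                      # matrix size and entry-sampling rate (arbitrary)
    eta, steps = 0.2, 3000               # step size and iteration budget (arbitrary)

    x_star = rng.standard_normal(n) / np.sqrt(n)   # ground-truth factor, ||x*|| ~ 1
    M = np.outer(x_star, x_star)                   # rank-1 symmetric target
    Omega = rng.random((n, n)) < p                 # observed entries, symmetrized
    Omega = Omega | Omega.T

    x = 1e-6 * rng.standard_normal(n)              # small random initialization
    for _ in range(steps):
        R = Omega * (np.outer(x, x) - M)           # residual on the observed entries
        x -= eta * (R + R.T) @ x                   # gradient of 0.5 * ||P_Omega(x x^T - M)||_F^2

    # x converges to +/- x_star; compare the completed matrix to the ground truth.
    print(np.linalg.norm(np.outer(x, x) - M) / np.linalg.norm(M))

Consistent with the result described above, the iterate stays tiny for an initial stretch (the implicit-regularization phase) before growing toward the ground truth, and no explicit regularizer or spectral initialization is needed.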
Text-to-image models, which can generate high-quality images based on textual input, have recently enabled various content-creation tools. Despite significantly affecting a wide range of downstream applications, the distributions of these generated images are still not fully understood, especially when it comes to the potential stereotypical attributes of different genders. In this work, we propose a paradigm (Gender Presentation Differences) that utilizes fine-grained self-presentation attributes to study how gender is presented differently in text-to-image models. By probing gender indicators in the input text (e.g., "a woman" or "a man"), we quantify the frequency differences of presentation-centric attributes (e.g., "a shirt" and "a dress") through human annotation and introduce a novel metric: GEP. Furthermore, we propose an automatic method to estimate such differences. The automatic GEP metric based on our approach yields a higher correlation with human annotations than that based on existing CLIP scores, consistently across three state-of-the-art text-to-image models. Finally, we demonstrate the generalization ability of our metrics in the context of gender stereotypes related to occupations.
The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will appeal to anyone interested in the marriage of deep learning with dynamical systems, and we hope it will provide a useful reference for the current state of the art.
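The claim that residual networks "are discretisations" can be made precise in one line (a standard observation, stated here in our own notation): the explicit Euler method applied to $\mathrm{d}y/\mathrm{d}t = f_\theta(t, y)$ with unit step size reads
\[
y_{n+1} = y_n + f_{\theta_n}(y_n),
\]
which is exactly the update of a residual block; conversely, letting the step size tend to zero recovers the neural ODE $y(T) = y(0) + \int_0^T f_\theta\bigl(t, y(t)\bigr)\,\mathrm{d}t$.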
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
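Since the formula is given explicitly, the re-weighting scheme can be sketched in a few lines (our own illustration; the normalization of the weights and the choice of base loss are implementation details left open by the abstract):

    import numpy as np

    def class_balanced_weights(counts, beta=0.999):
        """Per-class weights proportional to 1 / E_n, where E_n = (1 - beta^n) / (1 - beta)."""
        counts = np.asarray(counts, dtype=float)
        effective_num = (1.0 - np.power(beta, counts)) / (1.0 - beta)
        weights = 1.0 / effective_num
        return weights * len(counts) / weights.sum()   # normalize to sum to the number of classes

    # Long-tailed toy example: a head class with 10,000 samples down to a tail class with 10.
    print(class_balanced_weights([10000, 1000, 100, 10]))

As $\beta \to 0$ the effective numbers all tend to $1$ and the weights become uniform (no re-weighting), while as $\beta \to 1$ the effective number tends to $n$ and the weights approach inverse class frequency; $\beta$ thus interpolates between the two regimes.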