We design a deterministic subexponential-time algorithm that takes as input a multivariate polynomial $f$ computed by a constant-depth circuit over the rational numbers, and outputs a list $L$ of circuits (of unbounded depth and possibly with division gates) that contains all irreducible factors of $f$ computable by constant-depth circuits. This list $L$ might also include circuits that are spurious: they either do not correspond to factors of $f$ or are not even well-defined, e.g., the input to a division gate may be a sub-circuit that computes the identically zero polynomial. The key technical ingredient of our algorithm is a notion of the pseudo-resultant of $f$ and a factor $g$, which serves as a proxy for the resultant of $g$ and $f/g$, with the advantage that the circuit complexity of the pseudo-resultant is comparable to that of $f$ and $g$. This notion, which might be of independent interest, together with the recent results of Limaye, Srinivasan and Tavenas, helps us derandomize a key step of multivariate polynomial factorization algorithms: deterministically finding a good starting point for Newton iteration when both the input polynomial and the irreducible factor of interest have small constant-depth circuits.
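The pseudo-resultant construction itself is not reproduced here; as a point of reference, the classical quantity it stands in for, the resultant of $g$ and $f/g$, can be computed symbolically. A minimal sketch with sympy, on an illustrative factorable $f$ (the polynomials below are not from the paper):

```python
# A minimal sketch of the classical quantity the pseudo-resultant
# proxies: the resultant of a factor g and the cofactor f/g, computed
# with sympy. The concrete polynomials are illustrative only.
from sympy import symbols, div, resultant, expand

x, y = symbols("x y")
g = x**2 + y                # a known irreducible factor (illustrative)
f = expand(g * (x + y**2))  # f = g * (f/g)

cofactor, rem = div(f, g, x)   # polynomial division of f by g in x
assert rem == 0                # g divides f exactly

# Res_x(g, f/g) is nonzero iff g and f/g share no common root in x,
# the kind of separation a good Newton iteration start requires.
print(resultant(g, cofactor, x))   # -> y**4 + y
```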
Numerical differential equation solvers have long served as an effective alternative to analytical methods for solving differential equations quickly and conveniently. One such category is boundary value problem solvers, which are used to solve real-world problems formulated as differential equations with boundary conditions. These solvers require certain numerical settings that affect both whether the equations can be solved and how well the solver performs. Systematic fine-tuning of these settings is required to obtain the desired solution and performance. Currently, these settings are either selected by trial and error or require domain expertise. In this paper, we propose a machine learning-based optimization workflow for fine-tuning the numerical settings, reducing the time and domain expertise required in the process. In the evaluation section, we discuss the scalability, stability, and reliability of the proposed workflow, and we demonstrate it on a numerical boundary value problem solver.
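To make the tuning problem concrete, here is a minimal sketch of the kind of settings sweep such a workflow automates, shown as a plain grid search over scipy's `solve_bvp` settings on Bratu's problem. The test problem, the tolerance/mesh grid, and the scoring rule are illustrative assumptions; the paper's ML-guided search is not reproduced.

```python
# A minimal sketch of the settings-tuning loop the workflow automates,
# shown as a plain grid search (the paper replaces this with an
# ML-guided search). Problem and grids are illustrative.
import itertools
import numpy as np
from scipy.integrate import solve_bvp

def ode(x, y):              # Bratu's problem: y'' + exp(y) = 0
    return np.vstack([y[1], -np.exp(y[0])])

def bc(ya, yb):             # boundary conditions y(0) = y(1) = 0
    return np.array([ya[0], yb[0]])

best = None
for tol, n_mesh in itertools.product([1e-3, 1e-6], [5, 50]):
    x = np.linspace(0, 1, n_mesh)
    sol = solve_bvp(ode, bc, x, np.zeros((2, n_mesh)), tol=tol)
    score = (sol.status == 0, -sol.x.size)   # prefer success, small mesh
    if best is None or score > best[0]:
        best = (score, tol, n_mesh)
print("best settings:", best[1:])
```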
We consider the constrained sampling problem where the goal is to sample from a target distribution $\pi(x)\propto e^{-f(x)}$ when $x$ is constrained to lie on a convex body $\mathcal{C}$. Motivated by penalty methods from continuous optimization, we propose penalized Langevin Dynamics (PLD) and penalized underdamped Langevin Monte Carlo (PULMC) methods that convert the constrained sampling problem into an unconstrained sampling problem by introducing a penalty function for constraint violations. When $f$ is smooth and gradients are available, we get $\tilde{\mathcal{O}}(d/\varepsilon^{10})$ iteration complexity for PLD to sample the target up to an $\varepsilon$-error, where the error is measured in the TV distance and $\tilde{\mathcal{O}}(\cdot)$ hides logarithmic factors. For PULMC, we improve the result to $\tilde{\mathcal{O}}(\sqrt{d}/\varepsilon^{7})$ when the Hessian of $f$ is Lipschitz and the boundary of $\mathcal{C}$ is sufficiently smooth. To our knowledge, these are the first convergence results for underdamped Langevin Monte Carlo methods in the constrained sampling setting that handle non-convex $f$, and they provide guarantees with the best dimension dependency among existing methods with deterministic gradients. If unbiased stochastic estimates of the gradient of $f$ are available, we propose PSGLD and PSGULMC methods that can handle stochastic gradients and are scalable to large datasets without requiring Metropolis-Hastings correction steps. For PSGLD and PSGULMC, when $f$ is strongly convex and smooth, we obtain $\tilde{\mathcal{O}}(d/\varepsilon^{18})$ and $\tilde{\mathcal{O}}(d\sqrt{d}/\varepsilon^{39})$ iteration complexity in the $W_2$ distance. When $f$ is smooth and can be non-convex, we provide finite-time performance bounds and iteration complexity results. Finally, we illustrate the performance on Bayesian LASSO regression and Bayesian constrained deep learning problems.
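A minimal sketch of the PLD idea follows: run overdamped Langevin on $f$ plus a penalty that activates only outside $\mathcal{C}$. The quadratic distance penalty for a Euclidean ball, the example potential, and the fixed step size below are illustrative assumptions; the paper's exact penalty and step-size schedule may differ.

```python
# A minimal sketch of penalized Langevin Dynamics (PLD) for sampling
# pi(x) ~ exp(-f(x)) restricted to the ball ||x|| <= R. The quadratic
# distance penalty and constant step size are illustrative choices.
import numpy as np

def grad_f(x):                     # example non-convex potential
    return 4 * x * (np.dot(x, x) - 1.0)

def grad_penalty(x, R=1.5, rho=100.0):
    # gradient of (rho/2) * dist(x, ball)^2; zero inside the ball
    r = np.linalg.norm(x)
    return rho * (1 - R / r) * x if r > R else np.zeros_like(x)

rng = np.random.default_rng(0)
d, eta, x = 5, 1e-3, np.zeros(5)
samples = []
for k in range(50_000):
    noise = np.sqrt(2 * eta) * rng.standard_normal(d)
    x = x - eta * (grad_f(x) + grad_penalty(x)) + noise
    samples.append(x.copy())
print("fraction outside constraint:",
      np.mean([np.linalg.norm(s) > 1.5 for s in samples]))
```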
This paper considers the optimal boundary control of chemical systems described by advection-diffusion-reaction (ADR) equations. We use a discontinuous Galerkin finite element method (DG-FEM) for the spatial discretization of the governing partial differential equations, and the optimal control problem is directly discretized using multiple shooting. The temporal discretization and the corresponding sensitivity calculations are achieved by an explicit singly diagonally implicit Runge-Kutta (ESDIRK) method. ADR systems arise in process systems engineering, and their operation can potentially be improved by nonlinear model predictive control (NMPC). We demonstrate a numerical approach for solving their optimal control problems (OCPs) in a chromatography case study. Preparative liquid chromatography is an important downstream process in biopharmaceutical manufacturing. We show that multi-step elution trajectories for batch processes can be optimized for economic objectives, providing superior performance compared to classical gradient elution trajectories.
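For readers unfamiliar with the ESDIRK structure (explicit first stage, constant diagonal on the remaining stages), here is a minimal single-step sketch using the well-known TR-BDF2 tableau as a stand-in; the paper's specific ESDIRK scheme, its DG-FEM discretization, and the sensitivity computations are not reproduced.

```python
# A minimal one-step ESDIRK sketch using the classical TR-BDF2 tableau
# (explicit first stage, constant diagonal gamma/2 on implicit stages).
import numpy as np
from scipy.optimize import fsolve

g = 2.0 - np.sqrt(2.0)                       # gamma
A = np.array([[0.0,            0.0,            0.0],
              [g / 2,          g / 2,          0.0],
              [np.sqrt(2) / 4, np.sqrt(2) / 4, g / 2]])
b = A[-1]                                     # stiffly accurate
c = A.sum(axis=1)                             # [0, gamma, 1]

def esdirk_step(f, t, y, h):
    K = np.zeros((3, y.size))
    K[0] = f(t, y)                            # explicit first stage
    for i in (1, 2):                          # implicit stages
        def res(k, i=i):
            yi = y + h * (A[i, :i] @ K[:i] + A[i, i] * k)
            return k - f(t + c[i] * h, yi)
        K[i] = fsolve(res, K[i - 1])          # Newton-type stage solve
    return y + h * (b @ K)

# stiff scalar test problem y' = -50 (y - cos t)
f = lambda t, y: -50.0 * (y - np.cos(t))
y = np.array([1.0])
for n in range(100):
    y = esdirk_step(f, n * 0.01, y, 0.01)
print(y)   # close to cos(1) for this stiff test
```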
We present a structure-preserving Eulerian algorithm for solving $L^2$-gradient flows and a structure-preserving Lagrangian algorithm for solving generalized diffusions. Both algorithms employ neural networks as tools for spatial discretization. Unlike most existing methods that construct numerical discretizations based on the strong or weak form of the underlying PDE, the proposed schemes are constructed directly from the energy-dissipation law. This guarantees the monotonic decay of the system's free energy, which avoids unphysical solution states and is crucial for the long-term stability of numerical computations. To address challenges arising from the nonlinear neural network discretization, we perform temporal discretization on these variational systems before spatial discretization. This approach is memory-efficient when implementing neural network-based algorithms. The proposed neural network-based schemes are mesh-free, allowing us to solve gradient flows in high dimensions. Various numerical experiments are presented to demonstrate the accuracy and energy stability of the proposed numerical schemes.
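A minimal sketch of the "temporal discretization before spatial discretization" idea: one implicit-Euler (minimizing-movement) step of an $L^2$-gradient flow is posed as a minimization over the parameters of a small network ansatz, evaluated at mesh-free sample points. The double-well energy, network architecture, and optimizer below are illustrative assumptions, not the paper's scheme.

```python
# A minimal sketch: one minimizing-movement step of an L2-gradient flow
# u_t = -dE/du, solved over the parameters of a tiny network ansatz on
# mesh-free collocation points. Energy, net, optimizer are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=200)             # mesh-free sample points

def u_net(theta, x):                          # u(x) = sum_j c_j tanh(a_j x + b_j)
    a, b, c = theta.reshape(3, -1)
    return np.tanh(np.outer(x, a) + b) @ c

def energy(u):                                # double-well free energy density
    return np.mean(0.25 * (u**2 - 1.0)**2)

tau, theta = 0.1, rng.normal(size=3 * 8) * 0.1
u_old = u_net(theta, xs)
for step in range(5):                         # a few implicit-Euler steps
    def J(th, u_old=u_old):                   # movement term + free energy
        u = u_net(th, xs)
        return np.mean((u - u_old)**2) / (2 * tau) + energy(u)
    theta = minimize(J, theta, method="Nelder-Mead",
                     options={"maxiter": 2000}).x
    u_old = u_net(theta, xs)
    # starting the optimizer at the previous parameters guarantees
    # J decreases, hence the free energy decays monotonically
    print(f"step {step}: energy = {energy(u_old):.4f}")
```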
We consider the problem of allocating $m$ indivisible items to a set of $n$ heterogeneous agents, with the goal of computing a proportional allocation by introducing subsidy (money). Wu et al. (WINE 2023) showed that when agents are unweighted, a total subsidy of $n/4$ suffices to ensure proportionality (assuming that each item has value/cost at most $1$ for every agent). When agents have general weights, they proposed an algorithm, obtained by rounding the fractional bid-and-take algorithm, that guarantees a weighted proportional allocation with a total subsidy of $(n-1)/2$. In this work, we revisit the problem and the fractional bid-and-take algorithm. We show that by formulating the fractional allocation returned by the algorithm as a directed tree connecting the agents and splitting the tree into canonical components, we obtain a rounding scheme that requires a total subsidy of at most $n/3 - 1/6$.
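The improvement over the previous bound is a simple arithmetic gap:
\[
  \frac{n-1}{2} - \left(\frac{n}{3} - \frac{1}{6}\right)
  = \frac{3(n-1) - 2n + 1}{6}
  = \frac{n-2}{6},
\]
so the two bounds coincide at $n = 2$ and the saving grows linearly in $n$.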
We present a pseudopolynomial-time algorithm for the Knapsack problem that has running time $\widetilde{O}(n + t\sqrt{p_{\max}})$, where $n$ is the number of items, $t$ is the knapsack capacity, and $p_{\max}$ is the maximum item profit. This improves over the $\widetilde{O}(n + t \, p_{\max})$-time algorithm based on the convolution and prediction technique by Bateni et al.~(STOC 2018). Moreover, we give some evidence, based on a strengthening of the Min-Plus Convolution Hypothesis, that our running time might be optimal. Our algorithm uses two new technical tools, which might be of independent interest. First, we generalize the $\widetilde{O}(n^{1.5})$-time algorithm for bounded monotone min-plus convolution by Chi et al.~(STOC 2022) to the \emph{rectangular} case where the range of entries can be different from the sequence length. Second, we give a reduction from general knapsack instances to \emph{balanced} instances, where all items have nearly the same profit-to-weight ratio, up to a constant factor. Using these techniques, we can also obtain algorithms that run in time $\widetilde{O}(n + OPT\sqrt{w_{\max}})$, $\widetilde{O}(n + (nw_{\max}p_{\max})^{1/3}t^{2/3})$, and $\widetilde{O}(n + (nw_{\max}p_{\max})^{1/3} OPT^{2/3})$, where $OPT$ is the optimal total profit and $w_{\max}$ is the maximum item weight.
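To make the role of min-plus (equivalently, max-plus) convolution concrete, here is a baseline sketch of the convolution view of Knapsack: each item contributes a small profit profile, and profiles are merged by $(\max,+)$-convolution. The naive convolution below is quadratic per merge; the paper's speedup comes from exploiting bounded monotone structure, which this sketch does not attempt.

```python
# A baseline sketch of the (max,+)-convolution view of Knapsack.
# Naive convolution is quadratic per merge; the paper exploits bounded
# monotone structure to do far better.
NEG = float("-inf")

def maxplus_conv(a, b, cap):
    out = [NEG] * (cap + 1)
    for i, ai in enumerate(a):
        if ai == NEG:
            continue
        for j, bj in enumerate(b[: cap + 1 - i]):
            if ai + bj > out[i + j]:
                out[i + j] = ai + bj
    return out

def knapsack(items, cap):                  # items: list of (weight, profit)
    prof = [0.0] + [NEG] * cap             # profile of the empty set
    for w, p in items:
        single = [NEG] * (cap + 1)
        single[0] = 0.0                    # skip the item ...
        if w <= cap:
            single[w] = p                  # ... or take it once
        prof = maxplus_conv(prof, single, cap)
    return max(prof)

print(knapsack([(3, 4), (4, 5), (2, 3)], 6))   # -> 8.0 (items 1 and 3)
```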
Let $\Omega = [0,1]^d$ be the unit cube in $\mathbb{R}^d$. We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev spaces $W^s(L_q(\Omega))$ and Besov spaces $B^s_r(L_q(\Omega))$, with error measured in the $L_p(\Omega)$ norm. This problem is important when studying the application of neural networks in a variety of fields, including scientific computing and signal processing, and had previously been solved only when $p=q=\infty$. Our contribution is to provide a complete solution for all $1\leq p,q\leq \infty$ and $s > 0$ for which the corresponding Sobolev or Besov space compactly embeds into $L_p$. The key technical tool is a novel bit-extraction technique, which gives an optimal encoding of sparse vectors and enables us to obtain sharp upper bounds in the non-linear regime where $p > q$. We also provide a novel method for deriving $L_p$-approximation lower bounds based on VC-dimension when $p < \infty$. Our results show that very deep ReLU networks significantly outperform classical methods of approximation in terms of the number of parameters, but that this comes at the cost of parameters that are not encodable.
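For reference, the regime studied is delimited by the classical compact-embedding condition (stated here for the Sobolev scale; the Besov scale is analogous):
\[
  \frac{s}{d} \;>\; \left(\frac{1}{q} - \frac{1}{p}\right)_{+}
  := \max\left\{\frac{1}{q} - \frac{1}{p},\, 0\right\},
\]
so in particular every $s > 0$ qualifies when $p \leq q$, while $p > q$ requires sufficient smoothness.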
We investigate pseudo-polynomial time algorithms for Subset Sum. Given a multi-set $X$ of $n$ positive integers and a target $t$, Subset Sum asks whether some subset of $X$ sums to $t$. Bringmann proposed an $\tilde{O}(n + t)$-time algorithm [Bringmann SODA'17], and an open question naturally arose: can Subset Sum be solved in $O(n + w)$ time, where $w$ is the maximum integer in $X$? We make progress towards resolving this open question by proposing an $\tilde{O}(n + \sqrt{wt})$-time algorithm.
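For contrast with these fine-grained running times, here is the textbook bitset baseline, which runs in $O(nt/64)$ word operations; it is emphatically not the $\tilde{O}(n + \sqrt{wt})$ algorithm of the paper.

```python
# Textbook bitset dynamic program for Subset Sum: Python's big integers
# act as the bitset, so each item is processed with a shift and an OR.
def subset_sum(X, t):
    reach = 1                         # bit s set <=> some subset sums to s
    mask = (1 << (t + 1)) - 1         # keep only sums <= t
    for x in X:
        reach = (reach | (reach << x)) & mask
    return bool((reach >> t) & 1)

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5
```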
We study the algorithmic task of finding large independent sets in Erdős-Rényi $r$-uniform hypergraphs on $n$ vertices having average degree $d$. Krivelevich and Sudakov showed that the maximum independent set asymptotically has density $\left(\frac{r\log d}{(r-1)d}\right)^{1/(r-1)}$. We show that the class of low-degree polynomial algorithms can find independent sets of density $\left(\frac{\log d}{(r-1)d}\right)^{1/(r-1)}$ but no larger. This extends and generalizes earlier results of Gamarnik and Sudan, Rahman and Virag, and Wein on graphs, and answers a question of Bal and Bennett. We conjecture that this statistical-computational gap holds for this problem. Additionally, we explore the universality of this gap by examining $r$-partite hypergraphs. A hypergraph $H=(V,E)$ is $r$-partite if there is a partition $V=V_1\cup\cdots\cup V_r$ such that each edge contains exactly one vertex from each set $V_i$. We consider the problem of finding large balanced independent sets (independent sets containing the same number of vertices in each partition) in random $r$-partite hypergraphs with $n$ vertices in each partition and average degree $d$. We prove that the maximum balanced independent set has density $\left(\frac{r\log d}{(r-1)d}\right)^{1/(r-1)}$ asymptotically. Furthermore, we prove an analogous low-degree computational threshold of $\left(\frac{\log d}{(r-1)d}\right)^{1/(r-1)}$. Our results recover and generalize recent work of Perkins and the second author on bipartite graphs. While the graph case has been extensively studied, this work is the first to consider statistical-computational gaps for optimization problems on random hypergraphs. Our results suggest that these gaps persist for larger uniformities as well as across many models. A somewhat surprising aspect of the gap for balanced independent sets is that the algorithm achieving the lower bound is a simple degree-1 polynomial.
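Specializing the two densities to $r = 2$ recovers the known picture on sparse random graphs:
\[
  \left.\left(\frac{r\log d}{(r-1)d}\right)^{\frac{1}{r-1}}\right|_{r=2}
  = \frac{2\log d}{d},
  \qquad
  \left.\left(\frac{\log d}{(r-1)d}\right)^{\frac{1}{r-1}}\right|_{r=2}
  = \frac{\log d}{d},
\]
i.e., the classical factor-2 gap between the maximum independent set and what efficient algorithms achieve.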
We propose a supplement matrix method for computing eigenvalues of a dual Hermitian matrix, and discuss its application in multi-agent formation control. Let the underlying ring be the real field, the complex field, or the quaternion ring. We study dual number symmetric matrices, dual complex Hermitian matrices and dual quaternion Hermitian matrices in a unified framework of dual Hermitian matrices. An $n \times n$ dual Hermitian matrix has $n$ dual number eigenvalues. We define the determinant, characteristic polynomial and supplement matrices for a dual Hermitian matrix. Supplement matrices are Hermitian matrices in the original ring. The standard parts of the eigenvalues of the dual Hermitian matrix are the eigenvalues of its standard part, a Hermitian matrix in the original ring, while the dual parts of its eigenvalues are the eigenvalues of the supplement matrices. Hence, by applying any practical method for computing eigenvalues of Hermitian matrices in the original ring, we obtain a practical method for computing eigenvalues of a dual Hermitian matrix. We call this method the supplement matrix method. In multi-agent formation control, a desired relative configuration scheme may be given, and one needs to know whether this scheme is reasonable, i.e., whether a feasible configuration of the agents exists. By exploring the eigenvalue problem of dual Hermitian matrices and its link with unit gain graph theory, we open a cross-disciplinary approach to solving the relative configuration problem. Numerical experiments are reported.
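A minimal sketch of this two-stage recipe for the real symmetric (dual number) case follows. It assumes the supplement matrices coincide with the compressed blocks $U^{\top} D U$ familiar from degenerate first-order eigenvalue perturbation theory, where $U$ spans an eigenspace of the standard part; the complex Hermitian and dual quaternion cases are not covered.

```python
# A minimal sketch of the supplement matrix method for a dual symmetric
# matrix A = S + D*eps. Assumption: the supplement matrices are the
# compressed blocks U^T D U from degenerate first-order perturbation
# theory, with U a basis of an eigenspace of the standard part S.
import numpy as np

def dual_symmetric_eigs(S, D, tol=1e-9):
    lam, Q = np.linalg.eigh(S)            # standard parts of eigenvalues
    eigs, i = [], 0
    while i < len(lam):                   # group (near-)equal eigenvalues
        j = i
        while j + 1 < len(lam) and lam[j + 1] - lam[i] < tol:
            j += 1
        U = Q[:, i : j + 1]               # eigenspace basis for lam[i]
        for mu in np.linalg.eigvalsh(U.T @ D @ U):   # supplement matrix
            eigs.append((lam[i], mu))     # (standard part, dual part)
        i = j + 1
    return eigs

rng = np.random.default_rng(1)
S = np.diag([1.0, 1.0, 2.0])              # repeated standard eigenvalue
B = rng.normal(size=(3, 3)); D = (B + B.T) / 2
for std, dual in dual_symmetric_eigs(S, D):
    print(f"{std:.3f} + {dual:.3f} eps")
```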