Bi-quadratic programming over unit spheres is a fundamental problem in quantum mechanics, introduced by the pioneering work of Einstein, Schr\"odinger, and others. It has been shown to be NP-hard, so it must be solved by efficient heuristic algorithms such as the block improvement method (BIM). This paper focuses on the maximization of bi-quadratic forms, which leads to a rank-one approximation problem that is equivalent to computing the M-spectral radius and its corresponding eigenvectors. Specifically, we provide a tight upper bound on the M-spectral radius of nonnegative fourth-order partially symmetric (PS) tensors, which can be regarded as an approximation of the M-spectral radius. Furthermore, we show that the proposed upper bound can be obtained more efficiently if the nonnegative fourth-order PS-tensor belongs to certain monoid semigroups. As an extension of the proposed upper bound, we derive the exact values of the M-spectral radius and its corresponding M-eigenvectors for certain classes of fourth-order PS-tensors. Lastly, as an application of the proposed bound, we obtain a practically testable sufficient condition for nonsingular elasticity M-tensors satisfying the strong ellipticity condition. We conduct several numerical experiments to demonstrate the utility of the proposed results. The results show that: (a) our proposed method attains a tight upper bound on the M-spectral radius with little computational burden, and (b) such tight and efficient upper bounds greatly enhance the convergence speed of the BIM algorithm, making it applicable to large-scale problems.
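For context, the optimization problem referred to above can be written as the maximization of a bi-quadratic form over two unit spheres; for a nonnegative PS-tensor $\mathcal{A}=(a_{ijkl})$, the optimal value equals the M-spectral radius and any maximizing pair $(x,y)$ gives corresponding M-eigenvectors. This is a clarifying restatement under the usual PS-tensor conventions, not a result of the paper:
\[
\rho_M(\mathcal{A}) \;=\; \max_{\|x\|_2=\|y\|_2=1}\; \sum_{i,j,k,l} a_{ijkl}\, x_i\, y_j\, x_k\, y_l .
\]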
Simulated Moving Bed (SMB) chromatography is a well-known technique for the resolution of several high-value-added compounds. Parameter identification and model topology definition are arduous when dealing with complex systems such as a Simulated Moving Bed unit. Moreover, the large number of experiments required can make the process expensive and time-consuming. Hence, this work proposes a novel methodology for parameter estimation, screening the most suitable sink-source topology of the models (defined by the adsorption isotherm equation), and determining the minimum number of experiments necessary to identify the model. To this end, a nested-loop optimization problem is proposed with three levels corresponding to the three main goals of the work: parameter estimation; topology screening by isotherm definition; and determination of the minimum number of experiments necessary to yield a precise model. The proposed methodology emulates a real scenario by introducing noise into the data and using a Software-in-the-Loop (SIL) approach. Data reconciliation and uncertainty evaluation add robustness to the parameter estimation, lending precision and reliability to the model. The methodology is validated against experimental data from the literature, distinct from the samples used for parameter estimation, following a cross-validation procedure. The results corroborate that it is possible to carry out trustworthy parameter estimation directly from an SMB unit with minimal system knowledge.
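The three-level nesting can be sketched structurally as follows. Here `simulate`, `experiments`, and `isotherm_models` are hypothetical placeholders (an SMB simulator, noisy measurement sets, and candidate adsorption-isotherm models), and the stopping test stands in for the precision and uncertainty criteria of the actual methodology; only the loop structure is illustrated, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

def nested_identification(experiments, isotherm_models, simulate, max_experiments=None):
    """Structural sketch only: three nested levels, from outermost to innermost,
    (1) smallest number of experiments giving a precise model,
    (2) screening of the sink-source topology (adsorption isotherm model),
    (3) least-squares parameter estimation on the selected data.
    `simulate(model, params, experiment)` is a hypothetical SMB simulator
    returning predicted outputs."""
    max_experiments = max_experiments or len(experiments)
    best = None
    for n_exp in range(1, max_experiments + 1):               # level 1: data budget
        subset = experiments[:n_exp]
        candidates = []
        for name, (model, p0) in isotherm_models.items():     # level 2: topology screening
            def sse(p):                                        # level 3: parameter estimation
                return sum(np.sum((simulate(model, p, e) - e["measured"]) ** 2)
                           for e in subset)
            fit = minimize(sse, p0, method="Nelder-Mead")
            candidates.append((fit.fun, name, fit.x))
        best = min(candidates, key=lambda c: c[0])
        if best[0] < 1e-3 * n_exp:                             # placeholder precision test
            break
    return n_exp, best[1], best[2]
```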
We study the problem of unbiased estimation of expectations with respect to (w.r.t.) a given, general probability measure $\pi$ on $(\mathbb{R}^d,\mathcal{B}(\mathbb{R}^d))$ that is absolutely continuous with respect to a standard Gaussian measure. We focus on simulation associated with a particular class of diffusion processes, sometimes termed the Schr\"odinger-F\"ollmer Sampler, which is a simulation technique that approximates the law of a particular diffusion bridge process $\{X_t\}_{t\in [0,1]}$ on $\mathbb{R}^d$, $d\in \mathbb{N}$. This latter process is constructed such that, starting at $X_0=0$, one has $X_1\sim \pi$. Typically, the drift of the diffusion is intractable and, even if it were not, exact sampling of the associated diffusion would not be possible. As a result, \cite{sf_orig,jiao} consider a stochastic Euler-Maruyama scheme that allows the development of biased estimators for expectations w.r.t.~$\pi$. We show that for this methodology to achieve a mean square error of $\mathcal{O}(\epsilon^2)$, for arbitrary $\epsilon>0$, the associated cost is $\mathcal{O}(\epsilon^{-5})$. We then introduce an alternative approach that provides unbiased estimates of expectations w.r.t.~$\pi$, that is, one that does not suffer from the time-discretization bias or from the bias related to the approximation of the drift function. We prove that to achieve a mean square error of $\mathcal{O}(\epsilon^2)$, the associated cost is, with high probability, $\mathcal{O}(\epsilon^{-2}|\log(\epsilon)|^{2+\delta})$, for any $\delta>0$. We implement our method on several examples, including Bayesian inverse problems.
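To make the discretization concrete, below is a minimal sketch of a stochastic Euler-Maruyama scheme for a Schr\"odinger-F\"ollmer-type diffusion, assuming the density ratio $f = d\pi/dN(0,I_d)$ is available up to a constant; the drift $\nabla\log Q_{1-t}f(x)$ is itself estimated by Monte Carlo (via Stein's identity), which is the second source of bias mentioned above. Step counts, sample sizes, and the example target are illustrative, not the schemes or rates analyzed in the paper.

```python
import numpy as np

def sf_euler_maruyama(f, d, n_steps=100, n_mc=1000, rng=None):
    """Stochastic Euler-Maruyama sketch of a Schrodinger-Follmer-type bridge.

    f : hypothetical density ratio d(pi)/d(N(0, I_d)), known up to a constant,
        evaluated on arrays of shape (n_mc, d).
    """
    rng = np.random.default_rng(rng)
    dt = 1.0 / n_steps
    x = np.zeros(d)                              # X_0 = 0
    for k in range(n_steps):
        s = np.sqrt(1.0 - k * dt)                # remaining heat-kernel time
        z = rng.standard_normal((n_mc, d))
        w = f(x + s * z)                         # Monte Carlo weights, shape (n_mc,)
        # drift = grad log Q_{1-t} f(x), estimated via Gaussian integration by parts
        drift = (z * w[:, None]).mean(axis=0) / (s * max(w.mean(), 1e-300))
        x = x + drift * dt + np.sqrt(dt) * rng.standard_normal(d)
    return x                                     # approximately distributed as pi

if __name__ == "__main__":
    # Example target: pi proportional to exp(-0.25 * sum(x^4)) times the Gaussian
    f = lambda u: np.exp(-0.25 * np.sum(u ** 4, axis=-1))
    samples = np.array([sf_euler_maruyama(f, d=2, rng=i) for i in range(200)])
    print("sample mean:", samples.mean(axis=0).round(3))
```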
In this paper, we present a multiscale framework for solving the 2D Helmholtz equation in heterogeneous media without scale separation and in the high-frequency regime where the wavenumber $k$ can be large. The main innovation is that our methods achieve a nearly exponential rate of convergence with respect to the computational degrees of freedom, using a coarse grid of mesh size $O(1/k)$, without suffering from the well-known pollution effect. The key idea is a non-overlapping domain decomposition and its associated coarse-fine scale decomposition of the solution space that adapts to the media properties and the wavenumber; this decomposition is inspired by the multiscale finite element method (MsFEM). We show that the coarse part is of low complexity, in the sense that it can be approximated at a nearly exponential rate of convergence via local basis functions, owing to the compactness of a restriction operator that maps Helmholtz-harmonic functions to their interpolation residues on edges, while the fine part is local, so that it can be computed efficiently using local information of the right-hand side. The combination of the two parts yields the overall nearly exponential rate of convergence of our multiscale method. Our method has many connections to multiscale methods in the literature, on which we comment in detail. We demonstrate the effectiveness of our methods theoretically and numerically; an exponential rate of convergence is consistently observed and confirmed. In addition, we numerically observe the robustness of our methods with respect to high contrast in the media.
We investigate the possibility of solving continuous non-convex optimization problems using a network of interacting quantum optical oscillators. We propose a native encoding of continuous variables in analog signals associated with the quadrature operators of a set of quantum optical modes. Optical coupling of the modes and noise introduced by vacuum fluctuations from external reservoirs or by weak measurements of the modes are used to optically simulate a diffusion process on a set of continuous random variables. The process is run sufficiently long for it to relax into the steady state of an energy potential defined on a continuous domain. As a first demonstration, we numerically benchmark the solution of box-constrained quadratic programming (BoxQP) problems in this setting. We consider delay-line and measurement-feedback variants of the experiment. Our benchmarking results demonstrate that in both cases the optical network is capable of solving BoxQP problems over three orders of magnitude faster than a state-of-the-art classical heuristic.
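As a purely classical, illustrative analogue of the relaxation just described (not a model of the optical hardware or of the delay-line/measurement-feedback variants), one can simulate an overdamped Langevin diffusion whose stationary distribution concentrates on low-energy configurations of a BoxQP potential; all parameters below are placeholders.

```python
import numpy as np

def boxqp_langevin(Q, c, lo=0.0, hi=1.0, n_steps=20000, dt=1e-3, temp=0.05, rng=None):
    """Toy overdamped Langevin dynamics on the BoxQP energy
        E(x) = 0.5 * x.T @ Q @ x + c @ x,   lo <= x_i <= hi,
    with clipped (reflecting-style) box constraints. Illustrative stand-in
    for the diffusion relaxation described in the abstract."""
    rng = np.random.default_rng(rng)
    n = len(c)
    x = rng.uniform(lo, hi, size=n)
    for _ in range(n_steps):
        grad = Q @ x + c
        x = x - dt * grad + np.sqrt(2.0 * temp * dt) * rng.standard_normal(n)
        x = np.clip(x, lo, hi)                 # enforce the box
    return x, 0.5 * x @ Q @ x + c @ x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 8))
    Q = A + A.T                                 # symmetric, generally indefinite -> non-convex BoxQP
    c = rng.standard_normal(8)
    x, e = boxqp_langevin(Q, c, rng=1)
    print("candidate solution:", np.round(x, 3), "energy:", round(e, 3))
```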
The $h$-index is a metric used to measure the impact of a user in a publication setting, such as a member of a social network with many highly liked posts or a researcher in an academic domain with many highly cited publications. Specifically, the $h$-index of a user is the largest integer $h$ such that at least $h$ publications of the user have at least $h$ units of positive feedback. We design an algorithm that, given query access to the $n$ publications of a user and each publication's corresponding positive feedback number, outputs a $(1\pm \varepsilon)$-approximation of the $h$-index of this user with probability at least $1-\delta$ in time \[ O\left(\frac{n \cdot \ln{(1/\delta)}}{\varepsilon^2 \cdot h}\right), \] where $h$ is the actual $h$-index, which is unknown to the algorithm a priori. We then design a novel lower bound technique that allows us to prove that this bound is in fact asymptotically optimal for this problem in all parameters $n,h,\varepsilon,$ and $\delta$. Our work is one of the first in sublinear-time algorithms to address obtaining asymptotically optimal bounds, especially in terms of the error and confidence parameters. As such, we focus on designing novel techniques for this task. In particular, our lower bound technique seems quite general -- to showcase this, we also use our approach to prove an asymptotically optimal lower bound for the problem of estimating the number of triangles in a graph in sublinear time, which is now also optimal in the error and confidence parameters. This result improves upon prior lower bounds of Eden, Levi, Ron, and Seshadhri (FOCS'15) for this problem, as well as multiple follow-ups that extended this lower bound to other subgraph counting problems.
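For concreteness, the sketch below computes the $h$-index exactly and also illustrates a naive uniform-sampling estimator; the sample size and acceptance rule are heuristic placeholders and do not reproduce the paper's algorithm or its $O(n\ln(1/\delta)/(\varepsilon^2 h))$ query complexity.

```python
import numpy as np

def h_index_exact(citations):
    """Exact h-index: largest h such that at least h items have >= h citations."""
    cited = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cited, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def h_index_sampled(citations, eps=0.1, delta=0.05, rng=None):
    """Illustrative sampling-based estimator (not the paper's algorithm):
    for each candidate threshold t, estimate #{c_i >= t} from a uniform
    sample and return the largest t that passes an approximate test.
    Sample size chosen from a Chernoff-style heuristic."""
    rng = np.random.default_rng(rng)
    n = len(citations)
    m = int(np.ceil(np.log(1.0 / delta) / eps ** 2))          # heuristic sample size
    sample = rng.choice(citations, size=min(m, n), replace=True)
    best = 0
    for t in range(1, n + 1):
        est_count = n * np.mean(sample >= t)                   # estimated #{c_i >= t}
        if est_count >= (1 - eps) * t:
            best = t
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cites = rng.zipf(2.0, size=5000)                           # heavy-tailed citation counts
    print("exact:", h_index_exact(cites), "sampled:", h_index_sampled(cites, rng=1))
```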
The asymptotic study of the partition function $p(n)$ began with the work of Hardy and Ramanujan. Later, Rademacher obtained a convergent series for $p(n)$, and an error bound was given by Lehmer. Despite this, a full asymptotic expansion for $p(n)$ with an explicit error bound is not known. Recently, O'Sullivan studied the asymptotic expansion of $p^{k}(n)$, the number of partitions into $k$th powers, a study initiated by Wright, and consequently obtained an asymptotic expansion for $p(n)$ along with a concise description of the coefficients involved in the expansion, but without any estimation of the error term. Here we carry out a detailed and comprehensive analysis of the error term obtained by truncating the asymptotic expansion for $p(n)$ at any positive integer $n$. This gives rise to an infinite family of inequalities for $p(n)$, which finally answers a question posed by Chen. Our error term estimation predominantly relies on applications of algorithmic methods from symbolic summation.
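For reference, the classical Hardy-Ramanujan leading-order asymptotic, which is the starting point of the expansions discussed above (and not the refined error estimates obtained here), reads
\[
p(n) \;\sim\; \frac{1}{4n\sqrt{3}}\, \exp\!\left(\pi\sqrt{\tfrac{2n}{3}}\right), \qquad n\to\infty .
\]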
In this paper, we develop a novel class of linear energy-preserving integrating factor methods for the 2D nonlinear Schr\"odinger equation with wave operator (NLSW), combining the scalar auxiliary variable approach with integrating factor methods. A second-order scheme is first proposed, which is rigorously proved to be energy-preserving. Using energy methods, we analyze its optimal convergence in the $H^1$ norm without any restriction on the grid ratio, where a novel technique and an improved induction argument are proposed to overcome the difficulty posed by the unavailability of a priori $L^\infty$ estimates of the numerical solutions. Based on integrating factor Runge-Kutta methods, we extend the proposed scheme to arbitrarily high order, while keeping it linear and conservative. Numerical experiments are presented to confirm the theoretical analysis and demonstrate the advantages of the proposed methods.
We study the asymmetric matrix factorization problem under a natural nonconvex formulation with arbitrary overparametrization. The model-free setting is considered, with minimal assumptions on the rank or singular values of the observed matrix, where the global optima provably overfit. We show that vanilla gradient descent with small random initialization sequentially recovers the principal components of the observed matrix. Consequently, when equipped with proper early stopping, gradient descent produces the best low-rank approximation of the observed matrix without explicit regularization. We provide a sharp characterization of the relationship between the approximation error, iteration complexity, initialization size, and stepsize. Our complexity bound is almost dimension-free and depends logarithmically on the approximation error, with significantly more lenient requirements on the stepsize and initialization compared to prior work. Our theoretical results provide accurate predictions for the behavior of gradient descent, showing good agreement with numerical experiments.
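The setting can be illustrated with a minimal sketch of vanilla gradient descent on the overparametrized factorization objective with small random initialization; the stepsize, initialization scale, and example matrix below are illustrative choices, not the scalings analyzed in the paper.

```python
import numpy as np

def gd_asym_factorization(M, k, init_scale=1e-6, lr=0.005, n_iters=2000, rng=None):
    """Vanilla gradient descent on f(U, V) = 0.5 * ||U V^T - M||_F^2 with
    small random initialization and overparametrized width k.
    Returns the history of approximation errors so the effect of early
    stopping can be inspected."""
    rng = np.random.default_rng(rng)
    n, m = M.shape
    U = init_scale * rng.standard_normal((n, k))
    V = init_scale * rng.standard_normal((m, k))
    errs = []
    for _ in range(n_iters):
        R = U @ V.T - M                           # residual
        U, V = U - lr * (R @ V), V - lr * (R.T @ U)   # simultaneous updates
        errs.append(np.linalg.norm(U @ V.T - M))
    return U, V, np.array(errs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Noisy observation of a rank-3 signal; the global optimum of the
    # overparametrized objective would also fit the noise.
    A, B = rng.standard_normal((50, 3)), rng.standard_normal((40, 3))
    M = A @ B.T + 0.1 * rng.standard_normal((50, 40))
    _, _, errs = gd_asym_factorization(M, k=40, rng=1)
    print("errors at selected iterations:", errs[[99, 499, -1]].round(3))
```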
Much of the literature on the optimal design of bandit algorithms is based on the minimization of expected regret. It is well known that designs that are optimal over certain exponential families can achieve expected regret that grows logarithmically in the number of arm plays, at a rate governed by the Lai-Robbins lower bound. In this paper, we show that when one uses such optimized designs, the regret distribution of the associated algorithms necessarily has a very heavy tail, specifically, that of a truncated Cauchy distribution. Furthermore, for $p>1$, the $p$-th moment of the regret distribution grows much faster than poly-logarithmically, in particular as a power of the total number of arm plays. We show that optimized UCB bandit designs are also fragile in an additional sense, namely that when the problem is even slightly mis-specified, the regret can grow much faster than the conventional theory suggests. Our arguments are based on standard change-of-measure ideas, and indicate that the most likely way that regret becomes larger than expected is when the optimal arm returns below-average rewards in the first few arm plays, thereby causing the algorithm to believe that the arm is sub-optimal. To alleviate the fragility issues exposed, we show that UCB algorithms can be modified so as to ensure a desired degree of robustness to mis-specification. In doing so, we also provide a sharp trade-off between the amount of UCB exploration and the tail exponent of the resulting regret distribution.
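To make the exploration/tail trade-off tangible, here is a minimal UCB simulation with a tunable exploration coefficient; the index formula, reward model, and coefficient values are illustrative and do not reproduce the optimized or modified designs analyzed in the paper.

```python
import numpy as np

def ucb_bandit(means, horizon, explore_coef=2.0, rng=None):
    """Plain UCB1-style index policy with a tunable exploration coefficient.
    Returns the pseudo-regret of one run; larger explore_coef means more
    exploration (an illustrative knob, not the paper's modification)."""
    rng = np.random.default_rng(rng)
    k = len(means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    regret = 0.0
    best = max(means)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                   # play every arm once
        else:
            ucb = sums / counts + np.sqrt(explore_coef * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        reward = rng.normal(means[arm], 1.0)              # Gaussian rewards
        counts[arm] += 1
        sums[arm] += reward
        regret += best - means[arm]
    return regret

if __name__ == "__main__":
    # Empirical regret distribution over repeated runs for a two-armed instance.
    for c in (0.5, 2.0, 8.0):
        regrets = [ucb_bandit([0.0, 0.5], horizon=2000, explore_coef=c, rng=i)
                   for i in range(100)]
        print(f"coef={c}: mean={np.mean(regrets):.1f}, "
              f"99th pct={np.percentile(regrets, 99):.1f}")
```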
Sampling methods (e.g., node-wise, layer-wise, or subgraph sampling) have become an indispensable strategy for speeding up the training of large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on graph structural information and ignore the dynamics of optimization, which leads to high variance in estimating the stochastic gradients. The high-variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method can be decomposed into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage, and that mitigating both types of variance is necessary to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance, and explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and yields better generalization compared to existing methods.