We study the convergence of specific inexact alternating projections for two non-convex sets in a Euclidean space. The $\sigma$-quasioptimal metric projection ($\sigma \geq 1$) of a point $x$ onto a set $A$ consists of the points of $A$ whose distance to $x$ is at most $\sigma$ times the minimal distance $\mathrm{dist}(x,A)$. We prove that alternating projections, when one or both of them are quasioptimal, converge locally and linearly for super-regular sets with transversal intersection. The theory is motivated by the successful application of alternating projections to low-rank matrix and tensor approximation. We focus on two problems -- nonnegative low-rank approximation and low-rank approximation in the maximum norm -- and develop fast alternating-projection algorithms for matrices and tensor trains based on cross approximation and acceleration techniques. The numerical experiments confirm that the proposed methods are efficient and suggest that they can be used to regularise various low-rank computational routines.
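The motivating nonnegative low-rank setting can be illustrated with a minimal alternating-projection sketch using *exact* metric projections (truncated SVD for the rank constraint, entrywise clipping for nonnegativity). This is a toy illustration, not the paper's algorithm: in practice the exact SVD would be replaced by a quasioptimal projection such as cross approximation.

```python
import numpy as np

def proj_rank(X, r):
    # Exact metric projection onto matrices of rank <= r (truncated SVD).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def proj_nonneg(X):
    # Exact metric projection onto the nonnegative orthant (entrywise clip).
    return np.maximum(X, 0.0)

def alternating_projections(X0, r, iters=200):
    # Alternate between the two (non-convex + convex) sets.
    X = X0
    for _ in range(iters):
        X = proj_nonneg(proj_rank(X, r))
    return X

rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((20, 15)))   # nonnegative target matrix
X = alternating_projections(A, r=3)
print(X.min() >= 0)                                 # nonnegative by construction
print(np.linalg.norm(X - proj_rank(X, 3)))          # distance to the rank-3 set
```

The last line measures how far the iterate is from the rank constraint; local linear convergence, as proved in the paper under super-regularity and transversality, drives this residual down.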
Let $G$ be a graph of order $n$. A classical upper bound for the domination number of a graph $G$ having no isolated vertices is $\lfloor\frac{n}{2}\rfloor$. However, for several families of graphs we have $\gamma(G) \le \lfloor\sqrt{n}\rfloor$, a substantially better upper bound. In this paper, we give a necessary condition for a graph $G$ to have $\gamma(G) \le \lfloor\sqrt{n}\rfloor$, as well as some sufficient conditions. We also present a characterization of all connected graphs $G$ of order $n$ with $\gamma(G) = \lfloor\sqrt{n}\rfloor$. Further, we prove that for a graph $G$ not satisfying $\operatorname{rad}(G)=\operatorname{diam}(G)=\operatorname{rad}(\overline{G})=\operatorname{diam}(\overline{G})=2$, deciding whether $\gamma(G) \le \lfloor\sqrt{n}\rfloor$ or $\gamma(\overline{G}) \le \lfloor\sqrt{n}\rfloor$ can be done in polynomial time. We conjecture that this decision problem can be solved in polynomial time for any graph $G$.
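As a small illustration of the quantities involved, the domination number of a toy graph meeting the $\lfloor\sqrt{n}\rfloor$ bound can be computed by brute force (the graph $C_9$ and the exhaustive search are our own illustrative choices, not from the paper; the search is exponential and only suitable for tiny graphs):

```python
from itertools import combinations

def domination_number(adj):
    # Smallest |S| such that every vertex is in S or adjacent to S.
    # Exhaustive search over subsets: exponential, for illustration only.
    n = len(adj)
    verts = list(adj)
    for k in range(1, n + 1):
        for S in combinations(verts, k):
            closed = set(S)
            for u in S:
                closed.update(adj[u])
            if len(closed) == n:
                return k
    return n

# Cycle C9: gamma(C9) = ceil(9/3) = 3 = floor(sqrt(9)), meeting the bound.
C9 = {i: [(i - 1) % 9, (i + 1) % 9] for i in range(9)}
print(domination_number(C9))   # 3
```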
We consider the simultaneously fast and in-place computation of the Euclidean polynomial modular remainder $R(X) \equiv A(X) \mod B(X)$ with $A$ and $B$ of respective degrees $n$ and $m \le n$. Fast algorithms for this usually come at the expense of (potentially large) extra temporary space. To remain in-place, a further issue is to avoid storing the whole quotient $Q(X)$ such that $A=BQ+R$. If the multiplication of two polynomials of degree $k$ can be performed with $M(k)$ operations and $O(k)$ extra space, and if it is allowed to use the input space of $A$ or $B$ for intermediate computations, provided $A$ and $B$ are restored to their initial states after the completion of the remainder computation, we here propose an in-place algorithm (that is, with its extra required space reduced to $O(1)$ only) using at most $O((n/m)\,M(m)\log(m))$ arithmetic operations if $M(m)$ is quasi-linear, or $O((n/m)\,M(m))$ otherwise. We also propose variants that compute -- still in-place and with the same kind of complexity bounds -- the over-place remainder $A(X) \equiv A(X) \mod B(X)$, the accumulated remainder $R(X) += A(X) \mod B(X)$, and the accumulated modular multiplication $R(X) += A(X)C(X) \mod B(X)$. To achieve this, we develop techniques for Toeplitz matrix operations whose output is also part of the input. Fast and in-place accumulating versions are obtained for the latter, and thus for convolutions, and then used for polynomial remaindering. This is realized via further reductions to accumulated polynomial multiplication, for which fast in-place algorithms have recently been developed.
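The over-place variant can be illustrated with a schoolbook (quadratic-time) sketch that overwrites $A$ by $A \mod B$ and consumes each quotient coefficient immediately instead of storing $Q$. This is only a minimal model of the in-place constraint; the paper's fast algorithms rely on Toeplitz and convolution techniques not reproduced here.

```python
def overplace_rem(A, B, p):
    # Over-place remainder A <- A mod B over Z/pZ, coefficient of degree i
    # stored at index i. B is assumed monic for simplicity. Each quotient
    # coefficient q is used once and discarded, so Q is never stored.
    n, m = len(A) - 1, len(B) - 1
    assert B[m] % p == 1, "divisor must be monic in this sketch"
    for i in range(n, m - 1, -1):
        q = A[i] % p                    # current quotient coefficient
        for j in range(m + 1):
            A[i - m + j] = (A[i - m + j] - q * B[j]) % p
    del A[m:]                           # remainder has degree < m
    return A

print(overplace_rem([1, 0, 0, 1], [1, 1], 7))  # X^3+1 = (X+1)(X^2-X+1): [0]
print(overplace_rem([1, 0, 1], [1, 1], 7))     # X^2+1 mod X+1 gives 2: [2]
```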
We quantify the minimax rate for a nonparametric regression model over a convex function class $\mathcal{F}$ with bounded diameter. We obtain a minimax rate of ${\varepsilon^{\ast}}^2\wedge\mathrm{diam}(\mathcal{F})^2$ where \[\varepsilon^{\ast} =\sup\{\varepsilon>0:n\varepsilon^2 \le \log M_{\mathcal{F}}^{\operatorname{loc}}(\varepsilon,c)\},\] where $M_{\mathcal{F}}^{\operatorname{loc}}(\cdot, c)$ is the local metric entropy of $\mathcal{F}$ and our loss function is the squared population $L_2$ distance over our input space $\mathcal{X}$. In contrast to classical works on the topic [cf. Yang and Barron, 1999], our results do not require functions in $\mathcal{F}$ to be uniformly bounded in sup-norm. In addition, we prove that our estimator is adaptive to the true point, and to the best of our knowledge this is the first such estimator in this general setting. This work builds on the Gaussian sequence framework of Neykov [2022] using a similar algorithmic scheme to achieve the minimax rate. Our algorithmic rate also applies with sub-Gaussian noise. We illustrate the utility of this theory with examples including multivariate monotone functions, linear functionals over ellipsoids, and Lipschitz classes.
We present a new and straightforward derivation of a family $\mathcal{F}(h,\tau)$ of exponential splittings of Strang type for the general linear evolutionary equation with two linear components. One component is assumed to be a time-independent, unbounded operator, while the other is a bounded one with explicit time dependence. The family $\mathcal{F}(h,\tau)$ is characterized by the length of the time-step $h$ and a continuous parameter $\tau$, which defines each member of the family. It is shown that the derivation and error analysis follow from two elementary arguments: the variation-of-constants formula and specific quadratures for integrals over simplices. For these Strang-type splittings, we prove convergence which, depending on certain commutators of the relevant operators, may be of first or second order. As a result, error bounds appear in terms of commutator bounds. Based on the explicit form of the error terms, we establish the influence of $\tau$ on the accuracy of $\mathcal{F}(h,\tau)$, allowing us to investigate the optimal value of $\tau$. This simple yet powerful approach establishes the connection between exponential integrators and splitting methods. Furthermore, the present approach can be easily applied to the derivation of higher-order splitting methods under similar considerations. Needless to say, the obtained results also apply to Strang-type splittings in the case of time-independent operators. To complement the rigorous results, we present numerical experiments with various values of $\tau$ based on the linear Schr\"odinger equation.
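The second-order behavior can be checked numerically in the simplest time-independent finite-dimensional setting: the standard symmetric Strang step $e^{hA/2}e^{hB}e^{hA/2}$ for $y' = (A+B)y$ with symmetric matrices. This toy omits the unbounded operator, the time dependence, and the $\tau$-parameterization of the family; it only verifies the classical $O(h^3)$ local error.

```python
import numpy as np

def expm_sym(M, t):
    # exp(t*M) for a symmetric matrix via eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * np.exp(t * w)) @ V.T

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2
B = rng.standard_normal((4, 4)); B = (B + B.T) / 2
y0 = rng.standard_normal(4)

def strang_step(h):
    # Symmetric Strang splitting: exp(h/2 A) exp(h B) exp(h/2 A).
    return expm_sym(A, h / 2) @ expm_sym(B, h) @ expm_sym(A, h / 2) @ y0

def exact(h):
    return expm_sym(A + B, h) @ y0

e1 = np.linalg.norm(strang_step(0.1) - exact(0.1))
e2 = np.linalg.norm(strang_step(0.05) - exact(0.05))
print(e1 / e2)   # ratio near 2^3 = 8: local error is O(h^3)
```

Halving the step shrinks the local error by roughly a factor of eight, consistent with the commutator-driven $O(h^3)$ leading term.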
Gr\"{o}bner bases are nowadays central tools for solving various problems in commutative algebra and algebraic geometry. A typical use of Gr\"{o}bner bases is multivariate polynomial system solving, which enables us to construct algebraic attacks against post-quantum cryptographic protocols. Therefore, determining the complexity of computing Gr\"{o}bner bases is very important both in theory and in practice: one of the most important cases is when the input polynomials form an (overdetermined) affine semi-regular sequence. The first part of this paper presents a survey on Gr\"{o}bner basis computation and its complexity. In the second part, we give an explicit formula for the (truncated) Hilbert-Poincar\'{e} series associated to the homogenization of an affine semi-regular sequence. Based on this formula, we also study (reduced) Gr\"{o}bner bases of the ideals generated by an affine semi-regular sequence and its homogenization. Some of our results give mathematically rigorous proofs of the correctness of methods for computing Gr\"{o}bner bases of the ideal generated by an affine semi-regular sequence.
Given a graph $G$, the optimization version of the graph burning problem seeks a sequence of vertices $(u_1,u_2,\dots,u_k) \in V(G)^k$ with minimum $k$ such that every $v \in V(G)$ has distance at most $k-i$ to some vertex $u_i$. The length $k$ of an optimal solution is known as the burning number and is denoted by $b(G)$, an invariant that helps quantify the graph's vulnerability to contagion. This paper explores the advantages and limitations of an $\mathcal{O}(mn + kn^2)$ deterministic greedy heuristic for this problem, where $n$ is the graph's order, $m$ is the graph's size, and $k$ is a guess on $b(G)$. This heuristic is based on the relationship between the graph burning problem and the clustered maximum coverage problem, and despite having limitations on paths and cycles, it found most of the optimal and best-known solutions of benchmark and synthetic graphs with up to 102400 vertices.
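The defining condition can be checked directly with BFS distances: $(u_1,\dots,u_k)$ is a burning sequence iff every vertex lies within distance $k-i$ of some $u_i$. A minimal verifier (not the paper's heuristic, just the definition above) is:

```python
from collections import deque

def bfs_dist(adj, src):
    # Single-source distances in an unweighted graph.
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def is_burning_sequence(adj, seq):
    # v is burned iff dist(v, u_i) <= k - i for some 1-indexed source u_i.
    k = len(seq)
    dists = [bfs_dist(adj, u) for u in seq]
    return all(
        any(d.get(v, k + 1) <= k - i for i, d in enumerate(dists, start=1))
        for v in adj
    )

# Path P4 (0-1-2-3): b(P4) = 2, witnessed by the sequence (1, 3).
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_burning_sequence(P4, [1, 3]))   # True
print(is_burning_sequence(P4, [0]))      # False: one source cannot burn P4
```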
We present algorithms for computing the reduced Gr\"{o}bner basis of the vanishing ideal of a finite set of points in the frame of ideal interpolation. Ideal interpolation is defined by a linear projector whose kernel is a polynomial ideal. In this paper, we translate the interpolation condition functionals into formal power series via Taylor expansion; the reduced Gr\"{o}bner basis is then read off from these formal power series by Gaussian elimination. Our algorithm has polynomial time complexity. It compares favorably with the MMM algorithm in single-point ideal interpolation and in some cases of ideal interpolation at several points.
We develop adaptive time-stepping strategies for It\^o-type stochastic differential equations (SDEs) with jump perturbations. Our approach builds on adaptive strategies for SDEs: by dynamically adjusting the stepsize along each trajectory, adaptive methods can ensure strong convergence for nonlinear SDEs whose drift and diffusion coefficients violate global Lipschitz bounds, preventing the spurious growth that leads to loss of convergence when it occurs with sufficiently high probability. In this article we demonstrate the use of a jump-adapted mesh that incorporates jump times into the adaptive time-stepping strategy. We prove that any adaptive scheme satisfying a particular mean-square consistency bound for a nonlinear SDE in the non-jump case may be extended to a strongly convergent scheme in the Poisson jump case, where jump and diffusion perturbations are mutually independent and the jump coefficient satisfies a global Lipschitz condition.
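The non-jump mechanism can be sketched with an adaptive Euler-Maruyama step for $dX = -X^3\,dt + X\,dW$, whose cubic drift violates a global Lipschitz bound. The stepsize rule $h \sim h_{\max}/\max(1, X^2)$ is our own illustrative choice (not the paper's scheme, and with no jump component); it shows the trajectory-wise control that suppresses spurious growth.

```python
import numpy as np

def adaptive_em(x0, T, h_max, rng):
    # Adaptive Euler-Maruyama for dX = -X^3 dt + X dW: shrink the step
    # where |X| is large so the explicit drift step stays contractive.
    t, x = 0.0, x0
    while t < T:
        h = min(h_max / max(1.0, x * x), T - t)
        x += -x**3 * h + x * np.sqrt(h) * rng.standard_normal()
        t += h
    return x

rng = np.random.default_rng(2)
samples = [adaptive_em(2.0, 1.0, 0.01, rng) for _ in range(200)]
print(max(abs(s) for s in samples))   # trajectories stay bounded
```

With a fixed step of $h_{\max}$, the same explicit scheme can blow up from large initial data; the adaptive rule keeps the drift increment $-x^3 h$ proportional to $-x$ whenever $|x| \ge 1$.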
We describe a new dependent-rounding algorithmic framework for bipartite graphs. Given a fractional assignment $y$ of values to edges of graph $G = (U \cup V, E)$, the algorithms return an integral solution $Y$ such that each right-node $v \in V$ has at most one neighboring edge $f$ with $Y_f = 1$, and where the variables $Y_e$ also satisfy broad nonpositive-correlation properties. In particular, for any edges $e_1, e_2$ sharing a left-node $u \in U$, the variables $Y_{e_1}, Y_{e_2}$ have strong negative-correlation properties, i.e. the expectation of $Y_{e_1} Y_{e_2}$ is significantly below $y_{e_1} y_{e_2}$. The algorithm is based on generating negatively correlated exponential random variables and using them in a contention-resolution scheme inspired by an algorithm of Im & Shadloo (2020). Our algorithm gives stronger and much more flexible negative-correlation properties. Dependent-rounding schemes with negative-correlation properties have been used in approximation algorithms for job scheduling on unrelated machines to minimize weighted completion times (Bansal, Srinivasan, & Svensson (2021), Im & Shadloo (2020), Im & Li (2023)). Using our new dependent-rounding algorithm, among other improvements, we obtain a $1.398$-approximation for this problem. This significantly improves over the prior $1.45$-approximation ratio of Im & Li (2023).
Nurmuhammad et al. developed the Sinc-Nystr\"{o}m methods for initial value problems in which the solutions exhibit exponential decay end behavior. In these methods, the Single-Exponential (SE) transformation or the Double-Exponential (DE) transformation is combined with the Sinc approximation. Hara and Okayama improved on these transformations to attain a better convergence rate, which was later supported by theoretical error analyses. However, these methods have a computational drawback owing to the inclusion of a special function in the basis functions. To address this issue, Okayama and Hara proposed Sinc-collocation methods, which do not include any special function in the basis functions. This study conducts error analyses of these methods.