We propose novel optimal and parameter-free algorithms for computing an approximate solution of smooth optimization problems with small (projected) gradient norm. Specifically, for computing an approximate solution such that the norm of the (projected) gradient is no greater than $\varepsilon$, we obtain the following results for convex, strongly convex, and nonconvex problems: a) for the convex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{L\|x_0 - x^*\|/\varepsilon}$, where $L$ is the Lipschitz constant of the gradient and $x^*$ is any optimal solution; b) for the strongly convex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{L/\mu}\log(\|\nabla f(x_0)\|/\varepsilon)$, where $\mu$ is the strong convexity constant; c) for the nonconvex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{Ll}(f(x_0) - f(x^*))/\varepsilon^2$, where $l$ is the lower curvature constant. Our complexity results match the lower complexity bounds for all three problem classes. Our analysis applies to both unconstrained problems and problems with constrained feasible sets; we demonstrate our strategy for analyzing the complexity of computing solutions with small projected gradient norm in the convex case. For the convex, strongly convex, and nonconvex cases alike, we also propose parameter-free algorithms that do not require knowledge of any problem parameter. To the best of our knowledge, our paper is the first to achieve the $O(1)\sqrt{L\|x_0 - x^*\|/\varepsilon}$ complexity for convex problems with constrained feasible sets, the $O(1)\sqrt{Ll}(f(x_0) - f(x^*))/\varepsilon^2$ complexity for nonconvex problems, and optimal complexities for convex, strongly convex, and nonconvex problems through parameter-free algorithms.
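As a point of reference, the Python sketch below (names and setup are assumptions) illustrates the termination criterion under study: run a standard accelerated gradient method until the gradient norm falls below $\varepsilon$. It is only a familiar baseline, not the paper's optimal or parameter-free algorithms, which achieve the stated bounds through more refined schemes.

    import numpy as np

    def agd_until_small_gradient(grad, x0, L, eps, max_iter=100000):
        """Nesterov's accelerated gradient with a gradient-norm stopping
        rule; assumes grad is L-Lipschitz and the objective is convex."""
        x, y, t = x0.copy(), x0.copy(), 1.0
        for k in range(max_iter):
            g = grad(y)
            if np.linalg.norm(g) <= eps:          # target accuracy reached
                return y, k
            x_next = y - g / L                    # gradient step
            t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2
            y = x_next + ((t - 1) / t_next) * (x_next - x)  # momentum step
            x, t = x_next, t_next
        return y, max_iter

    # Toy usage: quadratic f(x) = 0.5 * x^T A x with L = lambda_max(A).
    A = np.diag([1.0, 10.0, 100.0])
    x_hat, iters = agd_until_small_gradient(lambda x: A @ x,
                                            np.ones(3), L=100.0, eps=1e-6)
    print(np.linalg.norm(A @ x_hat), iters)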
Enumeration problems are often encountered as key subroutines in the exact computation of graph parameters such as chromatic number, treewidth, or treedepth. In the case of treedepth computation, the enumeration of inclusion-wise minimal separators plays a crucial role. However, and quite surprisingly, the complexity status of this problem has remained unsettled since it was posed as an open direction by Kloks and Kratsch in 1998. Recently, at the PACE 2020 competition dedicated to treedepth computation, solvers circumvented this by listing all minimal $a$-$b$ separators and filtering out those that are not inclusion-wise minimal, at the cost of efficiency. Naturally, an efficient algorithm for listing inclusion-wise minimal separators would drastically improve such practical algorithms. In this note, however, we show that no efficient algorithm is to be expected from an output-sensitive perspective; namely, we prove that there is no output-polynomial time algorithm for the enumeration of inclusion-wise minimal separators unless P = NP.
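As a toy illustration of the filtering step used by those solvers (the function name and input format are assumptions), the following Python snippet keeps only the inclusion-wise minimal sets from a list of separators; the pairwise subset tests hint at the efficiency cost mentioned above.

    def inclusionwise_minimal(separators):
        seps = sorted({frozenset(s) for s in separators}, key=len)
        minimal = []
        for s in seps:  # shorter sets first, so any strict subset is seen first
            if not any(m < s for m in minimal):  # m < s tests strict inclusion
                minimal.append(s)
        return minimal

    # Example: {2} is a subset of {1, 2}, so {1, 2} is filtered out.
    print(inclusionwise_minimal([{1, 2}, {2}, {3, 4}]))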
We provide two families of algorithms to compute characteristic polynomials of endomorphisms and norms of isogenies of Drinfeld modules. Our algorithms work for Drinfeld modules of any rank, defined over any base curve. When the base curve is $\mathbb P^1_{\mathbb F_q}$, we carry out a thorough complexity analysis, showing that our algorithms are, in many cases, the asymptotically fastest available. The first family of algorithms relies on the correspondence between Drinfeld modules and Anderson motives, reducing the computation to linear algebra over a polynomial ring. The second family, available only for the Frobenius endomorphism, is based on a formula expressing the characteristic polynomial of the Frobenius as a reduced norm in a central simple algebra.
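As a toy illustration of the first family's reduction to linear algebra over a polynomial ring (the matrix is a hypothetical stand-in for the action of an endomorphism on an Anderson motive, and we work over $\mathbb{Q}[T]$ rather than $\mathbb{F}_q[T]$ purely for simplicity), a characteristic polynomial with polynomial entries can be computed directly:

    import sympy as sp

    T, X = sp.symbols('T X')
    M = sp.Matrix([[T, 1],
                   [T**2 + 1, T]])          # hypothetical motive action
    chi = M.charpoly(X).as_expr()           # det(X*I - M), a polynomial in X and T
    print(sp.expand(chi))                   # X**2 - 2*T*X - 1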
Motivated by the need for a rigorous analysis of the numerical stability of variational least-squares kernel-based methods for solving second-order elliptic partial differential equations, we provide previously lacking stability inequalities. This fills a significant theoretical gap in the previous work [Comput. Math. Appl. 103 (2021) 1-11], which provided error estimates based on a conjectured stability. With the stability estimate now rigorously proven, we complete the theoretical foundations and compare the observed convergence behavior to the proven rates. Furthermore, we establish another stability inequality involving weighted-discrete norms, and provide a theoretical proof that exact quadrature weights are not necessary for the weighted least-squares kernel-based collocation method to converge. Our novel theoretical insights are validated by numerical examples, which showcase the relative efficiency and accuracy of these methods on data sets with large mesh ratios. The results confirm our theoretical predictions regarding the performance of the variational least-squares kernel-based method, the least-squares kernel-based collocation method, and our new weighted least-squares kernel-based collocation method. Most importantly, our results demonstrate that all three methods converge at the same rate, validating our convergence theory for the weighted least-squares method.
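To make the setting concrete, the following minimal Python sketch (assumed toy parameters, not the paper's exact formulation) solves $-u'' = f$ on $(0,1)$ with homogeneous Dirichlet conditions by least-squares kernel-based collocation with a Gaussian kernel; a weighted variant would simply scale each collocation row by the square root of its quadrature weight.

    import numpy as np

    s = 4.0                                          # kernel shape parameter
    phi    = lambda x, c: np.exp(-s**2 * (x - c)**2)
    phi_xx = lambda x, c: (4*s**4*(x - c)**2 - 2*s**2) * phi(x, c)

    centers = np.linspace(0, 1, 20)                  # trial centers
    colloc  = np.linspace(0, 1, 60)                  # oversampled collocation points
    f = lambda x: np.pi**2 * np.sin(np.pi * x)       # exact solution u = sin(pi x)

    A_pde = -phi_xx(colloc[:, None], centers[None, :])   # PDE rows: -u'' = f
    A_bc  = phi(np.array([[0.0], [1.0]]), centers[None, :])  # boundary rows
    A = np.vstack([A_pde, A_bc])
    b = np.concatenate([f(colloc), [0.0, 0.0]])

    coef, *_ = np.linalg.lstsq(A, b, rcond=None)     # least-squares solve
    u = lambda x: phi(x[:, None], centers[None, :]) @ coef
    xs = np.linspace(0, 1, 200)
    print(np.max(np.abs(u(xs) - np.sin(np.pi * xs))))    # max error on a fine grid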
This research note provides algebraic characterizations of the least model, subsumption, and uniform equivalence of propositional Krom logic programs.
This paper introduces a time-domain combined field integral equation for electromagnetic scattering by a perfect electric conductor. The new equation is obtained by leveraging the quasi-Helmholtz projectors, which separate both the unknown and the source fields into solenoidal and irrotational components. These two components are then appropriately rescaled to cure the loss of accuracy that occurs when the time step is large. Yukawa-type integral operators with a purely imaginary wave number are also used as a Calderon preconditioner to eliminate the ill-conditioning of the matrix systems. The stabilized time-domain electric and magnetic field integral equations are linearly combined in a Calderon-like fashion, then temporally discretized using a proper pair of trial functions, resulting in a marching-on-in-time linear system. The novel formulation is immune to spurious resonances, dense-discretization breakdown, large-time-step breakdown, and dc instabilities stemming from non-trivial kernels. Numerical results for both simply connected and multiply connected scatterers corroborate the theoretical analysis.
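For concreteness, the quasi-Helmholtz projectors can be sketched in a few lines of Python following the construction standard in this literature, $P^{\Sigma} = \Sigma(\Sigma^T\Sigma)^+\Sigma^T$ and $P^{\Lambda H} = I - P^{\Sigma}$; the matrix $\Sigma$ below is a random stand-in for the RWG-to-star mapping, an assumption for illustration only.

    import numpy as np

    def quasi_helmholtz_projectors(Sigma):
        pinv = np.linalg.pinv(Sigma.T @ Sigma)        # pseudo-inverse handles rank deficiency
        P_irr = Sigma @ pinv @ Sigma.T                # irrotational (star) projector
        P_sol = np.eye(Sigma.shape[0]) - P_irr        # solenoidal complement
        return P_sol, P_irr

    rng = np.random.default_rng(0)
    Sigma = rng.integers(-1, 2, size=(12, 7)).astype(float)  # toy incidence-like matrix
    P_sol, P_irr = quasi_helmholtz_projectors(Sigma)
    assert np.allclose(P_irr @ P_irr, P_irr)           # idempotent
    assert np.allclose(P_sol @ P_irr, 0, atol=1e-10)   # complementary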
Singularly perturbed boundary value problems pose a significant challenge for numerical approximation because of the presence of sharp boundary layers. These layers are responsible for the stiffness of solutions, which leads to large computational errors if not properly handled. It is well known that classical numerical methods, as well as Physics-Informed Neural Networks (PINNs), require special treatment near the boundary, e.g., extensive mesh refinement or finer collocation points, to obtain an accurate approximate solution, especially inside the stiff boundary layer. In this article, we modify PINNs and construct new semi-analytic SL-PINNs suitable for singularly perturbed boundary value problems. Performing a boundary layer analysis, we first find the corrector functions describing the singular behavior of the stiff solutions inside the boundary layers. We then obtain the SL-PINN approximations of the singularly perturbed problems by embedding the explicit correctors in the structure of the PINNs or by training the correctors together with the PINN approximations. Our numerical experiments confirm that the new SL-PINN methods produce stable and accurate approximations of stiff solutions.
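As a minimal sketch of the first strategy, embedding an explicit corrector (the toy problem, network size, and training settings are assumptions, not the paper's setup), consider $-\varepsilon u'' + u' = 1$ on $(0,1)$ with $u(0)=u(1)=0$, whose boundary layer at $x=1$ has the profile $e^{-(1-x)/\varepsilon}$; the PINN ansatz is a smooth network plus the corrector with a trainable amplitude.

    import torch

    eps = 1e-3
    net = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(),
                              torch.nn.Linear(20, 20), torch.nn.Tanh(),
                              torch.nn.Linear(20, 1))
    amp = torch.nn.Parameter(torch.zeros(1))     # trainable corrector amplitude

    def u(x):                                    # smooth part + explicit corrector
        return net(x) + amp * torch.exp(-(1 - x) / eps)

    def loss(x_in):
        x = x_in.requires_grad_(True)
        ux = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
        uxx = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
        res = -eps * uxx + ux - 1.0              # PDE residual
        xb = torch.tensor([[0.0], [1.0]])        # boundary points
        return (res**2).mean() + (u(xb)**2).mean()

    opt = torch.optim.Adam(list(net.parameters()) + [amp], lr=1e-3)
    x_in = torch.rand(200, 1)                    # uniform collocation points
    for step in range(2000):
        opt.zero_grad(); l = loss(x_in); l.backward(); opt.step()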
Human cognition operates on a "Global-first" cognitive mechanism, prioritizing information processing based on coarse-grained details. This mechanism inherently possesses an adaptive multi-granularity description capacity, which gives rise to computational traits such as efficiency, robustness, and interpretability. In contrast, most existing computational methods rely on analysis at the finest, single granularity, which makes them less efficient, robust, and interpretable, and is an important reason for the current lack of interpretability in neural networks. Multi-granularity granular-ball computing employs granular-balls of varying sizes to adaptively represent and envelop the sample space, and learning is then carried out on these granular-balls. Since the number of coarse-grained granular-balls is smaller than the number of sample points, granular-ball computing is more efficient. Moreover, the inherent coarse-grained nature of granular-balls reduces susceptibility to fine-grained sample disturbances, enhancing robustness. The multi-granularity structure of granular-balls generates topological structures and coarse-grained descriptions, naturally augmenting interpretability. Granular-ball computing has successfully ventured into diverse AI domains, fostering the development of innovative theoretical methods, including granular-ball classifiers, clustering techniques, neural networks, rough sets, and evolutionary computing. This has notably improved the efficiency, noise robustness, and interpretability of traditional methods. Overall, granular-ball computing is a rare and innovative theoretical approach in AI that can adaptively and simultaneously enhance efficiency, robustness, and interpretability. This article surveys the main application landscapes of granular-ball computing, aiming to equip future researchers with references and insights for refining and expanding this promising theory.
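A common granular-ball construction, sketched below in Python under assumed details (the purity threshold, 2-means splitting, and the mean-distance radius are drawn from typical descriptions in this literature, not a definitive implementation), recursively splits the samples until each ball is sufficiently pure, then represents each ball by its center and radius.

    import numpy as np
    from sklearn.cluster import KMeans

    def granular_balls(X, y, purity=0.95, min_size=4):
        """Recursively split (X, y) into (center, radius, label) balls;
        y holds nonnegative integer class labels."""
        majority = np.bincount(y).argmax()
        if (np.mean(y == majority) >= purity or len(X) <= min_size
                or len(np.unique(X, axis=0)) < 2):   # pure or indivisible: stop
            center = X.mean(axis=0)
            radius = np.linalg.norm(X - center, axis=1).mean()
            return [(center, radius, majority)]
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # 2-means split
        return (granular_balls(X[labels == 0], y[labels == 0], purity, min_size)
                + granular_balls(X[labels == 1], y[labels == 1], purity, min_size))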
Owing to the dynamic characteristics of instantaneity and steepness, domain decomposition techniques are well suited to simulating rogue wave solutions. In particular, the backward-compatible PINN (bc-PINN) is a temporally sequential scheme that solves PDEs over successive time segments while satisfying all previously obtained solutions. In this work, we improve the original bc-PINN algorithm in two respects, based on the characteristics of error propagation. First, we modify the backward-compatibility loss term by selecting the earliest learned solution for each sub-domain as the pseudo reference solution. Second, we adopt the concatenation of the solutions obtained from the individual subnetworks as the final form of the predicted solution. The improved backward-compatible PINN (Ibc-PINN) is applied to study data-driven higher-order rogue waves of the nonlinear Schr\"{o}dinger (NLS) equation and the AB system, demonstrating its effectiveness and advantages. Transfer learning and initial condition guided learning (ICGL) techniques are also utilized to accelerate the training. Moreover, an error analysis conducted on each sub-domain shows that the slower error accumulation of Ibc-PINN yields growing advantages in accuracy. In short, the numerical results fully indicate that Ibc-PINN significantly outperforms bc-PINN in accuracy and stability without sacrificing efficiency.
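The two modifications can be pictured with a schematic toy in Python (assumed setup: the simple ODE $u_t = -u$, $u(0)=1$, stands in for the NLS equation, with abbreviated training; this is a sketch of the idea, not the paper's implementation): each segment gets its own subnetwork, the backward-compatibility loss matches the earliest learned solution on every previous segment, and the final prediction concatenates the subnetworks.

    import torch

    def mlp():
        return torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                                   torch.nn.Linear(32, 1))

    segments = [(0.0, 1.0), (1.0, 2.0)]      # successive time sub-domains
    nets, pseudo_refs = [], []                # one subnetwork per segment

    for i, (t0, t1) in enumerate(segments):
        net = mlp()
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        t = torch.linspace(t0, t1, 100).reshape(-1, 1)
        for step in range(2000):
            opt.zero_grad()
            tc = t.clone().requires_grad_(True)
            u = net(tc)
            ut = torch.autograd.grad(u.sum(), tc, create_graph=True)[0]
            loss = ((ut + u) ** 2).mean()                    # residual of u_t = -u
            if i == 0:                                       # initial condition
                loss = loss + (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()
            for tp, up in pseudo_refs:                       # backward compatibility:
                loss = loss + ((net(tp) - up) ** 2).mean()   # match earliest solutions
            loss.backward(); opt.step()
        nets.append(net)
        with torch.no_grad():                  # store the *earliest* learned solution
            pseudo_refs.append((t, net(t)))    # as the pseudo reference for segment i

    def predict(tq):                           # final solution: concatenation of the
        out = torch.empty_like(tq)             # individual subnetwork predictions
        for (t0, t1), net in zip(segments, nets):
            mask = (tq >= t0) & (tq <= t1)
            with torch.no_grad():
                out[mask] = net(tq[mask].reshape(-1, 1)).flatten()
        return out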
The accurate and efficient evaluation of Newtonian potentials over general 2-D domains is important for the numerical solution of Poisson's equation and volume integral equations. In this paper, we present a simple and efficient high-order algorithm for computing the Newtonian potential over a planar domain discretized by an unstructured mesh. The algorithm is based on the use of Green's third identity for transforming the Newtonian potential into a collection of layer potentials over the boundaries of the mesh elements, which can be easily evaluated by the Helsing-Ojala method. One important component of our algorithm is the use of high-order (up to order 20) bivariate polynomial interpolation in the monomial basis, for which we provide extensive justification. The performance of our algorithm is illustrated through several numerical experiments.
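The key reduction can be stated concisely (notation assumed): if $u_E$ is any function satisfying $\Delta u_E = f$ on a mesh element $E$, obtained here from a polynomial approximation of $f$ in the monomial basis, then Green's third identity gives $$\int_E G(x,y)\, f(y)\, dA_y = \int_{\partial E} \left( G(x,y)\, \frac{\partial u_E}{\partial n_y}(y) - u_E(y)\, \frac{\partial G}{\partial n_y}(x,y) \right) ds_y + \chi_E(x)\, u_E(x),$$ where $G(x,y) = \frac{1}{2\pi}\log|x-y|$ and $\chi_E(x)$ equals $1$ if $x \in E$ and $0$ otherwise; the boundary integrals are single- and double-layer potentials amenable to the Helsing-Ojala method.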
This paper considers the problem of robust iterative Bayesian smoothing in nonlinear state-space models with additive noise using Gaussian approximations. Iterative methods are known to improve smoothed estimates but are not guaranteed to converge, motivating the development of more robust versions of the algorithms. The aim of this article is to present Levenberg-Marquardt (LM) and line-search extensions of the classical iterated extended Kalman smoother (IEKS) as well as the iterated posterior linearisation smoother (IPLS). The IEKS has previously been shown to be equivalent to the Gauss-Newton (GN) method. We derive a similar GN interpretation for the IPLS. Furthermore, we show that an LM extension for both iterative methods can be achieved with a simple modification of the smoothing iterations, enabling algorithms with efficient implementations. Our numerical experiments show the importance of robust methods, in particular for the IEKS-based smoothers. The computationally expensive IPLS-based smoothers are naturally robust but can still benefit from further regularisation.
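To make the optimization connection concrete (notation assumed): writing the smoothing problem as a nonlinear least-squares cost with residual $r$ and Jacobian $J$, the GN iteration underlying the IEKS takes steps of the form $x^{(i+1)} = x^{(i)} - (J^\top J)^{-1} J^\top r(x^{(i)})$, while the LM extension damps it, $$x^{(i+1)} = x^{(i)} - (J^\top J + \lambda I)^{-1} J^\top r(x^{(i)}),$$ recovering GN as $\lambda \to 0$. In the smoother, this damping can be realized by augmenting each time step with an artificial measurement of the current iterate observed with noise covariance $\lambda^{-1} I$, which is the kind of simple modification of the smoothing iterations alluded to above.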