We present an efficient method for propagating the time-dependent Kohn-Sham equations in free space, based on the recently introduced Fourier contour deformation (FCD) approach. For potentials which are constant outside a bounded domain, FCD yields a high-order accurate numerical solution of the time-dependent Schrödinger equation directly in free space, without the need for artificial boundary conditions. Of the many existing artificial boundary condition schemes, FCD is most similar to an exact nonlocal transparent boundary condition, but it works directly on Cartesian grids in any dimension, and runs on top of the fast Fourier transform rather than fast algorithms for the application of nonlocal history integral operators. We adapt FCD to time-dependent density functional theory (TDDFT), and describe a simple algorithm to smoothly and automatically truncate long-range Coulomb-like potentials to a time-dependent constant outside of a bounded domain of interest, so that FCD can be used. This approach eliminates errors originating from the use of artificial boundary conditions, leaving only the error of the potential truncation, which is controlled and can be systematically reduced. The method enables accurate simulations of ultrastrong nonlinear electronic processes in molecular complexes in which the interference between bound and continuum states is of paramount importance. We demonstrate results for many-electron TDDFT calculations of absorption and strong-field photoelectron spectra for one- and two-dimensional models, and observe a significant reduction in the size of the computational domain required to achieve high-quality results, as compared with the popular method of complex absorbing potentials.
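As a rough illustration of the kind of potential truncation involved, the sketch below blends a Coulomb-like tail to a constant outside a chosen radius using a $C^\infty$ cutoff, so that the "constant outside a bounded domain" hypothesis of FCD holds. The blending function, the radii, and the use of a static rather than time-dependent constant are illustrative assumptions, not the paper's specific algorithm.

```python
# Minimal sketch: smoothly truncate a Coulomb-like potential to a constant
# outside radius r1. The C^inf step and radii are illustrative choices.
import numpy as np

def smooth_step(s):
    """C^inf step: 0 for s <= 0, 1 for s >= 1, smooth in between."""
    s = np.clip(s, 0.0, 1.0)
    a = np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-300)), 0.0)
    b = np.where(s < 1, np.exp(-1.0 / np.maximum(1.0 - s, 1e-300)), 0.0)
    return a / (a + b)

def truncate_potential(v, r, r0, r1):
    """Blend v to the constant v(r1) on r0 <= r <= r1; exactly constant beyond r1."""
    chi = smooth_step((r - r0) / (r1 - r0))   # 0 inside r0, 1 outside r1
    v_const = v[np.argmin(np.abs(r - r1))]    # value frozen outside the domain
    return (1.0 - chi) * v + chi * v_const

r = np.linspace(1e-3, 20.0, 2001)
v = -1.0 / r                                  # Coulomb-like tail
v_trunc = truncate_potential(v, r, r0=10.0, r1=15.0)
```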
We present a novel reinforcement learning-based algorithm for the multi-robot task allocation problem in warehouse environments. We formulate the problem as a Markov decision process and solve it via a novel deep multi-agent reinforcement learning method (called RTAW) with an attention-inspired policy architecture. As a result, our proposed policy network uses global embeddings that are independent of the number of robots and tasks. We use the proximal policy optimization algorithm for training and a carefully designed reward to obtain a converged policy. The converged policy ensures cooperation among different robots to minimize total travel delay (TTD), which ultimately improves the makespan for a sufficiently large task list. In our extensive experiments, we compare the performance of our RTAW algorithm with state-of-the-art methods such as myopic pickup-distance minimization (greedy) and regret-based baselines on different navigation schemes. We show an improvement of up to 14% (25-1000 seconds) in TTD on scenarios with hundreds or thousands of tasks for different challenging warehouse layouts and task generation schemes. We also demonstrate the scalability of our approach by showing performance with up to $1000$ robots in simulations.
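To illustrate the architectural claim, the minimal sketch below shows why a dot-product attention scoring head has a parameter count independent of the number of robots and tasks: the learned projections act on fixed-width embeddings, and the task dimension only appears at evaluation time. The embedding width, projections, and scoring rule are assumptions for illustration, not the RTAW network.

```python
# Sketch of an attention-style task scorer with size-independent parameters.
import numpy as np

rng = np.random.default_rng(0)
d = 32                                        # embedding width (assumed)
W_q = rng.normal(size=(d, d)) / np.sqrt(d)    # query projection (learned)
W_k = rng.normal(size=(d, d)) / np.sqrt(d)    # key projection (learned)

def task_logits(robot_emb, task_embs):
    """Score every task for one robot; works for any number of tasks."""
    q = robot_emb @ W_q                       # (d,)
    k = task_embs @ W_k                       # (n_tasks, d)
    return k @ q / np.sqrt(d)                 # (n_tasks,) unnormalized scores

robot = rng.normal(size=d)
tasks = rng.normal(size=(100, d))             # 100 candidate tasks
probs = np.exp(task_logits(robot, tasks))
probs /= probs.sum()                          # softmax over tasks
```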
Gradients have been exploited in proposal distributions to accelerate the convergence of Markov chain Monte Carlo algorithms on discrete distributions. However, these methods require a natural differentiable extension of the target discrete distribution, which often does not exist or does not provide effective gradient guidance. In this paper, we develop a gradient-like proposal for any discrete distribution without this strong requirement. Built upon a locally-balanced proposal, our method efficiently approximates the discrete likelihood ratio via a Newton series expansion to enable large and efficient exploration in discrete spaces. We show that our method can also be interpreted as operating on a multilinear extension of the target, thus inheriting its desirable properties. We prove that our method has a guaranteed convergence rate with or without the Metropolis-Hastings step. Furthermore, our method outperforms a number of popular alternatives in several different experiments, including the facility location problem, extractive text summarization, and image retrieval.
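To make the starting point concrete, here is a minimal sketch of a locally-balanced proposal on binary vectors, where single-bit-flip probabilities are proportional to $g(\pi(x')/\pi(x))$ with $g(t) = \sqrt{t}$. The likelihood ratio is computed exactly below; the paper's contribution is to approximate this ratio (via a Newton series) when an exact or differentiable form is unavailable, so the toy target and exact ratios are illustrative assumptions.

```python
# Locally-balanced single-bit-flip proposal with g(t) = sqrt(t).
import numpy as np

def locally_balanced_flip(x, log_pi, rng):
    """Propose flipping one coordinate of x with locally-balanced weights."""
    base = log_pi(x)
    ratios = np.empty(len(x))
    for i in range(len(x)):                   # naive O(n) exact ratios
        y = x.copy(); y[i] = 1 - y[i]
        ratios[i] = np.exp(0.5 * (log_pi(y) - base))   # g(t) = sqrt(t)
    p = ratios / ratios.sum()
    i = rng.choice(len(x), p=p)
    y = x.copy(); y[i] = 1 - y[i]
    return y, p[i]                            # proposal and forward probability

# Toy target: independent Bernoulli(0.7) bits.
log_pi = lambda x: np.sum(x * np.log(0.7) + (1 - x) * np.log(0.3))
rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=10)
y, q_fwd = locally_balanced_flip(x, log_pi, rng)
```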
We consider entropy-conservative and entropy-dissipative discretizations of nonlinear conservation laws with implicit time discretizations, and investigate the influence of the iterative methods used to solve the arising nonlinear equations. We show that Newton's method can turn an entropy-dissipative scheme into an anti-dissipative one, even when the iteration error is smaller than the time integration error. We explore several remedies, of which the most performant is a relaxation technique, originally designed to fix entropy errors in time integration methods. Thus, relaxation works well in concert with iterative solvers, provided that the iteration errors are on the order of the time integration error. To corroborate our findings, we consider Burgers' equation and nonlinear dispersive wave equations. We find that entropy conservation results in more accurate numerical solutions than non-conservative schemes, even when the solver tolerance is an order of magnitude larger.
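To illustrate the relaxation idea in its simplest setting, the sketch below rescales an (inexactly computed) implicit update so that a quadratic entropy $\eta(u) = \tfrac{1}{2}\|u\|^2$ changes by exactly a prescribed amount; for this entropy the relaxation parameter has a closed form. The toy update is a stand-in for a Newton solve and is not the paper's scheme.

```python
# Relaxation for quadratic entropy: rescale the update u_old + gamma*d so
# that 0.5*||u||^2 changes by exactly target_change (zero for conservation).
import numpy as np

def relax_quadratic_entropy(u_old, u_new, target_change=0.0):
    d = u_new - u_old
    a = 0.5 * np.dot(d, d)                # coefficient of gamma^2
    b = np.dot(u_old, d)                  # coefficient of gamma
    # Solve a*gamma^2 + b*gamma - target_change = 0; for a near-conservative
    # step (b ~ -a), this root is close to gamma = 1 (the unrelaxed step).
    disc = np.sqrt(max(b * b + 4.0 * a * target_change, 0.0))
    gamma = (-b + disc) / (2.0 * a) if a > 0 else 1.0
    return u_old + gamma * d

theta = 0.1
u_old = np.array([1.0, 0.0])
u_new = (1 + 1e-4) * np.array([np.cos(theta), np.sin(theta)])  # norm slightly off
u_rel = relax_quadratic_entropy(u_old, u_new)
print(0.5 * np.dot(u_rel, u_rel) - 0.5 * np.dot(u_old, u_old))  # ~0 (roundoff)
```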
Kinetic equations model the position-velocity distribution of particles subject to transport and collision effects. Under a diffusive scaling, these combined effects converge to a diffusion equation for the position density in the limit of an infinite collision rate. Despite this well-defined limit, numerical simulation is expensive when the collision rate is high but finite, as small time steps are then required. In this work, we present an asymptotic-preserving multilevel Monte Carlo particle scheme that makes use of this diffusive limit to accelerate computations. In this scheme, we first sample the diffusive limiting model to compute a biased initial estimate of a quantity of interest, using large time steps. We then perform a limited number of finer simulations with transport and collision dynamics to correct the bias. The efficiency of the multilevel method depends on being able to perform correlated simulations of particles on a hierarchy of discretization levels. We present a method for correlating particle trajectories and support it with both analysis and numerical experiments. We demonstrate that our approach significantly reduces the cost of particle simulations in highly collisional regimes, compared with prior work, indicating significant potential for adopting these schemes in various areas of active research.
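A minimal sketch of the two-level structure, assuming toy stand-ins for the diffusive-limit and kinetic models: many cheap samples of the biased level give a first estimate, and a few correlated fine/coarse pairs (here correlated via common random numbers) correct the bias.

```python
# Two-level Monte Carlo: E[Q] ~ mean(Q_coarse) + mean(Q_fine - Q_coarse),
# with the correction term evaluated on correlated sample pairs.
import numpy as np

rng = np.random.default_rng(2)

def q_coarse(xi):            # cheap biased model (stand-in for diffusive limit)
    return np.sin(xi) + 0.05 * xi

def q_fine(xi):              # expensive model (stand-in for kinetic dynamics)
    return np.sin(xi)

n0, n1 = 100_000, 1_000      # many cheap samples, few correction pairs
level0 = q_coarse(rng.normal(size=n0)).mean()      # biased estimate

xi1 = rng.normal(size=n1)                          # common random numbers
correction = (q_fine(xi1) - q_coarse(xi1)).mean()  # low-variance bias correction

print(level0 + correction)   # multilevel estimate of E[Q]
```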
The research in this article aims to find conditions of an algorithmic nature that are necessary and sufficient to transform any Boolean function in conjunctive normal form into a specific form that guarantees the satisfiability of this function. To find such conditions, we use the concept of a special covering of a set introduced in [13], and investigate the connection between this concept and the notion of satisfiability of Boolean functions. As shown, the problem of the existence of a special covering for a set is equivalent to the Boolean satisfiability problem. Thus, an important result is the proof of the existence of necessary and sufficient conditions that make it possible to determine whether there is a special covering for the set under the special decomposition. This result allows us to formulate necessary and sufficient algorithmic conditions for Boolean satisfiability, considering the function in conjunctive normal form as a set of clauses. In parallel, as a result of the aforementioned algorithmic procedure, we obtain the values of the variables that ensure the satisfiability of the function. The terminology used, relating to graph theory, set theory, Boolean functions, and complexity theory, is consistent with that in [1], [2], [3], [4]. The newly introduced terms are not used by other authors and do not contradict established terminology.
Most research on preconditioners for time-dependent PDEs has focused on implicit multi-step or diagonally-implicit multi-stage temporal discretizations. In this paper, we consider monolithic multigrid preconditioners for fully-implicit multi-stage Runge-Kutta (RK) time integration methods. These temporal discretizations have very attractive accuracy and stability properties, but they couple the spatial degrees of freedom across multiple time levels, requiring the solution of very large linear systems. We extend the classical Vanka relaxation scheme to implicit RK discretizations of saddle point problems. We present numerical results for the incompressible Stokes, Navier-Stokes, and resistive magnetohydrodynamics equations, in two and three dimensions, confirming that these relaxation schemes lead to robust and scalable monolithic multigrid methods for a challenging range of incompressible fluid-flow models.
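For context, the coupling referred to above is visible in the stage equations: for a linearized problem $M \frac{du}{dt} = K u$ and an $s$-stage fully-implicit RK method with Butcher matrix $A = (a_{ij})$ and weights $b_i$, the stage vector satisfies the block-coupled system

$$
\bigl( I_s \otimes M - \Delta t \, A \otimes K \bigr)
\begin{pmatrix} k_1 \\ \vdots \\ k_s \end{pmatrix}
=
\begin{pmatrix} K u^n \\ \vdots \\ K u^n \end{pmatrix},
\qquad
u^{n+1} = u^n + \Delta t \sum_{i=1}^{s} b_i k_i ,
$$

so all $s$ stages must be solved for simultaneously. This is the generic form for illustration; it does not show the paper's specific saddle-point block structure.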
Digital terrain models of geological information often require smoothing scattered data over domains with complex irregular boundaries that arise from the data distribution. Traditional thin plate splines produce visually pleasing surfaces, but they are too computationally expensive for large data sets. The finite element thin plate spline smoother (TPSFEM) is an alternative that uses first-order finite elements to efficiently interpolate and smooth large data sets. Previous studies focused on regular square domains, which are insufficient for real-world applications. This article builds on prior work and investigates the performance of the TPSFEM and adaptive mesh refinement for real-world data sets in irregular domains. The Dirichlet boundary conditions are approximated using the thin plate spline, and data-dependent weights are applied to prevent over-refinement near boundaries. Three geological surveys (aerial, terrestrial, and bathymetric) with distinct data distribution patterns were tested in the numerical experiments. We found that irregular domains with adaptive mesh refinement significantly improve the efficiency of the interpolation. While the approximated boundary conditions may introduce inconsistencies, these can be mitigated using additional constraints such as the data-dependent weights. This finding is also applicable to other finite element-based smoothers.
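For reference, in two dimensions the classical thin plate spline smoother minimizes a weighted data-fidelity term plus a curvature penalty over the domain $\Omega$,

$$
\min_{f}\ \sum_{i=1}^{n} w_i \bigl( f(\mathbf{x}_i) - y_i \bigr)^2
+ \alpha \int_{\Omega} \bigl( f_{xx}^2 + 2 f_{xy}^2 + f_{yy}^2 \bigr) \, d\mathbf{x},
$$

where $\alpha$ is the smoothing parameter. The TPSFEM discretizes a variant of this functional with first-order finite elements; placing the data-dependent weights $w_i$ in the fidelity term is an assumption here for illustration.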
In this paper, we provide near-optimal accelerated first-order methods for minimizing a broad class of smooth nonconvex functions that are strictly unimodal on all lines through a minimizer. This function class, which we call the class of smooth quasar-convex functions, is parameterized by a constant $\gamma \in (0,1]$, where $\gamma = 1$ encompasses the classes of smooth convex and star-convex functions, and smaller values of $\gamma$ indicate that the function can be "more nonconvex." We develop a variant of accelerated gradient descent that computes an $\epsilon$-approximate minimizer of a smooth $\gamma$-quasar-convex function with at most $O(\gamma^{-1} \epsilon^{-1/2} \log(\gamma^{-1} \epsilon^{-1}))$ total function and gradient evaluations. We also derive a lower bound of $\Omega(\gamma^{-1} \epsilon^{-1/2})$ on the worst-case number of gradient evaluations required by any deterministic first-order method, showing that, up to a logarithmic factor, no deterministic first-order method can improve upon ours.
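For context, a differentiable function $f$ is $\gamma$-quasar-convex with respect to a minimizer $x^*$ if for all $x$

$$
f(x^*) \ge f(x) + \frac{1}{\gamma} \nabla f(x)^{\top} (x^* - x),
$$

so $\gamma = 1$ recovers star-convexity (and is implied by convexity), while smaller $\gamma$ weakens the condition.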
Model Predictive Path Integral (MPPI) control is a type of sampling-based model predictive control that simulates thousands of trajectories and uses them to synthesize optimal controls on-the-fly. In practice, however, MPPI encounters problems that limit its application. For instance, it has been observed that MPPI tends to make poor decisions if unmodeled dynamics or environmental disturbances exist, preventing its use in safety-critical applications. Moreover, the multi-threaded simulations used by MPPI require significant onboard computational resources, making the algorithm inaccessible to robots without modern GPUs. To alleviate these issues, we propose a novel algorithm, Shield-MPPI, that provides robustness against unpredicted disturbances and achieves real-time planning using a much smaller number of parallel simulations on regular CPUs. The proposed Shield-MPPI algorithm is tested on an aggressive autonomous racing platform, both in simulation and in physical experiments. The results show that the proposed controller greatly reduces the number of constraint violations compared to state-of-the-art robust MPPI variants and stochastic MPC methods.
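For orientation, the core MPPI update (without the shielding modification introduced above) is a weighted average of sampled control perturbations, with weights exponential in trajectory cost; the dynamics, cost, and hyperparameters below are toy stand-ins.

```python
# Core MPPI step: sample perturbed control sequences, roll them out,
# and update the nominal controls by a cost-weighted average.
import numpy as np

rng = np.random.default_rng(3)
H, K, lam, sigma = 20, 256, 1.0, 0.5     # horizon, samples, temperature, noise

def rollout_cost(u_seq, x0=1.0):
    """Toy 1D system: drive the state to zero with small control effort."""
    x, cost = x0, 0.0
    for u in u_seq:
        x = x + 0.1 * u                  # toy dynamics
        cost += x**2 + 0.01 * u**2
    return cost

u_nom = np.zeros(H)                      # nominal control sequence
eps = rng.normal(scale=sigma, size=(K, H))
costs = np.array([rollout_cost(u_nom + e) for e in eps])
w = np.exp(-(costs - costs.min()) / lam)
w /= w.sum()                             # softmin weights over trajectories
u_nom = u_nom + w @ eps                  # weighted perturbation update
```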
One of the key tasks in economics is forecasting economic agents' expectations of the future values of economic variables using mathematical models. The behavior of such models can be irregular, including chaotic, which reduces their predictive power. In this paper, we study the regimes of behavior of two economic models and identify irregular dynamics in them. Using these models as examples, we demonstrate the effectiveness of evolutionary algorithms and the continuous deep Q-learning method, in combination with the Pyragas control method, for deriving a control action that stabilizes unstable periodic trajectories and suppresses chaotic dynamics. We compare qualitative and quantitative characteristics of the models' dynamics before and after applying control, and verify the obtained results by numerical simulation. The proposed approach can improve the reliability of forecasting and the tuning of economic mechanisms to achieve maximum decision-making efficiency.
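For reference, the Pyragas method stabilizes an unstable periodic orbit of period $\tau$ by delayed feedback,

$$
\dot{x}(t) = F\bigl(x(t)\bigr) + K \bigl( x(t - \tau) - x(t) \bigr),
$$

(or its discrete-time analogue for maps), so the control term vanishes once the target orbit is reached. In the setting above, the evolutionary algorithms and deep Q-learning serve to tune such a control action.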