We study the sharp interface limit of the stochastic Cahn-Hilliard equation with cubic double-well potential and additive space-time white noise $\epsilon^{\sigma}\dot{W}$ where $\epsilon>0$ is an interfacial width parameter. We prove that, for sufficiently large scaling constant $\sigma >0$, the stochastic Cahn-Hilliard equation converges to the deterministic Mullins-Sekerka/Hele-Shaw problem for $\epsilon\rightarrow 0$. The convergence is shown in suitable fractional Sobolev norms as well as in the $L^p$-norm for $p\in (2, 4]$ in spatial dimension $d=2,3$. This generalizes the existing result for the space-time white noise to dimension $d=3$ and improves the existing results for smooth noise, which were so far limited to $p\in \left(2, \frac{2d+8}{d+2}\right]$ in spatial dimension $d=2,3$. As a byproduct of the analysis of the stochastic problem with space-time white noise, we identify minimal regularity requirements on the noise which allow convergence to the sharp interface limit in the $\mathbb{H}^1$-norm and also provide improved convergence estimates for the sharp interface limit of the deterministic problem.
The design of particle simulation methods for collisional plasma physics has always represented a challenge due to the unbounded total collisional cross section, which prevents a natural extension of the classical Direct Simulation Monte Carlo (DSMC) method devised for the Boltzmann equation. One way to overcome this problem is to consider the design of Monte Carlo algorithms that are robust in the so-called grazing collision limit. In the first part of this manuscript, we focus on the construction of collision algorithms for the Landau-Fokker-Planck equation that are based on the grazing collision asymptotics and avoid the use of iterative solvers. Subsequently, we discuss problems involving uncertainties and show how to develop a stochastic Galerkin projection of the particle dynamics which allows us to recover spectral accuracy for smooth solutions in the random space. Several classical numerical tests are reported to validate the present approach.
In this work we propose tailored model order reduction for varying boundary optimal control problems governed by parametric partial differential equations. By varying boundary control, we mean that a specific parameter changes where the boundary control acts on the system. This particular formulation can benefit from model order reduction. Indeed, fast and reliable simulations of this model are extremely useful in many applied fields, such as geophysics and energy engineering. However, varying boundary control gives rise to very complicated and diversified parametric behaviour for the state and adjoint variables. The state solution, for example, might exhibit transport phenomena as the boundary control parameter changes. Moreover, the problem loses its affine structure. It is well known that classical model order reduction techniques fail in this setting, both in accuracy and in efficiency. Thus, we propose reduced approaches inspired by the ones used when dealing with wave-like phenomena. Indeed, we compare standard proper orthogonal decomposition with two tailored strategies: geometric recasting and local proper orthogonal decomposition. Geometric recasting solves the optimization system in a reference domain, simplifying the problem at hand and avoiding hyper-reduction, while local proper orthogonal decomposition builds local bases to increase the accuracy of the reduced solution in very general settings (where geometric recasting is unfeasible). We compare the various approaches on two numerical experiments based on geometries of increasing complexity.
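As a point of reference for the strategies compared above, the following is a minimal sketch of the standard proper orthogonal decomposition step: a POD basis is extracted from a snapshot matrix via the SVD and truncated by an energy criterion. The snapshot family (a Gaussian bump whose location stands in for a hypothetical boundary-control parameter) is purely illustrative and not taken from the paper; it only hints at why transport-like parametric behaviour makes a single global basis decay slowly.

```python
import numpy as np

# Minimal sketch of proper orthogonal decomposition (POD) from a snapshot
# matrix: columns are state solutions for different parameter values.
# Generic illustration only, not the paper's tailored/local strategies.
def pod_basis(snapshots, tol=1e-4):
    """Return POD modes capturing a (1 - tol) fraction of the snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol) + 1)
    return U[:, :r], s

# Hypothetical snapshots of a "transported" profile: a Gaussian bump whose
# location plays the role of the boundary-control parameter.
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.1, 0.9, 50)
S = np.stack([np.exp(-200.0 * (x - mu) ** 2) for mu in params], axis=1)
basis, sv = pod_basis(S)
print(basis.shape[1], "modes retained; singular values decay slowly for transport")
```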
The main focus of this paper is to approximate time series data based on the closed-loop Volterra series representation. Volterra series expansions are a valuable tool for representing, analyzing, and synthesizing nonlinear dynamical systems. However, a major limitation of this approach is that as the order of the expansion increases, the number of terms that need to be estimated grows exponentially, posing a considerable challenge. This paper considers a practical solution for estimating the closed-loop Volterra series in stationary nonlinear time series using the concepts of Reproducing Kernel Hilbert Spaces (RKHS) and polynomial kernels. We illustrate the applicability of the suggested Volterra representation by means of simulations and real data analysis. Furthermore, we apply the Kolmogorov-Smirnov Predictive Accuracy (KSPA) test, first to determine whether there exists a statistically significant difference between the distributions of estimated errors for competing time series models, and second to determine whether the estimated time series with the lower error based on some loss function also exhibits a stochastically smaller error than the estimated time series from a competing method. The obtained results indicate that the closed-loop Volterra method can outperform the ARFIMA, ETS, and Ridge regression methods in terms of both smaller error and increased interpretability.
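To make the RKHS idea concrete, here is a minimal, hedged sketch of one-step-ahead prediction with kernel ridge regression and a polynomial kernel on a lag-embedded series: the kernel evaluates inner products of monomials implicitly, which is what sidesteps the exponential growth in the number of Volterra terms. The lag length, kernel degree, ridge parameter and synthetic data below are illustrative assumptions, not the estimator or settings used in the paper.

```python
import numpy as np

# Kernel ridge regression with a polynomial kernel for one-step-ahead
# prediction of a scalar time series. The polynomial kernel keeps the
# truncated Volterra expansion implicit, so the exponentially many monomial
# terms never need to be estimated explicitly.
def lag_embed(y, p):
    """Rows are (y[t-p], ..., y[t-1]); targets are y[t]."""
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    return X, y[p:]

def poly_kernel(A, B, degree=3, c=1.0):
    return (A @ B.T + c) ** degree

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.standard_normal(400)  # toy series
X, t = lag_embed(y, p=5)
Xtr, ttr, Xte, tte = X[:300], t[:300], X[300:], t[300:]

K = poly_kernel(Xtr, Xtr)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(K)), ttr)   # ridge-regularized dual weights
pred = poly_kernel(Xte, Xtr) @ alpha
print("test RMSE:", np.sqrt(np.mean((pred - tte) ** 2)))
```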
We consider leader election in clique networks, where $n$ nodes are connected by point-to-point communication links. For the synchronous clique under simultaneous wake-up, i.e., where all nodes start executing the algorithm in round $1$, we show a tradeoff between the number of messages and the amount of time. More specifically, we show that any deterministic algorithm with a message complexity of $n f(n)$ requires $\Omega\left(\frac{\log n}{\log f(n)+1}\right)$ rounds, for $f(n) = \Omega(\log n)$. Our result holds even if the node IDs are chosen from a relatively small set of size $\Theta(n\log n)$, as we are able to avoid using Ramsey's theorem. We also give an upper bound that improves over the previously-best tradeoff. Our second contribution for the synchronous clique under simultaneous wake-up is to show that $\Omega(n\log n)$ is in fact a lower bound on the message complexity that holds for any deterministic algorithm with a termination time $T(n)$. We complement this result by giving a simple deterministic algorithm that achieves leader election in sublinear time while sending only $o(n\log n)$ messages, if the ID space is of at most linear size. We also show that Las Vegas algorithms (that never fail) require $\Theta(n)$ messages. For the synchronous clique under adversarial wake-up, we show that $\Omega(n^{3/2})$ is a tight lower bound for randomized $2$-round algorithms. Finally, we turn our attention to the asynchronous clique: Assuming adversarial wake-up, we give a randomized algorithm that achieves a message complexity of $O(n^{1 + 1/k})$ and an asynchronous time complexity of $k+8$. For simultaneous wake-up, we translate the deterministic tradeoff algorithm of Afek and Gafni to the asynchronous model, thus partially answering an open problem they pose.
In this paper, by introducing a reconstruction operator based on the Legendre moments, we construct a reduced discontinuous Galerkin (RDG) space that achieves the same approximation accuracy while using fewer degrees of freedom (DoFs) than the standard discontinuous Galerkin (DG) space. The design of the ``narrow-stencil-based'' reconstruction operator preserves the local data structure of the high-order DG methods. With the RDG space, we apply the local discontinuous Galerkin (LDG) method with implicit-explicit time marching to the nonlinear unsteady convection-diffusion-reaction equation, where the reduction in the number of DoFs allows us to achieve higher efficiency. In terms of theoretical analysis, we establish the well-posedness and approximation properties of the reconstruction operator and the $L^2$ error estimate for the semi-discrete LDG scheme. Several representative numerical tests demonstrate the accuracy and the performance of the proposed method in capturing layers.
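For readers unfamiliar with Legendre moments, the following is a small sketch, under assumed generic conventions, of the moment projection on a single reference cell: the first few Legendre coefficients of a function are computed by Gauss-Legendre quadrature and the resulting polynomial is evaluated. This only illustrates the moment machinery; it is not the narrow-stencil reconstruction operator or the RDG space construction of the paper.

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre-moment projection on the reference cell [-1, 1]: compute the
# first k+1 Legendre coefficients of f by Gauss-Legendre quadrature and
# evaluate the resulting degree-k polynomial.
def legendre_moments(f, k):
    nodes, weights = legendre.leggauss(k + 1)            # exact for degree <= 2k+1
    coeffs = []
    for m in range(k + 1):
        P_m = legendre.legval(nodes, [0] * m + [1])      # m-th Legendre polynomial
        # Orthogonality: int_{-1}^{1} P_m^2 dx = 2 / (2m + 1)
        coeffs.append((2 * m + 1) / 2 * np.sum(weights * f(nodes) * P_m))
    return np.array(coeffs)

f = lambda x: np.sin(np.pi * x)
c = legendre_moments(f, k=6)
xs = np.linspace(-1, 1, 5)
print(np.max(np.abs(legendre.legval(xs, c) - f(xs))))    # truncation error of the projection
```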
Recent successful generative models are trained by fitting a neural network to an a priori defined tractable probability density path taking noise to training examples. In this paper we investigate the space of Gaussian probability paths, which includes diffusion paths as an instance, and look for an optimal member in some useful sense. In particular, minimizing the Kinetic Energy (KE) of a path is known to make particles' trajectories simple, hence easier to sample, and empirically improves performance in terms of likelihood of unseen data and sample generation quality. We investigate Kinetic Optimal (KO) Gaussian paths and offer the following observations: (i) We show that the KE takes a simplified form on the space of Gaussian paths, where the data is incorporated only through a single one-dimensional scalar function, called the \emph{data separation function}. (ii) We characterize the KO solutions with a one-dimensional ODE. (iii) We approximate data-dependent KO paths by approximating the data separation function and minimizing the KE. (iv) We prove that the data separation function converges to $1$ in the general case of an arbitrary normalized dataset consisting of $n$ samples in $d$ dimensions as $n/\sqrt{d}\rightarrow 0$. A consequence of this result is that the Conditional Optimal Transport (Cond-OT) path becomes \emph{kinetic optimal} as $n/\sqrt{d}\rightarrow 0$. We further support this theory with empirical experiments on ImageNet.
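As a concrete illustration of the objects mentioned above, the sketch below samples the conditional-OT Gaussian path commonly used in flow matching and forms a Monte Carlo proxy for the kinetic energy of its straight-line conditional trajectories. The schedule, the normalization of the synthetic dataset and the proxy itself are assumptions made for illustration; they are not the paper's kinetic-optimal construction or its data separation function.

```python
import numpy as np

# Conditional-OT Gaussian path: x_t = (1 - (1 - sigma_min) t) x0 + t x1 with
# x0 ~ N(0, I), and a Monte Carlo proxy for E ||d x_t / dt||^2. Illustration
# of the Gaussian-path/kinetic-energy objects only.
rng = np.random.default_rng(0)
n, d, sigma_min = 64, 512, 1e-4
x1 = rng.standard_normal((n, d)) / np.sqrt(d)        # hypothetical normalized dataset

def ke_proxy(x1, num_mc=2000):
    idx = rng.integers(0, len(x1), num_mc)
    x0 = rng.standard_normal((num_mc, x1.shape[1]))
    # For the Cond-OT path the time derivative is constant in t:
    # d x_t / dt = x1 - (1 - sigma_min) x0  (straight-line trajectories).
    v = x1[idx] - (1.0 - sigma_min) * x0
    return np.mean(np.sum(v**2, axis=1))

print("kinetic-energy proxy:", ke_proxy(x1))
```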
In this work, we aim at constructing numerical schemes that are as efficient as possible, in terms of cost and conservation of invariants, for the Vlasov--Fokker--Planck system coupled with the Poisson or Amp\`ere equation. Splitting methods are used where the linear terms in space are treated by spectral or semi-Lagrangian methods and the nonlinear diffusion in velocity in the collision operator is treated using a stabilized Runge--Kutta--Chebyshev (RKC) integrator, a powerful alternative to implicit schemes. The new schemes are shown to exactly preserve mass and momentum. The conservation of total energy is obtained using a suitable approximation of the electric field. An H-theorem is proved in the semi-discrete case, while the entropy decay is illustrated numerically for the fully discretized problem. Numerical experiments, including the investigation of the Landau damping phenomenon and the bump-on-tail instability, are performed to illustrate the efficiency of the new schemes.
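The operator-splitting structure can be illustrated, in much simplified form, on a 1D convection-diffusion toy problem: the transport half-steps are solved exactly in Fourier space, and the diffusion step is advanced with explicit sub-steps standing in for a stabilized RKC integrator. This is a hedged sketch of the splitting idea only; the equation, discretization and parameters below are not those of the Vlasov--Fokker--Planck scheme in the paper.

```python
import numpy as np

# Toy Strang splitting for u_t + a u_x = D u_xx on a periodic domain:
# exact spectral transport half-steps, explicit Euler sub-steps for diffusion
# (a simple stand-in for a stabilized Runge--Kutta--Chebyshev integrator).
N, L, a, D, dt, steps = 256, 2 * np.pi, 1.0, 0.05, 1e-2, 200
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
u = np.exp(-10.0 * (x - np.pi) ** 2)
mass0 = np.sum(u) * dx

def advect(u, tau):
    """Exact solution of u_t + a u_x = 0 over time tau (Fourier space)."""
    return np.real(np.fft.ifft(np.fft.fft(u) * np.exp(-1j * a * k * tau)))

def diffuse(u, tau, substeps=20):
    """Explicit sub-stepping for u_t = D u_xx (stand-in for an RKC solver)."""
    h = tau / substeps
    for _ in range(substeps):
        u = u + h * D * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return u

for _ in range(steps):                       # Strang: A(dt/2) B(dt) A(dt/2)
    u = advect(u, dt / 2)
    u = diffuse(u, dt)
    u = advect(u, dt / 2)

print("mass drift:", abs(np.sum(u) * dx - mass0))   # conserved up to roundoff
```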
The $\alpha$-$\eta$-$\kappa$-$\mu$ model is one of the most general and flexible channel models, providing an excellent fit to experimental data from diverse propagation environments. The existing statistical results on the envelope of the $\alpha$-$\eta$-$\kappa$-$\mu$ model contain an infinite series involving a regularized hypergeometric function and a generalized Laguerre polynomial, prohibiting its widespread application in the performance analysis of wireless systems. In this paper, we employ a novel approach to derive the density and distribution functions of the envelope of the $\alpha$-$\eta$-$\kappa$-$\mu$ fading channel without an infinite series approximation. The derived statistical results are presented using a single Fox's H-function for tractable performance analysis and efficient numerical computations, especially for high-frequency mmWave and terahertz wireless transmissions. To gain insight into the distribution of the channel envelope, we develop an asymptotic analysis using a simpler Gamma function that converges to the exact results within a reasonable range of channel parameters. To further substantiate the proposed analysis, we present the exact outage probability and average bit-error-rate (BER) performance of a wireless link subjected to $\alpha$-$\eta$-$\kappa$-$\mu$ fading using a single tri-variate Fox's H-function. We obtain the diversity order of the system by analyzing the outage probability at a high signal-to-noise ratio (SNR). We use numerical and simulation analysis to demonstrate the significance of the developed statistical results compared with the existing infinite series representation for the envelope of the $\alpha$-$\eta$-$\kappa$-$\mu$ model.
We consider the problem of online service with delay on a general metric space, first presented by Azar, Ganesh, Ge and Panigrahi (STOC 2017). The best known randomized algorithm for this problem, by Azar and Touitou (FOCS 2019), is $O(\log^2 n)$-competitive, where $n$ is the number of points in the metric space. This is also the best known result for the special case of online service with deadlines, which is of independent interest. In this paper, we present $O(\log n)$-competitive deterministic algorithms for online service with deadlines or delay, improving upon the results from FOCS 2019. Furthermore, our algorithms are the first deterministic algorithms for online service with deadlines or delay which apply to general metric spaces and have sub-polynomial competitiveness.
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order (or topological order) than separate estimations. Moreover, the joint estimator is able to recover non-identifiable DAGs by estimating them together with some identifiable DAGs. Lastly, our analysis also shows the consistency of union support recovery of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer coincides with the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
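To illustrate the role of the $l_1/l_2$ penalty in enforcing a shared union support, the sketch below applies its proximal operator (group soft-thresholding) to the coefficients of each candidate edge stacked across the $K$ tasks, so weak edges are zeroed out jointly. The dimensions, data and threshold are illustrative assumptions; this shows the regularizer mechanics only, not the paper's joint MLE or its iterative algorithm.

```python
import numpy as np

# l1/l2 (group) penalty mechanics behind joint sparse support recovery:
# coefficients of the same edge across K tasks form a group, and the proximal
# operator of lam * sum_g ||B_g||_2 shrinks whole groups to zero, so the K
# estimated graphs share a union support.
def group_soft_threshold(B, lam):
    """B has shape (num_groups, K); each row is one edge shared across K tasks."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * B

rng = np.random.default_rng(0)
B = rng.standard_normal((10, 3))        # 10 candidate edges, K = 3 tasks
B[5:] *= 0.05                           # weak edges: shrunk to zero jointly
B_sparse = group_soft_threshold(B, lam=0.5)
print("surviving edges (shared support):", np.flatnonzero(np.any(B_sparse != 0, axis=1)))
```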