We consider a class of second-order Strang splitting methods for Allen-Cahn equations with polynomial or logarithmic nonlinearities. For the polynomial case, both the linear and the nonlinear propagators are computed explicitly. We show that this type of Strang splitting scheme is unconditionally stable for any time step. Moreover, we establish strict energy dissipation for a judiciously modified energy which coincides with the classical energy up to $\mathcal O(\tau)$, where $\tau$ is the time step. For the logarithmic potential, since the continuous-time nonlinear propagator no longer admits an explicit analytic treatment, we employ a second-order-in-time, two-stage implicit Runge--Kutta (RK) nonlinear propagator together with an efficient Newton iterative solver. We prove a maximum principle which ensures phase separation and establish an energy dissipation law under mild restrictions on the time step. These appear to be the first rigorous results on the energy dissipation of Strang-type splitting methods for Allen-Cahn equations.
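The polynomial case admits a concrete sketch: for $u_t = \kappa u_{xx} + u - u^3$ with periodic boundary conditions, the linear sub-flow is applied exactly in Fourier space, and the nonlinear sub-flow $u_t = u - u^3$ has the closed-form solution $u(t) = u_0/\sqrt{e^{-2t} + u_0^2(1-e^{-2t})}$. The following Python sketch is our own illustration (not the authors' code); the diffusion coefficient, grid, and function names are placeholder choices:

```python
import numpy as np

def strang_allen_cahn(u, tau, nsteps, kappa=0.01, L=2*np.pi):
    """Strang splitting for u_t = kappa*u_xx + u - u^3 with periodic BCs.
    Both sub-flows are solved exactly: FFT diagonalizes the heat part,
    and the nonlinear part has a closed-form logistic-type solution."""
    n = u.size
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    half_heat = np.exp(-kappa*k**2*tau/2)      # exact linear propagator over tau/2

    def nonlinear_flow(u, t):
        # exact solution of u' = u - u^3 starting from u
        e = np.exp(-2.0*t)
        return u/np.sqrt(e + u**2*(1.0 - e))

    for _ in range(nsteps):
        u = np.fft.ifft(half_heat*np.fft.fft(u)).real   # half step, linear
        u = nonlinear_flow(u, tau)                      # full step, nonlinear
        u = np.fft.ifft(half_heat*np.fft.fft(u)).real   # half step, linear
    return u
```

Since each sub-flow obeys a maximum principle, the composed scheme keeps initial data in $[-1,1]$ within $[-1,1]$, consistent with the phase-separation behavior discussed above.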
We consider a Johnson--N\'ed\'elec FEM-BEM coupling, which is a direct and non-symmetric coupling of finite and boundary element methods, in order to solve interface problems for the magnetostatic Maxwell's equations with the magnetic vector potential ansatz. In the FEM domain, equations may be non-linear, whereas they are exclusively linear in the BEM part, to guarantee the existence of a fundamental solution. First, the weak problem is formulated in quotient spaces to avoid resorting to a saddle-point problem. Second, we establish well-posedness of the arising problem in this setting, using the framework of Lipschitz and strongly monotone operators as well as a stability result for a special type of non-linearity which is typically considered in magnetostatic applications. Then, the discretization is performed in the isogeometric context, i.e., the same type of basis functions used for geometry design are taken as ansatz functions for the discrete setting. In particular, NURBS are employed for the geometry, and B-splines, which can be understood as a special type of NURBS, for analysis purposes. In this context, we derive a priori estimates w.r.t. h-refinement and point out an interesting behavior of the BEM, namely an improvement of the convergence rates when a functional of the solution is evaluated in the exterior BEM domain. This improvement may lead to a doubling of the convergence rate under certain assumptions. Finally, we end the paper with a numerical example illustrating the theoretical results, along with a conclusion and an outlook.
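The B-spline ansatz functions mentioned above are built from the standard Cox--de Boor recursion. As a minimal illustration (our own sketch, independent of the paper's isogeometric discretization; the function name is hypothetical), the recursion can be written as:

```python
def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion: value at x of the i-th B-spline basis
    function of degree p on the knot vector `knots`."""
    if p == 0:
        # degree 0: indicator of the half-open knot interval
        return 1.0 if knots[i] <= x < knots[i+1] else 0.0
    left = right = 0.0
    d1 = knots[i+p] - knots[i]
    if d1 > 0:
        left = (x - knots[i])/d1 * bspline_basis(i, p-1, knots, x)
    d2 = knots[i+p+1] - knots[i+1]
    if d2 > 0:
        right = (knots[i+p+1] - x)/d2 * bspline_basis(i+1, p-1, knots, x)
    return left + right
```

On an open knot vector, the resulting basis is non-negative and forms a partition of unity in the interior of the parameter domain, which is the property isogeometric discretizations rely on.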
In this paper, we develop a framework to construct energy-preserving methods for multi-component Hamiltonian systems, combining the exponential integrator and the partitioned averaged vector field method. This leads to numerical schemes that enjoy both long-time stability and excellent behavior for highly oscillatory or stiff problems. Compared to the existing energy-preserving exponential integrators (EP-EI), our proposed methods are much more efficient in practical implementation: they can at least be computed subsystem by subsystem, instead of handling a nonlinearly coupled system all at once. Moreover, for most cases, such as the Klein-Gordon-Schr\"{o}dinger equations and the Klein-Gordon-Zakharov equations considered in this paper, the computational cost can be further reduced. Specifically, one part of the derived schemes is totally explicit, and the other is linearly implicit. In addition, we present a rigorous proof that the schemes conserve the original energy of Hamiltonian systems, in which an alternative technique is utilized so that no additional assumptions are required, in contrast to the proof strategies used for the existing EP-EI. Numerical experiments are provided to demonstrate the significant advantages in accuracy, computational efficiency, and the ability to capture highly oscillatory solutions.
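The averaged vector field (AVF) building block used above replaces $f(y)$ by its average along the chord from $y_n$ to $y_{n+1}$, which makes the update energy-conserving for any Hamiltonian system $y' = S\nabla H(y)$ with skew $S$. As a minimal single-component sketch (our own illustration, not the paper's partitioned scheme), for $H(q,p) = p^2/2 + q^4/4$ the averaged integrals can be evaluated in closed form:

```python
def avf_step(q, p, tau, tol=1e-14, maxit=100):
    """One AVF step for the oscillator H(q,p) = p**2/2 + q**4/4,
    i.e. q' = p, p' = -q**3. The chord averages are closed-form:
    int_0^1 ((1-s)a + s*b)**3 ds = (a**3 + a**2*b + a*b**2 + b**3)/4.
    The implicit update is resolved by fixed-point iteration."""
    Q, P = q, p
    for _ in range(maxit):
        Qn = q + tau*(p + P)/2
        Pn = p - tau*(q**3 + q**2*Q + q*Q**2 + Q**3)/4
        done = abs(Qn - Q) + abs(Pn - P) < tol
        Q, P = Qn, Pn
        if done:
            break
    return Q, P

def energy(q, p):
    return p**2/2 + q**4/4
```

At the fixed point, the discrete energy change telescopes to zero exactly, so the only drift comes from the iteration tolerance.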
Since their introduction in Abadie and Gardeazabal (2003), Synthetic Control (SC) methods have quickly become one of the leading methods for estimating causal effects in observational studies in settings with panel data. Formal discussions often motivate SC methods by the assumption that the potential outcomes were generated by a factor model. Here we study SC methods from a design-based perspective, assuming a model for the selection of the treated unit(s) and period(s). We show that the standard SC estimator is generally biased under random assignment. We propose a Modified Unbiased Synthetic Control (MUSC) estimator that guarantees unbiasedness under random assignment and derive its exact, randomization-based, finite-sample variance. We also propose an unbiased estimator for this variance. We document in settings with real data that under random assignment, SC-type estimators can have root mean-squared errors that are substantially lower than those of other common estimators. We show that such an improvement is weakly guaranteed if the treated period is similar to the other periods, for example, if the treated period was randomly selected.
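For reference, the standard SC estimator discussed above fits non-negative weights summing to one that best reproduce the treated unit's pre-treatment outcomes from the donor pool. A minimal sketch of this weight fit (our own illustration of the vanilla, covariate-free version, not the MUSC estimator; function names are ours) via projected gradient descent:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css)/np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - 1)/(rho + 1)
    return np.maximum(v - theta, 0)

def sc_weights(Y0, y1, iters=2000, lr=None):
    """Vanilla synthetic-control weights: argmin ||y1 - Y0 @ w||^2 over
    the simplex. Y0: (T_pre, J) donor outcomes, y1: (T_pre,) treated."""
    T, J = Y0.shape
    w = np.full(J, 1.0/J)
    if lr is None:
        lr = 1.0/np.linalg.norm(Y0, 2)**2   # step 1/L for the smooth part
    for _ in range(iters):
        grad = Y0.T @ (Y0 @ w - y1)
        w = project_simplex(w - lr*grad)
    return w
```

The paper's point is precisely that this weighting, although it interpolates the pre-period well, is generally biased under random assignment of the treated unit.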
We prove new optimality results for adaptive mesh refinement algorithms for non-symmetric, indefinite, and time-dependent problems by proposing a generalization of quasi-orthogonality which follows directly from the inf-sup stability of the underlying problem. This completely removes a central technical difficulty in modern proofs of optimal convergence of adaptive mesh refinement algorithms and leads to simple optimality proofs for the Taylor-Hood discretization of the stationary Stokes problem, a finite-element/boundary-element discretization of an unbounded transmission problem, and an adaptive time-stepping scheme for parabolic equations. The main technical tools are new stability bounds for the $LU$-factorization of matrices, together with a recently established connection between quasi-orthogonality and matrix factorization.
We consider the numerical approximation of the inertial Landau-Lifshitz-Gilbert equation (iLLG), which describes the dynamics of the magnetization in ferromagnetic materials at subpicosecond time scales. We propose and analyze two fully discrete numerical schemes: The first method is based on a reformulation of the problem as a linear constrained variational formulation for the linear velocity. The second method exploits a reformulation of the problem as a first-order system in time for the magnetization and the angular momentum. Both schemes are implicit, based on first-order finite elements, and generate approximations satisfying the unit-length constraint of iLLG at the vertices of the underlying mesh. For both methods, we prove convergence of the approximations towards a weak solution of the problem. Numerical experiments validate the theoretical results and show the applicability of the methods for the simulation of ultrafast magnetic processes.
Fully implicit Runge-Kutta (IRK) methods have many desirable accuracy and stability properties as time integration schemes, but high-order IRK methods are not commonly used in practice with large-scale numerical PDEs because of the difficulty of solving the stage equations. This paper introduces a theoretical and algorithmic framework for solving the nonlinear equations that arise from IRK methods (and discontinuous Galerkin discretizations in time) applied to nonlinear numerical PDEs, including PDEs with algebraic constraints. Several new linearizations of the nonlinear IRK equations are developed, offering faster and more robust convergence than the often-considered simplified Newton method, as well as an effective preconditioner for the true Jacobian if exact Newton iterations are desired. Inverting these linearizations requires solving a set of block $2\times2$ systems. Under quite general assumptions, it is proven that the preconditioned $2\times2$ operator's condition number is bounded by a small constant close to one, independent of the spatial discretization, spatial mesh, and time step, and with only weak dependence on the number of stages or integration accuracy. Moreover, the new method is built using the same preconditioners needed for backward Euler-type time stepping schemes, and so can be readily added to existing codes. The new methods are applied to several challenging fluid flow problems, including the compressible Euler and Navier--Stokes equations, and the vorticity-streamfunction formulation of the incompressible Euler and Navier--Stokes equations. Up to 10th-order accuracy is demonstrated using Gauss IRK, while in all cases 4th-order Gauss IRK requires roughly half the number of preconditioner applications as required by standard SDIRK methods.
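For context, the simplified Newton baseline that the paper improves on freezes the Jacobian $J = f'(y_n)$ and solves the stacked stage system with the matrix $I - \tau(A \otimes J)$. A small dense sketch (our own illustration on a toy ODE, not the paper's preconditioned solver) for the 2-stage Gauss method:

```python
import numpy as np

# Butcher matrix of the 2-stage Gauss method (order 4)
A = np.array([[1/4, 1/4 - np.sqrt(3)/6],
              [1/4 + np.sqrt(3)/6, 1/4]])
b = np.array([1/2, 1/2])

def irk_step(f, jac, y, tau, tol=1e-12, maxit=50):
    """One 2-stage Gauss IRK step for y' = f(y). The stage equations
    K_i = f(y + tau*sum_j A[i,j]*K_j) are solved by simplified Newton:
    the Jacobian is frozen at y_n, giving M = I - tau*kron(A, J)."""
    s, n = 2, y.size
    K = np.tile(f(y), s)                       # same initial guess for both stages
    M = np.eye(s*n) - tau*np.kron(A, jac(y))
    for _ in range(maxit):
        Y = np.tile(y, s) + tau*np.kron(A, np.eye(n)) @ K
        F = np.concatenate([f(Y[i*n:(i+1)*n]) for i in range(s)]) - K
        dK = np.linalg.solve(M, F)
        K = K + dK
        if np.linalg.norm(dK) < tol:
            break
    return y + tau*np.kron(b, np.eye(n)) @ K
```

For large-scale PDEs this dense $sn \times sn$ solve is exactly what becomes impractical, which motivates the block $2\times2$ preconditioners developed in the paper.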
In two-phase image segmentation, convex relaxation has allowed global minimisers to be computed for a variety of data fitting terms. Many efficient approaches exist to compute a solution quickly. However, we consider whether the nature of the data fitting term in this formulation allows for reasonable assumptions to be made about the solution that can improve the computational performance further. In particular, we employ a well-known dual formulation of this problem and solve the corresponding equations in a restricted domain. We present experimental results that explore the dependence of the solution on this restriction and quantify improvements in the computational performance. This approach can be extended to analogous methods simply and could provide an efficient alternative for problems of this type.
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
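The local smoothing underlying DRS replaces a non-smooth $f$ by the Gaussian smoothing $f_\gamma(x) = \mathbb E[f(x + \gamma Z)]$, $Z \sim \mathcal N(0, I_d)$, whose gradient admits a simple Monte-Carlo estimator. A single-machine sketch of this ingredient (our own illustration, not the distributed DRS algorithm itself; names and step sizes are placeholder choices):

```python
import numpy as np

def smoothed_grad(f, x, gamma, m, rng):
    """Monte-Carlo gradient of the Gaussian smoothing
    f_gamma(x) = E[f(x + gamma*Z)], using m samples of Z ~ N(0, I_d)."""
    Z = rng.normal(size=(m, x.size))
    vals = np.array([f(x + gamma*z) for z in Z])
    return (Z*((vals - f(x))/gamma)[:, None]).mean(axis=0)

def minimize_smoothed(f, x0, gamma=0.1, m=200, lr=0.05, steps=300, seed=0):
    """Plain gradient descent on the smoothed surrogate of a
    (possibly non-smooth) convex function f."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr*smoothed_grad(f, x, gamma, m, rng)
    return x
```

The smoothing makes $f_\gamma$ differentiable with a dimension-dependent Lipschitz gradient, which is where the $d^{1/4}$ factor in the convergence guarantee originates.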
Methods that align distributions by minimizing an adversarial distance between them have recently achieved impressive results. However, these approaches are difficult to optimize with gradient descent and they often do not converge well without careful hyperparameter tuning and proper initialization. We investigate whether turning the adversarial min-max problem into an optimization problem by replacing the maximization part with its dual improves the quality of the resulting alignment and explore its connections to Maximum Mean Discrepancy. Our empirical results suggest that using the dual formulation for the restricted family of linear discriminators results in a more stable convergence to a desirable solution when compared with the performance of a primal min-max GAN-like objective and an MMD objective under the same restrictions. We test our hypothesis on the problem of aligning two synthetic point clouds on a plane and on a real-image domain adaptation problem on digits. In both cases, the dual formulation yields an iterative procedure that gives more stable and monotonic improvement over time.
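The MMD baseline compared against above has a standard unbiased sample estimator. As a self-contained sketch (our own illustration with a Gaussian RBF kernel; the bandwidth choice is a placeholder):

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of squared MMD between samples X (m,d) and
    Y (n,d) under a Gaussian RBF kernel with bandwidth sigma."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-d2/(2*sigma**2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    # drop diagonal terms in the within-sample sums for unbiasedness
    return ((Kxx.sum() - np.trace(Kxx))/(m*(m - 1))
            + (Kyy.sum() - np.trace(Kyy))/(n*(n - 1))
            - 2*Kxy.mean())
```

Because the estimator is unbiased, it can be slightly negative when the two samples come from the same distribution; it is near zero under matching distributions and clearly positive under a mean shift.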
Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling and prediction, among others. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and, then, normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes, and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
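The elementary building block behind nested constructions like these is a single discrete random probability measure; for the Dirichlet process this can be drawn by stick breaking. A minimal sketch of that building block (our own illustration, not the latent nested process of the paper; the truncation rule is a practical choice):

```python
import numpy as np

def dp_stick_breaking(alpha, base_sampler, eps=1e-10, rng=None):
    """Draw a (truncated) realization of a Dirichlet process DP(alpha, G0)
    via stick breaking: w_k = v_k * prod_{j<k}(1 - v_j) with
    v_k ~ Beta(1, alpha), and atoms drawn i.i.d. from the base measure G0.
    Truncates once the remaining stick mass falls below eps."""
    rng = rng or np.random.default_rng()
    weights, atoms, stick = [], [], 1.0
    while stick > eps:
        v = rng.beta(1.0, alpha)
        weights.append(stick*v)
        atoms.append(base_sampler(rng))
        stick *= 1.0 - v
    return np.array(weights), np.array(atoms)
```

Nested processes then use a draw like this at the level of distributions themselves, which is where ties across samples can force the degeneracy to full exchangeability that the paper's latent nested processes are designed to avoid.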