The design of numerical approximations of the Cahn-Hilliard model that preserve the maximum principle is a challenging problem, even more so when additional transport terms are considered. In this work we present a new upwind discontinuous Galerkin scheme for the convective Cahn-Hilliard model with degenerate mobility which preserves the maximum principle and prevents non-physical spurious oscillations. Furthermore, we present several numerical experiments in agreement with the theoretical results. Finally, numerical comparisons with other schemes found in the literature are also carried out.
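The abstract does not spell out the scheme itself, but the upwind idea it relies on can be illustrated in a few lines. The following minimal sketch (function name and NumPy usage are illustrative, not the paper's) shows a generic first-order upwind flux for a convective term $vu$: the trace of $u$ is taken from the side the flow comes from, which is the mechanism that suppresses spurious oscillations near sharp fronts.

```python
import numpy as np

def upwind_flux(u_left, u_right, velocity):
    # Generic first-order upwind flux for a convective term v*u:
    # take the trace of u from the upwind side of the interface.
    return np.where(velocity >= 0.0, velocity * u_left, velocity * u_right)
```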
A trigonometrically approximated maximum likelihood estimator for $\alpha$-stable laws is proposed. The estimator solves an approximated likelihood equation, obtained by projecting the true score function onto the space spanned by trigonometric functions. The projected score is expressed only in terms of the real and imaginary parts of the characteristic function and their derivatives, so that the target estimating equation can be constructed explicitly. We study the asymptotic properties of the proposed estimator and show consistency and asymptotic normality. Furthermore, as the number of trigonometric functions increases, the estimator converges to the exact maximum likelihood estimator, in the sense that the two share the same asymptotic law. Simulation studies show that our estimator outperforms other moment-type estimators and that its standard deviation almost achieves the Cram\'er--Rao lower bound. We apply our method to the estimation problem for $\alpha$-stable Ornstein--Uhlenbeck processes in a high-frequency setting. The obtained results illustrate the theory of asymptotic mixed normality.
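For intuition, the sketch below shows how estimating equations can be built from the real and imaginary parts of the characteristic function evaluated on a trigonometric grid. This is plain characteristic-function matching for a symmetric $\alpha$-stable law, not the paper's projected-score estimator; the grid `t_grid`, the symmetric parametrization, and all names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def stable_cf(t, alpha, sigma):
    # Characteristic function of a symmetric alpha-stable law (beta = 0, mu = 0).
    return np.exp(-np.abs(sigma * t) ** alpha)

def ecf(t, x):
    # Empirical characteristic function: sample average of exp(i * t * X_j).
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

def fit_stable(x, t_grid=np.linspace(0.1, 2.0, 20)):
    # Match real and imaginary parts of the empirical CF to the model CF.
    emp = ecf(t_grid, x)
    def resid(theta):
        alpha, sigma = theta
        diff = emp - stable_cf(t_grid, alpha, sigma)
        return np.concatenate([diff.real, diff.imag])
    fit = least_squares(resid, x0=[1.5, 1.0],
                        bounds=([0.1, 1e-6], [2.0, np.inf]))
    return fit.x  # (alpha_hat, sigma_hat)
```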
In this paper, we propose a new covering technique localized to the trajectories of SGD. This localization provides an algorithm-specific complexity, measured by the covering number, which can have dimension-independent cardinality, in contrast to standard uniform covering arguments that result in exponential dependence on the dimension. Based on this localized construction, we show that if the objective function is a finite perturbation of a piecewise strongly convex and smooth function with $P$ pieces, i.e., non-convex and non-smooth in general, the generalization error can be upper bounded by $O(\sqrt{(\log n\log(nP))/n})$, where $n$ is the number of data samples. In particular, this rate is independent of the dimension and requires neither early stopping nor a decaying step size. Finally, we employ these results in various contexts and derive generalization bounds for multi-index linear models, multi-class support vector machines, and $K$-means clustering for both hard and soft label setups, improving the known state-of-the-art rates.
In this paper, we consider a drift-diffusion charge transport model for perovskite solar cells, where electrons and holes may diffuse linearly (Boltzmann approximation) or nonlinearly (e.g., due to Fermi-Dirac statistics). To incorporate volume exclusion effects, we rely on the Fermi-Dirac integral of order $-1$ when modeling moving anionic vacancies within the perovskite layer, which is sandwiched between electron and hole transport layers. After non-dimensionalization, we first prove a continuous entropy-dissipation inequality for the model. Then, we formulate a corresponding two-point flux finite volume scheme on Voronoi meshes and show an analogous discrete entropy-dissipation inequality. This inequality helps us to show the existence of a discrete solution of the nonlinear discrete system with the help of a corollary of Brouwer's fixed point theorem and the minimization of a convex functional. Finally, we verify our theoretically proven properties numerically, simulate a realistic device setup, and show exponential decay in time of the $L^2$ error as well as of a physically and analytically meaningful relative entropy.
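The paper's two-point flux is not detailed in the abstract; as a point of reference, the classical Scharfetter-Gummel two-point flux for the linear-diffusion (Boltzmann) case can be sketched as below. Signs and scalings follow one common convention (a flux approximating $-\partial_x n + n\,\partial_x\psi$ between neighboring cells $K$ and $L$); schemes for nonlinear statistics generalize fluxes of this type.

```python
import numpy as np

def bernoulli(x):
    # B(x) = x / (exp(x) - 1), with the removable singularity at x = 0
    # handled explicitly to keep the evaluation stable.
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-10
    safe = np.where(small, 1.0, x)  # dummy value where the Taylor branch is used
    return np.where(small, 1.0 - x / 2.0, safe / np.expm1(safe))

def scharfetter_gummel_flux(n_K, n_L, dpsi, h):
    # Two-point flux between cells K and L with potential difference
    # dpsi = psi_L - psi_K and cell-center distance h.
    return (bernoulli(-dpsi) * n_K - bernoulli(dpsi) * n_L) / h
```

Setting `dpsi = 0` recovers centered diffusion, `(n_K - n_L) / h`, while large `|dpsi|` recovers pure upwinded drift.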
This work considers Gaussian process interpolation with a periodized version of the Mat{\'e}rn covariance function (Stein, 1999, Section 6.7) with Fourier coefficients $\phi(\alpha^2 + j^2)^{-\nu - 1/2}$. Convergence rates are studied for the joint maximum likelihood estimation of $\nu$ and $\phi$ when the data is sampled according to the model. The mean integrated squared error is also analyzed with fixed and estimated parameters, showing that maximum likelihood estimation yields asymptotically the same error as if the ground truth were known. Finally, the case where the observed function is a ``deterministic'' element of a continuous Sobolev space is also considered, suggesting that bounding assumptions on some parameters can lead to different estimates.
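Assuming the standard convention that the coefficient of frequency $j$ multiplies $e^{2\pi i j x}$ on the unit interval (a convention the abstract does not pin down), the periodized covariance can be evaluated by truncating its cosine series, as in this sketch:

```python
import numpy as np

def periodized_matern_cov(x, phi, alpha, nu, J=500):
    # Periodic covariance with Fourier coefficients
    # phi * (alpha^2 + j^2)^(-nu - 1/2), truncated at |j| <= J.
    # x is the lag on the unit circle.
    j = np.arange(1, J + 1)
    c0 = phi * alpha ** (-2 * nu - 1)
    cj = phi * (alpha ** 2 + j ** 2) ** (-nu - 0.5)
    cosines = np.cos(2 * np.pi * np.outer(np.atleast_1d(x), j))
    return c0 + 2.0 * cosines @ cj
```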
We investigate the error of the Euler scheme when the right-hand side function of the underlying ODE satisfies nonstandard assumptions, such as a local one-sided Lipschitz condition and local H\"older continuity. Moreover, we consider two cases regarding the available information about the right-hand side function: exact and noisy. An optimality analysis of the Euler scheme is also provided. Lastly, we present the results of several numerical experiments.
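As a reference point for the two information settings, here is a minimal explicit Euler integrator in which each right-hand-side evaluation can be perturbed; the additive Gaussian noise model is an illustrative assumption, not necessarily the paper's.

```python
import numpy as np

def euler(f, y0, t0, T, n, noise=0.0, rng=None):
    # Explicit Euler for y' = f(t, y) on [t0, T] with n steps. If noise > 0,
    # every right-hand-side evaluation is perturbed by additive Gaussian
    # noise of that magnitude (the inexact-information setting).
    rng = rng or np.random.default_rng()
    h = (T - t0) / n
    t, y = t0, np.asarray(y0, dtype=float)
    traj = [y.copy()]
    for _ in range(n):
        fy = np.asarray(f(t, y), dtype=float)
        if noise > 0.0:
            fy = fy + noise * rng.standard_normal(fy.shape)
        y = y + h * fy
        t += h
        traj.append(y.copy())
    return np.array(traj)
```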
Much of the literature on the optimal design of bandit algorithms is based on minimizing expected regret. It is well known that designs that are optimal over certain exponential families can achieve expected regret that grows logarithmically in the number of arm plays, at a rate governed by the Lai-Robbins lower bound. In this paper, we show that when one uses such optimized designs, the regret distribution of the associated algorithms necessarily has a very heavy tail, specifically, that of a truncated Cauchy distribution. Furthermore, for $p>1$, the $p$-th moment of the regret distribution grows much faster than poly-logarithmically, in particular as a power of the total number of arm plays. We show that optimized UCB bandit designs are also fragile in an additional sense: when the problem is even slightly mis-specified, the regret can grow much faster than the conventional theory suggests. Our arguments are based on standard change-of-measure ideas and indicate that the most likely way for regret to become larger than expected is for the optimal arm to return below-average rewards in the first few arm plays, causing the algorithm to believe that the arm is sub-optimal. To alleviate the fragility issues exposed, we show that UCB algorithms can be modified so as to ensure a desired degree of robustness to mis-specification. In doing so, we also provide a sharp trade-off between the amount of UCB exploration and the tail exponent of the resulting regret distribution.
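For concreteness, a minimal UCB sketch for Bernoulli arms is given below; the exploration constant `c` is the kind of knob whose size trades expected regret against the heaviness of the regret tail discussed above (the exact modification proposed in the paper may differ).

```python
import numpy as np

def ucb(arm_means, horizon, c=2.0, rng=None):
    # UCB for Bernoulli arms; larger c means more exploration.
    rng = rng or np.random.default_rng()
    K = len(arm_means)
    counts = np.zeros(K)
    means = np.zeros(K)
    for t in range(1, horizon + 1):
        if t <= K:
            a = t - 1  # play each arm once to initialize
        else:
            a = int(np.argmax(means + np.sqrt(c * np.log(t) / counts)))
        reward = float(rng.random() < arm_means[a])
        counts[a] += 1
        means[a] += (reward - means[a]) / counts[a]  # running average
    return counts, means
```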
We propose and analyze a class of particle methods for the Vlasov equation with a strong external magnetic field in a torus configuration. In this regime, the time step can be subject to stability constraints related to the smallness of the Larmor radius. To avoid this limitation, our approach is based on higher-order semi-implicit numerical schemes already validated on dissipative systems [3] and for magnetic fields pointing in a fixed direction [9, 10, 12], and it hinges on asymptotic insights gained in [11] at the continuous level. Thus, when the magnitude of the external magnetic field is large, the scheme provides a consistent approximation of the guiding-center system, taking into account the curvature and variation of the magnetic field. Finally, we give a theoretical proof of consistency and perform several numerical experiments that provide a solid validation of the method and its underlying concepts.
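The stiffness mentioned above is visible already at the level of the particle characteristics. In the sketch below (a toy explicit push, not the paper's semi-implicit scheme), the $1/\varepsilon$ factor in the Lorentz term forces $\Delta t = O(\varepsilon)$ for stability, which is exactly the constraint the semi-implicit schemes are designed to avoid.

```python
import numpy as np

def push_explicit(x, v, E, B, dt, eps):
    # One explicit Euler step of the scaled characteristics
    #   x' = v,   v' = E(x) + (v x B(x)) / eps.
    # Stability requires dt = O(eps) when eps is small.
    a = E(x) + np.cross(v, B(x)) / eps
    return x + dt * v, v + dt * a
```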
To estimate causal effects, analysts performing observational studies in health settings rely on several strategies to mitigate bias due to confounding by indication. There are two broad classes of approaches for this purpose: adjustment for confounders and the use of instrumental variables (IVs). Because such approaches rest largely on untestable assumptions, analysts must operate under the expectation that these methods will work imperfectly. In this tutorial, we formalize a set of general principles and heuristics for estimating causal effects with the two approaches when their assumptions are potentially violated. This crucially requires reframing observational studies as exercises in hypothesizing potential scenarios in which the estimates from one approach are less inconsistent than those from the other. While most of our discussion of methodology centers on the linear setting, we touch upon complexities in non-linear settings and flexible procedures such as targeted minimum loss-based estimation (TMLE) and double machine learning (DML). To demonstrate the application of our principles, we investigate the use of donepezil off-label for mild cognitive impairment (MCI). We compare and contrast results from confounder and IV methods, traditional and flexible, within our analysis and with those of a similar observational study and a clinical trial.
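In the linear setting discussed above, the IV strategy is commonly implemented as two-stage least squares; a self-contained sketch (variable names are illustrative) is:

```python
import numpy as np

def two_stage_least_squares(y, d, z, x=None):
    # 2SLS estimate of the effect of treatment d on outcome y using
    # instrument z, optionally adjusting linearly for covariates x.
    n = len(y)
    ones = np.ones((n, 1))
    X = ones if x is None else np.column_stack([ones, x])
    Z = np.column_stack([X, np.asarray(z).reshape(n, -1)])
    # First stage: project the treatment onto instruments + covariates.
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
    # Second stage: regress the outcome on the fitted treatment + covariates.
    W = np.column_stack([X, d_hat])
    beta = np.linalg.lstsq(W, y, rcond=None)[0]
    return beta[-1]  # coefficient on the treatment
```

The confounder-based counterpart would simply regress `y` on `d` and the measured confounders; contrasting the two estimates is the kind of cross-approach comparison the tutorial formalizes.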
Parallel-in-time methods for partial differential equations (PDEs) have been the subject of intense development over recent decades, particularly for diffusion-dominated problems. It has been widely reported in the literature, however, that many of these methods perform quite poorly for advection-dominated problems. Here we analyze the particular iterative parallel-in-time algorithm of multigrid reduction-in-time (MGRIT) for discretizations of constant-wave-speed linear advection problems. We focus on common method-of-lines discretizations that employ upwind finite differences in space and Runge-Kutta methods in time. Using a convergence framework we developed in previous work, we prove for a subclass of these discretizations that, if using the standard approach of rediscretizing the fine-grid problem on the coarse grid, robust MGRIT convergence with respect to CFL number and coarsening factor is not possible. This poor convergence and non-robustness is caused, at least in part, by an inadequate coarse-grid correction for smooth Fourier modes known as characteristic components. We propose an alternative coarse-grid operator that provides a better correction of these modes. This operator is related to previous work and uses a semi-Lagrangian discretization combined with an implicitly treated truncation error correction. Theory and numerical experiments show that the coarse-grid operator yields fast MGRIT convergence for many of the method-of-lines discretizations considered, including both implicit and explicit discretizations of high order.
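The fine-grid discretizations in question include, in their simplest form, first-order upwind differencing with explicit time stepping; one time step of that scheme is sketched below as a reference (the paper also treats higher-order Runge-Kutta and implicit variants).

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    # One explicit first-order upwind step for u_t + a * u_x = 0 on a
    # periodic grid; stable for CFL = |a| * dt / dx <= 1.
    cfl = a * dt / dx
    if a >= 0:
        return u - cfl * (u - np.roll(u, 1))   # backward difference
    return u - cfl * (np.roll(u, -1) - u)      # forward difference
```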
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
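The re-weighting itself is a one-liner once per-class counts are known; a minimal sketch of class-balanced weights (normalized, as is common, to sum to the number of classes) is:

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    # Effective number of samples per class: (1 - beta^n) / (1 - beta).
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num  # inverse effective number
    return weights * len(n) / weights.sum()  # normalize to sum to #classes
```

These weights can then multiply the per-class loss terms, e.g., via the `weight` argument of a standard cross-entropy loss.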