In this paper we propose a novel and general approach to designing semi-implicit methods for the simulation of fluid-structure interaction problems in a fully Eulerian framework. To present the new method properly, we focus on the two-dimensional version of the general model developed to describe full membrane elasticity. The approach consists in treating the elastic source term by writing an evolution equation for the structure stress tensor, even when it is nonlinear. We then show that a semi-implicit discretization of this equation adds consistent dissipation terms, which depend on the local deformation and stiffness of the membrane, to the linear system of the Navier-Stokes equations. Because the discretization is linearly implicit, the approach requires no iterative solvers and can easily be applied to any Eulerian framework for fluid-structure interaction. Its stability properties are studied by performing a von Neumann analysis on a simplified one-dimensional model and proving that, thanks to the additional dissipation, the discretized coupled system is unconditionally stable. Several numerical experiments on two-dimensional problems compare the new method to the original explicit scheme and study the effect of structure stiffness and mesh refinement on the membrane dynamics. The newly designed scheme relaxes the time-step restrictions that affect the explicit method and substantially reduces the computational cost, especially when very stiff membranes are considered.
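To make the stability gain concrete, here is a minimal sketch on a toy stiff oscillator, not the paper's Navier-Stokes system, of how folding a stiff elastic term into a small linear solve removes the explicit time-step restriction; all names and parameters are illustrative assumptions.

```python
import numpy as np

# Toy illustration (not the paper's scheme): a stiff oscillator u'' + k*u = 0
# standing in for membrane elasticity. Forward Euler is unstable here for any
# dt, while the linearly implicit update, which folds the elastic term into a
# small linear solve, introduces dissipation and stays bounded for any dt.

k, dt, n_steps = 1.0e6, 1.0e-2, 100   # dt far above any explicit stability limit

def explicit_step(u, v):
    # forward Euler: elastic force evaluated at the old state
    return u + dt * v, v - dt * k * u

def semi_implicit_step(u, v):
    # elastic force evaluated at the new state; the "linear system"
    # reduces to a 2x2 solve for this scalar model
    A = np.array([[1.0, -dt], [dt * k, 1.0]])
    return np.linalg.solve(A, np.array([u, v]))

u_e, v_e = 1.0, 0.0
u_i, v_i = 1.0, 0.0
for _ in range(n_steps):
    u_e, v_e = explicit_step(u_e, v_e)
    u_i, v_i = semi_implicit_step(u_i, v_i)

print(f"explicit:      |u| = {abs(u_e):.3e}")   # blows up
print(f"semi-implicit: |u| = {abs(u_i):.3e}")   # damped, bounded
```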
This paper is dedicated to the numerical solution of a fourth-order singular perturbation problem using the interior penalty virtual element method (IPVEM) proposed in [42]. The study introduces modifications to the jumps and averages in the penalty term and presents an automated, mesh-dependent selection of the penalty parameter. Drawing inspiration from the modified Morley finite element methods, we leverage the conforming interpolation technique to handle the lower-order part of the bilinear form. Through our analysis, we establish optimal convergence in the energy norm and give a rigorous proof of uniform convergence with respect to the perturbation parameter in the lowest-order case.
In this paper, we propose a new test for the equality of two population covariance matrices in the ultra-high-dimensional setting, in which the dimension is much larger than both sample sizes. Our methodology relies on a data-splitting procedure and a comparison of a set of well-selected eigenvalues of the sample covariance matrices computed on the split data sets. Compared to existing methods, our methodology is adaptive in the sense that (i) it does not require specific assumptions (e.g., comparability or balance) on the two sample sizes; (ii) it does not need quantitative or structural assumptions on the population covariance matrices; and (iii) it requires neither parametric distributional assumptions nor detailed knowledge of the moments of the two populations. Theoretically, we establish the asymptotic distributions of the statistics used in our method and conduct a power analysis, showing that our method is powerful under very weak alternatives. Extensive numerical simulations show that our method significantly outperforms existing ones in terms of both size and power. Analyses of two real data sets further demonstrate the usefulness and superior performance of the proposed methodology. An $\texttt{R}$ package, $\texttt{UHDtst}$, is developed for easy implementation.
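The data-splitting idea can be sketched as follows; this is a hedged illustration of the general recipe, not the $\texttt{UHDtst}$ implementation, and the number of eigenvalues compared is an arbitrary choice.

```python
import numpy as np

def top_eigvals(X, k):
    # eigenvalues of the sample covariance via the n x n Gram matrix,
    # which is cheap when the dimension p is much larger than n
    Xc = X - X.mean(axis=0)
    vals = np.linalg.eigvalsh(Xc @ Xc.T / X.shape[0])[::-1]
    return vals[:k]

def split_eig_statistic(X, Y, k=5, seed=0):
    # split each sample in half: one half could be used to select which
    # eigenvalues are informative, the other to compare the two spectra
    rng = np.random.default_rng(seed)
    X2 = rng.permutation(X)[X.shape[0] // 2:]
    Y2 = rng.permutation(Y)[Y.shape[0] // 2:]
    return np.abs(top_eigvals(X2, k) - top_eigvals(Y2, k)).max()

# illustrative sizes: p = 2000 far exceeds n1 = 60 and n2 = 40
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 2000))
Y = rng.standard_normal((40, 2000))
print(split_eig_statistic(X, Y))
```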
In this paper we present a mathematical and numerical analysis of an eigenvalue problem associated with the elasticity-Stokes equations in two and three dimensions. The two problems are related through the Herrmann pressure. Employing the Babu\v{s}ka--Brezzi theory, we prove that the resulting continuous and discrete variational formulations are well-posed. In particular, the finite element method is based on general inf-sup stable pairs for the Stokes system, such as the Taylor--Hood finite elements. Using a general approximation theory for compact operators, we obtain optimal-order error estimates for the eigenfunctions and a double order for the eigenvalues. Under mild assumptions, these estimates hold with constants independent of the Lam\'e coefficient $\lambda$. In addition, we carry out a reliability and efficiency analysis of a residual-based a posteriori error estimator for the spectral problem. We report a series of numerical tests assessing the performance of the method and its behavior in the nearly incompressible case of elasticity.
In this paper we consider the numerical solution of fractional differential equations. In particular, we study a step-by-step graded-mesh procedure based on an expansion of the vector field in orthonormal Jacobi polynomials. Under mild hypotheses, the proposed procedure achieves spectral accuracy. A few numerical examples are reported to confirm the theoretical findings.
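As one concrete ingredient, a graded mesh of the standard power-law form $t_j = T(j/N)^r$ can be generated in a few lines; the grading exponent below is an illustrative assumption, not the value analysed in the paper.

```python
import numpy as np

# Minimal sketch of a graded mesh t_j = T*(j/N)^r, which clusters nodes near
# t = 0 where solutions of fractional differential equations are typically
# non-smooth. The grading exponent r is a tunable, illustrative choice.

def graded_mesh(T, N, r):
    j = np.arange(N + 1)
    return T * (j / N) ** r

print(graded_mesh(T=1.0, N=8, r=3.0))
# r = 1 gives a uniform mesh; larger r concentrates points near the origin.
```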
High-order tensor methods for solving both convex and nonconvex optimization problems have generated significant research interest, leading to algorithms with optimal global rates of convergence and local rates that are faster than Newton's method. On each iteration, these methods require the unconstrained local minimization of a (potentially nonconvex) multivariate polynomial of degree higher than two, constructed using third-order (or higher) derivative information and regularised by an appropriate power of the norm of the step. Developing efficient techniques for solving such subproblems is an ongoing topic of research, and this paper addresses the case of the third-order tensor subproblem. We propose the CQR algorithmic framework for minimizing a nonconvex Cubic multivariate polynomial with Quartic Regularisation, by minimizing a sequence of local quadratic models that incorporate simple cubic and quartic terms. The role of the cubic term is to crudely approximate local tensor information, while the quartic term controls model regularisation and progress. We provide necessary and sufficient optimality conditions that fully characterise the global minimizers of these cubic-quartic models. We then turn these conditions into secular equations that can be solved using nonlinear eigenvalue techniques. Using our optimality characterisations, we show that a CQR algorithmic variant has the optimal-order evaluation complexity of $\mathcal{O}(\epsilon^{-3/2})$ when applied to minimizing our quartically-regularised cubic subproblem, a bound that can be improved further in special cases. We propose practical CQR variants that use local tensor information to construct the local cubic-quartic models. We test these variants numerically and observe them to be competitive with ARC and other subproblem solvers on typical instances, and even superior on ill-conditioned subproblems with special structure.
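For concreteness, the quartically-regularised cubic subproblem has the form $m(s) = f + g^\top s + \tfrac{1}{2} s^\top H s + \tfrac{1}{6} T[s]^3 + \tfrac{\sigma}{4}\|s\|^4$; the sketch below merely evaluates such a model on random data and does not reproduce CQR's minimisation strategy.

```python
import numpy as np
from itertools import permutations

def cubic_quartic_model(f, g, H, T, sigma, s):
    # m(s) = f + g.s + (1/2) s'Hs + (1/6) T[s]^3 + (sigma/4) ||s||^4
    cubic = np.einsum('ijk,i,j,k->', T, s, s, s)
    return (f + g @ s + 0.5 * s @ H @ s
            + cubic / 6.0 + 0.25 * sigma * np.dot(s, s) ** 2)

rng = np.random.default_rng(0)
n = 3
g = rng.standard_normal(n)
H = rng.standard_normal((n, n)); H = (H + H.T) / 2            # symmetric Hessian
T = rng.standard_normal((n, n, n))
T = sum(T.transpose(p) for p in permutations(range(3))) / 6   # symmetric 3-tensor
print(cubic_quartic_model(0.0, g, H, T, sigma=1.0, s=rng.standard_normal(n)))
```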
The aim of this paper is to give a systematic mathematical interpretation of the diffusion problem on which Graph Neural Network (GNN) models are based. The starting point of our approach is a dissipative functional leading to dynamical equations that allow us to study the symmetries of the model. We discuss the conserved charges and provide a charge-preserving numerical method for solving the dynamical equations. In any dynamical system, and in particular in GRAph Neural Diffusion (GRAND), knowing the values of the charges and how they are conserved along the evolution flow provides a way to understand how GNNs and related networks work and learn.
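A toy example of the charge-preservation idea, under the simplifying assumption of plain graph-Laplacian diffusion rather than the paper's dissipative functional:

```python
import numpy as np

# Toy illustration (not the paper's scheme): graph diffusion
#   dx/dt = -L x,  with L = D - W a symmetric graph Laplacian.
# Because the rows and columns of L sum to zero, even explicit Euler
# preserves the total "charge" sum(x) exactly at every step, a discrete
# analogue of the conserved quantities discussed for GRAND-type models.

W = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])            # small undirected graph
L = np.diag(W.sum(axis=1)) - W

x = np.array([1.0, 0.0, 0.0])
dt = 0.1
for _ in range(50):
    x = x - dt * L @ x
print(x, "total charge:", x.sum())      # sum stays 1.0 up to round-off
```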
We study hypothesis testing under communication constraints, where each sample is quantized before being revealed to a statistician. Without communication constraints, it is well known that the sample complexity of simple binary hypothesis testing is characterized by the Hellinger distance between the distributions. We show that the sample complexity of simple binary hypothesis testing under communication constraints is at most a logarithmic factor larger than in the unconstrained setting, and that this bound is tight. We develop a polynomial-time algorithm that achieves the aforementioned sample complexity. Our framework extends to robust hypothesis testing, where the distributions are corrupted in the total variation distance. Our proofs rely on a new reverse data processing inequality and a reverse Markov inequality, which may be of independent interest. For simple $M$-ary hypothesis testing, the sample complexity in the absence of communication constraints has a logarithmic dependence on $M$. We show that communication constraints can cause an exponential blow-up, leading to $\Omega(M)$ sample complexity even for adaptive algorithms.
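To recall the unconstrained baseline, the sample complexity of simple binary testing scales as the inverse squared Hellinger distance; a minimal sketch with illustrative discrete distributions:

```python
import numpy as np

# Hellinger distance between two discrete distributions; the sample
# complexity of unconstrained simple binary testing scales as 1/H^2(p, q).
# The distributions below are illustrative placeholders.

def hellinger(p, q):
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
H = hellinger(p, q)
print(f"H = {H:.4f}, sample complexity ~ 1/H^2 = {1 / H**2:.1f}")
```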
In this article, we study nonparametric inference for a covariate-adjusted regression function. This parameter captures the average association between a continuous exposure and an outcome after adjusting for other covariates. In particular, under certain causal conditions, this parameter corresponds to the average outcome had all units been assigned to a specific exposure level, known as the causal dose-response curve. We propose a debiased local linear estimator of the covariate-adjusted regression function, and demonstrate that our estimator converges pointwise to a mean-zero normal limit distribution. We use this result to construct asymptotically valid confidence intervals for function values and differences thereof. In addition, we use approximation results for the distribution of the supremum of an empirical process to construct asymptotically valid uniform confidence bands. Our methods do not require undersmoothing and permit the use of data-adaptive estimators of nuisance functions, and our estimator attains the optimal rate of convergence for a twice-differentiable function. We illustrate the practical performance of our estimator using numerical studies and an analysis of the effect of air pollution exposure on cardiovascular mortality.
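As background, a plain (not debiased, not covariate-adjusted) local linear estimator can be sketched as below; the kernel and bandwidth are illustrative choices.

```python
import numpy as np

# Minimal sketch of a standard local linear estimator of m(x) = E[Y | X = x];
# the paper's debiased, covariate-adjusted construction is not reproduced.

def local_linear(x0, X, Y, h):
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)           # Gaussian kernel weights
    B = np.column_stack([np.ones_like(X), X - x0])   # intercept + local slope
    WB = w[:, None] * B
    beta = np.linalg.solve(B.T @ WB, WB.T @ Y)       # weighted least squares
    return beta[0]                                   # intercept = fit at x0

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 500)
Y = np.sin(2 * np.pi * X) + 0.3 * rng.standard_normal(500)
print(local_linear(0.5, X, Y, h=0.1), "vs true value", np.sin(2 * np.pi * 0.5))
```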
In this article, we create an artificial neural network (ANN) that combines classical and modern techniques for determining the key length of a Vigen\`{e}re cipher. We provide experimental evidence supporting the accuracy of our model over a wide range of parameters. We also discuss the creation and features of this ANN, along with a comparative analysis between our ANN, the index of coincidence, and the twist-based algorithms.
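For reference, the index-of-coincidence baseline against which the ANN is compared can be sketched as follows; the helper names and the target IC value (about 0.066 for English) are illustrative, not the paper's exact algorithm.

```python
# Classical index-of-coincidence baseline (a sketch): for each candidate key
# length k, split the ciphertext into k cosets and pick the k whose cosets
# look most like monoalphabetic English text.

def index_of_coincidence(text):
    n = len(text)
    if n < 2:
        return 0.0
    counts = [text.count(c) for c in set(text)]
    return sum(f * (f - 1) for f in counts) / (n * (n - 1))

def guess_key_length(cipher, max_len=20):
    letters = [c for c in cipher.upper() if c.isalpha()]
    scores = {}
    for k in range(1, max_len + 1):
        cosets = [letters[i::k] for i in range(k)]
        scores[k] = sum(index_of_coincidence(c) for c in cosets) / k
    # note: multiples of the true key length also score well, so in practice
    # one prefers the smallest k whose average IC is near English's ~0.066
    return min(scores, key=lambda k: abs(scores[k] - 0.066))
```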
In this paper, we introduce a new and simple approach to developing and establishing the convergence of splitting methods for a large class of stochastic differential equations (SDEs), including additive, diagonal and scalar noise types. The central idea is to view a splitting method as replacing the driving signal of an SDE, namely Brownian motion and time, with a piecewise linear path, which yields a sequence of ODEs that can be discretised to produce a numerical scheme. This way of understanding splitting methods is inspired by, but does not use, rough path theory. We show that when the driving piecewise linear path matches certain iterated stochastic integrals of Brownian motion, a high-order splitting method can be obtained. We propose a general proof methodology for establishing the strong convergence of these approximations, akin to the general framework of Milstein and Tretyakov: once local error estimates are obtained for the splitting method, a global rate of convergence follows. This approach can then be readily applied in future research on SDE splitting methods. By incorporating recently developed approximations for iterated integrals of Brownian motion into these piecewise linear paths, we propose several high-order splitting methods for SDEs satisfying a certain commutativity condition. In our experiments, which include the Cox-Ingersoll-Ross model and additive-noise SDEs (noisy anharmonic oscillator, stochastic FitzHugh-Nagumo model, underdamped Langevin dynamics), the new splitting methods exhibit convergence rates of $O(h^{3/2})$ and outperform schemes previously proposed in the literature.
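As a point of reference, the simplest member of this family is a Strang-type drift/diffusion splitting driven only by the Brownian increments; a hedged sketch for the Ornstein-Uhlenbeck SDE follows (the paper's high-order schemes additionally match iterated integrals of Brownian motion, which this sketch does not).

```python
import numpy as np

# Strang-type drift/diffusion splitting for the additive-noise SDE
#   dX = -theta*X dt + sigma dW   (Ornstein-Uhlenbeck),
# where the drift ODE is solved exactly over each half step and the
# Brownian increment is applied in between. Parameters are illustrative.

theta, sigma, h, n = 1.0, 0.5, 0.01, 1000
rng = np.random.default_rng(0)
x = 1.0
for _ in range(n):
    x *= np.exp(-theta * h / 2)                  # half step of the drift flow
    x += sigma * rng.normal(scale=np.sqrt(h))    # full diffusion step
    x *= np.exp(-theta * h / 2)                  # second half step of the drift
print(x)
```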