We consider a two-phase Darcy flow in a fractured porous medium consisting of a matrix flow coupled with a tangential flow in the fractures, described as a network of planar surfaces. This flow model is also coupled with the mechanical deformation of the matrix, assuming that the fractures are open and filled by the fluids, as well as small deformations and a linear elastic constitutive law. The model is discretized using the gradient discretization method [26], which covers a large class of conforming and nonconforming schemes. This framework allows for a generic convergence analysis of the coupled model using a combination of discrete functional tools. Here, we describe the model together with its numerical discretization, and we prove a convergence result assuming the non-degeneracy of the phase mobilities and that the discrete solutions remain physical in the sense that, roughly speaking, the porosity does not vanish and the fractures remain open. This is, to our knowledge, the first convergence result for this type of model accounting for two-phase flows in fractured porous media and the nonlinear poromechanical coupling. Previous related works consider a linear approximation obtained for a single-phase flow by freezing the fracture conductivity [36, 37]. Numerical tests employing the Two-Point Flux Approximation (TPFA) finite volume scheme for the flows and $\mathbb P_2$ finite elements for the mechanical deformation are also provided to illustrate the behavior of the solution to the model.
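As a point of reference for the TPFA finite volume scheme mentioned above, the following sketch assembles and solves a two-point flux system for single-phase, incompressible Darcy flow on a uniform 1D grid; the grid, permeability field, and boundary pressures are illustrative assumptions, and the sketch does not include the two-phase or poromechanical coupling of the model.

```python
# Minimal TPFA sketch for single-phase Darcy flow on a uniform 1D grid
# (an illustration of the flux scheme only, not the coupled two-phase
#  poromechanical model described in the abstract).
import numpy as np

def tpfa_1d(perm, dx, p_left, p_right):
    """Assemble and solve -d/dx(k dp/dx) = 0 with Dirichlet pressures."""
    n = len(perm)
    # Harmonic average of cell permeabilities gives the face transmissibility.
    k_face = 2.0 * perm[:-1] * perm[1:] / (perm[:-1] + perm[1:])
    T = k_face / dx                      # interior face transmissibilities
    Tb_l = 2.0 * perm[0] / dx            # half-cell transmissibility, left boundary
    Tb_r = 2.0 * perm[-1] / dx           # half-cell transmissibility, right boundary

    A = np.zeros((n, n))
    b = np.zeros(n)
    for f, t in enumerate(T):            # interior face f connects cells f and f+1
        A[f, f] += t; A[f + 1, f + 1] += t
        A[f, f + 1] -= t; A[f + 1, f] -= t
    A[0, 0] += Tb_l;   b[0] += Tb_l * p_left
    A[-1, -1] += Tb_r; b[-1] += Tb_r * p_right
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    perm = np.array([1.0, 1.0, 0.1, 1.0, 1.0])   # illustrative permeability field
    p = tpfa_1d(perm, dx=0.2, p_left=1.0, p_right=0.0)
    print(p)   # the pressure drop concentrates across the low-permeability cell
```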
We present a novel computational modeling framework to numerically investigate fluid-structure interaction in viscous fluids using the phase field embedding method. Each rigid body or elastic structure immersed in the incompressible viscous fluid matrix, collectively referred to as the particle in this paper, is identified by a volume-preserving phase field. The motion of the particle is driven by the fluid velocity in the matrix for passive particles, or combined with its self-propelling velocity for active particles. The excluded volume effect between a pair of particles, or between a particle and the boundary, is modeled by a repulsive potential force. The drag exerted on the fluid by a particle is assumed proportional to its velocity. When the particle is rigid, its state is described by a zero velocity gradient tensor within the nonzero phase field that defines its profile, and a constraining stress exists therein. When the particle is elastic, a linear constitutive equation for the elastic stress is provided within the particle domain. A hybrid, thermodynamically consistent hydrodynamic model valid in the entire computational domain is then derived for the fluid-particle ensemble using the generalized Onsager principle, accounting for both rigid and elastic particles. Structure-preserving numerical algorithms are subsequently developed for the thermodynamically consistent model. Numerical tests in 2D and 3D space are carried out to verify the rate of convergence, and numerical examples are given to demonstrate the usefulness of the computational framework for simulating fluid-structure interactions for passive as well as self-propelling active particles in a viscous fluid matrix.
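For illustration only, the sketch below constructs the kind of diffuse phase-field profile used to label a single circular particle in a 2D domain and checks its volume; the tanh profile, interface width `eps`, and grid resolution are assumptions made here, not the paper's specific choices.

```python
# Sketch of a volume-labelling phase field for one circular "particle"
# embedded in a 2D fluid domain (illustrative profile and parameters only).
import numpy as np

nx = ny = 128
L = 1.0
x = np.linspace(0.0, L, nx)
y = np.linspace(0.0, L, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

xc, yc, r0, eps = 0.5, 0.5, 0.2, 0.02      # centre, radius, interface width
dist = np.sqrt((X - xc) ** 2 + (Y - yc) ** 2)

# phi ~ 1 inside the particle, ~ 0 in the fluid, with a smooth tanh transition.
phi = 0.5 * (1.0 - np.tanh((dist - r0) / (np.sqrt(2.0) * eps)))

# The integral of phi approximates the particle volume (area in 2D),
# which a volume-preserving evolution is expected to conserve.
dx = dy = L / (nx - 1)
volume = phi.sum() * dx * dy
print(volume, np.pi * r0 ** 2)             # close agreement for eps << r0
```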
Stochastic gradient descent (SGD) is one of the most popular algorithms in modern machine learning. The noise encountered in these applications is different from that in many theoretical analyses of stochastic gradient algorithms. In this article, we discuss some of the common properties of energy landscapes and stochastic noise encountered in machine learning problems, and how they affect SGD-based optimization. In particular, we show that the learning rate in SGD with machine learning noise can be chosen to be small, but uniformly positive for all times, if the energy landscape resembles that of overparametrized deep learning problems. If the objective function satisfies a Lojasiewicz inequality, SGD converges to the global minimum exponentially fast; even for functions which may have local minima, we establish almost sure convergence to the global minimum at an exponential rate from any finite-energy initialization. The assumptions that we make in this result concern the behavior where the objective function is either small or large and the nature of the gradient noise, but the energy landscape is fairly unconstrained on the domain where the objective function takes intermediate values.
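The following toy example (not the paper's setting) runs SGD with a small, fixed learning rate on an overparametrized, interpolating least-squares problem, where the single-sample gradient noise vanishes at the global minimum and the loss decays roughly exponentially; the dimensions and step size are illustrative.

```python
# Toy illustration: SGD with a fixed small learning rate on an
# overparametrized, interpolating least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200                     # fewer samples than parameters
A = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
b = A @ w_star                     # consistent system: the global minimum has zero loss

def loss(w):
    return 0.5 * np.mean((A @ w - b) ** 2)

w = np.zeros(d)
lr = 0.0025                        # small, but fixed for all iterations
for t in range(20000):
    i = rng.integers(n)            # single-sample stochastic gradient
    g = (A[i] @ w - b[i]) * A[i]
    w -= lr * g
    if t % 5000 == 0:
        print(t, loss(w))          # loss decays roughly exponentially
```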
We present a framework for speeding up the time it takes to sample from discrete distributions $\mu$ defined over subsets of size $k$ of a ground set of $n$ elements, in the regime $k\ll n$. We show that, given estimates of the marginals $\mathbb{P}_{S\sim \mu}[i\in S]$, the task of sampling from $\mu$ can be reduced to sampling from distributions $\nu$ supported on size-$k$ subsets of a ground set of only $n^{1-\alpha}\cdot \operatorname{poly}(k)$ elements. Here, $1/\alpha\in [1, k]$ is the parameter of entropic independence for $\mu$. Further, the sparsified distributions $\nu$ are obtained by applying a sparse (mostly $0$) external field to $\mu$, an operation that often retains algorithmic tractability of sampling from $\nu$. This phenomenon, which we dub domain sparsification, allows us to pay a one-time cost of estimating the marginals of $\mu$, and in return reduce the amortized cost needed to produce many samples from the distribution $\mu$, as is often needed in downstream tasks such as counting and inference. For a wide range of distributions where $\alpha=\Omega(1)$, our result reduces the domain size, and as a corollary the cost per sample, by a $\operatorname{poly}(n)$ factor. Examples include monomers in a monomer-dimer system, non-symmetric determinantal point processes, and partition-constrained Strongly Rayleigh measures. Our work significantly extends the reach of the prior work of Anari and Derezi\'nski, who obtained domain sparsification for distributions with a log-concave generating polynomial (corresponding to $\alpha=1$). As a corollary of our new analysis techniques, we also obtain a less stringent requirement on the accuracy of marginal estimates, even for the case of log-concave polynomials; roughly speaking, we show that a constant-factor approximation is enough for domain sparsification, improving over the $O(1/k)$ relative error established in prior work.
In this paper, an energy-based discontinuous Galerkin method for dynamic Euler-Bernoulli beam equations is developed. The resulting method is energy-dissipating or energy-conserving depending on a simple, mesh-independent choice of numerical fluxes. By introducing a velocity field, the original problem is transformed into a system that is first order in time. In our formulation, the discontinuous Galerkin approximations for the original displacement field and the auxiliary velocity field are not restricted to be in the same space. In particular, a given accuracy can be achieved with the fewest degrees of freedom when the polynomial degree of the approximation space for the velocity field is two orders lower than that for the displacement field. In addition, we establish error estimates in an energy norm and demonstrate the corresponding optimal convergence in numerical experiments.
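As a concrete instance of this reformulation, written here for a uniform beam (the constant coefficients and energy-compatible boundary conditions are an illustrative assumption), the first-order system and its energy read
$$ u_t = v, \qquad \rho A\, v_t = -\big(EI\, u_{xx}\big)_{xx}, \qquad \mathcal E(t) = \frac12 \int_0^L \big( \rho A\, v^2 + EI\, u_{xx}^2 \big)\, dx, $$
and integrating by parts gives $\frac{d\mathcal E}{dt} = 0$ whenever the boundary terms vanish; the choice of numerical fluxes then determines whether the discrete counterpart of $\mathcal E$ is conserved or dissipated.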
Goodness-of-fit (GoF) testing is ubiquitous in statistics, with direct ties to model selection, confidence interval construction, conditional independence testing, and multiple testing, just to name a few applications. While testing the GoF of a simple (point) null hypothesis provides an analyst great flexibility in the choice of test statistic while still ensuring validity, most GoF tests for composite null hypotheses are far more constrained, as the test statistic must have a tractable distribution over the entire null model space. A notable exception is co-sufficient sampling (CSS): resampling the data conditional on a sufficient statistic for the null model guarantees valid GoF testing using any test statistic the analyst chooses. But CSS testing requires the null model to have a compact (in an information-theoretic sense) sufficient statistic, which only holds for a very limited class of models; even for a null model as simple as logistic regression, CSS testing is powerless. In this paper, we leverage the concept of approximate sufficiency to generalize CSS testing to essentially any parametric model with an asymptotically-efficient estimator; we call our extension "approximate CSS" (aCSS) testing. We quantify the finite-sample Type I error inflation of aCSS testing and show that it is vanishing under standard maximum likelihood asymptotics, for any choice of test statistic. We apply our proposed procedure both theoretically and in simulation to a number of models of interest to demonstrate its finite-sample Type I error and power.
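To fix ideas, here is a minimal sketch of plain CSS (not the aCSS extension developed in the paper) for an i.i.d. Gaussian null: conditionally on the sufficient statistic (sample mean and variance), copies of the data can be drawn by standardizing fresh normal draws to match the observed mean and variance; the skewness test statistic used below is an arbitrary illustrative choice.

```python
# Minimal co-sufficient sampling (CSS) sketch for an i.i.d. Gaussian null:
# resample datasets sharing the observed sufficient statistic (mean, variance),
# then compare an arbitrary test statistic against the resampled copies.
import numpy as np

rng = np.random.default_rng(1)

def css_copies(x, n_copies):
    n = len(x)
    xbar, s = x.mean(), x.std(ddof=1)
    copies = []
    for _ in range(n_copies):
        z = rng.standard_normal(n)
        z = (z - z.mean()) / z.std(ddof=1)     # standardize the fresh draw
        copies.append(xbar + s * z)            # match the observed mean and variance
    return copies

def abs_skewness(x):                           # illustrative test statistic
    return abs(np.mean(((x - x.mean()) / x.std()) ** 3))

x = rng.standard_normal(100)                   # data generated under the null
t_obs = abs_skewness(x)
t_null = np.array([abs_skewness(c) for c in css_copies(x, 999)])
pval = (1 + np.sum(t_null >= t_obs)) / (1 + len(t_null))
print(pval)                                    # valid p-value for any statistic
```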
Variational phase-field methods have proven powerful for modeling complex crack propagation without a priori knowledge of the crack path or ad hoc criteria. However, phase-field models suffer from an energy functional that is non-linear and non-convex, while requiring a very fine mesh to capture the damage gradient. This implies a high computational cost, limiting concrete engineering applications of the method. In this work, we propose an efficient and robust fully monolithic solver for phase-field fracture using a modified Newton method with inertia correction and an energy line-search. To illustrate the gains in efficiency obtained with our approach, we compare it to two popular methods for phase-field fracture, namely the alternating minimization and the quasi-monolithic schemes. To facilitate the evaluation of the time-step-dependent quasi-monolithic scheme, we couple the latter with an extrapolation correction loop controlled by a damage-based criterion. Finally, we show through four benchmark tests that the proposed modified Newton method is straightforward, robust, and leads to identical solutions, while reducing computation time by factors of up to 12 and 6 compared to the alternating minimization and quasi-monolithic schemes, respectively.
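The sketch below illustrates the two ingredients named above on a generic smooth non-convex energy (the 2D Rosenbrock function) rather than on a phase-field fracture functional: the Hessian is shifted by a multiple of the identity until it is positive definite (inertia correction), and the step length is chosen by backtracking on the energy itself.

```python
# Generic modified-Newton sketch with inertia correction and an energy
# backtracking line search (illustrated on the 2D Rosenbrock energy,
# not on the phase-field fracture functional of the paper).
import numpy as np

def energy(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                     200 * (x[1] - x[0] ** 2)])

def hess(x):
    return np.array([[2 - 400 * (x[1] - 3 * x[0] ** 2), -400 * x[0]],
                     [-400 * x[0], 200.0]])

def modified_newton(x, tol=1e-10, max_iter=100):
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        # Inertia correction: shift H until its smallest eigenvalue is positive.
        tau = 0.0
        while np.min(np.linalg.eigvalsh(H + tau * np.eye(2))) <= 1e-8:
            tau = max(2 * tau, 1e-4)
        d = np.linalg.solve(H + tau * np.eye(2), -g)
        # Backtracking (Armijo) line search on the energy itself.
        alpha = 1.0
        while energy(x + alpha * d) > energy(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        x = x + alpha * d
    return x

print(modified_newton(np.array([-1.2, 1.0])))   # converges to (1, 1)
```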
In this work, we study an inverse problem of recovering a space-time dependent diffusion coefficient in the subdiffusion model from the distributed observation, where the mathematical model involves a Djrbashian-Caputo fractional derivative of order $\alpha\in(0,1)$ in time. The main technical challenges of both theoretical and numerical analysis lie in the limited smoothing properties due to the fractional differential operator and the high degree of nonlinearity of the forward map from the unknown diffusion coefficient to the distributed observation. Theoretically, we establish two conditional stability results using a novel test function, which leads to a stability bound in $L^2(0,T;L^2(\Omega))$ under a suitable positivity condition. The positivity condition is verified for a large class of problem data. Numerically, we develop a rigorous procedure for the recovery of the diffusion coefficient based on a regularized least-squares formulation, which is then discretized by the standard Galerkin method with continuous piecewise linear elements in space and backward Euler convolution quadrature in time. We provide a complete error analysis of the fully discrete formulation, by combining several new error estimates for the direct problem (optimal in terms of data regularity), a discrete version of fractional maximal $L^p$ regularity, and a nonstandard energy argument. Under the positivity condition, we obtain a standard $L^2(0,T; L^2(\Omega))$ error estimate consistent with the conditional stability. Further, we illustrate the analysis with some numerical examples.
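As a small illustration of the time discretization named above (applied to a scalar function rather than to the full Galerkin system), the following sketch builds the backward Euler convolution quadrature weights and checks them against the exact Caputo derivative of $t^2$; the order, step size, and test function are illustrative choices.

```python
# Backward Euler convolution quadrature (CQ) for the Caputo derivative of
# order alpha, tested on u(t) = t^2, whose Caputo derivative is
# 2 t^(2-alpha) / Gamma(3-alpha).  (Scalar illustration only.)
import numpy as np
from math import gamma

def cq_weights(alpha, N):
    # Coefficients of (1 - zeta)^alpha via the recursion
    # w_0 = 1, w_j = w_{j-1} * (j - 1 - alpha) / j.
    w = np.empty(N + 1)
    w[0] = 1.0
    for j in range(1, N + 1):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w

alpha, T, N = 0.5, 1.0, 200
tau = T / N
t = np.linspace(0.0, T, N + 1)
u = t ** 2
w = cq_weights(alpha, N)

# Discrete Caputo derivative at t_n: tau^{-alpha} * sum_j w_j (u^{n-j} - u^0).
dcap = np.array([np.sum(w[: n + 1] * (u[n::-1] - u[0])) / tau ** alpha
                 for n in range(1, N + 1)])
exact = 2.0 * t[1:] ** (2 - alpha) / gamma(3 - alpha)
print(np.max(np.abs(dcap - exact)))   # first-order accurate as tau -> 0
```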
We study a finite-element based space-time discretisation for the 2D stochastic Navier-Stokes equations in a bounded domain supplemented with no-slip boundary conditions. We prove optimal convergence rates in the energy norm with respect to convergence in probability, that is, convergence of order (almost) 1/2 in time and 1 in space. This was previously only known in the space-periodic case, where higher-order energy estimates for any given (deterministic) time are available. In contrast, in the Dirichlet case such estimates are only known up to a (possibly large) stopping time. We overcome this problem by introducing an approach based on discrete stopping times. This replaces the localised estimates (with respect to the sample space) from earlier contributions.
In micro-fluidics, not only does capillarity dominate, but thermal fluctuations also become important. On the level of the lubrication approximation, this leads to a quasi-linear fourth-order parabolic equation for the film height $h$ driven by space-time white noise. The gradient flow structure of its deterministic counterpart, the thin-film equation, which encodes the balance between driving capillary and limiting viscous forces, provides the guidance for the thermodynamically consistent introduction of fluctuations. We follow this route on the level of a spatial discretization of the gradient flow structure. Starting from an energetically conformal finite-element (FE) discretization, we point out that the numerical mobility function introduced by Gr\"un and Rumpf can be interpreted as a discretization of the metric tensor in the sense of a mixed FE method with lumping. While this discretization was devised in order to preserve the so-called entropy estimate, we use it to show that the resulting high-dimensional stochastic differential equation (SDE) preserves pathwise and pointwise strict positivity, at least in the case of the physically relevant mobility function arising from the no-slip boundary condition. As a consequence, this discretization gives rise to a consistent invariant measure, namely a discretization of the Brownian excursion (up to the volume constraint), and thus features entropic repulsion. The price to pay over more naive discretizations is that when writing the SDE in It\^o's form, which is the basis for the Euler-Maruyama time discretization, a correction term appears. To conclude, we perform various numerical experiments to compare the behavior of our discretization to that of the more naive finite difference discretization of the equation.
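For readers unfamiliar with the time discretization referred to above, here is a generic Euler-Maruyama step for a scalar It\^o SDE (an Ornstein-Uhlenbeck process); this is only a reminder of the scheme, not the high-dimensional thin-film SDE with its additional It\^o correction term.

```python
# Generic Euler-Maruyama step for an Ito SDE dX = a(X) dt + b(X) dW,
# shown on a scalar Ornstein-Uhlenbeck process (illustration only; the
# thin-film discretization in the text is a high-dimensional SDE with an
# additional Ito correction term in its drift).
import numpy as np

rng = np.random.default_rng(2)

def euler_maruyama(a, b, x0, T, N):
    dt = T / N
    x = np.empty(N + 1)
    x[0] = x0
    for n in range(N):
        dW = np.sqrt(dt) * rng.standard_normal()
        x[n + 1] = x[n] + a(x[n]) * dt + b(x[n]) * dW
    return x

theta, sigma = 1.0, 0.5
path = euler_maruyama(lambda x: -theta * x, lambda x: sigma, 1.0, T=50.0, N=5000)
# The long-time variance should be roughly sigma^2 / (2 theta) = 0.125.
print(path[1000:].var())
```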
This article proposes an open-source implementation of a phase-field model for brittle fracture using Gridap, a recently developed finite element toolbox in Julia. The present work exploits the advantages of both the phase-field model and the Gridap toolbox for simulating fracture in brittle materials. On the one hand, the phase-field model, which is a continuum approach that uses a diffuse representation of sharp cracks, enables the proposed implementation to overcome well-known drawbacks of the discrete approach for predicting complex crack paths, such as the need for re-meshing, enrichment of finite element shape functions, and explicit tracking of the crack surfaces. On the other hand, the use of Gridap makes the proposed implementation very compact and user-friendly, requires low memory usage, and provides a high degree of flexibility to the users in defining weak forms of partial differential equations. A test on a notched beam under symmetric three-point bending and a set of tests on a notched beam with three holes under asymmetric three-point bending are considered to demonstrate how the proposed Gridap-based phase-field Julia code can be used to simulate fracture in brittle materials.