We present a complete numerical analysis for a general discretization of a coupled flow-mechanics model in fractured porous media, considering single-phase flows and including frictionless contact at matrix-fracture interfaces, as well as nonlinear poromechanical coupling. Fractures are described as planar surfaces, yielding the so-called mixed- or hybrid-dimensional models. Small displacements and linear elastic behavior are assumed for the matrix. The model accounts for discontinuous fluid pressures at matrix-fracture interfaces in order to cover a wide range of normal fracture conductivities. The numerical analysis is carried out in the Gradient Discretization framework, which encompasses a large family of conforming and nonconforming discretizations. The convergence result also yields, as a by-product, the existence of a weak solution to the continuous model. A 2D numerical experiment is presented to support the theoretical result, employing a Hybrid Finite Volume scheme for the flow and second-order ($\mathbb P_2$) finite elements for the mechanical displacement, coupled with face-wise constant ($\mathbb P_0$) Lagrange multipliers on fractures, representing normal stresses, to discretize the contact conditions.
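For reference, the frictionless contact conditions that the $\mathbb P_0$ Lagrange multipliers discretize can be stated in the standard Signorini complementarity form (with an orientation convention that may differ from the paper's):
\[
[\mathbf u]_n \ge 0, \qquad \lambda \ge 0, \qquad \lambda\, [\mathbf u]_n = 0 \quad \text{on the fracture network},
\]
where $[\mathbf u]_n$ is the jump of the normal displacement across the fracture (measuring its opening) and $\lambda$ is the contact pressure, i.e., minus the normal stress.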
The paper presents a numerical method for simulating flow and mechanics in fractured rock. The governing equations, which couple the effects in the rock mass and in the fractures, are obtained using the discrete fracture-matrix approach. The fracture flow is driven by the cubic law, and the contact conditions prevent self-penetration of the fracture faces. A stable finite element discretization is proposed for the displacement-pressure-flux formulation. The resulting nonlinear algebraic system of equations and inequalities is decoupled using a robust iterative splitting into a linearized flow subproblem and a quadratic programming problem for the mechanical part. The non-penetration conditions are enforced by means of dualization and an optimal quadratic programming algorithm. The capability of the numerical scheme is demonstrated on a benchmark problem for tunnel excavation with hundreds of fractures in 3D. The paper's novelty lies in the combination of three crucial ingredients: (i) application of the discrete fracture-matrix approach to poroelasticity, (ii) a robust iterative splitting of the resulting nonlinear algebraic system that works for real-world 3D problems, and (iii) efficient solution of its mechanical quadratic programming part, with a large number of fractures in mutual contact, by means of our own solvers implemented in an in-house software library.
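For reference, the cubic law mentioned above is the Poiseuille-type relation for laminar flow between parallel plates; in one common form, the tangential volumetric flux in a fracture of aperture $a$ is
\[
\mathbf q = -\frac{a^3}{12\,\mu}\,\nabla_\tau p,
\]
where $\mu$ is the dynamic fluid viscosity and $\nabla_\tau p$ the tangential gradient of the fracture pressure, so that the fracture transmissivity scales with the cube of the aperture.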
We propose a new approach to the autoregressive spatial functional model based on the notion of signature, which represents a function as an infinite series of its iterated integrals. This approach has the advantage of being applicable to a wide range of processes. After providing theoretical guarantees for the proposed model, we show in a simulation study and on a real data set that this new approach offers competitive performance compared to the traditional model.
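For readers unfamiliar with the notion: for a path $X = (X_t)_{t \in [0,T]}$ with values in $\mathbb{R}^d$, the signature is the sequence of iterated integrals
\[
S(X) = \Bigl( 1,\ \int_{0 < t_1 < T} dX_{t_1},\ \int_{0 < t_1 < t_2 < T} dX_{t_1} \otimes dX_{t_2},\ \dots \Bigr),
\]
which is truncated at a finite order in practice; this is the standard definition, and the paper's normalization may differ.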
The modeling of cracks is an important topic in both engineering and mathematics. Since crack propagation is characterized by a free boundary value problem (the geometry of the crack is not known beforehand but is part of the solution), approximations of the underlying sharp-interface problem based on phase-field models are often considered. Focusing on a rate-independent setting, these models are defined by a unidirectional gradient flow of an energy functional. Since this energy functional is non-convex, the evolution of the variables, such as the displacement field and the phase-field variable, might be discontinuous in time, leading to so-called brutal crack growth. For this reason, solution concepts have to be chosen carefully in order to predict discontinuities that are physically reasonable. One such concept is that of Balanced Viscosity solutions (BV solutions), which predicts physically sound energy trajectories that do not jump across energy barriers. The paper deals with a time-adaptive finite element phase-field model for rate-independent fracture that converges to BV solutions. The model is motivated by constraining the pseudo-velocity of the crack tip. The resulting constrained minimization problem is solved by the augmented Lagrangian method. Numerical examples highlight the predictive capabilities of the model and furthermore show the efficiency and robustness of the final algorithm.
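As a generic illustration (with the paper's specific energy and constraint abstracted away), the augmented Lagrangian treatment of an inequality constraint $g(u) \le 0$, such as a bound on the crack-tip pseudo-velocity, replaces the constrained minimization of an energy $E$ by iterations on
\[
L_\rho(u, \lambda) = E(u) + \frac{1}{2\rho}\Bigl( \max\{0,\ \lambda + \rho\, g(u)\}^2 - \lambda^2 \Bigr), \qquad \lambda^{k+1} = \max\{0,\ \lambda^k + \rho\, g(u^{k+1})\},
\]
with penalty parameter $\rho > 0$ and a multiplier update after each inner minimization.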
We consider a statistical model for symmetric matrix factorization with additive Gaussian noise in the high-dimensional regime where the rank $M$ of the signal matrix to infer scales with its size $N$ as $M = o(N^{1/10})$. Allowing for an $N$-dependent rank offers new challenges and requires new methods. Working in the Bayesian-optimal setting, we show that whenever the signal has i.i.d. entries, the limiting mutual information between signal and data is given by a variational formula involving a rank-one replica symmetric potential. In other words, from the information-theoretic perspective, the case of a (slowly) growing rank is the same as when $M = 1$ (namely, the standard spiked Wigner model). The proof is primarily based on a novel multiscale cavity method allowing for growing rank, along with some information-theoretic identities on the worst noise for the Gaussian vector channel. We believe that the cavity method developed here will play a role in the analysis of a broader class of inference and spin models where the degrees of freedom are large arrays instead of vectors.
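For concreteness, the observation model in question can be written (up to normalization conventions, which may differ from the paper's) as
\[
Y = \sqrt{\tfrac{\lambda}{N}}\, X X^{\intercal} + Z, \qquad X \in \mathbb{R}^{N \times M} \ \text{with i.i.d. entries},
\]
where $Z$ is a symmetric matrix with Gaussian entries (a Wigner matrix) and $\lambda$ a signal-to-noise parameter; taking $M = 1$ recovers the standard spiked Wigner model.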
Maximum entropy (Maxent) models are a class of statistical models that use the maximum entropy principle to estimate probability distributions from data. Due to the size of modern data sets, Maxent models need efficient optimization algorithms to scale well for big data applications. State-of-the-art algorithms for Maxent models, however, were not originally designed to handle big data sets; these algorithms either rely on technical devices that may yield unreliable numerical results, scale poorly, or require smoothness assumptions that many practical Maxent models lack. In this paper, we present novel optimization algorithms that overcome the shortcomings of state-of-the-art algorithms for training large-scale, non-smooth Maxent models. Our proposed first-order algorithms leverage the Kullback--Leibler divergence to train large-scale and non-smooth Maxent models efficiently. For Maxent models with a discrete probability distribution over $n$ elements, built from samples each containing $m$ features, estimating the stepsize parameters and performing the iterations of our algorithms require on the order of $O(mn)$ operations and can be trivially parallelized. Moreover, the strong convexity of the Kullback--Leibler divergence with respect to the $\ell_{1}$ norm allows for larger stepsize parameters, thereby speeding up the convergence rate of our algorithms. To illustrate their efficiency, we consider the problem of estimating probabilities of fire occurrences as a function of ecological features in the Western US MTBS-Interagency wildfire data set. Our numerical results show that our algorithms outperform the state of the art by one order of magnitude and yield results that agree with physical models of wildfire occurrence and previous statistical analyses of wildfire drivers.
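To make the role of the Kullback--Leibler divergence concrete, the following minimal Python sketch implements generic entropic mirror descent on the probability simplex, the prototypical KL-prox first-order update; it is illustrative only, not the authors' algorithm, and the feature matrix and objective below are hypothetical:

```python
import numpy as np

def entropic_mirror_descent(grad, p0, eta=0.1, iters=500):
    """Generic entropic mirror descent on the probability simplex.

    Each step solves  argmin_p  eta*<g, p> + KL(p || p_t)  over the
    simplex, whose closed form is the multiplicative update below.
    """
    p = p0.copy()
    for _ in range(iters):
        g = grad(p)               # (sub)gradient of the objective at p
        g = g - g.mean()          # shift for stability; cancels after normalization
        p = p * np.exp(-eta * g)  # closed-form KL-prox (multiplicative) update
        p /= p.sum()              # renormalize onto the simplex
    return p

# Hypothetical usage: match empirical feature means f_bar by minimizing
# 0.5*||F^T p - f_bar||^2 over the simplex; F is an (n, m) feature matrix.
rng = np.random.default_rng(0)
n, m = 1000, 20
F = rng.normal(size=(n, m))
q = rng.dirichlet(np.ones(n))            # hypothetical target distribution
f_bar = F.T @ q                          # empirical feature means to match
grad = lambda p: F @ (F.T @ p - f_bar)   # gradient costs O(mn) per call
p_hat = entropic_mirror_descent(grad, np.full(n, 1.0 / n))
```

Each iteration costs one multiplication by $F$ and one by $F^{\intercal}$, i.e., on the order of $O(mn)$ operations, matching the scaling discussed above.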
First-order methods are often analyzed via their continuous-time models, where their worst-case convergence properties are usually approached via Lyapunov functions. In this work, we provide a systematic and principled approach to finding and verifying Lyapunov functions for classes of ordinary and stochastic differential equations. More precisely, we extend the performance estimation framework, originally proposed by Drori and Teboulle [10], to continuous-time models. We retrieve convergence results comparable to those of discrete methods while using fewer assumptions and convexity inequalities, and we provide new results for stochastic accelerated gradient flows.
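A canonical example of the kind of certificate involved: for the gradient flow $\dot x(t) = -\nabla f(x(t))$ with $f$ convex and minimizer $x_\star$, the function
\[
V(t) = t\,\bigl( f(x(t)) - f(x_\star) \bigr) + \tfrac{1}{2}\,\| x(t) - x_\star \|^2
\]
satisfies $\dot V(t) \le 0$ along trajectories, using only convexity, which yields the worst-case rate $f(x(t)) - f(x_\star) \le \| x(0) - x_\star \|^2 / (2t)$.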
We give a short survey of recent results on sparse-grid linear algorithms for the approximate recovery and integration of functions possessing unweighted or weighted Sobolev mixed smoothness, based on their sampled values at a certain finite set. Some of these results are extended to more general settings.
Consistency of the maximum likelihood estimator for mixtures of elliptically symmetric distributions, viewed as an estimator of its population version, is shown, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. In a situation where $P$ is a mixture of sufficiently well-separated but nonparametric distributions, it is shown that the components of the population version of the estimator correspond to the well-separated components of $P$. This provides some theoretical justification for the use of such estimators for cluster analysis in the case that $P$ has well-separated subpopulations, even if these subpopulations differ from what the mixture model assumes.
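For reference, the fitted model is a finite mixture with elliptically symmetric components, i.e., a density of the form
\[
f_\theta(x) = \sum_{j=1}^{k} \pi_j\, |\Sigma_j|^{-1/2}\, g\bigl( (x - \mu_j)^{\intercal} \Sigma_j^{-1} (x - \mu_j) \bigr),
\]
with a radial density generator $g$, location parameters $\mu_j$, scatter matrices $\Sigma_j$, and mixing proportions $\pi_j$; the paper's exact parametrization may differ.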
This work considers the nodal finite element approximation of peridynamics, in which the nodal displacements satisfy the peridynamics equation at each mesh node. For the nonlinear bond-based peridynamics model, it is shown that, under suitable assumptions on the exact solution, the discretized solution associated with the central-in-time and nodal finite element discretization converges to the exact solution in the $L^2$ norm at the rate $C_1 \Delta t + C_2 h^2/\epsilon^2$. Here, $\Delta t$, $h$, and $\epsilon$ are the time step size, the mesh size, and the size of the horizon (the nonlocal length scale), respectively. The constants $C_1$ and $C_2$ are independent of $h$ and $\Delta t$ and depend on norms of the exact solution. Several numerical examples involving a pre-crack, a void, and a notch are considered, and the efficacy of the proposed nodal finite element discretization is analyzed.
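For context, one common form of the bond-based peridynamics equation of motion discretized here is
\[
\rho(x)\, \ddot u(x,t) = \int_{B_\epsilon(x)} f\bigl( u(x',t) - u(x,t),\ x' - x \bigr)\, dx' + b(x,t),
\]
where $B_\epsilon(x)$ is the horizon of radius $\epsilon$, $f$ is the (nonlinear) pairwise bond force density, and $b$ is an external body force; the paper's precise constitutive choice of $f$ may differ.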
Mendelian randomization is an instrumental variable method that utilizes genetic information to investigate the causal effect of a modifiable exposure on an outcome. In most cases, the exposure changes over time. Understanding the time-varying causal effect of the exposure can yield detailed insights into mechanistic effects and the potential impact of public health interventions. Recently, a growing number of Mendelian randomization studies have attempted to explore time-varying causal effects. However, the proposed approaches oversimplify temporal information and rely on overly restrictive structural assumptions, limiting their reliability in addressing time-varying causal problems. This paper proposes a novel approach to estimating time-varying effects through continuous-time modelling, combining functional principal component analysis with weak-instrument-robust techniques. Our method effectively utilizes the available data without making strong structural assumptions and can be applied in general settings where the exposure measurements occur at different time points for different individuals. We demonstrate through simulations that our proposed method performs well in estimating time-varying effects and provides reliable inference results when the time-varying effect form is correctly specified. The method could theoretically be used to estimate arbitrarily complex time-varying effects; however, there is a trade-off between model complexity and instrument strength, and estimating complex time-varying effects requires unrealistically strong instruments. We illustrate the application of this method in a case study examining the time-varying effect of systolic blood pressure on urea levels.
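A sketch of the functional-principal-component step (standard FPCA, not necessarily the paper's exact parametrization): each individual exposure trajectory is expanded as
\[
X_i(t) \approx \mu(t) + \sum_{k=1}^{K} \xi_{ik}\, \varphi_k(t),
\]
where the $\varphi_k$ are eigenfunctions of the covariance operator of the exposure process and the scores $\xi_{ik}$ are individual-specific. Expanding a time-varying effect $\beta(t)$ in the same basis then reduces the continuous-time problem to finitely many score-level parameters, which is where the trade-off between model complexity and instrument strength noted above arises.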