In this work we investigate a 1D evolution equation involving a divergence form operator whose diffusion coefficient changes sign, as in models for metamaterials. We focus on the construction of a fundamental solution for the evolution equation, which does not proceed as in the case of standard parabolic PDEs, since the associated second-order operator is not elliptic. We show that a spectral representation of the semigroup associated to the equation can be derived, which leads to a first expression of the fundamental solution. We also derive a probabilistic representation in terms of a pseudo Skew Brownian Motion (SBM). This construction generalizes the one derived from the killed SBM when the diffusion coefficient is piecewise constant but remains positive. We show that the pseudo SBM can be approximated by a rescaled pseudo asymmetric random walk, which allows us to derive several numerical schemes for the resolution of the PDE, and we report the associated numerical test results.
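To illustrate the kind of random-walk approximation involved, here is a minimal sketch of the classical case only: a walk that is symmetric away from the origin but biased at the origin converges, after diffusive rescaling, to a skew Brownian motion with skewness beta. All names and parameters below are illustrative, and this is the standard SBM, not the pseudo SBM constructed in the paper.

```python
import random

def skew_walk_endpoint(n_steps, beta, rng):
    """One path of an asymmetric random walk: at the origin the walk steps
    right with probability (1 + beta)/2; away from the origin it is symmetric."""
    x = 0
    for _ in range(n_steps):
        if x == 0:
            step = 1 if rng.random() < (1.0 + beta) / 2.0 else -1
        else:
            step = 1 if rng.random() < 0.5 else -1
        x += step
    return x

def estimate_positive_mass(n_paths, n_steps, beta, seed=0):
    """Fraction of paths ending on the positive side; for skew Brownian
    motion with skewness beta this probability is (1 + beta)/2."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_paths)
               if skew_walk_endpoint(n_steps, beta, rng) > 0)
    return hits / n_paths
```

With an odd number of steps the endpoint is never zero, so the empirical positive mass can be compared directly to the SBM value (1 + beta)/2.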
In this paper, we carry out the numerical analysis of a nonsmooth quasilinear elliptic optimal control problem, where the coefficient in the divergence term of the corresponding state equation is not differentiable with respect to the state variable. Despite the lack of differentiability of the nonlinearity in the quasilinear elliptic equation, the corresponding control-to-state operator is of class $C^1$ but not of class $C^2$. Analogously, the discrete control-to-state operators associated with the approximated control problems are proven to be of class $C^1$ only. By using an explicit second-order sufficient optimality condition, we prove a priori error estimates for a variational approximation, a piecewise constant approximation, and a continuous piecewise linear approximation of the continuous optimal control problem. The numerical tests confirm these error estimates.
In this work, we establish a linear convergence estimate for gradient descent with delay $\tau\in\mathbb{N}$ when the cost function is $\mu$-strongly convex and $L$-smooth. This result improves upon the well-known estimates of Arjevani et al. \cite{ASS} and Stich-Karmireddy \cite{SK} in the sense that it is non-ergodic and holds under weaker assumptions on the cost function. Moreover, the admissible range of the learning rate $\eta$ is extended from $\eta\leq 1/(10L\tau)$ to $\eta\leq 1/(4L\tau)$ for $\tau =1$ and $\eta\leq 3/(10L\tau)$ for $\tau \geq 2$, where $L >0$ is the Lipschitz constant of the gradient of the cost function. Furthermore, we show linear convergence of the cost function under the Polyak-{\L}ojasiewicz\,(PL) condition, for which the admissible learning rate improves further to $\eta\leq 9/(10L\tau)$ for large delays $\tau$. The proof framework also extends to stochastic gradient descent with time-varying delay under the PL condition. Finally, numerical experiments are provided to confirm the theoretical results.
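The delayed update analyzed here has the form $x_{k+1} = x_k - \eta\,\nabla f(x_{k-\tau})$. A minimal sketch on a toy strongly convex quadratic (our example; the convention of reusing the initial point while $k < \tau$ is one common choice):

```python
def delayed_gd(grad, x0, eta, tau, n_iters):
    """Gradient descent with fixed delay tau: the step at iteration k uses
    the gradient at the iterate from tau steps earlier (the initial point
    is reused while k < tau, one common convention)."""
    xs = [x0]
    for k in range(n_iters):
        xs.append(xs[-1] - eta * grad(xs[max(k - tau, 0)]))
    return xs[-1]
```

On $f(x) = x^2/2$ (so $L = \mu = 1$) with $\tau = 1$ and $\eta = 1/(4L\tau) = 0.25$, the recurrence $x_{k+1} = x_k - 0.25\,x_{k-1}$ has a double characteristic root $1/2$, so the iterates contract geometrically.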
Several mixed-effects models for longitudinal data have been proposed to accommodate the non-linearity of late-life cognitive trajectories and assess the putative influence of covariates on it. No prior research provides a side-by-side examination of these models to offer guidance on their proper application and interpretation. In this work, we examined five statistical approaches previously used to answer research questions related to non-linear changes in cognitive aging: the linear mixed model (LMM) with a quadratic term, the LMM with splines, the functional mixed model, the piecewise linear mixed model, and the sigmoidal mixed model. We first theoretically describe the models. Next, using data from two prospective cohorts with annual cognitive testing, we compared the interpretation of the models by investigating associations of education with cognitive change before death. Lastly, we performed a simulation study to empirically evaluate the models and provide practical recommendations. Except for the LMM-quadratic, the fit of all models was generally adequate to capture the non-linearity of cognitive change, and the models were relatively robust. Although spline-based models have no interpretable nonlinearity parameters, their convergence was easier to achieve, and they allow graphical interpretation. In contrast, piecewise and sigmoidal models, with interpretable non-linear parameters, may require more data to achieve convergence.
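As a rough sketch of the fixed-effects core of the piecewise ("broken-stick") model, one can fit $y = b_0 + b_1 x + b_2 (x - \text{knot})_+$ by least squares. The actual mixed models additionally carry subject-level random effects and an estimated knot; all names, data, and the fixed knot below are illustrative.

```python
def gauss_solve(a, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(b)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_piecewise_linear(xs, ys, knot):
    """Least-squares fit of y = b0 + b1*x + b2*(x - knot)_+ via the
    normal equations; b1 is the pre-knot slope, b1 + b2 the post-knot slope."""
    rows = [[1.0, x, max(x - knot, 0.0)] for x in xs]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    return gauss_solve(ata, aty)
```

On noise-free data generated from a known broken-stick curve, the fit recovers the intercept and both slopes exactly.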
Electromagnetic forming and perforation (EMFP) is a complex and innovative high-strain-rate process that involves electromagnetic-mechanical interactions to achieve simultaneous metal forming and perforation. Instead of spending costly resources on repetitive experimental work, a properly designed numerical model can be used effectively for detailed analysis and characterization of this complex process. A coupled finite element (FE) model is considered for analyzing the multi-physics of the EMFP because of its robustness and improved accuracy. In this work, a detailed understanding of the process has been achieved by numerically simulating the forming and perforation of an Al6061-T6 tube with 12 and 36 holes for two different punches, i.e., pointed and concave, using LS-DYNA software. To shed light on the EMFP physics, the numerical simulation is compared against experimental data in terms of the average hole diameter and the number of perforated holes, for the different punch types and a range of discharge energies. The simulated results show acceptable agreement with the experiments, with maximum deviations of at most 6%, which clearly illustrates the efficacy and capability of the developed coupled multi-physics FE model.
For the stochastic heat equation with multiplicative noise we consider the problem of estimating the diffusivity parameter in front of the Laplace operator. Based on local observations in space, we first study an estimator that was derived for additive noise. A stable central limit theorem shows that this estimator is consistent and asymptotically mixed normal. By taking into account the quadratic variation, we propose two new estimators. Their limiting distributions exhibit a smaller (conditional) variance and the last estimator also works for vanishing noise levels. The proofs are based on local approximation results to overcome the intricate nonlinearities and on a stable central limit theorem for stochastic integrals with respect to cylindrical Brownian motion. Simulation results illustrate the theoretical findings.
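The quadratic-variation idea can be illustrated in its simplest scalar form (our toy example, not the SPDE estimator of the paper): for $X = \sigma W$ with $W$ a Brownian motion, the realized quadratic variation $\sum (\Delta X)^2$ over $[0, T]$ converges to $\sigma^2 T$, so dividing by $T$ gives a consistent estimator of $\sigma^2$.

```python
import random

def realized_qv_estimator(sigma, n_steps, T=1.0, seed=0):
    """Simulate X = sigma * W on [0, T] on a grid of n_steps increments and
    estimate sigma^2 as the realized quadratic variation divided by T."""
    rng = random.Random(seed)
    dt = T / n_steps
    qv = 0.0
    for _ in range(n_steps):
        dx = sigma * rng.gauss(0.0, dt ** 0.5)  # one Brownian increment
        qv += dx * dx
    return qv / T
```

The estimator's variance is $2\sigma^4 / n$, so it tightens as the observation grid is refined.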
We study the performance of stochastic first-order methods for finding saddle points of convex-concave functions. A notorious challenge faced by such methods is that the gradients can grow arbitrarily large during optimization, which may result in instability and divergence. In this paper, we propose a simple and effective regularization technique that stabilizes the iterates and yields meaningful performance guarantees even if the domain and the gradient noise scale linearly with the size of the iterates (and are thus potentially unbounded). Besides providing a set of general results, we also apply our algorithm to a specific problem in reinforcement learning, where it leads to performance guarantees for finding near-optimal policies in an average-reward MDP without prior knowledge of the bias span.
In this work, we present a simple and unified analysis of the Johnson-Lindenstrauss (JL) lemma, a cornerstone in the field of dimensionality reduction critical for managing high-dimensional data. Our approach not only simplifies the understanding but also unifies various constructions under the JL framework, including spherical, binary-coin, sparse JL, Gaussian and sub-Gaussian models. This simplification and unification make significant strides in preserving the intrinsic geometry of data, essential across diverse applications from streaming algorithms to reinforcement learning. Notably, we deliver the first rigorous proof of the spherical construction's effectiveness and provide a general class of sub-Gaussian constructions within this simplified framework. At the heart of our contribution is an innovative extension of the Hanson-Wright inequality to high dimensions, complete with explicit constants, marking a substantial leap in the literature. By employing simple yet powerful probabilistic tools and analytical techniques, such as an enhanced diagonalization process, our analysis not only solidifies the JL lemma's theoretical foundation but also extends its practical reach, showcasing its adaptability and importance in contemporary computational algorithms.
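A minimal sketch of the Gaussian construction treated by the framework (dimensions, seed, and tolerances below are illustrative): project $d$-dimensional points through a $k \times d$ matrix of i.i.d. $N(0, 1/k)$ entries, which preserves pairwise distances up to a $(1 \pm \varepsilon)$ factor with high probability.

```python
import math
import random

def jl_project(points, k, seed=0):
    """Gaussian JL construction: one shared k x d matrix of i.i.d.
    N(0, 1/k) entries applied to every input vector."""
    rng = random.Random(seed)
    d = len(points[0])
    A = [[rng.gauss(0.0, 1.0 / math.sqrt(k)) for _ in range(d)]
         for _ in range(k)]
    return [[sum(A[i][j] * p[j] for j in range(d)) for i in range(k)]
            for p in points]

def dist(u, v):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
```

For a modest target dimension $k$, the ratio of projected to original pairwise distances concentrates around 1 with fluctuations of order $1/\sqrt{k}$.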
Accurate triangulation of the domain plays a pivotal role in computing numerical approximations of differential operators. A good triangulation is one that helps reduce discretization errors. In a standard collocation technique, a smooth curved domain is typically triangulated by taking points on the boundary and approximating the curved boundary by polygons. However, such an approach often introduces geometrical errors which directly affect the accuracy of the numerical approximation. To restrict such geometrical errors, \textit{isoparametric}, \textit{subparametric}, and \textit{iso-geometric} methods were introduced, which allow the approximation of curved surfaces (or curved line segments). In this paper, we present an efficient finite element method to approximate the solution of the elliptic boundary value problem (BVP) which governs the response of an elastic solid containing a v-notch and inclusions. The algebraically nonlinear constitutive equation, together with the balance of linear momentum, reduces to a second-order quasi-linear elliptic partial differential equation. Our approach allows us to represent complex curved boundaries by a smooth \textit{one-of-its-kind} point transformation. The main idea is to obtain higher-order shape functions which enable us to accurately compute the entries of the finite element matrices and vectors. A Picard-type linearization is utilized to handle the nonlinearities in the governing differential equation. The numerical results for the test cases show considerable improvement in accuracy.
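A Picard-type linearization can be sketched on a 1D toy analogue of such a quasi-linear problem (the equation $-(a(u)\,u')' = f$ with $u(0) = u(1) = 0$, the coefficient, and the finite-difference discretization below are our illustration, not the paper's v-notch problem): freeze $a(u)$ at the previous iterate, solve the resulting linear tridiagonal system, and repeat until the update stalls.

```python
def solve_tridiag(lo, di, up, rhs):
    """Thomas algorithm for a tridiagonal system (lo[0], up[-1] unused)."""
    n = len(di)
    c, d = [0.0] * n, [0.0] * n
    c[0] = up[0] / di[0]
    d[0] = rhs[0] / di[0]
    for i in range(1, n):
        m = di[i] - lo[i] * c[i - 1]
        c[i] = up[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - lo[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def picard_solve(a, f, n=101, tol=1e-10, max_iter=100):
    """Picard iteration for -(a(u) u')' = f on (0,1), u(0)=u(1)=0:
    freeze a at the previous iterate, solve the linear problem, repeat."""
    h = 1.0 / (n - 1)
    u = [0.0] * n                       # initial guess (also holds the BCs)
    for _ in range(max_iter):
        # coefficient evaluated at cell midpoints of the previous iterate
        am = [a(0.5 * (u[i] + u[i + 1])) for i in range(n - 1)]
        m = n - 2                       # number of interior unknowns
        lo = [-am[i] / h**2 for i in range(m)]
        up = [-am[i + 1] / h**2 for i in range(m)]
        di = [(am[i] + am[i + 1]) / h**2 for i in range(m)]
        rhs = [f(h * (i + 1)) for i in range(m)]
        u_new = [0.0] + solve_tridiag(lo, di, up, rhs) + [0.0]
        if max(abs(un - uo) for un, uo in zip(u_new, u)) < tol:
            return u_new
        u = u_new
    return u
```

With a constant coefficient the scheme reproduces the exact nodal solution of $-u'' = 1$ in one linear solve; with a mildly nonlinear $a(u) \geq 1$ the iteration converges to a slightly smaller solution, as the maximum principle suggests.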
We introduce a fine-grained framework for uncertainty quantification of predictive models under distributional shifts. This framework distinguishes the shift in covariate distributions from that in the conditional relationship between the outcome (Y) and the covariates (X). We propose to reweight the training samples to adjust for an identifiable covariate shift while protecting against worst-case conditional distribution shift bounded in an $f$-divergence ball. Based on ideas from conformal inference and distributionally robust learning, we present an algorithm that outputs (approximately) valid and efficient prediction intervals in the presence of distributional shifts. As a use case, we apply the framework to sensitivity analysis of individual treatment effects with hidden confounding. The proposed methods are evaluated in simulation studies and three real data applications, demonstrating superior robustness and efficiency compared with existing benchmarks.
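For readers unfamiliar with the conformal ingredient, here is the plain split-conformal step that the paper's reweighted, distributionally robust version builds on (function names are ours; the covariate-shift reweighting and $f$-divergence protection are not sketched):

```python
import math

def split_conformal_quantile(cal_resid, alpha):
    """Split conformal calibration: the empirical quantile of held-out
    absolute residuals at level ceil((n + 1)(1 - alpha)) / n, which gives
    finite-sample marginal coverage >= 1 - alpha under exchangeability."""
    n = len(cal_resid)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_resid)[min(k, n) - 1]

def predict_interval(mu_hat, q):
    """Prediction interval centered at the point prediction mu_hat."""
    return (mu_hat - q, mu_hat + q)
```

With $n = 99$ calibration residuals and $\alpha = 0.1$, the rule picks the 90th smallest residual as the interval half-width.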
The ability to extract material parameters of perovskite from quantitative experimental analysis is essential for rational design of photovoltaic and optoelectronic applications. However, the difficulty of this analysis increases significantly with the complexity of the theoretical model and the number of material parameters for perovskite. Here we use Bayesian optimization to develop an analysis platform that can extract up to 8 fundamental material parameters of an organometallic perovskite semiconductor from a transient photoluminescence experiment, based on a complex full physics model that includes drift-diffusion of carriers and dynamic defect occupation. An example study of thermal degradation reveals that changes in doping concentration and carrier mobility dominate, while the defect energy level remains nearly unchanged. This platform can be conveniently applied to other experiments or to combinations of experiments, accelerating materials discovery and optimization of semiconductor materials for photovoltaics and other applications.
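The extraction loop can be caricatured in one parameter (a toy stand-in: a grid search on a mono-exponential decay rather than Bayesian optimization over the full drift-diffusion model; every name and value below is illustrative): simulate the model for a candidate parameter, score the misfit against the measured transient, and keep the best candidate.

```python
import math

def simulate_decay(tau, ts):
    """Toy transient-PL-like signal: a mono-exponential decay with lifetime tau."""
    return [math.exp(-t / tau) for t in ts]

def extract_lifetime(data, ts, tau_grid):
    """Pick the lifetime on the grid minimizing the squared misfit to the
    data (a crude stand-in for a Bayesian-optimization search)."""
    def loss(tau):
        return sum((m - d) ** 2 for m, d in zip(simulate_decay(tau, ts), data))
    return min(tau_grid, key=loss)
```

On noise-free synthetic data the search recovers the generating lifetime whenever it lies on the grid; real multi-parameter extraction replaces the grid with a surrogate-guided search.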