Frictional contact problems are key to investigating the mechanical performance of composite materials under varying service environments. This paper considers a linear elasticity system with strongly heterogeneous coefficients and a quasistatic Tresca friction law, and studies its homogenization within the frameworks of H-convergence and small $\epsilon$-periodicity. The qualitative result, based on H-convergence, shows that the oscillating solutions converge weakly to the homogenized solution, while the quantitative result provides an estimate of the asymptotic error in the $H^1$ norm for periodic homogenization. We also design several numerical experiments to validate the convergence rates predicted by the quantitative analysis.
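As a hedged illustration (not the paper's exact statement or rate), quantitative estimates in periodic homogenization are typically built on the first-order corrector ansatz, assuming oscillating coefficients $A(x/\epsilon)$ with $Y$-periodic cell correctors $\chi^k$:
\[
  u_\epsilon(x) \;\approx\; u_0(x) + \epsilon\, \chi^k\!\big(\tfrac{x}{\epsilon}\big)\, \partial_{x_k} u_0(x),
  \qquad
  \big\| u_\epsilon - u_0 - \epsilon\, \chi^k\!\big(\tfrac{\cdot}{\epsilon}\big)\, \partial_{x_k} u_0 \big\|_{H^1(\Omega)} \;\le\; C\,\epsilon^{1/2},
\]
where $u_\epsilon$ solves the problem with strongly heterogeneous coefficients and $u_0$ the homogenized problem; the exponent for the frictional contact problem studied here may differ from this classical rate.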
Cox regression, a semi-parametric method of survival analysis, is extremely popular in biomedical applications. The proportional hazards assumption is a key requirement of the Cox model. To accommodate non-proportional hazards, we propose to parameterise the shape parameter of the baseline hazard function using an additional, separate Cox-regression term that depends on the vector of covariates. We call this model the double-Cox model. The R programs for fitting the double-Cox model are available on GitHub. We formally introduce the double-Cox model with shared frailty and investigate, by simulation, the estimation bias and the coverage of the proposed point and interval estimation methods for the Gompertz and the Weibull baseline hazards. In applications with low frailty variance and a large number of clusters, the marginal likelihood estimation is almost unbiased and the profile likelihood-based confidence intervals provide good coverage for all model parameters. We also compare the results from the over-fitted double-Cox model to those from the standard Cox model with frailty in the case of scale-only proportional hazards. Results of our simulations on the bias and coverage of the model parameters are provided in 12 tables and 145 A4 figures, 178 pages in total.
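For intuition, here is a minimal sketch of the kind of hazard the double-Cox idea describes, assuming a Weibull baseline and hypothetical coefficient vectors beta (classical scale term) and gamma (shape term); this is an illustration, not the authors' R implementation:

```python
import numpy as np

def double_cox_weibull_hazard(t, x, lam, rho, beta, gamma, frailty=1.0):
    """Illustrative double-Cox hazard with a Weibull baseline.

    The scale enters through the usual Cox term exp(beta'x), while the
    Weibull shape parameter is itself parameterised by a second Cox-type
    term exp(gamma'x).  `frailty` is a shared cluster-level multiplicative
    random effect.
    """
    shape = rho * np.exp(x @ gamma)          # covariate-dependent shape
    scale_term = np.exp(x @ beta)            # classical Cox scale term
    baseline = lam * shape * t ** (shape - 1.0)
    return frailty * baseline * scale_term

# Example: hazard at t = 2 for one subject with two covariates.
h = double_cox_weibull_hazard(t=2.0, x=np.array([1.0, 0.5]),
                              lam=0.1, rho=1.5,
                              beta=np.array([0.3, -0.2]),
                              gamma=np.array([0.1, 0.05]))
```

When gamma = 0 the shape term vanishes and the model reduces to the standard Cox model with a Weibull baseline and shared frailty.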
Event cameras are bio-inspired sensors that perform well in challenging illumination conditions and have high temporal resolution. However, their concept is fundamentally different from that of traditional frame-based cameras. The pixels of an event camera operate independently and asynchronously. They measure changes of the logarithmic brightness and return them in the highly discretised form of time-stamped events indicating a relative change by a certain amount since the last event. New models and algorithms are needed to process this kind of measurement. The present work looks at several motion estimation problems with event cameras. The flow of the events is modelled by a general homographic warping in a space-time volume, and the objective is formulated as a maximisation of contrast within the image of warped events. Our core contribution consists of deriving globally optimal solutions to these generally non-convex problems, which removes the dependency on a good initial guess that plagues existing methods. Our methods rely on branch-and-bound optimisation and employ novel and efficient, recursive upper and lower bounds derived for six different contrast estimation functions. The practical validity of our approach is demonstrated by a successful application to three different event camera motion estimation problems.
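To make the objective concrete, here is a minimal sketch of contrast maximisation for a simple constant-flow warp (not the general homographic warp, and a brute-force search stands in for the paper's branch-and-bound solver); the variance of the image of warped events is one common contrast function:

```python
import numpy as np

def warped_event_image(events, flow, resolution=(180, 240)):
    """Accumulate events warped to a reference time under a constant
    optical flow (vx, vy); each event row is (x, y, t, polarity)."""
    x, y, t, p = events.T
    vx, vy = flow
    xw = np.round(x - vx * t).astype(int)    # warp to t = 0
    yw = np.round(y - vy * t).astype(int)
    img = np.zeros(resolution)
    inside = (xw >= 0) & (xw < resolution[1]) & (yw >= 0) & (yw < resolution[0])
    np.add.at(img, (yw[inside], xw[inside]), p[inside])
    return img

def contrast(events, flow):
    """Variance-based contrast of the image of warped events."""
    return np.var(warped_event_image(events, flow))

# Toy data and a coarse grid search over candidate flows.
events = np.column_stack([np.random.rand(1000) * 240,
                          np.random.rand(1000) * 180,
                          np.random.rand(1000) * 0.05,
                          np.random.choice([-1.0, 1.0], 1000)])
flows = [(vx, vy) for vx in range(-50, 51, 10) for vy in range(-50, 51, 10)]
best_flow = max(flows, key=lambda f: contrast(events, f))
```

The correct warp sharpens edges in the accumulated image, which is exactly what a high contrast value rewards.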
We present a robust framework for performing linear regression with missing entries in the features. By considering an elliptical data distribution, and specifically a multivariate normal model, we are able to formulate a conditional distribution for the missing entries and present a robust framework that minimizes the worst-case error caused by the uncertainty about the missing data. We show that the proposed formulation, which naturally takes into account the dependency between different variables, ultimately reduces to a convex program for which a customized and scalable solver can be devised. In addition to a detailed analysis leading to such a solver, we also analyze the asymptotic behavior of the proposed framework and present technical discussions on estimating the required input parameters. We complement our analysis with experiments on synthetic, semi-synthetic, and real data, and show how the proposed formulation improves prediction accuracy and robustness and outperforms competing techniques.
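As a hedged sketch of the conditional modeling step only (not the worst-case convex program itself), under a multivariate normal model the missing block of a feature vector has a closed-form conditional distribution given the observed block:

```python
import numpy as np

def conditional_gaussian(mu, Sigma, x_obs, obs_idx, mis_idx):
    """Mean and covariance of the missing entries given the observed ones,
    assuming the full feature vector is N(mu, Sigma)."""
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
    S_mo = Sigma[np.ix_(mis_idx, obs_idx)]
    S_mm = Sigma[np.ix_(mis_idx, mis_idx)]
    K = S_mo @ np.linalg.inv(S_oo)                    # regression of missing on observed
    mu_cond = mu[mis_idx] + K @ (x_obs - mu[obs_idx])
    Sigma_cond = S_mm - K @ S_mo.T
    return mu_cond, Sigma_cond

# Example: third feature missing, first two observed.
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])
m, S = conditional_gaussian(mu, Sigma, x_obs=np.array([1.2, -0.4]),
                            obs_idx=[0, 1], mis_idx=[2])
```

The conditional covariance Sigma_cond is what quantifies the uncertainty about the missing entries that the worst-case formulation then guards against.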
This paper introduces methods and a novel toolbox that efficiently integrates high-dimensional Neural Mass Models (NMMs) specified by two essential components. The first is the set of nonlinear Random Differential Equations (RDEs) describing the dynamics of each neural mass. The second is the highly sparse three-dimensional Connectome Tensor (CT) that encodes the strength of the connections and the delays of information transfer along the axons of each connection. Semi-analytical integration of the RDEs is done with the Local Linearization (LL) scheme for each neural mass model, the only scheme guaranteeing dynamical fidelity to the original continuous-time nonlinear dynamics. The approach also seamlessly allows modeling distributed-delay CTs with any level of complexity or realism, as shown by the Moore-Penrose diagram of the algorithm. We achieve high computational efficiency by using a tensor representation of the model that leverages semi-analytic expressions to integrate the RDEs underlying the NMM. We discretize the state equation with Local Linearization via an algebraic formulation. This approach increases numerical integration speed and efficiency, a crucial aspect of large-scale NMM simulations. To illustrate the usefulness of the toolbox, we simulate both a single Zetterberg-Jansen-Rit (ZJR) cortical column and an interconnected population of such columns. These examples illustrate the consequences of modifying the CT in these models, especially by introducing distributed delays. We provide an open-source Matlab live script for the toolbox.
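For readers unfamiliar with the scheme, here is a minimal sketch of one Local Linearization step for a drift f with Jacobian J, using the matrix exponential; this is the generic LL update, not the toolbox's tensorized algebraic formulation, and the crude additive-noise term merely stands in for the random forcing of the RDEs:

```python
import numpy as np
from scipy.linalg import expm

def local_linearization_step(f, jac, x, h, noise_cov=None, rng=None):
    """One LL step: x_{k+1} = x_k + J^{-1} (expm(J h) - I) f(x_k) (+ noise)."""
    J = jac(x)
    n = len(x)
    phi = np.linalg.solve(J, expm(J * h) - np.eye(n))   # J^{-1} (e^{Jh} - I)
    x_new = x + phi @ f(x)
    if noise_cov is not None:
        rng = rng or np.random.default_rng()
        x_new = x_new + rng.multivariate_normal(np.zeros(n), noise_cov * h)
    return x_new

# Example: damped linear oscillator as a stand-in drift.
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
x_next = local_linearization_step(lambda x: A @ x, lambda x: A,
                                  np.array([1.0, 0.0]), h=0.01)
```

Because the update is exact for locally linearized dynamics, it preserves the qualitative behavior of the continuous-time system far better than explicit Euler at comparable step sizes.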
Flight-related health effects are a growing area of environmental health research, with most work examining the concurrent impact of in-flight exposure on cardiac health. One understudied area is the post-flight effect of in-flight exposures. Studies on the health effects of flight often collect a range of repeatedly sampled, time-varying exposure measurements under both crossover and longitudinal sampling designs. A natural choice for modeling the effect of these lagged exposures on post-flight outcomes is the distributed lag model (DLM). However, longitudinal DLMs are a lightly studied class of models. In this article, we propose a class of models for analyzing longitudinal DLMs in which the random effects can incorporate more general structures, including random lags that arise from repeatedly sampling lagged exposures. We develop variational Bayesian algorithms to estimate model components under differing random effect structures, derive a variational AIC for model selection between these structures, and show how the converged variational estimates fit into a framework for testing the difference between two semiparametric curves. We then investigate the post-flight effects of in-flight, lagged exposures on heart health. We also perform simulation studies to evaluate the operating characteristics of our models.
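As a hedged illustration of the model class (the random-effect and random-lag structures studied in the paper generalize the simple random intercept shown here), a longitudinal distributed lag model can be written as
\[
  y_{ij} \;=\; \beta_0 \;+\; \sum_{l=0}^{L} \theta_l\, x_i(t_{ij} - l) \;+\; b_i \;+\; \varepsilon_{ij},
  \qquad b_i \sim N(0, \sigma_b^2), \quad \varepsilon_{ij} \sim N(0, \sigma^2),
\]
where $y_{ij}$ is the $j$-th post-flight outcome of subject $i$, $x_i(t_{ij}-l)$ is the exposure measured $l$ time units before $t_{ij}$, and the lag coefficients $\theta_l$ are typically constrained to vary smoothly in $l$.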
Regional Climate Models (RCMs) describe medium-scale atmospheric and oceanic dynamics and serve as downscaling models. RCMs use atmospheric interactions in General Circulation Models (GCMs) to produce higher-resolution output. They are computationally demanding and require orders of magnitude more computer time than statistical downscaling. In this paper we describe how to use a spatio-temporal statistical model with varying coefficients (VC) as a downscaling emulator for an RCM. To estimate the proposed model, three options are compared: MRA, INLA, and varycoef. MRA methods have not been applied to estimate VC models with covariates, INLA has seen limited work on VC models, and varycoef (an R package on CRAN) has been proposed exclusively for spatially VC models on medium-size data sets. We set up a simulation to compare the performance of INLA, varycoef, and MRA for building a statistical downscaling emulator for an RCM, and then show that the emulator works properly for NARCCAP data. The results show that the model is able to estimate non-stationary marginal effects, which means that the downscaling can vary over space. Furthermore, the model has the flexibility to estimate the mean of any variable in space and time and yields good prediction results. Throughout the simulations, INLA was by far the best approximation method for both the spatial and spatio-temporal versions of the proposed model. Moreover, INLA was the fastest method in all cases and provided the most accurate approximation of the model parameters and of the posterior distribution of the response variable.
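To fix ideas, a hedged sketch of the varying-coefficient structure underlying such an emulator (notation illustrative, not the paper's exact specification) is
\[
  y(s, t) \;=\; \beta_0(s) \;+\; \sum_{k=1}^{p} \beta_k(s)\, x_k(s, t) \;+\; \varepsilon(s, t),
  \qquad \beta_k(\cdot) \sim \mathcal{GP}\big(\mu_k, C_k(\cdot, \cdot)\big),
\]
where $y(s,t)$ is the RCM output, the $x_k(s,t)$ are large-scale GCM covariates, and the spatially varying coefficients $\beta_k(s)$ are modeled as Gaussian processes, so the marginal effect of each covariate is allowed to change over space and the downscaling is non-stationary.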
We consider the problem of kernel classification. Works on kernel regression have shown that the rate of decay of the prediction error with the number of samples is, for a large class of data-sets, well characterized by two quantities: the capacity and the source of the data-set. In this work, we compute the decay rates for the misclassification (prediction) error under the Gaussian design for data-sets satisfying source and capacity assumptions. We derive the rates as a function of the source and capacity coefficients for two standard kernel classification settings, namely margin-maximizing Support Vector Machines (SVMs) and ridge classification, and contrast the two methods. As a consequence, we find that the known worst-case rates are loose for this class of data-sets. Finally, we show that the rates presented in this work are also observed on real data-sets.
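For context, source and capacity conditions are usually stated in terms of the spectrum of the kernel integral operator; a hedged sketch, whose exact exponent conventions may differ from those of the paper, is
\[
  \lambda_k \;\asymp\; k^{-\alpha}, \qquad
  \sum_{k} \frac{\langle f^{\ast}, \phi_k \rangle^2}{\lambda_k^{2r}} \;<\; \infty,
\]
where $(\lambda_k, \phi_k)$ are the eigenpairs of the kernel operator, $\alpha > 1$ is the capacity coefficient controlling the eigenvalue decay, and $r > 0$ is the source coefficient measuring how well the target function $f^{\ast}$ is represented in the kernel eigenbasis; the decay rates are then expressed as functions of $(\alpha, r)$.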
Two of the most significant challenges in uncertainty quantification pertain to the high computational cost of simulating complex physical models and the high dimension of the random inputs. In applications of practical interest, both of these problems are encountered, and standard methods either fail or are not feasible. To overcome the current limitations, we present a generalized formulation of a Bayesian multi-fidelity Monte-Carlo (BMFMC) framework that can exploit lower-fidelity model versions in a small data regime. The goal of our analysis is an efficient and accurate estimation of the complete probabilistic response of high-fidelity models. BMFMC circumvents the curse of dimensionality by learning the relationship between the outputs of a reference high-fidelity model and potentially several lower-fidelity models. While the continuous formulation is mathematically exact and independent of the low-fidelity models' accuracy, we address the challenges associated with the small data regime (i.e., only 50 to 300 high-fidelity model runs can be performed). Specifically, we complement the formulation with a set of informative input features at no extra cost. Despite the inaccurate and noisy information that some low-fidelity models provide, we demonstrate that accurate and certifiable estimates of the quantities of interest can be obtained for uncertainty quantification problems in high stochastic dimensions, with significantly fewer high-fidelity model runs than state-of-the-art methods for uncertainty quantification. We illustrate our approach by applying it to challenging numerical examples such as Navier-Stokes flow simulations and fluid-structure interaction problems.
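As a deliberately simplified, hedged sketch of the multi-fidelity idea (not the authors' full BMFMC formulation with informative input features): learn the conditional relationship between high- and low-fidelity outputs from a handful of paired runs, then reuse a large ensemble of cheap low-fidelity outputs to approximate the high-fidelity output distribution. All model functions below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical models: y_lf is cheap, y_hf is expensive and nonlinear/noisy.
def y_lf(x):  return np.sin(x).sum(axis=1)
def y_hf(x):  return np.sin(x).sum(axis=1) ** 2 + 0.1 * rng.standard_normal(len(x))

# Small data regime: ~100 high-fidelity runs, many cheap low-fidelity runs.
x_small = rng.standard_normal((100, 20))
x_large = rng.standard_normal((50_000, 20))
z_small, y_small = y_lf(x_small), y_hf(x_small)

# Fit a simple conditional model p(y_hf | y_lf): here a cubic mean plus
# homoscedastic Gaussian noise stands in for the Bayesian regression step.
coeffs = np.polyfit(z_small, y_small, deg=3)
resid_std = np.std(y_small - np.polyval(coeffs, z_small))

# Push the large low-fidelity ensemble through the conditional model to
# approximate the high-fidelity output distribution.
z_large = y_lf(x_large)
y_pred = np.polyval(coeffs, z_large) + resid_std * rng.standard_normal(len(z_large))
quantiles = np.quantile(y_pred, [0.05, 0.5, 0.95])
```

The key point is that the expensive model is only evaluated on the small paired set, while the distributional estimate is driven by the large low-fidelity ensemble.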
The magnetohydrodynamics (MHD) equations are generally known to be difficult to solve numerically, due to their highly nonlinear structure and the strong coupling between the electromagnetic and hydrodynamic variables, especially for high Reynolds and coupling numbers. In this work, we present a scalable augmented Lagrangian preconditioner for a finite element discretization of the $\mathbf{B}$-$\mathbf{E}$ formulation of the incompressible viscoresistive MHD equations. For stationary problems, our solver achieves robust performance with respect to the Reynolds and coupling numbers in two dimensions and good results in three dimensions. We extend our method to fully implicit schemes for time-dependent problems, which we solve robustly in both two and three dimensions. Our approach relies on specialized parameter-robust multigrid methods for the hydrodynamic and electromagnetic blocks. The scheme ensures exactly divergence-free approximations of both the velocity and the magnetic field up to solver tolerances. We confirm the robustness of our solver with numerical experiments in which we consider fluid and magnetic Reynolds numbers and coupling numbers up to 10,000 for stationary problems and up to 100,000 for transient problems in two and three dimensions.
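As a hedged reminder of the core idea (the electromagnetic augmentation and the exact Schur complement approximations are specific to the paper), the hydrodynamic part of an augmented Lagrangian preconditioner adds a grad-div penalization of the incompressibility constraint to the velocity block,
\[
  a_\gamma(\mathbf{u}, \mathbf{v}) \;=\; a(\mathbf{u}, \mathbf{v})
  \;+\; \gamma\, \big(\nabla\!\cdot\mathbf{u},\, \nabla\!\cdot\mathbf{v}\big), \qquad \gamma \gg 1,
\]
which makes the pressure Schur complement well approximated by a scaled pressure mass matrix, at the cost of a nearly singular augmented velocity block that is then handled by a parameter-robust multigrid method.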
In the management of lung nodules, it is desirable to predict nodule evolution in terms of diameter variation on Computed Tomography (CT) scans and then provide a follow-up recommendation according to the predicted growth trend of the nodule. To improve the performance of growth trend prediction for lung nodules, it is vital to compare the changes of the same nodule across consecutive CT scans. Motivated by this, we screened out 4,666 subjects with more than two consecutive CT scans from the National Lung Screening Trial (NLST) dataset to organize a temporal dataset called NLSTt. Specifically, we first detect and pair regions of interest (ROIs) covering the same nodule based on registered CT scans. We then predict the texture category and diameter of each nodule with prediction models. Last, we annotate the evolution class of each nodule according to its changes in diameter. Based on the built NLSTt dataset, we propose a siamese encoder to simultaneously exploit the discriminative features of 3D ROIs detected from consecutive CT scans. We then design a novel spatial-temporal mixer (STM) to leverage the interval changes of the same nodule across sequential 3D ROIs and to capture the spatial dependencies between the nodule regions and the current 3D ROI. In line with the clinical diagnosis routine, we employ a hierarchical loss that pays more attention to growing nodules. Extensive experiments on the organized dataset demonstrate the advantage of the proposed method. We also conduct experiments on an in-house dataset to evaluate the clinical utility of our method by comparing it against skilled clinicians.
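To convey the architecture at a glance, here is a minimal, hypothetical PyTorch sketch of a siamese 3D encoder feeding a simple fusion head; the layer sizes and the fusion head are placeholders, not the paper's STM or hierarchical loss:

```python
import torch
import torch.nn as nn

class SiameseGrowthPredictor(nn.Module):
    """Shared 3D encoder for two consecutive ROIs plus a small mixing head
    that fuses the two embeddings to classify the nodule's evolution."""

    def __init__(self, num_classes=3, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared (siamese) weights
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.mixer = nn.Sequential(                   # placeholder for the STM
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_classes),
        )

    def forward(self, roi_prev, roi_curr):
        f_prev = self.encoder(roi_prev)               # features of earlier scan
        f_curr = self.encoder(roi_curr)               # features of current scan
        return self.mixer(torch.cat([f_prev, f_curr], dim=1))

model = SiameseGrowthPredictor()
logits = model(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 1, 32, 32, 32))
```

Weight sharing between the two branches is what lets the network compare the same nodule at two time points in a common feature space.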