Polymer flooding is crucial in hydrocarbon production, increasing oil recovery by improving the water-oil mobility ratio. However, the high viscosity of the displacing fluid may cause sand production problems in poorly consolidated reservoirs. This work investigates the effect of polymer injection on sand production using an experimental study and a numerical model at the laboratory scale. The experiment uses an artificial sandstone fabricated to match the characteristics of an oil field in Kazakhstan. A polymer solution based on xanthan gum is injected into the core to study the impact of polymer flooding on sand production. The rheology of the polymer solution is also examined with a rotational rheometer, and the measurements are fitted by a power-law model. We observe no sand production during brine injection over a wide range of flow rates. However, sanding occurs when the polymer solution is injected: more than 50% of the cumulative sand production is obtained after one pore volume of polymer solution is injected. In the numerical part of the study, we present a coupled discrete element method (DEM) and computational fluid dynamics (CFD) model to describe polymer flow in a granular porous medium. In the solid phase, a modified cohesive contact model characterizes the bonding mechanism between sand particles. The fluid phase is modeled as a non-Newtonian fluid using a power-law model. We verify the numerical model against the laboratory results. The numerical model shows non-uniform bond breakage when only confining stress is applied, whereas injection of the polymer into the sample leads to a relatively gradual decrease in the number of bonds. The large fluid pressure difference produces higher fluid velocities, which cause intensive sand production at the beginning of the simulation. The fraction of medium-sized particles in the produced sand is greater than their initial fraction in the sample before injection.
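As a hedged illustration of the power-law (Ostwald-de Waele) fit mentioned above, the sketch below fits apparent viscosity versus shear rate on a log-log scale; the shear-rate and viscosity values are made-up placeholders, not the measured rheometry data from this study.

```python
import numpy as np

# Placeholder rheometer data (assumed values, not the measured dataset):
# apparent viscosity mu [Pa.s] at shear rates gamma_dot [1/s].
gamma_dot = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])
mu = np.array([1.2, 0.55, 0.38, 0.17, 0.12, 0.055])

# Power-law (Ostwald-de Waele) model: mu = K * gamma_dot**(n - 1).
# Taking logs gives a straight line: log(mu) = log(K) + (n - 1) * log(gamma_dot).
slope, intercept = np.polyfit(np.log(gamma_dot), np.log(mu), 1)
n = slope + 1.0          # flow behaviour index (n < 1 => shear thinning)
K = np.exp(intercept)    # consistency index [Pa.s^n]

print(f"n = {n:.3f}, K = {K:.3f} Pa.s^n")
```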
Introduction. There is currently no guidance on how to assess the calibration of multistate models used for risk prediction. We introduce several techniques that can be used to produce calibration plots for the transition probabilities of a multistate model, and assess their performance in the presence of non-informative and informative censoring through simulation. Methods. We studied pseudo-values based on the Aalen-Johansen estimator, binary logistic regression with inverse probability of censoring weights (BLR-IPCW), and multinomial logistic regression with inverse probability of censoring weights (MLR-IPCW). The MLR-IPCW approach results in a calibration scatter plot, providing additional insight into the calibration. We simulated data with varying levels of censoring and evaluated the ability of each method to estimate the calibration curve for a set of predicted transition probabilities. We also evaluated the calibration of a model predicting the incidence of cardiovascular disease, type 2 diabetes and chronic kidney disease in a cohort of patients derived from linked primary and secondary healthcare records. Results. The pseudo-value, BLR-IPCW and MLR-IPCW approaches give unbiased estimates of the calibration curves under non-informative censoring. These methods remained unbiased in the presence of informative censoring unless the censoring mechanism was strongly informative, with bias concentrated in regions where the predicted transition probabilities have low density. Conclusions. We recommend implementing either the pseudo-value or BLR-IPCW approach to produce a calibration curve, combined with the MLR-IPCW approach to produce a calibration scatter plot, which provides additional information over either of the other methods.
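The following is a minimal sketch of the BLR-IPCW idea for a single transition: weight eligible subjects by the inverse of an estimated censoring-survival probability, then fit a weighted logistic regression of the observed state indicator on the logit of the predicted transition probability. The synthetic inputs, the linear (rather than spline-based) calibration model, the simplified eligibility rule, and the evaluation of the weights at the prediction horizon are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def censoring_survival(time, censored, t_eval):
    """Kaplan-Meier estimate of the censoring-survival function G(t),
    treating censoring as the 'event' (illustrative helper, ties ignored)."""
    order = np.argsort(time)
    t, c = time[order], censored[order]
    at_risk, surv = len(t), 1.0
    times, G = [], []
    for ti, ci in zip(t, c):
        if ci:
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1
        times.append(ti)
        G.append(surv)
    times, G = np.array(times), np.array(G)
    idx = np.searchsorted(times, t_eval, side="right") - 1
    return np.where(idx >= 0, G[np.clip(idx, 0, None)], 1.0)

# Assumed inputs: follow-up time, censoring indicator, indicator of occupying
# the target state at horizon t_star, and the model's predicted probability.
rng = np.random.default_rng(0)
n, t_star = 500, 5.0
p_pred = rng.uniform(0.05, 0.6, n)
in_state = rng.binomial(1, p_pred)
time = rng.exponential(10.0, n)
censored = rng.binomial(1, 0.2, n).astype(bool)

eligible = (~censored) | (time >= t_star)
w = 1.0 / censoring_survival(time, censored, np.full(n, t_star))

# Weighted logistic calibration model on the logit of the prediction.
logit_p = np.log(p_pred / (1 - p_pred)).reshape(-1, 1)
calib = LogisticRegression()
calib.fit(logit_p[eligible], in_state[eligible], sample_weight=w[eligible])
calib_curve = calib.predict_proba(logit_p)[:, 1]  # plot against p_pred
```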
This paper proposes a hierarchy of numerical fluxes for the compressible flow equations that are kinetic-energy and pressure-equilibrium preserving and asymptotically entropy conservative, i.e., they can arbitrarily reduce the numerical error in entropy production due to the spatial discretization. The fluxes are based on the use of the harmonic mean for the internal energy and rely only on algebraic operations, making them less computationally expensive than the entropy-conserving fluxes based on the logarithmic mean. The use of the geometric mean is also explored and found to be well suited to reducing errors in the entropy evolution. Numerical tests confirm the theoretical predictions, and the entropy-conservation capabilities of a selection of schemes are compared.
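For reference, the averaging operators mentioned above, written for a generic pair of left/right states $a_L$, $a_R$ (standard definitions; which thermodynamic quantity each mean is applied to is specified in the paper itself):

```latex
\bar{a}^{\,\mathrm{harm}} = \frac{2\,a_L a_R}{a_L + a_R},
\qquad
\bar{a}^{\,\mathrm{geo}} = \sqrt{a_L a_R},
\qquad
\bar{a}^{\,\ln} = \frac{a_L - a_R}{\ln a_L - \ln a_R}.
```

Unlike the first two, the logarithmic mean requires special handling (e.g., a series expansion) when $a_L \approx a_R$, which is one source of the extra cost of logarithmic-mean fluxes.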
Block majorization-minimization (BMM) is a simple iterative algorithm for nonconvex constrained optimization that sequentially minimizes majorizing surrogates of the objective function in each block coordinate while the other coordinates are held fixed. BMM encompasses a large class of optimization algorithms, such as block coordinate descent and its proximal-point variant, expectation-maximization, and block projected gradient descent. We establish that, for general constrained nonconvex optimization, BMM with strongly convex surrogates can produce an $\epsilon$-stationary point within $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$ iterations and asymptotically converges to the set of stationary points. Furthermore, we propose a trust-region variant of BMM that can handle surrogates that are only convex while retaining the same iteration complexity and asymptotic stationarity. These results hold robustly even when the convex sub-problems are solved inexactly, as long as the optimality gaps are summable. As an application, we show that a regularized version of the celebrated multiplicative update algorithm of Lee and Seung for nonnegative matrix factorization has iteration complexity $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$. The same result holds for a wide class of regularized nonnegative tensor decomposition algorithms as well as the classical block projected gradient descent algorithm. These theoretical results are validated through various numerical experiments.
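As a hedged sketch of one special case discussed above, the snippet below implements the classical Lee-Seung multiplicative updates for NMF under the Frobenius loss with a simple Tikhonov (L2) regularizer; the specific regularization analyzed in the paper may differ.

```python
import numpy as np

def regularized_nmf_mu(X, r, lam=0.1, n_iter=200, eps=1e-12, seed=0):
    """Multiplicative updates for
    min_{W,H >= 0} 0.5*||X - W H||_F^2 + 0.5*lam*(||W||_F^2 + ||H||_F^2)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Each factor is multiplied by (negative part)/(positive part) of its gradient.
        W *= (X @ H.T) / (W @ (H @ H.T) + lam * W + eps)
        H *= (W.T @ X) / ((W.T @ W) @ H + lam * H + eps)
    return W, H

# Toy usage on a random nonnegative matrix.
X = np.abs(np.random.default_rng(1).random((30, 20)))
W, H = regularized_nmf_mu(X, r=5)
print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```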
This paper aims to reconstruct the initial condition of a hyperbolic equation with an unknown damping coefficient. Our approach approximates the solution of the hyperbolic equation by a truncated Fourier-type expansion in time with respect to a polynomial-exponential basis. This truncation eliminates the time variable and, consequently, yields a system of quasi-linear elliptic equations. To solve this system globally, without the need for an accurate initial guess, we employ the Carleman contraction principle. We provide several numerical examples to illustrate the efficacy of our method, which not only delivers accurate reconstructions but also exhibits notable computational efficiency.
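Schematically, the time-truncation step can be written as follows, where $\{\Psi_n\}_{n\ge 1}$ denotes the polynomial-exponential basis; the display is only an illustrative form (the coefficient formula assumes the basis has been orthonormalized on $(0,T)$), and the exact basis and resulting elliptic system are as specified in the paper.

```latex
u(\mathbf{x},t) \;\approx\; \sum_{n=1}^{N} u_n(\mathbf{x})\,\Psi_n(t),
\qquad
u_n(\mathbf{x}) \;=\; \int_{0}^{T} u(\mathbf{x},t)\,\Psi_n(t)\,\mathrm{d}t .
```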
Polyurethane (PU) is an ideal thermal insulation material owing to its excellent thermal properties, and the incorporation of phase change material (PCM) capsules into PU has been shown to be effective in building envelopes. This design can significantly increase the stability of the indoor thermal environment and reduce fluctuations in indoor air temperature. We develop a multiscale model of a PU-PCM foam composite and study the thermal conductivity of this material; the obtained thermal conductivity can then be used to optimize the material design. We conduct a case study based on the performance of the optimized material to assess the thermal comfort of occupants in a single room whose envelope incorporates the PU-PCM composite, and we also predict the energy consumption for this case. The results show that the design is promising, enabling passive building energy design and significantly improving occupant comfort.
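As a loose illustration of how an effective thermal conductivity for a capsule-in-matrix composite can be estimated, the sketch below uses the classical Maxwell-Eucken relation for dilute spherical inclusions; this is a textbook estimate, not the paper's multiscale model, and the conductivity values and volume fraction are assumed placeholders.

```python
def maxwell_eucken(k_matrix, k_inclusion, phi):
    """Effective conductivity of a matrix with dilute spherical inclusions
    at volume fraction phi (classical Maxwell-Eucken estimate)."""
    num = 2 * k_matrix + k_inclusion + 2 * phi * (k_inclusion - k_matrix)
    den = 2 * k_matrix + k_inclusion - phi * (k_inclusion - k_matrix)
    return k_matrix * num / den

# Assumed values for illustration: PU foam ~0.025 W/(m K), PCM capsule ~0.2 W/(m K).
print(maxwell_eucken(k_matrix=0.025, k_inclusion=0.2, phi=0.15))
```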
We propose a computationally and statistically efficient procedure for segmenting univariate data under piecewise linearity. The proposed moving sum (MOSUM) methodology detects multiple change points at which the underlying signal undergoes discontinuous jumps and/or slope changes. Theoretically, it controls the family-wise error rate at a given significance level asymptotically, achieves consistency in multiple change point detection, and matches the minimax optimal rate of estimation when the signal is piecewise linear and continuous, all under weak assumptions permitting serial dependence and heavy-tailedness. Computationally, the complexity of the MOSUM procedure is $O(n)$, which, combined with its good performance on simulated datasets, makes it highly attractive in comparison with existing methods. We further demonstrate its good performance on a real-data example on rolling element bearing prognostics.
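The sketch below illustrates the moving-window idea behind a MOSUM-type detector for level and slope changes: fit local linear models on adjacent windows and flag the locations where their fits disagree most. It is a naive O(nG) illustration with an ad hoc combination of level and slope gaps, not the paper's O(n) statistic or its calibrated critical value.

```python
import numpy as np

def mosum_linear_contrast(x, G):
    """For each t, fit a line on the G points before t and the G points from t,
    and combine the disagreement in level and slope at the boundary."""
    n = len(x)
    stat = np.zeros(n)
    idx = np.arange(G, dtype=float)
    for t in range(G, n - G):
        slope_l, icpt_l = np.polyfit(idx, x[t - G:t], 1)
        slope_r, icpt_r = np.polyfit(idx, x[t:t + G], 1)
        level_gap = abs((slope_l * G + icpt_l) - icpt_r)  # both fits compared at time t
        slope_gap = abs(slope_l - slope_r)
        stat[t] = level_gap + G * slope_gap               # ad hoc weighting
    return stat

# Toy signal: a jump and a slope change at t = 200, plus noise.
rng = np.random.default_rng(0)
t = np.arange(400)
signal = np.where(t < 200, 0.01 * t, 5.0 + 0.05 * (t - 200))
x = signal + rng.normal(scale=0.3, size=t.size)
stat = mosum_linear_contrast(x, G=50)
print("peak of the contrast statistic (true change point is 200):", int(np.argmax(stat)))
```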
There is a growing interest in the implementation of platform trials, which provide the flexibility to incorporate new treatment arms during the trial and the ability to halt treatments early for lack of benefit or for observed superiority. In such trials, it can be important to ensure that error rates are controlled. This paper introduces a multi-stage design that enables new treatment arms to be added, at any point, in a pre-planned manner within a platform trial while still maintaining control of the family-wise error rate. The paper focuses on finding the sample size required to achieve a desired level of statistical power when treatments continue to be tested even after a superior treatment has already been found, which may be of interest if other sponsors' treatments are also superior to the current control or if multiple doses are being tested. The calculations needed to determine the expected sample size are given. A motivating trial is presented in which the sample size under different configurations is studied. Additionally, the approach is compared with running multiple separate trials, and it is shown that, in many scenarios where family-wise error rate control is required, a platform trial may offer no benefit in terms of sample size.
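As a hedged toy illustration of family-wise error rate control in a multi-arm setting, the sketch below estimates the FWER by Monte Carlo for a single-stage, many-to-one comparison against a shared control with a Bonferroni-adjusted threshold; this is not the paper's multi-stage design with added arms, and all sample sizes and levels are assumed values.

```python
import numpy as np
from scipy.stats import norm

def simulate_fwer(k_arms=3, n_per_arm=100, alpha=0.025, n_sim=20_000, seed=0):
    """Monte Carlo FWER under the global null for k active arms vs. one
    shared control, using one-sided z-tests at a Bonferroni-adjusted level."""
    rng = np.random.default_rng(seed)
    crit = norm.ppf(1 - alpha / k_arms)          # Bonferroni critical value
    control = rng.normal(size=(n_sim, n_per_arm)).mean(axis=1)
    arms = rng.normal(size=(n_sim, k_arms, n_per_arm)).mean(axis=2)
    z = (arms - control[:, None]) / np.sqrt(2.0 / n_per_arm)
    any_rejection = (z > crit).any(axis=1)       # at least one false rejection
    return any_rejection.mean()

print("estimated FWER:", simulate_fwer())
```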
The dynamics of cellular pattern formation is crucial for understanding embryonic development and tissue morphogenesis. Recent studies have shown that human dermal fibroblasts cultured on liquid crystal elastomers can exhibit an increase in orientational alignment over time, accompanied by cell proliferation, under the weak guidance of a molecularly aligned substrate. However, a comprehensive understanding of how this order arises remains largely lacking. This knowledge gap may be attributed, in part, to a scarcity of mechanistic models that can capture the temporal progression of the complex nonequilibrium dynamics of the cellular alignment process. The orientational alignment occurs primarily when cells reach a high density near confluence; accurate modeling therefore has to account for both the cell-cell interaction term and the influence of the substrate, which acts as a one-body external potential term. To fill this gap, we develop a hybrid procedure that uses statistical learning approaches to extend state-of-the-art physics models for quantifying both effects. We also develop a more efficient way to perform feature selection that avoids testing all feature combinations through simulation. The maximum likelihood estimator of the model is derived and implemented in computationally scalable algorithms for model calibration and simulation. By including features such as non-Gaussian, anisotropic fluctuations and by restricting alignment interactions to neighboring cells with the same velocity direction, the model quantitatively reproduces the key system-level quantities, namely the temporal progression of the velocity orientational order parameter and the variability of the velocity vectors, whereas models missing any of these features fail to capture these temporally dependent quantities.
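For concreteness, a velocity orientational order parameter such as the one referred to above can be computed directly from the cell velocity vectors. The sketch below evaluates the standard polar and nematic order parameters on synthetic velocities; which exact definition the paper uses is an assumption here, and the synthetic data are placeholders.

```python
import numpy as np

def polar_order(v):
    """Polar order parameter: magnitude of the mean unit velocity vector."""
    u = v / np.linalg.norm(v, axis=1, keepdims=True)
    return np.linalg.norm(u.mean(axis=0))

def nematic_order(v):
    """Nematic order parameter: magnitude of <exp(2i*theta)>, insensitive to
    head-tail direction (measures alignment along a common axis)."""
    theta = np.arctan2(v[:, 1], v[:, 0])
    return np.hypot(np.cos(2 * theta).mean(), np.sin(2 * theta).mean())

# Synthetic velocities: mostly aligned along x with angular noise.
rng = np.random.default_rng(0)
angles = rng.normal(0.0, 0.4, size=1000)
v = np.column_stack([np.cos(angles), np.sin(angles)])
print("polar order:", polar_order(v), "nematic order:", nematic_order(v))
```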
Accurately estimating parameters in complex nonlinear systems is crucial across scientific and engineering fields. We present a novel approach to parameter estimation using a neural network trained with the Huber loss function. This method taps into deep learning's ability to uncover the parameters governing intricate behaviors in nonlinear equations. We validate our approach using synthetic data generated from predefined functions that model the system dynamics. When trained on noisy time series data, the network minimizes the Huber loss and converges to accurate parameter estimates. We apply our method to damped oscillators, Van der Pol oscillators, Lotka-Volterra systems, and Lorenz systems under multiplicative noise. The trained neural network estimates the parameters accurately, as evidenced by the close match between the true and estimated latent dynamics; visual comparison of the true and estimated trajectories further reinforces the method's precision and robustness. Our study underscores the Huber-loss-guided neural network as a versatile tool for parameter estimation that effectively uncovers complex relationships in nonlinear systems and handles noise and uncertainty adeptly, showcasing its adaptability to real-world challenges.
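A minimal sketch of the idea for the damped-oscillator case, assuming the unknowns are the damping ratio and natural frequency, a simple differentiable explicit-Euler rollout as the forward model, and torch.nn.HuberLoss as the objective; the paper's network architecture and training setup may differ.

```python
import torch

def rollout(zeta, omega, x0=1.0, v0=0.0, dt=0.01, steps=500):
    """Differentiable explicit-Euler rollout of x'' + 2*zeta*omega*x' + omega^2*x = 0."""
    x, v, xs = torch.tensor(x0), torch.tensor(v0), []
    for _ in range(steps):
        a = -2.0 * zeta * omega * v - omega**2 * x
        v = v + dt * a
        x = x + dt * v
        xs.append(x)
    return torch.stack(xs)

# Noisy synthetic observations from the "true" system (multiplicative noise).
torch.manual_seed(0)
with torch.no_grad():
    data = rollout(torch.tensor(0.15), torch.tensor(2.0))
    data = data * (1.0 + 0.05 * torch.randn_like(data))

# Trainable estimates of the unknown parameters, fitted via the Huber loss.
zeta = torch.nn.Parameter(torch.tensor(0.3))
omega = torch.nn.Parameter(torch.tensor(1.5))
loss_fn = torch.nn.HuberLoss(delta=0.1)
opt = torch.optim.Adam([zeta, omega], lr=0.02)

for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(rollout(zeta, omega), data)
    loss.backward()
    opt.step()

print(f"estimated zeta = {zeta.item():.3f}, omega = {omega.item():.3f}")
```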
The Sinc approximation applied to double-exponentially decaying functions is referred to as the DE-Sinc approximation. Because of its high efficiency, this method has been used in various applications. In the Sinc approximation, the mesh size and truncation numbers should be selected optimally to achieve the best performance. However, the standard selection formula is only "near-optimal", because the optimal mesh size cannot be expressed in terms of elementary functions of the truncation numbers. In this study, we propose two improved selection formulas. The first is based on a concept from earlier research that yielded a better selection formula for the double-exponential formula; it performs slightly better than the standard formula but is still not optimal. For the second selection formula, we introduce a new parameter to obtain a truly optimal selection. We provide explicit error bounds for both selection formulas. Numerical comparisons show that the first formula gives a better error bound than the standard formula, and the second gives a much better error bound than both the standard and the first formulas.
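For readers unfamiliar with the setup, the Sinc approximation with mesh size $h$ and truncation numbers $M$, $N$ takes the standard form below (the DE-Sinc case applies this after a double-exponential variable transformation); the precise selection formulas for $h$, $M$, and $N$ are those derived in the paper.

```latex
F(x) \;\approx\; \sum_{k=-M}^{N} F(kh)\,
  \operatorname{sinc}\!\left(\frac{x - kh}{h}\right),
\qquad
\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}.
```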