The Keller-Segel-Navier-Stokes system governs chemotaxis in liquid environments. The system is solved for the organism and chemoattractant densities and for the fluid velocity and pressure. It is known that globally defined generalised solutions exist whenever the total initial cell mass is below $2\pi$; what is less understood is whether blow-up solutions exist beyond this threshold and whether the threshold is optimal. Motivated by this issue, a numerical blow-up scenario is investigated. Approximate solutions computed via a stabilised finite element method based on a shock-capturing technique satisfy \emph{a priori} bounds as well as lower and $L^1(\Omega)$ bounds for the cell and chemoattractant densities. These latter properties are essential for detecting numerical blow-up configurations, since violating either requirement can trigger numerical oscillations that lead to unrealistic finite-time collapses into persistent Dirac-type measures. Our findings indicate that the existence threshold $2\pi$ for the cell mass may not be optimal, and we conjecture that the critical value $4\pi$ is inherited from the fluid-free Keller-Segel equations. Additionally, it is observed that the formation of singular points can be suppressed if the fluid flow is intensified.
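For reference, a commonly studied form of the chemotaxis-Navier-Stokes system (the model treated here may differ in lower-order terms and boundary conditions) reads
\[
\begin{aligned}
n_t + \mathbf{u}\cdot\nabla n &= \Delta n - \nabla\cdot(n\nabla c),\\
c_t + \mathbf{u}\cdot\nabla c &= \Delta c - c + n,\\
\mathbf{u}_t + (\mathbf{u}\cdot\nabla)\mathbf{u} &= \Delta \mathbf{u} - \nabla P + n\nabla\phi, \qquad \nabla\cdot\mathbf{u} = 0,
\end{aligned}
\]
where $n$ is the cell density, $c$ the chemoattractant concentration, $\mathbf{u}$ the fluid velocity, $P$ the pressure and $\phi$ a given potential; the mass thresholds above refer to $\int_\Omega n_0\,\mathrm{d}x$.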
In the task of predicting spatio-temporal fields in environmental science using statistical methods, there is growing interest in statistical models that are inspired by the physics of the underlying phenomena and are numerically efficient. Large space-time datasets call for new numerical methods to process them efficiently. The Stochastic Partial Differential Equation (SPDE) approach has proven effective for estimation and prediction in a spatial context. We present here the advection-diffusion SPDE with first-order derivative in time, which defines a large class of nonseparable spatio-temporal models. A Gaussian Markov random field approximation of the solution to the SPDE is built by discretizing the temporal derivative with a finite difference method (implicit Euler) and by solving the spatial SPDE with a finite element method (continuous Galerkin) at each time step. The "Streamline Diffusion" stabilization technique is introduced when the advection term dominates the diffusion. Computationally efficient methods are proposed to estimate the parameters of the SPDE and to predict the spatio-temporal field by kriging, as well as to perform conditional simulations. The approach is applied to a solar radiation dataset. Its advantages and limitations are discussed.
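As a schematic illustration (the operator, scaling and noise specification used in the paper may differ), a first-order-in-time advection-diffusion SPDE of the general form
\[
\frac{\partial X}{\partial t}(\mathbf{s},t) + \frac{1}{c}\big(\kappa^2 - \nabla\cdot(\mathbf{H}\nabla)\big)^{\alpha} X(\mathbf{s},t) + \mathbf{v}\cdot\nabla X(\mathbf{s},t) = \tau\,\mathcal{Z}(\mathbf{s},t)
\]
leads, for $\alpha = 1$, to an implicit Euler / continuous Galerkin update that solves the sparse linear system
\[
\Big(\mathbf{M} + \tfrac{\Delta t}{c}\big(\kappa^2\mathbf{M} + \mathbf{G}\big) + \Delta t\,\mathbf{B}\Big)\,\mathbf{x}_{k+1} = \mathbf{M}\,\mathbf{x}_k + \Delta t\,\mathbf{z}_k,
\]
where $\mathbf{M}$, $\mathbf{G}$ and $\mathbf{B}$ denote the finite element mass, stiffness and advection matrices and $\mathbf{z}_k$ is the discretized noise; chaining these sparse updates is what yields the Gaussian Markov random field (sparse precision) structure of the space-time approximation.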
In this work, we study discrete minimizers of the Ginzburg-Landau energy in finite element spaces. Special focus is given to the influence of the Ginzburg-Landau parameter $\kappa$. This parameter is of physical interest as large values can trigger the appearance of vortex lattices. Since the vortices have to be resolved on sufficiently fine computational meshes, it is important to translate the size of $\kappa$ into a mesh resolution condition, which can be done through error estimates that are explicit with respect to $\kappa$ and the spatial mesh width $h$. For that, we first work in an abstract framework for a general class of discrete spaces, where we present convergence results in a problem-adapted $\kappa$-weighted norm. Afterwards we apply our findings to Lagrangian finite elements and a particular generalized finite element construction. In numerical experiments we confirm that our derived $L^2$- and $H^1$-error estimates are indeed optimal in $\kappa$ and $h$.
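For concreteness, one commonly used dimensionless form of the energy (scaling conventions vary, and the abstract framework may cover further variants) is
\[
E_\kappa(u,\mathbf{A}) = \int_\Omega \Big|\Big(\tfrac{i}{\kappa}\nabla + \mathbf{A}\Big)u\Big|^2 + \tfrac{1}{2}\big(1-|u|^2\big)^2 + \big|\operatorname{curl}\mathbf{A} - \mathbf{H}\big|^2 \,\mathrm{d}x,
\]
where $u$ is the complex order parameter, $\mathbf{A}$ the magnetic vector potential and $\mathbf{H}$ the applied field; since vortex cores have width of order $\kappa^{-1}$, the mesh width $h$ must be coupled to $\kappa$, which is precisely what the $\kappa$-explicit error estimates quantify.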
In this letter, we investigate a novel quadrature spatial scattering modulation (QSSM) transmission technique for millimeter wave (mmWave) systems, in which the transmitter generates two orthogonal beams targeting candidate scatterers in the channel to carry the real and imaginary parts of the conventional signal, respectively. Meanwhile, the maximum likelihood (ML) detector is adopted at the receiver to recover the transmitted beams and signals. Based on the ML detector, we derive the closed-form average bit error probability (ABEP) expression of the QSSM scheme. Furthermore, we evaluate the asymptotic ABEP expression of the proposed scheme. Monte Carlo simulations verify the exactness and tightness of the derived results. It is shown that the ABEP performance of QSSM is better than that of traditional spatial scattering modulation.
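Schematically, with generic notation that is not necessarily the paper's, the ML detector performs a joint search over the two selected scatterer (beam) indices and the transmitted symbol,
\[
(\hat{k},\hat{l},\hat{s}) = \arg\min_{k,\,l,\,s}\ \big\| \mathbf{y} - \sqrt{\rho}\,\big( \mathbf{h}_k\,\Re\{s\} + j\,\mathbf{h}_l\,\Im\{s\} \big) \big\|^2,
\]
where $\mathbf{y}$ is the received vector, $\mathbf{h}_k$ and $\mathbf{h}_l$ the effective channels associated with the two scatterers and $\rho$ the transmit SNR; the ABEP is then typically bounded through a union bound over the pairwise error probabilities.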
Meshfree Lagrangian frameworks for free surface flow simulations do not conserve fluid volume. Meshfree particle methods like SPH are not mimetic, in the sense that discrete mass conservation does not imply discrete volume conservation. On the other hand, meshfree collocation methods typically do not use any notion of mass. As a result, they are neither mass conservative nor volume conservative at the discrete level. In this paper, we give an overview of various sources of conservation errors across different meshfree methods. The present work focuses on one specific issue: unreliable volume and mass definitions. We introduce the concept of representative masses and densities, which are essential for accurate post-processing especially in meshfree collocation methods. Using these, we introduce an artificial compression or expansion in the fluid to rectify errors in volume conservation. Numerical experiments show that the introduced frameworks significantly improve volume conservation behaviour, even for complex industrial test cases such as automotive water crossing.
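A minimal sketch of the correction idea, using illustrative names rather than any particular solver's API: each particle keeps a conserved representative mass, its representative density comes from the flow solver, and a small artificial compression or expansion term drives the total discrete volume back toward the volume implied by the conserved mass.

```python
import numpy as np

def volume_correction(m, rho, rho0, dt, relax=0.5):
    """Illustrative volume-correction step for a meshfree particle cloud.

    m     : representative particle masses (conserved)
    rho   : current representative densities from the flow solver
    rho0  : reference density of the (incompressible) fluid
    relax : fraction of the volume error removed per time step
    """
    V_discrete = np.sum(m / rho)    # volume implied by current densities
    V_target = np.sum(m) / rho0     # volume implied by the conserved mass
    err = (V_discrete - V_target) / V_target

    # artificial compression (err > 0) or expansion (err < 0): a uniform
    # velocity-divergence source that shrinks or grows particle volumes
    div_u_corr = -relax * err / dt
    rho_new = rho * (1.0 - dt * div_u_corr)   # discrete d(rho)/dt = -rho div(u)
    return rho_new, div_u_corr
```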
Electronic health records (EHRs) store an extensive array of patient information, encompassing medical histories, diagnoses, treatments, and test outcomes. These records are crucial for enabling healthcare providers to make well-informed decisions regarding patient care. Summarizing clinical notes further assists healthcare professionals in pinpointing potential health risks and making better-informed decisions. This process contributes to reducing errors and enhancing patient outcomes by ensuring providers have access to the most pertinent and current patient data. Recent research has shown that incorporating prompts with large language models (LLMs) substantially boosts the efficacy of summarization tasks. However, we show that this approach also leads to increased output variance, resulting in notably divergent outputs even when prompts share similar meanings. To tackle this challenge, we introduce a model-agnostic Soft Prompt-Based Calibration (SPeC) pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization. Experimental findings on multiple clinical note tasks and LLMs indicate that our method not only bolsters performance but also effectively curbs variance for various LLMs, providing a more uniform and dependable solution for summarizing vital medical information.
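A minimal, model-agnostic sketch of the soft-prompt idea (module and variable names here are assumptions for illustration, not the SPeC implementation): a small set of learned continuous vectors is prepended to the token embeddings before they enter the frozen LLM.

```python
import torch
import torch.nn as nn

class SoftPromptPrefix(nn.Module):
    """Learned continuous prompt vectors prepended to token embeddings."""

    def __init__(self, n_prompt_tokens: int, d_model: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, d_model) from the frozen LLM's embedding layer
        batch = token_embeds.size(0)
        prefix = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, token_embeds], dim=1)

# sketch of use with a Hugging Face style model (only the prefix is trained):
#   embeds = llm.get_input_embeddings()(input_ids)
#   outputs = llm(inputs_embeds=soft_prefix(embeds), ...)
```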
We present a study of the standard plasma physics test, Landau damping, using the Particle-In-Cell (PIC) algorithm. The Landau damping phenomenon consists of the damping of small oscillations in collisionless plasmas. In the PIC method, a hybrid discretization is constructed with a grid of finitely supported basis functions to represent the electric, magnetic and/or gravitational fields, and a distribution of delta functions to represent the particle field. Approximations to the dispersion relation are found to be inadequate for accurately calculating the electric field frequency and damping rate when parameters of the physical system, such as the plasma frequency or thermal velocity, are varied. We present a full derivation and numerical solution of the dispersion relation, and verify the PETSc-PIC numerical solutions of the Vlasov-Poisson system for a large range of wave numbers and charge densities.
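A minimal numerical route to the full kinetic dispersion relation for a Maxwellian electron plasma, assuming the standard form $1 + \frac{1}{k^2\lambda_D^2}\big(1 + \zeta Z(\zeta)\big) = 0$ with $\zeta = \omega/(\sqrt{2}\,k\,v_{th})$, is to evaluate the plasma dispersion function $Z$ through the Faddeeva function and root-find in the complex plane (parameter normalizations below are illustrative):

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import wofz

def Z(zeta):
    """Plasma dispersion function via the Faddeeva function w(z)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def dielectric(omega, k, vth=1.0, wp=1.0):
    """Longitudinal dielectric function of a Maxwellian electron plasma."""
    lam_D = vth / wp
    zeta = omega / (np.sqrt(2.0) * k * vth)
    return 1.0 + (1.0 + zeta * Z(zeta)) / (k * lam_D) ** 2

def solve_dispersion(k, guess=1.4 - 0.1j):
    """Complex root omega(k) of the dielectric function."""
    func = lambda x: [dielectric(x[0] + 1j * x[1], k).real,
                      dielectric(x[0] + 1j * x[1], k).imag]
    re, im = fsolve(func, [guess.real, guess.imag])
    return re + 1j * im

# classic benchmark: k * lambda_D = 0.5 gives omega ~ 1.4156 - 0.1533i
print(solve_dispersion(0.5))
```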
Problem definition: Behavioral health interventions, delivered through digital platforms, have the potential to significantly improve health outcomes, through education, motivation, reminders, and outreach. We study the problem of optimizing personalized interventions for patients to maximize some long-term outcome, in a setting where interventions are costly and capacity-constrained. Methodology/results: This paper provides a model-free approach to solving this problem. We find that generic model-free approaches from the reinforcement learning literature are too data intensive for healthcare applications, while simpler bandit approaches make progress at the expense of ignoring long-term patient dynamics. We present a new algorithm, which we dub DecompPI, that approximates one step of policy iteration. Implementing DecompPI simply consists of a prediction task from offline data, alleviating the need for online experimentation. Theoretically, we show that under a natural set of structural assumptions on patient dynamics, DecompPI surprisingly recovers at least 1/2 of the improvement possible between a naive baseline policy and the optimal policy. At the same time, DecompPI is both robust to estimation errors and interpretable. Through an empirical case study on a mobile health platform for improving treatment adherence for tuberculosis, we find that DecompPI can provide the same efficacy as the status quo with approximately half the capacity of interventions. Managerial implications: DecompPI is general and is easily implementable for organizations aiming to improve long-term behavior through targeted interventions. Our case study suggests that the platform's costs of deploying interventions can potentially be cut by 50%, which would make it easier to scale up the system in a cost-efficient fashion.
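The flavor of the one-step improvement, as a hedged sketch rather than the actual DecompPI algorithm (the names and the greedy allocation rule here are illustrative simplifications): fit a prediction model on offline data for the gain in the long-term outcome from intervening on a patient in their current state, then spend the limited intervention capacity on the largest predicted gains.

```python
import numpy as np

def one_step_improvement(predicted_gain, capacity):
    """Illustrative capacity-constrained one-step policy improvement.

    predicted_gain : offline-estimated gain in the long-term outcome from
                     intervening on each patient in their current state
                     (the prediction task fitted from historical data)
    capacity       : maximum number of interventions available this period
    Returns a boolean vector of patients to target.
    """
    treat = np.zeros(len(predicted_gain), dtype=bool)
    order = np.argsort(predicted_gain)[::-1]             # largest gains first
    chosen = order[:capacity]
    treat[chosen[predicted_gain[chosen] > 0]] = True     # skip non-positive gains
    return treat
```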
Speeding has been acknowledged as a critical determinant in increasing the risk of crashes and their resulting injury severities. This paper demonstrates that severe speeding-related crashes within the state of Pennsylvania have a spatial clustering trend, and four crash datasets are extracted from four hotspot districts. Two log-likelihood ratio (LR) tests were conducted to determine whether speeding-related crashes classified by hotspot districts should be modeled separately. The results suggest that separate modeling is necessary. To capture the unobserved heterogeneity, four correlated random parameter ordered models with heterogeneity in means are employed to explore the factors contributing to the severity of crashes involving at least one speeding vehicle. Overall, the findings show that several indicators exhibit spatial instability, including hit-pedestrian crashes, head-on crashes, speed limits, work zones, dark light conditions, rural areas, older drivers, running stop signs, and running red lights. Moreover, drunk driving, exceeding the speed limit, and being unbelted show relative spatial stability across the four district models. This paper provides insights into preventing speeding-related crashes and can potentially facilitate the development of corresponding crash injury mitigation policies.
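The transferability check uses the usual likelihood ratio form (written here in its generic version):
\[
LR = -2\Big[ LL(\beta_{\mathrm{total}}) - \sum_{i=1}^{4} LL(\beta_i) \Big],
\]
where $LL(\beta_{\mathrm{total}})$ is the log-likelihood of a model estimated on the pooled data and $LL(\beta_i)$ that of the model estimated on district $i$'s data; the statistic is $\chi^2$-distributed with degrees of freedom equal to the difference in the number of estimated parameters, and a significant value indicates that the districts should be modeled separately.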
Analyzing observational data from multiple sources can be useful for increasing statistical power to detect a treatment effect; however, practical constraints such as privacy considerations may restrict individual-level information sharing across data sets. This paper develops federated methods that only utilize summary-level information from heterogeneous data sets. Our federated methods provide doubly-robust point estimates of treatment effects as well as variance estimates. We derive the asymptotic distributions of our federated estimators, which are shown to be asymptotically equivalent to the corresponding estimators from the combined, individual-level data. We show that to achieve these properties, federated methods should be adjusted based on conditions such as whether models are correctly specified and stable across heterogeneous data sets.
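On each individual data set, a standard doubly robust (AIPW) estimator of the average treatment effect, which remains consistent if either the outcome models $\hat{\mu}_a$ or the propensity model $\hat{e}$ is correctly specified, has the form
\[
\hat{\tau} = \frac{1}{n}\sum_{i=1}^{n}\Big[ \hat{\mu}_1(X_i) - \hat{\mu}_0(X_i) + \frac{A_i\,(Y_i - \hat{\mu}_1(X_i))}{\hat{e}(X_i)} - \frac{(1-A_i)\,(Y_i - \hat{\mu}_0(X_i))}{1-\hat{e}(X_i)} \Big];
\]
the federated procedure then aggregates such site-level point and variance summaries rather than individual-level records.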
This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators so that the error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and the proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under simulated datasets that exhibit varying levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.