Reduced-order models have been widely adopted in fluid mechanics, particularly in the context of Newtonian fluid flows. These models offer the ability to predict complex dynamics, such as instabilities and oscillations, at a considerably reduced computational cost. In contrast, the reduced-order modeling of non-Newtonian viscoelastic fluid flows remains relatively unexplored. This work leverages the sparse identification of nonlinear dynamics (SINDy) algorithm to develop interpretable reduced-order models for viscoelastic flows. In particular, we explore a benchmark oscillatory viscoelastic flow in the four-roll mill geometry using the classical Oldroyd-B fluid. This flow exemplifies many canonical challenges associated with non-Newtonian flows, including transitions, asymmetries, instabilities, and bifurcations arising from the interplay of viscous and elastic forces, all of which require expensive computations to resolve the fast timescales and long transients characteristic of such flows. First, we demonstrate the effectiveness of our data-driven surrogate model in predicting the transient evolution and accurately reconstructing the spatial flow field for fixed flow parameters. We then develop a fully parametric, nonlinear model capable of capturing the dynamic variations as a function of the Weissenberg number (Wi). While the training data are predominantly concentrated on a limit-cycle regime at moderate Wi, we show that the parameterized model can be used to extrapolate, accurately predicting the dominant dynamics at high Wi. The proposed methodology represents an initial step in the field of reduced-order modeling for viscoelastic flows, with the potential to be further refined and enhanced for the design, optimization, and control of a wide range of non-Newtonian fluid flows using machine learning and reduced-order modeling techniques.
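As a rough illustration of the identification step described above, the sketch below uses the open-source PySINDy package to recover a sparse, interpretable ODE from synthetic oscillatory data standing in for modal coefficients of the flow; the data, library choice, and hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
import pysindy as ps

# Toy stand-in for modal coefficients of the flow (the paper would use time
# coefficients extracted from the viscoelastic simulations; here we synthesize
# a clean limit-cycle-like signal so the script is self-contained).
t = np.linspace(0, 10, 1000)
x = np.stack([np.cos(2 * t), np.sin(2 * t)], axis=-1)

model = ps.SINDy(
    feature_library=ps.PolynomialLibrary(degree=3),
    optimizer=ps.STLSQ(threshold=0.1),  # sparsity-promoting sequential thresholding
)
model.fit(x, t=t)
model.print()  # prints sparse ODEs, e.g. (x0)' = -2.000 x1, (x1)' = 2.000 x0
```

A parametric model of the kind described above could plausibly be built along the same lines by supplying the Weissenberg number as an additional input during fitting, though the paper's exact parameterization is not reproduced here.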
While the Product of Exponentials (POE) formula has been gaining popularity in modeling the kinematics of serial-link robots, the Denavit-Hartenberg (D-H) notation is still the most widely used, owing to its intuitive and concise geometric interpretation of the robot. This paper develops an analytical solution to automatically convert a POE model into a D-H model for a robot with revolute, prismatic, and helical joints, which form the complete set of three basic one-degree-of-freedom lower-pair joints for constructing a serial-link robot. The conversion algorithm can be used in applications such as calibration, where it is necessary to convert the D-H model to the POE model for identification and then back to the D-H model for compensation. The equivalence of the two models proved in this paper also benefits the analysis of the identifiability of the kinematic parameters. It is found that the maximum number of identifiable parameters in a general POE model is $5h + 4r + 2t + n + 6$, where $h$, $r$, $t$, and $n$ stand for the numbers of helical, revolute, prismatic, and general joints, respectively. It is also suggested that the identifiability of the base frame and the tool frame in the D-H model is restricted, rather than covering the arbitrary six parameters assumed previously.
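For orientation, the POE side of the conversion evaluates a product of matrix exponentials of joint twists. The following minimal sketch shows this forward map with SciPy for a hypothetical planar two-revolute-joint arm; it illustrates the POE formula itself, not the paper's conversion algorithm.

```python
import numpy as np
from scipy.linalg import expm

def twist_hat(S):
    """4x4 matrix form of a twist S = (w, v), with w the angular and v the linear part."""
    w, v = S[:3], S[3:]
    W = np.array([[0, -w[2], w[1]],
                  [w[2], 0, -w[0]],
                  [-w[1], w[0], 0]])
    T = np.zeros((4, 4))
    T[:3, :3] = W
    T[:3, 3] = v
    return T

def poe_forward(screws, thetas, M):
    """POE forward kinematics: T = exp([S1] q1) ... exp([Sn] qn) M."""
    T = np.eye(4)
    for S, q in zip(screws, thetas):
        T = T @ expm(twist_hat(S) * q)
    return T @ M

# Hypothetical planar 2R arm with unit link lengths (home pose M at x = 2).
M = np.eye(4); M[0, 3] = 2.0
screws = [np.array([0, 0, 1, 0, 0, 0]),    # revolute axis through the origin
          np.array([0, 0, 1, 0, -1, 0])]   # revolute axis at x = 1: v = -w x p
print(poe_forward(screws, [np.pi / 2, 0.0], M))  # end-effector at (0, 2)
```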
We present a novel combination of dynamic embedded topic models and change-point detection to explore diachronic change of lexical semantic modality in classical and early Christian Latin. We demonstrate several methods for finding and characterizing patterns in the output and for relating them to traditional scholarship in Comparative Literature and Classics. This simple approach to unsupervised modeling of semantic change can be applied to any suitable corpus, and we conclude with future directions and refinements aimed at allowing noisier, less-curated materials to meet that threshold.
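As a hedged sketch of the change-point component, the snippet below applies an off-the-shelf detector from the ruptures package to a synthetic one-dimensional trajectory standing in for a topic's evolution over time; the signal, the detector, and the penalty value are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
import ruptures as rpt

# Stand-in for a diachronic topic trajectory with one artificial regime shift
# (in the paper, such a series would come from the dynamic embedded topic model).
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0.0, 0.1, 60), rng.normal(0.5, 0.1, 40)])

algo = rpt.Pelt(model="rbf").fit(signal.reshape(-1, 1))
breakpoints = algo.predict(pen=5)  # indices where the statistical regime changes
print(breakpoints)                 # e.g. [60, 100]; the final index marks the series end
```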
A generalization of Passing-Bablok regression is proposed for comparing multiple measurement methods simultaneously. Possible applications include assay migration studies and interlaboratory trials. The estimator is close in spirit to reduced major axis regression, which is, however, not robust. To obtain a robust estimator, the major axis is replaced by the (hyper-)spherical median axis. When only two methods are compared, the method is shown to reduce to the usual Passing-Bablok estimator. This technique has been applied to compare SARS-CoV-2 serological tests, bilirubin measurements in neonates, and an in vitro diagnostic test using different instruments, sample preparations, and reagent lots. In addition, plots similar to the well-known Bland-Altman plots have been developed to represent the variance structure.
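To make the two-method special case concrete, here is a simplified sketch of the classical Passing-Bablok slope as a (shifted) median of pairwise slopes; it omits the tie- and offset-corrections of the full procedure, and the data are synthetic.

```python
import numpy as np

def passing_bablok_slope(x, y):
    """Simplified two-method Passing-Bablok slope: median of all pairwise slopes,
    excluding slopes of exactly -1. The full procedure additionally shifts the
    median to correct for slopes below -1; this sketch only conveys the idea."""
    i, j = np.triu_indices(len(x), k=1)
    dx, dy = x[j] - x[i], y[j] - y[i]
    slopes = dy[dx != 0] / dx[dx != 0]
    return np.median(slopes[slopes != -1])

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 50)                    # measurements by method 1
y = 1.05 * x + 0.2 + rng.normal(0, 0.1, 50)   # measurements by method 2
b = passing_bablok_slope(x, y)
a = np.median(y - b * x)                      # robust intercept: median residual
print(b, a)                                   # approximately (1.05, 0.2)
```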
Remotely sensed data are dominated by mixed Land Use and Land Cover (LULC) types. Spectral unmixing (SU) is a key technique that disentangles mixed pixels into constituent LULC types and their abundance fractions. While existing studies on Deep Learning (DL) for SU typically focus on single-time-step hyperspectral (HS) or multispectral (MS) data, our work pioneers SU using MODIS MS time series, addressing missing data with end-to-end DL models. Our approach enhances a Long Short-Term Memory (LSTM)-based model by incorporating geographic, topographic (geo-topographic), and climatic ancillary information. Notably, our method eliminates the need for explicit endmember extraction, instead learning the input-output relationship between mixed spectra and LULC abundances through supervised learning. Experimental results demonstrate that integrating spectral-temporal input data with geo-topographic and climatic information significantly improves the estimation of LULC abundances in mixed pixels. To facilitate this study, we curated a novel labeled dataset for Andalusia (Spain) with monthly MODIS multispectral time series at 460 m resolution for 2013. Named Andalusia MultiSpectral MultiTemporal Unmixing (Andalusia-MSMTU), this dataset provides pixel-level annotations of LULC abundances along with ancillary information. The dataset (//zenodo.org/records/7752348) and code (//github.com/jrodriguezortega/MSMTU) are available to the public.
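The following PyTorch sketch illustrates the kind of architecture described above: an LSTM encodes the monthly multispectral sequence, ancillary geo-topographic and climatic covariates are concatenated to its final state, and a softmax head yields abundance fractions that are nonnegative and sum to one. All layer sizes and input dimensions are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LSTMUnmixer(nn.Module):
    """Sketch of an LSTM-based abundance regressor with ancillary inputs."""
    def __init__(self, n_bands=7, n_ancillary=8, n_classes=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_bands, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + n_ancillary, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, spectra, ancillary):
        # spectra: (batch, 12 months, n_bands); ancillary: (batch, n_ancillary)
        _, (h, _) = self.lstm(spectra)
        z = torch.cat([h[-1], ancillary], dim=1)
        return torch.softmax(self.head(z), dim=1)  # abundance fractions sum to 1

model = LSTMUnmixer()
out = model(torch.randn(4, 12, 7), torch.randn(4, 8))
print(out.shape, out.sum(dim=1))  # torch.Size([4, 10]), rows summing to 1
```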
We propose an energy-stable parametric finite element method (PFEM) for the planar Willmore flow and establish the unconditional energy stability of the fully discretized scheme. The key lies in the introduction of two novel geometric identities describing the planar Willmore flow: the first involves the coupling of the outward unit normal vector $\boldsymbol{n}$ and the normal velocity $V$, and the second concerns the time derivative of the mean curvature $\kappa$. Based on these identities, we derive a set of new geometric partial differential equations for the planar Willmore flow, leading to our new fully discretized and unconditionally energy-stable PFEM. Our stability analysis also rests on the two new geometric identities. Extensive numerical experiments are provided to illustrate the method's efficiency and validate its unconditional energy stability.
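For context, the energy dissipated by the planar Willmore flow is, up to normalization conventions, $W(\Gamma) = \frac{1}{2}\int_{\Gamma} \kappa^2 \,\mathrm{d}s$, and unconditional energy stability of the fully discretized scheme means that the discrete energy is non-increasing, $W(\Gamma^{m+1}) \le W(\Gamma^m)$ for $m = 0, 1, 2, \dots$, with no restriction on the time step size.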
The skew-normal model suffers from inferential drawbacks, namely a singular Fisher information matrix in the vicinity of symmetry and divergence of the maximum likelihood estimate. To address these drawbacks, Azzalini and Arellano-Valle (2013) introduced maximum penalised likelihood estimation (MPLE), which subtracts a penalty function from the log-likelihood function with a pre-specified penalty coefficient. Here, we propose a cross-validated MPLE to improve its performance when the underlying model is close to symmetry. We develop a theory for MPLE in which an asymptotic rate for the cross-validated penalty coefficient is derived. We further show that the proposed cross-validated MPLE is asymptotically efficient under certain conditions. In simulation studies and a real data application, we demonstrate that the proposed estimator can outperform the conventional MPLE when the model is close to symmetry.
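A minimal sketch of the cross-validation idea follows, assuming a simple quadratic stand-in penalty on the slant parameter rather than the exact Azzalini-Arellano-Valle penalty: the penalty coefficient is selected by held-out log-likelihood over a small grid.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import skewnorm
from sklearn.model_selection import KFold

def penalised_nll(params, x, lam):
    """Negative penalised log-likelihood for the skew-normal; `a` is the slant.
    The quadratic penalty lam * a**2 is an illustrative stand-in penalty."""
    loc, log_scale, a = params
    ll = skewnorm.logpdf(x, a, loc=loc, scale=np.exp(log_scale)).sum()
    return -(ll - lam * a**2)

def fit_mple(x, lam):
    res = minimize(penalised_nll, x0=[x.mean(), np.log(x.std()), 0.0],
                   args=(x, lam), method="Nelder-Mead")
    return res.x

def cv_lambda(x, grid=(0.01, 0.1, 1.0, 10.0), n_splits=5):
    """Pick the penalty coefficient by K-fold cross-validated held-out log-likelihood."""
    scores = []
    for lam in grid:
        s = 0.0
        for tr, te in KFold(n_splits, shuffle=True, random_state=0).split(x):
            loc, log_scale, a = fit_mple(x[tr], lam)
            s += skewnorm.logpdf(x[te], a, loc=loc, scale=np.exp(log_scale)).sum()
        scores.append(s)
    return grid[int(np.argmax(scores))]

x = skewnorm.rvs(0.5, size=200, random_state=0)  # nearly symmetric data
lam = cv_lambda(x)
print(lam, fit_mple(x, lam))
```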
We propose a framework in which the Fer and Wilcox expansions for the solution of differential equations are derived from two particular choices of the initial transformation that seeds the product expansion. In this scheme, intermediate expansions can also be envisaged. Recurrence formulas are developed. A new lower bound for the convergence of the Wilcox expansion is provided, as well as some applications of the results. In particular, two examples are worked out to high order of approximation to illustrate the behavior of the Wilcox expansion.
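For orientation: for the linear system $Y'(t) = A(t)\,Y(t)$ with $Y(0) = I$, the Fer expansion writes the solution as an infinite product of exponentials $Y(t) = \mathrm{e}^{F_1(t)}\,\mathrm{e}^{F_2(t)}\cdots$ with leading factor $F_1(t) = \int_0^t A(s)\,\mathrm{d}s$, while the Wilcox expansion arranges the exponents of the product by increasing powers of a perturbation parameter.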
The classification of different grapevine varieties is a relevant phenotyping task in Precision Viticulture, since it enables estimating the growth of vineyard rows dedicated to different varieties, among other applications concerning the wine industry. This task can be performed with destructive methods that require time-consuming data collection and analysis in the laboratory. Unmanned Aerial Vehicles (UAV), however, provide a more efficient and less costly approach to collecting hyperspectral data, despite acquiring noisier measurements. Therefore, the first task is the processing of these data to correct and downsample large amounts of raw measurements. In addition, the hyperspectral signatures of grape varieties are very similar. In this work, a Convolutional Neural Network (CNN) is proposed for classifying seventeen red and white grapevine varieties. Rather than classifying single samples, each sample is processed together with its neighbourhood. Hence, the extraction of spatial and spectral features is addressed with 1) a spatial attention layer and 2) Inception blocks. The pipeline goes from data processing to dataset elaboration, finishing with the training phase. The fitted model is evaluated in terms of response time, accuracy, and data separability, and compared with other state-of-the-art CNNs for classifying hyperspectral data. Our network proved to be much more lightweight, with a reduced number of input bands, fewer trainable weights, and therefore shorter training time. Despite this, the evaluated metrics showed much better results for our network (~99% overall accuracy, OA), compared with previous works barely achieving 81% OA.
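As a hedged illustration of ingredient 1), the PyTorch sketch below implements a generic spatial attention layer of the pooling-and-convolution type; this is one common construction, not necessarily the paper's exact layer, and the patch and band sizes are invented.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Channel-wise average and max pooling are stacked and convolved to produce
    a per-pixel weight map that re-weights the neighbourhood around each sample."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (batch, bands, H, W)
        avg = x.mean(dim=1, keepdim=True)       # (batch, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                         # emphasise informative pixels

x = torch.randn(8, 32, 5, 5)        # 8 patches, 32 selected bands, 5x5 neighbourhood
print(SpatialAttention()(x).shape)  # torch.Size([8, 32, 5, 5])
```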
Although Regge finite element functions are not continuous, useful generalizations of nonlinear derivatives, such as the curvature, can be defined using them. This paper is devoted to studying the convergence of the finite element lifting of a generalized (distributional) Gauss curvature defined using a metric tensor in the Regge finite element space. Specifically, we investigate the interplay between the polynomial degree of the curvature lifting by Lagrange elements and the degree of the metric tensor in the Regge finite element space. Previously, a superconvergence result, with a convergence rate one order higher than expected, was obtained when the metric is the canonical Regge interpolant of the exact metric. In this work, we show that an even higher order can be obtained if the degree of the curvature lifting is reduced by one polynomial degree and at least linear Regge elements are used. These improved convergence rates are confirmed by numerical examples.
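For orientation, the generalized (distributional) Gauss curvature of a piecewise smooth Regge metric acts on a test function $v$ by combining elementwise curvature, jumps of the geodesic curvature across element edges, and angle defects at vertices, schematically $\langle K_h, v\rangle = \sum_T \int_T K\, v\,\mathrm{d}a + \sum_E \int_E [\![\kappa_g]\!]\, v\,\mathrm{d}\ell + \sum_V \Theta_V\, v(V)$; sign and normalization conventions vary across the literature.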
Recently, addressing spatial confounding has become a major topic in spatial statistics. However, the literature has provided conflicting definitions, and many proposed definitions do not address the issue of confounding as it is understood in causal inference. We define spatial confounding as the existence of an unmeasured causal confounder with a spatial structure. We present a causal inference framework for nonparametric identification of the causal effect of a continuous exposure on an outcome in the presence of spatial confounding. We propose double machine learning (DML), a procedure in which flexible models are used to regress both the exposure and outcome variables on confounders to arrive at a causal estimator with favorable robustness properties and convergence rates, and we prove that this approach is consistent and asymptotically normal under spatial dependence. As far as we are aware, this is the first approach to spatial confounding that does not rely on restrictive parametric assumptions (such as linearity, effect homogeneity, or Gaussianity) for both identification and estimation. We demonstrate the advantages of the DML approach analytically and in simulations. We apply our methods and reasoning to a study of the effect of fine particulate matter exposure during pregnancy on birthweight in California.
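A minimal sketch of the cross-fitted DML estimator for a partially linear model follows, assuming generic random-forest nuisance learners and synthetic data; a spatially structured application would swap in spatially aware learners and sample splits.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_effect(X, a, y, n_splits=5):
    """Cross-fitted DML: regress exposure a and outcome y on confounders X with
    flexible learners, then regress outcome residuals on exposure residuals.
    Here X would hold spatial coordinates/covariates; learners are illustrative."""
    ra, ry = np.zeros_like(a), np.zeros_like(y)
    for tr, te in KFold(n_splits, shuffle=True, random_state=0).split(X):
        ra[te] = a[te] - RandomForestRegressor(random_state=0).fit(X[tr], a[tr]).predict(X[te])
        ry[te] = y[te] - RandomForestRegressor(random_state=0).fit(X[tr], y[tr]).predict(X[te])
    return np.sum(ra * ry) / np.sum(ra ** 2)   # residual-on-residual slope

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                 # e.g. spatial coordinates
a = np.sin(X[:, 0]) + rng.normal(0, 0.5, 1000) # exposure with spatial structure
y = 2.0 * a + np.cos(X[:, 1]) + rng.normal(0, 0.5, 1000)
print(dml_effect(X, a, y))                     # estimate close to the true effect 2.0
```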