This research introduces a novel hydrofoil-based propulsion framework for unmanned aquatic robots, inspired by the undulating locomotion observed in select aquatic species. The proposed system incorporates a camber-modulating mechanism to enhance hydrofoil propulsive force generation and, ultimately, propulsive efficiency. Through dynamic simulations, we validate the effectiveness of the camber-adjusting hydrofoil compared to a symmetric counterpart. The results demonstrate a significant improvement in horizontal thrust, emphasizing the potential of the cambering approach to enhance propulsive performance. Additionally, a prototype flipper design is presented, featuring individual control of heave and pitch motions, as well as a camber-adjustment mechanism. The integrated system not only provides efficient water-based propulsion but also offers the capacity to generate vertical forces during take-off maneuvers for seaplanes. The design is tailored to harness wave energy, contributing to the exploration of alternative energy resources. This work advances the understanding of bionic oscillatory principles for aquatic robots and provides a foundation for future developments in environmentally safe and agile underwater exploration.
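To make the three actuated degrees of freedom concrete, the sketch below prescribes heave, pitch, and camber as phase-shifted periodic signals over one flapping cycle; the amplitudes, frequency, and phase offsets are illustrative placeholders rather than values from the reported design.

```python
# Illustrative kinematics only (all parameter values are hypothetical):
# the three independently controlled degrees of freedom described above --
# heave, pitch, and camber -- prescribed over one flapping cycle.
import numpy as np

f = 1.0                          # flapping frequency [Hz] (placeholder)
t = np.linspace(0, 1 / f, 200)   # one cycle
omega = 2 * np.pi * f

heave = 0.10 * np.sin(omega * t)                       # heave amplitude 0.10 m
pitch = np.deg2rad(15) * np.sin(omega * t + np.pi / 2) # pitch leads heave by 90 deg
camber = 0.04 * np.sign(np.sin(omega * t))             # camber flips sign each half-stroke

print(heave[:3], pitch[:3], camber[:3])
```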
We propose a generalized free energy potential for active systems, including both stochastic master equations and deterministic nonlinear chemical reaction networks. Our generalized free energy is defined variationally as the "most irreversible" state observable. This variational principle is motivated by several perspectives, including large deviations theory, thermodynamic uncertainty relations, Onsager theory, and information-theoretic optimal transport. In passive systems, the most irreversible observable is the usual free energy potential and its irreversibility is the entropy production rate (EPR). In active systems, the most irreversible observable is the generalized free energy and its irreversibility gives the excess EPR, the nonstationary contribution to dissipation. The remaining "housekeeping" EPR is a genuine nonequilibrium contribution that quantifies the nonconservative nature of the forces. We derive far-from-equilibrium thermodynamic speed limits for excess EPR, applicable to both linear and nonlinear systems. Our approach overcomes several limitations of the steady-state potential and the Hatano-Sasa (adiabatic/nonadiabatic) decomposition, as we demonstrate in several examples.
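For orientation, the decomposition described here can be summarized as follows, under the illustrative assumption (in the spirit of the adiabatic/nonadiabatic splitting that the abstract contrasts with) that the excess term is the decay rate of the generalized free energy $\mathcal{F}(p_t)$ along the dynamics:
\[
\dot{\Sigma}(t)\;=\;\underbrace{\dot{\Sigma}_{\mathrm{ex}}(t)}_{\text{excess (nonstationary)}}\;+\;\underbrace{\dot{\Sigma}_{\mathrm{hk}}(t)}_{\text{housekeeping (nonconservative)}},
\qquad
\dot{\Sigma}_{\mathrm{ex}}(t)\;=\;-\frac{\mathrm{d}}{\mathrm{d}t}\,\mathcal{F}(p_t)\;\ge\;0,
\]
so that $\dot{\Sigma}_{\mathrm{ex}}$ vanishes once the state stops relaxing, while $\dot{\Sigma}_{\mathrm{hk}}\ge 0$ persists whenever the driving forces are nonconservative.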
We consider anisotropic heat flow with extreme anisotropy, as arises in magnetized plasmas for fusion applications. Such problems pose significant challenges both in obtaining an accurate approximation and in constructing an efficient solver. In both cases, the underlying difficulty is in forming an accurate approximation of temperature fields that follow the direction of complex, non-grid-aligned magnetic fields. In this work, we construct a highly accurate coarse-grid approximation using spectral multiscale basis functions based on local anisotropic normalized Laplacians. We show that the local generalized spectral problems yield local modes that align with the magnetic fields and provide an excellent coarse-grid approximation of the problem. We then utilize this spectral coarse space as an approximation in itself and as the coarse grid in a two-level spectral preconditioner. Numerical results are presented for several magnetic field distributions and anisotropy ratios up to $10^{12}$, showing highly accurate results with a large reduction in system size, and two-grid preconditioning that converges in $O(1)$ iterations, independent of anisotropy.
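As a rough illustration of the kind of local generalized eigenproblem such spectral coarse spaces are built from, the sketch below assembles a small anisotropic diffusion operator on a single patch and keeps its lowest generalized eigenmodes as coarse basis functions. The discretization, the mass-matrix normalization, and all parameter values are simplifications standing in for the anisotropic normalized Laplacians used in the paper.

```python
# Rough sketch (not the authors' code): build an anisotropic diffusion operator
# on one local patch and keep the lowest generalized eigenmodes as coarse basis
# functions. A grid-aligned field is chosen so a 5-point stencil suffices;
# the paper targets non-grid-aligned fields.
import numpy as np
from scipy.linalg import eigh

n = 20                       # interior nodes per direction on the patch
h = 1.0 / (n + 1)
eps = 1e-6                   # inverse anisotropy ratio (perpendicular/parallel)
kxx, kyy = 1.0 + eps, eps    # K = b b^T + eps*I with b aligned to the x-axis

# Assemble the stiffness matrix for -div(K grad u) with Dirichlet boundaries.
N = n * n
A = np.zeros((N, N))
idx = lambda i, j: i * n + j
for i in range(n):
    for j in range(n):
        k = idx(i, j)
        A[k, k] = 2.0 * (kxx + kyy) / h**2
        if i > 0:     A[k, idx(i - 1, j)] = -kxx / h**2
        if i < n - 1: A[k, idx(i + 1, j)] = -kxx / h**2
        if j > 0:     A[k, idx(i, j - 1)] = -kyy / h**2
        if j < n - 1: A[k, idx(i, j + 1)] = -kyy / h**2

# Generalized spectral problem A v = lambda M v. A lumped mass matrix stands in
# for the anisotropic normalized-Laplacian weighting used in the paper.
M = h**2 * np.eye(N)
vals, vecs = eigh(A, M)
coarse_basis = vecs[:, :8]   # lowest modes span the local coarse space
print("smallest eigenvalues:", vals[:4])
```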
Extreme events over large spatial domains may exhibit highly heterogeneous tail dependence characteristics, yet most existing spatial extremes models yield only one dependence class over the entire spatial domain. To accurately characterize ``data-level dependence'' in the analysis of extreme events, we propose a mixture model that achieves flexible dependence properties and allows high-dimensional inference for extremes of spatial processes. We modify the popular random scale construction that multiplies a Gaussian random field by a single radial variable; we allow the radial variable to vary smoothly across space and add non-stationarity to the Gaussian process. As the level of extremeness increases, this single model exhibits both asymptotic independence at long ranges and either asymptotic dependence or independence at short ranges. We make joint inference on the dependence model and a marginal model using a copula approach within a Bayesian hierarchical model. Simulation studies under three different scenarios show close-to-nominal frequentist coverage rates. Lastly, we apply the model to a dataset of extreme summertime precipitation over the central United States. We find that the joint tail of precipitation exhibits a non-stationary dependence structure that cannot be captured by limiting extreme value models or current state-of-the-art sub-asymptotic models.
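For intuition, the snippet below simulates the basic random-scale construction that the proposed mixture model builds on: a Gaussian random field multiplied by a smoothly varying radial scale. The covariance functions, ranges, and the log-Gaussian choice for the radial field are purely illustrative, not the specification used in the paper.

```python
# Minimal illustration (not the authors' model): a Gaussian process multiplied
# by a spatially varying radial scale, i.e. the random-scale construction that
# the mixture model in the abstract generalizes.
import numpy as np

rng = np.random.default_rng(0)
s = np.linspace(0, 1, 200)                      # 1-D spatial grid
d = np.abs(s[:, None] - s[None, :])             # pairwise distances

cov_W = np.exp(-d / 0.1)                        # exponential covariance for W
L_W = np.linalg.cholesky(cov_W + 1e-10 * np.eye(len(s)))
W = L_W @ rng.standard_normal(len(s))           # Gaussian random field

# A smoothly varying radial variable: a log-Gaussian field with a longer range
# (purely illustrative choice of scale process).
cov_R = np.exp(-d / 0.5)
L_R = np.linalg.cholesky(cov_R + 1e-10 * np.eye(len(s)))
R = np.exp(0.5 * L_R @ rng.standard_normal(len(s)))

X = R * W                                       # random-scale process
print(X[:5])
```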
Nested integration problems arise in various scientific and engineering applications, including Bayesian experimental design, financial risk assessment, and uncertainty quantification. These nested integrals take the form $\int f\left(\int g(\boldsymbol{y},\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}\right)\mathrm{d}\boldsymbol{y}$, for nonlinear $f$, making them computationally challenging, particularly in high-dimensional settings. Although widely used for single integrals, traditional Monte Carlo (MC) methods can be inefficient when confronted with the complexities of nested integration. This work introduces a novel multilevel estimator, combining deterministic and randomized quasi-MC (rQMC) methods to handle nested integration problems efficiently. In this context, the number of inner samples and the discretization accuracy of the inner integrand evaluation constitute the level. We provide a comprehensive theoretical analysis of the estimator, deriving error bounds that demonstrate significant reductions in bias and variance compared with standard methods. The proposed estimator is particularly effective in scenarios where the integrand is evaluated approximately, as it adapts to different levels of resolution without compromising precision. We verify the performance of our method via numerical experiments, focusing on estimating the expected information gain of experiments. We further introduce a truncation scheme to address the possible unboundedness of the experimental noise. When applied to Gaussian noise, this truncation scheme yields the same computational complexity as in the bounded-noise case, up to multiplicative logarithmic terms. The results reveal that the proposed multilevel rQMC estimator outperforms existing MC and rQMC approaches, offering a substantial reduction in computational cost and providing a powerful tool for practitioners dealing with complex, nested integration problems across various domains.
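To fix ideas, here is the structure of a plain single-level nested Monte Carlo estimator for $E_{Y}\left[f\left(E_{X}[g(Y,X)]\right)\right]$, with toy choices of $f$, $g$, and the sampling distributions; the multilevel rQMC estimator proposed in the paper accelerates exactly this computation by varying the number of inner samples and the inner discretization across levels.

```python
# Sketch of a plain nested Monte Carlo estimator for E_Y[ f( E_X[ g(Y, X) ] ) ]
# with toy f and g (not the paper's multilevel rQMC estimator, just the nested
# structure it accelerates).
import numpy as np

rng = np.random.default_rng(1)
f = np.log                                   # outer nonlinearity (toy choice)
g = lambda y, x: np.exp(-(y - x) ** 2)       # inner integrand (toy choice)

N, M = 1_000, 200                            # outer / inner sample sizes
y = rng.standard_normal(N)                   # outer samples Y ~ N(0, 1)
x = rng.standard_normal((N, M))              # inner samples X ~ N(0, 1)

inner = g(y[:, None], x).mean(axis=1)        # inner MC estimate per outer sample
estimate = f(inner).mean()                   # outer average of f(inner estimate)
print("nested MC estimate:", estimate)
```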
This work presents a hybrid quantum-classical algorithm to perform clustering aggregation, designed for neutral-atom quantum computers and quantum annealers. Clustering aggregation is a technique that mitigates the weaknesses of clustering algorithms, an important class of data science methods for partitioning datasets, and is widely employed in many real-world applications. By expressing clustering aggregation problem instances as a Maximum Independent Set (MIS) problem and as a Quadratic Unconstrained Binary Optimization (QUBO) problem, we solve them by leveraging Pasqal's Fresnel (a neutral-atom processor) and D-Wave's Advantage QPU (a quantum annealer). Additionally, the proposed clustering aggregation algorithm was first validated on a Fresnel emulator based on QuTiP and later on an emulator of the same machine based on tensor networks, provided by Pasqal. The results revealed technical limitations, such as the difficulty of adding additional constraints on the employed neutral-atom platform and the need for better metrics to measure the quality of the produced clusterings. However, this work represents a step towards a benchmark to compare two different machines: a quantum annealer and a neutral-atom quantum computer. Moreover, the findings suggest promising potential for future advancements in hybrid quantum-classical pipelines, although further improvements are needed in both quantum and classical components.
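As a small illustration of the MIS-to-QUBO step mentioned above, the snippet below encodes a maximum independent set on a toy conflict graph as a QUBO and solves it by brute force; the construction of the conflict graph from candidate clusterings, and the submission to the Fresnel or Advantage hardware, are not reproduced here.

```python
# Toy illustration of the MIS-to-QUBO step: encode a maximum independent set on
# a small conflict graph as min_x x^T Q x over binary x, with a reward on the
# diagonal and a penalty on every edge. (Hypothetical graph, not derived from
# actual clusterings.)
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # hypothetical conflict graph
n, penalty = 4, 2.0                                # penalty > reward enforces independence

Q = np.zeros((n, n))
np.fill_diagonal(Q, -1.0)                          # reward for selecting a vertex
for i, j in edges:
    Q[i, j] += penalty / 2                         # symmetric penalty for selecting
    Q[j, i] += penalty / 2                         # both endpoints of an edge

# Brute force the QUBO; a quantum annealer or neutral-atom device takes over
# at realistic problem sizes.
best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.asarray(x) @ Q @ np.asarray(x))
print("independent set:", [i for i, xi in enumerate(best) if xi])
```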
This paper introduces a novel decomposition framework to explain heterogeneity in causal effects observed across different studies, considering both observational and randomized settings. We present a formal decomposition of between-study heterogeneity, identifying sources of variability in treatment effects across studies. The proposed methodology allows for robust estimation of causal parameters under various assumptions, addressing differences in pre-treatment covariate distributions, mediating variables, and the outcome mechanism. Our approach is validated through a simulation study and applied to data from the Moving to Opportunity (MTO) study, demonstrating its practical relevance. This work contributes to the broader understanding of causal inference in multi-study environments, with potential applications in evidence synthesis and policy-making.
We study the identification of binary choice models with fixed effects. We provide a condition called sign saturation and show that this condition is sufficient for the identification of the model. In particular, we can guarantee identification even when all the regressors are bounded, including multiple discrete regressors. We also show that without this condition, the model is not identified unless the error distribution belongs to a special class. The same sign saturation condition is also essential for identifying the sign of treatment effects. We also provide a test for the sign saturation condition, which can be implemented using existing algorithms for the maximum score estimator.
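For concreteness, the sketch below shows the maximum score objective that such a test can be built on, using simulated data and a coarse grid search; it illustrates the estimator referenced above rather than the sign saturation test itself, and all data-generating choices are hypothetical.

```python
# Sketch of the maximum score objective (Manski-type) referenced above; the
# sign saturation test itself is not reproduced. Simulated data, toy design.
import numpy as np

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.integers(0, 3, n), rng.standard_normal(n)])
beta_true = np.array([0.5, -1.0, 1.0])
y = (X @ beta_true + rng.logistic(size=n) >= 0).astype(int)

def score(beta):
    """Agreement between sign(X beta) and the observed binary outcomes."""
    return np.mean((2 * y - 1) * np.sign(X @ beta))

# Coarse grid search over directions (beta is identified only up to scale,
# so the last coordinate is normalized to 1).
grid = np.linspace(-2, 2, 41)
best = max(((b0, b1, 1.0) for b0 in grid for b1 in grid),
           key=lambda b: score(np.array(b)))
print("maximum score direction:", best)
```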
This paper introduces a novel regression model designed for angular response variables with linear predictors, utilizing a generalized M\"{o}bius transformation to define the regression curve. By mapping the real axis to the circle, the model effectively captures the relationship between linear and angular components. A key innovation is the introduction of an area-based loss function, inspired by the geometry of a curved torus, for efficient parameter estimation. The semi-parametric nature of the model eliminates the need for specific distributional assumptions about the angular error, enhancing its versatility. Extensive simulation studies, incorporating von Mises and wrapped Cauchy distributions, highlight the robustness of the framework. The model's practical utility is demonstrated through real-world data analysis of Bitcoin and Ethereum, showcasing its ability to derive meaningful insights from complex data structures.
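To illustrate the type of link being described, the snippet below uses the Cayley transform, a standard Möbius map taking the real line to the unit circle, with hypothetical location, scale, and rotation parameters; the paper's generalized Möbius regression curve and its area-based estimation procedure are not reproduced.

```python
# Illustrative sketch: a Moebius-type map sending the real line to the unit
# circle (the Cayley transform, with hypothetical location/scale/rotation
# parameters), i.e. the kind of regression curve described above.
import numpy as np

def mobius_angle(x, mu=0.0, sigma=1.0, rotation=0.0):
    """Map a linear predictor x to an angle on the unit circle."""
    z = (x - mu) / sigma                      # center and scale the predictor
    w = (z - 1j) / (z + 1j)                   # Cayley transform: real line -> unit circle
    return np.angle(np.exp(1j * rotation) * w)

x = np.linspace(-5, 5, 11)
theta = mobius_angle(x, mu=1.0, sigma=2.0, rotation=0.3)
print(np.round(theta, 3))
```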
We develop graph-based tests for spherical symmetry of a multivariate distribution using a method based on data augmentation. These tests are constructed using a new notion of signs and ranks that are computed along a path obtained by optimizing an objective function based on pairwise dissimilarities among the observations in the augmented data set. The resulting tests based on these signs and ranks have the exact distribution-free property: irrespective of the dimension of the data, the null distributions of the test statistics remain the same. These tests can be conveniently used for high-dimensional data, even when the dimension is much larger than the sample size. Under appropriate regularity conditions, we prove the consistency of these tests in the high-dimensional asymptotic regime, where the dimension grows to infinity while the sample size may or may not grow with the dimension. We also propose a generalization of our methods to handle situations where the center of symmetry is not specified by the null hypothesis. Several simulated data sets and a real data set are analyzed to demonstrate the utility of the proposed tests.
With the promise of accelerating software development, low-code platforms (LCPs) are becoming popular across various industries. Nevertheless, there are still barriers hindering their adoption. Among them, vendor lock-in is a major concern, especially considering the lack of interoperability between these platforms. Typically, after modeling an application in one LCP, migrating to another requires remodeling everything from scratch (the data model, the graphical user interface, workflows, etc.) in the new platform. To overcome this situation, this work proposes an approach to improve the interoperability of LCPs by (semi)automatically migrating models specified in one platform to another. The concrete migration path depends on the capabilities of the source and target tools. We first analyze popular LCPs, characterize their import and export alternatives, and define transformations between those data formats when available. This is then complemented with an LLM-based solution, where the image recognition features of large language models are employed to migrate models based on a simple image export of the model at hand. The full pipelines are implemented on top of the BESSER modeling framework, which acts as a pivot representation between the tools.