We derive a novel uplink-downlink duality principle for optimal joint precoding design under per-transmitter power and information constraints in fading channels. The information constraints model limited sharing of channel state information and data-bearing signals across the transmitters. The main application is to cell-free networks, where each access point (AP) must typically satisfy an individual power constraint and form its transmit signal using limited cooperation capabilities. Our duality principle applies to ergodic achievable rates given by the popular hardening bound, and it can be interpreted as a nontrivial generalization of a previous result by Yu and Lan for deterministic channels. This generalization allows us to study more involved information constraints, going beyond the simple case of cluster-wise centralized precoding covered by previous techniques. Specifically, we show that the optimal joint precoders are, in general, given by an extension of the recently developed team minimum mean-square error method. As a particular yet practical example, we then solve the problem of optimal local precoding design in user-centric cell-free massive MIMO networks subject to per-AP power constraints.
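For orientation, a commonly used form of the hardening (use-and-then-forget) bound on the ergodic achievable rate of user $k$ reads, in a generic notation that need not match the paper's,
\[
R_k = \log_2\!\left(1 + \frac{\big|\mathbb{E}\big[\mathbf{h}_k^{\mathsf{H}}\mathbf{w}_k\big]\big|^2}{\sum_{j=1}^{K}\mathbb{E}\big[\big|\mathbf{h}_k^{\mathsf{H}}\mathbf{w}_j\big|^2\big] - \big|\mathbb{E}\big[\mathbf{h}_k^{\mathsf{H}}\mathbf{w}_k\big]\big|^2 + \sigma^2}\right),
\]
where $\mathbf{h}_k$ is the fading channel of user $k$, $\mathbf{w}_j$ the joint precoder serving user $j$, and $\sigma^2$ the noise power; the per-transmitter power and information constraints discussed above are then imposed on the precoders $\mathbf{w}_j$.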
We develop a new rank-based approach for univariate two-sample testing in the presence of missing data, which makes no assumptions about the missingness mechanism. This approach is a theoretical extension of the Wilcoxon-Mann-Whitney test that controls the Type I error by providing exact bounds for the test statistic after accounting for the number of missing values. Greater statistical power is shown when the method is extended to account for a bounded domain. Furthermore, exact bounds are provided on the proportions of data that can be missing in the two samples while yielding a significant result. Simulations demonstrate that our method has good power, typically for cases of $10\%$ to $20\%$ missing data, while standard imputation approaches fail to control the Type I error. We illustrate our method on complex clinical trial data in which patients' withdrawal from the trial leads to missing values.
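To make the notion of exact bounds concrete, the following minimal sketch (the function name and interface are ours; the paper's bounded-domain refinement and critical values are not reproduced) brackets the Mann-Whitney $U$ statistic over all values the missing observations could have taken:

    import numpy as np

    def wmw_u_bounds(x_obs, y_obs, n_x_missing, n_y_missing):
        """Sharp bounds on the Mann-Whitney U statistic (pairs with x > y, ties counted 1/2)
        over all possible values of the missing observations."""
        x_obs = np.asarray(x_obs, dtype=float)
        y_obs = np.asarray(y_obs, dtype=float)
        diff = x_obs[:, None] - y_obs[None, :]
        u_obs = np.sum(diff > 0) + 0.5 * np.sum(diff == 0)   # U restricted to fully observed pairs
        n_y_total = len(y_obs) + n_y_missing
        # Extreme cases: missing x-values arbitrarily small and missing y-values arbitrarily
        # large (lower bound), or the reverse (upper bound).
        u_min = u_obs
        u_max = u_obs + n_x_missing * n_y_total + len(x_obs) * n_y_missing
        return u_min, u_max

A test result is then robust to the missingness whenever the entire interval $[U_{\min}, U_{\max}]$ lies on the rejection side of the usual Wilcoxon-Mann-Whitney critical value.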
The present study introduces an advanced multi-physics and multi-scale modeling approach to investigate colon motility in silico. We introduce a generalized electromechanical framework, integrating cellular electrophysiology and smooth muscle contractility, thus advancing a first-of-its-kind computational model of laser tissue soldering after incision resection. The proposed theoretical framework comprises three main elements: a microstructural material model describing the intestinal wall geometry and the composition of reinforcing fibers, with four fiber families, two active-conductive and two passive; an electrophysiological model describing the propagation of slow waves, based on a fully-coupled nonlinear phenomenological approach; and a thermodynamically consistent mechanical model describing the hyperelastic energetic contributions ruling tissue equilibrium under diverse loading conditions. The active strain approach was adopted to describe tissue electromechanics by exploiting the multiplicative decomposition of the deformation gradient for each active fiber family and solving the governing equations via a staggered finite element scheme. The computational framework was fine-tuned according to state-of-the-art experimental evidence, and extensive numerical analyses allowed us to compare manometric traces computed via numerical simulations with those obtained clinically in human patients. The model proved capable of reproducing, both qualitatively and quantitatively, high- and low-amplitude propagating contractions. Simulations of colon motility after laser tissue soldering demonstrate that the material properties and couplings of the deposited tissue are critical to reproducing a physiological muscular contraction and thus to restoring proper peristaltic activity.
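As a reminder of the active strain kinematics invoked above (generic notation; the model's specific activation law may differ), the deformation gradient is split multiplicatively into an elastic and an active part for each active fiber family, e.g.
\[
\mathbf{F} = \mathbf{F}_e\,\mathbf{F}_a, \qquad \mathbf{F}_a = \mathbf{I} - \gamma_f\,\mathbf{f}_0 \otimes \mathbf{f}_0,
\]
where $\mathbf{f}_0$ is the reference fiber direction and $\gamma_f$ an activation parameter driven by the electrophysiology; the hyperelastic energy is evaluated on the elastic part $\mathbf{F}_e = \mathbf{F}\,\mathbf{F}_a^{-1}$.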
We formulate a uniform tail bound for empirical processes indexed by a class of functions, in terms of the individual deviations of the functions rather than the worst-case deviation over the considered class. The tail bound is established by introducing an initial "deflation" step into the standard generic chaining argument. The resulting tail bound is the sum of the complexity of the "deflated" function class, measured by a generalization of Talagrand's $\gamma$ functional, and the deviation of the individual function, both formulated in terms of the natural seminorm induced by the corresponding Cram\'{e}r functions. We also provide approximations of this seminorm when the function class lies in a given (exponential-type) Orlicz space, which can be used to make the complexity term and the deviation term more explicit.
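Schematically, and leaving the constants, the deflated class $\widetilde{\mathcal{F}}$, and the Cram\'{e}r-induced seminorm $\|\cdot\|_u$ to be made precise as in the text, the bound has the form
\[
\Pr\!\left[\,\exists f \in \mathcal{F}:\ (P_n - P)f \;\ge\; c_1\,\gamma\big(\widetilde{\mathcal{F}}\big) + c_2\,\|f\|_u\,\right] \;\le\; C\,e^{-u},
\]
where $P_n$ and $P$ denote the empirical and population measures: the complexity term is shared by the whole (deflated) class, while the deviation term adapts to the individual function $f$.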
Existing statistical methods for the analysis of micro-randomized trials (MRTs) are designed to estimate causal excursion effects using data from a single MRT. In practice, however, researchers can often find previous MRTs that employ similar interventions. In this paper, we develop data integration methods that capitalize on this additional information, leading to statistical efficiency gains. To further increase efficiency, we demonstrate how to combine these approaches according to a generalization of multivariate precision weighting that allows for correlation between estimates, and we show that the resulting meta-estimator possesses an asymptotic optimality property. We illustrate our methods in simulation and in a case study involving two MRTs in the area of smoking cessation.
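For reference, in its standard form the correlation-aware precision weighting amounts to a generalized-least-squares combination of the stacked estimates (our notation; the paper's generalization may differ in its details):
\[
\hat{\theta}_{\mathrm{meta}} = \big(A^{\top}\Sigma^{-1}A\big)^{-1}A^{\top}\Sigma^{-1}\hat{\vartheta}, \qquad \operatorname{Cov}\big(\hat{\theta}_{\mathrm{meta}}\big) = \big(A^{\top}\Sigma^{-1}A\big)^{-1},
\]
where $\hat{\vartheta}$ stacks the individual estimators, $A$ stacks conforming identity matrices, and $\Sigma$ is the joint covariance of the stacked estimators; a block-diagonal $\Sigma$ recovers ordinary inverse-variance weighting.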
The subject of this work is an adaptive stochastic Galerkin finite element method for parametric or random elliptic partial differential equations, which generates sparse product polynomial expansions with respect to the parametric variables of the solutions. For the corresponding spatial approximations, an independently refined finite element mesh is used for each polynomial coefficient. The method relies on multilevel expansions of the input random fields and achieves error reduction at a uniform rate. In particular, the saturation property for the refinement process is ensured by the algorithm. The results are illustrated by numerical experiments, including cases with random fields of low regularity.
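In the notation commonly used for such methods (ours, not necessarily the paper's), the approximations produced by the algorithm take the form
\[
u_{\Lambda}(x, y) = \sum_{\nu \in \Lambda} u_{\nu}(x)\,P_{\nu}(y), \qquad u_{\nu} \in V_{\nu},
\]
where $\Lambda$ is an adaptively generated finite set of multi-indices, the $P_{\nu}$ are product polynomials in the parametric variables $y$, and each coefficient $u_{\nu}$ belongs to its own independently refined finite element space $V_{\nu}$.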
We propose a material design method via gradient-based optimization of compositions, overcoming the limitations of traditional approaches: exhaustive database searches and conditional generative models. It optimizes the inputs via backpropagation, aligning the model's output closely with the target property and thereby facilitating the discovery of unlisted materials and precise property determination. Our method is also capable of adaptive optimization under new conditions without retraining. Applying it to the search for high-Tc superconductors, we identified potential compositions beyond existing databases and discovered new hydrogen superconductors via conditional optimization. This method is versatile and significantly advances material design by enabling efficient, extensive searches and adaptability to new constraints.
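A minimal sketch of this input-side optimization, assuming a differentiable pretrained surrogate (the placeholder surrogate network, the softmax parametrization of compositions, and all hyperparameters below are illustrative, not the paper's):

    import torch
    import torch.nn as nn

    # Placeholder surrogate: composition fractions -> predicted property (e.g., Tc).
    # In practice this would be a pretrained, frozen property-prediction model.
    n_elements = 96
    surrogate = nn.Sequential(nn.Linear(n_elements, 64), nn.ReLU(), nn.Linear(64, 1))
    for p in surrogate.parameters():
        p.requires_grad_(False)                     # the input is optimized, not the model

    target = torch.tensor([300.0])                  # illustrative target property value
    logits = torch.zeros(n_elements, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=0.05)

    for _ in range(500):
        comp = torch.softmax(logits, dim=0)         # fractions are nonnegative and sum to 1
        loss = (surrogate(comp) - target).pow(2).mean()
        opt.zero_grad()
        loss.backward()                             # gradients flow back to the composition itself
        opt.step()

    candidate = torch.softmax(logits, dim=0).detach()   # proposed composition vector

Changing the target or adding penalty terms to the loss corresponds to adaptive optimization under new conditions, with no retraining of the surrogate.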
We present the design and implementation of a tool for semi-automatic verification of functional specifications of operating system modules. Such verification tasks are traditionally carried out in interactive theorem provers, where the functionalities of the module are specified at abstract and concrete levels using data types such as structures, algebraic datatypes, arrays, and maps. In this work, we provide SMT encodings for these commonly occurring data types. This allows verification conditions to be reduced to a form suitable for SMT solvers. The use of SMT solvers combined with a tactic language enables semi-automatic verification of the specification. We apply the tool to verify functional specifications for key parts of the uC/OS-II operating system, building on earlier work that gave a full verification of the system in Coq. We demonstrate a large reduction in the amount of human effort due to the increased level of automation.
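As a toy illustration of the kind of encoding involved (using Z3's Python API; this is not the tool's actual encoding or tactic language), a simple map-update lemma can be phrased over SMT arrays and discharged automatically:

    from z3 import Array, IntSort, Ints, Store, Select, Implies, Not, Solver, unsat

    m = Array('m', IntSort(), IntSort())   # a finite map encoded as an SMT array
    k1, k2, v = Ints('k1 k2 v')

    # Verification condition: updating key k1 leaves every other key unchanged.
    vc = Implies(k1 != k2, Select(Store(m, k1, v), k2) == Select(m, k2))

    s = Solver()
    s.add(Not(vc))                         # the VC is valid iff its negation is unsatisfiable
    assert s.check() == unsat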
We present a novel framework for developing fourth-order lattice Boltzmann schemes for multidimensional nonlinear systems of conservation laws. Our numerical schemes preserve two fundamental characteristics of classical lattice Boltzmann methods: a local relaxation phase and a transport phase composed of elementary shifts on a Cartesian grid. Fourth-order accuracy is achieved by composing second-order time-symmetric basic schemes with rational weights, which allows the transport phase to be expressed in terms of elementary shifts. Introducing local variations of the relaxation parameter during each relaxation stage ensures the entropic nature of the schemes; this not only enhances stability in the long-time limit but also maintains fourth-order accuracy. To validate our approach, we conduct comprehensive tests on scalar equations and systems in one and two spatial dimensions.
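For context, the classical composition result underlying such constructions reads as follows (generic form; the paper's specific stage count and weights may differ): if $S_{\Delta t}$ is a time-symmetric second-order scheme, then the palindromic composition
\[
\Psi_{\Delta t} = S_{\gamma_s \Delta t} \circ \cdots \circ S_{\gamma_1 \Delta t}, \qquad \gamma_i = \gamma_{s+1-i}, \qquad \sum_{i=1}^{s}\gamma_i = 1, \qquad \sum_{i=1}^{s}\gamma_i^{3} = 0,
\]
is fourth-order accurate; requiring the $\gamma_i$ to be rational is what keeps the transport phase of every sub-step expressible through elementary shifts on the Cartesian grid, as noted above.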
Regent is an implicitly parallel programming language that allows the development of a single codebase for heterogeneous platforms targeting CPUs and GPUs. This paper presents the development of a parallel meshfree solver in Regent for two-dimensional inviscid compressible flows. The meshfree solver is based on the least squares kinetic upwind method. Example codes are presented to show the differences between the Regent and CUDA-C implementations of the meshfree solver on a GPU node. For parallel computations on CPUs, we detail how data communication and synchronisation are handled by the Regent and Fortran+MPI codes. The Regent solver is verified against standard test cases for inviscid flows. Benchmark simulations are performed on coarse to very fine point distributions to assess the solver's performance. The computational efficiency of the Regent solver on an A100 GPU is compared with that of an equivalent meshfree solver written in CUDA-C, and the codes are profiled to investigate the differences in their performance. The performance of the Regent solver on CPU cores is compared with that of an equivalent explicitly parallel Fortran meshfree solver based on MPI. Scalability results are also presented, offering further insight into performance.
Intelligent tutoring systems optimize the selection and timing of learning materials to enhance understanding and long-term retention. This requires estimates of both the learner's progress ("knowledge tracing"; KT) and the prerequisite structure of the learning domain ("knowledge mapping"). While recent deep learning models achieve high KT accuracy, they do so at the expense of the interpretability of psychologically inspired models. In this work, we present a solution to this trade-off. PSI-KT is a hierarchical generative approach that explicitly models how both individual cognitive traits and the prerequisite structure of knowledge influence learning dynamics, thus achieving interpretability by design. Moreover, by using scalable Bayesian inference, PSI-KT targets the real-world need for efficient personalization even with a growing body of learners and learning histories. Evaluated on three datasets from online learning platforms, PSI-KT achieves superior multi-step predictive accuracy and scalable inference in continual-learning settings, all while providing interpretable representations of learner-specific traits and of the prerequisite structure of knowledge that causally supports learning. In sum, predictive, scalable and interpretable knowledge tracing with solid knowledge mapping lays a key foundation for effective personalized learning that makes education accessible to a broad, global audience.