Antenna array calibration is necessary to maintain the high fidelity of beam patterns across a wide range of advanced antenna systems and to ensure channel reciprocity in time division duplexing schemes. Despite continuous development in this area, most existing solutions are optimised for specific radio architectures, require standardised over-the-air data transmission, or serve as extensions of conventional methods. The diversity of communication protocols and hardware is problematic, since it requires designing or updating calibration procedures for each new advanced antenna system. In this study, we formulate antenna calibration in an alternative way, namely as a task of functional approximation, and address it via Bayesian machine learning. Our contributions are three-fold. Firstly, we define a parameter space, based on near-field measurements, that captures the underlying hardware impairments of each radiating element, their positional offsets, and the mutual coupling effects between antenna elements. Secondly, Gaussian process regression is used to form models from a sparse set of the aforementioned near-field data. Once deployed, the learned non-parametric models continuously transform the beamforming weights of the system, resulting in corrected beam patterns. Lastly, we demonstrate the viability of the described methodology for both digital and analog beamforming antenna arrays of different scales and discuss its further extension to support real-time operation with dynamic hardware impairments.
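As a rough illustration of the second contribution, the sketch below fits Gaussian process models to sparse per-element calibration data and uses the predictions to pre-distort the beamforming weights. The 8x8 layout, the element position as GP input, the complex gain error as target, and the division-based correction are assumptions made for the sake of a runnable example, not the paper's actual parameter space.

```python
# Rough sketch: interpolate sparse per-element gain/phase errors with GP
# regression, then divide them out of the nominal beamforming weights.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical 8x8 array; element positions in units of half a wavelength.
xy = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2).astype(float)

# Sparse near-field calibration data: complex gain error measured on 16 elements.
probed = rng.choice(len(xy), size=16, replace=False)
true_gain = np.exp(0.1 * rng.standard_normal(len(xy))          # amplitude ripple
                   + 1j * 0.3 * rng.standard_normal(len(xy)))   # phase error
y_meas = true_gain[probed]

kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(1e-4)
gp_re = GaussianProcessRegressor(kernel=kernel).fit(xy[probed], y_meas.real)
gp_im = GaussianProcessRegressor(kernel=kernel).fit(xy[probed], y_meas.imag)

# Predicted complex gain error for every element, probed or not.
g_hat = gp_re.predict(xy) + 1j * gp_im.predict(xy)

# Divide the estimated error out of the nominal beamforming weights.
w_nominal = np.ones(len(xy), dtype=complex) / np.sqrt(len(xy))
w_corrected = w_nominal / g_hat
```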
Efficient and accurate estimation of multivariate empirical probability distributions is fundamental to the calculation of information-theoretic measures such as mutual information and transfer entropy. Common techniques include variations on histogram estimation which, whilst computationally efficient, are often unable to precisely capture the probability density of samples with high correlation, kurtosis or fine substructure, especially when sample sizes are small. Adaptive partitions, which adjust heuristically to the sample, can reduce the bias imparted by the geometry of the histogram itself, but these have commonly focused on the location, scale and granularity of the partition, the effects of which are limited for highly correlated distributions. In this paper, I reformulate the differential entropy estimator for the special case of an equiprobable histogram, using a k-d tree to partition the sample space into bins of equal probability mass. By doing so, I expose an implicit rotational orientation parameter, which is conjectured to be suboptimally specified in the typical marginal alignment. I propose that the optimal orientation minimises the variance of the bin volumes, and demonstrate that improved entropy estimates can be obtained by rotationally aligning the partition to the sample distribution accordingly. Such optimal partitions are observed to be more accurate than existing techniques in estimating entropies of correlated bivariate Gaussian distributions with known theoretical values, across varying sample sizes (99% CI).
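The following is a minimal sketch of an equiprobable (equal-mass) k-d partition entropy estimator in the usual marginally aligned orientation; the rotational-alignment step proposed above is not shown, and the axis-selection rule and the use of the sample bounding box as the support are simplifying assumptions.

```python
# Equal-mass k-d partition and the histogram differential-entropy estimate
# H ~= sum_i (n_i/N) * log(N * V_i / n_i), where V_i is the bin volume.
import numpy as np

def _split(points, bounds, min_pts, leaves):
    n, d = points.shape
    if n <= min_pts:
        leaves.append((n, np.prod(bounds[1] - bounds[0])))
        return
    # Median split along the widest axis -> two children of equal mass.
    axis = int(np.argmax(bounds[1] - bounds[0]))
    points = points[np.argsort(points[:, axis])]
    cut = points[n // 2, axis]
    lo_b, hi_b = bounds.copy(), bounds.copy()
    lo_b[1, axis] = cut
    hi_b[0, axis] = cut
    _split(points[: n // 2], lo_b, min_pts, leaves)
    _split(points[n // 2:], hi_b, min_pts, leaves)

def kd_entropy(x, min_pts=64):
    x = np.asarray(x, float)
    bounds = np.stack([x.min(0), x.max(0)])     # crude support estimate
    leaves = []
    _split(x, bounds, min_pts, leaves)
    N = len(x)
    return sum(n / N * np.log(N * v / n) for n, v in leaves if n > 0)

rng = np.random.default_rng(1)
cov = [[1.0, 0.9], [0.9, 1.0]]
sample = rng.multivariate_normal([0, 0], cov, size=4000)
true_H = 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(cov))
print(kd_entropy(sample), true_H)
```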
Compliant grasping is an essential capability for most robots in practical applications. For compliant robotic end-effectors that commonly appear in industrial or logistic scenarios, such as the Fin-Ray gripper, it remains challenging to build a bidirectional mathematical model that mutually maps shape deformation and contact force. Part I of this article constructed the force-displacement relationship for design optimization through the co-rotational theory with very few assumptions. In Part II, we further devise a detailed displacement-force mathematical model, enabling the compliant gripper to precisely estimate contact forces without force sensors. Specifically, the proposed approach, based on the co-rotational theory, calculates contact forces from deformations. The presented displacement-control algorithm resolves contact forces and provides force feedback for a force control system of the gripper, where deformation appears as displacements at the contact points. Afterward, simulation experiments are conducted to evaluate the performance of the proposed model through comparisons with finite-element analysis (FEA). Simulation results reveal that the proposed model accurately estimates contact forces, with an average error of around 5% across all single- and multiple-node cases, regardless of the design parameters (Part I of this article is available on Google Drive).
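Purely as a toy stand-in for the displacement-to-force direction, the snippet below recovers contact forces from measured nodal displacements through a linear stiffness relation $f = K u$; the co-rotational formulation described above additionally removes each element's rigid-body rotation before applying its local stiffness, and the stiffness matrix and contact degrees of freedom here are placeholders.

```python
# Linear-elastic simplification: external (contact) nodal forces from a known
# stiffness matrix K and measured nodal displacements u.
import numpy as np

def contact_forces(K, u, contact_dofs):
    """Recover forces at the contact degrees of freedom from displacements."""
    f = K @ u
    return f[contact_dofs]

# Toy 3-DOF spring chain with unit stiffnesses, loaded by a single tip force.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
u = np.array([0.01, 0.02, 0.03])
print(contact_forces(K, u, contact_dofs=[2]))   # force applied at the tip node
```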
This paper focuses on a new error analysis of a class of mixed FEMs for stationary incompressible magnetohydrodynamics, with standard inf-sup stable velocity-pressure space pairs for the Navier-Stokes equations and the N\'ed\'elec edge element for the magnetic field. The methods have been widely used in various numerical simulations over the last several decades, while the existing analysis is not optimal due to the strong coupling of the system and the pollution of the lower-order N\'ed\'elec edge approximation in the analysis. By means of a newly modified Maxwell projection, we establish new and optimal error estimates. In particular, we prove that the method based on the commonly-used Taylor-Hood/lowest-order N\'ed\'elec edge element pair is efficient and provides second-order accuracy for the numerical velocity. Two numerical examples for the problem in both convex and nonconvex polygonal domains are presented. Numerical results confirm our theoretical analysis.
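For context, one common nondimensional form of the stationary incompressible MHD system treated by such mixed methods is recalled below; the exact scaling and coefficients vary across the literature and need not match the paper's formulation.

```latex
% u: velocity, p: pressure, B: magnetic field; R_e, R_m, S denote the fluid
% Reynolds number, magnetic Reynolds number and coupling coefficient.
\[
\begin{aligned}
  -R_e^{-1}\,\Delta u + (u\cdot\nabla)u + \nabla p - S\,(\nabla\times B)\times B &= f, \\
  S\,R_m^{-1}\,\nabla\times(\nabla\times B) - S\,\nabla\times(u\times B) &= g, \\
  \nabla\cdot u = 0, \qquad \nabla\cdot B &= 0.
\end{aligned}
\]
```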
A patch framework consists of a bipartite graph between $n$ points and $m$ local views (patches) and the $d$-dimensional local coordinates of the points in the views containing them. Given a patch framework, we consider the problem of finding a rigid alignment of the views, identified with an element of the product of $m$ orthogonal groups, $\mathbb{O}(d)^m$, that minimizes the alignment error. When the views are noiseless, a perfect alignment exists, resulting in a realization of the points that respects the geometry of the views. The affine rigidity of such realizations, its connection with the overlapping structure of the views, and its consequences in spectral and semidefinite algorithms have been studied in related work [Zha and Zhang; Chaudhary et al.]. In this work, we characterize the non-degeneracy of a rigid alignment, consequently obtaining a characterization of the local rigidity of a realization, and convergence guarantees on Riemannian gradient descent for aligning the views. Precisely, we characterize the non-degeneracy of an alignment of (possibly noisy) local views based on the kernel and positivity of a certain matrix. Thereafter, we work in the noiseless setting. Under a mild condition on the local views, we show that the non-degeneracy and uniqueness of a perfect alignment, up to the action of $\mathbb{O}(d)$, are equivalent to the local and global rigidity of the resulting realization, respectively. This also yields a characterization of the local rigidity of a realization. We also provide necessary and sufficient conditions on the overlapping structure of the noiseless local views for their realizations to be locally/globally rigid. Finally, we focus on Riemannian gradient descent for aligning the local views and obtain a sufficient condition on an alignment for the algorithm to converge (locally) linearly to it.
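A bare-bones sketch of Riemannian gradient descent on $\mathbb{O}(d)^m$ for view alignment is given below; translations are ignored (the views are assumed to share a common origin), the cost is the sum of squared mismatches of shared points, and the data layout, step size and iteration count are assumptions rather than anything prescribed above.

```python
# Riemannian gradient descent on the product of orthogonal groups O(d)^m,
# minimising sum over shared points of ||O_a x_a - O_b x_b||^2.
import numpy as np

def align(views, d, n_iter=500, step=0.05):
    """views: dict {patch: {point id: local coordinate in R^d}}."""
    keys = list(views)
    m = len(keys)
    O = [np.eye(d) for _ in keys]                     # initial alignment

    # Pairs of views sharing a point, with the point's coordinates in each.
    pairs = []
    for a in range(m):
        for b in range(a + 1, m):
            for i in set(views[keys[a]]) & set(views[keys[b]]):
                pairs.append((a, b, views[keys[a]][i], views[keys[b]][i]))

    for _ in range(n_iter):
        G = [np.zeros((d, d)) for _ in keys]          # Euclidean gradients
        for a, b, xa, xb in pairs:
            r = O[a] @ xa - O[b] @ xb                 # mismatch of a shared point
            G[a] += 2 * np.outer(r, xa)
            G[b] -= 2 * np.outer(r, xb)
        for k in range(m):
            # Project onto the tangent space of O(d), then retract by the
            # polar decomposition (nearest orthogonal matrix).
            sk = O[k].T @ G[k]
            rgrad = O[k] @ (sk - sk.T) / 2
            U, _, Vt = np.linalg.svd(O[k] - step * rgrad)
            O[k] = U @ Vt
    return O
```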
In this article, we propose a new class of consistent tests for $p$-variate normality. These tests are based on the characterization of the standard multivariate normal distribution that the Hessian of the corresponding cumulant generating function is identical to the $p\times p$ identity matrix, together with the idea of decomposing the information from the joint distribution into the dependence copula and all marginal distributions. Under the null hypothesis of multivariate normality, our proposed test statistic is independent of the unknown mean vector and covariance matrix, so the distribution-free critical value of the test can be obtained by Monte Carlo simulation. We also derive the asymptotic null distribution of the proposed test statistic and establish the consistency of the test against different fixed alternatives. Last but not least, a comprehensive Monte Carlo study illustrates that our test is a superb yet computationally convenient competitor to many well-known existing test statistics.
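The snippet below is not the authors' statistic, but it illustrates the same characterization: after standardizing the sample, the Hessian of the empirical cumulant generating function should be close to the $p\times p$ identity at every $t$, and affine invariance makes a Monte Carlo critical value distribution-free. The grid of evaluation points and the Frobenius-norm discrepancy are arbitrary choices.

```python
import numpy as np

def cgf_hessian(x, t):
    """Hessian of the empirical CGF log((1/n) sum exp(t'x_i)) at t."""
    w = np.exp(x @ t)
    w = w / w.sum()                                   # exponential tilting weights
    mu = (w[:, None] * x).sum(0)
    return (w[:, None, None] * x[:, :, None] * x[:, None, :]).sum(0) - np.outer(mu, mu)

def test_stat(x, ts):
    x = np.asarray(x, float)
    x = x - x.mean(0)
    L = np.linalg.cholesky(np.cov(x, rowvar=False))
    z = np.linalg.solve(L, x.T).T                     # standardized sample
    p = x.shape[1]
    return max(np.linalg.norm(cgf_hessian(z, t) - np.eye(p)) for t in ts)

rng = np.random.default_rng(2)
p, n, alpha = 3, 200, 0.05
ts = [0.5 * rng.standard_normal(p) for _ in range(10)]

# Distribution-free critical value by Monte Carlo under the null.
null = [test_stat(rng.standard_normal((n, p)), ts) for _ in range(500)]
crit = np.quantile(null, 1 - alpha)

x_obs = rng.standard_normal((n, p)) ** 3              # a heavy-tailed alternative
print(test_stat(x_obs, ts) > crit)                    # reject H0?
```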
Age of information (AoI) is a powerful metric for evaluating the freshness of information, and minimization of average statistics, such as the average AoI and average peak AoI, currently prevails in guiding freshness optimization for related applications. Although minimizing these statistics does improve the freshness of the received information for status update systems in an average sense, the time-varying fading characteristics of wireless channels often cause uncertain yet frequent age violations. The recently proposed statistical AoI metric, which evaluates the achievable minimum peak AoI under a given constraint on the age violation probability, can better characterize the features of AoI dynamics. In this paper, we study the statistical AoI minimization problem for status update systems over multi-state fading channels, which effectively upper-bounds the AoI violation probability but introduces prohibitively high computational complexity. To resolve this issue, we tackle the problem with a two-fold approach. For a small AoI exponent, the problem is approximated via a fractional programming problem. For a large AoI exponent, the problem is converted to a convex problem. Solving the two problems respectively, we derive the near-optimal sampling interval for diverse status update systems. Insightful observations are obtained on how the sampling interval should be tuned as a decreasing function of the channel state information (CSI). Surprisingly, for extremely stringent AoI requirements, the sampling interval converges to a constant regardless of the variation of the CSI. Numerical results verify the effectiveness and superiority of the proposed scheme.
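To give a flavour of how the small-exponent approximation might be handled, here is a generic Dinkelbach iteration for a one-dimensional fractional programme; the objective, constraint interval and toy instance are placeholders, not the paper's formulation.

```python
# Generic Dinkelbach method for min_x f(x)/g(x) with g(x) > 0 on an interval:
# iterate lambda <- f(x)/g(x) with x = argmin f(x) - lambda * g(x).
import numpy as np
from scipy.optimize import minimize_scalar

def dinkelbach(f, g, bounds, tol=1e-8, max_iter=100):
    lam = 0.0
    for _ in range(max_iter):
        res = minimize_scalar(lambda x: f(x) - lam * g(x), bounds=bounds, method="bounded")
        x = res.x
        if abs(f(x) - lam * g(x)) < tol:
            break
        lam = f(x) / g(x)
    return x, lam                                     # minimiser and optimal ratio

# Toy instance: minimise (x^2 + 1) / (x + 2) over [0, 5].
x_star, ratio = dinkelbach(lambda x: x**2 + 1, lambda x: x + 2, bounds=(0, 5))
print(x_star, ratio)
```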
Privacy protection methods, such as differentially private mechanisms, introduce noise into the resulting statistics, which often leads to complex and intractable sampling distributions. In this paper, we propose to use the simulation-based "repro sample" approach to produce statistically valid confidence intervals and hypothesis tests based on privatized statistics. We show that this methodology is applicable to a wide variety of private inference problems, appropriately accounts for biases introduced by privacy mechanisms (such as by clamping), and improves over other state-of-the-art inference methods, such as the parametric bootstrap, in terms of the coverage and type I error of the private inference. We also develop significant improvements and extensions of the repro sample methodology for general models (not necessarily related to privacy), including 1) modifying the procedure to ensure guaranteed coverage and type I errors, even accounting for Monte Carlo error, and 2) proposing efficient numerical algorithms to implement the confidence intervals and $p$-values.
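A heavily simplified, repro-sample-flavoured interval for a clamped, Laplace-privatized mean is sketched below: the same data and noise seeds are reused for every candidate parameter, and a candidate is retained if the observed release looks typical among its simulated releases. The data model, known scale, clamping range and grid inversion are illustrative assumptions; the construction described above is more general and additionally controls Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(3)
n, eps, lo, hi, alpha = 500, 1.0, 0.0, 1.0, 0.05

def privatized_mean(data, u):
    # Clamp to [lo, hi] and add Laplace noise with scale (hi - lo)/(n * eps);
    # the noise is generated from a fixed uniform draw u via the inverse CDF
    # so it can be replayed across candidate parameters.
    clamped = np.clip(data, lo, hi)
    noise = -(hi - lo) / (n * eps) * np.sign(u) * np.log(1 - 2 * np.abs(u))
    return clamped.mean() + noise

# Observed privatized release from (otherwise unseen) data.
data = rng.normal(0.6, 0.2, n)
s_obs = privatized_mean(data, rng.uniform(-0.5, 0.5))

# Repro draws: fix the data and noise seeds once, reuse them for every theta.
R, sigma = 400, 0.2                                   # sigma assumed known here
U_data = rng.standard_normal((R, n))
U_noise = rng.uniform(-0.5, 0.5, R)

def accepted(theta):
    sims = np.array([privatized_mean(theta + sigma * U_data[r], U_noise[r]) for r in range(R)])
    lo_q, hi_q = np.quantile(sims, [alpha / 2, 1 - alpha / 2])
    return lo_q <= s_obs <= hi_q

grid = np.linspace(0.4, 0.8, 161)
ci = [t for t in grid if accepted(t)]
print(min(ci), max(ci))                               # approximate 95% interval
```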
Body weight, as an essential physiological trait, is of considerable significance in many applications such as body management, rehabilitation, and drug dosing for patient-specific treatments. Previous works on body weight estimation are mainly vision-based, using 2D/3D, depth, or infrared images, and face problems with illumination, occlusion, and especially privacy. The pressure mapping mattress is a non-invasive and privacy-preserving tool that obtains the pressure distribution image over the bed surface, which strongly correlates with the body weight of the lying person. To extract body weight from this image, we propose a deep learning-based model that includes a dual-branch network to extract deep features and pose features, respectively. A contrastive learning module is combined with the deep-feature branch to help mine the factors shared across different postures of each subject. The two groups of features are then concatenated for the body weight regression task. To test the model's performance over different hardware and posture settings, we create a pressure image dataset of 10 subjects and 23 postures, using a self-made pressure-sensing bedsheet. This dataset, released together with this paper, and an existing public dataset are used for validation. The results show that our model outperforms state-of-the-art algorithms on both datasets. Our research constitutes an important step toward fully automatic weight estimation in both clinical and at-home practice. Our dataset is available for research purposes at: //github.com/USTCWzy/MassEstimation.
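A schematic version of a dual-branch regressor of this kind is shown below (PyTorch); the layer sizes, the pose representation and the contrastive projection head are placeholders rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class DualBranchWeightNet(nn.Module):
    def __init__(self, pose_dim=17 * 2, feat_dim=128):
        super().__init__()
        # Deep-feature branch: small CNN over the pressure image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Pose branch: MLP over (hypothetical) 2-D joint coordinates.
        self.pose = nn.Sequential(nn.Linear(pose_dim, 64), nn.ReLU(),
                                  nn.Linear(64, feat_dim), nn.ReLU())
        self.proj = nn.Linear(feat_dim, 64)           # head for a contrastive loss
        self.head = nn.Linear(2 * feat_dim, 1)        # body-weight regressor

    def forward(self, pressure, pose):
        z = self.cnn(pressure)
        q = self.pose(pose)
        return self.head(torch.cat([z, q], dim=1)).squeeze(1), self.proj(z)

net = DualBranchWeightNet()
pressure = torch.randn(4, 1, 64, 32)                  # batch of pressure maps
pose = torch.randn(4, 34)
weight_pred, embedding = net(pressure, pose)          # embedding feeds the contrastive loss
```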
Learning precise surrogate models of complex computer simulations and physical machines often requires long-lasting or expensive experiments. Furthermore, the modeled physical dependencies exhibit nonlinear and nonstationary behavior. Machine learning methods used to produce the surrogate model should therefore address these problems by providing a scheme to keep the number of queries small, e.g., by using active learning, and by being able to capture the nonlinear and nonstationary properties of the system. One way of modeling the nonstationarity is to induce input partitioning, a principle that has proven advantageous in active learning for Gaussian processes. However, existing methods either assume a known partitioning, need to introduce complex sampling schemes, or rely on very simple geometries. In this work, we present a simple yet powerful kernel family that incorporates a partitioning that i) is learnable via gradient-based methods and ii) uses a geometry that is more flexible than previous ones, while still being applicable in the low-data regime. Thus, it provides a good prior for active learning procedures. We empirically demonstrate excellent performance on various active learning tasks.
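One plausible instantiation of such a learnable soft-partitioning kernel, not necessarily the proposed family, is sketched below: softmax gates over learned affine boundaries mix local RBF kernels, the result stays positive semi-definite, and all gate and lengthscale parameters are differentiable, hence trainable by gradient methods (automatic differentiation is omitted for brevity).

```python
# Soft-partitioning kernel k(x, x') = sum_j pi_j(x) pi_j(x') k_j(x, x'),
# with gates pi_j(x) = softmax_j(w_j^T x + b_j) and per-region RBF kernels k_j.
import numpy as np

def soft_partition_kernel(X1, X2, W, b, lengthscales):
    """W: (J, d) gate weights, b: (J,) gate offsets, lengthscales: (J,)."""
    def gates(X):
        logits = X @ W.T + b                              # (n, J)
        e = np.exp(logits - logits.max(1, keepdims=True))
        return e / e.sum(1, keepdims=True)
    G1, G2 = gates(X1), gates(X2)
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1) # pairwise squared distances
    K = np.zeros((len(X1), len(X2)))
    for j, ell in enumerate(lengthscales):
        K += np.outer(G1[:, j], G2[:, j]) * np.exp(-0.5 * sq / ell**2)
    return K

# Short lengthscale on one side of a learned boundary, long on the other.
X = np.linspace(-3, 3, 50)[:, None]
K = soft_partition_kernel(X, X, W=np.array([[4.0], [-4.0]]), b=np.zeros(2),
                          lengthscales=np.array([0.2, 2.0]))
print(np.all(np.linalg.eigvalsh(K + 1e-8 * np.eye(len(X))) > 0))  # PSD check
```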
Causal reversibility blends reversibility and causality for concurrent systems. It indicates that an action can be undone provided that all of its consequences have been undone already, thus making it possible to bring the system back to a past consistent state. Time reversibility is instead considered in the field of stochastic processes, mostly for efficient analysis purposes. A performance model based on a continuous-time Markov chain is time reversible if its stochastic behavior remains the same when the direction of time is reversed. We bridge these two theories of reversibility by showing the conditions under which causal reversibility and time reversibility are both ensured by construction. This is done in the setting of a stochastic process calculus, which is then equipped with a variant of stochastic bisimilarity accounting for both forward and backward directions.
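As a small illustration of the time-reversibility side, the snippet below computes the stationary distribution of a continuous-time Markov chain generator and checks the detailed-balance conditions $\pi_i q_{ij} = \pi_j q_{ji}$; the three-state birth-death generator is a made-up example, not one drawn from the calculus above.

```python
import numpy as np

def stationary(Q):
    """Solve pi Q = 0 with the components of pi summing to 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def is_time_reversible(Q, tol=1e-9):
    pi = stationary(Q)
    balance = pi[:, None] * Q          # entry (i, j) is pi_i * q_ij
    return np.allclose(balance, balance.T, atol=tol)

Q = np.array([[-1.0,  1.0,  0.0],
              [ 2.0, -3.0,  1.0],
              [ 0.0,  2.0, -2.0]])
print(is_time_reversible(Q))           # True: birth-death generators satisfy detailed balance
```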