We study a mean change point testing problem for high-dimensional data with exponentially- or polynomially-decaying tails. In each case, depending on the $\ell_0$-norm of the mean change vector, we separately consider dense and sparse regimes. We characterise the boundary between the dense and sparse regimes under the above two tail conditions for the first time in the change point literature and propose novel testing procedures that attain optimal rates in each of the four regimes up to a poly-iterated logarithmic factor. By comparison with previous results under Gaussian assumptions, our results quantify the cost of heavy-tailedness on the fundamental difficulty of change point testing problems for high-dimensional data. Specifically, when the error vectors follow sub-Weibull distributions, a CUSUM-type statistic is shown to achieve the minimax testing rate up to a $\sqrt{\log\log(8n)}$ factor. When the error distributions have polynomially-decaying tails, admitting bounded $\alpha$-th moments for some $\alpha \geq 4$, we introduce a median-of-means-type test statistic that achieves a near-optimal testing rate in both dense and sparse regimes. In particular, in the sparse regime, we further propose a computationally-efficient test that achieves exact optimality. Surprisingly, our investigation of the even more challenging case of $2 \leq \alpha < 4$ unveils a new phenomenon: the minimax testing rate has no sparse regime, i.e.\ testing sparse changes is information-theoretically as hard as testing dense changes. This phenomenon implies a phase transition of the minimax testing rates at $\alpha = 4$.
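To fix ideas, the following is a minimal toy illustration of a CUSUM-type scan statistic for a single mean change in a multivariate sequence; it is not the exact statistic or calibration analysed in the abstract above, and all names are ours.

import numpy as np

def cusum_scan(X):
    """Max-over-time CUSUM statistic for a single mean change.

    X: (n, p) array. For each candidate location t, compare the pre- and
    post-t sample means, weighted by sqrt(t*(n-t)/n), and return the
    largest Euclidean norm of the weighted difference.
    """
    n, p = X.shape
    csum = np.cumsum(X, axis=0)
    total = csum[-1]
    stats = []
    for t in range(1, n):
        weight = np.sqrt(t * (n - t) / n)
        diff = csum[t - 1] / t - (total - csum[t - 1]) / (n - t)
        stats.append(weight * np.linalg.norm(diff))
    return max(stats)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
X[100:] += 0.5                      # inject a mean shift half-way through
print(cusum_scan(X))

In practice the scan value is compared against a threshold chosen to control the test level; heavy-tailed settings, as discussed above, require robust variants such as the median-of-means-type statistic.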
Temporal irreversibility, often referred to as the arrow of time, is a fundamental concept in statistical mechanics. Markers of irreversibility also provide a powerful characterisation of information processing in biological systems. However, current approaches tend to describe temporal irreversibility in terms of a single scalar quantity, without disentangling the underlying dynamics that contribute to irreversibility. Here we propose a broadly applicable information-theoretic framework to characterise the arrow of time in multivariate time series, which yields qualitatively different types of irreversible information dynamics. This multidimensional characterisation reveals previously unreported high-order modes of irreversibility, and establishes a formal connection between recent heuristic markers of temporal irreversibility and metrics of information processing. We demonstrate the prevalence of high-order irreversibility in the hyperactive regime of a biophysical model of brain dynamics, showing that our framework is both theoretically principled and empirically useful. This work challenges the view of the arrow of time as a monolithic entity, enhancing both our theoretical understanding of irreversibility and our ability to detect it in practical applications.
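For context, the scalar markers alluded to here typically quantify irreversibility as the divergence between forward and time-reversed trajectory statistics; this is standard background, not the multidimensional framework proposed above. A minimal formulation for a stationary process $X_{1:T}$ is
\[
\mathcal{I}_T \;=\; D_{\mathrm{KL}}\bigl(P(X_1,\dots,X_T)\,\big\|\,P(X_T,\dots,X_1)\bigr) \;\geq\; 0,
\]
which vanishes if and only if the process is statistically time-reversible. The framework described above decomposes such asymmetry into qualitatively different modes rather than reporting a single scalar of this kind.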
During the process of robot-assisted ultrasound (US) puncture, it is important to estimate the location of the puncture from the 2D US images. To this end, the calibration of the US image becomes an important issue. In this paper, we propose a depth-camera-based US calibration method, for which an easy-to-deploy device is designed. With this device, the coordinates of the puncture needle tip are collected in the US image and in the depth camera frame, respectively, upon which a correspondence matrix is built for calibration. Finally, a number of experiments are conducted to validate the effectiveness of our calibration method.
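As a sketch of how paired needle-tip coordinates can yield a calibration transform, the snippet below fits a least-squares rigid transform with the Kabsch/SVD method; this is one standard way to use such correspondences, not necessarily the correspondence matrix construction used in the paper, and the function and variable names are hypothetical.

import numpy as np

def fit_rigid_transform(P_us, P_cam):
    """Least-squares rigid transform (R, t) mapping US-frame points to
    depth-camera-frame points, estimated from paired needle-tip positions.

    P_us, P_cam: (N, 3) arrays of corresponding points.
    """
    mu_us, mu_cam = P_us.mean(axis=0), P_cam.mean(axis=0)
    A, B = P_us - mu_us, P_cam - mu_cam
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_cam - R @ mu_us
    return R, t                               # then P_cam ~ (R @ P_us.T).T + t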
Shape morphing that transforms morphologies in response to stimuli is crucial for future multifunctional systems. While kirigami holds great promise for enhancing shape morphing, existing designs primarily focus on kinematics and overlook the underlying physics. This study introduces a differentiable inverse design framework that considers the physical interplay between geometry, materials, and stimuli of active kirigami, made of soft material embedded with magnetic particles, to realize target shape morphing upon magnetic excitation. We achieve this by combining differentiable kinematics and energy models into a constrained optimization, simultaneously designing the cuts and magnetization orientations to ensure kinematic and physical feasibility. Complex kirigami designs are obtained automatically with unparalleled efficiency, and they can be remotely controlled to morph into intricate target shapes and even multiple states. The proposed framework can be extended to accommodate various active systems, bridging geometry and physics to push the frontiers of shape-morphing applications such as flexible electronics and minimally invasive surgery.
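To illustrate the general shape of such a constrained inverse design, the toy sketch below minimizes a stand-in shape-matching error over design variables subject to a feasibility constraint; the objective, constraint, and variable names are entirely hypothetical, whereas the real framework couples differentiable kinematic and magnetic energy models.

import numpy as np
from scipy.optimize import minimize

target = np.array([0.6, 0.3, 0.9])            # hypothetical target descriptor

def shape_error(x):
    return np.sum((x - target) ** 2)          # stand-in for the morphing error

def feasibility(x):
    return 1.0 - np.sum(x ** 2)               # stand-in physical limit, must be >= 0

res = minimize(shape_error, x0=np.zeros(3), method="SLSQP",
               constraints=[{"type": "ineq", "fun": feasibility}])
print(res.x)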
The discovery of equations informed by knowledge of the process origin is a tempting prospect. However, most equation discovery tools rely on gradient methods, which offer limited control over parameters. An alternative approach is evolutionary equation discovery, which allows modification of almost every stage of the optimization. In this paper, we examine the modifications that can be introduced into the evolutionary operators of the equation discovery algorithm, taking inspiration from directed evolution techniques employed in fields such as chemistry and biology. The resulting approach, dubbed directed equation discovery, demonstrates a greater ability to converge towards accurate solutions than the conventional method. To support our findings, we present experiments based on the Burgers', wave, and Korteweg--de Vries equations.
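The toy below sketches the evolutionary side of equation discovery, a population of binary masks selecting terms from a library, scored by a least-squares fit plus a parsimony penalty, with a plain mutation-only loop; the directed-evolution modifications studied in the paper would alter these operators, and all names and the data-generating equation are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: trajectory of dx/dt = -2*x + 0.5*x**3, integrated by Euler.
dt, n = 1e-3, 4000
x = np.empty(n); x[0] = 1.0
for i in range(n - 1):
    x[i + 1] = x[i] + dt * (-2.0 * x[i] + 0.5 * x[i] ** 3)
dxdt = np.gradient(x, dt)

# Term library; an individual is a boolean mask selecting columns.
library = np.column_stack([x, x ** 2, x ** 3, np.sin(x), np.ones_like(x)])

def fitness(mask):
    if not mask.any():
        return np.inf
    coef, *_ = np.linalg.lstsq(library[:, mask], dxdt, rcond=None)
    resid = dxdt - library[:, mask] @ coef
    return np.mean(resid ** 2) + 1e-4 * mask.sum()   # accuracy + parsimony

def mutate(mask):
    child = mask.copy()
    child[rng.integers(mask.size)] ^= True            # flip one term on/off
    return child

pop = [rng.random(5) < 0.5 for _ in range(20)]
for _ in range(50):                                   # simple (mu + lambda) loop
    pop += [mutate(p) for p in pop]
    pop = sorted(pop, key=fitness)[:20]
print("selected terms:", pop[0], "fitness:", fitness(pop[0]))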
We propose a new model to address the overlooked problem of node clustering in simple hypergraphs. Simple hypergraphs are suitable when a node may not appear multiple times in the same hyperedge, as in co-authorship datasets. Our model assumes the existence of latent node groups and that hyperedges are conditionally independent given these groups. We first establish the generic identifiability of the model parameters. We then develop a variational approximation Expectation-Maximization algorithm for parameter inference and node clustering, and derive a statistical criterion for model selection. To illustrate the performance of our R package HyperSBM, we compare it with other node clustering methods on synthetic data generated from the model, as well as on data from a line-clustering experiment and a co-authorship dataset. As a by-product, our synthetic experiments demonstrate that the detectability thresholds for non-uniform sparse hypergraphs cannot be deduced from the uniform case.
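Schematically, a latent-group model of this kind can be written as follows; the notation is ours and the paper's exact parameterization may differ. Each node $i$ receives a latent group label $Z_i$, and each potential hyperedge is present independently given the labels:
\[
Z_i \overset{\text{iid}}{\sim} \mathrm{Mult}(1;\pi), \qquad
\Pr\bigl(\{i_1,\dots,i_k\} \in \mathcal{H} \mid Z\bigr) \;=\; B^{(k)}_{Z_{i_1},\dots,Z_{i_k}},
\]
where $\pi$ are the group proportions and $B^{(k)}$ is a symmetric probability tensor for size-$k$ hyperedges.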
An approach to parameter optimization for the low-rank matrix recovery method in hyperspectral imaging is discussed. We formulate an optimization problem with respect to the initial parameters of the low-rank matrix recovery method. The performance for different parameter settings is compared in terms of computation time and memory usage. The results are evaluated by computing the peak signal-to-noise ratio as a quantitative measure. We discuss the potential improvement in the performance of the noise-reduction method when the choice of initial values is optimized. The optimization method is tested on standard, openly available hyperspectral data sets including Indian Pines, Pavia Centre, and Pavia University.
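The peak signal-to-noise ratio used as the quantitative measure can be computed as in the minimal sketch below, which assumes the peak value is taken from the reference cube; the exact convention used in the paper may differ.

import numpy as np

def psnr(reference, estimate):
    """PSNR in dB between a reference hyperspectral cube and its denoised estimate."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    peak = np.max(np.abs(reference))
    return 10.0 * np.log10(peak ** 2 / mse)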
Linear statistics of point processes yield Monte Carlo estimators of integrals. While the simplest approach relies on a homogeneous Poisson point process, more regularly spread point processes, such as scrambled low-discrepancy sequences or determinantal point processes, can yield Monte Carlo estimators with fast-decaying mean square error. Following the intuition that more regular configurations result in lower integration error, we introduce the repulsion operator, which reduces clustering by slightly pushing the points of a configuration away from each other. Our main theoretical result is that applying the repulsion operator to a homogeneous Poisson point process yields an unbiased Monte Carlo estimator with lower variance than under the original point process. On the computational side, the evaluation of our estimator is only quadratic in the number of integrand evaluations and can be easily parallelized without any communication across tasks. We illustrate our variance reduction result with numerical experiments and compare it to popular Monte Carlo methods. Finally, we numerically investigate a few open questions on the repulsion operator. In particular, the experiments suggest that the variance reduction also holds when the operator is applied to other motion-invariant point processes.
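The baseline estimator referred to above (a linear statistic of a homogeneous Poisson point process, before any repulsion is applied) is sketched below; the repulsion operator itself is not shown, and the test integrand is our own choice.

import numpy as np

def poisson_linear_estimator(f, intensity, dim, rng):
    """Unbiased estimator of the integral of f over [0,1]^dim:
    (1/intensity) * sum_i f(x_i), since E[sum_i f(x_i)] = intensity * integral(f)."""
    n_points = rng.poisson(intensity)          # number of points in the unit cube
    points = rng.random((n_points, dim))
    return f(points).sum() / intensity

rng = np.random.default_rng(0)
f = lambda x: np.cos(2 * np.pi * x).prod(axis=1) + 1.0   # true integral = 1
estimates = [poisson_linear_estimator(f, 500.0, 2, rng) for _ in range(200)]
print(np.mean(estimates), np.var(estimates))

The variance reduction result above says that applying the repulsion operator to the underlying Poisson configuration, while keeping the same linear statistic, cannot increase this variance.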
Estimating causal effects from observational network data is a significant but challenging problem. Existing works on causal inference for observational network data lack an analysis of generalization bounds, which can theoretically support alleviating complex confounding bias and practically guide the design of learning objectives in a principled manner. To fill this gap, we derive a generalization bound for causal effect estimation in network scenarios by exploiting 1) a reweighting scheme based on the joint propensity score and 2) a representation learning scheme based on the Integral Probability Metric (IPM). We provide two perspectives on the generalization bound, in terms of reweighting and of representation learning, respectively. Motivated by the analysis of the bound, we propose a weighting regression method based on the joint propensity score augmented with representation learning. Extensive experimental studies on two real-world networks with semi-synthetic data demonstrate the effectiveness of our algorithm.
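One common instance of an IPM in representation-learning-based causal inference is the maximum mean discrepancy (MMD); the sketch below computes a biased estimate of the squared RBF-kernel MMD between treated and control representations, as a generic illustration rather than the exact objective used above, with all names hypothetical.

import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD with an RBF kernel between the
    representation distributions of treated (X) and control (Y) units."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
treated = rng.normal(0.0, 1.0, size=(100, 8))
control = rng.normal(0.5, 1.0, size=(120, 8))
print(rbf_mmd2(treated, control))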
When the initial stress field in geomaterial is considered, a nonzero resultant force arises from shallow tunnel excavation, which produces logarithmic terms in the complex potentials and further leads to a displacement singularity at infinity that violates real-world geo-engineering observations. The mechanical and mathematical reasons for such a displacement singularity in existing mechanical models are elaborated, and a new mechanical model is then proposed to eliminate this singularity by constraining the far-field ground surface displacement, converting the original unbalanced-resultant problem into an equilibrium one with mixed boundary conditions. To solve for stress and displacement in the new model, analytic continuation is applied to transform the mixed boundary conditions into a homogeneous Riemann-Hilbert problem with extra constraints, which is then solved using an approximate, iterative method with good numerical stability. Lanczos filtering is applied to the stress and displacement solution to reduce the Gibbs phenomenon caused by the abrupt change of boundary conditions along the ground surface. Several numerical cases are presented to verify the proposed mechanical model, and the results confirm that it successfully eliminates the displacement singularity caused by the unbalanced resultant, with good convergence and accuracy in obtaining stress and displacement for shallow tunnel excavation. A parametric investigation is subsequently conducted to study the influence of tunnel depth, lateral coefficient, and free surface range on the stress and displacement distributions in geomaterial.
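Lanczos filtering damps Gibbs oscillations by multiplying the $k$-th Fourier coefficient of a truncated series by the sigma factor $\operatorname{sinc}(k/N)$. The minimal sketch below demonstrates the effect on a square wave; the paper applies the same idea to its series solution along the ground surface, so the series and names here are purely illustrative.

import numpy as np

def lanczos_sigma(N):
    """Lanczos sigma factors sinc(k/N) for Fourier modes k = 1..N-1."""
    k = np.arange(1, N)
    return np.sinc(k / N)            # np.sinc(x) = sin(pi*x)/(pi*x)

# Truncated Fourier sine series of a square wave, with and without filtering.
N = 32
x = np.linspace(-np.pi, np.pi, 2001)
k = np.arange(1, N)
coeffs = np.where(k % 2 == 1, 4.0 / (np.pi * k), 0.0)   # square-wave coefficients
modes = np.sin(np.outer(k, x))
raw = coeffs @ modes
filtered = (coeffs * lanczos_sigma(N)) @ modes
print(np.max(np.abs(raw)), np.max(np.abs(filtered)))     # overshoot shrinks after filtering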
Deep learning has emerged as an effective solution for addressing the challenges of short-term voltage stability assessment (STVSA) in power systems. However, existing deep learning-based STVSA approaches face limitations in adapting to topological changes, labeling samples, and handling small datasets. To overcome these challenges, this paper proposes a novel STVSA method based on phasor measurement unit (PMU) measurements and deep transfer learning. The method leverages the real-time dynamic information captured by PMUs to create an initial dataset. It employs temporal ensembling for sample labeling and least squares generative adversarial networks (LSGAN) for data augmentation, enabling effective deep learning on small-scale datasets. Additionally, the method enhances adaptability to topological changes by exploring connections between different faults. Experimental results on the IEEE 39-bus test system demonstrate that the proposed method improves model evaluation accuracy by approximately 20% through transfer learning, exhibiting strong adaptability to topological changes. Leveraging the self-attention mechanism of the Transformer model, this approach offers significant advantages over shallow learning methods and other deep learning-based approaches.
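For reference, the least-squares GAN objectives underlying the LSGAN data augmentation replace the cross-entropy terms of a standard GAN with quadratic ones; in the usual 0-1 coding (a background fact about LSGAN, not a detail specific to this abstract) the discriminator $D$ and generator $G$ are trained as
\[
\min_{D}\; \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[(D(x)-1)^2\bigr] + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\bigl[D(G(z))^2\bigr],
\qquad
\min_{G}\; \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\bigl[(D(G(z))-1)^2\bigr],
\]
which tends to stabilise training on the small-scale datasets targeted here.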