Floating-point time series data are generated in prohibitively large volumes and at an unprecedentedly high rate. Efficient, compact, and lossless compression of such data is of great importance for a wide range of scenarios. Most existing lossless floating-point compression methods are based on the XOR operation, but they do not fully exploit the trailing zeros, which usually results in an unsatisfactory compression ratio. This paper proposes an Erasing-based Lossless Floating-point compression algorithm, i.e., Elf. The main idea of Elf is to erase the last few bits (i.e., set them to zero) of floating-point values, so that the XORed values contain many trailing zeros. The challenges of the erasing-based method are three-fold. First, how to quickly determine the erased bits? Second, how to losslessly recover the original data from the erased ones? Third, how to compactly encode the erased data? Through rigorous mathematical analysis, Elf can directly determine the erased bits and restore the original values without losing any precision. To further improve the compression ratio, we propose a novel encoding strategy for the XORed values with many trailing zeros. Furthermore, observing that the values in a time series usually have similar significand counts, we propose an upgraded version of Elf named Elf+ that optimizes the significand count encoding strategy, which further improves the compression ratio and reduces the running time. Both Elf and Elf+ work in a streaming fashion. They take only O(N) time (where N is the length of a time series) and O(1) space, and achieve a notable compression ratio with a theoretical guarantee. Extensive experiments using 22 datasets show the powerful performance of Elf and Elf+ compared with 9 advanced competitors for both double-precision and single-precision floating-point values.
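To make the erase-then-XOR idea concrete, the following Python sketch assumes a fixed number of retained mantissa bits and simply zeroes the rest before XORing with the previous value; the actual Elf derives the retained count per value from its decimal significand so that the original can be restored losslessly, which this simplification does not attempt.

```python
import struct

def double_to_bits(x: float) -> int:
    # Reinterpret a double as its 64-bit IEEE-754 representation.
    return struct.unpack('>Q', struct.pack('>d', x))[0]

def erase_low_bits(x: float, keep_mantissa_bits: int) -> int:
    # Zero all but the top `keep_mantissa_bits` of the 52-bit mantissa.
    # (Hypothetical fixed count; Elf chooses the count per value so that
    # the original value can be recovered exactly.)
    bits = double_to_bits(x)
    mask = ~((1 << (52 - keep_mantissa_bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    return bits & mask

def xor_stream(values, keep_mantissa_bits=20):
    prev = 0
    for v in values:
        cur = erase_low_bits(v, keep_mantissa_bits)
        diff = cur ^ prev          # XOR with the previous (erased) value
        trailing_zeros = 64 if diff == 0 else (diff & -diff).bit_length() - 1
        yield diff, trailing_zeros # long trailing-zero runs encode compactly
        prev = cur
```

Because erasing low-order mantissa bits aligns consecutive values, the XORed words carry long trailing-zero runs, which is exactly what the proposed encoding strategy exploits.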
To minimize the average of a set of log-convex functions, the stochastic Newton method iteratively updates its estimate using subsampled versions of the full objective's gradient and Hessian. We contextualize this optimization problem as sequential Bayesian inference on a latent state-space model with a discriminatively-specified observation process. Applying Bayesian filtering then yields a novel optimization algorithm that considers the entire history of gradients and Hessians when forming an update. We establish matrix-based conditions under which the effect of older observations diminishes over time, in a manner analogous to Polyak's heavy ball momentum. We illustrate various aspects of our approach with an example and review other relevant innovations for the stochastic Newton method.
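For reference, a minimal sketch of the subsampled (stochastic) Newton update that the paper takes as its starting point is given below; the Bayesian-filtering variant that aggregates the entire history of gradients and Hessians is not reproduced. The callables grad_fn and hess_fn are placeholders for user-supplied per-batch derivatives.

```python
import numpy as np

def stochastic_newton_step(theta, X, y, grad_fn, hess_fn, batch_size, rng):
    # Draw a subsample and form the subsampled gradient and Hessian.
    idx = rng.choice(len(X), size=batch_size, replace=False)
    g = grad_fn(theta, X[idx], y[idx])
    H = hess_fn(theta, X[idx], y[idx])
    # Newton update using only the current subsample (no history).
    return theta - np.linalg.solve(H, g)
```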
Ghost, or fictitious, points make it possible to capture boundary conditions that are not located on the finite difference grid. In this paper, we explore the impact of ghost points on the stability of the explicit Euler finite difference scheme in the context of the diffusion equation. In particular, we consider the case of a one-touch option under the Black-Scholes model. The observations and results remain valid for a much wider range of financial contracts and models.
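A minimal illustration of the ghost-point idea for the diffusion equation is sketched below: the barrier (e.g., the one-touch level) lies between the first two grid nodes, and the ghost value at the first node is chosen by linear interpolation so that the interpolated solution matches the boundary value at the barrier. This is one common ghost-point construction, not necessarily the exact scheme analyzed in the paper.

```python
import numpy as np

def explicit_euler_with_ghost(u0, x, dt, nu, steps, barrier, barrier_value=0.0):
    # Explicit Euler for u_t = nu * u_xx on a grid x[0] < x[1] < ...,
    # with a Dirichlet condition u(barrier) = barrier_value where
    # x[0] < barrier < x[1]; x[0] acts as the ghost node.
    u = u0.copy()
    h = x[1] - x[0]
    w = (x[1] - barrier) / h  # interpolation weight of the ghost node
    for _ in range(steps):
        # Choose the ghost value so linear interpolation hits barrier_value.
        u[0] = (barrier_value - (1.0 - w) * u[1]) / w
        u[-1] = u[-2]  # simple outflow condition at the far boundary
        lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
        u[1:-1] = u[1:-1] + dt * nu * lap
    return u
```

The closer the barrier sits to the ghost node, the larger the ghost value becomes, which is precisely the mechanism whose effect on the stability of the explicit scheme is studied.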
Statistical depth functions provide measures of the outlyingness, or centrality, of the elements of a space with respect to a distribution. Depth is a nonparametric concept applicable to spaces of any dimension, for instance, multivariate and functional spaces. Liu and Singh (1993) presented a multivariate two-sample test based on depth ranks. We dedicate this paper to improving the power of the associated test statistic and to extending its applicability to functional data. In doing so, we obtain a more natural test statistic that is symmetric in both samples. We derive the asymptotic null distribution of the proposed test statistic, also proving the validity of the testing procedure for functional data. Finally, the finite-sample performance of the test for functional data is illustrated by means of a simulation study and a real data analysis on annual temperature curves of ocean drifters.
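As an illustration of the depth-rank construction underlying this line of work, the sketch below computes a Liu-Singh style quality index using Mahalanobis depth; this is a generic multivariate version, not the symmetrized functional statistic proposed in the paper.

```python
import numpy as np

def mahalanobis_depth(points, reference):
    # Depth of each row of `points` w.r.t. the empirical distribution of `reference`.
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    diff = points - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return 1.0 / (1.0 + d2)

def quality_index(X, Y):
    # Average proportion of X-points whose depth (w.r.t. X) does not exceed
    # the depth of a Y-point; values far from 1/2 suggest the samples differ.
    dX = mahalanobis_depth(X, X)
    dY = mahalanobis_depth(Y, X)
    return float(np.mean([(dX <= dy).mean() for dy in dY]))
```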
Fixed-point iteration algorithms like RTA (response time analysis) and QPA (quick processor-demand analysis) are arguably the most popular ways of solving schedulability problems for preemptive uniprocessor FP (fixed-priority) and EDF (earliest-deadline-first) systems. Several IP (integer program) formulations have also been proposed for these problems, but it is unclear whether the algorithms for solving these formulations are related to RTA and QPA. By discovering connections between the problems and the algorithms, we show that RTA and QPA are, in fact, suboptimal cutting-plane algorithms for specific IP formulations of FP and EDF schedulability, where optimality is defined with respect to convergence rate. We propose optimal cutting-plane algorithms for these IP formulations. We compare the new algorithms with RTA and QPA on large collections of synthetic systems to gauge the improvement in convergence rates and running times.
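For context, the classical RTA fixed-point iteration that the paper reinterprets as a cutting-plane method is sketched below, assuming implicit deadlines and tasks indexed by decreasing priority; this is the standard textbook recurrence, not the proposed optimal cutting-plane algorithm.

```python
from math import ceil

def response_time(C, T, i):
    # Worst-case response time of task i under preemptive fixed priorities.
    # C[j], T[j]: WCET and period of task j; tasks 0..i-1 have higher priority.
    R = C[i]
    while True:
        R_next = C[i] + sum(ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R:
            return R            # fixed point reached: R is the response time
        if R_next > T[i]:
            return None         # exceeds the (implicit) deadline: unschedulable
        R = R_next
```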
Due to their computational complexity, graph cuts for cluster detection and identification are mostly used in the form of convex relaxations. We propose to utilize the original graph cuts, such as the Ratio, Normalized, or Cheeger Cut, to detect clusters in weighted undirected graphs by restricting the graph cut minimization to the subset of $st$-MinCut partitions. By incorporating a vertex selection technique and restricting the optimization to tightly connected clusters, we combine the efficient computability of $st$-MinCuts and the intrinsic properties of Gomory-Hu trees with the cut quality of the original graph cuts, leading to a runtime that is linear in the number of vertices and quadratic in the number of edges. Already in simple scenarios, the resulting algorithm, Xist, empirically approximates graph cut values better than spectral clustering or comparable algorithms, even on large network datasets. We showcase its applicability by segmenting images from cell biology and provide empirical studies of runtime and classification rate.
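The restriction to $st$-MinCut partitions can be illustrated with the simplified stand-in below, which evaluates the Ratio Cut only on partitions induced by minimum $s$-$t$ cuts between candidate terminal pairs; Xist's actual vertex selection and Gomory-Hu machinery are not reproduced, and edge weights are assumed to be stored in a 'capacity' attribute.

```python
import networkx as nx
from itertools import combinations

def ratio_cut_value(G, S, weight='capacity'):
    # Ratio Cut of the partition (S, V \ S) in a weighted undirected graph.
    cut = sum(d[weight] for u, v, d in G.edges(data=True) if (u in S) != (v in S))
    return cut * (1.0 / len(S) + 1.0 / (G.number_of_nodes() - len(S)))

def best_st_mincut_partition(G, terminals):
    # Minimize the Ratio Cut over partitions induced by s-t minimum cuts
    # between the given candidate terminals.
    best = None
    for s, t in combinations(terminals, 2):
        _, (S, _) = nx.minimum_cut(G, s, t, capacity='capacity')
        val = ratio_cut_value(G, S)
        if best is None or val < best[0]:
            best = (val, set(S))
    return best
```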
We propose a novel Hadamard integrator for the self-adjoint time-dependent wave equation in an inhomogeneous medium. First, we create a new asymptotic series based on the Gelfand-Shilov function, dubbed Hadamard's ansatz, to approximate the Green's function of the time-dependent wave equation. Second, incorporating the leading term of Hadamard's ansatz into the Kirchhoff-Huygens representation, we develop an original Hadamard integrator for the Cauchy problem of the time-dependent wave equation and derive the corresponding Lagrangian formulation in geodesic polar coordinates. Third, to construct the Hadamard integrator in the Lagrangian formulation efficiently, we use a short-time ray tracing method to obtain wavefront locations accurately, and we further develop fast algorithms to compute Chebyshev-polynomial-based low-rank representations of both wavefront locations and variants of Hadamard coefficients. Fourth, equipped with these low-rank representations, we apply the Hadamard integrator to efficiently solve time-dependent wave equations with highly oscillatory initial conditions, where the time step size is independent of the initial conditions. By judiciously choosing the medium-dependent time step, our new Hadamard integrator can propagate wave fields beyond caustics implicitly and advance spatially overturning waves in time naturally. Moreover, since the integrator itself does not depend on the initial conditions, it can be reused for many different initial conditions once constructed. Both two-dimensional and three-dimensional numerical examples illustrate the accuracy and performance of the proposed method.
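Since the construction relies on short-time ray tracing, a minimal sketch of standard ray tracing in an inhomogeneous medium is given below, using the Hamiltonian H(x, p) = c(x)|p| and classical RK4; the paper's Lagrangian formulation in geodesic polar coordinates and its Hadamard-coefficient computations are considerably more involved. The callables speed and grad_speed are assumed to return c(x) and its gradient.

```python
import numpy as np

def trace_ray(x0, p0, speed, grad_speed, dt, steps):
    # Ray equations for H(x, p) = c(x)|p|:
    #   dx/dt =  c(x) p / |p|,   dp/dt = -|p| grad c(x)
    x, p = np.asarray(x0, dtype=float), np.asarray(p0, dtype=float)

    def rhs(x, p):
        c, gc, n = speed(x), grad_speed(x), np.linalg.norm(p)
        return c * p / n, -n * gc

    for _ in range(steps):
        k1x, k1p = rhs(x, p)
        k2x, k2p = rhs(x + 0.5 * dt * k1x, p + 0.5 * dt * k1p)
        k3x, k3p = rhs(x + 0.5 * dt * k2x, p + 0.5 * dt * k2p)
        k4x, k4p = rhs(x + dt * k3x, p + dt * k3p)
        x = x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        p = p + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
    return x, p
```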
Optimal transport has gained much attention in image processing, with applications such as computer vision, image interpolation, and medical image registration. Recently, Bredies et al. (ESAIM:M2AN 54:2351-2382, 2020) and Schmitzer et al. (IEEE T MED IMAGING 39:1626-1635, 2019) established a framework of optimal transport regularization for dynamic inverse problems. In this paper, we incorporate the Wasserstein distance, together with total variation, into static inverse problems as a prior regularization. The Wasserstein distance, formulated via the Benamou-Brenier energy, measures the similarity between the given template and the reconstructed image. We also analyze the existence of solutions of this variational problem in the space of Radon measures. Moreover, a first-order primal-dual algorithm is constructed to solve this general imaging problem using a specific grid strategy. Finally, numerical experiments on undersampled MRI reconstruction show that the proposed model recovers images with high quality and preserved structure.
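The first-order primal-dual algorithm referred to here is, in its generic Chambolle-Pock form, sketched below for a problem min_x F(Kx) + G(x); the paper instantiates it with the Benamou-Brenier and total-variation terms on a specific grid, which is not reproduced. The proximal operators and the operator pair K, K_adj are placeholders supplied by the user.

```python
import numpy as np

def primal_dual(K, K_adj, prox_F_star, prox_G, x0, tau, sigma, iters, theta=1.0):
    # Chambolle-Pock iteration for min_x F(K x) + G(x).
    x = x0.copy()
    x_bar = x0.copy()
    y = np.zeros_like(K(x0))
    for _ in range(iters):
        y = prox_F_star(y + sigma * K(x_bar), sigma)   # dual ascent step
        x_new = prox_G(x - tau * K_adj(y), tau)        # primal descent step
        x_bar = x_new + theta * (x_new - x)            # over-relaxation
        x = x_new
    return x
```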
A population-averaged additive subdistribution hazards model is proposed to assess the marginal effects of covariates on the cumulative incidence function and to analyze correlated failure time data subject to competing risks. This approach extends the population-averaged additive hazards model by accommodating potentially dependent censoring due to competing events other than the event of interest. Assuming an independent working correlation structure, an estimating-equations approach is outlined to estimate the regression coefficients, and a new sandwich variance estimator is proposed. The proposed sandwich variance estimator accounts for the correlations both between failure times and between censoring times within each cluster, and is robust to misspecification of the unknown dependency structure. We further develop goodness-of-fit tests to assess the adequacy of the additive structure of the subdistribution hazards, both for the overall model and for each covariate. Simulation studies are conducted to investigate the performance of the proposed methods in finite samples. We illustrate our methods using data from the STrategies to Reduce Injuries and Develop confidence in Elders (STRIDE) trial.
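The new sandwich variance estimator follows the general cluster-robust form A^{-1} B A^{-1}; the sketch below shows this generic GEE-style construction from per-cluster contributions, not the paper's exact estimator for the additive subdistribution hazards model.

```python
import numpy as np

def sandwich_variance(bread_per_cluster, score_per_cluster):
    # A: sum of per-cluster derivative ("bread") matrices of the estimating equations.
    # B: sum of outer products of per-cluster estimating-function ("score") contributions.
    A = sum(bread_per_cluster)
    B = sum(np.outer(u, u) for u in score_per_cluster)
    A_inv = np.linalg.inv(A)
    return A_inv @ B @ A_inv
```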
Long-span bridges are subjected to a multitude of dynamic excitations during their lifespan. To account for their effects on the structural system, several load models are used during design to simulate the conditions the structure is likely to experience. These models are based on different simplifying assumptions and are generally guided by parameters that are stochastically identified from measurement data, making their outputs inherently uncertain. This paper presents a probabilistic physics-informed machine-learning framework based on Gaussian process regression for reconstructing dynamic forces from measured deflections, velocities, or accelerations. The model can work with incomplete and contaminated data and offers a natural regularization approach to account for noise in the measurement system. An application of the developed framework is given by an aerodynamic analysis of the Great Belt East Bridge. The aerodynamic response is calculated numerically based on the quasi-steady model, and the underlying forces are reconstructed from sparse and noisy measurements. Results indicate good agreement between the applied and the predicted dynamic loads and can be extended to calculate global responses and the resulting internal forces. Uses of the developed framework include validation of design models and assumptions, as well as prognosis of responses to assist in damage detection and structural health monitoring.
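A minimal sketch of the Gaussian process regression core, in which the measurement-noise variance plays the role of the natural regularization mentioned above, is given below; the physics-informed kernels linking measured responses to the underlying forces are specific to the paper and are replaced here by a plain RBF kernel for illustration.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    # Squared-exponential kernel between two sets of input points.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X_train, y_train, X_test, noise_var, kernel=rbf_kernel):
    # Posterior mean and covariance; noise_var regularizes the noisy measurements.
    K = kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_s = kernel(X_test, X_train)
    K_ss = kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, cov
```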
Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computation and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I summarize three main directions of deep supervised hashing methods, with several comments made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
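To make the pairwise objective concrete, the sketch below implements a standard contrastive-style loss over relaxed hash codes that pulls codes of similar images together and pushes dissimilar ones apart; the "shadow" mechanism of SRH is not reproduced, and the margin value is an illustrative choice.

```python
import torch
import torch.nn.functional as F

def pairwise_hash_loss(codes, labels, margin=2.0):
    # codes: (N, K) relaxed hash codes from the CNN; labels: (N,) class labels.
    dist = torch.cdist(codes, codes)                        # pairwise Euclidean distances
    same = (labels[:, None] == labels[None, :]).float()
    loss_sim = same * dist.pow(2)                           # similar pairs: pull together
    loss_dis = (1.0 - same) * F.relu(margin - dist).pow(2)  # dissimilar: push beyond margin
    return (loss_sim + loss_dis).mean()
```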