Many astrophysical phenomena are time-varying, in the sense that their brightness changes over time. In the case of periodic stars, previous approaches assumed that changes in period, amplitude, and phase are well described by either parametric or piecewise-constant functions. In this paper, we introduce a new mathematical model for the description of so-called modulated light curves, as found in periodic variable stars that exhibit smoothly time-varying parameters such as amplitude, frequency, and/or phase. Our model accounts for a smoothly time-varying trend and a harmonic sum with smoothly time-varying weights. In this sense, our approach is flexible because it avoids restrictive assumptions (parametric or piecewise-constant) about the functional form of the trend and amplitudes. We apply our methodology to the light curve of a pulsating RR Lyrae star characterised by the Blazhko effect. To estimate the time-varying parameters of our model, we develop a semi-parametric method for unequally spaced time series. The estimation of our time-varying curves translates into the estimation of time-invariant parameters that can be performed by ordinary least squares, with two advantages: modeling and forecasting can be implemented in a parametric fashion, and we are able to cope with missing observations. To detect serial correlation in the residuals of our fitted model, we derive the mathematical definition of the spectral density for unequally spaced time series. The proposed method is designed to estimate the smoothly time-varying trend and amplitudes, as well as the spectral density function of the errors. We provide simulation results and applications to real data.
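The reduction to ordinary least squares can be illustrated with a minimal sketch: expanding the trend and the harmonic amplitude in a low-order polynomial basis (a hypothetical basis choice for illustration; the paper's semi-parametric construction may differ) makes the model linear in time-invariant coefficients, which can then be fitted on unequally spaced observation times.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unequally spaced observation times, as in astronomical light curves.
t = np.sort(rng.uniform(0.0, 50.0, size=400))
f0 = 0.2  # base frequency, assumed known for this sketch

# Simulate a light curve whose trend and first-harmonic amplitude
# both drift smoothly in time.
amp = 1.0 + 0.3 * (t / 50.0)
trend = 0.1 + 0.002 * t
y = trend + amp * np.cos(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(t.size)

# Expand trend and amplitudes in a low-order polynomial basis; the model
# becomes linear in time-invariant coefficients and is fitted by OLS,
# which copes naturally with irregular sampling.
P = np.vander(t / 50.0, 3, increasing=True)           # basis evaluated at each time
c, s = np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)
X = np.hstack([P, P * c[:, None], P * s[:, None]])    # trend + modulated harmonics
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
rms = float(np.sqrt(np.mean(resid**2)))
```

The fitted `beta` holds the time-invariant coefficients; residuals can then be inspected for serial correlation, as the abstract describes.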
We study the shape reconstruction of an inclusion from faraway measurements of the associated electric field. This is an inverse problem of practical importance in biomedical imaging and is known to be notoriously ill-posed. By incorporating Drude's model of the permittivity parameter, we propose a novel reconstruction scheme that exploits the plasmon resonance and its significantly enhanced resonant field. We conduct a delicate sensitivity analysis to establish a sharp relationship between the sensitivity of the reconstruction and the plasmon resonance. It is shown that when plasmon resonance occurs, the sensitivity functional blows up, which ensures a more robust and effective reconstruction. We then combine Tikhonov regularization with the Laplace approximation to solve the inverse problem; this organic hybridization of deterministic and stochastic methods can quickly compute the minimizer while capturing the uncertainty of the solution. We conduct extensive numerical experiments to illustrate the promising features of the proposed reconstruction scheme.
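The Tikhonov-plus-Laplace combination can be sketched on a toy linear forward operator (a stand-in only; the actual field-to-shape map in the paper is nonlinear): the Tikhonov minimizer doubles as the MAP point of a Gaussian posterior, and the Laplace approximation reads uncertainty off the inverse Hessian.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear inverse problem y = A x + noise.
n = 30
A = np.array([[np.exp(-abs(i - j) / 3.0) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0, np.pi, n))
sigma = 1e-3
y = A @ x_true + sigma * rng.standard_normal(n)

# Tikhonov regularization: minimize ||A x - y||^2 + lam ||x||^2.
lam = 1e-4
H = A.T @ A + lam * np.eye(n)          # Hessian of the negative log-posterior
x_map = np.linalg.solve(H, A.T @ y)    # Tikhonov minimizer = MAP estimate

# Laplace approximation: posterior ~ N(x_map, sigma^2 * H^{-1}); the
# diagonal of the covariance quantifies per-component uncertainty.
cov = sigma**2 * np.linalg.inv(H)
post_std = np.sqrt(np.diag(cov))
err = float(np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))
```

The same two quantities, a fast minimizer and a cheap uncertainty estimate, are what the hybrid scheme delivers for the shape reconstruction problem.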
One of the key problems in tensor completion is determining the number of uniformly random sample entries required for a recovery guarantee. The main aim of this paper is to study $n_1 \times n_2 \times n_3$ third-order tensor completion based on the transformed tensor singular value decomposition, and to provide a bound on the number of required sample entries. Our approach makes use of the multi-rank of the underlying tensor, rather than its tubal rank, in the bound. In numerical experiments on synthetic and imaging data sets, we demonstrate the effectiveness of our proposed bound on the number of sample entries. Moreover, our theoretical results are valid for any unitary transformation applied along the third dimension under the transformed tensor singular value decomposition.
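A minimal sketch of the multi-rank notion under a transformed t-SVD, here taking the DFT along the third dimension as the unitary transform (any unitary transform would play the same role): the multi-rank is the vector of ranks of the transformed frontal slices, and the tubal rank is its maximum.

```python
import numpy as np

rng = np.random.default_rng(2)

# Build a low-tubal-rank n1 x n2 x n3 tensor: in the transformed domain,
# every frontal slice is a rank-r matrix.
n1, n2, n3, r = 10, 8, 5, 2
hat_T = np.empty((n1, n2, n3), dtype=complex)
for k in range(n3):
    hat_T[:, :, k] = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

# Map back to the original domain with the (normalized) inverse DFT,
# playing the role of the inverse unitary transform.
T = np.fft.ifft(hat_T, axis=2, norm="ortho")

# Multi-rank: ranks of the transformed frontal slices. Bounds based on
# the multi-rank can be tighter than those based on the tubal rank alone.
slices = np.fft.fft(T, axis=2, norm="ortho")
multi_rank = [int(np.linalg.matrix_rank(slices[:, :, k])) for k in range(n3)]
tubal_rank = max(multi_rank)
```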
This paper presents an algorithm for iterative joint channel parameter (carrier phase, Doppler shift, and Doppler rate) estimation and decoding of transmissions over channels affected by Doppler shift and Doppler rate, using a distributed receiver. The algorithm is derived by applying the sum-product algorithm (SPA) to a factor graph representing the joint a posteriori distribution of the information symbols and channel parameters given the channel output. We present two methods for dealing with the intractable messages of the SPA. In the first approach, we use particle filtering with sequential importance sampling (SIS) to estimate the unknown parameters, and we propose a method for fine-tuning the particles to improve convergence. In the second approach, we approximate our model with a random-walk phase model, followed by a phase-tracking algorithm and polynomial regression to estimate the unknown parameters. We derive the weighted Bayesian Cramér-Rao bounds (WBCRBs) for joint carrier phase, Doppler shift, and Doppler rate estimation, which take into account the prior distribution of the estimated parameters and are accurate lower bounds for all considered signal-to-noise ratio (SNR) values. Numerical results for the bit error rate (BER) and the mean-square error (MSE) of parameter estimation suggest that phase tracking with the random-walk model slightly outperforms particle filtering. However, particle filtering has a lower computational cost than the random-walk-model-based method.
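The SIS step of the first approach can be sketched on a toy phase-tracking problem (known pilot symbols and a random-walk phase only; the paper's full model also includes Doppler shift and Doppler rate): particles are propagated through the phase prior and reweighted by the observation likelihood.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: pilot symbols s_k seen through a drifting carrier phase,
# y_k = s_k * exp(j theta_k) + noise, with theta_k a random walk.
K, Np = 200, 500
s = np.ones(K)                          # all-ones pilots for simplicity
sigma_theta, sigma_n = 0.01, 0.1
theta = np.cumsum(sigma_theta * rng.standard_normal(K)) + 0.5
noise = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
y = s * np.exp(1j * theta) + sigma_n * noise

# Sequential importance sampling over the phase.
particles = rng.uniform(-np.pi, np.pi, Np)
weights = np.full(Np, 1.0 / Np)
est = np.empty(K)
for k in range(K):
    particles += sigma_theta * rng.standard_normal(Np)      # prior propagation
    ll = -np.abs(y[k] - s[k] * np.exp(1j * particles)) ** 2 / sigma_n**2
    weights *= np.exp(ll - ll.max())                        # likelihood reweighting
    weights /= weights.sum()
    est[k] = np.angle(np.sum(weights * np.exp(1j * particles)))
    if 1.0 / np.sum(weights**2) < Np / 2:                   # resample on low ESS
        idx = rng.choice(Np, size=Np, p=weights)
        particles, weights = particles[idx], np.full(Np, 1.0 / Np)

# Phase RMSE with wrap-around taken into account.
rmse = float(np.sqrt(np.mean(np.angle(np.exp(1j * (est - theta))) ** 2)))
```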
Trefftz methods are high-order Galerkin schemes in which all discrete functions are elementwise solutions of the PDE to be approximated. They are viable only when the PDE is linear and its coefficients are piecewise constant. We introduce a 'quasi-Trefftz' discontinuous Galerkin method for the discretisation of the acoustic wave equation with piecewise-smooth wavespeed: the discrete functions are elementwise approximate PDE solutions. We show that the new discretisation enjoys the same excellent approximation properties as the classical Trefftz one, and we prove stability and high-order convergence of the DG scheme. We introduce polynomial basis functions for the new discrete spaces and describe a simple algorithm to compute them. The technique we propose is inspired by the generalised plane waves previously developed for time-harmonic problems with variable coefficients; it turns out that, for the time-domain wave equation under consideration, the quasi-Trefftz approach allows for polynomial basis functions.
We consider a time-varying first-order autoregressive model with irregular innovations, where we assume that the coefficient function is H\"{o}lder continuous. To estimate this function, we use a quasi-maximum-likelihood-based approach. Precise control of this method demands a delicate analysis of extremes of certain weakly dependent processes; our main result is a concentration inequality for such quantities. Based on our analysis, upper and matching minimax lower bounds are derived, showing the optimality of our estimators. Unlike the regular case, the information-theoretic complexity depends both on the smoothness and on an additional shape parameter characterizing the irregularity of the underlying distribution. Both the results and the ideas behind the proofs differ substantially from classical and more recent approaches to statistics and inference for locally stationary processes.
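The estimation problem can be illustrated with a simplified stand-in: simulate a tvAR(1) process with compactly supported (hence "irregular") innovations and recover the coefficient function by localized least squares (a cruder surrogate for the quasi-maximum-likelihood estimator analysed in the paper).

```python
import numpy as np

rng = np.random.default_rng(4)

# tvAR(1): X_t = a(t/T) X_{t-1} + eps_t, with a smooth coefficient
# function and uniform (non-Gaussian, compactly supported) innovations.
T = 5000
a = lambda u: 0.5 + 0.3 * np.sin(2 * np.pi * u)
eps = rng.uniform(-1.0, 1.0, T)
X = np.zeros(T)
for t in range(1, T):
    X[t] = a(t / T) * X[t - 1] + eps[t]

def a_hat(u, h=0.05):
    """Localized least-squares estimate of a(u): regress X_t on X_{t-1}
    over the window |t/T - u| <= h."""
    ts = np.arange(1, T)
    idx = ts[np.abs(ts / T - u) <= h]
    x_prev, x_cur = X[idx - 1], X[idx]
    return float(np.sum(x_prev * x_cur) / np.sum(x_prev**2))

errs = [abs(a_hat(u) - a(u)) for u in (0.25, 0.5, 0.75)]
```

The bandwidth `h` plays the role of the localization inherent in any estimator of a H\"{o}lder-continuous coefficient function; its optimal choice is exactly what the minimax analysis in the paper quantifies.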
We propose a novel multibody dynamics simulation framework that can efficiently deal with large-dimensional, complementarity-based multi-contact conditions. Typical contact simulation approaches perform contact impulse-level fixed-point iteration (IL-FPI), which suffers from high time complexity due to large-size matrix inversion and multiplication, as well as from susceptibility to ill-conditioned contact situations. To circumvent this, we propose a novel framework based on velocity-level fixed-point iteration (VL-FPI), which, by utilizing a certain surrogate dynamics and contact nodalization (with virtual nodes), achieves not only inter-contact decoupling but also inter-axis decoupling (i.e., contact diagonalization). This enables us to solve the contact problem in one shot and in parallel during each VL-FPI iteration loop, while the surrogate dynamics structure allows us to avoid large-size/dense matrix inversion and multiplication, thereby significantly speeding up the simulation with improved convergence properties. We theoretically show that the solution of our framework is consistent with that of the original problem and, further, elucidate mathematical conditions for the convergence of our proposed solver. The performance and properties of our simulation framework are demonstrated and experimentally validated in various large-dimensional multi-contact scenarios, including deformable objects.
We consider parameter estimation for a linear parabolic second-order stochastic partial differential equation (SPDE) in two space dimensions, driven by two types of $Q$-Wiener processes, based on high-frequency data in time and space. We first estimate the parameters appearing in the coordinate process of the SPDE using a minimum contrast estimator based on data thinned with respect to space, and then construct an approximate coordinate process of the SPDE. Furthermore, we propose estimators of the coefficient parameters of the SPDE utilizing the approximate coordinate process, based on data thinned with respect to time. We also give some simulation results.
The solution of the classical Smagorinsky model is an approximation to a (resolved) mean velocity. Since it is an eddy viscosity model, it cannot represent a flow of energy from unresolved fluctuations to the (resolved) mean velocity. The model has recently been modified to incorporate this energy flow while remaining well-posed. Herein we first develop some basic properties of the modified model. Next, we perform a complete numerical analysis of two algorithms for its approximation. Both algorithms are tested and shown to be effective.
Continuous determinantal point processes (DPPs) are a class of repulsive point processes on $\mathbb{R}^d$ with many statistical applications. Although an explicit expression for their density is known, it is too complicated to be used directly for maximum likelihood estimation. In the stationary case, an approximation using Fourier series has been suggested, but it is limited to rectangular observation windows and no theoretical results support it. In this contribution, we investigate a different way to approximate the likelihood by studying its asymptotic behaviour as the observation window grows towards $\mathbb{R}^d$. This new approximation is not limited to rectangular windows, is faster to compute than the previous one, requires no tuning parameter, and comes with theoretical justification. It moreover provides an explicit formula for estimating the asymptotic variance of the associated estimator. Its performance is assessed in a simulation study on standard parametric models on $\mathbb{R}^d$ and compares favourably to common alternative estimation methods for continuous DPPs.
Federated learning is a distributed machine learning method that aims to preserve the privacy of sample features and labels. In a federated learning system, ID-based sample alignment approaches are usually applied, with little effort made to protect ID privacy. In real-life applications, however, the confidentiality of sample IDs, which are the strongest row identifiers, is also drawing attention from many participants. To address these concerns about ID privacy, this paper formally proposes the notion of asymmetrical vertical federated learning and illustrates how to protect sample IDs. The standard private set intersection protocol is adapted to achieve the asymmetrical ID-alignment phase in an asymmetrical vertical federated learning system, and a Pohlig-Hellman realization of the adapted protocol is provided. This paper also presents a "genuine with dummy" approach to achieving asymmetrical federated model training. To illustrate its application, a federated logistic regression algorithm is provided as an example. Experiments are also conducted to validate the feasibility of this approach.
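The commutative-encryption idea behind a Pohlig-Hellman realization of the ID-alignment phase can be sketched as follows (an insecure toy with an illustrative modulus and toy IDs, not the paper's protocol or a production implementation): each party encrypts as $E_k(x) = H(x)^k \bmod p$, and because the two exponentiations commute, matching double encryptions reveal the ID intersection to one side only.

```python
import hashlib
import secrets

# Illustrative-only modulus (a Mersenne prime); a real deployment would
# use a proper safe prime and exponents coprime to p - 1.
p = 2**61 - 1

def h(item: str) -> int:
    """Hash an ID into the multiplicative group mod p."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % p

ka = secrets.randbelow(p - 2) + 1      # party A's secret exponent
kb = secrets.randbelow(p - 2) + 1      # party B's secret exponent

ids_a = {"u1", "u2", "u3", "u5"}       # toy sample IDs
ids_b = {"u2", "u3", "u4"}

# A sends H(x)^ka; B raises each to kb and returns the result. B also
# sends H(y)^kb, which A raises to ka. Since (H^ka)^kb = (H^kb)^ka,
# equal double encryptions identify common IDs -- revealed to A only,
# which is the asymmetric direction of the alignment.
enc_a_then_b = {pow(pow(h(x), ka, p), kb, p): x for x in ids_a}
enc_b_then_a = {pow(pow(h(y), kb, p), ka, p) for y in ids_b}
intersection = {enc_a_then_b[c] for c in enc_a_then_b if c in enc_b_then_a}
```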