This paper introduces the statistical analysis of Jacobi frequency-varying Long Range Dependence (LRD) functional time series on connected and compact two-point homogeneous spaces. The convergence to zero, in the Hilbert-Schmidt operator norm, of the integrated bias of the periodogram operator is proved under conditions alternative to those considered in Ruiz-Medina (2022). Under these conditions, weak consistency of the minimum contrast parameter estimator of the LRD operator holds. The case where the projected manifold process can display Short Range Dependence (SRD) and LRD at different manifold scales is also analyzed, and the estimation of the spectral density operator is addressed in this case. The performance of both estimation procedures is illustrated in a simulation study within the families of multifractionally integrated spherical functional autoregressive-moving average (SPHARMA) processes.
The sequential multiple assignment randomized trial (SMART) is the gold-standard trial design for generating data to evaluate multi-stage treatment regimes. As with conventional (single-stage) randomized clinical trials, interim monitoring allows early stopping; however, there are few methods for principled interim analysis in SMARTs. Because SMARTs involve multiple stages of treatment, a key challenge is that not all enrolled participants will have progressed through all treatment stages at the time of an interim analysis. Wu et al. (2021) propose basing interim analyses on an estimator for the mean outcome under a given regime that uses data only from participants who have completed all treatment stages. We propose an estimator for the mean outcome under a given regime that gains efficiency by using partial information from enrolled participants regardless of their progression through treatment stages. Using the asymptotic distribution of this estimator, we derive associated Pocock and O'Brien-Fleming testing procedures for early stopping. In simulation experiments, the associated testing procedures control the type I error rate and achieve nominal power while reducing the expected sample size relative to the method of Wu et al. (2021). We present an illustrative application of the proposed estimator based on a recent SMART evaluating behavioral pain interventions for breast cancer patients.
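Pocock and O'Brien-Fleming boundaries of the kind mentioned above can be calibrated numerically from the joint null distribution of the interim z-statistics. Below is a minimal Monte Carlo sketch of that standard Brownian-motion calibration, with illustrative information fractions; it is not the paper's estimator-specific derivation.

```python
import numpy as np

def group_seq_boundaries(info_frac, alpha=0.05, n_sim=200000, seed=0):
    """Monte Carlo calibration of two-sided group-sequential z-boundaries.

    Under the null, the interim z-statistics behave like B(t_k)/sqrt(t_k)
    for a standard Brownian motion B at information fractions t_k.
    Pocock uses a flat boundary c; O'Brien-Fleming uses c / sqrt(t_k).
    """
    rng = np.random.default_rng(seed)
    t = np.asarray(info_frac, dtype=float)
    # Brownian motion at the information times via independent increments
    inc = rng.standard_normal((n_sim, len(t))) * np.sqrt(np.diff(np.r_[0.0, t]))
    Z = np.cumsum(inc, axis=1) / np.sqrt(t)
    # Pocock: constant c with P(max_k |Z_k| > c) = alpha
    c_pocock = np.quantile(np.abs(Z).max(axis=1), 1 - alpha)
    # O'Brien-Fleming: |Z_k| > c/sqrt(t_k)  <=>  sqrt(t_k)|Z_k| > c
    c_obf = np.quantile((np.sqrt(t) * np.abs(Z)).max(axis=1), 1 - alpha)
    return c_pocock, c_obf / np.sqrt(t)
```

For two equally spaced looks at overall two-sided level 0.05, this reproduces the familiar values (Pocock constant near 2.18; O'Brien-Fleming boundaries near 2.80 and 1.98) up to Monte Carlo error.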
Zhang (2019) presented a general estimation approach based on the Gaussian distribution for general parametric models in which the likelihood of the data is difficult to obtain or unknown but the mean and variance-covariance matrix are known. Castilla and Zografos (2021) extended the method to density power divergence-based estimators, which are more robust than the likelihood-based Gaussian estimator against data contamination. In this paper we introduce the restricted minimum density power divergence Gaussian estimator (MDPDGE) and study its main asymptotic properties. We also examine its robustness through an influence function analysis. Restricted estimators are required in many practical situations, in particular when testing composite null hypotheses, and here they provide estimators constrained to satisfy inherent restrictions of the underlying distribution. Further, we derive robust Rao-type test statistics based on the MDPDGE for testing simple null hypotheses, and we deduce explicit expressions for some important distributions. Finally, we empirically evaluate the efficiency and robustness of the method through a simulation study.
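The density power divergence objective has a closed form for the Gaussian model, which is what makes this estimation approach tractable. A minimal sketch under illustrative assumptions (unrestricted univariate location-scale model with a hypothetical parameterisation theta = (mu, log sigma); the restricted MDPDGE would add constraints to the same minimisation):

```python
import numpy as np
from scipy.optimize import minimize

def dpd_gaussian_objective(X, alpha):
    """DPD objective for a N(mu, sigma^2) model, tuning parameter alpha > 0.

    Uses the closed form of the integral term
    int f_theta^{1+alpha} dx = (2*pi)^(-alpha/2) * sigma^(-alpha) * (1+alpha)^(-1/2).
    """
    X = np.asarray(X, dtype=float)
    def obj(theta):
        mu, sigma = theta[0], np.exp(theta[1])
        c = (2 * np.pi) ** (-alpha / 2) * sigma ** (-alpha)
        term1 = c * (1 + alpha) ** (-0.5)
        # f_theta(x)^alpha, elementwise over the sample
        fpow = c * np.exp(-alpha * (X - mu) ** 2 / (2 * sigma ** 2))
        return term1 - (1 + alpha) / alpha * fpow.mean()
    return obj

# toy data: N(2,1) sample with 10% gross outliers at 20
rng = np.random.default_rng(1)
X = np.r_[rng.normal(2.0, 1.0, 500), np.full(50, 20.0)]
res = minimize(dpd_gaussian_objective(X, alpha=0.5), x0=[0.0, 0.0],
               method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

With alpha = 0.5 the outliers are heavily downweighted, so mu_hat stays close to 2 despite the contamination, illustrating the robustness claim; the sample mean of this dataset is pulled far above 2.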
The false discovery rate (FDR) and the false non-discovery rate (FNR), defined as the expected false discovery proportion (FDP) and false non-discovery proportion (FNP), are the most popular benchmarks for multiple testing. Despite theoretical and algorithmic advances in recent years, the optimal tradeoff between the FDR and the FNR has been largely unknown except for certain restricted classes of decision rules, e.g., separable rules, or for other performance metrics, e.g., the marginal FDR and marginal FNR (mFDR and mFNR). In this paper we determine the asymptotically optimal FDR-FNR tradeoff under the two-group random mixture model when the number of hypotheses tends to infinity. Distinct from the optimal mFDR-mFNR tradeoff, which is achieved by separable decision rules, the optimal FDR-FNR tradeoff requires compound rules and randomization even in the large-sample limit. A data-driven version of the oracle rule is proposed and shown to outperform existing methodologies on simulated data for models as simple as the normal mean model. Finally, to address the limitation that the FDR and FNR control only the expectations, but not the fluctuations, of the FDP and FNP, we also determine the optimal tradeoff when the FDP and FNP are controlled with high probability and show that it coincides with that of the mFDR and the mFNR.
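For context, the standard separable oracle rule in the two-group normal mean model (the mFDR-optimal benchmark that the compound, randomized rules are contrasted against) thresholds the local false discovery rate. A minimal sketch with illustrative parameter values:

```python
import numpy as np
from scipy.stats import norm

def lfdr_rule(z, pi1, mu, alpha):
    """Separable oracle rule for the two-group model
    (1 - pi1) * N(0,1) + pi1 * N(mu,1): reject the hypotheses with the
    smallest local fdr while the running average stays below alpha."""
    z = np.asarray(z, dtype=float)
    f0 = norm.pdf(z)
    f1 = norm.pdf(z, loc=mu)
    lfdr = (1 - pi1) * f0 / ((1 - pi1) * f0 + pi1 * f1)
    order = np.argsort(lfdr)
    running = np.cumsum(lfdr[order]) / np.arange(1, z.size + 1)
    k = int(np.sum(running <= alpha))  # running averages are nondecreasing
    reject = np.zeros(z.size, dtype=bool)
    reject[order[:k]] = True
    return reject
```

On simulated data from the matching mixture, the realized FDP of this rule concentrates around the nominal level, while the paper's point is that improving the FDR-FNR tradeoff itself requires going beyond such separable rules.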
To strike a good balance between performance, cost, and power consumption, a hybrid intelligent reflecting surface (IRS)-aided directional modulation (DM) network is investigated in this paper, where the hybrid IRS consists of passive and active reflecting elements. To maximize the achievable rate, two optimization algorithms, called maximum signal-to-noise ratio (SNR)-fractional programming (FP) (Max-SNR-FP) and maximum SNR-equal amplitude reflecting (EAR) (Max-SNR-EAR), are proposed to jointly design the beamforming vector and phase shift matrix (PSM) of the hybrid IRS by alternately optimizing one while fixing the other. The former employs the successive convex approximation and FP methods to derive the beamforming vector and hybrid IRS PSM, while the latter adopts the maximum signal-to-leakage-noise ratio method and the criteria of phase alignment and EAR to design them. Simulation results show that the rates achieved by the two proposed methods are slightly lower than those of an active IRS with higher power consumption, and about 35 percent higher than those with no IRS or a random-phase IRS, whereas a passive IRS achieves only about a 17 percent rate gain over the latter. Moreover, compared to Max-SNR-FP, the proposed Max-SNR-EAR method achieves a marked reduction in complexity at the price of a slight performance loss.
We analyze the convergence of a nonlocal gradient descent method for minimizing a class of high-dimensional non-convex functions, where a directional Gaussian smoothing (DGS) is proposed to define the nonlocal gradient (also referred to as the DGS gradient). The method was first proposed in [42], in which multiple numerical experiments showed that replacing the traditional local gradient with the DGS gradient can help the optimizers escape local minima more easily and significantly improve their performance. However, a rigorous theory for the efficiency of the method on nonconvex landscapes is lacking. In this work, we investigate the scenario where the objective function is composed of a convex function perturbed by an oscillating noise. We provide a convergence theory under which the iterates converge exponentially to a tightened neighborhood of the solution, whose size is characterized by the noise wavelength. We also establish a correlation between the optimal value of the Gaussian smoothing radius and the noise wavelength, thus justifying the advantage of using a moderate or large smoothing radius with the method. Furthermore, if the noise level decays to zero when approaching the global minimum, we prove that DGS-based optimization converges to the exact global minimum at a linear rate, similarly to standard gradient-based methods for optimizing convex functions. Several numerical experiments are provided to confirm our theory and to illustrate the superiority of the approach over those based on the local gradient.
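The DGS gradient can be formed coordinate-wise with Gauss-Hermite quadrature. A minimal sketch of the idea (the quadrature order and smoothing radius here are illustrative choices, not the tuned values from the paper):

```python
import numpy as np

def dgs_gradient(f, x, sigma, n_quad=7):
    """Directional Gaussian smoothing gradient (sketch).

    Component i is a Gauss-Hermite estimate of the smoothed derivative
    (1/sigma) * E_{u ~ N(0,1)}[ f(x + sigma * u * e_i) * u ],
    which replaces the local partial derivative along e_i.
    """
    v, w = np.polynomial.hermite.hermgauss(n_quad)  # nodes/weights for e^{-v^2}
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = 1.0
        # change of variables u = sqrt(2) * v maps N(0,1) onto the GH weight
        vals = np.array([f(x + np.sqrt(2.0) * sigma * vm * e) for vm in v])
        g[i] = (np.sqrt(2.0) / (sigma * np.sqrt(np.pi))) * np.sum(w * v * vals)
    return g
```

On a pure quadratic the quadrature is exact, so the DGS gradient coincides with the local gradient; for a quadratic perturbed by high-frequency oscillations, a larger sigma damps the oscillatory contribution, which is the mechanism behind the neighborhood-size result above.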
Bayesian inference problems require sampling from or approximating high-dimensional probability distributions. The focus of this paper is the recently introduced Stein variational gradient descent methodology, a class of algorithms that rely on iterated steepest-descent steps with respect to a reproducing kernel Hilbert space norm. This construction leads to interacting particle systems, the mean-field limit of which is a gradient flow on the space of probability distributions equipped with a certain geometric structure. We leverage this viewpoint to shed light on the convergence properties of the algorithm, in particular addressing the problem of choosing a suitable positive definite kernel function. Our analysis leads us to consider certain nondifferentiable kernels with adjusted tails. We demonstrate significant performance gains of these kernels in various numerical experiments.
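The iterated steepest-descent step has a simple interacting-particle form. A minimal sketch of the standard update with a smooth RBF kernel and median-heuristic bandwidth (the target, step size, and bandwidth rule here are illustrative; the paper's point is precisely that the kernel choice matters):

```python
import numpy as np

def rbf_kernel(X, h):
    """RBF kernel matrix K[j, i] = exp(-||x_j - x_i||^2 / h)
    and its gradient with respect to the first argument x_j."""
    diff = X[:, None, :] - X[None, :, :]          # (n, n, d): x_j - x_i
    d2 = np.sum(diff ** 2, axis=-1)               # (n, n)
    K = np.exp(-d2 / h)
    gradK = -2.0 / h * diff * K[:, :, None]       # d/dx_j K[j, i]
    return K, gradK

def svgd_step(X, grad_logp, stepsize=0.1):
    """One SVGD update: phi(x_i) = (1/n) sum_j [ K[j,i] * grad log p(x_j)
                                                 + grad_{x_j} K[j,i] ]."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    h = np.median(d2) / np.log(n + 1) + 1e-8      # median heuristic bandwidth
    K, gradK = rbf_kernel(X, h)
    phi = (K @ grad_logp(X) + gradK.sum(axis=0)) / n
    return X + stepsize * phi
```

The first term in phi drives particles toward high-density regions; the second is the kernel-induced repulsion that keeps them spread out. For a standard normal target, grad_logp is simply `lambda Y: -Y`.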
Modelling randomness in shape data, for example, the evolution of shapes of organisms in biology, requires stochastic models of shapes. This paper presents a new stochastic shape model based on a description of shapes as functions in a Sobolev space. Using an explicit orthonormal basis as a reference frame for the noise, the model is independent of the parameterisation of the mesh. We define the stochastic model, explore its properties, and illustrate examples of stochastic shape evolutions using the resulting numerical framework.
This paper proposes a flexible framework for inferring large-scale time-varying and time-lagged correlation networks from multivariate or high-dimensional non-stationary time series with piecewise smooth trends. Built on a novel and unified multiple-testing procedure for time-lagged cross-correlation functions with a fixed or diverging number of lags, our method can accurately disclose flexible time-varying network structures associated with complex functional structures at all time points. We broaden the applicability of our method to settings with structural breaks by developing difference-based nonparametric estimators of cross-correlations, achieve accurate family-wise error control via a bootstrap-assisted procedure adaptive to the complex temporal dynamics, and enhance the probability of recovering the time-varying network structures using a new uniform variance-reduction technique. We prove the asymptotic validity of the proposed method and demonstrate its effectiveness in finite samples through simulation studies and empirical applications.
We study the convergence of three projected Sobolev gradient flows to the ground state of the Gross-Pitaevskii eigenvalue problem. They are constructed as gradient flows of the Gross-Pitaevskii energy functional with respect to the $H_0^1$-metric and two other equivalent metrics on $H_0^1$: the iterate-independent $a_0$-metric and the iterate-dependent $a_u$-metric. We first prove the energy dissipation property and global convergence to a critical point of the Gross-Pitaevskii energy for the discrete-time $H_0^1$- and $a_0$-gradient flows. We also prove local exponential convergence of all three schemes to the ground state.
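For concreteness, a standard form of the objects involved is the following (a sketch in generic notation; the paper's precise definitions of the $a_0$- and $a_u$-metrics should be consulted):

```latex
% Gross-Pitaevskii energy over the unit L^2-sphere in H_0^1(\Omega)
E(u) = \frac{1}{2}\int_\Omega |\nabla u|^2 \,\mathrm{d}x
     + \frac{1}{2}\int_\Omega V u^2 \,\mathrm{d}x
     + \frac{\beta}{4}\int_\Omega u^4 \,\mathrm{d}x,
\qquad \|u\|_{L^2(\Omega)} = 1.

% A common iterate-dependent inner product of a_u type
a_u(v, w) = \int_\Omega \big( \nabla v \cdot \nabla w
          + V v w + \beta u^2 v w \big) \,\mathrm{d}x.

% Discrete-time projected gradient flow: Riesz gradient \nabla_a in the
% chosen metric, tangent-space projection P_{u_n}, renormalisation
u_{n+1} = \frac{u_n - \tau_n P_{u_n} \nabla_a E(u_n)}
               {\lVert u_n - \tau_n P_{u_n} \nabla_a E(u_n) \rVert_{L^2(\Omega)}}.
```

The energy dissipation and convergence statements above concern this iteration for the different choices of the metric defining $\nabla_a$.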
This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators so that this error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and the proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.