In this article, a discrete analogue of the continuous Teissier distribution is presented. Several of its important distributional characteristics are derived. The unknown parameter is estimated using the method of maximum likelihood and the method of moments. Two real-data applications are presented to show the applicability of the proposed model.
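For orientation, discrete analogues of this kind are typically obtained by the standard survival discretization: if $S(x;\theta)$ denotes the survival function of the continuous distribution, the discrete version places the mass
\[
  \Pr(X = k) \;=\; S(k;\theta) - S(k+1;\theta), \qquad k = 0, 1, 2, \dots,
\]
on the non-negative integers. This is a generic sketch of the discretization technique; the specific Teissier survival function and the exact construction used in the article may differ.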
Let $P$ be a bounded polyhedron defined as the intersection of the non-negative orthant ${\Bbb R}^n_+$ and an affine subspace of codimension $m$ in ${\Bbb R}^n$. We show that a simple and computationally efficient formula approximates the volume of $P$ within a factor of $\gamma^m$, where $\gamma >0$ is an absolute constant. The formula provides the best known estimate for the volume of transportation polytopes from a wide family.
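For concreteness, such a polyhedron can be written as
\[
  P \;=\; \bigl\{ x \in {\Bbb R}^n_+ :\; Ax = b \bigr\},
\]
where $A$ is an $m \times n$ matrix of full row rank and $b \in {\Bbb R}^m$; transportation polytopes arise as the special case in which the constraints $Ax = b$ prescribe the row and column sums of a non-negative matrix.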
In the era of big data and the Internet of Things (IoT), data owners need to share large amounts of data with intended receivers in an insecure environment, which poses a trade-off between user privacy and data utility. The privacy-utility trade-off has been formulated as a privacy funnel problem based on mutual information. Nevertheless, it is challenging to characterize mutual information accurately when the sample size is small or the distribution functions are unknown. In this article, we propose a privacy funnel based on the mutual information neural estimator (MINE) to optimize the privacy-utility trade-off. Instead of computing mutual information in closed form, we estimate it with MINE, which learns the estimate by training a neural network and keeps the estimation as accurate as possible. We employ the estimated mutual information as the measure of both privacy and utility, and then formulate an optimization problem that maximizes data utility by training a neural network while the estimated privacy disclosure remains below a threshold. Simulation results demonstrate that the mutual information estimated by MINE approximates the true mutual information well, even with a limited number of samples, and can be used to quantify privacy leakage and data utility retention as well as to optimize the privacy-utility trade-off.
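As an illustration of the estimator referred to above, the following minimal PyTorch sketch trains the Donsker--Varadhan lower bound that MINE maximizes. The network architecture, toy data, optimizer settings, and variable names are illustrative assumptions, not the configuration used in the article.
\begin{verbatim}
import math
import torch
import torch.nn as nn

# Statistics network T(x, z); any small MLP suffices for illustration.
class StatisticsNet(nn.Module):
    def __init__(self, dim_x, dim_z, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_x + dim_z, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))
    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))

def mine_lower_bound(T, x, z):
    joint = T(x, z).mean()                      # samples from the joint
    z_perm = z[torch.randperm(z.size(0))]       # emulate product of marginals
    marginal = torch.logsumexp(T(x, z_perm), dim=0).squeeze() - math.log(z.size(0))
    return joint - marginal                     # Donsker--Varadhan bound on I(X; Z)

# Toy data: z is a noisy copy of x, so I(X; Z) > 0.
x = torch.randn(512, 4)
z = x + 0.5 * torch.randn(512, 4)
T_net = StatisticsNet(4, 4)
opt = torch.optim.Adam(T_net.parameters(), lr=1e-3)
for _ in range(200):                            # maximize the bound by gradient ascent
    opt.zero_grad()
    loss = -mine_lower_bound(T_net, x, z)
    loss.backward()
    opt.step()
print(float(mine_lower_bound(T_net, x, z)))     # estimated mutual information (nats)
\end{verbatim}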
This paper presents SimAEN, an agent-based simulation whose purpose is to assist public health in understanding and controlling Automated Exposure Notification (AEN). SimAEN models a population of interacting individuals, or 'agents', in which COVID-19 is spreading. These individuals interact with a public health system that includes AEN and Manual Contact Tracing (MCT). These interactions influence when individuals enter and leave quarantine, affecting the spread of the simulated disease. Over 70 user-configurable parameters influence the outcome of SimAEN's simulations. These parameters allow the user to tailor SimAEN to a specific public health jurisdiction and to test the effects of various interventions, including different sensitivity settings of AEN.
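The following toy loop illustrates the general structure of such an agent-based model. It is a hypothetical sketch for intuition only: the states, contact process, and notification probabilities are assumptions and do not reproduce SimAEN's actual parameters or logic.
\begin{verbatim}
import random

# Toy agent-based sketch: each agent has an infection state and a quarantine flag.
# Hypothetical AEN/MCT notification probabilities stand in for the real interventions.
def step(agents, p_transmit, p_aen, p_mct, contacts_per_day=5):
    for a in agents:
        if a["state"] != "infectious" or a["quarantined"]:
            continue
        for b in random.sample(agents, k=min(contacts_per_day, len(agents))):
            if b["state"] == "susceptible" and random.random() < p_transmit:
                b["state"] = "infectious"
            # An exposure notification (AEN) or manual tracing call (MCT)
            # may send the contact into quarantine, cutting further spread.
            if random.random() < p_aen or random.random() < p_mct:
                b["quarantined"] = True

agents = [{"state": "susceptible", "quarantined": False} for _ in range(1000)]
agents[0]["state"] = "infectious"
for day in range(30):                          # 30 simulated days
    step(agents, p_transmit=0.05, p_aen=0.3, p_mct=0.1)
print(sum(a["state"] == "infectious" for a in agents), "agents ever infected")
\end{verbatim}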
Parameters of the covariance kernel of a Gaussian process model often need to be estimated from data generated by an unknown Gaussian process. We consider fixed-domain asymptotics of the maximum likelihood estimator of the scale parameter under smoothness misspecification. If the covariance kernel of the data-generating process has smoothness $\nu_0$ but that of the model has smoothness $\nu \geq \nu_0$, we prove that the expectation of the maximum likelihood estimator is of the order $N^{2(\nu-\nu_0)/d}$ if the $N$ observation points are quasi-uniform in $[0, 1]^d$. This indicates that maximum likelihood estimation of the scale parameter alone is sufficient to guarantee the correct rate of decay of the conditional variance. We also discuss a connection between the expected maximum likelihood estimator and Driscoll's theorem on sample path properties of Gaussian processes. The proofs are based on reproducing kernel Hilbert space techniques and worst-case rates for approximation in Sobolev spaces.
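To make the object of study concrete: for a zero-mean model with covariance $\sigma^2 K_\nu$ (a simplifying assumption made here only to state the estimator), the scale parameter has the familiar closed-form maximum likelihood estimator
\[
  \hat{\sigma}^2_N \;=\; \frac{1}{N}\, y^\top K_\nu^{-1} y,
\]
where $y \in {\Bbb R}^N$ collects the observations and $K_\nu$ is the $N \times N$ kernel matrix of the model at the observation points; the result above describes the growth rate $N^{2(\nu-\nu_0)/d}$ of ${\Bbb E}[\hat{\sigma}^2_N]$ under quasi-uniform designs.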
Time-to-event endpoints are increasingly popular in phase II cancer trials. The standard statistical tool for such one-armed survival trials is the one-sample log-rank test. Its distributional properties are commonly derived in the large-sample limit. It is, however, known from the literature that the asymptotic approximations suffer when the sample size is small. There have already been several attempts to address this problem. While some approaches do not allow easy power and sample size calculations, others lack a clear theoretical motivation and require further considerations. The problem itself can partly be attributed to the dependence between the compensated counting process and its variance estimator. To address this, we suggest a variance estimator that is uncorrelated with the compensated counting process. Moreover, this and other existing approaches to variance estimation are covered as special cases by our general framework. For practical application, we provide sample size and power calculations for any approach fitting into this framework. Finally, we use simulations and real-world data to study the empirical type I error and power of our methodology compared to standard approaches.
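For reference, the classical one-sample log-rank statistic, stated here in its standard textbook form (the framework above generalizes the choice of variance estimator), is
\[
  Z \;=\; \frac{O - E}{\sqrt{\hat V}}, \qquad
  O = \sum_{i=1}^{n} \delta_i, \qquad
  E = \sum_{i=1}^{n} \Lambda_0(t_i),
\]
where $t_i$ is the observed follow-up time of subject $i$, $\delta_i$ indicates an observed event, $\Lambda_0$ is the cumulative hazard under the null hypothesis, and the usual choice $\hat V = E$ couples the variance estimator to the same data as the compensated counting process $O - E$.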
In neuroscience, the distribution of a decision time is modelled by means of a one-dimensional Fokker--Planck equation with time-dependent boundaries and space-time-dependent drift. Efficient approximation of the solution to this equation is required, e.g., for model evaluation and parameter fitting. However, the prescribed boundary conditions lead to a strong singularity and thus to slow convergence of numerical approximations. In this article we demonstrate that, by a transformation and the subtraction of a known function, the solution can be related to the solution of a parabolic PDE on a rectangular space-time domain with homogeneous initial and boundary conditions. We verify that the solution of the new PDE is indeed more regular than the solution of the original PDE and proceed to discretize the new PDE using a space-time minimal residual method. We also demonstrate that the solution depends analytically on the parameters determining the boundaries as well as the drift. This justifies the use of a sparse tensor product interpolation method to approximate the PDE solution for various parameter ranges. The predicted convergence rates of the minimal residual method and of the interpolation method are supported by numerical simulations.
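To fix notation, the equation in question is of the generic first-passage form (stated here schematically; the exact drift, diffusion coefficient, and boundary specification follow the decision-making literature)
\[
  \partial_t p(x,t) \;=\; -\partial_x\bigl(\mu(x,t)\, p(x,t)\bigr) + \tfrac{1}{2}\,\sigma^2\, \partial_x^2\, p(x,t),
  \qquad p\bigl(a(t), t\bigr) = p\bigl(b(t), t\bigr) = 0,
\]
with space-time-dependent drift $\mu$ and absorbing, time-dependent boundaries $a(t) < b(t)$ whose first crossing times determine the decision-time distribution.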
We consider the problem where $n$ clients transmit $d$-dimensional real-valued vectors using $d(1+o(1))$ bits each, in a manner that allows the receiver to approximately reconstruct their mean. Such compression problems naturally arise in distributed and federated learning. We provide novel mathematical results and derive computationally efficient algorithms that are more accurate than previous compression techniques. We evaluate our methods on a collection of distributed and federated learning tasks, using a variety of datasets, and show a consistent improvement over the state of the art.
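One concrete illustration of a scheme in this regime is a shared random rotation followed by one sign bit per coordinate plus a single scale, sketched below in NumPy. This is a generic construction of the $d(1+o(1))$-bit flavour, not necessarily the algorithm proposed in the paper; the scale choice and the explicit orthogonal matrix are illustrative assumptions.
\begin{verbatim}
import numpy as np

def encode(x, rotation):
    y = rotation @ x
    scale = np.linalg.norm(y, 1) / y.size      # one real number per client
    bits = y >= 0                              # d sign bits
    return scale, bits

def decode(scale, bits, rotation):
    y_hat = scale * np.where(bits, 1.0, -1.0)
    return rotation.T @ y_hat                  # undo the (orthogonal) rotation

rng = np.random.default_rng(0)
d, n = 128, 10
rotation, _ = np.linalg.qr(rng.standard_normal((d, d)))   # shared by all clients
clients = [rng.standard_normal(d) for _ in range(n)]
decoded = [decode(*encode(x, rotation), rotation) for x in clients]
mean_estimate = np.mean(decoded, axis=0)       # receiver's estimate of the true mean
\end{verbatim}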
Multifidelity methods are widely used for the estimation of quantities of interest (QoIs) in uncertainty quantification using simulation codes of differing costs and accuracies. Many methods approximate numerical-valued statistics that represent only limited information about the QoIs. In this paper, we generalize the ideas in \cite{xu2021bandit} to develop a multifidelity method that approximates the full distribution of a scalar-valued QoI. Under a linear model hypothesis, we propose an exploration-exploitation strategy to reconstruct the full distribution, not just statistics, of a scalar-valued QoI using samples from a subset of low-fidelity regressors. We derive an informative asymptotic bound for the mean 1-Wasserstein distance between the estimator and the true distribution, and use it to adaptively allocate the computational budget between parametric estimation and non-parametric approximation of the probability distribution. Assuming the linear model is correct, we prove that such a procedure is consistent and converges to the optimal policy (and hence the optimal computational budget allocation) under an upper bound criterion as the budget goes to infinity. As a corollary, we obtain convergence of the approximated distribution in the mean 1-Wasserstein metric. The major advantages of our approach are that convergence to the full distribution of the output is attained under appropriate assumptions, and that the procedure and implementation require neither a hierarchical model setup, knowledge of cross-model information or correlation, nor \textit{a priori} known model statistics. Finally, numerical experiments are provided to support our theoretical analysis.
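For completeness, the error metric referred to above is the 1-Wasserstein distance, which for univariate distributions admits the quantile (equivalently, CDF) representation
\[
  W_1(\mu, \nu) \;=\; \int_0^1 \bigl| F_\mu^{-1}(u) - F_\nu^{-1}(u) \bigr| \, du
  \;=\; \int_{{\Bbb R}} \bigl| F_\mu(t) - F_\nu(t) \bigr| \, dt,
\]
so controlling the mean $W_1$ error controls the approximation of the full distribution of the scalar QoI rather than only a few of its statistics.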
Intractable posterior distributions of parameters with intractable normalizing constants depending upon the parameters are known as doubly intractable posterior distributions. The terminology itself indicates that obtaining Bayesian inference from such posteriors is doubly difficult compared to traditional intractable posteriors, where the normalizing constants are tractable and admit standard Markov Chain Monte Carlo (MCMC) solutions. As can be anticipated, a plethora of MCMC-based methods have originated in the literature to deal with doubly intractable distributions. Yet, it remains very much unclear whether any of these methods can satisfactorily sample from such posteriors, particularly in high-dimensional setups. In this article, we consider efficient Monte Carlo and importance sampling approximations of the intractable normalizing constant for a small set of parameter values, together with Gaussian process interpolation of these approximations at the remaining parameter values. We then incorporate this strategy within the exact iid sampling framework developed in Bhattacharya (2021a) and Bhattacharya (2021b), and illustrate the methodology with simulation experiments comprising a two-dimensional normal-gamma posterior, a two-dimensional Ising model posterior, a two-dimensional Strauss process posterior and a 100-dimensional autologistic model posterior. In each case we demonstrate great accuracy of our methodology, which is also computationally extremely efficient, often taking only a few minutes to generate 10,000 iid realizations on 80 processors.
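Schematically, the setup and the approximation step read as follows (standard forms, stated here only to fix notation; the particular importance density and the design of parameter values at which the constant is approximated are left unspecified):
\[
  \pi(\theta \mid y) \;\propto\; \frac{q(y \mid \theta)}{Z(\theta)}\,\pi(\theta),
  \qquad Z(\theta) = \int q(x \mid \theta)\,dx,
  \qquad
  \widehat{Z}(\theta) \;=\; \frac{1}{M}\sum_{m=1}^{M} \frac{q(x_m \mid \theta)}{g(x_m)},
  \quad x_m \overset{iid}{\sim} g,
\]
where $q$ is the unnormalized likelihood and $g$ an importance density; $\log \widehat{Z}(\theta)$ is evaluated at a few parameter values and interpolated elsewhere by a Gaussian process before being plugged into the exact iid sampling framework.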
In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) simply convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
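The affine-constraint modelling referred to above can be written, in one standard form (a common encoding in the decentralized optimization literature, not necessarily the exact one used in the paper), as
\[
  \min_{x_1, \dots, x_m} \; \sum_{i=1}^{m} f_i(x_i)
  \quad \text{subject to} \quad W^{1/2} X = 0,
\]
where $X = (x_1, \dots, x_m)$ stacks the local copies held by the $m$ nodes and $W$ is a gossip matrix of the network (e.g., based on the graph Laplacian) whose kernel is the consensus subspace, so that on a connected graph the constraint is equivalent to $x_1 = \dots = x_m$; accelerated gradient descent is then applied to the Lagrangian dual of this problem, and the spectral gap of $W$ governs the additional communication cost.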