In this work, drawing inspiration from the type of noise present in real hardware, we study the output distribution of random quantum circuits under practical non-unital noise sources with constant noise rates. We show that even in the presence of unital sources like the depolarizing channel, the distribution under the combined noise channel never resembles a maximally entropic distribution at any depth. To show this, we prove that the output distribution of such circuits never anticoncentrates, meaning it is never too "flat", regardless of the depth of the circuit. This is in stark contrast to the behavior of noiseless random quantum circuits or those with only unital noise, both of which anticoncentrate at sufficiently large depths. As a consequence, our results have interesting algorithmic implications for both the hardness and easiness of noisy random circuit sampling, since anticoncentration is a critical property exploited by both state-of-the-art classical hardness and easiness results.
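For context, one common way to formalize anticoncentration is through the collision probability of the output distribution; the display below sketches the standard second-moment version (the precise condition used in this work may differ).

```latex
% Anticoncentration via the collision probability (standard second-moment form).
% p_C(x) is the probability that circuit C outputs the n-bit string x; the output
% distribution anticoncentrates if, on average over random circuits C, the collision
% probability is within a constant factor of that of the uniform distribution:
\[
  \mathbb{E}_{C}\!\left[\sum_{x \in \{0,1\}^{n}} p_C(x)^{2}\right] \;\le\; \frac{\alpha}{2^{n}},
  \qquad \alpha = O(1).
\]
```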
The choice to participate in a data-driven service, often made on the basis of the quality of that service, influences the ability of the service to learn and improve. We study the participation and retraining dynamics that arise when both the learners and sub-populations of users are \emph{risk-reducing}, a condition that covers a broad class of updates including gradient descent and multiplicative weights. Suppose, for example, that individuals choose to spend their time amongst social media platforms in proportion to how well each platform works for them. Each platform also gathers data about its active users, which it uses to update parameters with a gradient step. For this example and for our general class of dynamics, we show that the only asymptotically stable equilibria are segmented, with each sub-population allocated to a single learner. Under mild assumptions, the utilitarian social optimum is a stable equilibrium. In contrast to previous work, which shows that repeated risk minimization can result in representation disparity and high overall loss for a single learner \citep{hashimoto2018fairness,miller2021outside}, we find that repeated myopic updates with multiple learners lead to better outcomes. We illustrate the phenomena via a simulated example initialized from real data.
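As an illustration of this class of dynamics, the following sketch (a hypothetical toy instance, not the paper's model) pairs a multiplicative-weights participation update for two sub-populations with a gradient step for two learners under squared loss; the allocation typically converges to a segmented, one-hot pattern.

```python
import numpy as np

# Minimal sketch (hypothetical setup, not the paper's exact model): two learners with
# scalar parameters, two sub-populations with preferred points mu, squared loss.
rng = np.random.default_rng(0)
mu = np.array([-1.0, 1.0])          # sub-population "ground truth" points
theta = np.array([0.2, -0.2])       # learner parameters
alloc = np.full((2, 2), 0.5)        # alloc[g, k]: fraction of group g's time on learner k
lr, steps = 0.1, 200

for _ in range(steps):
    loss = (theta[None, :] - mu[:, None]) ** 2           # loss[g, k]
    # Risk-reducing participation update: shift time toward lower-loss learners
    # (multiplicative-weights style; one member of the class the abstract describes).
    alloc *= np.exp(-loss)
    alloc /= alloc.sum(axis=1, keepdims=True)
    # Each learner takes a gradient step on the data of its active users.
    weights = alloc / alloc.sum(axis=0, keepdims=True)    # each learner's user mix
    grad = 2 * (weights * (theta[None, :] - mu[:, None])).sum(axis=0)
    theta -= lr * grad

print("allocation:\n", alloc.round(3))   # tends toward segmented (one-hot) rows
print("parameters:", theta.round(3))     # each learner ends up near one group's mu
```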
Many economic panel and dynamic models, such as models of rational behavior and Euler equations, imply that the parameters of interest are identified by conditional moment restrictions with high-dimensional conditioning instruments. We develop a novel inference method for parameters identified by conditional moment restrictions where the dimension of the conditioning instruments is high and there is no prior information about which conditioning instruments are weak or irrelevant. Building on Bierens (1990), we propose penalized maximum statistics and combine bootstrap inference with model selection. Our method optimizes the asymptotic power against a set of $n^{-1/2}$-local alternatives of interest by solving a data-dependent max-min problem for tuning parameter selection. We demonstrate the efficacy of our method via two empirical examples: the elasticity of intertemporal substitution and rational unbiased reporting of ability status. Extensive Monte Carlo experiments based on the first empirical example show that, in realistic settings, our inference procedure is superior to those available in the literature.
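To make the construction concrete, here is a heavily simplified sketch of a Bierens-type penalized maximum statistic with a multiplier bootstrap; the exponential weight functions, the penalty form, and the grid of tuning parameters are all illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

# Illustrative sketch (hypothetical simplification): test E[u | z] = 0 using Bierens-type
# weight functions w(z, gamma) = exp(gamma * z) over a grid of gamma, with a multiplier
# bootstrap for the critical value. Penalty and grid below are assumptions.
rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=n)
u = rng.normal(size=n)                           # residuals under the null E[u | z] = 0

gammas = np.r_[np.linspace(-2, -0.1, 20), np.linspace(0.1, 2, 20)]
W = np.exp(np.outer(z, gammas))                  # n x G matrix of Bierens weights
W = (W - W.mean(0)) / W.std(0)                   # standardize each weight function
penalty = 0.1 * np.abs(gammas)                   # toy penalty favoring "simple" instruments

moments = W.T @ u / np.sqrt(n)                   # n^{-1/2} sum_i u_i w(z_i, gamma)
stat = np.max(np.abs(moments) - penalty)         # penalized maximum statistic

B = 999
boot = np.empty(B)
for b in range(B):
    xi = rng.normal(size=n)                      # multiplier bootstrap weights
    boot[b] = np.max(np.abs(W.T @ (xi * u) / np.sqrt(n)) - penalty)
p_value = np.mean(boot >= stat)
print(f"penalized max statistic = {stat:.3f}, bootstrap p-value = {p_value:.3f}")
```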
The generalised gamma (GG) distribution has become a staple model for positive data in statistics due to its interpretable parameters and tractable equations. Although there are many generalised forms of the GG that can provide a better fit to data, none of them extends the GG in a way that keeps the parameters interpretable. In this paper, we introduce the flexible interpretable gamma (FIG) distribution, derived by Weibullisation of the body-tail generalised normal distribution. The parameters of the FIG are verified graphically and mathematically as having interpretable roles in controlling the left-tail, body, and right-tail shape. Additionally, we present some mathematical characteristics and prove the identifiability of the FIG parameters. Finally, we apply the FIG model to hand grip strength and insurance loss data to assess its flexibility relative to existing models.
Sample size determination for cluster randomised trials (CRTs) is challenging because it requires robust estimation of the intra-cluster correlation coefficient (ICC). Typically, the sample size is chosen to provide a certain level of power to reject the null hypothesis in a hypothesis test. This relies on the minimal clinically important difference (MCID) and estimates of the standard deviation, the ICC, and possibly the coefficient of variation of the cluster size. Varying these parameters can have a strong effect on the sample size; in particular, the sample size is sensitive to small differences in the ICC. A relevant ICC estimate is often not available, or the available estimate is imprecise. If the ICC used is far from the unknown true value, the trial can be substantially over- or under-powered. We propose a hybrid approach using Bayesian assurance to find the sample size for a CRT with a frequentist analysis. Assurance is an alternative to power which incorporates uncertainty about parameters through a prior distribution. We suggest specifying prior distributions for the standard deviation, the ICC, and the coefficient of variation of the cluster size, while still utilising the MCID. We illustrate the approach through the design of a CRT in post-stroke incontinence. We show that assurance can be used to find a sample size based on an elicited prior distribution for the ICC, whereas a power calculation discards all information in the prior except a single point estimate. Results show that this approach can avoid misspecifying sample sizes when prior medians for the ICC are very similar but the prior distributions exhibit quite different behaviour. Assurance provides an understanding of the probability of success of a trial given an MCID and can be used to produce sample sizes that are robust to parameter uncertainty. This is especially useful when there is difficulty obtaining reliable parameter estimates.
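The following sketch illustrates the assurance calculation for a two-arm parallel CRT under the standard design-effect power formula; the priors and design values are purely illustrative placeholders, not the elicited priors used in the paper.

```python
import numpy as np
from scipy import stats

# Minimal sketch of assurance for a two-arm parallel CRT (assumed standard design-effect
# power formula; the priors below are illustrative, not the paper's elicited priors).
def assurance(k, m, mcid, alpha=0.05, n_draws=20_000, seed=0):
    rng = np.random.default_rng(seed)
    sigma = rng.gamma(shape=25.0, scale=0.04, size=n_draws)   # prior on SD (illustrative)
    icc = rng.beta(a=2.0, b=38.0, size=n_draws)               # prior on ICC (illustrative)
    deff = 1 + (m - 1) * icc                                  # design effect
    n_eff = k * m / deff                                      # effective sample size per arm
    se = sigma * np.sqrt(2.0 / n_eff)
    z = stats.norm.ppf(1 - alpha / 2)
    power = stats.norm.cdf(mcid / se - z)                     # power given drawn parameters
    return power.mean()                                       # assurance = expected power

# Increase the number of clusters per arm until the assurance target is met.
for k in range(6, 40):
    if assurance(k, m=20, mcid=0.3) >= 0.8:
        print("clusters per arm:", k)
        break
```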
In this paper, we address the problem of modeling data with periodic autoregressive (PAR) time series and additive noise. In most cases, such data are processed assuming a noise-free model (i.e., without additive noise), which is not a realistic assumption in real life. The first two steps in PAR model identification are order selection and period estimation, so the main focus is on these issues. Finally, the model should be validated, so a procedure for analyzing the residuals, which are considered here as multidimensional vectors, is proposed. Both order and period selection, as well as model validation, are addressed by using the characteristic function (CF) of the residual series. The CF is used to obtain the probability density function, which is utilized in the information criterion and in testing the distribution of the residuals. To complete the PAR model analysis, a procedure for estimating the coefficients is necessary; however, this issue is only mentioned here, as it constitutes a separate task considered in parallel. The presented methodology can be regarded as a general framework for analyzing data with periodically non-stationary characteristics disturbed by finite-variance external noise. The original contribution lies in the selection of the optimal model order and period identification, as well as the analysis of residuals. All these findings have been inspired by our previous work on machine condition monitoring that used PAR modeling.
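As a rough illustration of the CF-based ingredient, the sketch below compares the empirical characteristic function of period-length residual vectors with the CF of a fitted zero-mean Gaussian model; the discrepancy measure and all numerical choices are assumptions for illustration only.

```python
import numpy as np

# Sketch (assumed form, for illustration only): compare the empirical characteristic function
# of PAR residuals, grouped into period-length vectors, with the CF of a fitted Gaussian model.
def empirical_cf(resid_vectors, t_grid):
    # resid_vectors: (N, T) array of residual vectors, t_grid: (G, T) evaluation points
    return np.exp(1j * resid_vectors @ t_grid.T).mean(axis=0)    # (G,) complex values

rng = np.random.default_rng(0)
T = 12                                                           # candidate period
resid = rng.normal(scale=0.5, size=(200, T))                     # stand-in residual vectors
t_grid = rng.normal(scale=1.0, size=(50, T))                     # CF evaluation points

ecf = empirical_cf(resid, t_grid)
cov = np.cov(resid, rowvar=False)
gauss_cf = np.exp(-0.5 * np.einsum("gi,ij,gj->g", t_grid, cov, t_grid))  # zero-mean Gaussian CF
distance = np.max(np.abs(ecf - gauss_cf))        # one possible CF-based discrepancy measure
print(f"max CF discrepancy: {distance:.4f}")
```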
We report some results regarding the mechanization of normative (preference-based) conditional reasoning. Our focus is on Åqvist's system E for conditional obligation (and its extensions). Our mechanization is achieved via a shallow semantical embedding in Isabelle/HOL. We consider two possible uses of the framework. The first is as a tool for meta-reasoning about the considered logic. We employ it for the automated verification of deontic correspondences (broadly conceived) and related matters, analogous to what has previously been achieved for the modal logic cube. The second is as a tool for assessing ethical arguments. We provide a computer encoding of a well-known paradox in population ethics, Parfit's repugnant conclusion. Whether the presented encoding increases or decreases the attractiveness and persuasiveness of the repugnant conclusion is a question we would like to pass on to philosophy and ethics.
The distributed task allocation problem, as one of the most interesting distributed optimization challenges, has received considerable research attention recently. Previous works mainly focused on the task allocation problem in a population of individuals where there are no constraints on the task amounts each individual can afford. The latter condition, however, does not always hold. In this paper, we study the task allocation problem with constraints on the allocated task amounts in a game-theoretical framework. We assume that each individual can afford a different amount of the task and that the cost function is convex. To investigate the problem in the framework of population games, we construct a potential game and calculate the fitness function for each individual. We prove that when the Nash equilibrium of the potential game lies in the feasible set of the constrained task allocation problem, it is the unique globally optimal solution. Otherwise, we derive the unique globally optimal solution analytically. In addition, to confirm our theoretical results, we consider exponential and quadratic forms of the cost function for each agent. Two algorithms with these representative cost functions are proposed to numerically seek the optimal solution to the constrained task allocation problem. We further perform Monte Carlo simulations whose results agree with our analytical calculations.
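For a concrete feel of the constrained problem with a quadratic cost, the following sketch (not the paper's proposed algorithms) solves a small instance directly with a generic constrained optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch (illustrative, not the paper's algorithms): allocate a total task amount D
# across agents with quadratic costs c_i * x_i^2 and individual capacity limits u_i.
c = np.array([1.0, 2.0, 4.0, 0.5])     # cost coefficients
u = np.array([3.0, 3.0, 3.0, 1.5])     # capacity (max task each agent can afford)
D = 6.0                                # total task amount to allocate

res = minimize(
    fun=lambda x: np.sum(c * x**2),
    x0=np.full(len(c), D / len(c)),
    bounds=[(0.0, ui) for ui in u],
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - D}],
)
print("optimal allocation:", res.x.round(3), "total cost:", round(res.fun, 3))
```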
In this paper we develop a numerical method for efficiently approximating solutions of certain Zakai equations in high dimensions. The key idea is to transform a given Zakai SPDE into a PDE with random coefficients. We show that under suitable regularity assumptions on the coefficients of the Zakai equation, the corresponding random PDE admits a solution random field which, for almost all realizations of the random coefficients, can be written as a classical solution of a linear parabolic PDE. This makes it possible to apply the Feynman--Kac formula to obtain an efficient Monte Carlo scheme for computing approximate solutions of Zakai equations. The approach achieves good results in up to 25 dimensions with fast run times.
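The sketch below illustrates the Feynman-Kac Monte Carlo idea in the simplest possible setting, the d-dimensional heat equation, rather than the transformed Zakai equation itself; the initial condition and parameters are illustrative.

```python
import numpy as np

# Illustrative sketch of the Feynman-Kac idea in the simplest setting (not the paper's scheme):
# approximate u(t, x) solving u_t = (1/2) * Laplacian(u), u(0, .) = g, in d dimensions via
# u(t, x) = E[ g(x + W_t) ], estimated by Monte Carlo over Brownian paths.
def feynman_kac_heat(x, t, g, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    d = len(x)
    w_t = rng.normal(scale=np.sqrt(t), size=(n_paths, d))    # Brownian increments up to time t
    return g(x + w_t).mean()

d = 25                                                       # dimension, as in the abstract
g = lambda y: np.exp(-0.5 * np.sum(y**2, axis=-1) / 4.0)     # Gaussian initial condition
x0 = np.zeros(d)
print("MC estimate of u(t=1, 0):", feynman_kac_heat(x0, 1.0, g))
# Exact value for this g: (4 / (4 + t))^(d/2); useful as a sanity check.
print("exact:", (4 / 5) ** (d / 2))
```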
In the present paper, we develop a new goodness-of-fit test for the Birnbaum-Saunders distribution based on the probability plot. We utilize the sample correlation coefficient from the Birnbaum-Saunders probability plot as a measure of goodness of fit. Unfortunately, it is impossible or extremely difficult to obtain an explicit distribution of this sample correlation coefficient. To address this challenge, we employ extensive Monte Carlo simulations to obtain the empirical distribution of the sample correlation coefficient from the Birnbaum-Saunders probability plot. This empirical distribution allows us to determine the critical values alongside their corresponding significance levels, thus facilitating the computation of the p-value when the sample correlation coefficient is obtained. Finally, two real-data examples are provided for illustrative purposes.
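The following sketch illustrates the probability-plot correlation idea using scipy's fatiguelife (Birnbaum-Saunders) distribution; the plotting positions, the fixed shape parameter, and the Monte Carlo settings are simplifying assumptions, since in practice the parameters would be estimated from the data.

```python
import numpy as np
from scipy import stats

# Sketch of the probability-plot correlation test idea (illustrative; the paper's exact
# plotting positions and estimation details may differ). scipy's fatiguelife distribution
# is the Birnbaum-Saunders distribution with shape parameter c (scale fixed to 1 here).
def bs_ppcc(x, shape):
    x = np.sort(x)
    n = len(x)
    p = (np.arange(1, n + 1) - 0.5) / n                       # plotting positions
    q = stats.fatiguelife.ppf(p, c=shape)                     # theoretical BS quantiles
    return np.corrcoef(x, q)[0, 1]                            # probability-plot correlation

# Monte Carlo null distribution of the correlation coefficient, then a critical value.
rng = np.random.default_rng(0)
n, shape, reps = 50, 0.5, 5000
null_r = np.array([bs_ppcc(stats.fatiguelife.rvs(c=shape, size=n, random_state=rng), shape)
                   for _ in range(reps)])
crit = np.quantile(null_r, 0.05)                              # reject if observed r < crit
print(f"5% critical value of the PPCC for n={n}: {crit:.4f}")
```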
The forecasting and computation of the stability of chaotic systems from partial observations are tasks for which traditional equation-based methods may not be suitable. In this computational paper, we propose data-driven methods to (i) infer the dynamics of unobserved (hidden) chaotic variables (full-state reconstruction); (ii) forecast the time evolution of the full state; and (iii) infer the stability properties of the full state. The tasks are performed with long short-term memory (LSTM) networks, which are trained with observations (data) limited to only part of the state: (i) the low-to-high resolution LSTM (LH-LSTM), which takes partial observations as training input and requires access to the full system state when computing the loss; and (ii) the physics-informed LSTM (PI-LSTM), which is designed to combine partial observations with the integral formulation of the dynamical system's evolution equations. First, we derive the Jacobian of the LSTMs. Second, we analyse a chaotic partial differential equation, the Kuramoto-Sivashinsky (KS) equation, and the Lorenz-96 system. We show that the proposed networks can forecast the hidden variables, both time-accurately and statistically. The Lyapunov exponents and covariant Lyapunov vectors, which characterize the stability of the chaotic attractors, are correctly inferred from partial observations. Third, the PI-LSTM outperforms the LH-LSTM by successfully reconstructing the hidden chaotic dynamics when the input dimension is smaller than, or comparable to, the Kaplan-Yorke dimension of the attractor. This work opens new opportunities for reconstructing the full state, inferring hidden variables, and computing the stability of chaotic systems from partial data.
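A minimal sketch of the LH-LSTM training setup described above, with assumed architecture details and toy data standing in for the chaotic trajectories; the PI-LSTM, which additionally embeds the governing equations, is not sketched here.

```python
import torch
import torch.nn as nn

# Minimal sketch of the LH-LSTM idea (assumed architecture, not the paper's exact model):
# the network receives only the observed components as input, but the training loss is
# computed against the full state, so the hidden (unobserved) variables are reconstructed.
class LHLSTM(nn.Module):
    def __init__(self, n_obs, n_full, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_obs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_full)

    def forward(self, obs_seq):                  # obs_seq: (batch, time, n_obs)
        h, _ = self.lstm(obs_seq)
        return self.head(h)                      # (batch, time, n_full) reconstructed state

# Toy usage with random data standing in for e.g. Lorenz-96 trajectories.
n_obs, n_full = 4, 10
model = LHLSTM(n_obs, n_full)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs = torch.randn(8, 100, n_obs)                 # partial observations (training input)
full = torch.randn(8, 100, n_full)               # full state (available only for the loss)
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(obs), full)
    loss.backward()
    opt.step()
```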