Quantum state preparation, as a general process of loading classical data onto a quantum device, is essential for the end-to-end implementation of quantum algorithms. Yet existing methods suffer from either high circuit depth or complicated hardware, limiting their practicality and robustness. In this work, we overcome these limitations with a bucket-brigade approach. The tree architecture of our hardware represents the simplest connectivity required for achieving sub-exponential circuit depth. Leveraging the bucket-brigade mechanism, which suppresses error propagation between different branches, our approach exhibits an exponential improvement in robustness compared to existing depth-optimal methods. More specifically, the infidelity scales as $O(\text{polylog}(N))$ with data size $N$, as opposed to $O(N)$ for conventional methods. Moreover, our approach is the first to simultaneously achieve linear Clifford$+T$ circuit depth, gate count, and space-time allocation. These advancements offer the opportunity for processing big data on both near-term and fault-tolerant quantum devices.
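For context, a standard formulation of the task (stated generically, not as the paper's specific construction) is amplitude encoding: given a classical vector $d = (d_0, \dots, d_{N-1})$, the goal is to prepare
$$ |\psi_d\rangle \;=\; \frac{1}{\|d\|_2} \sum_{j=0}^{N-1} d_j\, |j\rangle , $$
so that the $O(\text{polylog}(N))$ versus $O(N)$ infidelity scaling describes how errors accumulate in this encoding as the data size $N$ grows.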
Random effects meta-analysis is widely used for synthesizing studies under the assumption that the underlying effects come from a normal distribution. However, under certain conditions the use of alternative distributions might be more appropriate. We conducted a systematic review to identify articles introducing alternative meta-analysis models that assume non-normal between-study distributions. We identified 27 eligible articles suggesting 24 alternative meta-analysis models based on long-tailed and skewed distributions, on mixtures of distributions, and on Dirichlet process priors. Subsequently, we performed a simulation study to evaluate the performance of these models and to compare them with the standard normal model. We considered 22 scenarios varying the amount of between-study variance, the shape of the true distribution, and the number of included studies. We compared 15 models implemented in either the frequentist or the Bayesian framework. We found small differences between the models with respect to bias but larger differences in coverage probability. In scenarios with large between-study variance, all models were substantially biased in the estimation of the mean treatment effect. This implies that focusing only on the mean treatment effect of a random effects meta-analysis can be misleading when substantial heterogeneity is suspected or outliers are present.
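As a point of reference, the standard normal random effects model that the reviewed alternatives relax can be written (in generic notation, not tied to any one of the identified articles) as
$$ y_i \mid \theta_i \sim N(\theta_i, \sigma_i^2), \qquad \theta_i \sim N(\mu, \tau^2), \qquad i = 1, \dots, k, $$
where $y_i$ is the observed effect in study $i$ with (assumed known) within-study variance $\sigma_i^2$, $\mu$ is the mean treatment effect, and $\tau^2$ is the between-study variance; the alternative models replace the normal distribution on the $\theta_i$ with long-tailed, skewed, mixture, or Dirichlet process forms.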
Uncertainty quantification is not yet widely adopted in the design process of engineering components despite its importance for achieving sustainable and resource-efficient structures. This is mainly due to two reasons: 1) Tracing the effect of uncertainty in engineering simulations is a computationally challenging task. This is especially true for inelastic simulations, where the whole loading history influences the results. 2) Implementations of efficient schemes in standard finite element software are lacking. In this paper, we tackle both problems. We propose a weakly intrusive implementation of the time-separated stochastic mechanics in the finite element software Abaqus. The time-separated stochastic mechanics is an efficient and accurate method for the uncertainty quantification of structures with inelastic material behavior. The method effectively separates the stochastic but time-independent behavior from the deterministic but time-dependent behavior. The resulting scheme requires only two deterministic finite element simulations for homogeneous material fluctuations in order to approximate the stochastic behavior. This reduces the computational cost compared to standard Monte Carlo simulations by at least two orders of magnitude while ensuring accurate solutions. In this paper, the implementation details in Abaqus and numerical comparisons are presented for the example of damage simulations.
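A minimal sketch of the underlying separation idea, stated with illustrative notation rather than the exact expansion used in the paper: the random response field is approximated by a separated representation of the form
$$ u(x, t, \omega) \;\approx\; \bar{u}(x, t) \;+\; \xi(\omega)\, \tilde{u}(x, t), $$
where $\bar{u}$ and $\tilde{u}$ are deterministic, time-dependent fields obtained from the two finite element runs and $\xi(\omega)$ is a random variable encoding the homogeneous material fluctuation; moments of the response then follow from the moments of $\xi$ without any further simulations.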
Dynamical theories of speech use computational models of articulatory control to generate quantitative predictions and advance understanding of speech dynamics. The addition of a nonlinear restoring force to task dynamic models is a significant improvement over linear models, but nonlinearity introduces challenges with parameterization and interpretability. We illustrate these problems through numerical simulations and introduce solutions in the form of scaling laws. We apply the scaling laws to a cubic model and show how they facilitate interpretable simulations of articulatory dynamics, and can be theoretically interpreted as imposing physical and cognitive constraints on models of speech movement dynamics.
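For concreteness, a cubic task-dynamic model of the kind discussed here can be written (with illustrative parameter names, not necessarily the paper's notation) as
$$ m\ddot{z} + b\dot{z} + k(z - z_0) + d(z - z_0)^3 = 0, $$
where $z$ is the tract variable, $z_0$ its target, $b$ the damping, $k$ the linear stiffness, and $d$ the cubic stiffness; the scaling laws constrain how $b$, $k$, and $d$ may jointly vary so that simulations remain parameterizable and interpretable.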
A discrete spatial lattice can be cast as a network structure over which spatially-correlated outcomes are observed. A second network structure may also capture similarities among measured features, when such information is available. Incorporating the network structures when analyzing such doubly-structured data can improve predictive power, and lead to better identification of important features in the data-generating process. Motivated by applications in spatial disease mapping, we develop a new doubly regularized regression framework to incorporate these network structures for analyzing high-dimensional datasets. Our estimators can be easily implemented with standard convex optimization algorithms. In addition, we describe a procedure to obtain asymptotically valid confidence intervals and hypothesis tests for our model parameters. We show empirically that our framework provides improved predictive accuracy and inferential power compared to existing high-dimensional spatial methods. These advantages hold given fully accurate network information, and also with networks which are partially misspecified or uninformative. The application of the proposed method to modeling COVID-19 mortality data suggests that it can improve prediction of deaths beyond standard spatial models, and that it selects relevant covariates more often.
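A schematic form of such a doubly regularized estimator, written in hypothetical notation rather than the paper's exact penalties, is
$$ \hat{\beta} \;=\; \arg\min_{\beta}\; \|y - X\beta\|_2^2 \;+\; \lambda_1\, \beta^{\top} L_{\mathrm{f}}\, \beta \;+\; \lambda_2\, P_{\mathrm{s}}(\beta), $$
where $L_{\mathrm{f}}$ is the Laplacian of the feature-similarity network, $P_{\mathrm{s}}$ is a penalty built from the spatial lattice network, and $\lambda_1, \lambda_2 \ge 0$ are tuning parameters; convexity of each term is what allows standard convex optimization algorithms to be used.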
We propose and analyse a boundary-preserving numerical scheme for the weak approximation of some stochastic partial differential equations (SPDEs) with bounded state-space. We impose regularity assumptions on the drift and diffusion coefficients only locally on the state-space. In particular, the drift and diffusion coefficients may be non-globally Lipschitz continuous and superlinearly growing. The scheme consists of a finite difference discretisation in space and a Lie--Trotter splitting followed by exact simulation and exact integration in time. We prove weak convergence of the scheme, of optimal order 1/4, for globally Lipschitz continuous test functions by proving strong convergence towards a strong solution driven by a different noise process. Boundary preservation is ensured by the use of the Lie--Trotter time splitting followed by exact simulation and exact integration. Numerical experiments confirm the theoretical results and demonstrate the effectiveness of the proposed Lie--Trotter-Exact (LTE) scheme compared to existing methods for SPDEs.
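As a minimal sketch of the time-stepping structure (hypothetical notation, not the exact subproblems of the paper): if the spatially discretised SPDE is split into two subproblems whose solutions are available exactly, one step of size $\Delta t$ composes the corresponding exact solution maps,
$$ X_{n+1} \;=\; \Phi^{(2)}_{\Delta t} \circ \Phi^{(1)}_{\Delta t}\,(X_n), $$
where one sub-flow is obtained by exact simulation of a stochastic subproblem and the other by exact integration of the remaining deterministic part; because each sub-flow leaves the bounded state-space invariant, so does their composition, which is how boundary preservation is inherited by the full scheme.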
The phenomenon of finite-time blow-up in hydrodynamic partial differential equations is central to analysis and mathematical physics. While numerical studies have guided theoretical breakthroughs, it is challenging to determine whether the observed computational results are genuine or mere numerical artifacts. Here we identify numerical signatures of blow-up. Our study is based on the complexified Euler equations in two dimensions, where instant blow-up is expected. Via a geometrically consistent spatiotemporal discretization, we perform several numerical experiments and verify their computational stability. We then identify a signature of blow-up based on the growth rates of the supremum norm of the vorticity with increasing spatial resolution. The study aims to serve as a guide for cross-checking the validity of future numerical experiments on suspected blow-up in equations where the analysis is not yet resolved.
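In schematic terms (an illustration of the kind of criterion meant, not the paper's precise definition): if $\omega_N$ denotes the vorticity computed at spatial resolution $N$, genuine blow-up is signalled when
$$ \sup_x |\omega_N(x, t)| $$
continues to grow without saturating as $N$ is increased at a fixed time $t$, whereas a purely numerical artifact typically converges under spatial refinement.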
We present a novel, model-free, and data-driven methodology for controlling complex dynamical systems into previously unseen target states, including those with significantly different and complex dynamics. Leveraging a parameter-aware realization of next-generation reservoir computing, our approach accurately predicts system behavior in unobserved parameter regimes, enabling control over transitions to arbitrary target states. Crucially, this includes states with dynamics that differ fundamentally from known regimes, such as shifts from periodic to intermittent or chaotic behavior. The method's parameter-awareness facilitates non-stationary control, ensuring smooth transitions between states. By extending the applicability of machine learning-based control mechanisms to previously inaccessible target dynamics, this methodology opens the door to transformative new applications while maintaining exceptional efficiency. Our results highlight reservoir computing as a powerful alternative to traditional methods for dynamic system control.
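A minimal sketch of the parameter-aware construction, using assumed notation rather than the paper's exact design: the linear readout is trained on features built from time-delayed states with the control parameter $p$ appended as an extra input,
$$ \mathbf{O}(t) = \big[\,1,\; \mathbf{x}(t),\, \mathbf{x}(t-\tau),\, \dots,\, \mathbf{x}(t-(k-1)\tau),\; p,\; \text{nonlinear monomials thereof}\,\big], \qquad \mathbf{x}(t+\Delta t) \approx W_{\mathrm{out}}\,\mathbf{O}(t), $$
so that the ridge-regressed $W_{\mathrm{out}}$ interpolates the dynamics across parameter values and can subsequently be driven with a time-varying $p$ to realize non-stationary control.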
We present a computational design method that optimizes the placement of reinforcement in dentures and thereby increases their stiffness. Our approach optimally places reinforcement in the denture, which modern multi-material three-dimensional printers could implement. The study focuses on reducing denture displacement by identifying regions that require reinforcement (E-glass material) with the help of topology optimization. Our method is applied to a three-dimensional complete lower-jaw denture. We compare the displacement results of a non-reinforced denture and a reinforced, two-material denture. The comparison shows a decrease in displacement for the reinforced denture: considering the node-based displacement distribution, the reinforcement reduces the displacement magnitudes relative to the non-reinforced denture. The study guides dental technicians on where to place reinforcement during fabrication, helping them save time and reduce material usage.
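As background, a standard density-based topology optimization formulation of the kind typically used for such reinforcement placement (a generic statement, not necessarily the exact formulation of this study) reads
$$ \min_{\rho}\; c(\rho) = \mathbf{u}^{\top} \mathbf{K}(\rho)\, \mathbf{u} \quad \text{s.t.} \quad \mathbf{K}(\rho)\,\mathbf{u} = \mathbf{f}, \qquad \sum_e \rho_e v_e \le V_{\max}, \qquad 0 \le \rho_e \le 1, $$
where $\rho_e$ indicates whether element $e$ receives the stiffer reinforcement material, $c$ is the compliance under the denture's loading, and $V_{\max}$ caps the reinforcement volume; minimizing compliance is what reduces the displacement.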
To study unified model averaging estimation for situations with complicated data structures, we propose a novel model averaging method based on cross-validation (MACV). MACV unifies a large class of new and existing model averaging estimators and covers a very general class of loss functions. Furthermore, to reduce the computational burden caused by conventional leave-subject/one-out cross-validation, we propose a SEcond-order-Approximated Leave-one/subject-out (SEAL) cross-validation, which substantially improves computational efficiency. In the context of non-independent and non-identically distributed random variables, we establish a unified theory for analyzing the asymptotic behavior of the proposed MACV and SEAL methods, where the number of candidate models is allowed to diverge with the sample size. To demonstrate the breadth of the proposed methodology, we exemplify four optimal model averaging estimators in four important settings: longitudinal data with discrete responses, within-cluster correlation structure modeling, conditional prediction in spatial data, and quantile regression with a potential correlation structure. We conduct extensive simulation studies and analyze real-data examples to illustrate the advantages of the proposed methods.
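In generic form (hypothetical notation, not the paper's exact criterion), cross-validation based model averaging selects weights by
$$ \hat{\mathbf{w}} = \arg\min_{\mathbf{w} \in \mathcal{W}}\; \sum_{i=1}^{n} L\!\Big( y_i,\; \sum_{m=1}^{M} w_m\, \hat{\mu}_m^{(-i)}(x_i) \Big), \qquad \mathcal{W} = \Big\{ \mathbf{w} : w_m \ge 0,\; \sum_{m=1}^{M} w_m = 1 \Big\}, $$
where $\hat{\mu}_m^{(-i)}$ is the $m$-th candidate model fitted with observation (or subject) $i$ left out and $L$ is a general loss; the point of a second-order approximation such as SEAL is to evaluate such a criterion without refitting each candidate model $n$ times.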
High-dimensional, higher-order tensor data are gaining prominence in a variety of fields, including but not limited to computer vision and network analysis. Tensor factor models, induced from noisy versions of tensor decompositions or factorizations, are natural and potent instruments for studying a collection of tensor-variate objects that may be dependent or independent. However, the development of statistical inference theory for estimating the various low-rank structures that customarily play the role of signals in tensor factor models is still at an early stage. In this paper, we attempt to ``decode'' the estimation of a higher-order tensor factor model by leveraging tensor matricization. Specifically, we recast it into mode-wise traditional high-dimensional vector/fiber factor models, enabling the deployment of conventional principal components analysis (PCA) for estimation. Demonstrated on the Tucker tensor factor model (TuTFaM), which is induced from the noisy version of the widely used Tucker decomposition, we show that estimation of the signal components is essentially a mode-wise PCA technique, and that incorporating projection and iteration enhances the signal-to-noise ratio to varying extents. We establish the inferential theory of the proposed estimators, conduct rich simulation experiments, and illustrate how the proposed estimators work in tensor reconstruction and in clustering for independent video and dependent economic datasets, respectively.
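As a minimal illustration of the matricization idea (generic notation, not the paper's precise estimator): for an observed third-order tensor $\mathcal{Y}_t = \mathcal{S}_t + \mathcal{E}_t$ with Tucker-type signal $\mathcal{S}_t = \mathcal{F}_t \times_1 A_1 \times_2 A_2 \times_3 A_3$, the mode-$k$ matricization gives
$$ \mathrm{mat}_k(\mathcal{Y}_t) \;=\; A_k\, \mathrm{mat}_k(\mathcal{F}_t)\, \Big(\bigotimes_{j \ne k} A_j\Big)^{\!\top} + \mathrm{mat}_k(\mathcal{E}_t) $$
(with the Kronecker factors in the appropriate mode ordering), which has the form of a vector/fiber factor model in mode $k$; the column space of $A_k$ can then be estimated by PCA on $\sum_t \mathrm{mat}_k(\mathcal{Y}_t)\,\mathrm{mat}_k(\mathcal{Y}_t)^{\top}$, and projecting onto preliminary estimates of the other loading matrices before iterating raises the signal-to-noise ratio.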