Reconstructing a dynamic object with affine motion in computerized tomography (CT) leads to motion artifacts if the motion is not taken into account. In most cases, the actual motion is neither known nor easily determined. As a consequence, the respective model describing the CT measurement is incomplete. The iterative RESESOP-Kaczmarz method can, under certain conditions and by exploiting the modeling error, reconstruct dynamic objects at different time points even if the exact motion is unknown. However, the method is very time-consuming. To speed up the reconstruction process and obtain better results, we combine the following three steps: 1. RESESOP-Kaczmarz with only a few iterations is applied to reconstruct the object at different time points. 2. The motion is estimated via landmark detection, e.g. using deep learning. 3. The estimated motion is integrated into the reconstruction process, allowing the use of dynamic filtered backprojection. We give a short review of all methods involved and present numerical results as a proof of principle.
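Step 2 of this pipeline, recovering the affine motion from detected landmarks, reduces to a linear least-squares fit once point correspondences between two reconstructed time points are available. The following is a minimal NumPy sketch of that fit only; landmark detection (e.g. by a neural network) and the RESESOP-Kaczmarz reconstruction itself are assumed to have been done elsewhere, and the function name fit_affine_2d is our own.

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares fit of a 2D affine map dst ~ A @ src + b.

    src, dst: arrays of shape (n_landmarks, 2) holding corresponding points
    detected in reconstructions at two different time points.
    Returns the 2x2 matrix A and the translation vector b.
    """
    n = src.shape[0]
    # Design matrix for the 6 affine parameters.
    X = np.hstack([src, np.ones((n, 1))])               # (n, 3)
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2)
    A = params[:2].T                                    # (2, 2)
    b = params[2]                                       # (2,)
    return A, b

# Toy check: recover a known affine motion from noisy landmark positions.
rng = np.random.default_rng(0)
A_true = np.array([[1.05, 0.10], [-0.08, 0.95]])
b_true = np.array([2.0, -1.0])
src = rng.uniform(-10, 10, size=(20, 2))
dst = src @ A_true.T + b_true + 0.01 * rng.standard_normal((20, 2))
A_est, b_est = fit_affine_2d(src, dst)
print(np.round(A_est, 3), np.round(b_est, 3))
```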
Missing data is a common problem in practical settings. Various imputation methods have been developed to deal with missing data. However, even though the label is usually available in the training data, common imputation practice relies only on the input and ignores the label. In this work, we illustrate how stacking the label into the input can significantly improve the imputation of the input. In addition, we propose a classification strategy that initializes the predicted test label with missing values and stacks the label with the input for imputation. This allows imputing the label and the input at the same time. Also, the technique is capable of handling training data with missing labels without any prior imputation and is applicable to continuous, categorical, or mixed-type data. Experiments show promising results in terms of accuracy.
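As an illustration of the stacking idea, the sketch below appends the label as an extra column that is observed for the training rows and missing for the test rows, and imputes everything jointly. scikit-learn's IterativeImputer stands in for whichever imputer is actually used in the paper; the dataset and the missingness pattern are arbitrary toy choices.

```python
import numpy as np
# IterativeImputer is still experimental in scikit-learn and must be enabled explicitly.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Introduce missing values in the inputs.
rng = np.random.default_rng(0)
X_tr[rng.random(X_tr.shape) < 0.2] = np.nan
X_te[rng.random(X_te.shape) < 0.2] = np.nan

# Stack the label as an extra column: known for training rows,
# missing (to be imputed) for test rows.
train_block = np.column_stack([X_tr, y_tr.astype(float)])
test_block = np.column_stack([X_te, np.full(len(X_te), np.nan)])
stacked = np.vstack([train_block, test_block])

imputed = IterativeImputer(random_state=0).fit_transform(stacked)

# The imputed label column of the test rows yields (soft) predictions.
y_pred = (imputed[len(X_tr):, -1] > 0.5).astype(int)
print("accuracy:", (y_pred == y_te).mean())
```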
Surrogate models provide a quick-to-evaluate approximation to complex computational models and are essential for multi-query problems like design optimisation. The inputs of current computational models are usually high-dimensional and uncertain. We consider Bayesian inference for constructing statistical surrogates with input uncertainties and intrinsic dimensionality reduction. The surrogates are trained by fitting to data from prevalent deterministic computational models. The assumed prior probability density of the surrogate is a Gaussian process. We determine the respective posterior probability density and parameters of the posited statistical model using variational Bayes. The non-Gaussian posterior is approximated by a simpler trial density with free variational parameters and the discrepancy between them is measured using the Kullback-Leibler (KL) divergence. We employ the stochastic gradient method to compute the variational parameters and other statistical model parameters by minimising the KL divergence. We demonstrate the accuracy and versatility of the proposed reduced dimension variational Gaussian process (RDVGP) surrogate on illustrative and robust structural optimisation problems with cost functions depending on a weighted sum of the mean and standard deviation of model outputs.
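A loose, simplified analogue of the surrogate workflow (not the RDVGP construction itself): reduce the input dimension with a fixed linear projection, fit a standard GP to data from a cheap stand-in model, and propagate Gaussian input uncertainty through the surrogate by Monte Carlo to obtain the output mean and standard deviation that enter the robust cost. PCA here replaces the dimension reduction that RDVGP learns variationally, and the toy model below is our own.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
d = 20

# Toy "computational model": high-dimensional inputs with an intrinsic
# two-dimensional structure driving the scalar output.
latent = rng.standard_normal((200, 2))
basis = np.linalg.qr(rng.standard_normal((d, 2)))[0]      # orthonormal d x 2
X_train = latent @ basis.T + 0.01 * rng.standard_normal((200, d))
y_train = np.sin(2.0 * latent[:, 0]) + 0.5 * latent[:, 1]

# Linear dimension reduction followed by a GP surrogate.
pca = PCA(n_components=2).fit(X_train)
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(pca.transform(X_train), y_train)

# Propagate Gaussian input uncertainty around a design point through the
# surrogate by Monte Carlo; the mean and standard deviation of the output are
# the two quantities entering the robust cost function.
x0 = basis[:, 0]
samples = x0 + 0.05 * rng.standard_normal((2000, d))
pred = gp.predict(pca.transform(samples))
print("output mean:", pred.mean(), " output std:", pred.std())
```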
Boundary condition (BC) calibration to assimilate clinical measurements is an essential step in any subject-specific simulation of cardiovascular fluid dynamics. Bayesian calibration approaches have successfully quantified the uncertainties inherent in identified parameters. Yet, routinely estimating the posterior distribution for all BC parameters in 3D simulations has been unattainable due to the prohibitive computational demand. We propose an efficient method to identify Windkessel parameter posteriors using results from a single high-fidelity three-dimensional (3D) model evaluation. We only evaluate the 3D model once for an initial choice of BCs and use the result to create a highly accurate zero-dimensional (0D) surrogate. We then perform Sequential Monte Carlo (SMC) using the optimized 0D model to derive the high-dimensional Windkessel BC posterior distribution. We validate this approach in a publicly available dataset of N=72 subject-specific vascular models. We found that optimizing 0D models to match 3D data a priori lowered their median approximation error by nearly one order of magnitude. In a subset of models, we confirm that the optimized 0D models still generalize to a wide range of BCs. Finally, we present the high-dimensional Windkessel parameter posterior for different measured signal-to-noise ratios in a vascular model using SMC. We further validate that the 0D-derived posterior is a good approximation of the 3D posterior. The minimal computational demand of our method using a single 3D simulation, combined with the open-source nature of all software and data used in this work, will increase access to and the efficiency of Bayesian Windkessel calibration in cardiovascular fluid dynamics simulations.
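A schematic sketch of the final calibration stage under strong simplifications: a two-parameter toy function stands in for the optimized 0D surrogate, the prior and noise model are arbitrary, and the sampler below is a bare-bones likelihood-tempered SMC rather than the implementation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the optimized 0D surrogate: maps two log-Windkessel-type
# parameters to a pair of summary measurements.
def surrogate_0d(theta):
    r, c = np.exp(theta)
    return np.array([r / (1.0 + r * c), r * c / (1.0 + c)])

theta_true = np.log([2.0, 0.5])
sigma = 0.05                                   # measurement noise level
data = surrogate_0d(theta_true) + sigma * rng.standard_normal(2)

def log_like(theta):
    resid = surrogate_0d(theta) - data
    return -0.5 * np.sum(resid**2) / sigma**2

# Likelihood-tempered SMC with multinomial resampling and one random-walk
# Metropolis move per particle and temperature level.
n_particles, betas = 2000, np.linspace(0.0, 1.0, 21)
particles = rng.normal(0.0, 1.0, size=(n_particles, 2))   # prior: N(0, 1) on log-parameters
loglikes = np.array([log_like(t) for t in particles])

for b_prev, b in zip(betas[:-1], betas[1:]):
    logw = (b - b_prev) * loglikes                         # incremental weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)   # resample
    particles, loglikes = particles[idx], loglikes[idx]
    # Metropolis move targeting prior * likelihood**b.
    prop = particles + 0.2 * rng.standard_normal(particles.shape)
    prop_ll = np.array([log_like(t) for t in prop])
    log_acc = (b * (prop_ll - loglikes)
               - 0.5 * np.sum(prop**2 - particles**2, axis=1))
    accept = np.log(rng.random(n_particles)) < log_acc
    particles[accept], loglikes[accept] = prop[accept], prop_ll[accept]

print("posterior mean (log-params):", particles.mean(axis=0), " true:", theta_true)
```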
Cross-validation is usually employed to evaluate the performance of a given statistical methodology. When such a methodology depends on a number of tuning parameters, cross-validation proves helpful in selecting the parameters that optimize the estimated performance. In this paper, however, a very different and nonstandard use of cross-validation is investigated. Instead of focusing on the cross-validated parameters, the main interest is switched to the estimated value of the error criterion at optimal performance. It is shown that this approach is able to provide consistent and efficient estimates of some density functionals, with the noteworthy feature that these estimates do not rely on the choice of any further tuning parameter, so that, in that sense, they can be considered to be purely empirical. Here, a base case of application of this new paradigm is developed in full detail, while many other possible extensions are hinted at as well.
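One concrete instance of the idea, assuming the functional of interest is the integrated squared density: run least-squares cross-validation for a Gaussian kernel density estimator and keep the minimized criterion value rather than the minimizing bandwidth; its negative then estimates the functional without any further tuning parameter. The toy below is our own illustration, not the paper's experiments.

```python
import numpy as np
from scipy.stats import norm

def lscv(x, h):
    """Least-squares cross-validation criterion for a Gaussian KDE."""
    n = len(x)
    d = x[:, None] - x[None, :]
    # Closed form for the integral of the squared density estimate.
    int_fhat_sq = norm.pdf(d, scale=np.sqrt(2) * h).sum() / n**2
    # Leave-one-out density estimates at the sample points.
    k = norm.pdf(d, scale=h)
    np.fill_diagonal(k, 0.0)
    loo = k.sum(axis=1) / (n - 1)
    return int_fhat_sq - 2.0 * loo.mean()

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)                   # true f is standard normal
hs = np.linspace(0.05, 1.0, 40)
crit = np.array([lscv(x, h) for h in hs])

# The minimized criterion value (not the minimizing bandwidth) estimates
# the negative of the integrated squared density.
print("estimate:", -crit.min(), " true value:", 1 / (2 * np.sqrt(np.pi)))
```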
Circuit complexity, defined as the minimum circuit size required for implementing a particular Boolean computation, is a foundational concept in computer science. Determining circuit complexity is believed to be a hard computational problem [1]. Recently, in the context of black holes, circuit complexity has been promoted to a physical property, wherein the growth of complexity is reflected in the time evolution of the Einstein-Rosen bridge (``wormhole'') connecting the two sides of an AdS ``eternal'' black hole [2]. Here we explore another link between complexity and thermodynamics for circuits of given functionality, making the physics-inspired approach relevant to real computational problems, for which functionality is the key element of interest. In particular, our thermodynamic framework provides a new perspective on the obfuscation of programs of arbitrary length, an important problem in cryptography, as thermalization through recursive mixing of neighboring sections of a circuit, which can be viewed as the mixing of two containers with ``gases of gates''. This recursive process equilibrates the average complexity and leads to the saturation of the circuit entropy, while preserving the functionality of the overall circuit. The thermodynamic arguments hinge on ergodicity in the space of circuits, which we conjecture is limited to disconnected ergodic sectors due to fragmentation. The notion of fragmentation has important implications for the problem of circuit obfuscation, as it implies that there are circuits with the same size and functionality that cannot be connected via local moves. Furthermore, we argue that fragmentation is unavoidable unless the complexity classes NP and coNP coincide, a statement that implies the collapse of the polynomial hierarchy of computational complexity theory to its first level.
Generalized cross-validation (GCV) is a widely used method for estimating the squared out-of-sample prediction risk that employs a scalar degrees-of-freedom adjustment (in a multiplicative sense) to the squared training error. In this paper, we examine the consistency of GCV for estimating the prediction risk of arbitrary ensembles of penalized least-squares estimators. We show that GCV is inconsistent for any finite ensemble of size greater than one. Towards repairing this shortcoming, we identify a correction that involves an additional scalar correction (in an additive sense) based on degrees-of-freedom-adjusted training errors from each ensemble component. The proposed estimator (termed CGCV) maintains the computational advantages of GCV and requires neither sample splitting, model refitting, nor out-of-bag risk estimation. The estimator stems from a finer inspection of the ensemble risk decomposition and two intermediate risk estimators for the components in this decomposition. We provide a non-asymptotic analysis of CGCV and the two intermediate risk estimators for ensembles of convex penalized estimators under Gaussian features and a linear response model. Furthermore, in the special case of ridge regression, we extend the analysis to general feature and response distributions using random matrix theory, which establishes model-free uniform consistency of CGCV.
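For context, the sketch below computes the standard (single-estimator) GCV adjustment for ridge regression and compares it with a Monte Carlo estimate of the out-of-sample risk; the additive correction that defines CGCV for ensembles is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 200, 50, 1.0
beta = rng.standard_normal(p) / np.sqrt(p)
X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)

# Ridge fit and its smoother ("hat") matrix, y_hat = H y.
G = X.T @ X + lam * np.eye(p)
H = X @ np.linalg.solve(G, X.T)
y_hat = H @ y
df = np.trace(H)                          # effective degrees of freedom

# Standard GCV: training error inflated by a multiplicative df adjustment.
gcv = np.mean((y - y_hat) ** 2) / (1.0 - df / n) ** 2

# Compare with a Monte Carlo estimate of the out-of-sample prediction risk.
X_new = rng.standard_normal((20000, p))
y_new = X_new @ beta + rng.standard_normal(20000)
beta_hat = np.linalg.solve(G, X.T @ y)
risk = np.mean((y_new - X_new @ beta_hat) ** 2)
print("GCV:", gcv, " out-of-sample risk:", risk)
```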
In this paper, we present an eigenvalue algorithm for block Hessenberg matrices based on ideas from non-commutative integrable systems and matrix-valued orthogonal polynomials. We introduce adjacent families of matrix-valued $\theta$-deformed bi-orthogonal polynomials and derive the corresponding discrete non-commutative hungry Toda lattice from discrete spectral transformations for the polynomials. It is shown that this discrete system can be used as a pre-processing algorithm for block Hessenberg matrices. In addition, convergence analysis and numerical examples of the algorithm are presented.
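For orientation only, the snippet below shows the baseline setting: a matrix is reduced to Hessenberg form by an orthogonal similarity, which preserves the spectrum, and its eigenvalues are then computed. It does not implement the Toda-lattice-based pre-processing proposed in the paper.

```python
import numpy as np
from scipy.linalg import hessenberg, eigvals

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 12))

# Reduce to (upper) Hessenberg form; the paper targets the block-Hessenberg case.
H, Q = hessenberg(A, calc_q=True)
assert np.allclose(np.tril(H, -2), 0.0)   # zeros below the first subdiagonal
assert np.allclose(Q @ H @ Q.T, A)        # similarity transform preserves the spectrum

print(np.sort_complex(eigvals(H)))
print(np.sort_complex(eigvals(A)))        # identical eigenvalues
```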
In the context of the optimization of rotating electric machines, many different objective functions are of interest, and taking them all into account during the optimization is of crucial importance. While evolutionary algorithms can provide a Pareto front straightforwardly and are widely used in this context, derivative-based optimization algorithms can be computationally more efficient. In this case, a Pareto front can be obtained by performing several optimization runs with different weights. In this work, we focus on a free-form shape optimization approach allowing for arbitrary motor geometries. In particular, we propose a way to efficiently obtain Pareto-optimal points by moving along the Pareto front, exploiting a homotopy method based on second-order shape derivatives.
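A toy illustration of the weighted-sum strategy on two quadratic objectives, with each run warm-started from the previous optimum as a crude stand-in for the continuation along the front; the actual free-form shape optimization and the second-order homotopy are not shown.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem (stand-in for two competing motor objectives).
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2

# Trace the Pareto front by sweeping the scalarization weight; each run is
# warm-started from the previous optimum.
x_start = np.zeros(2)
front = []
for w in np.linspace(0.0, 1.0, 21):
    res = minimize(lambda x: w * f1(x) + (1.0 - w) * f2(x), x_start, method="BFGS")
    x_start = res.x
    front.append((f1(res.x), f2(res.x)))

for p1, p2 in front:
    print(f"f1 = {p1:.3f}, f2 = {p2:.3f}")
```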
Any interactive protocol between a pair of parties can be reliably simulated in the presence of noise with a multiplicative overhead on the number of rounds (Schulman 1996). The reciprocal of the best (least) overhead is called the interactive capacity of the noisy channel. In this work, we present lower bounds on the interactive capacity of the binary erasure channel. Our lower bound improves on the best known bound, due to Ben-Yishai et al. (2021), by roughly a factor of 1.75. The improvement is due to a tighter analysis of the correctness of the simulation protocol using error pattern analysis. More precisely, instead of using the well-known technique of bounding the least number of erasures needed to make the simulation fail, we identify and bound the probability of specific erasure patterns causing simulation failure. We remark that error pattern analysis can be useful in solving other problems involving stochastic noise, such as bounding the interactive capacity of different channels.
We propose new linear combinations of compositions of a basic second-order scheme with appropriately chosen coefficients to construct higher-order numerical integrators for differential equations. They can be considered a generalization of extrapolation methods and multi-product expansions. A general analysis is provided and new methods up to order 8 are built and tested. The new approach is shown to reduce the latency problem when implemented in a parallel environment and leads to schemes that are significantly more efficient than standard extrapolation when the linear combination is delayed by a number of steps.
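A minimal sketch of the classical two-term case, which the paper generalizes: extrapolating a symmetric second-order scheme (velocity Verlet for the harmonic oscillator) via the linear combination (4/3) S_{h/2} S_{h/2} - (1/3) S_h yields a fourth-order method, as the observed convergence rate below confirms. The toy problem and step sizes are our own.

```python
import numpy as np

def verlet(q, p, h):
    """One velocity-Verlet step for the harmonic oscillator q'' = -q
    (a symmetric second-order base scheme)."""
    p = p + 0.5 * h * (-q)
    q = q + h * p
    p = p + 0.5 * h * (-q)
    return q, p

def mpe4(q, p, h):
    """Fourth-order multi-product step: (4/3) S_{h/2} S_{h/2} - (1/3) S_h."""
    qa, pa = verlet(*verlet(q, p, 0.5 * h), 0.5 * h)
    qb, pb = verlet(q, p, h)
    return (4 * qa - qb) / 3.0, (4 * pa - pb) / 3.0

def global_error(h, T=1.0):
    q, p = 1.0, 0.0                 # exact solution: q = cos(t), p = -sin(t)
    for _ in range(round(T / h)):
        q, p = mpe4(q, p, h)
    return np.hypot(q - np.cos(T), p + np.sin(T))

e1, e2 = global_error(0.1), global_error(0.05)
print("errors:", e1, e2, " observed order:", np.log2(e1 / e2))
```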