Building footprints and their inventories are recognised as foundational spatial information for addressing multiple societal problems. Extracting complex urban buildings involves the segmentation of very high-resolution (VHR) earth observation (EO) images. U-Net is a common deep learning network for such segmentation and the foundation for newer variants such as ResUnet, U-Net++ and U-Net3+. These variants seek efficiency gains by redesigning the skip connection component and exploiting the multi-scale features in U-Net. However, skip connections do not always improve these networks, and removing some of them can yield efficiency gains and fewer network parameters. In this paper, we propose three dual skip connection mechanisms for U-Net, ResUnet, and U-Net3+. These mechanisms deepen the feature maps forwarded by the skip connections and allow us to study which skip connections need to be denser to yield the highest efficiency gain. The mechanisms are evaluated on feature maps of different scales in the three networks, producing nine new network configurations. The networks are evaluated against their original vanilla versions using four building footprint datasets (three existing and one new) of different spatial resolutions: VHR (0.3 m), high-resolution (1 m and 1.2 m), and multi-resolution (0.3 m + 0.6 m + 1.2 m). The proposed mechanisms yield efficiency gains on four evaluation measures for U-Net and ResUnet, and up to 17.7% and 18.4% gains in F1 score and Intersection over Union (IoU), respectively, for U-Net3+. The code will be made available on GitHub after peer review.
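The abstract does not spell out the three mechanisms, so the PyTorch sketch below only illustrates one plausible reading of "deepening the feature maps forwarded by the skip connections": the encoder features are forwarded both directly and through an extra convolutional block, and the two copies are fused with the decoder features. The class name `DualSkipBlock` and the concatenation-based fusion are our assumptions, not the paper's actual design.

```python
# Hypothetical sketch of a "dual" skip connection: the encoder feature map is
# forwarded both directly and through an extra convolutional block that deepens
# it, and the two copies are concatenated with the upsampled decoder features.
# Names and the fusion rule are illustrative assumptions, not the paper's
# definition of its three mechanisms.
import torch
import torch.nn as nn


class DualSkipBlock(nn.Module):
    def __init__(self, enc_channels, dec_channels, out_channels):
        super().__init__()
        # Extra convolutions that "deepen" the skipped feature map.
        self.deepen = nn.Sequential(
            nn.Conv2d(enc_channels, enc_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(enc_channels),
            nn.ReLU(inplace=True),
        )
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * enc_channels + dec_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, enc_feat, dec_feat):
        # dec_feat is assumed to be already upsampled to enc_feat's spatial size.
        dual = torch.cat([enc_feat, self.deepen(enc_feat), dec_feat], dim=1)
        return self.fuse(dual)


# Example: fuse a 64-channel encoder map with a 128-channel decoder map.
block = DualSkipBlock(enc_channels=64, dec_channels=128, out_channels=64)
out = block(torch.randn(1, 64, 128, 128), torch.randn(1, 128, 128, 128))
print(out.shape)  # torch.Size([1, 64, 128, 128])
```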
We examine the problem of estimating the footprint uncertainty of objects imaged using infrastructure-based camera sensing. A closed-form relationship is established between the ground coordinates and the sources of camera error. Using the error propagation equation, the covariance of a given ground coordinate can be expressed as a function of the camera errors. The uncertainty of the bounding-box footprint can then be given as a function of all the extreme points of the object footprint. In order to calculate the uncertainty of a ground point, the typical magnitudes of the error sources are required. We present a method for estimating these typical error magnitudes from an experiment using a static, high-precision LiDAR as the ground truth. Finally, we present a simulated case study of uncertainty quantification from an infrastructure-based camera in CARLA to provide a sense of how the uncertainty changes across a left-turn maneuver.
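For reference, the first-order error propagation step referred to above can be written generically as follows (notation ours): the covariance of a ground coordinate follows from the Jacobian of the closed-form camera-to-ground mapping.

```latex
% First-order (linearised) error propagation, stated generically: if the ground
% coordinate p = f(\theta) is a closed-form function of the camera error sources
% \theta with covariance \Sigma_\theta, then
\[
  \Sigma_p \;\approx\; J \,\Sigma_\theta\, J^{\top},
  \qquad J = \frac{\partial f}{\partial \theta}\bigg|_{\hat\theta},
\]
% and the footprint (bounding-box) uncertainty follows by applying this to each
% extreme point of the object footprint.
```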
Early sensory systems in the brain rapidly adapt to fluctuating input statistics, which requires recurrent communication between neurons. Mechanistically, such recurrent communication is often indirect and mediated by local interneurons. In this work, we explore the computational benefits of mediating recurrent communication via interneurons compared with direct recurrent connections. To this end, we consider two mathematically tractable recurrent linear neural networks that statistically whiten their inputs -- one with direct recurrent connections and the other with interneurons that mediate recurrent communication. By analyzing the corresponding continuous synaptic dynamics and numerically simulating the networks, we show that the network with interneurons is more robust to initialization than the network with direct recurrent connections in the sense that the convergence time for the synaptic dynamics in the network with interneurons (resp. direct recurrent connections) scales logarithmically (resp. linearly) with the spectrum of their initialization. Our results suggest that interneurons are computationally useful for rapid adaptation to changing input statistics. Interestingly, the network with interneurons is an overparameterized solution of the whitening objective for the network with direct recurrent connections, so our results can be viewed as a recurrent linear neural network analogue of the implicit acceleration phenomenon observed in overparameterized feedforward linear neural networks.
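As a rough formalisation of the two circuits being compared (our notation and sign conventions, which may differ from the paper's), the whitening condition and the overparameterised interneuron factorisation can be written as:

```latex
% One standard way to formalise statistical whitening in a recurrent linear
% network (our notation). With inputs x of covariance C_{xx}, the steady-state
% output of a network with direct recurrent (lateral) connections M solves
% y = x - M y, i.e.
\[
  y = (I_n + M)^{-1} x ,
\]
% and whitening, \mathbb{E}[y y^\top] = I_n, holds at the fixed point
\[
  I_n + M = C_{xx}^{1/2} \qquad (M \text{ symmetric}).
\]
% In the interneuron-mediated network the lateral term is routed through k
% interneurons with weights W \in \mathbb{R}^{k \times n}, which amounts (for
% symmetric weights) to the factorisation M = W^\top W; in this sense the
% interneuron network overparameterises the direct-connection solution.
```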
The design of automatic speech pronunciation assessment can be categorized into closed- and open-response scenarios, each with strengths and limitations. A system able to function in both scenarios can cater to diverse learning needs and provide a more precise and holistic assessment of pronunciation skills. In this study, we propose a Multi-task Pronunciation Assessment model called MultiPA. MultiPA provides an alternative to Kaldi-based systems in that it has simpler format requirements and better compatibility with other neural network models. Compared with previous open-response systems, MultiPA provides a wider range of evaluations, encompassing assessments at both the sentence and word levels. Our experimental results show that MultiPA achieves comparable performance in closed-response scenarios and maintains more robust performance when used directly for open responses.
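As a generic illustration of multi-task scoring at both granularities (not MultiPA's actual architecture; the encoder, score dimensions, and pooling are assumptions), a shared frame-level representation can feed separate sentence-level and word-level heads:

```python
# Generic multi-task assessment head, as a hypothetical illustration of scoring
# at both the sentence and word levels from a shared speech encoder; the actual
# MultiPA architecture, feature extractor, and score dimensions may differ.
import torch
import torch.nn as nn


class MultiTaskAssessmentHead(nn.Module):
    def __init__(self, feat_dim=768, n_sentence_scores=4, n_word_scores=3):
        super().__init__()
        # Sentence-level scores from a pooled utterance representation.
        self.sentence_head = nn.Linear(feat_dim, n_sentence_scores)
        # Word-level scores predicted from per-word (pooled frame) features.
        self.word_head = nn.Linear(feat_dim, n_word_scores)

    def forward(self, frame_feats, word_boundaries):
        # frame_feats: (T, feat_dim) encoder outputs for one utterance.
        # word_boundaries: list of (start, end) frame indices per word.
        sentence_scores = self.sentence_head(frame_feats.mean(dim=0))
        word_feats = torch.stack(
            [frame_feats[s:e].mean(dim=0) for s, e in word_boundaries]
        )
        word_scores = self.word_head(word_feats)
        return sentence_scores, word_scores


head = MultiTaskAssessmentHead()
feats = torch.randn(200, 768)                  # stand-in for encoder features
sent, words = head(feats, [(0, 50), (50, 120), (120, 200)])
print(sent.shape, words.shape)                 # torch.Size([4]) torch.Size([3, 3])
```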
The choice to participate in a data-driven service, often made on the basis of the quality of that service, influences the ability of the service to learn and improve. We study the participation and retraining dynamics that arise when both the learners and sub-populations of users are \emph{risk-reducing}, a class that covers a broad range of updates including gradient descent and multiplicative weights. Suppose, for example, that individuals choose to spend their time amongst social media platforms proportionally to how well each platform works for them. Each platform also gathers data about its active users, which it uses to update parameters with a gradient step. For this example and for our general class of dynamics, we show that the only asymptotically stable equilibria are segmented, with sub-populations allocated to a single learner. Under mild assumptions, the utilitarian social optimum is a stable equilibrium. In contrast to previous work, which shows that repeated risk minimization can result in representation disparity and high overall loss for a single learner \citep{hashimoto2018fairness,miller2021outside}, we find that repeated myopic updates with multiple learners lead to better outcomes. We illustrate the phenomena via a simulated example initialized from real data.
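A minimal toy version of these dynamics, written by us purely for illustration (two sub-populations, two learners, squared loss, softmax-proportional time allocation), already exhibits the segmented equilibria described above:

```python
# Toy simulation of the participation/retraining dynamics described above
# (our own illustrative setup, not the paper's experiments): users re-allocate
# time proportionally to how well each learner works for them, and each learner
# takes a gradient step on its active users.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([-1.0, 1.0])        # sub-population means
pop_size = np.array([0.5, 0.5])   # relative sizes
theta = rng.normal(size=2)        # one scalar parameter per learner
lr = 0.1

for t in range(500):
    # Expected squared loss of each learner on each group.
    loss = (mu[:, None] - theta[None, :]) ** 2
    # Risk-reducing allocation: softmax over negative loss, row per group.
    alloc = np.exp(-loss)
    alloc /= alloc.sum(axis=1, keepdims=True)
    # Each learner takes a gradient step on the data of its active users.
    weights = alloc * pop_size[:, None]               # data volume per (g, k)
    grad = -2 * (weights * (mu[:, None] - theta[None, :])).sum(axis=0)
    grad /= weights.sum(axis=0) + 1e-12
    theta -= lr * grad

print("learner parameters:", theta)            # close to the two group means
print("allocation matrix:\n", alloc.round(3))  # near-segmented equilibrium
```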
Containers offer an array of advantages that benefit research reproducibility and portability across groups and systems. As container tools mature, container security improves, and high-performance computing (HPC) and cloud system tools converge, supercomputing centers are increasingly integrating containers into their workflows. The technology selection process requires sufficient information on the diverse tools available, yet the majority of research into containers still focuses on cloud environments. We consider an adaptive containerization approach, with a focus on accelerating the deployment of applications and workflows on HPC systems using containers. To this end, we discuss the specific HPC requirements regarding container tools and analyze the entire containerization stack, including container engines and registries, in depth. Finally, we consider various orchestrator and HPC workload manager integration scenarios.
Numerical methods such as the Finite Element Method (FEM) have been successfully adapted to utilize the computational power of GPU accelerators. However, much of the effort around applying FEM on GPUs has focused on high-order FEM, due to its higher arithmetic intensity and order of accuracy. For applications such as the simulation of subsurface processes, high levels of heterogeneity result in high-resolution grids characterized by highly discontinuous (cell-wise) material property fields. Moreover, due to the significant uncertainties in the characterization of the domain of interest, e.g. geologic reservoirs, the benefits of high-order accuracy are reduced, and low-order methods are typically employed. In this study, we present a strategy for implementing highly performant low-order matrix-free FEM operator kernels in the context of the conjugate gradient (CG) method. Performance results of matrix-free Laplace and isotropic elasticity operator kernels are presented and are shown to compare favorably to matrix-based SpMV operators on V100, A100, and MI250X GPUs.
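The matrix-free idea is that the operator is only ever applied, never assembled, so no sparse matrix has to be stored or streamed. The sketch below shows the pattern on a CPU with SciPy, using a 2D 5-point Laplacian as a stand-in for the low-order FEM operator; it is not the paper's GPU kernel.

```python
# Illustrative matrix-free pattern: the discrete operator is applied as a
# function inside CG, so no sparse matrix is ever assembled.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 128                       # grid points per side (Dirichlet boundaries)
h2 = 1.0 / (n + 1) ** 2


def laplace_matvec(u_flat):
    """Apply the 5-point Laplacian stencil without assembling a matrix."""
    u = u_flat.reshape(n, n)
    out = 4.0 * u
    out[1:, :] -= u[:-1, :]
    out[:-1, :] -= u[1:, :]
    out[:, 1:] -= u[:, :-1]
    out[:, :-1] -= u[:, 1:]
    return (out / h2).ravel()


A = LinearOperator((n * n, n * n), matvec=laplace_matvec)
b = np.ones(n * n)                 # right-hand side f = 1
u, info = cg(A, b, maxiter=2000)   # unpreconditioned CG, default tolerance
print("CG converged:", info == 0, " max(u) =", u.max())
```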
This paper addresses the overwhelming computational resources needed by standard numerical approaches to simulate architected materials. These multiscale heterogeneous lattice structures have gained intense interest alongside advances in additive manufacturing, as they offer, among many other benefits, excellent stiffness-to-weight ratios. We develop here a dedicated HPC solver that exploits the specific nature of the underlying problem in order to drastically reduce the computational costs (memory and time) of the full fine-scale analysis of lattice structures. Our purpose is to take advantage of the natural domain decomposition into cells and, even more importantly, of the geometrical and mechanical similarities among cells. Our solver consists of a so-called inexact FETI-DP method in which the local, cell-wise operators and solutions are approximated with reduced order modeling techniques. Instead of considering every cell independently, we end up with only a few principal local problems to solve and use the corresponding principal cell-wise operators to approximate all the others. This results in a scalable algorithm that avoids numerous local factorizations. Our solver is applied to the isogeometric analysis of lattices built by spline composition, which offers the opportunity to compute the reduced basis with macro-scale data, thereby making our method also multiscale and matrix-free. The solver is tested on various 2D and 3D analyses. It shows major gains with respect to black-box solvers; in particular, problems with several million degrees of freedom can be solved on a simple computer within a few minutes.
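The reuse idea can be illustrated with a toy example (ours, not the actual inexact FETI-DP implementation): when many cell-wise operators are nearly identical, factorising only a few principal representatives and reusing those factorisations as approximate local solves already gives small residuals.

```python
# Toy sketch of reusing principal cell-wise factorisations: group near-identical
# local operators, factorise one representative per group, and reuse that
# factorisation as an approximate local solve for every cell in the group.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(1)
n_cells, n_dofs = 1000, 50

# Two "principal" cell operators (SPD), each perturbed slightly per cell.
base = [np.eye(n_dofs) + M @ M.T for M in rng.normal(size=(2, n_dofs, n_dofs))]
cell_group = rng.integers(0, 2, size=n_cells)          # similarity grouping
cells = [base[g] + 1e-3 * np.eye(n_dofs) for g in cell_group]

# Factorise only the two principal operators instead of all 1000 cells.
principal_factors = [cho_factor(K) for K in base]

rhs = rng.normal(size=n_dofs)
for i, K in enumerate(cells[:5]):
    x_approx = cho_solve(principal_factors[cell_group[i]], rhs)
    res = np.linalg.norm(K @ x_approx - rhs) / np.linalg.norm(rhs)
    print(f"cell {i}: relative residual of reused solve = {res:.2e}")
```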
Sample size determination for cluster randomised trials (CRTs) is challenging because it requires robust estimation of the intra-cluster correlation coefficient (ICC). Typically, the sample size is chosen to provide a certain level of power to reject the null hypothesis in a hypothesis test. This relies on the minimal clinically important difference (MCID) and estimates of the standard deviation, the ICC and possibly the coefficient of variation of the cluster size. Varying these parameters can have a strong effect on the sample size; in particular, the sample size is sensitive to small differences in the ICC. A relevant ICC estimate is often not available, or the available estimate is imprecise. If the ICC used is far from the unknown true value, this can lead to trials which are substantially over- or under-powered. We propose a hybrid approach using Bayesian assurance to find the sample size for a CRT with a frequentist analysis. Assurance is an alternative to power which incorporates uncertainty in the parameters through prior distributions. We suggest specifying prior distributions for the standard deviation, the ICC and the coefficient of variation of the cluster size, while still utilising the MCID. We illustrate the approach through the design of a CRT in post-stroke incontinence and show that assurance can be used to find a sample size based on an elicited prior distribution for the ICC, whereas a power calculation discards all information in the prior except a single point estimate. Results show that this approach can avoid misspecifying sample sizes when prior medians for the ICC are very similar but the prior distributions exhibit quite different behaviour. Assurance provides an understanding of the probability of success of a trial given an MCID and can be used to produce sample sizes that are robust to parameter uncertainty. This is especially useful when reliable parameter estimates are difficult to obtain.
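Assurance is power averaged over the prior, so it is straightforward to approximate by Monte Carlo. The sketch below uses a simplified two-arm parallel CRT with equal cluster sizes and a z-test approximation, and its priors are illustrative placeholders rather than the elicited priors used in the paper.

```python
# Minimal Monte Carlo sketch of assurance for a two-arm parallel CRT
# (simplified setup: equal cluster sizes, normal outcome, z-test approximation;
# prior choices are illustrative, not elicited).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
mcid, alpha = 0.3, 0.05          # minimal clinically important difference
k, m = 30, 20                    # clusters per arm, subjects per cluster

n_sims = 100_000
icc = rng.beta(2, 30, n_sims)            # prior on the ICC
sigma = rng.gamma(40, 1 / 40, n_sims)    # prior on the outcome SD (mean 1)

# Design effect and effective per-arm sample size for equal cluster sizes.
deff = 1 + (m - 1) * icc
n_eff = k * m / deff

# Power of a two-sided z-test for a difference of means equal to the MCID.
z_crit = norm.ppf(1 - alpha / 2)
power = norm.cdf(mcid / (sigma * np.sqrt(2 / n_eff)) - z_crit)

# Plug-in power at prior point estimates vs. prior-averaged power (assurance).
deff_hat = 1 + (m - 1) * icc.mean()
plug_in = norm.cdf(mcid / (sigma.mean() * np.sqrt(2 * deff_hat / (k * m))) - z_crit)
print(f"plug-in power at prior means:     {plug_in:.3f}")
print(f"assurance (prior-averaged power): {power.mean():.3f}")
```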
We combine Kronecker products and quantitative information flow to give a novel formal analysis for the fine-grained verification of utility in complex privacy pipelines. The combination explains a surprising anomaly in the utility behaviour of privacy-preserving pipelines -- that sometimes a reduction in privacy also results in a decrease in utility. We use the standard measure of utility for Bayesian analysis, introduced by Ghosh et al., to produce tractable and rigorous proofs of the fine-grained statistical behaviour leading to the anomaly. More generally, we offer the prospect of formal-analysis tools for utility that complement extant formal analyses of privacy. We demonstrate our results on a number of common privacy-preserving designs.
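For orientation, the Bayesian utility measure of Ghosh et al. is usually phrased as the expected loss achievable by a consumer who holds a prior over the true inputs and optimally post-processes the mechanism's output; in our own (informal) notation:

```latex
% The Ghosh et al. utility model, paraphrased in our notation: a consumer with
% prior \pi over true inputs i observes the output o of mechanism M and applies
% a remap r to minimise an expected loss \ell; the achievable expected loss
% (whose minimisation defines utility) is
\[
  \mathcal{L}(\pi, M) \;=\; \min_{r}\; \sum_{i} \pi_i \sum_{o}
  \Pr[M(i) = o]\; \ell\bigl(i, r(o)\bigr),
\]
% so higher utility corresponds to a lower achievable expected loss.
```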
High-level synthesis (HLS) refers to the automatic translation of a software program written in a high-level language into a hardware design. Modern HLS tools have moved away from the traditional approach of static (compile-time) scheduling of operations towards generating dynamic circuits that schedule operations at run time. Such circuits trade off area utilisation for increased dynamism and throughput. However, existing lowering flows in dynamically scheduled HLS tools rely on conservative assumptions about their input program, due both to the intermediate representations (IRs) used and to the lack of a formal specification of the translation into hardware. These assumptions cause suboptimal hardware performance. In this work, we lift these assumptions by proposing a new and efficient abstraction for hardware mapping, namely h-GSA, an extension of the Gated Static Single Assignment (GSA) IR. Using this abstraction, we propose a lowering flow that transforms GSA into h-GSA and maps h-GSA into dynamically scheduled hardware circuits. We compare the schedules generated by our approach to those produced by the state-of-the-art dynamically scheduled HLS tool Dynamatic, and illustrate the potential performance improvements from hardware mapping using the proposed abstraction.