An outbreak of the Delta (B.1.617.2) variant of SARS-CoV-2 that began around mid-June 2021 in Sydney, Australia, quickly developed into a nationwide epidemic. The ongoing epidemic is of major concern, as the Delta variant is more infectious than previous variants that circulated in Australia in 2020. Using a re-calibrated agent-based model, we explored a feasible range of non-pharmaceutical interventions, including case isolation, home quarantine, school closures, and stay-at-home restrictions (i.e., "social distancing"). Our modelling indicated that the levels of reduced interactions in workplaces and across communities attained in Sydney and other parts of the nation were inadequate for controlling the outbreak. A counterfactual analysis suggested that if 70% of the population had followed tight stay-at-home restrictions, then at least 45 days would have been needed for new daily cases to fall from their peak to below ten per day. Our model successfully predicted that, under a progressive vaccination rollout, if 40-50% of the Australian population were to follow stay-at-home restrictions, the incidence would peak by mid-October 2021. We also quantified the expected burden on the healthcare system and potential fatalities across Australia.
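As a minimal illustration of the mechanism such a model captures, the sketch below implements a toy agent-based SIR simulation in which a `compliance` fraction of agents reduces its contact rate under stay-at-home restrictions. This is not the authors' re-calibrated model (which is far richer), and all parameter values are illustrative; it only conveys how the compliance fraction changes the epidemic trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_agents=100_000, beta=0.3, gamma=0.1, compliance=0.7,
             contact_reduction=0.8, days=120, seed_cases=20):
    """Toy agent-based SIR: compliant agents reduce their contact rate."""
    # 0 = susceptible, 1 = infectious, 2 = recovered
    state = np.zeros(n_agents, dtype=np.int8)
    state[rng.choice(n_agents, seed_cases, replace=False)] = 1
    compliant = rng.random(n_agents) < compliance
    daily_cases = []
    for _ in range(days):
        n_inf = (state == 1).sum()
        # per-agent infection pressure, scaled down for compliant agents
        pressure = beta * n_inf / n_agents
        p_inf = np.where(compliant, pressure * (1 - contact_reduction), pressure)
        new_inf = (state == 0) & (rng.random(n_agents) < p_inf)
        recovered = (state == 1) & (rng.random(n_agents) < gamma)
        state[new_inf] = 1
        state[recovered] = 2
        daily_cases.append(int(new_inf.sum()))
    return daily_cases

# Higher compliance flattens and delays the peak of daily cases.
print(max(simulate(compliance=0.4)), max(simulate(compliance=0.7)))
```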
Organisations use issue tracking systems (ITSs) to track and document their projects' work in units called issues. This style of documentation encourages evolutionary refinement, as each issue can be independently improved, commented on, linked to other issues, and progressed through the organisational workflow. Commonly studied ITSs so far include GitHub, GitLab, and Bugzilla, while Jira, one of the most popular ITSs in practice, with a wealth of additional information, has yet to receive such attention. Unfortunately, diverse public Jira datasets are rare, likely due to the difficulty in finding and accessing these repositories. With this paper, we release a dataset of 16 public Jira repositories with 1822 projects, spanning 2.7 million issues with a combined total of 32 million changes, 9 million comments, and 1 million issue links. We believe this Jira dataset will lead to many fruitful research projects investigating issue evolution, issue linking, cross-project analysis, as well as cross-tool analysis when combined with existing well-studied ITS datasets.
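Public Jira repositories of the kind collected here expose the standard Jira REST API, so individual instances can also be queried directly. Below is a hedged sketch, assuming anonymous access is permitted (Apache's public Jira is used as the example instance) and using the v2 search endpoint; the project and JQL query are arbitrary examples, not part of the released dataset's format.

```python
from itertools import islice

import requests

BASE = "https://issues.apache.org/jira"  # example public Jira instance

def fetch_issues(jql, page_size=50):
    """Page through /rest/api/2/search and yield raw issue dicts."""
    start = 0
    while True:
        resp = requests.get(
            f"{BASE}/rest/api/2/search",
            params={"jql": jql, "startAt": start, "maxResults": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        yield from data["issues"]
        start += len(data["issues"])
        if start >= data["total"] or not data["issues"]:
            break

# Print the first ten matching issues.
for issue in islice(fetch_issues("project = KAFKA AND issuetype = Bug"), 10):
    print(issue["key"], issue["fields"]["summary"])
```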
We focus on parameterized policy search for reinforcement learning over continuous action spaces. Typically, one assumes the score function associated with a policy is bounded, which fails to hold even for Gaussian policies. To properly address this issue, one must introduce an exploration tolerance parameter to quantify the region in which the score function is bounded. Doing so incurs a persistent bias that appears in the attenuation rate of the expected policy gradient norm and is inversely proportional to the radius of the action space. To mitigate this hidden bias, heavy-tailed policy parameterizations, which exhibit a bounded score function, may be used, but doing so can cause instability in algorithmic updates. To address these issues, in this work we study the convergence of policy gradient algorithms under heavy-tailed parameterizations, which we propose to stabilize with a combination of mirror-ascent-type updates and gradient tracking. Our main theoretical contribution is establishing that this scheme converges with constant step and batch sizes, whereas prior works require these parameters to shrink to zero or grow to infinity, respectively. Experimentally, this scheme under a heavy-tailed policy parameterization yields improved reward accumulation across a variety of settings compared with standard benchmarks.
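To see why heavy tails matter here, compare the score functions with respect to the location parameter: the Gaussian score grows linearly in the action, whereas a heavy-tailed (Cauchy) score stays bounded by $1/\gamma$. The sketch below illustrates only this bounded-score property, not the proposed mirror-ascent/gradient-tracking scheme.

```python
import numpy as np

def gaussian_score_mu(a, mu, sigma):
    """d/dmu log N(a; mu, sigma^2): grows linearly in |a - mu| (unbounded)."""
    return (a - mu) / sigma**2

def cauchy_score_mu(a, mu, gamma):
    """d/dmu log Cauchy(a; mu, gamma): bounded by 1/gamma for all a."""
    z = (a - mu) / gamma
    return (2.0 / gamma) * z / (1.0 + z**2)

a = np.linspace(-50, 50, 5)
print(gaussian_score_mu(a, 0.0, 1.0))  # blows up with |a|
print(cauchy_score_mu(a, 0.0, 1.0))    # stays within [-1, 1]
```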
We implement a data assimilation framework for integrating ice surface and terminus position observations into a numerical ice-flow model. The model uses the well-known shallow shelf approximation (SSA) coupled to a level set method to capture ice motion and changes in the glacier geometry. The level set method explicitly tracks the evolving ice-atmosphere and ice-ocean boundaries for a marine outlet glacier. We use an Ensemble Transform Kalman Filter to assimilate observations of ice surface elevation and lateral ice extent by updating the level set function that describes the ice interface. Numerical experiments on an idealized marine-terminating glacier demonstrate the effectiveness of our data assimilation approach for tracking seasonal and multi-year glacier advance and retreat cycles. The model is also applied to simulate Helheim Glacier, a major tidewater-terminating glacier of the Greenland Ice Sheet that has experienced a recent history of rapid retreat. By assimilating observations from remotely sensed surface elevation profiles, we are able to track the migrating glacier terminus and glacier surface changes more accurately. These results support the use of data assimilation methodologies for obtaining more accurate predictions of short-term ice sheet dynamics.
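For reference, here is a compact sketch of the standard ETKF analysis step used to update an ensemble; the observation operator is assumed linear for simplicity, and in the paper's setting the state vector would include the level set function describing the ice interface.

```python
import numpy as np

def etkf_update(X, y, H, R):
    """Ensemble Transform Kalman Filter analysis step.

    X : (n, N) ensemble of state vectors (e.g. level set values on a grid)
    y : (m,)   observations (e.g. surface elevation, lateral ice extent)
    H : (m, n) linear observation operator
    R : (m, m) observation error covariance
    """
    n, N = X.shape
    x_mean = X.mean(axis=1, keepdims=True)
    Xp = X - x_mean                      # state perturbations
    Yp = H @ Xp                          # observation-space perturbations
    Rinv = np.linalg.inv(R)
    # analysis error covariance in ensemble space
    A = np.linalg.inv((N - 1) * np.eye(N) + Yp.T @ Rinv @ Yp)
    w_mean = A @ Yp.T @ Rinv @ (y - (H @ x_mean).ravel())
    # symmetric square root transforms the perturbations
    evals, evecs = np.linalg.eigh((N - 1) * A)
    W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return x_mean + Xp @ (w_mean[:, None] + W)
```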
Researchers are often faced with evaluating the effect of a policy or program that was simultaneously initiated across an entire population of units at a single point in time, and whose effects over the targeted population can manifest at any time period afterwards. In the presence of data measured over time, Bayesian time series models have been used to impute what would have happened after the policy was initiated, had the policy not taken place, in order to estimate causal effects. However, considerations regarding the definition of the target estimands, the underlying assumptions, the plausibility of such assumptions, and the choice of an appropriate model have not been thoroughly investigated. In this paper, we establish useful estimands for the evaluation of large-scale policies. We argue that imputation of missing potential outcomes relies on an assumption which, even though untestable, can be partially evaluated using observed data. We illustrate an approach to evaluate this key causal assumption and facilitate model elicitation based on data from the time interval before policy initiation, using classic statistical techniques. As an illustration, we study the Hospital Readmissions Reduction Program (HRRP), a US federal intervention aiming to improve health outcomes for patients with pneumonia, acute myocardial infarction, or congestive heart failure admitted to a hospital. We evaluate the effect of the HRRP on population mortality across the US and in four geographic subregions, and over different time windows. We find that the HRRP increased mortality from the three targeted conditions across most scenarios considered, and is likely to have had a detrimental effect on public health.
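The impute-and-compare logic can be sketched as follows. We use a simple ARIMA model as a frequentist stand-in for the Bayesian time series models discussed above; the function and series names are hypothetical, and the series is assumed to be indexed by date.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def policy_effect(y: pd.Series, t0: str, order=(1, 0, 0)):
    """Impute the untreated counterfactual after policy start t0 and
    contrast it with the observed post-policy outcomes."""
    pre = y.loc[:t0].iloc[:-1]           # data before policy initiation
    post = y.loc[t0:]                    # observed outcomes afterwards
    fit = ARIMA(pre, order=order).fit()  # model elicited on pre-period data
    fc = fit.get_forecast(steps=len(post))
    y0_hat = fc.predicted_mean           # imputed untreated potential outcomes
    effect = post.to_numpy() - y0_hat.to_numpy()  # observed minus counterfactual
    return effect, fc.conf_int(alpha=0.05)
```

Fitting only on pre-policy data is what makes the key assumption partially checkable: the model's out-of-sample fit can be assessed on held-out pre-period windows before trusting its post-period imputations.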
Non-Volatile Memory (NVM) cells are used in neuromorphic hardware to store model parameters, which are programmed as resistance states. NVMs suffer from the read disturb issue, where the programmed resistance state drifts upon repeated access of a cell during inference. Resistance drift can lower the inference accuracy. To address this, it is necessary to periodically reprogram model parameters, a high-overhead operation. We study read disturb failures of an NVM cell. Our analysis shows a strong dependency both on model characteristics, such as synaptic activation and criticality, and on the voltage used to read resistance states during inference. We propose a system software framework to incorporate such dependencies when programming model parameters on the NVM cells of a neuromorphic hardware. Our framework consists of a convex optimization formulation that aims to map synaptic weights that have more activations and are critical, i.e., those that have a high impact on accuracy, to NVM cells that are exposed to lower voltages during inference. In this way, we increase the time interval between two consecutive reprogrammings of model parameters. We evaluate our system software with many emerging inference models on a neuromorphic hardware simulator and show a significant reduction in the system overhead.
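As a simplified stand-in for the paper's convex formulation, the placement idea can be illustrated as a linear assignment problem: weights with high activation and criticality are mapped to cells read at lower voltages. All numbers below are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical per-weight statistics and per-cell read voltages.
activations = np.array([120, 5, 300, 42])          # reads per inference window
criticality = np.array([0.9, 0.1, 0.7, 0.4])       # impact on accuracy if disturbed
cell_voltage = np.array([0.30, 0.18, 0.25, 0.12])  # read voltage per NVM cell (V)

# Cost of placing weight i on cell j: disturb-prone (high-voltage) cells
# should not hold frequently read, critical weights.
importance = activations * criticality
cost = np.outer(importance, cell_voltage)

rows, cols = linear_sum_assignment(cost)  # optimal one-to-one placement
for w, c in zip(rows, cols):
    print(f"weight {w} (importance {importance[w]:.1f}) -> "
          f"cell {c} ({cell_voltage[c]:.2f} V)")
```

Because the cost is a product of importance and voltage, the optimal assignment pairs the most important weights with the lowest-voltage cells, which is the intuition behind extending the interval between reprogrammings.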
Isogeometric Analysis generalizes classical finite element analysis and intends to integrate it with the field of Computer-Aided Design. A central problem in achieving this objective is the reconstruction of analysis-suitable models from Computer-Aided Design models, which is in general a non-trivial and time-consuming task. In this article, we present a novel spline construction that enables model reconstruction as well as simulation of high-order PDEs on the reconstructed models. The proposed almost-$C^1$ splines are biquadratic splines on fully unstructured quadrilateral meshes (without restrictions on the placement or number of extraordinary vertices). They are $C^1$ smooth almost everywhere, that is, at all vertices and across most edges, and in addition almost (i.e., approximately) $C^1$ smooth across all other edges. Thus, the splines form $H^2$-nonconforming analysis-suitable discretization spaces. This is the lowest-degree unstructured spline construction that can be used to solve fourth-order problems. The associated spline basis is non-singular and has several B-spline-like properties (e.g., partition of unity, non-negativity, local support). The almost-$C^1$ splines are described in an explicit B\'ezier-extraction-based framework that can be easily implemented. Numerical tests suggest that the basis is well-conditioned and exhibits optimal approximation behavior.
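A minimal sketch of element-level evaluation in a Bézier-extraction setting: each quadrilateral element carries a 3x3 matrix of local Bézier coefficients, obtained from the global spline coefficients via a per-element extraction operator. The extraction operators that define the almost-$C^1$ basis are the paper's contribution and are not reproduced here; the matrix below is an arbitrary example.

```python
import numpy as np

def bernstein2(t):
    """Quadratic Bernstein basis at parameter t in [0, 1]."""
    return np.array([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])

def eval_biquadratic(C, u, v):
    """Evaluate one biquadratic Bezier element with 3x3 coefficients C.

    In a Bezier-extraction framework, C = E @ P maps global spline
    coefficients P to local Bezier coefficients via an element-wise
    extraction operator E.
    """
    return bernstein2(u) @ C @ bernstein2(v)

# Example: evaluate an arbitrary patch at the element midpoint.
C = np.array([[0.0, 0.5, 1.0],
              [0.5, 1.0, 1.5],
              [1.0, 1.5, 2.0]])
print(eval_biquadratic(C, 0.5, 0.5))  # -> 1.0
```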
This paper identifies the factors that have an impact on mobile recommender systems. Recommender systems have become a widely used technology in online applications that face an information overload problem. Numerous applications, such as e-Commerce, video platforms, and social networks, provide personalized recommendations to their users, which has improved the user experience and vendor revenues. The development of recommender systems has focused mostly on proposing new algorithms that provide more accurate recommendations. However, the use of mobile devices and the rapid growth of the internet and networking infrastructure have made mobile recommender systems a necessity. This work focuses on identifying the links between web and mobile recommender systems, describing how recommendations in mobile environments can be improved, and providing solid future directions toward a more integrated mobile recommendation domain.
TraQuad is an autonomous tracking quadcopter capable of tracking any moving (or static) object, such as cars, humans, other drones, or any other object, on the go. This article describes the applications and advantages of TraQuad and the reduction in cost (to about $250) achieved so far using its hardware and software capabilities and our custom algorithms wherever needed. This description is backed by data and research analyses drawn from existing information or conducted on our own where necessary. We also describe the development of a completely autonomous (even GPS is optional) low-cost drone that can act as a major platform for further developments in automation, transportation, reconnaissance, and more. We describe our ROS Gazebo simulator and our STATUS algorithms, which form the core of our general-purpose object-tracking drone.
Computer vision technologies are very attractive for practical applications running on embedded systems. For such applications, it is desirable for the deployed algorithms to run at high speed and require no offline training. To develop a single-target tracking algorithm with these properties, we propose an ensemble of kernelized correlation filters (KCF), which we call EnKCF. A committee of KCFs is specifically designed to address the variations in scale and translation of moving objects. To guarantee high-speed run-time performance, we deploy each KCF in turn, instead of applying multiple KCFs to each frame. To minimize potential drift during transitions between individual KCFs, we developed a particle filter. Experimental results showed that our method achieves, on average, 70.10% precision at 20 pixels and a 53.00% success rate on the OTB100 data, and 54.50% and 40.2%, respectively, on the UAV123 data. Our method outperforms other high-speed trackers by over 5% in precision at 20 pixels and by 10-20% in AUC, on average. Moreover, our implementation ran at 340 fps on OTB100 and at 416 fps on UAV123, which is faster than DCF (292 fps) on OTB100 and KCF (292 fps) on UAV123. To increase the flexibility of the proposed EnKCF for running on various platforms, we also explored different levels of deep convolutional features.
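The round-robin deployment of the committee can be sketched as follows; the per-filter interface (`track(frame, box)`) is hypothetical, and the particle-filter smoothing between hand-offs is only indicated in a comment.

```python
from itertools import cycle

class EnKCF:
    """Round-robin committee of correlation filters (sketch).

    Each tracker in the committee handles a different aspect of the
    target's motion (e.g. large-area translation, small-area translation,
    scale); exactly one filter runs per frame to keep the tracker fast.
    """

    def __init__(self, trackers):
        self.trackers = trackers
        self.schedule = cycle(range(len(trackers)))

    def update(self, frame, box):
        idx = next(self.schedule)        # which KCF runs on this frame
        box = self.trackers[idx].track(frame, box)
        # a particle filter would smooth the hand-off between filters here
        return box
```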
We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
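The key architectural idea, a critic that conditions on all agents' observations and actions during training while each actor remains decentralized, can be sketched as follows; dimensions and hidden sizes are illustrative, and the actor networks and training loop are omitted.

```python
import torch
import torch.nn as nn

class CentralizedCritic(nn.Module):
    """Q(o_1..o_N, a_1..a_N): sees every agent's observation and action
    during training, which removes the non-stationarity each agent would
    otherwise face; at execution time only the decentralized actors run."""

    def __init__(self, obs_dim, act_dim, n_agents, hidden=128):
        super().__init__()
        in_dim = n_agents * (obs_dim + act_dim)
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs_all, act_all):
        # obs_all: (batch, n_agents, obs_dim); act_all: (batch, n_agents, act_dim)
        x = torch.cat([obs_all.flatten(1), act_all.flatten(1)], dim=-1)
        return self.net(x)

# Example shapes for 3 agents with 8-dim observations and 2-dim actions.
critic = CentralizedCritic(obs_dim=8, act_dim=2, n_agents=3)
q = critic(torch.randn(4, 3, 8), torch.randn(4, 3, 2))  # -> (4, 1)
```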