As of December 2020, the COVID-19 pandemic has infected over 75 million people, making it one of the deadliest pandemics in modern history. This study develops a novel compartmental epidemiological model specific to the SARS-CoV-2 virus and analyzes the effects of common preventative measures such as testing, quarantine, social distancing, and vaccination. By accounting for the most prevalent interventions enacted to minimize the spread of the virus, the model establishes a foundation for future mathematical modeling of COVID-19 and other modern pandemics. Specifically, the model expands on the classic SIR model and introduces separate compartments for individuals who are in the incubation period, asymptomatic, tested positive, quarantined, vaccinated, or deceased. It also accounts for variable infection, testing, and death rates. I first analyze the outbreak in Santa Clara County, California, and later generalize the findings. The results show that, although all preventative measures reduce the spread of COVID-19, quarantine and social distancing mandates reduce the infection rate directly and are therefore the most effective policies, followed by vaccine distribution and, finally, public testing. Thus, governments should concentrate resources on enforcing quarantine and social distancing policies. In addition, I show mathematically that the relatively high asymptomatic rate and long incubation period are driving factors in COVID-19's rapid spread.
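As a rough illustration of the kind of compartmental structure this abstract describes, the following is a minimal sketch of an SIR-style model extended with incubation (E), asymptomatic (A), quarantined (Q), vaccinated (V), and deceased (D) compartments. The flow structure and all parameter values here are hypothetical choices for illustration, not taken from the study itself.

```python
import numpy as np

def step(state, params, dt=0.1):
    # state: fractions of the population in S, E, A, I, Q, R, D, V
    S, E, A, I, Q, R, D, V = state
    beta, sigma, p_asym, gamma, tau, mu, nu = params
    new_inf = beta * S * (A + I)            # infections from asymptomatic + symptomatic
    dS = -new_inf - nu * S                  # susceptibles leave via infection or vaccination
    dE = new_inf - sigma * E                # incubation period of mean 1/sigma
    dA = p_asym * sigma * E - gamma * A     # asymptomatic branch
    dI = (1 - p_asym) * sigma * E - (gamma + tau + mu) * I
    dQ = tau * I - (gamma + mu) * Q         # tested positive and quarantined
    dR = gamma * (A + I + Q)                # recoveries
    dD = mu * (I + Q)                       # deaths
    dV = nu * S                             # vaccinations
    return state + dt * np.array([dS, dE, dA, dI, dQ, dR, dD, dV])

# hypothetical rates: infection, incubation, asymptomatic share,
# recovery, testing, death, vaccination
params = (0.4, 0.2, 0.4, 0.1, 0.05, 0.01, 0.002)
state = np.array([0.99, 0.01, 0, 0, 0, 0, 0, 0.0])
for _ in range(2000):                       # 200 time units of Euler steps
    state = step(state, params)
```

Because every outflow appears as an inflow elsewhere, the total population fraction is conserved, which is a useful sanity check on any model of this form.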
The Covid-19 pandemic has been a scourge upon humanity, claiming the lives of more than 5 million people worldwide. Although vaccines are being distributed worldwide, there is an apparent need for affordable screening techniques to serve parts of the world that do not have access to traditional medicine. Artificial intelligence can provide a solution using cough sounds as the primary screening mode. This paper presents multiple models that achieve respectable performance on the largest evaluation dataset currently presented in the academic literature. Moreover, we show that performance increases with training data size, underscoring the need for worldwide collection of data to help combat the Covid-19 pandemic through non-traditional means.
We consider a participatory budgeting problem in which each voter submits a proposal for how to divide a single divisible resource (such as money or time) among several possible alternatives (such as public projects or activities) and these proposals must be aggregated into a single aggregate division. Under $\ell_1$ preferences -- for which a voter's disutility is given by the $\ell_1$ distance between the aggregate division and the division he or she most prefers -- the social welfare-maximizing mechanism, which minimizes the average $\ell_1$ distance between the outcome and each voter's proposal, is incentive compatible (Goel et al. 2016). However, it fails to satisfy the natural fairness notion of proportionality, placing too much weight on majority preferences. Leveraging a connection between market prices and the generalized median rules of Moulin (1980), we introduce the independent markets mechanism, which is both incentive compatible and proportional. We unify the social welfare-maximizing mechanism and the independent markets mechanism by defining a broad class of moving phantom mechanisms that includes both. We show that every moving phantom mechanism is incentive compatible. Finally, we characterize the social welfare-maximizing mechanism as the unique Pareto-optimal mechanism in this class, suggesting an inherent tradeoff between Pareto optimality and proportionality.
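The moving phantom construction can be made concrete. The sketch below follows the general recipe of adding $n+1$ phantom votes that grow with a parameter $t$, taking the coordinate-wise median, and tuning $t$ by binary search until the outcome sums to 1. The specific phantom system shown for the independent-markets style ($f_k(t)=\min(1, kt)$) is our assumption for illustration, not a quotation of the paper's definitions.

```python
import numpy as np

def moving_phantom(votes, phantom, tol=1e-9):
    """votes: (n, d) array of proposed divisions, each row summing to 1.
    phantom(k, t, n): position of the k-th phantom voter (k = 0..n) at time t."""
    n, d = votes.shape

    def outcome(t):
        ph = np.array([phantom(k, t, n) for k in range(n + 1)])
        # coordinate-wise median of the n real votes plus n+1 phantoms
        return np.array([np.median(np.concatenate([votes[:, j], ph]))
                         for j in range(d)])

    lo, hi = 0.0, 1.0          # the outcome sum is nondecreasing in t
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if outcome(mid).sum() < 1.0:
            lo = mid
        else:
            hi = mid
    return outcome(hi)

# independent-markets-style phantoms (an illustrative assumption)
indep = lambda k, t, n: min(1.0, k * t)

res = moving_phantom(np.array([[0.7, 0.3], [0.6, 0.4], [0.2, 0.8]]), indep)
```

Note that when all voters agree, the phantoms cannot move the median away from the unanimous proposal, so the mechanism returns it unchanged.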
Current models of COVID-19 transmission predict infection from reported or assumed interactions. Here we leverage high-resolution observations of interaction to simulate infectious processes. Ultra-Wide Radio Frequency Identification (RFID) systems were employed to track the real-time physical movements and directional orientation of children and their teachers in 4 preschool classes over a total of 34 observations. An agent-based transmission model combined observed interaction patterns (individual distance and orientation) with CDC-published risk guidelines to estimate the transmission impact of an infected patient zero attending class on the proportion of overall infections, the average transmission rate, and the time lag to the appearance of symptomatic individuals. These metrics highlighted the prophylactic role of decreased classroom density and teacher vaccinations. Reduction of classroom density to half capacity was associated with an 18.2% drop in overall infection proportion while teacher vaccination receipt was associated with a 25.3% drop. Simulation results of classroom transmission dynamics may inform public policy in the face of COVID-19 and similar infectious threats.
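A toy version of such an agent-based transmission simulation is sketched below. The random-walk movement, the proximity radius, and the per-step transmission probability are hypothetical stand-ins for the observed RFID trajectories and CDC-derived risk parameters, and directional orientation is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_agents=20, steps=200, p_close=0.02, radius=1.8, room=8.0):
    """Proportion infected after `steps` time steps in a room of
    side `room` metres, starting from a single patient zero."""
    pos = rng.uniform(0, room, size=(n_agents, 2))
    infected = np.zeros(n_agents, dtype=bool)
    infected[0] = True                              # patient zero
    for _ in range(steps):
        pos = (pos + rng.normal(0, 0.3, size=pos.shape)).clip(0, room)
        # pairwise distances between all agents
        d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        near_infected = ((d < radius) & infected[None, :]).any(axis=1)
        exposed = near_infected & ~infected
        infected |= exposed & (rng.random(n_agents) < p_close)
    return infected.mean()

full = simulate(n_agents=20)    # full classroom
half = simulate(n_agents=10)    # half density
```

Comparing `full` and `half` across many random seeds is the kind of experiment that produces density-reduction estimates like the one reported above, though the numbers here are purely illustrative.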
Continuous determinantal point processes (DPPs) are a class of repulsive point processes on $\mathbb{R}^d$ with many statistical applications. Although an explicit expression of their density is known, it is too complicated to be used directly for maximum likelihood estimation. In the stationary case, an approximation using Fourier series has been suggested, but it is limited to rectangular observation windows and is not supported by theoretical results. In this contribution, we investigate a different way to approximate the likelihood by looking at its asymptotic behaviour as the observation window grows towards $\mathbb{R}^d$. This new approximation is not limited to rectangular windows, is faster to compute than the previous one, does not require any tuning parameter, and comes with theoretical justification. It moreover provides an explicit formula for estimating the asymptotic variance of the associated estimator. Its performance is assessed in a simulation study on standard parametric models on $\mathbb{R}^d$ and compares favourably to that of common alternative estimation methods for continuous DPPs.
The voter process is a classic stochastic process that models the invasion of a mutant trait $A$ (e.g., a new opinion, belief, legend, genetic mutation, magnetic spin) in a population of agents (e.g., people, genes, particles) who share a resident trait $B$, spread over the nodes of a graph. An agent may adopt the trait of one of its neighbors at any time, while the invasion bias $r\in(0,\infty)$ quantifies the stochastic preference towards ($r>1$) or against ($r<1$) adopting $A$ over $B$. Success is measured in terms of the fixation probability, i.e., the probability that eventually all agents have adopted the mutant trait $A$. In this paper we study the problem of fixation probability maximization under this model: given a budget $k$, find a set of $k$ agents to initiate the invasion that maximizes the fixation probability. We show that the problem is NP-hard for both $r>1$ and $r<1$, while the latter case is also inapproximable within any multiplicative factor. On the positive side, we show that when $r>1$, the optimization function is submodular and thus can be greedily approximated within a factor $1-1/e$. An experimental evaluation of some proposed heuristics corroborates our results.
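The positive result for $r>1$ can be sketched as standard greedy selection driven by a Monte Carlo estimate of the fixation probability. The discrete-time biased update below is a simplified stand-in for the voter process dynamics, and the graph, bias, and trial counts are illustrative assumptions.

```python
import random

def fixation_prob(graph, seeds, r=2.0, trials=200, seed=0):
    """Monte Carlo estimate of the probability that trait A, initiated
    on `seeds`, takes over the whole graph (adjacency-list dict)."""
    rng = random.Random(seed)
    nodes = list(graph)
    fixed = 0
    for _ in range(trials):
        has_A = {v: v in seeds for v in nodes}
        count = sum(has_A.values())
        while 0 < count < len(nodes):
            v = rng.choice(nodes)              # agent that may update
            u = rng.choice(graph[v])           # neighbour it copies from
            if has_A[u] and not has_A[v] and rng.random() < r / (r + 1):
                has_A[v] = True; count += 1    # biased towards adopting A
            elif has_A[v] and not has_A[u] and rng.random() < 1 / (r + 1):
                has_A[v] = False; count -= 1
        fixed += count == len(nodes)
    return fixed / trials

def greedy_seeds(graph, k, **kw):
    """Greedy (1 - 1/e)-approximation when the objective is submodular."""
    chosen = set()
    for _ in range(k):
        best = max((v for v in graph if v not in chosen),
                   key=lambda v: fixation_prob(graph, chosen | {v}, **kw))
        chosen.add(best)
    return chosen

K4 = {v: [u for u in range(4) if u != v] for v in range(4)}
p1 = fixation_prob(K4, {0})
```

On the complete graph every singleton seed is equivalent by symmetry, so the greedy choice is arbitrary there; the method only becomes interesting on heterogeneous graphs.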
Deep learning models have shown great potential for image-based diagnosis assisting clinical decision making. At the same time, an increasing number of reports raise concerns about the potential risk that machine learning could amplify existing health disparities due to human biases that are embedded in the training data. It is of great importance to carefully investigate the extent to which biases may be reproduced or even amplified if we wish to build fair artificial intelligence systems. Seyyed-Kalantari et al. advance this conversation by analysing the performance of a disease classifier across population subgroups. They raise performance disparities related to underdiagnosis as a point of concern; we identify areas from this analysis which we believe deserve additional attention. Specifically, we wish to highlight some theoretical and practical difficulties associated with assessing model fairness through testing on data drawn from the same biased distribution as the training data, especially when the sources and amount of biases are unknown.
Covid-19 has ravaged the entire world, and it may not be the last disease to do so. COMOKIT [3] is an agent-based spatial modeling tool for studying the effect of Covid-19 in a geographical area by creating heterogeneous synthetic agents and their behaviours. This paper presents a COMOKIT-based case study of the Gwalior region with respect to various intervention policies to curb the spread of the disease.
Group testing can help maintain a widespread testing program using fewer resources amid a pandemic. In group testing, we are given $n$ samples, one per individual. These samples are arranged into $m < n$ pooled samples, where each pool is obtained by mixing a subset of the $n$ individual samples. Infected individuals are then identified using a group testing algorithm. In this paper, we use side information (SI) collected from contact tracing (CT) within nonadaptive/single-stage group testing algorithms. We generate CT SI data by incorporating characteristics of disease spread between individuals. These data are fed into two signal and measurement models for group testing, and numerical results show that our algorithms provide improved sensitivity and specificity. We also show how to incorporate CT SI into the design of the pooling matrix. That said, our numerical results suggest that the utilization of SI in the pooling matrix design based on the minimization of a weighted coherence measure does not yield significant performance gains beyond the incorporation of SI in the group testing algorithm.
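To make the nonadaptive setting concrete, here is a minimal sketch of a random pooling design decoded with the classic COMP rule (declare negative anyone who appears in any negative pool). The design parameters are arbitrary, and the contact-tracing side information and the paper's specific signal and measurement models are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pooling_matrix(m, n, pools_per_sample=3):
    # each of the n individual samples is mixed into a few random pools
    A = np.zeros((m, n), dtype=int)
    for j in range(n):
        rows = rng.choice(m, size=pools_per_sample, replace=False)
        A[rows, j] = 1
    return A

def comp_decode(A, y):
    # COMP: anyone appearing in a negative pool is declared negative;
    # everyone else is flagged positive
    in_negative_pool = (A[~y] == 1).any(axis=0)
    return ~in_negative_pool

n, m = 60, 20
x = np.zeros(n, dtype=bool)
x[rng.choice(n, size=3, replace=False)] = True   # 3 infected individuals
A = random_pooling_matrix(m, n)
y = (A @ x) > 0        # a pool is positive iff it contains an infected sample
xhat = comp_decode(A, y)
```

COMP never produces false negatives under this noiseless model, so any errors are false positives; side information such as contact-tracing data can then be used to sharpen the decoding, as the paper investigates.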
There is a growing body of work that proposes methods for mitigating bias in machine learning systems. These methods typically rely on access to protected attributes such as race, gender, or age. However, this raises two significant challenges: (1) protected attributes may not be available or it may not be legal to use them, and (2) it is often desirable to simultaneously consider multiple protected attributes, as well as their intersections. In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name. This method leverages the societal biases that are encoded in word embeddings, eliminating the need for access to protected attributes. Crucially, it only requires access to individuals' names at training time and not at deployment time. We evaluate two variations of our proposed method using a large-scale dataset of online biographies. We find that both variations simultaneously reduce race and gender biases, with almost no reduction in the classifier's overall true positive rate.
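The name-embedding idea can be sketched as a covariance penalty added to the usual classification loss. The exact functional forms of the paper's two variations are not reproduced here, so treat this squared-covariance version as an illustrative assumption.

```python
import numpy as np

def correlation_penalty(p, name_emb):
    """Squared norm of the empirical covariance between predicted
    probabilities p (shape (n,)) and name embeddings (shape (n, d)).
    Adding this to the training loss discourages predictions that
    vary systematically with name embeddings."""
    pc = p - p.mean()                     # centred predictions
    ec = name_emb - name_emb.mean(axis=0) # centred embeddings
    cov = pc @ ec / len(p)                # d-dimensional covariance vector
    return float(cov @ cov)
```

Because the penalty depends only on names seen during training, the deployed classifier needs no access to names or protected attributes, matching the deployment-time property described above.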
In this paper, we study optimal convergence rates for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is strongly convex and smooth, strongly convex only, smooth only, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
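To make the dual approach explicit, the standard reformulation from the literature on optimal decentralized methods (a sketch in generic notation, not necessarily the paper's own) assigns each node a local copy $x_i$ and encodes consensus as an affine constraint through a gossip matrix $W$ whose kernel is the consensus subspace:

```latex
\min_{x_1,\dots,x_m} \sum_{i=1}^{m} f_i(x_i)
\quad \text{s.t.} \quad \sqrt{W}\,\mathbf{x} = 0
\qquad\Longrightarrow\qquad
\max_{\lambda}\; q(\lambda) \triangleq -F^{*}\!\big(\sqrt{W}\,\lambda\big),
\qquad
\nabla q(\lambda) = -\sqrt{W}\,\arg\max_{\mathbf{x}}
  \big\{ \langle \sqrt{W}\,\lambda, \mathbf{x}\rangle - F(\mathbf{x}) \big\}.
```

Evaluating $\nabla q$ requires only local conjugate computations at each node plus one multiplication by $\sqrt{W}$, i.e., one communication round, which is why accelerated gradient ascent on $q$ runs distributedly and inherits the spectral-gap-dependent cost mentioned above.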