This paper proposes a mathematical approach for robust control of a nanoscale drug delivery system in the treatment of atherosclerosis. First, a new nonlinear lumped model is introduced for mass transport in the arterial wall, and its accuracy is evaluated against the original distributed-parameter model. Then, based on the notion of sliding-mode control, an abstract model is designed for a smart drug delivery nanoparticle. In contrast to competing nanorobotics strategies, the proposed nanoparticles carry simpler hardware for penetrating the inner arterial wall, making them more technologically feasible. Finally, using this lumped model and nonlinear control theory, the overall system's stability is mathematically proven in the presence of uncertainty. Simulation results on a well-known model, and comparisons with earlier benchmark approaches, reveal that even when the LDL concentration in the lumen is high, the proposed nanoscale drug delivery system reduces drug consumption by as much as 16% and the LDL level in the endothelium, intima, internal elastic layer (IEL), and media layers of an unhealthy arterial wall by as much as 14.6%, 50.5%, 51.8%, and 64.4%, respectively.
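Since the abstract only names sliding-mode control, a minimal sketch may help fix ideas. This is purely illustrative and not the paper's controller; the surface gain lam, switching gain k, and boundary-layer width eps are all assumed here.

    import numpy as np

    # Illustrative first-order sliding-mode control law (not the paper's design).
    # Drives the tracking error e = x - x_ref onto the sliding surface s = 0 and
    # holds it there despite bounded model uncertainty.
    def smc_control(x, x_dot, x_ref, x_ref_dot, lam=1.0, k=2.0, eps=0.05):
        e = x - x_ref                    # tracking error
        e_dot = x_dot - x_ref_dot
        s = e_dot + lam * e              # sliding surface
        return -k * np.tanh(s / eps)     # smoothed switching term

The tanh boundary layer is a standard remedy for chattering; replacing it with a hard sign(s) term recovers the classical discontinuous law.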
`Tracking' is the collection of data about an individual's activity across multiple distinct contexts and the retention, use, or sharing of data derived from that activity outside the context in which it occurred. This paper aims to introduce tracking on the web, smartphones, and the Internet of Things to an audience with little or no previous knowledge. It covers these topics primarily from the perspective of computer science and human-computer interaction, but also includes relevant law and policy aspects. Rather than a systematic literature review, it aims to provide an over-arching narrative spanning this large research space. Section 1 introduces the concept of tracking. Section 2 provides a short history of the major developments of tracking on the web. Section 3 presents research covering the detection, measurement, and analysis of web tracking technologies. Section 4 delves into countermeasures against web tracking and mechanisms that have been proposed to allow users to control and limit tracking, as well as studies of end-user perspectives on tracking. Section 5 focuses on tracking on `smart' devices, including smartphones and the Internet of Things. Section 6 covers emerging issues affecting the future of tracking across these different platforms.
We consider the problem of controlling a stochastic linear system with quadratic costs, when its system parameters are not known to the agent -- called the adaptive LQG control problem. We re-examine an approach called "Reward-Biased Maximum Likelihood Estimate" (RBMLE) that was proposed more than forty years ago, and which predates the "Upper Confidence Bound" (UCB) method as well as the definition of "regret". It simply added a term favoring parameters with larger rewards to the estimation criterion. We propose an augmented approach that combines the penalty of the RBMLE method with the constraint of the UCB method, uniting the two approaches to optimization in the face of uncertainty. We first establish that this method theoretically retains $\mathcal{O}(\sqrt{T})$ regret, the best bound known so far. We show through a comprehensive simulation study that this augmented RBMLE method considerably outperforms the UCB and Thompson sampling approaches, with a regret that is typically less than 50\% of the better of their regrets. The simulation study includes all examples from earlier papers as well as a large collection of randomly generated systems.
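Schematically, and with notation assumed here rather than taken from the paper, the reward-biased estimate tilts maximum likelihood toward optimistic parameters:
\[
\hat{\theta}_t \in \arg\max_{\theta} \left\{ \log L_t(\theta) + \alpha(t)\, J^*(\theta) \right\},
\]
where $\log L_t(\theta)$ is the log-likelihood of the data observed up to time $t$, $J^*(\theta)$ is the optimal average reward attainable if $\theta$ were the true parameter, and $\alpha(t)$ is a slowly growing bias weight. The augmented method described above additionally restricts the maximization to a UCB-style confidence set around the unbiased estimate.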
Due to the COVID-19 pandemic, smartphone-based proximity tracing systems have become of great interest. Many of these systems use Bluetooth Low Energy (BLE) signals to estimate the distance between two persons. The quality of this method depends on many factors and, therefore, does not always deliver accurate results. In this paper, we present a multi-channel approach to improve proximity estimation, along with a novel, publicly available dataset that contains matched IEEE 802.11 (2.4 GHz and 5 GHz) and BLE signal-strength data measured in four different environments. We have developed and evaluated a combined classification model based on BLE and IEEE 802.11 signals. Our approach significantly improves distance estimation and, consequently, contact tracing accuracy. We achieve good results with our approach in everyday public transport scenarios. However, in our implementation based on IEEE 802.11 probe requests, we also encountered privacy problems and limitations due to the consistency and interval at which such probes are sent. We discuss these limitations and sketch how our approach could be improved to make it suitable for real-world deployment.
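As a rough sketch of what a combined classifier over the two radio channels could look like (the feature set, window length, and model choice are assumptions, not the paper's exact pipeline), one can summarize per-channel received signal strength over an encounter window and feed the fused features to an off-the-shelf classifier:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Fuse BLE and IEEE 802.11 RSSI statistics into one feature vector
    # (illustrative; not the paper's exact feature set or model).
    def make_features(ble_rssi, wifi24_rssi, wifi5_rssi):
        chans = [np.asarray(c) for c in (ble_rssi, wifi24_rssi, wifi5_rssi)]
        return np.array([f(c) for c in chans
                         for f in (np.mean, np.std, np.min, np.max)])

    rng = np.random.default_rng(0)
    # Placeholder training data: one row per encounter window,
    # label 1 for a close contact, 0 otherwise.
    X = np.stack([make_features(rng.normal(-70, 5, 50),
                                rng.normal(-60, 6, 50),
                                rng.normal(-65, 6, 50)) for _ in range(200)])
    y = rng.integers(0, 2, 200)
    clf = RandomForestClassifier(n_estimators=100).fit(X, y)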
Inductive Logic Programming (ILP) is a form of machine learning (ML) which, in contrast to many other state-of-the-art ML methods, typically produces highly interpretable and reusable models. However, many ILP systems lack the ability to learn naturally from noisy or partially misclassified training data. We introduce the relaxed learning from failures approach to ILP, a noise-handling modification of the previously introduced learning from failures (LFF) approach, which is incapable of handling noise. We additionally introduce the novel Noisy Popper ILP system, which implements this relaxed approach and is a modification of the existing Popper system. Like Popper, Noisy Popper follows a generate-test-constrain loop to search its hypothesis space, wherein failed hypotheses are used to construct hypothesis constraints. These constraints are used to prune the hypothesis space, making the hypothesis search more efficient. However, in the relaxed setting, constraints are generated more laxly, so as to prevent noisy training data from inducing hypothesis constraints that prune optimal hypotheses. Constraints unique to the relaxed setting are generated via hypothesis comparison. Additional constraints are generated by weighing the accuracy of hypotheses against their size, applying the minimum description length principle to avoid overfitting. We support this new setting through theoretical proofs as well as experimental results, which suggest that Noisy Popper improves the noise-handling capabilities of Popper, but at the cost of overall runtime efficiency.
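To make the accuracy-versus-size trade-off concrete, here is a minimal MDL-style scoring sketch (the scoring form and the size weight c are assumptions for illustration, not Noisy Popper's actual implementation):

    # Minimal MDL-style hypothesis score (illustrative; c is an assumed weight).
    # Total description length = misclassified examples (exceptions) + rule size.
    def mdl_score(num_errors, size, c=1.0):
        return num_errors + c * size

    # Keep the hypothesis with the shorter total description; a larger
    # hypothesis survives only if its accuracy gain pays for its extra size.
    def prefer(h1, h2):
        return min(h1, h2, key=lambda h: mdl_score(h["errors"], h["size"]))

Under a score like this, a slightly less accurate but much smaller hypothesis can be preferred, which is exactly the overfitting guard the abstract describes.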
Exploration-exploitation is a powerful and practical tool in multi-agent learning (MAL); however, its effects are far from understood. To make progress in this direction, we study a smooth analogue of Q-learning. We start by showing that our learning model has strong theoretical justification as an optimal model for studying exploration-exploitation. Specifically, we prove that smooth Q-learning has bounded regret in arbitrary games for a cost model that explicitly captures the balance between game and exploration costs, and that it always converges to the set of quantal-response equilibria (QRE), the standard solution concept for games under bounded rationality, in weighted potential games with heterogeneous learning agents. Turning to our main task, we then measure the effect of exploration on collective system performance. We characterize the geometry of the QRE surface in low-dimensional MAL systems and link our findings with catastrophe (bifurcation) theory. In particular, as the exploration hyperparameter evolves over time, the system undergoes phase transitions where the number and stability of equilibria can change radically given an infinitesimal change to the exploration parameter. Based on this, we provide a formal theoretical treatment of how tuning the exploration parameter can provably lead to equilibrium selection, with both positive and negative (and potentially unbounded) effects on system performance.
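A minimal sketch of the smooth (Boltzmann) Q-learning dynamic under study, for a single agent (the temperature T, step size, and payoff interface are assumptions of this sketch, not the paper's exact formulation):

    import numpy as np

    # Boltzmann (softmax) policy; the temperature T is the exploration parameter.
    def softmax_policy(Q, T):
        z = np.exp((Q - Q.max()) / T)   # numerically stable softmax
        return z / z.sum()

    def smooth_q_step(Q, payoff, T=0.5, lr=0.1, rng=np.random.default_rng()):
        p = softmax_policy(Q, T)
        a = rng.choice(len(Q), p=p)         # sample from the smoothed policy
        Q[a] += lr * (payoff(a) - Q[a])     # value update for the played action
        return Q

As T grows, play approaches uniform exploration; as T shrinks, it approaches best response, which is the knob whose variation produces the bifurcations the abstract describes.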
In this monograph, I introduce the basic concepts of Online Learning through a modern view of Online Convex Optimization. Here, online learning refers to the framework of regret minimization under worst-case assumptions. I present first-order and second-order algorithms for online learning with convex losses, in Euclidean and non-Euclidean settings. All the algorithms are clearly presented as instantiations of Online Mirror Descent or Follow-The-Regularized-Leader and their variants. Particular attention is given to the issue of tuning the parameters of the algorithms and to learning in unbounded domains, through adaptive and parameter-free online learning algorithms. Non-convex losses are dealt with through convex surrogate losses and through randomization. The bandit setting is also briefly discussed, touching on the problems of adversarial and stochastic multi-armed bandits. These notes do not require prior knowledge of convex analysis, and all the required mathematical tools are rigorously explained. Moreover, all the proofs have been carefully chosen to be as simple and as short as possible.
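As a minimal illustration of the framework (not the monograph's own code), online projected gradient descent is the Euclidean instantiation of Online Mirror Descent; the step size eta and feasible-set radius R here are assumed constants:

    import numpy as np

    # Online projected gradient descent on the L2 ball of radius R:
    # the Euclidean special case of Online Mirror Descent.
    def ogd(grads, d, R=1.0, eta=0.1):
        x = np.zeros(d)
        for g in grads:                 # g(x): gradient of the t-th loss at x
            x = x - eta * g(x)
            norm = np.linalg.norm(x)
            if norm > R:
                x *= R / norm           # project back onto the feasible set
            yield x

Tuning eta optimally requires knowledge of the horizon and gradient norms, which is precisely the issue the adaptive and parameter-free algorithms in the monograph address.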
Breast cancer remains a global challenge, causing over 1 million deaths in 2018. To achieve earlier detection, screening x-ray mammography is recommended by health organizations worldwide and has been estimated to decrease breast cancer mortality by 20-40%. Nevertheless, significant false positive and false negative rates, together with high interpretation costs, leave opportunities for improving quality and access. To address these limitations, there has been much recent interest in applying deep learning to mammography; however, obtaining large amounts of annotated data poses a challenge for training deep learning models for this purpose, as does ensuring generalization beyond the populations represented in the training dataset. Here, we present an annotation-efficient deep learning approach that 1) achieves state-of-the-art performance in mammogram classification, 2) successfully extends to digital breast tomosynthesis (DBT; "3D mammography"), 3) detects cancers in clinically negative prior mammograms of cancer patients, 4) generalizes well to a population with low screening rates, and 5) outperforms five out of five full-time breast imaging specialists, improving absolute sensitivity by an average of 14%. Our results demonstrate the promise of software that can improve the accuracy of, and access to, screening mammography worldwide.
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and data minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
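To make the orchestration pattern concrete, here is a minimal sketch of one round of federated averaging (FedAvg, a canonical FL algorithm rather than anything specific to this paper); local_update is a hypothetical client routine that trains on private data and returns its model weights and example count:

    import numpy as np

    # One server round of federated averaging (illustrative sketch).
    # Raw training data never leaves the clients; only weights are sent back.
    def fedavg_round(global_w, client_data, local_update):
        results = [local_update(global_w, d) for d in client_data]
        weights, sizes = zip(*results)
        total = float(sum(sizes))
        # Weight each client's update by its share of the training examples.
        return sum(w * (n / total) for w, n in zip(weights, sizes))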
The availability of large microarray datasets has led to a growing interest in biclustering methods over the past decade. Several algorithms have been proposed to identify subsets of genes and conditions according to different similarity measures and under varying constraints. In this paper we focus on the exclusive row biclustering problem for gene expression datasets, in which each row can only be a member of a single bicluster while columns can participate in multiple ones. This type of biclustering may be adequate, for example, for clustering groups of cancer patients, where each patient (row) is expected to carry only a single type of cancer, while each cancer type is associated with multiple (and possibly overlapping) genes (columns). We present a novel method to identify these exclusive row biclusters through a combination of existing biclustering algorithms and combinatorial auction techniques. We devise an approach for tuning our algorithm's threshold based on a comparison to a null model, in the spirit of the Gap statistic approach. We demonstrate our approach on both synthetic and real-world gene expression data and show its power in identifying large-span, non-overlapping row submatrices while accounting for their unique nature. The Gap statistic approach succeeds in identifying appropriate thresholds in all our examples.
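A minimal sketch of Gap-statistic-style threshold selection (the quality function and the column-permutation null model here are placeholders, not the paper's exact procedure): pick the threshold at which the observed biclustering quality most exceeds its expectation under data with the row structure destroyed.

    import numpy as np

    # Gap-statistic-style threshold tuning (illustrative; `quality` scores the
    # biclustering produced on matrix X at candidate threshold t).
    def pick_threshold(X, thresholds, quality, n_null=20, seed=0):
        rng = np.random.default_rng(seed)
        gaps = []
        for t in thresholds:
            obs = quality(X, t)
            # Null model: permute entries within each column independently.
            null = [quality(rng.permuted(X, axis=0), t) for _ in range(n_null)]
            gaps.append(obs - np.mean(null))
        return thresholds[int(np.argmax(gaps))]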
Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations can cause proficient but narrowly learned policies to fail at test time. In this work, we propose to learn how to quickly and effectively adapt online to new situations as well as to perturbations. To enable sample-efficient meta-learning, we consider learning online adaptation in the context of model-based reinforcement learning. Our approach trains a global model such that, when combined with recent data, the model can be rapidly adapted to the local context. Our experiments demonstrate that our approach can enable simulated agents to adapt their behavior online to novel terrains, to a crippled leg, and in highly dynamic environments.
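A minimal sketch of the online-adaptation step (the loss interface, step size, and number of steps are assumptions of this sketch, not the paper's meta-training objective): take a few gradient steps on the global dynamics model using only the most recent transitions.

    import numpy as np

    # Adapt a global dynamics model to the local context using recent data
    # (illustrative; `loss_grad` is a hypothetical function returning the
    # gradient of the prediction error on a batch of recent transitions).
    def adapt_online(global_params, recent_batch, loss_grad, lr=0.01, steps=5):
        params = np.array(global_params, copy=True)
        for _ in range(steps):            # a few gradient steps on recent data
            params -= lr * loss_grad(params, recent_batch)
        return params                     # locally adapted model parameters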