Diagnostic tests that can detect pre-clinical or sub-clinical infection are among the most powerful tools in our armoury for controlling infectious diseases. Considerable effort has therefore been devoted to improving diagnostic testing for human, plant and animal diseases, including strategies for targeting the use of diagnostic tests towards individuals who are more likely to be infected. Here, we follow other recent proposals to further refine this concept, using machine learning to assess the situational risk under which a diagnostic test is applied and thereby augment its interpretation. We develop this approach to predict the occurrence of breakdowns of cattle herds due to bovine tuberculosis, exploiting the availability of exceptionally detailed testing records. We show that, without compromising test specificity, test sensitivity can be improved so that the proportion of infected herds detected by the skin test rises by over 16 percentage points. While many risk factors are associated with an increased risk of becoming infected, of particular note are several factors suggesting that in some herds there is a higher risk of infection going undetected, including effects associated with the veterinary practice conducting the test and the number of livestock moved off the herd.
Contraction coefficients give a quantitative strengthening of the data processing inequality. As such, they have many natural applications whenever closer analysis of information processing is required. However, it is often challenging to calculate these coefficients. As a remedy, we discuss a quantum generalization of Doeblin coefficients. These give an efficiently computable upper bound on many contraction coefficients. We prove several properties and discuss generalizations and applications. In particular, we give additional, stronger bounds: one specifically for PPT channels and one for general channels based on a constraint relaxation. Additionally, we introduce reverse Doeblin coefficients that bound certain expansion coefficients.
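For orientation, the classical (commutative) Doeblin coefficient of a stochastic kernel $W$, and the standard way it upper-bounds the total-variation contraction coefficient, can be written as follows; this is textbook background, not the quantum generalization studied in the paper:
\[
\alpha(W) \;=\; \sum_{y}\min_{x} W(y\mid x), \qquad \eta_{\mathrm{TV}}(W) \;\le\; 1-\alpha(W).
\]
The quantity $\alpha(W)$ requires only a minimum and a sum over the kernel entries, which illustrates why Doeblin-type quantities are attractive as efficiently computable surrogates for contraction coefficients.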
Latent variable models serve as powerful tools to infer underlying dynamics from observed neural activity. However, due to the absence of ground truth data, prediction benchmarks are often employed as proxies. In this study, we reveal the limitations of the widely used 'co-smoothing' prediction framework and propose an improved few-shot prediction approach that encourages more accurate latent dynamics. Utilizing a student-teacher setup with Hidden Markov Models, we demonstrate that the space of models with high co-smoothing scores can encompass models with arbitrary extraneous dynamics in their latent representations. To address this, we introduce a secondary metric -- a few-shot version of co-smoothing. This involves performing regression from the latent variables to held-out channels in the data using fewer trials. Our results indicate that, among models with near-optimal co-smoothing, those with extraneous dynamics underperform in few-shot co-smoothing compared with 'minimal' models devoid of such dynamics. We also provide analytical insights into the origin of this phenomenon. We further validate our findings on real neural data using two state-of-the-art methods: LFADS and STNDT. In the absence of ground truth, we suggest a proxy measure to quantify extraneous dynamics. By cross-decoding the latent variables of all model pairs with high co-smoothing, we identify models with minimal extraneous dynamics. We find a correlation between few-shot co-smoothing performance and this new measure. In summary, we present a novel prediction metric designed to yield latent variables that more accurately reflect the ground truth, offering a significant improvement for latent dynamics inference.
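As a concrete illustration of the few-shot readout described above, here is a minimal sketch under simplifying assumptions: latents have already been inferred per trial, held-out channels are treated as continuous targets, and ridge regression with an $R^2$ score stands in for whatever readout and likelihood the evaluated models actually use.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def few_shot_cosmoothing(latents, heldout, k_shot, seed=0):
    """latents: (trials, time, latent_dim); heldout: (trials, time, channels)."""
    rng = np.random.default_rng(seed)
    n_trials = latents.shape[0]
    fit_idx = rng.choice(n_trials, size=k_shot, replace=False)
    test_idx = np.setdiff1d(np.arange(n_trials), fit_idx)

    flat = lambda a, idx: a[idx].reshape(-1, a.shape[-1])
    # Fit the latent-to-held-out-channel readout on only k_shot trials ...
    reg = Ridge(alpha=1.0).fit(flat(latents, fit_idx), flat(heldout, fit_idx))
    # ... and score it on the remaining trials.
    return r2_score(flat(heldout, test_idx), reg.predict(flat(latents, test_idx)))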
Human-in-the-loop reinforcement learning (HRL) allows agents to be trained through various interfaces, even by non-expert humans. Recently, preference-based methods (PBRL), in which the human gives a preference over two trajectories, have increased in popularity since they allow training in domains where more direct feedback is hard to formulate. However, current PBRL methods have limitations and do not provide humans with an expressive interface for giving feedback. With this work, we propose a new preference-based learning method that provides humans with a more expressive interface for stating their preference over trajectories together with a factual explanation (or annotation) of why they hold this preference. These explanations allow the human to indicate which parts of the trajectory are most relevant to the preference, and can be expressed over individual trajectory steps. We evaluate our method in various simulations using a simulated human oracle (with realistic restrictions), and our results show that our extended feedback can improve the speed of learning. Code & data: github.com/under-rewiev
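One way to picture how step-level annotations could enter a preference objective is the following sketch (illustrative only, not the paper's exact formulation; `reward_net` and the annotation weights are hypothetical names): a standard Bradley-Terry preference loss in which steps the human marked as relevant are up-weighted when summing the learned reward over each trajectory.

import torch

def annotated_preference_loss(reward_net, obs_a, obs_b, weights_a, weights_b, pref):
    """obs_*: (T, obs_dim) trajectory observations; weights_*: (T,) annotation
    weights (>= 1 on steps the human marked as relevant); pref: 1.0 if
    trajectory A is preferred, 0.0 if trajectory B is preferred."""
    ret_a = (weights_a * reward_net(obs_a).squeeze(-1)).sum()
    ret_b = (weights_b * reward_net(obs_b).squeeze(-1)).sum()
    logits = torch.stack([ret_a, ret_b])
    target = torch.tensor([pref, 1.0 - pref])
    # Bradley-Terry cross-entropy over the two annotated returns.
    return -(target * torch.log_softmax(logits, dim=0)).sum()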
Machine-learned language models have transformed everyday life: they steer us when we study, drive, or manage money. They have the potential to transform our civilization. But they hallucinate. Their realities are virtual. This note provides a high-level overview of language models and outlines a low-level model of learning machines. It turns out that, after they become capable of recognizing hallucinations and dreaming safely, as humans tend to be, language-learning machines proceed to generate broader systems of false beliefs and self-confirming theories, as humans tend to do.
In oncology, phase II or multiple expansion cohort trials are crucial to clinical development plans, because they help identify potent agents with sufficient activity to continue development and confirm the proof of concept. Typically, these clinical trials are single-arm trials with short-term treatment efficacy as the primary endpoint. Despite the development of several well-designed methodologies, a practical impediment remains: the endpoints may not be observed quickly enough for adaptive go/no-go decisions to be made in a timely manner at each interim monitoring. Specifically, the Response Evaluation Criteria in Solid Tumors (RECIST) guideline defines a confirmed response and requires it in non-randomized trials where response is the primary endpoint. However, obtaining the confirmed outcome from all participants enrolled by the time of interim monitoring may be time-consuming, as non-responders should be followed up until the disease progresses. Thus, this study proposed an approach to accelerate the decision-making process that incorporates the unconfirmed outcome by discounting its contribution to the decision-making framework using the generalized Bayes' theorem. Further, the behavior of the proposed approach was evaluated through a simple simulation study. The results demonstrated that the proposed approach made appropriate interim go/no-go decisions.
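One common way to formalize such discounting is a power-likelihood form of the generalized Bayes update, sketched here with illustrative notation (the discount weight $w$ and decision threshold $\gamma$ are placeholders, not the paper's calibrated values):
\[
\pi(\theta \mid y_{\mathrm{conf}}, y_{\mathrm{unconf}}) \;\propto\; L(\theta; y_{\mathrm{conf}})\, L(\theta; y_{\mathrm{unconf}})^{w}\,\pi(\theta), \qquad 0 \le w \le 1,
\]
with an interim go decision taken when $\Pr(\theta > \theta_0 \mid \text{data}) > \gamma$ for a target response rate $\theta_0$; setting $w = 0$ ignores unconfirmed responses entirely, while $w = 1$ treats them as if confirmed.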
Pretrial risk assessment tools are used in jurisdictions across the country to assess the likelihood of "pretrial failure," the event in which defendants either fail to appear for court or reoffend. Judicial officers, in turn, use these assessments to determine whether to release or detain defendants ahead of trial. While algorithmic risk assessment tools were designed to predict pretrial failure with greater accuracy relative to judges, there is still concern that both risk assessment recommendations and pretrial decisions are biased against minority groups. In this paper, we develop methods to investigate the association between risk factors and pretrial failure, while simultaneously estimating misclassification rates of pretrial risk assessments and of judicial decisions as a function of defendant race. This approach adds to a growing literature that makes use of outcome misclassification methods to answer questions about fairness in pretrial decision-making. We present a detailed simulation study for our proposed methodology and apply these methods to data from the Virginia Department of Criminal Justice Services. We estimate that the Virginia Pretrial Risk Assessment Instrument (VPRAI) has near-perfect specificity, but its sensitivity differs by defendant race. Judicial decisions also display evidence of bias; we estimate wrongful detention rates of 39.7% and 51.4% among white and Black defendants, respectively.
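For readers unfamiliar with outcome misclassification, the basic identity underlying such models can be sketched as follows (a generic form with illustrative notation, not necessarily the paper's exact parameterization). Letting $Y$ denote true pretrial failure, $A$ the assessment or decision being modeled, $X$ the risk factors, and $r$ the defendant's race,
\[
\Pr(A = 1 \mid X, r) \;=\; \mathrm{Se}_r\,\Pr(Y = 1 \mid X) \;+\; (1-\mathrm{Sp}_r)\,\Pr(Y = 0 \mid X),
\]
so that race-specific sensitivity $\mathrm{Se}_r$ and specificity $\mathrm{Sp}_r$ can be estimated jointly with the regression of $Y$ on $X$.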
For a set of robots (or agents) moving in a graph, two properties are highly desirable: confidentiality (i.e., a message between two agents must not pass through any intermediate agent) and efficiency (i.e., messages are delivered through shortest paths). These properties can be obtained if the \textsc{Geodesic Mutual Visibility} (GMV, for short) problem is solved: oblivious robots move along the edges of the graph, without collisions, to occupy some vertices that guarantee they become pairwise geodesic mutually visible. This means there is a shortest path (i.e., a ``geodesic'') between each pair of robots along which no other robots reside. In this work, we optimally solve GMV on finite hexagonal grids $G_k$. This, in turn, requires first solving a combinatorial problem on graphs, i.e., determining the maximum number of mutually visible vertices in $G_k$.
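The GMV condition itself is straightforward to verify for a given placement; below is a minimal sketch (assuming the grid is available as a networkx graph and the robots are a set of its vertices; the function name is illustrative).

import itertools
import networkx as nx

def is_gmv(graph, robots):
    """True if every pair of robots has SOME shortest path whose interior
    vertices contain no other robot."""
    robots = set(robots)
    for u, v in itertools.combinations(robots, 2):
        if not any(
            not (set(path[1:-1]) & robots)
            for path in nx.all_shortest_paths(graph, u, v)
        ):
            return False
    return True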
Infrastructure systems play a critical role in providing essential products and services for the functioning of modern society; however, they are vulnerable to disasters, and their service disruptions can cause severe societal impacts. To protect infrastructure from disasters and reduce potential impacts, great progress has been made over the past decades in modeling interdependent infrastructure systems. In recent years, scholars have gradually shifted their focus to understanding and modeling the societal impacts of disruptions, reflecting the fact that infrastructure systems are critical precisely because of their role in societal functioning, especially in highly interconnected modern societies. Exploring how infrastructure disruptions impair society, with the aim of enhancing urban resilience, has become a key field of study. By comprehensively reviewing relevant studies, this paper presents the definition and types of societal impact of infrastructure disruptions and summarizes the modeling approaches into four types: extended infrastructure modeling approaches, empirical approaches, agent-based approaches, and big-data-driven approaches. For each approach, the paper organizes the relevant literature in terms of modeling ideas, advantages, and disadvantages. Furthermore, the four approaches are compared according to several criteria, including the input data, types of societal impact, and application scope. Finally, the paper discusses the challenges and future research directions in the field.
In large-scale systems, there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, which is itself difficult to scale. In this paper we present four algorithms to address these problems. In combination, they enable each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those subtasks. We also simulate real-life system effects such as network instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum within the system configurations considered. It provides 5x better performance recovery than approaches with no knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
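To make the exploration idea concrete, here is a minimal, heavily simplified sketch (not the paper's four algorithms; class and parameter names are hypothetical): an agent learns a value for allocating each subtask type to each neighbour and explores more when its best-known option barely beats the average.

import random
from collections import defaultdict

class AllocatorAgent:
    """Allocates subtasks to neighbouring agents using a simple value table."""

    def __init__(self, neighbours, lr=0.1):
        self.q = defaultdict(float)   # (subtask_type, neighbour) -> estimated reward
        self.neighbours = list(neighbours)
        self.lr = lr

    def epsilon(self, subtask_type):
        # Explore more when the best-known neighbour barely beats the average one.
        vals = [self.q[(subtask_type, n)] for n in self.neighbours]
        spread = max(vals) - sum(vals) / len(vals)
        return max(0.05, 1.0 - spread)

    def choose(self, subtask_type):
        if random.random() < self.epsilon(subtask_type):
            return random.choice(self.neighbours)
        return max(self.neighbours, key=lambda n: self.q[(subtask_type, n)])

    def update(self, subtask_type, neighbour, reward):
        key = (subtask_type, neighbour)
        self.q[key] += self.lr * (reward - self.q[key])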
We hypothesize that, due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate a model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and the NVIDIA Dynamic Hand Gesture dataset.
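Written out, the gain described above can be expressed as a plain accuracy difference (the notation is illustrative; the paper may normalize differently):
\[
u(m_1 \mid m_2) \;=\; A(\{m_1, m_2\}) \;-\; A(\{m_2\}),
\]
where $A(S)$ denotes the model's accuracy when only the modalities in $S$ are available, so a large gap between $u(m_1 \mid m_2)$ and $u(m_2 \mid m_1)$ indicates that one modality is being under-utilized.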