As the burden of herbicide resistance grows and the environmental repercussions of excessive herbicide use become clear, new ways of managing weed populations are needed. This is particularly true for cereal crops, such as wheat and barley, which are staple food crops and occupy a globally significant portion of agricultural land. Even small improvements in weed management practices across these major food crops would yield considerable benefits for both the environment and global food security. Blackgrass is a major grass weed that causes particular problems in cereal crops in north-west Europe, a major cereal production area, because it has high levels of herbicide resistance and is well adapted to agronomic practice in this region. Using machine vision and multispectral imaging, we investigate the effectiveness of state-of-the-art methods for identifying blackgrass in wheat and barley crops. As part of this work, we provide a large dataset with which we evaluate several key aspects of blackgrass weed recognition. First, we determine the performance of different CNN and transformer-based architectures on images from unseen fields. Second, we demonstrate the role that different spectral bands play in weed classification performance. Lastly, we evaluate the effect of dataset size on classification performance for each of the models trialled. We find that even with a fairly modest quantity of training data, an accuracy of almost 90% can be achieved on images from unseen fields.
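As a hedged illustration of how a standard CNN backbone could be adapted to multispectral input for this kind of weed classification task (the specific architecture, band count, weight-initialization scheme, and class setup below are assumptions, not the paper's pipeline):

```python
# Minimal sketch: adapting an ImageNet ResNet-50 to a 5-band multispectral
# input (e.g. B, G, R, red-edge, NIR) for binary blackgrass-vs-crop
# classification. Band count and class setup are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_BANDS = 5      # assumed number of spectral channels
NUM_CLASSES = 2    # blackgrass vs. crop/background

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Replace the 3-channel stem with an N-channel one, reusing the pretrained
# RGB filters for the first three bands and averaging them for the rest.
old_conv = model.conv1
new_conv = nn.Conv2d(NUM_BANDS, old_conv.out_channels,
                     kernel_size=old_conv.kernel_size,
                     stride=old_conv.stride,
                     padding=old_conv.padding,
                     bias=False)
with torch.no_grad():
    new_conv.weight[:, :3] = old_conv.weight
    extra = old_conv.weight.mean(dim=1, keepdim=True)
    new_conv.weight[:, 3:] = extra.repeat(1, NUM_BANDS - 3, 1, 1)
model.conv1 = new_conv

# New classification head for the two-class problem.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Forward pass on a dummy multispectral batch.
x = torch.randn(4, NUM_BANDS, 224, 224)
logits = model(x)
print(logits.shape)  # torch.Size([4, 2])
```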
Current methods for predicting osteoarthritis (OA) outcomes do not incorporate disease-specific prior knowledge to improve outcome prediction models. We developed a novel approach that effectively uses consecutive imaging studies to improve OA outcome predictions by incorporating an OA severity constraint. This constraint ensures that the risk of OA for a knee either increases or remains the same over time. Deep learning (DL) models were trained to predict total knee replacement (TKR) within multiple time periods (1 year, 2 years, and 4 years) using knee radiographs and MRI scans. Models with and without the risk constraint were evaluated using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). The novel RiskFORM2 method, leveraging a dual-model risk constraint architecture, demonstrated superior performance, yielding an AUROC of 0.87 and AUPRC of 0.47 for 1-year TKR prediction on the OAI radiograph test set, a marked improvement over the 0.79 AUROC and 0.34 AUPRC of the baseline approach. The performance advantage extended to longer follow-up periods, with RiskFORM2 maintaining a high AUROC of 0.86 and AUPRC of 0.75 in predicting TKR within 4 years. Additionally, RiskFORM2 generalized better to the external MOST radiograph test set, with an AUROC of 0.77 and AUPRC of 0.25 for 1-year predictions, higher than the 0.71 AUROC and 0.19 AUPRC of the baseline approach. Similar patterns emerged in the MRI test sets, with RiskFORM2 consistently outperforming the baseline approach, although RiskFORM1 exhibited the highest AUROC of 0.86 and AUPRC of 0.72 for 4-year predictions on the OAI set.
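A minimal sketch of the general idea behind such a severity constraint, penalizing a model whenever the predicted risk for the same knee decreases between an earlier and a later visit, might look as follows; the model interface, toy tensors, and loss weighting are assumptions rather than the RiskFORM implementations:

```python
# Minimal sketch of a temporal risk-monotonicity penalty: for paired scans of
# the same knee, the predicted risk at the later visit should be >= the risk
# at the earlier visit. Model interface and lambda weighting are assumptions,
# not the paper's RiskFORM architecture.
import torch
import torch.nn.functional as F

def constrained_loss(model, x_early, x_late, y_late, lam=1.0):
    """BCE on the later visit plus a hinge penalty on risk decreases."""
    risk_early = torch.sigmoid(model(x_early)).squeeze(-1)
    risk_late = torch.sigmoid(model(x_late)).squeeze(-1)

    # Standard supervised loss on the later-visit label (TKR within horizon).
    bce = F.binary_cross_entropy(risk_late, y_late.float())

    # Penalize only violations, i.e. cases where predicted risk drops over time.
    monotonicity = F.relu(risk_early - risk_late).mean()

    return bce + lam * monotonicity

# Example with a toy model and random tensors.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 1))
x_early = torch.randn(8, 1, 64, 64)   # earlier-visit images (toy size)
x_late = torch.randn(8, 1, 64, 64)    # later-visit images of the same knees
y_late = torch.randint(0, 2, (8,))    # TKR-within-horizon labels
loss = constrained_loss(model, x_early, x_late, y_late)
loss.backward()
```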
We derive an algorithm for compressing the currents and varifolds representations of shapes, using the Nyström approximation in Reproducing Kernel Hilbert Spaces. Our method is faster than existing compression techniques and comes with theoretical guarantees on the rate of convergence of the compressed approximation, as a function of the smoothness of the associated shape representation. The resulting compressions are shown to be useful for downstream tasks such as nonlinear shape registration in the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework, even at very high compression ratios. The performance of our algorithm is demonstrated on large-scale shape data from modern geometry processing datasets and is shown to be fast and scalable, with rapid error decay.
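As a hedged illustration of the underlying numerical idea (not the authors' algorithm), the sketch below compresses a weighted point cloud, a simplified stand-in for a currents/varifold representation, by selecting landmark points and re-solving for their weights via the subsampled kernel system; the Gaussian kernel, uniform landmark sampling, and data are assumptions:

```python
# Minimal sketch of kernel-based compression of a weighted point cloud:
# keep m landmark points and re-solve for their weights so the compressed
# measure matches the full one in the kernel's RKHS norm. Gaussian kernel,
# uniform landmark sampling, and random data are illustrative assumptions.
import numpy as np

def gaussian_kernel(X, Y, sigma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(0)
n, m = 1000, 50                       # full size, compressed size
X = rng.normal(size=(n, 3))           # shape points (e.g. surface centers)
a = rng.normal(size=n) / n            # original weights

idx = rng.choice(n, size=m, replace=False)   # landmark selection
Z = X[idx]

# Solve K_zz b = K_zx a so that sum_j b_j k(z_j, .) ~ sum_i a_i k(x_i, .)
K_zz = gaussian_kernel(Z, Z)
K_zx = gaussian_kernel(Z, X)
b = np.linalg.solve(K_zz + 1e-8 * np.eye(m), K_zx @ a)

# RKHS approximation error ||full - compressed||^2
K_xx_a = gaussian_kernel(X, X) @ a
err2 = a @ K_xx_a - 2 * b @ (K_zx @ a) + b @ (K_zz @ b)
print(f"relative RKHS error: {np.sqrt(max(err2, 0.0) / (a @ K_xx_a)):.3f}")
```

Minimizing the RKHS error over the landmark weights reduces to the linear system K_zz b = K_zx a, which is why the quality of such a compression tracks the smoothness of the kernel representation.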
Tails used as inertial appendages induce body rotations of animals and robots, a phenomenon that is governed largely by the ratio of the body and tail moments of inertia. However, vertebrate tails have more degrees of freedom (e.g., number of joints, rotational axes) than most current theoretical models and robotic tails. To understand how morphology affects inertial appendage function, we developed an optimization-based approach that finds the maximally effective tail trajectory and measures error from a target trajectory. For tails of equal total length and mass, increasing the number of equal-length joints increased the complexity of maximally effective tail motions. When we optimized the relative lengths of tail bones while keeping the total tail length, mass, and number of joints the same, the optimal lengths matched the pattern observed in the tail bones of mammals specialized for inertial maneuvering. In both experiments, adding joints enhanced the performance of the inertial appendage, but with diminishing returns, largely due to the total control effort constraint. This optimization-based simulation can compare the maximum performance of diverse inertial appendages that dynamically vary in moment of inertia in 3D space, predict inertial capabilities from skeletal data, and inform the design of robotic inertial appendages.
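A heavily simplified, hedged sketch of this kind of trajectory optimization for a planar body with a single-joint tail (the dynamics, inertia ratio, cost weights, and solver are illustrative assumptions, not the paper's model) is given below:

```python
# Heavily simplified planar sketch of tail-trajectory optimization: a body
# with a single-joint tail reorients in flight; with zero net angular
# momentum, d(theta_body) = -(I_tail / I_body) * d(theta_tail). We optimize
# the discretized tail-angle trajectory to hit a target body rotation while
# penalizing control effort (squared joint accelerations). The inertia
# ratio, target angle, and weights are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

T = 30                         # time steps
ratio = 0.3                    # I_tail / I_body (assumed)
target = np.deg2rad(45.0)      # desired body rotation
effort_weight = 1e-2

def body_angle(tail):
    # Body rotation implied by tail motion under zero angular momentum.
    return -ratio * (tail - tail[0])

def cost(tail):
    tracking = (body_angle(tail)[-1] - target) ** 2
    accel = np.diff(tail, n=2)                 # proxy for joint torques
    return tracking + effort_weight * np.sum(accel ** 2)

res = minimize(cost, np.zeros(T), method="L-BFGS-B")
print("final body rotation (deg):",
      round(float(np.rad2deg(body_angle(res.x)[-1])), 2))
```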
We formulate the fair and efficient completion problem for indivisible goods, defined as follows: Given a partial allocation of indivisible goods among agents, does there exist an allocation of the remaining goods (i.e., a completion) that satisfies fairness and economic efficiency guarantees of interest? We study the computational complexity of the completion problem for prominent fairness and efficiency notions such as envy-freeness up to one good (EF1), proportionality up to one good (Prop1), maximin share (MMS), and Pareto optimality (PO), and focus on the class of additive valuations as well as its subclasses, such as binary additive and lexicographic valuations. We find that while the completion problem is significantly harder than the standard fair division problem (wherein the initial partial allocation is empty), the consideration of restricted preferences facilitates positive algorithmic results for threshold-based fairness notions (Prop1 and MMS). On the other hand, the completion problem remains computationally intractable for envy-based notions such as EF1 and EF1+PO even under restricted preferences.
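For concreteness, a minimal sketch of checking EF1 for a complete allocation under additive valuations (the standard definition, not the paper's completion algorithms; the toy valuations are assumptions) could look like this:

```python
# Minimal sketch: check envy-freeness up to one good (EF1) for an allocation
# under additive valuations. An allocation is EF1 if, for every pair (i, j),
# agent i does not envy agent j after removing some single good from j's
# bundle. Valuations and bundles below are illustrative assumptions.

def is_ef1(valuations, allocation):
    """valuations[i][g]: agent i's value for good g;
    allocation[i]: list of goods held by agent i."""
    agents = range(len(valuations))
    for i in agents:
        own = sum(valuations[i][g] for g in allocation[i])
        for j in agents:
            if i == j or not allocation[j]:
                continue
            other = sum(valuations[i][g] for g in allocation[j])
            best_removal = max(valuations[i][g] for g in allocation[j])
            if own < other - best_removal:   # envy survives every removal
                return False
    return True

# Two agents, four goods, additive values.
valuations = [
    {"a": 5, "b": 3, "c": 2, "d": 1},   # agent 0
    {"a": 1, "b": 4, "c": 4, "d": 2},   # agent 1
]
allocation = [["a", "d"], ["b", "c"]]
print(is_ef1(valuations, allocation))   # True
```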
The Metaverse, a persistent, immersive virtual environment, has the immense potential to revolutionize healthcare by transforming patient care, medical education, and research. This paper explores the applications, benefits, and challenges associated with this transformative technology, highlighting its ability to improve patient engagement, communication, access to information, and health outcomes. The paper also examines how the analysis of Metaverse data using machine learning techniques can unlock insights to further enhance healthcare applications. The discussion summarizes key findings, analyzes the significance and practical implications of Metaverse integration, and identifies areas for future research. It underscores the role of major tech companies in developing Metaverse-based solutions and the importance of addressing emerging opportunities and challenges to unlock the transformative potential of this technology in healthcare. The paper concludes by emphasizing the need for collaboration between stakeholders to ensure the ethical and effective implementation of these technologies, ultimately leading to a more accessible, personalized, and efficient healthcare system.
For causal discovery in the presence of latent confounders, constraints beyond conditional independences exist that can enable causal discovery algorithms to distinguish more pairs of graphs. Such constraints are not yet well understood. In the setting of linear structural equation models without bows, we study algebraic constraints and argue that these provide the most fine-grained resolution achievable. We propose efficient algorithms that decide whether two graphs impose the same algebraic constraints, or whether the constraints imposed by one graph are a subset of those imposed by another.
Social and behavioral determinants of health (SBDH) play a crucial role in health outcomes and are frequently documented in clinical text. Automatically extracting SBDH information from clinical text relies on publicly available, high-quality datasets. However, existing SBDH datasets exhibit substantial limitations in their availability and coverage. In this study, we introduce Synth-SBDH, a novel synthetic dataset with detailed SBDH annotations, encompassing status, temporal information, and rationale across 15 SBDH categories. We showcase the utility of Synth-SBDH on three tasks using real-world clinical datasets from two distinct hospital settings, highlighting its versatility, generalizability, and distillation capabilities. Models trained on Synth-SBDH consistently outperform counterparts with no Synth-SBDH training, achieving up to 62.5% macro-F improvements. Additionally, Synth-SBDH proves effective for rare SBDH categories and under resource constraints. Human evaluation demonstrates a Human-LLM alignment of 71.06% and uncovers areas for future refinements.
Since the introduction of the Kolmogorov complexity of binary sequences in the 1960s, there have been significant advancements in the topic of complexity measures for randomness assessment, which are of fundamental importance in theoretical computer science and of practical interest in cryptography. This survey reviews notable research from the past four decades on the linear, quadratic and maximum-order complexities of pseudo-random sequences and their relations with Lempel-Ziv complexity, expansion complexity, 2-adic complexity, and correlation measures.
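As one concrete example of the measures surveyed, the linear complexity of a binary sequence, the length of the shortest LFSR that generates it, can be computed with the Berlekamp-Massey algorithm; a minimal GF(2) sketch follows (the test sequence is an illustrative assumption):

```python
# Minimal sketch: Berlekamp-Massey over GF(2), returning the linear
# complexity (shortest LFSR length) of a binary sequence. The test sequence
# is an illustrative assumption.

def linear_complexity(s):
    n = len(s)
    c = [0] * n; b = [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy: does the current LFSR predict s[i]?
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d == 1:
            t = c[:]
            for j in range(n - i + m):       # c(x) += b(x) * x^(i-m)
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# Period-7 m-sequence from x^3 + x + 1 has linear complexity 3.
seq = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1]
print(linear_complexity(seq))   # 3
```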
The self-attention mechanism has been adopted in several widely used message-passing neural networks (MPNNs) (e.g., GATs), where it adaptively controls the amount of information that flows along the edges of the underlying graph. This use of attention has made such models a baseline for studies on explainable AI (XAI), since interpretation via attention has been popularized in various domains (e.g., natural language processing and computer vision). However, existing studies often use naive calculations to derive attribution scores from attention, without careful consideration of how edge attributions should be computed. In our study, we aim to fill the gap between the widespread usage of attention-enabled MPNNs and their largely under-explored potential for explainability, a topic that has been actively investigated in other areas. To this end, as a first attempt, we formalize the problem of edge attribution from attention weights in GNNs. We then propose GAtt, an edge attribution calculation method built upon the computation tree. Through comprehensive experiments, we demonstrate the effectiveness of our proposed method when evaluating attributions from GATs. Conversely, we empirically validate that simply averaging attention weights over graph attention layers is insufficient to interpret the GAT model's behavior. Code is publicly available at https://github.com/jordan7186/GAtt/tree/main.
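To make the naive baseline concrete, the sketch below scores each edge by averaging its attention coefficients over heads and layers; this is the simple scheme the study argues is insufficient, not the proposed GAtt method, and the tensor shapes and toy values are assumptions:

```python
# Minimal sketch of the naive baseline: score each edge by averaging its
# attention coefficients over all GAT layers (and heads). This is not the
# computation-tree-based GAtt attribution; shapes and toy data are
# illustrative assumptions.
import torch

def average_attention_attribution(edge_index, attn_per_layer):
    """edge_index: (2, E) tensor of edges;
    attn_per_layer: list of (E, H) attention tensors, one per GAT layer."""
    # Mean over heads within each layer, then mean over layers.
    per_layer = [a.mean(dim=1) for a in attn_per_layer]     # each (E,)
    return torch.stack(per_layer, dim=0).mean(dim=0)        # (E,)

# Toy graph with 4 edges, two layers, two heads each.
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])
attn_layer1 = torch.tensor([[0.9, 0.7], [0.1, 0.3], [0.5, 0.5], [0.2, 0.4]])
attn_layer2 = torch.tensor([[0.8, 0.6], [0.2, 0.4], [0.6, 0.4], [0.3, 0.1]])
scores = average_attention_attribution(edge_index, [attn_layer1, attn_layer2])
print(scores)   # one attribution score per edge in edge_index
```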
Understanding causality helps to structure interventions that achieve specific goals and enables predictions under interventions. With the growing importance of learning causal relationships, causal discovery has transitioned from traditional methods that infer potential causal structures from observational data to deep learning-based pattern recognition approaches. The rapid accumulation of massive data has promoted the emergence of causal discovery methods with excellent scalability. Existing surveys of causal discovery methods mainly focus on traditional methods based on constraints, scores, and functional causal models (FCMs); deep learning-based methods have not been systematically organized and elaborated, and causal discovery has rarely been examined from the perspective of variable paradigms. We therefore divide causal discovery tasks into three types according to the variable paradigm and define each task, define and instantiate the relevant datasets and the resulting causal model for each task, and then review the main existing causal discovery methods for the different tasks. Finally, we propose roadmaps from different perspectives for the current research gaps in the field of causal discovery and point out future research directions.