In oncology, phase II and multiple expansion cohort trials are crucial to clinical development plans because they help identify agents with sufficient activity to warrant continued development and confirm proof of concept. Typically, these trials are single-arm studies whose primary endpoint is short-term treatment efficacy. Despite the development of several well-designed methodologies, a practical impediment remains: the endpoints may not be observed quickly enough for adaptive go/no-go decisions to be made in a timely manner at each interim monitoring. Specifically, the Response Evaluation Criteria in Solid Tumors (RECIST) guideline defines a confirmed response and requires it in non-randomized trials in which response is the primary endpoint. However, obtaining confirmed outcomes from all participants enrolled by the time of interim monitoring can be time-consuming, because non-responders must be followed until disease progression. Thus, this study proposed an approach to accelerate decision-making that incorporates outcomes without confirmation while discounting their contribution to the decision-making framework using the generalized Bayes' theorem. The behavior of the proposed approach was evaluated through a simple simulation study, and the results demonstrated that it made appropriate interim go/no-go decisions.
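The abstract gives no formulas; purely as an illustration of discounting unconfirmed outcomes in a Bayesian go/no-go rule, the sketch below assumes a beta-binomial model in which unconfirmed responses and non-responses enter the likelihood with a power weight w < 1 (a power-likelihood form of generalized Bayes updating), and the interim decision compares the posterior probability of activity against a threshold. All parameter names, values, and the specific discounting form are assumptions for illustration, not taken from the study.

```python
from scipy.stats import beta

def interim_go_decision(
    n_confirmed_resp,      # confirmed responders observed so far
    n_confirmed_nonresp,   # participants with confirmed non-response
    n_unconf_resp,         # responses not yet confirmed
    n_unconf_nonresp,      # non-responses not yet confirmed
    w=0.5,                 # discount power for unconfirmed outcomes (assumed)
    a0=0.5, b0=0.5,        # Jeffreys-type beta prior (assumed)
    p0=0.20,               # null response rate
    go_threshold=0.80,     # required posterior probability to continue
):
    """Toy power-likelihood update: unconfirmed outcomes contribute with
    weight w, confirmed outcomes with weight 1 (illustrative only)."""
    a = a0 + n_confirmed_resp + w * n_unconf_resp
    b = b0 + n_confirmed_nonresp + w * n_unconf_nonresp
    prob_active = 1.0 - beta.cdf(p0, a, b)   # Pr(response rate > p0 | data)
    return ("go" if prob_active >= go_threshold else "no-go"), prob_active

# Example: 6 confirmed responses, 14 confirmed non-responses,
# and 6 outcomes (4 responses, 2 non-responses) still awaiting confirmation.
print(interim_go_decision(6, 14, 4, 2))
```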
In order to be deployed safely, Large Language Models (LLMs) must be capable of dynamically adapting their behavior based on their level of knowledge and the uncertainty associated with specific topics. This adaptive behavior, which we refer to as self-restraint, is non-trivial to teach, since it depends on the internal knowledge of an LLM. By default, LLMs are trained to maximize next-token likelihood, which does not teach the model to modulate its answers based on its level of uncertainty. To learn self-restraint, we devise a utility function that encourages the model to produce responses only when it is confident in them. This utility function can be used to score generations of different lengths as well as abstention. To optimize this function, we introduce ReSearch, a process of "self-reflection" consisting of iterative self-prompting and self-evaluation. We use the ReSearch algorithm to generate synthetic data on which we finetune our models. Compared to their original versions, our resulting models generate fewer \emph{hallucinations} overall at no additional inference cost, for both known and unknown topics, as the model learns to selectively restrain itself. In addition, our method elegantly incorporates the ability to abstain by augmenting the samples generated by the model during the search procedure with an answer expressing abstention.
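The abstract does not specify the exact utility function; purely as a hypothetical illustration, the sketch below assumes a response is decomposed into atomic statements that the model self-evaluates, rewarding statements it judges supported, penalizing those it does not, and assigning abstention a fixed neutral value so that abstaining beats a shaky answer but loses to a confident one. Every name and constant here is an assumption.

```python
def response_utility(statement_scores, abstained=False,
                     reward=1.0, penalty=2.0, abstain_value=0.0):
    """Hypothetical utility: +reward per statement the model judges correct,
    -penalty per statement it judges incorrect; abstention gets a fixed value.
    statement_scores: list of self-evaluation probabilities in [0, 1]."""
    if abstained:
        return abstain_value
    return sum(reward if p >= 0.5 else -penalty for p in statement_scores)

# A confident 3-statement answer beats abstention; a shaky one does not.
print(response_utility([0.9, 0.8, 0.95]))        # 3.0
print(response_utility([0.9, 0.3, 0.4]))         # -3.0
print(response_utility([], abstained=True))      # 0.0
```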
We present VoxBind, a new score-based generative model for 3D molecules conditioned on protein structures. Our approach represents molecules as 3D atomic density grids and leverages a 3D voxel-denoising network for learning and generation. We extend the neural empirical Bayes formalism (Saremi & Hyvarinen, 2019) to the conditional setting and generate structure-conditioned molecules with a two-step procedure: (i) sample noisy molecules from the Gaussian-smoothed conditional distribution with underdamped Langevin MCMC using the learned score function and (ii) estimate clean molecules from the noisy samples with single-step denoising. Compared to the current state of the art, our model is simpler to train, significantly faster to sample from, and achieves better results on extensive in silico benchmarks -- the generated molecules are more diverse, exhibit fewer steric clashes, and bind with higher affinity to protein pockets. The code is available at //github.com/genentech/voxbind/.
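As a schematic of the two-step procedure described above, the sketch below assumes a learned conditional score function approximating the gradient of the log Gaussian-smoothed density given the pocket; step (ii) uses the empirical Bayes (Tweedie) denoising estimate x_hat = y + sigma^2 * score(y). The particular Langevin discretization, the step sizes, and the placeholder score function are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def walk_jump_sample(score_fn, pocket, shape, sigma=1.0,
                     n_steps=500, step=1e-2, friction=1.0, seed=None):
    """Sketch of structure-conditioned sampling:
    (i) underdamped Langevin MCMC on the noisy variable y using the learned
        score of the Gaussian-smoothed conditional density;
    (ii) single-step denoising via the empirical Bayes estimate."""
    rng = np.random.default_rng(seed)
    y = sigma * rng.standard_normal(shape)   # start from noise
    v = np.zeros(shape)                      # auxiliary velocity
    for _ in range(n_steps):
        g = score_fn(y, pocket)
        v = v + step * (g - friction * v) \
              + np.sqrt(2.0 * friction * step) * rng.standard_normal(shape)
        y = y + step * v
    return y + sigma ** 2 * score_fn(y, pocket)  # single-step denoising

# Placeholder score of a standard Gaussian, for shape-checking only.
dummy_score = lambda y, pocket: -y
x_hat = walk_jump_sample(dummy_score, pocket=None, shape=(8, 8, 8))
```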
While undulatory swimming of elongate limbless robots has been extensively studied in open hydrodynamic environments, less research has focused on limbless locomotion in complex, cluttered aquatic environments. Motivated by the concept of mechanical intelligence, in which control for obstacle navigation can be offloaded to passive body mechanics in terrestrial limbless locomotion, we hypothesize that principles of mechanical intelligence can be extended to cluttered hydrodynamic regimes. To test this, we developed an untethered limbless robot capable of undulatory swimming on water surfaces, using a bilateral cable-driven mechanism inspired by organismal muscle actuation morphology to achieve programmable anisotropic body compliance. We demonstrated through robophysical experiments that, as in terrestrial locomotion, an appropriate level of body compliance can facilitate emergent swimming through complex hydrodynamic environments under purely open-loop control. Moreover, we found that swimming performance depends on undulation frequency, with effective locomotion achieved only within a specific frequency range. This contrasts with highly damped terrestrial regimes, where inertial effects can often be neglected. Further, to enhance performance and address the challenges posed by nondeterministic obstacle distributions, we incorporated computational intelligence by developing a real-time body compliance tuning controller based on cable tension feedback. This controller improves the robot's robustness and overall speed in heterogeneous hydrodynamic environments.
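The abstract does not describe the controller's form; purely as a toy illustration of compliance tuning from cable tension feedback, the sketch below assumes body compliance is represented by a scalar stiffness gain that is nudged toward a tension setpoint each control step. The function, gains, and bounds are all assumptions.

```python
def update_compliance(stiffness, tension, tension_setpoint,
                      gain=0.05, k_min=0.1, k_max=1.0):
    """Toy proportional update: if measured cable tension exceeds the setpoint
    (e.g. the body is pushing against an obstacle), soften the body; if the
    tension is low, stiffen it to recover undulation amplitude."""
    stiffness -= gain * (tension - tension_setpoint)
    return min(max(stiffness, k_min), k_max)

# One control step with hypothetical readings
k = 0.6
k = update_compliance(k, tension=2.3, tension_setpoint=1.8)
print(k)  # the body is made slightly more compliant
```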
Response-adaptive (RA) designs of clinical trials allow a given objective to be targeted by skewing the allocation of participants to treatments based on observed outcomes. RA designs face greater regulatory scrutiny due to potential type I error inflation, which limits their uptake in practice. Existing approaches to type I error control either only work for specific designs, carry a risk of Monte Carlo/approximation error, are conservative, or are computationally intractable. We develop a general and computationally tractable approach for exact analysis in two-arm RA designs with binary outcomes. We use the approach to construct exact tests applicable to designs that use either randomized or deterministic RA procedures, allowing for complexities such as delayed outcomes, early stopping, or allocation of participants in blocks. Our efficient forward-recursion implementation allows testing of two-arm trials with 1,000 participants on a standard computer. Through an illustrative computational study of trials using randomized dynamic programming, we show that, contrary to what is known for equal allocation, a conditional exact test has almost uniformly higher power than the unconditional test. Two real-world trials with the above-mentioned complexities are re-analyzed to demonstrate the value of our approach in controlling type I error and/or improving statistical power.
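To illustrate the forward-recursion idea (not the authors' specific test construction), the sketch below propagates the exact probability distribution over trial states (participants and successes per arm) for a two-arm binary-outcome design under an arbitrary response-adaptive allocation rule; an exact test would then be built from tail probabilities of a test statistic over this distribution. The example allocation rule is an illustrative success-driven randomized rule, not the one studied in the paper.

```python
from collections import defaultdict

def exact_state_distribution(n_total, p_ctrl, p_trt, alloc_prob):
    """Forward recursion over trial states for a two-arm response-adaptive
    design with binary outcomes. A state is (n1, s1, n0, s0): participants
    and successes on the treatment and control arms. alloc_prob(state)
    returns the probability that the next participant is allocated to the
    treatment arm (deterministic rules simply return 0 or 1)."""
    dist = {(0, 0, 0, 0): 1.0}
    for _ in range(n_total):
        nxt = defaultdict(float)
        for (n1, s1, n0, s0), pr in dist.items():
            a = alloc_prob((n1, s1, n0, s0))
            if a > 0:  # next participant on the treatment arm
                nxt[(n1 + 1, s1 + 1, n0, s0)] += pr * a * p_trt
                nxt[(n1 + 1, s1, n0, s0)] += pr * a * (1 - p_trt)
            if a < 1:  # next participant on the control arm
                nxt[(n1, s1, n0 + 1, s0 + 1)] += pr * (1 - a) * p_ctrl
                nxt[(n1, s1, n0 + 1, s0)] += pr * (1 - a) * (1 - p_ctrl)
        dist = dict(nxt)
    return dist

# Illustrative randomized rule: allocate in proportion to observed successes plus one.
rule = lambda st: (st[1] + 1) / (st[1] + st[3] + 2)
dist = exact_state_distribution(20, p_ctrl=0.3, p_trt=0.3, alloc_prob=rule)
print(sum(dist.values()))  # probabilities sum to 1 under the null
```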
Most scientific machine learning (SciML) applications of neural networks involve hundreds to thousands of parameters, and hence uncertainty quantification for such models is plagued by the curse of dimensionality. Using physical applications, we show that $L_0$ sparsification prior to Stein variational gradient descent ($L_0$+SVGD) is a more robust and efficient means of uncertainty quantification, in terms of computational cost and performance, than the direct application of SVGD or projected SVGD methods. Specifically, $L_0$+SVGD demonstrates superior resilience to noise, the ability to perform well in extrapolated regions, and a faster convergence rate to an optimal solution.
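For reference, the standard SVGD update (which, in the approach above, would be applied to the parameters remaining after $L_0$ sparsification) transports a set of particles along the kernelized Stein direction. The sketch below is a generic SVGD step with an RBF kernel and median-heuristic bandwidth; the $L_0$ pruning itself is outside its scope, and the toy target is assumed for illustration.

```python
import numpy as np

def svgd_step(particles, grad_log_p, step=1e-2):
    """One Stein variational gradient descent update.
    particles: (n, d) array; grad_log_p: function mapping (n, d) -> (n, d)."""
    n, _ = particles.shape
    diffs = particles[:, None, :] - particles[None, :, :]       # (n, n, d)
    sq_dists = np.sum(diffs ** 2, axis=-1)                       # (n, n)
    h = np.median(sq_dists) / np.log(n + 1) + 1e-8               # median-heuristic bandwidth
    k = np.exp(-sq_dists / h)                                     # RBF kernel matrix
    grad_k = 2.0 / h * k[:, :, None] * diffs                      # grad of k(x_j, x_i) w.r.t. x_j
    phi = (k @ grad_log_p(particles) + grad_k.sum(axis=1)) / n    # Stein direction
    return particles + step * phi

# Example: particles approximating a 2D standard Gaussian target.
x = np.random.default_rng(0).normal(size=(50, 2)) * 3.0
for _ in range(200):
    x = svgd_step(x, grad_log_p=lambda z: -z)
```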
Recently, successes have been achieved with high-order gas-kinetic schemes (HGKS) on unstructured meshes for compressible flows. In this paper, to accelerate the computation, HGKS is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). HGKS on unstructured meshes is a fully explicit scheme, and the acceleration framework can be developed based on cell-level parallelism. For single-GPU computation, the connectivity of geometric information is generated to meet the requirements of data localization and independence. Based on this data structure, the CUDA kernels and corresponding grids are set up. With a one-to-one mapping between cell indices and CUDA threads, single-GPU computation using CUDA can be implemented for HGKS. For multiple-GPU computation, domain decomposition and data exchange need to be taken into account. The domain is decomposed into subdomains by METIS, and MPI processes are created to control each subdomain and handle communication among GPUs. With the reconstruction of connectivity and the addition of ghost cells, the main CUDA configuration for a single GPU can be inherited by each GPU. Benchmark cases for compressible flows, including an accuracy test and flow passing through a sphere, are presented to assess the numerical performance of HGKS on Nvidia RTX A5000 and Tesla V100 GPUs. For single-GPU computation, compared with the parallel central processing unit (CPU) code running on an Intel Xeon Gold 5120 CPU with open multi-processing (OpenMP) directives, a 5x speedup is achieved by the RTX A5000 and a 9x speedup by the Tesla V100. For multiple-GPU computation, the HGKS code scales properly with an increasing number of GPUs. Numerical results confirm the excellent performance of multiple-GPU accelerated HGKS on unstructured meshes.
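The HGKS solver itself is written in CUDA; purely to illustrate the cell-level parallelism and the one-to-one mapping between cell indices and GPU threads, the sketch below uses Numba's CUDA backend from Python (it requires an NVIDIA GPU). The trivial residual array and all names are placeholders, not the HGKS flux evaluation.

```python
import numpy as np
from numba import cuda

@cuda.jit
def explicit_update(q_new, q_old, residual, dt):
    i = cuda.grid(1)                 # one-to-one mapping: thread index -> cell index
    if i < q_old.shape[0]:
        q_new[i] = q_old[i] + dt * residual[i]

n_cells = 1_000_000
q_old = cuda.to_device(np.zeros(n_cells))
residual = cuda.to_device(np.ones(n_cells))
q_new = cuda.device_array(n_cells)

threads = 128
blocks = (n_cells + threads - 1) // threads
explicit_update[blocks, threads](q_new, q_old, residual, 1e-3)
```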
Evidence to guide healthcare decisions is often limited by a lack of relevant and trustworthy literature as well as difficulty in contextualizing existing research for a specific patient. Large language models (LLMs) could potentially address both challenges by either summarizing published literature or generating new studies based on real-world data (RWD). We evaluated the ability of five LLM-based systems to answer 50 clinical questions and had nine independent physicians review the responses for relevance, reliability, and actionability. As it stands, general-purpose LLMs (ChatGPT-4, Claude 3 Opus, Gemini Pro 1.5) rarely produced answers that were deemed relevant and evidence-based (2%-10%). In contrast, retrieval-augmented generation (RAG)-based and agentic LLM systems produced relevant and evidence-based answers for 24% (OpenEvidence) to 58% (ChatRWD) of questions. Only the agentic ChatRWD was able to answer novel questions (65% vs. 0-9% for the other LLMs). These results suggest that while general-purpose LLMs should not be used as-is, a purpose-built RAG system for evidence summarization, working synergistically with one for generating novel evidence, would improve the availability of pertinent evidence for patient care.
Turnover refers to the movement of professional employees into and out of a company in a given period. This phenomenon significantly impacts the software industry, since it generates knowledge loss, schedule delays, and increased project costs. Despite the efforts of researchers and practitioners to minimize turnover, more studies are needed to understand what motivates software engineers to leave their jobs and which strategies CEOs adopt to retain these professionals in software development companies. In this paper, we contribute a mixed-methods study involving semi-structured interviews with software engineers and CEOs to obtain a broad view of these professionals' perspectives on turnover, followed by a validation survey with additional software engineers to check and refine the insights from the interviews. In studying these aspects, we identified 19 distinct reasons for software engineers' turnover and 18 of the more effective strategies used in the software development industry to reduce it. Our findings have several implications for industry and academia and can drive future research.
In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is itself difficult to scale. In this paper we present four algorithms to solve these problems. In combination, these algorithms enable each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system according to how close to optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
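The abstract describes the exploration behaviour only at a high level; as a toy illustration of an agent that learns where to allocate subtasks and explores more when its current strategy appears far from the best it has observed, consider the sketch below. The value-update rule, the exploration heuristic, and all names are assumptions for illustration.

```python
import random

class AllocatorAgent:
    """Toy sketch: an agent learns which neighbours to allocate subtasks to via
    a simple action-value update, and explores more when its current estimated
    strategy value lags the best reward it has observed."""
    def __init__(self, neighbours, lr=0.1):
        self.q = {n: 0.0 for n in neighbours}   # estimated value of each neighbour
        self.lr = lr
        self.best_reward = 1e-9

    def exploration_rate(self):
        current = max(self.q.values())
        # explore more when the current strategy seems far from optimal
        return min(1.0, max(0.05, 1.0 - current / self.best_reward))

    def choose(self):
        if random.random() < self.exploration_rate():
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, neighbour, reward):
        self.q[neighbour] += self.lr * (reward - self.q[neighbour])
        self.best_reward = max(self.best_reward, reward)

agent = AllocatorAgent(neighbours=["a", "b", "c"])
for _ in range(100):
    n = agent.choose()
    agent.update(n, reward={"a": 0.2, "b": 0.9, "c": 0.5}[n])
print(max(agent.q, key=agent.q.get))  # converges to the most capable neighbour
```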
Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafted features or class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on more detailed segmentation of the organs and vessels. We use training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection, acquired at a different hospital, of 150 CT scans targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We also compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance on small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN-based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: //github.com/holgerroth/3Dunet_abdomen_cascade.
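The hand-off between the two stages can be sketched as follows: the coarse prediction from the first FCN defines a padded bounding box that is used to crop the CT volume before it is passed to the second FCN. The helper name and margin value below are illustrative assumptions, not the released code.

```python
import numpy as np

def candidate_region(coarse_mask, margin=8):
    """Stage-1 output to stage-2 input: bounding box of the coarse prediction,
    padded by a margin, used to crop the CT volume so the second FCN only
    classifies a small fraction (~10%) of the original voxels."""
    zz, yy, xx = np.nonzero(coarse_mask)
    lo = [max(int(v.min()) - margin, 0) for v in (zz, yy, xx)]
    hi = [min(int(v.max()) + margin + 1, s)
          for v, s in zip((zz, yy, xx), coarse_mask.shape)]
    return tuple(slice(l, h) for l, h in zip(lo, hi))

# Usage: crop both the image and the coarse probability map before the second pass
# region = candidate_region(coarse_pred > 0.5)
# fine_input = ct_volume[region]
```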