The sequential multiple assignment randomized trial (SMART) is the ideal study design for the evaluation of multistage treatment regimes, which comprise sequential decision rules that recommend treatments for a patient at each of a series of decision points based on their evolving characteristics. A common goal is to compare the set of so-called embedded regimes represented in the design on the basis of a primary outcome of interest. In the study of chronic diseases and disorders, this outcome is often a time to an event, and a goal is to compare the distributions of the time-to-event outcome associated with each regime in the set. We present a general statistical framework in which we develop a logrank-type test for comparing the survival distributions associated with regimes within a specified set, based on data from a SMART with an arbitrary number of stages. The test allows incorporation of covariate information to enhance efficiency and can also be used with data from an observational study. The framework clarifies the assumptions required to yield a principled test procedure, and the proposed test subsumes or offers an improved alternative to existing methods. We demonstrate the performance of the methods in a suite of simulation studies and apply them to a SMART in patients with acute promyelocytic leukemia.
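For context, the classical two-sample logrank statistic that such regime comparisons generalize is (standard notation, not the paper's; in the SMART setting each subject's contribution would additionally be weighted, e.g., by the inverse probability of receiving treatment consistent with a regime):

```latex
Z \;=\; \frac{\sum_{j} \left( d_{1j} - e_{1j} \right)}{\sqrt{\sum_{j} v_j}},
\qquad
e_{1j} = d_j \, \frac{n_{1j}}{n_j},
\qquad
v_j = \frac{d_j \,(n_j - d_j)\, n_{1j}\, n_{2j}}{n_j^{2}\,(n_j - 1)},
```

where $d_{kj}$ and $n_{kj}$ are the events and numbers at risk in group $k$ at the $j$-th event time; under the null hypothesis of equal survival distributions, $Z$ is approximately standard normal.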
Accurate estimation of queuing delays is crucial for designing and optimizing communication networks, particularly in Deterministic Networking (DetNet) scenarios. This study investigates the approximation of Internet queuing delays using an M/M/1 envelope model, which provides a simple way to obtain tight upper bounds on real delay percentiles. Real traffic statistics collected at large Internet Exchange Points (such as Amsterdam and San Francisco) are used to fit polynomial regression models that map measured packet queuing delays onto the M/M/1 envelope models. Finally, we propose a methodology for providing delay percentiles in DetNet scenarios where tight latency guarantees must be assured.
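As an illustration of what an M/M/1 envelope buys: once arrival rate λ and service rate μ are fixed, any delay percentile has a closed form via the standard waiting-time tail P(W > t) = ρ e^{-(μ-λ)t} with ρ = λ/μ. A minimal sketch (parameter values illustrative, not from the measured traces):

```python
import math

def mm1_delay_percentile(lam: float, mu: float, p: float) -> float:
    """p-th percentile of the M/M/1 waiting time W.

    Uses the standard tail P(W > t) = rho * exp(-(mu - lam) * t) with
    rho = lam / mu < 1; percentiles falling inside the probability
    mass P(W = 0) = 1 - rho are 0.
    """
    rho = lam / mu
    assert 0.0 < rho < 1.0, "queue must be stable (lam < mu)"
    if p <= 1.0 - rho:
        return 0.0
    return math.log(rho / (1.0 - p)) / (mu - lam)

# Example: a 99.9th-percentile delay bound for an envelope with
# mu = 1.2e6 pkt/s and lam = 1.0e6 pkt/s (illustrative values only).
print(mm1_delay_percentile(1.0e6, 1.2e6, 0.999))  # ~3.4e-5 s
```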
LiDAR-based 3D object detection is pivotal across many applications, yet the performance of such detection systems often degrades after deployment, especially when faced with unseen test point clouds originating from diverse locations or subjected to corruption. In this work, we introduce a new online adaptation framework for detectors named Model Synergy (MOS). Specifically, MOS dynamically assembles a best-fit supermodel for each test batch from a bank of historical checkpoints, leveraging long-term knowledge to guide model updates without forgetting. The model assembly is directed by the proposed synergy weights (SW), employed for weighted averaging of the selected checkpoints to minimize redundancy in the composite supermodel. These weights are calculated by evaluating the similarity of predicted bounding boxes on test data and the feature independence among model pairs in the bank. To keep the model bank informative yet compact, we evict the checkpoints with the lowest average SW scores and insert newly updated model weights. Our method was rigorously tested against prior test-time domain adaptation strategies on three datasets and under eight types of corruption, demonstrating superior adaptability to changing scenes and conditions. Remarkably, our approach achieved a 67.3% increase in performance in a complex "cross-corruption" scenario, which combines cross-dataset inconsistencies with real-world scene corruptions and provides a more realistic testbed of adaptation capabilities. The code is available at //github.com/zhuoxiao-chen/MOS.
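A minimal sketch of the supermodel-assembly step described above, assuming the synergy weights `sw` have already been computed from bounding-box similarity and feature independence (names and normalization are illustrative):

```python
import torch

def assemble_supermodel(state_dicts, sw):
    """Weighted-average a bank of checkpoints into one 'supermodel'.

    state_dicts: list of {param name -> tensor} from historical checkpoints.
    sw: one synergy weight per checkpoint (in MOS these come from
        bounding-box similarity and feature independence; here they
        are simply normalized to sum to 1).
    """
    w = torch.tensor(sw, dtype=torch.float32)
    w = w / w.sum()
    merged = {}
    for name in state_dicts[0]:
        stacked = torch.stack([sd[name].float() for sd in state_dicts])
        shape = (-1,) + (1,) * (stacked.dim() - 1)   # broadcast weights
        merged[name] = (w.view(shape) * stacked).sum(dim=0)
    return merged

# detector.load_state_dict(assemble_supermodel(bank, sw)); then infer on the batch
```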
Analyzing the similarity of internal representations within and across different models has been an important technique for understanding the behavior of deep neural networks. Most existing methods for analyzing the similarity between high-dimensional representations, such as those based on Canonical Correlation Analysis (CCA) and the widely used Centered Kernel Alignment (CKA), rely on statistical properties of the representations over a set of data points. In this paper, we focus on transformer models and study the similarity of representations between the hidden layers of individual transformers. In this context, we show that a simple sample-wise cosine similarity metric captures the similarity and agrees closely with the more complex CKA. Our experimental results on common transformers reveal that representations across layers are positively correlated, although the similarity decreases as layers grow farther apart. We then propose an aligned training approach to enhance the similarity between internal representations; the resulting models enjoy the following properties: (1) the last-layer classifier can be applied directly after any hidden layer, yielding intermediate-layer accuracies much higher than those under standard training; (2) the layer-wise accuracies increase monotonically, revealing the minimal depth needed for the given task; (3) when used as multi-exit models, they achieve performance on par with standard multi-exit architectures, which require additional classifiers designed for early exiting at shallow layers. To our knowledge, our work is the first to show that a single common classifier suffices for multi-exit models. We conduct experiments on both vision and NLP tasks to demonstrate the performance of the proposed aligned training.
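A minimal sketch of the sample-wise cosine similarity between hidden layers, assuming a HuggingFace-style transformer that exposes `output_hidden_states`; mean-pooling over the sequence is one reasonable choice, not necessarily the paper's:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def layerwise_cosine(model, inputs):
    """Mean sample-wise cosine similarity between all pairs of hidden layers.

    Assumes the model returns hidden states of shape (batch, seq_len, dim)
    per layer; representations are mean-pooled over the sequence.
    """
    out = model(**inputs, output_hidden_states=True)
    reps = [h.mean(dim=1) for h in out.hidden_states]   # (batch, dim) per layer
    L = len(reps)
    sim = torch.zeros(L, L)
    for i in range(L):
        for j in range(L):
            sim[i, j] = F.cosine_similarity(reps[i], reps[j], dim=-1).mean()
    return sim  # sim[i, j]: average cosine similarity between layers i and j
```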
We propose a new method for combining in situ buoy measurements with Earth system models (ESMs) to improve the accuracy of temperature predictions in the ocean. The technique utilizes the dynamics and modes identified in ESMs alongside buoy measurements to improve accuracy while preserving features such as seasonality. We use this technique, which we call Dynamic Basis Function Interpolation, to correct errors in localized temperature predictions made by the Model for Prediction Across Scales Ocean component (MPAS-O) with the Global Drifter Program's in situ ocean buoy dataset.
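An illustrative sketch of the basic idea, projecting sparse buoy observations onto model-derived spatial modes; the paper's Dynamic Basis Function Interpolation additionally incorporates the ESM dynamics, which this static version omits:

```python
import numpy as np

def fit_to_modes(esm_fields, buoy_idx, buoy_vals, k=10):
    """Correct an ESM temperature field using sparse buoy observations.

    esm_fields: (n_times, n_grid) model output; its leading SVD modes
                serve as spatial basis functions.
    buoy_idx:   indices of grid points with buoy measurements.
    buoy_vals:  observed temperatures at those points (single time).
    Returns a corrected full field. Illustrative only: the actual
    method also uses the model dynamics, not just spatial modes.
    """
    mean = esm_fields.mean(axis=0)
    _, _, vt = np.linalg.svd(esm_fields - mean, full_matrices=False)
    basis = vt[:k].T                            # (n_grid, k) spatial modes
    A = basis[buoy_idx]                         # basis sampled at buoy sites
    coef, *_ = np.linalg.lstsq(A, buoy_vals - mean[buoy_idx], rcond=None)
    return mean + basis @ coef                  # corrected full field
```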
We consider the method of mappings for performing shape optimization for unsteady fluid-structure interaction (FSI) problems. In this work, we focus on the numerical implementation. We formulate the optimization problem so that it accounts for several theoretical results, including regularity requirements on the transformations and a differential-geometric view of the manifold of shapes. Moreover, we discretize the problem such that exact discrete gradients can be computed, which allows for the use of general-purpose optimization solvers. We focus on an FSI benchmark problem to validate our numerical implementation. The method is used to optimize parts of the outer boundary and the interface. The numerical simulations build on FEniCS, dolfin-adjoint, and IPOPT. Moreover, as an additional theoretical result, we show that for a linear special case the adjoint attains the same structure as the forward problem but reverses the temporal flow of information.
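Not the authors' FSI setup, but a minimal dolfin-adjoint pattern on a toy Poisson control problem, illustrating how taping the forward solve yields exact discrete gradients that a general-purpose optimizer can consume (the shape-transformation control is replaced by a distributed source for brevity):

```python
from fenics import *
from fenics_adjoint import *  # tapes the forward solve; gradients via the adjoint

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)

f = Function(V)                      # control (a distributed source here,
u = Function(V)                      # standing in for a transformation field)
v = TestFunction(V)
F = inner(grad(u), grad(v)) * dx - f * v * dx
bc = DirichletBC(V, 0.0, "on_boundary")
solve(F == 0, u, bc)

J = assemble((u - 1.0) ** 2 * dx)        # objective recorded on the tape
rf = ReducedFunctional(J, Control(f))
f_opt = minimize(rf, method="L-BFGS-B")  # exact discrete gradients feed the solver
```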
The widely adopted Business Process Model and Notation (BPMN) is a cornerstone of industry standards for business process modeling. However, its ambiguous execution semantics often result in inconsistent interpretations, depending on the software used for implementation. In response, the Process Specification Language (PASS) provides formally defined semantics to overcome these interpretational challenges. Despite its clear advantages, PASS has not reached the same level of industry penetration as BPMN. This feasibility study proposes using PASS as an intermediary framework to translate and execute BPMN models. It describes the development of a prototype translator that converts specific BPMN elements into a format compatible with PASS. These models are then transformed into source code and executed in a bespoke workflow environment, marking a departure from traditional BPMN implementations. Our findings suggest that integrating PASS enhances compatibility across different modeling and execution tools and offers a more robust methodology for implementing business processes across organizations. This study lays the groundwork for more accurate and unified business process model executions, potentially transforming industry standards for process modeling and execution.
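A toy sketch of the translation idea, mapping a handful of BPMN element types onto subject-oriented PASS behavior states; the mapping and element names below are illustrative, not the prototype's actual rule set:

```python
# Illustrative mapping, not the prototype translator's actual rules.
BPMN_TO_PASS = {
    "bpmn:userTask":    "FunctionState",   # work performed by a subject
    "bpmn:sendTask":    "SendState",       # message sent to another subject
    "bpmn:receiveTask": "ReceiveState",    # message expected from a subject
}

def translate(elements):
    pass_states = []
    for el in elements:
        kind = BPMN_TO_PASS.get(el["type"])
        if kind is None:
            raise ValueError(f"unsupported BPMN element: {el['type']}")
        pass_states.append({"state": kind, "label": el["name"]})
    return pass_states

print(translate([{"type": "bpmn:sendTask", "name": "Submit order"}]))
```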
Modular, distributed, and multi-core architectures are currently considered a promising approach to scaling quantum computing systems. The integration of multiple Quantum Processing Units necessitates classical and quantum-coherent communication, introducing challenges related to noise and quantum decoherence in quantum state transfers between cores. Optimizing communication becomes imperative, and the compilation and mapping of quantum circuits onto physical qubits must minimize state transfers while adhering to architectural constraints. The compilation process, inherently an NP-hard problem, demands extensive search times to solve to optimality, even for small numbers of qubits. To address this challenge efficiently, we advocate the use of heuristic mappers that can rapidly generate solutions. In this work, we propose a novel approach employing Deep Reinforcement Learning (DRL) methods to learn these heuristics for a specific multi-core architecture. Our DRL agent incorporates a Transformer encoder and Graph Neural Networks. It encodes quantum circuits using self-attention mechanisms and produces outputs through an attention-based pointer mechanism that directly yields the probabilities of assigning logical qubits to physical cores. This enables efficient selection of cores for logical qubits. Experimental evaluations show that the proposed method outperforms baseline approaches in reducing inter-core communications and minimizing online time-to-solution. This research contributes to the advancement of scalable quantum computing systems by introducing a novel learning-based heuristic approach to efficient quantum circuit compilation and mapping.
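A minimal sketch of such an attention-based pointer head, producing a distribution over physical cores for every logical qubit; dimensions and module names are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class CorePointer(nn.Module):
    """Attention-based pointer: P(core | logical qubit).

    Queries come from (e.g., Transformer-encoded) qubit embeddings, keys
    from core embeddings; a softmax over cores gives, for every logical
    qubit, a distribution over physical cores. Dimensions are illustrative.
    """
    def __init__(self, d_model=128):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, qubit_emb, core_emb):
        # qubit_emb: (n_qubits, d_model), core_emb: (n_cores, d_model)
        scores = self.q_proj(qubit_emb) @ self.k_proj(core_emb).T * self.scale
        return scores.softmax(dim=-1)   # (n_qubits, n_cores) assignment probs

pointer = CorePointer()
probs = pointer(torch.randn(10, 128), torch.randn(4, 128))  # 10 qubits, 4 cores
```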
The ultimate goal of next generation multiple access (NGMA) is to support massive terminals and facilitate multiple functionalities over the limited radio resources of wireless networks in the most efficient manner possible. However, the random and uncontrollable wireless radio environment is a major obstacle to realizing this NGMA vision. Given their prominent ability to create a 360° smart radio environment, simultaneously transmitting and reflecting surfaces (STARS) are emerging as a key enabling technology within the family of reconfigurable intelligent surfaces for NGMA. This paper provides a comprehensive overview of recent research progress on STARS, focusing on fundamentals, performance analysis, and full-space beamforming design, as well as promising applications of STARS in NGMA. In particular, we first introduce the basics of STARS by elaborating on the foundational principles and operating protocols and discussing different STARS categories and prototypes. Moreover, we systematically survey the existing performance analysis and beamforming design for STARS-aided wireless communications in terms of diverse objectives and different mathematical approaches. Given the superiority of STARS, we further discuss advanced STARS applications as well as the attractive interplay between STARS and other emerging techniques to motivate future work toward realizing efficient NGMA.
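For reference, the energy-splitting signal model commonly used for STARS in the broader literature (notation generic, not necessarily this survey's): element $m$ splits an incident signal $s_m$ into a transmitted part $t_m$ and a reflected part $r_m$,

```latex
t_m = \sqrt{\beta_m^{t}}\, e^{j\theta_m^{t}} s_m,
\qquad
r_m = \sqrt{\beta_m^{r}}\, e^{j\theta_m^{r}} s_m,
\qquad
\beta_m^{t} + \beta_m^{r} = 1,
```

where the amplitude coefficients satisfy the energy-conservation constraint and the phase shifts $\theta_m^{t}, \theta_m^{r}$ are configurable (independently or coupled, depending on the hardware model), which is what enables full-space, 360° beamforming.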
Interest in bilevel optimization has grown in recent years, partly due to its applications in challenging machine-learning problems. Several exciting recent works have centered on developing efficient gradient-based algorithms that can solve bilevel optimization problems with provable guarantees. However, the existing literature mainly focuses on bilevel problems either without constraints or featuring only simple constraints that do not couple variables across the upper and lower levels, excluding a range of complex applications. Our paper studies this challenging but less explored scenario and develops a (fully) first-order algorithm, which we term BLOCC, to tackle BiLevel Optimization problems with Coupled Constraints. We establish rigorous convergence theory for the proposed algorithm and demonstrate its effectiveness on two well-known real-world applications: hyperparameter selection for support vector machines (SVMs) and infrastructure planning in transportation networks, using real data from the city of Seville.
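In generic notation (not necessarily the paper's), the problem class is

```latex
\min_{x \in \mathcal{X}} \; f\bigl(x, y^{*}(x)\bigr)
\quad \text{s.t.} \quad
y^{*}(x) \in \arg\min_{y} \bigl\{\, g(x, y) \;:\; h(x, y) \le 0 \,\bigr\},
```

where the lower-level constraint $h(x, y) \le 0$ depends on both the upper-level variable $x$ and the lower-level variable $y$; this coupling is precisely what most existing bilevel algorithms cannot handle.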
Molecular design and synthesis planning are two critical steps in the process of molecular discovery that we propose to formulate as a single shared task of conditional synthetic pathway generation. We report an amortized approach to generate synthetic pathways as a Markov decision process conditioned on a target molecular embedding. This approach allows us to conduct synthesis planning in a bottom-up manner and design synthesizable molecules by decoding from optimized conditional codes, demonstrating the potential to solve both problems of design and synthesis simultaneously. The approach leverages neural networks to probabilistically model the synthetic trees, one reaction step at a time, according to reactivity rules encoded in a discrete action space of reaction templates. We train these networks on hundreds of thousands of artificial pathways generated from a pool of purchasable compounds and a list of expert-curated templates. We validate our method with (a) the recovery of molecules using conditional generation, (b) the identification of synthesizable structural analogs, and (c) the optimization of molecular structures given oracle functions relevant to drug discovery.
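A schematic, runnable toy of the conditional decoding loop, with dummy stand-ins for the learned networks and the reaction-template engine (a real implementation would apply templates via, e.g., RDKit, with an action space along the lines of add/expand/end; all names here are illustrative, not the paper's interfaces):

```python
import random

def apply_template(template, *reactants):
    """Stand-in for a reaction-template engine (e.g., RDKit RunReactants)."""
    return f"{template}({','.join(map(str, reactants))})"

class ToyPolicy:
    """Dummy networks; in the real method these are learned models
    conditioned on the target molecular embedding z."""
    def act(self, z, mol):
        if mol is None:
            return "add"
        return "end" if random.random() < 0.3 else "expand"
    def pick_building_block(self, z):      return random.choice(["BB1", "BB2", "BB3"])
    def pick_template(self, z, mol):       return random.choice(["T1", "T2"])
    def pick_partner(self, z, mol, tmpl):  return random.choice(["BB1", "BB2", None])

def decode_pathway(z, policy, max_steps=10):
    """Roll out one synthetic pathway, one reaction step at a time."""
    tree, mol = [], None
    for _ in range(max_steps):
        action = policy.act(z, mol)
        if action == "end":
            break
        if action == "add":
            mol = policy.pick_building_block(z)     # purchasable reactant
        template = policy.pick_template(z, mol)     # reaction template
        partner = policy.pick_partner(z, mol, template)
        mol = apply_template(template, mol, partner) if partner else apply_template(template, mol)
        tree.append((template, mol))
    return tree

print(decode_pathway(z=None, policy=ToyPolicy()))
```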