In oncology, phase II studies are crucial to clinical development plans, as they identify potent agents with sufficient activity to continue development in subsequent phase III trials. Traditionally, phase II studies are single-arm studies with short-term treatment efficacy as the endpoint. However, drug safety is also an important consideration. In the context of such multiple-outcome designs, Bayesian monitoring strategies based on predictive probabilities have been developed to assess whether a clinical trial will show a conclusive result at the planned end of the study. In this paper, we propose a new, simple index vector for summarizing results that cannot be captured by existing strategies. Specifically, at each interim monitoring time point, we calculate the Bayesian predictive probability using our new index and use it to make a go/no-go decision. Finally, simulation studies are performed to evaluate the operating characteristics of the design. They demonstrate that the proposed method makes appropriate interim go/no-go decisions.
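To make the monitoring machinery concrete, below is a minimal sketch of an interim predictive-probability calculation for a single binary efficacy endpoint under a Beta-Binomial model; the prior, thresholds, and sample sizes are illustrative only, and the paper's multiple-outcome index is not reproduced here.

```python
# Minimal sketch of predictive-probability interim monitoring for one binary
# endpoint (Beta-Binomial model). Prior, thresholds, and sample sizes are
# illustrative, not the paper's settings; the proposed multiple-outcome index
# is not reproduced here.
from scipy.stats import beta, betabinom

def predictive_probability(x, n, n_max, a=1.0, b=1.0, p0=0.2, theta=0.95):
    """P(trial concludes efficacy at the final analysis | x responses in n patients)."""
    m = n_max - n                        # patients yet to be enrolled
    a_post, b_post = a + x, b + n - x    # posterior after the interim data
    pp = 0.0
    for y in range(m + 1):               # future responses y ~ Beta-Binomial
        w = betabinom.pmf(y, m, a_post, b_post)
        # final posterior P(p > p0) given x + y responses in n_max patients
        final_tail = 1.0 - beta.cdf(p0, a + x + y, b + n_max - x - y)
        pp += w * (final_tail >= theta)
    return pp

# go/no-go rule: continue only if the predictive probability is high enough
pp = predictive_probability(x=7, n=20, n_max=40)
print(f"predictive probability = {pp:.3f} -> {'go' if pp >= 0.1 else 'no-go'}")
```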
Recently, the performance of neural image compression (NIC) has steadily improved thanks to this line of research, reaching or outperforming state-of-the-art conventional codecs. Despite significant progress, current NIC methods still rely on ConvNet-based entropy coding, which is limited in modeling long-range dependencies due to its local connectivity and an increasing number of architectural biases and priors, resulting in complex, underperforming models with high decoding latency. Motivated by the efficiency of the Transformer-based transform coding framework SwinT-ChARM, we propose to enhance the latter, first, with a more straightforward yet effective Transformer-based channel-wise auto-regressive prior model, resulting in an absolute image compression transformer (ICT). Through the proposed ICT, we can capture both global and local contexts from the latent representations and better parameterize the distribution of the quantized latents. Further, we leverage a learnable scaling module with a sandwich ConvNeXt-based pre-/post-processor to accurately extract more compact latent codes while reconstructing higher-quality images. Extensive experimental results on benchmark datasets show that the proposed framework significantly improves the trade-off between coding efficiency and decoder complexity over the versatile video coding (VVC) reference encoder (VTM-18.0) and the neural codec SwinT-ChARM. Moreover, we provide model scaling studies to verify the computational efficiency of our approach, and we conduct several objective and subjective analyses to highlight the performance gap between the adaptive image compression transformer (AICT) and the neural codec SwinT-ChARM.
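For illustration, a toy channel-wise auto-regressive prior is sketched below in PyTorch: latent channels are split into slices, and each slice's Gaussian mean/scale is predicted from hyperprior features plus the already-decoded slices. The proposed ICT replaces the slice networks with Transformer blocks; the plain convolutions and channel counts used here are placeholders for brevity.

```python
# Toy channel-wise auto-regressive (ChARM-style) prior sketch. Plain convs and
# channel counts are placeholders; the paper's ICT uses Transformer-based slice
# networks instead.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelwiseARPrior(nn.Module):
    def __init__(self, latent_ch=192, hyper_ch=128, num_slices=4):
        super().__init__()
        self.num_slices = num_slices
        self.slice_ch = latent_ch // num_slices
        self.nets = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(hyper_ch + i * self.slice_ch, 128, 3, padding=1),
                nn.GELU(),
                nn.Conv2d(128, 2 * self.slice_ch, 3, padding=1),
            )
            for i in range(num_slices)
        ])

    def forward(self, y, hyper):
        slices = y.chunk(self.num_slices, dim=1)
        decoded, means, scales = [], [], []
        for i, net in enumerate(self.nets):
            ctx = torch.cat([hyper] + decoded, dim=1)   # hyperprior + decoded slices
            mean, scale = net(ctx).chunk(2, dim=1)
            means.append(mean)
            scales.append(F.softplus(scale))
            decoded.append(slices[i])                   # teacher forcing during training
        return torch.cat(means, 1), torch.cat(scales, 1)

y = torch.randn(1, 192, 16, 16)
hyper = torch.randn(1, 128, 16, 16)
means, scales = ChannelwiseARPrior()(y, hyper)
print(means.shape, scales.shape)
```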
Optimization tasks are crucial in statistical machine learning. Recently, there has been great interest in leveraging tools from dynamical systems to derive accelerated and robust optimization methods via suitable discretizations of continuous-time systems. However, these ideas have mostly been limited to Euclidean spaces and unconstrained settings, or to Riemannian gradient flows. In this work, we propose a dissipative extension of Dirac's theory of constrained Hamiltonian systems as a general framework for solving optimization problems over smooth manifolds, including problems with nonlinear constraints. We develop geometric/symplectic numerical integrators on manifolds that are "rate-matching," i.e., they preserve the continuous-time rates of convergence. In particular, we introduce a dissipative RATTLE integrator able to achieve the optimal convergence rate locally. Our class of (accelerated) algorithms is not only simple and efficient but also applicable to a broad range of contexts.
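As background, one standard way to write a damped Hamiltonian flow with holonomic constraints (Rayleigh-type dissipation plus Lagrange multipliers) is
\[
\dot q = \frac{\partial H}{\partial p}, \qquad
\dot p = -\frac{\partial H}{\partial q} - \gamma\, p - G(q)^{\top}\lambda, \qquad
g(q) = 0, \quad G(q) = \frac{\partial g}{\partial q},
\]
where $\lambda$ enforces the constraint $g(q)=0$ and $\gamma>0$ is a damping coefficient; the Dirac-theoretic treatment developed in the paper is more general than this sketch and is not reproduced here.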
This work is motivated by goal-oriented sensitivity analysis of the inputs/outputs of complex simulators. More precisely, we are interested in ranking the uncertain input variables that most impact a feasible design domain. Most sensitivity analysis methods deal with scalar outputs. In this paper, we propose a way to perform sensitivity analysis when dealing with set-valued outputs. Our new methodology is driven by sensitivity analysis on excursion sets but can also be applied to set-valued simulators, as in the viability field, or when dealing with maps such as pollutant concentration maps or flooding zone maps. We propose a method based on the Hilbert-Schmidt Independence Criterion (HSIC) with a kernel tailored to sets as outputs. A first contribution is the proof that this kernel is characteristic (i.e., the embedding into the associated Reproducing Kernel Hilbert Space is injective), a property required for interpreting HSIC in a sensitivity analysis context. We then propose to compute the HSIC-ANOVA indices, which allow a decomposition of the input contributions. Using these indices, we can identify which inputs should be neglected (screening) and rank the others by influence (ranking). The estimation of these indices is also adapted to set-valued outputs. Finally, we test the proposed method on two excursion-set test cases.
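A minimal sketch of the HSIC computation is given below, with a Gaussian kernel on a scalar input and an illustrative intersection-over-union kernel between sets represented as binary masks on a common grid; this toy set kernel is not necessarily the characteristic kernel analyzed in the paper.

```python
# Minimal HSIC sketch: Gaussian kernel on a scalar input and an illustrative
# IoU-based similarity between set-valued outputs represented as binary masks.
import numpy as np

def gaussian_gram(x, sigma=1.0):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def set_gram(masks):
    """Gram matrix of an IoU similarity between binary masks of shape (n, grid)."""
    inter = masks @ masks.T
    sizes = masks.sum(axis=1)
    union = sizes[:, None] + sizes[None, :] - inter
    return np.where(union > 0, inter / np.maximum(union, 1), 1.0)

def hsic(K, L):
    """Biased HSIC estimator trace(K H L H) / n^2, H the centering matrix."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=200)                              # uncertain input
grid = np.linspace(-3, 3, 50)
masks = (grid[None, :] > x[:, None]).astype(float)    # toy excursion sets
print(hsic(gaussian_gram(x), set_gram(masks)))
```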
Breaking safety constraints in control systems can lead to potential risks, resulting in unexpected costs or catastrophic damage. Nevertheless, uncertainty is ubiquitous, even among similar tasks. In this paper, we develop a novel adaptive safe control framework that integrates meta-learning, Bayesian models, and the control barrier function (CBF) method. Specifically, with the help of the CBF method, we learn the inherent and external uncertainties with a unified adaptive Bayesian linear regression (ABLR) model, which consists of a feedforward neural network (NN) and a Bayesian output layer. Meta-learning techniques are leveraged to pre-train the NN weights and the priors of the ABLR model using data collected from similar historical tasks. For a new control task, we refine the meta-learned models using a few samples and introduce pessimistic confidence bounds into the CBF constraints to ensure safe control. Moreover, we provide theoretical criteria to guarantee probabilistic safety during the control process. To validate our approach, we conduct comparative experiments in various obstacle avoidance scenarios. The results demonstrate that our algorithm significantly improves on the Bayesian model-based CBF method and is capable of efficient safe exploration even with multiple uncertain constraints.
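The safe-control step can be sketched as follows for a single-integrator system whose learned disturbance is summarized by a posterior mean and standard deviation (stand-ins for the ABLR outputs); the pessimistic bound enters the CBF condition as a confidence-lower-bound term, and for a single constraint the minimally invasive control has a closed form rather than requiring a QP solver.

```python
# Sketch of one CBF-constrained control step for x_dot = u + d(x), where the
# unknown disturbance d is summarized by posterior mean `mu` and standard
# deviation `sigma` (placeholders for an ABLR-like model). Safety set:
# h(x) = ||x - x_obs||^2 - r^2.
import numpy as np

def safe_control(x, u_nom, mu, sigma, x_obs, r, alpha=1.0, kappa=2.0):
    h = np.dot(x - x_obs, x - x_obs) - r ** 2
    grad_h = 2.0 * (x - x_obs)
    # pessimistic (confidence-lower-bound) CBF condition:
    #   grad_h . u >= -alpha*h - grad_h . mu + kappa*||grad_h||*sigma
    rhs = -alpha * h - grad_h @ mu + kappa * np.linalg.norm(grad_h) * sigma
    slack = rhs - grad_h @ u_nom
    if slack <= 0:                      # nominal control already satisfies the constraint
        return u_nom
    # minimal correction: project u_nom onto the safe half-space
    return u_nom + slack * grad_h / (grad_h @ grad_h)

x = np.array([1.0, 0.5])
u = safe_control(x, u_nom=np.array([-1.0, 0.0]),
                 mu=np.zeros(2), sigma=0.1,
                 x_obs=np.array([0.0, 0.5]), r=0.6)
print(u)
```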
Multi-armed bandit (MAB) algorithms have been increasingly used to complement or integrate with A/B tests and randomized clinical trials in e-commerce, healthcare, and policymaking. Recent developments incorporate possibly delayed feedback. While the existing MAB literature often focuses on maximizing the expected cumulative reward (or, equivalently, on regret minimization), few efforts have been devoted to establishing valid statistical inference approaches that quantify the uncertainty of learned policies. We attempt to fill this gap by providing a unified statistical inference framework for policy evaluation in which the target policy is allowed to differ from the data-collecting policy, and in which the delay may be associated with the treatment arms. We present an adaptively weighted estimator that, on the one hand, incorporates the arm-dependent delay mechanism to achieve consistency and, on the other hand, mitigates the variance inflation across stages due to vanishing sampling probabilities. In particular, our estimator does not critically depend on the ability to estimate the unknown delay mechanism. Under appropriate conditions, we prove that our estimator converges to a normal distribution as the number of time points goes to infinity, which provides guarantees for large-sample statistical inference. We illustrate the finite-sample performance of our approach through Monte Carlo experiments.
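As an illustration of the kind of estimator involved (not the paper's exact construction), the sketch below computes a self-normalized importance-weighted value estimate in which each stage is additionally weighted by the square root of its sampling probability to damp variance inflation; restricting to already-observed rewards, as done here, generally requires the delay adjustment developed in the paper.

```python
# Illustrative stage-weighted IPW value estimate for a known target policy.
# The square-root stabilizing weights and the crude "observed only" handling
# of delays are placeholders, not the paper's adaptive weights or delay model.
import numpy as np

def weighted_ipw_value(rewards, observed, prob_pulled, target_pulled, stabilize=True):
    """prob_pulled[t]: sampling prob. of the arm pulled at stage t;
    target_pulled[t]: target-policy prob. of that arm;
    observed[t]: 1 if the (possibly delayed) reward has arrived."""
    h = np.sqrt(prob_pulled) if stabilize else np.ones_like(prob_pulled)
    w = h * observed * target_pulled / prob_pulled
    return np.sum(w * rewards) / np.sum(w)

rng = np.random.default_rng(1)
T = 2000
p1 = np.clip(1.0 / np.sqrt(np.arange(1, T + 1)), 0.05, 1.0)   # vanishing probability of arm 1
arms = rng.binomial(1, p1)
prob_pulled = np.where(arms == 1, p1, 1.0 - p1)
rewards = rng.normal(loc=arms, scale=1.0)                      # arm 1 has mean reward 1
observed = rng.binomial(1, 0.8, size=T)                        # some rewards still delayed
target_pulled = (arms == 1).astype(float)                      # target policy: always pull arm 1
print(weighted_ipw_value(rewards, observed, prob_pulled, target_pulled))
```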
The rise of mobile AI accelerators allows latency-sensitive applications to execute lightweight Deep Neural Networks (DNNs) on the client side. However, critical applications require powerful models that edge devices cannot host, so requests must be offloaded, with the high-dimensional data competing for limited bandwidth. This work proposes shifting away from executing the shallow layers of partitioned DNNs. Instead, it advocates concentrating the local resources on variational compression optimized for machine interpretability. We introduce a novel framework for resource-conscious compression models and extensively evaluate our method in an environment reflecting the asymmetric resource distribution between edge devices and servers. Our method achieves a 60% lower bitrate than a state-of-the-art split computing (SC) method without decreasing accuracy and is up to 16x faster than offloading with existing codec standards.
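A minimal sketch of the client-side component is given below: a lightweight encoder with a factorized Gaussian prior whose noisy-latent negative log-density serves as a crude differentiable rate proxy; the architecture and channel counts are illustrative, and the full objective would pair this rate term with the server model's task loss rather than a pixel-fidelity distortion.

```python
# Toy resource-conscious client encoder with a factorized Gaussian prior used
# as a differentiable rate proxy. Architecture and channel counts are
# placeholders, not the paper's model.
import math
import torch
import torch.nn as nn

class BottleneckEncoder(nn.Module):
    def __init__(self, latent_ch=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 48, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(48, latent_ch, 5, stride=2, padding=2),
        )
        # learned per-channel log-scales of the factorized prior
        self.log_scale = nn.Parameter(torch.zeros(latent_ch))

    def rate_bits(self, z):
        # additive-uniform-noise relaxation of quantization; the Gaussian
        # negative log-density of the noisy latent is a crude rate proxy
        z_noisy = z + torch.empty_like(z).uniform_(-0.5, 0.5)
        scale = self.log_scale.exp().view(1, -1, 1, 1)
        nll = (0.5 * (z_noisy / scale) ** 2
               + self.log_scale.view(1, -1, 1, 1)
               + 0.5 * math.log(2 * math.pi))
        return nll.sum() / math.log(2.0)               # nats -> bits

    def forward(self, x):
        z = self.net(x)
        return z, self.rate_bits(z)

enc = BottleneckEncoder()
z, bits = enc(torch.randn(2, 3, 224, 224))
print(z.shape, float(bits))
# full objective (sketch): server-side task loss on features decoded from z
# plus a rate weight times bits, i.e. compression tuned for machine interpretability
```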
In clinical trials, it is often desirable to test for superiority and non-inferiority simultaneously. The hypotheses are formulated to test whether a new treatment is superior to a control on at least one endpoint and non-inferior on all endpoints. The simulation studies of \citet{logan2008superiority} show that most current testing methods are very conservative, especially when the non-inferiority margins are close to zero. In this paper, we propose a method for the superiority and non-inferiority problem based on lower one-sided confidence intervals. Theoretically, we prove that our testing method controls the type I error at a pre-specified level $\alpha$, such as 0.05, 0.025, or 0.01, which is also demonstrated in our simulation study. Moreover, the simulation study shows that our method has higher power than several alternative methods when the non-inferiority margins are close to zero or when both endpoints show a positive effect; hence, the proposed method avoids the deficiency of being conservative while retaining high power. A real example comparing the efficacy and toxicity of an inhaled drug for asthma with placebo is used to illustrate the proposed method.
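The structure of the decision rule can be sketched as follows for two endpoints with normal test statistics: non-inferiority must hold on all endpoints and superiority on at least one, both judged from lower one-sided confidence bounds. The exact construction and critical values of the proposed method are not reproduced; the numbers below are illustrative only.

```python
# Sketch of a union-intersection decision from lower one-sided confidence
# bounds on two endpoints (normal case, known-variance z-intervals for
# simplicity); not the paper's exact procedure.
import numpy as np
from scipy.stats import norm

def lower_bounds(diff_means, std_errs, alpha=0.025):
    """Lower limits of one-sided (1 - alpha) CIs for treatment - control."""
    return np.asarray(diff_means) - norm.ppf(1 - alpha) * np.asarray(std_errs)

def decide(diff_means, std_errs, margins, alpha=0.025):
    lb = lower_bounds(diff_means, std_errs, alpha)
    non_inferior_all = np.all(lb > -np.asarray(margins))   # non-inferior on every endpoint
    superior_any = np.any(lb > 0)                          # superior on at least one endpoint
    return bool(non_inferior_all and superior_any), lb

# efficacy and toxicity-related endpoints with illustrative numbers only
ok, lb = decide(diff_means=[0.30, 0.02], std_errs=[0.10, 0.05], margins=[0.2, 0.1])
print(ok, lb)
```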
The Gaussian graphical model (GGM) incorporates an undirected graph to represent the conditional dependence between variables, with the precision matrix encoding the partial correlation between pairs of variables given the others. To achieve flexible and accurate estimation and inference for GGMs, we propose a novel method, FLAG, which utilizes a random effects model for pairwise conditional regression to estimate the precision matrix and applies statistical tests to recover the graph. Compared with existing methods, FLAG has several unique advantages: (i) it provides accurate estimation without sparsity assumptions on the precision matrix, (ii) it allows for element-wise inference of the precision matrix, (iii) it achieves computational efficiency through an efficient PX-EM algorithm and an MM algorithm accelerated with low-rank updates, and (iv) it enables joint estimation of multiple graphs using FLAG-Meta or FLAG-CA. The proposed methods are evaluated in various simulation settings and real data applications, including gene expression in the human brain, term associations in university websites, and stock prices in the U.S. financial market. The results demonstrate that FLAG and its extensions provide accurate precision estimation and graph recovery.
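For background, the sketch below shows the classical node-wise regression identity that conditional-regression approaches to GGMs build on: the j-th row of the precision matrix is recovered from the coefficients and residual variance of regressing $X_j$ on the remaining variables. FLAG's random-effects formulation, PX-EM/MM algorithms, and element-wise tests are not shown.

```python
# Node-wise regression identity for the precision matrix:
#   Omega_jj = 1 / Var(residual of X_j | X_-j),  Omega_jk = -beta_jk * Omega_jj.
import numpy as np

def nodewise_precision(X):
    n, p = X.shape
    X = X - X.mean(axis=0)
    Omega = np.zeros((p, p))
    for j in range(p):
        others = [k for k in range(p) if k != j]
        beta, *_ = np.linalg.lstsq(X[:, others], X[:, j], rcond=None)
        resid = X[:, j] - X[:, others] @ beta
        omega_jj = 1.0 / (resid @ resid / n)
        Omega[j, j] = omega_jj
        Omega[j, others] = -beta * omega_jj
    return (Omega + Omega.T) / 2    # symmetrize the two node-wise estimates

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
true_Omega = A @ A.T + 5 * np.eye(5)
X = rng.multivariate_normal(np.zeros(5), np.linalg.inv(true_Omega), size=2000)
print(np.round(nodewise_precision(X), 2))
print(np.round(true_Omega, 2))
```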
Federated learning is an approach to collaboratively training machine learning models across multiple parties that prohibit data sharing. One of the challenges in federated learning is non-IID data across clients, since a single model cannot fit the data distribution of every client. Meta-learning, as in Per-FedAvg, has been introduced to cope with this challenge. Meta-learning learns shared initial parameters for all clients; each client then applies gradient descent to quickly adapt the initialization to its local data distribution and realize model personalization. However, due to the non-convex loss function and the randomness of sampled updates, meta-learning approaches have unstable adaptation goals for the same client. This fluctuation across adaptation directions hinders convergence in meta-learning. To overcome this challenge, we use the historical locally adapted model to restrict the direction of the inner loop and propose an elastic-constrained method. As a result, the inner loop of the current round retains the historical goals while adapting to better solutions. Experiments show that our method accelerates meta-learning convergence and improves personalization without additional computation or communication. Our method achieves state-of-the-art results on all metrics across three public datasets.
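One natural reading of the elastic constraint is a proximal penalty pulling the current inner-loop iterate toward the client's locally adapted model from the previous round, as sketched below; the exact constraint form used in the paper is not reproduced, and the hyperparameters are placeholders.

```python
# Sketch of an inner-loop adaptation step with an elastic (proximal) penalty
# toward the previous round's locally adapted parameters. The penalty form and
# hyperparameters are illustrative, not the paper's exact formulation.
import torch

def adapt_locally(model, meta_state, prev_adapted_state, loader, loss_fn,
                  steps=5, lr=1e-2, lam=0.1):
    """Assumes `loader` yields at least `steps` (x, y) batches."""
    model.load_state_dict(meta_state)                 # start from the shared meta-initialization
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    data_iter = iter(loader)
    for _ in range(steps):
        x, y = next(data_iter)
        loss = loss_fn(model(x), y)
        if prev_adapted_state is not None:            # pull toward the historical adaptation goal
            loss = loss + lam * sum(
                ((p - prev_adapted_state[name]) ** 2).sum()
                for name, p in model.named_parameters())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return {k: v.detach().clone() for k, v in model.state_dict().items()}
```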
Species distribution modeling (SDM) plays a crucial role in investigating habitat suitability and addressing various ecological issues. While likelihood analysis is commonly used to draw ecological conclusions, its statistical performance has been observed to be non-robust to slight deviations caused by model misspecification in SDM. We propose a new robust estimation method based on a novel divergence for the Poisson point process model. The proposed method weights the log-likelihood equation to mitigate the impact of heterogeneous observations in presence-only data, which can result from model misspecification. We demonstrate that the proposed method improves on the predictive performance of maximum likelihood estimation in our simulation studies and in an analysis of vascular plant data in Japan.
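As an illustration of the weighting idea (not the paper's novel divergence), the sketch below fits a log-linear intensity on gridded counts by minimizing a density-power-divergence objective, in which each observation's contribution to the score is effectively weighted by its model density raised to a power, so that cells the model deems highly unlikely are downweighted.

```python
# Illustrative robust fit of a log-linear Poisson intensity via a density
# power divergence (DPD) objective; the paper's divergence for the Poisson
# point process is different and not reproduced here.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def dpd_loss(theta, X, y, gamma=0.3, y_max=200):
    mu = np.exp(X @ theta)                                  # log-linear intensity per cell
    ys = np.arange(y_max + 1)
    pmf_grid = poisson.pmf(ys[None, :], mu[:, None])
    bulk = np.sum(pmf_grid ** (1.0 + gamma), axis=1)        # model term of the DPD
    fit = (1.0 + 1.0 / gamma) * poisson.pmf(y, mu) ** gamma # data term, downweights outliers
    return np.sum(bulk - fit)

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])       # intercept + one covariate
y = rng.poisson(np.exp(X @ np.array([0.5, 1.0])))
y[:10] = 50                                                 # a few contaminated cells
theta_hat = minimize(dpd_loss, np.zeros(2), args=(X, y)).x
print(theta_hat)
```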