The effects of treatments on continuous outcomes can be estimated on the mean difference scale (i.e., in measurement units) or on the relative effect scale (i.e., in percentages), both of which provide only a single effect size estimate for the study population. Quantile treatment effect (QTE) analysis is more informative, as it describes the effect of the treatment across the whole population. A drawback of QTE analysis has been that results are usually presented over the quantiles of the control group distribution, whereas presentation in measurement units is often more informative. We developed a method to estimate the back-transformed QTE (BQTE), which presents the QTE as a function of the outcome value in the control group, using piecewise linear interpolation and bootstrapping. We further applied the BQTE function to provide informative bounds on the treatment effect at the upper and lower tails of the population. To illustrate the approach, we used three data sets of treatments for the common cold: zinc gluconate lozenges, zinc acetate lozenges, and nasal carrageenan. In all data sets, the relative scale provided a better summary of the BQTE distribution than the mean difference. The BQTE approach is particularly useful for describing the variability of effects on the duration of illness, length of hospital stay, and other continuous outcomes that can vary greatly in the population. Using this method, it is possible to present the QTE in measurement units, which provides an informative addition to the standard presentation by quantiles.
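A minimal sketch of the BQTE computation described above, assuming two one-dimensional samples; the function names and the percentile-bootstrap confidence band are illustrative choices, not the authors' implementation:

```python
# Illustrative BQTE sketch: QTE expressed as a function of the
# control-group outcome value, with a percentile bootstrap band.
import numpy as np

def bqte(control, treatment, probs=np.linspace(0.05, 0.95, 19)):
    """Return (control outcome values, QTE) at matched quantiles."""
    q_ctrl = np.quantile(control, probs)
    q_trt = np.quantile(treatment, probs)
    # Back-transformation: pair each quantile effect with the
    # control-group outcome value at the same quantile.
    return q_ctrl, q_trt - q_ctrl

def bootstrap_bqte(control, treatment, grid, n_boot=2000, seed=0):
    """Percentile bootstrap CI for BQTE on a grid of control outcome values."""
    rng = np.random.default_rng(seed)
    curves = []
    for _ in range(n_boot):
        c = rng.choice(control, size=len(control), replace=True)
        t = rng.choice(treatment, size=len(treatment), replace=True)
        x, y = bqte(c, t)
        # Piecewise linear interpolation onto the common outcome grid.
        curves.append(np.interp(grid, x, y))
    return np.percentile(np.asarray(curves), [2.5, 97.5], axis=0)
```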
Diverse studies have analyzed the quality of automatically generated test cases using test smells as the main quality attribute. However, recent work reported that generated tests may suffer from a number of quality issues not necessarily considered in previous studies. Little is known about these issues and their frequency within generated tests. In this paper, we report on a manual analysis of an external dataset consisting of 2,340 automatically generated tests. This analysis aimed to detect new quality issues not covered by previously recognized test smells. We used thematic analysis to group and categorize the new quality issues found. As a result, we propose a taxonomy of 13 new quality issues grouped into four categories. We also report on the frequency of these new quality issues within the dataset and present eight recommendations that test generators may adopt to improve the quality and usefulness of automatically generated tests.
In the study of the brain, it has been hypothesized that sparse coding underlies the representation of external stimuli, and this hypothesis has recently been confirmed experimentally for visual stimuli. However, unlike for specific functional regions, it has not been sufficiently clarified whether sparse coding holds for information processing in the whole brain. In this study, we investigate the validity of sparse coding in the whole human brain by applying various matrix factorization (MF) methods to functional magnetic resonance imaging data of neural activity across the whole human brain. The results support the sparse coding hypothesis for information representation in the whole human brain: features extracted by sparse MF methods (SparsePCA or MOD under a high-sparsity setting) or by an approximately sparse MF method (FastICA) classify external visual stimuli more accurately than those extracted by non-sparse MF methods or by sparse MF methods under a low-sparsity setting.
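An illustrative sketch (not the authors' pipeline) of the comparison described above, using scikit-learn factorizations as feature extractors for stimulus classification; the data here are random stand-ins for fMRI volumes and labels, and the hyperparameters are arbitrary:

```python
# Compare sparse vs. non-sparse matrix factorizations as feature
# extractors for stimulus classification (illustrative stand-in data).
import numpy as np
from sklearn.decomposition import PCA, SparsePCA, FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 300))   # stand-in for fMRI volumes (volumes x voxels)
y = rng.integers(0, 4, size=200)      # stand-in visual stimulus labels

for name, mf in [("PCA", PCA(n_components=20)),
                 ("SparsePCA", SparsePCA(n_components=20, alpha=2.0)),  # illustrative "high sparsity"
                 ("FastICA", FastICA(n_components=20, max_iter=1000))]:
    clf = make_pipeline(mf, LogisticRegression(max_iter=1000))
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```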
Due to their intrinsic capability for parallel signal processing, optical neural networks (ONNs) have recently attracted extensive interest as a potential alternative to electronic artificial neural networks (ANNs), offering reduced power consumption and low latency. The parallelism of optical computing has been widely confirmed by applying wavelength division multiplexing (WDM) technology to the linear-transformation part of neural networks. However, inter-channel crosstalk has prevented WDM technologies from being deployed in the nonlinear activation of ONNs. Here, we propose a universal WDM structure called multiplexed neuron sets (MNS), which applies WDM technologies to optical neurons and enables ONNs to be further compressed. A corresponding back-propagation (BP) training algorithm is proposed to alleviate, or even cancel, the influence of inter-channel crosstalk on MNS-based WDM-ONNs. For simplicity, semiconductor optical amplifiers (SOAs) are employed as an example MNS to construct a WDM-ONN trained with the new algorithm. The results show that the combination of MNS and the corresponding BP training algorithm significantly downsizes the system and improves energy efficiency by tens of times while achieving performance similar to that of traditional ONNs.
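As a rough illustration of how training can account for crosstalk, the PyTorch sketch below is entirely hypothetical: the mixing matrix `C`, the layer shapes, and the ReLU stand-in for the SOA nonlinearity are our assumptions, not the paper's model. The idea is that placing a fixed inter-channel mixing matrix inside the forward pass lets backpropagation compensate for the leakage automatically:

```python
# Conceptual sketch: model inter-channel crosstalk as a fixed mixing
# matrix C applied before the nonlinearity, so gradients flow through it.
import torch
import torch.nn as nn

n_ch = 4  # WDM channels sharing one neuron set (illustrative)
# Crosstalk matrix: dominant diagonal plus small leakage between channels.
C = torch.full((n_ch, n_ch), 0.02)
C.fill_diagonal_(0.9)

class WDMLayer(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.linear = nn.Linear(n_in, n_out)

    def forward(self, x):          # x: (batch, n_ch, n_in)
        z = self.linear(x)         # per-channel linear transform
        z = torch.einsum("ij,bjk->bik", C, z)  # inter-channel crosstalk
        return torch.relu(z)       # stand-in for the SOA nonlinearity

layer = WDMLayer(8, 8)
out = layer(torch.randn(2, n_ch, 8))  # training would backprop through C
```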
Image segmentation, real-value prediction, and cross-modal translation are critical challenges in medical imaging. In this study, we propose a versatile multi-task neural network framework, based on an enhanced Transformer U-Net architecture, capable of simultaneously, selectively, and adaptively addressing these medical image tasks. Validation is performed on a public repository of human brain MR and CT images. We decompose the traditional problem of synthesizing CT images into distinct subtasks: skull segmentation, Hounsfield unit (HU) value prediction, and sequential image reconstruction. To enhance the framework's versatility in handling multi-modal data, we extend the model with multiple input image channels. We compare CT images synthesized from T1-weighted and from T2-Flair images, evaluating the model's capability to integrate multi-modal information from both morphological and pixel-value perspectives.
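A minimal sketch of the multi-task arrangement, assuming a shared backbone feeding three task-specific heads; the backbone below is a two-layer convolutional stand-in for the enhanced Transformer U-Net, and all names and shapes are illustrative:

```python
# Shared backbone with three task heads: skull segmentation,
# per-pixel HU regression, and image reconstruction (illustrative).
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_ch=2):   # e.g. T1-weighted + T2-Flair channels
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, 1, 1)      # skull mask (logits)
        self.hu_head = nn.Conv2d(32, 1, 1)       # HU value prediction
        self.rec_head = nn.Conv2d(32, in_ch, 1)  # input reconstruction

    def forward(self, x):
        f = self.backbone(x)
        return self.seg_head(f), self.hu_head(f), self.rec_head(f)

net = MultiTaskNet()
seg, hu, rec = net(torch.randn(1, 2, 64, 64))
```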
In this work, we use the integral definition of the fractional Laplace operator and study a sparse optimal control problem involving a fractional, semilinear, elliptic partial differential equation as the state equation; control constraints are also considered. We establish the existence of optimal solutions as well as first- and second-order optimality conditions. We also analyze regularity properties of the optimal variables. We propose and analyze two finite element discretization strategies: a fully discrete scheme, in which the control variable is discretized with piecewise constant functions, and a semidiscrete scheme, in which the control variable is not discretized. For both discretization schemes, we analyze convergence properties and a priori error bounds.
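For concreteness, a prototypical problem of this kind can be written as follows; this is an illustrative sketch in generic notation ($y_d$, $\alpha$, $\beta$, $a$, $b$ are placeholders, not the paper's data), with the $L^1$ term inducing sparsity of the optimal control:

```latex
% Illustrative model problem (generic notation, not the paper's exact setting):
\min_{u \in U_{ad}} \; J(y,u) := \tfrac{1}{2}\|y - y_d\|_{L^2(\Omega)}^2
   + \tfrac{\alpha}{2}\|u\|_{L^2(\Omega)}^2 + \beta\|u\|_{L^1(\Omega)},
\quad \text{s.t.} \quad
(-\Delta)^s y + f(\cdot, y) = u \ \text{in } \Omega,
\qquad y = 0 \ \text{in } \Omega^c,
```
where $(-\Delta)^s$ is the integral fractional Laplacian with exterior condition on $\Omega^c$, and $U_{ad} := \{\, u \in L^2(\Omega) : a \le u \le b \ \text{a.e. in } \Omega \,\}$ encodes the control constraints.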
Robust Markov decision processes (RMDPs) are used for dynamic optimization in uncertain environments and have been studied extensively. Many of the main properties and algorithms of MDPs, such as value iteration and policy iteration, extend directly to RMDPs. Surprisingly, there is no known analog of the MDP convex optimization formulation for solving RMDPs. This work describes the first convex optimization formulation of RMDPs under the classical sa-rectangularity and s-rectangularity assumptions. Using entropic regularization and an exponential change of variables, we derive a convex formulation with a number of variables and constraints polynomial in the number of states and actions, but with large coefficients in the constraints. We further simplify the formulation for RMDPs with polyhedral, ellipsoidal, or entropy-based uncertainty sets, showing that, in these cases, RMDPs can be reformulated as conic programs based on exponential cones, quadratic cones, and non-negative orthants. Our work opens a new research direction for RMDPs and can serve as a first step toward obtaining a tractable convex formulation of RMDPs.
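To fix ideas, the object being reformulated is the robust Bellman equation, which under sa-rectangularity takes the following generic form (illustrative notation, not the paper's):

```latex
% sa-rectangular robust Bellman equation (generic notation):
v(s) \;=\; \max_{a \in \mathcal{A}} \, \min_{p \in \mathcal{P}_{s,a}}
   \Big( r(s,a) + \gamma \, p^{\top} v \Big), \qquad s \in \mathcal{S},
```
where $\mathcal{P}_{s,a}$ is the uncertainty set of transition probabilities for the state-action pair $(s,a)$. The inner minimization over $p$ is convex, but the max-min fixed point is nonconvex in $v$; the entropic regularization and exponential change of variables mentioned above are what yield a convex program.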
Cognitive Radio Networks (CRNs) provide effective capabilities for allocating valuable spectrum resources in the network, enabling unlicensed users, or Secondary Users (SUs), to access spectrum bands that are left unused by licensed users, or Primary Users (PUs). This paper develops an optimal relay selection scheme combined with a spectrum-sharing scheme in a CRN. The proposed Cross-Layer Spider Swarm Shifting is implemented in the CRN for optimal relay selection with Spider Swarm Optimization (SSO). The shortest path for the data transmission route in the CRN is estimated with a data shifting model. This study examines a cognitive relay network with interference restrictions imposed by a mobile end user (MU). The proposed system model uses half-duplex communication between a single primary user (PU) and a single secondary user (SU), with an amplify-and-forward (AF) relaying mechanism between the SU source and the SU destination. While the other nodes (SU source, SU relays, and PU) are assumed to be immobile in this scenario, the mobile end user (SU destination) is assumed to travel at high vehicular speeds. The proposed method achieves diversity by placing a selection combiner at the SU destination and dynamically selecting the optimal relay for transmission based on the highest signal-to-noise ratio (SNR). The performance of the proposed Cross-Layer Spider Swarm Shifting model is compared with Spectrum Sharing Optimization with QoS Guarantee (SSO-QG). The comparative analysis shows that the proposed model reduces delay by 15% and improves network performance with approximately 25% higher throughput compared with SSO-QG.
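A minimal sketch of max-SNR relay selection for the two-hop AF setting described above, assuming per-hop SNRs are known; the end-to-end SNR expression below is the standard AF form, not necessarily the paper's exact model:

```python
# Select the relay maximizing the end-to-end SNR of a two-hop AF link.
import numpy as np

def select_relay(snr_sr, snr_rd):
    """snr_sr[i], snr_rd[i]: source->relay_i and relay_i->destination SNRs (linear)."""
    # Standard AF end-to-end SNR: g1*g2 / (g1 + g2 + 1)
    snr_e2e = (snr_sr * snr_rd) / (snr_sr + snr_rd + 1.0)
    return int(np.argmax(snr_e2e)), snr_e2e

best, snrs = select_relay(np.array([12.0, 8.5, 15.2]),
                          np.array([9.3, 14.1, 6.8]))
print(f"selected relay {best} with end-to-end SNR {snrs[best]:.2f}")
```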
Psychiatry research seeks to understand the manifestations of psychopathology in behavior, as measured in questionnaire data, by identifying a small number of latent factors that explain them. While factor analysis is the traditional tool for this purpose, the resulting factors may not be interpretable and may also be subject to confounding variables. Moreover, missing data are common, and explicit imputation is often required. To overcome these limitations, we introduce interpretability-constrained questionnaire factorization (ICQF), a non-negative matrix factorization method with regularization tailored to questionnaire data. Our method aims to promote factor interpretability and solution stability. We provide an optimization procedure with theoretical convergence guarantees, and an automated procedure to accurately detect the latent dimensionality. We validate these procedures using realistic synthetic data. We demonstrate the effectiveness of our method on a widely used general-purpose questionnaire in two independent datasets (the Healthy Brain Network and Adolescent Brain Cognitive Development studies). Specifically, we show that ICQF improves interpretability, as assessed by domain experts, while preserving diagnostic information across a range of disorders, and outperforms competing methods on smaller dataset sizes. This suggests that the regularization in our method matches domain characteristics. The Python implementation of ICQF is available at \url{//github.com/jefferykclam/ICQF}.
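To illustrate how a masked factorization avoids explicit imputation (this is a generic weighted-NMF sketch, not ICQF: its interpretability constraints and regularization are omitted), multiplicative updates can simply be weighted by an observation mask:

```python
# Masked NMF via multiplicative updates: missing questionnaire entries
# (mask == 0) contribute nothing to the factorization, so no imputation.
import numpy as np

def masked_nmf(X, mask, k=5, n_iter=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W, H = rng.random((n, k)), rng.random((k, m))
    Xm = np.nan_to_num(X) * mask
    for _ in range(n_iter):
        WH = (W @ H) * mask
        W *= (Xm @ H.T) / (WH @ H.T + eps)
        WH = (W @ H) * mask
        H *= (W.T @ Xm) / (W.T @ WH + eps)
    return W, H

X = np.random.rand(100, 30)                             # stand-in responses
mask = (np.random.rand(100, 30) > 0.1).astype(float)    # ~10% missing
W, H = masked_nmf(X, mask)                              # W: subject factors, H: item loadings
```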
In large-scale systems, centralised techniques for task allocation face fundamental challenges: the number of interactions is limited by resource constraints on computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system, sharing tasks across many agents; however, this in turn increases the resource cost of communication and synchronisation, which itself limits scalability. In this paper we present four algorithms to solve these problems. In combination, these algorithms enable each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how close to optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-world system effects such as network instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and has been tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
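As a conceptual sketch of the explore-exploit adaptation described above (the names, the Q-style update, and the epsilon schedule are our illustrative choices, not the paper's four algorithms), an agent can scale its exploration rate by how far its current strategy appears to be from the best outcome it has observed:

```python
# Agent that delegates subtasks to peers, exploring more when its
# current strategy looks far from the best reward it has ever seen.
import random
from collections import defaultdict

class AllocatorAgent:
    def __init__(self, peers, lr=0.1):
        self.q = defaultdict(float)   # peer -> estimated allocation value
        self.peers, self.lr = peers, lr
        self.best_seen = 1e-9         # assumes non-negative rewards

    def epsilon(self):
        # High when current strategy underperforms the best seen so far.
        current = max(self.q.values(), default=0.0)
        return min(1.0, max(0.05, 1.0 - current / self.best_seen))

    def choose_peer(self):
        if random.random() < self.epsilon():
            return random.choice(self.peers)          # explore
        return max(self.peers, key=lambda p: self.q[p])  # exploit

    def update(self, peer, reward):
        self.q[peer] += self.lr * (reward - self.q[peer])
        self.best_seen = max(self.best_seen, reward)
```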
We hypothesize that, due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the others. Such behavior is counter-intuitive and, as we observe empirically, hurts the models' generalization. To estimate a model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm that balances the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
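A minimal sketch of the conditional utilization rate, assuming a hypothetical `evaluate` helper that returns accuracy when only a given subset of modalities is available to the model:

```python
# u(m1 | m2) = acc({m1, m2}) - acc({m2}): the accuracy gained by adding
# modality m1 on top of modality m2 alone.
def conditional_utilization_rate(evaluate, m1, m2):
    return evaluate({m1, m2}) - evaluate({m2})

# Example with a stand-in evaluator (illustrative accuracies):
acc_table = {frozenset({"rgb"}): 0.71,
             frozenset({"depth"}): 0.64,
             frozenset({"rgb", "depth"}): 0.83}
evaluate = lambda mods: acc_table[frozenset(mods)]
print(conditional_utilization_rate(evaluate, "rgb", "depth"))  # 0.19
```
An imbalance such as u(rgb | depth) >> u(depth | rgb) would indicate that the model leans on the RGB modality while under-utilizing depth.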