Motivated by deterministic identification via (classical) channels, where the encoder is not allowed to use randomization, we revisit the problem of identification via quantum channels, now with the additional restriction that the message encoding must use pure quantum states rather than general mixed states. Together with the previously considered distinction between simultaneous and general decoders, this suggests a two-dimensional spectrum of identification capacities whose behaviour could a priori be very different. We present two main results. First, we show that all four combinations (pure/mixed encoder, simultaneous/general decoder) admit double-exponentially growing code sizes, and that the corresponding identification capacities are lower bounded by the classical transmission capacity of a general quantum channel, given by the Holevo-Schumacher-Westmoreland Theorem. Second, we show that the simultaneous identification capacity of a quantum channel equals the simultaneous identification capacity with pure-state encodings, leaving three linearly ordered identification capacities. By considering simple examples, we finally show that these three are all different: the general identification capacity can be larger than the pure-state-encoded identification capacity, which in turn can be larger than the pure-state-encoded simultaneous identification capacity.
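In notation assumed here for illustration (writing $C_{\mathrm{ID}}$ for the identification capacity of a channel $\mathcal{N}$, superscripts for the pure-state and simultaneous restrictions, and $C$ for the HSW classical capacity), the stated results combine into the chain
$$C_{\mathrm{ID}}(\mathcal{N}) \;\ge\; C_{\mathrm{ID}}^{\mathrm{pure}}(\mathcal{N}) \;\ge\; C_{\mathrm{ID}}^{\mathrm{pure,sim}}(\mathcal{N}) \;=\; C_{\mathrm{ID}}^{\mathrm{sim}}(\mathcal{N}) \;\ge\; C(\mathcal{N}),$$
with the examples showing that the first two inequalities can each be strict.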
This paper deals with Hermite osculatory interpolating splines. For a partition of a real interval endowed with a refinement obtained by dividing each subinterval into two smaller subintervals, we consider a space of smooth splines of the lowest possible degree with additional smoothness at the vertices of the initial partition. A normalized B-spline-like representation of the considered spline space is provided. In addition, several quasi-interpolation operators based on blossoming and control polynomials are developed. Numerical tests are presented and compared with recent works to illustrate the performance of the proposed approach.
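As a minimal illustration of the underlying refinement (the split point within each subinterval is an assumption here; the abstract does not commit to a midpoint), the following sketch doubles a partition:

```python
import numpy as np

def refine_partition(knots, ratio=0.5):
    """Refine a partition by splitting each subinterval [x_i, x_{i+1}]
    at an interior point.  `ratio` (the relative split position) is an
    illustrative assumption, not a choice made by the paper."""
    knots = np.asarray(knots, dtype=float)
    splits = knots[:-1] + ratio * np.diff(knots)
    refined = np.empty(2 * len(knots) - 1)
    refined[0::2] = knots    # vertices of the initial partition
    refined[1::2] = splits   # new vertices introduced by the refinement
    return refined

print(refine_partition([0.0, 1.0, 3.0]))  # [0.  0.5 1.  2.  3. ]
```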
Mediation analysis is commonly used in epidemiological research, but guidance is lacking on how multivariable missing data should be handled in these analyses. Multiple imputation (MI) is a widely used approach, but questions remain regarding the impact of the missingness mechanism, how to ensure compatibility of the imputation model, and approaches to variance estimation. To address these gaps, we conducted a simulation study based on the Victorian Adolescent Health Cohort Study. We considered six missingness mechanisms, involving varying assumptions regarding the influence of the outcome and/or mediator on missingness in key variables. We compared the performance of complete-case analysis, seven MI approaches differing in how the imputation model was tailored, and a "substantive model compatible" MI approach. We evaluated both the MI-Boot (MI, then bootstrap) and Boot-MI (bootstrap, then MI) approaches to variance estimation. Results showed that when the mediator and/or outcome influenced their own missingness, effect estimates were substantially biased, while for the other mechanisms appropriate MI approaches yielded approximately unbiased estimates. Beyond incorporating all analysis variables in the imputation model, how MI was tailored for compatibility with the mediation analysis did not greatly affect bias in point estimation. Boot-MI returned variance estimates with smaller bias than MI-Boot, especially in the presence of incompatibility.
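The two variance-estimation schemes differ only in the order of resampling and imputation. A minimal sketch of Boot-MI under assumed helper functions (`impute` and `estimate` stand in for the imputation model and the mediation-effect estimator, neither of which is specified here):

```python
import numpy as np

def boot_mi(data, impute, estimate, B=200, M=5, seed=0):
    """Boot-MI: bootstrap the incomplete data first, then multiply impute
    within each bootstrap sample.  `impute` and `estimate` are assumed
    user-supplied functions; only the resampling logic is sketched."""
    rng = np.random.default_rng(seed)
    n = len(data)
    boot_est = np.empty(B)
    for b in range(B):
        sample = data[rng.integers(0, n, size=n)]            # resample rows
        ests = [estimate(impute(sample)) for _ in range(M)]  # M imputations
        boot_est[b] = np.mean(ests)                          # pool within sample
    # point estimate and bootstrap variance across resamples
    return boot_est.mean(), boot_est.var(ddof=1)
```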
Recent advancements in Mixed Integer Optimization (MIO) algorithms, paired with hardware enhancements, have led to significant speedups in solving MIO problems. These advances have been exploited for optimal subset selection, specifically for choosing $k$ features out of $p$ in linear regression given $n$ observations. In this paper, we extend this approach to cluster-aware regression, where selection aims to choose $\lambda$ out of $K$ clusters in a linear mixed-effects (LMM) model with $n_k$ observations per cluster. Through comprehensive testing on a variety of synthetic and real datasets, we show that our method efficiently solves these problems within minutes. Through numerical experiments, we also show that the MIO approach outperforms both Gaussian- and Laplace-distributed LMMs in terms of producing sparse solutions with high predictive power. Traditional LMMs typically assume that clustering effects are independent of individual features. However, we introduce an innovative algorithm that estimates cluster effects for new data points, thereby increasing the robustness and precision of the model. The inferential and predictive efficacy of this approach is further illustrated through applications to student scoring and protein expression.
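One standard way to encode such a cardinality-constrained selection as an MIO, sketched here in assumed notation with a big-$M$ bound on the cluster effects $u_k$ (not necessarily the paper's exact formulation), is
$$\min_{\beta,\,u,\,z}\ \sum_{k=1}^{K}\sum_{i=1}^{n_k}\bigl(y_{ki}-x_{ki}^{\top}\beta-u_k\bigr)^{2} \quad\text{s.t.}\quad |u_k|\le M z_k,\quad \sum_{k=1}^{K} z_k\le\lambda,\quad z_k\in\{0,1\},$$
so that the binary variable $z_k$ determines whether cluster $k$ receives a nonzero effect.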
We show that the spectral embeddings of all known triangle-free strongly regular graphs are optimal spherical codes (the new cases are $56$ points in $20$ dimensions, $50$ points in $21$ dimensions, and $77$ points in $21$ dimensions), as are certain mutually unbiased basis arrangements constructed using Kerdock codes in up to $1024$ dimensions (namely, $2^{4k} + 2^{2k+1}$ points in $2^{2k}$ dimensions for $2 \le k \le 5$). As a consequence of the latter, we obtain optimality of the Kerdock binary codes of block length $64$, $256$, and $1024$, as well as uniqueness for block length $64$. We also prove universal optimality for $288$ points on a sphere in $16$ dimensions. To prove these results, we use three-point semidefinite programming bounds, for which only a few sharp cases were known previously. To obtain rigorous results, we develop improved techniques for rounding approximate solutions of semidefinite programs to produce exact optimal solutions.
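The simplest form of the rounding idea, sketched below under the assumption that the numerically optimal matrix is close to a rational one (the paper's techniques are considerably more refined), is to round entries to nearby rationals and then verify feasibility in exact arithmetic:

```python
from fractions import Fraction
import sympy as sp

def round_to_exact_psd(X_float, max_den=10**6):
    """Round a numerically optimal SDP matrix to nearby rationals and
    check positive semidefiniteness in exact rational arithmetic."""
    n = len(X_float)
    Xq = sp.Matrix(n, n, lambda i, j: sp.Rational(
        Fraction(float(X_float[i][j])).limit_denominator(max_den)))
    Xq = (Xq + Xq.T) / 2                      # exact symmetrization
    return Xq, Xq.is_positive_semidefinite   # exact PSD certificate
```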
Many classical inferential approaches fail to hold when interference exists among the population units, that is, when the treatment status of one unit affects the potential outcomes of other units. Testing for such spillover effects in this setting involves a non-sharp null hypothesis. One appealing way to tackle the non-sharp nature of the null hypothesis is to construct conditional randomization tests such that the null becomes sharp on a restricted subpopulation. In randomized experiments, conditional randomization tests have finite-sample validity, but they can pose computational challenges, as finding the appropriate subpopulations based on the experimental design can involve solving an NP-hard problem. In this paper, we view the network among the population units as a random variable rather than as fixed, and propose a new approach that builds a conditional quasi-randomization test. Our main idea is to build the (non-sharp) null distribution of no spillover effects using random graph null models. We show that our method is exactly valid in finite samples under mild assumptions, and that it displays enhanced power over other methods, with substantial improvements in complex experimental designs. Notably, the method reduces to a simple permutation test, making it easy to implement in practice. We conduct a simulation study to verify the finite-sample validity of our approach, and illustrate our methodology by testing for interference in a weather insurance adoption experiment run in rural China.
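A minimal sketch of the resulting Monte Carlo test, with an assumed degree-preserving configuration model as the random graph null and a hypothetical user-supplied spillover statistic `stat` (the paper's choice of null model and statistic may differ):

```python
import numpy as np
import networkx as nx

def quasi_randomization_test(G, treat, outcome, stat, draws=999, seed=1):
    """Rebuild the null distribution of a spillover statistic by
    regenerating the network from a random-graph null model."""
    rng = np.random.default_rng(seed)
    observed = stat(G, treat, outcome)
    degrees = [d for _, d in G.degree()]
    null = np.empty(draws)
    for i in range(draws):
        G_null = nx.configuration_model(degrees, seed=int(rng.integers(2**31)))
        null[i] = stat(nx.Graph(G_null), treat, outcome)  # collapse multiedges
    # one-sided Monte Carlo p-value with the usual +1 correction
    return (1 + np.sum(null >= observed)) / (draws + 1)
```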
We consider an arbitrary bounded discrete time series. From its statistical features, and without any use of the Fourier transform, we find an almost periodic function that suitably characterizes the time series.
In the search for highly efficient decoders for short LDPC codes approaching maximum-likelihood performance, we enhance a relayed decoding strategy, which activates ordered statistics decoding upon failure of a neural min-sum decoder, with three innovations. First, soft information gathered at each step of the neural min-sum decoder is leveraged to forge a new reliability measure using a convolutional neural network. This measure aids in constructing the most reliable basis of ordered statistics decoding, bolstering the decoding process by excluding error-prone bits or concentrating them in a smaller region. Second, an adaptive ordered statistics decoding process is introduced, guided by a derived decoding path comprising prioritized blocks, each containing distinct test error patterns; the priority of these blocks is determined from statistical data gathered during the query phase. Effective complexity-management methods are also devised by adjusting the length of the decoding path or refining constraints on the involved blocks. Third, a simple auxiliary criterion is introduced to reduce computational complexity by minimizing the number of candidate codewords before selecting the optimal estimate. Extensive experimental results and complexity analysis support the proposed framework, demonstrating its advantages in terms of high throughput, low complexity, and independence from noise variance, in addition to superior decoding performance.
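For context, the check-node update that min-sum decoding (and its neural variant, via learned per-edge weights or offsets) is built on can be sketched as follows; the fixed `offset` below merely stands in for the learned parameters:

```python
import numpy as np

def min_sum_check_update(llrs_in, offset=0.0):
    """One check-node update of (offset) min-sum decoding: the message sent
    back on each edge combines the sign product and the minimum magnitude
    of the *other* incoming LLRs."""
    llrs_in = np.asarray(llrs_in, dtype=float)
    signs = np.sign(llrs_in)
    total_sign = np.prod(signs)
    mags = np.abs(llrs_in)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]  # two smallest magnitudes
    out = np.full_like(mags, min1)
    out[order[0]] = min2                         # exclude the edge's own message
    return total_sign * signs * np.maximum(out - offset, 0.0)

print(min_sum_check_update([1.5, -0.4, 2.0]))   # [-0.4  1.5 -0.4]
```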
We propose a method for obtaining parsimonious decompositions of networks into higher-order interactions that can take the form of arbitrary motifs. The method is based on a class of analytically solvable generative models in which vertices are connected via explicit copies of motifs; in combination with non-parametric priors, these models allow us to infer higher-order interactions from dyadic graph data without any prior knowledge of the types or frequencies of such interactions. Crucially, we also consider 'degree-corrected' models that correctly reflect the degree distribution of the network and consequently prove to be a better fit for many real-world networks than non-degree-corrected models. We test the presented approach on simulated data, for which we recover the set of underlying higher-order interactions with high accuracy. For empirical networks, the method identifies concise sets of atomic subgraphs from among thousands of candidates that cover a large fraction of edges and include higher-order interactions of known structural and functional significance. The method not only produces an explicit higher-order representation of the network but also fits the network to analytically tractable models, opening new avenues for the systematic study of higher-order network structures.
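A crude, model-free proxy for one ingredient of this pipeline, namely how much of a network a single candidate motif can account for, might look as follows (node-induced copies only, with no generative model, priors, or parsimony criterion, so this falls far short of the actual inference procedure):

```python
import networkx as nx
from networkx.algorithms import isomorphism

def edge_coverage(G, motif):
    """Fraction of G's edges covered by at least one node-induced copy
    of `motif`."""
    covered = set()
    matcher = isomorphism.GraphMatcher(G, motif)
    for mapping in matcher.subgraph_isomorphisms_iter():
        nodes = list(mapping)                 # G-nodes matched onto the motif
        for u, v in G.subgraph(nodes).edges():
            covered.add(frozenset((u, v)))
    return len(covered) / G.number_of_edges()

print(edge_coverage(nx.karate_club_graph(), nx.cycle_graph(3)))  # triangle cover
```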
Developing algorithms for accurate and comprehensive neural decoding of mental contents is one of the long-cherished goals in neuroscience and brain-machine interfaces. Previous studies have demonstrated the feasibility of neural decoding by training machine learning models to map brain activity patterns onto a semantic vector representation of stimuli. These vectors, hereafter referred to as pretrained feature vectors, are usually derived from semantic spaces based solely on image and/or text features; they may therefore have characteristics quite different from how visual stimuli are represented in the human brain, limiting the ability of brain decoders to learn this mapping. To address this issue, we propose a representation learning framework, termed brain-grounding of semantic vectors, which fine-tunes pretrained feature vectors to better align with the neural representation of visual stimuli in the human brain. We trained this model with functional magnetic resonance imaging (fMRI) data for 150 visual stimulus categories, and then performed zero-shot brain decoding and identification analyses on 1) fMRI and 2) magnetoencephalography (MEG) data. Interestingly, we observed that using the brain-grounded vectors increases brain decoding and identification accuracy on data from both neuroimaging modalities. These findings underscore the potential of incorporating a richer array of brain-derived features to enhance the performance of brain decoding algorithms.
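As a purely illustrative sketch of what "brain-grounding" could look like, the snippet below nudges pretrained category vectors so that their pairwise similarity structure matches that of the fMRI responses; this RSA-style objective is an assumption made here, not the paper's actual training loss:

```python
import numpy as np

def brain_ground(vectors, fmri, lr=0.01, steps=200):
    """Fine-tune category vectors (n_categories x dim) toward the
    similarity structure of fMRI responses (n_categories x n_voxels).
    ASSUMPTION: a representational-similarity loss stands in for the
    paper's unspecified objective."""
    V = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    B = fmri / np.linalg.norm(fmri, axis=1, keepdims=True)
    target = B @ B.T                    # brain similarity matrix
    for _ in range(steps):
        diff = V @ V.T - target
        V -= lr * 4 * diff @ V          # gradient of ||V V^T - target||_F^2
        V /= np.linalg.norm(V, axis=1, keepdims=True)
    return V
```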
We hypothesize that, due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the others. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate a model's dependence on each modality, we compute the gain in accuracy obtained when the model has access to that modality in addition to another one; we refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm that balances the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
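Taking the abstract's definition at face value, the conditional utilization rate is simply an accuracy difference; the numbers in the usage example below are hypothetical:

```python
def conditional_utilization_rate(acc_both, acc_other_alone):
    """Accuracy gain from adding a modality on top of the other one:
    u(m1 | m2) = acc(m1, m2) - acc(m2)."""
    return acc_both - acc_other_alone

# Hypothetical numbers: 0.92 with both modalities, 0.90 with only A,
# 0.55 with only B -- the model is relying almost entirely on modality A.
u_B_given_A = conditional_utilization_rate(0.92, 0.90)  # 0.02
u_A_given_B = conditional_utilization_rate(0.92, 0.55)  # 0.37
```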