We investigate the allocation of children to childcare facilities and propose solutions to overcome limitations of the current allocation mechanism. We introduce a natural preference domain and a priority structure that address these limitations, aiming to enhance the allocation process. To this end, we present an adaptation of the Deferred Acceptance mechanism to our problem, which is strategy-proof on our preference domain and yields the student-optimal stable matching. Finally, we provide a maximal domain for the existence of stable matchings based on the properties that define our natural preference domain. Our results have practical implications for allocating indivisible bundles with complementarities.
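For concreteness, the following is a minimal sketch of the student-proposing Deferred Acceptance procedure with facility capacities and responsive priorities; the names, simple priority rankings, and absence of bundle complementarities below are illustrative assumptions and do not capture the paper's adaptation to the childcare domain.

```python
from collections import defaultdict

def deferred_acceptance(student_prefs, facility_priority, capacity):
    """Student-proposing Deferred Acceptance.

    student_prefs: dict student -> list of facilities, most preferred first
    facility_priority: dict facility -> dict student -> rank (lower = higher priority)
    capacity: dict facility -> number of seats
    Returns a dict facility -> set of matched students.
    """
    next_choice = {s: 0 for s in student_prefs}   # index of the next facility to propose to
    unmatched = set(student_prefs)                # students currently without a tentative seat
    tentative = defaultdict(set)                  # facility -> students it tentatively holds

    while unmatched:
        s = unmatched.pop()
        prefs = student_prefs[s]
        if next_choice[s] >= len(prefs):
            continue                              # s has exhausted its list and stays unmatched
        f = prefs[next_choice[s]]
        next_choice[s] += 1
        tentative[f].add(s)
        if len(tentative[f]) > capacity[f]:
            # reject the lowest-priority student currently held by f
            reject = max(tentative[f], key=lambda x: facility_priority[f][x])
            tentative[f].remove(reject)
            unmatched.add(reject)
    return dict(tentative)

# Example: three children, two facilities with capacities 1 and 2.
prefs = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["A"]}
priority = {"A": {"s1": 0, "s2": 1, "s3": 2}, "B": {"s1": 0, "s2": 1, "s3": 2}}
print(deferred_acceptance(prefs, priority, {"A": 1, "B": 2}))
# -> {'A': {'s1'}, 'B': {'s2'}}; s3 exhausts its list and remains unmatched.
```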
In theoretical neuroscience, recent work leverages deep learning tools to explore how certain network attributes critically influence learning dynamics. Notably, initial weight distributions with small (resp. large) variance may yield a rich (resp. lazy) regime, in which significant (resp. minor) changes to network states and representations are observed over the course of learning. However, in biology, neural circuit connectivity generally has a low-rank structure and therefore differs markedly from the random initializations typically used in these studies. Here, we investigate how the structure of the initial weights, in particular their effective rank, influences the network's learning regime. Through both empirical and theoretical analyses, we find that high-rank initializations typically yield smaller network changes indicative of lazier learning, a finding we also confirm with experimentally driven initial connectivity in recurrent neural networks. Conversely, low-rank initializations bias networks towards richer learning. Importantly, however, as an exception to this rule, we find that lazier learning can still occur with a low-rank initialization that aligns with task and data statistics. Our research highlights the pivotal role of initial weight structure in shaping learning regimes, with implications for the metabolic costs of plasticity and the risk of catastrophic forgetting.
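As an illustration of the quantity at stake, the sketch below contrasts the effective rank of a classic variance-scaled random initialization with that of a low-rank one, using the entropy-based effective-rank measure of Roy and Vetterli (2007); the sizes and scale factors are placeholders, not the paper's experimental setup.

```python
import numpy as np

def effective_rank(W):
    """Entropy-based effective rank: exp of the Shannon entropy of the
    normalized singular-value distribution (Roy & Vetterli, 2007)."""
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()
    return float(np.exp(-(p * np.log(p + 1e-12)).sum()))

rng = np.random.default_rng(0)
n, r = 200, 5
full_rank_init = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))        # classic random init
low_rank_init = rng.normal(size=(n, r)) @ rng.normal(size=(r, n)) / n   # rank-r structure

print(effective_rank(full_rank_init))  # a sizeable fraction of n
print(effective_rank(low_rank_init))   # bounded by r, far below n
```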
Despite the widespread application of recommender systems (RecSys) in our daily lives, rather limited research has been done on quantifying the unfairness and biases present in such systems. Prior work largely focuses on determining whether a RecSys is discriminating or not, but does not compute the amount of bias present in these systems. Biased recommendations may lead to decisions that can potentially have adverse effects on individuals, sensitive user groups, and society. Hence, it is important to quantify these biases for fair and safe commercial applications of these systems. This paper focuses on quantifying popularity bias that stems directly from the output of RecSys models, leading to over-recommendation of popular items that are likely to be misaligned with user preferences. We propose four metrics to quantify popularity bias in RecSys over time, in a dynamic setting, across different sensitive user groups. These metrics are demonstrated for four collaborative filtering-based RecSys algorithms trained on two benchmark datasets commonly used in the literature. Results show that, when used conjointly, the proposed metrics provide a comprehensive understanding of growing disparities in treatment between sensitive groups over time.
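The abstract does not spell out the four metrics, so the snippet below only illustrates the general idea with one plausible statistic: the gap, per sensitive group and per time window, between the popularity of recommended items and the popularity of the items that group actually interacted with. The function and its inputs are hypothetical, not the paper's metrics.

```python
import numpy as np

def group_popularity_lift(recs, history, item_popularity, groups):
    """Illustrative popularity-bias statistic for one time window.

    recs: dict user -> list of recommended item ids
    history: dict user -> list of items the user interacted with
    item_popularity: dict item -> popularity (e.g. interaction share)
    groups: dict user -> sensitive group label
    Returns dict group -> mean recommended popularity minus mean consumed popularity.
    """
    lift = {}
    for g in set(groups.values()):
        users = [u for u, gu in groups.items() if gu == g]
        rec_pop = np.mean([item_popularity[i] for u in users for i in recs[u]])
        hist_pop = np.mean([item_popularity[i] for u in users for i in history[u]])
        lift[g] = rec_pop - hist_pop
    return lift

# Calling this per retraining cycle / time window yields a trajectory of the
# disparity between groups as the feedback loop unfolds.
```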
There has been significant progress in the study of sampling discretization of integral norms, both for a designated finite-dimensional function space and for a finite collection of such spaces (universal discretization). Sampling discretization results turn out to be very useful in various applications, particularly in sampling recovery. Recent sampling discretization results typically establish the existence of good sampling points for discretization. In this paper, we show that independent and identically distributed random points provide good universal discretization with high probability. Furthermore, we demonstrate that a simple greedy algorithm based on points that are good for universal discretization provides excellent sparse recovery results in the square norm.
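For readers outside this literature, the property being discretized is the standard Marcinkiewicz-type inequality in the square norm, sketched below; the exact constants and normalization used in the paper may differ.

```latex
% L_2 sampling discretization for a finite-dimensional subspace X_N:
% the points \xi^1,\dots,\xi^m are good for X_N if, for all f \in X_N,
C_1 \|f\|_2^2 \;\le\; \frac{1}{m}\sum_{j=1}^{m} \bigl|f(\xi^j)\bigr|^2 \;\le\; C_2 \|f\|_2^2 .
% Universal discretization asks for one point set satisfying this
% simultaneously for every subspace in a given finite collection.
```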
Data on neighbourhood characteristics are not typically collected in epidemiological studies. They are, however, useful in the study of small-area health inequalities. Neighbourhood characteristics are collected in some surveys and could be linked to the data of other studies. We propose to use kriging, based on semi-variogram models, to predict values at non-observed locations, with the aim of constructing bespoke indices of neighbourhood characteristics to be linked to data from epidemiological studies. We perform a simulation study to assess the feasibility of the method, as well as a case study using data from the RECORD study. Apart from having enough observed data at small distances from the non-observed locations, a well-fitting semi-variogram model, a larger range, and the absence of a nugget effect lead to higher reliability.
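A minimal numpy sketch of the ordinary kriging prediction step with an exponential semi-variogram; the variogram parameters are placeholders, and fitting the semi-variogram to the RECORD data is not shown.

```python
import numpy as np

def exp_semivariogram(h, nugget=0.0, sill=1.0, range_=10.0):
    """Exponential model: nugget + (sill - nugget) * (1 - exp(-h / range))."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / range_))

def ordinary_kriging(coords, values, target, variogram=exp_semivariogram):
    """Ordinary kriging prediction at one non-observed location.

    coords: (n, 2) observed coordinates; values: (n,) observed neighbourhood measure
    target: (2,) coordinates of the non-observed location
    Returns (prediction, kriging variance).
    """
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # ordinary-kriging system with a Lagrange multiplier for the unbiasedness constraint
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[:n, n] = A[n, :n] = 1.0
    b = np.append(variogram(np.linalg.norm(coords - target, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ values, w @ b[:n] + mu
```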
The automated detection of cancerous tumors has attracted interest mainly during the last decade, driven by the need for early and efficient diagnosis that leads to the most effective possible treatment. Several machine learning and artificial intelligence methodologies have been employed to provide trustworthy tools that contribute to this effort. In this article, we present a low-complexity convolutional neural network architecture for tumor classification, enhanced by a robust image augmentation methodology. The effectiveness of the presented deep learning model has been investigated on 3 datasets containing brain, kidney and lung images, showing remarkable diagnostic efficiency with classification accuracies of 99.33%, 100% and 99.7% for the 3 datasets, respectively. The impact of the augmentation preprocessing step has also been extensively examined using 4 evaluation measures. The proposed low-complexity scheme, in contrast to other models in the literature, renders our model quite robust to the overfitting that typically accompanies the small datasets frequently encountered in medical classification challenges. Finally, the model can easily be re-trained when additional volume images become available, as its simple architecture does not impose a significant computational burden.
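An illustrative Keras definition of a low-complexity CNN with on-the-fly augmentation, in the spirit described above; the layer sizes, input shape, augmentation choices, and class count are assumptions, not the paper's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Light augmentation applied only during training.
augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Low-complexity CNN: two small convolutional blocks and a global pooling head.
model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 1)),
    augmentation,
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),   # number of tumor classes is dataset-dependent
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=30)
```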
Recurrent neural networks (RNNs) have yielded promising results both for recognizing objects in challenging conditions and for modeling aspects of primate vision. However, the representational dynamics of recurrent computations remain poorly understood, especially in large-scale visual models. Here, we studied such dynamics in RNNs trained for object classification on MiniEcoset, a novel subset of ecoset. We report two main insights. First, during inference, representations continued to evolve after correct classification, suggesting a lack of the notion of being ``done with classification''. Second, focusing on ``readout zones'' as a way to characterize the activation trajectories, we observe that misclassified representations exhibit activation patterns with lower L2 norm and are positioned more peripherally in the readout zones. Such arrangements help the misclassified representations move into the correct zones as time progresses. Our findings generalize to networks with lateral and top-down connections, including both additive and multiplicative interactions with the bottom-up sweep. The results therefore contribute to a general understanding of RNN dynamics in naturalistic tasks. We hope that the analysis framework will aid future investigations of other types of RNNs, including the understanding of representational dynamics in primate vision.
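A sketch of the kind of readout-zone analysis described above: per time step, assign each representation to the zone given by the argmax of a linear readout, then compare the L2 norms of correctly and incorrectly classified states. Array shapes and names are placeholders, not the study's actual pipeline.

```python
import numpy as np

def readout_zone_stats(states, W_out, labels):
    """states: (T, N, D) hidden states over T time steps for N stimuli,
    W_out: (D, C) linear readout defining C readout zones,
    labels: (N,) ground-truth classes.
    Returns, per time step, accuracy and mean norms of correct vs. misclassified states."""
    results = []
    for t, h in enumerate(states):
        preds = (h @ W_out).argmax(axis=1)        # readout zone = argmax of the logits
        correct = preds == labels
        norms = np.linalg.norm(h, axis=1)
        results.append({
            "t": t,
            "accuracy": float(correct.mean()),
            "norm_correct": float(norms[correct].mean()) if correct.any() else np.nan,
            "norm_misclassified": float(norms[~correct].mean()) if (~correct).any() else np.nan,
        })
    return results
```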
The recent shift to remote learning and work has aggravated long-standing problems, such as monitoring the mental health of individuals and the progress of students towards learning targets. We introduce a novel latent process model for monitoring the progress of individuals towards a hard-to-measure target of interest that is measured by a set of variables. The latent process model is based on the idea of embedding both individuals and the variables measuring progress towards the target in a shared metric space, interpreted as an interaction map that captures interactions between individuals and variables. The fact that individuals are embedded in the same metric space as the target helps assess their progress towards it. We demonstrate, with the help of simulations and applications, that the latent process model enables a novel look at mental health and online educational assessments in disadvantaged subpopulations.
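One common way to operationalize such an interaction map is a distance-based latent-space formulation, sketched below; the functions and their forms are illustrative assumptions, not necessarily the paper's exact specification.

```python
import numpy as np

def interaction_logit(z_i, w_k, alpha_i=0.0, beta_k=0.0):
    """Log-odds that individual i responds positively to variable k: individual and
    variable effects minus the Euclidean distance between their positions in the
    shared interaction map (a standard latent-space form)."""
    return alpha_i + beta_k - np.linalg.norm(z_i - w_k)

def progress_towards_target(z_i, z_target):
    """Because individuals and the target live in the same metric space, progress can
    be summarized by (negative) distance to the target's embedded position."""
    return -np.linalg.norm(z_i - z_target)
```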
Power posteriors "robustify" standard Bayesian inference by raising the likelihood to a constant fractional power, effectively downweighting its influence in the calculation of the posterior. Power posteriors have been shown to be more robust to model misspecification than standard posteriors in many settings. Previous work has shown that power posteriors derived from low-dimensional, parametric locally asymptotically normal models are asymptotically normal (Bernstein-von Mises) even under model misspecification. We extend these results to show that the power posterior moments converge to those of the limiting normal distribution suggested by the Bernstein-von Mises theorem. We then use this result to show that the mean of the power posterior, a point estimator, is asymptotically equivalent to the maximum likelihood estimator.
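For reference, the power posterior with fractional power (learning rate) eta in (0, 1] is

```latex
\pi_\eta(\theta \mid x_{1:n}) \;\propto\; \pi(\theta)\,
  \Bigl[\prod_{i=1}^{n} p(x_i \mid \theta)\Bigr]^{\eta},
% \eta = 1 recovers the standard posterior; \eta < 1 downweights the likelihood.
```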
Conventional neural network elastoplasticity models are often perceived as lacking interpretability. This paper introduces a two-step machine learning approach that returns mathematical models interpretable by human experts. In particular, we introduce a surrogate model in which yield surfaces are expressed in terms of a set of single-variable feature mappings obtained from supervised learning. A postprocessing step then re-interprets the set of single-variable neural network mapping functions into mathematical form through symbolic regression. This divide-and-conquer approach provides several important advantages. First, it enables us to overcome the scaling issues of symbolic regression algorithms. From a practical perspective, it enhances the portability of learned models for partial differential equation solvers written in different programming languages. Finally, it gives us a concrete understanding of material attributes, such as the convexity and symmetries of the models, through automated derivations and reasoning. Numerical examples are provided, along with open-source code to enable third-party validation.
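A toy sketch of the divide-and-conquer idea: learn a single-variable mapping with a small network, then re-express it in closed form. The least-squares fit over a fixed term library below is a simplified stand-in for the paper's symbolic-regression step, and the cosh target is purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Step 1: supervised learning of a single-variable feature mapping x -> f(x).
x = np.linspace(-2.0, 2.0, 400).reshape(-1, 1)
y = np.cosh(x).ravel()                        # stand-in feature with a known closed form
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0).fit(x, y)
f_hat = mlp.predict(x)

# Step 2: re-interpret the learned 1D mapping in symbolic form. Here a least-squares
# fit over a small term library stands in for a genuine symbolic-regression search.
basis = np.column_stack([np.ones_like(x), x, x**2, np.exp(x), np.exp(-x)])
coef, *_ = np.linalg.lstsq(basis, f_hat, rcond=None)
print(dict(zip(["1", "x", "x^2", "exp(x)", "exp(-x)"], np.round(coef, 3))))
# Expect weights close to 0.5 on exp(x) and exp(-x), i.e. f(x) ~ cosh(x).
```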
Wide heterogeneity exists in cancer patients' survival, ranging from a few months to several decades. To accurately predict clinical outcomes, it is vital to build a predictive model that relates patients' molecular profiles to their survival. With complex relationships between survival and high-dimensional molecular predictors, it is challenging to conduct non-parametric modeling and remove irrelevant predictors simultaneously. In this paper, we build a kernel Cox proportional hazards semi-parametric model and propose a novel regularized garrotized kernel machine (RegGKM) method to fit it. We use the kernel machine method to describe the complex relationship between survival and predictors, while automatically removing irrelevant parametric and non-parametric predictors through a LASSO penalty. An efficient high-dimensional algorithm is developed for the proposed method. Comparisons with competing methods in simulations show that the proposed method consistently achieves better predictive accuracy. We apply this method to analyze a multiple myeloma dataset and predict patients' death burden based on their gene expression. Our results can help classify patients into groups with different death risks, facilitating treatment for better clinical outcomes.
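For orientation, a minimal numpy version of the Cox negative log partial likelihood with a kernel-expanded risk score, which is the core quantity such a model fits; the RegGKM garrote weights, the LASSO penalty, and tie handling are omitted, so this is an illustrative sketch rather than the proposed algorithm.

```python
import numpy as np

def neg_log_partial_likelihood(alpha, K, time, event):
    """Cox negative log partial likelihood with kernel-expanded risk scores
    h_i = sum_j alpha_j K(x_i, x_j); assumes no tied event times.

    K: (n, n) kernel matrix, time: (n,) follow-up times, event: (n,) 1 = death, 0 = censored.
    """
    h = K @ alpha
    order = np.argsort(-time)                            # sort by decreasing time
    h_s, event_s = h[order], event[order]
    log_risk_set = np.log(np.cumsum(np.exp(h_s)))        # log of risk-set sums at each event
    return -np.sum(event_s * (h_s - log_risk_set))
```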