Automatically evaluating the correctness of programming assignments is rather straightforward using unit and integration tests. However, programming tasks can be solved in multiple ways, many of which, although correct, are inelegant. For instance, excessive branching, poor naming, or repetitiveness make the code hard to understand and maintain. These subjective qualities of code are hard to assess automatically using current techniques. In this work we investigate the use of CodeBERT to automatically assign a quality score to Java code. We experiment with different models and training paradigms. We explore the accuracy of the models on a novel dataset for code quality assessment. Finally, we assess the quality of the predictions using saliency maps. We find that code quality is to some extent predictable and that transformer-based models using task-adapted pre-training can solve the task more efficiently than other techniques.
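A minimal sketch of the kind of setup this suggests, assuming a HuggingFace-style fine-tuning pipeline with the published CodeBERT checkpoint; the snippet, label, and hyperparameters are illustrative and not the models or data used in the paper:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load CodeBERT and attach a single-output regression head (MSE loss via problem_type).
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=1, problem_type="regression"
)

snippets = ["public int add(int a, int b) { return a + b; }"]  # Java code to be scored
scores = torch.tensor([[0.9]])                                  # illustrative human quality label

batch = tokenizer(snippets, padding=True, truncation=True, max_length=512, return_tensors="pt")
out = model(**batch, labels=scores)
out.loss.backward()  # one gradient step of standard fine-tuning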
Challenges to reproducibility and replicability have gained widespread attention over the past decade, driven by a number of large replication projects with lukewarm success rates. A nascent body of work has emerged developing algorithms to estimate, or predict, the replicability of published findings. The current study explores ways in which AI-enabled signals of confidence in research might be integrated into literature search. We interview 17 PhD researchers about their current processes for literature search and ask them to provide feedback on a prototype replicability estimation tool. Our findings suggest that information about replicability can support researchers throughout the literature review and research design processes. However, explainability and interpretability of system outputs are critical, and potential drawbacks of AI-enabled confidence assessment need to be studied further before such tools can be widely accepted and deployed. We discuss implications for the design of technological tools to support scholarly activities and advance reproducibility and replicability.
Positive and unlabelled learning is an important problem which arises naturally in many applications. A significant limitation of almost all existing methods is the assumption that the propensity score function is constant (the SCAR assumption), which is unrealistic in many practical situations. Avoiding this assumption, we consider a parametric approach to the problem of joint estimation of the posterior probability and the propensity score functions. We show that, under mild assumptions, when both functions have the same parametric form (e.g. logistic with different parameters), the corresponding parameters are identifiable. Motivated by this, we propose two approaches to their estimation: a joint maximum likelihood method and a second approach based on alternating maximization of two Fisher-consistent expressions. Our experimental results show that the proposed methods are comparable to or better than existing methods based on the Expectation-Maximisation scheme.
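As a concrete illustration of the parametric setting (in our notation, with $S$ the labelling indicator): assume the posterior $y_\beta(x)=P(Y=1\mid x)=\sigma(\beta^\top x)$ and the propensity $e_\gamma(x)=P(S=1\mid Y=1,x)=\sigma(\gamma^\top x)$ are both logistic. The probability of observing a label is then $P(S=1\mid x)=y_\beta(x)\,e_\gamma(x)$, and the observed-data log-likelihood is
\[
  \ell(\beta,\gamma)=\sum_{i=1}^{n}\Bigl[s_i\log\bigl(y_\beta(x_i)\,e_\gamma(x_i)\bigr)+(1-s_i)\log\bigl(1-y_\beta(x_i)\,e_\gamma(x_i)\bigr)\Bigr].
\]
Joint maximum likelihood maximises $\ell$ over $(\beta,\gamma)$ simultaneously; the alternating variant maximises over one set of parameters while the other is held fixed.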
All organisms make temporal predictions, and their evolutionary fitness level depends on the accuracy of these predictions. In the context of visual perception, the motions of both the observer and objects in the scene structure the dynamics of sensory signals, allowing for partial prediction of future signals based on past ones. Here, we propose a self-supervised representation-learning framework that extracts and exploits the regularities of natural videos to compute accurate predictions. We motivate the polar architecture by appealing to the Fourier shift theorem and its group-theoretic generalization, and we optimize its parameters on next-frame prediction. Through controlled experiments, we demonstrate that this approach can discover the representation of simple transformation groups acting in data. When trained on natural video datasets, our framework achieves better prediction performance than traditional motion compensation and rivals conventional deep networks, while maintaining interpretability and speed. Furthermore, the polar computations can be restructured into components resembling normalized simple and direction-selective complex cell models of primate V1 neurons. Thus, polar prediction offers a principled framework for understanding how the visual system represents sensory inputs in a form that simplifies temporal prediction.
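For reference, the shift theorem the architecture appeals to, written in our notation: translating a signal $x$ by $v$ multiplies each Fourier coefficient by a unit-modulus phase,
\[
  \widehat{x(\cdot - v)}(\omega) \;=\; e^{-i\,\omega^\top v}\,\hat{x}(\omega),
\]
so in polar (amplitude, phase) coordinates a rigid translation leaves amplitudes unchanged and advances phases linearly over time, which reduces next-frame prediction to phase extrapolation.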
We propose models and algorithms for learning about random directions in simplex-valued data. The models are applied to the study of income-level proportions and their changes over time in a geostatistical area. There are several notable challenges in the analysis of simplex-valued data: the measurements must respect the simplex constraint, and the changes exhibit spatiotemporal smoothness and may be heterogeneous. To that end, we propose Bayesian models that draw from and expand upon building blocks in circular and spatial statistics by exploiting a suitable transformation of the simplex-valued data. Our models also account for spatial correlation across locations in the simplex and for heterogeneous patterns via mixture modeling. We describe some properties of the models and discuss model fitting via MCMC techniques. Our models and methods are applied to an analysis of movements and trends of income categories using Home Mortgage Disclosure Act data.
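One standard device for linking compositional and directional data, given here only as an illustration (it may not be the exact transformation used by the authors), is the square-root map from the simplex to the unit sphere,
\[
  p=(p_1,\dots,p_D)\in\Delta^{D-1}\;\longmapsto\;u=\bigl(\sqrt{p_1},\dots,\sqrt{p_D}\bigr)\in\mathbb{S}^{D-1},
\]
after which tools from circular and spherical statistics can be used to model directions of change and their spatiotemporal structure.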
We present a new entanglement-assisted classical communication scheme which can correct a fixed number of erasures or errors. The scheme transmits classical information over a quantum channel assisted by maximally entangled pairs. We establish a general framework for accomplishing this task by reducing it to a classical problem: depending on the amount of entanglement available, we use direct coding or super-dense coding, which results in a combination of two classical channels. For this scenario we present an explicit encoding scheme. We compare our scheme with specific bounds and identify ranges of parameters where the scheme is optimal. The presented scheme can be realized easily, as it requires only the implementation of super-dense coding, which has been demonstrated successfully in experiments.
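The underlying primitive, in its standard textbook form: with one pre-shared Bell pair, transmitting a single qubit conveys two classical bits, since the sender's Pauli encoding maps the pair onto the four orthogonal Bell states,
\[
  |\Phi^{+}\rangle=\tfrac{1}{\sqrt{2}}\bigl(|00\rangle+|11\rangle\bigr),\qquad
  (b_1,b_2)\;\longmapsto\;\bigl(Z^{b_1}X^{b_2}\otimes I\bigr)|\Phi^{+}\rangle,
\]
which the receiver distinguishes with a Bell-basis measurement; consistent with the reduction described above, the erasure/error-correcting layer can then be handled by classical coding on top of this primitive.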
Combining test statistics from independent trials or experiments is a popular method of meta-analysis. However, there is very limited theoretical understanding of the power of the combined test, especially for composite hypothesis tests in high-dimensional models. We derive a mathematical framework to study standard meta-analysis testing approaches in the context of the many normal means model, which serves as a platform for investigating more complex models. We introduce a natural and mild restriction on the meta-level combination functions of the local trials. This allows us to mathematically quantify the cost of compressing $m$ trials into real-valued test statistics and combining these. We then derive minimax lower and matching upper bounds for the separation rates of standard combination methods, e.g. for p-values and e-values, quantifying the loss relative to using the full, pooled data. We observe an elbow effect, revealing that in certain cases combining the locally optimal tests in each trial results in a sub-optimal meta-analysis method, and we develop approaches to achieve the global optimum. We also explore the possible gains of allowing limited coordination between the trial designs. Our results connect meta-analysis with bandwidth-constrained distributed inference and build on recent information-theoretic developments in the latter field.
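Two standard combination rules of the kind such restrictions cover, given here purely as illustrations: Fisher's method for the $m$ local p-values and the product rule for e-values,
\[
  T_{\mathrm{Fisher}}=-2\sum_{j=1}^{m}\log p_j\;\sim\;\chi^2_{2m}\ \text{under }H_0,
  \qquad
  E=\prod_{j=1}^{m}e_j,\quad\text{reject }H_0\text{ if }E\ge 1/\alpha,
\]
where the e-value test is valid by Markov's inequality because each $e_j$ has expectation at most one under the null.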
The problem of designing learners that provide guarantees that their predictions are provably correct is of increasing importance in machine learning. However, learning-theoretic guarantees have so far been considered only in very specific settings. In this work, we consider the design and analysis of reliable learners in challenging test-time environments encountered in modern machine learning problems: namely, `adversarial' test-time attacks (in several variations) and `natural' distribution shifts. We provide a reliable learner with provably optimal guarantees in such settings. We discuss computationally feasible implementations of the learner and further show that our algorithm achieves strong positive performance guarantees on several natural examples: for instance, linear separators under log-concave distributions, or smooth boundary classifiers under smooth probability distributions.
We establish quantitative compactness estimates for finite difference schemes used to solve nonlinear conservation laws. These equations involve a flux function $f(k(x,t),u)$, where the coefficient $k(x,t)$ is $BV$-regular and may exhibit discontinuities along curves in the $(x,t)$ plane. Our approach, which is technically elementary, relies on a discrete interaction estimate and the existence of one entropy function. While the details are specifically outlined for the Lax-Friedrichs scheme, the same framework can be applied to other difference schemes. Notably, our compactness estimates are new even in the homogeneous case ($k\equiv 1$).
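For concreteness, the Lax-Friedrichs discretisation of $u_t+f(k(x,t),u)_x=0$ to which such estimates apply, in our indexing with $k_j^n=k(x_j,t_n)$:
\[
  u_j^{n+1}=\frac{u_{j-1}^{n}+u_{j+1}^{n}}{2}
  -\frac{\Delta t}{2\Delta x}\Bigl[f\bigl(k_{j+1}^{n},u_{j+1}^{n}\bigr)-f\bigl(k_{j-1}^{n},u_{j-1}^{n}\bigr)\Bigr],
\]
under the usual CFL restriction $\Delta t\,\sup_u|\partial_u f|\le\Delta x$.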
Scientists continue to develop increasingly complex mechanistic models to reflect their knowledge more realistically. Statistical inference using these models can be challenging since the corresponding likelihood function is often intractable and model simulation may be computationally burdensome. Fortunately, in many of these situations, it is possible to adopt a surrogate model or approximate likelihood function. It may be convenient to conduct Bayesian inference directly with the surrogate, but this can result in bias and poor uncertainty quantification. In this paper we propose a new method for adjusting approximate posterior samples to reduce bias and produce more accurate uncertainty quantification. We do this by optimizing a transform of the approximate posterior that maximizes a scoring rule. Our approach requires only a (fixed) small number of complex model simulations and is numerically stable. We demonstrate good performance of the new method on several examples of increasing complexity.
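An illustrative sketch of the idea (not the authors' implementation): approximate posterior samples are pushed through an affine transform whose parameters are chosen to optimise a proper scoring rule, here the energy score, against a small fixed budget of simulations from the expensive model; the toy simulator and the common-random-numbers trick below are assumptions made for the example.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
theta_approx = rng.normal(1.0, 0.5, size=(500, 1))  # samples from a (biased) surrogate posterior
y_obs = 2.0                                          # observed summary statistic

# Fixed, small simulation budget; simulator noise is drawn once (common random numbers)
# so that the objective below is deterministic and the optimisation is stable.
idx = rng.choice(len(theta_approx), size=30, replace=False)
noise = rng.normal(0.0, 0.1, size=(30, 1))

def energy_score(sims, y):
    # Negatively oriented energy score: smaller means the simulated data match y better.
    term1 = np.mean(np.abs(sims - y))
    term2 = 0.5 * np.mean(np.abs(sims[:, None] - sims[None, :]))
    return term1 - term2

def objective(params):
    a, b = params
    adjusted = a * theta_approx[idx] + b             # affine adjustment of the sampled parameters
    sims = adjusted + noise                          # stand-in for 30 runs of the complex model
    return energy_score(sims.ravel(), y_obs)

a_hat, b_hat = minimize(objective, x0=[1.0, 0.0], method="Nelder-Mead").x
theta_adjusted = a_hat * theta_approx + b_hat        # bias-corrected posterior samples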
In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. In combination, these algorithms enable each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches with no knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
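A highly simplified sketch of the exploration idea, under our own assumptions rather than the paper's four algorithms: an agent scores neighbours per subtask type with a bandit-style update and explores more when its recent rewards fall short of the best it has observed.

import random
from collections import defaultdict

class AllocatorAgent:
    def __init__(self, neighbours, lr=0.1):
        self.neighbours = list(neighbours)
        self.q = defaultdict(float)        # (subtask_type, neighbour) -> estimated value
        self.lr = lr
        self.best_seen = 1e-9              # best reward observed so far (avoids divide-by-zero)
        self.recent = []                   # sliding window of recent rewards

    def epsilon(self):
        # Explore more when recent performance is far from the best seen; explore little otherwise.
        if not self.recent:
            return 1.0
        gap = 1.0 - (sum(self.recent) / len(self.recent)) / self.best_seen
        return min(1.0, max(0.05, gap))

    def choose(self, subtask_type):
        if random.random() < self.epsilon():
            return random.choice(self.neighbours)
        return max(self.neighbours, key=lambda n: self.q[(subtask_type, n)])

    def update(self, subtask_type, neighbour, reward):
        key = (subtask_type, neighbour)
        self.q[key] += self.lr * (reward - self.q[key])  # incremental value update
        self.best_seen = max(self.best_seen, reward)
        self.recent = (self.recent + [reward])[-20:]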