Attention towards the characteristics of food products, such as nutritional properties and traceability, has risen substantially in recent years. Consequently, we are witnessing an increased demand for modern tools to monitor, analyse, and assess food quality and authenticity. Within this framework, an essential set of data collection techniques is provided by vibrational spectroscopy. In fact, methods such as Fourier-transform near-infrared and mid-infrared spectroscopy have often been exploited to analyse different foodstuffs. Nonetheless, existing statistical methods often struggle with the challenges presented by spectral data, such as their high dimensionality paired with strong relationships among the wavelengths. Therefore, the definition of proper statistical procedures accounting for the peculiarities of spectroscopy data is paramount. In this work, motivated by two dairy science applications, we propose an adaptive functional regression framework for spectroscopy data. The method stems from the trend filtering literature, allowing the definition of a highly flexible and adaptive estimator able to handle different degrees of smoothness. We provide a fast optimization procedure that is suitable for both Gaussian and non-Gaussian scalar responses and allows for the inclusion of scalar covariates. Moreover, we develop inferential procedures for both the functional and the scalar components, thus enhancing not only the interpretability of the results but also their usability in real-world scenarios. The method is applied to two sets of MIR spectroscopy data, providing excellent results when predicting milk chemical composition and cows' dietary treatments. Moreover, the developed inferential routine provides relevant insights, potentially paving the way for a richer interpretation and a better understanding of the impact of specific wavelengths on milk features.
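A minimal sketch of the kind of estimator the abstract describes: scalar-on-function regression where the coefficient curve over wavelengths carries a trend-filtering ($\ell_1$-on-differences) penalty. The data sizes, wavelength grid, penalty order, and tuning parameter below are illustrative assumptions, not the paper's implementation, which also handles non-Gaussian responses and scalar covariates.

```python
# Illustrative sketch (not the paper's estimator): scalar-on-function regression
# with a trend-filtering penalty on the coefficient curve over wavelengths.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p = 120, 300                       # samples x wavelengths (hypothetical sizes)
grid = np.linspace(900, 5000, p)      # wavenumber grid (illustrative)
X = rng.normal(size=(n, p))           # spectra (placeholder for real MIR data)
beta_true = np.where((grid > 1700) & (grid < 1800), 0.5, 0.0)  # piecewise-flat effect
y = X @ beta_true / p + rng.normal(scale=0.1, size=n)

order = 2                             # penalize second-order differences
D = np.diff(np.eye(p), n=order, axis=0)   # (p - order) x p difference operator
lam = 1.0                             # tuning parameter (would be chosen by CV)

beta = cp.Variable(p)
loss = cp.sum_squares(y - (X @ beta) / p)   # Riemann-type functional inner product
penalty = lam * cp.norm1(D @ beta)          # trend-filtering penalty
cp.Problem(cp.Minimize(loss + penalty)).solve()

beta_hat = beta.value                 # estimated coefficient function on the grid
```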
Conformal inference is a fundamental and versatile tool that provides distribution-free guarantees for many machine learning tasks. We consider the transductive setting, where decisions are made on a test sample of $m$ new points, giving rise to $m$ conformal $p$-values. While classical results only concern their marginal distribution, we show that their joint distribution follows a P\'olya urn model, and we establish a concentration inequality for their empirical distribution function. The results hold for arbitrary exchangeable scores, including {\it adaptive} ones that can use the covariates of the test and calibration samples at the training stage for increased accuracy. We demonstrate the usefulness of these theoretical results through uniform, in-probability guarantees for two machine learning tasks of current interest: interval prediction for transductive transfer learning and novelty detection based on two-class classification.
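For reference, a minimal sketch of the objects studied here: split-conformal $p$-values for $m$ test points computed from $n$ calibration scores. The scores below are random placeholders; the paper's results allow arbitrary exchangeable, possibly adaptive, scores.

```python
# Minimal sketch: split-conformal p-values for m test points from n calibration scores.
# The scores below are placeholders; the paper allows arbitrary exchangeable
# (possibly adaptive/transductive) scores.
import numpy as np

rng = np.random.default_rng(1)
cal_scores = rng.normal(size=500)        # nonconformity scores on the calibration set
test_scores = rng.normal(size=10)        # scores on the m test points

n = cal_scores.size
# p_j = (1 + #{i : s_i >= s_test_j}) / (n + 1); larger score = more "nonconforming"
p_values = (1 + (cal_scores[None, :] >= test_scores[:, None]).sum(axis=1)) / (n + 1)
print(p_values)
```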
We introduce a convergent hierarchy of lower bounds on the minimum value of a real homogeneous polynomial over the sphere. The main practical advantage of our hierarchy over the sum-of-squares (SOS) hierarchy is that the lower bound at each level of our hierarchy is obtained by a minimum eigenvalue computation, as opposed to the full semidefinite program (SDP) required at each level of SOS. In practice, this allows us to go to much higher levels than are computationally feasible for the SOS hierarchy. For both hierarchies, the underlying space at the $k$-th level is the set of homogeneous polynomials of degree $2k$. We prove that our hierarchy converges as $O(1/k)$ in the level $k$, matching the best-known convergence of the SOS hierarchy when the number of variables $n$ is less than the half-degree $d$ (the best-known convergence of SOS when $n \geq d$ is $O(1/k^2)$). More generally, we introduce a convergent hierarchy of minimum eigenvalue computations for minimizing the inner product between a real tensor and an element of the spherical Segre-Veronese variety, with similar convergence guarantees. As examples, we obtain hierarchies for computing the (real) tensor spectral norm, and for minimizing biquadratic forms over the sphere. Hierarchies of eigencomputations for more general constrained polynomial optimization problems are discussed.
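For intuition, the base-level bound behind eigenvalue relaxations of this kind can be stated directly; this is a standard observation, not the paper's full hierarchy, which tightens the bound as the level $k$ grows.

```latex
% Base-level bound (a standard observation, not the paper's full hierarchy):
% if $f$ is homogeneous of degree $2d$ and $A$ is any symmetric matrix with
% $f(x) = (x^{\otimes d})^{\top} A \, x^{\otimes d}$ for all $x \in \mathbb{R}^n$, then
% $\|x\|_2 = 1$ implies $\|x^{\otimes d}\|_2 = \|x\|_2^d = 1$, so
\[
  \min_{\|x\|_2 = 1} f(x)
  \;=\; \min_{\|x\|_2 = 1} (x^{\otimes d})^{\top} A \, x^{\otimes d}
  \;\ge\; \lambda_{\min}(A).
\]
```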
We construct a bipartite generalization of Alon and Szegedy's nearly orthogonal vectors, thereby obtaining strong bounds for several extremal problems involving the Lov\'asz theta function, vector chromatic number, minimum semidefinite rank, nonnegative rank, and extension complexity of polytopes. In particular, we derive some general lower bounds for the vector chromatic number which may be of independent interest.
Generative diffusion models have achieved spectacular performance in many areas of generative modeling. While the fundamental ideas behind these models come from non-equilibrium physics, in this paper we show that many aspects of these models can be understood using the tools of equilibrium statistical mechanics. Using this reformulation, we show that generative diffusion models undergo second-order phase transitions corresponding to symmetry-breaking phenomena. We argue that this leads to a form of instability that lies at the heart of their generative capabilities and that can be described by a set of mean-field critical exponents. We conclude by analyzing recent work connecting diffusion models and associative memory networks in light of these thermodynamic formulations.
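A minimal numerical sketch of the symmetry-breaking picture, under simplifying assumptions and not reproducing the paper's analysis: for a symmetric two-point data distribution the time-$t$ marginals and score are available in closed form, and simulating the reverse SDE shows each trajectory committing to one of the two modes.

```python
# Illustrative sketch (not the paper's derivation): reverse diffusion for a symmetric
# two-point data distribution with exact score; each sampled trajectory "breaks
# symmetry" by committing to one of the two modes.
import numpy as np

def score(x, t):
    # Forward OU process dX = -X dt + sqrt(2) dW started from X0 = +/-1 w.p. 1/2:
    # p_t is a two-Gaussian mixture with means +/- e^{-t} and variance 1 - e^{-2t}.
    m, s2 = np.exp(-t), 1.0 - np.exp(-2.0 * t)
    return (m * np.tanh(m * x / s2) - x) / s2

rng = np.random.default_rng(2)
T, t_min, n_steps, n_samples = 5.0, 1e-3, 5000, 10000
dt = (T - t_min) / n_steps

x = rng.normal(size=n_samples)            # start near the stationary N(0, 1) law
t = T
for _ in range(n_steps):                  # Euler-Maruyama for the reverse-time SDE
    x = x + (x + 2.0 * score(x, t)) * dt + np.sqrt(2.0 * dt) * rng.normal(size=n_samples)
    t -= dt

print("fraction near +1:", np.mean(x > 0), "typical |x|:", np.mean(np.abs(x)))
```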
When multiple self-adaptive systems share the same environment and have common goals, they may coordinate their adaptations at runtime to avoid conflicts and to satisfy their goals. There are two approaches to coordination. (1) Logically centralized, where a supervisor has complete control over the individual self-adaptive systems. Such an approach is infeasible when the systems have different owners or administrative domains. (2) Logically decentralized, where coordination is achieved through direct interactions. Because the individual systems have control over the information they share, decentralized coordination accommodates multiple administrative domains. However, existing techniques do not account simultaneously for both local concerns, e.g., preferences, and shared concerns, e.g., conflicts, which may lead to goals not being achieved as expected. Our idea to address this shortcoming is to express both types of concerns within the same constraint optimization problem. We propose CoADAPT, a decentralized coordination technique introducing two types of constraints: preference constraints, expressing local concerns, and consistency constraints, expressing shared concerns. At runtime, the problem is solved in a decentralized way using distributed constraint optimization algorithms implemented by each self-adaptive system. As a first step in realizing CoADAPT, we focus in this work on the coordination of adaptation planning strategies, traditionally addressed only with centralized techniques. We show the feasibility of CoADAPT in an exemplar from cloud computing and analyze experimentally its scalability.
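To make the formulation concrete, here is a hypothetical toy solved centrally by enumeration, unlike CoADAPT's distributed constraint-optimization algorithms: two systems each pick a planning strategy, preference constraints encode local costs, and a consistency constraint penalizes conflicting joint choices. Strategy names and costs are invented for illustration.

```python
# Hypothetical toy illustrating the constraint-optimization formulation only;
# CoADAPT solves such problems with *distributed* constraint optimization,
# not the centralized enumeration used here.
from itertools import product

strategies = ["reactive", "proactive"]        # candidate planning strategies (illustrative)

# Preference constraints: each system's local cost for each strategy.
preference = {
    "systemA": {"reactive": 1.0, "proactive": 3.0},
    "systemB": {"reactive": 2.0, "proactive": 1.0},
}

# Consistency constraint: shared cost for joint choices that would conflict,
# e.g., both systems adapting the same shared resource at once.
def consistency_cost(a, b):
    return 5.0 if (a == "reactive" and b == "reactive") else 0.0

best = min(
    product(strategies, strategies),
    key=lambda ab: preference["systemA"][ab[0]]
    + preference["systemB"][ab[1]]
    + consistency_cost(*ab),
)
print("coordinated choice:", best)
```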
In recent years, research interest in personalised treatments has been growing. However, treatment effect heterogeneity and possibly time-varying treatment effects are still often overlooked in clinical studies. Statistical tools are needed for the identification of treatment response patterns, taking into account that treatment response is not constant over time. We aim to provide an innovative method to obtain dynamic treatment effect phenotypes on a time-to-event outcome, conditioned on a set of relevant effect modifiers. The proposed method does not require the assumption of proportional hazards for the treatment effect, which is rarely realistic. We propose a spline-based survival neural network, inspired by the Royston-Parmar survival model, to estimate time-varying conditional treatment effects. We then exploit the functional nature of the resulting estimates and apply functional clustering to the treatment effect curves in order to identify different patterns of treatment effects. The application that motivated this work is the discontinuation of treatment with mineralocorticoid receptor antagonists (MRAs) in patients with heart failure, where there is no clear evidence as to the patients for whom discontinuing treatment is the safest choice and, conversely, those for whom it leads to a higher risk of adverse events. The data come from an electronic health record database. A simulation study was performed to assess the performance of the spline-based neural network and the stability of the treatment response phenotyping procedure. We provide a novel method to inform individualised medical decisions by characterising subject-specific treatment responses over time.
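For context, the Royston-Parmar form that inspires the network models the log cumulative hazard as a spline in log time, and a time-varying conditional treatment effect can be read off as a contrast between treated and untreated curves. The display below is a schematic statement of that idea, not the paper's exact architecture or effect measure.

```latex
% Schematic Royston-Parmar-type model (context for the network, not its exact form):
% the log cumulative hazard is a restricted cubic spline $s(\cdot)$ of log time,
\[
  \log H(t \mid x, a) \;=\; s\bigl(\log t;\, \gamma(x, a)\bigr),
  \qquad S(t \mid x, a) \;=\; \exp\{-H(t \mid x, a)\},
\]
% and a time-varying conditional treatment effect can be summarized, e.g., as
\[
  \tau(t \mid x) \;=\; \log H(t \mid x, a{=}1) \;-\; \log H(t \mid x, a{=}0),
\]
% yielding per-subject effect curves that can then be clustered functionally.
```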
The prevailing statistical approach to analyzing persistence diagrams is concerned with filtering out topological noise. In this paper, we adopt a different viewpoint and aim at estimating the actual distribution of a random persistence diagram, which captures both topological signal and noise. To this end, Chazal and Divol (2019) proved that, under general conditions, the expected value of a random persistence diagram is a measure admitting a Lebesgue density, called the persistence intensity function. In this paper, we are concerned with estimating the persistence intensity function and a novel, normalized version of it -- called the persistence density function. We present a class of kernel-based estimators based on an i.i.d. sample of persistence diagrams and derive estimation rates in the supremum norm. As a direct corollary, we obtain uniform consistency rates for estimating linear representations of persistence diagrams, including Betti numbers and persistence surfaces. Interestingly, the persistence density function delivers stronger statistical guarantees.
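A minimal sketch of the kind of kernel-based estimator considered: average a smoothing kernel over the points of the observed diagrams. The Gaussian kernel, fixed bandwidth, and toy diagrams below are illustrative assumptions; the paper studies a general class and derives sup-norm rates.

```python
# Minimal sketch of a kernel-based estimator of the persistence intensity function:
# average a Gaussian kernel over the (birth, death) points of N i.i.d. diagrams.
# Kernel choice, bandwidth, and the toy diagrams are illustrative assumptions.
import numpy as np

def intensity_estimate(diagrams, grid, h=0.05):
    """diagrams: list of (k_i, 2) arrays of (birth, death) points;
    grid: (G, 2) array of evaluation points; h: bandwidth."""
    est = np.zeros(len(grid))
    for D in diagrams:
        diff = grid[:, None, :] - D[None, :, :]                      # (G, k_i, 2)
        k = np.exp(-(diff ** 2).sum(-1) / (2 * h ** 2)) / (2 * np.pi * h ** 2)
        est += k.sum(axis=1)
    return est / len(diagrams)                                       # average over diagrams

rng = np.random.default_rng(3)
diagrams = [np.sort(rng.uniform(0, 1, size=(rng.integers(5, 15), 2)), axis=1)
            for _ in range(50)]                                      # toy diagrams, birth <= death
xs = np.linspace(0, 1, 30)
grid = np.array([(b, d) for b in xs for d in xs if d >= b])
rho_hat = intensity_estimate(diagrams, grid)
```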
Numerous applications in the field of molecular communications (MC), such as healthcare systems, are often event-driven. The conventional Shannon capacity may not be the appropriate metric for assessing performance in such cases. We propose the identification (ID) capacity as an alternative metric. In particular, we consider randomized identification (RI) over the discrete-time Poisson channel (DTPC), which is typically used as a model for MC systems that utilize molecule-counting receivers. In the ID paradigm, the receiver does not aim to decode the transmitted message; rather, it only wants to determine whether a message of particular significance to it has been sent. In contrast to Shannon transmission codes, the size of ID codes for a discrete memoryless channel (DMC) grows doubly exponentially fast with the blocklength if randomized encoding is used. In this paper, we derive the capacity formula for RI over the DTPC subject to some peak and average power constraints. Furthermore, we analyze the case of the state-dependent DTPC.
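As background, the classical result of Ahlswede and Dueck for DMCs (not the DTPC formula derived in this paper) makes the doubly exponential scaling precise.

```latex
% Classical background for DMCs (Ahlswede-Dueck), not the paper's DTPC result:
% with randomized encoding, the maximal number of identifiable messages scales as
\[
  N(n, \lambda_1, \lambda_2) \;\approx\; 2^{2^{nC}},
\]
% so the identification capacity, measured on the doubly exponential scale
% $\tfrac{1}{n}\log\log N$, equals the Shannon capacity $C$ of the channel.
```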
Confounder selection, namely choosing a set of covariates to control for confounding between a treatment and an outcome, is arguably the most important step in the design of observational studies. Previous methods, such as Pearl's celebrated back-door criterion, typically require pre-specifying a causal graph, which can often be difficult in practice. We propose an interactive procedure for confounder selection that does not require pre-specifying the graph or the set of observed variables. This procedure iteratively expands the causal graph by finding what we call "primary adjustment sets" for a pair of possibly confounded variables. This can be viewed as inverting a sequence of latent projections of the underlying causal graph. Structural information in the form of primary adjustment sets is elicited from the user, bit by bit, until either a set of covariates is found to control for confounding or it can be determined that no such set exists. Other information, such as the causal relations between confounders, is not required by the procedure. We show that if the user correctly specifies the primary adjustment sets in every step, our procedure is both sound and complete.
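For reference, Pearl's back-door criterion mentioned above can be stated as follows; it is included here as textbook background, since the proposed procedure is precisely designed to avoid requiring the full graph needed to check it.

```latex
% Pearl's back-door criterion (background, not part of the proposed procedure):
% a set $Z$ satisfies the criterion relative to $(X, Y)$ if (i) no node in $Z$ is a
% descendant of $X$, and (ii) $Z$ blocks every path between $X$ and $Y$ that
% contains an arrow into $X$. Confounding is then controlled by adjustment:
\[
  P\bigl(y \mid \mathrm{do}(x)\bigr) \;=\; \sum_{z} P(y \mid x, z)\, P(z).
\]
```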
Feature attribution is a fundamental task in both machine learning and data analysis, which involves determining the contribution of individual features or variables to a model's output. This process helps identify the most important features for predicting an outcome. The history of feature attribution methods can be traced back to Generalized Additive Models (GAMs), which extend linear regression models by incorporating non-linear relationships between dependent and independent variables. In recent years, gradient-based methods and surrogate models have been applied to unravel complex Artificial Intelligence (AI) systems, but these methods have limitations. GAMs tend to achieve lower accuracy, gradient-based methods can be difficult to interpret, and surrogate models often suffer from stability and fidelity issues. Furthermore, most existing methods do not consider users' contexts, which can significantly influence their preferences. To address these limitations and advance the current state of the art, we define a novel feature attribution framework called Context-Aware Feature Attribution Through Argumentation (CA-FATA). Our framework harnesses the power of argumentation by treating each feature as an argument that can either support, attack, or neutralize a prediction. Additionally, CA-FATA formulates feature attribution as an argumentation procedure, and each computation has explicit semantics, which makes it inherently interpretable. CA-FATA also easily integrates side information, such as users' contexts, resulting in more accurate predictions.
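A hypothetical toy of the framing only: each feature acts as an argument labelled as supporting, attacking, or neutral towards a candidate prediction, and the labels double as the attribution. The scoring rule, thresholds, and feature names below are placeholders, not CA-FATA's actual argumentation semantics.

```python
# Hypothetical toy of the framing only: each feature is treated as an argument that
# supports, attacks, or neutralizes a candidate prediction. The thresholds and the
# aggregation rule are placeholders, not CA-FATA's actual argumentation semantics.
def argue(features, weights, threshold=0.1):
    """features/weights: dicts mapping feature name -> value / signed importance."""
    stances = {}
    for name, value in features.items():
        contribution = weights.get(name, 0.0) * value
        if contribution > threshold:
            stances[name] = ("supports", contribution)
        elif contribution < -threshold:
            stances[name] = ("attacks", contribution)
        else:
            stances[name] = ("neutral", contribution)
    verdict = sum(c for _, c in stances.values()) > 0
    return stances, verdict

stances, verdict = argue({"price": 0.8, "distance": -0.5, "rating": 0.05},
                         {"price": -1.0, "distance": 0.2, "rating": 1.0})
print(stances, "->", "recommend" if verdict else "do not recommend")
```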