We develop and study a statistical test to detect synchrony in spike trains. Our test is based on the number of coincidences between two trains of spikes. The data are supplied as \(n\) pairs (assumed to be independent) of spike trains, and the aim is to assess whether the two trains within a pair are themselves independent. Our approach builds on previous results of Albert et al. (2015, 2019) and Kim et al. (2022), which we extend to our setting, focusing on the construction of a non-asymptotic criterion ensuring the detection of synchronization in the framework of permutation tests. Our criterion ensures control of the Type II error, while the Type I error is controlled by construction. We illustrate our results within two classical models of interacting neurons: the jittering Poisson model, and Hawkes processes with \(M\) components interacting in a mean-field framework and evolving in a stationary regime. For the latter model, we obtain a lower bound on the sample size \(n\) necessary to detect the dependency between two neurons.
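As a rough illustration of the coincidence-based permutation approach (a generic sketch under assumed conventions, not the authors' code), one can count near-coincidences within a window \(\delta\) and build the null distribution by permuting the second train across the \(n\) trials:

```python
import numpy as np

def coincidence_count(s1, s2, delta):
    """Number of pairs (t, u), t in s1 and u in s2, with |t - u| <= delta."""
    return int(sum(np.sum(np.abs(s2 - t) <= delta) for t in s1))

def synchrony_permutation_test(pairs, delta, n_perm=1000, seed=0):
    """pairs: list of (spike_train_1, spike_train_2) arrays, one per trial."""
    rng = np.random.default_rng(seed)
    obs = sum(coincidence_count(s1, s2, delta) for s1, s2 in pairs)
    null = np.empty(n_perm)
    for b in range(n_perm):
        # shuffling the second trains across trials breaks within-pair dependence
        perm = rng.permutation(len(pairs))
        null[b] = sum(coincidence_count(pairs[i][0], pairs[perm[i]][1], delta)
                      for i in range(len(pairs)))
    # one-sided p-value: synchrony inflates the coincidence count
    return (1 + np.sum(null >= obs)) / (n_perm + 1)
```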
Regression models that incorporate smooth functions of predictor variables to explain the relationship with a response variable have gained widespread use and proved successful in various applications. By incorporating smooth functions of predictors, these models can capture complex relationships between the response and predictors while still allowing interpretation of the results. When exploring such relationships, it is not uncommon to assume that they adhere to certain shape constraints, such as monotonicity or convexity. The scam package for R has become a popular tool for fitting exponential-family generalized additive models with shape restrictions on smooths. This paper extends the existing framework of shape-constrained generalized additive models (SCAM) to accommodate smooth interactions of covariates, linear functionals of shape-constrained smooths, and the incorporation of residual autocorrelation. The methods described in this paper are implemented in the recent version of the package scam, available on the Comprehensive R Archive Network (CRAN).
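scam itself is an R package, but the core trick behind a monotone smooth can be sketched in a few lines of Python: reparameterise the B-spline coefficients through exponentials so they are non-decreasing by construction (a simplified stand-in with an ad hoc penalty, not scam's actual fitting procedure):

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.log1p(10 * x) + rng.normal(0, 0.1, 200)         # monotone signal + noise

k = 3                                                   # cubic B-splines
t = np.r_[[0.0] * k, np.linspace(0, 1, 12), [1.0] * k]  # open knot vector
B = BSpline.design_matrix(x, t, k).toarray()
m = B.shape[1]

def beta_of(btilde):
    # non-decreasing coefficients imply a non-decreasing spline
    return btilde[0] + np.r_[0.0, np.cumsum(np.exp(btilde[1:]))]

def objective(btilde, lam=1e-2):
    r = y - B @ beta_of(btilde)
    return r @ r + lam * np.sum(np.diff(btilde) ** 2)   # crude roughness penalty

fit = minimize(objective, np.zeros(m), method="L-BFGS-B")
y_hat = B @ beta_of(fit.x)                              # monotone fitted curve
```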
Benchmark instances for the unbounded knapsack problem are typically generated according to specific criteria within a given constant range $R$; such instances can be referred to as the unbounded knapsack problem with bounded coefficients (UKPB). To increase the difficulty of solving these instances, the knapsack capacity $C$ is usually set to a very large value. An exact algorithm whose time and space complexity are both free of the capacity coefficient $C$ is therefore highly desirable. In this paper, we propose an exact algorithm with time complexity $O(R^4)$ and space complexity $O(R^3)$. The algorithm first divides the multiset $N$ into two multisubsets, $N_1$ and $N_2$, based on the profit density of their types. For the multisubset $N_2$, composed of types with profit density lower than that of the maximum-profit-density type, we use a recent branch-and-bound (B\&B) result by Dey et al. (Math. Prog., pp 569-587, 2023) to determine the maximum selection number for types in $N_2$, and then employ the Unbounded-DP algorithm to solve exactly for the types in $N_2$. For the multisubset $N_1$, composed of the maximum-profit-density type and its counterparts with the same profit density, we transform the problem into a linear Diophantine equation and leverage relevant conclusions from the Frobenius problem to solve it efficiently. Notably, the proof techniques the algorithm requires are largely covered in the first-year mathematics curriculum, which makes them convenient for subsequent researchers to grasp.
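For contrast, the textbook Unbounded-DP recursion referenced above runs in $O(|N| \cdot C)$ time and $O(C)$ space, which is precisely the dependence on the capacity $C$ that the proposed algorithm avoids; a minimal sketch:

```python
def unbounded_knapsack_dp(weights, profits, C):
    """dp[c] = best profit achievable with capacity c; each item type reusable."""
    dp = [0] * (C + 1)
    for c in range(1, C + 1):
        for w, p in zip(weights, profits):
            if w <= c and dp[c - w] + p > dp[c]:
                dp[c] = dp[c - w] + p
    return dp[C]

# e.g. unbounded_knapsack_dp([3, 5], [4, 7], 11) == 15  (items 3 + 3 + 5)
```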
When complex Bayesian models exhibit implausible behaviour, one solution is to assemble available information into an informative prior. Challenges arise because prior information is often available only for the observable quantity, or for some model-derived marginal quantity, rather than directly pertaining to the natural parameters of the model. We propose a method for translating available prior information, in the form of an elicited distribution for the observable or model-derived marginal quantity, into an informative joint prior. Our approach starts from a parametric class of prior distributions with as-yet-undetermined hyperparameters and minimises the difference between the supplied elicited distribution and the corresponding prior predictive distribution. We employ a global, multi-stage Bayesian optimisation procedure to locate optimal values for the hyperparameters. Three examples illustrate our approach: a cure-fraction survival model, where censoring implies that the observable quantity is a priori a mixed discrete/continuous quantity; a setting in which prior information pertains to $R^{2}$, a model-derived quantity; and a nonlinear regression model.
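A toy sketch of the underlying idea, with an assumed normal-normal model and plain Nelder-Mead in place of the multi-stage Bayesian optimisation described above: choose hyperparameters so that the simulated prior predictive density is close to the elicited target.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

y_grid = np.linspace(-10, 14, 241)
dy = y_grid[1] - y_grid[0]
target = stats.norm(loc=2.0, scale=3.0).pdf(y_grid)   # elicited density for the observable

def prior_predictive_pdf(hyper, n_draws=4000):
    mu0, log_sd0 = hyper
    rng = np.random.default_rng(1)                    # common random numbers keep the objective smooth
    theta = rng.normal(mu0, np.exp(log_sd0), n_draws) # draw from the candidate prior
    y = rng.normal(theta, 1.0)                        # push draws through the likelihood
    return stats.gaussian_kde(y)(y_grid)

def discrepancy(hyper):
    return np.sum((prior_predictive_pdf(hyper) - target) ** 2) * dy

opt = minimize(discrepancy, x0=[0.0, 0.0], method="Nelder-Mead")
mu0_hat, sd0_hat = opt.x[0], np.exp(opt.x[1])
```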
We investigate a convective Brinkman--Forchheimer problem coupled with a heat transfer equation. The investigated model considers thermal diffusion and viscosity depending on the temperature. We prove the existence of a solution without restriction on the data and uniqueness when the solution is slightly smoother and the data is suitably restricted. We propose a finite element discretization scheme for the considered model and derive convergence results and a priori error estimates. Finally, we illustrate the theory with numerical examples.
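For orientation, a standard form of such a coupled system (our reading of the setting, with Darcy coefficient \(\alpha\), Forchheimer exponent \(r\), temperature-dependent viscosity \(\nu(\theta)\) and thermal diffusivity \(\kappa(\theta)\); the paper's exact formulation may differ) is
\[
(u\cdot\nabla)u - \operatorname{div}(\nu(\theta)\nabla u) + \alpha u + \beta |u|^{r-2}u + \nabla p = f, \qquad \operatorname{div} u = 0,
\]
\[
u\cdot\nabla\theta - \operatorname{div}(\kappa(\theta)\nabla\theta) = g,
\]
for velocity \(u\), pressure \(p\), and temperature \(\theta\).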
Generative diffusion models have achieved spectacular performance in many areas of generative modeling. While the fundamental ideas behind these models come from non-equilibrium physics, variational inference, and stochastic calculus, in this paper we show that many aspects of these models can be understood using the tools of equilibrium statistical mechanics. Using this reformulation, we show that generative diffusion models undergo second-order phase transitions corresponding to symmetry-breaking phenomena. We show that these phase transitions are always in a mean-field universality class, as they result from a self-consistency condition in the generative dynamics. We argue that the critical instability arising from these phase transitions lies at the heart of the models' generative capabilities, which are characterized by a set of mean-field critical exponents. Furthermore, using the statistical physics of disordered systems, we show that memorization can be understood as a form of critical condensation corresponding to a disordered phase transition. Finally, we show that the dynamic equation of the generative process can be interpreted as a stochastic adiabatic transformation that minimizes the free energy while keeping the system in thermal equilibrium.
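For context, the "dynamic equation of the generative process" in score-based diffusion models is standardly the reverse-time SDE (a background fact of the field, not this paper's contribution): if the forward noising process is \(dx = f(x,t)\,dt + g(t)\,dW_t\), generation runs
\[
dx = \left[ f(x,t) - g(t)^2 \nabla_x \log p_t(x) \right] dt + g(t)\, d\bar{W}_t
\]
backwards in time, where \(p_t\) is the marginal density of the forward process and \(\bar{W}_t\) a reverse-time Wiener process.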
In many communication contexts, the capabilities of the involved actors cannot be known beforehand, whether the recipient is a cell, a plant, an insect, or even a life form unknown on Earth. Regardless of the recipient, the space and time scales of a message could be too fast, too slow, too large, or too small, and the message may never be decoded. It therefore pays to devise a way to encode messages agnostic of space and time scales. We propose the use of fractal functions as self-executable infinite-frequency carriers for sending messages, given their properties of structural self-similarity and scale invariance; we call this `fractal messaging'. Starting from a spatial embedding, we introduce a framework for space-time scale-free messaging: a message is encoded so that it can be decoded at several spatio-temporal scales. The core idea of the framework proposed herein is to encode a binary message as waves along infinitely many frequencies and amplitudes (following power-like distributions), transmit the message, and then decode and reproduce it. To do so, the components of the Weierstrass function, a classical fractal, are used as carriers of the message: each component has its amplitude modulated to embed the binary stream, allowing for a space- and time-agnostic approach to messaging.
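A toy sketch of this encoding (the parameters \(a\), \(b\), the modulation depth, and the projection-based decoder are all illustrative assumptions, not the paper's scheme): bit \(n\) scales the amplitude of the \(n\)-th component \(a^n \cos(b^n \pi x)\) up or down, and each bit is recovered by projecting the received signal onto its carrier.

```python
import numpy as np

a, b = 0.5, 3                 # truncated-Weierstrass parameters (assumed; integer b
x = np.linspace(0.0, 2.0,     # makes the components orthogonal on [0, 2))
                2**15, endpoint=False)

def encode(bits, mod=0.25):
    sig = np.zeros_like(x)
    for n, bit in enumerate(bits):
        amp = a**n * (1 + mod if bit else 1 - mod)   # amplitude modulation of component n
        sig += amp * np.cos(b**n * np.pi * x)
    return sig

def decode(sig, n_bits):
    out = []
    for n in range(n_bits):
        carrier = np.cos(b**n * np.pi * x)
        amp = (sig @ carrier) / (carrier @ carrier)  # least-squares amplitude of component n
        out.append(int(amp > a**n))                  # compare with the unmodulated amplitude
    return out

bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode(encode(bits), len(bits)) == bits
```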
This work presents a comparative review and classification of several well-known thermodynamically consistent models of hydrogel behavior in a large-deformation setting, specifically focusing on solvent absorption/desorption and its impact on mechanical deformation and network swelling. The discussion addresses formulation aspects, the general mathematical classification of the governing equations, and numerical implementation issues based on the finite element method. The theories are presented in a unified framework demonstrating that, although not evident in some cases, all of them follow equivalent thermodynamic arguments. A detailed numerical analysis is carried out in which Taylor-Hood elements are employed in the spatial discretization to satisfy the inf-sup condition and prevent spurious numerical oscillations. The resulting discrete problems are solved using the FEniCS platform through consistent variational formulations, employing both monolithic and staggered approaches. We conduct benchmark tests on various hydrogel structures, demonstrating that the major differences arise from the chosen volumetric response of the hydrogel. The significance of this choice is frequently underestimated in the state-of-the-art literature, yet it has substantial implications for the resulting hydrogel behavior.
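As a sketch of the discretisation choice, an inf-sup stable Taylor-Hood (P2-P1) mixed space in legacy FEniCS/DOLFIN can be set up as follows (field names are illustrative; in the hydrogel models the scalar unknown is typically a pressure or chemical potential):

```python
from dolfin import (UnitSquareMesh, VectorElement, FiniteElement,
                    MixedElement, FunctionSpace)

mesh = UnitSquareMesh(32, 32)
P2 = VectorElement("Lagrange", mesh.ufl_cell(), 2)  # displacement: piecewise quadratic
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)  # scalar field: piecewise linear
W = FunctionSpace(mesh, MixedElement([P2, P1]))     # mixed Taylor-Hood space
```

A monolithic scheme solves for both fields in W at once; a staggered scheme would instead alternate solves on the two separate subspaces.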
Reconstructing a dynamic object with affine motion in computerized tomography (CT) leads to motion artifacts if the motion is not taken into account. In most cases, the actual motion is neither known nor easily determined, so the respective model describing the CT measurement is incomplete. The iterative RESESOP-Kaczmarz method can, under certain conditions and by exploiting the modelling error, reconstruct dynamic objects at different time points even if the exact motion is unknown. However, the method is very time-consuming. To speed up the reconstruction process and obtain better results, we combine three steps: (1) RESESOP-Kaczmarz with only a few iterations reconstructs the object at different time points; (2) the motion is estimated via landmark detection, e.g. using deep learning; (3) the estimated motion is integrated into the reconstruction process, allowing the use of dynamic filtered backprojection. We give a short review of all methods involved and present numerical results as a proof of principle.
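For reference, a plain Kaczmarz sweep for a linear system \(Ax = b\) looks as follows (the RESESOP variant additionally uses sequential subspace projections onto stripes that encode the modelling error, which this sketch omits):

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=10, x0=None):
    """Cyclic Kaczmarz iteration for Ax = b; assumes no zero rows in A."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    row_norms = np.einsum("ij,ij->i", A, A)
    for _ in range(n_sweeps):
        for i in range(m):
            # project x onto the hyperplane <a_i, x> = b_i
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```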
We introduce a multiple testing procedure that controls the median of the proportion of false discoveries (FDP) in a flexible way. The procedure only requires a vector of p-values as input and is comparable to the Benjamini-Hochberg method, which controls the mean of the FDP. Our method allows freely choosing one or several values of $\alpha$ after seeing the data, unlike Benjamini-Hochberg, which can be very liberal when $\alpha$ is chosen post hoc. We prove these claims and illustrate them with simulations. Our procedure is inspired by a popular estimator of the total number of true null hypotheses. We adapt this estimator to provide simultaneous median-unbiased estimators of the FDP, valid in finite samples. This simultaneity allows for the claimed flexibility. Our approach does not assume independence. The time complexity of our method is linear in the number of hypotheses, once the p-values are sorted.
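For comparison, the Benjamini-Hochberg baseline mentioned above can be sketched in a few lines (this is the standard mean-FDP-controlling procedure, not the proposed median-FDP method):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Reject H_(1),...,H_(k), where k = max{i : p_(i) <= alpha * i / m}."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected
```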
Negation is an important perspective in knowledge representation. Existing negation methods are mainly applied in probability theory, evidence theory, and complex evidence theory. As a generalization of evidence theory, random permutation set theory may represent information more precisely. However, how to apply the concept of negation in random permutation set theory has not yet been studied. In this paper, the negation of the permutation mass function is proposed. Moreover, the convergence of the proposed negation method is verified, and the trends of uncertainty and dissimilarity after each negation operation are investigated. Numerical examples are used to demonstrate the rationality of the proposed method.
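For context, the classical negation of a probability distribution (Yager's negation, the probability-theory case alluded to above) reallocates each outcome's complementary mass uniformly, and iterating it converges to the uniform, maximum-uncertainty distribution. A minimal sketch of that baseline (the proposed negation of a permutation mass function generalises this idea and is not shown here):

```python
import numpy as np

def yager_negation(p):
    """Negation of a probability distribution: p_i -> (1 - p_i) / (n - 1)."""
    p = np.asarray(p, dtype=float)
    return (1.0 - p) / (p.size - 1)

p = np.array([0.7, 0.2, 0.1])
for _ in range(5):
    p = yager_negation(p)
    print(p)   # converges toward the uniform distribution [1/3, 1/3, 1/3]
```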