Time series of counts arising in various applications are often overdispersed, meaning that their variance is much larger than their mean. This paper proposes a novel variable selection approach for such data. Our approach models them using sparse negative binomial GLARMA models. It combines estimation of the autoregressive moving average (ARMA) coefficients and the overdispersion parameter of GLARMA models with regularised variable selection on the regression coefficients of the Generalized Linear Model (GLM) part. We describe our three-step estimation procedure, which is implemented in the NBtsVarSel package. We evaluate the performance of the approach on synthetic data and compare it to other methods. Additionally, we apply our approach to RNA sequencing data. Our approach is computationally efficient and outperforms other methods in selecting variables, i.e. in recovering the non-null regression coefficients.
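To give a concrete, hedged sense of the variable-selection step only (ignoring the ARMA recursion of the GLARMA model and not using the NBtsVarSel package itself), an L1-penalised negative binomial regression already illustrates how non-null regression coefficients can be recovered from overdispersed counts; all names and data below are made up.

```python
# Hedged sketch of the variable-selection idea only: L1-penalised negative
# binomial regression on simulated overdispersed counts. This ignores the
# ARMA part of the GLARMA model and does not use the NBtsVarSel package.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 300, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.0, -0.8, 0.5]                  # only three non-null coefficients
mu = np.exp(0.3 + X @ beta_true)
y = rng.negative_binomial(2, 2 / (2 + mu))        # overdispersed counts with mean mu

# Penalise only the regression coefficients, not the intercept or dispersion.
exog = sm.add_constant(X)
penalty = np.r_[0.0, np.full(p, 2.0), 0.0]        # last slot: NB dispersion parameter
fit = sm.NegativeBinomial(y, exog).fit_regularized(method="l1", alpha=penalty, disp=0)
selected = np.flatnonzero(np.abs(fit.params[1:-1]) > 1e-6)
print("selected covariates:", selected)
```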
We derive the dynamical equation of the boundary vorticity, which shows that the viscosity at a solid wall is effectively doubled, as if the fluid became more viscous at the boundary. For certain viscous flows, the boundary vorticity can be determined via the dynamical equation up to bounded errors for all time, without knowledge of the details of the main stream flow. We then validate the dynamical equation by carrying out stochastic direct numerical simulations (i.e. the random vortex method for wall-bounded incompressible viscous flows) with two different ways of updating the boundary vorticity: one using mollifiers of the Biot-Savart singular integral kernel, the other using the dynamical equation.
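As a minimal, hedged illustration of what "mollifying the Biot-Savart singular integral kernel" can mean in a random vortex computation (the paper's specific mollifier and wall-bounded setting may differ), the sketch below evaluates a Krasny-type regularised 2D kernel at a point.

```python
# Illustrative sketch only: velocity induced at a point by vortex particles
# through a mollified 2D Biot-Savart kernel. A Krasny-type blob regularisation
# is assumed here; the mollifier used in the paper may differ.
import numpy as np

def biot_savart_velocity(x, particles, circulations, delta=0.05):
    """Velocity at x induced by point vortices with a mollified kernel.

    Unmollified kernel: K(r) = (1 / (2*pi*|r|^2)) * (-r_y, r_x); the
    singularity 1/|r|^2 is replaced by 1/(|r|^2 + delta^2).
    """
    r = x - particles                                  # (N, 2) separation vectors
    r2 = np.sum(r**2, axis=1)                          # squared distances
    factor = circulations / (2.0 * np.pi * (r2 + delta**2))
    u = -factor * r[:, 1]                              # x-component contributions
    v = factor * r[:, 0]                               # y-component contributions
    return np.array([u.sum(), v.sum()])

# Example: velocity at the origin induced by a counter-rotating vortex pair.
particles = np.array([[0.5, 0.0], [-0.5, 0.0]])
circulations = np.array([1.0, -1.0])
print(biot_savart_velocity(np.array([0.0, 0.0]), particles, circulations))
```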
Counterfactual prediction methods are required when a model will be deployed in a setting where treatment policies differ from the setting where the model was developed, or when the prediction question is explicitly counterfactual. However, estimating and evaluating counterfactual prediction models is challenging because one does not observe the full set of potential outcomes for all individuals. Here, we discuss how to tailor a model to a counterfactual estimand, how to assess the model's performance, and how to perform model and tuning-parameter selection. We also provide identifiability results for measures of performance of a potentially misspecified counterfactual prediction model, based on training and test data from the same (factual) source population. Finally, we illustrate the methods using simulation and apply them to the task of developing a statin-na\"{i}ve risk prediction model for cardiovascular disease.
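As one hedged illustration of assessing counterfactual performance from factual data (illustrative only, not necessarily the estimators studied in the paper), the sketch below estimates the mean squared error of a risk model under a hypothetical "never treated" policy by weighting untreated test subjects with inverse probabilities of remaining untreated.

```python
# Hedged illustration: estimating counterfactual predictive performance
# (MSE under a "never treated" policy) from factual test data via inverse
# probability weighting. Names and the weighting estimator are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def counterfactual_mse(y, a, x, risk_pred):
    """IPW estimate of E[(Y^{a=0} - risk_pred)^2].

    y: observed outcomes, a: treatment indicators (1 = treated),
    x: covariate matrix, risk_pred: predictions of the untreated potential outcome.
    Assumes no unmeasured confounding and positivity.
    """
    # Probability of remaining untreated given covariates (propensity model).
    ps_model = LogisticRegression(max_iter=1000).fit(x, a)
    p_untreated = ps_model.predict_proba(x)[:, 0]
    w = (a == 0) / p_untreated            # weights are non-zero for untreated only
    return np.sum(w * (y - risk_pred) ** 2) / np.sum(w)
```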
For convolutional neural networks (CNNs) used for pattern classification, the training loss is usually applied only to the final output of the network, apart from some regularization constraints on the network parameters. However, as the number of network layers increases, the influence of the loss function on the front layers gradually decreases, and the network parameters tend to fall into local optima. At the same time, the trained network exhibits significant information redundancy in the features at all stages, which reduces the effectiveness of the feature mappings and hinders the subsequent network parameters from moving in the direction of optimality. Therefore, it is possible to obtain a more optimized solution and further improve classification accuracy by designing a loss function that constrains the front-stage features and eliminates their information redundancy. To this end, this article proposes a multi-stage feature decorrelation loss (MFD Loss) for CNNs, which refines effective features and eliminates information redundancy by constraining the correlation of features at all stages. Since CNNs have many layers, experimental comparison and analysis lead us to apply MFD Loss to multiple front layers of the CNN, constraining the output features of each layer and each channel, and to train it jointly with the classification loss function. Experiments on several commonly used datasets with several typical CNNs show that the classification performance of Softmax Loss + MFD Loss is significantly better than that of supervision with Softmax Loss alone. Further comparison experiments, with and without combining MFD Loss with several other typical loss functions, verify its good generality.
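The precise definition of MFD Loss is given in the paper; as a hedged sketch of the general idea, the snippet below penalises the off-diagonal entries of the channel correlation matrix of intermediate features at several front stages and adds this penalty to the usual cross-entropy.

```python
# Hedged sketch of a multi-stage feature decorrelation penalty combined with
# cross-entropy. The exact MFD Loss of the paper may differ; this is one
# plausible reading of "constraining the correlation of features".
import torch
import torch.nn.functional as F

def decorrelation_penalty(feat, eps=1e-5):
    """Mean squared off-diagonal correlation between channels of a feature map."""
    if feat.dim() == 4:                          # (batch, C, H, W) -> (batch, C)
        feat = feat.mean(dim=(2, 3))             # global average pooling per channel
    f = (feat - feat.mean(0)) / (feat.std(0) + eps)   # standardise each channel
    corr = (f.t() @ f) / f.shape[0]              # channel correlation matrix
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).mean()

def total_loss(logits, targets, stage_features, lam=0.1):
    """Cross-entropy plus decorrelation penalties on a list of front-stage features."""
    ce = F.cross_entropy(logits, targets)
    mfd = sum(decorrelation_penalty(f) for f in stage_features)
    return ce + lam * mfd
```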
In the envy-free perfect matching problem, $n$ items with unit supply are available to be sold to $n$ buyers with unit demand. The objective is to find an allocation and prices such that both the seller's revenue and the buyers' surpluses are maximized -- given the buyers' valuations for the items -- and all items must be sold. Previous work has shown that this problem can be solved in cubic time, using maximum weight perfect matchings to find optimal envy-free allocations and shortest paths to find optimal envy-free prices. In this work, I consider buyers with fixed budgets and items with quality measures, so that valuations are given by the product of these two quantities. Under this approach, I prove that the valuation matrix has the inverse Monge property, which simplifies the search for optimal envy-free allocations and, consequently, for optimal envy-free prices through a strategy based on dynamic programming. As a result, I propose an algorithm that finds optimal solutions in quadratic time.
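As a toy illustration of the structure that product valuations provide (not the paper's quadratic-time algorithm itself): with nonnegative budgets and qualities, sorting both sides and matching assortatively is optimal, and revenue-maximising envy-free prices then follow from the adjacent envy constraints. The sketch below uses made-up data.

```python
# Illustrative sketch only: assortative matching and envy-free prices when
# valuations are products of nonnegative budgets and qualities. The paper's
# dynamic programming algorithm handles the problem in full generality.
import numpy as np

budgets = np.array([2.0, 5.0, 3.0])     # one budget per buyer (made-up data)
qualities = np.array([1.0, 4.0, 2.0])   # one quality per item

b = np.sort(budgets)                    # buyers sorted by budget
q = np.sort(qualities)                  # items sorted by quality
# Assortative allocation: i-th lowest-budget buyer gets i-th lowest-quality item.

prices = np.empty_like(q)
prices[0] = b[0] * q[0]                 # lowest buyer gets zero surplus
for k in range(1, len(q)):
    # Binding adjacent envy constraint: buyer k must weakly prefer item k
    # to item k-1, i.e. p_k <= p_{k-1} + b_k * (q_k - q_{k-1}).
    prices[k] = prices[k - 1] + b[k] * (q[k] - q[k - 1])

print("prices:", prices, "revenue:", prices.sum())
```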
We present a method for finding envy-free prices in a combinatorial auction where the number of consumers $n$ coincides with the number of distinct items for sale, each consumer can buy a single item, and each item has only one unit available. This is a particular case of the {\it unit-demand envy-free pricing problem}, recently revisited by Arbib et al. (2019). These authors showed that, using a Fibonacci heap to solve the maximum weight perfect matching and the Bellman-Ford algorithm to obtain the envy-free prices, the overall time complexity for solving the problem is $O(n^3)$. We propose a method based on a dynamic programming design strategy that seeks the optimal envy-free prices by increasing the consumers' utilities. It has the same cubic time complexity as the aforementioned approach, but theoretical and empirical results indicate that our method performs faster than the shortest-paths strategy, reducing the average time for determining optimal envy-free prices by approximately 48\%.
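For reference, a hedged toy version of the shortest-path baseline mentioned above (not the authors' implementation): compute a maximum weight perfect matching, then obtain maximum envy-free prices by running Bellman-Ford over the difference constraints $p_{\sigma(i)} - p_j \le v_{i,\sigma(i)} - v_{i,j}$ and $p_{\sigma(i)} \le v_{i,\sigma(i)}$ from an auxiliary source.

```python
# Hedged toy implementation of the shortest-path baseline: maximum-weight
# perfect matching followed by Bellman-Ford over the envy constraints to get
# maximum (revenue-optimal) envy-free prices. Not the authors' code.
import numpy as np
from scipy.optimize import linear_sum_assignment

def envy_free_prices(v):
    """v[i, j]: valuation of buyer i for item j (all items must be sold)."""
    n = v.shape[0]
    rows, cols = linear_sum_assignment(v, maximize=True)   # optimal allocation
    assigned = dict(zip(rows, cols))                        # buyer i -> item sigma(i)

    # Difference constraints p[sigma(i)] - p[j] <= v[i, sigma(i)] - v[i, j]
    # and p[sigma(i)] <= v[i, sigma(i)], encoded as edges from node j (or the
    # source node n) to node sigma(i); shortest-path distances from the source
    # give the maximum feasible prices.
    dist = np.full(n + 1, float("inf"))
    dist[n] = 0.0
    edges = [(n, assigned[i], v[i, assigned[i]]) for i in range(n)]
    edges += [(j, assigned[i], v[i, assigned[i]] - v[i, j])
              for i in range(n) for j in range(n) if j != assigned[i]]

    for _ in range(n):                                      # Bellman-Ford relaxations
        for src, dst, cost in edges:
            if dist[src] + cost < dist[dst]:
                dist[dst] = dist[src] + cost
    return assigned, dist[:n]                               # allocation and item prices

valuations = np.array([[4.0, 2.0, 1.0],
                       [6.0, 5.0, 2.0],
                       [8.0, 7.0, 6.0]])
print(envy_free_prices(valuations))
```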
Some applications of deep learning require not only accurate results but also a quantification of the confidence in the predictions. The management of an electric power grid is one such case: to avoid risky scenarios, decision-makers need both precise and reliable forecasts of, for example, power loads. For this reason, point forecasts are not enough, and it is necessary to adopt methods that provide uncertainty quantification. This work focuses on reservoir computing (RC) as the core time series forecasting method, due to its computational efficiency and effectiveness in predicting time series. While the RC literature has mostly focused on point forecasting, this work explores the compatibility of some popular uncertainty quantification methods with the reservoir setting. Both Bayesian and deterministic approaches to uncertainty assessment are evaluated and compared in terms of their prediction accuracy, computational resource efficiency and reliability of the estimated uncertainty, based on a set of carefully chosen performance metrics.
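As a hedged example of combining reservoir computing with a simple deterministic uncertainty estimate (an ensemble of independently initialised reservoirs; only one of many possible UQ methods, and not necessarily one evaluated in the paper):

```python
# Hedged sketch: a small echo state network (ESN) ensemble for free-running
# forecasting, with uncertainty taken from the spread of ensemble predictions.
import numpy as np

def esn_forecast(train, test_len, n_res=100, rho=0.9, seed=0):
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, size=n_res)          # input weights
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))    # set spectral radius
    # Drive the reservoir with the training series and collect its states.
    states = np.zeros((len(train), n_res))
    x = np.zeros(n_res)
    for t, u in enumerate(train):
        x = np.tanh(w_in * u + W @ x)
        states[t] = x
    # Ridge readout predicting the next value from the current state.
    A, y = states[:-1], train[1:]
    w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ y)
    # Free-running forecast beyond the training data.
    preds, u = [], train[-1]
    for _ in range(test_len):
        x = np.tanh(w_in * u + W @ x)
        u = x @ w_out
        preds.append(u)
    return np.array(preds)

# Ensemble of reservoirs -> mean forecast and a simple uncertainty band.
series = np.sin(np.linspace(0, 20 * np.pi, 2000))
runs = np.stack([esn_forecast(series[:1800], 50, seed=s) for s in range(10)])
mean, std = runs.mean(0), runs.std(0)                   # point forecast and spread
```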
In many scientific applications the aim is to infer a function which is smooth in some areas but rough or even discontinuous in other areas of its domain. Such spatially inhomogeneous functions can be modelled in Besov spaces with suitable integrability parameters. In this work we study adaptive Bayesian inference over Besov spaces in the white noise model, from the point of view of rates of contraction, using $p$-exponential priors, which range between Laplace and Gaussian and possess regularity and scaling hyper-parameters. To achieve adaptation, we employ empirical and hierarchical Bayes approaches for tuning these hyper-parameters. Our results show that, while Gaussian priors are known to attain the minimax rate only in Besov spaces of spatially homogeneous functions, Laplace priors attain the minimax rate, or nearly the minimax rate, both in Besov spaces of spatially homogeneous functions and in Besov spaces permitting spatial inhomogeneities.
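For concreteness (standard formulations are recalled here; the paper's normalisations may differ): in the wavelet sequence representation $f=\sum_{j,k}f_{jk}\psi_{jk}$ over a basis in dimension $d$, the Besov norm is
\[
\|f\|_{B^s_{pq}} = \Big(\sum_j 2^{jq\left(s+\frac{d}{2}-\frac{d}{p}\right)}\Big(\sum_k |f_{jk}|^p\Big)^{q/p}\Big)^{1/q},
\]
and a $p$-exponential prior draws $u=\sum_\ell \gamma_\ell\,\xi_\ell\,e_\ell$ with $\xi_\ell$ i.i.d. with density proportional to $\exp(-|x|^p/p)$, $1\le p\le 2$, so that $p=2$ recovers a Gaussian prior and $p=1$ a Laplace prior, while the scale sequence $(\gamma_\ell)$ carries the regularity and scaling hyper-parameters.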
The problem of modeling the relationship between univariate distributions and one or more explanatory variables has attracted increasing interest. Traditional functional data methods cannot be applied directly to distributional data because of their inherent constraints. Modeling distributions as elements of the Wasserstein space, a geodesic metric space equipped with the Wasserstein metric that is related to optimal transport, is attractive for statistical applications. Existing approaches proceed by substituting proxy estimated distributions for the typically unknown response distributions. These estimates are obtained from the available data but are problematic when only few observations are available for some of the distributions. Such situations are common in practice and cannot be addressed with existing approaches, especially when one aims at density estimates. We show how this and other problems associated with density estimation, such as tuning parameter selection and bias, can be side-stepped when covariates are available. We also introduce a novel version of distribution-response regression that is based on empirical measures. By avoiding the preprocessing step of recovering complete individual response distributions, the proposed approach is applicable when the sample size available for some of the distributions is small. In this case, one can still obtain consistent distribution estimates, even for distributions with only few data, by borrowing strength across the entire sample of distributions; traditional approaches, in which distributions or densities are estimated individually, fail because sparsely sampled densities cannot be consistently estimated. The proposed model is demonstrated to outperform existing approaches through simulations. Its efficacy is corroborated in two case studies on Environmental Influences on Child Health Outcomes (ECHO) data and eBay auction data.
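To make the role of empirical measures concrete: for univariate distributions the 2-Wasserstein distance between two empirical measures is the $L^2$ distance between their quantile functions, so it can be computed directly from raw samples, even very small ones, without any intermediate density estimation. A minimal illustrative sketch:

```python
# Minimal illustration: the 2-Wasserstein distance between two univariate
# empirical measures, computed from raw samples via quantile functions and
# with no intermediate density estimation.
import numpy as np

def wasserstein2_empirical(x, y, n_grid=1000):
    """W_2 distance between the empirical measures of samples x and y."""
    probs = (np.arange(n_grid) + 0.5) / n_grid        # common quantile grid
    qx = np.quantile(x, probs)                        # empirical quantile functions
    qy = np.quantile(y, probs)
    return np.sqrt(np.mean((qx - qy) ** 2))

rng = np.random.default_rng(1)
small_sample = rng.normal(0.0, 1.0, size=8)           # only a few observations
large_sample = rng.normal(0.5, 1.2, size=500)
print(wasserstein2_empirical(small_sample, large_sample))
```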
It is known that various categorial grammars have a surface representation in a fragment of first-order multiplicative linear logic (MLL1). We show that the fragment of interest is equivalent to the recently introduced extended tensor type calculus (ETTC). ETTC is a calculus of specific typed terms, which represent tuples of strings, or more precisely bipartite graphs decorated with strings. Types are derived from linear logic formulas, and rules correspond to concrete operations on these string-labeled graphs, so that they can be conveniently visualized. This provides the above-mentioned fragment of MLL1, which is relevant for language modeling, not only with an alternative syntax and an intuitive geometric representation, but also with an intrinsic deductive system, which had been absent. In this work we consider a non-trivial, notationally enriched variation of the previously introduced {\bf ETTC}, which allows more concise and transparent computations. We present both a cut-free sequent calculus and a natural deduction formalism.
In sampling-based Bayesian models of brain function, neural activities are assumed to be samples from probability distributions that the brain uses for probabilistic computation. However, a comprehensive understanding of how mechanistic models of neural dynamics can sample from arbitrary distributions is still lacking. We use tools from functional analysis and stochastic differential equations to explore the minimum architectural requirements for $\textit{recurrent}$ neural circuits to sample from complex distributions. We first consider the traditional sampling model consisting of a network of neurons whose outputs directly represent the samples (sampler-only network). We argue that synaptic current and firing-rate dynamics in the traditional model have limited capacity to sample from a complex probability distribution. We show that the firing rate dynamics of a recurrent neural circuit with a separate set of output units can sample from an arbitrary probability distribution. We call such circuits reservoir-sampler networks (RSNs). We propose an efficient training procedure based on denoising score matching that finds recurrent and output weights such that the RSN implements Langevin sampling. We empirically demonstrate our model's ability to sample from several complex data distributions using the proposed neural dynamics and discuss its applicability to developing the next generation of sampling-based brain models.
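As a hedged illustration of the sampling mechanism only (not the paper's reservoir-sampler architecture), the snippet below runs unadjusted Langevin dynamics $x \leftarrow x + \epsilon\, s_\theta(x) + \sqrt{2\epsilon}\,\eta$ driven by a score function; in the paper, this role would be played by the recurrent firing-rate dynamics of the RSN, with output weights fitted by denoising score matching.

```python
# Hedged illustration of the sampling mechanism only: unadjusted Langevin
# dynamics driven by a score function. In the paper this role is played by
# the recurrent firing-rate dynamics of the reservoir-sampler network, with
# weights fitted by denoising score matching.
import numpy as np

def score_gaussian_mixture(x, means, sigma=0.5):
    """Score (gradient of log density) of an equal-weight Gaussian mixture."""
    diffs = means - x                                  # (K, d)
    logw = -np.sum(diffs**2, axis=1) / (2 * sigma**2)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                       # component responsibilities
    return (w[:, None] * diffs).sum(0) / sigma**2

def langevin_sample(score, d, n_steps=2000, eps=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=d)
    for _ in range(n_steps):
        x = x + eps * score(x) + np.sqrt(2 * eps) * rng.normal(size=d)
    return x

means = np.array([[-2.0, 0.0], [2.0, 0.0]])            # toy bimodal target
samples = np.array([langevin_sample(lambda z: score_gaussian_mixture(z, means), 2, seed=s)
                    for s in range(200)])
```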