Storage codes on graphs are an instance of \emph{codes with locality}, which are used in distributed storage schemes to provide local repairability. Specifically, the nodes of the graph correspond to storage servers, and the neighbourhood of each server constitutes the set of servers it can query to repair its stored data in the event of a failure. A storage code on a graph with $n$ vertices is a set of length-$n$ codewords over $\field_q$ where the $i$th codeword symbol is stored on server $i$ and can be recovered by querying the neighbours of server $i$ according to the underlying graph. In this work, we consider binary storage codes whose repair function is the parity check and characterise the tradeoff between the locality of the code and its rate. Specifically, we show that the maximum rate of a code on $n$ vertices with locality $r$ is bounded between $1-\frac{1}{n}\lceil n/(r+1)\rceil$ and $1-\frac{1}{n}\lceil n/(r+1)\rceil$. The lower bound on the rate is derived by constructing an explicit family of graphs with locality $r$, while the upper bound is obtained via a lower bound on the binary-field rank of a class of symmetric binary matrices. Our upper bound on the maximal rate of a storage code matches the upper bound on the larger class of codes with locality derived by Tamo and Barg. As a corollary to our result, we obtain the following asymptotic separation: given a sequence $r(n)$, $n\geq 1$, there exists a sequence of graphs on $n$ vertices with storage codes of rate $1-o(1)$ if and only if $r(n)=\omega(1)$.
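As a quick numerical illustration of the rate expression quoted above (my own sketch; the parameter values below are arbitrary and not taken from the paper), the following Python snippet evaluates $1-\frac{1}{n}\lceil n/(r+1)\rceil$ and shows it approaching $1$ when $r$ grows:

```python
import math

def rate_bound(n: int, r: int) -> float:
    """Evaluate 1 - ceil(n/(r+1))/n, the rate expression quoted in the abstract."""
    return 1 - math.ceil(n / (r + 1)) / n

# Arbitrary illustrative parameters (not from the paper).
for n, r in [(12, 1), (100, 4), (10_000, 100)]:
    print(f"n={n:6d}, r={r:4d}: rate bound = {rate_bound(n, r):.4f}")
```

For fixed $r$ the expression stays bounded away from $1$, while for $r(n)\to\infty$ it tends to $1$, which matches the separation phenomenon stated in the last sentence.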
Ne\v{s}et\v{r}il and Ossona de Mendez recently proposed a new definition of graph convergence called structural convergence. The structural convergence framework is based on the probability of satisfaction of logical formulas from a fixed fragment of first-order formulas. The flexibility of choosing the fragment allows one to unify the classical notions of convergence for sparse and dense graphs. Since the field is relatively young, the range of examples of convergent sequences is limited and only a few methods of construction are known. Our aim is to extend the variety of constructions by considering the gadget construction. We show that, when restricting to the set of sentences, applying the gadget construction to elementarily convergent sequences yields an elementarily convergent sequence. On the other hand, we exhibit counterexamples witnessing that a generalization to full first-order convergence is not possible without additional assumptions. We give several different sufficient conditions that ensure full convergence. One of them states that the resulting sequence is first-order convergent if the replaced edges are dense in the original sequence of structures.
Compositional data arise in many real-life applications, and versatile methods for properly analyzing this type of data in the regression context are needed. When parametric assumptions do not hold or are difficult to verify, non-parametric regression models can provide a convenient alternative for prediction. To this end, we consider an extension of the classical $k$--$NN$ regression, termed $\alpha$--$k$--$NN$ regression, that yields a highly flexible non-parametric regression model for compositional data through the use of the $\alpha$-transformation. Unlike many of the recommended regression models for compositional data, zero values (which commonly occur in practice) are not problematic and can be incorporated into the proposed models without modification. Extensive simulation studies and real-life data analyses highlight the advantage of using these non-parametric regressions for complex relationships between the compositional response data and Euclidean predictor variables. Both suggest that $\alpha$--$k$--$NN$ regression can lead to more accurate predictions than current regression models, which assume a sometimes restrictive parametric relationship with the predictor variables. In addition, the $\alpha$--$k$--$NN$ regression, in contrast to current regression techniques, enjoys high computational efficiency, rendering it highly attractive for use with large-scale or massive data.
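As a rough illustration of how such a regression can be implemented (my own sketch; it uses a power-transformation form of the $\alpha$-transformation with the Helmert projection omitted, which may differ from the authors' exact implementation), the idea is to average the $\alpha$-transformed compositions of the $k$ nearest neighbours in predictor space and map the average back onto the simplex:

```python
import numpy as np

def alpha_transform(x, a):
    """Power-style alpha-transformation of the compositional rows of x
    (assumed form; Helmert projection omitted)."""
    if a == 0:
        w = np.log(x)                              # alpha -> 0 limit: centred log-ratio
        return w - w.mean(axis=1, keepdims=True)
    p = x ** a
    return (x.shape[1] * p / p.sum(axis=1, keepdims=True) - 1) / a

def inverse_alpha_transform(z, a):
    """Map transformed rows back to the simplex (assumed inverse)."""
    p = np.exp(z) if a == 0 else np.clip(a * z + 1, 1e-12, None) ** (1 / a)
    return p / p.sum(axis=1, keepdims=True)

def alpha_knn_predict(X_train, Y_train, X_new, k=5, a=0.5):
    """Predict compositions for X_new by averaging the alpha-transformed
    compositions of the k nearest training predictors (Euclidean distance)."""
    Z = alpha_transform(Y_train, a)
    preds = []
    for x in X_new:
        idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
        preds.append(Z[idx].mean(axis=0))
    return inverse_alpha_transform(np.vstack(preds), a)
```

For $\alpha=1$ this essentially averages the raw compositions of the neighbours, while $\alpha\to 0$ gives a log-ratio-style average; the tuning of $k$ and $\alpha$ would be done by cross-validation.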
Correlation matrix visualization is essential for understanding the relationships between variables in a dataset, but missing data can pose a significant challenge when estimating correlation coefficients. In this paper, we compare the effects of various missing data methods on the correlation plot, focusing on two common missing patterns: random and monotone. We aim to provide practical strategies and recommendations for researchers and practitioners in creating and analyzing the correlation plot. Our experimental results suggest that, while imputation is commonly used for missing data, using imputed data to plot the correlation matrix may lead to significantly misleading inferences about the relations between the features. We recommend using DPER, a direct parameter estimation approach, for plotting the correlation matrix, based on its performance in our experiments.
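The kind of comparison described above can be illustrated with a generic example (my own sketch; it contrasts pairwise-complete correlations with naive mean imputation on synthetic data and does not implement DPER):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Synthetic correlated data with 30% of values missing completely at random.
n, d = 500, 5
cov = 0.6 * np.ones((d, d)) + 0.4 * np.eye(d)
X = pd.DataFrame(rng.multivariate_normal(np.zeros(d), cov, size=n),
                 columns=[f"x{i}" for i in range(d)])
X_miss = X.mask(rng.random(X.shape) < 0.3)

corr_pairwise = X_miss.corr()                        # pairwise-complete observations
corr_imputed = X_miss.fillna(X_miss.mean()).corr()   # naive mean imputation

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, (title, c) in zip(axes, [("pairwise deletion", corr_pairwise),
                                 ("mean imputation", corr_imputed)]):
    im = ax.imshow(c.to_numpy(), vmin=-1, vmax=1, cmap="coolwarm")
    ax.set_title(title)
fig.colorbar(im, ax=axes.ravel().tolist())
plt.show()
```

Even in this toy setting, mean imputation visibly attenuates the off-diagonal correlations, which is the sort of misleading picture the experiments in the paper quantify.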
We show that for log-concave real random variables with fixed variance the Shannon differential entropy is minimized for an exponential random variable. We apply this result to derive upper bounds on capacities of additive noise channels with log-concave noise. We also improve constants in the reverse entropy power inequalities for log-concave random variables.
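For concreteness, the extremal value in the first statement is easy to compute (a textbook calculation, not reproduced from the paper): an exponential random variable $X_{\exp}$ with density $\lambda e^{-\lambda x}$ on $[0,\infty)$ has variance $1/\lambda^2$ and differential entropy
\[
h(X_{\exp}) = -\int_0^\infty \lambda e^{-\lambda x}\log\!\big(\lambda e^{-\lambda x}\big)\,dx = 1-\log\lambda,
\]
so fixing $\operatorname{Var}(X_{\exp})=\sigma^2$ forces $\lambda=1/\sigma$ and $h(X_{\exp})=\tfrac12\log(e^2\sigma^2)$. The minimization result can therefore be read as $h(X)\ge\tfrac12\log(e^2\sigma^2)$ for every log-concave real random variable $X$ with variance $\sigma^2$.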
We propose a novel algorithm for solving the composite Federated Learning (FL) problem. This algorithm manages non-smooth regularization by strategically decoupling the proximal operator and communication, and addresses client drift without any assumptions about data similarity. Moreover, each worker uses local updates to reduce the communication frequency with the server and transmits only a $d$-dimensional vector per communication round. We prove that our algorithm converges linearly to a neighborhood of the optimal solution and demonstrate the superiority of our algorithm over state-of-the-art methods in numerical experiments.
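To make the decoupling idea concrete, here is a deliberately simplified sketch (my own illustration, not the authors' algorithm: it omits the client-drift correction the paper provides and uses an $\ell_1$ regularizer as an example of the non-smooth composite term). The proximal step is applied only at the server, after aggregation, so each worker still transmits a single $d$-dimensional vector per round:

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of tau * ||w||_1 (example non-smooth regularizer)."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def composite_fl_sketch(workers, d, rounds=50, local_steps=5, lr=0.1, lam=0.01):
    """Generic sketch: local steps on the smooth part, prox applied at the server only."""
    w = np.zeros(d)
    for _ in range(rounds):
        updates = []
        for grad_fn in workers:            # grad_fn(w) returns a (stochastic) gradient
            w_local = w.copy()
            for _ in range(local_steps):   # local updates reduce communication frequency
                w_local -= lr * grad_fn(w_local)
            updates.append(w_local)        # one d-dimensional vector per round
        w = soft_threshold(np.mean(updates, axis=0), lr * lam)  # prox decoupled from comm
    return w
```

This naive scheme suffers from client drift under heterogeneous data; the point of the paper's algorithm is to remove that drift without data-similarity assumptions while keeping the same per-round communication.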
The smoothing distribution of dynamic probit models with Gaussian state dynamics was recently proved to belong to the unified skew-normal family. Although this characterization is computationally tractable in small-to-moderate settings, it may become impractical in higher dimensions. In this work, adapting a recent, more general class of expectation propagation (EP) algorithms, we derive an efficient EP routine to perform inference for such a distribution. We show that the proposed approximation leads to accuracy gains over available approximate algorithms in a financial illustration.
Bayesian binary regression is a flourishing area of research due to the computational challenges encountered by currently available methods in high-dimensional settings, for large datasets, or both. In the present work, we focus on the expectation propagation (EP) approximation of the posterior distribution in Bayesian probit regression under a multivariate Gaussian prior distribution. Adapting more general derivations in Anceschi et al. (2023), we show how to leverage results on the extended multivariate skew-normal distribution to derive an efficient implementation of the EP routine whose per-iteration cost scales linearly in the number of covariates. This makes EP computationally feasible even in challenging high-dimensional settings, as shown in a detailed simulation study.
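For readers unfamiliar with EP in this model, a textbook-style (naive, cubic-cost-per-sweep) EP routine for probit regression with a Gaussian prior looks as follows; this is only a generic sketch for intuition, using the standard probit moment-matching formulas (e.g. Rasmussen and Williams, 2006), and is not the linear-in-the-number-of-covariates implementation derived in the paper:

```python
import numpy as np
from scipy.stats import norm

def ep_probit(X, y, tau2=1.0, n_sweeps=20):
    """Textbook-style EP for probit regression with a N(0, tau2*I) prior on the
    coefficients, run on the latent values f = X @ beta. Naive O(n^3)-per-sweep
    variant for intuition only; NOT the linear-cost routine of the paper."""
    n = X.shape[0]
    K = tau2 * X @ X.T                      # prior covariance of f = X @ beta
    nu_site = np.zeros(n)                   # site natural parameters
    tau_site = np.zeros(n)
    Sigma, mu = K.copy(), np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(n):
            # cavity distribution obtained by removing site i
            tau_c = 1.0 / Sigma[i, i] - tau_site[i]
            nu_c = mu[i] / Sigma[i, i] - nu_site[i]
            mu_c, s2_c = nu_c / tau_c, 1.0 / tau_c
            # moments of the tilted distribution for the probit site Phi(y_i * f_i)
            z = y[i] * mu_c / np.sqrt(1.0 + s2_c)
            ratio = norm.pdf(z) / norm.cdf(z)
            mu_hat = mu_c + y[i] * s2_c * ratio / np.sqrt(1.0 + s2_c)
            s2_hat = s2_c - s2_c**2 * ratio * (z + ratio) / (1.0 + s2_c)
            # update site i by moment matching
            dtau = 1.0 / s2_hat - tau_c - tau_site[i]
            tau_site[i] += dtau
            nu_site[i] = mu_hat / s2_hat - nu_c
            # rank-one update of the global Gaussian approximation
            si = Sigma[:, i].copy()
            Sigma -= (dtau / (1.0 + dtau * Sigma[i, i])) * np.outer(si, si)
            mu = Sigma @ nu_site
    return mu, Sigma                        # approximate posterior of f = X @ beta

# Hypothetical usage with simulated data; y must be coded as +1/-1.
rng = np.random.default_rng(1)
X_sim = rng.standard_normal((60, 3))
y_sim = np.sign(X_sim @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.standard_normal(60))
mu, Sigma = ep_probit(X_sim, y_sim)
```

The contribution of the paper is precisely to avoid the cubic cost of updates like the one above by exploiting extended multivariate skew-normal results, so that each EP iteration scales linearly in the number of covariates.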
Given an image $u_0$, the aim of minimizing the Mumford-Shah functional is to find a decomposition of the image domain into sub-domains and a piecewise smooth approximation $u$ of $u_0$ such that $u$ varies smoothly within each sub-domain. Since the Mumford-Shah functional is highly non-smooth, regularizations such as the Ambrosio-Tortorelli approximation, one of the most computationally efficient approximations of the Mumford-Shah functional for image segmentation, are commonly considered. While very impressive numerical results have been achieved in a wide range of applications when minimizing the functional, no analytical results are currently available for minimizers of the functional in the piecewise smooth setting; providing such results is the goal of this work. Our main result is the $\Gamma$-convergence of the Ambrosio-Tortorelli approximation of the Mumford-Shah functional for piecewise smooth approximations. This requires the introduction of an appropriate function space. As a consequence of our $\Gamma$-convergence result, we can infer the convergence of minimizers of the respective functionals.
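For orientation, a commonly used form of the two functionals is the following (notation mine; the precise function-space setting of the paper, which is the point of the piecewise smooth analysis, is not captured here). Writing $u_0$ for the data, $K$ for the discontinuity set and $v$ for the Ambrosio-Tortorelli phase field,
\[
\mathrm{MS}(u,K)=\int_{\Omega\setminus K}|\nabla u|^2\,dx+\alpha\,\mathcal{H}^{d-1}(K)+\beta\int_\Omega(u-u_0)^2\,dx,
\]
\[
\mathrm{AT}_\varepsilon(u,v)=\int_\Omega v^2|\nabla u|^2\,dx+\alpha\int_\Omega\Big(\varepsilon|\nabla v|^2+\frac{(1-v)^2}{4\varepsilon}\Big)\,dx+\beta\int_\Omega(u-u_0)^2\,dx,
\]
where $v\approx 0$ marks the edge set and $v\approx 1$ elsewhere; the $\Gamma$-convergence statement says that, as $\varepsilon\to 0$, minimizing $\mathrm{AT}_\varepsilon$ is a consistent surrogate for minimizing $\mathrm{MS}$.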
A {\em hole} is an induced cycle of length at least four, and an {\em odd hole} is a hole of odd length. A {\em fork} is a graph obtained from $K_{1,3}$ by subdividing an edge once. An {\em odd balloon} is a graph obtained from an odd hole by identifying two of its consecutive vertices with two leaves of $K_{1, 3}$, respectively. A {\em gem} is a graph that consists of a $P_4$ plus a vertex adjacent to all vertices of the $P_4$. A {\em butterfly} is a graph obtained from two triangles by sharing exactly one vertex. A graph $G$ is {\em perfectly divisible} if for each induced subgraph $H$ of $G$, $V(H)$ can be partitioned into $A$ and $B$ such that $H[A]$ is perfect and $\omega(H[B])<\omega(H)$. In this paper, we show that (odd balloon, fork)-free graphs are perfectly divisible (this generalizes some results of Karthick {\em et al.}). As an application, we show that $\chi(G)\le\binom{\omega(G)+1}{2}$ if $G$ is (fork, gem)-free or (fork, butterfly)-free.
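Since the statement is entirely about small forbidden configurations, a quick way to have them at hand is to build them explicitly; the sketch below (my own encoding, with arbitrary vertex labels) constructs the fork, gem and butterfly with networkx:

```python
import networkx as nx

# fork: K_{1,3} with one edge subdivided once (center 0, leaves 1, 2; path 0-3-4)
fork = nx.Graph([(0, 1), (0, 2), (0, 3), (3, 4)])

# gem: a P_4 (a-b-c-d) plus a vertex v adjacent to all four of its vertices
gem = nx.Graph([("a", "b"), ("b", "c"), ("c", "d")])
gem.add_edges_from(("v", x) for x in "abcd")

# butterfly: two triangles (0-1-2 and 0-3-4) sharing exactly one vertex
butterfly = nx.Graph([(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)])

# (an odd balloon depends on the chosen odd hole and is omitted here)
```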
Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfying performance of SRH.
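To give a flavour of the kind of objective involved, here is a generic pairwise similarity-preserving hashing loss of the sort used by many deep supervised hashing methods (my own sketch in PyTorch; the shadow mechanism of SRH is specific to this paper and is not reproduced here):

```python
import torch
import torch.nn.functional as F

def pairwise_hashing_loss(codes, labels, margin=2.0, quant_weight=0.1):
    """Generic deep-hashing objective: pull codes of same-label images together,
    push different-label codes at least `margin` apart, and add a quantization
    penalty pushing real-valued outputs toward {-1, +1}. (Not the SRH loss.)"""
    dist = torch.cdist(codes, codes)                            # pairwise L2 distances
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float() # 1 if same label
    contrastive = same * dist.pow(2) + (1 - same) * F.relu(margin - dist).pow(2)
    quantization = (codes.abs() - 1).pow(2).mean()              # encourage binary codes
    return contrastive.mean() + quant_weight * quantization

# Example with random stand-ins for CNN outputs (48-bit codes, batch of 32).
codes = torch.tanh(torch.randn(32, 48))
labels = torch.randint(0, 10, (32,))
loss = pairwise_hashing_loss(codes, labels)
```

In SRH the real-valued CNN outputs and their shadows would additionally guide each other during optimization, which this generic loss does not model.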