Secure codes are widely studied combinatorial structures introduced for traitor tracing in broadcast encryption. Determining the maximum size of such structures is the main research objective. In this paper, we investigate lower bounds for secure codes and their related structures. First, we give improved lower bounds for the rates of $2$-frameproof codes and $\overline{2}$-separable codes for moderately large alphabet sizes. Then we improve the lower bounds for the rates of two related structures, namely strongly $2$-separable matrices and $2$-cancellative set families. Finally, we give a general method to derive new lower bounds for strongly $t$-separable matrices and $t$-cancellative set families for $t \ge 3$.
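For context, recall the standard definition from the secure-codes literature (not specific to this paper): for a code $\mathcal{C} \subseteq Q^n$ and a subset $\mathcal{C}_0 \subseteq \mathcal{C}$, the descendant set is
\[
\mathrm{desc}(\mathcal{C}_0) = \{ x \in Q^n : x_i \in \{ c_i : c \in \mathcal{C}_0 \} \ \text{for all}\ 1 \le i \le n \},
\]
and $\mathcal{C}$ is $t$-frameproof if $\mathrm{desc}(\mathcal{C}_0) \cap \mathcal{C} = \mathcal{C}_0$ for every $\mathcal{C}_0 \subseteq \mathcal{C}$ with $|\mathcal{C}_0| \le t$. The rate whose lower bounds are studied here is $R(\mathcal{C}) = n^{-1} \log_q |\mathcal{C}|$, where $q = |Q|$ is the alphabet size.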
Time series classification is a problem of growing importance across disciplines, owing to the progressive digitalization of the world. The current state of the art in time series classification is dominated by the Hierarchical Vote Collective of Transformation-based Ensembles (HIVE-COTE). This algorithm is composed of several classifiers from different domains, distributed across five large modules. By combining the results of each module, weighted according to an internal evaluation process, this algorithm obtains the best results in the state of the art. One Nearest Neighbour with Dynamic Time Warping remains a standard baseline for time series classification problems because of its simplicity and good results. Despite their performance, these approaches share a weakness: they are not interpretable. In the field of time series classification, there is a trade-off between accuracy and interpretability. In this work, we propose a set of characteristics that capture information about the structure of a time series, enabling traditional classification algorithms to be applied to time series problems. Our experimental results show no statistically significant differences from the second- and third-best models of the state of the art. Besides competitive accuracy, our proposal offers interpretable results based on the proposed set of characteristics.
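As a minimal sketch of the feature-based approach (the statistics below are illustrative placeholders, not the characteristic set proposed in this work), each series is mapped to a fixed-length vector and passed to an interpretable off-the-shelf classifier:

```python
# Minimal sketch: feature-based time series classification.
# The features are simple illustrative statistics, not the
# paper's proposed characteristic set.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_features(series: np.ndarray) -> np.ndarray:
    """Map a 1-D time series to a fixed-length feature vector."""
    diffs = np.diff(series)
    return np.array([
        series.mean(),        # overall level
        series.std(),         # dispersion
        diffs.mean(),         # average trend
        diffs.std(),          # volatility of the changes
        np.abs(np.fft.rfft(series)[1:4]).sum(),  # low-frequency energy
    ])

def fit_classifier(train_series, labels):
    X = np.stack([extract_features(s) for s in train_series])
    # A shallow tree keeps the decision logic human-readable.
    return DecisionTreeClassifier(max_depth=4).fit(X, labels)
```

Because every input dimension has a meaning (level, trend, frequency content), the splits of the fitted tree can be read directly, which is the kind of interpretability a raw-series classifier cannot offer.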
This paper describes and compares structure-preserving techniques for the solution of linear discrete ill-posed problems with the t-product. A new randomized tensor singular value decomposition (R-tSVD) with a t-product is presented for low tubal-rank tensor approximations. Regularization of linear inverse problems by truncated tensor eigenvalue decomposition (T-tEVD), truncated tSVD (T-tSVD), randomized T-tSVD (RT-tSVD), the t-product Golub-Kahan bidiagonalization (tGKB) process, and the t-product Lanczos (t-Lanczos) process is considered. A solution method based on reusing tensor Krylov subspaces generated by the tGKB process is described. The regularization parameter for each method is the number of iterations, which is determined by the discrepancy principle. Solution methods based on truncated iterations are compared with methods that combine Tikhonov regularization with the tGKB and t-Lanczos processes. Computed examples illustrate the performance of these methods when applied to image and gray-scale video restoration. Our new RT-tSVD method is seen to require less CPU time and to yield restorations of higher quality than the T-tSVD method.
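For concreteness, a truncated t-SVD can be computed facewise in the Fourier domain (a minimal sketch of the standard T-tSVD; the new R-tSVD additionally uses a randomized range finder, which is not shown here):

```python
# Minimal sketch: tubal-rank-k approximation via the truncated t-SVD.
# The t-product diagonalizes under the FFT along the third (tube) axis,
# so one truncated matrix SVD per frontal slice suffices.
import numpy as np

def t_svd_truncate(A: np.ndarray, k: int) -> np.ndarray:
    """Tubal-rank-k approximation of a third-order tensor A (n1 x n2 x n3)."""
    Ahat = np.fft.fft(A, axis=2)                 # to the Fourier domain
    Bhat = np.empty_like(Ahat)
    for i in range(A.shape[2]):                  # facewise truncated SVDs
        U, s, Vh = np.linalg.svd(Ahat[:, :, i], full_matrices=False)
        Bhat[:, :, i] = (U[:, :k] * s[:k]) @ Vh[:k, :]
    return np.fft.ifft(Bhat, axis=2).real        # back to the spatial domain

A = np.random.rand(64, 64, 10)                   # e.g. a small gray-scale video
A_k = t_svd_truncate(A, k=5)
```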
Implicit Processes (IPs) are flexible priors that can describe models such as Bayesian neural networks, neural samplers, and data generators. IPs allow for approximate inference in function space, which avoids some degenerate problems of parameter-space approximate inference caused by the high number of parameters and their strong dependencies. For this, an extra IP is often used to approximate the posterior of the prior IP. However, simultaneously adjusting the parameters of the prior IP and the approximate posterior IP is a challenging task. Existing methods that can tune the prior IP result in a Gaussian predictive distribution, which fails to capture important data patterns. By contrast, methods that produce flexible predictive distributions by using another IP to approximate the posterior process cannot fit the prior IP to the observed data. We propose here a method that can carry out both tasks. For this, we rely on an inducing-point representation of the prior IP, as is often done in the context of sparse Gaussian processes. The result is a scalable method for approximate inference with IPs that can tune the prior IP parameters to the data and that provides accurate non-Gaussian predictive distributions.
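The inducing-point construction follows the familiar sparse-GP template (shown here generically; the paper's exact parameterisation may differ): with inducing values $\mathbf{u} = f(\mathbf{Z})$ at $M \ll N$ locations $\mathbf{Z}$, the approximate posterior is
\[
q(f) = \int p(f \mid \mathbf{u})\, q(\mathbf{u})\, d\mathbf{u},
\]
and both the prior IP parameters and $q(\mathbf{u})$ are adjusted by maximising a lower bound on the marginal likelihood of the form
\[
\mathcal{L} = \mathbb{E}_{q(f)}\big[\log p(\mathbf{y} \mid f)\big] - \mathrm{KL}\big(q(\mathbf{u}) \,\|\, p(\mathbf{u})\big),
\]
which is what makes joint tuning of the prior and the posterior tractable and scalable.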
We describe a procedure to introduce general dependence structures on a set of random variables. These include order-$q$ moving average-type structures, as well as seasonal, periodic, spatial and spatio-temporal dependences. The invariant marginal distribution can be in any family that is conjugate to an exponential family with quadratic variance function. Dependence is induced via a set of suitable latent variables whose conditional distribution mirrors the sampling distribution in a Bayesian conjugate analysis of such exponential families. We obtain strict stationarity as a special case.
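A concrete first-order instance of this mechanism (a minimal sketch; the gamma-Poisson pair is one classical conjugate example, not the full generality of the construction) keeps a $\mathrm{Ga}(a,b)$ marginal invariant:

```python
# Minimal sketch: a stationary chain with Gamma(a, b) invariant marginal,
# built from a latent Poisson variable via the conjugate update
# Y_t | X_t ~ Po(c X_t) and X_{t+1} | Y_t ~ Ga(a + Y_t, b + c).
# The constant c controls the strength of the dependence.
import numpy as np

rng = np.random.default_rng(0)

def gamma_poisson_chain(a=3.0, b=2.0, c=5.0, T=50_000):
    x = np.empty(T)
    x[0] = rng.gamma(a, 1.0 / b)                  # start at the invariant law
    for t in range(1, T):
        y = rng.poisson(c * x[t - 1])             # latent conjugate draw
        x[t] = rng.gamma(a + y, 1.0 / (b + c))    # posterior-style update
    return x

x = gamma_poisson_chain()
print(x.mean(), x.var())  # close to a/b = 1.5 and a/b**2 = 0.75
```

Marginalising the latent $Y_t$ returns the $\mathrm{Ga}(a,b)$ prior, which is exactly the conjugacy argument that delivers strict stationarity.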
We revisit the classical problem of nonparametric density estimation, but impose local differential privacy constraints. Under such constraints, the original multivariate data $X_1,\ldots,X_n \in \mathbb{R}^d$ cannot be directly observed, and all estimators are functions of the randomised output of a suitable privacy mechanism. The statistician is free to choose the form of the privacy mechanism, and in this work we propose to add Laplace-distributed noise to a discretisation of the location of the vector $X_i$. Based on these randomised data, we design a novel estimator of the density function, which can be viewed as a privatised version of the well-studied histogram density estimator. Our theoretical results include universal pointwise consistency and strong universal $L_1$-consistency. In addition, a convergence rate over classes of Lipschitz functions is derived, which is complemented by a matching minimax lower bound. We illustrate the trade-off between data utility and privacy by means of a small simulation study.
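A minimal one-dimensional sketch of such a mechanism and estimator follows (the noise scale $2/\alpha$ is the textbook calibration for a one-hot vector of $L_1$-sensitivity $2$; the paper's exact mechanism and constants may differ):

```python
# Minimal sketch: privatised histogram density estimation under
# alpha-local differential privacy. Each individual releases a one-hot
# bin indicator perturbed by Laplace noise; the estimate is unbiased,
# although individual bins can come out negative due to the noise.
import numpy as np

rng = np.random.default_rng(1)

def privatised_histogram(x, bins, alpha, support=(0.0, 1.0)):
    lo, hi = support
    width = (hi - lo) / bins
    idx = np.clip(((x - lo) / width).astype(int), 0, bins - 1)
    onehot = np.eye(bins)[idx]                      # discretised locations
    # Two one-hot vectors differ in at most two coordinates, so the L1
    # sensitivity is 2 and Laplace(2/alpha) noise gives alpha-LDP.
    noisy = onehot + rng.laplace(scale=2.0 / alpha, size=onehot.shape)
    return noisy.mean(axis=0) / width               # density value per bin

x = rng.beta(2.0, 5.0, size=50_000)
f_hat = privatised_histogram(x, bins=20, alpha=2.0)
```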
We consider the phase retrieval problem, in which the observer wishes to recover an $n$-dimensional real or complex signal $\mathbf{X}^\star$ from the (possibly noisy) observation of $|\mathbf{\Phi} \mathbf{X}^\star|$, where $\mathbf{\Phi}$ is a matrix of size $m \times n$. We consider a \emph{high-dimensional} setting in which $n,m \to \infty$ with $m/n = \mathcal{O}(1)$, and a large class of (possibly correlated) random matrices $\mathbf{\Phi}$ and observation channels. Spectral methods are a powerful tool for obtaining approximate estimates of the signal $\mathbf{X}^\star$, which can then be used as initialization for a subsequent algorithm, at low computational cost. In this paper, we extend and unify previous results and approaches on spectral methods for the phase retrieval problem. More precisely, we combine the linearization of message-passing algorithms with the analysis of the \emph{Bethe Hessian}, a classical tool of statistical physics. Using this toolbox, we show how to derive optimal spectral methods for arbitrary channel noise and right-unitarily invariant matrices $\mathbf{\Phi}$, in an automated manner (i.e. with no optimization over any hyperparameter or preprocessing function).
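A minimal sketch of a (non-optimized) spectral method of this type is given below; the clipped-intensity preprocessing is a common heuristic stand-in for the optimal preprocessing that the paper derives automatically:

```python
# Minimal sketch: spectral initialization for phase retrieval.
# Returns the leading eigenvector of (1/m) Phi^* diag(T(y)) Phi,
# with a simple clipped-intensity preprocessing T (a common heuristic,
# not the optimal preprocessing constructed in the paper).
import numpy as np

rng = np.random.default_rng(2)

def spectral_init(Phi: np.ndarray, y: np.ndarray) -> np.ndarray:
    m, _ = Phi.shape
    T = np.minimum(y**2, 3.0 * np.mean(y**2))   # clip large intensities
    M = (Phi.conj().T * T) @ Phi / m            # sum_j T_j phi_j phi_j^*
    _, vecs = np.linalg.eigh(M)
    return vecs[:, -1]                          # top eigenvector

n, m = 100, 800                                 # m/n = O(1)
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)
Phi = rng.standard_normal((m, n))
y = np.abs(Phi @ x_star)                        # noiseless magnitudes
x0 = spectral_init(Phi, y)
print(abs(x0 @ x_star))                         # overlap with the truth
```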
We construct maximally recoverable codes (corresponding to partial MDS codes) based on linearized Reed-Solomon codes. The new codes have a smaller field-size requirement than known constructions. For certain asymptotic regimes, the constructed codes have order-optimal alphabet size, asymptotically matching the known lower bound.
We investigate variational principles for the approximation of quantum dynamics that apply to approximation manifolds without complex linear tangent spaces. The first, dating back to McLachlan (1964), minimizes the residual of the time-dependent Schr\"odinger equation, while the second, originating from the lecture notes of Kramer--Saraceno (1981), imposes the stationarity of an action functional. We characterize both principles in terms of metric and symplectic orthogonality conditions, consider their conservation properties, and derive an elementary a posteriori error estimate. As an application, we revisit the time-dependent Hartree approximation and frozen Gaussian wave packets.
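In terms of the (real-linear) tangent space $T_u\mathcal{M}$ of the approximation manifold at $u$, the two principles admit the following standard reformulation, which makes the metric/symplectic dichotomy explicit:
\[
\text{McLachlan:}\qquad \operatorname{Re}\Big\langle v,\ \dot u - \tfrac{1}{i\hbar} H u \Big\rangle = 0 \quad \text{for all } v \in T_u\mathcal{M},
\]
\[
\text{Kramer--Saraceno:}\qquad \operatorname{Im}\Big\langle v,\ \dot u - \tfrac{1}{i\hbar} H u \Big\rangle = 0 \quad \text{for all } v \in T_u\mathcal{M}.
\]
If $T_u\mathcal{M}$ is complex linear, the two conditions combine into the Dirac--Frenkel principle; for merely real-linear tangent spaces they genuinely differ.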
We consider the problem of assigning agents to resources under the two-sided preference list model with upper and lower quotas on resources. Krishnaa et al. [17] explore two optimality notions for this setting -- envy-freeness and relaxed stability. They investigate the problem of computing a maximum-size envy-free matching (MAXEFM) and a maximum-size relaxed stable matching (MAXRSM) that satisfies the lower quotas, and show that neither optimization problem can be approximated within a constant factor unless P = NP. In this work, we investigate the parameterized complexity of MAXEFM and MAXRSM. We consider several parameters derived from the instance -- the number of resources with non-zero lower quota, the deficiency of the instance, and the maximum length of the preference list of a resource with non-zero lower quota, among others. We show that the MAXEFM problem is W[1]-hard for several interesting parameters and that the MAXRSM problem is para-NP-hard for two natural parameters. We present kernelization results and FPT algorithms on combinations of parameters for both problems.
Recent years have witnessed the enormous success of low-dimensional vector space representations of knowledge graphs for predicting missing facts or finding erroneous ones. Currently, however, it is not yet well understood how ontological knowledge, e.g. given as a set of (existential) rules, can be embedded in a principled way. To address this shortcoming, in this paper we introduce a framework based on convex regions, which can faithfully incorporate ontological knowledge into the vector space embedding. Our technical contribution is twofold. First, we show that some of the most popular existing embedding approaches are not capable of modelling even very simple types of rules. Second, we show that our framework can exactly represent ontologies that are expressed using so-called quasi-chained existential rules, in the sense that any set of facts induced using the vector space embedding is logically consistent and deductively closed with respect to the input ontology.
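As a toy illustration of the region-based view (axis-aligned boxes are just the simplest convex regions; the quasi-chained framework of the paper is far more general), a rule $A(x) \rightarrow B(x)$ holds in an embedding precisely when the region for $A$ is contained in the region for $B$:

```python
# Toy illustration: concepts as axis-aligned boxes (the simplest convex
# regions); a rule A(x) -> B(x) holds geometrically iff box(A) is
# contained in box(B). Hypothetical mini-ontology for illustration only.
import numpy as np

class Box:
    def __init__(self, lower, upper):
        self.lower, self.upper = np.asarray(lower), np.asarray(upper)

    def contains_point(self, x) -> bool:
        return bool(np.all(self.lower <= x) and np.all(x <= self.upper))

    def contains_box(self, other: "Box") -> bool:
        return bool(np.all(self.lower <= other.lower)
                    and np.all(other.upper <= self.upper))

cat    = Box([0.2, 0.6], [0.4, 0.8])
mammal = Box([0.0, 0.5], [0.6, 1.0])
assert mammal.contains_box(cat)        # encodes the rule Cat(x) -> Mammal(x)

felix = np.array([0.3, 0.7])
assert cat.contains_point(felix)       # fact: Cat(felix)
assert mammal.contains_point(felix)    # Mammal(felix) follows geometrically
```

The last assertion illustrates deductive closure: any point placed inside the Cat region is automatically inside the Mammal region, so the induced set of facts respects the rule by construction.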