The work considers the $N$-server distributed computing scenario with $K$ users requesting functions that are linearly-decomposable over an arbitrary basis of $L$ real (potentially non-linear) subfunctions. In our problem, the aim is for each user to receive their function outputs, allowing for reduced reconstruction error (distortion) $\epsilon$, reduced computing cost ($\gamma$; the fraction of subfunctions each server must compute), and reduced communication cost ($\delta$; the fraction of users each server must connect to). For any given set of $K$ requested functions -- which is here represented by a coefficient matrix $\mathbf{F} \in \mathbb{R}^{K \times L}$ -- our problem is made equivalent to the open problem of sparse matrix factorization that seeks -- for a given parameter $T$, representing the number of shots for each server -- to minimize the reconstruction distortion $\frac{1}{KL}\|\mathbf{F} - \mathbf{D}\mathbf{E}\|^2_{F}$ over all $\delta$-sparse and $\gamma$-sparse matrices $\mathbf{D}\in \mathbb{R}^{K \times NT}$ and $\mathbf{E} \in \mathbb{R}^{NT \times L}$. With these matrices respectively defining which users each server connects to and which subfunctions each server computes, we here design our $\mathbf{D},\mathbf{E}$ via tessellation-based and SVD-based fixed-support matrix factorization methods that first split $\mathbf{F}$ into properly sized and carefully positioned submatrices, which we then approximate and decompose into properly designed submatrices of $\mathbf{D}$ and $\mathbf{E}$.
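To make the factorization objective concrete, here is a minimal numpy sketch, with toy sizes and random supports standing in for the tessellation- and SVD-based designs above, that evaluates the normalized distortion $\frac{1}{KL}\|\mathbf{F}-\mathbf{D}\mathbf{E}\|_F^2$ for a sparse pair $(\mathbf{D},\mathbf{E})$:

```python
import numpy as np

rng = np.random.default_rng(0)
K, L, N, T = 4, 8, 3, 2          # users, subfunctions, servers, shots (toy sizes)
F = rng.standard_normal((K, L))  # coefficient matrix of the requested functions

# Hypothetical random supports: each server shot connects to a subset of
# users (columns of D) and uses a subset of subfunctions (rows of E).
D = rng.standard_normal((K, N * T)) * (rng.random((K, N * T)) < 0.5)
E = rng.standard_normal((N * T, L)) * (rng.random((N * T, L)) < 0.5)

# Normalized reconstruction distortion (1/KL) * ||F - D E||_F^2
distortion = np.linalg.norm(F - D @ E, "fro") ** 2 / (K * L)
print(f"normalized distortion: {distortion:.4f}")
```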
An $f$-edge fault-tolerant distance sensitivity oracle ($f$-DSO) with stretch $\sigma \ge 1$ is a data structure that preprocesses a given undirected, unweighted graph $G$ with $n$ vertices and $m$ edges, and a positive integer $f$. When queried with a pair of vertices $s, t$ and a set $F$ of at most $f$ edges, it returns a $\sigma$-approximation of the $s$-$t$-distance in $G-F$. We study $f$-DSOs that take subquadratic space. Thorup and Zwick [JACM 2005] showed that this is only possible for $\sigma \ge 3$. We present, for any constant $f \ge 1$ and $\alpha \in (0, \frac{1}{2})$, and any $\varepsilon > 0$, a randomized $f$-DSO with stretch $3+\varepsilon$ that w.h.p. takes $\widetilde{O}(n^{2-\frac{\alpha}{f+1}}) \cdot O(\log n/\varepsilon)^{f+2}$ space and has an $O(n^\alpha/\varepsilon^2)$ query time. The time to build the oracle is $\widetilde{O}(mn^{2-\frac{\alpha}{f+1}}) \cdot O(\log n/\varepsilon)^{f+1}$. We also give an improved construction for graphs with diameter at most $D$. For any positive integer $k$, we devise an $f$-DSO with stretch $2k-1$ that w.h.p. takes $O(D^{f+o(1)} n^{1+1/k})$ space and has $\widetilde{O}(D^{o(1)})$ query time, with a preprocessing time of $O(D^{f+o(1)} mn^{1/k})$. Chechik, Cohen, Fiat, and Kaplan [SODA 2017] devised an $f$-DSO with stretch $1{+}\varepsilon$ and preprocessing time $O(n^{5+o(1)}/\varepsilon^f)$, albeit with a super-quadratic space requirement. We show how to reduce their preprocessing time to $O(mn^{2+o(1)}/\varepsilon^f)$.
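For intuition about what a query must return, the sketch below computes the exact $s$-$t$ distance in $G-F$ by BFS; an $f$-DSO answers within a factor $\sigma$ of this value but from its precomputed structure, in sublinear query time, rather than by rescanning the graph. The graph and fault set are illustrative:

```python
from collections import deque

def distance_after_faults(adj, s, t, F):
    """Exact s-t distance in G - F via BFS: the naive baseline whose
    answer an f-DSO approximates within its stretch. adj maps a vertex
    to a set of neighbours; F is a collection of edges to remove."""
    blocked = {frozenset(e) for e in F}
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if frozenset((u, v)) in blocked or v in dist:
                continue
            dist[v] = dist[u] + 1
            queue.append(v)
    return float("inf")  # t unreachable in G - F

adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
print(distance_after_faults(adj, 0, 3, F={(0, 1)}))  # routed around the fault: 2
```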
We study message identification over a $q$-ary uniform permutation channel, where the transmitted vector is permuted by a permutation chosen uniformly at random. For discrete memoryless channels (DMCs), the number of identifiable messages grows doubly exponentially, and the identification capacity, the maximal second-order exponent, is known to equal the Shannon capacity of the DMC. Permutation channels, in contrast, support reliable communication of only polynomially many messages. A simple achievability result shows that message sizes growing as $2^{c_n n^{q-1}}$ are identifiable for any $c_n\rightarrow 0$. We prove two converse results. A ``soft'' converse shows that for any $R>0$, there is no sequence of identification codes with message size growing as $2^{R n^{q-1}}$ and with a power-law decay ($n^{-\mu}$) of the error probability. We also prove a ``strong'' converse showing that for any sequence of identification codes with message size $2^{R n^{q-1}\log n}$ ($R>0$), the sum of the type I and type II error probabilities is asymptotically at least $1$ as $n\rightarrow \infty$. To prove the soft converse, we construct, through a sequence of steps, a new identification code with a simpler structure that relates to a set system, and then apply a lower bound on the normalized maximum pairwise intersection of a set system. To prove the strong converse, we use results on the approximation of distributions.
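The $n^{q-1}$ scale comes from counting types: a uniform permutation preserves nothing about a codeword except its empirical symbol counts, and the number of types of length-$n$ $q$-ary strings is $\binom{n+q-1}{q-1} = \Theta(n^{q-1})$, as the short sketch below illustrates:

```python
from math import comb

def num_types(n, q):
    # Number of ways to choose symbol counts (compositions of n into
    # q nonnegative parts): C(n + q - 1, q - 1) = Theta(n^{q-1})
    return comb(n + q - 1, q - 1)

for n in (10, 100, 1000):
    print(n, num_types(n, q=3))  # grows like n^2 for q = 3
```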
Variational logistic regression is a popular method for approximate Bayesian inference, seeing widespread use in many areas of machine learning, including Bayesian optimization, reinforcement learning, and multi-instance learning. However, due to the intractability of the Evidence Lower Bound, authors have turned to Monte Carlo, quadrature, or bounds to perform inference, methods which are costly or give poor approximations to the true posterior. In this paper, we introduce a new bound for the expectation of the softplus function and subsequently show how it can be applied to variational logistic regression and Gaussian process classification. Unlike other bounds, our proposal does not rely on extending the variational family or on introducing additional parameters to ensure the bound is tight. In fact, we show that this bound is tighter than the state of the art, and that the resulting variational posterior achieves state-of-the-art performance, whilst being significantly faster to compute than Monte Carlo methods.
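As context for the baselines the new bound competes with, here is a minimal sketch (not the paper's bound) of the two standard ways to approximate the intractable ELBO term $\mathbb{E}[\log(1+e^x)]$ for Gaussian $x \sim \mathcal{N}(m, s^2)$: Gauss-Hermite quadrature and plain Monte Carlo:

```python
import numpy as np

def softplus_mean_quadrature(m, s, degree=32):
    # Gauss-Hermite: E[f(x)] ~ (1/sqrt(pi)) * sum_i w_i f(m + sqrt(2)*s*t_i)
    t, w = np.polynomial.hermite.hermgauss(degree)
    x = m + np.sqrt(2.0) * s * t
    return np.dot(w, np.logaddexp(0.0, x)) / np.sqrt(np.pi)

def softplus_mean_mc(m, s, n=200_000, seed=0):
    # Plain Monte Carlo estimate; logaddexp(0, x) = log(1 + e^x), stably
    x = m + s * np.random.default_rng(seed).standard_normal(n)
    return np.logaddexp(0.0, x).mean()

print(softplus_mean_quadrature(1.0, 2.0), softplus_mean_mc(1.0, 2.0))
```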
A preference matrix $M$ has an entry $p_{ij}$ for each pair of candidates in an election, representing the proportion of voters that prefer candidate $i$ over candidate $j$. The matrix is rationalizable if it is consistent with a set of voters whose preferences are total orders. A celebrated open problem asks for a concise characterization of rationalizable preference matrices. In this paper, we generalize this matrix rationalizability question and study when a preference matrix is consistent with a set of voters whose preferences are partial orders of width $\alpha$. The width (the maximum cardinality of an antichain) of the partial order is a natural measure of the rationality of a voter; indeed, a partial order of width $1$ is a total order. Our primary focus is the rationality number, the minimum width required to rationalize a preference matrix. We present two main results. The first concerns the class of half-integral preference matrices, where we show that the key parameter in evaluating the rationality number is the chromatic number of the undirected unanimity graph associated with the preference matrix $M$. The second concerns the class of integral preference matrices, where we show that the key parameter is now the dichromatic number of the directed voting graph associated with $M$.
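As a toy illustration, and under the assumed reading that the unanimity graph joins a pair of candidates exactly when the half-integral entry $p_{ij}$ is $0$ or $1$ (i.e., the electorate is unanimous on that pair; this reading is our assumption, not taken from the abstract), the sketch below builds such a graph and greedily bounds its chromatic number:

```python
import networkx as nx

# Hypothetical half-integral preference entries p_ij for 4 candidates
p = {(0, 1): 1.0, (0, 2): 0.5, (1, 2): 0.0, (0, 3): 1.0, (1, 3): 0.5, (2, 3): 1.0}

G = nx.Graph()
G.add_nodes_from(range(4))
# Assumed unanimity edges: pairs on which all voters agree (p_ij in {0, 1})
G.add_edges_from(e for e, v in p.items() if v in (0.0, 1.0))

colouring = nx.greedy_color(G, strategy="largest_first")
print(max(colouring.values()) + 1, "colours used (upper bound on chi)")
```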
Given a large dataset of many tuples, it is hard for users to pick out their preferred tuples. Thus, the preference query problem, which is to find the most preferred tuples in a dataset, is widely discussed in the database area. In this problem, a utility function is given by the user to evaluate to what extent the user prefers a tuple. However, for a dataset consisting of $N$ tuples, existing algorithms need $O(N)$ time to answer a query, or $O(N)$ time for a cold start before answering queries. The reason is that, on a classical computer, linear time is needed to evaluate the utilities of $N$ tuples under the utility function. In this paper, we discuss the Quantum Preference Query (QPQ) problem, where the dataset is stored in a quantum memory and a quantum computer returns the answers. Due to quantum parallelism, quantum algorithms can theoretically outperform their classical competitors. We discuss this problem with different kinds of input and output. The input can be a number $k$ or a threshold $\theta$: given $k$, the problem is to return the $k$ tuples with the highest utilities; given $\theta$, it is to return all tuples with utilities higher than $\theta$. The output can be classical (i.e., a list of tuples) or quantum (i.e., a superposition over quantum bits). We propose four quantum algorithms to solve the problem in these four scenarios. We analyze the number of memory accesses needed by each quantum algorithm, showing that the proposed algorithms are at least quadratically faster than their classical competitors. Our experiments show that, to answer a QPQ problem, the quantum algorithms achieve up to a 1000$\times$ reduction in the number of memory accesses over their classical competitors, which suggests that the QPQ problem is a promising future direction for the study of preference queries.
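For reference, here are the trivial $O(N)$ classical baselines for the two query types, which the proposed quantum algorithms beat at least quadratically in memory accesses (the tuples and the linear utility function below are illustrative):

```python
def top_k(tuples, utility, k):
    """Return the k tuples with the highest utilities (O(N log N) scan)."""
    return sorted(tuples, key=utility, reverse=True)[:k]

def above_threshold(tuples, utility, theta):
    """Return every tuple whose utility exceeds the threshold theta (O(N) scan)."""
    return [t for t in tuples if utility(t) > theta]

data = [(3, 1), (5, 2), (1, 9), (4, 4)]
u = lambda t: 0.7 * t[0] + 0.3 * t[1]   # a user-supplied linear utility
print(top_k(data, u, k=2), above_threshold(data, u, theta=3.0))
```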
The chronological order of user-item interactions can reveal time-evolving and sequential user behaviors in many recommender systems. The items that users will interact with may depend on the items accessed in the past. However, the substantial growth in the number of users and items means that sequential recommender systems still face non-trivial challenges: (1) modeling short-term user interests; (2) capturing long-term user interests; (3) effectively modeling item co-occurrence patterns. To tackle these challenges, we propose a memory-augmented graph neural network (MA-GNN) to capture both the long- and short-term user interests. Specifically, we apply a graph neural network to model the item contextual information within a short-term period and utilize a shared memory network to capture the long-range dependencies between items. In addition to modeling user interests, we employ a bilinear function to capture the co-occurrence patterns of related items. We extensively evaluate our model on five real-world datasets, comparing it with several state-of-the-art methods using a variety of performance metrics. The experimental results demonstrate the effectiveness of our model for the task of Top-K sequential recommendation.
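A minimal sketch of the bilinear co-occurrence term, with hypothetical dimensions: the affinity between a recently accessed item $i$ and a candidate item $j$ is scored as $e_i^{\top} \mathbf{W} e_j$ over learned item embeddings:

```python
import torch

d, n_items = 16, 100                       # illustrative sizes
item_emb = torch.nn.Embedding(n_items, d)  # learned item embeddings e
W = torch.nn.Parameter(torch.randn(d, d) * 0.01)  # bilinear weight matrix

def cooccurrence_score(i, j):
    # e_i^T W e_j: learned pairwise pattern between accessed item i
    # and candidate next item j
    return item_emb(i) @ W @ item_emb(j)

print(cooccurrence_score(torch.tensor(3), torch.tensor(7)).item())
```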
Graph representation learning aims to learn universal node representations that preserve both node attributes and structural information. The derived node representations can then serve various downstream tasks, such as node classification and node clustering. When a graph is heterogeneous, the problem becomes more challenging than for homogeneous graphs. Inspired by emerging information-theoretic learning algorithms, in this paper we propose an unsupervised graph neural network, Heterogeneous Deep Graph Infomax (HDGI), for heterogeneous graph representation learning. We use the meta-path structure to analyze the semantic connections in heterogeneous graphs and utilize a graph convolution module and a semantic-level attention mechanism to capture local representations. By maximizing local-global mutual information, HDGI effectively learns high-level node representations that can be utilized in downstream graph-related tasks. Experimental results show that HDGI remarkably outperforms state-of-the-art unsupervised graph representation learning methods on both classification and clustering tasks. By feeding the learned representations into a parametric model, such as logistic regression, we even achieve performance comparable to state-of-the-art supervised end-to-end GNN models on node classification tasks.
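As a rough sketch of local-global mutual information maximization in the Deep Graph Infomax style that HDGI builds on (sizes and the corruption stand-in are illustrative, not the paper's exact architecture), a bilinear discriminator is trained to tell real (node, summary) pairs from corrupted ones:

```python
import torch

d, n = 32, 200
h_real = torch.randn(n, d)               # node embeddings from the GNN encoder
h_fake = torch.randn(n, d)               # stand-in for embeddings of a corrupted graph
summary = torch.sigmoid(h_real.mean(0))  # global graph-level summary vector

B = torch.nn.Parameter(torch.randn(d, d) * 0.01)  # bilinear discriminator
logits_real = h_real @ B @ summary       # scores for real (node, summary) pairs
logits_fake = h_fake @ B @ summary       # scores for corrupted pairs
labels = torch.cat([torch.ones(n), torch.zeros(n)])
loss = torch.nn.functional.binary_cross_entropy_with_logits(
    torch.cat([logits_real, logits_fake]), labels)
print(loss.item())  # minimizing this maximizes a MI lower bound
```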
Many current applications use recommendations to modify natural user behavior, for instance to increase the number of sales or the time spent on a website. This creates a gap between the final recommendation objective and the classical setup, in which recommendation candidates are evaluated by their coherence with past user behavior, predicting either the missing entries in the user-item matrix or the most likely next event. To bridge this gap, we optimize a recommendation policy for the task of increasing the desired outcome relative to the organic user behavior. We show that this is equivalent to learning to predict recommendation outcomes under a fully random recommendation policy. To this end, we propose a new domain adaptation algorithm that learns from logged data containing outcomes from a biased recommendation policy and predicts recommendation outcomes under random exposure. We compare our method against state-of-the-art factorization methods as well as recent approaches to causal recommendation, and show significant improvements.
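A hedged sketch of the underlying idea, not the paper's exact algorithm: inverse-propensity weighting lets logged data from a biased policy estimate quantities under fully random exposure, the target the predictor is trained toward:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_logs = 20, 100_000

propensity = rng.dirichlet(np.ones(n_items))   # biased logging policy
shown = rng.choice(n_items, size=n_logs, p=propensity)
# Synthetic outcomes: item 3 has an elevated success rate
outcome = (rng.random(n_logs) < 0.05 + 0.4 * (shown == 3)).astype(float)

# IPS estimate of the average outcome under uniformly random recommendation
uniform_p = 1.0 / n_items
v_uniform = np.mean(outcome * uniform_p / propensity[shown])
print(f"estimated outcome rate under random exposure: {v_uniform:.4f}")
# ground truth here: mean over items of (0.05 + 0.4 * [item == 3]) = 0.07
```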
We investigate a lattice-structured LSTM model for Chinese named entity recognition (NER), which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word-sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results.
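The lattice construction step can be sketched as follows: enumerate every lexicon word that matches a contiguous character span, here on the classic 南京市长江大桥 (Nanjing Yangtze River Bridge) segmentation-ambiguity example; the toy lexicon is illustrative:

```python
def lattice_words(chars, lexicon, max_len=4):
    """Find all (start, end, word) spans whose characters form a lexicon word."""
    matches = []
    for i in range(len(chars)):
        for j in range(i + 1, min(i + max_len, len(chars)) + 1):
            word = "".join(chars[i:j])
            if word in lexicon:
                matches.append((i, j, word))  # word spans chars[i:j]
    return matches

lexicon = {"南京", "南京市", "市长", "长江", "长江大桥", "大桥"}
print(lattice_words(list("南京市长江大桥"), lexicon))
```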
Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms -- especially the collaborative filtering (CF) based approaches with shallow or deep models -- usually work with various unstructured information sources, such as textual reviews, visual images, and implicit or explicit feedback. Though structured knowledge bases were once considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When explicit knowledge about users and items is brought into the recommendation process, the system can provide highly customized recommendations based on users' historical behaviors, and the knowledge is helpful for providing informed explanations for the recommended items. In this work, we propose to reason over knowledge base embeddings for explainable recommendation. Specifically, we propose a knowledge base representation learning framework to embed heterogeneous entities for recommendation and, based on the embedded knowledge base, a soft matching algorithm to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verify the superior recommendation performance and explainability of our approach compared with state-of-the-art baselines.
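As a toy illustration of the kind of reasoning involved, the sketch below uses a translation-style (TransE-like) score, one plausible instance of knowledge base embedding with a soft-matching step; the entity names, relations, and sizes are all hypothetical, not the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
# Hypothetical heterogeneous entities (user, item, attribute) and relations
entity = {name: rng.standard_normal(d) for name in ("user_42", "item_7", "brand_x")}
relation = {name: rng.standard_normal(d) for name in ("purchase", "belongs_to")}

def score(h, r, t):
    """Plausibility of triple (h, r, t) under h + r ~ t; higher is better."""
    return -np.linalg.norm(entity[h] + relation[r] - entity[t])

# Soft matching: rank candidate explanatory triples for a recommendation
cands = [("user_42", "purchase", "item_7"), ("item_7", "belongs_to", "brand_x")]
print(max(cands, key=lambda x: score(*x)))
```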