Motivated by practical applications arising in economics, viral marketing, and elections, this paper studies a novel problem, Group Influence with Minimal cost, which aims to find a seed set of smallest cost that can influence all target groups, where each user is associated with a cost and a group is influenced if the total score of the influenced users belonging to the group is at least a certain threshold. As the group-influence function is neither submodular nor supermodular, the theoretical bounds on the quality of solutions returned by the well-known greedy approach may not hold. To address this challenge, we propose a bi-criteria polynomial-time approximation algorithm whose guarantee holds with high probability. At the heart of the algorithm is a novel group reachable reverse sample concept, which speeds up the estimation of the group-influence function. Finally, extensive experiments conducted on real social networks show that our proposed algorithm significantly outperforms state-of-the-art algorithms in terms of both objective value and running time.
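To make the setting concrete, here is a minimal Python sketch (illustrative only, not the paper's sampling-based algorithm): group influence is estimated by plain Monte Carlo simulation of an independent-cascade model, and seeds are chosen by the cost-effectiveness greedy rule whose guarantees the abstract notes may fail. The toy graph, costs, scores, and thresholds are all invented for the example.

```python
import random

# Toy instance: directed edges with propagation probabilities, node costs,
# and groups given as (member scores, threshold). All numbers are invented.
EDGES = {1: [(2, 0.6), (3, 0.6)], 2: [(4, 0.5)], 3: [(4, 0.5)], 4: []}
COST = {1: 3.0, 2: 1.0, 3: 1.0, 4: 1.0}
GROUPS = {"g1": ({2: 1.0, 3: 1.0}, 1.5), "g2": ({4: 1.0}, 1.0)}

def simulate(seeds, rng):
    """One independent-cascade sample: returns the set of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v, p in EDGES[u]:
            if v not in active and rng.random() < p:
                active.add(v)
                frontier.append(v)
    return active

def influenced_groups(seeds, samples=2000):
    """Monte Carlo estimate of the expected number of influenced groups."""
    rng = random.Random(0)  # fixed seed: comparable estimates across calls
    total = 0
    for _ in range(samples):
        active = simulate(seeds, rng)
        for scores, threshold in GROUPS.values():
            if sum(s for v, s in scores.items() if v in active) >= threshold:
                total += 1
    return total / samples

# Cost-effectiveness greedy: add the node with the best marginal gain per
# unit cost until (almost) all groups are influenced.
seeds = set()
while influenced_groups(seeds) < len(GROUPS) - 0.05:
    base = influenced_groups(seeds)
    best = max((v for v in COST if v not in seeds),
               key=lambda v: (influenced_groups(seeds | {v}) - base) / COST[v])
    seeds.add(best)
print(sorted(seeds), sum(COST[v] for v in seeds))
```

On this toy instance the ratio rule ends up seeding every node (cost 6), while seeding {2, 3, 4} directly (cost 3) already influences both groups, hinting at why the non-submodular group-influence objective defeats the standard greedy analysis.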
We prove constructively that the maximum possible number of minimal connected dominating sets in a connected undirected graph of order $n$ is in $\Omega(1.489^n)$. This improves the previously known lower bound of $\Omega(1.4422^n)$ and reduces the gap between lower and upper bounds for input-sensitive enumeration of minimal connected dominating sets in general graphs as well as some special graph classes.
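The counted objects are easy to enumerate by brute force on small graphs, which can help sanity-check such bounds. The following Python sketch (exponential-time and purely illustrative, unrelated to the paper's construction) lists all minimal connected dominating sets of a graph given as an adjacency dictionary:

```python
from itertools import combinations

def is_connected(nodes, adj):
    """DFS connectivity check on the subgraph induced by `nodes`."""
    nodes = set(nodes)
    if not nodes:
        return False
    stack, seen = [next(iter(nodes))], set()
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(v for v in adj[u] if v in nodes and v not in seen)
    return seen == nodes

def is_dominating(nodes, adj):
    nodes = set(nodes)
    return all(u in nodes or adj[u] & nodes for u in adj)

def minimal_cds(adj):
    """Enumerate all minimal connected dominating sets by brute force."""
    result = []
    for k in range(1, len(adj) + 1):
        for cand in combinations(adj, k):
            s = set(cand)
            if not (is_dominating(s, adj) and is_connected(s, adj)):
                continue
            # Minimal iff removing any single vertex destroys the property
            # (supersets of a CDS can always be shrunk one vertex at a time).
            if all(not (is_dominating(s - {v}, adj) and is_connected(s - {v}, adj))
                   for v in s):
                result.append(s)
    return result

adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}  # the 6-cycle
print(len(minimal_cds(adj)))
```

For the 6-cycle it reports 6 minimal connected dominating sets, namely the six paths of four consecutive vertices.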
Owing to the remarkable development of deep learning technology, there have been a series of efforts to build deep learning-based climate models. Whereas most of them utilize recurrent neural networks and/or graph neural networks, we design a novel climate model based on two concepts: the neural ordinary differential equation (NODE) and the diffusion equation. Many physical processes involving the Brownian motion of particles can be described by the diffusion equation, and as a result it is widely used in climate modeling. NODEs, on the other hand, learn a latent governing ODE from data. We combine the two into a single framework and propose a concept called the neural diffusion equation (NDE). Our NDE, equipped with the diffusion equation and an additional neural network that models inherent uncertainty, can learn the latent governing equation that best describes a given climate dataset. In our experiments with two real-world datasets, one synthetic dataset, and eleven baselines, our method consistently outperforms existing baselines by non-trivial margins.
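A minimal sketch of the idea, with all architectural details simplified away: the state evolves by a graph-Laplacian diffusion term plus a small learned network, integrated here with an explicit Euler solver. Real NODE implementations use adaptive solvers and train the network; the weights below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Heat diffusion on a chain of n sites: graph Laplacian L = D - A.
n = 8
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A

# Tiny MLP standing in for the network that models inherent uncertainty;
# in the real method its weights are trained, here they are random.
W1, b1 = 0.1 * rng.standard_normal((16, n)), np.zeros(16)
W2, b2 = 0.1 * rng.standard_normal((n, 16)), np.zeros(n)
f_theta = lambda u: W2 @ np.tanh(W1 @ u + b1) + b2

def nde_rhs(u, alpha=0.5):
    """du/dt = -alpha * L u (physical diffusion) + f_theta(u) (learned term)."""
    return -alpha * (L @ u) + f_theta(u)

def rollout(u0, dt=0.01, steps=200):
    """Explicit Euler integration of the neural diffusion equation."""
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * nde_rhs(u)
    return u

u0 = np.zeros(n)
u0[0] = 1.0                       # initial heat spike at the first site
print(rollout(u0).round(3))       # the spike is smoothed out by diffusion
```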
We present deterministic algorithms for the Hidden Subgroup Problem. The first algorithm, for abelian groups, achieves the same asymptotic worst-case query complexity as the optimal randomized algorithm, namely O($\sqrt{ n}\,$), where $n$ is the order of the group. The analogous algorithm for non-abelian groups comes within a $\sqrt{ \log n}$ factor of the optimal randomized query complexity. The best known randomized algorithm for the Hidden Subgroup Problem has expected query complexity that is sensitive to the input, namely O($\sqrt{ n/m}\,$), where $m$ is the order of the hidden subgroup. In the first version of this article (arXiv:2104.14436v1 [cs.DS]), we asked if there is a deterministic algorithm whose query complexity has a similar dependence on the order of the hidden subgroup. Prompted by this question, Ye and Li (arXiv:2110.00827v1 [cs.DS]) present deterministic algorithms for abelian groups which solve the problem with O($\sqrt{ n/m }\,$ ) queries, and find the hidden subgroup with O($\sqrt{ n (\log m) / m} + \log m$) queries. Moreover, they exhibit instances which show that in general, the deterministic query complexity of the problem may be o($\sqrt{ n/m } \,$), and that of finding the entire subgroup may also be o($\sqrt{ n/m } \,$) or even $\omega(\sqrt{ n/m } \,)$. We present a different deterministic algorithm for the Hidden Subgroup Problem that also has query complexity O($\sqrt{ n/m }\,$) for abelian groups. The algorithm is arguably simpler. Moreover, it works for non-abelian groups, and has query complexity O($\sqrt{ (n/m) \log (n/m) }\,$) for a large class of instances, such as those over supersolvable groups. We build on this to design deterministic algorithms to find the hidden subgroup for all abelian and some non-abelian instances, at the cost of a $\log m$ multiplicative factor increase in the query complexity.
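For context, the randomized baseline mentioned in the abstract is easy to state for the cyclic group $Z_n$: query uniformly random elements until two of them collide under the hiding function $f$; since $f(x) = f(y)$ exactly when $x - y$ lies in the hidden subgroup $H$, each collision yields an element of $H$, and the birthday bound gives expected O($\sqrt{n/m}\,$) queries. A small Python sketch follows (the instance and the gcd-based recovery of $H$ are illustrative choices, not taken from the paper; the paper's deterministic algorithms are more involved):

```python
import random
from math import gcd

def collision_step(n, f, rng):
    """Sample elements of Z_n until two land in the same coset of H under f;
    return their difference (an element of H) and the number of queries."""
    seen = {}  # f-value -> first preimage observed
    queries = 0
    while True:
        x = rng.randrange(n)
        queries += 1
        y = seen.setdefault(f(x), x)
        if y != x:
            return (x - y) % n, queries

# Hidden subgroup H = <d> of Z_n, hidden by f(x) = x mod d, so |H| = n // d.
n, d = 10_000, 40
rng = random.Random(1)
f = lambda x: x % d
# Every collision difference is a multiple of d; the gcd of a few of them
# recovers d (and hence H) with high probability.
g = 0
for _ in range(6):
    diff, q = collision_step(n, f, rng)
    g = gcd(g, diff)
print(g)  # typically 40, i.e. a generator of H
```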
The influence maximization paradigm has been used by researchers in various fields to study how information spreads in social networks. While attention was previously mostly on efficiency, fairness issues have recently been taken into account in this scope. In this paper, we propose to use randomization as a means for achieving fairness. Similar to previous works like Fish et al. (WWW '19) and Tsang et al. (IJCAI '19), we study the maximin criterion for (group) fairness. In contrast to their work, however, we model the problem in such a way that, when choosing the seed sets, probabilistic strategies are possible rather than only deterministic ones. We introduce two variants of this probabilistic problem, one that entails probabilistic strategies over nodes (the node-based problem) and one that entails probabilistic strategies over sets of nodes (the set-based problem). While the original deterministic problem involving the maximin criterion has been shown to be inapproximable, interestingly, we show that both probabilistic variants admit approximation algorithms achieving a constant multiplicative factor of 1-1/e up to an additive, arbitrarily small error due to the simulation of the information spread. For an experimental study, we provide implementations of multiplicative-weight routines for both problems and compare the achieved fairness values to existing methods. Perhaps unsurprisingly, we show that the ex-ante values of the computed probabilistic strategies are significantly larger than the (ex-post) fairness values of previous methods. This indicates that studying fairness via randomization is a worthwhile path to follow. Perhaps more surprisingly, we observe that even the ex-post fairness values computed by our routines dominate the fairness achieved by previous methods on most of the tested instances.
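A minimal sketch of a multiplicative-weights routine of the kind described (heavily simplified: the per-node group-coverage probabilities are precomputed toy numbers, seed-set coverage assumes independence, and the best response is a plain greedy step; none of this is the paper's exact routine). The uniform mixture over the best responses is the probabilistic, set-based strategy, and its ex-ante maximin value is read off at the end.

```python
import math
import random

random.seed(0)

# Toy data: cov[v, g] = probability that seeding node v reaches group g
# (in practice these numbers come from simulating the information spread).
NODES, GROUPS, K = range(12), range(3), 2
cov = {(v, g): 0.4 * random.random() for v in NODES for g in GROUPS}

def group_cov(seed_set, g):
    """P(group g reached by some seed), assuming independent seeds."""
    miss = 1.0
    for v in seed_set:
        miss *= 1.0 - cov[v, g]
    return 1.0 - miss

def best_response(weights):
    """Greedy K-seed set maximizing the weight-averaged group coverage."""
    s = set()
    for _ in range(K):
        v = max((u for u in NODES if u not in s),
                key=lambda u: sum(w * group_cov(s | {u}, g)
                                  for g, w in weights.items()))
        s.add(v)
    return frozenset(s)

def mwu(rounds=200, eta=0.3):
    """Multiplicative weights over groups: well-covered groups are
    down-weighted, so later rounds favor the worst-off groups."""
    w = {g: 1.0 for g in GROUPS}
    mixture = []
    for _ in range(rounds):
        total = sum(w.values())
        s = best_response({g: w[g] / total for g in GROUPS})
        mixture.append(s)
        for g in GROUPS:
            w[g] *= math.exp(-eta * group_cov(s, g))
    return mixture

mixture = mwu()
ex_ante = [sum(group_cov(s, g) for s in mixture) / len(mixture) for g in GROUPS]
print(min(ex_ante))  # ex-ante maximin value of the probabilistic strategy
```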
We consider the Cauchy problem for a first-order evolution equation with memory in a finite-dimensional Hilbert space, where the integral term involves the time derivative of the solution. The main difficulty in approximating solutions of such nonlocal problems is the need to work with the approximate solution at all previous time levels. We propose a transformation of the first-order integrodifferential equation into a system of local evolution equations, using the approach known from the theory of Volterra integral equations in which the difference kernel is approximated by a sum of exponentials. This yields a local problem for a weakly coupled system of equations with additional ordinary differential equations. We derive stability estimates for the solution of the corresponding Cauchy problem with respect to the initial data and the right-hand side. Primary attention is paid to constructing and analyzing the stability of two-level difference schemes, which are convenient for computational implementation. We also present the numerical solution of a two-dimensional model problem for a first-order evolution equation in which the dependence on the spatial variables is governed by the Laplace operator.
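The kernel transformation is easy to demonstrate on a scalar model problem. In the sketch below (the coefficients $a_i$, $b_i$ and the right-hand side $f$ are invented for illustration), the auxiliary variables $z_i(t) = \int_0^t a_i e^{-b_i (t-s)} u'(s)\,ds$ satisfy the local ODEs $z_i' = -b_i z_i + a_i u'$, and a backward-Euler two-level scheme advances $(u, z)$ using only the previous time level:

```python
import numpy as np

# Scalar model problem: u'(t) + \int_0^t k(t - s) u'(s) ds = f(t), u(0) = 0,
# with the difference kernel k(t) ~ a_1 exp(-b_1 t) + a_2 exp(-b_2 t).
a = np.array([0.5, 0.3])
b = np.array([1.0, 5.0])
f = lambda t: np.cos(t)

# Two-level scheme: backward Euler for u and for the local memory variables
#   z_i' = -b_i z_i + a_i u',
# eliminating z_i^{m+1} analytically so each step is a scalar solve.
T, M = 4.0, 400
tau = T / M
u, z = 0.0, np.zeros_like(a)
for m in range(M):
    t1 = (m + 1) * tau
    s_a = np.sum(a / (1.0 + tau * b))        # Sum_i a_i / (1 + tau b_i)
    s_z = np.sum(z / (1.0 + tau * b))        # Sum_i z_i / (1 + tau b_i)
    du = (f(t1) - s_z) / (1.0 / tau + s_a)   # u^{m+1} - u^m
    z = (z + a * du) / (1.0 + tau * b)       # local update: no history needed
    u += du
print(u)  # u(T) computed without storing the solution at past time levels
```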
Numerous studies have demonstrated that deep neural networks are easily misled by adversarial examples. Effectively evaluating the adversarial robustness of a model is important for its deployment in practical applications. Currently, a common type of evaluation approximates the adversarial risk of a model as a robustness indicator by constructing malicious instances and executing attacks. Unfortunately, there is an error (gap) between the approximate value and the true value. Previous studies manually design attack methods to achieve a smaller error, which is inefficient and may miss a better solution. In this paper, we cast tightening the approximation error as an optimization problem and solve it algorithmically. More specifically, we first show that replacing the non-convex and discontinuous 0-1 loss with a surrogate loss, a necessary compromise in computing the approximation, is one of the main sources of the error. We then propose AutoLoss-AR, the first method for searching loss functions that tighten the approximation error of adversarial risk. Extensive experiments are conducted in multiple settings. The results demonstrate the effectiveness of the proposed method: the best-discovered loss functions outperform the handcrafted baseline by 0.9%-2.9% and 0.7%-2.0% on MNIST and CIFAR-10, respectively. We also verify that the discovered losses transfer to other settings and explore why they outperform the baseline by visualizing the local loss landscape.
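The following toy Python sketch conveys the search idea only; it is not AutoLoss-AR. The model is a fixed linear classifier, the attack is a few steps of sign-gradient PGD with finite-difference gradients, and the "search space" is a hypothetical two-parameter family of temperature-scaled, margin-shifted cross-entropy surrogates. The attack success rate is a lower bound on the adversarial risk, so the loss achieving the highest rate gives the tightest approximation within this family.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed 3-class linear model on 2-D inputs (a stand-in for a trained net);
# labels are the model's own predictions, so every attack "hit" certifies
# an adversarial example and the hit rate lower-bounds the adversarial risk.
W, b = rng.standard_normal((3, 2)), rng.standard_normal(3)
logits = lambda x: W @ x + b
X = rng.standard_normal((300, 2))
Y = np.array([int(np.argmax(logits(x))) for x in X])

def pgd_success(loss, eps=0.5, steps=5, h=1e-4):
    """Sign-gradient PGD driven by a surrogate loss(logits, label);
    gradients w.r.t. the input are taken by finite differences."""
    hits = 0
    for x0, yi in zip(X, Y):
        x = x0.copy()
        for _ in range(steps):
            g = np.array([(loss(logits(x + h * e), yi)
                           - loss(logits(x - h * e), yi)) / (2 * h)
                          for e in np.eye(2)])
            x = x0 + np.clip(x + (eps / 2) * np.sign(g) - x0, -eps, eps)
        hits += np.argmax(logits(x)) != yi
    return hits / len(X)

def make_loss(T, m):
    """Temperature-scaled, margin-shifted cross-entropy surrogate."""
    def loss(u, yi):
        z = (u - u[yi] + m * (np.arange(len(u)) != yi)) / T
        return np.log(np.sum(np.exp(z)))
    return loss

# Grid search over the surrogate family: the loss with the highest attack
# success rate yields the tightest adversarial-risk approximation here.
best = max(((T, m) for T in (0.25, 1.0, 4.0) for m in (0.0, 1.0)),
           key=lambda p: pgd_success(make_loss(*p)))
print(best, pgd_success(make_loss(*best)))
```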
Influence maximization is the task of selecting a small number of seed nodes in a social network so as to maximize the spread of influence from these seeds, and it has been widely investigated over the past two decades. In the canonical setting, the whole social network, together with its diffusion parameters, is given as input. In this paper, we consider the more realistic sampling setting where the network is unknown and we only have a set of passively observed cascades recording the set of activated nodes at each diffusion step. We study the task of influence maximization from these cascade samples (IMS) and present constant-approximation algorithms for this task under mild conditions on the seed set distribution. To achieve the optimization goal, we also provide a novel solution to the network inference problem, that is, learning diffusion parameters and the network structure from cascade data. Compared with prior solutions, our network inference algorithm requires weaker assumptions and does not rely on maximum-likelihood estimation or convex programming. Our IMS algorithms enhance the learn-then-optimize approach by allowing a constant approximation ratio even when the diffusion parameters are hard to learn, without any assumptions on the network structure or diffusion parameters.
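As a toy illustration of learning diffusion parameters from passively observed cascades, here is a crude frequency estimator in Python. It is not the paper's inference algorithm (which needs weaker assumptions and avoids the co-parent bias noted in the comments); the network and probabilities are invented.

```python
import random
from collections import defaultdict

random.seed(0)

# Ground-truth independent-cascade network: edge -> activation probability.
P = {(0, 1): 0.6, (0, 2): 0.4, (1, 3): 0.5, (2, 3): 0.3}
NODES = range(4)

def cascade(seed):
    """Run one discrete-time IC cascade; return node -> activation step."""
    when, frontier, step = {seed: 0}, {seed}, 0
    while frontier:
        step += 1
        nxt = set()
        for u in frontier:
            for (src, dst), p in P.items():
                if src == u and dst not in when and random.random() < p:
                    nxt.add(dst)
        for v in nxt:
            when[v] = step
        frontier = nxt
    return when

cascades = [cascade(random.choice([0, 1, 2])) for _ in range(20000)]

# Crude frequency estimate of p(u, v): among cascades where u is active at
# step t and v still inactive, how often does v activate at step t + 1?
# Simultaneously active co-parents bias this upward; a proper inference
# procedure must control for such effects.
trials, hits = defaultdict(int), defaultdict(int)
for when in cascades:
    for u, tu in when.items():
        for v in NODES:
            if v == u or when.get(v, 10**9) <= tu:
                continue
            trials[u, v] += 1
            hits[u, v] += when.get(v) == tu + 1
est = {e: hits[e] / trials[e] for e in trials if trials[e] >= 200}
print({e: round(p, 2) for e, p in est.items() if p >= 0.1})
```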
Rankings, especially those in search and recommendation systems, often determine how people access information and how information is exposed to people. Therefore, balancing relevance and fairness of information exposure is considered one of the key problems for modern IR systems. Since conventional ranking frameworks that myopically sort documents by relevance inevitably introduce unfair result exposure, recent studies on ranking fairness mostly focus on dynamic ranking paradigms, where result rankings can be adapted in real time to support group fairness (e.g., with respect to race or gender). Existing studies on fairness in dynamic learning to rank, however, often achieve overall fairness of document exposure in ranked lists by significantly sacrificing result relevance and fairness among the top results. To address this problem, we propose a fair and unbiased ranking method named Maximal Marginal Fairness (MMF). The algorithm integrates unbiased estimators for both relevance and merit-based fairness, while providing an explicit controller that balances the selection of documents to maximize marginal relevance and fairness in the top-k results. Theoretical and empirical analysis shows that, with small compromises on long-list fairness, our method achieves superior efficiency and effectiveness compared to state-of-the-art algorithms in both relevance and fairness for top-k rankings.
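A minimal sketch of an MMR-style marginal relevance/fairness controller in this spirit (the scoring rule, exposure model, and data are invented for illustration; MMF additionally builds unbiased estimators for relevance and merit-based fairness, which are omitted here):

```python
# (id, group, relevance); numbers invented for illustration.
docs = [("d1", "A", 0.9), ("d2", "A", 0.8), ("d3", "B", 0.7),
        ("d4", "B", 0.6), ("d5", "A", 0.5)]

def rank_topk(docs, k=4, lam=0.5):
    """Greedy top-k selection: each position takes the document with the
    best mix of relevance and the exposure deficit of its group."""
    ranked, pool = [], list(docs)
    exposure = {"A": 0.0, "B": 0.0}
    for pos in range(k):
        def marginal(doc):
            _, group, rel = doc
            deficit = max(exposure.values()) - exposure[group]
            return lam * rel + (1 - lam) * deficit
        best = max(pool, key=marginal)
        pool.remove(best)
        exposure[best[1]] += 1.0 / (pos + 1)  # position-based exposure decay
        ranked.append(best[0])
    return ranked

print(rank_topk(docs))  # ['d1', 'd3', 'd4', 'd2']: group B is pulled up
```

The controller trades a small amount of pure relevance (d2 drops two positions) for balanced group exposure near the top, which is the behavior the abstract describes for top-k results.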
Precise user and item embedding learning is the key to building a successful recommender system. Traditionally, Collaborative Filtering (CF) provides a way to learn user and item embeddings from the user-item interaction history. However, its performance is limited by the sparsity of user behavior data. With the emergence of online social networks, social recommender systems have been proposed that utilize each user's local neighbors' preferences to alleviate data sparsity and improve user embedding modeling. We argue that, for each user of a social platform, her potential embedding is influenced by the users she trusts. As social influence recursively propagates and diffuses in the social network, each user's interests change in the recursive process. Nevertheless, current social recommendation models are simply static models that leverage the local neighbors of each user without simulating the recursive diffusion in the global social network, leading to suboptimal recommendation performance. In this paper, we propose a deep influence propagation model to simulate how users are influenced by the recursive social diffusion process for social recommendation. For each user, the diffusion process starts with an initial embedding that fuses the related features and a free user latent vector capturing latent behavior preferences. The key idea of our proposed model is a layer-wise influence propagation structure that models how users' latent embeddings evolve as the social diffusion process continues. We further show that our proposed model is general and can be applied when the user (item) attributes or the social network structure is not available. Finally, extensive experimental results on two real-world datasets clearly show the effectiveness of our proposed model, with more than 13% performance improvement over the best baselines.
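A minimal sketch of the layer-wise propagation idea (random embeddings, a toy trust graph, and a plain residual mean-aggregation in place of learned transformations; this illustrates the recursion only, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy trust graph (user -> users she trusts) and layer-0 embeddings, which
# in the full model fuse profile features with a free latent vector.
trust = {0: [1, 2], 1: [2], 2: [0], 3: [0, 1]}
n_users, dim, n_layers = 4, 8, 3
U0 = rng.standard_normal((n_users, dim))

def propagate(U0, trust, n_layers):
    """Layer-wise influence propagation: each layer refines a user's
    embedding with an aggregate of her trusted users' current embeddings."""
    H = U0
    for _ in range(n_layers):
        agg = np.stack([H[trust[u]].mean(axis=0) for u in range(len(H))])
        H = H + agg   # simple residual mean-aggregation as a stand-in
    return H

U = propagate(U0, trust, n_layers)
item = rng.standard_normal(dim)   # an item embedding
print(U @ item)                   # predicted preference scores per user
```

Stacking several such layers is what lets a user's final embedding reflect multi-hop social influence rather than only her immediate neighbors.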
We propose a new estimation method for topic models that is not a variation on existing simplex-finding algorithms and that estimates the number of topics K from the observed data. We derive new finite-sample minimax lower bounds for the estimation of A, as well as new upper bounds for our proposed estimator, and describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p), and number of topics (K); both p and K are allowed to increase with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study, illustrating that the new algorithm is faster and more accurate than current ones, even though it starts with the computational and theoretical disadvantage of not knowing the correct number of topics K, while the competing methods are provided with the correct value in our simulations.