In this paper, we consider sharing many secrets among a set of participants using threshold schemes. All secrets are assumed to be statistically independent, and we focus on the weak security condition. Under these assumptions, we investigate the infimum of the (average) information ratio and the (average) randomness ratio for any structure pair consisting of the number of participants and the threshold values of all secrets. For two structure pairs in which the numbers of participants are equal and the arrays of threshold values satisfy the subset relationship, we prove two leading corollaries in opposite directions. More specifically, the bound relating the lengths of shares, secrets, and randomness for the complex structure pair can be truncated into one for the simple pair, and the linear schemes for the simple structure pair can be combined independently into a multiple-threshold scheme for the complex pair. The former corollary serves the converse part and the latter the achievability part. We present three new bounds specific to the case where the number of secrets sharing the same threshold value $ t $ is larger than $ t $, together with two novel linear schemes, modified from the Vandermonde matrix, for two similar cases. From these follow optimal results for the average information ratio, the average randomness ratio, and the randomness ratio. We give a small example showing that there exists another type of bound that may be crucial for the information ratio, for which we obtain optimal results in only three cases.
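As background for the Vandermonde-based linear schemes mentioned above, the classical Shamir threshold construction can be sketched in a few lines; this is a generic illustration over a prime field, not the paper's modified schemes:

```python
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def make_shares(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it.
    Shares are evaluations of a random degree-(t-1) polynomial whose
    constant term is the secret (a Vandermonde system)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=42, t=3, n=5)
assert reconstruct(shares[:3]) == 42
```

Any $ t $ shares determine the degree-$(t-1)$ polynomial uniquely because the corresponding $ t \times t $ Vandermonde matrix of evaluation points is invertible, while any $ t-1 $ shares reveal nothing about the secret.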
Locating survivors in collapsed buildings is critical for arranging rescue operations; many lives are lost for lack of systems that can detect people in the rubble in time. We therefore design a hand-gesture-controlled robot capable of detecting humans trapped under collapsed building debris. The proposed system can access locations that are unreachable for human rescuers and detect people trapped under the rubble. This information is then used to notify the rescue team so that adequate measures can be taken and rescue operations initiated accordingly.
This paper introduces a novel approach that seeks a middle ground for traffic control in multi-lane congestion, where prevailing traffic speeds are too fast and speed recommendations designed to dampen traffic waves are too slow. Advanced controllers that modify the speed of an automated car for wave dampening, eco-driving, or other goals are typically designed with forward collision safety in mind. Our approach goes further by considering how dangerous it can be for a controller to drive so slowly relative to prevailing traffic that it creates a significant safety and comfort issue. This paper explores open-road scenarios where large gaps between prevailing speeds and desired speeds can exist, specifically when infrastructure-based variable speed limit systems are not strictly followed at all times by other drivers. The algorithm we design, implement, and deploy follows variable speed limits when other drivers also follow them, avoids collisions with vehicles ahead, and adapts to prevailing traffic when other motorists travel well above the posted speeds. The key is to reject unsafe speed recommendations from infrastructure-based traffic-smoothing systems, based on real-time local traffic conditions observed by the vehicle under control. This solution is implemented and deployed on two control vehicles in heavy multi-lane highway congestion. The results include analysis of the system design and field tests that validate the system's performance, using an existing Variable Speed Limit system as the external source of speed recommendations and the on-board sensors of a stock Toyota Rav4 to estimate the prevailing speed of traffic around the vehicle under control.
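The rejection rule at the heart of this design can be illustrated with a minimal gate; the function below is a sketch under assumed names, units, and threshold, not the deployed controller:

```python
def commanded_speed(vsl_speed, prevailing_speed, max_deficit=8.0):
    """Gate an infrastructure speed recommendation against locally observed
    traffic. All speeds are in m/s; `max_deficit` is a hypothetical tuning
    parameter capping how far below prevailing traffic the command may fall.
    Forward-collision safety is assumed to be enforced by a separate
    car-following layer downstream of this gate."""
    if prevailing_speed - vsl_speed > max_deficit:
        # Surrounding drivers are ignoring the recommendation; following it
        # would leave us dangerously slow relative to traffic, so reject it
        # and track prevailing speed (minus a margin) instead.
        return prevailing_speed - max_deficit
    return vsl_speed
```

For example, with prevailing traffic at 26 m/s and a 13 m/s recommendation, this gate commands 18 m/s rather than forcing a 13 m/s crawl through much faster traffic.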
In this paper, we propose a novel graph neural network-based recommendation model called KGLN, which leverages Knowledge Graph (KG) information to enhance the accuracy and effectiveness of personalized recommendations. We first use a single-layer neural network to merge individual node features in the graph, and then adjust the aggregation weights of neighboring entities by incorporating influence factors. The model is extended from a single layer to multiple layers through iteration, enabling entities to access extensive multi-order associated entity information. The final step integrates the entity and user features to produce a recommendation score. We evaluate the model by comparing the effects of different aggregation methods and influence factors. In tests on the MovieLens-1M and Book-Crossing datasets, KGLN improves the Area Under the ROC Curve (AUC) by 0.3% to 5.9% and 1.1% to 8.2%, respectively, over established baseline methods such as LibFM, DeepFM, Wide&Deep, and RippleNet.
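One plausible reading of the influence-weighted aggregation described above is sketched below; the layer structure, tensor shapes, and the form of the influence factor are assumptions drawn from the abstract, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InfluenceAggregator(nn.Module):
    """A KGLN-style layer sketch: neighbor entity embeddings are weighted
    by an influence factor computed from the user and relation embeddings,
    then merged with the center entity through a single-layer network."""
    def __init__(self, dim):
        super().__init__()
        self.merge = nn.Linear(2 * dim, dim)  # single-layer merge network

    def forward(self, center, neighbors, relations, user):
        # center: [B, d]; neighbors, relations: [B, n_nbr, d]; user: [B, d]
        # Influence factor: how much this user attends to each relation.
        scores = (relations * user.unsqueeze(1)).sum(-1)   # [B, n_nbr]
        alpha = F.softmax(scores, dim=-1).unsqueeze(-1)    # [B, n_nbr, 1]
        nbr = (alpha * neighbors).sum(1)                   # [B, d]
        return torch.relu(self.merge(torch.cat([center, nbr], dim=-1)))
```

Stacking this layer, so that each iteration aggregates one more hop of neighbors, is what lets entities accumulate multi-order associated entity information.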
Technology ecosystems often undergo significant transformations as they mature. For example, telephony, the Internet, and PCs all started with a single provider, but in the United States each is now served by a competitive market that uses comprehensive and universal technology standards to provide compatibility. This white paper presents our view on how the cloud ecosystem, barely over fifteen years old, could evolve as it matures.
This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pre-training and shows promising scaling behavior.
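The random masking step that drives this pretext task is easy to sketch; the following is a minimal version of per-sample random patch selection (the published implementation differs in details such as target normalization):

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """Keep a random subset of patch tokens.
    patches: [B, N, D] -> visible tokens [B, N_keep, D] plus the inverse
    permutation needed to restore patch order for the decoder."""
    B, N, D = patches.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N, device=patches.device)  # one score per patch
    ids_shuffle = noise.argsort(dim=1)               # low scores are kept
    ids_restore = ids_shuffle.argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(
        patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, ids_restore
```

Because the encoder processes only the visible ~25% of tokens, its cost shrinks roughly in proportion, which is a large part of where the reported training speedup comes from.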
Recently, a considerable literature has grown up around the theme of Graph Convolutional Network (GCN). How to effectively leverage the rich structural information in complex graphs, such as knowledge graphs with heterogeneous types of entities and relations, is a primary open challenge in the field. Most GCN methods are either restricted to graphs with a homogeneous type of edges (e.g., citation links only), or focused on representation learning for nodes only, rather than jointly propagating and updating the embeddings of both nodes and edges for target-driven objectives. This paper addresses these limitations by proposing a novel framework, namely the Knowledge Embedding based Graph Convolutional Network (KE-GCN), which combines the power of GCNs in graph-based belief propagation and the strengths of advanced knowledge embedding (a.k.a. knowledge graph embedding) methods, and goes beyond. Our theoretical analysis shows that KE-GCN offers an elegant unification of several well-known GCN methods as specific cases, with a new perspective of graph convolution. Experimental results on benchmark datasets show the advantageous performance of KE-GCN over strong baseline methods in the tasks of knowledge graph alignment and entity classification.
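A schematic of what jointly propagating node and edge embeddings can look like is given below; the TransE-style interaction and the specific update rules are illustrative assumptions, not KE-GCN's actual formulation:

```python
import torch
import torch.nn as nn

class JointLayer(nn.Module):
    """One layer that updates node and edge embeddings together."""
    def __init__(self, dim):
        super().__init__()
        self.w_node = nn.Linear(dim, dim)
        self.w_edge = nn.Linear(dim, dim)

    def forward(self, x, e, src, dst):
        # x: [N, d] node embeddings; e: [E, d] edge (relation) embeddings;
        # src, dst: [E] endpoint indices of each edge.
        msg = self.w_node(x[src] + e)               # TransE-style message h + r
        out_x = torch.zeros_like(x).index_add_(0, dst, msg)
        out_e = self.w_edge(e + (x[dst] - x[src]))  # nudge r toward t - h
        return torch.relu(out_x), torch.relu(out_e)
```

The point of the joint update is that relations are no longer static edge labels: they receive gradients from the same target-driven objective as the nodes.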
This paper presents a new approach for assembling graph neural networks based on framelet transforms, which provide a multi-scale representation for graph-structured data. With the framelet system, we can decompose the graph feature into low-pass and high-pass frequencies as extracted features for network training, which then defines a framelet-based graph convolution. The framelet decomposition naturally induces a graph pooling strategy by aggregating the graph feature into low-pass and high-pass spectra, which considers both the feature values and geometry of the graph data and conserves the total information. The graph neural networks with the proposed framelet convolution and pooling achieve state-of-the-art performance in many types of node and graph prediction tasks. Moreover, we propose shrinkage as a new activation for the framelet convolution, which thresholds the high-frequency information at different scales. Compared to ReLU, shrinkage in framelet convolution improves the graph neural network model in terms of denoising and signal compression: noise in both node features and graph structure can be significantly reduced by accurately cutting off the high-pass coefficients from the framelet decomposition, and the signal can be compressed to less than half its original size with the prediction performance well preserved.
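The shrinkage activation is essentially soft thresholding applied to the high-pass framelet coefficients; a minimal sketch, with a single fixed threshold assumed in place of the paper's per-scale treatment:

```python
import torch

def shrinkage(x, thresh=0.1):
    """Soft-thresholding activation: zero out coefficients with
    |x| <= thresh and shrink the rest toward zero by thresh. Applied to
    high-pass framelet coefficients, small (noise-dominated) responses
    are removed entirely."""
    return torch.sign(x) * torch.clamp(x.abs() - thresh, min=0.0)
```

Unlike ReLU, which zeroes all negative responses regardless of magnitude, soft thresholding zeroes small coefficients of either sign, which is what makes it behave like a denoiser on the high-frequency channels.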
In this paper, we propose Latent Relation Language Models (LRLMs), a class of language models that parameterizes the joint distribution over the words in a document and the entities that occur therein via knowledge graph relations. This model has a number of attractive properties: it not only improves language modeling performance, but is also able to annotate the posterior probability of entity spans for a given text through relations. Experiments demonstrate empirical improvements over both a word-based baseline language model and a previous approach that incorporates knowledge graph information. Qualitative analysis further demonstrates the proposed model's ability to learn to predict appropriate relations in context.
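At a schematic level, the latent-variable formulation can be read as a per-span mixture between generating from the word-level model and copying an entity span through a KG relation; the function below is an illustrative sketch under assumed inputs, not the paper's dynamic-programming marginalization:

```python
import torch

def span_logprob(p_switch, word_lp, rel_prior, span_lp):
    """log p(span) = log[(1 - s) * p_word(span) + s * sum_r p(r) p(span | r)].
    `p_switch` (s) is the probability of using a KG relation, `word_lp` the
    word-model log-probability of the span, `rel_prior` a [R] distribution
    over candidate relations, and `span_lp` the [R] log-likelihoods of the
    span under each relation. All names here are illustrative."""
    p_switch = torch.as_tensor(p_switch)
    rel_term = torch.logsumexp(torch.log(rel_prior) + span_lp, dim=-1)
    return torch.logaddexp(torch.log1p(-p_switch) + word_lp,
                           torch.log(p_switch) + rel_term)
```

Inverting this mixture with Bayes' rule is what yields the posterior probability that a given span was produced by a particular relation, i.e. the entity-span annotations mentioned above.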
In this paper, we present an accurate and scalable approach to the face clustering task. We aim at grouping a set of faces by their potential identities. We formulate this task as a link prediction problem: a link exists between two faces if they are of the same identity. The key idea is that the local context in the feature space around an instance (face) contains rich information about the linkage relationship between this instance and its neighbors. By constructing sub-graphs around each instance as input data, which depict the local context, we utilize the graph convolution network (GCN) to perform reasoning and infer the likelihood of linkage between pairs in the sub-graphs. Experiments show that our method is more robust to the complex distribution of faces than conventional methods, yields results favorably comparable to state-of-the-art methods on standard face clustering benchmarks, and is scalable to large datasets. Furthermore, we show that the proposed method does not need the number of clusters as a prior, is aware of noises and outliers, and can be extended to a multi-view version for more accurate clustering.
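The sub-graph construction step might look roughly as follows; the neighbor count, similarity threshold, and pivot-relative features are assumptions for illustration, not the authors' exact pipeline:

```python
import numpy as np

def build_subgraph(feats, pivot, k=20, edge_thresh=0.5):
    """Build the local-context sub-graph around one instance.
    feats: [N, d] L2-normalized face embeddings; pivot: index of the
    instance. Returns pivot-relative node features, an adjacency matrix
    linking similar neighbors, and the neighbor indices."""
    sims = feats @ feats[pivot]                 # cosine similarity to pivot
    nbrs = np.argsort(-sims)[1:k + 1]           # k nearest, skipping pivot
    nodes = feats[nbrs] - feats[pivot]          # pivot-relative features
    local = feats[nbrs] @ feats[nbrs].T         # pairwise similarities
    adj = (local > edge_thresh).astype(np.float32)  # hypothetical threshold
    np.fill_diagonal(adj, 0.0)
    return nodes, adj, nbrs
```

A GCN run on each such sub-graph then scores the likelihood that the pivot links to each of its neighbors, and clusters emerge by merging high-confidence links.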
Inferring missing links in knowledge graphs (KG) has attracted a lot of attention from the research community. In this paper, we tackle a practical query answering task involving predicting the relation of a given entity pair. We frame this prediction problem as an inference problem in a probabilistic graphical model and aim at resolving it from a variational inference perspective. In order to model the relation between the query entity pair, we assume that there exists an underlying latent variable (paths connecting the two nodes) in the KG, which carries the equivalent semantics of their relation. However, due to the intractability of enumerating connections in large KGs, we propose to use variational inference to maximize the evidence lower bound. More specifically, our framework (\textsc{Diva}) is composed of three modules, i.e. a posterior approximator, a prior (path finder), and a likelihood (path reasoner). By using variational inference, we are able to incorporate them closely into a unified architecture and jointly optimize them to perform KG reasoning. With active interactions among these sub-modules, \textsc{Diva} is better at handling noise and coping with more complex reasoning scenarios. To evaluate our method, we conduct link prediction experiments on multiple datasets and achieve state-of-the-art performance.
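The objective tying the three modules together is the usual evidence lower bound over sampled paths; below is a Monte-Carlo schematic, not the paper's full training procedure (which must also handle gradients through discrete path sampling):

```python
import torch

def elbo(log_likelihood, log_prior, log_posterior):
    """ELBO estimate for log p(r | e1, e2) from paths sampled from the
    posterior approximator q(path | r, e1, e2):
      E_q[ log p(r | path) + log p(path | e1, e2) - log q(path | r, e1, e2) ]
    `log_likelihood` comes from the path reasoner, `log_prior` from the
    path finder, `log_posterior` from the approximator; each is a
    [num_samples] tensor scored on the same sampled paths."""
    return (log_likelihood + log_prior - log_posterior).mean()
```

Maximizing this bound jointly trains the path finder to propose paths that the reasoner can classify correctly, which is the interaction among sub-modules credited above for robustness to noise.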