Group authentication is a method for confirming that a set of users belong to a group and for distributing a common key among them. Unlike standard authentication schemes, in which a central authority authenticates users one by one, group authentication handles the authentication process for all members of the group at once. Recently proposed group authentication algorithms mainly exploit Lagrange polynomial interpolation together with elliptic curve groups over finite fields. As a fresh approach, this work proposes the use of linear spaces for group authentication and key establishment for groups of any size. The linear-space approach reduces the computation and communication load required to establish a common shared key among the group members. These advantages make the proposed method applicable to energy- and resource-constrained devices. In addition to providing lightweight authentication and key agreement, the proposal allows any user in a group to admit a non-member as a new member, which is expected to be useful for future autonomous systems. The scheme is designed so that the sponsors of such members can easily be recognized by anyone in the group. Unlike other group authentication schemes based on Lagrange polynomial interpolation, the proposed scheme does not give adversaries a way to compromise the group secrets using only a few members' shares, and it allows non-members to be recognized easily, which prevents service interruption attacks.
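As background, the following is a minimal sketch of the Shamir-style Lagrange-interpolation building block that the prior schemes mentioned above rely on (and that the linear-space proposal replaces); the prime, threshold, and shares are illustrative only.

```python
# Minimal sketch of the Lagrange-interpolation building block used by prior
# Shamir-style group authentication schemes (not the linear-space scheme
# proposed above). The prime, threshold, and shares below are illustrative.

P = 2**127 - 1  # Mersenne prime used as the field modulus (illustrative)

def recover_secret(shares, p=P):
    """Recover f(0) from t points (x_i, f(x_i)) of a degree-(t-1) polynomial."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % p          # product of (0 - x_j)
                den = (den * (xi - xj)) % p      # product of (x_i - x_j)
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

# Example: secret 42 hidden in f(x) = 42 + 7x (threshold t = 2).
shares = [(1, (42 + 7 * 1) % P), (3, (42 + 7 * 3) % P)]
assert recover_secret(shares) == 42
```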
Inspired by branch-and-bound and cutting-plane proofs in mixed-integer optimization and proof complexity, we develop a general approach via Hoffman's Helly systems. This helps to distill the main ideas behind optimality and infeasibility certificates in optimization. The first part of the paper formalizes the notion of a certificate and its size in this general setting. The second part establishes lower and upper bounds on the sizes of these certificates in various settings. We show that some important techniques from the literature are purely combinatorial in nature and do not depend on any underlying geometric notions.
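As a textbook illustration of a single cutting-plane certificate step (the inequalities are illustrative and not taken from the paper): from the valid inequalities $2x_1 + x_2 \le 2$ and $x_2 \le 1$, the nonnegative combination with weights $\tfrac12, \tfrac12$ yields $x_1 + x_2 \le 1.5$, and since the left-hand side is integral on integer points, the right-hand side may be rounded down:
\[
x_1 + x_2 \le \lfloor 1.5 \rfloor = 1 \quad \text{for all } (x_1, x_2) \in \mathbb{Z}^2 \text{ with } 2x_1 + x_2 \le 2,\ x_2 \le 1.
\]
Each such line, recorded with its multipliers, is one step of a cutting-plane proof whose total length and coefficient sizes determine the certificate size.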
Compared to the situation for graphs, combinatorial methods for the isomorphism problem of finite groups are less developed than algebraic ones. To investigate the descriptive complexity of finite groups and the group isomorphism problem, we define the Weisfeiler-Leman algorithm for groups. In fact, we define three versions of the algorithm. In contrast to graphs, where the three analogous versions readily agree, the situation for groups is more intricate; nevertheless, we show that the expressive power of the three versions is linearly related. We also give descriptions in terms of counting logics and bijective pebble games for each of the versions. In order to construct examples of groups, we devise a transformation from graphs to groups that preserves both isomorphism and non-isomorphism. Using graphs of high Weisfeiler-Leman dimension, we construct highly similar but non-isomorphic groups with equal $\Theta(\sqrt{\log n})$-subgroup-profiles, which nevertheless have Weisfeiler-Leman dimension 3. These groups are nilpotent of class 2 and exponent $p$; they agree in many combinatorial properties, such as the combinatorics of their conjugacy classes, and have highly similar commuting graphs. The results indicate that the Weisfeiler-Leman algorithm can be more effective in distinguishing groups than in distinguishing graphs based on similar combinatorial constructions.
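For orientation, here is a minimal sketch of classical colour refinement (1-dimensional Weisfeiler-Leman) on a graph, the procedure whose group analogues the paper develops; the example graph and the representation of colours are illustrative choices.

```python
# Minimal sketch of classical 1-dimensional Weisfeiler-Leman (colour refinement)
# on a graph; the example graph is illustrative only.
from collections import Counter

def wl_refinement(adj):
    """adj: dict mapping each vertex to its set of neighbours."""
    colours = {v: 0 for v in adj}                      # start with a uniform colouring
    while True:
        # new colour = old colour plus multiset of neighbours' old colours
        signatures = {v: (colours[v],
                          tuple(sorted(Counter(colours[u] for u in adj[v]).items())))
                      for v in adj}
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colours = {v: relabel[signatures[v]] for v in adj}
        if new_colours == colours:                     # stable partition reached
            return colours
        colours = new_colours

# A path on 4 vertices: endpoints and interior vertices end up with different colours.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(wl_refinement(path))   # {0: 0, 1: 1, 2: 1, 3: 0}
```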
Owing to the asymptotic oracle property, non-convex penalties, represented by the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD), have attracted much attention in high-dimensional data analysis and have been widely used in signal processing, image restoration, matrix estimation, etc. However, because of their non-convex and non-smooth characteristics, they are computationally challenging. Almost all existing algorithms converge only locally, and the proper selection of initial values is crucial. Therefore, in practice, these algorithms are often combined with a warm-start technique to meet the strict requirement that the initial value be sufficiently close to the optimal solution of the corresponding problem. In this paper, based on the DC (difference of convex functions) property of the MCP and SCAD penalties, we design a global two-stage algorithm for high-dimensional least-squares linear regression problems. A key idea for making the proposed algorithm efficient is to use the primal dual active set with continuation (PDASC) method, which is equivalent to the semi-smooth Newton (SSN) method, to solve the corresponding sub-problems. Theoretically, we not only prove the global convergence of the proposed algorithm but also verify that the generated iterative sequence converges to a d-stationary point. In terms of computational performance, extensive experiments on simulated and real data show that the proposed algorithm outperforms the latest SSN method and the classic coordinate descent (CD) algorithm for solving non-convex penalized high-dimensional linear regression problems.
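To make the DC structure concrete, the following sketch shows a split of the MCP penalty as $\lambda|t|$ minus a convex (Huber-like) term, which is the kind of decomposition such two-stage algorithms build on; the parameter values are illustrative, and the SCAD penalty admits an analogous split.

```python
# Sketch of the DC (difference-of-convex) split of the MCP penalty:
# mcp(t) = lam*|t| - q_convex(t), with q_convex convex. Parameters are illustrative.
import numpy as np

def mcp(t, lam=1.0, gamma=3.0):
    """Minimax concave penalty, applied elementwise."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a**2 / (2.0 * gamma),
                    gamma * lam**2 / 2.0)

def q_convex(t, lam=1.0, gamma=3.0):
    """Convex (Huber-like) part of the split, so that mcp(t) = lam*|t| - q_convex(t)."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    a**2 / (2.0 * gamma),
                    lam * a - gamma * lam**2 / 2.0)

t = np.linspace(-5, 5, 11)
assert np.allclose(mcp(t), 1.0 * np.abs(t) - q_convex(t))
```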
The Internet of Things (IoT) comprises a heterogeneous mix of smart devices that vary widely in size, usage, energy capacity, computational power, etc. IoT devices are typically connected to the Cloud via Fog nodes for fast processing and response times. In the rush to deploy devices quickly into the real world and to maximize market share, security is often treated as an afterthought by the manufacturers of such devices. Some well-known security concerns of the IoT are data confidentiality, device authentication, location privacy, device integrity, etc. We believe that the majority of security schemes proposed to date are too heavyweight to be of practical value for the IoT. In this paper we propose a lightweight encryption scheme loosely based on the classic one-time pad, and we make use of hash functions for the generation and management of keys. Our scheme imposes minimal computational and storage requirements on the network nodes, which makes it a viable candidate for the encryption of data transmitted by IoT devices in the Fog.
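The following is a generic sketch of the ingredient described above: deriving a keystream from a shared seed via a hash chain and XOR-ing it with the plaintext in one-time-pad style. It is an illustration of the idea, not the authors' exact protocol; seed distribution, freshness, and authentication are out of scope.

```python
# Generic illustration (not the authors' exact protocol) of hash-based keystream
# generation combined with XOR encryption.
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    """Expand a shared seed into `length` pseudo-random bytes via a hash chain."""
    out, block, counter = b"", seed, 0
    while len(out) < length:
        block = hashlib.sha256(block + counter.to_bytes(4, "big")).digest()
        out += block
        counter += 1
    return out[:length]

def xor_encrypt(plaintext: bytes, seed: bytes) -> bytes:
    """One-time-pad-style encryption; decryption is the same operation."""
    ks = keystream(seed, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

msg = b"sensor reading: 21.5C"
seed = b"pre-shared secret"          # illustrative; a seed must never be reused in practice
ct = xor_encrypt(msg, seed)
assert xor_encrypt(ct, seed) == msg
```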
Queue length violation probability, i.e., the tail distribution of the queue length, is a widely used statistical quality-of-service (QoS) metric in wireless communications. Characterizing and optimizing the queue length violation probability are of great significance for time-sensitive networking (TSN) and ultra-reliable low-latency communications (URLLC), yet this remains an open problem. In this paper, we focus on analyzing the tail distribution of the queue length from the perspective of cross-layer design in wireless link transmission. We find that, under a finite average power consumption constraint, the queue length violation probability can reach zero with diversity gains, while it exhibits a linearly decaying rate exponent, in the sense of large deviation theory (LDT), when receiver sensitivity is limited. Moreover, we find that a queue length tail distribution with an arbitrary decay rate is achievable under finite average power consumption in the Rayleigh fading channel. We then generalize the sufficient conditions under which a communication system belongs to each of these three scenarios. Finally, we apply these results to analyze wireless link transmission in the Nakagami-m fading channel. Numerical results, together with approximations, validate our analysis.
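For context, the linear-decay-rate regime mentioned above is the standard large-deviation form of the queue tail (a textbook statement, not a result specific to this paper):
\[
\lim_{q \to \infty} \frac{1}{q} \ln \Pr\{Q > q\} = -\theta,
\]
for some QoS exponent $\theta > 0$, i.e., $\Pr\{Q > q\} \approx e^{-\theta q}$ for large queue thresholds $q$.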
We present an implicit Split-Step explicit Euler type Method (dubbed SSM) for the simulation of McKean-Vlasov stochastic differential equations (MV-SDEs) with drifts that are of superlinear growth in space and Lipschitz in measure, and with non-constant Lipschitz diffusion coefficients. The scheme is designed to leverage the structure induced by the interacting particle approximation system, including parallel implementation and the solvability of the implicit equation. The scheme attains the classical $1/2$ root mean square error (rMSE) convergence rate in the stepsize and closes the gap left by [18, "Simulation of McKean-Vlasov SDEs with super-linear growth", IMA Journal of Numerical Analysis, 01 2021, draa099] regarding efficient implicit methods and their convergence rates for this class of McKean-Vlasov SDEs. A sufficient condition for mean-square contractivity of the scheme is presented. Several numerical examples are given, including a comparative analysis with other known algorithms for this class (taming and adaptive time-stepping) across parallel and non-parallel implementations.
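Below is a simplified sketch of a split-step interacting particle scheme in the spirit of the SSM. The drift, the constant diffusion coefficient, the Newton solver, and the choice of which empirical statistic enters each stage are illustrative simplifications, not the paper's exact method.

```python
# Simplified split-step particle scheme for an MV-SDE (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N, T, steps = 2000, 1.0, 200          # particles, horizon, time steps
h = T / steps
X = rng.normal(size=N)                # initial particles

# Drift b(y, m) = -y^3 + (m - y): superlinear in space plus mean-field attraction
# towards the empirical mean m.
for _ in range(steps):
    m = X.mean()                      # empirical-measure statistic (here: the mean)
    # Implicit stage: solve y = x + h * b(y, m) for each particle with Newton's method.
    Y = X.copy()
    for _ in range(20):
        f = Y + h * (Y**3 + Y) - X - h * m
        fp = 1.0 + h * (3.0 * Y**2 + 1.0)
        Y -= f / fp
    # Explicit diffusion stage with a constant sigma (illustrative choice).
    X = Y + 0.5 * np.sqrt(h) * rng.normal(size=N)

print("terminal mean/std:", X.mean(), X.std())
```

The implicit stage decouples across particles, which is what makes a parallel implementation natural for schemes of this type.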
Ensuring and validating the safe operation of automated vehicles are key challenges for their market launch. Scenario-based development and test approaches are currently being pursued as possible solutions. An essential prerequisite for researching, applying, and standardizing these approaches is a consistent and agreed-upon taxonomy. Such a taxonomy must include the relevant terms, their descriptions, and the relationships between the respective terms. To the best of our knowledge, such a taxonomy does not yet exist, which often leads to misunderstandings, for example, in coordination processes and discussions. This publication contributes to such a taxonomy. For this purpose, we propose a framework for structuring the taxonomy. Within this framework, we propose a basic vocabulary by identifying and describing terms that we consider particularly relevant for an overview of scenario-based development and test approaches. Additionally, we visualize the proposed terms and the relationships between them with UML diagrams and explain the application of the proposed basic vocabulary with an example.
Streaming is a model in which an input graph is provided one edge at a time, rather than being available for inspection at will. In this work, we take a parameterized approach by assuming that a vertex cover of the graph is given, building on the work of Bishnu et al. [COCOON 2020]. We show the further potency of combining this parameter with the Adjacency List streaming model to obtain results for vertex deletion problems. This includes kernels, parameterized algorithms, and lower bounds for $\Pi$-free Deletion and $H$-free Deletion, as well as the more specific Cluster Vertex Deletion and Odd Cycle Transversal. We focus on the complexity in terms of the number of passes over the input stream and the memory used. This leads to a pass/memory trade-off, where a different algorithm might be favourable depending on the context and instance. We also discuss implications for parameterized complexity in the non-streaming setting.
Solutions of certain partial differential equations (PDEs) are often represented by the steepest descent curves of corresponding functionals. The minimizing movement scheme was developed in order to study such curves in metric spaces. In particular, Jordan, Kinderlehrer, and Otto studied the Fokker-Planck equation in this way with respect to the Wasserstein metric. In this paper, we propose a deep learning-based minimizing movement scheme for approximating the solutions of PDEs. The proposed method is highly scalable for high-dimensional problems, as it is free of mesh generation. We demonstrate through various numerical examples that the proposed method accurately approximates the solutions of PDEs by finding the steepest descent direction of a functional, even in high dimensions.
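For reference, the minimizing movement (JKO) step underlying this kind of approach takes the standard form
\[
\rho_{k+1} \in \operatorname*{arg\,min}_{\rho} \left\{ \frac{1}{2\tau} W_2^2(\rho, \rho_k) + F(\rho) \right\},
\]
where $W_2$ denotes the Wasserstein-2 distance, $\tau > 0$ is the time step, and $F$ is the driving functional; for the Fokker-Planck equation, $F$ is the associated free energy, as in the Jordan-Kinderlehrer-Otto scheme.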
A* is a classic and popular method for graph search and path finding. It assumes the existence of a heuristic function $h(u,t)$ that estimates the shortest distance from any input node $u$ to the destination $t$. Traditionally, heuristics have been handcrafted by domain experts. However, over the last few years, there has been a growing interest in learning heuristic functions. Such learned heuristics estimate the distance between given nodes based on "features" of those nodes. In this paper we formalize and initiate the study of such feature-based heuristics. In particular, we consider heuristics induced by norm embeddings and distance labeling schemes, and provide lower bounds on the tradeoffs between the number of dimensions or bits used to represent each graph node and the running time of the A* algorithm. We also show that, under natural assumptions, our lower bounds are almost optimal.
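The following is a minimal sketch of A* driven by a feature-based heuristic, here the Euclidean distance between (hypothetical) node embeddings; the graph, edge weights, and embeddings are illustrative.

```python
# Minimal A* with a feature-based heuristic: each node carries a vector of
# "features" (here a 2-D embedding) and h(u, t) is the Euclidean distance
# between the embeddings of u and the target t.
import heapq, math

def astar(adj, emb, s, t):
    """adj: {u: [(v, weight), ...]}, emb: {u: (x, y)}; returns dist(s, t)."""
    h = lambda u: math.dist(emb[u], emb[t])          # feature-based heuristic
    dist = {s: 0.0}
    pq = [(h(s), s)]                                  # priority = g + h
    done = set()
    while pq:
        _, u = heapq.heappop(pq)
        if u == t:
            return dist[u]
        if u in done:
            continue
        done.add(u)
        for v, w in adj[u]:
            if dist[u] + w < dist.get(v, math.inf):
                dist[v] = dist[u] + w
                heapq.heappush(pq, (dist[v] + h(v), v))
    return math.inf

# Tiny example; the embeddings are planar coordinates, so h never overestimates
# and A* returns the true shortest distance.
adj = {0: [(1, 1), (2, 1.5)], 1: [(3, 1)], 2: [(3, 1)], 3: []}
emb = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}
print(astar(adj, emb, 0, 3))   # 2.0
```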