Cryptographic signatures can be used to increase the resilience of distributed systems against adversarial attacks, by increasing the number of faulty parties that can be tolerated. While this is well-studied for consensus, it has been underexplored in the context of fault-tolerant clock synchronization, even in fully connected systems. Here, the honest parties of an $n$-node system are required to compute output clocks of small skew (i.e., maximum phase offset) despite local clock rates varying between $1$ and $\vartheta>1$, end-to-end communication delays varying between $d-u$ and $d$, and interference from malicious parties. So far, it is only known that clock pulses of skew $d$ can be generated with (trivially optimal) resilience of $\lceil n/2\rceil-1$ (PODC '19), improving over the tight bound of $\lceil n/3\rceil-1$ that holds without signatures for \emph{any} skew bound (STOC '84, PODC '85). Since typically $d\gg u$ and $\vartheta-1\ll 1$, this is far from the lower bound of $u+(\vartheta-1)d$ that applies even in the fault-free case (IPL '01). We prove matching upper and lower bounds of $\Theta(u+(\vartheta-1)d)$ on the skew for the resilience range from $\lceil n/3\rceil$ to $\lceil n/2\rceil-1$. The algorithm establishing the upper bound is deterministic, under the assumption that the adversary cannot forge signatures. The lower bound holds even if clocks are initially perfectly synchronized, message delays between honest nodes are known, $\vartheta$ is arbitrarily close to one, and the synchronization algorithm is randomized. This has crucial implications for network designers who seek to leverage signatures to provide more robust time. In contrast to the setting without signatures, they must ensure that an attacker cannot easily bypass the lower bound on the delay on links with a faulty endpoint.
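To see how large the gap between the two skew bounds is in practice, the following back-of-the-envelope comparison plugs in illustrative parameter values for $d$, $u$, and $\vartheta$ (our choices, not taken from the abstract):

```python
# Illustrative comparison of the two skew bounds discussed above.
# Parameter values are hypothetical, not taken from the paper.
d = 1e-3        # maximum end-to-end delay: 1 ms
u = 1e-6        # delay uncertainty: 1 us
theta = 1.0001  # hardware clock rates vary between 1 and theta

skew_previous = d                  # skew d at resilience ceil(n/2)-1 (PODC '19)
skew_new = u + (theta - 1) * d     # Theta(u + (theta-1)d) shown in this work

print(f"previous bound ~ d            = {skew_previous:.2e} s")
print(f"new bound      ~ u+(theta-1)d = {skew_new:.2e} s")
# With these numbers the new bound is roughly 900x smaller.
```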
In this work, we study the problem of community detection in the stochastic block model with adversarial node corruptions. Our main result is an efficient algorithm that can tolerate an $\epsilon$-fraction of corruptions and achieves error $O(\epsilon) + e^{-\frac{C}{2} (1 \pm o(1))}$ where $C = (\sqrt{a} - \sqrt{b})^2$ is the signal-to-noise ratio and $a/n$ and $b/n$ are the intra-community and inter-community connection probabilities, respectively. These bounds essentially match the minimax rates for the SBM without corruptions. We also give robust algorithms for $\mathbb{Z}_2$-synchronization. At the heart of our algorithm is a new semidefinite program that uses global information to robustly boost the accuracy of a rough clustering. Moreover, we show that our algorithms are doubly-robust in the sense that they work in an even more challenging noise model that mixes adversarial corruptions with unbounded monotone changes from the semi-random model.
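As context for the boosting step, the sketch below generates a two-community SBM with hypothetical parameters $a$ and $b$ and computes a rough spectral clustering of the kind the semidefinite program is said to refine; the SDP itself is not reproduced here.

```python
# Minimal sketch: generate a two-community SBM and form a rough spectral
# clustering, the kind of starting point the paper's SDP would boost.
import numpy as np

rng = np.random.default_rng(0)
n, a, b = 2000, 40.0, 10.0               # hypothetical; C = (sqrt(a)-sqrt(b))^2
labels = rng.integers(0, 2, n) * 2 - 1   # +/-1 community labels

P = np.where(np.equal.outer(labels, labels), a / n, b / n)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T           # symmetric adjacency, no self-loops

# Rough clustering: sign of the top eigenvector of the centered adjacency
B = A - (a + b) / (2 * n)                # subtract the average edge density
vals, vecs = np.linalg.eigh(B)
guess = np.sign(vecs[:, -1])
err = min(np.mean(guess != labels), np.mean(guess != -labels))
print(f"C = {(np.sqrt(a) - np.sqrt(b))**2:.2f}, rough clustering error = {err:.3f}")
```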
The challenge of simulating random variables is a central problem in Statistics and Machine Learning. Given a tractable proposal distribution $P$, from which we can draw exact samples, and a target distribution $Q$ which is absolutely continuous with respect to $P$, the A* sampling algorithm allows simulating exact samples from $Q$, provided we can evaluate the Radon-Nikodym derivative of $Q$ with respect to $P$. Maddison et al. originally showed that for a target distribution $Q$ and proposal distribution $P$, the runtime of A* sampling is upper bounded by $\mathcal{O}(\exp(D_{\infty}[Q\|P]))$, where $D_{\infty}[Q\|P]$ is the Rényi divergence from $Q$ to $P$. This runtime can be prohibitively large in many cases of practical interest. Here, we show that under additional assumptions on $Q$ and $P$, much faster runtimes are achievable. Specifically, we show that if $Q$ and $P$ are distributions on $\mathbb{R}$ and their Radon-Nikodym derivative is unimodal, the runtime of A* sampling is $\mathcal{O}(D_{\infty}[Q\|P])$, which is exponentially faster than the assumption-free bound.
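For reference, the order-infinity Rényi divergence that governs both runtime bounds is the logarithm of the essential supremum of the density ratio, so the two bounds compare as follows:
\[
D_{\infty}[Q\|P] \;=\; \log\, \mathop{\mathrm{ess\,sup}}_{x \sim P} \frac{dQ}{dP}(x),
\qquad
\mathcal{O}\!\left(e^{D_{\infty}[Q\|P]}\right)\ \text{(general)}
\quad \text{vs.} \quad
\mathcal{O}\!\left(D_{\infty}[Q\|P]\right)\ \text{(unimodal } dQ/dP\text{)}.
\]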
ASIC hash engines are specifically optimized for parallel computation of cryptographic hashes and are thus a natural platform for mounting brute-force attacks on hash functions. Two fundamental advantages of ASICs over general-purpose computers are their area advantage and their energy efficiency. Memory-hard functions approach the problem by reducing the area advantage of ASICs over general-purpose computers. Traditionally, memory-hard functions have been analyzed in the (parallel) random oracle model. However, as the memory-hard security game is multi-stage, indifferentiability does not apply, and instantiating the random oracle becomes a non-trivial problem. Chen and Tessaro (CRYPTO 2019) considered this issue and showed how random oracles should be instantiated in the context of memory-hard functions. Bandwidth-hard functions, introduced by Ren and Devadas (TCC 2017), aim to provide ASIC resistance by reducing the energy advantage of ASICs. In particular, bandwidth-hard functions provide ASIC resistance by guaranteeing a high runtime energy cost if the available cache is not large enough. Previously, bandwidth-hard functions have been analyzed in the parallel random oracle model. In this work, we show how those random oracles can be instantiated using random permutations in the context of bandwidth-hard functions. Our results are generic and valid for any hard-to-pebble graph.
One-way-broadcast-based flooding time synchronization algorithms are commonly used in wireless sensor networks (WSNs). However, packet delay and clock drift pose challenges to their accuracy, as they entail serious by-hop error accumulation in WSNs. To overcome these problems, a rapid-flooding multi-broadcast time synchronization with real-time delay compensation (RDC-RMTS) is proposed in this paper. By using a rapid-flooding protocol, the flooding latency of the reference time information is significantly reduced in the RDC-RMTS. In addition, a new joint clock skew-offset maximum likelihood estimation is developed to obtain accurate clock parameter estimates and a real-time packet delay estimate. Moreover, an innovative implementation of the RDC-RMTS is designed with an adaptive clock offset estimation. The experimental results indicate that the RDC-RMTS can effectively reduce the variable delay and significantly slow the growth of by-hop error accumulation. Thus, the proposed RDC-RMTS can achieve accurate time synchronization in large-scale complex WSNs.
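To illustrate the estimation principle (not the RDC-RMTS protocol itself), the following sketch fits a joint skew/offset model to noisy timestamps; under i.i.d. Gaussian delay noise, the maximum likelihood estimate reduces to ordinary least squares. All parameter values are hypothetical.

```python
# Sketch of joint clock skew/offset estimation from timestamped beacons.
# Model: t_ref ~= skew * t_local + offset, with Gaussian packet-delay noise,
# so the ML estimate of (skew, offset) is the least-squares fit.
import numpy as np

rng = np.random.default_rng(1)
true_skew, true_offset = 1.00002, 0.5e-3       # hypothetical drift and offset
t_local = np.linspace(0.0, 10.0, 50)           # local timestamps of 50 beacons
noise = rng.normal(0.0, 20e-6, t_local.size)   # 20 us delay jitter
t_ref = true_skew * t_local + true_offset + noise

A = np.column_stack([t_local, np.ones_like(t_local)])
(skew, offset), *_ = np.linalg.lstsq(A, t_ref, rcond=None)
print(f"skew error:   {abs(skew - true_skew):.2e}")
print(f"offset error: {abs(offset - true_offset):.2e} s")
```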
While neural networks have been remarkably successful in a wide array of applications, implementing them in resource-constrained hardware remains an area of intense research. By replacing the weights of a neural network with quantized (e.g., 4-bit or binary) counterparts, massive savings in computation cost, memory, and power consumption can be attained. To that end, we generalize a post-training neural-network quantization method, GPFQ, that is based on a greedy path-following mechanism. Among other things, we propose modifications to promote sparsity of the weights and rigorously analyze the associated error. Additionally, our error analysis expands the results of previous work on GPFQ to handle general quantization alphabets, showing that for quantizing a single-layer network, the relative square error essentially decays linearly in the number of weights, i.e., the level of over-parametrization. Our result holds across a range of input distributions and for both fully connected and convolutional architectures, thereby also extending previous results. To empirically evaluate the method, we quantize several common architectures with few bits per weight and test them on ImageNet, observing only minor loss of accuracy compared to unquantized models. We also demonstrate that standard modifications, such as bias correction and mixed-precision quantization, further improve accuracy.
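A minimal sketch of a GPFQ-style greedy path-following quantizer for a single neuron follows; the quantization alphabet, data dimensions, and scaling are illustrative assumptions, not the paper's exact setup.

```python
# Greedy path-following quantization of one neuron's weights: each weight
# is quantized in turn so that the running pre-activation error u stays small.
import numpy as np

def gpfq_quantize(w, X, alphabet):
    """Quantize weight vector w (length N) given input data X (m x N)."""
    u = np.zeros(X.shape[0])          # running residual of pre-activations
    q = np.zeros_like(w)
    for t in range(len(w)):
        x_t = X[:, t]
        # Alphabet element best matching the accumulated signal direction.
        target = np.dot(x_t, u + w[t] * x_t) / np.dot(x_t, x_t)
        q[t] = alphabet[np.argmin(np.abs(alphabet - target))]
        u += (w[t] - q[t]) * x_t      # carry the quantization error forward
    return q

rng = np.random.default_rng(0)
m, N = 512, 256
X = rng.normal(size=(m, N))
w = rng.normal(size=N) / np.sqrt(N)
alphabet = np.arange(-8, 9) * (2 * np.abs(w).max() / 8)   # 17-level grid
q = gpfq_quantize(w, X, alphabet)
rel_err = np.linalg.norm(X @ (w - q))**2 / np.linalg.norm(X @ w)**2
print(f"relative square error: {rel_err:.2e}")
```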
To mitigate the imbalance in the number of assignees in the Hospitals/Residents problem, Goko et al. [Goko et al., Maximally Satisfying Lower Quotas in the Hospitals/Residents Problem with Ties, Proc. STACS 2022, pp. 31:1--31:20] studied the Hospitals/Residents problem with lower quotas, whose goal is to find a stable matching that satisfies the lower quotas as much as possible. In their paper, preference lists are assumed to be complete, that is, the preference list of each resident (resp., hospital) is assumed to contain all the hospitals (resp., residents). In this paper, we study a more general model where preference lists may be incomplete. For four natural scenarios, we obtain maximum gaps between the best and worst solutions, approximability results, and inapproximability results.
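For readers unfamiliar with the model, the sketch below implements the standard resident-proposing deferred acceptance procedure for the basic Hospitals/Residents problem with incomplete lists and upper quotas only; the lower quotas that are the focus of the paper are not handled here.

```python
# Resident-proposing deferred acceptance with incomplete preference lists
# and upper quotas. This fixes the underlying stability notion only; it does
# NOT address lower quotas, which are the paper's subject.
def deferred_acceptance(res_prefs, hosp_prefs, quotas):
    rank = {h: {r: i for i, r in enumerate(p)} for h, p in hosp_prefs.items()}
    assigned = {h: [] for h in hosp_prefs}
    nxt = {r: 0 for r in res_prefs}           # next hospital r proposes to
    free = list(res_prefs)
    while free:
        r = free.pop()
        if nxt[r] >= len(res_prefs[r]):
            continue                          # r exhausted their list: unmatched
        h = res_prefs[r][nxt[r]]
        nxt[r] += 1
        if r not in rank[h]:                  # h finds r unacceptable
            free.append(r)
            continue
        assigned[h].append(r)
        assigned[h].sort(key=rank[h].get)     # keep h's residents in h's order
        if len(assigned[h]) > quotas[h]:
            free.append(assigned[h].pop())    # evict h's least preferred
    return assigned

res = {"r1": ["h1", "h2"], "r2": ["h1"], "r3": ["h1", "h2"]}
hos = {"h1": ["r2", "r1", "r3"], "h2": ["r1", "r3"]}
print(deferred_acceptance(res, hos, {"h1": 1, "h2": 2}))
# {'h1': ['r2'], 'h2': ['r1', 'r3']}
```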
The UK, particularly London, is a global hub for money laundering, a significant portion of which uses domestic property. However, understanding the distribution and characteristics of offshore-owned domestic property in the UK is challenging due to limited data availability. This paper attempts to remedy that situation by enhancing a publicly available dataset of UK property owned by offshore companies. We create a data processing pipeline that draws on several datasets and machine learning techniques to produce a parsed set of addresses classified into six use classes. The enhanced dataset contains 138,000 properties, 44,000 more than the original dataset. The majority are domestic (95k), with a disproportionate share in London (42k). The average offshore domestic property in London is worth 1.33 million GBP; collectively, this amounts to approximately 56 billion GBP. We perform an in-depth analysis of offshore domestic property in London, comparing its price, distribution, and entropy/concentration with Airbnb property, low-use/empty property, and conventional domestic property. We estimate that the total number of offshore, low-use, and Airbnb properties in London is between 144,000 and 164,000, and that they are collectively worth between 145 and 174 billion GBP. Furthermore, offshore domestic property is more expensive and has higher entropy/concentration than all other property types. In addition, we identify two different types of offshore property, nested and individual, which have different price and distribution characteristics. Finally, we release the enhanced offshore property dataset, the complete low-use London dataset, and the pipeline for creating the enhanced dataset, to reduce the barriers to studying this topic.
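The sketch below shows the general shape of an address use-class classifier of the kind such a pipeline might employ; the class labels, features, and training examples are placeholders rather than the paper's actual pipeline.

```python
# Illustrative address use-class classifier. Labels and training data are
# hypothetical stand-ins, not the paper's six classes or its real pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_addresses = ["flat 3, 12 example road, london",
                   "unit 7, sample trading estate, leeds"]
train_labels = ["domestic", "industrial"]       # placeholder use classes

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # robust to typos
    LogisticRegression(max_iter=1000),
)
model.fit(train_addresses, train_labels)
print(model.predict(["apartment 9, 4 made-up street, manchester"]))
```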
Suppose we are given an integer $k \leq n$ and $n$ boxes labeled $1,\ldots, n$ by an adversary, each containing a number chosen from an unknown distribution. We have to choose an order in which to sequentially open these boxes, and each time we open the next box in this order, we learn its number. If we reject the number in a box, the box cannot be recalled. Our goal is to accept the $k$ largest of these numbers, without necessarily opening all boxes. This is the free order multiple-choice secretary problem. Free order variants have been studied extensively for the secretary and prophet problems. Kesselheim, Kleinberg, and Niazadeh (KKN, STOC'15) initiated the study of randomness-efficient algorithms (with the cheapest order in terms of used random bits) for free order secretary problems. We present an algorithm for the free order multiple-choice secretary problem that is simultaneously optimal in its competitive ratio and in the amount of randomness it uses. That is, we construct a distribution on orders with optimal entropy $\Theta(\log\log n)$ such that a deterministic multiple-threshold algorithm is $1-O(\sqrt{\log k/k})$-competitive. This improves on the previous best construction of KKN, whose competitive ratio is $1 - O(1/k^{1/3}) - o(1)$, in three ways: our competitive ratio is (near-)optimal for the multiple-choice secretary problem; it works for exponentially larger values of the parameter $k$; and our algorithm is a simple deterministic multiple-threshold algorithm, while that of KKN is randomized. We also prove a corresponding lower bound on the entropy of optimal solutions for the multiple-choice secretary problem, matching the entropy of our algorithm; no such lower bound was previously known. We obtain our algorithmic results with a host of new techniques, and with these techniques we also significantly improve the previous results of KKN on constructing entropy-optimal distributions for the classic free order secretary problem.
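The toy sketch below shows the accept/reject mechanics of a simple threshold rule for the $k$-choice secretary problem over a uniformly random order; the paper's actual contribution, a $\Theta(\log\log n)$-entropy distribution over orders under which a deterministic multiple-threshold rule stays competitive, is not reproduced here.

```python
# Toy single-threshold rule for the k-choice secretary problem: sample half
# the boxes, set the threshold at the k-th largest sample value, then accept
# subsequent values above it (at most k). Order here is uniformly random,
# i.e., far more entropy than the paper's construction needs.
import random

def threshold_secretary(values, k, rng):
    order = list(range(len(values)))
    rng.shuffle(order)
    m = len(values) // 2
    sample = sorted((values[i] for i in order[:m]), reverse=True)
    threshold = sample[min(k, len(sample)) - 1]   # k-th largest sample value
    picked = [values[i] for i in order[m:] if values[i] > threshold]
    return picked[:k]                             # accept at most k values

rng = random.Random(0)
vals = [rng.random() for _ in range(10_000)]
picked = threshold_secretary(vals, 100, rng)
best = sum(sorted(vals, reverse=True)[:100])
print(f"competitive ratio on this run: {sum(picked) / best:.3f}")
```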
As the importance of intrusion detection and prevention systems (IDPSs) increases, great costs are incurred to manage the signatures that are generated from malicious communication pattern files. Experts in network security need to classify signatures by importance for an IDPS to work effectively. We propose and evaluate a machine learning signature classification model with a reject option (RO) to reduce the cost of setting up an IDPS. To train the proposed model, it is essential to design features that are effective for signature classification. Experts classify signatures with predefined if-then rules. An if-then rule returns a label of low, medium, high, or unknown importance based on keyword matching of the elements in the signature. Therefore, we first design two types of features, symbolic features (SFs) and keyword features (KFs), which are used in keyword matching for the if-then rules. Next, we design web information and message features (WMFs) to capture the properties of signatures that do not match the if-then rules. The WMFs are extracted as term frequency-inverse document frequency (TF-IDF) features of the message text in the signatures, obtained by web scraping the external attack identification systems referenced in the signatures. Because failure must be minimized in the classification of IDPS signatures, as in the medical field, we introduce an RO into our proposed model. The effectiveness of the proposed classification model is evaluated in experiments with two real datasets composed of signatures labeled by experts: a dataset that can be classified with if-then rules and a dataset with elements that do not match any if-then rule. In both cases, the combination of SFs and WMFs performed better than that of SFs and KFs. In addition, we performed a feature analysis.
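A minimal sketch of a classifier with a reject option follows: predictions whose top class probability falls below a confidence threshold are deferred to an expert. The example signatures, labels, and threshold are illustrative, not drawn from the datasets in the paper.

```python
# TF-IDF text classifier with a reject option: low-confidence predictions
# are handed back to a human expert. Training data is a hypothetical toy set.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

signatures = ["known trojan C2 beacon detected",
              "ICMP ping request",
              "outdated TLS version in use"]
labels = ["high", "low", "medium"]           # importance labels (toy)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(signatures, labels)

def classify_with_reject(texts, threshold=0.6):
    proba = model.predict_proba(texts)
    preds = model.classes_[np.argmax(proba, axis=1)]
    return [p if conf >= threshold else "reject (defer to expert)"
            for p, conf in zip(preds, proba.max(axis=1))]

print(classify_with_reject(["suspicious inbound scan traffic"]))
```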
Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, robust principal component analysis, phase synchronization, and joint alignment. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings.
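As a concrete instance of the two-stage recipe described above, the sketch below runs spectral initialization followed by gradient descent on the nonconvex factored objective for symmetric matrix sensing; the problem sizes and step size are illustrative choices.

```python
# Two-stage nonconvex matrix factorization on symmetric matrix sensing:
# Stage 1 is a spectral initialization, Stage 2 is plain gradient descent
# on f(U) = (1/2m) * sum_i (<A_i, U U^T> - y_i)^2.
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 50, 2, 2000
Z = rng.normal(size=(n, r))                # ground-truth low-rank factor
M = Z @ Z.T
A = rng.normal(size=(m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2         # symmetric Gaussian measurements
y = np.einsum('mij,ij->m', A, M)

# Stage 1: spectral initialization from the back-projected measurements
Y = np.einsum('m,mij->ij', y, A) / m       # E[Y] = M for this ensemble
vals, vecs = np.linalg.eigh(Y)
U = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0))

# Stage 2: gradient descent on the factored objective
eta = 0.3 / np.linalg.norm(U, 2)**2        # step size scaled by ||U||_op^2
for _ in range(200):
    resid = np.einsum('mij,ij->m', A, U @ U.T) - y
    grad = 2 * np.einsum('m,mij->ij', resid, A) @ U / m
    U -= eta * grad

print("relative error:", np.linalg.norm(U @ U.T - M) / np.linalg.norm(M))
```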