Due to the adoption of horizontal business models following the globalization of semiconductor manufacturing, the overproduction of integrated circuits (ICs) and the piracy of intellectual properties (IPs) can cause significant damage to the integrity of the semiconductor supply chain. Logic locking has emerged as a primary design-for-security measure to counter these threats: a locked IC becomes fully functional only when unlocked with a secret key. However, Boolean satisfiability (SAT)-based attacks have rendered most locking schemes ineffective, giving rise to numerous defenses and new locking methods that aim for SAT resiliency. This paper provides a unique perspective on SAT attack efficiency based on the conjunctive normal form (CNF) clauses stored in the SAT solver. First, we show how the attack learns new relations between keys in every iteration using distinguishing input patterns and the corresponding oracle responses. Each input-output pair yields new CNF clauses over the unknown key bits that are appended to the SAT solver, which leads to an exponential reduction in the number of incorrect key values. Second, we demonstrate that the SAT attack can break any locking scheme within a number of iterations that is linear in the key size. Moreover, we show how key constraints on point functions affect the SAT attack complexity. We explain why a proper key constraint on AntiSAT effectively reduces the iteration complexity to a constant of 1, and the same constraint brings the breaking of CAS-Lock down to linear iteration complexity. Our analysis provides a new perspective on the capabilities of the SAT attack against the multiplier benchmark c6288, and we suggest new directions for achieving SAT resiliency.
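To make the pruning mechanism concrete, the following toy sketch emulates the SAT attack's iteration on a tiny hypothetical locked circuit by brute-force enumeration of the key space instead of CNF clauses and a SAT solver; the locked function, key width, and correct key are illustrative assumptions, not a real locking scheme.

```python
# Toy emulation of the SAT-attack pruning loop on a tiny locked circuit.
# Instead of CNF clauses in a SAT solver, the key space is enumerated
# explicitly; the circuit and the secret key below are illustrative only.
from itertools import product

KEY_BITS, IN_BITS = 4, 3
CORRECT_KEY = (1, 0, 1, 1)                       # hypothetical secret key k*

def locked_circuit(x, k):
    """Hypothetical locked function: XOR-masks the input with key bits
    and reduces it; fully functional only for keys equivalent to k*."""
    masked = [xi ^ ki for xi, ki in zip(x, k[:IN_BITS])]
    return (sum(masked) + k[IN_BITS]) % 2

oracle = lambda x: locked_circuit(x, CORRECT_KEY)   # the unlocked chip

candidates = set(product((0, 1), repeat=KEY_BITS))  # all 2^4 key guesses
iterations = 0
while True:
    # Find a distinguishing input pattern (DIP): an input on which two
    # surviving key candidates disagree.
    dip = next((x for x in product((0, 1), repeat=IN_BITS)
                if len({locked_circuit(x, k) for k in candidates}) > 1), None)
    if dip is None:
        break                                    # all survivors are equivalent
    y = oracle(dip)                              # query the oracle
    # The observed I/O pair plays the role of the newly appended CNF clauses:
    # every key inconsistent with it is pruned.
    candidates = {k for k in candidates if locked_circuit(dip, k) == y}
    iterations += 1

print(f"{iterations} iteration(s); {len(candidates)} functionally equivalent keys remain")
```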
In this paper, we consider low-rank approximations of the solutions to the stochastic Helmholtz equation with random coefficients. A stochastic Galerkin finite element method is used for the discretization of the Helmholtz problem. Existence theory for the low-rank approximation is established when the system matrix is indefinite. The low-rank algorithm does not require the construction of a large system matrix, which yields an advantage in terms of CPU time and storage. Numerical results show that, when the operations in a low-rank method are performed efficiently, it is possible to obtain an advantage in terms of storage and CPU time compared to full-rank computations. We also propose a general approach to implementing a preconditioner efficiently in the low-rank format.
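As a rough illustration of why the large system matrix never needs to be assembled, the sketch below applies a Kronecker-structured Galerkin operator $\sum_i G_i \otimes K_i$ to an iterate kept in low-rank factored form $X \approx UV^T$ and recompresses the result. The matrices, sizes, and truncation tolerance are placeholders, not the paper's discretization.

```python
# Low-rank operator application: sum_i (G_i kron K_i) vec(X) = vec(sum_i K_i X G_i^T),
# carried out on the factors of X = U V^T so the Kronecker matrix is never formed.
import numpy as np

def apply_operator_lowrank(Ks, Gs, U, V):
    """Return low-rank factors of sum_i K_i X G_i^T for X = U V^T."""
    U_new = np.hstack([K @ U for K in Ks])          # n_x  x (r * n_terms)
    V_new = np.hstack([G @ V for G in Gs])          # n_xi x (r * n_terms)
    return U_new, V_new

def truncate(U, V, tol=1e-8):
    """Recompress the factored representation with two QRs and a small SVD."""
    Qu, Ru = np.linalg.qr(U)
    Qv, Rv = np.linalg.qr(V)
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)
    r = max(1, int(np.sum(s > tol * s[0])))
    return (Qu @ W[:, :r]) * s[:r], Qv @ Zt[:r].T

# Tiny illustrative data: 2 Kronecker terms, spatial size 50, stochastic size 20.
rng = np.random.default_rng(0)
Ks = [rng.standard_normal((50, 50)) for _ in range(2)]
Gs = [rng.standard_normal((20, 20)) for _ in range(2)]
U, V = rng.standard_normal((50, 3)), rng.standard_normal((20, 3))
U2, V2 = truncate(*apply_operator_lowrank(Ks, Gs, U, V))
print(U2.shape, V2.shape)     # recompressed factors, still far smaller than 1000 x 1000
```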
COSPAS-SARSAT is an international programme for "Search and Rescue" (SAR) missions based on the "Satellite Aided Tracking" system (SARSAT). It is designed to provide accurate, timely, and reliable distress alert and location data to help the SAR authorities of participating countries assist persons and vessels in distress. Two types of satellite constellations serve COSPAS-SARSAT: low Earth orbit search and rescue (LEOSAR) and geostationary orbiting search and rescue (GEOSAR). Despite its nearly global deployment and critical importance, we found that the COSPAS-SARSAT protocols and standard 406 MHz transmissions lack essential means of cybersecurity. In this paper, we investigate the cybersecurity aspects of COSPAS-SARSAT space-/satellite-based systems. In particular, we practically and successfully implement and demonstrate the first (to our knowledge) attacks on the COSPAS-SARSAT 406 MHz protocols, namely replay, spoofing, and protocol fuzzing on EPIRB protocols. We also identify a set of core research challenges preventing more effective cybersecurity research in the field and outline the main cybersecurity weaknesses and possible mitigations to increase the system's security level.
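As a purely illustrative sketch of the protocol-fuzzing attack class (and not of the actual 406 MHz message layout, whose fields and framing are not reproduced here), one can mutate a captured beacon frame bit by bit before handing it to the transmit chain; the frame contents, its length, and the transmit step below are placeholders.

```python
# Minimal bit-flip fuzzer over a captured beacon frame, illustrating the
# protocol-fuzzing attack class only; the placeholder frame and its length
# are assumptions, not the real 406 MHz message format.
import random

rng = random.Random(1)

def mutate_frame(bits, n_flips=3):
    """Return a copy of the frame with a few randomly chosen bits flipped."""
    bits = list(bits)
    for i in rng.sample(range(len(bits)), n_flips):
        bits[i] = '1' if bits[i] == '0' else '0'
    return ''.join(bits)

captured = '0' * 144                 # placeholder, not a recorded beacon frame
for trial in range(5):
    fuzzed = mutate_frame(captured)
    # hand `fuzzed` to the modulation/transmission chain under test
    print(trial, fuzzed[:32], '...')
```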
A method for detecting and approximating fault lines in two dimensions and fault surfaces in three dimensions, i.e., decision curves, with guaranteed accuracy is presented. Reformulated as a classification problem, our method starts from a set of scattered points along with the corresponding classification algorithm and constructs a representation of the decision curve by points with a prescribed maximal distance to the true decision curve. Our algorithm ensures that the representing point set covers the decision curve in its entire extent and features local refinement based on the geometric properties of the decision curve. We demonstrate applications of our method to problems related to the detection of faults, to Multi-Criteria Decision Aid and, in combination with Kirsch's factorization method, to solving an inverse acoustic scattering problem. In all applications considered in this work, our method requires significantly fewer pointwise classifications than previously employed algorithms.
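The accuracy-controlled point-location step can be illustrated by a simple bisection between two differently classified points; the circular classifier below is a stand-in for the application-specific classification algorithm, and the sketch omits the covering and local-refinement logic.

```python
# Bisection sketch: locate a point on the decision curve to within a
# prescribed tolerance, starting from one point of each class.
import numpy as np

def classify(p):
    """Hypothetical classifier: two regions separated by the unit circle."""
    return int(p[0] ** 2 + p[1] ** 2 < 1.0)

def boundary_point(p_in, p_out, tol=1e-3):
    """Bisect the segment [p_in, p_out] until its length is below tol;
    the midpoint is then within tol of the true decision curve."""
    p_in, p_out = np.asarray(p_in, float), np.asarray(p_out, float)
    assert classify(p_in) != classify(p_out)
    while np.linalg.norm(p_out - p_in) > tol:
        mid = 0.5 * (p_in + p_out)
        if classify(mid) == classify(p_in):
            p_in = mid
        else:
            p_out = mid
    return 0.5 * (p_in + p_out)

print(boundary_point((0.0, 0.0), (2.0, 0.0)))   # approximately (1.0, 0.0)
```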
In this work, we study the computational complexity of quantum determinants, a $q$-deformation of matrix permanents: given a complex number $q$ on the unit circle and an $n\times n$ matrix $X$, the $q$-permanent of $X$ is defined as $$\mathrm{Per}_q(X) = \sum_{\sigma\in S_n} q^{\ell(\sigma)}X_{1,\sigma(1)}\cdots X_{n,\sigma(n)},$$ where $\ell(\sigma)$ is the inversion number of the permutation $\sigma$ in the symmetric group $S_n$ on $n$ elements. This function family generalizes the determinant and the permanent, which correspond to the cases $q=-1$ and $q=1$, respectively. For worst-case hardness, using Liouville's approximation theorem and facts from algebraic number theory, we show that exactly computing the $q$-permanent is $\mathsf{Mod}_p\mathsf{P}$-hard for any primitive $m$-th root of unity $q$ with $m=p^k$ an odd prime power. This implies that an efficient algorithm for computing the $q$-permanent would result in a collapse of the polynomial hierarchy. Next, we show that exact computation of the $q$-permanent can be achieved using an oracle that approximates it to within a polynomial multiplicative error together with a membership oracle for a finite set of algebraic integers. Hence an efficient approximation algorithm would also imply a collapse of the polynomial hierarchy. By random self-reducibility, computing the $q$-permanent remains hard for a wide range of distributions satisfying a property called the strong autocorrelation property. Specifically, this is proved via a reduction from the $1$-permanent to the $q$-permanent for $O(1/n^2)$ points $z$ on the unit circle. Since this family of permanent functions shares a common algebraic structure, various techniques developed for the hardness of the permanent can be generalized to $q$-permanents.
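For small $n$, the definition can be evaluated directly by enumerating permutations, which makes the special cases $q=\pm 1$ easy to check; the sketch below is for illustration only and is of course exponential in $n$.

```python
# Direct evaluation of Per_q(X) from its definition, enumerating all n!
# permutations; intended only to make the formula concrete for small n.
from itertools import permutations
import numpy as np

def inversions(sigma):
    """Inversion number: number of pairs (i, j) with i < j and sigma(i) > sigma(j)."""
    return sum(sigma[i] > sigma[j]
               for i in range(len(sigma)) for j in range(i + 1, len(sigma)))

def q_permanent(X, q):
    n = X.shape[0]
    return sum(q ** inversions(sigma) * np.prod([X[i, sigma[i]] for i in range(n)])
               for sigma in permutations(range(n)))

X = np.array([[1, 2], [3, 4]], dtype=complex)
print(q_permanent(X, 1))    # permanent:   1*4 + 2*3 = 10
print(q_permanent(X, -1))   # determinant: 1*4 - 2*3 = -2
```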
Graphical model selection is a seemingly impossible task when many pairs of variables are never jointly observed; it requires inferring conditional dependencies without observations of the corresponding marginal dependencies. This under-explored statistical problem arises in neuroimaging, for example, when different partially overlapping subsets of neurons are recorded in non-simultaneous sessions. We call this statistical challenge the "Graph Quilting" problem. We study this problem in the context of sparse inverse covariance learning and focus on Gaussian graphical models, where we show that the missing parts of the covariance matrix yield an unidentifiable precision matrix specifying the graph. Nonetheless, we show that, under mild conditions, it is possible to correctly identify the edges connecting the observed pairs of nodes. Additionally, we show that we can recover a minimal superset of the edges connecting variables that are never jointly observed. Thus, one can infer conditional relationships even when marginal relationships are unobserved, a surprising result! To accomplish this, we propose an $\ell_1$-regularized partially observed likelihood-based graph estimator and provide performance guarantees both in the population setting and in high-dimensional finite-sample settings. We illustrate our approach using synthetic data, as well as for learning functional neural connectivity from calcium imaging data.
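The sketch below only illustrates how the partially observed covariance arises from non-simultaneous, overlapping recording sessions; it is not the proposed estimator, and the session sizes, overlap pattern, and stand-in data are illustrative.

```python
# Sketch of the Graph Quilting data setting: two sessions record overlapping
# variable subsets, so some covariance entries are never jointly observed.
import numpy as np

rng = np.random.default_rng(0)
p = 6
sessions = [[0, 1, 2, 3], [2, 3, 4, 5]]        # overlapping recorded subsets

Sigma_hat = np.full((p, p), np.nan)            # NaN = never jointly observed
counts = np.zeros((p, p))
for idx in sessions:
    X = rng.standard_normal((200, len(idx)))   # stand-in for session recordings
    S = X.T @ X / X.shape[0]
    for a, i in enumerate(idx):
        for b, j in enumerate(idx):
            prev = 0.0 if np.isnan(Sigma_hat[i, j]) else Sigma_hat[i, j]
            Sigma_hat[i, j] = prev + S[a, b]
            counts[i, j] += 1
Sigma_hat = Sigma_hat / np.where(counts > 0, counts, 1)

# Entries for pairs such as (0, 4) remain NaN; only the observed entries can
# enter an l1-regularized partially observed likelihood for the precision matrix.
print(np.isnan(Sigma_hat))
```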
We develop a linear-time algorithm for finding the diameter of an asteroidal triple-free (AT-free) graph. Furthermore, we update the definition of polar pairs and develop new properties of polar pairs for (weak) dominating pair graphs. We prove that a simplicial vertex in a general graph can be computed in $O(n^2)$ time, based on an existing reduction to the problem of finding the diameter of an AT-free graph. We thereby improve the best-known running-time complexities of several graph-theoretic problems.
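For orientation, the sketch below shows the standard double-sweep BFS, which lower-bounds the diameter of an arbitrary unweighted graph using two BFS passes; it is a common baseline for diameter computation, not the linear-time AT-free algorithm developed in the paper.

```python
# Double-sweep BFS: BFS from an arbitrary vertex, then BFS from the farthest
# vertex found; the second eccentricity is a lower bound on the diameter.
from collections import deque

def bfs_farthest(adj, src):
    """Return (farthest vertex, its distance) from src via BFS."""
    dist = {src: 0}
    queue = deque([src])
    far, far_d = src, 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > far_d:
                    far, far_d = v, dist[v]
                queue.append(v)
    return far, far_d

def double_sweep(adj):
    u, _ = bfs_farthest(adj, next(iter(adj)))
    _, d = bfs_farthest(adj, u)
    return d                                   # lower bound on the diameter

path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
print(double_sweep(path))                      # 4 on a 5-vertex path
```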
2-Opt is probably the most basic local search heuristic for the TSP. This heuristic achieves amazingly good results on real-world Euclidean instances, both with respect to running time and approximation ratio. There are numerous experimental studies of the performance of 2-Opt. However, theoretical knowledge about this heuristic is still very limited. Not even its worst-case running time on two-dimensional Euclidean instances was known so far. We clarify this issue by presenting, for every $p\in\mathbb{N}$, a family of $L_p$ instances on which 2-Opt can take an exponential number of steps. Previous probabilistic analyses were restricted to instances in which $n$ points are placed uniformly at random in the unit square $[0,1]^2$. We consider a more general model in which the points can be placed independently according to general distributions on $[0,1]^d$, for an arbitrary $d\ge 2$; in particular, we allow different distributions for different points. We study the expected number of local improvements in terms of the number $n$ of points and the maximal density $\phi$ of the probability distributions. We show an upper bound of $\tilde{O}(n^{4+1/3}\cdot\phi^{8/3})$ on the expected length of any 2-Opt improvement path. When starting with an initial tour computed by an insertion heuristic, the upper bound on the expected number of steps improves to $\tilde{O}(n^{4+1/3-1/d}\cdot\phi^{8/3})$. If distances are measured according to the Manhattan metric, then the expected number of steps is bounded by $\tilde{O}(n^{4-1/d}\cdot\phi)$. In addition, we prove an upper bound of $O(\sqrt[d]{\phi})$ on the expected approximation factor with respect to all $L_p$ metrics. Note that our probabilistic analysis covers as special cases the uniform input model with $\phi=1$ and a smoothed analysis with Gaussian perturbations of standard deviation $\sigma$ with $\phi\sim1/\sigma^d$.
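For reference, a minimal implementation of 2-Opt on random points in $[0,1]^2$ is sketched below; each accepted move reverses a tour segment, which is exactly the kind of local improvement counted in the analysis. The instance size and the identity starting tour are arbitrary choices.

```python
# Minimal 2-Opt local search on random Euclidean points.
import numpy as np

def tour_length(pts, tour):
    return sum(np.linalg.norm(pts[tour[i]] - pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, tour):
    improved, steps = True, 0
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour) - (i == 0)):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                # Gain of replacing edges (a,b), (c,d) by (a,c), (b,d).
                delta = (np.linalg.norm(pts[a] - pts[c]) + np.linalg.norm(pts[b] - pts[d])
                         - np.linalg.norm(pts[a] - pts[b]) - np.linalg.norm(pts[c] - pts[d]))
                if delta < -1e-12:
                    tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]   # reverse segment
                    improved, steps = True, steps + 1
    return tour, steps

rng = np.random.default_rng(0)
pts = rng.random((30, 2))                       # n points uniform in [0,1]^2
tour, steps = two_opt(pts, list(range(30)))
print(steps, tour_length(pts, tour))            # number of improving steps, final length
```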
With an increased focus on incorporating fairness in machine learning models, it becomes imperative not only to assess and mitigate bias at each stage of the machine learning pipeline but also to understand the downstream impacts of bias across stages. Here we consider a general but realistic scenario in which a predictive model is learned from (potentially biased) training data and the model's predictions are audited for fairness post hoc. We provide a theoretical analysis of how a specific form of data bias, differential sampling bias, propagates from the data stage to the prediction stage. Unlike prior work, we evaluate the downstream impacts of data biases quantitatively rather than qualitatively and prove theoretical guarantees for detection. Under reasonable assumptions, we quantify how the amount of bias in the model's predictions varies as a function of the amount of differential sampling bias in the data, and at what point this bias becomes provably detectable by the auditor. Through experiments on two criminal justice datasets -- the well-known COMPAS dataset and historical data from the NYPD's stop-and-frisk policy -- we demonstrate that the theoretical results hold in practice even when our assumptions are relaxed.
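A small simulation sketch of the phenomenon: the positive examples of one group are under-sampled in the training data, and a simple audit measures the resulting gap in positive prediction rates. The data-generating model, sampling rate, and audit metric are illustrative assumptions, not the setting analyzed in the paper.

```python
# Illustrative simulation of differential sampling bias and a post-hoc audit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def population(n):
    g = rng.integers(0, 2, n)                  # group A = 0, group B = 1
    x = rng.standard_normal((n, 2))
    y = (x[:, 0] + 0.5 * x[:, 1] + rng.standard_normal(n) > 0).astype(int)
    return x, g, y

x, g, y = population(20000)
# Differential sampling: drop 70% of group-B positives from the training data.
keep = ~((g == 1) & (y == 1) & (rng.random(len(y)) < 0.7))
model = LogisticRegression().fit(np.column_stack([x[keep], g[keep]]), y[keep])

# Audit on an unbiased sample: gap in positive prediction rates across groups.
xt, gt, _ = population(20000)
pred = model.predict(np.column_stack([xt, gt]))
gap = pred[gt == 0].mean() - pred[gt == 1].mean()
print(f"positive-rate gap (A minus B): {gap:.3f}")
```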
When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.
A variety of deep neural networks have been applied to medical image segmentation and achieve good performance. Unlike natural images, medical images of the same imaging modality share a common pattern: the same normal organs or tissues are located at similar positions in the images. Thus, in this paper we incorporate this prior knowledge into the structure of neural networks so that it can be exploited for accurate segmentation. Based on this idea, we propose a novel deep network called the knowledge-based fully convolutional network (KFCN) for medical image segmentation. We analyze the segmentation function and the corresponding error, and show the existence of an asymptotically stable region for KFCN that a traditional FCN does not possess. Experiments validate our assumption about incorporating prior knowledge into the convolution kernels of KFCN and show that KFCN achieves reasonable segmentation with satisfactory accuracy.
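As a loose illustration (not KFCN itself), the sketch below builds a tiny fully convolutional segmentation network in PyTorch and feeds a fixed positional prior map as an extra input channel; KFCN instead encodes such prior knowledge in the convolution kernels, which this sketch does not reproduce.

```python
# Minimal fully convolutional segmentation sketch with a spatial prior
# supplied as an additional input channel (an illustrative choice only).
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + 1, 16, 3, padding=1), nn.ReLU(),   # +1 prior channel
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),                         # per-pixel class scores
        )

    def forward(self, image, prior):
        # prior: fixed map of per-pixel organ/tissue likelihood at this position
        return self.net(torch.cat([image, prior], dim=1))

model = TinyFCN()
image = torch.randn(1, 1, 64, 64)               # one grayscale slice
prior = torch.rand(1, 1, 64, 64)                # hypothetical positional prior
print(model(image, prior).shape)                # torch.Size([1, 2, 64, 64])
```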