We consider Group Control by Adding Individuals (GCAI) in the setting of group identification for two procedural rules -- the consensus-start-respecting rule and the liberal-start-respecting rule. It is known that GCAI is NP-hard for both rules, but whether the problems are fixed-parameter tractable with respect to the number of distinguished individuals remained open. We resolve both open problems in the affirmative. In addition, we strengthen the NP-hardness results by showing that, with respect to the natural parameter of the number of added individuals, GCAI is W[2]-hard for both rules. Notably, the W[2]-hardness for the liberal-start-respecting rule holds even in the very special case where the qualifications of individuals satisfy the so-called consecutive ones property; for the consensus-start-respecting rule, by contrast, the problem becomes polynomial-time solvable in this special case. We also study a dual restriction where the disqualifications of individuals fulfill the consecutive ones property, and show that under this restriction GCAI becomes polynomial-time solvable for both rules. Our reductions showing W[2]-hardness also imply several lower bounds concerning kernelization and exact algorithms.
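To make the two rules concrete, here is a minimal sketch, assuming the standard definitions: the liberal-start-respecting rule starts from the self-qualifying individuals, the consensus-start-respecting rule from those qualified by everyone, and both then iteratively add anyone qualified by a current member (the qualification profile `phi` and the example are illustrative):

```python
def socially_qualified(phi, start):
    """Close a start set under 'qualified by some current member'."""
    n, K, changed = len(phi), set(start), True
    while changed:
        changed = False
        for b in range(n):
            if b not in K and any(phi[a][b] for a in K):
                K.add(b)
                changed = True
    return K

def f_CSR(phi):  # consensus start: individuals qualified by everyone
    n = len(phi)
    return socially_qualified(phi, {b for b in range(n) if all(phi[a][b] for a in range(n))})

def f_LSR(phi):  # liberal start: self-qualifying individuals
    return socially_qualified(phi, {a for a in range(len(phi)) if phi[a][a]})

# phi[a][b] == 1 means individual a qualifies individual b.
phi = [[1, 1, 0],
       [0, 0, 1],
       [0, 0, 1]]
print(f_CSR(phi), f_LSR(phi))  # set() vs {0, 1, 2}: no consensus start exists here
```

Roughly, GCAI then asks whether at most $k$ outside individuals can be added to the society so that all distinguished individuals become socially qualified under the rule at hand.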
We investigate a family of bilevel imaging learning problems where the lower-level instance corresponds to a convex variational model involving first- and second-order nonsmooth sparsity-based regularizers. By using geometric properties of the primal-dual reformulation of the lower-level problem and introducing suitable auxiliary variables, we are able to reformulate the original bilevel problems as Mathematical Programs with Complementarity Constraints (MPCC). For the latter, we prove tight constraint qualification conditions (MPCC-RCPLD and partial MPCC-LICQ) and derive Mordukhovich (M-) and strong (S-) stationarity conditions. The stationarity systems for the MPCC also translate into stationarity conditions for the original formulation. Second-order sufficient optimality conditions are derived as well, together with a local uniqueness result for stationary points. The proposed reformulation may be extended to problems in function spaces, leading to MPCCs with constraints on the gradient of the state. The MPCC reformulation also enables the efficient use of available large-scale nonlinear programming solvers, as shown in a companion paper, where different imaging applications are studied.
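As a schematic scalar instance of this reformulation (a sketch in our own notation, with a single weight $\alpha$, a linear operator $K$, data $f$, and ground truth $u^\dagger$; the paper treats richer first- and second-order regularizers), consider
\[
\min_{\alpha\ge 0}\ \tfrac12\|u_\alpha - u^\dagger\|^2
\quad\text{s.t.}\quad
u_\alpha \in \arg\min_u\ \tfrac12\|u-f\|^2 + \alpha\|Ku\|_1 .
\]
The primal-dual optimality system of the lower level reads, with dual variable $q$,
\[
u - f + K^\top q = 0, \qquad |q_i|\le\alpha, \qquad q_i\,(Ku)_i = \alpha\,|(Ku)_i| \quad \forall i,
\]
and introducing auxiliary variables for $|(Ku)_i|$ (splitting positive and negative parts) turns the last relation into standard complementarity constraints, yielding an MPCC in the variables $(\alpha,u,q)$.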
In applications of group testing in networks, e.g., identifying individuals who are infected by a disease spread over a network, exploiting correlation among network nodes provides fundamental opportunities for reducing the number of tests needed. We model and analyze group testing on $n$ correlated nodes whose interactions are specified by a graph $G$. We model correlation through an edge-faulty random graph formed from $G$, in which each edge is dropped with probability $1-r$ and all nodes in the same component have the same state. We consider three classes of graphs -- cycles and trees, $d$-regular graphs, and stochastic block models (SBM) -- and obtain lower and upper bounds on the number of tests needed to identify the defective nodes. Our results are expressed relative to the number of tests needed when the nodes are independent, as functions of $n$, $r$, and the target error. In particular, we quantify the fundamental improvement that exploiting correlation offers by the ratio between the total number of nodes $n$ and the equivalent number of independent nodes in a classic group testing algorithm. The lower bounds are derived by exhibiting a strong dependence of the number of tests needed on the expected number of components. In this regard, we establish a new approximation for the distribution of component sizes in "$d$-regular trees", which may be of independent interest and leads to a lower bound on the expected number of components in $d$-regular graphs. The upper bounds are found by forming dense subgraphs in which nodes are more likely to be in the same state. When $G$ is a cycle or tree, we show an improvement by a factor of $\log(1/r)$. For the grid, a graph with almost $2n$ edges, the improvement is by a factor of $(1-r)\log(1/r)$, indicating a drastic improvement compared to trees. When $G$ has a larger number of edges, as in the SBM, the improvement can scale in $n$.
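The central quantity in the lower bounds, the expected number of components of the edge-faulty graph, is easy to probe by simulation. A minimal Monte Carlo sketch under the stated model (each edge survives independently with probability $r$), on a cycle where the expected count is essentially $n(1-r)$:

```python
# Each edge of G survives independently with probability r; every surviving
# component shares one infection state, so components act as super-nodes.
import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def num_components(n, edges, r, rng):
    parent = list(range(n))
    for u, v in edges:
        if rng.random() < r:                     # edge survives
            ru, rv = find(parent, u), find(parent, v)
            if ru != rv:
                parent[ru] = rv
    return sum(1 for x in range(n) if find(parent, x) == x)

n, r = 1000, 0.9
cycle = [(i, (i + 1) % n) for i in range(n)]
rng = random.Random(0)
avg = sum(num_components(n, cycle, r, rng) for _ in range(200)) / 200
print(avg)  # ~ n * (1 - r) = 100: each failed edge splits the cycle once
```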
Privacy protection methods, such as differentially private mechanisms, introduce noise into the resulting statistics, which often yields complex and intractable sampling distributions. In this paper, we propose to use the simulation-based "repro sample" approach to produce statistically valid confidence intervals and hypothesis tests based on privatized statistics. We show that this methodology is applicable to a wide variety of private inference problems, appropriately accounts for biases introduced by privacy mechanisms (such as by clamping), and improves over other state-of-the-art inference methods, such as the parametric bootstrap, in terms of the coverage and type I error of the private inference. We also develop significant improvements and extensions of the repro sample methodology for general models (not necessarily related to privacy), including 1) modifying the procedure to guarantee coverage and type I error control even accounting for Monte Carlo error, and 2) proposing efficient numerical algorithms to implement the confidence intervals and $p$-values.
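To illustrate the flavor of the approach on the simplest example, here is a hedged sketch of a simulated confidence-belt construction for a clamped, Laplace-privatized mean; it reproduces the clamping bias inside the simulation, but the acceptance rule and all names are illustrative simplifications, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps, lo, hi = 100, 1.0, 0.0, 1.0              # clamp to [lo, hi], budget eps
x = np.clip(rng.normal(0.3, 0.2, n), lo, hi)     # true theta = 0.3
s_obs = x.mean() + rng.laplace(0, (hi - lo) / (n * eps))  # privatized statistic

def simulate(theta, m=1000):
    """Regenerate the privatized statistic m times under parameter theta."""
    y = np.clip(rng.normal(theta, 0.2, (m, n)), lo, hi)   # clamping bias reproduced
    return y.mean(axis=1) + rng.laplace(0, (hi - lo) / (n * eps), m)

# Keep every theta whose simulated statistics place s_obs in their central 95%.
accepted = []
for t in np.linspace(lo, hi, 101):
    sims = simulate(t)
    if np.quantile(sims, 0.025) <= s_obs <= np.quantile(sims, 0.975):
        accepted.append(t)
print(min(accepted), max(accepted))              # a conservative interval for theta
```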
We investigate the parameterized complexity of several problems formalizing cluster identification in graphs; in other words, we ask whether a graph contains a large enough and sufficiently connected subgraph. We study three relaxations of CLIQUE: $s$-CLUB and $s$-CLIQUE, which relax the distance constraint in the cluster and in the original graph, respectively, and $\gamma$-COMPLETE SUBGRAPH, which relaxes the minimum degree within the cluster. As these three problems are known to be NP-hard, we study their parameterized complexity. We prove that $s$-CLUB and $s$-CLIQUE are NP-hard even restricted to graphs of degeneracy $\le 3$ whenever $s \ge 3$, and to graphs of degeneracy $\le 2$ whenever $s \ge 5$; this is strictly stronger than their known W[1]-hardness parameterized by the degeneracy. We also show that these problems are solvable in polynomial time on graphs of degeneracy $1$. Concerning $\gamma$-COMPLETE SUBGRAPH, we prove that it is W[1]-hard parameterized both by the degeneracy (which implies W[1]-hardness parameterized by the number of vertices in the $\gamma$-complete subgraph) and by the number of vertices outside the $\gamma$-complete subgraph.
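The distinction between the two distance relaxations is easy to miss. A minimal BFS-based sketch (with our own helper names) checks a candidate set both ways: $S$ is an $s$-clique if all pairwise distances in the original graph $G$ are at most $s$, and an $s$-club if the induced subgraph $G[S]$ itself has diameter at most $s$.

```python
from collections import deque

def distances(adj, src, allowed=None):
    """BFS distances from src, optionally restricted to an allowed vertex set."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if (allowed is None or v in allowed) and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_s_clique(adj, S, s):  # distances measured in the original graph
    return all(distances(adj, u).get(v, s + 1) <= s for u in S for v in S)

def is_s_club(adj, S, s):    # distances measured inside the induced subgraph
    return all(distances(adj, u, allowed=set(S)).get(v, s + 1) <= s for u in S for v in S)

# Path a-b-c: {a, c} is a 2-clique (distance 2 via b) but not a 2-club,
# since b is outside the set and G[{a, c}] has no edges.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
print(is_s_clique(adj, ['a', 'c'], 2), is_s_club(adj, ['a', 'c'], 2))  # True False
```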
Switch-like responses arising from bistability have been linked to cell signaling processes and memory. Revealing the shape and properties of the set of parameters that lead to bistability is necessary to understand the underlying biological mechanisms, but is a complex mathematical problem. We present an efficient approach to determine a basic topological property of the parameter region of multistationarity, namely whether it is connected. The connectivity of this region can be interpreted in terms of the biological mechanisms underlying bistability and the switch-like patterns that the system can create. We provide an algorithm to certify that the parameter region of multistationarity is connected, targeting reaction networks with mass-action kinetics. We show that this is the case for numerous relevant cell signaling motifs previously described to exhibit bistability. However, for a motif displaying a phosphorylation cycle with allosteric enzyme regulation, the region of multistationarity has two distinct connected components, corresponding to two different, but symmetric, biological mechanisms. The method relies on linear programming and bypasses the expensive computational cost of direct and generic approaches to studying parametric polynomial systems. This makes it suitable for mass screening of reaction networks.
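The paper's algorithm is specific to mass-action networks, but the role of linear programming can be illustrated by a generic certificate in the same spirit: a polynomial attains negative values on the positive orthant if some exponent with a negative coefficient is a vertex of the Newton polytope, which is an LP feasibility question (a toy sketch, not the paper's method):

```python
# If an exponent a_j with a negative coefficient is a vertex of the Newton
# polytope, substituting x = (t^{w_1}, ..., t^{w_d}) makes that term dominate
# as t grows, so the polynomial attains negative values on the positive orthant.
import numpy as np
from scipy.optimize import linprog

def vertex_certificate(exponents, j):
    """Return w with w.a_j >= w.a_i + 1 for all i != j, or None if infeasible."""
    E = np.asarray(exponents, dtype=float)
    A = np.array([E[i] - E[j] for i in range(len(E)) if i != j])  # (a_i - a_j).w <= -1
    res = linprog(c=np.zeros(E.shape[1]), A_ub=A, b_ub=-np.ones(len(A)),
                  bounds=[(None, None)] * E.shape[1])
    return res.x if res.success else None

# p = x^2 + y^2 - 2xy = (x - y)^2 is nonnegative: (1,1) is not a vertex -> None.
print(vertex_certificate([(2, 0), (0, 2), (1, 1)], j=2))
# q = x^2 + y^2 - x^3 y^3 goes negative (e.g. at x = y = 2): (3,3) is a vertex.
print(vertex_certificate([(2, 0), (0, 2), (3, 3)], j=2))
```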
We study the problem of verification and synthesis of robust control barrier functions (CBF) for control-affine polynomial systems with bounded additive uncertainty and convex polynomial constraints on the control. We first formulate robust CBF verification and synthesis as multilevel polynomial optimization problems (POP), where verification optimizes -- in three levels -- over the uncertainty, the control, and the state, while synthesis additionally optimizes over the parameters of a chosen parametric CBF candidate. We then show that, by invoking the KKT conditions of the inner optimizations over uncertainty and control, the verification problem can be simplified to a single-level POP and the synthesis problem reduces to a min-max POP. This reduction leads to multilevel semidefinite relaxations. For the verification problem, we apply Lasserre's hierarchy of moment relaxations. For the synthesis problem, we draw connections to existing relaxation techniques for robust min-max POP, which first use sum-of-squares programming to find increasingly tight polynomial lower bounds on the unknown value function of the verification POP, and then call Lasserre's hierarchy again to maximize these lower bounds. Both semidefinite relaxations guarantee asymptotic global convergence to optimality. We provide an in-depth study of our framework on the controlled Van der Pol oscillator, both with and without additive uncertainty.
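Before invoking semidefinite relaxations, a cheap grid search already gives a numerical sanity check of the robust CBF inequality. The sketch below does this for a controlled Van der Pol oscillator with matched additive uncertainty; the candidate $h$, the bounds, and the class-$\mathcal{K}$ term are all illustrative assumptions, and a grid check can refute a candidate CBF but never certify it:

```python
# Test sup_u inf_w [dh(x).(f(x) + g*(u + w))] + alpha*h(x) >= 0 on a state grid.
import numpy as np

mu, u_max, eps, alpha = 1.0, 2.0, 0.1, 1.0
f  = lambda x: np.array([x[1], mu * (1 - x[0]**2) * x[1] - x[0]])
g  = np.array([0.0, 1.0])                   # control (and uncertainty) channel
h  = lambda x: 1.0 - x[0]**2 - x[1]**2      # candidate CBF: the unit disc
dh = lambda x: np.array([-2 * x[0], -2 * x[1]])

def robust_margin(x):
    grad = dh(x)
    return (grad @ f(x)                     # autonomous drift
            + u_max * abs(grad @ g)         # sup over |u| <= u_max
            - eps * abs(grad @ g)           # inf over matched |w| <= eps
            + alpha * h(x))                 # linear class-K term

xs = np.linspace(-1.0, 1.0, 201)
margins = [robust_margin(np.array([a, b]))
           for a in xs for b in xs if a * a + b * b <= 1.0]
print(min(margins) >= 0)                    # True: no violation found on this grid
```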
We consider the numerical solution of the real-time equilibrium Dyson equation, which is used in calculations of the dynamical properties of quantum many-body systems. We show that this equation can be written as a system of coupled, nonlinear, convolutional Volterra integro-differential equations, for which the kernel depends self-consistently on the solution. As is typical in the numerical solution of Volterra-type equations, the computational bottleneck is the quadratic-scaling cost of history integration. However, the structure of the nonlinear Volterra integral operator precludes the use of standard fast algorithms. We propose a quasilinear-scaling FFT-based algorithm which respects the structure of the nonlinear integral operator. The resulting method can reach large propagation times, and is thus well-suited to explore quantum many-body phenomena at low energy scales. We demonstrate the solver with two standard model systems: the Bethe graph, and the Sachdev-Ye-Kitaev model.
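The bottleneck and its remedy are easy to see on a linear model problem: all $N$ history sums $c_n=\sum_{m<n}k_{n-m}\,y_m$ cost $O(N^2)$ directly but $O(N\log N)$ via FFT. A minimal sketch; the paper's solver must handle the nonlinear, self-consistent kernel, which requires a blocked causal variant rather than one global transform:

```python
import numpy as np

N = 2048
rng = np.random.default_rng(0)
k = rng.normal(size=N); k[0] = 0.0   # k[j] = kernel at lag j; lag 0 unused
y = rng.normal(size=N)

# O(N^2) direct history integration: c_n = sum_{m<n} k[n-m] y[m]
direct = np.array([sum(k[n - m] * y[m] for m in range(n)) for n in range(N)])

# O(N log N) evaluation of all N history sums at once via zero-padded FFT
fft = np.fft.irfft(np.fft.rfft(k, 2 * N) * np.fft.rfft(y, 2 * N), 2 * N)[:N]

print(np.allclose(direct, fft))  # True
```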
Flocking motion control is concerned with managing the possible conflicts between the local and team objectives of multi-agent systems. The overall control process guides the agents while monitoring flock cohesiveness and localization. The underlying mechanisms may degrade due to overlooked unmodeled uncertainties associated with the flock dynamics and formation. Moreover, the efficiency of a control design relies on how quickly it can adapt to different dynamic situations in real time. An online model-free policy iteration mechanism is developed here to guide a flock of agents to follow an independent command generator over a time-varying graph topology. The strength of connectivity between any two agents, i.e., the graph edge weight, is decided using a position-adjacency-dependent function. An online recursive least squares approach is adopted to tune the guidance strategies without knowing the dynamics of the agents or those of the command generator. The approach is compared with a reinforcement learning approach from the literature based on a value iteration technique. The simulation results of the policy iteration mechanism reveal fast learning and convergence behavior with less computational effort.
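The recursive least squares building block used for online tuning can be sketched in a few lines (generic RLS with forgetting factor 1; the features, targets, and names are illustrative, not the paper's guidance strategy):

```python
# Estimate w in target = w . phi from streaming data, one sample at a time.
import numpy as np

class RLS:
    def __init__(self, dim, delta=100.0):
        self.w = np.zeros(dim)
        self.P = delta * np.eye(dim)        # large P = uninformed prior

    def update(self, phi, target):
        Pphi = self.P @ phi
        gain = Pphi / (1.0 + phi @ Pphi)
        self.w += gain * (target - self.w @ phi)
        self.P -= np.outer(gain, Pphi)
        return self.w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
est = RLS(3)
for _ in range(200):
    phi = rng.normal(size=3)
    est.update(phi, true_w @ phi + 0.01 * rng.normal())
print(np.round(est.w, 3))                   # ~ [2, -1, 0.5]
```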
Nonlinearity parameter tomography leads to the problem of identifying a coefficient in a nonlinear wave equation (such as the Westervelt equation) modeling ultrasound propagation. In this paper we transfer this problem into the frequency domain, where the Westervelt equation is replaced by a coupled system of Helmholtz equations with quadratic nonlinearities. For the case where the to-be-determined nonlinearity coefficient is a characteristic function of an unknown, not necessarily connected domain $D$, we devise and test a reconstruction algorithm based on weighted point source approximations combined with Newton's method. In a more abstract setting, convergence of a regularised Newton-type method for this inverse problem is proven by verifying a range invariance condition of the forward operator and establishing injectivity of its linearisation.
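Schematically, the transfer to the frequency domain uses a multiharmonic ansatz (a sketch with our indexing convention, ignoring damping and absorbing constants into the coefficient $\kappa$): inserting $p(x,t)=\sum_{k} \hat p_k(x)\,e^{\imath k\omega t}$ into $p_{tt}-c^2\Delta p=\kappa(x)\,(p^2)_{tt}$ and matching the coefficients of $e^{\imath k\omega t}$ gives
\[
\Delta \hat p_k + \frac{k^2\omega^2}{c^2}\,\hat p_k \;=\; \frac{k^2\omega^2}{c^2}\,\kappa(x)\sum_{m+l=k}\hat p_m\,\hat p_l ,\qquad k\in\mathbb{Z},
\]
a system of Helmholtz equations coupled quadratically across frequencies through the convolution on the right-hand side.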
Deep neural networks have achieved remarkable success in computer vision tasks. Existing neural networks mainly operate in the spatial domain with fixed input sizes. For practical applications, images are usually large and have to be downsampled to the predetermined input size of the network. Even though downsampling reduces computation and the required communication bandwidth, it removes both redundant and salient information indiscriminately, which results in accuracy degradation. Inspired by digital signal processing theories, we analyze the spectral bias from the frequency perspective and propose a learning-based frequency selection method to identify the trivial frequency components that can be removed without accuracy loss. The proposed method of learning in the frequency domain leverages identical structures of well-known neural networks, such as ResNet-50, MobileNetV2, and Mask R-CNN, while accepting frequency-domain information as the input. Experimental results show that learning in the frequency domain with static channel selection can achieve higher accuracy than the conventional spatial downsampling approach while further reducing the input data size. Specifically, for ImageNet classification with the same input size, the proposed method achieves 1.41% and 0.66% top-1 accuracy improvements on ResNet-50 and MobileNetV2, respectively. Even with half the input size, the proposed method still improves the top-1 accuracy on ResNet-50 by 1%. In addition, we observe a 0.8% average precision improvement on Mask R-CNN for instance segmentation on the COCO dataset.
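A hedged sketch of the overall input pipeline: a blockwise DCT turns an image into per-frequency channels, a selection mask keeps only some of them, and the reduced tensor feeds an otherwise unchanged CNN backbone (the 8x8 block size and the static low-frequency mask are illustrative stand-ins for the learned selection):

```python
import numpy as np
from scipy.fft import dctn

def to_frequency_channels(img, block=8):
    """(H, W) image -> (block*block, H/block, W/block) frequency channels."""
    H, W = img.shape
    blocks = img.reshape(H // block, block, W // block, block).transpose(0, 2, 1, 3)
    coefs = dctn(blocks, axes=(2, 3), norm='ortho')      # 2-D DCT per block
    # Channel i*block+j collects DCT coefficient (i, j) across all blocks.
    return coefs.transpose(2, 3, 0, 1).reshape(block * block, H // block, W // block)

img = np.random.rand(224, 224).astype(np.float32)
chans = to_frequency_channels(img)                        # (64, 28, 28)
keep = np.zeros(64, dtype=bool)
keep[:16] = True                                          # static selection: low freqs
x = chans[keep]                                           # (16, 28, 28) network input
print(chans.shape, x.shape)
```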