In the MAXSAT problem, we are given a set $V$ of $m$ variables and a collection $C$ of $n$ clauses over $V$, and we seek a truth assignment that maximizes the number of satisfied clauses. This problem is $\textit{NP}$-hard even in its restricted version, the 2-MAXSAT problem, in which every clause contains at most 2 literals. In this paper, we present a polynomial-time algorithm for this problem, with time complexity bounded by $O(n^2m^3)$. Hence, we provide a proof of $P = \textit{NP}$.
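To make the problem statement concrete, here is a minimal brute-force reference for 2-MAXSAT (exponential in the number of variables and for illustration only; the literal encoding is our own choice, not from the paper):

```python
from itertools import product

def max_sat_2(num_vars, clauses):
    """Brute-force 2-MAXSAT: clauses are tuples of non-zero ints,
    where literal +i means variable i is true and -i means false.
    Runs in O(2^m) time; a reference oracle, not an efficient solver."""
    best = 0
    for bits in product([False, True], repeat=num_vars):
        satisfied = sum(
            any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        best = max(best, satisfied)
    return best

# (x1 or x2), (not x1 or x2), (x1 or not x2), (not x1 or not x2):
# at most 3 of these 4 clauses can be satisfied simultaneously.
print(max_sat_2(2, [(1, 2), (-1, 2), (1, -2), (-1, -2)]))  # -> 3
```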
While conformal predictors reap the benefits of rigorous statistical guarantees on their error frequency, the size of their corresponding prediction sets is critical to their practical utility. Unfortunately, there is currently a lack of finite-sample analysis and guarantees for their prediction set sizes. To address this shortfall, we theoretically quantify the expected size of the prediction set under the split conformal prediction framework. As this exact quantity usually cannot be computed directly, we further derive point estimates and high-probability intervals that can be easily computed, providing a practical method for characterizing the expected prediction set size across different possible realizations of the test and calibration data. Additionally, we corroborate the efficacy of our results with experiments on real-world datasets, for both regression and classification problems.
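As a concrete illustration of the split conformal framework the abstract analyzes, the sketch below builds marginal regression intervals from absolute-residual scores and reports their size. This is the standard split conformal recipe with placeholder data, not the paper's estimator or guarantee:

```python
import numpy as np

def split_conformal_halfwidth(residuals, alpha):
    """Split conformal: residuals are |y - f(x)| on a held-out
    calibration set; returns the half-width q such that
    [f(x) - q, f(x) + q] covers a test point w.p. >= 1 - alpha."""
    n = len(residuals)
    # Finite-sample-corrected quantile level ceil((n+1)(1-alpha))/n.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(residuals, min(level, 1.0), method="higher")

rng = np.random.default_rng(0)
x_cal = rng.uniform(-1, 1, 500)
y_cal = 2 * x_cal + rng.normal(0, 0.3, 500)
f = lambda x: 2 * x                # pretend this was fit on separate data
q = split_conformal_halfwidth(np.abs(y_cal - f(x_cal)), alpha=0.1)
print("prediction set size:", 2 * q)   # interval length is the set size here
```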
The purpose of this book is to help you program shared-memory parallel systems without risking your sanity. Nevertheless, you should think of the information in this book as a foundation on which to build, rather than as a completed cathedral. Your mission, if you choose to accept it, is to help make further progress in the exciting field of parallel programming: progress that will in time render this book obsolete. Parallel programming in the 21st century is no longer focused solely on science, research, and grand-challenge projects. And this is all to the good, because it means that parallel programming is becoming an engineering discipline. Therefore, as befits an engineering discipline, this book examines specific parallel-programming tasks and describes how to approach them. In some surprisingly common cases, these tasks can be automated. This book is written in the hope that presenting the engineering discipline underlying successful parallel-programming projects will free a new generation of parallel hackers from the need to slowly and painstakingly reinvent old wheels, enabling them to instead focus their energy and creativity on new frontiers. However, what you get from this book will be determined by what you put into it. It is hoped that simply reading this book will be helpful, and that working the Quick Quizzes will be even more helpful. However, the best results come from applying the techniques taught in this book to real-life problems. As always, practice makes perfect. But no matter how you approach it, we sincerely hope that parallel programming brings you at least as much fun, excitement, and challenge as it has brought to us!
The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is the most prominent multi-objective evolutionary algorithm for real-world applications. While it evidently performs well on bi-objective optimization problems, empirical studies suggest that it is less effective when applied to problems with more than two objectives. A recent mathematical runtime analysis confirmed this observation by proving that the NSGA-II, even when run for an exponential number of iterations, misses a constant fraction of the Pareto front of the simple 3-objective OneMinMax problem. In this work, we provide the first mathematical runtime analysis of the NSGA-III, a refinement of the NSGA-II aimed at better handling more than two objectives. We prove that the NSGA-III with sufficiently many reference points -- a small constant factor more than the size of the Pareto front, as suggested for this algorithm -- computes the complete Pareto front of the 3-objective OneMinMax benchmark in an expected number of $O(n \log n)$ iterations. This result holds for all population sizes that are at least the size of the Pareto front. It shows a drastic advantage of the NSGA-III over the NSGA-II on this benchmark. The mathematical arguments used here and in previous work on the NSGA-II suggest that similar findings are likely for other benchmarks with three or more objectives.
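For orientation, the sketch below states the classic bi-objective OneMinMax benchmark, of which the 3-objective OneMinMax studied in the paper is a generalization; this is the textbook bi-objective definition, not the paper's exact 3-objective variant:

```python
def one_min_max(x):
    """Classic bi-objective OneMinMax: simultaneously maximize the
    number of zeros and the number of ones in the bit string x.
    Every bit string is Pareto-optimal, and the Pareto front is
    {(n - k, k) : k = 0, ..., n}, so the benchmark isolates how well
    an algorithm spreads its population across the whole front."""
    ones = sum(x)
    return (len(x) - ones, ones)

print(one_min_max([1, 0, 1, 1]))  # -> (1, 3)
```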
The $L_p$-discrepancy is a quantitative measure for the irregularity of distribution of an $N$-element point set in the $d$-dimensional unit cube, which is closely related to the worst-case error of quasi-Monte Carlo algorithms for numerical integration. Its inverse for dimension $d$ and error threshold $\varepsilon \in (0,1)$ is the minimal number of points in $[0,1)^d$ such that the minimal normalized $L_p$-discrepancy is less than or equal to $\varepsilon$. It is well known that the inverse of the $L_2$-discrepancy grows exponentially fast with the dimension $d$, i.e., we have the curse of dimensionality, whereas the inverse of the $L_{\infty}$-discrepancy depends exactly linearly on $d$. The behavior of the inverse of the $L_p$-discrepancy for general $p \not\in \{2,\infty\}$ has been an open problem for many years. In this paper we show that the $L_p$-discrepancy suffers from the curse of dimensionality for all $p$ in $(1,2]$ of the form $p=2\ell/(2\ell-1)$ with $\ell \in \mathbb{N}$. This result follows from a more general result that we show for the worst-case error of numerical integration in an anchored Sobolev space with anchor 0 of functions that are once differentiable in each variable and whose first derivative has finite $L_q$-norm, where $q$ is an even positive integer satisfying $1/p+1/q=1$.
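For reference, one standard formulation of the $L_p$-star discrepancy and its inverse is sketched below. Reading "normalized" as division by the discrepancy of the empty point set is our assumption and should be checked against the paper's precise definition:

```latex
% Star L_p-discrepancy of a point set P = {x_1, ..., x_N} in [0,1)^d:
L_{p,N}(\mathcal{P}) = \left( \int_{[0,1]^d}
    \left| \frac{\#\{ n \le N \,:\, \boldsymbol{x}_n \in [\boldsymbol{0},\boldsymbol{t}) \}}{N}
         - \prod_{j=1}^{d} t_j \right|^p \,\mathrm{d}\boldsymbol{t} \right)^{1/p}

% Inverse: the fewest points whose normalized discrepancy is at most eps,
% normalizing by the discrepancy of the empty set, L_{p,0} = (p+1)^{-d/p}:
N_p(\varepsilon, d) = \min\left\{ N \in \mathbb{N} \,:\,
    \inf_{\substack{\mathcal{P} \subset [0,1)^d \\ |\mathcal{P}| = N}}
    L_{p,N}(\mathcal{P}) \le \varepsilon \, L_{p,0} \right\}
```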
We introduce the local information cost (LIC), which quantifies the amount of information that nodes in a network need to learn when solving a graph problem. We show that the local information cost presents a natural lower bound on the communication complexity of distributed algorithms. For the synchronous CONGEST $KT_1$ model, where each node has initial knowledge of its neighbors' IDs, we prove that $\Omega(\frac{\text{LIC}_\gamma(P)}{\log\tau \log n})$ bits are required for solving a graph problem $P$ with a $\tau$-round algorithm that errs with probability at most $\gamma$. Our result is the first lower bound that yields a general trade-off between communication and time for graph problems in the CONGEST $KT_1$ model. We demonstrate how to apply the local information cost by deriving a lower bound on the communication complexity of computing routing tables for all-pairs-shortest-paths (APSP) routing, as well as for computing a spanner with multiplicative stretch $2t-1$ that consists of at most $O(n^{1+\frac{1}{t} + \epsilon})$ edges, where $\epsilon = O( {1}/{t^2} )$. More concretely, we derive the following lower bounds in the CONGEST model under the $KT_1$ assumption: For constructing routing tables, we show that any $O(\text{poly}(n))$-time algorithm has a communication complexity of $\Omega( {n^2}/{\log^2 n} )$ bits. Our main result is for constructing graph spanners: We show that any $O(\text{poly}(n))$-time algorithm must send at least $\tilde\Omega(\tfrac{1}{t^2} n^{1+{1}/{2t}})$ bits. Previously, only a trivial lower bound of $\tilde \Omega(n)$ bits was known for these problems.
Every polygon $P$ can be paired with a cap polygon $\hat P$ such that $P$ and $\hat P$ serve as two parts of the boundary surface of a polyhedron $V$. Pairs of vertices on $P$ and $\hat P$ are identified successively to become vertices of $V$. In this paper, we study the cap construction that requires equal angular defects at these vertex pairings. We exhibit a linear relation that arises from the cap construction algorithm, which in turn demonstrates an abundance of polygons satisfying the closed cap condition, that is, polygons that can successfully undergo the cap construction process.
Recently, the authors [CCLMST23] introduced the notion of a shortcut partition of planar graphs and obtained several results from the partition, including a tree cover with $O(1)$ trees for planar metrics and an additive embedding into small-treewidth graphs. In this note, we apply the same partition to resolve the Steiner point removal (SPR) problem in planar graphs: given any set $K$ of terminals in an arbitrary edge-weighted planar graph $G$, we construct a minor $M$ of $G$ whose vertex set is $K$ and which preserves the shortest-path distances between all pairs of terminals in $G$ up to a constant factor. This resolves in the affirmative an open problem that has been asked repeatedly in the literature.
In this paper, we investigate formal verification problems for neural network computations. We focus on various reachability problems, such as: given symbolic specifications of allowed inputs and outputs in the form of Linear Programming instances, does there exist a valid input such that the given network computes a valid output? Does this property hold for all valid inputs? The complexity of the former question was recently investigated by S\"alzer and Lange for nets using the Rectified Linear Unit and the identity function as their activation functions. We complement their achievements by showing that the problem is NP-complete for piecewise linear functions with rational coefficients that are not linear, NP-hard for almost all suitable activation functions including non-linear ones that are continuous on an interval, complete for the Existential Theory of the Reals $\exists \mathbb{R}$ for every non-linear polynomial, and $\exists \mathbb{R}$-hard for the exponential function and various sigmoidal functions. For the completeness results, linking the verification tasks with the theory of Constraint Satisfaction Problems turns out to be helpful.
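To make the reachability question concrete, the toy sketch below enumerates the activation patterns of a one-hidden-layer ReLU net and solves one linear program per pattern to decide whether some valid input produces a valid output. The function names and LP encoding are our own illustration (exponential in the hidden width), not the paper's or the S\"alzer-Lange procedure:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def reachable(W1, b1, W2, b2, A_in, b_in, A_out, b_out):
    """Does some input with A_in @ x <= b_in yield an output with
    A_out @ y <= b_out, where y = W2 @ relu(W1 @ x + b1) + b2?
    Enumerates all 2^h ReLU activation patterns; on each pattern the
    network is affine, so feasibility reduces to a linear program."""
    h = W1.shape[0]
    for pattern in itertools.product([0, 1], repeat=h):
        s = np.diag(pattern)
        c = W2 @ s @ b1 + b2                      # affine offset of the output
        rows, rhs = [A_in], [b_in]
        # Pattern consistency: active units need W1 x + b1 >= 0, inactive <= 0.
        sign = np.where(np.array(pattern) == 1, -1.0, 1.0)
        rows.append(sign[:, None] * W1)
        rhs.append(-sign * b1)
        # Output constraints expressed over the input x.
        rows.append(A_out @ W2 @ s @ W1)
        rhs.append(b_out - A_out @ c)
        res = linprog(np.zeros(W1.shape[1]), A_ub=np.vstack(rows),
                      b_ub=np.concatenate(rhs), bounds=(None, None))
        if res.status == 0:                       # feasible: valid input exists
            return True
    return False

# Sanity check: y = relu(x), x in [0, 1], ask whether y >= 0.5 is reachable.
W1 = np.array([[1.0]]); b1 = np.array([0.0])
W2 = np.array([[1.0]]); b2 = np.array([0.0])
A_in = np.array([[1.0], [-1.0]]); b_in = np.array([1.0, 0.0])   # 0 <= x <= 1
A_out = np.array([[-1.0]]); b_out = np.array([-0.5])            # y >= 0.5
print(reachable(W1, b1, W2, b2, A_in, b_in, A_out, b_out))      # -> True
```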
The Non-dominated Sorting Genetic Algorithm-II (NSGA-II) is one of the most prominent algorithms to solve multi-objective optimization problems. Recently, the first mathematical runtime guarantees were obtained for this algorithm, albeit only for synthetic benchmark problems. In this work, we give the first proven performance guarantees for a classic optimization problem, the NP-complete bi-objective minimum spanning tree problem. More specifically, we show that the NSGA-II with population size $N \ge 4((n-1) w_{\max} + 1)$ computes all extremal points of the Pareto front in an expected number of $O(m^2 n w_{\max} \log(n w_{\max}))$ iterations, where $n$ is the number of vertices, $m$ the number of edges, and $w_{\max}$ the maximum edge weight of the problem instance. This result confirms, via mathematical means, the good performance of the NSGA-II observed empirically. It also shows that mathematical analyses of this algorithm are possible not only for synthetic benchmark problems, but also for more complex combinatorial optimization problems. As a side result, we also obtain a new analysis of the performance of the global SEMO algorithm on the bi-objective minimum spanning tree problem, which improves the previous best result by a factor of $|F|$, the number of extremal points of the Pareto front, a quantity that can be as large as $n w_{\max}$. The main reason for this improvement is our observation that both multi-objective evolutionary algorithms find the different extremal points in parallel rather than sequentially, as assumed in the previous proofs.
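For context, in the bi-objective minimum spanning tree problem every edge carries two weights and one seeks spanning trees that trade off the two total weights. The sketch below evaluates a bit-string edge selection, the kind of representation evolutionary algorithms such as the NSGA-II typically operate on; the encoding is our own illustration, not the paper's:

```python
def bi_mst_objectives(n, edges, selection):
    """Bi-objective spanning tree fitness: edges is a list of
    (u, v, w1, w2); selection is a bit string choosing edges.
    Returns the pair of total weights to be minimized, or None if
    the chosen edges do not connect all n vertices (union-find)."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    total1 = total2 = 0
    components = n
    for bit, (u, v, w1, w2) in zip(selection, edges):
        if not bit:
            continue
        total1 += w1
        total2 += w2
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return (total1, total2) if components == 1 else None

# Triangle with conflicting weights: the two trees below are incomparable.
edges = [(0, 1, 1, 3), (1, 2, 2, 1), (0, 2, 2, 2)]
print(bi_mst_objectives(3, edges, [1, 1, 0]))  # -> (3, 4)
print(bi_mst_objectives(3, edges, [0, 1, 1]))  # -> (4, 3)
```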