We study the distributions of waiting times in variations of the negative binomial distribution of order $k$. One variation applies a different enumeration scheme to the runs of successes. Another considers binary trials for which the probability of ones varies geometrically. We investigate the exact distribution of the waiting time for the $r$-th occurrence of a success run of a specified length (non-overlapping, overlapping, at least, exactly, $\ell$-overlapping) in a $q$-sequence of binary trials. The main theorems give the Type $1$, $2$, $3$ and $4$ $q$-negative binomial distributions of order $k$, as well as the $q$-negative binomial distribution of order $k$ in the $\ell$-overlapping case. Throughout, we consider a sequence of independent binary (zero-one) trials with not necessarily identical distributions, in which the probability of ones varies according to a geometric rule. Exact formulae for the distributions are obtained by means of enumerative combinatorics.
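As a concrete illustration of the waiting-time quantity studied above, the following minimal Monte Carlo sketch in Python simulates the trial index at which the $r$-th non-overlapping success run of length $k$ is completed; the specific geometric rule $p_i = \theta q^{i-1}$ for the probability of a one at trial $i$ is an assumption made for illustration and may differ from the paper's exact parametrisation.

    import math
    import random

    def waiting_time(theta, q, k, r, rng, max_trials=10**6):
        # Trial index at which the r-th non-overlapping success run of length k
        # is completed, in independent binary trials where the probability of a
        # one at trial i is assumed to be theta * q**(i - 1) (an illustrative
        # geometric rule).  Returns math.inf if the r-th run does not occur
        # within max_trials trials.
        runs, current = 0, 0
        for i in range(1, max_trials + 1):
            if rng.random() < theta * q ** (i - 1):
                current += 1
                if current == k:       # a success run of length k is completed
                    runs += 1
                    current = 0        # non-overlapping counting: restart
                    if runs == r:
                        return i
            else:
                current = 0
        return math.inf

    # crude empirical mean of the waiting time for theta = 0.9, q = 0.99, k = 2, r = 3
    samples = [waiting_time(0.9, 0.99, 2, 3, random.Random(s)) for s in range(2000)]
    finite = [s for s in samples if s != math.inf]
    print(sum(finite) / len(finite))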
We give a poly-time algorithm for the $k$-edge-connected spanning subgraph ($k$-ECSS) problem that returns a solution of cost no greater than the cheapest $(k+10)$-ECSS on the same graph. Our approach enhances the iterative relaxation framework with a new ingredient, which we call ghost values, that allows for high sparsity in intermediate problems. Our guarantees improve upon the best-known approximation factor of $2$ for $k$-ECSS whenever the optimal value of $(k+10)$-ECSS is close to that of $k$-ECSS. This is a property that holds for the closely related problem $k$-edge-connected spanning multi-subgraph ($k$-ECSM), which is identical to $k$-ECSS except edges can be selected multiple times at the same cost. As a consequence, we obtain a $\left(1+O\left(\frac{1}{k}\right)\right)$-approximation algorithm for $k$-ECSM, which resolves a conjecture of Pritchard and improves upon a recent $\left(1+O\left(\frac{1}{\sqrt{k}}\right)\right)$-approximation algorithm of Karlin, Klein, Oveis Gharan, and Zhang. Moreover, we present a matching lower bound for $k$-ECSM, showing that our approximation ratio is tight up to the constant factor in $O\left(\frac{1}{k}\right)$, unless $P=NP$.
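To make the problem statement concrete, here is a small brute-force Python sketch of $k$-ECSS on a toy instance. It only illustrates the definition, not the ghost-value iterative-relaxation algorithm of the paper, and it assumes networkx is available for computing edge connectivity.

    from itertools import combinations
    import networkx as nx

    def brute_force_k_ecss(nodes, weighted_edges, k):
        # Cheapest k-edge-connected spanning subgraph by exhaustive search over
        # edge subsets.  This only illustrates the problem definition; the
        # paper's ghost-value iterative-relaxation algorithm is polynomial-time
        # and entirely different.
        best_cost, best_subset = float("inf"), None
        for m in range(len(weighted_edges) + 1):
            for subset in combinations(weighted_edges, m):
                G = nx.Graph()
                G.add_nodes_from(nodes)
                G.add_edges_from((u, v) for u, v, _ in subset)
                if nx.edge_connectivity(G) >= k:
                    cost = sum(w for _, _, w in subset)
                    if cost < best_cost:
                        best_cost, best_subset = cost, subset
        return best_cost, best_subset

    # a 4-cycle plus one heavier diagonal; the cheapest 2-ECSS is the 4-cycle
    edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1), (0, 2, 3)]
    print(brute_force_k_ecss(range(4), edges, k=2))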
The adaptive probability $P_{\text{\tiny{adp}}}$ formalized in Adapt-$P$ is developed from the remaining number of SNs $\zeta$ and the optimal clustering $\kappa_{\text{\tiny{max}}}$, yet $P_{\text{\tiny{adp}}}$ does not incorporate the probabilistic ratios of the energy and distance factors in the network. Furthermore, Adapt-$P$ does not properly localize cluster-heads in the first round because of its reliance on the distance computations defined in LEACH, which might result in an uneven distribution of cluster-heads over the WSN area and hence, in some rounds, in inefficient energy consumption. This paper combines \nolinebreak{$k$\small{-}means\small{++}} and Adapt-$P$ to propose the \nolinebreak{$P_{\text{c}} \kappa_{\text{\tiny{max}}}$\small{-}means\small{++}} clustering algorithm, which better manages the distribution of cluster-heads and yields enhanced performance. The algorithm employs an optimized cluster-head election probability $P_\text{c}$ developed from the energy-based $P_{\eta(j,i)}$ and distance-based $P\!\!\!_{\psi(j,i)}$ quality probabilities along with the adaptive probability $P_{\text{\tiny{adp}}}$, utilizing the energy $\varepsilon$ and distance-optimality $d\!_{\text{\tiny{opt}}}$ factors. Furthermore, the algorithm utilizes the optimal clustering $\kappa_{\text{\tiny{max}}}$ derived in Adapt-$P$ to perform adaptive clustering through \nolinebreak{$\kappa_{\text{\tiny{max}}}$\small{-}means\small{++}}. The proposed \nolinebreak{$P_{\text{c}} \kappa_{\text{\tiny{max}}}${\small{-}}means{\small{++}}} algorithm is compared with the energy-based \nolinebreak{$P_\eta \varepsilon \kappa_{\text{\tiny{max}}}${\small{-}}means{\small{++}}} and distance-based \nolinebreak{$P_\psi d_{\text{\tiny{opt}}} \kappa_{\text{\tiny{max}}}${\small{-}}means{\small{++}}} algorithms, and shows improved performance in terms of residual energy and the stability period of the network.
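For reference, the $\kappa_{\text{\tiny{max}}}$-means++ step builds on standard $k$-means++ seeding. The Python sketch below shows only that seeding rule over sensor-node coordinates; it does not reproduce the paper's election probability $P_\text{c}$ or its energy and distance quality probabilities, and the node layout is synthetic.

    import random

    def kmeans_pp_seeds(points, kappa, rng):
        # Standard k-means++ seeding: after the first centre, each new centre is
        # drawn with probability proportional to its squared distance from the
        # nearest centre already chosen.  The centres play the role of initial
        # cluster-head positions; the paper's election probability P_c is not
        # reproduced here.
        centres = [rng.choice(points)]
        while len(centres) < kappa:
            d2 = [min((x - cx) ** 2 + (y - cy) ** 2 for cx, cy in centres)
                  for x, y in points]
            centres.append(rng.choices(points, weights=d2, k=1)[0])
        return centres

    rng = random.Random(7)
    nodes = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(50)]
    print(kmeans_pp_seeds(nodes, kappa=5, rng=rng))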
Higher Type Arithmetic (HA$^\omega$) is a first-order many-sorted theory. It is a conservative extension of Heyting Arithmetic obtained by extending the syntax of terms to all of System T: the objects of interest here are the functionals of higher types. While equality between natural numbers is specified by the Peano axioms, how can equality between functionals be defined? From this question, different versions of HA$^\omega$ arise, such as an extensional version (E-HA$^\omega$) and an intensional version (I-HA$^\omega$). In this work, we show how the study of partial equivalence relations leads us to design a translation by parametricity from E-HA$^\omega$ to HA$^\omega$.
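For orientation, the partial equivalence relations in question are typically the hereditarily extensional ones, defined by recursion on types; a standard formulation (the paper's translation may differ in detail) is
\[
x \approx_{\mathbb{N}} y \;:\equiv\; x =_{\mathbb{N}} y,
\qquad
f \approx_{\sigma\to\tau} g \;:\equiv\; \forall x\,\forall y\,\bigl(x \approx_{\sigma} y \;\to\; f x \approx_{\tau} g y\bigr),
\]
so that at higher types $\approx$ is symmetric and transitive but not necessarily reflexive, which is what makes it a partial equivalence relation.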
Because $\Sigma^p_2$- and $\Sigma^p_3$-hardness proofs are usually tedious and difficult, not many complete problems for these classes are known. This is especially true in the areas of min-max regret robust optimization, network interdiction, most vital vertex problems, blocker problems, and two-stage adjustable robust optimization. Even though these areas have been well-researched for over two decades, and one would naturally expect many (if not most) of the problems occurring in them to be complete for the above classes, almost no completeness results exist in the literature. We address this lack of knowledge by introducing over 70 new $\Sigma^p_2$-complete and $\Sigma^p_3$-complete problems. We achieve this by proving a new meta-theorem, which shows $\Sigma^p_2$- and $\Sigma^p_3$-completeness simultaneously for a large class of problems. The majority of earlier publications on $\Sigma^p_2$- and $\Sigma^p_3$-completeness in these areas are special cases of our meta-theorem. Our precise result is the following: we introduce a large list of problems to which the meta-theorem applies (including clique, vertex cover, knapsack, TSP, facility location and many more). For every problem on this list, we show: the interdiction/minimum cost blocker/most vital nodes problem (with element costs) is $\Sigma^p_2$-complete; the min-max regret problem with interval uncertainty is $\Sigma^p_2$-complete; and the two-stage adjustable robust optimization problem with discrete budgeted uncertainty is $\Sigma^p_3$-complete. In summary, our work reveals the interesting insight that a large number of NP-complete problems have the property that their min-max versions are 'automatically' $\Sigma^p_2$-complete.
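As an example of the nested quantifier structure behind these completeness results, the min-max regret problem with interval uncertainty (for a feasible set $X$ and element-wise cost intervals) can be written as
\[
\min_{x \in X}\; \max_{c \in U} \Bigl( c^{\top} x \;-\; \min_{x' \in X} c^{\top} x' \Bigr),
\qquad
U = \prod_{e} \bigl[\,\underline{c}_e,\ \overline{c}_e\,\bigr],
\]
whose nested min-max structure over an NP-complete base problem underlies the $\Sigma^p_2$ behaviour.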
We introduce a relaxation for homomorphism problems that combines semidefinite programming with linear Diophantine equations, and propose a framework for the analysis of its power based on the spectral theory of association schemes. We use this framework to establish an unconditional lower bound against the semidefinite programming + linear equations model, by showing that the relaxation does not solve the approximate graph homomorphism problem and thus, in particular, the approximate graph colouring problem.
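For concreteness, the promise problems in question can be stated as follows, where $G \to H$ means that $G$ admits a homomorphism to $H$:
\[
\text{approximate graph colouring } (k \le \ell):\quad \text{given } G,\ \text{accept if } G \to K_{k},\ \text{reject if } G \not\to K_{\ell},
\]
with no requirement on instances that satisfy neither condition; approximate graph homomorphism replaces the pair $(K_k, K_\ell)$ by an arbitrary pair of graphs $H_1, H_2$ with $H_1 \to H_2$.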
We present polynomial-time SDP-based algorithms for the following problem: for fixed $k \leq \ell$, given a real number $\epsilon>0$ and a graph $G$ that admits a $k$-colouring with a $\rho$-fraction of the edges coloured properly, the algorithm returns an $\ell$-colouring of $G$ with an $(\alpha \rho - \epsilon)$-fraction of the edges coloured properly, in time polynomial in the size of $G$ and in $1 / \epsilon$. Our algorithms are based on the algorithms of Frieze and Jerrum [Algorithmica'97] and of Karger, Motwani and Sudan [JACM'98]. For $k = 2, \ell = 3$, our algorithm achieves an approximation ratio $\alpha = 1$, which is the best possible. When $k$ is fixed and $\ell$ grows large, our algorithm achieves an approximation ratio of $\alpha = 1 - o(1 / \ell)$. When $k, \ell$ are both large, our algorithm achieves an approximation ratio of $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell) - O(1 / k^2)$; if we fix $d = \ell - k$ and allow $k, \ell$ to grow large, this is $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell)$. By extending the results of Khot, Kindler, Mossel and O'Donnell [SICOMP'07] to the promise setting, we show that for large $k$ and $\ell$, assuming Khot's Unique Games Conjecture (UGC), it is NP-hard to achieve an approximation ratio $\alpha$ greater than $1 - 1 / \ell + 2 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided that $\ell$ is bounded by a function that is $o(\exp(\sqrt[3]{k}))$. For the case where $d = \ell - k$ is fixed, this bound matches the performance of our algorithm up to $o(\ln \ell / k \ell)$. Furthermore, by extending the results of Guruswami and Sinop [ToC'13] to the promise setting, we prove that it is NP-hard to achieve an approximation ratio greater than $1 - 1 / \ell + 8 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided again that $\ell$ is bounded as before (but this time without assuming the UGC).
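The flavour of the rounding step in such SDP-based algorithms can be seen in the following Python sketch (assuming numpy): given unit vectors for the vertices, which an SDP solver would produce and which are simply random here, each vertex takes the colour of the random Gaussian vector closest to it, in the spirit of the Frieze-Jerrum rounding. This is only an illustration, not the paper's full algorithm or analysis.

    import numpy as np

    def round_vectors_to_colouring(vectors, ell, rng):
        # Frieze-Jerrum-style rounding: draw ell independent Gaussian vectors
        # and give each vertex the colour whose Gaussian vector has the largest
        # inner product with that vertex's SDP vector.  The SDP that would
        # produce the input unit vectors is not solved here.
        n, dim = vectors.shape
        g = rng.standard_normal((ell, dim))
        return np.argmax(vectors @ g.T, axis=1)        # colour index per vertex

    def properly_coloured_fraction(edges, colouring):
        return sum(colouring[u] != colouring[v] for u, v in edges) / len(edges)

    # toy demo on a 5-cycle with random unit "SDP" vectors (illustration only)
    rng = np.random.default_rng(1)
    vectors = rng.standard_normal((5, 3))
    vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
    edges = [(i, (i + 1) % 5) for i in range(5)]
    colouring = round_vectors_to_colouring(vectors, ell=3, rng=rng)
    print(properly_coloured_fraction(edges, colouring))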
We study the high-order local discontinuous Galerkin (LDG) method for the $p$-Laplace equation. We reformulate our spatial discretization as an equivalent convex minimization problem and use a preconditioned gradient descent method as the nonlinear solver. For the first time, a weighted preconditioner that provides $hk$-independent convergence is applied in the LDG setting. For polynomial order $k \geqslant 1$, we rigorously establish the solvability of our scheme and provide a priori error estimates in a mesh-dependent energy norm. Our error estimates are measured in a distance that differs from, and is not equivalent to, the distances used in existing LDG results. For arbitrarily high-order polynomials, under the assumption that the exact solution is sufficiently regular, the error estimates demonstrate the potential for high-order accuracy. Our numerical results exhibit the convergence speed expected from the preconditioner, and we observe convergence rates in the gradient variables that align with those of linear LDG, as well as optimal rates in the primal variable when $1 < p \leqslant 2$.
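To illustrate the solver strategy (convex minimization plus a preconditioned gradient step), the following Python sketch minimizes a one-dimensional finite-difference $p$-Dirichlet energy using the $p=2$ stiffness matrix as preconditioner; this is a simplified stand-in for the LDG discretization and for the weighted, $hk$-independent preconditioner of the paper, with illustrative parameter choices.

    import numpy as np

    def p_dirichlet_solve(n, p, f, steps=800, tau=0.25):
        # Minimise  E(u) = (h/p) * sum_cells |du|^p  -  h * sum_i f_i * u_i
        # on a uniform grid with u(0) = u(1) = 0, by gradient descent
        # preconditioned with the p = 2 stiffness matrix.  A 1D finite-
        # difference stand-in for the paper's LDG scheme and preconditioner.
        h = 1.0 / n
        u = np.zeros(n - 1)                                    # interior unknowns
        A = (np.diag(2.0 * np.ones(n - 1))
             - np.diag(np.ones(n - 2), 1)
             - np.diag(np.ones(n - 2), -1)) / h                # p = 2 Hessian
        for _ in range(steps):
            du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h   # cell gradients
            flux = np.abs(du) ** (p - 2) * du                     # |u'|^{p-2} u'
            grad = -np.diff(flux) - h * f                         # dE/du_i
            u -= tau * np.linalg.solve(A, grad)                   # preconditioned step
        return u

    n, p = 64, 3.0
    u = p_dirichlet_solve(n, p, f=np.ones(n - 1))
    # the exact solution of -(|u'| u')' = 1 with zero boundary values has
    # maximum (2/3) * (1/2)**1.5, approximately 0.236
    print(float(u.max()))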
Variational autoencoder (VAE) architectures offer a route to developing reduced-order models (ROMs) for chaotic fluid flows. We propose a method for learning compact and near-orthogonal ROMs using a combination of a $\beta$-VAE and a transformer, tested on numerical data from a two-dimensional viscous flow in both periodic and chaotic regimes. The $\beta$-VAE is trained to learn a compact latent representation of the flow velocity, and the transformer is trained to predict the temporal dynamics in latent space. Using the $\beta$-VAE to learn disentangled representations in latent space, we obtain a more interpretable flow model with features that resemble those observed in the proper orthogonal decomposition, but with a more efficient representation. Using Poincar\'e maps, we show that our method captures the underlying dynamics of the flow, outperforming other prediction models. The proposed method has potential applications in other fields such as weather forecasting, structural dynamics or biomedical engineering.
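The core training objective of a $\beta$-VAE is the reconstruction error plus a $\beta$-weighted KL term; the Python sketch below (assuming PyTorch is available) is a schematic stand-in for the paper's architecture and flow-velocity data, with hypothetical layer sizes.

    import torch
    import torch.nn as nn

    class BetaVAE(nn.Module):
        # Minimal beta-VAE: the encoder outputs (mu, log_var) of a Gaussian
        # latent distribution; layer sizes here are illustrative only.
        def __init__(self, n_in=256, n_latent=10):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                                     nn.Linear(128, 2 * n_latent))
            self.dec = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_in))

        def forward(self, x):
            mu, log_var = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterisation
            return self.dec(z), mu, log_var

    def beta_vae_loss(x, x_hat, mu, log_var, beta=4.0):
        # reconstruction error plus beta-weighted KL divergence to N(0, I);
        # beta > 1 is what encourages disentangled, near-orthogonal latents
        rec = ((x - x_hat) ** 2).sum(dim=-1).mean()
        kl = (-0.5 * (1 + log_var - mu ** 2 - log_var.exp()).sum(dim=-1)).mean()
        return rec + beta * kl

    model = BetaVAE()
    x = torch.randn(32, 256)          # stand-in for flow-velocity snapshots
    x_hat, mu, log_var = model(x)
    print(beta_vae_loss(x, x_hat, mu, log_var).item())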
This paper develops a general asymptotic theory of local polynomial (LP) regression for spatial data observed at irregularly spaced locations in a sampling region $R_n \subset \mathbb{R}^d$. We adopt a stochastic sampling design that can generate irregularly spaced sampling sites in a flexible manner, including both the pure increasing domain and the mixed increasing domain frameworks. We first introduce a nonparametric regression model for spatial data defined on $\mathbb{R}^d$ and then establish the asymptotic normality of LP estimators with general order $p \geq 1$. We also propose methods for constructing confidence intervals and establish uniform convergence rates of LP estimators. Our dependence structure conditions on the underlying processes cover a wide class of random fields such as L\'evy-driven continuous autoregressive moving average random fields. As an application of our main results, we discuss a two-sample testing problem for mean functions and their partial derivatives.
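For reference, the local polynomial estimator of order $p$ at a point $x$ solves the standard kernel-weighted least squares problem (stated here in multi-index form, with a generic kernel $K$ and bandwidth $h$):
\[
\widehat{\beta}(x) \;=\; \operatorname*{arg\,min}_{\beta}\; \sum_{i=1}^{n} \Bigl( Y_i - \sum_{0 \le |\nu| \le p} \beta_{\nu}\, (X_i - x)^{\nu} \Bigr)^{2} K\!\Bigl(\frac{X_i - x}{h}\Bigr),
\]
where $\nu$ ranges over multi-indices, so that $\widehat{\beta}_{\nu}(x)$ estimates the corresponding partial derivative of the mean function up to the factor $\nu!$.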
When the eigenvalues of the coefficient matrix for a linear scalar ordinary differential equation are of large magnitude, its solutions exhibit complicated behaviour, such as high-frequency oscillations, rapid growth or rapid decay. The cost of representing such solutions using standard techniques grows with the magnitudes of the eigenvalues. As a consequence, the running times of most solvers for ordinary differential equations also grow with these eigenvalues. However, a large class of scalar ordinary differential equations with slowly-varying coefficients admit slowly-varying phase functions that can be represented at a cost which is bounded independent of the magnitudes of the eigenvalues of the corresponding coefficient matrix. Here, we introduce a numerical algorithm for constructing slowly-varying phase functions which represent the solutions of a linear scalar ordinary differential equation. Our method's running time depends on the complexity of the equation's coefficients, but is bounded independent of the magnitudes of the equation's eigenvalues. Once the phase functions have been constructed, essentially any reasonable initial or boundary value problem for the scalar equation can be easily solved. We present the results of numerical experiments showing that, despite its greater generality, our algorithm is competitive with state-of-the-art methods for solving highly-oscillatory second order differential equations. We also compare our method with Magnus-type exponential integrators and find that our approach is orders of magnitude faster in the high-frequency regime.
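In the second-order case $y'' + q(t)\,y = 0$, for example, a phase function $\alpha$ yields the basis of solutions $\cos(\alpha(t))/\sqrt{\alpha'(t)}$ and $\sin(\alpha(t))/\sqrt{\alpha'(t)}$ provided $\alpha$ satisfies Kummer's equation,
\[
q(t) \;=\; \alpha'(t)^{2} \;-\; \frac{3}{4}\left(\frac{\alpha''(t)}{\alpha'(t)}\right)^{2} \;+\; \frac{1}{2}\,\frac{\alpha'''(t)}{\alpha'(t)},
\]
and when $q$ is large, positive and slowly varying, $\alpha' \approx \sqrt{q}$ is itself slowly varying even though the solutions oscillate rapidly; this is the phenomenon the algorithm exploits.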