This paper considers a downlink satellite communication system where a satellite cluster, i.e., a satellite swarm consisting of one leader and multiple follower satellites, serves a ground terminal. The satellites in the cluster form either a linear or circular formation that moves as a group and cooperatively send their signals using maximum ratio transmission (MRT) precoding. We first conduct a coordinate transformation to effectively capture the relative positions of the satellites in the cluster. Next, we derive an exact expression for the orbital configuration-dependent outage probability under Nakagami fading by using the distribution of the sum of independent Gamma random variables. In addition, we obtain a simpler approximate expression for the outage probability with the help of second-order moment matching. We also analyze the asymptotic behavior in the high signal-to-noise ratio regime and the diversity order of the outage performance. Finally, we verify the analytical results through Monte Carlo simulations. Our analytical results characterize the performance of satellite cluster-based communication systems for specific orbital configurations, which can be used to design reliable satellite clusters in terms of cluster size, formation, and orbits.
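As a minimal numerical illustration of the moment-matching step (not the paper's full orbital model), the sketch below approximates the outage probability of MRT-combined Nakagami-$m$ links by matching the first two moments of a sum of independent Gamma power gains to a single Gamma variable; the shape, spread, SNR, and threshold values are assumed for illustration.

\begin{verbatim}
# Minimal sketch: outage probability of MRT-combined Nakagami-m links,
# Monte Carlo vs. a second-order moment-matched Gamma approximation.
# All numerical values below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

m = np.array([2.0, 2.0, 3.0])        # assumed Nakagami-m shapes per satellite
omega = np.array([1.0, 0.8, 0.6])    # assumed average channel powers per satellite
snr_bar = 10.0                        # assumed average SNR scaling
gamma_th = 2.0                        # assumed SNR threshold

# Per-link power gain |h_i|^2 ~ Gamma(shape=m_i, scale=omega_i/m_i);
# with MRT the post-combining SNR is snr_bar * sum_i |h_i|^2 (simplified model).
shapes, scales = m, omega / m

# Monte Carlo reference
gains = sum(rng.gamma(k, th, size=1_000_000) for k, th in zip(shapes, scales))
p_out_mc = np.mean(snr_bar * gains < gamma_th)

# Second-order moment matching: fit one Gamma(k_eff, th_eff) to the sum,
# using mean = sum k_i*th_i and variance = sum k_i*th_i^2
mean, var = np.sum(shapes * scales), np.sum(shapes * scales**2)
k_eff, th_eff = mean**2 / var, var / mean
p_out_approx = stats.gamma.cdf(gamma_th / snr_bar, a=k_eff, scale=th_eff)

print(f"Monte Carlo outage: {p_out_mc:.4e}, moment-matched approx: {p_out_approx:.4e}")
\end{verbatim}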
In a simple connected graph $G=(V,E)$, a subset of vertices $S \subseteq V$ is a dominating set if every vertex $v \in V\setminus S$ is adjacent to some vertex $x \in S$. A number of real-life problems can be modeled using this problem, which is known to be NP-hard. We formulate the problem as an integer linear program (ILP) and compare its performance with two existing state-of-the-art exact algorithms as well as with the exact implicit enumeration and heuristic algorithms that we propose here. Our exact algorithm was able to find optimal solutions much faster than the ILP and the two existing exact algorithms for medium-density instances. For graphs of considerable size, our heuristic algorithm was much faster than both the ILP and our exact algorithm. It found an optimal solution for more than half of the tested instances and improved the previously best known solutions for almost all the tested benchmark instances. For the instances where the optimum was not found, it gave an average approximation error of $1.18$.
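For reference, a textbook ILP formulation of minimum dominating set (the paper's exact model and solver settings may differ) can be sketched with PuLP as follows; the small example graph is illustrative.

\begin{verbatim}
# Standard ILP for minimum dominating set: minimize sum_v x_v subject to
# x_v + sum_{u in N(v)} x_u >= 1 for every vertex v, with binary x_v.
import pulp

def min_dominating_set(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    prob = pulp.LpProblem("MinDominatingSet", pulp.LpMinimize)
    x = {v: pulp.LpVariable(f"x_{v}", cat="Binary") for v in vertices}
    prob += pulp.lpSum(x.values())                       # minimize |S|
    for v in vertices:                                   # v dominated by itself or a neighbor
        prob += x[v] + pulp.lpSum(x[u] for u in adj[v]) >= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [v for v in vertices if x[v].value() > 0.5]

# Example: a 4-cycle with one pendant vertex; a minimum dominating set has size 2
print(min_dominating_set([1, 2, 3, 4, 5],
                         [(1, 2), (2, 3), (3, 4), (4, 1), (4, 5)]))
\end{verbatim}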
The problem Power Dominating Set (PDS) is motivated by the placement of phasor measurement units to monitor electrical networks. It asks for a minimum set of vertices in a graph that observes all remaining vertices by exhaustively applying two observation rules. Our contribution is twofold. First, we settle the parameterized complexity of PDS by proving it is $W[P]$-complete when parameterized by the solution size; previously, it was only known to be $W[2]$-hard. Our second and main contribution is a new algorithm for PDS that efficiently solves practical instances. The algorithm consists of two complementary parts. The first is a set of reduction rules for PDS that can also be used in conjunction with previously existing algorithms. The second is an algorithm that solves the remaining kernel based on the implicit hitting set approach. Our evaluation on a set of power grid instances from the literature shows that our solver outperforms previous state-of-the-art solvers for PDS by more than one order of magnitude on average. Furthermore, it solves previously unsolved instances of continental scale within a few minutes.
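A minimal sketch of the two observation rules as they are commonly stated in the PDS literature (a vertex rule and a propagation rule), used here only to check whether a candidate set observes the whole graph; the example graph is illustrative.

\begin{verbatim}
# Rule 1: every selected (PMU) vertex observes itself and all its neighbors.
# Rule 2: an observed vertex with exactly one unobserved neighbor observes it.
def observes_all(adj, S):
    observed = set()
    for v in S:                       # Rule 1
        observed.add(v)
        observed.update(adj[v])
    changed = True
    while changed:                    # Rule 2, applied exhaustively
        changed = False
        for v in list(observed):
            unobserved = [u for u in adj[v] if u not in observed]
            if len(unobserved) == 1:
                observed.add(unobserved[0])
                changed = True
    return len(observed) == len(adj)

# Example: the path 0-1-2-3-4 is power dominated by {0} alone
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(observes_all(adj, {0}))  # True
\end{verbatim}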
Reliable probabilistic primality tests are fundamental in public-key cryptography. In adversarial scenarios, a composite with a high probability of passing a specific primality test could be chosen deliberately. In such cases, we need worst-case error estimates for the test. However, in many scenarios the numbers are chosen at random and thus have a significantly smaller error probability, so we are interested in average-case error estimates. In this paper, we establish such bounds for the strong Lucas primality test, for which only worst-case, but no average-case, error bounds are currently available. This allows us to use the test with more confidence. We examine an algorithm that draws odd $k$-bit integers uniformly and independently, runs $t$ independent iterations of the strong Lucas test with randomly chosen parameters, and outputs the first number that passes all $t$ rounds. We obtain numerical upper bounds on the probability of returning a composite. Furthermore, we consider a modified version of this algorithm that excludes integers divisible by small primes, resulting in improved bounds. Additionally, we classify the numbers that contribute most to our estimate.
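The sampling procedure analyzed above can be sketched as follows. Note that SymPy's is_strong_lucas_prp selects the Lucas parameters deterministically (Selfridge's Method A) rather than at random as assumed in the paper, so the $t$ repetitions below are redundant and the snippet is only an illustrative stand-in for the analyzed algorithm.

\begin{verbatim}
# Draw random odd k-bit integers and return the first one that passes t rounds
# of a strong Lucas probable-prime test (here: SymPy's deterministic variant).
import random
from sympy.ntheory.primetest import is_strong_lucas_prp

def first_strong_lucas_probable_prime(k, t, rng=random.Random(0)):
    while True:
        n = rng.randrange(2**(k - 1) + 1, 2**k, 2)   # random odd k-bit integer
        # With a deterministic parameter choice the t repetitions add nothing;
        # they only mirror the structure of the analyzed algorithm.
        if all(is_strong_lucas_prp(n) for _ in range(t)):
            return n

print(first_strong_lucas_probable_prime(k=64, t=3))
\end{verbatim}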
Leveraging trace theory, we investigate the efficient parallelization of direct solvers for large systems of linear equations. Our focus is on a multi-frontal algorithm, and we present a methodology for achieving near-optimal scheduling on modern massively parallel machines. By employing trace theory with Diekert graphs and the Foata normal form, we rigorously validate the effectiveness of our proposed solution. To establish a strong link between the mesh and the elimination tree of the multi-frontal solver, we conduct extensive testing on matrices derived from the finite element method (FEM). Furthermore, we assess the performance of the computations on both GPU and CPU platforms, employing practical implementation strategies.
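As a minimal sketch of how the Foata normal form organizes a solver's task graph into parallel steps (the task names and dependencies below are illustrative, not taken from the paper), each Foata class collects mutually independent tasks whose predecessors have all completed:

\begin{verbatim}
# Group tasks of a dependency (Diekert) graph into Foata classes, i.e., maximal
# fronts of independent tasks that can run as one parallel step each.
from collections import defaultdict

def foata_layers(tasks, deps):
    """deps: list of (a, b) meaning task a must finish before task b."""
    indeg = {t: 0 for t in tasks}
    succ = defaultdict(list)
    for a, b in deps:
        succ[a].append(b)
        indeg[b] += 1
    layers = []
    ready = [t for t in tasks if indeg[t] == 0]
    while ready:
        layers.append(ready)                  # one Foata class = one parallel step
        nxt = []
        for t in ready:
            for u in succ[t]:
                indeg[u] -= 1
                if indeg[u] == 0:
                    nxt.append(u)
        ready = nxt
    return layers

# Elimination-tree-like dependencies: leaves first, root last
print(foata_layers(["A", "B", "C", "D", "root"],
                   [("A", "C"), ("B", "C"), ("D", "root"), ("C", "root")]))
# [['A', 'B', 'D'], ['C'], ['root']]
\end{verbatim}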
Low Earth Orbit (LEO) satellites present a compelling opportunity for the establishment of a global quantum information network. However, satellite-based entanglement distribution has not been fully investigated from a networking perspective. Existing works often do not account for satellite movement over time when distributing entanglement and/or do not permit entanglement distribution along inter-satellite links; we address both shortcomings in this paper. We first define a system model that considers both satellite movement over time and inter-satellite links. We next formulate the optimal entanglement distribution (OED) problem under this system model and show how to convert the OED problem on the dynamic physical network into one on a static logical graph, whose solution can be mapped back to solve the OED problem on the dynamic physical network. We then propose a polynomial-time greedy algorithm for computing satellite-assisted multi-hop entanglement paths. We also design an integer linear programming (ILP)-based algorithm that computes optimal solutions and serves as a baseline for studying the performance of our greedy algorithm. We present evaluation results that demonstrate the advantages of our model and algorithms.
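A minimal sketch of the dynamic-to-static conversion idea, under assumptions: each node is replicated per time slot, copies of the same node in consecutive slots are joined by memory edges (qubits stored across slots), links appear only in the slots where they exist, and an ordinary shortest-path search then runs on the static graph. The toy topology is illustrative, not from the paper.

\begin{verbatim}
# Time-expand a dynamic topology into a static logical graph and route on it.
import networkx as nx

nodes = ["groundA", "sat1", "sat2", "groundB"]
links_per_slot = {                     # dynamic physical links per time slot
    0: [("groundA", "sat1")],
    1: [("sat1", "sat2")],             # inter-satellite link appears in slot 1
    2: [("sat2", "groundB")],
}

G = nx.Graph()
for t, links in links_per_slot.items():
    for u, v in links:
        G.add_edge((u, t), (v, t))               # link usable within slot t
    if t + 1 in links_per_slot:
        for n in nodes:
            G.add_edge((n, t), (n, t + 1))       # store qubits across slots

print(nx.shortest_path(G, ("groundA", 0), ("groundB", 2)))
\end{verbatim}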
Rate splitting multiple access (RSMA) and non-orthogonal multiple access (NOMA) are key multiple access techniques for enabling massive connectivity. However, it is unclear whether RSMA consistently outperforms NOMA in terms of system sum rate, user fairness, and the convergence and feasibility of the resource allocation solutions. This paper investigates the weighted sum-rate maximization problem to optimize power and rate allocations in a hybrid RSMA-NOMA network. In the hybrid scheme, the base station (BS) optimally splits the maximum power budget between the two schemes and operates NOMA and RSMA on two orthogonal channels, allowing users to simultaneously receive signals from both RSMA and NOMA. Based on the successive convex approximation (SCA) approach, we jointly optimize the power allocation of users in NOMA and RSMA, the rate allocation of users in RSMA, and the power budget allocation between NOMA and RSMA, subject to successive interference cancellation (SIC) constraints. Numerical results demonstrate the trade-offs that hybrid RSMA-NOMA access offers in terms of system sum rate, fairness, convergence, and feasibility of the solutions.
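Schematically, and with assumed notation rather than the paper's exact formulation, the optimization has the following structure, where $\beta \in [0,1]$ splits the power budget $P_{\max}$ between the two orthogonal channels, $p_k^{\mathrm{N}}$ and $p_k^{\mathrm{R}}$ are user $k$'s NOMA and RSMA private powers, $p_c$ is the RSMA common-stream power, $c_k$ is user $k$'s share of the common rate $R_c$, and $R_k^{\mathrm{N}}$, $R_k^{\mathrm{R}}$ are the achievable private rates:

\begin{align*}
\max_{\beta,\;\{p_k^{\mathrm{N}},\,p_k^{\mathrm{R}},\,c_k\},\;p_c} \quad & \sum_{k} w_k \left( R_k^{\mathrm{N}} + R_k^{\mathrm{R}} + c_k \right) \\
\text{s.t.} \quad & \sum_{k} p_k^{\mathrm{N}} \le \beta P_{\max}, \qquad p_c + \sum_{k} p_k^{\mathrm{R}} \le (1-\beta) P_{\max}, \\
& \sum_{k} c_k \le R_c, \qquad \text{SIC decoding-order constraints},
\end{align*}

with the non-convex rate expressions handled iteratively via SCA.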
This paper develops an approximation to the (effective) $p$-resistance and applies it to multi-class clustering. Spectral methods based on the graph Laplacian and its generalization, the graph $p$-Laplacian, have been a backbone of non-Euclidean clustering techniques. The advantage of the $p$-Laplacian is that the parameter $p$ induces a controllable bias on cluster structure. The drawback of $p$-Laplacian eigenvector-based methods is that the third and higher eigenvectors are difficult to compute. Thus, we are instead motivated to use the $p$-resistance induced by the $p$-Laplacian for clustering. For the $p$-resistance, small $p$ biases towards clusters with high internal connectivity, while large $p$ biases towards clusters of small ``extent,'' that is, a preference for smaller shortest-path distances between vertices in the cluster. However, the $p$-resistance is expensive to compute. We overcome this by developing an approximation to the $p$-resistance. We prove upper and lower bounds on this approximation and observe that it is exact when the graph is a tree. We also provide theoretical justification for the use of the $p$-resistance for clustering. Finally, we provide experiments comparing our approximated $p$-resistance clustering to other $p$-Laplacian-based methods.
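For intuition, the classical $p=2$ case reduces to the standard effective resistance, computable from the Laplacian pseudoinverse as $r(u,v) = (e_u - e_v)^{\top} L^{+} (e_u - e_v)$; the sketch below covers only this special case, not the paper's approximation for general $p$.

\begin{verbatim}
# Effective resistance (p = 2 only) via the Moore-Penrose pseudoinverse of the
# graph Laplacian.
import numpy as np

def effective_resistance(adjacency, u, v):
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    L_pinv = np.linalg.pinv(L)              # Moore-Penrose pseudoinverse
    e = np.zeros(len(A))
    e[u], e[v] = 1.0, -1.0
    return float(e @ L_pinv @ e)

# Path graph 0-1-2: resistance between the endpoints is 2 (two unit edges in series)
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(effective_resistance(A, 0, 2))  # ~2.0
\end{verbatim}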
We consider the problem of estimating a scalar target parameter in the presence of nuisance parameters. Replacing the unknown nuisance parameter with a nonparametric estimator, e.g., a machine learning (ML) model, is convenient but has been shown to be inefficient due to large biases. Modern methods, such as targeted minimum loss-based estimation (TMLE) and double machine learning (DML), achieve optimal performance under flexible assumptions by harnessing ML estimates while mitigating the plug-in bias. To avoid a sub-optimal bias-variance trade-off, these methods perform a debiasing step on the plug-in pre-estimate. Existing debiasing methods require the influence function (IF) of the target parameter as input. However, deriving the IF requires specialized expertise and thus hinders the adoption of these methods by practitioners. We propose a novel way to debias plug-in estimators that (i) is efficient, (ii) does not require the IF to be implemented, and (iii) is computationally tractable, and can therefore be readily adapted to new estimation problems and automated without analytic derivations by the user. We build on the TMLE framework and update a plug-in estimate with a regularized likelihood maximization step over a nonparametric model constructed with a reproducing kernel Hilbert space (RKHS), producing an efficient plug-in estimate for any regular target parameter. Our method thus offers the efficiency of competing debiasing techniques without sacrificing the utility of the plug-in approach.
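A heavily simplified sketch of the plug-in ingredient only, on synthetic data: the nuisance outcome regression is fit with an RKHS model (kernel ridge regression) and plugged into the toy target functional $\psi = E[\,E[Y \mid X, A=1]\,]$. The paper's actual contribution, the IF-free debiasing/TMLE update of such a plug-in, is not reproduced here.

\begin{verbatim}
# Plug-in estimation of psi = E[ E[Y | X, A=1] ] with an RKHS nuisance model.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 1))
A = rng.binomial(1, 0.5, size=n)
Y = 1.0 + 2.0 * X[:, 0] + 0.5 * A + rng.normal(scale=0.5, size=n)  # toy data

# RKHS working model for E[Y | X, A]
features = np.column_stack([X[:, 0], A])
outcome_model = KernelRidge(alpha=1e-2, kernel="rbf", gamma=0.5).fit(features, Y)

# Plug-in estimate: average the fitted regression with A forced to 1
features_a1 = np.column_stack([X[:, 0], np.ones(n)])
psi_plugin = outcome_model.predict(features_a1).mean()
print(f"plug-in estimate of psi: {psi_plugin:.3f} (true value is 1.5)")
\end{verbatim}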
We present a simple quantum interactive proof (QIP) protocol using the quantum state teleportation (QST) and quantum energy teleportation (QET) protocols. QET is a technique that allows a distant receiver to extract local energy by local operations and classical communication (LOCC), using the energy injected by the supplier as collateral. QET works for any local Hamiltonian with entanglement and, for our study, it is important that obtaining the ground state of a generic local Hamiltonian is quantum Merlin Arthur (QMA)-hard. We clarify the key motivations behind employing QET for this purpose. Firstly, when the prover possesses the correct state and executes the appropriate operations, the verifier can validate the presence of negative energy with high probability (Completeness). Failure to select the appropriate operators or an incorrect state renders the verifier incapable of observing negative energy (Soundness). Importantly, the verifier observes only a single qubit from the prover's transmitted state, while remaining oblivious to the prover's Hamiltonian and state (Zero-knowledge). Furthermore, the analysis is extended to distributed quantum interactive proofs, where we propose multiple solutions for the verification of each player's measurement. In the most general case, our protocol belongs to QIP(3)=PSPACE, hence it provides a secure quantum authentication scheme that can be implemented in small quantum communication devices. It is straightforward to extend our protocol to quantum multi-prover interactive proof (QMIP) systems, where the complexity class is expected to be more powerful (PSPACE$\subset$QMIP=NEXPTIME). In our case, all provers share the ground-state entanglement, hence the protocol should belong to the more powerful class QMIP$^*$.
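For completeness, a pure-state simulation of only the standard QST subroutine used as a building block (the QET step and the interactive-proof checks are not reproduced): qubit 0 holds the unknown state, qubits 1 and 2 share a Bell pair, and the measurement outcome shown is one representative post-selection.

\begin{verbatim}
# Standard quantum state teleportation simulated with numpy, qubit order (0,1,2).
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Unknown state |psi> on qubit 0, Bell pair |Phi+> on qubits 1 and 2
psi = np.array([0.6, 0.8])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)

# Sender: CNOT (control 0, target 1), then H on qubit 0
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
CNOT01 = kron(P0, I2, I2) + kron(P1, X, I2)
state = kron(H, I2, I2) @ CNOT01 @ state

# Measure qubits 0 and 1 (post-select outcome m0, m1; any outcome works)
m0, m1 = 1, 0
proj = kron(P1 if m0 else P0, P1 if m1 else P0, I2)
state = proj @ state
state = state / np.linalg.norm(state)

# Receiver applies Z^{m0} X^{m1} to qubit 2 and recovers |psi>
correction = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
state = kron(I2, I2, correction) @ state

recovered = state.reshape(2, 2, 2)[m0, m1, :]   # amplitudes of qubit 2
print(np.allclose(recovered, psi))  # True
\end{verbatim}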
This paper proposes the transition-net, a robust transition strategy that expands the versatility of robot locomotion in real-world settings. To this end, we start by distributing the complexity of different gaits into dedicated locomotion policies applicable to real-world robots. Next, we expand the versatility of the robot by unifying the policies, with robust transitions between them, into a single coherent meta-controller by examining the latent state representations. Our approach enables the robot to iteratively expand its skill repertoire and robustly transition between any policy pair in a library. In our framework, adding new skills does not alter the previously learned skills, and training a locomotion policy takes less than an hour on a single consumer GPU. Our approach is effective in the real world and achieves a 19% higher average success rate than existing approaches for the most challenging transition pairs in our experiments.
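A purely illustrative, hypothetical sketch of the meta-controller idea, in which all class and method names are assumptions rather than the paper's implementation: a library of per-gait policies is wrapped by a controller that switches to a requested gait only when a learned transition model scores the current latent state as a safe switching point.

\begin{verbatim}
# Hypothetical sketch only; not the paper's architecture or API.
class StubGaitPolicy:
    """Stand-in for a learned per-gait policy with a latent-state encoder."""
    def __init__(self, name):
        self.name = name
    def encode(self, observation):
        return observation                      # pretend the observation is the latent state
    def act(self, observation):
        return f"{self.name}-action"

class StubTransitionNet:
    """Stand-in for the learned transition model."""
    def score(self, latent, src_gait, dst_gait):
        return 0.9                              # pretend every state is a safe switching point

class MetaController:
    def __init__(self, policies, transition_net, threshold=0.5):
        self.policies = policies                # gait name -> policy
        self.transition_net = transition_net
        self.threshold = threshold
        self.active = next(iter(policies))
    def step(self, observation, requested_gait):
        latent = self.policies[self.active].encode(observation)
        if requested_gait != self.active and requested_gait in self.policies:
            if self.transition_net.score(latent, self.active, requested_gait) > self.threshold:
                self.active = requested_gait    # switch only at states predicted to succeed
        return self.policies[self.active].act(observation)

controller = MetaController({"trot": StubGaitPolicy("trot"), "pace": StubGaitPolicy("pace")},
                            StubTransitionNet())
print(controller.step(observation=[0.0, 0.1], requested_gait="pace"))  # 'pace-action'
\end{verbatim}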