We analyse an algorithm solving stochastic mean-payoff games, combining the ideas of relative value iteration and of Krasnoselskii-Mann damping. We derive parameterized complexity bounds for several classes of games satisfying irreducibility conditions. In particular, we show that an $\epsilon$-approximation of the value of an irreducible concurrent stochastic game can be computed in a number of iterations in $O(|\log\epsilon|)$, where the constant in the $O(\cdot)$ is explicit, depending on the smallest non-zero transition probabilities. This should be compared with a bound in $O(\epsilon^{-1}|\log\epsilon|)$ obtained by Chatterjee and Ibsen-Jensen (ICALP 2014) for the same class of games, and with an $O(\epsilon^{-1})$ bound by Allamigeon, Gaubert, Katz and Skomra (ICALP 2022) for turn-based games. We also establish parameterized complexity bounds for entropy games, a class of matrix multiplication games introduced by Asarin, Cervelle, Degorre, Dima, Horn and Kozyakin. We derive these results by methods of variational analysis, establishing contraction properties of the relative Krasnoselskii-Mann iteration with respect to Hilbert's semi-norm.
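To make the scheme concrete, here is a minimal sketch of the relative Krasnoselskii-Mann iteration for a generic Shapley operator $T$, assumed nonexpansive in the sup-norm; the step size and stopping rule are illustrative choices, not the tuned constants from the analysis.

```python
import numpy as np

def hilbert_seminorm(x):
    # Hilbert's semi-norm (the "span" of the vector): max(x) - min(x)
    return np.max(x) - np.min(x)

def relative_km_iteration(T, x0, alpha=0.5, eps=1e-8, max_iter=100_000):
    # Relative value iteration with Krasnoselskii-Mann damping:
    # average the current iterate with T(x), then renormalize so the
    # first coordinate is 0, quotienting out the additive constant.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = (1 - alpha) * x + alpha * T(x)   # KM damping step
        y = y - y[0]                         # relative normalization
        if hilbert_seminorm(y - x) < eps:    # contraction in the semi-norm
            return y
        x = y
    return x
```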
In this paper we study the threshold model of \emph{geometric inhomogeneous random graphs} (GIRGs); a generative random graph model that is closely related to \emph{hyperbolic random graphs} (HRGs). These models have been observed to capture complex real-world networks well with respect to their structural and algorithmic properties. Following comprehensive studies of their \emph{connectivity}, i.e., which parts of the graphs are connected, we have a good understanding of the circumstances under which a \emph{giant} component (containing a constant fraction of the graph) emerges. While previous results are rather technical and challenging to work with, the goal of this paper is to provide more accessible proofs. At the same time, we significantly improve the previously known probabilistic guarantees, showing that GIRGs contain a giant component with probability $1 - \exp(-\Omega(n^{(3-\tau)/2}))$ for graph size $n$ and a degree distribution with power-law exponent $\tau \in (2, 3)$. Based on that, we additionally derive insights about the connectivity of certain induced subgraphs of GIRGs.
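For readers unfamiliar with the model, the following sketch samples a threshold GIRG directly from its definition; the weight normalization and the constant lam vary across papers, and the quadratic edge loop is for clarity only (practical samplers run in near-linear time).

```python
import numpy as np

def sample_threshold_girg(n, tau=2.5, d=2, lam=1.0, seed=0):
    # Weights follow a power law with exponent tau; positions are
    # uniform on the d-dimensional torus [0, 1)^d.
    rng = np.random.default_rng(seed)
    weights = (1.0 - rng.random(n)) ** (-1.0 / (tau - 1))  # P(w > t) ~ t^(1-tau)
    positions = rng.uniform(size=(n, d))
    W = weights.sum()
    edges = []
    for u in range(n):                     # O(n^2) loop, for clarity only
        for v in range(u + 1, n):
            diff = np.abs(positions[u] - positions[v])
            dist = np.linalg.norm(np.minimum(diff, 1.0 - diff))  # torus metric
            # Threshold rule: connect iff the distance is below the
            # weight-dependent radius.
            if dist <= lam * (weights[u] * weights[v] / W) ** (1.0 / d):
                edges.append((u, v))
    return weights, positions, edges
```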
Reliable probabilistic primality tests are fundamental in public-key cryptography. In adversarial scenarios, a composite with a high probability of passing a specific primality test could be chosen deliberately. In such cases, we need worst-case error estimates for the test. However, in many scenarios the numbers are chosen at random and thus have a significantly smaller error probability. Therefore, we are interested in average-case error estimates. In this paper, we establish such bounds for the strong Lucas primality test, for which only worst-case, but no average-case, error bounds are currently available. This allows us to use this test with more confidence. We examine an algorithm that draws odd $k$-bit integers uniformly and independently, runs $t$ independent iterations of the strong Lucas test with randomly chosen parameters, and outputs the first number that passes all $t$ consecutive rounds. We obtain numerical upper bounds on the probability of returning a composite. Furthermore, we consider a modified version of this algorithm that excludes integers divisible by small primes, resulting in improved bounds. Additionally, we classify the numbers that contribute most to our estimate.
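As a concrete reference, here is a sketch of the sampling algorithm analyzed above, together with one standard implementation of a strong Lucas round. The way the random parameters $(P, Q)$ are drawn is one plausible choice and may differ from the paper's; a production test would additionally screen out perfect squares and small prime factors before the loop.

```python
import random

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def strong_lucas_round(n, P, Q):
    # One round of the strong Lucas test for odd n > 2 with parameters
    # (P, Q), where D = P^2 - 4Q satisfies jacobi(D, n) == -1.
    D = P * P - 4 * Q
    d, s = n + 1, 0
    while d % 2 == 0:           # n + 1 = d * 2^s with d odd
        d //= 2
        s += 1
    U, V, qk = 1, P % n, Q % n  # (U_1, V_1, Q^1)
    inv2 = pow(2, -1, n)
    for bit in bin(d)[3:]:      # binary ladder for (U_d, V_d) mod n
        U, V = (U * V) % n, (V * V - 2 * qk) % n   # index doubling
        qk = (qk * qk) % n
        if bit == '1':                              # index + 1
            U, V = ((P * U + V) * inv2) % n, ((D * U + P * V) * inv2) % n
            qk = (qk * Q) % n
    if U == 0 or V == 0:        # U_d == 0 or V_d == 0 (mod n)
        return True
    for _ in range(s - 1):      # check V_{d*2^r} == 0 for 0 < r < s
        V = (V * V - 2 * qk) % n
        qk = (qk * qk) % n
        if V == 0:
            return True
    return False

def random_parameters(n, rng):
    # Draw (P, Q) until D = P^2 - 4Q is a non-residue mod n; this loops
    # forever on perfect squares, which a full test excludes first.
    while True:
        P, Q = rng.randrange(1, n), rng.randrange(1, n)
        if jacobi((P * P - 4 * Q) % n, n) == -1:
            return P, Q

def first_passing_integer(k, t, rng=random):
    # Draw odd k-bit integers uniformly; return the first one that
    # passes t independent strong Lucas rounds.
    while True:
        n = rng.randrange(2 ** (k - 1) + 1, 2 ** k, 2)
        if all(strong_lucas_round(n, *random_parameters(n, rng))
               for _ in range(t)):
            return n
```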
In this paper we consider an orthonormal basis generated by a tensor product of Fourier basis functions, half-period cosine basis functions, and Chebyshev basis functions. We deal with the approximation problem in high dimensions related to this basis and design a fast algorithm to multiply with the underlying matrix, consisting of rows of the non-equidistant Fourier matrix, the non-equidistant cosine matrix and the non-equidistant Chebyshev matrix, and with its transpose. This leads us to an ANOVA (analysis of variance) decomposition for functions with partially periodic boundary conditions, obtained by using the Fourier basis in some dimensions and the half-period cosine basis or the Chebyshev basis in the others. We consider sensitivity analysis in this setting in order to find a basis adapted to the underlying approximation problem; more precisely, we identify the underlying index set of the multidimensional series expansion. Additionally, we test this mixed-basis ANOVA approximation in numerical experiments and highlight the advantage of interpretable results.
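The three families of one-dimensional basis functions can be written down directly. The sketch below uses common normalizations (orthonormal on $[0,1]$ for the first two, the classical Chebyshev polynomials on $[-1,1]$), which may differ from the paper's conventions by constant factors.

```python
import numpy as np

def fourier(k, x):
    # Fourier basis on [0, 1): orthonormal, for periodic dimensions
    return np.exp(2j * np.pi * k * x)

def half_period_cosine(k, x):
    # Half-period cosine basis on [0, 1]: 1 for k = 0, sqrt(2) cos(pi k x)
    return np.cos(np.pi * k * x) * (np.sqrt(2.0) if k > 0 else 1.0)

def chebyshev(k, x):
    # Chebyshev polynomials T_k on [-1, 1]: cos(k arccos x)
    return np.cos(k * np.arccos(x))

def mixed_tensor_basis(ks, xs, bases):
    # A tensor-product basis function: one 1D basis per dimension,
    # e.g. bases = (fourier, half_period_cosine, chebyshev) in 3D.
    return np.prod([b(k, x) for b, k, x in zip(bases, ks, xs)])
```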
Given subsets of uncertain values, we study the problem of identifying the subset of minimum total value (sum of the uncertain values) by querying as few values as possible. This set selection problem falls into the field of explorable uncertainty and is of intrinsic importance therein, as it implies strong adversarial lower bounds for a wide range of interesting combinatorial problems such as knapsack and matchings. We consider a stochastic problem variant and give algorithms that, in expectation, improve upon these adversarial lower bounds. The key to our results is to prove a strong structural connection to a seemingly unrelated covering problem with uncertainty in the constraints via a linear programming formulation. We exploit this connection to derive an algorithmic framework that can be used to solve both problems under uncertainty, obtaining nearly tight bounds on the competitive ratio. This is the first non-trivial stochastic result concerning the sum of unknown values without further structural knowledge of the set. With our novel methods, we lay the foundations for solving more general problems in the area of explorable uncertainty.
Position-based dynamics (PBD) is a powerful technique for simulating a variety of materials. Its primary strength is its robustness when run with a limited computational budget. We develop a novel approach to address problems with PBD for quasistatic hyperelastic materials. Even though PBD is based on the projection of static constraints, it is best suited for dynamic simulations. This is particularly relevant since the efficient creation of large data sets of plausible, but not necessarily accurate, elastic equilibria is of increasing importance with the emergence of quasistatic neural networks. Furthermore, PBD projects one constraint at a time. We show that ignoring the effects of neighboring constraints limits its convergence and stability properties. Recent works have shown that PBD can be related to the Gauss-Seidel approximation of a Lagrange multiplier formulation of backward Euler time stepping, where each constraint is solved/projected independently of the others in an iterative fashion. We show that a position-based, rather than constraint-based, nonlinear Gauss-Seidel approach resolves these problems. Our approach retains the essential PBD feature of stable behavior under constrained computational budgets, but also allows for convergent behavior with expanded budgets. We demonstrate the efficacy of our method on a variety of representative hyperelastic problems and show that both successive over-relaxation (SOR) and Chebyshev acceleration can be easily applied.
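For context, the classic per-constraint projection that the abstract identifies as limiting looks as follows for a single distance constraint (after Müller et al.'s original PBD); each constraint moves its own particles while ignoring all neighboring constraints.

```python
import numpy as np

def project_distance_constraint(x1, x2, w1, w2, rest_length):
    # Classic PBD projection of one distance constraint
    # C(x1, x2) = |x1 - x2| - rest_length, applied in isolation:
    # particles move along the constraint gradient, weighted by
    # their inverse masses w1 and w2.
    delta = x1 - x2
    dist = np.linalg.norm(delta)
    if dist < 1e-12 or w1 + w2 == 0.0:
        return x1, x2
    n = delta / dist
    c = dist - rest_length
    x1 = x1 - (w1 / (w1 + w2)) * c * n
    x2 = x2 + (w2 / (w1 + w2)) * c * n
    return x1, x2
```

A position-based Gauss-Seidel sweep, as we read the abstract, instead updates one node at a time while accounting for all constraints incident on that node, which is what restores convergent behavior when the budget is expanded.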
We implement full, three-dimensional constrained mixture theory for vascular growth and remodeling into a finite element fluid-structure interaction (FSI) solver. The resulting "fluid-solid-growth" (FSG) solver allows long-term, patient-specific predictions of changing hemodynamics, vessel wall morphology, tissue composition, and material properties. This extension from short-term (FSI) to long-term (FSG) simulations increases clinical relevance by enabling mechanobiologically dependent studies of disease progression in complex domains.
The ability to reliably predict the future quality of a wireless channel, as seen by the media access control layer, is a key enabler for improving the performance of future industrial networks that do not rely on wires. Knowing in advance how much channel behavior may change can speed up procedures for adaptively selecting the best channel, making the network more deterministic, reliable, and less energy-hungry, possibly improving device roaming capabilities at the same time. To this end, we compared popular approaches based on moving averages and regression, using multiple key performance indicators, on data captured from a real Wi-Fi setup. Moreover, we present and analyze a simple technique based on a linear combination of the outcomes of different techniques, which further reduces the prediction error, and we report some considerations about lower bounds on the achievable errors. We found that the best model is the exponential moving average, which managed to predict the frame delivery ratio with a 2.10\% average error and, at the same time, has lower computational complexity and memory consumption than the other models we analyzed.
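As a baseline for scale, an exponential moving average predictor of the frame delivery ratio is only a few lines; the smoothing factor below is a hypothetical choice, not the value tuned in the study.

```python
def ema_predict(fdr_samples, alpha=0.1):
    # One-step-ahead prediction of the frame delivery ratio with an
    # exponential moving average: the forecast for the next interval
    # is the current smoothed estimate.
    estimate = fdr_samples[0]
    predictions = []
    for sample in fdr_samples:
        predictions.append(estimate)                    # forecast, then update
        estimate = alpha * sample + (1 - alpha) * estimate
    return predictions
```

Its per-sample cost is one multiply-add and a single stored value, consistent with the low computational complexity and memory consumption reported above.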
This paper presents CLIPXPlore, a new framework that leverages a vision-language model to guide the exploration of the 3D shape space. Many recent methods have been developed to encode 3D shapes into a learned latent shape space to enable generative design and modeling. Yet, existing methods lack effective exploration mechanisms, despite the rich information encoded in the latent space. To this end, we propose to leverage CLIP, a powerful pre-trained vision-language model, to aid the shape-space exploration. Our idea is threefold. First, we couple the CLIP and shape spaces by generating paired CLIP and shape codes through sketch images and training a mapper network to connect the two spaces. Second, to explore the space around a given shape, we formulate a co-optimization strategy to search for the CLIP code that best matches the geometry of the shape. Third, we design three exploration modes, binary-attribute-guided, text-guided, and sketch-guided, to locate suitable exploration trajectories in the shape space and induce meaningful changes to the shape. We perform a series of experiments to quantitatively and visually compare CLIPXPlore with different baselines in each of the three exploration modes, showing that CLIPXPlore can produce many meaningful exploration results that cannot be achieved by the existing solutions.
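The mapper network in the first step can be as simple as a small MLP between the two code spaces; the architecture and dimensions below are hypothetical, since the abstract does not specify them.

```python
import torch.nn as nn

class CLIPToShapeMapper(nn.Module):
    # Maps a CLIP embedding to a shape-latent code; trained on paired
    # (CLIP code, shape code) examples generated from sketch images.
    def __init__(self, clip_dim=512, shape_dim=256, hidden_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(clip_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, shape_dim),
        )

    def forward(self, clip_code):
        return self.net(clip_code)
```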
Simultaneous localization and mapping (SLAM) stands as one of the critical challenges in robot navigation. Recent advancements suggest that methods based on supervised learning deliver impressive performance in front-end odometry, while traditional optimization-based methods still play a vital role in the back-end for minimizing estimation drift. In this paper, we find that such a decoupled paradigm can lead to only sub-optimal performance, consequently curtailing system capabilities and generalization potential. To solve this problem, we propose a novel self-supervised learning framework, imperative SLAM (iSLAM), which fosters reciprocal correction between the front-end and back-end, thus enhancing performance without necessitating any external supervision. Specifically, we formulate the SLAM system as a bi-level optimization problem so that the two components are bidirectionally connected. As a result, the front-end model is able to learn global geometric knowledge obtained through pose graph optimization by back-propagating the residuals from the back-end. This significantly improves the generalization ability of the entire system, achieving accuracy improvements of up to 45%. To the best of our knowledge, iSLAM is the first SLAM system showing that the front-end and back-end can learn jointly and mutually contribute to each other in a self-supervised manner.
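Conceptually, one gradient step of such a bi-level coupling can be sketched as follows; every interface here is hypothetical, since the abstract does not expose the authors' implementation, and the sketch assumes a differentiable pose-graph residual.

```python
# Assumes PyTorch: frontend is an nn.Module, pose_graph_residual returns a
# differentiable scalar, and optimizer holds the front-end's parameters.
def islam_style_step(frontend, pose_graph_residual, frames, optimizer):
    # Lower level: the learned front-end predicts odometry from raw frames.
    # Upper level: the back-end's pose-graph residual is evaluated on the
    # predictions, and its gradient flows back into the front-end weights,
    # so no ground-truth poses are required (self-supervision).
    optimizer.zero_grad()
    poses = frontend(frames)
    loss = pose_graph_residual(poses)
    loss.backward()
    optimizer.step()
    return loss.item()
```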
Causal effect estimation from observational data is a fundamental task in empirical sciences. It becomes particularly challenging when unobserved confounders are involved in a system. This paper focuses on front-door adjustment -- a classic technique which, by using observed mediators, allows causal effects to be identified even in the presence of unobserved confounding. While the statistical properties of front-door estimation are quite well understood, its algorithmic aspects remained unexplored for a long time. Recently, Jeong, Tian, and Bareinboim [NeurIPS 2022] presented the first polynomial-time algorithm for finding sets satisfying the front-door criterion in a given directed acyclic graph (DAG), with an $O(n^3(n+m))$ run time, where $n$ denotes the number of variables and $m$ the number of edges of the causal graph. In our work, we give the first linear-time, i.e., $O(n+m)$, algorithm for this task, which thus reaches the asymptotically optimal time complexity. This result implies an $O(n(n+m))$ delay enumeration algorithm of all front-door adjustment sets, again improving the previous work by Jeong et al. by a factor of $n^3$. Moreover, we provide the first linear-time algorithm for finding a minimal front-door adjustment set. We offer implementations of our algorithms in multiple programming languages to facilitate practical usage and empirically validate their feasibility, even for large graphs.
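For reference, the front-door criterion itself can be verified naively with off-the-shelf graph tools. The sketch below is a straightforward checker, not the linear-time algorithm of this paper; it assumes X, Y, Z are disjoint sets of observed nodes, with latent confounders modeled as explicit nodes in the DAG, and uses networkx's d-separation test (nx.is_d_separator, networkx >= 3.3).

```python
import networkx as nx

def satisfies_front_door(G, X, Y, Z):
    # (i) Z intercepts every directed path from X to Y.
    H = G.copy()
    H.remove_nodes_from(Z)
    if any(nx.has_path(H, x, y) for x in X for y in Y):
        return False
    # (ii) No unblocked back-door path from X to Z: remove the edges
    # leaving X and test d-separation given the empty set.
    Gx = G.copy()
    Gx.remove_edges_from(list(G.out_edges(X)))
    if not nx.is_d_separator(Gx, set(X), set(Z), set()):
        return False
    # (iii) X blocks every back-door path from Z to Y: remove the edges
    # leaving Z and test d-separation given X.
    Gz = G.copy()
    Gz.remove_edges_from(list(G.out_edges(Z)))
    return nx.is_d_separator(Gz, set(Z), set(Y), set(X))
```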