In this paper, we introduce a mixed-integer quadratic formulation for the congested variant of the partial set covering location problem, which involves determining a subset of facility locations to open and efficiently allocating customers to these facilities so as to minimize the combined costs of facility opening and congestion while ensuring a target coverage. To enhance the resilience of the solution against demand fluctuations, we address the case of uncertain customer demand using $\Gamma$-robustness. We formulate the deterministic problem and its robust counterpart as mixed-integer quadratic problems. We investigate the effect of the protection level on adapted instances from the literature, providing critical insights into how sensitive the resulting plans are to this parameter. Moreover, since the size of the robust counterpart grows with the number of customers, which can be significant in real-world contexts, we propose a Benders decomposition that effectively reduces the number of variables by projecting out of the master problem all variables whose number depends on the number of customers. We illustrate how to incorporate our Benders approach within a mixed-integer second-order cone programming (MISOCP) solver, explicitly addressing all the ingredients that are instrumental to its success. We discuss single-tree and multi-tree approaches and introduce a perturbation technique to efficiently deal with the degeneracy of the Benders subproblem. Our tailored Benders approaches outperform the perspective reformulation solved using the state-of-the-art MISOCP solver Gurobi on adapted instances from the literature.
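For context, the $\Gamma$-robustness used here follows the Bertsimas-Sim paradigm: at most $\Gamma$ demands deviate from their nominal values simultaneously, and the inner adversarial problem is dualized into the formulation. As a generic illustration only (not the paper's exact quadratic model), a linear coverage constraint $\sum_{i \in I} d_i x_i \ge D$ with demands $d_i \in [\bar d_i - \hat d_i,\, \bar d_i]$ becomes
\[
\begin{aligned}
&\sum_{i \in I} \bar d_i\, x_i - \Big(\Gamma z + \sum_{i \in I} p_i\Big) \ge D,\\
&z + p_i \ge \hat d_i\, x_i \quad \forall i \in I, \qquad z \ge 0,\quad p_i \ge 0 \quad \forall i \in I,
\end{aligned}
\]
where $z$ and the $p_i$ are the dual variables of the inner adversarial maximization; $\Gamma = 0$ recovers the nominal constraint, while larger $\Gamma$ hedges against more simultaneous deviations.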
As the development of formal proofs is a time-consuming task, it is important to devise ways of sharing already written proofs to avoid wasting time redoing them. One of the challenges in this domain is translating proofs written in proof assistants based on impredicative logics to proof assistants based on predicative logics, whenever impredicativity is not used in an essential way. In this paper we present a transformation for sharing proofs with a core predicative system supporting prenex universe polymorphism (as in Agda). It consists in trying to elaborate each term into a predicative universe-polymorphic term that is as general as possible. The use of universe polymorphism is justified by the fact that mapping each universe to a fixed one in the target theory is insufficient in most cases. During the elaboration, we need to solve unification problems in the equational theory of universe levels. To this end, we give a complete characterization of when a single equation admits a most general unifier. This characterization is then employed in a partial algorithm that uses a constraint-postponement strategy to try to solve unification problems. The proposed translation is necessarily partial, but in practice it allows one to translate many proofs that do not use impredicativity in an essential way. Indeed, it was implemented in the tool Predicativize and then used to translate, semi-automatically, many non-trivial developments from Matita's library to Agda, including proofs of Bertrand's Postulate and Fermat's Little Theorem, which (as far as we know) were not previously available in Agda.
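To make the level-unification problem concrete: universe levels are built from $0$, successor $+1$ and least upper bound $\sqcup$, and equations are solved modulo the equational theory of these operators. Three elementary examples (our illustrations, not the paper's characterization):
\[
x + 1 \doteq y + 1 \ \text{has mgu}\ \{x \mapsto y\}; \qquad
x \doteq x + 1 \ \text{has no unifier}; \qquad
x \sqcup y \doteq y \ \text{has mgu}\ \{y \mapsto x \sqcup y\}.
\]
The last example already shows that unification here behaves unlike syntactic unification: the most general solution substitutes for $y$, even though $x$ is the constrained variable.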
We prove that QMA, where the verifier may also make a single non-collapsing measurement, is equal to NEXP, resolving an open question of Aaronson. We show that this is a corollary of a modified proof of QMA+ = NEXP [arXiv:2306.13247]. At the core of many results inspired by Blier and Tapp [arXiv:0709.0738] is an unphysical property-testing problem: deciding whether a quantum state is close to an element of a fixed basis.
This paper investigates extremal quantiles under two-way cluster dependence. We demonstrate that the unconditional intermediate order quantiles in the tails are asymptotically Gaussian. This is remarkable because two-way cluster dependence entails potential non-Gaussianity in general, yet extremal quantiles do not suffer from this issue. Building upon this result, we extend our analysis to extremal quantile regressions of intermediate order.
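For orientation, "intermediate order" refers to quantile indices $\tau_n$ that drift into the tail, but slowly relative to the sample size; the standard condition in the extremal-quantile literature (our gloss, following e.g. Chernozhukov) is
\[
\tau_n \to 0 \quad \text{and} \quad n\tau_n \to \infty \quad \text{as } n \to \infty,
\]
in contrast to extreme order quantiles, for which $n\tau_n \to k \in (0,\infty)$ and Gaussian limit theory typically fails.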
In this paper, we introduce tiled graphs as models of learning and maturing processes. We show how tiled graphs can combine graphs of learning spaces or antimatroids (partial hypercubes) with maturity models (total orders) to yield models of learning processes. A natural approach to visualizing these processes is to aim for certain optimal drawings. We show that, for most of the more detailed models, the resulting drawing problems are NP-complete. The terse model of a maturing process that ignores the details of learning, however, results in a polynomially solvable graph drawing problem. In addition, this model provides insight into the process by ordering the subjects at each test according to their maturity. We investigate extremal and random instances of this problem, and provide exact results and bounds on their optimal crossing number. Graph-theoretic models offer two approaches to designing optimal maturity models from observed data: (1) minimizing intra-subject inconsistencies, which manifest as regressions of subjects, is modeled as the well-known feedback arc set problem (see the sketch below); we study the alternative of (2) finding a maturity model that minimizes inter-subject inconsistencies, which manifest as crossings in the respective drawing. We show the latter to be NP-complete.
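As a concrete illustration of approach (1), minimizing intra-subject regressions amounts to a feedback arc set problem: order the maturity stages so that as few observed transitions as possible point backwards against the order. A minimal brute-force sketch (exact only for tiny instances; the input format is our own, not the paper's):

```python
from itertools import permutations

def min_feedback_arcs(nodes, arcs):
    """Exact minimum feedback arc set by exhausting all orderings.

    nodes: list of maturity stages; arcs: observed (u, v) transitions,
    possibly with repeats. Returns the ordering minimizing the number
    of arcs pointing backwards (regressions). Exponential in
    len(nodes) -- illustration only.
    """
    best_order, best_cost = None, float("inf")
    for order in permutations(nodes):
        pos = {v: i for i, v in enumerate(order)}
        cost = sum(1 for u, v in arcs if pos[u] > pos[v])
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order, best_cost

# Toy data: stages A-D, with one inconsistent (regressive) observation.
arcs = [("A", "B"), ("B", "C"), ("C", "D"), ("C", "B")]
print(min_feedback_arcs(["A", "B", "C", "D"], arcs))
# -> (('A', 'B', 'C', 'D'), 1): one regression is unavoidable.
```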
We develop a new, powerful method for counting elements of a multiset. As a first application, we use this algorithm to study the number of occurrences of patterns in permutations. For patterns of length 3 there are two Wilf classes, whose general behaviour is reasonably well understood. We slightly extend some of the known results for that case, and exhaustively study the case of patterns of length 4, about which little was previously known. For such patterns there are seven Wilf classes, and based on extensive enumerations and careful series analysis, we conjecture the asymptotic behaviour of each class.
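To fix ideas, an occurrence of a (classical) pattern is a subsequence of the permutation that is order-isomorphic to the pattern. A direct brute-force counter (our illustration; far too slow for the extensive enumerations mentioned above, but it pins down the quantity being counted):

```python
from itertools import combinations

def count_pattern(perm, pattern):
    """Count subsequences of perm that are order-isomorphic to pattern."""
    k = len(pattern)
    # rank(seq) maps a sequence of distinct values to its relative order.
    rank = lambda seq: tuple(sorted(seq).index(x) for x in seq)
    target = rank(pattern)
    return sum(1 for sub in combinations(perm, k) if rank(sub) == target)

print(count_pattern((3, 1, 4, 2), (2, 1)))          # 3 (the inversions)
print(count_pattern((2, 4, 1, 3, 5), (1, 3, 2, 4)))  # 1 occurrence: (2, 4, 3, 5)
```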
In this paper, we introduce applications of third-order reduced biquaternion tensors in color video processing. We first develop algorithms for computing the singular value decomposition (SVD) of a third-order reduced biquaternion tensor via a new Ht-product. As theoretical applications, we define the Moore-Penrose inverse of a third-order reduced biquaternion tensor and derive its characterizations. In addition, we discuss the general (or Hermitian) solutions to the reduced biquaternion tensor equation $\mathcal{A}\ast_{Ht} \mathcal{X}=\mathcal{B}$, as well as its least-squares solution. Finally, we apply this SVD to color video compression; the experimental results show that our method is faster than the competing scheme.
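For orientation, the natural analogue of the Moore-Penrose inverse under a tensor product is the unique $\mathcal{X}$ satisfying the four Penrose conditions; transcribing the standard matrix conditions with the $Ht$-product (the paper's precise definition may differ in detail) gives
\[
\mathcal{A}\ast_{Ht}\mathcal{X}\ast_{Ht}\mathcal{A}=\mathcal{A},\quad
\mathcal{X}\ast_{Ht}\mathcal{A}\ast_{Ht}\mathcal{X}=\mathcal{X},\quad
(\mathcal{A}\ast_{Ht}\mathcal{X})^{H}=\mathcal{A}\ast_{Ht}\mathcal{X},\quad
(\mathcal{X}\ast_{Ht}\mathcal{A})^{H}=\mathcal{X}\ast_{Ht}\mathcal{A}.
\]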
In this paper, we investigate the properties of standard and multilevel Monte Carlo methods for weak approximation of solutions of stochastic differential equations (SDEs) driven by an infinite-dimensional Wiener process and a Poisson random measure, with a Lipschitz payoff function. The error of the truncated-dimension randomized numerical scheme, which is determined by two parameters, namely the grid density $n \in \mathbb{N}_{+}$ and the truncation dimension $M \in \mathbb{N}_{+}$, is of order $n^{-1/2}+\delta(M)$, where $\delta(\cdot)$ is positive and decreasing to $0$. We derive a complexity model and prove an upper complexity bound for the multilevel Monte Carlo method, which depends on two increasing sequences of parameters for both $n$ and $M$. The complexity is measured in terms of an upper bound on the mean-squared error and is compared with the complexity of the standard Monte Carlo algorithm. Results of numerical experiments, together with Python and CUDA C implementations, are also reported.
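The multilevel idea can be summarized in a few lines: write the finest-level expectation as a telescoping sum of level differences, $\mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{l=1}^{L}\mathbb{E}[P_l - P_{l-1}]$, and spend most samples on the cheap coarse levels. A generic sketch with a toy scalar SDE and the standard fine/coarse coupling (our own stand-in; the paper's scheme additionally increases the truncation dimension $M$ with the level):

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_payoffs(level, n_samples, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Euler-Maruyama payoffs for a toy scalar SDE (GBM), coupled across
    levels: the coarse path reuses the fine path's Brownian increments."""
    n_fine = 2 ** (level + 1)
    dt = T / n_fine
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_fine))
    xf = np.full(n_samples, x0)
    for k in range(n_fine):
        xf = xf + mu * xf * dt + sigma * xf * dW[:, k]
    if level == 0:
        return xf, np.zeros(n_samples)   # no coarser level to couple with
    xc = np.full(n_samples, x0)
    dWc = dW[:, 0::2] + dW[:, 1::2]      # sum pairs of fine increments
    for k in range(n_fine // 2):
        xc = xc + mu * xc * (2 * dt) + sigma * xc * dWc[:, k]
    return xf, xc

def mlmc_estimate(max_level, n_per_level):
    """Telescoping MLMC estimator: sum of level-difference means."""
    total = 0.0
    for level, n in zip(range(max_level + 1), n_per_level):
        fine, coarse = coupled_payoffs(level, n)
        total += np.mean(fine - coarse)
    return total

# Geometrically decreasing sample counts across levels.
print(mlmc_estimate(5, [100_000 // 2**l for l in range(6)]))
```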
We address an optimal sensor placement problem through Bayesian experimental design for seismic full waveform inversion, targeting recovery of the associated moment tensor. The objective is to optimally choose the locations of the sensors (stations) from which to collect the observed data. The Shannon expected information gain is used as the objective function in the search for the optimal network of sensors. A closed form for this objective is available thanks to the linear structure of the forward problem, together with the Gaussian modeling of the observational errors and of the prior distribution. Since the resulting problem is inherently combinatorial, a greedy algorithm is deployed to sequentially select the sensor locations that form the best network for learning the moment tensor. Numerical results are presented and analyzed for several instances of the problem, including the use of full three-dimensional velocity models, cases in which the earthquake source location is unknown, and moment tensor inversion under model misspecification.
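In the linear-Gaussian setting the expected information gain has the familiar closed form $\tfrac12 \log\det\!\big(I + G \Sigma_{\mathrm{pr}} G^{\top}/\sigma^2\big)$ for a forward operator $G$ and noise variance $\sigma^2$, so greedy selection simply scores each candidate station by the gain of adding its rows of $G$. A self-contained sketch with synthetic data (names, dimensions, and data are our own, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def eig(G, prior_cov, noise_var):
    """Shannon expected information gain for a linear-Gaussian model:
    0.5 * logdet(I + G Sigma_pr G^T / noise_var)."""
    m = G.shape[0]
    K = np.eye(m) + (G @ prior_cov @ G.T) / noise_var
    return 0.5 * np.linalg.slogdet(K)[1]

def greedy_sensors(G_per_sensor, prior_cov, noise_var, budget):
    """Sequentially add the station whose rows most increase the EIG.
    G_per_sensor: one block of forward-operator rows per candidate
    station (columns = the 6 moment-tensor components)."""
    chosen, rows = [], np.empty((0, prior_cov.shape[0]))
    for _ in range(budget):
        scores = [
            (eig(np.vstack([rows, G_per_sensor[i]]), prior_cov, noise_var), i)
            for i in range(len(G_per_sensor)) if i not in chosen
        ]
        best_gain, best_i = max(scores)
        chosen.append(best_i)
        rows = np.vstack([rows, G_per_sensor[best_i]])
    return chosen

# Synthetic example: 20 candidate stations, 3 data rows each,
# 6 unknown moment-tensor components, isotropic prior and noise.
candidates = [rng.normal(size=(3, 6)) for _ in range(20)]
print(greedy_sensors(candidates, np.eye(6), noise_var=0.1, budget=4))
```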
In this paper, we introduce the problem of zero-shot text-guided exploration of the solution space of open-domain image super-resolution. Our goal is to allow users to explore diverse, semantically accurate reconstructions that preserve data consistency with the low-resolution inputs for different large downsampling factors, without explicitly training for these specific degradations. We propose two approaches for zero-shot text-guided super-resolution: i) modifying the generative process of text-to-image (T2I) diffusion models to promote consistency with the low-resolution inputs, and ii) incorporating language guidance into zero-shot diffusion-based restoration methods. We show that the proposed approaches produce diverse solutions that match the semantic meaning provided by the text prompt while preserving data consistency with the degraded inputs. We evaluate the proposed baselines on the task of extreme super-resolution and demonstrate advantages in terms of restoration quality, diversity, and explorability of solutions.
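The first approach hinges on a data-consistency step inside the diffusion sampling loop: after each denoising estimate, the components determined by the low-resolution input are overwritten. A schematic sketch in the spirit of null-space methods such as DDNM, not the paper's exact algorithm (the denoiser is a stub, standing in for a text-conditioned T2I model):

```python
import numpy as np

def A(x, factor=8):
    """Toy degradation: average-pool downsampling by `factor`."""
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean((1, 3))

def A_pinv(y, factor=8):
    """Pseudo-inverse of average pooling: replicate each pixel."""
    return np.repeat(np.repeat(y, factor, axis=0), factor, axis=1)

def denoise(x_t, t, prompt):
    """Stub predicting the clean image x0; a real method would call a
    text-conditioned T2I diffusion model here."""
    return x_t  # placeholder

def guided_sample(y_lr, prompt, steps=50, factor=8):
    x = np.random.randn(*(s * factor for s in y_lr.shape))
    for t in reversed(range(steps)):
        x0_hat = denoise(x, t, prompt)
        # Data consistency: replace the range-space component of the
        # estimate with the one dictated by the low-res observation,
        # so that A(x0_hat) == y_lr exactly after this step.
        x0_hat = x0_hat + A_pinv(y_lr - A(x0_hat, factor), factor)
        noise_scale = t / steps           # schematic noise schedule
        x = x0_hat + noise_scale * np.random.randn(*x0_hat.shape)
    return x

hr = guided_sample(np.random.rand(16, 16), "a red barn at sunset")
print(hr.shape)  # (128, 128)
```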
This paper presents a method for assessing the thematic agreement of geospatial data products with different semantics and spatial granularities, which may be affected by spatial offsets between test and reference data. The proposed method uses a multi-scale framework that allows a probabilistic evaluation of whether thematic disagreement between datasets is induced by spatial offsets arising from the differing nature of the datasets. We test our method on real-estate-derived settlement locations and remote-sensing-derived building footprint data.
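One way to make the multi-scale idea concrete: aggregate both layers to progressively coarser grids and track how agreement recovers once the cell size exceeds the plausible spatial offset. A toy sketch on binary presence/absence rasters (our construction, not the paper's exact procedure):

```python
import numpy as np

def block_presence(raster, cell):
    """Aggregate a binary raster: a coarse cell is 1 if any fine pixel
    inside it is 1 (presence/absence upscaling)."""
    h, w = raster.shape
    return raster.reshape(h // cell, cell, w // cell, cell).max((1, 3))

def agreement_profile(test, ref, cells=(1, 2, 4, 8)):
    """Fraction of coarse cells on which the two layers agree, per scale.
    Agreement that improves sharply with cell size suggests the fine-scale
    disagreement is offset-induced rather than thematic."""
    return {c: float(np.mean(block_presence(test, c) == block_presence(ref, c)))
            for c in cells}

# Toy example: the "test" layer is the reference shifted by 2 pixels.
ref = np.zeros((32, 32), dtype=int)
ref[8:12, 8:12] = 1
test = np.roll(ref, shift=2, axis=1)
print(agreement_profile(test, ref))  # agreement rises with cell size
```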