A preference system $\mathcal{I}$ is an undirected graph where vertices have preferences over their neighbors, and $\mathcal{I}$ admits a master list if all preferences can be derived from a single ordering over all vertices. We study the problem of deciding whether a given preference system $\mathcal{I}$ is close to admitting a master list with respect to three different distance measures. We determine the computational complexity of the following questions: can $\mathcal{I}$ be modified by (i) $k$ swaps in the preferences, (ii) $k$ edge deletions, or (iii) $k$ vertex deletions so that the resulting instance admits a master list? We investigate these problems in detail from the viewpoints of parameterized complexity and approximation. We also present two applications related to stable and popular matchings.
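As a concrete illustration, the sketch below (our own, not from the paper; the `pref` dictionary is a hypothetical input format mapping each vertex to its strictly ordered neighbor list) checks the master-list property directly: a single global ordering consistent with all preference lists exists exactly when the union of the pairwise constraints "u is preferred to v" is acyclic, and any topological order of that relation then serves as a master list.

```python
from collections import defaultdict

def admits_master_list(pref):
    """pref: dict mapping each vertex to its strictly ordered list of neighbors.
    A master list exists iff the union of all pairwise preference constraints
    (u preferred over v) is acyclic; any topological order then serves as one.
    Returns such an order, or None if no master list exists."""
    succ = defaultdict(set)
    indeg = defaultdict(int)
    nodes = set(pref)
    for lst in pref.values():
        nodes.update(lst)
        for i in range(len(lst)):
            for j in range(i + 1, len(lst)):
                u, v = lst[i], lst[j]
                if v not in succ[u]:
                    succ[u].add(v)
                    indeg[v] += 1
    # Kahn's algorithm: a complete topological order exists iff there is no cycle.
    queue = [x for x in nodes if indeg[x] == 0]
    order = []
    while queue:
        u = queue.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order if len(order) == len(nodes) else None
```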
Continual learning protocols are attracting increasing attention from the medical imaging community. In continual environments, datasets acquired under different conditions arrive sequentially, and each is only available for a limited period of time. Given the inherent privacy risks associated with medical data, this setup reflects the reality of deployment for deep learning diagnostic radiology systems. Many techniques exist to learn continuously for image classification, and several have been adapted to semantic segmentation. Yet most struggle to accumulate knowledge in a meaningful manner. Instead, they focus on preventing the problem of catastrophic forgetting, even when this reduces model plasticity and thereby burdens the training process. This puts into question whether the additional overhead of knowledge preservation is worth it - particularly for medical image segmentation, where computation requirements are already high - or if maintaining separate models would be a better solution. We propose UNEG, a simple and widely applicable multi-model benchmark that maintains separate segmentation and autoencoder networks for each training stage. The autoencoder is built from the same architecture as the segmentation network, which in our case is a full-resolution nnU-Net, to bypass any additional design decisions. During inference, the reconstruction error is used to select the most appropriate segmenter for each test image. Building on this concept, we develop a fair evaluation scheme for different continual learning settings that moves beyond the prevention of catastrophic forgetting. Our results across three regions of interest (prostate, hippocampus, and right ventricle) show that UNEG outperforms several continual learning methods, reinforcing the need for strong baselines in continual learning research.
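The selection rule at inference time is simple enough to state in a few lines. The sketch below illustrates that rule rather than the authors' code; `segmenters` and `autoencoders` stand for hypothetical per-stage callables mapping an image array to a prediction or reconstruction.

```python
import numpy as np

def uneg_predict(image, segmenters, autoencoders):
    """Each training stage contributes one (segmenter, autoencoder) pair; at
    test time, the stage whose autoencoder reconstructs the input image best
    is trusted to segment it. A hedged sketch of the selection rule above."""
    errors = [np.mean((ae(image) - image) ** 2) for ae in autoencoders]
    best = int(np.argmin(errors))   # lowest reconstruction error wins
    return segmenters[best](image)
```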
In recent years, many connections have been made between minimal codes, a classical object in coding theory, and other remarkable structures in finite geometry and combinatorics. One of the main problems related to minimal codes is to give lower and upper bounds on the length $m(k,q)$ of the shortest minimal codes of a given dimension $k$ over the finite field $\mathbb{F}_q$. It has been recently proved that $m(k, q) \geq (q+1)(k-1)$. In this note, we prove that $\liminf_{k \rightarrow \infty} \frac{m(k, q)}{k} \geq q + \varepsilon(q)$, where $\varepsilon$ is an increasing function such that $1.52 < \varepsilon(2) \leq \varepsilon(q) \leq \sqrt{2} + \frac{1}{2}$. Hence, the previously known lower bound is not tight for large enough $k$. We then focus on the binary case and prove some structural results on minimal codes of length $3(k-1)$. As a byproduct, we are able to show that the bound is not tight if $k \equiv 5 \pmod 8$, as well as for other small values of $k$.
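To fix the terminology: a linear code is minimal when no nonzero codeword's support strictly contains the support of another non-proportional nonzero codeword (over $\mathbb{F}_2$, of any other nonzero codeword). The brute-force check below makes the definition concrete; it is purely didactic, exponential in $k$, and unrelated to how bounds such as $m(k,q) \geq (q+1)(k-1)$ are proved.

```python
import itertools
import numpy as np

def is_minimal_binary(G):
    """Checks minimality of the binary linear code generated by the k x n
    matrix G over F_2 by brute force: the code is minimal iff no nonzero
    codeword's support strictly contains another's. Didactic sketch only."""
    k, n = G.shape
    words = {tuple(np.mod(np.array(m) @ G, 2))
             for m in itertools.product([0, 1], repeat=k)}
    words.discard(tuple([0] * n))  # drop the zero codeword
    supports = [frozenset(i for i, x in enumerate(w) if x) for w in words]
    for a, b in itertools.permutations(supports, 2):
        if a < b:                  # supp(a) strictly contained in supp(b)
            return False
    return True
```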
Given a dataset on actions and resulting long-term rewards, a direct estimation approach fits value functions that minimize prediction error on the training data. Temporal difference learning (TD) methods instead fit value functions by minimizing the degree of temporal inconsistency between estimates made at successive time steps. Focusing on finite-state Markov chains, we provide a crisp asymptotic theory of the statistical advantages of this approach. First, we show that an intuitive inverse trajectory pooling coefficient completely characterizes the percent reduction in mean-squared error of value estimates. Depending on problem structure, the reduction could be enormous or nonexistent. Next, we prove that there can be dramatic improvements in estimates of the difference in value-to-go for two states: TD's errors are bounded in terms of a novel measure - the problem's trajectory crossing time - which can be much smaller than the problem's time horizon.
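To make the contrast concrete, here is a minimal tabular TD(0) sketch (a textbook illustration, not the paper's estimator): rather than regressing each state on observed long-term returns as direct estimation does, every estimate is pulled toward the one-step bootstrap target, penalizing temporal inconsistency between successive time steps.

```python
import numpy as np

def td0_values(trajectories, n_states, alpha=0.1, gamma=1.0, sweeps=50):
    """Tabular TD(0). `trajectories` is a list of trajectories, each a list of
    (state, reward, next_state) triples with next_state=None at termination.
    Each V[s] is nudged toward the bootstrap target r + gamma * V[s']."""
    V = np.zeros(n_states)
    for _ in range(sweeps):
        for traj in trajectories:
            for s, r, s_next in traj:
                target = r + (gamma * V[s_next] if s_next is not None else 0.0)
                V[s] += alpha * (target - V[s])   # TD(0) update
    return V
```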
Suppose we are given access to $n$ independent samples from distribution $\mu$ and we wish to output one of them with the goal of making the output distributed as close as possible to a target distribution $\nu$. In this work, we show that the optimal total variation distance as a function of $n$ is given by $\tilde\Theta(\frac{D}{f'(n)})$ over the class of all pairs $\nu,\mu$ with a bounded $f$-divergence $D_f(\nu\|\mu)\leq D$. Previously, this question was studied only for the case when the Radon-Nikodym derivative of $\nu$ with respect to $\mu$ is uniformly bounded. We then consider an application in the seemingly very different field of smoothed online learning, where we show that recent results on the minimax regret and the regret of oracle-efficient algorithms still hold even under relaxed constraints on the adversary (to have bounded $f$-divergence, as opposed to a bounded Radon-Nikodym derivative). Finally, we study the efficacy of importance sampling for mean estimates uniform over a function class and compare importance sampling with rejection sampling.
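A classical heuristic for this selection task is sampling importance resampling, sketched below for intuition. This is our illustration under the assumption of oracle access to the density ratio $d\nu/d\mu$ (the hypothetical callable `dnu_dmu`), not the paper's optimal selection rule.

```python
import numpy as np

def select_one(samples, dnu_dmu, rng=None):
    """Sampling importance resampling: given n i.i.d. draws from mu, output
    one of them with probability proportional to dnu/dmu, so the selected
    sample is approximately distributed as nu."""
    rng = rng or np.random.default_rng()
    w = np.array([dnu_dmu(x) for x in samples], dtype=float)
    p = w / w.sum()   # self-normalized importance weights
    return samples[rng.choice(len(samples), p=p)]
```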
Motivated by a real-world application, we model and solve a complex staff scheduling problem. Tasks are to be assigned to workers for supervision. Multiple tasks can be covered in parallel by a single worker, with worker shifts being flexible within availabilities. Each worker has a different skill set, enabling them to cover different tasks. Tasks require assignment according to priority and skill requirements. The objective is to maximize the number of assigned tasks weighted by their priorities, while minimizing assignment penalties. We develop an adaptive large neighborhood search (ALNS) algorithm relying on tailored destroy and repair operators. It is tested on benchmark instances derived from real-world data and compared to optimal results obtained with a commercial MIP solver. Furthermore, we analyze the impact of three alternative objective functions. When applied to large-scale company data, the developed ALNS outperforms the previously applied solution approach.
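For readers unfamiliar with the metaheuristic, a generic ALNS skeleton looks as follows. The destroy/repair operators, acceptance rule, and weight-update constants here are illustrative placeholders, not the tailored operators developed in the paper.

```python
import math
import random

def alns(initial, cost, destroy_ops, repair_ops,
         iters=10000, temp=1.0, cooling=0.999):
    """Generic ALNS loop: destroy part of the incumbent, repair it, accept by
    a simulated-annealing criterion, and adapt operator weights by success."""
    best = current = initial
    weights_d = [1.0] * len(destroy_ops)
    weights_r = [1.0] * len(repair_ops)
    for _ in range(iters):
        di = random.choices(range(len(destroy_ops)), weights=weights_d)[0]
        ri = random.choices(range(len(repair_ops)), weights=weights_r)[0]
        candidate = repair_ops[ri](destroy_ops[di](current))
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            reward = 2.0 if cost(candidate) < cost(best) else 1.0
            # Reward operators that produced accepted / improving solutions.
            weights_d[di] = 0.8 * weights_d[di] + 0.2 * reward
            weights_r[ri] = 0.8 * weights_r[ri] + 0.2 * reward
            if cost(candidate) < cost(best):
                best = candidate
        temp *= cooling
    return best
```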
Autonomous UAV path planning for 3D reconstruction has been actively studied in various applications for high-quality 3D models. However, most existing works adopt explore-then-exploit, prior-based, or exploration-based strategies, which suffer from inefficiency due to repeated flights and low autonomy. In this paper, we propose PredRecon, a prediction-boosted planning framework that can autonomously generate paths for high 3D reconstruction quality. We draw inspiration from the fact that humans can roughly infer the complete structure from partial observation. Hence, we devise a surface prediction module (SPM) to predict the coarse complete surfaces of the target from the current partial reconstruction. Then, online volumetric mapping produces the uncovered surfaces awaiting observation by the UAV. Lastly, a hierarchical planner plans motions for 3D reconstruction: it sequentially finds efficient global coverage paths, plans local paths that maximize the performance of Multi-View Stereo (MVS), and generates smooth trajectories for acquiring image-pose pairs. We conduct benchmarks in a realistic simulator, validating the performance of PredRecon against classical and state-of-the-art methods. The open-source code is released at //github.com/HKUST-Aerial-Robotics/PredRecon.
Full Waveform Inversion (FWI) is a large-scale nonlinear ill-posed problem for which implementation of Newton-type methods is computationally expensive. Moreover, these methods can become trapped in undesirable local minima when the starting model lacks a low-wavenumber component and the recorded data lack low-frequency content. In this paper, the Gauss-Newton (GN) method is modified to address these issues. We rewrite the GN system for multisource multireceiver FWI in an equivalent matrix-equation form whose solution is a diagonal matrix, instead of a vector as in the standard system. Then we relax the diagonality constraint, lifting the search direction from a vector to a matrix. This relaxation is equivalent to introducing an extra degree of freedom in the subsurface-offset axis for the search direction. Furthermore, it makes the Hessian matrix separable and easy to invert. The relaxed system is solved explicitly to compute the desired search direction, requiring only the inversion of two small matrices that deblur the data residual matrix along the source and receiver dimensions. Applying the Extended GN (EGN) method to extended-source FWI leads to an algorithm that has the advantages of both model extension and source extension. Numerical examples demonstrate the robustness and stability of the EGN algorithm for waveform inversion.
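The separability argument can be illustrated on a generic relaxed system of the form $\min_X \|A X B^T - R\|_F$, whose normal equations decouple into the two small Gram matrices $A^T A$ and $B^T B$. The sketch below shows this decoupled solve; $A$, $B$, and the damping $\varepsilon$ are illustrative stand-ins, not the paper's exact operators.

```python
import numpy as np

def separable_search_direction(A, B, R, eps=1e-8):
    """Solve min_X || A X B^T - R ||_F in closed form:
    X = (A^T A)^{-1} A^T R B (B^T B)^{-1}, i.e. only the two small Gram
    matrices are inverted, deblurring the residual matrix R along its two
    dimensions. A small damping eps keeps the solves well-posed."""
    m, n = A.shape[1], B.shape[1]
    left = np.linalg.solve(A.T @ A + eps * np.eye(m), A.T @ R @ B)
    X = np.linalg.solve(B.T @ B + eps * np.eye(n), left.T).T
    return X
```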
To mitigate the privacy leakage and communication burden of Federated Learning (FL), decentralized FL (DFL) discards the central server, and each client communicates only with its neighbors in a decentralized communication network. However, existing DFL suffers from high inconsistency among local clients, which results in severe distribution shift and inferior performance compared with centralized FL (CFL), especially on heterogeneous data or sparse communication topologies. To alleviate this issue, we propose two DFL algorithms, named DFedSAM and DFedSAM-MGS, to improve the performance of DFL. Specifically, DFedSAM leverages gradient perturbation to generate local flat models via Sharpness Aware Minimization (SAM), which searches for models with uniformly low loss values. DFedSAM-MGS further boosts DFedSAM by adopting Multiple Gossip Steps (MGS) for better model consistency, which accelerates the aggregation of local flat models and better balances communication complexity and generalization. Theoretically, we present improved convergence rates $\small \mathcal{O}\big(\frac{1}{\sqrt{KT}}+\frac{1}{T}+\frac{1}{K^{1/2}T^{3/2}(1-\lambda)^2}\big)$ and $\small \mathcal{O}\big(\frac{1}{\sqrt{KT}}+\frac{1}{T}+\frac{\lambda^Q+1}{K^{1/2}T^{3/2}(1-\lambda^Q)^2}\big)$ in the non-convex setting for DFedSAM and DFedSAM-MGS, respectively, where $1-\lambda$ is the spectral gap of the gossip matrix and $Q$ is the number of MGS steps. Empirically, our methods achieve competitive performance compared with CFL methods and outperform existing DFL methods.
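The SAM building block that DFedSAM applies locally is standard and easy to sketch. The following is a generic single SAM update in PyTorch (our illustration, not the authors' code): perturb the weights by $\rho$ along the normalized gradient, recompute the gradient at the perturbed point, restore the weights, and step with the base optimizer.

```python
import torch

def sam_local_step(model, loss_fn, batch, base_opt, rho=0.05):
    """One generic SAM update: ascend to a nearby worst-case point, take the
    gradient there, then descend from the original weights with that gradient."""
    base_opt.zero_grad()
    loss_fn(model, batch).backward()
    grads = [p.grad.detach().clone() if p.grad is not None else torch.zeros_like(p)
             for p in model.parameters()]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / norm)        # ascent step to a nearby worst case
    base_opt.zero_grad()
    loss_fn(model, batch).backward()      # gradient at the perturbed weights
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / norm)        # restore the original weights
    base_opt.step()                       # descend using the SAM gradient
```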
Learning causal relationships between variables is a fundamental task in causal inference, and directed acyclic graphs (DAGs) are a popular choice for representing these relationships. As one can recover a causal graph from observations only up to its Markov equivalence class, interventions are often used for the recovery task. Interventions are costly in general, and it is important to design algorithms that minimize the number of interventions performed. In this work, we study the problem of identifying the smallest set of interventions required to learn the causal relationships between a subset of edges (target edges). Under the assumptions of faithfulness, causal sufficiency, and ideal interventions, we study this problem in two settings: when the underlying ground-truth causal graph is known (subset verification) and when it is unknown (subset search). For the subset verification problem, we provide an efficient algorithm to compute a minimum-size interventional set; we further extend these results to bounded-size non-atomic interventions and node-dependent interventional costs. For the subset search problem, in the worst case, we show that no algorithm (even with adaptivity or randomization) can achieve an approximation ratio, relative to the subset verification number, that is asymptotically better than the vertex cover of the target edges. This result is surprising, as a logarithmic approximation algorithm exists for the search problem when we wish to recover the whole causal graph. To obtain our results, we prove several interesting structural properties of interventional causal graphs that we believe have applications beyond the subset verification/search problems studied here.
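The quantity appearing in the lower bound, the vertex cover of the target edges, is easy to estimate on a given instance. The sketch below computes a 2-approximate cover with the classic maximal-matching heuristic (our illustration for gauging that quantity, unrelated to the paper's algorithms).

```python
def vertex_cover_of_targets(target_edges):
    """2-approximate vertex cover via a greedy maximal matching: whenever an
    edge is still uncovered, take both of its endpoints. The optimal cover
    has at least half as many vertices as the returned set."""
    cover = set()
    for u, v in target_edges:
        if u not in cover and v not in cover:   # edge still uncovered
            cover.update((u, v))                # take both endpoints
    return cover
```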
Seeking the equivalent entities among multi-source Knowledge Graphs (KGs) is the pivotal step in KG integration, also known as \emph{entity alignment} (EA). However, most existing EA methods are inefficient and scale poorly. A recent survey points out that some of them even require several days to process a dataset containing 200,000 nodes (DWY100K). We believe an over-complex graph encoder and an inefficient negative sampling strategy are the two main reasons. In this paper, we propose a novel KG encoder -- Dual Attention Matching Network (Dual-AMN), which not only smartly models both intra-graph and cross-graph information, but also greatly reduces computational complexity. Furthermore, we propose the Normalized Hard Sample Mining Loss to smoothly select hard negative samples with reduced loss shift. The experimental results on widely used public datasets indicate that our method achieves both high accuracy and high efficiency. On DWY100K, the whole running process of our method finishes in 1,100 seconds, at least 10$\times$ faster than previous work. Our method also outperforms previous works across all datasets, with Hits@1 and MRR improved by 6% to 13%.
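The abstract does not spell the loss out, but LogSumExp-smoothed hard-negative mining is a common way to realize the "smooth selection of hard negatives" described above. The sketch below is in that spirit only and is not the paper's exact formulation; `scale` and `margin` are illustrative hyperparameters.

```python
import torch

def normalized_hard_sample_loss(pos_sim, neg_sim, scale=10.0, margin=0.1):
    """LogSumExp acts as a soft max over the negatives, so harder negatives
    dominate the gradient without a brittle top-k selection step.
    pos_sim: (B,) positive-pair similarities; neg_sim: (B, N) negatives."""
    logits = scale * (neg_sim - pos_sim.unsqueeze(1) + margin)
    # softplus(logsumexp(.)) = log(1 + sum exp(.)), a smooth ranking loss.
    return torch.nn.functional.softplus(torch.logsumexp(logits, dim=1)).mean()
```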