We introduce a new poset structure on Dyck paths whose covering relation is a particular case of the relation inducing the Tamari lattice. We prove that the transitive closure of this relation endows Dyck paths with a lattice structure. We provide a trivariate generating function counting Dyck paths with respect to the semilength and the numbers of outgoing and incoming edges in the Hasse diagram. From it we deduce the numbers of coverings, and of meet- and join-irreducible elements. As a byproduct, we present a new involution on Dyck paths that sends the bistatistic of the numbers of outgoing and incoming edges to its reverse. Finally, we give a generating function for the number of intervals, and we compare this number with the number of intervals in the Tamari lattice.
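To fix notation, the sketch below (a minimal illustration, not the paper's construction) implements the classical Tamari covering relation on Dyck paths encoded as strings of 'U' (up) and 'D' (down) steps: a cover is obtained by rotating at a valley, i.e., exchanging a 'D' with the primitive Dyck factor that follows it. The paper's covering relation is a restriction of this rotation; the abstract does not say which rotations are kept.

```python
def primitive_factor(path, start):
    """Length of the primitive Dyck factor beginning at path[start] == 'U'."""
    height = 0
    for j in range(start, len(path)):
        height += 1 if path[j] == 'U' else -1
        if height == 0:
            return j - start + 1
    raise ValueError("no matching down step")

def tamari_covers(path):
    """All Tamari covers of `path`: for each valley 'DU', exchange the 'D'
    with the primitive Dyck factor starting at the 'U'."""
    covers = []
    for i in range(len(path) - 1):
        if path[i] == 'D' and path[i + 1] == 'U':
            k = primitive_factor(path, i + 1)
            covers.append(path[:i] + path[i + 1:i + 1 + k] + 'D' + path[i + 1 + k:])
    return covers

print(tamari_covers("UDUDUD"))  # ['UUDDUD', 'UDUUDD']
```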
Solving ill-posed inverse problems requires careful formulation of prior beliefs over the signals of interest and an accurate description of their manifestation into noisy measurements. Handcrafted signal priors based on, e.g., sparsity are increasingly being replaced by data-driven deep generative models, and several groups have recently shown that state-of-the-art score-based diffusion models yield particularly strong performance and flexibility. In this paper, we show that the powerful paradigm of posterior sampling with diffusion models can be extended to include rich, structured noise models. To that end, we propose a joint conditional reverse diffusion process with learned scores for the noise- and signal-generating distributions. We demonstrate strong performance gains across various inverse problems with structured noise, outperforming competitive baselines that use normalizing flows and adversarial networks. This opens up new opportunities and relevant practical applications of diffusion modeling for inverse problems with non-Gaussian measurement models.
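As a concrete (and heavily simplified) illustration of joint conditional reverse diffusion, the toy sketch below samples signal and noise jointly for a linear model y = A x + n. The score functions are placeholder Gaussian scores standing in for learned networks, and the annealed Langevin-style update is our assumption; the paper's actual sampler and conditioning will differ.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.eye(4)                        # hypothetical forward operator
y = rng.normal(size=4)               # measurement

score_x = lambda x, t: -x            # placeholder signal prior score
score_n = lambda n, t: -4.0 * n      # placeholder structured-noise score

x, n = rng.normal(size=4), rng.normal(size=4)
for t in np.linspace(1.0, 1e-3, 500):     # reverse-time sweep (real scores depend on t)
    step = 1e-3
    resid = y - A @ x - n                 # gradient of -||y - A x - n||^2 / 2
    x += step * (score_x(x, t) + A.T @ resid) + np.sqrt(2 * step) * rng.normal(size=4)
    n += step * (score_n(n, t) + resid) + np.sqrt(2 * step) * rng.normal(size=4)

print("posterior sample x:", np.round(x, 3))
```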
Weakly-supervised segmentation with label-efficient sparse annotations has attracted increasing research attention as a way to reduce the cost of the laborious pixel-wise labeling process, and pairwise affinity modeling techniques play an essential role in this task. Most existing approaches focus on using the local appearance kernel to model the neighboring pairwise potentials. However, such a local operation fails to capture long-range dependencies and ignores the topology of objects. In this work, we formulate affinity modeling as an affinity propagation process, and propose local and global pairwise affinity terms to generate accurate soft pseudo labels. An efficient algorithm is also developed to significantly reduce the computational cost. The proposed approach can be conveniently plugged into existing segmentation networks. Experiments on three typical label-efficient segmentation tasks, i.e., box-supervised instance segmentation, point/scribble-supervised semantic segmentation, and CLIP-guided semantic segmentation, demonstrate the superior performance of the proposed approach.
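As a rough illustration of affinity propagation with a local appearance kernel (a naive O(N^2) sketch; the paper's contribution includes an efficient algorithm that avoids exactly this cost), soft pseudo labels can be diffused through a row-normalized Gaussian affinity matrix:

```python
import numpy as np

def propagate(feats, soft_labels, iters=10, sigma=0.5):
    # feats: (N, d) pixel features; soft_labels: (N, C) initial class scores
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))       # appearance kernel
    W /= W.sum(axis=1, keepdims=True)        # row-normalize into a transition matrix
    P = soft_labels.copy()
    for _ in range(iters):
        P = W @ P                            # one propagation step
    return P

rng = np.random.default_rng(0)
feats = rng.random((6, 3))
labels = np.eye(2)[np.array([0, 0, 1, 1, 0, 1])]   # noisy sparse seeds
print(propagate(feats, labels).round(2))           # smoothed soft pseudo labels
```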
We introduce Deceptive-NeRF, a novel methodology for few-shot NeRF reconstruction that leverages diffusion models to synthesize plausible pseudo-observations which improve the reconstruction. The approach unfolds in three key steps: 1) reconstructing a coarse NeRF from sparse input data; 2) using the coarse NeRF to render images and generating pseudo-observations from them; 3) training a refined NeRF model on the input images augmented with the pseudo-observations. We develop a deceptive diffusion model that turns RGB images and depth maps rendered from coarse NeRFs into photo-realistic pseudo-observations while preserving scene semantics for reconstruction. Furthermore, we propose a progressive strategy for training Deceptive-NeRF, using the current NeRF renderings to create pseudo-observations that enhance the next iteration's NeRF. Extensive experiments demonstrate that our approach can synthesize photo-realistic novel views, even for highly complex scenes with very sparse inputs. Code will be released.
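The three-step pipeline and the progressive strategy can be summarized in a few lines of pseudocode. Everything below is a hypothetical stub (train_nerf, render_views, and deceptive_diffusion are placeholders, not the released API); it only shows the control flow we read out of the abstract.

```python
def train_nerf(images):                 # stub: would fit a NeRF to `images`
    return {"views": len(images)}

def render_views(nerf):                 # stub: would render RGB and depth maps
    return ["rgb"], ["depth"]

def deceptive_diffusion(rgb, depth):    # stub: coarse renders -> pseudo-observations
    return [f"pseudo({r},{d})" for r, d in zip(rgb, depth)]

def progressive_deceptive_nerf(sparse_images, rounds=3):
    images = list(sparse_images)
    nerf = train_nerf(images)                       # 1) coarse NeRF from sparse inputs
    for _ in range(rounds):
        rgb, depth = render_views(nerf)             # 2) renderings -> pseudo-observations
        images += deceptive_diffusion(rgb, depth)
        nerf = train_nerf(images)                   # 3) refine on the augmented set
    return nerf

print(progressive_deceptive_nerf(["img0", "img1"]))
```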
We propose a parallel-in-time method based on preconditioning for Biot's consolidation model in poroelasticity. To achieve fast and stable convergence for the matrix system of Biot's model, we design two preconditioners built on approximations of the Schur complement. The parallel-in-time method employs an inverted time-stepping scheme that iterates on the preconditioned linear system in the outer loop and advances the time step in the inner loop. This allows us to parallelize the iterations, with a theoretical parallel efficiency that approaches 1 as the numbers of time steps and spatial grid points grow. We demonstrate the stability, accuracy, and linear speedup of our method on an HPC platform through numerical experiments.
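To make the inverted loop structure concrete, here is a toy sketch (our illustration, not the paper's preconditioner) for the scalar test problem u' = -u with implicit Euler: the outer loop is a fixed-point iteration over the whole time horizon, and the inner loop over time steps uses only the previous iterate, so all time steps could be updated in parallel.

```python
import numpy as np

N, dt = 50, 0.02
phi = lambda u: u / (1 + dt)           # one implicit Euler step of u' = -u
u = np.zeros(N + 1); u[0] = 1.0        # initial iterate with u(0) = 1

for k in range(N):                     # outer loop: iteration to convergence
    u_prev = u.copy()
    for n in range(1, N + 1):          # inner loop: independent across n -> parallel
        u[n] = phi(u_prev[n - 1])
    if np.allclose(u, u_prev):
        break

print(k, u[-1], np.exp(-N * dt))       # iterations used; iterate vs. exact e^{-1}
```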
Over the last three decades, innovations in the memory subsystem have primarily targeted overcoming the data-movement bottleneck. In this paper, we focus on a specific market trend in memory technology: 3D-stacked memory and caches. We investigate the impact of extending the on-chip memory capabilities of future HPC-focused processors, particularly via 3D-stacked SRAM. First, we propose a method, oblivious to the memory subsystem, for gauging the upper bound on performance improvement when data-movement costs are eliminated. Then, using the gem5 simulator, we model two variants of a hypothetical LARge Cache processor (LARC), fabricated in 1.5 nm and enriched with a high-capacity 3D-stacked cache. Through a large volume of experiments involving a broad set of proxy applications and benchmarks, we examine how HPC CPU performance is likely to evolve, and find an average per-chip speedup of 9.56x for cache-sensitive HPC applications. Additionally, we exhaustively document our methodological exploration to motivate HPC centers to drive their own technological agenda through enhanced co-design.
In this study, a gait phase classification method based on multiclass SVM classification is introduced, focusing on precise identification of the stance and swing phases, which are further subdivided into seven phases. Data from individual IMU sensors, such as Shank Acceleration X, Y, Z, Shank Gyro X, and Knee Angles, are used as features in the classification model. The proposed technique classifies the gait phases with an accuracy of about 90.3%. Gait phase classification is crucial, especially in the domains of exoskeletons and prosthetics, where accurate identification of gait phases enables seamless integration with assistive equipment, improving mobility, stability, and energy economy. This work extends the study of gait and offers an effective method for correctly identifying gait phases from shank IMU sensor data, with potential applications in biomechanical research, exoskeletons, rehabilitation, and prosthetics.
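A minimal reproduction of this setup with scikit-learn might look as follows; the data here is synthetic (the real features would be the Shank Acceleration X/Y/Z, Shank Gyro X, and Knee Angle channels), so the printed accuracy is illustrative only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 5))              # 5 IMU-derived features (synthetic)
y = rng.integers(0, 7, size=700)           # 7 gait phase labels (synthetic)
X += y[:, None] * 0.8                      # make the toy classes separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(),      # standardize, then RBF multiclass SVM
                    SVC(kernel="rbf", decision_function_shape="ovr"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```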
We present a preconditioning method for the linear systems arising from the boundary element discretization of the Laplace hypersingular equation on a $2$-dimensional triangulated surface $\Gamma$ in $\mathbb{R}^3$. We allow $\Gamma$ to belong to a large class of geometries that we call polygonal multiscreens, which can be non-manifold. After introducing a new, simple conforming Galerkin discretization, we analyze a substructuring domain-decomposition preconditioner based on ideas originally developed for the Finite Element Method. The surface $\Gamma$ is subdivided into non-overlapping regions, and the application of the preconditioner is obtained via the solution of the hypersingular equation on each patch, plus a coarse subspace correction. We prove that the condition number of the preconditioned linear system grows poly-logarithmically in $H/h$, the ratio of the coarse-mesh to the fine-mesh size, and our numerical results indicate that this bound is sharp. This domain-decomposition algorithm therefore guarantees significant speedups for iterative solvers, even when a large number of subdomains is used.
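For context, substructuring preconditioners of this type in the FEM literature typically satisfy a bound of the form
$$\kappa\!\left(P^{-1}A\right) \;\le\; C\left(1+\log\frac{H}{h}\right)^{2},$$
where $A$ is the Galerkin matrix of the hypersingular operator, $P$ is the preconditioner, and $C$ is independent of $H$ and $h$. The exponent $2$ is the classical value for FEM substructuring and is our assumption here; the abstract itself only states poly-logarithmic growth.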
We present a new approach, the Topograph, which reconstructs underlying physics processes, including the intermediary particles, by leveraging priors from the nature of particle-physics decays and the flexibility of message-passing graph neural networks. The Topograph not only solves the combinatorial assignment of observed final-state objects, associating them with their original mother particles, but directly predicts the properties of intermediate particles in hard-scatter processes and their subsequent decays. In contrast to standard combinatorial approaches or modern graph-neural-network approaches, whose complexity scales exponentially or quadratically, the complexity of Topographs scales linearly with the number of reconstructed objects. We apply Topographs to top-quark pair production in the all-hadronic decay channel, where we outperform the standard approach and match the performance of the state-of-the-art machine learning technique.
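To illustrate why message passing can scale linearly, the generic sketch below (illustrative only; not the Topograph architecture) performs one message-passing update over a graph whose topology mirrors a fixed decay structure, so the edge count grows linearly with the number of reconstructed objects.

```python
import numpy as np

def message_pass(h, edges, W_msg, W_upd):
    # h: (N, d) node features; edges: list of (src, dst) pairs
    m = np.zeros_like(h)
    for s, t in edges:                       # one message per edge
        m[t] += np.tanh(h[s] @ W_msg)
    return np.tanh(h @ W_upd + m)            # update every node from its messages

rng = np.random.default_rng(0)
h = rng.normal(size=(6, 8))                  # 6 reconstructed final-state objects
edges = [(i, 5) for i in range(5)]           # fixed decay-like topology: O(N) edges
h = message_pass(h, edges, rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
print(h.shape)                               # updated node embeddings, (6, 8)
```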
In the literature on Kleene algebra, a number of variants have been proposed which impose additional structure specified by a theory, such as Kleene algebra with tests (KAT) and the recent Kleene algebra with observations (KAO), or make specific assumptions about certain constants, as for instance in NetKAT. Many of these variants fit within the unifying perspective offered by Kleene algebra with hypotheses, which comes with a canonical language model constructed from a given set of hypotheses. For the case of KAT, this model corresponds to the familiar interpretation of expressions as languages of guarded strings. A relevant question therefore is whether Kleene algebra together with a given set of hypotheses is complete with respect to its canonical language model. In this paper, we revisit, combine and extend existing results on this question to obtain tools for proving completeness in a modular way. We showcase these tools by giving new and modular proofs of completeness for KAT, KAO and NetKAT, and we prove completeness for new variants of KAT: KAT extended with a constant for the full relation, KAT extended with a converse operation, and a version of KAT where the collection of tests only forms a distributive lattice.
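For readers new to this setting, the canonical language model is usually presented via a closure operator: given hypotheses $H$ and the standard language interpretation $\llbracket - \rrbracket$, an expression $e$ is interpreted as $\mathsf{cl}_H(\llbracket e \rrbracket)$, where $\mathsf{cl}_H(L)$ is the least language containing $L$ such that, for every hypothesis $e \le f$ in $H$ and all words $u, v$,
$$u \cdot \llbracket f \rrbracket \cdot v \subseteq \mathsf{cl}_H(L) \;\implies\; u \cdot \llbracket e \rrbracket \cdot v \subseteq \mathsf{cl}_H(L).$$
This formulation follows our recollection of the Kleene-algebra-with-hypotheses literature rather than the paper's own statement.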
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch will lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, at the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers at different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our approach for robust object detection in various domain-shift scenarios.
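The adversarial domain classifier can be realized with a gradient reversal layer, as in DANN; the compact PyTorch sketch below is a generic rendition of this component (the paper's exact heads, pooling, and losses may differ).

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)                  # identity in the forward pass
    @staticmethod
    def backward(ctx, grad):
        return -grad                         # flip gradients for the backbone

features = nn.Linear(64, 32)                 # stand-in for Faster R-CNN features
domain_head = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

x = torch.randn(8, 64)                       # pooled image/instance features
domain = torch.randint(0, 2, (8,))           # 0 = source domain, 1 = target domain
logits = domain_head(GradReverse.apply(features(x)))
loss = nn.functional.cross_entropy(logits, domain)
loss.backward()                              # backbone learns domain-invariant features
print(loss.item())
```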