A graph G is a multi-interval PCG if there exist an edge-weighted tree T with non-negative real weights and disjoint intervals of the non-negative real half-line such that each node of G is uniquely associated with a leaf of T and there is an edge between two nodes of G if and only if the weighted distance between their corresponding leaves in T lies within one of the intervals. If the number of intervals is k, we call the graph a k-interval-PCG; in symbols, G = k-interval-PCG(T, I1, . . . , Ik). It is known that 2-interval-PCGs do not contain all graphs, and the smallest known graph outside this class has 135 nodes. Here we prove that all graphs with at most 8 nodes are 2-interval-PCGs, thereby taking one step towards determining the smallest value of n for which there exists an n-node graph that is not a 2-interval-PCG.
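To make the definition concrete, the following is a minimal sketch (not from the paper) of how one can check whether a given edge-weighted tree and a set of intervals realize a given graph; the tree representation, the leaf-to-node map `leaf_of`, and all function names are assumptions made for illustration.

```python
# Illustrative sketch: given an edge-weighted tree, a node-to-leaf map,
# and intervals I1, ..., Ik, check whether (T, I1, ..., Ik) defines
# exactly the edge set of a target graph G.
from itertools import combinations

def leaf_distances(tree_adj, leaves):
    """All pairwise weighted distances between the given leaves.
    tree_adj: {tree node: [(neighbor, weight), ...]}"""
    dist = {}
    for s in leaves:
        seen = {s: 0.0}
        stack = [s]
        while stack:
            u = stack.pop()
            for v, w in tree_adj[u]:
                if v not in seen:
                    seen[v] = seen[u] + w
                    stack.append(v)
        for t in leaves:
            dist[(s, t)] = seen[t]
    return dist

def realizes(tree_adj, leaf_of, intervals, graph_edges):
    """True iff the k-interval-PCG defined by the tree and intervals
    has exactly the edges in graph_edges."""
    nodes = list(leaf_of)
    dist = leaf_distances(tree_adj, [leaf_of[u] for u in nodes])
    edges = set(frozenset(e) for e in graph_edges)
    for u, v in combinations(nodes, 2):
        d = dist[(leaf_of[u], leaf_of[v])]
        in_some_interval = any(lo <= d <= hi for lo, hi in intervals)
        if in_some_interval != (frozenset((u, v)) in edges):
            return False
    return True
```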
This paper investigates, for the first time, the applicability of the DK and DKM shell element classes within the framework of standard (and sequential) finite-element-based limit analysis, a direct method for determining the plastic collapse (and post-collapse) behaviour of structures. Although these elements cannot guarantee a strict upper bound, we show that they exhibit a significantly lower discretization error than the strict upper-bound shell element formulations used in the literature. Among these elements, quadrangular elements are observed to outperform triangular ones. The numerical results are also compared with those of the MITC shell elements, which have been shown to converge only in the pure membrane case.
Diffusion probabilistic models (DPMs) have exhibited excellent performance for high-fidelity image generation while suffering from inefficient sampling. Recent works accelerate the sampling procedure by proposing fast ODE solvers that leverage the specific ODE form of DPMs. However, they rely heavily on a specific parameterization during inference (such as noise/data prediction), which might not be the optimal choice. In this work, we propose a novel formulation towards the optimal parameterization during sampling that minimizes the first-order discretization error of the ODE solution. Based on such formulation, we propose \textit{DPM-Solver-v3}, a new fast ODE solver for DPMs by introducing several coefficients efficiently computed on the pretrained model, which we call \textit{empirical model statistics}. We further incorporate multistep methods and a predictor-corrector framework, and propose some techniques for improving sample quality at small numbers of function evaluations (NFE) or large guidance scales. Experiments show that DPM-Solver-v3 achieves consistently better or comparable performance in both unconditional and conditional sampling with both pixel-space and latent-space DPMs, especially in 5$\sim$10 NFEs. We achieve FIDs of 12.21 (5 NFE), 2.51 (10 NFE) on unconditional CIFAR10, and MSE of 0.55 (5 NFE, 7.5 guidance scale) on Stable Diffusion, bringing a speed-up of 15\%$\sim$30\% compared to previous state-of-the-art training-free methods. Code is available at \url{//github.com/thu-ml/DPM-Solver-v3}.
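For context, the following toy sketch shows a single first-order step of a deterministic diffusion ODE sampler under the common noise-prediction parameterization that the abstract contrasts; it is not DPM-Solver-v3 itself, and `eps_model` together with the `alpha_bar` schedule are assumed inputs.

```python
# Toy sketch (not DPM-Solver-v3): one first-order, deterministic step of
# a diffusion ODE sampler under the noise-prediction parameterization.
import numpy as np

def ddim_step(x_t, t, t_prev, eps_model, alpha_bar):
    """One deterministic (eta = 0) DDIM-style update from time t to t_prev.
    eps_model(x, t) is assumed to return the predicted noise."""
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = eps_model(x_t, t)
    x0_pred = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)   # data estimate
    return np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps
```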
We describe a new dependent-rounding algorithmic framework for bipartite graphs. Given a fractional assignment $y$ of values to edges of graph $G = (U \cup V, E)$, the algorithms return an integral solution $Y$ such that each right-node $v \in V$ has at most one neighboring edge $f$ with $Y_f = 1$, and where the variables $Y_e$ also satisfy broad nonpositive-correlation properties. In particular, for any edges $e_1, e_2$ sharing a left-node $u \in U$, the variables $Y_{e_1}, Y_{e_2}$ have strong negative-correlation properties, i.e., the expectation of $Y_{e_1} Y_{e_2}$ is significantly below $y_{e_1} y_{e_2}$. This algorithm is based on generating negatively-correlated Exponential random variables and using them in a contention-resolution scheme inspired by an algorithm of Im & Shadloo (2020). Our algorithm gives stronger and much more flexible negative-correlation properties. Dependent-rounding schemes with negative-correlation properties have been used in approximation algorithms for job scheduling on unrelated machines to minimize weighted completion times (Bansal, Srinivasan, & Svensson (2021), Im & Shadloo (2020), Im & Li (2023)). Using our new dependent-rounding algorithm, among other improvements, we obtain a $1.4$-approximation for this problem. This significantly improves over the prior $1.45$-approximation ratio of Im & Li (2023).
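To fix the setup, here is a minimal baseline rounding sketch that preserves the marginals $E[Y_e] = y_e$ and the right-degree constraint but, unlike the paper's scheme, yields no negative correlation across edges sharing a left-node; the edge representation and function name are assumptions made for illustration only.

```python
# Baseline sketch only; it satisfies the degree constraint and marginals
# but NOT the negative-correlation guarantees of the paper's algorithm.
import random

def independent_round(y, right_nodes):
    """y: dict mapping edge (u, v) -> fractional value, where the values
    over edges incident to each right node v sum to at most 1.
    Each right node independently keeps at most one incident edge,
    choosing edge (u, v) with probability y[(u, v)]."""
    Y = {e: 0 for e in y}
    for v in right_nodes:
        incident = [e for e in y if e[1] == v]
        r = random.random()
        acc = 0.0
        for e in incident:
            acc += y[e]
            if r < acc:
                Y[e] = 1
                break
    return Y
```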
We consider the problem of finding edge-disjoint paths between given pairs of vertices in a sufficiently strong $d$-regular expander graph $G$ with $n$ vertices. In particular, we describe a deterministic, polynomial-time algorithm which maintains an initially empty collection of edge-disjoint paths $\mathcal P$ in $G$ and fulfills any sequence of requests of the following two types: 1. Given two vertices $a$ and $b$ such that each appears as an endpoint in $O(d)$ paths in $\mathcal P$ and, additionally, $|\mathcal P| = O(n d / \log n)$, the algorithm finds a path of length at most $\log n$ connecting $a$ and $b$ which is edge-disjoint from all other paths in $\mathcal P$, and adds it to $\mathcal P$. 2. Remove a given path $P \in \mathcal{P}$ from $\mathcal{P}$. Importantly, each request is processed before seeing the next one. The upper bound on the length of the found paths and the constraints are the best possible up to a constant factor. This establishes the first online algorithm for finding edge-disjoint paths in expanders which also allows removals, significantly strengthening a long list of previous results on the topic.
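The sketch below only illustrates the online interface of such a data structure, not the paper's algorithm (and without its length or feasibility guarantees): used edges are tracked, a connect request is answered by BFS over the currently unused edges, and a removal frees the edges of a stored path. The class and method names are assumptions.

```python
# Interface sketch only: maintain edge-disjoint paths online, answering
# connect requests by BFS over currently unused edges.
from collections import deque

class DisjointPaths:
    def __init__(self, adj):
        self.adj = adj                    # {vertex: iterable of neighbors}
        self.used = set()                 # frozenset({u, v}) for used edges
        self.paths = {}                   # path id -> list of vertices
        self._next_id = 0

    def connect(self, a, b):
        """Find a path from a to b avoiding used edges; return its id."""
        parent = {a: None}
        queue = deque([a])
        while queue:
            u = queue.popleft()
            if u == b:
                break
            for v in self.adj[u]:
                if v not in parent and frozenset((u, v)) not in self.used:
                    parent[v] = u
                    queue.append(v)
        if b not in parent:
            return None                   # no edge-disjoint path found
        path, u = [], b
        while u is not None:
            path.append(u)
            u = parent[u]
        path.reverse()
        for x, y in zip(path, path[1:]):
            self.used.add(frozenset((x, y)))
        pid = self._next_id
        self._next_id += 1
        self.paths[pid] = path
        return pid

    def remove(self, pid):
        """Remove a previously added path, freeing its edges."""
        path = self.paths.pop(pid)
        for x, y in zip(path, path[1:]):
            self.used.discard(frozenset((x, y)))
```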
Linear temporal logic (LTL) and omega-regular objectives -- a superset of LTL -- have seen recent use as a way to express non-Markovian objectives in reinforcement learning. We introduce a model-based probably approximately correct (PAC) learning algorithm for omega-regular objectives in Markov decision processes. Unlike prior approaches, our algorithm learns from sampled trajectories of the system and does not require prior knowledge of the system's topology.
It is not sufficient merely to construct computational models that can accurately distinguish fake images from real images taken with a camera; it is equally important to ensure that these models are fair and do not produce biased outcomes that could harm certain social groups or pose serious security threats. Exploring fairness in forensic algorithms is an initial step towards correcting such biases. Since visual transformers are now widely used in image classification tasks owing to their ability to achieve high accuracy, this study explores bias in transformer-based image forensic algorithms that classify natural and GAN-generated images. Using a procured bias evaluation corpus, the study analyzes bias across gender, racial, affective, and intersectional domains with a wide set of individual and pairwise bias evaluation measures. Since robustness to image compression is an important consideration in forensic tasks, the study also analyzes the role of image compression in model bias. To this end, a two-phase evaluation setting is followed, in which one set of experiments is carried out in an uncompressed evaluation setting and the other in a compressed evaluation setting.
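As a concrete illustration of pairwise bias evaluation, the following sketch computes per-group accuracy and the absolute accuracy gap for every pair of groups; the group labels, data layout, and the choice of accuracy gap as the measure are assumptions, not the study's exact protocol.

```python
# Illustrative sketch of an individual (per-group) and pairwise bias measure.
from itertools import combinations

def per_group_accuracy(y_true, y_pred, groups):
    """y_true, y_pred: lists of labels; groups: list of group ids per sample."""
    acc = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        acc[g] = correct / len(idx)
    return acc

def pairwise_gaps(acc):
    """Absolute accuracy difference for every pair of groups."""
    return {(a, b): abs(acc[a] - acc[b])
            for a, b in combinations(sorted(acc), 2)}
```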
This paper explores the connections between tempering (for Sequential Monte Carlo; SMC) and entropic mirror descent to sample from a target probability distribution whose unnormalized density is known. We establish that tempering SMC is a numerical approximation of entropic mirror descent applied to the Kullback-Leibler (KL) divergence and obtain convergence rates for the tempering iterates. Our result motivates the tempering iterates from an optimization point of view, showing that tempering can be used as an alternative to Langevin-based algorithms to minimize the KL divergence. We exploit the connection between tempering and mirror descent iterates to justify common practices in SMC and propose improvements to algorithms in the literature.
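For readers unfamiliar with tempering SMC, the following is a minimal sketch of the standard tempered sampler the paper analyzes, with MCMC rejuvenation moves omitted for brevity; the function names and the temperature ladder are assumptions, and in practice an MCMC kernel targeting each intermediate distribution would follow the resampling step.

```python
# Minimal tempering-SMC sketch: particles start from a reference q and are
# reweighted along the path pi_beta proportional to q^(1-beta) * pi^beta.
import numpy as np

def tempered_smc(log_target, sample_ref, log_ref, n_particles, betas, rng):
    """betas: increasing array with betas[0] = 0 and betas[-1] = 1.
    log_target / log_ref evaluate unnormalized log-densities on arrays."""
    x = sample_ref(n_particles)                      # particles from reference
    logw = np.zeros(n_particles)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # incremental weight: (pi / q)^(b - b_prev) at the current particles
        logw += (b - b_prev) * (log_target(x) - log_ref(x))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)   # resample
        x, logw = x[idx], np.zeros(n_particles)
        # (an MCMC rejuvenation move targeting pi_beta would go here)
    return x
```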
Within the realm of image recognition, a specific category of multi-label classification (MLC) challenges arises when objects within the visual field may occlude one another, demanding simultaneous identification of both occluded and occluding objects. Traditional convolutional neural networks (CNNs) can tackle these challenges; however, such models tend to be bulky and attain only modest accuracy. Leveraging insights from cutting-edge neuroscience research, specifically the Holistic Bursting (HB) cell, this paper introduces a pioneering integrated network framework named HB-net. Built upon the foundation of HB cell clusters, HB-net is designed to address the intricate task of simultaneously recognizing multiple occluded objects within images. Various Bursting cell cluster structures are introduced, complemented by an evidence accumulation (EA) mechanism. Testing is conducted on multiple datasets comprising digits and letters. The results demonstrate that models incorporating the HB framework exhibit a significant $2.98\%$ enhancement in recognition accuracy compared to models without the HB framework ($1.0298$ times, $p=0.0499$). Although standard CNNs exhibit slightly greater robustness than HB-net models in high-noise settings, the models that combine the HB framework and the EA mechanism achieve accuracy and resilience comparable to ResNet50, despite having only three convolutional layers and approximately $1/30$ of the parameters. The findings of this study offer valuable insights for improving computer vision algorithms. The essential code is provided at //github.com/d-lab438/hb-net.git.
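As a rough illustration of what an evidence-accumulation readout can look like, the toy sketch below sums class scores over repeated noisy presentations until one class dominates; it is not the paper's EA mechanism, and the model interface, threshold, and stopping rule are assumptions.

```python
# Toy evidence-accumulation readout: accumulate class scores over repeated
# presentations and decide once the leading class is sufficiently ahead.
import numpy as np

def accumulate_evidence(model, images, threshold=5.0, max_steps=20):
    """images: iterable of noisy views of the same scene.
    model(image) is assumed to return a NumPy vector of per-class scores."""
    evidence = None
    for _, img in zip(range(max_steps), images):
        scores = model(img)
        evidence = scores if evidence is None else evidence + scores
        if evidence.max() - np.median(evidence) > threshold:
            break                         # enough evidence to decide
    return int(np.argmax(evidence))
```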
We investigate the randomized decision tree complexity of a specific class of read-once threshold functions. A read-once threshold formula can be defined by a rooted tree, every internal node of which is labeled by a threshold function $T_k^n$ (which outputs 1 exactly when at least $k$ of its $n$ inputs are 1) and each leaf by a distinct variable. Such a tree defines a Boolean function in a natural way. We focus on the randomized decision tree complexity of such functions when the underlying tree is a uniform tree with all its internal nodes labeled by the same threshold function. We prove lower bounds of the form $c(k,n)^d$, where $d$ is the depth of the tree. We also treat trees with alternating levels of AND and OR gates separately and show asymptotically optimal bounds, extending the known bounds for the binary case.
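To make the model concrete, the following toy sketch evaluates a uniform read-once $T_k^n$ formula by querying subtrees in random order and stopping as soon as the output of a node is forced, while counting leaf queries; it illustrates the kind of randomized query algorithm being lower-bounded, not the paper's argument, and the representation of the formula is an assumption.

```python
# Toy randomized evaluator for a depth-d uniform T_k^n formula; counts
# how many of the n**d leaves are queried.
import random

def eval_threshold_tree(k, n, depth, leaf_values, counter, pos=0):
    """leaf_values: flat list of bits at the n**depth leaves, pos indexes
    the leftmost leaf of the current subtree; counter[0] counts queries."""
    if depth == 0:
        counter[0] += 1
        return leaf_values[pos]
    ones, zeros = 0, 0
    children = list(range(n))
    random.shuffle(children)              # randomized evaluation order
    subtree = n ** (depth - 1)
    for i in children:
        if eval_threshold_tree(k, n, depth - 1, leaf_values, counter,
                               pos + i * subtree):
            ones += 1
        else:
            zeros += 1
        if ones >= k:
            return 1                      # threshold already met
        if zeros > n - k:
            return 0                      # threshold can no longer be met
    return 0
```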
A modelling framework for detecting shape shifts in functional profiles, combining the notion of the Fr\'echet mean with the concept of deformation models, is developed and proposed. The generalized notion of mean offered by the Fr\'echet mean is employed to capture the typical pattern of the profiles under study, while the concept of deformation models, and in particular of the shape-invariant model, allows for interpretable parameterizations of a profile's deviations from the typical shape. EWMA-type control charts compatible with the functional nature of the data and the employed deformation model are built and proposed, exploiting certain shape characteristics of the profiles with respect to the generalized mean and allowing for the identification of potential shifts in the shape and/or the deformation process. Potential shifts in the shape deformation process are further classified as significant shifts with respect to the amplitude and/or the phase of the profile under study. The proposed modelling and shift-detection framework is applied to a real-world case study in which daily concentration profiles of air pollutants in an area of the city of Athens are modelled, and profiles indicating hazardous concentration levels are successfully identified in most cases.
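For reference, the following is a minimal sketch of a generic EWMA monitoring scheme of the kind extended here to functional data; the parameter names and the choice of a scalar per-profile summary statistic (e.g., an estimated amplitude or phase deformation parameter) are assumptions, not the paper's functional charts.

```python
# Minimal EWMA control-chart sketch: flag observations whose EWMA statistic
# leaves the time-varying control limits.
import numpy as np

def ewma_chart(stats, lam=0.2, L=3.0):
    """stats: 1-D array of a per-profile summary statistic, with the
    in-control mean and standard deviation estimated from the data."""
    mu, sigma = stats.mean(), stats.std(ddof=1)
    z, alarms = mu, []
    for t, x in enumerate(stats):
        z = lam * x + (1.0 - lam) * z
        # variance of the EWMA statistic at step t + 1
        var_z = sigma**2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1)))
        if abs(z - mu) > L * np.sqrt(var_z):
            alarms.append(t)              # signal a potential shift
    return alarms
```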