Cohen-Addad, Le, Pilipczuk, and Pilipczuk [CLPP23] recently constructed a stochastic embedding with expected distortion $1+\varepsilon$ of $n$-vertex planar graphs (with polynomial aspect ratio) into graphs of treewidth $O(\varepsilon^{-1}\log^{13} n)$. Their embedding is the first to achieve polylogarithmic treewidth. However, a large gap remains between the treewidth of their embedding and the $\Omega(\log n)$ treewidth lower bound of Carroll and Goel [CG04]. In this work, we substantially narrow the gap by constructing a stochastic embedding into graphs of treewidth $O(\varepsilon^{-1}\log^{3} n)$. We obtain our embedding by improving several steps of the CLPP construction. First, we streamline their construction by showing that a low-treewidth embedding of any graph can be built from (i) a stochastic hierarchy of clusters and (ii) a stochastic balanced cut. Using a single hierarchy of clusters lets us shave off some logarithmic factors in this step. Next, we construct a stochastic hierarchy of clusters with optimal separating probability and hop bound based on the shortcut partition [CCLMST23, CCLMST24]. Finally, we construct a stochastic balanced cut with an improved trade-off between the cut size and the number of cuts. This is done via a new analysis of the contraction sequence introduced by [CLPP23]; our analysis gives an optimal treewidth bound for graphs admitting a contraction sequence.
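For concreteness, the guarantee in question is the standard notion of a stochastic embedding: a distribution $\mathcal{D}$ over pairs $(H, f)$ of host graphs and maps $f\colon V(G) \to V(H)$ that is non-contracting and expands distances by at most $1+\varepsilon$ in expectation,
\[
d_H(f(u), f(v)) \;\ge\; d_G(u,v) \quad \text{for every } (H,f) \in \operatorname{supp}(\mathcal{D}), \qquad
\mathbb{E}_{(H,f) \sim \mathcal{D}}\big[d_H(f(u), f(v))\big] \;\le\; (1+\varepsilon)\, d_G(u,v),
\]
for all $u, v \in V(G)$, where here every $H$ in the support has treewidth $O(\varepsilon^{-1}\log^{3} n)$.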
Predicting future dynamics is crucial for applications like autonomous driving and robotics, where understanding the environment is key. Existing pixel-level methods are computationally expensive and often focus on irrelevant details. To address these challenges, we introduce $\texttt{DINO-Foresight}$, a novel framework that operates in the semantic feature space of pretrained Vision Foundation Models (VFMs). Our approach trains a masked feature transformer in a self-supervised manner to predict the evolution of VFM features over time. By forecasting these features, we can apply off-the-shelf, task-specific heads for various scene understanding tasks. In this framework, VFM features are treated as a latent space to which different heads attach to perform specific tasks on future frames. Extensive experiments show that our framework outperforms existing methods, demonstrating its robustness and scalability. Additionally, we highlight how intermediate transformer representations in $\texttt{DINO-Foresight}$ improve downstream task performance, offering a promising path for the self-supervised enhancement of VFM features. We provide the implementation code at https://github.com/Sta8is/DINO-Foresight.
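As a rough illustration of this recipe (not the authors' exact architecture: the feature dimension, depth, token count, context length, and loss below are placeholder assumptions), a transformer can be trained to regress the frozen VFM features of a future frame from those of past frames:

```python
import torch
import torch.nn as nn

class FeatureForecaster(nn.Module):
    """Minimal sketch: predict the next frame's VFM feature tokens
    from a short context of past frames (hyperparameters illustrative)."""
    def __init__(self, dim=768, depth=6, heads=12, num_tokens=196, horizon=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # learned positions for up to `horizon` frames of `num_tokens` tokens
        self.pos = nn.Parameter(torch.zeros(1, horizon * num_tokens, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.num_tokens = num_tokens

    def forward(self, past_feats):
        # past_feats: (B, T-1, N, D) frozen VFM features of the context frames
        B, Tm1, N, D = past_feats.shape
        x = past_feats.reshape(B, Tm1 * N, D)
        mask = self.mask_token.expand(B, N, D)   # placeholder for target frame
        x = torch.cat([x, mask], dim=1) + self.pos[:, : (Tm1 + 1) * N]
        out = self.encoder(x)
        return out[:, -N:]                        # predicted target-frame tokens

# training signal: regress the frozen VFM features of the future frame
model = FeatureForecaster()
past = torch.randn(2, 3, 196, 768)    # e.g. DINOv2-like patch tokens
target = torch.randn(2, 196, 768)     # VFM features of the frame to predict
loss = nn.functional.smooth_l1_loss(model(past), target)
loss.backward()
```

Since the supervision is the frozen VFM features themselves, no labels are needed; task heads (segmentation, depth, etc.) are attached only at inference time on the predicted features.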
A convergent numerical method for $\alpha$-dissipative solutions of the Hunter-Saxton equation is derived. The method is based on applying a tailor-made projection operator to the initial data, and then solving exactly using the generalized method of characteristics. The projection step is the only step that introduces any approximation error. It is therefore crucial that its design ensures not only a good approximation of the initial data, but also that errors due to the energy dissipation at later times remain small. Furthermore, it is shown that the main quantity of interest, the wave profile, converges in $L^{\infty}$ for all $t \geq 0$, while a subsequence of the energy density converges weakly for almost every time.
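For reference, the Hunter-Saxton equation in its differentiated form reads
\[
\big(u_t + u\, u_x\big)_x \;=\; \tfrac{1}{2}\, u_x^2,
\]
where $u$ is the wave profile and $u_x^2\,dx$ the associated energy density; in the $\alpha$-dissipative framework, roughly speaking, an $\alpha$-fraction of the energy that concentrates at wave breaking is removed, which is exactly the dissipation mechanism the projection operator must be designed to respect.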
We present polynomial-time SDP-based algorithms for the following problem: for fixed $k \leq \ell$, given a real number $\epsilon>0$ and a graph $G$ that admits a $k$-colouring with a $\rho$-fraction of the edges coloured properly, the algorithm returns an $\ell$-colouring of $G$ with an $(\alpha \rho - \epsilon)$-fraction of the edges coloured properly, in time polynomial in the size of $G$ and in $1/\epsilon$. Our algorithms are based on those of Frieze and Jerrum [Algorithmica'97] and of Karger, Motwani and Sudan [JACM'98]. When $k$ is fixed and $\ell$ grows large, our algorithm achieves an approximation ratio of $\alpha = 1 - o(1 / \ell)$. When $k, \ell$ are both large, our algorithm achieves an approximation ratio of $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell) - O(1 / k^2)$; if we fix $d = \ell - k$ and let $k, \ell$ grow large, this is $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell)$. By extending the results of Khot, Kindler, Mossel and O'Donnell [SICOMP'07] to the promise setting, we show that for large $k$ and $\ell$, assuming Khot's Unique Games Conjecture (\UGC), it is \NP-hard to achieve an approximation ratio $\alpha$ greater than $1 - 1 / \ell + 2 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided that $\ell$ is bounded by a function that is $o(\exp(\sqrt[3]{k}))$. For the case where $d = \ell - k$ is fixed, this bound matches the performance of our algorithm up to $o(\ln \ell / k \ell)$. Furthermore, by extending the results of Guruswami and Sinop [ToC'13] to the promise setting, we prove that it is \NP-hard to achieve an approximation ratio greater than $1 - 1 / \ell + 8 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided again that $\ell$ is bounded as before (but this time without assuming the \UGC).
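The rounding step at the heart of such Frieze-Jerrum-style algorithms is easy to sketch (illustrative only: the SDP solve and the paper's analysis are omitted, and `V` below stands for the unit vectors extracted from the relaxation): each vertex is assigned to the random Gaussian centre with which its SDP vector has the largest inner product.

```python
import numpy as np

def round_sdp_to_colouring(V, ell, seed=0):
    """V: (n, d) array whose rows are unit vectors from an SDP relaxation.
    Assign each vertex to the centre z_j maximising <v_i, z_j>."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((ell, V.shape[1]))   # ell random centres
    return np.argmax(V @ Z.T, axis=1)            # colour of each vertex

# toy check on random unit vectors and a random edge set
rng = np.random.default_rng(1)
V = rng.standard_normal((100, 8))
V /= np.linalg.norm(V, axis=1, keepdims=True)
edges = [(i, j) for i in range(100) for j in range(i + 1, 100)
         if rng.random() < 0.05]
col = round_sdp_to_colouring(V, ell=5)
proper = sum(col[i] != col[j] for i, j in edges) / len(edges)
print(f"{proper:.2f} of edges properly coloured")
```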
$\ell_1$ regularization is used to preserve edges or enforce sparsity in a solution to an inverse problem. We investigate the Split Bregman and Majorization-Minimization iterative methods, which turn this non-smooth minimization problem into a sequence of steps that include solving an $\ell_2$-regularized minimization problem. We consider selecting the regularization parameter in the inner generalized Tikhonov regularization problems that occur at each iteration of these $\ell_1$ iterative methods. The generalized cross validation (GCV) and $\chi^2$ degrees-of-freedom methods are extended to these inner problems. In particular, for the $\chi^2$ method this includes extending the $\chi^2$ result to problems in which the regularization operator has more rows than columns, and showing how to use the $A$-weighted generalized inverse to estimate prior information at each inner iteration. Numerical experiments for image deblurring problems demonstrate that it is more effective to select the regularization parameter automatically within the iterative schemes than to keep it fixed for all iterations. Moreover, an appropriate regularization parameter can be estimated in the early iterations and then kept fixed until convergence.
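To convey the flavour of the inner step, here is a minimal sketch of the GCV parameter choice for a standard-form Tikhonov problem $\min_x \|Ax-b\|^2 + \lambda^2\|x\|^2$ via the SVD (the paper treats the generalized case with an operator $L$ and the $\chi^2$ criterion, neither of which is reproduced here):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gcv_tikhonov(A, b):
    """Choose lambda by minimizing the GCV function
    GCV(lam) = ||(I - A A_lam^#) b||^2 / trace(I - A A_lam^#)^2,
    then return the Tikhonov solution for that lambda."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    b_out = b - U @ beta                   # component of b outside range(A)
    def gcv(log_lam):
        lam2 = np.exp(2 * log_lam)
        filt = lam2 / (s**2 + lam2)        # 1 - filter factors
        resid = np.sum((filt * beta)**2) + b_out @ b_out
        denom = (np.sum(filt) + (len(b) - len(s)))**2
        return resid / denom
    res = minimize_scalar(gcv, bounds=(-15, 5), method="bounded")
    lam = np.exp(res.x)
    x = Vt.T @ (s / (s**2 + lam**2) * beta)
    return x, lam

# toy usage on a mildly ill-conditioned least-squares problem
A = np.vander(np.linspace(0, 1, 40), 20)
x_true = np.random.default_rng(0).standard_normal(20)
b = A @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(40)
x, lam = gcv_tikhonov(A, b)
```

In the iterative $\ell_1$ schemes, a routine like this would be invoked once per outer iteration, so that the inner parameter tracks the changing right-hand side.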
This work introduces a novel approach to constructing DNA codes from linear codes over a non-chain extension of $\mathbb{Z}_4$. We study $(\text{\textbaro},\mathfrak{d},\gamma)$-constacyclic codes over the ring $\mathfrak{R}=\mathbb{Z}_4+\omega\mathbb{Z}_4$, $\omega^2=\omega$, with an $\mathfrak{R}$-automorphism $\text{\textbaro}$ and a $\text{\textbaro}$-derivation $\mathfrak{d}$ over $\mathfrak{R}$. Further, we determine the generators of $(\text{\textbaro},\mathfrak{d},\gamma)$-constacyclic codes of arbitrary length over $\mathfrak{R}$ and establish the reverse constraint for these codes. Besides the necessary and sufficient criterion for deriving reverse-complement codes, we present a construction to obtain DNA codes from these reversible codes. Moreover, we use another construction on the $(\text{\textbaro},\mathfrak{d},\gamma)$-constacyclic codes to generate additional optimal and new classical codes. Finally, we provide several examples of $(\text{\textbaro},\mathfrak{d},\gamma)$-constacyclic codes and construct DNA codes from the established results. The parameters of the resulting linear codes over $\mathbb{Z}_4$ are optimal or improve on the codes available in the database \cite{z4codes}.
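To make the reverse-complement notions concrete, here is a small sketch using one common $\mathbb{Z}_4\leftrightarrow$DNA correspondence (the paper's own map over $\mathfrak{R}$ may differ): with $A=0$, $C=1$, $T=2$, $G=3$, Watson-Crick complementation is $x \mapsto x+2 \pmod 4$.

```python
# Assumed correspondence (illustrative): A=0, C=1, T=2, G=3,
# so the Watson-Crick complement of x is x + 2 (mod 4).
BASE = "ACTG"

def to_dna(word):
    """Render a Z4 vector as a DNA string."""
    return "".join(BASE[x % 4] for x in word)

def reverse_complement(word):
    """Reverse the word and complement each symbol."""
    return [(x + 2) % 4 for x in reversed(word)]

c = [0, 0, 1, 2]                      # a Z4 vector
print(to_dna(c))                      # AACT
print(to_dna(reverse_complement(c)))  # AGTT
# A reverse-complement DNA code contains reverse_complement(c) for every c.
```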
Efficient algorithms for solving the Smallest Enclosing Sphere (SES) problem, such as Welzl's algorithm, often fail to handle degenerate subsets of points in 3D space. Degeneracies and ill-posed configurations present significant challenges, leading to convergence failures, inaccuracies, or increased computational cost. Existing improvements to these algorithms, while addressing some of these issues, are either computationally expensive or only partially effective. In this paper, we propose a hybrid algorithm designed to mitigate degeneracy while maintaining an overall computational complexity of $O(N)$. By combining robust preprocessing steps with efficient core computations, our approach avoids the pitfalls of degeneracy without sacrificing scalability. The proposed method is validated through theoretical analysis and experimental results, demonstrating its efficacy on degenerate configurations and its high efficiency in practice.
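For orientation, here is a minimal sketch of the classical Welzl recursion together with a least-squares circumsphere solve, one simple way to keep degenerate boundary sets (collinear or coplanar support points) from producing a singular linear system; the paper's hybrid preprocessing is not reproduced here.

```python
import numpy as np
import random

def sphere_from(boundary):
    """Sphere through all boundary points (0 to 4 of them). With centre
    c = P[0] + x and q_i = P[i] - P[0], |x|^2 = |x - q_i|^2 gives the
    linear system 2 q_i . x = |q_i|^2; the min-norm least-squares
    solution keeps x in span(q_i), so degenerate sets stay well-posed."""
    if len(boundary) == 0:
        return np.zeros(3), 0.0
    P = np.asarray(boundary, dtype=float)
    Q = P[1:] - P[0]
    if len(Q) == 0:
        return P[0], 0.0
    x, *_ = np.linalg.lstsq(2.0 * Q, np.einsum("ij,ij->i", Q, Q), rcond=None)
    return P[0] + x, float(np.linalg.norm(x))

def welzl(points, boundary=()):
    """Welzl's randomized recursion; expected O(N) on shuffled input."""
    if not points or len(boundary) == 4:
        return sphere_from(boundary)
    p, rest = points[0], points[1:]
    c, r = welzl(rest, boundary)
    if np.linalg.norm(p - c) <= r + 1e-9:
        return c, r
    return welzl(rest, boundary + (p,))

pts = list(np.random.default_rng(0).standard_normal((150, 3)))
pts += [np.array([t, t, t]) for t in np.linspace(-1, 1, 20)]  # collinear cluster
random.shuffle(pts)        # random order is what gives expected O(N)
c, r = welzl(pts)
assert all(np.linalg.norm(p - c) <= r + 1e-6 for p in pts)
```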
Methods of computational quantum chemistry provide accurate approximations of molecular properties crucial for computer-aided drug discovery and other areas of chemical science. However, high computational complexity limits the scalability of their applications. Neural network potentials (NNPs) are a promising alternative to quantum chemistry methods, but they require large and diverse datasets for training. This work presents a new dataset and benchmark called $\nabla^2$DFT that is based on the nablaDFT dataset. It contains twice as many molecular structures, three times as many conformations, new data types and tasks, and state-of-the-art models. The dataset includes energies, forces, 17 molecular properties, Hamiltonian and overlap matrices, and a wavefunction object. All calculations were performed at the DFT level ($\omega$B97X-D/def2-SVP) for each conformation. Moreover, $\nabla^2$DFT is the first dataset to contain relaxation trajectories for a substantial number of drug-like molecules. We also introduce a novel benchmark for evaluating NNPs on molecular property prediction, Hamiltonian prediction, and conformational optimization tasks. Finally, we propose an extendable framework for training NNPs and implement 10 models within it.
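For context, NNPs trained on such energy/force data typically fit the energy and obtain forces by differentiation. A generic training objective (illustrative only; `model`, its signature, and the force weight are assumptions, not the benchmark's actual API) looks like:

```python
import torch
import torch.nn.functional as F

def energy_force_loss(model, z, pos, e_ref, f_ref, w_f=10.0):
    """Generic NNP objective: supervise the predicted energy and take the
    force as the negative gradient of the energy w.r.t. atomic positions.
    z: atomic numbers, pos: (n_atoms, 3) coordinates."""
    pos = pos.detach().requires_grad_(True)
    e_pred = model(z, pos)                                   # predicted energy
    f_pred = -torch.autograd.grad(e_pred.sum(), pos, create_graph=True)[0]
    return F.mse_loss(e_pred, e_ref) + w_f * F.mse_loss(f_pred, f_ref)
```

Conformational-optimization benchmarks then probe whether relaxing geometries under the learned potential reproduces the DFT relaxation trajectories.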
Infrared and visible image fusion (IVIF) is a crucial technique for enhancing visual performance by integrating unique information from different modalities into one fused image. Existing methods concentrate on fusing undisturbed data and overlook the impact of deliberate interference on the effectiveness of the fusion results. To investigate the robustness of fusion models, we propose in this paper a novel adversarial-attack-resilient network, called $\textrm{A}^{\textrm{2}}$RNet. Specifically, we develop an adversarial paradigm with an anti-attack loss function to implement adversarial attacks and training. The paradigm is built on the intrinsic nature of IVIF and provides a robust foundation for future research. Under this paradigm, we adopt a U-Net backbone with a transformer-based defensive refinement module (DRM), which preserves the quality of the fused image in a robust coarse-to-fine manner. Compared to previous works, our method mitigates the adverse effects of adversarial perturbations and consistently maintains high-fidelity fusion results. Furthermore, the performance of downstream tasks is also well maintained under adversarial attacks. Code is available at https://github.com/lok-18/A2RNet.
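A generic version of the adversarial ingredient can be sketched as a plain PGD attack on both input modalities that pushes the fused output away from the clean result (illustrative only; the paper's anti-attack loss and attack model may differ, and `fuse` stands for any fusion network taking infrared and visible inputs in $[0,1]$):

```python
import torch
import torch.nn.functional as F

def pgd_attack(fuse, ir, vis, eps=8/255, alpha=2/255, steps=5):
    """PGD on both modalities: maximise the L1 gap between the fused
    output under perturbation and the clean fused output."""
    target = fuse(ir, vis).detach()            # clean fusion result
    ir_adv, vis_adv = ir.clone(), vis.clone()
    for _ in range(steps):
        ir_adv = ir_adv.detach().requires_grad_(True)
        vis_adv = vis_adv.detach().requires_grad_(True)
        loss = F.l1_loss(fuse(ir_adv, vis_adv), target)
        g_ir, g_vis = torch.autograd.grad(loss, [ir_adv, vis_adv])
        with torch.no_grad():                   # ascent step + eps-ball projection
            ir_adv = torch.maximum(torch.minimum(
                ir_adv + alpha * g_ir.sign(), ir + eps), ir - eps).clamp(0, 1)
            vis_adv = torch.maximum(torch.minimum(
                vis_adv + alpha * g_vis.sign(), vis + eps), vis - eps).clamp(0, 1)
    return ir_adv.detach(), vis_adv.detach()
```

Adversarial training then alternates: craft perturbed inputs with the current model and update the network on a fusion loss evaluated at those inputs.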
We consider two-dimensional $(\lambda_1, \lambda_2)$-constacyclic codes over $\mathbb{F}_{q}$ of area $MN$, where $q$ is a power of a prime $p$ with $\gcd(M,p)=1$ and $\gcd(N,p)=1$. With the help of the common zero (CZ) set, we characterize 2-D constacyclic codes. Further, we provide an algorithm to construct an ideal basis for these codes using their essential common zero (ECZ) sets. We describe the duals of 2-D constacyclic codes. Finally, we provide an encoding scheme for generating 2-D constacyclic codes and present an example illustrating that 2-D constacyclic codes can have better minimum distance than their cyclic counterparts with the same code size and code rate.
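Concretely, under the usual polynomial correspondence a 2-D $(\lambda_1,\lambda_2)$-constacyclic code of area $MN$ is an ideal
\[
C \;\trianglelefteq\; \mathbb{F}_q[x,y]\big/\langle x^M - \lambda_1,\; y^N - \lambda_2\rangle,
\]
where a codeword $(c_{ij})$ is identified with $c(x,y)=\sum_{i,j} c_{ij}\,x^i y^j$, so that the constacyclic shifts of rows and columns correspond to multiplication by $x$ and $y$, respectively.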
We propose a $C^0$ interior penalty method for the fourth-order stream function formulation of the surface Stokes problem. The scheme uses continuous, piecewise polynomial spaces defined on an approximate surface. We show that the resulting discretization is positive definite and derive error estimates in various norms in terms of the polynomial degree of the finite element space as well as the polynomial degree used to approximate the geometry. A notable feature of the scheme is that it does not explicitly depend on the Gauss curvature of the surface. This is achieved via a novel integration-by-parts formula for the surface biharmonic operator.
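To convey the structure of such a scheme, recall the planar prototype of a $C^0$ interior penalty discretization of the biharmonic operator (the surface case adds geometric terms, which the new integration-by-parts formula handles without explicit Gauss curvature): the bilinear form is
\[
a_h(u,v) \;=\; \sum_{T\in\mathcal{T}_h}\int_T D^2 u : D^2 v
\;+\; \sum_{e\in\mathcal{E}_h}\int_e \Big( \{\!\!\{\partial^2_{nn} u\}\!\!\}\,[\![\partial_n v]\!] + \{\!\!\{\partial^2_{nn} v\}\!\!\}\,[\![\partial_n u]\!] \Big)
\;+\; \sum_{e\in\mathcal{E}_h} \frac{\sigma}{h_e}\int_e [\![\partial_n u]\!]\,[\![\partial_n v]\!],
\]
where $[\![\cdot]\!]$ and $\{\!\!\{\cdot\}\!\!\}$ denote jumps and averages across element edges and $\sigma>0$ is the penalty parameter: since the finite element space is only $C^0$, the jump of the normal derivative $\partial_n u$ is what the penalty term controls in place of the missing $C^1$-continuity.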