In this paper we present a novel approach for the design of high order general boundary conditions when approximating solutions of the Euler equations on domains with curved boundaries, using meshes that may not be boundary-conforming. When dealing with curved boundaries and/or unfitted discretizations, the consistency of boundary conditions is a well-known challenge, especially in the context of high order schemes. To tackle such consistency problems, the so-called Reconstruction for Off-site Data (ROD) method has recently been introduced in the finite volume framework: it performs a boundary polynomial reconstruction that embeds the considered boundary treatment through a constrained minimization problem. This work is devoted to the development of the ROD approach in the context of discontinuous finite elements. We use the genuine space-time nature of the local ADER predictors to reformulate the ROD as a single space-time reconstruction procedure. This allows us to avoid a new reconstruction (linear system inversion) at each sub-time node and to retrieve a single space-time polynomial that embeds the considered boundary conditions for the entire space-time element. Several numerical experiments demonstrate the consistency of the new approach for all kinds of boundary conditions. Computations involving the interaction of shocks with embedded curved boundaries are made possible through an a posteriori limiting technique.
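As a hedged illustration of the constrained-minimization idea behind ROD-type reconstructions (not the paper's scheme, which is space-time and multi-dimensional), the sketch below fits a 1D polynomial to interior data in the least-squares sense while exactly enforcing a Dirichlet value at an off-site boundary location, via a small KKT system; all point locations and the quadratic degree are illustrative assumptions.

```python
import numpy as np

# Minimal 1D sketch (not the paper's scheme): reconstruct a polynomial
# p(x) = c0 + c1*x + ... that fits interior point values in the
# least-squares sense while exactly enforcing a Dirichlet condition
# p(xb) = gb at an off-site boundary location xb.
def rod_like_reconstruction(xs, us, xb, gb, degree=2):
    A = np.vander(xs, degree + 1, increasing=True)    # fit matrix
    B = np.vander([xb], degree + 1, increasing=True)  # constraint row
    n = degree + 1
    # KKT system for: min ||A c - us||^2  s.t.  B c = gb
    K = np.block([[A.T @ A, B.T], [B, np.zeros((1, 1))]])
    rhs = np.concatenate([A.T @ us, [gb]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # polynomial coefficients

# Toy usage: interior samples of u(x) = sin(x), boundary value at xb = 0.3
xs = np.array([0.5, 0.7, 0.9, 1.1])
coeffs = rod_like_reconstruction(xs, np.sin(xs), xb=0.3, gb=np.sin(0.3))
```

In the paper's space-time reformulation the analogous constrained problem is solved once per space-time element rather than once per sub-time node.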
In this paper, we consider interpolation by \textit{completely monotonous} polynomials (CMPs for short), that is, polynomials with non-negative real coefficients. In particular, given a finite set $S\subset \mathbb{R}_{>0} \times \mathbb{R}_{\geq 0}$, we consider \textit{the minimal polynomial} of $S$, introduced by Berg [1985], which is `minimal' in the sense that it is eventually majorized by all other CMPs interpolating $S$. We give an upper bound on the degree of the minimal polynomial of $S$ when it exists. Furthermore, we give another algorithm for computing the minimal polynomial of a given $S$, which utilizes an order structure on sign sequences. Applying the upper bound above, we also analyze the computational complexity of algorithms for computing minimal polynomials, including ours.
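For intuition only, the snippet below is a naive numerical sketch of the interpolation constraint itself: it searches, degree by degree, for a polynomial with non-negative coefficients fitting the points of $S$ via non-negative least squares on a Vandermonde matrix. It is not the paper's sign-sequence algorithm, does not produce the minimal polynomial, and the degree cap and tolerance are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import nnls

# Naive sketch (not the paper's algorithm): try increasing degrees and
# solve a non-negative least-squares problem on the Vandermonde matrix,
# accepting the first degree at which the points of S are interpolated
# (residual ~ 0) by a polynomial with non-negative coefficients.
def cmp_interpolant(S, max_degree=10, tol=1e-9):
    xs = np.array([x for x, _ in S], dtype=float)
    ys = np.array([y for _, y in S], dtype=float)
    for d in range(len(S) - 1, max_degree + 1):
        V = np.vander(xs, d + 1, increasing=True)
        coeffs, residual = nnls(V, ys)
        if residual < tol:
            return coeffs          # c[k] >= 0 for all k
    return None                    # no CMP of degree <= max_degree found

# Toy usage: S = {(1, 2), (2, 5)} admits e.g. p(x) = 1 + x^2
print(cmp_interpolant([(1.0, 2.0), (2.0, 5.0)]))
```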
This paper investigates the effect of the design matrix on the ability (or inability) to estimate a sparse parameter in linear regression. More specifically, we characterize the optimal rate of estimation when the smallest singular value of the design matrix is bounded away from zero. In addition to this information-theoretic result, we provide and analyze a procedure which is simultaneously statistically optimal and computationally efficient, based on soft thresholding the ordinary least squares estimator. Most surprisingly, we show that the Lasso estimator -- despite its widespread adoption for sparse linear regression -- is provably minimax rate-suboptimal when the minimum singular value is small. We present a family of design matrices and sparse parameters for which we can guarantee that the Lasso with any choice of regularization parameter -- including those which are data-dependent and randomized -- would fail in the sense that its estimation rate is suboptimal by polynomial factors in the sample size. Our lower bound is strong enough to preclude the statistical optimality of all forms of the Lasso, including its highly popular penalized, norm-constrained, and cross-validated variants.
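A minimal sketch of the soft-thresholded ordinary least squares estimator the abstract refers to is given below; the threshold level `tau` here is purely illustrative, whereas the paper calibrates it to the noise level and the design.

```python
import numpy as np

def soft_threshold_ols(X, y, tau):
    """Soft-threshold the OLS estimate coordinatewise.

    tau is an illustrative threshold; the statistically optimal choice
    analyzed in the paper depends on the noise and design and is not
    reproduced here.
    """
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - tau, 0.0)

# Toy usage with a well-conditioned random design and a 3-sparse truth
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = 1.0
y = X @ beta + 0.1 * rng.standard_normal(100)
print(soft_threshold_ols(X, y, tau=0.05))
```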
Conceptual architecture involves a highly creative exploration of novel ideas, often taken from other disciplines as architects consider radical new forms, materials, textures and colors for buildings. While today's generative AI systems can produce remarkable results, they lack the creativity demonstrated for decades by evolutionary algorithms. SCAPE, our proposed tool, combines evolutionary search with generative AI, enabling users to explore creative, good-quality designs inspired by their initial input through a simple point-and-click interface. SCAPE injects randomness into generative AI and adds memory, using the built-in language skills of GPT-4 to vary prompts via text-based mutation and crossover. We demonstrate that, compared to DALL-E 3, SCAPE yields a 67% improvement in image novelty, plus improvements in quality and effectiveness of use; we show that in just 3 iterations SCAPE achieves a 24% increase in image novelty, enabling effective exploration and user-driven optimization of images. More than 20 independent architects assessed SCAPE and provided markedly positive feedback.
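A hedged sketch of text-based mutation and crossover of prompts via an LLM is shown below; the `llm` function, prompt wording, and population sizes are hypothetical placeholders, not SCAPE's actual GPT-4 integration.

```python
# Hypothetical sketch of prompt mutation/crossover via an LLM; `llm` is a
# placeholder for a chat-completion client and the prompts are illustrative.
import random

def llm(instruction: str) -> str:
    raise NotImplementedError("plug in your own LLM client here")

def mutate(prompt: str) -> str:
    return llm("Rewrite this design brief, changing one visual idea "
               f"(form, material, texture or colour) at random:\n{prompt}")

def crossover(prompt_a: str, prompt_b: str) -> str:
    return llm("Combine the key ideas of these two design briefs into "
               f"one coherent brief:\nA: {prompt_a}\nB: {prompt_b}")

def evolve(population: list[str], keep: int = 4) -> list[str]:
    # Keep the user-selected prompts (selection happens through the
    # point-and-click interface), then refill the population with
    # mutated and recombined variants.
    parents = population[:keep]
    children = [mutate(random.choice(parents)) for _ in range(keep)]
    children += [crossover(*random.sample(parents, 2)) for _ in range(keep)]
    return parents + children
```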
In this paper, we propose a novel approach to testing the equality of high-dimensional mean vectors of several populations via the weighted $L_2$-norm. We establish the asymptotic normality of the test statistics under the null hypothesis. We also explain theoretically why our test statistics can be highly useful in weakly dense cases, where nonzero signal is present in the mean vectors. Furthermore, we compare the proposed test with existing tests using simulation results, demonstrating that the weighted $L_2$-norm-based test statistic exhibits favorable properties in terms of both size and power.
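As a purely schematic illustration (not the paper's exact statistic), a weighted $L_2$-norm statistic in the two-sample case could take the form
$$ T_n \;=\; \sum_{j=1}^{p} w_j\,\bigl(\bar{X}_{j} - \bar{Y}_{j}\bigr)^{2} \;-\; \widehat{\mu}_T, \qquad \frac{T_n}{\widehat{\sigma}_T} \;\xrightarrow{\;d\;}\; \mathcal{N}(0,1) \quad \text{under } H_0:\ \mu_X = \mu_Y, $$
where the weights $w_j$ and the centering and scaling terms $\widehat{\mu}_T$, $\widehat{\sigma}_T$ are placeholders; the paper's choice of weights and its several-population generalization may differ.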
In this paper we study the interactions between so-called fractional relaxations of the integer programs (IPs) which encode homomorphism and isomorphism of relational structures. We give a combinatorial characterization of a certain natural linear programming (LP) relaxation of homomorphism in terms of fractional isomorphism. As a result, we show that the families of constraint satisfaction problems (CSPs) that are solvable by such a linear program are precisely those that are closed under an equivalence relation which we call Weisfeiler-Leman invariance. We also generalize this result to the much broader framework of Promise Valued Constraint Satisfaction Problems, which brings together two well-studied extensions of the CSP framework. Finally, we consider the hierarchies of increasingly tighter relaxations of the homomorphism and isomorphism IPs obtained by applying the Sherali-Adams and Weisfeiler-Leman methods respectively. We extend our combinatorial characterization of the basic LP to higher levels of the Sherali-Adams hierarchy, and we generalize a well-known logical characterization of the Weisfeiler-Leman test from graphs to relational structures.
In this paper, we consider multi-robot localization problems with a focus on cooperative localization and observability analysis of relative pose estimation. For cooperative localization, extra information is available to each robot via the communication network and message passing. If the odometry data of a target robot can be transmitted to the ego-robot, then the observability of their relative pose estimation can be achieved by range-only or bearing-only measurements, provided both of their linear velocities are non-zero. If the odometry data of a target robot is not directly transmitted but instead estimated by the ego-robot, then both range and bearing measurements are required to guarantee the observability of relative pose estimation. For ROS/Gazebo simulations, we consider four different sensing and communication structures in which extended Kalman filtering (EKF) and pose graph optimization (PGO) estimation with different robust loss functions (filtering and smoothing with different batch sizes of sliding window) are compared in terms of estimation accuracy. For hardware experiments, two Turtlebot3 robots equipped with UWB modules are used for real-world inter-robot relative pose estimation, in which both EKF and PGO are applied and compared.
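For reference, the planar range and bearing measurement models that underpin such observability discussions are sketched below; these are standard textbook forms, not the paper's specific filter or factor-graph equations.

```python
import numpy as np

# Generic planar measurement models for a target robot's position
# expressed in the ego-robot frame (standard forms, not the paper's).
def range_measurement(rel_pos: np.ndarray) -> float:
    # rel_pos = [dx, dy]: target position in the ego frame
    return float(np.hypot(rel_pos[0], rel_pos[1]))

def bearing_measurement(rel_pos: np.ndarray) -> float:
    return float(np.arctan2(rel_pos[1], rel_pos[0]))

def range_jacobian(rel_pos: np.ndarray) -> np.ndarray:
    # d(range)/d(rel_pos); the kind of Jacobian used in an EKF update
    r = np.hypot(rel_pos[0], rel_pos[1])
    return rel_pos / r

# Toy usage
rel = np.array([2.0, 1.0])
print(range_measurement(rel), bearing_measurement(rel), range_jacobian(rel))
```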
In this paper, we focus on the design of binary constant-weight codes that admit low-complexity encoding and decoding algorithms, and whose size is a power of $2$. We construct a family of $(n=2^\ell, M=2^k, d=2)$ constant-weight codes ${\cal C}[\ell, r]$ parameterized by integers $\ell \geq 3$ and $1 \leq r \leq \lfloor \frac{\ell+3}{4} \rfloor$, by encoding information in the gaps between successive $1$'s of a vector. The code has weight $w = \ell$ and combinatorial dimension $k$ that scales quadratically with $\ell$. The encoding time is linear in the input size $k$, and the decoding time is poly-logarithmic in the input size $n$, discounting the linear time spent on parsing the input. Encoding and decoding algorithms of similar codes known in the information-theoretic and combinatorial literature require the computation of a large number of binomial coefficients. Our algorithms fully eliminate the need to evaluate binomial coefficients. While the code pays a natural price in $k$, it performs fairly well against the information-theoretic upper bound $\lfloor \log_2 {n \choose w} \rfloor$. When $\ell = 3$, the code is optimal and achieves the upper bound; when $\ell = 4$, it is one bit away from the upper bound; and as $\ell$ grows, it is order-optimal in the sense that the ratio of $k$ to its upper bound becomes the constant $\frac{11}{16}$ when $r=\lfloor \frac{\ell+3}{4} \rfloor$. With the same or even lower complexity, we derive new codes permitting a wider range of parameters by modifying ${\cal C}[\ell, r]$ in two different ways. The code derived using the first approach has the same blocklength $n=2^\ell$, but the weight $w$ is allowed to vary from $\ell-1$ to $1$. In the second approach, the weight remains fixed at $w = \ell$, but the blocklength is reduced to $n=2^\ell - 2^r +1$. For certain selected values of the parameters, these modified codes have an optimal $k$.
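The gap viewpoint itself is easy to make concrete: a length-$n$, weight-$w$ binary vector is equivalent to the list of zero-runs ("gaps") before each $1$ and after the last $1$, which sum to $n-w$. The sketch below illustrates only this bijection, not the paper's code family or how it restricts the gaps to encode $k$ information bits.

```python
# Illustration of the gap viewpoint only (not the paper's code C[l, r]).
def gaps_to_vector(gaps, w):
    # gaps has length w + 1 and sums to n - w
    bits = []
    for g in gaps[:-1]:
        bits.extend([0] * g + [1])
    bits.extend([0] * gaps[-1])
    return bits

def vector_to_gaps(bits):
    gaps, run = [], 0
    for b in bits:
        if b == 0:
            run += 1
        else:
            gaps.append(run)
            run = 0
    gaps.append(run)
    return gaps

# Toy usage: n = 8, w = 3
v = gaps_to_vector([1, 0, 2, 2], w=3)    # -> [0, 1, 1, 0, 0, 1, 0, 0]
assert vector_to_gaps(v) == [1, 0, 2, 2]
```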
This paper introduces a novel automated system for generating architecture schematic designs, aimed at streamlining complex decision-making at the outset of multifamily real estate development projects. Leveraging the combined strengths of generative AI (neuro reasoning) and mathematical program solvers (symbolic reasoning), the method addresses both the reliance on expert insights and the technical challenges in architectural schematic design. To address the large-scale and interconnected nature of the design decisions needed for designing a whole building, we propose a novel sequential neuro-symbolic reasoning approach, emulating the traditional architectural design process from initial concept to detailed layout. To remove the need to hand-craft a cost function that approximates the desired objectives, we propose a solution that uses neuro reasoning to generate the constraints and cost functions that the symbolic solvers then solve. We also incorporate feedback loops for each design stage to ensure a tight integration between neuro and symbolic reasoning. Developed using GPT-4 without further training, our method's effectiveness is validated through comparative studies with real-world buildings. Our method can generate various building designs in accordance with an understanding of the neighborhood, showcasing its potential to transform the realm of architectural schematic design.
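The neuro-to-symbolic hand-off can be sketched minimally as below, assuming the LLM is prompted to emit a machine-readable optimization problem that an off-the-shelf solver then handles; the `llm` call, JSON schema, and use of a plain linear program are hypothetical simplifications of the paper's multi-stage pipeline.

```python
# Hypothetical sketch of the neuro -> symbolic hand-off: an LLM is asked
# to emit a machine-readable linear program, which is then solved with an
# off-the-shelf LP solver. The prompt and JSON schema are illustrative.
import json
from scipy.optimize import linprog

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a GPT-4 (or other) client here")

def neuro_symbolic_step(design_brief: str):
    spec = json.loads(llm(
        "Express the schematic-design trade-off below as a linear program "
        "in JSON with keys 'c', 'A_ub', 'b_ub' (minimize c @ x subject to "
        f"A_ub @ x <= b_ub):\n{design_brief}"))
    # Symbolic reasoning: the solver enforces the generated constraints.
    return linprog(c=spec["c"], A_ub=spec["A_ub"], b_ub=spec["b_ub"],
                   method="highs")
```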
This paper surveys vision-language pre-training (VLP) methods for multimodal intelligence that have been developed in the last few years. We group these approaches into three categories: ($i$) VLP for image-text tasks, such as image captioning, image-text retrieval, visual question answering, and visual grounding; ($ii$) VLP for core computer vision tasks, such as (open-set) image classification, object detection, and segmentation; and ($iii$) VLP for video-text tasks, such as video captioning, video-text retrieval, and video question answering. For each category, we present a comprehensive review of state-of-the-art methods, and discuss the progress that has been made and challenges still being faced, using specific systems and models as case studies. In addition, for each category, we discuss advanced topics being actively explored in the research community, such as big foundation models, unified modeling, in-context few-shot learning, knowledge, robustness, and computer vision in the wild, to name a few.
Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks, including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored. Furthermore, there is a lack of systematic evaluation across diverse domains. In this work, we propose pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective. In PEGASUS, important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. We evaluated our best PEGASUS model on 12 downstream summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills. Experiments demonstrate that it achieves state-of-the-art performance on all 12 downstream datasets as measured by ROUGE scores. Our model also shows surprising performance on low-resource summarization, surpassing previous state-of-the-art results on 6 datasets with only 1000 examples. Finally, we validated our results using human evaluation and show that our model's summaries achieve human performance on multiple datasets.
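A rough sketch of the sentence-masking idea described above is given below; the "importance" score here is a simple unigram-overlap proxy and the mask ratio is an assumption, whereas PEGASUS selects gap sentences with ROUGE-based and other strategies.

```python
# Rough sketch of gap-sentence example construction: score each sentence
# by a simple unigram-overlap proxy for importance, mask the top-scoring
# ones in the input, and use them (concatenated) as the target sequence.
def make_gsg_example(sentences, mask_ratio=0.3, mask_token="<mask>"):
    def importance(i):
        rest = set(w for j, s in enumerate(sentences) if j != i
                   for w in s.lower().split())
        words = sentences[i].lower().split()
        return sum(w in rest for w in words) / max(len(words), 1)

    k = max(1, int(len(sentences) * mask_ratio))
    top = sorted(range(len(sentences)), key=importance, reverse=True)[:k]
    source = " ".join(mask_token if i in top else s
                      for i, s in enumerate(sentences))
    target = " ".join(sentences[i] for i in sorted(top))
    return source, target
```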