The calculation of the acoustic field in or around objects is an important task in acoustic engineering. To solve this task numerically, the boundary element method (BEM) is a commonly used method, especially for infinite domains. The open-source tool Mesh2HRTF and its BEM core NumCalc provide users with a collection of free software for acoustic simulations without requiring in-depth knowledge of numerical methods. However, we feel that users should have a basic understanding of the methods behind the software they are using. We are convinced that this basic understanding helps in avoiding common mistakes and in understanding the requirements for using the software. Providing this background is the first motivation for this paper. A second motivation is to demonstrate the accuracy of NumCalc when solving benchmark problems. Thus, users can get an idea of the accuracy they can expect when using NumCalc, as well as its memory and CPU requirements. A third motivation is to give users detailed information about some parts of the actual implementation that are usually not mentioned in the literature, e.g., the specific version of the fast multipole method and its clustering process, or how to use frequency-dependent admittance boundary conditions.
Machine learning is employed for solving physical systems governed by general nonlinear partial differential equations (PDEs). However, complex multi-physics systems such as acoustic-structure coupling are often described by a series of PDEs that incorporate variable physical quantities, which are referred to as parametric systems. There is a lack of strategies for solving parametric systems governed by PDEs that involve both explicit and implicit quantities. In this paper, a deep learning-based Multi Physics-Informed PointNet (MPIPN) is proposed for solving parametric acoustic-structure systems. First, the MPIPN introduces an enhanced point-cloud architecture that encompasses explicit physical quantities and geometric features of computational domains. Then, the MPIPN extracts local and global features of the reconstructed point cloud as parts of the solving criteria of parametric systems. In addition, implicit physical quantities are embedded by encoding techniques as another part of the solving criteria. Finally, all solving criteria that characterize parametric systems are amalgamated to form distinctive sequences as the input of the MPIPN, whose outputs are the solutions of the systems. The proposed framework is trained by adaptive physics-informed loss functions for the corresponding computational domains. The framework generalizes to new parametric conditions of the systems. The effectiveness of the MPIPN is validated by applying it to solve steady parametric acoustic-structure coupling systems governed by the Helmholtz equations. An ablation experiment demonstrates the efficacy of the physics-informed component when only a small amount of supervised data is available. The proposed method yields reasonable precision across all computational domains under both constant parametric conditions and changeable combinations of parametric conditions for acoustic-structure systems.
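As a rough, hedged illustration of the ingredients named in this abstract (a shared per-point MLP, a max-pooled global feature, explicit parameters appended to each point, and a physics-informed Helmholtz residual), the following PyTorch sketch is an assumption-laden toy, not the authors' MPIPN: the class name `SimplePIPointNet`, the layer sizes, and the 2-D Helmholtz residual are invented for the example.

```python
# Illustrative sketch only: a PointNet-style, physics-informed network in the
# spirit of the abstract, not the authors' MPIPN implementation.
import torch
import torch.nn as nn

class SimplePIPointNet(nn.Module):
    def __init__(self, param_dim=2, hidden=64):
        super().__init__()
        # shared per-point MLP: (x, y) + explicit parameters -> local features
        self.local = nn.Sequential(
            nn.Linear(2 + param_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh())
        # decoder: local feature + max-pooled global feature -> field value u(x, y)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, xy, params):
        # xy: (N, 2) point cloud, params: (param_dim,) explicit quantities (e.g. wavenumber)
        p = params.expand(xy.shape[0], -1)
        f_local = self.local(torch.cat([xy, p], dim=-1))           # (N, hidden)
        f_global = f_local.max(dim=0, keepdim=True).values         # (1, hidden)
        f_global = f_global.expand_as(f_local)
        return self.head(torch.cat([f_local, f_global], dim=-1))   # (N, 1)

def helmholtz_residual_loss(model, xy, params, k):
    # physics-informed loss: || (Laplacian + k^2) u ||^2 at collocation points
    xy = xy.clone().requires_grad_(True)
    u = model(xy, params)
    grad = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    u_xx = torch.autograd.grad(grad[:, 0].sum(), xy, create_graph=True)[0][:, 0]
    u_yy = torch.autograd.grad(grad[:, 1].sum(), xy, create_graph=True)[0][:, 1]
    residual = u_xx + u_yy + (k ** 2) * u.squeeze(-1)
    return (residual ** 2).mean()

model = SimplePIPointNet()
pts = torch.rand(256, 2)                 # interior collocation points
params = torch.tensor([5.0, 1.0])        # e.g. wavenumber and one material parameter
loss = helmholtz_residual_loss(model, pts, params, k=params[0])
loss.backward()                          # combine with boundary/data losses in practice
```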
Natural systems with emergent behaviors often organize along low-dimensional subsets of high-dimensional spaces. For example, despite the tens of thousands of genes in the human genome, the principled study of genomics is fruitful because biological processes rely on coordinated organization that results in lower-dimensional phenotypes. To uncover this organization, many nonlinear dimensionality reduction techniques have successfully embedded high-dimensional data into low-dimensional spaces by preserving local similarities between data points. However, the nonlinearities in these methods allow for too much curvature to preserve general trends across multiple non-neighboring data clusters, thereby limiting their interpretability and generalizability to out-of-distribution data. Here, we address both of these limitations by regularizing the curvature of manifolds generated by variational autoencoders, a process we coin ``$\Gamma$-VAE''. We demonstrate its utility using two example data sets: bulk RNA-seq from The Cancer Genome Atlas (TCGA) and the Genotype-Tissue Expression (GTEx) project; and single-cell RNA-seq from a lineage tracing experiment in hematopoietic stem cell differentiation. We find that the resulting regularized manifolds identify mesoscale structure associated with different cancer cell types, and accurately re-embed tissues from completely unseen, out-of-distribution cancers as if the model had been trained on them. Finally, we show that preserving long-range relationships to differentiated cells separates undifferentiated cells -- which have not yet specialized -- according to their eventual fate. Broadly, we anticipate that regularizing the curvature of generative models will enable more consistent, predictive, and generalizable models in any high-dimensional system with emergent low-dimensional behavior.
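A minimal sketch of what "regularizing the curvature of manifolds generated by variational autoencoders" can look like in code, using a finite-difference second-derivative penalty along random latent directions; this is a generic proxy chosen for the example, not the paper's exact $\Gamma$-VAE regularizer, and `decoder`, `eps`, and the loss weighting are assumptions.

```python
# Illustrative sketch only: penalize the curvature of a VAE decoder's manifold
# via a finite-difference second derivative along random latent directions.
import torch

def curvature_penalty(decoder, z, eps=1e-2):
    # decoder: maps latent codes (B, d) to data space (B, D); z: batch of latent codes
    direction = torch.randn_like(z)
    direction = direction / direction.norm(dim=-1, keepdim=True)
    x_plus  = decoder(z + eps * direction)
    x_minus = decoder(z - eps * direction)
    x_mid   = decoder(z)
    # finite-difference estimate of the second directional derivative;
    # it vanishes when the decoder is locally affine (zero curvature)
    second_diff = (x_plus + x_minus - 2.0 * x_mid) / (eps ** 2)
    return second_diff.pow(2).sum(dim=-1).mean()

# usage: total_loss = recon_loss + beta * kl_loss + gamma * curvature_penalty(decoder, z)
```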
Activation Patching is a method of directly computing causal attributions of behavior to model components. However, applying it exhaustively requires a sweep with cost scaling linearly in the number of model components, which can be prohibitively expensive for SoTA Large Language Models (LLMs). We investigate Attribution Patching (AtP), a fast gradient-based approximation to Activation Patching, and find two classes of failure modes of AtP that lead to significant false negatives. We propose a variant of AtP called AtP*, with two changes to address these failure modes while retaining scalability. We present the first systematic study of AtP and alternative methods for faster activation patching and show that AtP significantly outperforms all other investigated methods, with AtP* providing further significant improvement. Finally, we provide a method to bound the probability of remaining false negatives of AtP* estimates.
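A minimal sketch of the first-order approximation underlying attribution patching: the effect of patching one activation is estimated by the inner product of the activation difference with the gradient of the metric on the clean run. The `model_fn`/`metric_fn` interface below is an assumption made for illustration; real LLM tooling organizes the hooks and caching differently.

```python
# Illustrative sketch only: gradient-based attribution patching for a single
# activation tensor.
import torch

def atp_estimate(metric_fn, model_fn, clean_input, corrupt_input):
    # model_fn(input) -> (activation, output); the activation stays in the graph
    act_clean, out_clean = model_fn(clean_input)
    with torch.no_grad():
        act_corrupt, _ = model_fn(corrupt_input)

    metric = metric_fn(out_clean)
    grad = torch.autograd.grad(metric, act_clean)[0]

    # AtP: first-order estimate of the metric change from patching in the
    # corrupt activation: delta ~ <act_corrupt - act_clean, d metric / d act_clean>
    return ((act_corrupt - act_clean) * grad).sum()
```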
A module of a graph G is a set of vertices that all have the same set of neighbours outside the set. The modules of a graph form a so-called partitive family and can thereby be represented by a unique tree MD(G), called the modular decomposition tree. Motivated by the central role of modules in numerous questions of algorithmic graph theory, the problem of efficiently computing MD(G) has been investigated since the early 1970s. To date, the best algorithms run in linear time but are all rather complicated. By combining previous algorithmic paradigms developed for the problem, we are able to present a simpler linear-time algorithm that relies on very simple data structures, namely slice decomposition and sequences of rooted ordered trees.
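For readers unfamiliar with the definition, a direct (quadratic-time) check of the module property reads as follows; this is purely illustrative and unrelated to the linear-time algorithm presented in the paper.

```python
# Illustrative sketch only: check the module definition directly (every vertex
# in S has the same neighbours outside S).
def is_module(adj, S):
    # adj: dict mapping each vertex to its set of neighbours; S: candidate vertex set
    S = set(S)
    outside_neighbourhoods = {frozenset(adj[v] - S) for v in S}
    return len(outside_neighbourhoods) <= 1

# example: in the path a-b-c-d, {b, c} is not a module, but any singleton is
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
print(is_module(adj, {'b', 'c'}))  # False: b sees a outside, c sees d outside
print(is_module(adj, {'a'}))       # True: singletons are always modules
```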
The morphology of nanostructured materials exhibiting a polydisperse porous space, such as aerogels, is very open-porous and fine-grained. Therefore, a simulation of the deformation of a large aerogel structure that resolves the nanostructure would be extremely expensive. Thus, multi-scale or homogenization approaches have to be considered. Here, a computational scale-bridging approach based on the FE$^2$ method is suggested, where the macroscopic scale is discretized using finite elements while the microstructure of the open-porous material is resolved as a network of Euler-Bernoulli beams. The beam-frame based RVEs (representative volume elements) have pores whose size distribution follows the measured values for a specific material, which is a well-known approach to model aerogel structures. For the computational homogenization, an approach to average the first Piola-Kirchhoff stresses in a beam frame by neglecting rotational moments is suggested. To overcome the computationally most expensive part of the homogenization method, that is, solving the RVEs and averaging their stress fields, a surrogate model based on neural networks is introduced. The network's input is the localized deformation gradient on the macroscopic scale and its output is the averaged stress for the specific material. It is trained on data generated by the beam-frame based approach. The efficiency and robustness of both homogenization approaches are shown numerically, and the approximation properties of the surrogate model are verified for different macroscopic problems and discretizations. Different (Quasi-)Newton solvers are considered on the macroscopic scale and compared with respect to their convergence properties.
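A hedged sketch of the surrogate idea described above: a small feed-forward network mapping the flattened macroscopic deformation gradient F to the averaged first Piola-Kirchhoff stress P. The architecture, the training loop, and the synthetic stand-in data are assumptions; in the paper the training pairs come from the beam-frame RVE homogenization.

```python
# Illustrative sketch only: a feed-forward surrogate F (3x3, flattened) ->
# averaged first Piola-Kirchhoff stress P (3x3, flattened).
import torch
import torch.nn as nn

surrogate = nn.Sequential(
    nn.Linear(9, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 9))

# stand-in for (F, P) pairs; real data would come from beam-frame RVE simulations
F_train = torch.eye(3).flatten() + 0.05 * torch.randn(1024, 9)
P_train = torch.randn(1024, 9)

opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(F_train), P_train)
    loss.backward()
    opt.step()

# at a macroscopic quadrature point: P = surrogate(F.flatten()).reshape(3, 3)
```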
The variational quantum eigensolver (VQE) is a promising candidate for bringing practical benefits from quantum computing. However, the required bandwidth in and out of a cryostat is a limiting factor for scaling cryogenic quantum computers. We propose a tailored counter-based module built with single flux quantum circuits in the 4-K stage, which precomputes part of the VQE calculation and reduces the amount of inter-temperature communication. The evaluation shows that our system reduces the required bandwidth by 97%, and with this drastic reduction, the total power consumption is reduced by 93% in the case where 277 VQE programs are executed in parallel on a 10,000-qubit machine.
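As a hedged, software-level illustration of why counters can shrink inter-temperature traffic (the paper's module is a hardware SFQ circuit; the details below are assumptions): a VQE energy term such as $\langle Z \otimes Z \rangle$ only needs the number of even- and odd-parity shots, so aggregating shots into a counter near the qubits can replace streaming every raw bitstring out of the cryostat.

```python
# Illustrative sketch only: estimating <ZZ> from an aggregated parity counter
# instead of raw bitstrings. The random bitstrings are stand-ins, not data
# from the paper.
import random

def z_z_expectation_from_counter(bitstrings):
    even = sum(1 for b in bitstrings if (b.count('1') % 2) == 0)  # the "counter"
    shots = len(bitstrings)
    return (2 * even - shots) / shots   # <ZZ> = P(even parity) - P(odd parity)

shots = ['{:02b}'.format(random.getrandbits(2)) for _ in range(10000)]
print(z_z_expectation_from_counter(shots))
# only (even, shots) -- two integers -- would need to cross the temperature boundary
```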
Source conditions are a key tool in regularisation theory, needed to derive error estimates and convergence rates for ill-posed inverse problems. In this paper, we provide a recipe to practically compute source condition elements as the solution of convex minimisation problems that can be solved with first-order algorithms. We demonstrate the validity of our approach by testing it on two inverse problem case studies in machine learning and image processing: sparse coefficient estimation of a polynomial via LASSO regression, and recovering an image from a subset of the coefficients of its discrete Fourier transform. We further demonstrate that the proposed approach can easily be modified to solve the machine learning task of identifying the optimal sampling pattern in the Fourier domain for a given image and variational regularisation method, which has applications in the context of sparsity-promoting reconstruction from magnetic resonance imaging data.
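A minimal sketch of the first case study mentioned above, sparse recovery of polynomial coefficients with LASSO, using scikit-learn on synthetic data; the basis degree, noise level, and regularisation weight are assumptions, and the snippet does not reproduce the paper's source-condition computation itself.

```python
# Illustrative sketch only: LASSO recovery of a sparse polynomial in a
# monomial basis from noisy samples.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
A = np.vander(x, N=10, increasing=True)        # monomial basis up to degree 9
coef_true = np.zeros(10)
coef_true[[1, 3, 7]] = [2.0, -1.5, 0.8]        # sparse ground-truth coefficients
y = A @ coef_true + 0.01 * rng.standard_normal(200)

lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
lasso.fit(A, y)
print(np.round(lasso.coef_, 3))                # recovers the three active coefficients
```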
Understanding a surgical scene is crucial for computer-assisted surgery systems to provide intelligent assistance functionality. One way of achieving this scene understanding is via scene segmentation, where every pixel of a frame is classified, thereby identifying the visible structures and tissues. Progress on fully segmenting surgical scenes has been made using machine learning. However, such models require large amounts of annotated training data containing examples of all relevant object classes. Such fully annotated datasets are hard to create, as every pixel in a frame needs to be annotated by medical experts, and are therefore rarely available. In this work, we propose a method to combine multiple partially annotated datasets, which provide complementary annotations, into one model, enabling better scene segmentation and the use of multiple readily available datasets. Our method aims to combine available data with complementary labels by leveraging mutually exclusive properties to maximize information. Specifically, we propose to use positive annotations of other classes as negative samples and to exclude background pixels of binary annotations, as we cannot tell whether they contain a class that is not annotated but is predicted by the model. We evaluate our method by training a DeepLabV3 model on the publicly available Dresden Surgical Anatomy Dataset, which provides multiple subsets of binary segmented anatomical structures. Our approach successfully combines 6 classes into one model, increasing the overall Dice Score by 4.4% compared to an ensemble of models trained on the classes individually. By including information on multiple classes, we were able to reduce confusion between stomach and colon by 24%. Our results demonstrate the feasibility of training a model on multiple partially annotated datasets. This paves the way for future work that further alleviates the need for a single large, fully segmented dataset.
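A hedged sketch of the label-merging idea described above: positive pixels of any annotated class become hard labels (and hence negatives for the other classes through the cross-entropy), while background pixels of a binary annotation are marked as ignore because they may contain unannotated classes. Class indices, the ignore value, and the random tensors are assumptions; DeepLabV3 specifics are omitted.

```python
# Illustrative sketch only: merging partial binary annotations into one
# training target with an ignore label for unsupervisable pixels.
import torch
import torch.nn.functional as F

IGNORE = 255  # pixels we cannot supervise

def merge_partial_masks(binary_masks):
    # binary_masks: dict {class_index: (H, W) tensor in {0, 1}} from different subsets
    h, w = next(iter(binary_masks.values())).shape
    target = torch.full((h, w), IGNORE, dtype=torch.long)
    for cls, mask in binary_masks.items():
        target[mask == 1] = cls   # positives of one class act as negatives for the others
    return target

logits = torch.randn(1, 6, 64, 64)   # model output for 6 anatomical classes
masks = {2: torch.zeros(64, 64, dtype=torch.long), 5: torch.zeros(64, 64, dtype=torch.long)}
masks[2][10:20, 10:20] = 1
masks[5][40:50, 40:50] = 1
target = merge_partial_masks(masks).unsqueeze(0)
loss = F.cross_entropy(logits, target, ignore_index=IGNORE)
```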
We propose and analyse boundary-preserving schemes for the strong approximation of some scalar SDEs with non-globally Lipschitz drift and diffusion coefficients whose state space is bounded. The schemes consist of a Lamperti transform followed by a Lie--Trotter splitting. We prove $L^{p}(\Omega)$-convergence of order $1$, for every $p \geq 1$, of the schemes and exploit the Lamperti transform to confine the numerical approximations to the state space of the considered SDE. We provide numerical experiments that confirm the theoretical results and compare the proposed Lamperti-splitting schemes to other numerical schemes for SDEs.
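As a hedged illustration of the construction named above (the generic, textbook Lamperti transform, not the particular SDEs or splitting analysed in the paper): for a scalar SDE
\[ dX_t = f(X_t)\,dt + \sigma(X_t)\,dW_t, \]
the Lamperti transform $Y_t = \psi(X_t)$ with $\psi(x) = \int^{x} \sigma(u)^{-1}\,du$ yields, by Itô's formula, an SDE with additive noise,
\[ dY_t = \left( \frac{f(\psi^{-1}(Y_t))}{\sigma(\psi^{-1}(Y_t))} - \frac{1}{2}\,\sigma'(\psi^{-1}(Y_t)) \right) dt + dW_t, \]
to which a Lie--Trotter splitting can be applied by alternating between the drift flow and the exactly solvable additive-noise part; mapping back through $\psi^{-1}$ keeps the approximation inside the original state space.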
Although metaheuristics have been widely recognized as efficient techniques to solve real-world optimization problems, implementing them from scratch remains difficult for domain-specific experts without programming skills. In this scenario, metaheuristic optimization frameworks are a practical alternative, as they provide a variety of algorithms composed of customizable elements, as well as experimental support. Recently, many engineering problems have required the optimization of multiple or even many objectives, increasing the interest in appropriate metaheuristic algorithms and frameworks that can integrate new specific requirements while maintaining the generality and reusability principles they were conceived for. Based on this idea, this paper introduces JCLEC-MO, a Java framework for both multi- and many-objective optimization that enables engineers to apply, or adapt, a great number of multi-objective algorithms with little coding effort. A case study is developed and explained to show how JCLEC-MO can be used to address many-objective engineering problems, often requiring the inclusion of domain-specific elements, and to analyze experimental outcomes by means of conveniently connected R utilities.