We present fast simulation methods for the self-assembly of complex shapes in two dimensions. The shapes are modeled via a general boundary curve and interact via a standard volume term promoting overlap and an interpenetration penalty. To efficiently realize the Gibbs measure on the space of possible configurations we employ the hybrid Monte Carlo algorithm together with a careful use of signed distance functions for energy evaluation. Motivated by the self-assembly of identical coat proteins of the tobacco mosaic virus which assemble into a helical shell, we design a particular nonconvex 2D model shape and demonstrate its robust self-assembly into a unique final state. Our numerical experiments reveal two essential prerequisites for this self-assembly process: blocking and matching (i.e., local repulsion and attraction) of different parts of the boundary; and nonconvexity and handedness of the shape.
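The hybrid Monte Carlo step mentioned above can be sketched generically. The following is a minimal, illustrative sampler for a Gibbs measure proportional to exp(−E(x)); the energy and gradient passed in are placeholders, not the paper's shape-overlap energy or its signed-distance evaluation.

```python
import numpy as np

def leapfrog(x, p, grad_energy, eps, n_steps):
    """Leapfrog integration of Hamiltonian dynamics for H(x,p) = E(x) + |p|^2/2."""
    p = p - 0.5 * eps * grad_energy(x)      # initial half step for momentum
    for _ in range(n_steps - 1):
        x = x + eps * p                     # full position step
        p = p - eps * grad_energy(x)        # full momentum step
    x = x + eps * p
    p = p - 0.5 * eps * grad_energy(x)      # final half step for momentum
    return x, p

def hmc(energy, grad_energy, x0, n_samples=2000, eps=0.1, n_steps=20, seed=0):
    """Minimal hybrid (Hamiltonian) Monte Carlo sampler targeting exp(-E(x))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)    # resample momenta from N(0, I)
        x_new, p_new = leapfrog(x, p, grad_energy, eps, n_steps)
        h_old = energy(x) + 0.5 * np.dot(p, p)
        h_new = energy(x_new) + 0.5 * np.dot(p_new, p_new)
        # Metropolis correction restores exactness despite integration error.
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)
```

In the setting of the paper, `energy` would evaluate the overlap-plus-penalty energy of a configuration of shapes via signed distance functions; here any differentiable energy works.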
By abstracting over well-known properties of De Bruijn's representation with nameless dummies, we design a new theory of syntax with variable binding and capture-avoiding substitution. We propose it as a simpler alternative to Fiore, Plotkin, and Turi's approach, with which we establish a strong formal link. We also show that our theory easily incorporates simple types and equations between terms.
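For concreteness, the nameless-dummy representation the abstract builds on can be sketched as follows; the datatype and function names are illustrative and do not come from the paper's formalism.

```python
from dataclasses import dataclass

# Untyped lambda terms with de Bruijn indices ("nameless dummies"):
# Var(0) refers to the nearest enclosing binder, Var(1) to the next one out.
@dataclass(frozen=True)
class Var: index: int
@dataclass(frozen=True)
class Lam: body: object
@dataclass(frozen=True)
class App: fun: object; arg: object

def shift(t, d, cutoff=0):
    """Add d to every free variable (index >= cutoff) in t."""
    if isinstance(t, Var):
        return Var(t.index + d) if t.index >= cutoff else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, cutoff + 1))
    return App(shift(t.fun, d, cutoff), shift(t.arg, d, cutoff))

def subst(t, j, s):
    """Capture-avoiding substitution [j := s] t."""
    if isinstance(t, Var):
        return s if t.index == j else t
    if isinstance(t, Lam):
        # Under a binder, the target index and s's free variables shift by 1.
        return Lam(subst(t.body, j + 1, shift(s, 1)))
    return App(subst(t.fun, j, s), subst(t.arg, j, s))

def beta(t):
    """One beta-reduction step at the root: (lam. b) a -> b[0 := a]."""
    assert isinstance(t, App) and isinstance(t.fun, Lam)
    return shift(subst(t.fun.body, 0, shift(t.arg, 1)), -1)
```

For example, `beta(App(Lam(Var(0)), Lam(Var(0))))` reduces the identity applied to itself back to `Lam(Var(0))`, with no variable capture possible by construction.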
We study communication over a Gaussian multiple-access channel (MAC) with two types of transmitters: Digital transmitters hold a message from a discrete set that needs to be communicated to the receiver. Analog transmitters hold sequences of analog values, and some function of these distributed values (but not the values themselves) needs to be conveyed to the receiver. The digital messages are required to be decoded error-free at the receiver with high probability, while the recovered analog function values have to satisfy a fidelity criterion such as an upper bound on mean squared error (MSE) or a certain maximum error with a given confidence. For the case in which the function computed over the analog transmitters is a sum of values in [-1,1], we derive inner and outer bounds for the tradeoff between digital and analog rates of communication under peak and average power constraints for digital transmitters and a peak power constraint for analog transmitters. We then extend the achievability part of our result to a larger class of functions that includes all linear functions, as well as some non-linear ones.
LiNGAM determines the variable order from cause to effect using additive noise models, but it faces challenges with confounding. Previous methods maintained LiNGAM's fundamental structure while trying to identify and address variables affected by confounding. As a result, these methods required significant computational resources regardless of the presence of confounding, and they did not ensure the detection of all confounding types. In contrast, this paper enhances LiNGAM by introducing LiNGAM-MMI, a method that quantifies the magnitude of confounding using the KL divergence and arranges the variables to minimize its impact. This method efficiently achieves a globally optimal variable order through a shortest-path problem formulation. LiNGAM-MMI processes data as efficiently as traditional LiNGAM in scenarios without confounding, while effectively addressing confounded situations. Our experimental results suggest that LiNGAM-MMI more accurately determines the correct variable order, both in the presence and absence of confounding.
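The shortest-path idea can be illustrated schematically: when the cost of an ordering decomposes over steps, a globally optimal order is found by dynamic programming on the subset lattice, i.e., a shortest path from the empty set to the full variable set. The `step_cost` below is a hypothetical stand-in for the paper's KL-divergence-based cost, not its actual definition.

```python
from itertools import combinations

def optimal_order(variables, step_cost):
    """
    Globally optimal variable ordering via DP on the subset lattice
    (a shortest path from {} to the full set).  step_cost(prefix, v) is
    a hypothetical cost of appending variable v after the (unordered)
    prefix set; the total cost of an ordering is the sum over its steps.
    """
    n = len(variables)
    # best[S] = (cost of the cheapest ordering of subset S, that ordering)
    best = {frozenset(): (0.0, [])}
    for k in range(1, n + 1):
        layer = {}
        for subset in combinations(variables, k):
            S = frozenset(subset)
            # Try every variable v as the last element of the ordering of S.
            candidates = []
            for v in S:
                prev_cost, prev_order = best[S - {v}]
                candidates.append((prev_cost + step_cost(S - {v}, v),
                                   prev_order + [v]))
            layer[S] = min(candidates, key=lambda t: t[0])
        best.update(layer)  # layer k-1 is needed when building layer k
    return best[frozenset(variables)]
```

The search is exact but visits all 2^n subsets, which is the price of global optimality; in exchange, no greedy ordering mistake can propagate.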
The notion of Boolean logic backpropagation was introduced to build neural networks whose weights and activations are Boolean values. Most computations can then be done with Boolean logic instead of real arithmetic, during both the training and inference phases. However, the underlying discrete optimization problem is NP-hard, and Boolean logic backpropagation comes with no optimality guarantee. In this work we propose the first convergence analysis, under standard non-convex assumptions.
Computational analysis with the finite element method requires geometrically accurate meshes. It is well known that high-order meshes can accurately capture curved surfaces with fewer degrees of freedom in comparison to low-order meshes. Existing techniques for high-order mesh generation typically output meshes with the same polynomial order for all elements. However, high-order elements away from curvilinear boundaries or interfaces increase the computational cost of the simulation without increasing geometric accuracy. In prior work, we have presented an approach for generating body-fitted uniform-order meshes that takes a given mesh and morphs it to align with the surface of interest, prescribed as the zero isocontour of a level-set function. We extend this method to generate mixed-order meshes such that curved surfaces of the domain are discretized with high-order elements, while low-order elements are used elsewhere. Numerical experiments demonstrate the robustness of the approach and show that it can be used to generate mixed-order meshes that are much more efficient than uniform high-order meshes. The proposed approach is purely algebraic, and extends to different types of elements (quadrilaterals/triangles/tetrahedra/hexahedra) in two and three dimensions.
Faithfully summarizing the knowledge encoded by a deep neural network (DNN) into a few symbolic primitive patterns without losing much information represents a core challenge in explainable AI. To this end, Ren et al. (2023c) have derived a series of theorems to prove that the inference score of a DNN can be explained as a small set of interactions between input variables. However, the lack of generalization power makes it still hard to consider such interactions as faithful primitive patterns encoded by the DNN. Therefore, given different DNNs trained for the same task, we develop a new method to extract interactions that are shared by these DNNs. Experiments show that the extracted interactions can better reflect common knowledge shared by different DNNs.
We present efficient MATLAB implementations of the lowest-order primal hybrid finite element method (FEM) for linear second-order elliptic and parabolic problems with mixed boundary conditions in two spatial dimensions. We employ the Crank-Nicolson finite difference scheme for the complete discrete setup of the parabolic problem. All the codes presented are fully vectorized using matrix-wise array operations. Numerical experiments are conducted to show the performance of the software.
Score-based statistical models play an important role in modern machine learning, statistics, and signal processing. For hypothesis testing, a score-based hypothesis test is proposed in \cite{wu2022score}. We analyze the performance of this score-based hypothesis testing procedure and derive upper bounds on the probabilities of its Type I and II errors. We prove that the exponents of our error bounds are asymptotically (in the number of samples) tight for the case of simple null and alternative hypotheses. We calculate these error exponents explicitly in specific cases and provide numerical studies for various other scenarios of interest.
Necessary and sufficient conditions for uniform consistency are explored. The hypothesis is simple. The nonparametric sets of alternatives are bounded convex sets in $\mathbb{L}_p$, $p > 1$, with "small" balls deleted. The "small" balls are centered at the hypothesis, and their radii tend to zero as the sample size increases. For the problem of hypothesis testing on a density, we show that such sets of alternatives admit uniformly consistent tests for some sequence of ball radii if and only if the convex set is relatively compact. The results are established for hypothesis testing on a density, for signal detection in Gaussian white noise, for linear ill-posed problems with random Gaussian noise, and so on.
In this work, a Generalized Finite Difference (GFD) scheme is presented for effectively computing the numerical solution of a parabolic-elliptic system modelling a bacterial strain with density-suppressed motility. The GFD method is a meshless method known for its simplicity for solving non-linear boundary value problems over irregular geometries. The paper first introduces the basic elements of the GFD method, and then an explicit-implicit scheme is derived. The convergence of the method is proven under a bound for the time step, and an algorithm is provided for its computational implementation. Finally, some examples are considered comparing the results obtained with a regular mesh and an irregular cloud of points.
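As a rough illustration of the basic GFD building block (not the paper's explicit-implicit scheme), derivatives at a point of an irregular cloud can be approximated by a weighted least-squares fit of a local second-order Taylor expansion over neighboring points; the weight function and names below are illustrative.

```python
import numpy as np

def gfd_derivatives(x0, neighbors, u0, u_neigh,
                    weight=lambda r: 1.0 / (r**3 + 1e-12)):
    """
    Generalized finite difference approximation of (ux, uy, uxx, uyy, uxy)
    at point x0 from scattered neighbors, via a weighted least-squares fit
    of a second-order Taylor expansion.  The distance weight is a common
    illustrative choice, not a prescription from the paper.
    """
    h = neighbors[:, 0] - x0[0]
    k = neighbors[:, 1] - x0[1]
    # Taylor: u_i - u0 ~ h*ux + k*uy + (h^2/2)*uxx + (k^2/2)*uyy + h*k*uxy
    A = np.column_stack([h, k, 0.5 * h**2, 0.5 * k**2, h * k])
    b = u_neigh - u0
    w = weight(np.hypot(h, k))           # closer neighbors weigh more
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return coeffs                        # (ux, uy, uxx, uyy, uxy)
```

Because no connectivity between the points is needed, the same formula applies unchanged to a regular mesh and to an irregular cloud of points, which is what makes the method meshless.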