A mechanical model and numerical method are proposed for structural membranes implied by all isosurfaces of a level-set function in a three-dimensional bulk domain. The mechanical model covers large displacements in the context of finite strain theory and is formulated based on the tangential differential calculus. In addition to curved two-dimensional membranes embedded in three dimensions, the simpler case of curved ropes (cables) in two-dimensional bulk domains is also covered. The implicit geometries (shapes) are implied by the level sets, and the boundaries of the structures are given by the intersection of the level sets with the boundary of the bulk domain. For the numerical analysis, the bulk domain is discretized using a background mesh composed of (higher-order) elements with the dimensionality of the embedding space. The elements are, in general, not aligned to the level sets, i.e., the geometries of the structures, which resembles fictitious domain methods, most notably the Trace FEM. The proposed numerical method is a hybrid of the classical FEM and fictitious domain methods and may be labeled a "Bulk Trace FEM". Numerical studies confirm higher-order convergence rates and the potential for new material models with continuously embedded sub-structures in bulk domains.
We consider a class of problems of Discrete Tomography which has been deeply investigated in the past: the reconstruction of convex lattice sets from their horizontal and/or vertical X-rays, i.e., from the number of points in a sequence of consecutive horizontal and vertical lines. The reconstruction of HV-convex polyominoes usually works in two steps: first, a filling step consisting of filling operations; second, the convex aggregation of the switching components. We prove three results about the convex aggregation step: (1) The convex aggregation step used for the reconstruction of HV-convex polyominoes does not always provide a solution. The example yielding this result is called \textit{the bad guy} and disproves a conjecture in the field. (2) The reconstruction of a digital convex lattice set from only one X-ray can be performed in polynomial time. We prove it by encoding the convex aggregation problem in a Directed Acyclic Graph. (3) With the same strategy, we prove that the reconstruction of fat digital convex sets from their horizontal and vertical X-rays can be solved in polynomial time. Fatness is a property of digital convex sets regarding the relative positions of the left, right, top and bottom points of the set. The complexity of the reconstruction of lattice sets which are not fat remains an open question.
Mesoscale simulations of discrete defects in metals provide an ideal framework to investigate the micro-scale mechanisms governing plastic deformation under high thermal and mechanical loading conditions. To bridge size and time scales while limiting computational effort, the concept of representative volume elements (RVEs) is typically employed. This approach considers the microstructure evolution in a volume that is representative of the overall material behavior. However, in settings with complex thermal and mechanical loading histories, careful consideration is required of how modeling constraints, in terms of time scale and simulation domain, affect the predicted results. We address the representation of heterogeneous dislocation structure formation in simulation volumes using the example of residual stress formation during cool-down of laser powder-bed fusion (LPBF) of AISI 316L stainless steel. This is achieved by a series of large-scale three-dimensional discrete dislocation dynamics (DDD) simulations assisted by thermo-mechanical finite element modeling of the LPBF process. Our results show that an insufficient size of periodic simulation domains can result in dislocation patterns that reflect the boundaries of the primary cell. The more pronounced dislocation interactions observed for larger domains highlight the significance of simulation domain constraints for predicting mechanical properties. We formulate criteria that characterize representative volume elements by capturing the conformity of the dislocation structure to the bulk material. This work provides a basis for future investigations of heterogeneous microstructure formation in mesoscale simulations of bulk material behavior.
This paper focuses on the achievable accuracy of center-of-gravity (CoG) centroiding with respect to the ultimate limits defined by the Cramér-Rao lower variance bounds. In a practical scenario, systematic centroiding errors occur through coarse sampling of the point-spread function (PSF) as well as signal truncation at the boundaries of the region of interest (ROI). While previous studies focused on sampling errors alone, this paper derives and analyzes the full systematic error, as truncation errors become increasingly important for small ROIs, where the effect of random pixel noise may be more efficiently suppressed than for large ROIs. Unbiased estimators are introduced and analytical expressions derived for their variance, detailing the effects of photon shot noise, pixel random noise, and residual systematic error. The analytical results are verified by Monte Carlo simulations, and the performance is compared to that of other algorithms, such as Iteratively Weighted CoG, Thresholded CoG, and Least Squares Fits. The unbiased estimators achieve centroiding errors very close to the Cramér-Rao Lower Bound (CRLB), for both low and high photon numbers, at significantly lower computational effort than the other algorithms. Additionally, optimal configurations in relation to PSF radius, ROI size, and other specific parameters are determined for all other algorithms, and their normalized centroid errors are assessed with respect to the CRLB.
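To make the basic quantity concrete, the following is a minimal sketch (not the paper's estimators) of plain CoG centroiding on a sampled, noisy PSF within a square ROI; the Gaussian PSF, photon budget, and function names are illustrative assumptions.

```python
import numpy as np

def cog_centroid(image, roi_center, roi_half_width):
    """Intensity-weighted mean pixel position within a square ROI."""
    cy, cx = roi_center
    r = roi_half_width
    roi = image[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
    ys, xs = np.mgrid[cy - r:cy + r + 1, cx - r:cx + r + 1]
    w = roi.sum()
    return (ys * roi).sum() / w, (xs * roi).sum() / w

# Synthetic test: a sampled Gaussian PSF with photon shot noise, true center off-grid.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:32, 0:32]
true_y, true_x, sigma = 15.3, 16.7, 2.0
psf = np.exp(-((yy - true_y) ** 2 + (xx - true_x) ** 2) / (2 * sigma ** 2))
frame = rng.poisson(2000 * psf / psf.sum())   # ~2000 photons, Poisson noise
est_y, est_x = cog_centroid(frame, roi_center=(15, 17), roi_half_width=5)
print(f"true=({true_y}, {true_x})  estimate=({est_y:.2f}, {est_x:.2f})")
```

Shrinking `roi_half_width` suppresses random pixel noise but increases the truncation bias discussed above, which is exactly the trade-off the unbiased estimators address.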
Let $G$ be a graph, which represents a social network, and suppose each node $v$ has a threshold value $\tau(v)$. Consider an initial configuration where each node is either positive or negative. In each discrete time step, a node $v$ becomes (or remains) positive if at least $\tau(v)$ of its neighbors are positive, and negative otherwise. A node set $\mathcal{S}$ is a Target Set (TS) whenever the following holds: if $\mathcal{S}$ is fully positive initially, then all nodes in the graph eventually become positive. We focus on a generalization of TS, called Timed TS (TTS), where it is permitted to assign a positive state to a node at any step of the process, rather than only at the beginning. We provide graph structures for which the minimum TTS is significantly smaller than the minimum TS, indicating that timing is an essential aspect of successful target selection strategies. Furthermore, we prove tight bounds on the minimum size of a TTS in terms of the number of nodes and the maximum degree when the thresholds are assigned according to the majority rule. We show that the problem of determining the minimum size of a TTS is NP-hard and provide an Integer Linear Programming formulation and a greedy algorithm. We evaluate the performance of our algorithm by conducting experiments on various synthetic and real-world networks. We also present a linear-time exact algorithm for trees.
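A minimal sketch of the synchronous threshold process described above, used to test whether a seed set is a Target Set. This is an illustration rather than the paper's code; the graph encoding, function names, and the choice to apply the update rule to seed nodes as well are assumptions.

```python
def run_process(adj, tau, seed, max_steps=10_000):
    """Iterate the threshold update rule until a fixed point or max_steps rounds."""
    state = {v: (v in seed) for v in adj}
    for _ in range(max_steps):
        new_state = {v: sum(state[u] for u in adj[v]) >= tau[v] for v in adj}
        if new_state == state:
            break
        state = new_state
    return state

def is_target_set(adj, tau, seed):
    """A seed set is a TS if the process ends with every node positive."""
    return all(run_process(adj, tau, seed).values())

# Path graph 0-1-2-3-4 with majority thresholds tau(v) = ceil(deg(v)/2).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
tau = {v: (len(adj[v]) + 1) // 2 for v in adj}
print(is_target_set(adj, tau, {0, 1}))   # True: positivity spreads along the path
print(is_target_set(adj, tau, {2}))      # False: the process never stabilizes all-positive
```

A Timed TS would additionally force chosen nodes positive at prescribed later steps, which is what makes timing a separate design dimension.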
Given a power series as the solution of a linear differential equation with appropriate initial conditions, minimization consists in finding a non-trivial linear differential equation of minimal order having this power series as a solution. This problem exists in both homogeneous and inhomogeneous variants; it is distinct from, but related to, the classical problem of factorization of differential operators. Recently, minimization has found applications in Transcendental Number Theory, more specifically in the computation of non-zero algebraic points where Siegel's $E$-functions take algebraic values. We present algorithms and implementations for these questions, and discuss examples and experiments.
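As a toy illustration of what "minimal order" means (not the algorithms of the paper, which operate on the power-series representation rather than a closed form): $\exp(x)$ satisfies the order-2 equation $y'' - y = 0$, yet the minimal-order annihilating equation is $y' - y = 0$.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)
print(sp.simplify(sp.diff(f, x, 2) - f))  # 0: the order-2 equation holds
print(sp.simplify(sp.diff(f, x) - f))     # 0: a lower-order equation also annihilates f
```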
We propose an optimistic estimate to evaluate the best possible fitting performance of nonlinear models. It yields an optimistic sample size that quantifies the smallest possible sample size required to fit/recover a target function using a nonlinear model. We estimate the optimistic sample sizes for matrix factorization models, deep models, and deep neural networks (DNNs) with fully-connected or convolutional architectures. For each nonlinear model, our estimates predict a specific subset of targets that can be fitted at overparameterization, a prediction confirmed by our experiments. Our optimistic estimate reveals two special properties of DNN models: free expressiveness in width and costly expressiveness in connection. These properties suggest the following architecture design principles for DNNs: (i) feel free to add neurons/kernels; (ii) restrain from connecting neurons. Overall, our optimistic estimate theoretically unveils the vast potential of nonlinear models for fitting at overparameterization. Based on this framework, we anticipate gaining, in the near future, a deeper understanding of how and why numerous nonlinear models such as DNNs can effectively realize their potential in practice.
We study the problem of generating diverse candidates in the context of Multi-Objective Optimization. In many applications of machine learning, such as drug discovery and material design, the goal is to generate candidates that simultaneously optimize a set of potentially conflicting objectives. Moreover, these objectives are often imperfect evaluations of some underlying property of interest, making it important to generate diverse candidates so that multiple options are available for expensive downstream evaluations. We propose Multi-Objective GFlowNets (MOGFNs), a novel method for generating diverse Pareto-optimal solutions, based on GFlowNets. We introduce two variants of MOGFNs: MOGFN-PC, which models a family of independent sub-problems defined by a scalarization function, with reward-conditional GFlowNets, and MOGFN-AL, which solves a sequence of sub-problems defined by an acquisition function in an active learning loop. Our experiments on a wide variety of synthetic and benchmark tasks demonstrate the advantages of the proposed methods in terms of Pareto performance and, importantly, improved candidate diversity, which is the main contribution of this work.
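The following sketch illustrates the scalarization idea behind preference-conditioned sub-problems: a preference vector collapses the multiple objective values of a candidate into a single reward. It is an assumption for illustration only; the weighted-sum choice, function names, and Dirichlet preference sampling are not taken from the paper.

```python
import numpy as np

def sample_preference(num_objectives, rng):
    """Draw preference weights from a flat Dirichlet (uniform on the simplex)."""
    return rng.dirichlet(np.ones(num_objectives))

def scalarize(objectives, weights):
    """Weighted-sum scalarization: each preference vector defines one sub-problem."""
    return float(np.dot(weights, objectives))

rng = np.random.default_rng(0)
w = sample_preference(3, rng)                  # trade-off between, e.g., 3 properties
candidate_scores = np.array([0.8, 0.2, 0.5])   # objective values of one candidate
print(w, scalarize(candidate_scores, w))
```

Conditioning a single generative sampler on the preference vector, rather than training one sampler per fixed weighting, is what lets a reward-conditional model cover a whole family of such sub-problems.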
In the domain generalization literature, a common objective is to learn representations independent of the domain after conditioning on the class label. We show that this objective is not sufficient: there exist counter-examples where a model fails to generalize to unseen domains even after satisfying class-conditional domain invariance. We formalize this observation through a structural causal model and show the importance of modeling within-class variations for generalization. Specifically, classes contain objects that characterize specific causal features, and domains can be interpreted as interventions on these objects that change non-causal features. We highlight an alternative condition: inputs across domains should have the same representation if they are derived from the same object. Based on this objective, we propose matching-based algorithms when base objects are observed (e.g., through data augmentation) and approximate the objective when objects are not observed (MatchDG). Our simple matching-based algorithms are competitive with prior work on out-of-domain accuracy for the rotated MNIST, Fashion-MNIST, PACS, and Chest-Xray datasets. Our method MatchDG also recovers ground-truth object matches: on MNIST and Fashion-MNIST, the top-10 matches from MatchDG have over 50% overlap with ground-truth matches.
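A minimal sketch of the object-matching idea: representations of inputs derived from the same base object in different domains are pulled together, in addition to the usual classification loss. This is an illustrative assumption, not the authors' implementation; the squared-distance penalty, tensor names, and weighting factor are placeholders.

```python
import torch
import torch.nn.functional as F

def matching_loss(features_a, features_b):
    """Penalize distance between paired representations of the same object."""
    return ((features_a - features_b) ** 2).sum(dim=1).mean()

def total_loss(logits, labels, feats_domain_a, feats_domain_b, lam=1.0):
    """Classification loss plus the cross-domain object-matching penalty."""
    return F.cross_entropy(logits, labels) + lam * matching_loss(feats_domain_a, feats_domain_b)

# Toy shapes: batch of 8 paired inputs, 10 classes, 128-dim features.
logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
fa, fb = torch.randn(8, 128), torch.randn(8, 128)
print(total_loss(logits, labels, fa, fb).item())
```

When base objects are unobserved, the pairing itself has to be inferred, which is the approximation that MatchDG addresses.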
We investigate the problem of automatically determining what type of shoe left an impression found at a crime scene. This recognition problem is made difficult by the variability in types of crime scene evidence (ranging from traces of dust or oil on hard surfaces to impressions made in soil) and the lack of comprehensive databases of shoe outsole tread patterns. We find that mid-level features extracted by pre-trained convolutional neural nets are surprisingly effective descriptors for this specialized domain. However, the choice of similarity measure for matching exemplars to a query image is essential to good performance. For matching multi-channel deep features, we propose the use of multi-channel normalized cross-correlation and analyze its effectiveness. Our proposed metric significantly improves performance in matching crime scene shoeprints to laboratory test impressions. We also show its effectiveness in other cross-domain image retrieval problems: matching facade images to segmentation labels and aerial photos to map images. Finally, we introduce a discriminatively trained variant and fine-tune our system via our proposed metric, obtaining state-of-the-art performance.
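An illustrative sketch of multi-channel normalized cross-correlation between two aligned, same-sized deep feature maps: each channel is normalized to zero mean and unit norm, correlated, and the per-channel scores are averaged. This mirrors the idea described above; details such as padding, striding a template over a larger query, or the discriminatively trained variant are assumptions not covered here.

```python
import numpy as np

def mcncc(feat_a, feat_b, eps=1e-8):
    """feat_a, feat_b: arrays of shape (channels, H, W) at the same alignment."""
    scores = []
    for a, b in zip(feat_a, feat_b):
        a = (a - a.mean()) / (np.linalg.norm(a - a.mean()) + eps)
        b = (b - b.mean()) / (np.linalg.norm(b - b.mean()) + eps)
        scores.append(np.sum(a * b))   # per-channel normalized correlation in [-1, 1]
    return float(np.mean(scores))      # average over channels

rng = np.random.default_rng(0)
query = rng.standard_normal((64, 16, 16))   # e.g. CNN features of a shoeprint patch
print(mcncc(query, query))                                # self-similarity, approx. 1
print(mcncc(query, rng.standard_normal((64, 16, 16))))    # unrelated features, near 0
```

Per-channel normalization is what makes the score robust to channel-wise shifts and scalings between the crime scene and laboratory domains, compared with a raw dot product over the stacked features.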