We consider lithological tomography, in which the posterior distribution of (hydro)geological parameters of interest is inferred from geophysical data by treating the intermediate geophysical properties as latent variables. In such a latent variable model, one needs to estimate the intractable likelihood of the (hydro)geological parameters given the geophysical data. The pseudo-marginal method is an adaptation of the Metropolis-Hastings algorithm in which an unbiased approximation of this likelihood is obtained by Monte Carlo averaging over samples from, in this setting, the noisy petrophysical relationship linking (hydro)geological and geophysical properties. To make the method practical in data-rich geophysical settings with low noise levels, we demonstrate that the Monte Carlo sampling must rely on importance sampling distributions that closely approximate the posterior distribution of petrophysical scatter around the sampled (hydro)geological parameter field. To achieve a suitable acceptance rate, we rely on both (1) the correlated pseudo-marginal method, which correlates the samples used in the proposed and current states of the Markov chain, and (2) a model proposal scheme that preserves the prior distribution. As a synthetic test example, we infer porosity fields using crosshole ground-penetrating radar (GPR) first-arrival travel times. We use a pixel-based parameterization of the multi-Gaussian porosity field on a 50 x 50 grid with known statistical parameters, resulting in a high-dimensional parameter space. We demonstrate that the correlated pseudo-marginal method with our proposed importance sampling and prior-preserving proposal scheme outperforms current state-of-the-art methods in both linear and non-linear settings by greatly enhancing the posterior exploration.
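As an illustrative aside, a minimal sketch of the importance-sampling estimator that underlies the pseudo-marginal step is given below: the intractable likelihood of the (hydro)geological parameters is replaced by a Monte Carlo average over petrophysical realizations drawn from an importance distribution. The toy densities, the crude importance distribution, and all variable names are hypothetical and not the paper's exact construction.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def pm_likelihood(theta, data, n_samples=100, sigma_petro=0.05, sigma_noise=0.01):
        """Unbiased importance-sampling estimate of log p(data | theta).

        theta : (hydro)geological parameter field (toy: porosity values)
        data  : observed geophysical data (toy: travel times)
        The latent geophysical property g is linked to theta through a noisy
        petrophysical relationship g = f(theta) + petrophysical scatter.
        """
        f_theta = 0.5 * theta                        # hypothetical petrophysical mean
        # Importance distribution q(g | theta, data): here centred on a crude
        # posterior approximation; a good q is essential in low-noise settings.
        q_mean, q_std = 0.5 * (f_theta + data), 0.5 * sigma_petro
        g = rng.normal(q_mean, q_std, size=(n_samples,) + theta.shape)

        log_prior = norm.logpdf(g, f_theta, sigma_petro).sum(axis=-1)  # p(g | theta)
        log_like  = norm.logpdf(data, g, sigma_noise).sum(axis=-1)     # p(data | g)
        log_q     = norm.logpdf(g, q_mean, q_std).sum(axis=-1)         # q(g | theta, data)

        log_w = log_prior + log_like - log_q
        # log-mean-exp of the importance weights gives the log-likelihood estimate
        m = log_w.max()
        return m + np.log(np.exp(log_w - m).mean())

    # Hypothetical usage inside a (correlated) pseudo-marginal MH step:
    # log_L_proposed = pm_likelihood(theta_proposed, observed_data)

In the correlated variant, the random numbers used to draw g for the proposed and current states would be correlated rather than drawn independently as above.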
Efficient optimization of topology and raster angle has shown substantial enhancements in the mechanical properties of 3D printed thermoplastic polymers. Topology optimization helps reduce the waste of raw material in the fabrication of 3D printed parts, thus decreasing the production costs associated with manufacturing lighter structures. Fiber orientation plays an important role in increasing the stiffness of a structure. This paper develops and tests a new method for handling stress constraints in the topology and fiber orientation optimization of 3D printed orthotropic structures. The stress constraints are coupled with an objective function that maximizes stiffness. This is accomplished by using the modified solid isotropic material with penalization method with the method of moving asymptotes as the optimizer. Each element has a fictitious density and an angle as its main design variables. To reduce the number of stress constraints, and thus the computational cost, a new clustering strategy is employed in which the highest stresses in the principal material coordinates are grouped into two separate clusters using an adjusted $P$-norm. The formulation and sensitivity analysis are described in detail. While the numerical examples consider 2D structures, the formulation is generic and the method can also be used for 3D structures. Our results show that this method can produce efficient structures suitable for 3D printing while avoiding stress concentrations.
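For reference, a commonly used form of a clustered, adjusted $P$-norm stress constraint (the exact correction factor and clustering rule used by the authors may differ) is, for a cluster $\Omega_k$ of elements with stresses $\sigma_i$ in the principal material coordinates and allowable stress $\sigma_i^{\lim}$,
\[
c_k \left( \sum_{i \in \Omega_k} \left( \frac{\sigma_i}{\sigma_i^{\lim}} \right)^{P} \right)^{1/P} \le 1 ,
\]
where $c_k$ is an adaptive correction factor chosen so that the $P$-norm tracks the maximum stress ratio in the cluster; here two such clusters would separate the highest stresses along the two principal material directions.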
Broadcast/multicast communication systems are typically designed to optimize the outage rate criterion, which neglects the performance of the fraction of clients with the worst channel conditions. Targeting ultra-reliable communication scenarios, this paper takes a complementary approach by introducing the conditional value-at-risk (CVaR) rate as the expected rate of a worst-case fraction of clients. To support differential quality-of-service (QoS) levels in this class of clients, layered division multiplexing (LDM) is applied, which enables decoding at different rates. Focusing on a practical scenario in which the transmitter does not know the fading distribution, layer allocation is optimized based on a dataset sampled during deployment. The optimality gap caused by the availability of limited data is bounded via a generalization analysis, and the sample complexity is shown to increase as the designated fraction of worst-case clients decreases. Considering this theoretical result, meta-learning is introduced as a means to reduce sample complexity by leveraging data from previous deployments. Numerical experiments demonstrate that LDM improves spectral efficiency even for small datasets; that, for sufficiently large datasets, the proposed mirror-descent-based layer optimization scheme achieves a CVaR rate close to that achieved when the transmitter knows the fading distribution; and that meta-learning can significantly reduce data requirements.
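For a continuous rate distribution, the lower-tail conditional value-at-risk at level $\alpha$ (the expected rate of the worst $\alpha$-fraction of clients) on which the CVaR rate criterion is built can be written in the standard Rockafellar-Uryasev variational form
\[
\mathrm{CVaR}_{\alpha}(R) \;=\; \mathbb{E}\!\left[ R \mid R \le F_R^{-1}(\alpha) \right] \;=\; \max_{c \in \mathbb{R}} \left\{ c - \frac{1}{\alpha}\, \mathbb{E}\!\left[ (c - R)^{+} \right] \right\},
\]
where $R$ is the per-client rate and $F_R^{-1}(\alpha)$ its $\alpha$-quantile; the variational form is what makes sample-based optimization of the layer allocation tractable.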
We study several simplified dark matter (DM) models and their signatures at the LHC using neural networks. We focus on the usual monojet plus missing transverse energy channel but, to train the algorithms, we organize the data in 2D histograms instead of event-by-event arrays. This results in a large performance boost for distinguishing between standard model (SM)-only and SM plus new physics signals. We use the kinematic monojet features as input data, which allows us to describe families of models with a single data sample. We find that the neural network performance does not depend on the simulated number of background events when the results are expressed as a function of $S/\sqrt{B}$, where $S$ and $B$ are the number of signal and background events per histogram, respectively. This makes the method flexible, since testing a particular model then only requires knowing the new-physics monojet cross section. Furthermore, we also discuss the network performance under incorrect assumptions about the true DM nature. Finally, we propose multimodel classifiers to search for and identify new signals in a more general way for the next LHC run.
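A minimal sketch of the input construction described above: monojet kinematic features (here, hypothetically, missing transverse energy and leading-jet transverse momentum) are binned into a 2D histogram, which then serves as a single training example for an image-like classifier. Feature names, ranges, binning, and the toy event generation are illustrative only.

    import numpy as np

    def events_to_histogram(met, jet_pt, bins=30,
                            met_range=(200., 1200.), pt_range=(150., 1000.)):
        """Bin a batch of monojet events into one normalized 2D histogram."""
        hist, _, _ = np.histogram2d(met, jet_pt, bins=bins,
                                    range=[met_range, pt_range])
        return hist / max(hist.sum(), 1.0)   # one "image" per pseudo-experiment

    # Hypothetical usage: one histogram per pseudo-experiment, labelled
    # SM-only (0) or SM + new physics (1), fed to any image classifier.
    sm_hist = events_to_histogram(np.random.exponential(300., 5000) + 200.,
                                  np.random.exponential(250., 5000) + 150.)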
The R package "sensobol" provides several functions to conduct variance-based uncertainty and sensitivity analysis, from the estimation of sensitivity indices to the visual representation of the results. It implements several state-of-the-art first- and total-order estimators and allows the computation of up to third-order effects, as well as of the approximation error, in a swift and user-friendly way. Its flexibility also makes it appropriate for models with either a scalar or a multivariate output. We illustrate its functionality by conducting a variance-based sensitivity analysis of three classic models: the Sobol' (1998) G function, the logistic population growth model of Verhulst (1845), and the spruce budworm and forest model of Ludwig, Jones and Holling (1976).
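For reference, the first-order and total-order Sobol' indices that such estimators target are defined as
\[
S_i = \frac{V_{X_i}\!\left[\, \mathbb{E}_{X_{\sim i}}(Y \mid X_i) \,\right]}{V(Y)}, \qquad
T_i = \frac{\mathbb{E}_{X_{\sim i}}\!\left[\, V_{X_i}(Y \mid X_{\sim i}) \,\right]}{V(Y)} ,
\]
where $X_{\sim i}$ denotes all inputs except $X_i$; higher-order indices follow analogously from the variance decomposition of $Y$.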
A matrix formalism for the determination of the best estimator in certain simulation-based parameter estimation problems is presented and discussed. The equations, termed the Linear Template Fit, combine a linear regression with a least-squares method and its optimization. The Linear Template Fit employs only predictions that are calculated beforehand and provided for a few values of the parameter of interest. It is therefore particularly suited for parameter estimation with computationally intensive simulations, which are otherwise often limited in their usability for statistical inference, and for performance-critical applications. Equations for error propagation are discussed, and the analytic form provides comprehensive insight into the parameter estimation problem. Furthermore, the quickly converging algorithm of the Quadratic Template Fit is presented, which is suitable for a non-linear dependence on the parameters. As an example application, a determination of the strong coupling constant, $\alpha_s(m_Z)$, from inclusive jet cross-section data at the CERN Large Hadron Collider is studied and compared with previously published results.
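The basic idea can be sketched, in simplified notation that may differ from the paper's matrix expressions, as follows: if the templates computed at a few reference values of the parameter $\alpha$ are first interpolated by linear regression as $\bar{y} + \tilde{Y}\alpha$, then the weighted least-squares comparison with the data $d$ (covariance $V$) has the closed-form best estimator
\[
\hat{\alpha} \;=\; \left( \tilde{Y}^{\mathsf T} V^{-1} \tilde{Y} \right)^{-1} \tilde{Y}^{\mathsf T} V^{-1} \left( d - \bar{y} \right),
\]
which is linear in the data and therefore admits analytic error propagation.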
Recent advances in Transformer models allow for unprecedented sequence lengths, due to linear space and time complexity. In the meantime, relative positional encoding (RPE) was proposed as beneficial for classical Transformers; it exploits lags instead of absolute positions for inference. Still, RPE is not available for the recent linear variants of the Transformer, because it requires the explicit computation of the attention matrix, which is precisely what such methods avoid. In this paper, we bridge this gap and present Stochastic Positional Encoding as a way to generate positional encodings that can be used as a replacement for the classical additive (sinusoidal) PE and provably behave like RPE. The main theoretical contribution is a connection between positional encoding and the cross-covariance structure of correlated Gaussian processes. We illustrate the performance of our approach on the Long-Range Arena benchmark and on music generation.
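A minimal numerical illustration of the underlying principle, not the paper's exact construction: random feature maps of positions whose inner products depend (approximately) only on the lag, as for a stationary Gaussian-process covariance. The kernel, lengthscale, and feature dimension below are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_pe(positions, num_feats=256, lengthscale=10.0):
        """Random Fourier features of scalar positions.

        For a stationary (here RBF-like) kernel k(m - n), Bochner's theorem gives
        phi(m) @ phi(n) ~= k(m - n), so the encoding behaves like a relative PE.
        """
        omega = rng.normal(0.0, 1.0 / lengthscale, size=num_feats)  # spectral samples
        b = rng.uniform(0.0, 2 * np.pi, size=num_feats)
        return np.sqrt(2.0 / num_feats) * np.cos(np.outer(positions, omega) + b)

    pos = np.arange(100.0)
    phi = stochastic_pe(pos)      # (100, num_feats)
    approx_k = phi @ phi.T        # depends (approximately) on |m - n| only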
Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based causal discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may produce attractive results with infinite samples and certain model assumptions, they are usually less satisfactory in practice due to finite data and possible violation of assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best score. Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy, and our final output is the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has improved search ability but also allows for a flexible score function under the acyclicity constraint.
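One way to implement such an acyclicity penalty is the smooth characterization popularized by NOTEARS, $h(A) = \mathrm{tr}(e^{A \circ A}) - d$, which is zero exactly when $A$ encodes a DAG. The sketch below (the penalty weights and the placeholder score are assumptions, not the paper's exact choices) shows how a score and two acyclicity penalty terms can be folded into a single reward.

    import numpy as np
    from scipy.linalg import expm

    def acyclicity(adj):
        """h(A) = tr(exp(A * A)) - d; zero exactly when adj encodes a DAG."""
        d = adj.shape[0]
        return np.trace(expm(adj * adj)) - d

    def reward(adj, score, lam1=1.0, lam2=10.0):
        """Combine a predefined graph score with two acyclicity penalty terms:
        an indicator of any violation plus the smooth measure itself."""
        h = acyclicity(adj)
        return -score - lam1 * float(h > 1e-8) - lam2 * h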
Deep neural network models used for medical image segmentation are large because they are trained with high-resolution three-dimensional (3D) images. Graphics processing units (GPUs) are widely used to accelerate training, but GPU memory is often not large enough to train such models. A popular approach to tackling this problem is the patch-based method, which divides a large image into small patches and trains the model on these small patches. However, this method can degrade segmentation quality if a target object spans multiple patches. In this paper, we propose a novel approach to 3D medical image segmentation that uses data swapping, which swaps intermediate data out of GPU memory into CPU memory to enlarge the effective GPU memory size, enabling training on high-resolution 3D medical images without patching. We carefully tuned the parameters of the data-swapping method to obtain the best training performance for 3D U-Net, a widely used deep neural network model for medical image segmentation. We applied our tuning to train 3D U-Net with full-size images of 192 x 192 x 192 voxels from a brain tumor dataset. As a result, communication overhead, which is the most critical issue, was reduced by 17.1%. Compared with the patch-based method using patches of 128 x 128 x 128 voxels, our training on full-size images improved the mean Dice score by 4.48% and 5.32% for detecting the whole-tumor and tumor-core sub-regions, respectively. The total training time was reduced from 164 hours to 47 hours, a 3.53-fold speedup.
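As a point of reference only (the paper tunes a dedicated data-swapping runtime rather than using this API), activation offloading in the same spirit can be expressed in PyTorch by saving intermediate tensors to CPU memory during the forward pass; the model, loss, and data below are placeholders.

    import torch
    from torch.autograd.graph import save_on_cpu

    # model: a 3D segmentation network (e.g., a 3D U-Net); x, y: full-size volumes.
    def training_step(model, criterion, optimizer, x, y):
        optimizer.zero_grad()
        # Intermediate activations are swapped out to (pinned) CPU memory and
        # swapped back in during backward, enlarging the effective GPU memory.
        with save_on_cpu(pin_memory=True):
            loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        return loss.item()

The trade-off is exactly the communication overhead discussed above: the larger effective memory is paid for with host-device transfers, which is why careful tuning and overlap matter.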
This paper proposes a Reinforcement Learning (RL) algorithm to synthesize policies for a Markov Decision Process (MDP) such that a linear-time property is satisfied. We convert the property into a Limit-Deterministic Büchi Automaton (LDBA) and then construct a product MDP between the automaton and the original MDP. A reward function is then assigned to the states of the product automaton according to the acceptance condition of the LDBA. With this reward function, our algorithm synthesizes a policy that satisfies the linear-time property: as such, the policy synthesis procedure is "constrained" by the given specification. Additionally, we show that the RL procedure sets up an online value iteration method to calculate the maximum probability of satisfying the given property at any given state of the MDP; a convergence proof for the procedure is provided. Finally, the performance of the algorithm is evaluated via a set of numerical examples. We observe an improvement of one order of magnitude in the number of iterations required for the synthesis compared to existing approaches.
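A much-simplified value-iteration sketch on the product MDP: if the accepting component is treated as an absorbing target, the maximal probability of satisfying the property at each product state reduces to a reachability value. This is only an illustration under that simplifying assumption; the paper's reward shaping and convergence analysis handle the general Büchi acceptance condition.

    import numpy as np

    def max_sat_probability(P, accepting, n_iter=1000, tol=1e-8):
        """Value iteration for max reachability probability on a product MDP.

        P         : array of shape (n_actions, n_states, n_states), P[a, s, s']
        accepting : boolean array of shape (n_states,), accepting product states
        """
        v = accepting.astype(float)
        for _ in range(n_iter):
            q = P @ v                                   # (n_actions, n_states)
            v_new = np.where(accepting, 1.0, q.max(axis=0))
            if np.max(np.abs(v_new - v)) < tol:
                break
            v = v_new
        return v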
This paper reports the Deep LOGISMOS approach to 3D tumor segmentation, which incorporates boundary information derived from deep contextual learning into LOGISMOS - layered optimal graph image segmentation of multiple objects and surfaces. Accurate and reliable tumor segmentation is essential to tumor growth analysis and treatment selection. A fully convolutional network (FCN), UNet, is first trained using three adjacent 2D patches centered at the tumor, providing a contextual UNet segmentation and probability map for each 2D patch. The UNet segmentation is then refined by a Gaussian Mixture Model (GMM) and morphological operations. The refined UNet segmentation provides the initial shape boundary used to build the segmentation graph. The cost of each graph node is determined by the UNet probability maps. Finally, a max-flow algorithm is employed to find the globally optimal solution and thus obtain the final segmentation. For evaluation, we applied the method to pancreatic tumor segmentation on a dataset of 51 CT scans, of which 30 scans were used for training and 21 for testing. With Deep LOGISMOS, the Dice similarity coefficient (DSC) and relative volume difference (RVD) reached 83.2 +- 7.8% and 18.6 +- 17.4%, respectively, both significantly improved (p < 0.05) compared with the contextual UNet and/or LOGISMOS alone.
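The final max-flow step can be illustrated, in a heavily simplified 2D single-surface form that omits the layered LOGISMOS graph construction and its column arcs, as a min-cut on a grid graph whose terminal capacities come from a probability map; the capacities and smoothness weight below are illustrative choices only.

    import numpy as np
    import networkx as nx

    def mincut_segmentation(prob, smooth=0.5, eps=1e-6):
        """Min-cut/max-flow segmentation of a 2D probability map (illustrative only)."""
        h, w = prob.shape
        g = nx.DiGraph()
        for i in range(h):
            for j in range(w):
                n = (i, j)
                g.add_edge('src', n, capacity=-np.log(1 - prob[i, j] + eps))  # background cost
                g.add_edge(n, 'sink', capacity=-np.log(prob[i, j] + eps))     # object cost
                for di, dj in ((0, 1), (1, 0)):
                    if i + di < h and j + dj < w:
                        m = (i + di, j + dj)
                        g.add_edge(n, m, capacity=smooth)   # smoothness term
                        g.add_edge(m, n, capacity=smooth)
        _, (reach, _) = nx.minimum_cut(g, 'src', 'sink')
        mask = np.zeros((h, w), dtype=bool)
        for node in reach:
            if node != 'src':
                mask[node] = True
        return mask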