This paper introduces several depths for random sets with possibly non-convex realisations, proposes ways to estimate the depths from samples, and compares them with existing notions. The depths are further applied to the comparison of two samples of random sets, using the visual DD-plot method and statistical testing. The advantage of this approach is that it identifies the sets within a sample that are responsible for rejecting the null hypothesis of equality in distribution, providing clues about the differences between the distributions. The method is validated in a simulation study and applied to real data consisting of histological images of mastopathy and mammary cancer tissue.
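As a hedged illustration of the DD-plot idea only (not of the set depths introduced in the paper): each observation is assigned a depth with respect to both samples, and the two depth values are plotted against each other, so that points far from the diagonal flag observations behaving differently under the two distributions. In the sketch below, each random set is summarized by a hypothetical two-dimensional feature vector, and a generic Mahalanobis-type depth serves as a stand-in.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-in: each random set is summarized by a feature vector
# (e.g., area and perimeter); the Mahalanobis-type depth below is a generic
# depth function, not one of the set depths introduced in the paper.
def mahalanobis_depth(points, sample):
    mu = sample.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(sample.T))
    d2 = np.einsum("ij,jk,ik->i", points - mu, inv_cov, points - mu)
    return 1.0 / (1.0 + d2)          # deep points (near the center) get depth ~ 1

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (100, 2))   # features of sample 1 (e.g., mastopathy)
Y = rng.normal(0.5, 1.2, (100, 2))   # features of sample 2 (e.g., cancer)

Z = np.vstack([X, Y])                # depth of every observation w.r.t. both samples
plt.scatter(mahalanobis_depth(Z, X), mahalanobis_depth(Z, Y), s=10)
plt.plot([0, 1], [0, 1], "k--")      # equal-depth diagonal
plt.xlabel("depth w.r.t. sample 1")
plt.ylabel("depth w.r.t. sample 2")  # off-diagonal points flag differences
plt.show()
```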
Benchmark instances for the unbounded knapsack problem are typically generated according to specific criteria within a given constant range $R$; such instances can be referred to as the unbounded knapsack problem with bounded coefficients (UKPB). To increase the difficulty of solving these instances, the knapsack capacity $C$ is usually set to a very large value. An exact algorithm whose time and space complexity are both independent of the capacity $C$ is therefore highly desirable. In this paper, we propose an exact algorithm with time complexity $O(R^4)$ and space complexity $O(R^3)$. The algorithm first divides the multiset $N$ into two multisubsets, $N_1$ and $N_2$, based on the profit density of the item types. For the multisubset $N_2$, composed of types with profit density lower than that of the maximum-profit-density type, we utilize a recent branch-and-bound (B\&B) result by Dey et al. (Math. Prog., pp 569-587, 2023) to bound the maximum selection number of the types in $N_2$, and then employ the Unbounded-DP algorithm to solve exactly for these types. For the multisubset $N_1$, composed of the maximum-profit-density type and its counterparts with the same profit density, we transform the problem into a linear Diophantine equation and leverage results on the Frobenius problem to solve it efficiently. Notably, the proof techniques required by the algorithm are primarily covered in the first-year mathematics curriculum, making them easy for subsequent researchers to grasp.
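For context, here is a minimal sketch of the classical capacity-indexed Unbounded-DP that the abstract refers to. Its $O(nC)$ time and $O(C)$ space exhibit exactly the dependence on $C$ that the proposed $O(R^4)$/$O(R^3)$ algorithm avoids by restricting the DP to a bounded residual range; all names here are illustrative.

```python
def unbounded_dp(capacity, weights, profits):
    """Classical capacity-indexed Unbounded-DP: O(n*C) time, O(C) space.

    This is only the textbook baseline referred to in the abstract; the
    proposed algorithm applies it to a bounded residual range so that the
    overall complexity no longer involves C.
    """
    dp = [0] * (capacity + 1)                # dp[c] = best profit at capacity c
    for c in range(1, capacity + 1):
        for w, p in zip(weights, profits):
            if w <= c:
                dp[c] = max(dp[c], dp[c - w] + p)
    return dp[capacity]

print(unbounded_dp(10, [3, 4], [5, 7]))      # 17: take items of weight 3 + 3 + 4
```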
This paper proposes a systematic and novel component-level co-rotational (CR) framework for upgrading existing 3D continuum finite elements to flexible multibody analysis. Without using any model reduction techniques, high efficiency is achieved through sophisticated operations in both the modeling and the numerical implementation phases. In the modeling phase, as in conventional 3D nonlinear finite element analysis, the nodal absolute coordinates are used as the system generalized coordinates, so simple formulations of the inertia force terms are obtained. For the elastic force terms, inspired by the existing floating frame of reference formulation (FFRF) and the conventional element-level CR formulation, a component-level CR modeling strategy is developed. By combining the Schur complement technique with a full exploitation of the structure of the component-level CR model, an extremely efficient procedure is developed that transforms the linear equations arising at each Newton-Raphson iteration into linear systems with a constant coefficient matrix. The coefficient matrix can therefore be pre-computed and factorized only once; at all subsequent time steps only back substitutions are needed, which avoids frequently updating the Jacobian matrix and directly solving the large-scale linearized equations at each iteration. Multiple examples are presented to demonstrate the performance of the proposed framework.
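The solver strategy can be illustrated with a toy sketch (placeholder matrix and residuals, not the paper's CR multibody system): because the linearized systems share a constant coefficient matrix, one sparse factorization is computed up front and every subsequent Newton iteration reduces to back substitution.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy illustration of the solver strategy: factorize the constant coefficient
# matrix once, then reuse the factorization across all time steps and
# Newton-Raphson iterations.
n = 1000
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
lu = spla.splu(K)                      # factorize once (the expensive step)

rng = np.random.default_rng(0)
u = np.zeros(n)
for step in range(5):                  # time stepping
    for it in range(3):                # Newton-Raphson iterations
        rhs = rng.standard_normal(n)   # stand-in for the updated residual
        u += lu.solve(rhs)             # back substitution only: cheap
```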
In this work we introduce novel stress-only formulations of linear elasticity with special attention to their approximate solution using weighted residual methods. We present four sets of boundary value problems for a pure stress formulation of three-dimensional solids, as well as two-dimensional formulations for plane stress and plane strain. The associated governing equations are derived through modifications and combinations of the Beltrami-Michell equations and the Navier-Cauchy equations. The corresponding variational forms of dimension $d \in \{2,3\}$ allow one to approximate the stress tensor directly, without any presupposed potential stress functions, and are shown to be well-posed in $H^1 \otimes \mathrm{Sym}(d)$ via the Lax-Milgram theorem, making their finite element implementation with $C^0$-continuous elements straightforward. Further, in the finite element setting we provide a treatment of constant and piecewise constant body forces via distributions. The operators and differential identities in this work are given in modern tensor notation and rely on exact sequences, making the resulting equations and differential relations directly comprehensible. Finally, numerical benchmarks for convergence as well as spectral analysis are used to test the limits and identify viable use cases of the equations.
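For reference, the classical Beltrami-Michell compatibility equations that serve as one starting point read, in index notation with Poisson ratio $\nu$ and body force $f$ (a standard textbook form; the paper's modified and combined systems differ):
\[
\Delta \sigma_{ij} + \frac{1}{1+\nu}\,\sigma_{kk,ij} = -\frac{\nu}{1-\nu}\,\delta_{ij}\, f_{k,k} - \left(f_{i,j} + f_{j,i}\right).
\]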
Bayesian inference for complex models with an intractable likelihood can be tackled using algorithms that perform many calls to computer simulators. These approaches are collectively known as "simulation-based inference" (SBI). Recent SBI methods have used neural networks (NNs) to provide approximate, yet expressive, constructs for the unavailable likelihood function and the posterior distribution. However, they generally do not achieve an optimal trade-off between accuracy and computational demand. In this work, we propose an alternative that provides approximations to both the likelihood and the posterior distribution, using structured mixtures of probability distributions. Our approach produces accurate posterior inference when compared to state-of-the-art NN-based SBI methods, while exhibiting a much smaller computational footprint. We illustrate our results on several benchmark models from the SBI literature.
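A minimal sketch of the mixture-based idea under toy assumptions (a plain Gaussian mixture on a one-dimensional parameter and observation, not the paper's structured mixtures): fit a mixture to joint draws $(\theta, x)$ from the prior and simulator, then condition on the observed $x$ in closed form. The prior, simulator, and component count below are all hypothetical.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Toy prior and simulator (hypothetical): theta ~ N(0,1), x | theta ~ N(theta^2, 0.25).
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, 5000)
x = theta**2 + 0.5 * rng.normal(size=5000)

# Fit a Gaussian mixture to the joint simulations (theta, x).
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(np.column_stack([theta, x]))

def posterior_pdf(theta_grid, x_obs):
    """Condition the joint mixture on x = x_obs; exact for Gaussian components."""
    log_w, means, stds = [], [], []
    for pi_k, mu, S in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        means.append(mu[0] + S[0, 1] / S[1, 1] * (x_obs - mu[1]))  # conditional mean
        stds.append(np.sqrt(S[0, 0] - S[0, 1] ** 2 / S[1, 1]))     # conditional std
        log_w.append(np.log(pi_k) + norm.logpdf(x_obs, mu[1], np.sqrt(S[1, 1])))
    w = np.exp(np.array(log_w) - max(log_w))
    w /= w.sum()                                                   # reweighted components
    return sum(wk * norm.pdf(theta_grid, m, s) for wk, m, s in zip(w, means, stds))

grid = np.linspace(-3, 3, 201)
post = posterior_pdf(grid, x_obs=1.0)   # bimodal posterior near theta = +/- 1
```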
We investigate a convective Brinkman--Forchheimer problem coupled with a heat transfer equation. The model features temperature-dependent thermal diffusivity and viscosity. We prove the existence of a solution without restrictions on the data, and uniqueness when the solution is slightly smoother and the data are suitably restricted. We propose a finite element discretization scheme for the model and derive convergence results and a priori error estimates. Finally, we illustrate the theory with numerical examples.
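For concreteness, one representative form of such a coupled system (the precise setting in the paper, e.g. stationary versus time-dependent, and the Forchheimer exponent, may differ) is
\[
\begin{aligned}
\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} - \operatorname{div}\!\big(\nu(T)\nabla \mathbf{u}\big) + \alpha \mathbf{u} + \beta |\mathbf{u}|^{s-1}\mathbf{u} + \nabla p &= \mathbf{f}, \qquad \operatorname{div}\mathbf{u} = 0,\\
\partial_t T + \mathbf{u}\cdot\nabla T - \operatorname{div}\!\big(\kappa(T)\nabla T\big) &= g,
\end{aligned}
\]
where $\alpha \mathbf{u}$ is the Brinkman (Darcy) term and $\beta |\mathbf{u}|^{s-1}\mathbf{u}$ the Forchheimer term.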
This article provides approximate solutions for the nonlinear collision-induced breakage equation using two different semi-analytical schemes: the variational iteration method (VIM) and the optimized decomposition method (ODM). The study also includes a detailed convergence analysis and error estimation for ODM in the case of the product collision kernel $K(\epsilon,\rho)=\epsilon\rho$ and breakage kernel $b(\epsilon,\rho,\sigma)=\frac{2}{\rho}$ with an exponential decay initial condition. The accuracy of the suggested approaches is demonstrated by contrasting the estimated concentration function and moments with exact solutions in three numerical examples. Interestingly, in one case VIM provides a closed-form solution, while the finite-term series solutions obtained via both schemes supply excellent approximations of the concentration function and moments.
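In a commonly used formulation consistent with the kernel notation above (details such as symmetrization factors may vary), the governing equation for the concentration $c(\epsilon,t)$ reads
\[
\frac{\partial c(\epsilon,t)}{\partial t} = \int_0^\infty \!\! \int_\epsilon^\infty K(\rho,\sigma)\, b(\epsilon,\rho,\sigma)\, c(\rho,t)\, c(\sigma,t)\, d\rho\, d\sigma \;-\; c(\epsilon,t) \int_0^\infty K(\epsilon,\rho)\, c(\rho,t)\, d\rho,
\]
where the first term accounts for the birth of fragments of size $\epsilon$ from collisions of particles of sizes $\rho$ and $\sigma$, and the second for the loss of particles of size $\epsilon$ through collisions.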
In this paper, a class of high-order methods for numerically solving Functional Differential Equations with Piecewise Continuous Arguments (FDEPCAs) is discussed. The framework stems from expanding the vector field associated with the reference differential equation along an orthonormal basis of shifted and scaled Legendre polynomials, building on a suitable extension of Hamiltonian Boundary Value Methods. In designing the methods, perturbation results from the field of ordinary differential equations are suitably generalized to handle FDEPCAs. An error analysis of the devised family of methods is carried out, and a few numerical tests on Hamiltonian FDEPCAs are provided to support the theoretical findings and show the effectiveness of the resulting solution strategy.
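A prototypical FDEPCA, for orientation, is an equation whose deviating argument is piecewise constant, e.g.
\[
y'(t) = f\big(y(t),\, y([t])\big), \qquad t \ge 0,
\]
where $[\cdot]$ denotes the greatest-integer function, so that the delayed argument $y([t])$ remains constant on each interval $[n, n+1)$.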
In this paper we analyze weighted essentially non-oscillatory (WENO) schemes in the finite volume framework by examining the first step of the explicit third-order total variation diminishing Runge-Kutta method. The improved performance of the finite volume WENO-M, WENO-Z and WENO-ZR schemes over WENO-JS in the first time step is explained by the fact that the nonlinear weights corresponding to large errors are adjusted to increase the accuracy of the numerical solution. Based on this analysis, we propose novel Z-type nonlinear weights for the finite volume WENO scheme for hyperbolic conservation laws. Instead of forming the global smoothness indicator from differences of the local smoothness indicators, we employ a logarithmic function with tuners to ensure that the numerical dissipation is reduced around discontinuities while the essentially non-oscillatory property is preserved. The proposed scheme incurs no substantial extra computational expense. Numerical examples demonstrate the shock-capturing capability of the proposed WENO scheme.
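For reference, here is a sketch of the classical fifth-order WENO-JS nonlinear weights and reconstruction, the baseline that WENO-M, WENO-Z, WENO-ZR and the proposed Z-type weights modify (the new logarithmic global smoothness indicator is not reproduced here):

```python
import numpy as np

# Classical fifth-order WENO-JS (Jiang & Shu) for a single left-biased
# reconstruction at the cell interface x_{i+1/2}.
def weno_js_weights(f, eps=1e-6):
    """f = (f_{i-2}, f_{i-1}, f_i, f_{i+1}, f_{i+2}) cell averages."""
    b0 = 13/12*(f[0] - 2*f[1] + f[2])**2 + 1/4*(f[0] - 4*f[1] + 3*f[2])**2
    b1 = 13/12*(f[1] - 2*f[2] + f[3])**2 + 1/4*(f[1] - f[3])**2
    b2 = 13/12*(f[2] - 2*f[3] + f[4])**2 + 1/4*(3*f[2] - 4*f[3] + f[4])**2
    d = np.array([0.1, 0.6, 0.3])                  # ideal (linear) weights
    alpha = d / (eps + np.array([b0, b1, b2]))**2  # smoothness-based scaling
    return alpha / alpha.sum()                     # nonlinear weights omega_k

def weno_js_reconstruct(f):
    """Fifth-order reconstruction of f at x_{i+1/2} from three 3-point stencils."""
    f = np.asarray(f, dtype=float)
    w = weno_js_weights(f)
    q0 = (2*f[0] - 7*f[1] + 11*f[2]) / 6           # stencil {i-2, i-1, i}
    q1 = ( -f[1] + 5*f[2] +  2*f[3]) / 6           # stencil {i-1, i, i+1}
    q2 = (2*f[2] + 5*f[3] -    f[4]) / 6           # stencil {i, i+1, i+2}
    return w @ np.array([q0, q1, q2])
```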
This paper presents asymptotic results for the maximum likelihood and restricted maximum likelihood (REML) estimators within a two-way crossed mixed effect model as the sizes of the rows, columns, and cells tend to infinity. Under very mild conditions, which do not require the assumption of normality, the estimators are proven to be asymptotically normal with a structured covariance matrix. The growth rate for the number of rows, columns, and cells is unrestricted, whether considered pairwise or collectively.
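For concreteness, the standard two-way crossed mixed effect model with interaction has the form (notation assumed here, not taken from the paper)
\[
y_{ijk} = \mu + a_i + b_j + c_{ij} + e_{ijk}, \qquad i = 1,\dots,m,\; j = 1,\dots,n,\; k = 1,\dots,K_{ij},
\]
with independent random effects $a_i \sim (0,\sigma_a^2)$, $b_j \sim (0,\sigma_b^2)$, $c_{ij} \sim (0,\sigma_c^2)$ and errors $e_{ijk} \sim (0,\sigma_e^2)$, not necessarily normal.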
This paper proposes a~simple, yet powerful, method for balancing distributions of covariates for causal inference based on observational studies. The method makes it possible to balance an arbitrary number of quantiles (e.g., medians, quartiles, or deciles) together with means if necessary. The proposed approach is based on the theory of calibration estimators (Deville and S\"arndal 1992), in particular the calibration estimators for quantiles proposed by Harms and Duchesne (2006). The method requires neither numerical integration, nor kernel density estimation, nor assumptions about the distributions, and valid estimates can be obtained by drawing on existing asymptotic theory. An~illustrative example of the proposed approach is presented for the entropy balancing method and the covariate balancing propensity score method. Results of a~simulation study indicate that the method efficiently estimates the average treatment effect on the treated (ATT), the average treatment effect (ATE), the quantile treatment effect on the treated (QTT) and the quantile treatment effect (QTE), especially in the presence of non-linearity and mis-specification of the models. The proposed approach can be further generalized to other designs (e.g., multi-category or continuous treatments) or methods (e.g., the synthetic control method). Open-source software implementing the proposed methods is available.
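A minimal sketch of the balancing idea in the entropy-balancing spirit of the illustrative example (a simplified stand-in, not the calibration estimators of Harms and Duchesne (2006)): a quantile constraint is encoded as the moment of an indicator variable, and control-group weights are obtained by solving the entropy-balancing dual problem. Data and constraint choices below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy data: one covariate, treated and control groups.
rng = np.random.default_rng(0)
x_t = rng.normal(1.0, 1.0, 300)    # treated covariate
x_c = rng.normal(0.0, 1.5, 1000)   # control covariate

# Balance the control mean and median to the treated sample. The quantile
# constraint is a moment constraint on an indicator: the reweighted share of
# controls below the treated median must equal 0.5.
q50 = np.quantile(x_t, 0.5)
C = np.column_stack([x_c, (x_c <= q50).astype(float)])
targets = np.array([x_t.mean(), 0.5])

def dual(lam):
    # Entropy-balancing dual: weights w_i proportional to exp(lam' c_i);
    # minimizing log-sum-exp minus lam' targets enforces the constraints.
    logw = C @ lam
    m = logw.max()
    return m + np.log(np.exp(logw - m).sum()) - lam @ targets

lam = minimize(dual, np.zeros(2), method="BFGS").x
w = np.exp(C @ lam - (C @ lam).max())
w /= w.sum()
print("weighted mean:", w @ x_c, "target:", targets[0])
print("weighted share below q50:", w @ C[:, 1], "target: 0.5")
```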