Suicide bombing is an infamous form of terrorism that has become increasingly prevalent in the current era of global terror warfare. We consider targeted attacks of this kind and the use of detectors distributed over the area under threat as a protective countermeasure. Such detectors are not fully reliable and must be strategically placed in order to maximize the chances of detecting the attack, thereby minimizing the expected number of casualties. To this end, different metaheuristic approaches based on local search and on population-based search are considered and benchmarked against a powerful greedy heuristic from the literature. We conduct an extensive empirical evaluation on synthetic instances featuring very diverse properties. Most metaheuristics outperform the greedy algorithm, and a hill-climber is shown to be superior to the remaining approaches. This hill-climber is subsequently subjected to a sensitivity analysis to determine which problem features make it stand above the greedy approach, and is finally deployed on a number of problem instances built from realistic scenarios, corroborating the good performance of the heuristic.
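For concreteness, below is a minimal hill-climbing sketch for a placement problem of this kind, under an assumed toy detection model in which detection probability decays exponentially with the detector-to-attack distance; the objective, the swap neighborhood, and all parameter values are illustrative assumptions, not the paper's actual model.

```python
# Toy detector-placement hill-climber. The detection model (exponential decay
# with distance) and the first-improvement acceptance rule are assumptions.
import math, random

def expected_casualties(placement, attacks, lam=5.0):
    # An attack goes undetected only if every detector misses it.
    total = 0.0
    for ax, ay, casualties in attacks:
        p_miss = 1.0
        for sx, sy in placement:
            p_detect = math.exp(-math.hypot(ax - sx, ay - sy) / lam)
            p_miss *= 1.0 - p_detect
        total += p_miss * casualties
    return total

def hill_climb(sites, k, attacks, iters=5000, seed=0):
    rng = random.Random(seed)
    current = rng.sample(sites, k)          # random initial placement
    best = expected_casualties(current, attacks)
    for _ in range(iters):
        # Swap neighborhood: relocate one detector to an unused site.
        i = rng.randrange(k)
        free = [s for s in sites if s not in current]
        cand = current[:i] + [rng.choice(free)] + current[i + 1:]
        val = expected_casualties(cand, attacks)
        if val < best:                      # first-improvement acceptance
            current, best = cand, val
    return current, best
```

First-improvement acceptance keeps each iteration cheap; a steepest-descent variant would scan the whole swap neighborhood before moving.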
A new approach based on censoring and a moment criterion is introduced for parameter estimation of count distributions when the probability generating function is available, even though the probability mass function has no closed form and/or finite moments do not exist.
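As a sketch of the moment-criterion idea (omitting the censoring refinement), one can match the empirical PGF, \(\hat G(t) = n^{-1}\sum_i t^{X_i}\), to the theoretical PGF at a few evaluation points; the Poisson example and the grid of points below are assumptions for illustration.

```python
# PGF-matching sketch: fit a Poisson parameter by minimizing the squared
# distance between empirical and theoretical PGFs on a grid of t-values.
import numpy as np
from scipy.optimize import minimize_scalar

def empirical_pgf(t, sample):
    # \hat G(t) = (1/n) sum_i t^{X_i}, evaluated at each point of t
    return np.power.outer(t, sample).mean(axis=1)

def fit_poisson_pgf(sample, t=None):
    t = np.linspace(0.1, 0.9, 9) if t is None else t
    ghat = empirical_pgf(t, sample)
    # Poisson PGF: G(t) = exp(theta * (t - 1))
    obj = lambda th: np.sum((np.exp(th * (t - 1.0)) - ghat) ** 2)
    return minimize_scalar(obj, bounds=(1e-6, 50.0), method="bounded").x

# Usage: fit_poisson_pgf(np.random.poisson(3.0, 2000))  # approx 3.0
```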
In the finite difference approximation of the fractional Laplacian, the stiffness matrix is typically dense and needs to be approximated numerically. The effect of the accuracy of the stiffness-matrix approximation on the accuracy of the whole computation is analyzed and shown to be significant. Four such approximations are discussed. While they are all shown to work well with the recently developed grid-overlay finite difference method (GoFD) for the numerical solution of boundary value problems of the fractional Laplacian, they differ in accuracy, cost of computation, performance with preconditioning, and asymptotic decay away from the diagonal. In addition, two preconditioners, based on sparse and circulant matrices, are discussed for the iterative solution of linear systems associated with the stiffness matrix. Numerical results in two and three dimensions are presented.
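A minimal sketch of the circulant-preconditioning idea, assuming a symmetric Toeplitz stiffness matrix in one dimension and a Strang-type circulant applied via FFT; actual GoFD stiffness matrices are only approximately of this form, so this is a schematic, not the paper's construction.

```python
# Strang-type circulant preconditioner for a symmetric Toeplitz matrix,
# applied via FFT inside a Krylov solver.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def strang_eigs(col):
    # col: first column of a symmetric Toeplitz stiffness matrix
    n, c = len(col), col.astype(float)
    for k in range(n // 2 + 1, n):
        c[k] = col[n - k]            # wrap the band around (Strang's rule)
    return np.fft.fft(c)             # eigenvalues of the circulant

def make_preconditioner(col):
    eigs = strang_eigs(col)
    solve = lambda r: np.real(np.fft.ifft(np.fft.fft(r) / eigs))
    return LinearOperator((len(col), len(col)), matvec=solve)

# Usage with a Toeplitz stiffness matrix A (first column `col`) and rhs b:
#   x, info = cg(A, b, M=make_preconditioner(col))
```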
We obtain almost sure strong consistency and a Berry-Esseen-type bound for the maximum likelihood estimator Ln of the ensemble L for determinantal point processes (DPPs), strengthening and completing previous work initiated in Brunel, Moitra, Rigollet, and Urschel [BMRU17]. Numerical algorithms for estimating DPPs are developed and simulation studies are performed. Lastly, we give an explicit formula and a detailed discussion of the maximum likelihood estimator for a blocked determinantal matrix of two-by-two submatrices and compare it with the frequency method.
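For reference, the objective being maximized is the standard DPP log-likelihood: for observed subsets \(J_1,\dots,J_n\) and ensemble matrix \(L\), the average log-likelihood is \(n^{-1}\sum_i \log\det(L_{J_i}) - \log\det(L+I)\). A straightforward evaluation (parametrization of L and the optimizer are left open) might look as follows.

```python
# Average DPP log-likelihood of a kernel L given observed subsets.
import numpy as np

def dpp_avg_log_likelihood(L, samples):
    # Normalizing constant: log det(L + I)
    _, logdet_norm = np.linalg.slogdet(L + np.eye(L.shape[0]))
    total = 0.0
    for J in samples:                       # J: indices of one observed subset
        _, ld = np.linalg.slogdet(L[np.ix_(J, J)])
        total += ld
    return total / len(samples) - logdet_norm
```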
Discovering causal relationships from observational data is a fundamental yet challenging task. Invariant causal prediction (ICP, Peters et al., 2016) is a method for causal feature selection which requires data from heterogeneous settings and exploits the fact that causal models are invariant. ICP has been extended to general additive noise models and to nonparametric settings using conditional independence tests. However, the latter often suffer from low power (or poor type I error control), and additive noise models are not suitable for applications in which the response is not measured on a continuous scale but reflects categories or counts. Here, we develop transformation-model (TRAM) based ICP, allowing for continuous, categorical, count-type, and uninformatively censored responses (these model classes, in general, do not allow for identifiability when there is no exogenous heterogeneity). As an invariance test, we propose TRAM-GCM, which is based on the expected conditional covariance between environments and score residuals and comes with uniform asymptotic level guarantees. For the special case of linear shift TRAMs, we also consider TRAM-Wald, which tests invariance based on the Wald statistic. We provide an open-source R package 'tramicp' and evaluate our approach on simulated data and in a case study investigating causal features of survival in critically ill patients.
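As a rough illustration of a GCM-type invariance statistic (not the actual 'tramicp' implementation, which is in R): under invariance, the products of score residuals and residualized environment indicators should have mean zero, yielding an asymptotically normal statistic.

```python
# Generic GCM-type test: score_resid are residuals from a model of Y given X,
# env_resid are residuals of the environment indicator regressed on X.
import numpy as np
from scipy.stats import norm

def gcm_test(score_resid, env_resid):
    prod = score_resid * env_resid          # elementwise residual products
    n = len(prod)
    stat = np.sqrt(n) * prod.mean() / prod.std(ddof=1)
    pval = 2 * norm.sf(abs(stat))           # two-sided normal p-value
    return stat, pval
```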
This work deals with the problem of assigning periodic tasks to employees in such a way that each employee performs each task with the same frequency in the long term. The motivation comes from a collaboration with the SNCF, the main French railway company. An almost complete solution is provided in the form of a necessary and sufficient condition that can be checked in polynomial time. A complementary discussion of possible extensions is also proposed.
Differentially private stochastic gradient descent (DP-SGD) refers to a family of optimization algorithms that provide a guaranteed level of differential privacy (DP) through DP accounting techniques. However, current accounting techniques make assumptions that diverge significantly from practical DP-SGD implementations. For example, they may assume the loss function is Lipschitz continuous and convex, sample the batches randomly with replacement, or omit the gradient clipping step. In this work, we analyze the most commonly used variant of DP-SGD, in which we sample batches cyclically with replacement, perform gradient clipping, and only release the last DP-SGD iterate. More specifically, without assuming convexity, smoothness, or Lipschitz continuity of the loss function, we establish new R\'enyi differential privacy (RDP) bounds for the last DP-SGD iterate under the mild assumptions that (i) the DP-SGD stepsize is small relative to the topological constants in the loss function, and (ii) the loss function is weakly convex. Moreover, we show that our bounds converge to previously established convex bounds when the weak-convexity parameter of the objective function approaches zero. For smooth loss functions that are not Lipschitz, we provide a weaker bound that scales well with the number of DP-SGD iterations.
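A minimal sketch of the analyzed variant, with cyclic batch selection, per-example gradient clipping, Gaussian noise, and release of the last iterate only; grad_fn, the noise scaling, and all hyperparameters are placeholders, not the paper's setup.

```python
# DP-SGD sketch: cyclic batches, per-example clipping, Gaussian noise.
import numpy as np

def dp_sgd(grad_fn, data, theta0, steps, batch, clip, sigma, lr, seed=0):
    rng = np.random.default_rng(seed)
    theta, n = theta0.copy(), len(data)     # theta0: numpy parameter vector
    for t in range(steps):
        # Cyclic batch selection (fixed order, wrap-around).
        idx = [(t * batch + j) % n for j in range(batch)]
        g_sum = np.zeros_like(theta)
        for i in idx:
            g = grad_fn(theta, data[i])     # per-example gradient
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip to norm C
            g_sum += g
        noise = rng.normal(0.0, sigma * clip, size=theta.shape)
        theta -= lr * (g_sum + noise) / batch
    return theta                            # only the last iterate is released
```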
We investigate the set of invariant idempotent probabilities for countable idempotent iterated function systems (IFSs) defined on compact metric spaces. We demonstrate that, with constant weights, there exists a unique invariant idempotent probability. Utilizing Secelean's approach to countable IFSs, we introduce partially finite idempotent IFSs and prove that the sequence of invariant idempotent measures of these systems converges to the invariant measure of the original countable IFS. We then apply these results to approximate such measures with discrete systems, producing, in the one-dimensional case, data series whose Higuchi fractal dimension can be calculated. Finally, we provide numerical approximations for two-dimensional cases and discuss the application of generalized Higuchi dimensions in these scenarios.
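For reference, the Higuchi fractal dimension of a data series is the slope of log L(k) against log(1/k), where L(k) is an average normalized curve length at step size k; a standard implementation (with a user-chosen kmax) might look as follows.

```python
# Higuchi fractal dimension of a 1D data series.
import numpy as np

def higuchi_fd(x, kmax=10):
    x = np.asarray(x, dtype=float)
    N = len(x)
    logL, logk = [], []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):                      # offsets 0 .. k-1
            n_steps = (N - 1 - m) // k
            if n_steps < 1:
                continue
            idx = m + np.arange(n_steps + 1) * k
            # Normalized curve length for offset m and step k.
            L = np.abs(np.diff(x[idx])).sum() * (N - 1) / (n_steps * k * k)
            Lk.append(L)
        if Lk:
            logL.append(np.log(np.mean(Lk)))
            logk.append(np.log(1.0 / k))
    # Slope of log L(k) versus log(1/k) estimates the fractal dimension.
    return np.polyfit(logk, logL, 1)[0]
```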
We investigate a Tikhonov regularization scheme specifically tailored for shallow neural networks within the context of solving a classic inverse problem: approximating an unknown function and its derivatives within a unit cubic domain from noisy measurements. The proposed Tikhonov regularization scheme incorporates a penalty term based on one of three distinct yet intricately related network (semi)norms: the extended Barron norm, the variation norm, and the Radon-BV seminorm. The choice of penalty term is contingent upon the specific architecture of the neural network being utilized. We establish connections between the various network norms and, in particular, trace the dependence on the dimensionality index, aiming to deepen our understanding of how these norms interplay with each other. We revisit the universality of function approximation through the various norms, establish a rigorous error-bound analysis for the Tikhonov regularization scheme, and explicitly elucidate the dependence on the dimensionality index, providing a clearer understanding of how dimensionality affects the approximation performance and how one designs a neural network for diverse approximation tasks.
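As a loose illustration only: a shallow ReLU network trained with a path-norm penalty \(\sum_j |a_j|(\|w_j\|_1 + |b_j|)\), a common surrogate related to (but not identical with) the Barron-type norms discussed above; the architecture and all hyperparameters below are assumptions.

```python
# Tikhonov-style fit of a shallow ReLU network with a path-norm penalty.
import torch

def fit_shallow(X, y, width=64, alpha=1e-3, steps=2000, lr=1e-2):
    # X: (n, d) float tensor, y: (n,) float tensor
    d = X.shape[1]
    W = torch.randn(width, d, requires_grad=True)
    b = torch.zeros(width, requires_grad=True)
    a = (torch.randn(width) / width).requires_grad_()
    opt = torch.optim.Adam([W, b, a], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = torch.relu(X @ W.T + b) @ a
        # Path-norm penalty: sum_j |a_j| * (||w_j||_1 + |b_j|)
        penalty = (a.abs() * (W.abs().sum(dim=1) + b.abs())).sum()
        loss = ((pred - y) ** 2).mean() + alpha * penalty
        loss.backward()
        opt.step()
    return W, b, a
```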
We propose a way to maintain strong consistency and facilitate error analysis in the context of dissipation-based WENO stabilization for continuous and discontinuous Galerkin discretizations of conservation laws. Following Kuzmin and Vedral (J. Comput. Phys. 487:112153, 2023) and Vedral (arXiv preprint arXiv:2309.12019), we use WENO shock detectors to determine appropriate amounts of low-order artificial viscosity. In contrast to existing WENO methods, our approach blends candidate polynomials using residual-based nonlinear weights. The shock-capturing terms of our stabilized Galerkin methods vanish if residuals do. This enables us to achieve improved accuracy compared to weakly consistent alternatives. As we show in the context of steady convection-diffusion-reaction (CDR) equations, nonlinear local projection stabilization terms can be included in a way that preserves the coercivity of local bilinear forms. For the corresponding Galerkin-WENO discretization of a CDR problem, we rigorously derive a priori error estimates. Additionally, we demonstrate the stability and accuracy of the proposed method through one- and two-dimensional numerical experiments for hyperbolic conservation laws and systems thereof. The numerical results for representative test problems are superior to those obtained with traditional WENO schemes, particularly in scenarios involving shocks and steep gradients.
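A sketch of the residual-based weighting idea: each candidate polynomial receives a nonlinear weight that shrinks with its residual-based smoothness measure, so the blended stabilization switches off where residuals vanish. The WENO-style form below (linear weights gamma, regularization epsilon, exponent r) is an assumed generic template, not the paper's exact formula.

```python
# Residual-based nonlinear WENO-style weights.
import numpy as np

def weno_weights(residuals, gamma, eps=1e-12, r=2):
    # residuals[j]: residual-based smoothness measure of candidate j
    alpha = np.asarray(gamma) / (eps + np.asarray(residuals) ** 2) ** r
    return alpha / alpha.sum()

# Example: a candidate with a large residual is almost switched off.
# weno_weights([1e-8, 0.5], gamma=[0.7, 0.3])  ->  approx [1.0, 0.0]
```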
We propose a novel projection method to treat near-incompressibility and volumetric locking in small- and large-deformation elasticity and plasticity within the context of higher-order material point methods. The material point method is well known to exhibit volumetric locking due to the large number of material points per element that are used to decrease the quadrature error. Although there has been considerable research on the treatment of near-incompressibility in the traditional material point method, the issue has not been studied in depth for higher-order material point methods. Using the Bbar and Fbar methods as our point of departure, we develop an appropriate projection technique for material point methods that use higher-order shape functions for the background discretization. The approach is based on the projection of the dilatational part of the appropriate strain rate measure onto a lower-dimensional approximation space, in line with the traditional Bbar and Fbar techniques, but tailored to the material point method. The presented numerical examples exhibit reduced stress oscillations and are free of volumetric locking and hourglassing phenomena.
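A schematic of the Bbar idea in the small-strain setting: split the strain rate at each material point into deviatoric and dilatational parts and replace the latter with its projection onto a lower-order (here, element-constant) space; the paper's higher-order MPM projection is more general than this sketch.

```python
# Element-constant projection of the dilatational strain-rate part (Bbar-style).
import numpy as np

def bbar_strains(strains, w):
    # strains: (npts, 3, 3) strain-rate tensors at one element's material points
    # w: (npts,) material-point integration weights
    tr = np.trace(strains, axis1=1, axis2=2)             # dilatational part
    dev = strains - tr[:, None, None] / 3.0 * np.eye(3)  # deviatoric part
    tr_bar = np.average(tr, weights=w)                   # projected dilatation
    return dev + tr_bar / 3.0 * np.eye(3)
```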