We revisit the generalized hyperbolic (GH) distribution and its nested models. These include widely used parametric choices like the multivariate normal, skew-t, Laplace, and several others. We also introduce the multiple-choice LASSO, a novel penalized method for choosing among alternative constraints on the same parameter. A hierarchical multiple-choice LASSO penalized likelihood is optimized to perform simultaneous model selection and inference within the GH family. We illustrate our approach through a simulation study. The methodology proposed in this paper has been implemented in R functions which are available as supplementary material.
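One plausible reading of the penalized criterion (our notation, not the paper's exact formulation): writing $\ell(\theta)$ for the GH log-likelihood and $c_{j1},\dots,c_{jK_j}$ for the alternative constrained values admissible for a parameter $\theta_j$, a multiple-choice LASSO penalty shrinks each parameter toward at most one of its candidate constraints,
$$
\hat{\theta} \;=\; \arg\max_{\theta}\;\Big\{\, \ell(\theta) \;-\; \lambda \sum_{j} \min_{1 \le k \le K_j} \lvert \theta_j - c_{jk} \rvert \,\Big\},
$$
so that model selection (which constraint, if any, becomes active) and estimation are carried out in a single optimization.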
This paper develops and benchmarks an immersed peridynamics method to simulate the deformation, damage, and failure of hyperelastic materials within a fluid-structure interaction framework. The immersed peridynamics method describes an incompressible structure immersed in a viscous incompressible fluid. It expresses the momentum equation and incompressibility constraint in Eulerian form, and it describes the structural motion and resultant forces in Lagrangian form. Coupling between Eulerian and Lagrangian variables is achieved by integral transforms with Dirac delta function kernels, as in standard immersed boundary methods. The major difference between our approach and conventional immersed boundary methods is that we use peridynamics, instead of classical continuum mechanics, to determine the structural forces. We focus on non-ordinary state-based peridynamic material descriptions that allow us to use a constitutive correspondence framework that can leverage well-characterized nonlinear constitutive models of soft materials. The convergence and accuracy of our approach are compared to those of both conventional and immersed finite element methods using widely used benchmark problems of nonlinear incompressible elasticity. We demonstrate that the immersed peridynamics method yields comparable accuracy with similar numbers of structural degrees of freedom for several choices of the size of the peridynamic horizon. We also demonstrate that the method can generate grid-converged simulations of fluid-driven material damage growth, crack formation and propagation, and rupture under large deformations.
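For context, the Eulerian-Lagrangian coupling mentioned above is the standard immersed boundary coupling (generic notation, not necessarily the paper's): with fluid velocity $\mathbf{u}(\mathbf{x},t)$, Eulerian body force density $\mathbf{f}(\mathbf{x},t)$, Lagrangian force density $\mathbf{F}(\mathbf{X},t)$, and structural configuration map $\boldsymbol{\chi}(\mathbf{X},t)$,
$$
\mathbf{f}(\mathbf{x},t) = \int \mathbf{F}(\mathbf{X},t)\,\delta\big(\mathbf{x}-\boldsymbol{\chi}(\mathbf{X},t)\big)\,\mathrm{d}\mathbf{X},
\qquad
\frac{\partial \boldsymbol{\chi}}{\partial t}(\mathbf{X},t) = \int \mathbf{u}(\mathbf{x},t)\,\delta\big(\mathbf{x}-\boldsymbol{\chi}(\mathbf{X},t)\big)\,\mathrm{d}\mathbf{x},
$$
the distinguishing feature here being that $\mathbf{F}$ is computed from a peridynamic, rather than classical, constitutive evaluation.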
We propose a novel learned keypoint detection method to increase the number of correct matches for the task of non-rigid image correspondence. By leveraging true correspondences acquired by matching annotated image pairs with a specified descriptor extractor, we train an end-to-end convolutional neural network (CNN) to find keypoint locations that are better suited to the considered descriptor. For that, we apply geometric and photometric warpings to images to generate a supervisory signal, allowing the optimization of the detector. Experiments demonstrate that our method enhances the Mean Matching Accuracy of numerous descriptors when used in conjunction with our detection method, while outperforming the state-of-the-art keypoint detectors on real images of non-rigid objects by 20 p.p. We also apply our method to the complex real-world task of object retrieval, where our detector performs on par with the best keypoint detectors currently available for this task. The source code and trained models are publicly available at //github.com/verlab/LearningToDetect_PRL_2023
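A minimal sketch of how such a warping-based supervisory signal can be generated (illustrative only: the helper below uses a plain homography and a brightness/contrast change, whereas non-rigid correspondence work typically uses more general deformations; the function name and parameters are ours):

```python
import numpy as np
import cv2  # OpenCV


def synthesize_pair(img, max_shift=0.15, contrast=1.1, brightness=20):
    """Return a geometrically and photometrically warped copy of `img`,
    together with the 3x3 homography H mapping source pixels to target pixels."""
    h, w = img.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = (np.random.rand(4, 2) - 0.5) * 2 * max_shift * np.float32([w, h])
    dst = (src + jitter).astype(np.float32)
    H = cv2.getPerspectiveTransform(src, dst)               # geometric warp
    warped = cv2.warpPerspective(img, H, (w, h))
    warped = cv2.convertScaleAbs(warped, alpha=contrast, beta=brightness)  # photometric change
    return warped, H


# A keypoint (x, y) in `img` corresponds to the normalized projection of
# H @ [x, y, 1] in `warped`; these ground-truth correspondences provide the
# supervisory signal used to optimize the detector.
```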
At STOC 2002, Eiter, Gottlob, and Makino presented a technique called ordered generation that yields an $n^{O(d)}$-delay algorithm listing all minimal transversals of an $n$-vertex hypergraph of degeneracy $d$. Recently at IWOCA 2019, Conte, Kant\'e, Marino, and Uno asked whether this XP-delay algorithm parameterized by $d$ could be made FPT-delay parameterized by $d$ and the maximum degree $\Delta$, i.e., an algorithm with delay $f(d,\Delta)\cdot n^{O(1)}$ for some computable function $f$. Moreover, as a first step toward answering that question, they note that the same delay is open for the intimately related problem of listing all minimal dominating sets in graphs. In this paper, we answer the latter question in the affirmative.
Hawkes processes are often applied to model dependence and interaction phenomena in multivariate event data sets, such as neuronal spike trains, social interactions, and financial transactions. In the nonparametric setting, learning the temporal dependence structure of Hawkes processes is generally a computationally expensive task, all the more so with Bayesian estimation methods. In particular, for generalised nonlinear Hawkes processes, Markov chain Monte Carlo methods applied to compute the doubly intractable posterior distribution are not scalable to high-dimensional processes in practice. Recently, efficient algorithms targeting a mean-field variational approximation of the posterior distribution have been proposed. In this work, we first unify existing variational Bayes approaches under a general nonparametric inference framework, and analyse the asymptotic properties of these methods under easily verifiable conditions on the prior, the variational class, and the nonlinear model. Second, we propose a novel sparsity-inducing procedure, and derive an adaptive mean-field variational algorithm for the popular sigmoid Hawkes processes. Our algorithm is parallelisable and therefore computationally efficient in high-dimensional settings. Through an extensive set of numerical simulations, we also demonstrate that our procedure is able to adapt to the dimensionality of the parameter of the Hawkes process, and is partially robust to certain types of model mis-specification.
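For reference, nonlinear Hawkes processes of this kind have conditional intensity functions of the form (standard notation; the sigmoid parameterisation given below is one common choice and may differ in detail from the paper's)
$$
\lambda_k(t) \;=\; \phi_k\Big( \nu_k + \sum_{l=1}^{K} \int_0^{t^-} h_{kl}(t-s)\,\mathrm{d}N_l(s) \Big), \qquad k = 1,\dots,K,
$$
where $\nu_k$ is a background rate, the interaction functions $h_{kl}$ encode the temporal dependence structure to be learned, and in the sigmoid model $\phi_k(x) = \theta_k \big(1+e^{-x}\big)^{-1}$ for a scale parameter $\theta_k > 0$.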
This study focuses on the presence of (multi)fractal structures in confined hadronic matter through the momentum distributions of mesons produced in proton-proton collisions between 23 GeV and 63 GeV. The analysis demonstrates that the $q$-exponential behaviour of the particle momentum distributions is consistent with fractal characteristics, revealing fractal structures in confined hadronic matter with features similar to those observed in the deconfined quark-gluon plasma (QGP) regime. Furthermore, the systematic analysis of meson production in hadronic collisions at energies below 1 TeV suggests that specific fractal parameters are universal, independent of confinement or deconfinement, while others may be influenced by the quark content of the produced meson. These results pave the way for further research exploring the implications of fractal structures for various physical distributions and offer insights into the nature of the phase transition between the confined and deconfined regimes.
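The $q$-exponential behaviour referred to above is that of Tsallis-type momentum spectra (a standard form; the paper's exact parameterisation may differ),
$$
\frac{\mathrm{d}N}{\mathrm{d}p_T} \;\propto\; \Big[\, 1 + (q-1)\,\frac{p_T}{T} \,\Big]^{-\frac{1}{q-1}} \;=\; e_q\!\left(-\frac{p_T}{T}\right),
$$
which reduces to the Boltzmann exponential $e^{-p_T/T}$ as $q \to 1$; in thermofractal descriptions, the entropic index $q$ is directly related to the fractal structure of the system.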
The aim of this paper is to combine several Ivlev-like modal systems characterized by 4-valued non-deterministic matrices (Nmatrices) with IDM4, a 4-valued expansion of Belnap-Dunn's logic FDE with an implication introduced by Pynko in 1999. In order to do this, we introduce a new methodology for combining logics which are characterized by means of swap structures, based on what we call superposition of snapshots. In particular, the combination of IDM4 with Tm, Ivlev's 4-valued version of KT, will be analyzed in more detail. From the semantical perspective, the idea is to combine the 4-valued swap structures (Nmatrices) for Tm (and several of its extensions) with the 4-valued twist structure (logical matrix) for IDM4. This superposition produces a universe of 6 snapshots, 3 of which are designated. The multioperators over the new universe are defined by combining the specifications of the given swap and twist structures. This gives rise to 6 different paradefinite Ivlev-like modal logics, each of them characterized by a 6-valued Nmatrix and conservatively extending the original modal logic and IDM4. This important feature allows us to consider the proposed construction as a genuine technique for combining logics. In addition, it is possible to define in the combined logics a classicality operator in the sense of logics of evidence and truth (LETs). A sound and complete Hilbert-style axiomatization is also presented for the 6 combined systems, together with a very simple Prolog program which implements the swap-structure semantics for the 6 systems and thus provides a decision procedure for satisfiability, refutability, and validity of formulas in these logics.
This paper presents the error analysis of numerical methods on graded meshes for stochastic Volterra equations with weakly singular kernels. We first prove a novel regularity estimate for the exact solution by analyzing the associated convolution structure. This reveals that the exact solution exhibits an initial singularity, in the sense that its H\"older exponent on any neighborhood of $t=0$ is lower than that on every compact subset of $(0,T]$. Motivated by this initial singularity, we then construct the Euler--Maruyama method, the fast Euler--Maruyama method, and the Milstein method based on graded meshes. By establishing their pointwise-in-time error estimates, we give the grading exponents of the meshes that attain the optimal uniform-in-time convergence orders, which improve on those of the uniform-mesh case. Numerical experiments are finally reported to confirm the sharpness of the theoretical findings.
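For concreteness, the graded meshes in question are typically of the form
$$
t_n \;=\; T\left(\frac{n}{N}\right)^{r}, \qquad n = 0,1,\dots,N,
$$
with grading exponent $r \ge 1$: $r = 1$ recovers the uniform mesh, while larger $r$ concentrates nodes near the initial singularity at $t=0$. The grading exponents referred to above are choices of $r$ matched to the H\"older exponent of the solution near the origin, so that each scheme attains its optimal uniform-in-time order (this description of the mesh is the standard one and is given here for orientation, not as the paper's exact definition).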
Deep feedforward architectures for learning classification tasks on $(2^n \times 2^n)$ inputs typically include $\le n$ $(2 \times 2)$ max-pooling (MP) operators along the entire architecture. Here we show, using the CIFAR-10 database, that pooling decisions adjacent to the last convolutional layer significantly enhance accuracies. In particular, average accuracies of the advanced-VGG with $m$ layers (A-VGGm) architectures are 0.936, 0.940, 0.954, 0.955, and 0.955 for $m$=6, 8, 14, 13, and 16, respectively. The results indicate that the accuracy of A-VGG8 is superior to that of VGG16, and that the accuracies of A-VGG13 and A-VGG16 are equal and comparable to that of Wide-ResNet16. In addition, replacing the three fully connected (FC) layers with a single FC layer (in A-VGG6 and A-VGG14) or with several linear-activation FC layers yielded similar accuracies. These significantly enhanced accuracies stem from training the most influential input-output routes, in comparison to the inferior routes selected following multiple MP decisions along the deep architecture. In addition, accuracies are sensitive to the order of the non-commutative MP and average-pooling operators adjacent to the output layer, which varies the number and location of training routes. The results call for a reexamination of previously proposed deep architectures and their accuracies, utilizing the proposed pooling strategy adjacent to the output layer.
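A minimal PyTorch-style sketch of the pooling strategy described above (a hypothetical illustration, not the authors' exact A-VGG architecture; the class name and layer choices are ours): the interleaved MP operators are removed and the pooling decisions are concentrated adjacent to the output layer.

```python
import torch
import torch.nn as nn


class AVGGSketch(nn.Module):
    """Illustrative only: convolutional blocks keep the full spatial resolution,
    and all pooling is deferred to the layers adjacent to the output."""

    def __init__(self, num_classes=10):
        super().__init__()

        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

        self.features = nn.Sequential(block(3, 64), block(64, 128),
                                      block(128, 256), block(256, 512))
        # Pooling decisions concentrated adjacent to the output layer; the order
        # of the non-commutative MP and average-pooling operators matters.
        self.pool = nn.Sequential(nn.MaxPool2d(2), nn.AvgPool2d(2), nn.MaxPool2d(2))
        self.classifier = nn.Linear(512 * 4 * 4, num_classes)  # 32x32 -> 4x4 after pooling

    def forward(self, x):
        x = self.features(x)   # spatial resolution preserved (32x32 for CIFAR-10)
        x = self.pool(x)       # pooling adjacent to the output layer
        return self.classifier(torch.flatten(x, 1))
```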
Recently, several algorithms have been proposed for decomposing reactive synthesis specifications into independent and simpler sub-specifications. Inspired by one of these approaches, developed by Antonio Iannopollo (2018), who designed the so-called DC algorithm, we present a solution that takes his ideas further and provides a mathematical formalisation of the strategy behind DC. We rigorously define the main notions involved in the algorithm, explain the technique, and demonstrate its application on examples. The core technique of DC is based on the detection of independent variables in linear temporal logic formulae by exploiting the power and efficiency of a model checker. Although the DC algorithm is sound, it is not complete, as its author already pointed out. In this paper, we provide a counterexample that illustrates this fact and propose changes that adapt the original DC strategy to ensure its correctness. The modification of DC and the detailed proof of its soundness and completeness are the main contributions of this work.
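A hypothetical example of the kind of decomposition DC targets (ours, not taken from the cited work): the specification $\varphi = \mathbf{G}(r_1 \rightarrow \mathbf{F}\,g_1) \wedge \mathbf{G}(r_2 \rightarrow \mathbf{F}\,g_2)$ over inputs $\{r_1, r_2\}$ and outputs $\{g_1, g_2\}$ mentions two sets of variables, $\{r_1, g_1\}$ and $\{r_2, g_2\}$, that never interact; detecting this independence allows $\varphi$ to be split into the sub-specifications $\mathbf{G}(r_1 \rightarrow \mathbf{F}\,g_1)$ and $\mathbf{G}(r_2 \rightarrow \mathbf{F}\,g_2)$, which can be synthesized separately and their strategies composed.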
We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that, at each iteration, sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, for which the multigrid method is used to solve the linear optimality system directly; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits very good performance and robustness with respect to all parameters of interest.
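As a prototypical instance (a generic formulation, not necessarily the paper's exact setting), the linear-quadratic problem reads
$$
\min_{u}\; \tfrac{1}{2}\,\mathbb{E}\big[\|y(\omega) - y_d\|^2\big] + \tfrac{\beta}{2}\,\|u\|^2
\quad \text{subject to} \quad A(\omega)\,y(\omega) = f(\omega) + B u \ \text{ almost surely};
$$
after sampling $\omega_1,\dots,\omega_N$ and discretizing in space, the first-order optimality conditions couple the $N$ states, the $N$ adjoints, and the single deterministic control into one large saddle-point system, and the collective smoother solves, node by node, the small reduced saddle-point system obtained by restricting these unknowns to a single mesh node (of size roughly $2N+1$ for scalar fields).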