A large literature specifies conditions under which the information complexity for a sequence of numerical problems defined for dimensions $1, 2, \ldots$ grows at a moderate rate, i.e., the sequence of problems is tractable. Here, we focus on the situation where the space of available information consists of all linear functionals and the problems are defined by linear operators mapping between Hilbert spaces. We unify the proofs of known tractability results and generalize a number of existing results. These generalizations are expressed as five theorems that provide equivalent conditions for (strong) tractability in terms of sums of functions of the singular values of the solution operators.
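To fix ideas, conditions of this kind typically take the following schematic form; the notation here is illustrative and not taken from the paper itself. Writing $\sigma_{d,1}\ge\sigma_{d,2}\ge\cdots$ for the singular values of the solution operator of the $d$-dimensional problem, a representative criterion asks for a function $f$, for instance $f(x)=x^{\tau}$ with some $\tau>0$, such that
\[
\sup_{d\ge 1}\ \sum_{j\ge 1} f(\sigma_{d,j}) < \infty .
\]
Roughly speaking, a bound of this type that is uniform in $d$ corresponds to strong tractability, while (non-strong) tractability permits the sums to grow at most polynomially in $d$.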
This study proposes a novel image contrast enhancement method based on projecting the image onto the squared eigenfunctions of the two-dimensional Schr\"odinger operator. This projection depends on a design parameter \texorpdfstring{\(\gamma\)}{gamma}, which is used to control the pixel intensity during image reconstruction. The performance of the proposed method is investigated through its application to color images. The selection of \texorpdfstring{\(\gamma\)}{gamma} values is performed using \(k\)-means clustering, which helps preserve the spatial adjacency information of the image. Furthermore, multi-objective optimization with the Non-dominated Sorting Genetic Algorithm II (NSGA-II) is proposed to select the optimal values of \texorpdfstring{\(\gamma\)}{gamma} and of the semi-classical parameter \(h\) of the two-dimensional semi-classical signal analysis (2DSCSA) method. The results demonstrate the effectiveness of the proposed method in enhancing image contrast while preserving the inherent characteristics of the original image, producing the desired enhancement with almost no artifacts.
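As a rough illustration of the clustering step only (a sketch under assumed cluster counts, candidate \(\gamma\) values, and ordering rule, not the authors' implementation), one could assign a separate \(\gamma\) value to each intensity cluster found by \(k\)-means:
\begin{verbatim}
# Minimal sketch: choose one gamma value per k-means cluster of pixel intensities.
# The number of clusters, the candidate gamma values, and the ordering rule are
# placeholder assumptions, not the authors' implementation.
import numpy as np
from sklearn.cluster import KMeans

def gamma_map(image, n_clusters=4, gammas=(0.6, 0.8, 1.0, 1.2)):
    """Return a per-pixel gamma array chosen by clustering pixel intensities."""
    h, w = image.shape
    features = image.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    # Order clusters by mean intensity so that darker clusters get smaller gamma.
    order = np.argsort([features[labels == k].mean() for k in range(n_clusters)])
    lut = np.empty(n_clusters)
    lut[order] = gammas
    return lut[labels].reshape(h, w)
\end{verbatim}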
This paper introduces a new numerical scheme for a system that includes evolution equations describing a perfect plasticity model with a time-dependent yield surface. We show that the solution of the proposed scheme is stable in suitable norms. Moreover, this stability yields the existence of an exact solution, and we prove that the solution of the proposed scheme converges strongly to the exact solution in suitable norms.
We propose an individual claims reserving model based on the conditional Aalen-Johansen estimator, as developed in Bladt and Furrer (2023b). In our approach, we formulate a multi-state problem in which the underlying variable is the individual claim size rather than time. The states in this model represent development periods, and we estimate the cumulative distribution function of individual claim sizes via the conditional Aalen-Johansen estimator, interpreted as the transition probability to an absorbing state. Our methodology reinterprets the concept of multi-state models and offers a strategy for modeling the complete curve of individual claim sizes. To illustrate our approach, we apply our model to both simulated and real datasets. Having access to the entire dataset enables us to support the use of our approach by comparing the predicted total final cost with the actual amount, as well as by evaluating it in terms of the continuous ranked probability score (CRPS).
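For reference, the continuous ranked probability score compares a predictive distribution function $F$ with a realized outcome $y$ (standard definition; notation introduced here for illustration):
\[
\mathrm{CRPS}(F,y)=\int_{-\infty}^{\infty}\bigl(F(x)-\mathbf{1}\{x\ge y\}\bigr)^{2}\,dx .
\]
Lower values indicate a predictive distribution that is both well calibrated and concentrated around the realized claim amount.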
The general increase in data size and data sharing motivates the adoption of Big Data strategies in several scientific disciplines. However, while several options are available, no particular guidelines exist for selecting a Big Data engine. In this paper, we compare the runtime performance of two popular Big Data engines with Python APIs, Apache Spark and Dask, in processing neuroimaging pipelines. Our experiments use three synthetic neuroimaging applications to process the \SI{606}{\gibi\byte} BigBrain image and an actual pipeline to process data from thousands of anatomical images. We benchmark these applications on a dedicated HPC cluster running the Lustre file system, using varying combinations of the number of nodes, file size, and task duration. Our results show that although there are slight differences between Dask and Spark, the performance of the two engines is comparable for data-intensive applications. However, Spark requires more memory than Dask, which can lead to slower runtimes depending on the configuration and infrastructure. In general, the limiting factor was the data transfer time. While both engines are suitable for neuroimaging, more effort needs to be devoted to reducing the data transfer time and the memory footprint of applications.
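As a rough illustration of the block-wise processing pattern used in such pipelines (the array size, chunking, and operation below are assumptions for illustration, not the benchmarked applications themselves), a Dask version of an embarrassingly parallel step might look like:
\begin{verbatim}
# Minimal sketch of block-wise processing with Dask; the array shape, chunking,
# and the increment operation are illustrative stand-ins, not the benchmarked
# applications themselves.
import dask.array as da
import numpy as np

blocks = da.random.random((512, 512, 512), chunks=(128, 128, 128))
incremented = blocks.map_blocks(lambda b: b + 1.0, dtype=np.float64)
total = incremented.sum().compute()   # triggers the (distributed) execution
\end{verbatim}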
Based on the mathematical-physical model of pavement mechanics, a multilayer elastic system with interlayer friction conditions is constructed. Because of the complex boundary conditions, the corresponding variational inequality formulation of the governing partial differential equations is derived, so that the problem can be analyzed within a variational framework. First, the existence and uniqueness of the solution of the variational inequality are proved; then the approximation error of the finite element solution is analyzed, and the convergence of the numerical solution is proved when the finite element space satisfies certain approximation conditions; finally, the convergence order of the numerical solution is derived in the trivial finite element space. These conclusions provide basic theoretical support for solving the displacement-strain problem of multilayer elastic systems within the framework of variational inequalities.
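Schematically, problems of this kind are posed as elliptic variational inequalities of the second kind; the notation below is generic and not taken from the paper. One seeks a displacement $u$ in an admissible set $K$ such that
\[
a(u, v-u) + j(v) - j(u) \ge \ell(v-u) \qquad \text{for all } v \in K,
\]
where $a(\cdot,\cdot)$ is the elastic bilinear form, $j(\cdot)$ is a nondifferentiable functional encoding the interlayer friction condition, and $\ell$ is the load functional.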
Topology optimization is used to systematically design contact-aided thermo-mechanical regulators, i.e., components whose effective thermal conductivity is tunable by mechanical deformation and contact. The thermo-mechanical interactions are modeled using a fully coupled non-linear thermo-mechanical finite element framework. To obtain the intricate heat transfer response, the components leverage self-contact, which is modeled using a third-medium contact method. The effective heat transfer properties of the regulators are tuned by solving a topology optimization problem with a traditional gradient-based algorithm. Several designs of thermo-mechanical regulators in the form of switches, diodes, and triodes are presented.
Many combinatorial optimization problems can be formulated as the search for a subgraph that satisfies certain properties and minimizes the total weight. We assume here that the vertices correspond to points in a metric space and can take any position in given uncertainty sets. The cost function to be minimized is then the sum of the distances for the worst-case positions of the vertices in their uncertainty sets. We propose two types of polynomial-time approximation algorithms. The first relies on solving a deterministic counterpart of the problem in which the uncertain distances are replaced with maximum pairwise distances. We study in detail the resulting approximation ratio, which depends on the structure of the feasible subgraphs and on whether the metric space is Ptolemaic. The second algorithm is a fully polynomial-time approximation scheme for the special case of $s$-$t$ paths.
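A minimal sketch of the deterministic counterpart underlying the first algorithm, assuming finite uncertainty sets and a shortest $s$-$t$ path instance (the paper's setting covers general uncertainty sets and feasible subgraph structures), might read:
\begin{verbatim}
# Minimal sketch: build the deterministic counterpart by replacing each uncertain
# edge length with the maximum pairwise distance between the endpoints'
# uncertainty sets, then solve a shortest s-t path on the resulting weights.
# Assumes finite uncertainty sets given as lists of points; illustration only.
import math
import networkx as nx

def max_pairwise_distance(U, V):
    """Worst-case Euclidean distance between any u in U and any v in V."""
    return max(math.dist(u, v) for u in U for v in V)

def robust_counterpart_path(uncertainty_sets, edges, s, t):
    """uncertainty_sets: vertex -> list of candidate positions; edges: iterable of (u, v)."""
    G = nx.Graph()
    for u, v in edges:
        G.add_edge(u, v, weight=max_pairwise_distance(uncertainty_sets[u],
                                                      uncertainty_sets[v]))
    return nx.shortest_path(G, s, t, weight="weight")
\end{verbatim}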
Symplectic integrators are widely used numerical integrators for Hamiltonian mechanics that preserve the Hamiltonian structure (symplecticity) of the system. Although a symplectic integrator does not conserve the energy of the system, it is well known that there exists a conserved modified Hamiltonian, called the shadow Hamiltonian. For Nambu mechanics, a kind of generalized Hamiltonian mechanics, structure-preserving integrators can likewise be constructed by the same procedure used to construct symplectic integrators. For such structure-preserving integrators, however, the existence of shadow Hamiltonians is nontrivial: Nambu mechanics is driven by multiple Hamiltonians, and it is not obvious whether the time evolution generated by the integrator can be cast as a Nambu-mechanical time evolution driven by multiple shadow Hamiltonians. In this paper we present a general procedure to calculate the shadow Hamiltonians of structure-preserving integrators for Nambu mechanics, and give an example in which the shadow Hamiltonians exist. This is the first attempt to determine the concrete forms of the shadow Hamiltonians for a Nambu-mechanical system. We show that the fundamental identity, which corresponds to the Jacobi identity of Hamiltonian mechanics, plays an important role in calculating the shadow Hamiltonians using the Baker-Campbell-Hausdorff formula. It turns out that the resulting shadow Hamiltonians are not uniquely determined; their forms depend on how the fundamental identities are used. This is not a technical artifact, because the exact shadow Hamiltonians obtained independently exhibit the same indefiniteness.
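For context, in the simplest three-dimensional setting the Nambu-mechanical evolution of an observable $f(x,y,z)$ is driven by two Hamiltonians $H_1$ and $H_2$ through the Nambu bracket (the standard formulation, stated here for orientation):
\[
\frac{\mathrm{d}f}{\mathrm{d}t} = \{f, H_1, H_2\}
= \frac{\partial(f, H_1, H_2)}{\partial(x, y, z)} ,
\]
the Jacobian determinant of $(f,H_1,H_2)$ with respect to $(x,y,z)$. A shadow-Hamiltonian statement for an integrator then asks for modified Hamiltonians $\tilde H_1,\tilde H_2$ that generate the discrete flow in exactly this form.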
We propose a generic approach for numerically efficient simulation from analytically intractable distributions with constrained support. Our approach relies upon Generalized Randomized Hamiltonian Monte Carlo (GRHMC) processes and combines them with a randomized transition kernel that appropriately adjusts the Hamiltonian flow at the boundary of the constrained domain, ensuring that it remains within the domain. The numerical implementation of this constrained GRHMC process exploits the sparsity of the randomized transition kernel and the specific structure of the constraints, so that the proposed approach is numerically accurate, computationally fast, and operational even in high-dimensional applications. We illustrate this approach with posterior distributions of several Bayesian models with challenging parameter domain constraints in applications to real-world data sets. Building on the capability of GRHMC processes to efficiently explore otherwise challenging and high-dimensional posteriors, the proposed method expands the set of Bayesian models that can be analyzed with standard Markov chain Monte Carlo (MCMC) methodology. As such, it can advance the development and use of Bayesian models with useful constrained priors, which are difficult to handle with existing methods. The article is accompanied by an R package (\url{//github.com/torekleppe/pdmphmc}), which allows GRHMC processes to be implemented automatically for arbitrary target distributions and domain constraints.
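To illustrate the basic idea of keeping a Hamiltonian flow inside a constrained domain (a simplified stand-in with assumed names; the proposed method adjusts the flow with a randomized transition kernel rather than the plain reflection used here):
\begin{verbatim}
# Minimal sketch: leapfrog dynamics with momentum reflection at the boundary x >= 0.
# This only illustrates confining a Hamiltonian flow to a constrained domain; it is
# a simplified stand-in, not the randomized transition kernel of GRHMC/pdmphmc.
import numpy as np

def grad_neg_log_density(x):
    return x  # assumed standard-normal target truncated to x >= 0

def constrained_leapfrog(x, p, step=0.01, n_steps=1000):
    for _ in range(n_steps):
        p = p - 0.5 * step * grad_neg_log_density(x)
        x = x + step * p
        if x < 0.0:              # crossed the constraint boundary
            x = -x               # reflect the position back into the domain
            p = -p               # and flip the momentum
        p = p - 0.5 * step * grad_neg_log_density(x)
    return x, p
\end{verbatim}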
We detail for the first time a complete explicit description of the quasi-cyclic structure of all classical finite generalized quadrangles. Using these descriptions, we construct families of quasi-cyclic LDPC codes derived from the point-line incidence matrices of the quadrangles by explicitly calculating quasi-cyclic generator and parity-check matrices for these codes. This allows us to construct parity-check and generator matrices of all such codes of length up to 400000. These codes cover a wide range of transmission rates, are easy and fast to implement, and perform close to the Shannon limit with no visible error floors. We also include some performance data for these codes. Furthermore, we include a complete explicit description of the quasi-cyclic structure of the point-line and point-hyperplane incidences of the finite projective and affine spaces.
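To illustrate the structure being exploited, a quasi-cyclic parity-check matrix can be assembled from circulant blocks; the sketch below uses generic circulant permutation blocks and illustrative shift values, not the matrices actually derived from the generalized quadrangles:
\begin{verbatim}
# Minimal sketch: assemble a quasi-cyclic parity-check matrix from circulant
# permutation blocks.  Block size and shift values are illustrative only; the
# matrices obtained from generalized quadrangles are more involved.
import numpy as np

def shifted_identity(b, s):
    """b x b identity matrix cyclically shifted by s columns (a circulant block)."""
    return np.roll(np.eye(b, dtype=np.uint8), s, axis=1)

def quasi_cyclic_parity_check(shift_table, b):
    """shift_table[i][j] is a shift value, or None for an all-zero block."""
    blocks = [[np.zeros((b, b), dtype=np.uint8) if s is None else shifted_identity(b, s)
               for s in row] for row in shift_table]
    return np.block(blocks)

H = quasi_cyclic_parity_check([[0, 1, None], [2, None, 4]], b=5)
\end{verbatim}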