The paper presents a spectral representation for general two-sided discrete-time signals from $\ell_\infty$, i.e., for all bounded discrete-time signals, including signals that do not vanish at $\pm\infty$. This representation makes it possible to extend the notions of transfer functions, spectrum gaps, and filters to general signals from $\ell_\infty$, and to obtain frequency-domain conditions for predictability and data recoverability.
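For orientation, here is a sketch of the classical notions that are being extended (standard facts for summable or finite-energy signals, not the paper's construction): with
\[
X(e^{i\omega}) \;=\; \sum_{t\in\mathbb Z} x(t)\,e^{-i\omega t},
\]
a linear filter $y=h*x$ acts as multiplication $Y(e^{i\omega})=H(e^{i\omega})X(e^{i\omega})$ by its transfer function $H$, and a spectrum gap is a frequency interval on which $X$ vanishes. For general $x\in\ell_\infty$ the series above need not converge, which is why a new spectral representation is required.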
We study the accuracy of reconstruction of a family of functions $f_\epsilon(x)$, $x\in\mathbb R^2$, $\epsilon\to0$, from their discrete Radon transform data sampled with step size $O(\epsilon)$. For each $\epsilon>0$ sufficiently small, the function $f_\epsilon$ has a jump across a rough boundary $\mathcal S_\epsilon$, which is modeled by an $O(\epsilon)$-size perturbation of a smooth boundary $\mathcal S$. The function $H_0$, which describes the perturbation, is assumed to be of bounded variation. Let $f_\epsilon^{\text{rec}}$ denote the reconstruction, which is computed by interpolating discrete data and substituting it into a continuous inversion formula. We prove that $(f_\epsilon^{\text{rec}}-K_\epsilon*f_\epsilon)(x_0+\epsilon\check x)=O(\epsilon^{1/2}\ln(1/\epsilon))$, where $x_0\in\mathcal S$ and $K_\epsilon$ is an easily computable kernel.
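For context, a minimal sketch of the kind of continuous inversion formula into which the interpolated data can be substituted: the standard two-dimensional filtered backprojection (the constant depends on the Fourier convention, and the paper's exact formula may differ),
\[
f(x) \;=\; \frac{1}{4\pi}\int_0^{2\pi} \Big(\mathcal F^{-1}\big[\,|\sigma|\,\mathcal F\,g(\theta,\cdot)\,\big]\Big)(x\cdot\vec\theta)\,d\theta, \qquad g(\theta,p)=\int_{x\cdot\vec\theta=p} f\,ds,
\]
where $g$ is the Radon transform of $f$, $\vec\theta=(\cos\theta,\sin\theta)$, and $\mathcal F$ is the one-dimensional Fourier transform in the affine variable. In the setting above, $g$ is available only on a grid with step size $O(\epsilon)$ and is interpolated before substitution.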
In physics, the density $\rho(\cdot)$ is a fundamentally important scalar function to model, since it describes a scalar field or a probability density function that governs a physical process. Modeling $\rho(\cdot)$ typically scales poorly with the dimension of the parameter space, however, and quickly becomes prohibitively difficult and computationally expensive. One promising way to bypass this is to leverage the denoising diffusion models used in high-fidelity image generation to parameterize $\rho(\cdot)$ from existing scientific data, from which new samples can then be drawn trivially. In this paper, we propose $\rho$-Diffusion, an implementation of denoising diffusion probabilistic models for multidimensional density estimation in physics, which is currently in active development and, in our experiments, performs well on physically motivated 2D and 3D density functions. Moreover, we propose a novel hashing technique that allows $\rho$-Diffusion to be conditioned on an arbitrary number of physical parameters of interest.
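As a minimal, hedged sketch of the standard DDPM machinery underlying such a model (not the authors' implementation; the schedule, shapes, and conditioning vector below are illustrative placeholders, and the paper's hashing scheme is not reproduced):

```python
import numpy as np

# Standard DDPM forward process: with alpha_bar_t = prod_{s<=t}(1 - beta_s),
# a noisy sample at step t has the closed form
#   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,  eps ~ N(0, I).
T = 1000
betas = np.linspace(1e-4, 2e-2, T)        # linear noise schedule (illustrative)
alpha_bars = np.cumprod(1.0 - betas)

def forward_noise(x0, t, rng):
    """Draw x_t | x_0 for a batch of density samples x0 (any shape)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps                        # eps is the regression target

# Training minimizes || eps - eps_theta(x_t, t, c) ||^2, where c is a
# conditioning vector; in rho-Diffusion, c would encode (hashed) physical
# parameters -- the hashing scheme is the paper's contribution and is not
# reproduced here.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((16, 64, 64))    # toy 2D "density" samples
xt, eps = forward_noise(x0, t=500, rng=rng)
```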
In this paper, we study the algebraic structure of $(\sigma,\delta)$-polycyclic codes as submodules of the quotient module $S/Sf$, where $S=R[x;\sigma,\delta]$ is an Ore extension, $f\in S$, and $R$ is a finite, not necessarily commutative, ring. We establish that the Euclidean duals of $(\sigma,\delta)$-polycyclic codes are $(\sigma,\delta)$-sequential codes. Using the $(\sigma,\delta)$-pseudo-linear transformation (PLT), we define the annihilator dual of $(\sigma,\delta)$-polycyclic codes and demonstrate that annihilator duals of $(\sigma,\delta)$-polycyclic codes retain their $(\sigma,\delta)$-polycyclic nature. Furthermore, we characterize when two $(\sigma,\delta)$-polycyclic codes are equivalent under a Hamming isometry. Employing Wedderburn polynomials, we introduce simple-root $(\sigma,\delta)$-polycyclic codes. We then define the $(\sigma,\delta)$-Mattson-Solomon transform for this class of codes and address the problem of decomposing these codes using the properties of Wedderburn polynomials.
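For orientation, the basic Ore-extension setup (standard facts, sketched here under the assumption that $f$ is monic of degree $n$): multiplication in $S=R[x;\sigma,\delta]$ is governed by
\[
x\,a \;=\; \sigma(a)\,x+\delta(a) \qquad (a\in R),
\]
and $S/Sf\cong R^n$ as left $R$-modules via $c_0+c_1x+\dots+c_{n-1}x^{n-1}+Sf\mapsto(c_0,\dots,c_{n-1})$, so a $(\sigma,\delta)$-polycyclic code is a code in $R^n$ whose image under this identification is a left $S$-submodule of $S/Sf$.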
We consider a non-ergodic class of stationary real harmonizable symmetric $\alpha$-stable processes $X=\left\{X(t):t\in\mathbb{R}\right\}$ with a finite, symmetric, and absolutely continuous control measure, whose density function we call the spectral density of $X$. These processes admit a LePage series representation and are conditionally Gaussian, which allows us to derive the non-ergodic limits of sample functions of $X$. In particular, we give explicit expressions for the non-ergodic limits of the empirical characteristic function of $X$ and of the lag process $\left\{X(t+h)-X(t):t\in\mathbb{R}\right\}$ with $h>0$. The process admits an equivalent representation as a series of sinusoidal waves with random frequencies that are i.i.d. with the (normalized) spectral density of $X$ as their probability density function. Based on strongly consistent frequency estimation using the periodogram, we present a strongly consistent estimator of the spectral density. The periodogram is fast and efficient to compute, and our method is not affected by the non-ergodicity of $X$.
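A minimal sketch of periodogram-based frequency estimation for such a superposition of sinusoidal waves (illustrative only, not the authors' full spectral-density estimator; the toy signal and all names below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy signal in the spirit of the series representation: sinusoids whose
# frequencies are i.i.d. draws from a (normalized) density.
n, dt = 4096, 0.05
t = np.arange(n) * dt
freqs = np.abs(rng.normal(0.0, 1.0, size=5))      # random frequencies (rad/s)
amps = rng.standard_normal(5)
phases = rng.uniform(0, 2 * np.pi, size=5)
x = sum(a * np.cos(f * t + p) for a, f, p in zip(amps, freqs, phases))

# Periodogram via FFT: I(omega_k) = |sum_j x_j exp(-i omega_k t_j)|^2 / n.
X = np.fft.rfft(x)
omega = 2 * np.pi * np.fft.rfftfreq(n, d=dt)      # angular frequency grid
I = np.abs(X) ** 2 / n

# Dominant frequencies = locations of the largest periodogram ordinates;
# these are the frequency estimates the estimator builds on. (Spectral
# leakage can smear a peak over adjacent bins; a careful implementation
# would detect local maxima instead of raw top bins.)
peaks = omega[np.argsort(I)[-5:]]
print(np.sort(peaks), np.sort(freqs))
```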
The joint modeling of multiple longitudinal biomarkers together with a time-to-event outcome is a challenging modeling task of continued scientific interest. In particular, the computational complexity of high-dimensional (generalized) mixed effects models often restricts the flexibility of shared parameter joint models, even when the subject-specific marker trajectories follow highly nonlinear courses. We propose a parsimonious multivariate functional principal components representation of the shared random effects. This allows better scalability, as the dimension of the random effects does not directly increase with the number of markers but only with the chosen number of principal component basis functions used in the approximation of the random effects. The functional principal component representation additionally allows the estimation of highly flexible subject-specific random trajectories without parametric assumptions, so the modeled trajectories can be distinctly different for each biomarker. We build on the framework of flexible Bayesian additive joint models implemented in the R package 'bamlss', which also supports the estimation of nonlinear covariate effects via Bayesian P-splines. The flexible yet parsimonious functional principal components basis used in the estimation of the joint model is estimated in a preliminary step. We validate our approach in a simulation study and illustrate its advantages by analyzing a study on primary biliary cholangitis.
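Schematically, the usual truncated principal components expansion underlying this representation looks as follows (a hedged sketch; the notation is chosen here for illustration): the subject-specific deviation of subject $i$ for marker $k$ is expanded as
\[
b_{ik}(t)\;\approx\;\sum_{m=1}^{M}\rho_{im}\,\psi_{mk}(t),\qquad k=1,\dots,K,
\]
so a single set of $M$ scores $\rho_{i1},\dots,\rho_{iM}$ is shared across all $K$ markers, with the multivariate eigenfunctions $\psi_{mk}$ estimated in the preliminary step; the random-effects dimension is thus $M$ rather than growing with $K$.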
Consider a risk portfolio with aggregate loss random variable $S=X_1+\dots +X_n$ defined as the sum of the $n$ individual losses $X_1, \dots, X_n$. The expected allocation, $E[X_i \times 1_{\{S = k\}}]$, for $i = 1, \dots, n$ and $k \in \mathbb{N}$, is a vital quantity for risk allocation and risk-sharing. For example, one uses this value to compute peer-to-peer contributions under the conditional mean risk-sharing rule and the capital allocated to a line of business under the Euler risk allocation paradigm. This paper introduces an ordinary generating function for expected allocations: a power series representation of the expected allocation of an individual risk given the total risk in the portfolio, when all risks are discrete. First, we provide a simple relationship between the ordinary generating function for expected allocations and the probability generating function. Then, leveraging properties of ordinary generating functions, we derive new theoretical results on closed-form solutions to risk allocation problems, especially for Katz and compound Katz distributions. Next, we present an efficient algorithm that recovers the expected allocations using the fast Fourier transform, providing a new practical tool to compute expected allocations quickly. The latter approach is exceptionally efficient for a portfolio of independent risks.
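A minimal sketch of the FFT route for independent discrete risks (illustrative, not the paper's algorithm verbatim; it relies on independence and finite supports): by independence, $E[X_i 1_{\{S=k\}}]=\sum_j j\,P(X_i=j)\,P(S_{-i}=k-j)$ with $S_{-i}=S-X_i$, so $k\mapsto E[X_i 1_{\{S=k\}}]$ is a discrete convolution and can be evaluated with FFTs:

```python
import numpy as np
from scipy.stats import poisson

def expected_allocations(pmfs, i, size):
    """E[X_i 1{S=k}], k = 0..size-1, for independent risks with pmfs on {0,1,...}.

    pmfs: list of 1D arrays, pmfs[l][j] = P(X_l = j); size must exceed the
    support of S so the circular FFT convolution equals the linear one.
    """
    pad = lambda p: np.fft.rfft(p, size)
    # Product of the transforms of the other risks' pmfs = transform of S_{-i}.
    others = np.ones(size // 2 + 1, dtype=complex)
    for l, p in enumerate(pmfs):
        if l != i:
            others *= pad(p)
    j = np.arange(len(pmfs[i]))
    tilted = j * pmfs[i]                  # sequence j * P(X_i = j)
    alloc = np.fft.irfft(pad(tilted) * others, size)
    return np.maximum(alloc, 0.0)         # clip tiny negative FFT noise

# Example: three independent Poisson risks truncated at 20.
pmfs = [poisson.pmf(np.arange(21), mu) for mu in (1.0, 2.0, 3.0)]
alloc0 = expected_allocations(pmfs, i=0, size=64)
# Sanity check: summing over k recovers E[X_0] (= 1.0 here, up to truncation).
print(alloc0.sum())
```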
Normal modal logics extending the logic K4.3 of linear transitive frames are known to lack the Craig interpolation property, except for some logics of bounded depth such as S5. We turn this `negative' fact into a research question and pursue a non-uniform approach to Craig interpolation by investigating the following interpolant existence problem: decide whether there exists a Craig interpolant between two given formulas in any fixed logic above K4.3. Using a bisimulation-based characterisation of interpolant existence for descriptive frames, we show that this problem is decidable and coNP-complete for all finitely axiomatisable normal modal logics containing K4.3. It is thus no harder than entailment in these logics, which is in sharp contrast to other recent non-uniform interpolation results. We also extend our approach to Priorean temporal logics (with both past and future modalities) over the standard time flows (the integers, rationals, reals, and finite strict linear orders), none of which enjoys the Craig interpolation property.
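Concretely, the standard definitions read as follows: a Craig interpolant for formulas $\varphi,\psi$ with $\varphi\vdash_L\psi$ is a formula $\chi$ built only from the propositional variables shared by $\varphi$ and $\psi$ such that
\[
\varphi\vdash_L\chi \quad\text{and}\quad \chi\vdash_L\psi,
\]
and $L$ enjoys the Craig interpolation property if such a $\chi$ exists whenever $\varphi\vdash_L\psi$. The interpolant existence problem drops this uniformity: given a particular pair $\varphi,\psi$, decide whether some interpolant exists in $L$.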
Subsurface storage of CO$_2$ is an important means of mitigating climate change, and numerical simulation based on realistic models is essential for investigating the fate of CO$_2$ in vast reservoirs over several decades. Faults and other complex geological structures introduce modeling challenges, as their effects on storage operations are uncertain due to limited data. In this work, we present a computational framework for forward propagation of uncertainty, including stochastic upscaling and copula representation of flow functions, for a CO$_2$ storage site, using the Vette fault zone in the Smeaheia formation in the North Sea as a test case. The upscaling method reduces both the number of stochastic dimensions and the cost of evaluating the reservoir model. A viable model that represents the upscaled data must capture dependencies between variables and allow sampling. Copulas provide a representation of dependent multidimensional random variables and a good fit to data, allow fast sampling, and couple to the forward propagation method via independent uniform random variables. The non-stationary correlation within some of the upscaled flow functions is accurately captured by a data-driven transformation model. The uncertainty in upscaled flow functions and other parameters is propagated to uncertain leakage estimates using numerical reservoir simulation of a two-phase system. The expectations of leakage are estimated by an adaptive stratified sampling technique, in which samples are sequentially concentrated in regions of the parameter space to greedily maximize variance reduction. We demonstrate a cost reduction of one to two orders of magnitude compared to standard Monte Carlo for simpler test cases, where only the fault and reservoir layer permeabilities are assumed uncertain, and a cost reduction by factors of 2--8 for stochastic multi-phase flow properties and more complex stochastic models.
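A minimal sketch of the greedy idea behind adaptive stratified sampling, on a one-dimensional toy problem (illustrative only; the paper's strata, reservoir simulator, and allocation rule are more involved, and all names below are placeholders):

```python
import numpy as np

def adaptive_stratified_mean(f, edges, n_total, n_init=8, rng=None):
    """Estimate E[f(U)], U ~ Uniform(0,1), with strata [edges[k], edges[k+1]].

    After an initial allocation, each new sample goes greedily to the
    stratum whose estimated error contribution is largest (Neyman-style).
    """
    rng = rng or np.random.default_rng(0)
    w = np.diff(edges)                            # stratum probabilities
    vals = [[f(x) for x in rng.uniform(a, b, n_init)]
            for a, b in zip(edges[:-1], edges[1:])]
    for _ in range(n_total - len(w) * n_init):
        sd = np.array([np.std(v, ddof=1) for v in vals])
        n = np.array([len(v) for v in vals])
        k = int(np.argmax(w * sd / np.sqrt(n)))   # greedy variance reduction
        vals[k].append(f(rng.uniform(edges[k], edges[k + 1])))
    return sum(wk * np.mean(vk) for wk, vk in zip(w, vals))

# Toy usage: a sharply varying response standing in for a leakage estimate.
est = adaptive_stratified_mean(lambda u: np.exp(-50.0 * (u - 0.7) ** 2),
                               edges=np.linspace(0.0, 1.0, 9), n_total=400)
print(est)   # exact value ~ sqrt(pi/50) ~ 0.2507, up to boundary truncation
```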
In the first part of the paper, we study the absolute error of sampling discretization of the integral $L_p$-norm for classes of continuous functions. We use the chaining technique to provide a general bound on the error of sampling discretization of the $L_p$-norm on a given functional class in terms of the entropy numbers of this class in the uniform norm. The general result yields new error bounds for sampling discretization of the $L_p$-norms on classes of multivariate functions with mixed smoothness. In the second part of the paper, we apply the obtained bounds to study universal sampling discretization and the problem of optimal sampling recovery.
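In symbols, the quantity studied in the first part is the following (standard formulation, recalled for orientation): for a class $\mathbf F$ of continuous functions on a domain $\Omega$ with probability measure $\mu$ and sample points $\xi^1,\dots,\xi^m\in\Omega$,
\[
\sup_{f\in\mathbf F}\;\Big|\,\frac1m\sum_{j=1}^m |f(\xi^j)|^p-\int_\Omega |f|^p\,d\mu\,\Big|,
\]
and the chaining bounds control this error in terms of the entropy numbers $\varepsilon_k(\mathbf F,L_\infty)$ of the class in the uniform norm.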
We discuss avoidance of sure loss and coherence results for semicopulas and standardized functions, i.e., for grounded, 1-increasing functions taking the value $1$ at $(1,1,\ldots,1)$. We characterize the existence of a $k$-increasing $n$-variate function $C$ satisfying $A\leq C\leq B$ for given standardized $n$-variate functions $A,B$ and discuss a method for constructing such a function. Our proofs also include procedures for extending functions defined on a countably infinite mesh to functions on the unit box. We further characterize when $A$ (respectively, $B$) coincides with the pointwise infimum (respectively, supremum) of the set of all $k$-increasing $n$-variate functions $C$ satisfying $A\leq C\leq B$.
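For concreteness, the bivariate special case of the monotonicity condition is the familiar rectangle inequality: a function $C$ is $2$-increasing if
\[
C(x_2,y_2)-C(x_1,y_2)-C(x_2,y_1)+C(x_1,y_1)\;\ge\;0 \qquad (x_1\le x_2,\ y_1\le y_2),
\]
i.e., every rectangle carries nonnegative $C$-volume; $k$-increasingness for $k\le n$ variables is defined analogously via nonnegativity of the corresponding $k$-th order differences, and groundedness means $C$ vanishes whenever some coordinate equals $0$.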