An easily computable dimension (or ECD) group code in the group algebra $\mathbb{F}_{q}G$ is an ideal of dimension less than or equal to $p=\mathrm{char}(\mathbb{F}_{q})$ that is generated by an idempotent. This paper introduces an easily computable indecomposable dimension (or ECID) group algebra as a finite group algebra for which all group codes generated by primitive idempotents are ECD. Several characterizations of these algebras are given. In addition, some arithmetic conditions to determine whether a group algebra is ECID are presented in the semisimple case. In the non-semisimple case, these algebras have finite representation type when the Sylow $p$-subgroups of the underlying group are simple. The dimension and some lower bounds for the minimum Hamming distance of group codes in these algebras are given, together with some arithmetical tests of primitivity of idempotents. Examples illustrating the main results are presented.
We introduce a simple natural deduction system for reasoning with judgments of the form "there exists a proof of $\varphi$" to explore the notion of judgmental existence following Martin-L\"{o}f's methodology of distinguishing between judgments and propositions. In this system, the existential judgment can be internalized into a modal notion of propositional existence that is closely related to the truncation modality, a key tool for obtaining proof irrelevance, and to lax modality. We provide a computational interpretation in the style of the Curry-Howard isomorphism for the existence modality and show that the corresponding system enjoys desirable properties such as strong normalization and subject reduction.
We prove lower bounds for the randomized approximation of the embedding $\ell_1^m \rightarrow \ell_\infty^m$ based on algorithms that use arbitrary linear (hence non-adaptive) information provided by a (randomized) measurement matrix $N \in \mathbb{R}^{n \times m}$. These lower bounds reflect the increasing difficulty of the problem for $m \to \infty$, namely, a term $\sqrt{\log m}$ in the complexity $n$. This result implies that non-compact operators between arbitrary Banach spaces are not approximable using non-adaptive Monte Carlo methods. We also compare these lower bounds for non-adaptive methods with upper bounds based on adaptive, randomized methods for recovery, for which the complexity $n$ exhibits only a $(\log\log m)$-dependence. In doing so we give an example of a linear problem where the error of adaptive vs. non-adaptive Monte Carlo methods shows a gap of order $n^{1/2} ( \log n)^{-1/2}$.
We propose a new simple and explicit numerical scheme for time-homogeneous stochastic differential equations. The scheme is based on sampling increments at each time step from a skew-symmetric probability distribution, with the level of skewness determined by the drift and volatility of the underlying process. We show that as the step-size decreases the scheme converges weakly to the diffusion of interest. We then consider the problem of simulating from the limiting distribution of an ergodic diffusion process using the numerical scheme with a fixed step-size. We establish conditions under which the numerical scheme converges to equilibrium at a geometric rate, and quantify the bias between the equilibrium distributions of the scheme and of the true diffusion process. Notably, our results do not require a global Lipschitz assumption on the drift, in contrast to those required for the Euler--Maruyama scheme for long-time simulation at fixed step-sizes. Our weak convergence result relies on an extension of the theory of Milstein \& Tretyakov to stochastic differential equations with non-Lipschitz drift, which could also be of independent interest. We support our theoretical results with numerical simulations.
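The abstract above describes the mechanism only at a high level, so the following one-dimensional sketch is an illustration of the general idea rather than the authors' exact scheme: the increment magnitude is half-normal and its sign is drawn from a Barker-style logistic probability tilted by the drift. The names `skew_step`, `mu`, and `sigma`, and the specific logistic form of the sign probability, are my own choices.

```python
import math
import random

def skew_step(x, mu, sigma, h, rng):
    """One step of a skew-symmetric scheme (illustrative form):
    draw a half-normal increment magnitude, then pick its sign
    from a logistic probability tilted by the drift."""
    z = sigma(x) * math.sqrt(h) * abs(rng.gauss(0.0, 1.0))
    # P(sign = +1) = 1 / (1 + exp(-2 mu z / sigma^2)), so that
    # E[increment] ~ mu(x) h and E[increment^2] = sigma(x)^2 h.
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * mu(x) * z / sigma(x) ** 2))
    b = 1.0 if rng.random() < p_plus else -1.0
    return x + b * z

# Ornstein--Uhlenbeck test case: dX = -X dt + dW,
# whose stationary law is N(0, 1/2).
rng = random.Random(0)
mu = lambda x: -x
sigma = lambda x: 1.0
h, x = 0.01, 0.0
samples = []
for _ in range(200_000):
    x = skew_step(x, mu, sigma, h, rng)
    samples.append(x)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
```

The logistic choice gives $2p_+-1 = \tanh(\mu(x) z/\sigma(x)^2) \approx \mu(x) z/\sigma(x)^2$ for small $z$, so the first two moments of the increment match those of the Euler step, which is the heuristic behind weak convergence as $h \to 0$.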
In this paper, we propose Wasserstein proximals of $\alpha$-divergences as suitable objective functionals for learning heavy-tailed distributions in a stable manner. First, we provide sufficient, and in some cases necessary, relations among the data dimension, $\alpha$, and the decay rate of the data distribution for the Wasserstein-proximal-regularized divergence to be finite. Finite-sample convergence rates for estimation with the Wasserstein-1 proximal divergences are then provided under certain tail conditions. Numerical experiments demonstrate stable learning of heavy-tailed distributions -- even those without first or second moment -- without any explicit knowledge of the tail behavior, using suitable generative models such as GANs and flow-based models related to our proposed Wasserstein-proximal-regularized $\alpha$-divergences. Heuristically, $\alpha$-divergences handle the heavy tails, while Wasserstein proximals allow non-absolute continuity between distributions and control the velocities of flow-based algorithms as they learn the target distribution deep into the tails.
Given an unconditional diffusion model for $\pi(x, y)$, how to use it to perform conditional simulation from $\pi(x \mid y)$ is still largely an open question; this is typically achieved by learning conditional drifts for the denoising SDE after the fact. In this work, we express conditional simulation as an inference problem on an augmented space corresponding to a partial SDE bridge. This perspective allows us to implement efficient and principled particle Gibbs and pseudo-marginal samplers marginally targeting the conditional distribution $\pi(x \mid y)$. Contrary to existing methodology, our methods do not introduce any additional approximation to the unconditional diffusion model aside from the Monte Carlo error. We showcase the benefits and drawbacks of our approach on a series of synthetic and real data examples.
Consider the linear ill-posed problems of the form $\sum_{i=1}^{b} A_i x_i =y$, where, for each $i$, $A_i$ is a bounded linear operator between two Hilbert spaces $X_i$ and ${\mathcal Y}$. When $b$ is huge, solving the problem by an iterative method using the full gradient at each iteration step is both time-consuming and memory-intensive. Although the randomized block coordinate descent (RBCD) method has been shown to be efficient for well-posed large-scale optimization problems with a small memory footprint, a convergence analysis of the RBCD method for solving ill-posed problems is still lacking. In this paper, we investigate the convergence property of the RBCD method with noisy data under either {\it a priori} or {\it a posteriori} stopping rules. We prove that the RBCD method combined with an {\it a priori} stopping rule yields a sequence that converges weakly to a solution of the problem almost surely. We also consider the early stopping of the RBCD method and demonstrate that the discrepancy principle can terminate the iteration after finitely many steps almost surely. For a class of ill-posed problems with special tensor product form, we obtain strong convergence results on the RBCD method. Furthermore, we consider incorporating convex regularization terms into the RBCD method to enhance the detection of solution features. To illustrate the theory and the performance of the method, numerical simulations from the imaging modalities in computed tomography and compressive temporal imaging are reported.
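The RBCD iteration described above can be sketched as follows on a toy dense problem: at each step one block $i$ is drawn uniformly and a Landweber-type update $x_i \leftarrow x_i - \mu\, A_i^* \big(\sum_j A_j x_j - y\big)$ is applied. This is a minimal pure-Python illustration; the step-size rule, the function names, and the explicit residual recomputation (a practical implementation would maintain the residual incrementally to keep the per-step cost low) are my own choices, not the paper's.

```python
import random

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def rmatvec(A, r):  # A^T r
    return [sum(A[k][j] * r[k] for k in range(len(A))) for j in range(len(A[0]))]

def rbcd(blocks, y, steps, step_size, rng):
    """Randomized block coordinate descent for sum_i A_i x_i = y:
    pick one block i uniformly at random, then take a Landweber
    step x_i <- x_i - step * A_i^T (sum_j A_j x_j - y)."""
    xs = [[0.0] * len(A[0]) for A in blocks]
    for _ in range(steps):
        i = rng.randrange(len(blocks))
        # full residual, recomputed for clarity (not efficiency)
        r = [sum(col) - yi for col, yi in
             zip(zip(*(matvec(A, x) for A, x in zip(blocks, xs))), y)]
        g = rmatvec(blocks[i], r)
        xs[i] = [xi - step_size * gi for xi, gi in zip(xs[i], g)]
    return xs

# toy consistent system with b = 2 blocks
rng = random.Random(1)
m, n = 5, 3
blocks = [[[rng.gauss(0, 1) for _ in range(n)] for _ in range(m)] for _ in range(2)]
x_true = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(2)]
y = [sum(v) for v in zip(*(matvec(A, x) for A, x in zip(blocks, x_true)))]

res0 = sum(v * v for v in y) ** 0.5          # initial residual (x = 0)
frob2 = max(sum(a * a for row in A for a in row) for A in blocks)
xs = rbcd(blocks, y, steps=3000, step_size=1.0 / frob2, rng=rng)
res = [sum(v) - yi for v, yi in
       zip(zip(*(matvec(A, x) for A, x in zip(blocks, xs))), y)]
res_norm = sum(v * v for v in res) ** 0.5
```

With noisy data $y^\delta$, the discrepancy principle mentioned in the abstract would stop this loop once $\|r\| \le \tau\delta$ for a chosen $\tau > 1$, rather than running a fixed number of steps.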
We consider a wide class of generalized Radon transforms $\mathcal R$, which act in $\mathbb{R}^n$ for any $n\ge 2$ and integrate over submanifolds of any codimension $N$, $1\le N\le n-1$. Also, we allow for a fairly general reconstruction operator $\mathcal A$. The main requirement is that $\mathcal A$ be a Fourier integral operator with a phase function, which is linear in the phase variable. We consider the task of image reconstruction from discrete data $g_{j,k} = (\mathcal R f)_{j,k} + \eta_{j,k}$. We show that the reconstruction error $N_\epsilon^{\text{rec}}=\mathcal A \eta_{j,k}$ satisfies $N^{\text{rec}}(\check x;x_0)=\lim_{\epsilon\to0}N_\epsilon^{\text{rec}}(x_0+\epsilon\check x)$, $\check x\in D$. Here $x_0$ is a fixed point, $D\subset\mathbb{R}^n$ is a bounded domain, and $\eta_{j,k}$ are independent, but not necessarily identically distributed, random variables. $N^{\text{rec}}$ and $N_\epsilon^{\text{rec}}$ are viewed as continuous random functions of the argument $\check x$ (random fields), and the limit is understood in the sense of probability distributions. Under some conditions on the first three moments of $\eta_{j,k}$ (and some other not very restrictive conditions on $x_0$ and $\mathcal A$), we prove that $N^{\text{rec}}$ is a zero mean Gaussian random field and explicitly compute its covariance. We also present a numerical experiment with a cone beam transform in $\mathbb{R}^3$, which shows an excellent match between theoretical predictions and simulated reconstructions.
We study the categorical structure of the Grothendieck construction of an indexed category $\mathcal{L}:\mathcal{C}^{op}\to\mathbf{CAT}$ and characterise fibred limits, colimits, and monoidal structures. Next, we give sufficient conditions for the monoidal closure of the total category $\Sigma_\mathcal{C} \mathcal{L}$ of a Grothendieck construction of an indexed category $\mathcal{L}:\mathcal{C}^{op}\to\mathbf{CAT}$. Our analysis is a generalisation of G\"odel's Dialectica interpretation, and it relies on a novel notion of $\Sigma$-tractable monoidal structure. As we will see, $\Sigma$-tractable coproducts simultaneously generalise cocartesian coclosed structures, biproducts and extensive coproducts. We analyse when the closed structure is fibred -- usually it is not.
We study tractability properties of the weighted $L_p$-discrepancy. The concept of {\it weighted} discrepancy was introduced by Sloan and Wo\'{z}niakowski in 1998 in order to prove a weighted version of the Koksma-Hlawka inequality for the error of quasi-Monte Carlo integration rules. The weights aim to model the influence of different coordinates of integrands on the error. A discrepancy is said to be tractable if the information complexity, i.e., the minimal number $N$ of points such that the discrepancy is less than the initial discrepancy times an error threshold $\varepsilon$, does not grow exponentially fast with the dimension. In this case various notions of tractability are used to classify the exact rate. For even integer parameters $p$ there are sufficient conditions on the weights available in the literature, which guarantee one or the other notion of tractability. In the present paper we prove matching sufficient conditions (upper bounds) and necessary conditions (lower bounds) for polynomial and weak tractability for all $p \in (1, \infty)$. The proofs of the lower bounds are based on a general result for the information complexity of integration with positive quadrature formulas for tensor product spaces. In order to demonstrate this lower bound we consider as a second application the integration of tensor products of polynomials of degree at most 2.
The classical Andr\'{a}sfai--Erd\H{o}s--S\'{o}s Theorem states that for $\ell\ge 2$, every $n$-vertex $K_{\ell+1}$-free graph with minimum degree greater than $\frac{3\ell-4}{3\ell-1}n$ must be $\ell$-partite. We establish a simple criterion for $r$-graphs, $r \geq 2$, to exhibit an Andr\'{a}sfai--Erd\H{o}s--S\'{o}s type property, also known as degree-stability. This leads to a classification of most previously studied hypergraph families with this property. An immediate application of this result, combined with a general theorem by Keevash--Lenz--Mubayi, solves the spectral Tur\'{a}n problems for a large class of hypergraphs. For every $r$-graph $F$ with degree-stability, there is a simple algorithm to decide the $F$-freeness of an $n$-vertex $r$-graph with minimum degree greater than $(\pi(F) - \varepsilon_F)\binom{n}{r-1}$ in time $O(n^r)$, where $\varepsilon_F >0$ is a constant. In particular, for the complete graph $K_{\ell+1}$, we can take $\varepsilon_{K_{\ell+1}} = (3\ell^2-\ell)^{-1}$, and this bound is tight up to some multiplicative constant factor unless $\mathbf{W[1]} = \mathbf{FPT}$. Based on a result by Chen--Huang--Kanj--Xia, we further show that for every fixed $C > 0$, this problem cannot be solved in time $n^{o(\ell)}$ if we replace $\varepsilon_{K_{\ell+1}}$ with $(C\ell)^{-1}$ unless $\mathbf{ETH}$ fails. Furthermore, we apply the degree-stability of $K_{\ell+1}$ to decide the $K_{\ell+1}$-freeness of graphs whose size is close to the Tur\'{a}n bound in time $(\ell+1)n^2$, partially improving a recent result by Fomin--Golovach--Sagunov--Simonov. As an intermediate step, we show that for a specific class of $r$-graphs $F$, the (surjective) $F$-coloring problem can be solved in time $O(n^r)$, provided the input $r$-graph has $n$ vertices and a large minimum degree, refining several previous results.
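For graphs ($r = 2$), the kind of simple decision procedure alluded to above can be sketched as follows, under the stated assumption that the input lies in the min-degree/degree-stability regime, where a $K_{\ell+1}$-free graph must be $\ell$-partite. The sketch greedily grows a maximal clique and then tries to read off the $\ell$-partition from non-adjacency to the clique; the function name and the exact bookkeeping are mine, not the authors' algorithm.

```python
def kl1_free_stable(adj, l):
    """Decide K_{l+1}-freeness of a graph in the degree-stability
    regime (illustrative sketch).  adj[v] is the set of neighbours
    of vertex v.  Runs in O(n^2) time."""
    n = len(adj)
    # Greedily grow a maximal clique.
    clique = []
    for v in range(n):
        if all(v in adj[u] for u in clique):
            clique.append(v)
    if len(clique) >= l + 1:
        return False          # explicit K_{l+1} found
    if len(clique) < l:
        return True           # no K_l in this regime, so no K_{l+1}
    # |clique| == l: in the stability regime a K_{l+1}-free graph is
    # l-partite; assign each vertex to the class of a clique vertex
    # it is NOT adjacent to.
    part = {u: i for i, u in enumerate(clique)}
    for v in range(n):
        if v in part:
            continue
        cls = next((i for i, u in enumerate(clique) if v not in adj[u]), None)
        if cls is None:
            return False      # v is adjacent to the whole clique: K_{l+1}
        part[v] = cls
    # Each class must be independent; an internal edge certifies
    # failure of l-partiteness, hence (in this regime) a K_{l+1}.
    for v in range(n):
        if any(part[u] == part[v] for u in adj[v]):
            return False
    return True
```

On the Turán graph $T(6,3)$ the sketch returns `True` for $\ell = 3$, while on $K_4$, or on $T(6,3)$ with one edge added inside a part, it returns `False`; outside the min-degree regime the greedy clique step is of course not a sound certificate.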