Characteristic formulae give a complete logical description of the behaviour of processes modulo some chosen notion of behavioural semantics. They allow one to reduce equivalence or preorder checking to model checking and are, moreover, exactly the formulae, in the modal logics characterizing classic behavioural equivalences and preorders, for which model checking can be reduced to equivalence or preorder checking. This paper studies the complexity of determining whether a formula is characteristic for some finite, loop-free process in each of the logics providing modal characterizations of the simulation-based semantics in van Glabbeek's branching-time spectrum. Since characteristic formulae in each of those logics are exactly the consistent and prime ones, the paper presents complexity results for the satisfiability and primality problems, and investigates the boundary between modal logics for which those problems can be solved in polynomial time and those for which they become computationally hard. Amongst other contributions, it also studies the complexity of constructing characteristic formulae in these logics, both when such formulae are presented in explicit form and via systems of equations.
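To make the notion concrete, here is a minimal sketch (our own encoding, not the paper's) that builds the characteristic formula of a state in a finite, loop-free labelled transition system for the simulation preorder, using the textbook construction $\chi(p) = \bigwedge_{a}[a]\bigvee_{p \xrightarrow{a} p'} \chi(p')$, where an empty disjunction is the constant ff:

```python
# Minimal sketch: characteristic formula for the simulation preorder over a
# finite, loop-free LTS. A state q is simulated by p iff q satisfies chi(p).
# The LTS encoding (dict of state -> list of (action, successor)) is our own.

def chi(p, lts, acts):
    """Characteristic formula of state p, rendered as a string."""
    conjuncts = []
    for a in acts:
        succs = [q for (b, q) in lts.get(p, []) if b == a]
        if succs:
            disj = " or ".join(chi(q, lts, acts) for q in succs)
            conjuncts.append(f"[{a}]({disj})")
        else:
            conjuncts.append(f"[{a}]ff")  # p has no a-moves, so q must not either
    return "(" + " and ".join(conjuncts) + ")"

# Example: p = a.(b.0 + c.0)
lts = {"p": [("a", "q")], "q": [("b", "0"), ("c", "0")], "0": []}
print(chi("p", lts, acts=("a", "b", "c")))
```

In explicit form such formulae can repeat shared subformulae and grow much faster than the process itself, which is one motivation for also considering characteristic formulae given by systems of equations, as the paper does.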
We investigate notions of complete representation by partial functions, where the operations in the signature include antidomain restriction and may include composition, intersection, update, preferential union, domain, antidomain, and set difference. When the signature includes both antidomain restriction and intersection, the join-complete and the meet-complete representations coincide. Otherwise, for the signatures we consider, meet-complete is strictly stronger than join-complete. A necessary condition to be meet-completely representable is that the atoms are separating. For the signatures we consider, this condition is sufficient if and only if composition is not in the signature. For each of the signatures we consider, the class of (meet-)completely representable algebras is not axiomatisable by any existential-universal-existential first-order theory. For 14 expressively distinct signatures, we show, by giving an explicit representation, that the (meet-)completely representable algebras form a basic elementary class, axiomatisable by a universal-existential-universal first-order sentence. The signatures we axiomatise are those containing antidomain restriction and any of intersection, update, and preferential union and also those containing antidomain restriction, composition, and intersection and any of update, preferential union, domain, and antidomain.
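To fix intuition about the operations, the following sketch models partial functions as Python dicts. The reading of antidomain restriction as "g restricted to the points where f is undefined" is our assumption, and update is omitted because its precise definition varies in the literature; treat this as an illustration rather than the paper's signatures:

```python
# Partial functions on a base set modelled as Python dicts (minimal sketch).

def compose(f, g):                 # (f ; g)(x) = g(f(x)), read left to right
    return {x: g[f[x]] for x in f if f[x] in g}

def intersection(f, g):            # intersection as sets of ordered pairs
    return {x: y for x, y in f.items() if g.get(x) == y}

def antidomain_restriction(f, g):  # g restricted to points outside dom(f)
    return {x: y for x, y in g.items() if x not in f}

def preferential_union(f, g):      # f where defined, otherwise g
    return {**g, **f}

def domain(f):                     # D(f): identity on dom(f)
    return {x: x for x in f}

def antidomain(f, X):              # A(f): identity on X \ dom(f)
    return {x: x for x in X if x not in f}

f = {1: 2, 2: 3}
g = {2: 3, 3: 1}
print(compose(f, g), intersection(f, g), antidomain_restriction(f, g),
      preferential_union(f, g), antidomain(f, X={1, 2, 3}))
```

Under this representation, the meet of a set of functions is the intersection of their graphs, which is roughly the order-theoretic structure that meet-complete representations are required to preserve.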
We develop the novel method of artificial barriers for scalar stochastic differential equations (SDEs) and use it to construct boundary-preserving numerical schemes for strong approximations of scalar SDEs, possibly with non-globally Lipschitz drift and diffusion coefficients, whose state-space is either bounded or half-bounded. The idea of artificial barriers is to augment the SDE with artificial barriers outside the state-space, in such a way that the solution process is left unchanged, and then to apply a boundary-preserving numerical scheme to the resulting reflected SDE (RSDE). This enables us to construct boundary-preserving numerical schemes that inherit the strong convergence rate of the numerical scheme for the corresponding RSDE. Based on the method of artificial barriers, we construct two boundary-preserving schemes that we call the Artificial Barrier Euler-Maruyama (ABEM) scheme and the Artificial Barrier Euler-Peano (ABEP) scheme. We provide numerical experiments for the ABEM scheme and the numerical results agree with the obtained theoretical results.
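The generic ingredient such schemes build on can be sketched as follows: an Euler-Maruyama step followed by reflection (folding) at the barriers. This is a minimal illustration of a reflected Euler-Maruyama step, not the paper's exact ABEM scheme; the barrier placement and the Wright-Fisher-type example coefficients are our own.

```python
import numpy as np

def reflect(y, a, b):
    """Fold y back into [a, b] by repeated reflection at the endpoints."""
    while y < a or y > b:
        if y < a:
            y = 2 * a - y
        if y > b:
            y = 2 * b - y
    return y

def abem_like(x0, mu, sigma, T, n, a, b, seed=0):
    """Euler-Maruyama step, then reflection at the barriers a and b."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = reflect(x[k] + mu(x[k]) * dt + sigma(x[k]) * dw, a, b)
    return x

# Wright-Fisher-type SDE dX = (0.3 - 0.6 X) dt + sqrt(X(1 - X)) dW on [0, 1]
path = abem_like(0.5, lambda x: 0.3 - 0.6 * x,
                 lambda x: np.sqrt(max(x * (1 - x), 0.0)),
                 T=1.0, n=1000, a=0.0, b=1.0)
print(path.min(), path.max())  # stays in [0, 1] by construction
```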
We provide a tight asymptotic characterization of the error exponent for classical-quantum channel coding assisted by activated non-signaling correlations. Namely, we find that the optimal exponent--also called reliability function--is equal to the well-known sphere packing bound, which can be written as a single-letter formula optimized over Petz-R\'enyi divergences. Remarkably, there is no critical rate and as such our characterization remains tight for arbitrarily low rates below the capacity. On the achievability side, we further extend our results to fully quantum channels. Our proofs rely on semi-definite program duality and a dual representation of the Petz-R\'enyi divergences via Young inequalities.
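For reference, the Petz-R\'enyi divergence of order $\alpha$ appearing in the single-letter formula is (standard definition, for states $\rho$ and $\sigma$):

```latex
D_\alpha(\rho \,\|\, \sigma) \;=\; \frac{1}{\alpha - 1}\,
  \log \operatorname{Tr}\!\bigl[\rho^{\alpha} \sigma^{1-\alpha}\bigr],
\qquad \alpha \in (0, 1) \cup (1, \infty).
```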
Least trimmed squares (LTS) is a reasonable formulation of robust regression, but it suffers from high computational cost due to the nonconvexity and nonsmoothness of its objective function. FAST-LTS, the most frequently used algorithm, is particularly slow when a sparsity-inducing penalty such as the $\ell_1$ norm is added. This paper proposes a computationally inexpensive algorithm for the sparse LTS, based on the proximal gradient method combined with a reformulation technique. The proposed method comes with theoretical convergence guarantees that compare favourably with those of existing methods. Numerical experiments show that our method efficiently attains small objective values.
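A minimal sketch of a proximal-gradient iteration for the sparse LTS objective (the sum of the $h$ smallest squared residuals plus an $\ell_1$ penalty) looks as follows; this is our own bare-bones version, not the reformulation technique the paper develops:

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t * ||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_lts(X, y, h, lam, step=None, iters=500):
    """Gradient step on the sum of the h smallest squared residuals,
    followed by the l1 prox (soft-thresholding)."""
    n, p = X.shape
    beta = np.zeros(p)
    if step is None:  # 1/L with a crude Lipschitz bound valid for any subset
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    for _ in range(iters):
        r = X @ beta - y
        H = np.argsort(r ** 2)[:h]      # indices of the h smallest residuals
        grad = X[H].T @ r[H]            # gradient of the trimmed LS term
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Tiny demo: 20% gross outliers in y
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
b0 = np.zeros(10)
b0[:3] = [2.0, -1.0, 1.5]
y = X @ b0 + 0.1 * rng.normal(size=100)
y[:20] += 15.0
print(np.round(sparse_lts(X, y, h=75, lam=0.5), 2))
```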
To analyze the topological properties of given discrete data, one considers a continuous transform called a filtration. Persistent homology serves as a tool to track changes in homology along the filtration. The outcome of the topological analysis of data varies depending on the choice of filtration, making its selection crucial. Filtration learning is an attempt to find an optimal filtration that minimizes a loss function. Exact Multi-parameter Persistent Homology (EMPH), which utilizes an exact formula for the rank invariant instead of computing it, was recently proposed for topological time-series analysis. In this paper, we propose a framework for filtration learning within EMPH. We formulate an optimization problem, propose an algorithm for solving it, and apply the algorithm to several classification problems. In particular, we derive an exact formula for the gradient of the loss function with respect to the filtration parameters, which makes it possible to update the filtration directly, without automatic differentiation, significantly enhancing the learning process.
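Schematically, the resulting training loop has the following shape, where the closed-form gradient replaces an automatic-differentiation tape; `loss_and_exact_grad` is a hypothetical stand-in for the paper's derived formula, demonstrated here on a toy quadratic whose gradient we know exactly:

```python
import numpy as np

def fit_filtration(theta, loss_and_exact_grad, lr=1e-2, epochs=500):
    """Gradient descent with a gradient supplied in closed form,
    so no automatic differentiation is needed."""
    for _ in range(epochs):
        loss, grad = loss_and_exact_grad(theta)
        theta = theta - lr * grad  # direct update of the filtration parameters
    return theta

# Hypothetical placeholder for the EMPH loss: a quadratic with known gradient.
target = np.array([0.7, 0.2])
toy = lambda th: (0.5 * np.sum((th - target) ** 2), th - target)
print(fit_filtration(np.zeros(2), toy))  # converges toward `target`
```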
We develop a numerical method for the simulation of incompressible viscous flows by integrating the random vortex method with the core idea of Large Eddy Simulation (LES). Specifically, we utilize the filtering method of LES, interpreted as spatial averaging, along with the integral representation theorem for parabolic equations, to obtain a closure scheme that may be used for computing solutions of the Navier-Stokes equations. This approach circumvents the challenge of handling the non-locally integrable three-dimensional integral kernel in the random vortex method and facilitates the computation of numerical solutions of flow systems via the Monte Carlo method. Numerical simulations are carried out for both laminar and turbulent flows, demonstrating the validity and effectiveness of the method.
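For orientation, the classical two-dimensional random vortex method on which such approaches build can be sketched as follows; the paper's contribution concerns the three-dimensional LES closure, which this toy does not implement:

```python
import numpy as np

def biot_savart(x, centers, gammas, delta):
    """Mollified 2D Biot-Savart velocity at x induced by vortex particles."""
    d = x - centers                          # (N, 2) displacements
    r2 = np.sum(d * d, axis=1) + 1e-12
    cutoff = 1.0 - np.exp(-r2 / delta ** 2)  # regularizes the singular kernel
    w = gammas * cutoff / (2.0 * np.pi * r2)
    return np.array([-(w * d[:, 1]).sum(), (w * d[:, 0]).sum()])

def random_vortex(centers, gammas, nu, dt, steps, delta=0.1, seed=1):
    """Particles drift with the induced velocity; Brownian increments with
    variance 2*nu*dt model viscous diffusion (Euler-Maruyama)."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        u = np.array([biot_savart(c, centers, gammas, delta) for c in centers])
        centers = centers + u * dt \
            + np.sqrt(2 * nu * dt) * rng.normal(size=centers.shape)
    return centers

# Two co-rotating point vortices of equal circulation
pts = random_vortex(np.array([[0.0, 0.0], [1.0, 0.0]]), np.array([1.0, 1.0]),
                    nu=1e-3, dt=0.01, steps=200)
print(pts)
```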
We propose a fast scheme for approximating the Mittag-Leffler function by an efficient sum-of-exponentials (SOE), and apply the scheme to the viscoelastic model of wave propagation, with mixed finite element methods for the spatial discretization and the Newmark-$\beta$ scheme for the second-order temporal derivative. Compared with the traditional L1 scheme for the fractional derivative, our fast scheme reduces the memory complexity from $\mathcal O(N_sN)$ to $\mathcal O(N_sN_{exp})$ and the computational complexity from $\mathcal O(N_sN^2)$ to $\mathcal O(N_sN_{exp}N)$, where $N$ denotes the total number of temporal grid points, $N_{exp}$ is the number of exponentials in the SOE, and $N_s$ represents the complexity of memory and computation related to the spatial discretization. Numerical experiments are provided to verify the theoretical results.
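The mechanism behind the complexity reduction is the standard fast-SOE recursion for history integrals: once the kernel is a sum of exponentials, each mode is advanced by one multiply-add per time step, so only $N_{exp}$ numbers are carried instead of the whole history. A generic sketch follows (with a one-term SOE as a sanity check; the paper's SOE is fitted to the Mittag-Leffler kernel specifically):

```python
import numpy as np

def soe_history(f_vals, weights, rates, dt):
    """O(N * N_exp) evaluation of I(t_n) = int_0^{t_n} k(t_n - tau) f(tau) dtau
    for a kernel k(t) ~ sum_j weights[j] * exp(-rates[j] * t), assuming f is
    piecewise constant on each time step."""
    H = np.zeros_like(rates, dtype=float)   # one scalar state per exponential
    out = [0.0]
    decay = np.exp(-rates * dt)             # per-step decay of each mode
    gain = (1.0 - decay) / rates            # exact step integral for constant f
    for n in range(1, len(f_vals)):
        H = decay * H + gain * f_vals[n - 1]
        out.append(float(weights @ H))
    return np.array(out)

# Sanity check: k(t) = exp(-t) is a one-term SOE; f = 1 gives I(t) = 1 - e^{-t}
t = np.linspace(0, 5, 501)
I = soe_history(np.ones_like(t), np.array([1.0]), np.array([1.0]), t[1] - t[0])
print(np.max(np.abs(I - (1 - np.exp(-t)))))  # ~ machine precision: the
# recursion is exact here because f is constant on every step
```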
We derive a mixed-dimensional 3D-1D formulation of the electrostatic equation in two domains with different dielectric constants to compute, with an affordable computational cost, the electric field and potential in the relevant case of thin inclusions in a larger 3D domain. The numerical solution is obtained by Mixed Finite Elements for the 3D problem and Finite Elements on the 1D domain. We analyze some test cases with simple geometries to validate the proposed approach against analytical solutions, and perform comparisons with the fully resolved 3D problem. We treat the case where ramifications are present in the one-dimensional domain and show some results on the geometry of an electrical treeing, a ramified structure that propagates in insulators causing their failure.
Lattice gauge fixing is required to compute gauge-variant quantities, for example those used in RI-MOM renormalization schemes or as objects of comparison for model calculations. Recently, gauge-variant quantities have also been found to be more amenable to signal-to-noise optimization using contour deformations. These applications motivate systematic parameterization and exploration of gauge-fixing schemes. This work introduces a differentiable parameterization of gauge fixing which is broad enough to cover Landau gauge, Coulomb gauge, and maximal tree gauges. The adjoint state method allows gradient-based optimization to select gauge-fixing schemes that minimize an arbitrary target loss function.
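As a toy instance of gradient-based selection of a gauge transformation, one can maximize the standard Landau gauge-fixing functional for a U(1) lattice field with a generic optimizer; the random configuration, lattice size, and use of scipy in place of the paper's adjoint-state machinery are all our own choices:

```python
import numpy as np
from scipy.optimize import minimize

# Toy: drive a 2D U(1) configuration toward Landau gauge by maximizing
#   F[g] = sum_{x,mu} Re[ g(x) U_mu(x) conj(g(x + mu_hat)) ],  g(x) = exp(i phi(x)).
L = 8
rng = np.random.default_rng(0)
U = np.exp(1j * rng.uniform(-np.pi, np.pi, size=(2, L, L)))  # links U_mu(x)

def neg_functional(phi_flat):
    g = np.exp(1j * phi_flat.reshape(L, L))
    F = 0.0
    for mu, shift in enumerate([(1, 0), (0, 1)]):
        g_fwd = np.roll(g, (-shift[0], -shift[1]), axis=(0, 1))  # g(x + mu_hat)
        F += np.sum(np.real(g * U[mu] * np.conj(g_fwd)))
    return -F  # minimize the negative to maximize F

res = minimize(neg_functional, np.zeros(L * L), method="L-BFGS-B")
print("gauge functional raised from", -neg_functional(np.zeros(L * L)),
      "to", -res.fun)
```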
We develop confidence sets which provide spatial uncertainty guarantees for the output of a black-box machine learning model designed for image segmentation. To do so, we adapt conformal inference to the imaging setting, obtaining thresholds on a calibration dataset based on the distribution of the maximum of the transformed logit scores within and outside of the ground-truth masks. We prove that these confidence sets, when applied to new predictions of the model, are guaranteed to contain the true unknown segmented mask with the desired probability. We show that learning appropriate score transformations on a learning dataset before performing calibration is crucial for optimizing performance. We illustrate and validate our approach on a polyp tumor dataset. To do so, we obtain the logit scores from a deep neural network trained for polyp segmentation and show that using distance-transformed scores for the outer confidence sets and the original scores for the inner confidence sets enables tight bounds on tumor location whilst controlling the false coverage rate.
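A minimal reading of the calibration step can be sketched as follows; the quantile bookkeeping is simplified and the function names are ours:

```python
import numpy as np

def calibrate_thresholds(scores, masks, alpha=0.1):
    """Split-conformal thresholds for outer/inner confidence sets.
    scores: (n, H, W) transformed logit score maps on calibration images
    masks:  (n, H, W) boolean ground-truth masks
    Outer set {f >= t_out} aims to contain the true mask of a new image with
    probability ~ 1 - alpha; inner set {f > t_in} aims to be contained in it."""
    n = len(scores)
    # the mask lies inside {f >= t} iff the min score over the mask is >= t
    min_inside = np.array([s[m].min() for s, m in zip(scores, masks)])
    # {f > t} lies inside the mask iff the max background score is <= t
    max_outside = np.array([s[~m].max() for s, m in zip(scores, masks)])
    k_lo = max(int(np.floor(alpha * (n + 1))) - 1, 0)
    k_hi = min(int(np.ceil((1 - alpha) * (n + 1))) - 1, n - 1)
    t_out = np.sort(min_inside)[k_lo]   # lower conformal quantile
    t_in = np.sort(max_outside)[k_hi]   # upper conformal quantile
    return t_out, t_in

# Usage on a new score map f: outer = (f >= t_out); inner = (f > t_in)
```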