Satisfiability Modulo Theories (SMT) solvers check the satisfiability of quantifier-free first-order logic formulas. We consider the theory of non-linear real arithmetic, where the formulae are logical combinations of polynomial constraints. Here a commonly used tool is Cylindrical Algebraic Decomposition (CAD), which decomposes real space into cells on which the constraints are truth-invariant through the use of projection polynomials. An improved approach is to repackage the CAD theory into a search-based algorithm: one that guesses sample points to satisfy the formula, and generalizes guesses that conflict with the constraints to cylindrical cells around the samples, which are then avoided in the continuing search. Such an approach can lead to a satisfying assignment more quickly, or conclude unsatisfiability with fewer cells. A notable example of this approach is Jovanovi\'c and de Moura's NLSAT algorithm. Since these cells are produced locally to a sample, we may need fewer projection polynomials than the traditional CAD projection requires. The original NLSAT algorithm reduced this set a little, while Brown's single-cell construction reduced it much further still. However, the shape and size of the cell produced depend on the order in which the polynomials are considered. This paper proposes a method to construct such cells levelwise, i.e. built level-by-level according to a variable ordering. We still use a reduced number of projection polynomials, but can now consider a variety of different reductions and use heuristics to select the projection polynomials in order to optimise the shape of the cell under construction. We formulate all the necessary theory as a proof system: while not a common presentation for work in this field, it allows an elegant decoupling of the heuristics from the algorithm and its proof of correctness.
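As a minimal illustration of one level of building a cylindrical cell around a sample (a sketch only, not the paper's levelwise algorithm or its choice of projection polynomials; the polynomials, variable ordering and sample below are assumed for the example), one can substitute the sample values of the lower variables into each polynomial and bound the cell in the current variable by the nearest real roots above and below the sample value:

# Minimal sketch (not the paper's algorithm) of one level of cell construction
# around a sample, using sympy. Polynomials, ordering and sample are assumptions.
import sympy as sp

x, y = sp.symbols('x y')

def cell_bounds_at_level(polys, var, sample):
    """Bound the cell in `var` around `sample[var]`, after substituting the
    sample values of the other variables into each polynomial."""
    lower, upper = -sp.oo, sp.oo
    point = sample[var]
    for p in polys:
        p_sub = p.subs({v: a for v, a in sample.items() if v != var})
        for r in sp.Poly(p_sub, var).real_roots():
            if r <= point:
                lower = max(lower, r)   # closest root at or below the sample
            if r >= point:
                upper = min(upper, r)   # closest root at or above the sample
    return lower, upper

# Example: polynomials x^2 + y^2 - 1 and y - x, sample (x, y) = (1/4, 0).
polys = [x**2 + y**2 - 1, y - x]
print(cell_bounds_at_level(polys, y, {x: sp.Rational(1, 4), y: 0}))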
The spectral clustering algorithm is often used as a binary clustering method for unclassified data by applying principal component analysis. To study the theoretical properties of the algorithm, existing studies often assume homoscedasticity. However, this assumption is restrictive and often unrealistic in practice. Therefore, in this paper, we consider the allometric extension model, in which the directions of the first eigenvectors of the two covariance matrices and the direction of the difference of the two mean vectors coincide, and we provide a non-asymptotic bound on the error probability of the spectral clustering algorithm for the allometric extension model. As a byproduct of the result, we obtain the consistency of the clustering method in high-dimensional settings.
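A minimal sketch of the PCA-based binary clustering referred to above (the synthetic data, which mimics the allometric-extension situation where the mean difference is aligned with the leading covariance eigenvector, is an assumption of the sketch):

# Binary clustering via the first principal component; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], [2.0, 0.5], size=(100, 2)),
               rng.normal([5, 0], [3.0, 0.5], size=(100, 2))])

Xc = X - X.mean(axis=0)                      # centre the data
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]                          # projection on the first principal component
labels = (scores > 0).astype(int)            # split by the sign of the score
print(np.bincount(labels))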
We analyse a numerical scheme for a system arising from a novel description of the standard elastic--perfectly plastic response. The elastic--perfectly plastic response is described via rate-type equations that do not make use of the standard elastic-plastic decomposition, and the model does not require the use of variational inequalities. Furthermore, the model naturally includes the evolution equation for the temperature. We present a low-order discretisation based on the finite element method. Under certain restrictions on the mesh, we subsequently prove the existence of discrete solutions, and we discuss the stability properties of the numerical scheme. The analysis is supplemented with computational examples.
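Purely for illustration of what a rate-type description without an explicit elastic-plastic split can look like, here is a generic one-dimensional Perzyna-type regularisation whose stress approaches the elastic--perfectly plastic response as the relaxation time tends to zero; this is a textbook surrogate, not the model or the finite element scheme analysed in the paper:

# Illustrative 1D rate-type stress update (Perzyna-style regularisation);
# E, sigma_y, eta and the strain history are assumptions of this sketch.
import numpy as np

E, sigma_y, eta = 200.0, 1.0, 0.05   # modulus, yield stress, relaxation time
dt, n = 1e-4, 2000
eps_rate = 1.0                       # constant imposed strain rate

sigma = 0.0
for _ in range(n):
    # dsigma/dt = E*deps/dt - (E/eta)*max(|sigma| - sigma_y, 0)*sign(sigma)
    overstress = max(abs(sigma) - sigma_y, 0.0)
    sigma += dt * (E * eps_rate - (E / eta) * overstress * np.sign(sigma))
print(sigma)   # approaches the yield stress sigma_y as eta -> 0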
Given a Boolean formula $\Phi(X, Y, Z)$, the Max\#SAT problem asks for a partial model over the set of variables X that maximizes the number of its projected models over the set of variables Y. We investigate a strict generalization of Max\#SAT allowing dependencies for variables in X, effectively turning it into a synthesis problem. We show that this new problem, called DQMax\#SAT, subsumes both the DQBF and DSSAT problems. We provide a general resolution method, based on a reduction to Max\#SAT, together with two improvements for dealing with its inherent complexity. We further discuss a concrete application of DQMax\#SAT to the symbolic synthesis of adaptive attackers in the field of program security. Finally, we report preliminary results obtained on the resolution of benchmark problems using a prototype DQMax\#SAT solver implementation.
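A brute-force sketch of the base Max\#SAT problem (not DQMax\#SAT and not the paper's resolution method): enumerate assignments to X and, for each, count the assignments to Y for which some assignment to Z makes the formula true; the toy formula is an assumption of the sketch:

# Brute-force Max#SAT on a toy formula.
from itertools import product

X, Y, Z = ['x1'], ['y1', 'y2'], ['z1']

def phi(m):
    # toy formula: (x1 -> (y1 or y2)) and (y1 or z1)
    return (not m['x1'] or (m['y1'] or m['y2'])) and (m['y1'] or m['z1'])

def projected_count(mx):
    count = 0
    for vy in product([False, True], repeat=len(Y)):
        my = dict(zip(Y, vy))
        if any(phi({**mx, **my, **dict(zip(Z, vz))})
               for vz in product([False, True], repeat=len(Z))):
            count += 1
    return count

best = max((dict(zip(X, vx)) for vx in product([False, True], repeat=len(X))),
           key=projected_count)
print(best, projected_count(best))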
We explore a link between complexity and physics for circuits of given functionality. Taking advantage of the connection between circuit counting problems and the derivation of ensembles in statistical mechanics, we tie the entropy of circuits of a given functionality and fixed number of gates to circuit complexity. We use thermodynamic relations to connect the quantity analogous to the equilibrium temperature to the exponent describing the exponential growth of the number of distinct functionalities as a function of complexity. This connection is intimately related to the finite compressibility of typical circuits. Finally, we use the thermodynamic approach to formulate a framework for the obfuscation of programs of arbitrary length -- an important problem in cryptography -- as thermalization through recursive mixing of neighboring sections of a circuit, which can be viewed as the mixing of two containers with ``gases of gates''. This recursive process equilibrates the average complexity and leads to the saturation of the circuit entropy, while preserving the functionality of the overall circuit. The thermodynamic arguments hinge on ergodicity in the space of circuits, which we conjecture is limited to disconnected ergodic sectors due to fragmentation. The notion of fragmentation has important implications for the problem of circuit obfuscation, as it implies that there are circuits of the same size and functionality that cannot be connected via local moves. Furthermore, we argue that fragmentation is unavoidable unless the complexity classes NP and coNP coincide, a statement that implies the collapse of the polynomial hierarchy of complexity theory to its first level.
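As a rough sketch of the thermodynamic dictionary invoked above (the exponential form of the counting function below is an assumption made purely for illustration, not a result quoted from the paper):

\[ S(K) = \ln N(K), \qquad \frac{1}{T} \sim \frac{\partial S}{\partial K}, \qquad N(K) \propto e^{\lambda K} \;\Longrightarrow\; \frac{1}{T} \sim \lambda, \]

where $K$ plays the role of an energy-like complexity variable, $N(K)$ counts the objects accessible at complexity $K$, and $\lambda$ is the growth exponent referred to in the abstract.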
This paper presents a novel approach to constructing regularizing operators for severely ill-posed Fredholm integral equations of the first kind by introducing a parametrized discretization. The optimal values of the discretization and regularization parameters are computed simultaneously by solving a minimization problem formulated from a regularization parameter search criterion. The effectiveness of the proposed approach is demonstrated through examples of noisy Laplace transform inversion and the deconvolution of nuclear magnetic resonance relaxation data.
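A sketch of the general idea of choosing a discretization size n and a regularization parameter lam jointly by minimizing a search criterion over a grid (the Laplace-type kernel, the Tikhonov regularization, the GCV criterion and the test data are all assumptions of this sketch, not the paper's construction or criterion):

# Joint (n, lam) selection for a discretized first-kind Fredholm equation
#   g(s) = \int_0^1 k(s, t) f(t) dt.
import numpy as np

def tikhonov_gcv(kernel, g_func, n, lam, m=60):
    s = np.linspace(0.1, 5.0, m)              # observation points
    t = np.linspace(0.0, 1.0, n)              # quadrature (discretization) nodes
    w = np.full(n, 1.0 / n)                   # rectangle-rule weights
    A = kernel(s[:, None], t[None, :]) * w    # discretized integral operator
    g = g_func(s) + 1e-3 * np.random.default_rng(1).standard_normal(m)
    M = A.T @ A + lam * np.eye(n)             # Tikhonov normal equations
    f = np.linalg.solve(M, A.T @ g)
    H = A @ np.linalg.solve(M, A.T)           # influence matrix for GCV
    return m * np.sum((A @ f - g) ** 2) / (m - np.trace(H)) ** 2

kernel = lambda s, t: np.exp(-s * t)          # Laplace-transform kernel
g_exact = lambda s: (1 - np.exp(-s)) / s      # transform of f(t) = 1
grid = [(n, lam) for n in (10, 20, 40) for lam in 10.0 ** np.arange(-8, -1)]
print(min(grid, key=lambda p: tikhonov_gcv(kernel, g_exact, *p)))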
We study the problem of recovering high-order partial derivatives of bivariate functions of finite smoothness. Based on the truncation method, we construct a numerical differentiation algorithm that is order-optimal both in the sense of accuracy and in the sense of the amount of Galerkin information involved. Numerical experiments are provided to illustrate that the proposed method can be implemented successfully.
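A sketch of the truncation idea: differentiate a truncated expansion built from noisy coefficients, keeping only the modes up to a truncation level. The Fourier basis, the test function, the noise level and the choice of N below are assumptions of this sketch, not the paper's order-optimal rule:

# Truncated spectral differentiation of a bivariate function from noisy
# Fourier ("Galerkin") coefficients.
import numpy as np

rng = np.random.default_rng(0)
x = y = np.linspace(0, 2 * np.pi, 128, endpoint=False)
X, Y = np.meshgrid(x, y, indexing='ij')
f = np.sin(3 * X) * np.cos(2 * Y)              # smooth periodic test function

c = np.fft.fft2(f) / f.size                    # Fourier coefficients (the Galerkin information)
c += 1e-4 * (rng.standard_normal(c.shape) + 1j * rng.standard_normal(c.shape))

kx = np.fft.fftfreq(f.shape[0], d=1.0 / f.shape[0])
ky = np.fft.fftfreq(f.shape[1], d=1.0 / f.shape[1])
KX, KY = np.meshgrid(kx, ky, indexing='ij')

N = 8                                          # truncation level (assumed, noise-dependent)
mask = (np.abs(KX) <= N) & (np.abs(KY) <= N)
d_coeff = (1j * KX) ** 2 * (1j * KY) * c * mask   # coefficients of d^3 f / dx^2 dy
approx = np.real(np.fft.ifft2(d_coeff * f.size))

exact = 18 * np.sin(3 * X) * np.sin(2 * Y)     # exact d^3/dx^2dy of sin(3x)cos(2y)
print(np.max(np.abs(approx - exact)))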
In general, high-order splitting methods suffer from an order reduction phenomenon when applied to the time integration of partial differential equations with non-periodic boundary conditions. In the last decade, several modifications have been introduced to prevent the second-order Strang splitting method from suffering from this phenomenon. In this article, inspired by these recent corrector techniques, we introduce a splitting method of order three for a class of semilinear parabolic problems that avoids order reduction in the context of non-periodic boundary conditions. We give a proof of third-order convergence of the method in a simplified linear setting and confirm the result by numerical experiments. Moreover, we show numerically that the high-order convergence persists for an order-four variant of the splitting method, and also for a nonlinear source term.
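For reference, one step of the standard (uncorrected) second-order Strang splitting for a semilinear heat equation u_t = u_xx + f(u) with homogeneous Dirichlet boundary conditions; the paper's corrected third-order scheme is not reproduced here, and the grid, the source term f and the initial data are assumptions of this sketch:

# One Strang splitting step: half diffusion, full source step, half diffusion.
import numpy as np
from scipy.linalg import expm

n, dt = 50, 1e-3
x = np.linspace(0, 1, n + 2)[1:-1]                  # interior nodes
h = x[1] - x[0]
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2          # discrete Laplacian
E_half = expm(0.5 * dt * A)                         # half-step diffusion propagator
f = lambda u: u - u**3                              # nonlinear source (assumed)

def strang_step(u):
    u = E_half @ u                                  # half step of diffusion
    u = u + dt * f(u + 0.5 * dt * f(u))             # source step (explicit midpoint rule)
    return E_half @ u                               # half step of diffusion

u = np.sin(np.pi * x)
for _ in range(100):
    u = strang_step(u)
print(u.max())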
With the increasing availability of large-scale datasets, computational power, and tools like automatic differentiation and expressive neural network architectures, sequential data are now often treated in a data-driven way, with a dynamical model trained from the observation data. While neural networks are often seen as uninterpretable black-box architectures, they can still benefit from physical priors on the data and from mathematical knowledge. In this paper, we use a neural network architecture which leverages the long-known Koopman operator theory to embed dynamical systems in latent spaces where their dynamics can be described linearly, enabling a number of appealing features. We introduce methods that make it possible to train such a model for long-term continuous reconstruction, even in difficult contexts where the data come as irregularly sampled time series. The potential for self-supervised learning is also demonstrated, as we show the promising use of trained dynamical models as priors for variational data assimilation techniques, with applications to e.g. time series interpolation and forecasting.
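A minimal sketch of a Koopman-style autoencoder of the kind described above: encode states into a latent space, advance them there with a single linear map, and decode. The layer sizes, loss terms and training data below are assumptions of this sketch, not the paper's architecture or training procedure:

# Koopman autoencoder sketch: encoder, linear latent dynamics K, decoder.
import torch
import torch.nn as nn

class KoopmanAE(nn.Module):
    def __init__(self, state_dim=2, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.Tanh(),
                                     nn.Linear(32, state_dim))
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)   # linear latent dynamics

    def forward(self, x, steps=1):
        z = self.encoder(x)
        for _ in range(steps):
            z = self.K(z)                       # advance `steps` steps linearly
        return self.decoder(z)

model = KoopmanAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_t = torch.randn(64, 2)                        # placeholder snapshot pairs
x_next = x_t + 0.1 * torch.randn(64, 2)
loss = (nn.functional.mse_loss(model(x_t, steps=0), x_t)        # reconstruction
        + nn.functional.mse_loss(model(x_t, steps=1), x_next))  # one-step prediction
loss.backward()
opt.step()
print(float(loss))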
The HEat modulated Infinite DImensional Heston (HEIDIH) model and its numerical approximation are introduced and analyzed. This model falls into the general framework of infinite dimensional Heston stochastic volatility models of (F.E. Benth, I.C. Simonsen '18), introduced for the pricing of forward contracts. The HEIDIH model consists of a one-dimensional stochastic advection equation coupled with a stochastic volatility process, defined as a Cholesky-type decomposition of the tensor product of a Hilbert-space valued Ornstein-Uhlenbeck process, the mild solution to the stochastic heat equation on the real half-line. The advection and heat equations are driven by independent space-time Gaussian processes which are white in time and colored in space, with the latter covariance structure expressed by two different kernels. First, a class of weight-stationary kernels is given, under which regularity results for the HEIDIH model in fractional Sobolev spaces are formulated. In particular, the class includes weighted Mat\'ern kernels. Second, numerical approximation of the model is considered. An error decomposition formula, pointwise in space and time, for a finite-difference scheme is proven. For a special case, essentially sharp convergence rates are obtained when this is combined with a fully discrete finite element approximation of the stochastic heat equation. The analysis takes into account a localization error, a pointwise-in-space finite element discretization error and an error stemming from the noise being sampled pointwise in space. The rates obtained in the analysis are higher than what would be obtained using a standard Sobolev embedding technique. Numerical simulations illustrate the results.
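A simple sketch of a fully discrete approximation of a stochastic heat equation on a truncated half-line, with noise white in time and sampled pointwise in space from an assumed covariance kernel; the truncation, the squared-exponential kernel, the explicit scheme and all parameters are assumptions of this sketch, not the scheme or the kernels analysed in the paper:

# Explicit Euler--Maruyama / finite-difference step for dY = Y_xx dt + dW.
import numpy as np

rng = np.random.default_rng(0)
L, n, dt, steps = 10.0, 200, 1e-4, 1000
x = np.linspace(0, L, n + 2)[1:-1]           # interior nodes, Dirichlet at 0, cut-off at L
h = x[1] - x[0]

# Spatial covariance sampled pointwise at the grid nodes (assumed kernel).
C = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
C_sqrt = np.linalg.cholesky(C + 1e-8 * np.eye(n))

Y = np.zeros(n)
for _ in range(steps):
    lap = (np.roll(Y, -1) - 2 * Y + np.roll(Y, 1)) / h**2
    lap[0] = (Y[1] - 2 * Y[0]) / h**2        # boundary-aware stencil at the ends
    lap[-1] = (Y[-2] - 2 * Y[-1]) / h**2
    dW = C_sqrt @ rng.standard_normal(n) * np.sqrt(dt)   # colored-in-space increment
    Y = Y + dt * lap + dW
print(np.max(np.abs(Y)))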
Various methods have been proposed to approximate the Caputo fractional derivative numerically. A common challenge for these methods is the non-local nature of the Caputo fractional derivative, which leads to slow and memory-consuming computations. The diffusive representation of the fractional derivative is an efficient tool to overcome this challenge. This paper presents two new diffusive representations to approximate the Caputo fractional derivative of order $0<\alpha<1$. An error analysis of the newly presented methods, together with some numerical examples, is provided at the end.
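For orientation, a sketch of the classical diffusive representation of the Caputo derivative of order $0<\alpha<1$ (the paper's two new representations are not reproduced here): $D^\alpha f(t) = \int_0^\infty \mu(\omega)\,\varphi(\omega,t)\,d\omega$ with $\partial_t \varphi = -\omega\varphi + f'(t)$, $\varphi(\omega,0)=0$ and $\mu(\omega) = \frac{\sin(\alpha\pi)}{\pi}\,\omega^{\alpha-1}$. The quadrature in $\omega$ and the time stepping below are simple assumed choices:

# Classical diffusive representation of the Caputo derivative, discretized.
import numpy as np
from math import gamma

alpha, dt, T = 0.5, 1e-3, 1.0
f_prime = lambda t: 2 * t                      # f(t) = t^2, so D^alpha f is known exactly

w = np.logspace(-4, 6, 200)                    # quadrature nodes in omega (assumed choice)
dw = np.gradient(w)
mu = np.sin(alpha * np.pi) / np.pi * w ** (alpha - 1)

phi = np.zeros_like(w)
t = 0.0
while t < T - 1e-12:
    # implicit Euler step of d(phi)/dt = -w*phi + f'(t+dt), stable for stiff w
    phi = (phi + dt * f_prime(t + dt)) / (1 + dt * w)
    t += dt

approx = np.sum(mu * phi * dw)
exact = 2 * T ** (2 - alpha) / ((1 - alpha) * (2 - alpha) * gamma(1 - alpha))
print(approx, exact)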