We introduce a simple natural deduction system for reasoning with judgments of the form "there exists a proof of $\varphi$" in order to explore the notion of judgmental existence, following Martin-L\"{o}f's methodology of distinguishing between judgments and propositions. In this system, the existential judgment can be internalized into a modal notion of propositional existence that is closely related to the truncation modality, a key tool for obtaining proof irrelevance, and to the lax modality. We provide a computational interpretation in the style of the Curry-Howard isomorphism for the existence modality and show that the corresponding system enjoys desirable properties such as strong normalization and subject reduction.
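Purely for orientation, and writing $\mathsf{E}\varphi$ for propositional existence (an illustrative sketch in the style of judgmental lax/possibility modalities, not necessarily the paper's exact rules), the internalization might take the shape
\[
\frac{\varphi\ \mathit{exists}}{\mathsf{E}\varphi\ \mathit{true}}\;(\mathsf{E}\text{-I})
\qquad\qquad
\frac{\mathsf{E}\varphi\ \mathit{true}\qquad \varphi\ \mathit{exists}\ \vdash\ \psi\ \mathit{exists}}{\psi\ \mathit{exists}}\;(\mathsf{E}\text{-E}),
\]
mirroring how, in Pfenning--Davies-style systems, possibility may only be eliminated into possibility judgments.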
Bent functions are maximally nonlinear Boolean functions in an even number of variables. They include a subclass, the so-called hyper-bent functions, whose properties are stronger than those of bent functions and for which a complete classification remains elusive. In this paper, we solve an open problem of Mesnager concerning the hyper-bentness of functions with multiple trace terms via Dillon-like exponents with coefficients in the extension field $\mathbb{F}_{2^{2m}}$ of the field $\mathbb{F}_{2^{m}}$. By applying M\"{o}bius transformations and theorems on hyperelliptic curves, we characterize the hyper-bentness of these functions over $\mathbb{F}_{2^{2m}}$ for odd $m$.
Machine learning (ML) methods, which fit the parameters of a given parameterized model class to data, have garnered significant interest as potential methods for learning surrogate models of complex engineering systems for which traditional simulation is expensive. However, in many scientific and engineering settings, generating high-fidelity data on which to train ML models is expensive, and the available budget for generating training data is limited, so high-fidelity training data are scarce. ML models trained on scarce data have high variance, resulting in poor expected generalization performance. We propose a new multifidelity training approach for scientific machine learning via linear regression that exploits the scientific context in which data of varying fidelities and costs are available: for example, high-fidelity data may be generated by an expensive, fully resolved physics simulation, whereas lower-fidelity data may arise from a cheaper model based on simplifying assumptions. We use the multifidelity data within an approximate control variate framework to define new multifidelity Monte Carlo estimators for linear regression models. We provide a bias and variance analysis of the new estimators that guarantees the approach's accuracy and its improved robustness to scarce high-fidelity data. Numerical results demonstrate that our multifidelity training approach achieves accuracy similar to that of the standard high-fidelity-only approach while requiring orders of magnitude less high-fidelity data.
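As a rough illustration of the control-variate structure (a minimal sketch under simplifying assumptions, not the paper's exact estimator; all names below are hypothetical), a multifidelity coefficient estimate can be assembled from three ordinary least-squares fits:
\begin{verbatim}
import numpy as np

def ols(X, y):
    # Ordinary least-squares coefficients of y ~ X.
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Hypothetical data layout: (X_hf, y_hf) is the scarce high-fidelity sample,
# y_lf_on_hf holds low-fidelity outputs at the *same* inputs X_hf, and
# (X_lf_big, y_lf_big) is a large, inexpensive low-fidelity sample.
def multifidelity_ols(X_hf, y_hf, y_lf_on_hf, X_lf_big, y_lf_big, alpha=1.0):
    beta_hf = ols(X_hf, y_hf)              # unbiased but high-variance
    beta_lf_small = ols(X_hf, y_lf_on_hf)  # low-fidelity fit on the shared inputs
    beta_lf_big = ols(X_lf_big, y_lf_big)  # low-variance low-fidelity fit
    # Control-variate correction: shift the high-fidelity estimate by the
    # discrepancy between the large- and small-sample low-fidelity estimates.
    return beta_hf + alpha * (beta_lf_big - beta_lf_small)
\end{verbatim}
Here \texttt{alpha} plays the role of a control-variate weight; how such weights are chosen and how the resulting bias and variance behave is exactly what the analysis in the paper addresses.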
We deal with a model selection problem for structural equation modeling (SEM) with latent variables for diffusion processes. Based on the asymptotic expansion of the marginal quasi-log likelihood, we propose two types of quasi-Bayesian information criteria of the SEM. It is shown that the information criteria have model selection consistency. Furthermore, we examine the finite-sample performance of the proposed information criteria by numerical experiments.
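For orientation, Schwarz-type criteria of this kind typically arise from a Laplace-type expansion of the marginal (quasi-)likelihood; schematically (notation ours), with quasi-log-likelihood $\mathbb{H}_n(\theta)$, prior $\pi$, and $p=\dim\theta$,
\[
\log\!\int e^{\mathbb{H}_n(\theta)}\pi(\theta)\,d\theta
\;\approx\;
\mathbb{H}_n(\hat\theta_n)-\frac{1}{2}\log\det\bigl(-\partial_\theta^{2}\mathbb{H}_n(\hat\theta_n)\bigr)+\frac{p}{2}\log 2\pi+\log\pi(\hat\theta_n),
\]
and an information criterion is read off by multiplying by $-2$ and discarding asymptotically negligible terms; the precise form of the two criteria proposed here is specific to the diffusion-process SEM setting.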
We consider the problem of enumerating all minimal transversals (also called minimal hitting sets) of a hypergraph $\mathcal{H}$. An equivalent formulation of this problem known as the \emph{transversal hypergraph} problem (or \emph{hypergraph dualization} problem) is to decide, given two hypergraphs, whether one corresponds to the set of minimal transversals of the other. The existence of a polynomial time algorithm to solve this problem is a long-standing open question. In \cite{fredman_complexity_1996}, the authors present the first sub-exponential algorithm for the transversal hypergraph problem, which runs in quasi-polynomial time, making it unlikely that the problem is (co)NP-complete. In this paper, we show that when one of the two hypergraphs is of bounded VC-dimension, the transversal hypergraph problem can be solved in polynomial time, or equivalently that if $\mathcal{H}$ is a hypergraph of bounded VC-dimension, then there exists an incremental polynomial time algorithm to enumerate its minimal transversals. This result generalizes most of the previously known polynomial cases in the literature since they almost all consider classes of hypergraphs of bounded VC-dimension. As a consequence, the hypergraph transversal problem is solvable in polynomial time for any class of hypergraphs closed under partial subhypergraphs. We also show that the proposed algorithm runs in quasi-polynomial time in general hypergraphs and runs in polynomial time if the conformality of the hypergraph is bounded, which is one of the few known polynomial cases where the VC-dimension is unbounded.
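To fix the object being enumerated (this is emphatically not the paper's algorithm, which is incremental polynomial for bounded VC-dimension), here is a brute-force sketch that lists minimal transversals by increasing size and takes exponential time in general:
\begin{verbatim}
from itertools import combinations

def minimal_transversals(edges, universe):
    # Brute force: enumerate candidate sets by increasing size and keep those
    # that hit every edge and contain no previously found minimal transversal.
    edges = [frozenset(e) for e in edges]
    found = []
    for k in range(len(universe) + 1):
        for T in map(frozenset, combinations(universe, k)):
            if all(T & e for e in edges) and not any(m < T for m in found):
                found.append(T)
    return found

# Edges {1,2} and {2,3} over {1,2,3}: minimal transversals are {2} and {1,3}.
print(minimal_transversals([{1, 2}, {2, 3}], [1, 2, 3]))
\end{verbatim}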
Structure-preserving particle methods have recently been proposed for a class of nonlinear continuity equations, including the aggregation-diffusion equation in [J. Carrillo, K. Craig, F. Patacchini, Calc. Var., 58 (2019), pp. 53] and the Landau equation in [J. Carrillo, J. Hu, L. Wang, J. Wu, J. Comput. Phys. X, 7 (2020), pp. 100066]. One common feature of these equations is that they both admit a variational formulation which, upon proper regularization, leads to particle approximations that dissipate the energy and conserve some quantities simultaneously at the semi-discrete level. In this paper, we formulate continuity equations with a density-dependent bilinear form associated with the variational derivative of the energy functional and prove that appropriate particle methods satisfy a compatibility condition with the regularized energy. This enables us to utilize discrete gradient time integrators and to show that the energy is dissipated and the mass conserved simultaneously at the fully discrete level. In the case of the Landau equation, we prove that our approach also conserves the momentum and kinetic energy at the fully discrete level. Several numerical examples are presented to demonstrate the dissipative and conservative properties of the proposed method.
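The mechanism behind the fully discrete dissipation is the standard discrete-gradient property; in its simplest ODE form (shown here only as an illustration), a discrete gradient $\overline{\nabla}E$ satisfies $\overline{\nabla}E(x,y)\cdot(y-x)=E(y)-E(x)$, so the scheme
\[
\frac{x^{n+1}-x^{n}}{\Delta t}=-\overline{\nabla}E\bigl(x^{n},x^{n+1}\bigr)
\quad\Longrightarrow\quad
E\bigl(x^{n+1}\bigr)-E\bigl(x^{n}\bigr)=-\Delta t\,\bigl\|\overline{\nabla}E\bigl(x^{n},x^{n+1}\bigr)\bigr\|^{2}\le 0
\]
dissipates the energy exactly at every time step; in the setting above the same idea is applied to the regularized particle energy and to the conserved quantities.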
We address the problem of best uniform approximation of a continuous function on a convex domain. The approximation is by linear combinations of a finite system of functions (not necessarily Chebyshev) under arbitrary linear constraints. By modifying the concept of alternance and the Remez iterative procedure, we present a method that demonstrates its efficiency in numerical problems. A linear rate of convergence is proved under some favourable assumptions. Special attention is paid to systems of complex exponents, Gaussian functions, and lacunary algebraic and trigonometric polynomials. Applications to signal processing, linear ODEs, switching dynamical systems, and Markov-Bernstein type inequalities are considered.
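For comparison, a brute-force baseline (not the alternance-based method above) is to discretize the domain and solve the resulting Chebyshev-type linear program; a minimal sketch with hypothetical names, using SciPy:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def best_uniform_fit(f, basis, grid):
    # Minimize t subject to |f(x_i) - sum_j c_j g_j(x_i)| <= t on the grid,
    # a linear program in the variables (c_1, ..., c_n, t).
    G = np.column_stack([g(grid) for g in basis])
    fvals = f(grid)
    n = G.shape[1]
    obj = np.zeros(n + 1)
    obj[-1] = 1.0
    ones = np.ones((len(grid), 1))
    A_ub = np.block([[G, -ones], [-G, -ones]])
    b_ub = np.concatenate([fvals, -fvals])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[-1]  # coefficients and sup-error on the grid

# Example: best uniform fit of exp(x) on [0,1] by {1, x, x^2}.
grid = np.linspace(0.0, 1.0, 201)
coef, err = best_uniform_fit(np.exp, [np.ones_like, lambda x: x, lambda x: x ** 2], grid)
\end{verbatim}
Such an LP baseline handles additional linear constraints easily but scales poorly with the grid size, which is one motivation for alternance-based iterations.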
This contribution introduces a model order reduction approach for an advection-reaction problem with a parametrized reaction function. The underlying discretization uses an ultraweak formulation with an $L^2$-like trial space and an 'optimal' test space as introduced by Demkowicz et al. This ensures the stability of the discretization and in addition allows for a symmetric reformulation of the problem in terms of a dual solution which can also be interpreted as the normal equations of an adjoint least-squares problem. Classic model order reduction techniques can then be applied to the space of dual solutions which also immediately gives a reduced primal space. We show that the necessary computations do not require the reconstruction of any primal solutions and can instead be performed entirely on the space of dual solutions. We prove exponential convergence of the Kolmogorov $N$-width and show that a greedy algorithm produces quasi-optimal approximation spaces for both the primal and the dual solution space. Numerical experiments based on the benchmark problem of a catalytic filter confirm the applicability of the proposed method.
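Schematically, the offline stage can be organized as a standard weak-greedy loop over a training set of parameters; in the generic sketch below, \texttt{solve\_dual} and \texttt{error\_indicator} are placeholders for the problem-specific ultraweak dual solver and error estimator:
\begin{verbatim}
def weak_greedy(training_params, solve_dual, error_indicator, tol, max_dim):
    # Generic weak-greedy enrichment of a reduced (dual) snapshot space.
    basis = []
    while len(basis) < max_dim:
        # Parameter at which the current reduced space performs worst.
        errors = {mu: error_indicator(mu, basis) for mu in training_params}
        mu_star = max(errors, key=errors.get)
        if errors[mu_star] <= tol:
            break
        basis.append(solve_dual(mu_star))  # enrich with a new dual snapshot
    return basis
\end{verbatim}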
A method for numerical approximation of a new class of fractional parabolic stochastic evolution equations is introduced and analysed. This class of equations has recently been proposed as a space-time extension of the SPDE-method in spatial statistics. A truncation of the spectral basis function expansion is used to discretise in space, and then a quadrature is used to approximate the temporal evolution of each basis coefficient. Strong error bounds are proved both for the spectral and temporal approximations. The method is tested and the results are verified by several numerical experiments.
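Schematically (notation ours), the spatial discretization truncates the expansion in the eigenbasis $\{e_j\}$ of the spatial operator,
\[
u^{N}(t,x)=\sum_{j=1}^{N} u_j(t)\,e_j(x),
\]
and the temporal evolution of each coefficient $u_j(t)$ is then approximated by quadrature.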
Time-fractional parabolic equations with a Caputo time derivative of order $\alpha\in(0,1)$ are discretized in time using continuous collocation methods. For such discretizations, we give sufficient conditions for existence and uniqueness of their solutions. Two approaches are explored: the Lax-Milgram Theorem and the eigenfunction expansion. The resulting sufficient conditions, which involve certain $m\times m$ matrices (where $m$ is the order of the collocation scheme), are verified both analytically, for all $m\ge 1$ and all sets of collocation points, and computationally, for all $ m\le 20$. The semilinear case is also addressed.
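For reference, the Caputo derivative of order $\alpha\in(0,1)$ appearing here is defined in the standard way by
\[
D_t^{\alpha}u(t)=\frac{1}{\Gamma(1-\alpha)}\int_0^{t}(t-s)^{-\alpha}\,\partial_s u(s)\,ds .
\]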
This note discusses a simple modification of cross-conformal prediction inspired by recent work on e-values. The precursor of conformal prediction developed in the 1990s by Gammerman, Vapnik, and Vovk was also based on e-values and is called conformal e-prediction in this note. Replacing e-values by p-values led to conformal prediction, which has important advantages over conformal e-prediction without obvious disadvantages. The situation with cross-conformal prediction is, however, different: whereas for cross-conformal prediction validity is only an empirical fact (and can be broken with excessive randomization), this note draws the reader's attention to the obvious fact that cross-conformal e-prediction enjoys a guaranteed property of validity.
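A minimal sketch of the idea (with hypothetical names and one standard way of building conformal e-values from nonconformity scores; not necessarily the note's exact protocol): e-values from the $K$ folds are averaged, and since an average of e-values is again an e-value, Markov's inequality yields the coverage guarantee.
\begin{verbatim}
import numpy as np

def cross_conformal_e_set(X, y, x_test, labels, fit, score, K=5, eps=0.1, seed=0):
    # fit(X, y) returns a model; score(model, x, y) is a nonnegative
    # nonconformity score.  For each candidate label, the e-values obtained
    # from the K folds are averaged; an average of e-values is an e-value,
    # so Markov's inequality gives coverage at least 1 - eps.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), K)
    pred_set = set()
    for lab in labels:
        e_vals = []
        for k in range(K):
            cal = folds[k]
            train = np.concatenate([folds[j] for j in range(K) if j != k])
            model = fit(X[train], y[train])
            a_cal = np.array([score(model, X[i], y[i]) for i in cal])
            a_test = score(model, x_test, lab)
            # One standard conformal e-value: test score over the bag average.
            e_vals.append(a_test / ((a_cal.sum() + a_test) / (len(cal) + 1)))
        if np.mean(e_vals) < 1.0 / eps:
            pred_set.add(lab)
    return pred_set
\end{verbatim}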