
This paper presents a novel simplification calculus for propositional logic derived from Peirce's existential graphs' rules of inference and implication graphs. Our rules can be applied to propositional logic formulae in nested form, are equivalence-preserving, guarantee a monotonically decreasing number of variables, clauses and literals, and maximise the preservation of structural problem information. Our techniques can also be seen as higher-level SAT preprocessing, and we show how one of our rules (TWSR) generalises and streamlines most of the known equivalence-preserving SAT preprocessing methods. In addition, we propose a simplification procedure based on the systematic application of two of our rules (EPR and TWSR) which is solver-agnostic and can be used to simplify large Boolean satisfiability problems and propositional formulae in arbitrary form, and we provide a formal analysis of its algorithmic complexity in terms of space and time. Finally, we show how our rules can be further extended with a novel n-ary implication graph to capture all known equivalence-preserving preprocessing procedures.
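The paper's TWSR rule itself is not specified here, but the flavour of equivalence-preserving, monotonically shrinking CNF preprocessing can be illustrated with two classical rules that such methods generalise: subsumption and self-subsuming resolution. The sketch below is a minimal reference implementation of those two baseline rules only (the clause representation and fixpoint loop are our own assumptions), not of the calculus proposed in the paper.

```python
def subsumes(c, d):
    """Clause c subsumes clause d if every literal of c occurs in d."""
    return c <= d

def preprocess(clauses):
    """Equivalence-preserving CNF shrinking by subsumption and
    self-subsuming resolution, applied until a fixpoint.
    Clauses are frozensets of nonzero ints (DIMACS-style literals)."""
    clauses = {frozenset(c) for c in clauses}
    changed = True
    while changed:
        changed = False
        # Subsumption: drop any clause implied by a subset clause.
        for c in list(clauses):
            if any(d != c and subsumes(d, c) for d in clauses):
                clauses.discard(c)
                changed = True
        # Self-subsuming resolution: if c = R ∪ {l} and d ⊇ R ∪ {-l},
        # the resolvent d - {-l} subsumes d, so d can be strengthened.
        for c in list(clauses):
            for l in c:
                rest = c - {l}
                for d in list(clauses):
                    if d != c and -l in d and rest <= d - {-l}:
                        clauses.discard(d)
                        clauses.add(d - {-l})
                        changed = True
    return clauses
```

Both rules strictly decrease the total number of literals, so the fixpoint loop terminates, mirroring the monotone-decrease guarantee stated in the abstract.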

Related Content

SAT is the primary annual conference for researchers studying the theory and applications of the propositional satisfiability problem. Beyond plain propositional satisfiability, its scope includes Boolean optimization (such as MaxSAT and pseudo-Boolean (PB) constraints), quantified Boolean formulas (QBF), satisfiability modulo theories (SMT), and constraint programming (CP) for problems with a clear connection to Boolean-level reasoning.
July 7, 2024

We present a new method for constructing valid covariance functions of Gaussian processes for spatial analysis in irregular, non-convex domains such as bodies of water. Standard covariance functions based on geodesic distances are not guaranteed to be positive definite on such domains, while existing non-Euclidean approaches fail to respect the partially Euclidean nature of these domains where the geodesic distance agrees with the Euclidean distances for some pairs of points. Using a visibility graph on the domain, we propose a class of covariance functions that preserve Euclidean-based covariances between points that are connected in the domain while incorporating the non-convex geometry of the domain via conditional independence relationships. We show that the proposed method preserves the partially Euclidean nature of the intrinsic geometry on the domain while maintaining validity (positive definiteness) and marginal stationarity of the covariance function over the entire parameter space, properties which are not always fulfilled by existing approaches to construct covariance functions on non-convex domains. We provide useful approximations to improve computational efficiency, resulting in a scalable algorithm. We compare the performance of our method with those of competing state-of-the-art methods using simulation studies on synthetic non-convex domains. The method is applied to data regarding acidity levels in the Chesapeake Bay, showing its potential for ecological monitoring in real-world spatial applications on irregular domains.
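The visibility-graph construction itself is not reproduced here, but the core idea of obtaining a valid (positive definite) covariance from conditional independence on a domain graph can be sketched with a standard Gaussian Markov random field construction: build a graph that respects the non-convex geometry, and take the covariance as the inverse of a sparse precision matrix, which is positive definite by construction. The grid, notch, and regularisation value below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hypothetical non-convex domain: a 2D grid with a rectangular notch removed.
nx, ny = 10, 10
nodes = [(i, j) for i in range(nx) for j in range(ny)
         if not (3 <= i <= 6 and j <= 6)]          # carve out the notch
index = {p: k for k, p in enumerate(nodes)}
n = len(nodes)

# Graph Laplacian over 4-neighbour adjacency restricted to the domain:
# edges never cross the removed region, so the conditional independence
# structure follows the domain geometry rather than straight-line proximity.
L = np.zeros((n, n))
for (i, j), k in index.items():
    for q in ((i + 1, j), (i, j + 1)):
        if q in index:
            m = index[q]
            L[k, k] += 1.0
            L[m, m] += 1.0
            L[k, m] -= 1.0
            L[m, k] -= 1.0

tau = 0.5                     # nugget-like regularisation (assumed value)
Q = tau * np.eye(n) + L       # sparse precision matrix: PD by construction
Sigma = np.linalg.inv(Q)      # hence a valid covariance on the non-convex domain
```

Positive definiteness here is automatic because the precision matrix is a regularised Laplacian, in contrast with geodesic-distance kernels, which offer no such guarantee.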

This paper shows how to use the shooting method, a classical numerical algorithm for solving boundary value problems, to compute the Riemannian distance on the Stiefel manifold $ \mathrm{St}(n,p) $, the set of $ n \times p $ matrices with orthonormal columns. The proposed method is a shooting method in the sense of the classical shooting methods for solving boundary value problems (see, e.g., Stoer and Bulirsch, 1991). Its main feature is an approximate formula for the Fr\'{e}chet derivative of the geodesic involved in the shooting iteration. Numerical experiments demonstrate the algorithm's accuracy and performance. Comparisons with existing state-of-the-art algorithms for the same problem show that our method is competitive and even outperforms several of them in many cases.
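For the special case $p = 1$, the Stiefel manifold $\mathrm{St}(n,1)$ is the unit sphere, where the exponential map is explicit and a single-shooting iteration is easy to state: correct the tangent vector by the tangential part of the endpoint mismatch until the geodesic hits the target. This is a toy sketch of the shooting idea only; the quasi-Newton step with the Jacobian replaced by the identity is our simplification, not the paper's algorithm for general $\mathrm{St}(n,p)$.

```python
import numpy as np

def sphere_exp(x, v):
    """Riemannian exponential map on the unit sphere St(n, 1)."""
    t = np.linalg.norm(v)
    if t < 1e-15:
        return x.copy()
    return np.cos(t) * x + np.sin(t) * v / t

def shooting_distance(x, y, iters=200):
    """Single shooting for the geodesic from x to y: correct the tangent
    vector v by the tangential part of the endpoint mismatch (a quasi-Newton
    step with the Jacobian approximated by the identity)."""
    proj = lambda u: u - np.dot(x, u) * x     # projector onto T_x S
    v = proj(y - x)                           # initial shooting direction
    for _ in range(iters):
        v = v + proj(y - sphere_exp(x, v))
    return np.linalg.norm(v)                  # geodesic length = ||v||
```

On the sphere the distance is $\arccos(x^\top y)$, which gives an exact check; the simple iteration converges for moderately separated points but is not expected to handle nearly antipodal pairs.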

This paper deals with a novel nonlinear coupled nonlocal reaction-diffusion system proposed for image restoration, characterized by its ability to preserve low-gray-level features and textures. The gray level indicator in the proposed model is regularized using a new method based on porous-medium-type equations, which is suitable for recovering noisy, blurred images. The well-posedness, regularity, and other properties of the model are investigated, addressing the lack of theoretical analysis in existing models of similar type. Numerical experiments conducted on texture and satellite images demonstrate the effectiveness of the proposed model in denoising and deblurring tasks.

This paper investigates an efficient exponential integrator generalized multiscale finite element method for solving a class of time-evolving partial differential equations in bounded domains. The proposed method first performs the spatial discretization of the model problem using the constraint energy minimizing generalized multiscale finite element method (CEM-GMsFEM). This approach consists of two stages. First, the auxiliary space is constructed by solving local spectral problems, where the basis functions corresponding to small eigenvalues are captured. In the second stage, the multiscale basis functions are obtained from the auxiliary space by solving local energy minimization problems over the oversampling domains. The resulting basis functions decay exponentially outside the corresponding local oversampling regions. We consider first- and second-order explicit exponential Runge-Kutta schemes for the temporal discretization to build a fully discrete numerical solution. The exponential integration strategy for the time variable allows us to take full advantage of the CEM-GMsFEM, as its stability properties enable larger time steps. We derive error estimates in the energy norm under a regularity assumption. Finally, we provide numerical experiments that demonstrate the efficiency of the proposed method.
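The CEM-GMsFEM spatial construction is involved, but the temporal ingredient, a first-order exponential integrator, is easy to illustrate on a diagonal stiff system u' = Λu + g(u): the stiff linear part is propagated exactly by the exponential, which is what permits large time steps. The diagonal test problem below is an illustrative assumption, not the paper's multiscale discretisation.

```python
import numpy as np

def exp_euler_diag(lam, g, u0, h, n_steps):
    """First-order exponential Euler (ETD1) for u' = lam * u + g(u), where
    lam is the vector of (stiff) diagonal coefficients:
        u_{n+1} = e^{h lam} u_n + h * phi1(h lam) * g(u_n),
    with phi1(z) = (e^z - 1) / z. The stiff linear part is propagated
    exactly, so stability does not limit the step size h."""
    z = h * np.asarray(lam, float)
    E = np.exp(z)
    safe = np.where(np.abs(z) > 1e-12, z, 1.0)
    phi1 = np.where(np.abs(z) > 1e-12, (E - 1.0) / safe, 1.0)
    u = np.asarray(u0, float).copy()
    for _ in range(n_steps):
        u = E * u + h * phi1 * g(u)
    return u
```

For constant forcing g, ETD1 reproduces the variation-of-constants solution exactly, whatever the step size.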

Besides standard Lagrange interpolation, i.e., interpolation of target functions from scattered point evaluations, positive definite kernel functions are well-suited for the solution of more general reconstruction problems. This is due to the intrinsic structure of the underlying reproducing kernel Hilbert space (RKHS). In fact, kernel-based interpolation has been applied to the reconstruction of bivariate functions from scattered Radon samples in computerized tomography (cf. Iske, 2018) and, moreover, to the numerical solution of elliptic PDEs (cf. Wenzel et al., 2022). As shown in various previous contributions, numerical algorithms and theoretical results from kernel-based Lagrange interpolation can be transferred to more general interpolation problems. In particular, greedy point selection methods were studied in (Wenzel et al., 2022), for the special case of Sobolev kernels. In this paper, we aim to develop and analyze more general kernel-based interpolation methods, for less restrictive settings. To this end, we first provide convergence results for generalized interpolation under minimalistic assumptions on both the selected kernel and the target function. Finally, we prove convergence of popular greedy data selection algorithms for totally bounded sets of functionals. Supporting numerical results are provided for illustration.
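For the Lagrange-interpolation special case, the greedy data selection referred to above can be sketched with the standard P-greedy rule: repeatedly select the point where the power function is largest, maintaining a Newton basis. The Gaussian kernel and grid below are illustrative assumptions; the paper's setting concerns more general functionals than point evaluations.

```python
import numpy as np

def gauss_kernel(X, Y, eps=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

def p_greedy(X, n_points, eps=1.0):
    """P-greedy selection: repeatedly pick the point where the power function
    (the pointwise interpolation error bound) is largest, maintaining a
    Newton basis so each step only needs one kernel-column update."""
    K = gauss_kernel(X, X, eps)
    power2 = np.diag(K).copy()               # squared power function
    basis = np.zeros((len(X), 0))            # Newton basis evaluated at all X
    idx = []
    for _ in range(n_points):
        j = int(np.argmax(power2))
        idx.append(j)
        v = K[:, j] - basis @ basis[j]       # project out selected directions
        v /= np.sqrt(max(power2[j], 1e-15))  # normalise at the new point
        basis = np.column_stack([basis, v])
        power2 = np.maximum(power2 - v ** 2, 0.0)
    return idx
```

The power function vanishes at each selected point, so the rule never selects the same point twice.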

Reduced basis methods for approximating the solutions of parameter-dependent partial differential equations (PDEs) are based on learning the structure of the set of solutions - seen as a manifold ${\mathcal S}$ in some functional space - when the parameters vary. This involves investigating the manifold and, in particular, understanding whether it is close to a low-dimensional affine space. This leads to the notion of Kolmogorov $N$-width, which evaluates to what extent the best choice of a vector space of dimension $N$ approximates ${\mathcal S}$ well enough. If the elements of ${\mathcal S}$ can be well approximated by some well-chosen vector space of dimension $N$ -- provided $N$ is not too large -- then a ``reduced'' basis can be proposed that leads to a Galerkin-type method for the approximation of any element in ${\mathcal S}$. In many cases, however, the Kolmogorov $N$-width is not so small, even if the parameter set lies in a space of small dimension, yielding a manifold of small dimension. In terms of complexity reduction, this gap between the small dimension of the manifold and the large Kolmogorov $N$-width can be explained by the fact that the Kolmogorov $N$-width is a linear notion whereas the dependency on the parameter is, most often, nonlinear. There have been many contributions aiming at reconciling these two statements, based on either deterministic or AI approaches. We investigate here further a new paradigm that, in some sense, merges these two aspects: the nonlinear compressive reduced basis approximation. We focus on a simple multiparameter problem and illustrate rigorously that the complexity associated with the approximation of the solution to the parameter-dependent PDE is directly related to the number of parameters rather than to the Kolmogorov $N$-width.
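The gap between smooth and transport-type parameter dependence can be seen numerically by comparing the singular value decay of snapshot matrices, a standard computable proxy for the Kolmogorov $N$-width. The two toy snapshot families below are our own illustrative choices, not the paper's model problem.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(1.0, 10.0, 100)

# Smooth parameter dependence: snapshots u(x; mu) = exp(-mu * x).
smooth = np.array([np.exp(-mu * x) for mu in mus]).T
# Transport-type dependence: a travelling step u(x; mu) = 1_{x > mu/10}.
transport = np.array([(x > mu / 10.0).astype(float) for mu in mus]).T

s_smooth = np.linalg.svd(smooth, compute_uv=False)
s_trans = np.linalg.svd(transport, compute_uv=False)

def captured(s, N):
    """Relative snapshot energy captured by the best N-dimensional space."""
    return (s[:N] ** 2).sum() / (s ** 2).sum()

# Both families depend on a single parameter, yet only the smooth one
# is well captured by a 5-dimensional linear (reduced basis) space.
cap_smooth = captured(s_smooth, 5)
cap_trans = captured(s_trans, 5)
```

The moving discontinuity is a one-parameter manifold with slowly decaying $N$-width, exactly the regime where nonlinear approaches are needed.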

This paper proposes a second-order accurate direct Eulerian generalized Riemann problem (GRP) scheme for the ten-moment Gaussian closure equations with source terms. The generalized Riemann invariants associated with the rarefaction waves, the contact discontinuity and the shear waves are given, and the 1D exact Riemann solver is obtained. After that, the generalized Riemann invariants and the Rankine-Hugoniot jump conditions are directly used to resolve the left and right nonlinear waves (rarefaction wave and shock wave) of the local GRP in Eulerian formulation, and the 1D direct Eulerian GRP scheme is then derived. These derivations are considerably more complicated, technical and nontrivial than in the gas dynamics case because of the larger number of physical variables and elementary waves. Some 1D and 2D numerical experiments are presented to check the accuracy and high resolution of the proposed GRP schemes, where the 2D direct Eulerian GRP scheme is obtained, for simplicity, by Strang splitting. It should be emphasized that several examples of 2D Riemann problems are constructed for the first time.
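The ten-moment GRP solver is too involved to reproduce here, but the underlying pattern of a direct Eulerian scheme built on local Riemann solutions can be illustrated with a first-order Godunov scheme for Burgers' equation, whose exact Riemann solver is elementary. This is a classical baseline sketch, not the paper's second-order GRP scheme.

```python
import numpy as np

def riemann_burgers(ul, ur):
    """Exact Riemann solution of u_t + (u^2/2)_x = 0 sampled at x/t = 0."""
    if ul > ur:                               # shock with speed (ul + ur) / 2
        return ul if (ul + ur) / 2.0 > 0.0 else ur
    if ul > 0.0:                              # rarefaction entirely to the right
        return ul
    if ur < 0.0:                              # rarefaction entirely to the left
        return ur
    return 0.0                                # sonic point inside the fan

def godunov_burgers(u, dt, dx, n_steps):
    """First-order Godunov update using exact interface Riemann solutions."""
    u = u.copy()
    for _ in range(n_steps):
        ustar = np.array([riemann_burgers(u[i], u[i + 1])
                          for i in range(len(u) - 1)])
        f = 0.5 * ustar ** 2                  # Godunov numerical flux
        u[1:-1] -= dt / dx * (f[1:] - f[:-1])
    return u
```

A GRP scheme upgrades this pattern to second order by additionally resolving the time derivative of the interface solution; the wave structure, however, is the same.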

We study weighted basic parallel processes (WBPP), a nonlinear recursive generalisation of weighted finite automata inspired by process algebra and Petri net theory. Our main result is an algorithm of 2-EXPSPACE complexity for the WBPP equivalence problem. While (unweighted) BPP language equivalence is undecidable, we can use this algorithm to decide multiplicity equivalence of BPP and language equivalence of unambiguous BPP, with the same complexity. These are long-standing open problems for the related model of weighted context-free grammars. Our second contribution is a connection between WBPP, power series solutions of systems of polynomial differential equations, and combinatorial enumeration. To this end we consider constructible differentially finite power series (CDF), a class of multivariate differentially algebraic series introduced by Bergeron and Reutenauer in order to provide a combinatorial interpretation to differential equations. CDF series generalise rational, algebraic, and a large class of D-finite (holonomic) series, for which decidability of equivalence was an open problem. We show that CDF series correspond to commutative WBPP series. As a consequence of our result on WBPP and commutativity, we show that equivalence of CDF power series can be decided with 2-EXPTIME complexity. The complexity analysis is based on effective bounds from algebraic geometry, namely on the length of chains of polynomial ideals constructed by repeated application of finitely many, not necessarily commuting derivations of a multivariate polynomial ring. This is obtained by generalising a result of Novikov and Yakovenko in the case of a single derivation, which is noteworthy since generic bounds on ideal chains are non-primitive recursive in general. On the way, we develop the theory of WBPP series and CDF power series, exposing several of their appealing properties.
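A concrete instance of the CDF viewpoint: the power series solution of a polynomial ODE can be generated term by term from the recurrence the equation induces on its coefficients. The example y' = y^2, y(0) = 1 (with closed-form solution 1/(1-x)) is our own toy choice, not one from the paper.

```python
from fractions import Fraction

def cdf_series_coeffs(n_terms):
    """Power-series coefficients of the solution of y' = y^2, y(0) = 1,
    computed term by term from the recurrence the ODE induces:
        (n + 1) * a_{n+1} = sum_{i+j=n} a_i * a_j.
    Exact rational arithmetic keeps the coefficients exact."""
    a = [Fraction(1)]
    for n in range(n_terms - 1):
        conv = sum(a[i] * a[n - i] for i in range(n + 1))
        a.append(conv / (n + 1))
    return a
```

Since the solution is 1/(1-x), every coefficient equals 1, giving a simple correctness check; deciding equivalence of two CDF presentations amounts, conceptually, to comparing such coefficient streams, which is what makes effective ideal-chain bounds relevant.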

This work explores multi-modal inference in a high-dimensional simplified model, analytically quantifying the performance gain of multi-modal inference over that of analyzing modalities in isolation. We present the Bayes-optimal performance and weak recovery thresholds in a model where the objective is to recover the latent structures from two noisy data matrices with correlated spikes. The paper derives the approximate message passing (AMP) algorithm for this model and characterizes its performance in the high-dimensional limit via the associated state evolution. The analysis holds for a broad range of priors and noise channels, which can differ across modalities. The linearization of AMP is compared numerically to the widely used partial least squares (PLS) and canonical correlation analysis (CCA) methods, which are both observed to suffer from a sub-optimal recovery threshold.
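The AMP derivation is not reproduced here, but the basic benefit of combining modalities can be seen in a toy spiked-matrix simulation: averaging two independent noisy observations of the same spike raises the effective signal-to-noise ratio by a factor of sqrt(2), improving spectral recovery. All sizes and the SNR below are illustrative assumptions, and this is plain spectral estimation, not the paper's AMP algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr = 800, 1.4    # illustrative size and signal strength

# One latent spike observed through two independent noisy symmetric matrices,
# a toy stand-in for two (fully correlated) modalities.
x = rng.standard_normal(n)
x /= np.linalg.norm(x)

def spiked_observation():
    G = rng.standard_normal((n, n))
    W = (G + G.T) / np.sqrt(2 * n)      # GOE noise, spectral edge at 2
    return snr * np.outer(x, x) + W

Y1, Y2 = spiked_observation(), spiked_observation()

def top_overlap(M):
    """|<top eigenvector, x>| measures recovery quality."""
    _, V = np.linalg.eigh(M)
    return abs(V[:, -1] @ x)

o_single = top_overlap(Y1)              # one modality alone
o_joint = top_overlap((Y1 + Y2) / 2)    # fusing both: effective SNR * sqrt(2)
```

Averaging halves the noise variance while keeping the spike intact, so the joint estimate sits further above the spectral recovery threshold than either modality alone.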

Homogenisation empowers the efficient macroscale system-level prediction of physical problems with intricate microscale structures. Here we develop an innovative, powerful, rigorous and flexible framework for the asymptotic homogenisation of dynamics at the finite scale separation of real physics, with proven results underpinned by modern dynamical systems theory. The novel systematic approach removes most of the usual assumptions, whether implicit or explicit, of other methodologies. By no longer assuming averages, the methodology constructs so-called multi-continuum or micromorphic homogenisations systematically based upon the microscale physics. The developed framework and approach enable a user to straightforwardly choose and create such homogenisations with clear physical and theoretical support, and of highly controllable accuracy and fidelity.
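A minimal classical example of why homogenisation must not naively average: for steady 1D diffusion through a rapidly alternating medium, the effective coefficient is the harmonic mean of a(x), not the arithmetic mean. The layered coefficient below is an illustrative assumption, unrelated to the multi-continuum framework of the paper.

```python
import numpy as np

# Fine-scale 1D diffusion -(a(x) u')' = 0 on (0,1), u(0)=0, u(1)=1, with a
# rapidly alternating coefficient a in {1, 10} (20 equal layers, assumed).
N = 1000
xm = (np.arange(N) + 0.5) / N                        # cell midpoints
a = np.where((xm * 20).astype(int) % 2 == 0, 1.0, 10.0)

# The flux q is constant in x, so u' = q / a and the boundary conditions give
# q * mean(1/a) = 1: the effective coefficient is the harmonic mean of a.
q = 1.0 / np.mean(1.0 / a)
a_harmonic = 2.0 / (1.0 / 1.0 + 1.0 / 10.0)          # = 20/11, equal fractions
a_arithmetic = np.mean(a)                            # = 5.5, the naive average
```

The naive arithmetic average overestimates the effective coefficient by roughly a factor of three here, which is the kind of error a microscale-informed homogenisation avoids.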
