
We incorporate strong negation into the theory of computable functionals TCF, a common extension of Plotkin's PCF and G\"{o}del's system $\mathbf{T}$, by simultaneously defining the strong negation $A^{\mathbf{N}}$ of a formula $A$ and the strong negation $P^{\mathbf{N}}$ of a predicate $P$ in TCF. As a special case of the latter, we get strong negation of inductive and coinductive predicates of TCF. We prove appropriate versions of Ex falso quodlibet and of double negation elimination for strong negation in TCF. We introduce the so-called tight formulas of TCF, i.e., formulas implied by the weak negation of their strong negation, and the relative tight formulas. We present various case studies and examples, which reveal the naturality of our definition of strong negation in TCF and justify the use of TCF as a formal system for a large part of Bishop-style constructive mathematics.
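For orientation, the standard compositional clauses for strong negation (familiar from Nelson-style constructive logic) take the following shape; the paper's definition for TCF extends clauses of this kind to inductive and coinductive predicates, so the display below is an illustrative sketch, not the paper's exact definition:

$$(A \wedge B)^{\mathbf{N}} := A^{\mathbf{N}} \vee B^{\mathbf{N}}, \quad (A \vee B)^{\mathbf{N}} := A^{\mathbf{N}} \wedge B^{\mathbf{N}}, \quad (A \to B)^{\mathbf{N}} := A \wedge B^{\mathbf{N}}, \quad (\forall_x A)^{\mathbf{N}} := \exists_x A^{\mathbf{N}}, \quad (\exists_x A)^{\mathbf{N}} := \forall_x A^{\mathbf{N}}.$$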

Related content

We consider the problem of sketching a set valuation function, which is defined as the expectation of a valuation function of independent random item values. We show that for monotone subadditive or submodular valuation functions satisfying a weak homogeneity condition, or certain other conditions, there exist discretized distributions of item values with $O(k\log(k))$ support sizes that yield a sketch valuation function which is a constant-factor approximation for any value query for a set of items of cardinality less than or equal to $k$. The discretized distributions can be computed efficiently, with an algorithm applied to each item's value distribution separately. Our results hold under conditions that accommodate a wide range of valuation functions arising in applications, such as the value of a team corresponding to the best performance of a team member, constant elasticity of substitution production functions exhibiting diminishing returns used in economics and consumer theory, and others. Sketch valuation functions are particularly valuable for finding approximate solutions to optimization problems such as best set selection and welfare maximization. They enable computationally efficient evaluation of approximate value oracle queries and provide an approximation guarantee for the underlying optimization problem.
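To illustrate the per-item discretization idea, here is a minimal Python sketch that compresses an item's empirical value distribution onto a support of size $O(k\log k)$ by quantile binning. The function name and the binning rule are assumptions for illustration only; the paper's actual construction may differ.

```python
import numpy as np

def discretize_item_values(samples, k):
    """Compress an item's empirical value distribution onto a small support.

    A minimal quantile-binning sketch with O(k log k) support size; the
    paper's actual discretization scheme may differ.
    """
    m = max(1, int(np.ceil(k * np.log(k + 1))))      # target support size
    edges = np.quantile(samples, np.linspace(0.0, 1.0, m + 1))
    support, weights = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (samples >= lo) & (samples <= hi)
        if mask.any():
            support.append(samples[mask].mean())      # bin representative
            weights.append(mask.mean())               # bin probability
    w = np.array(weights)
    return np.array(support), w / w.sum()

# Example: sketch a heavy-tailed item value distribution for queries up to k = 8.
rng = np.random.default_rng(0)
support, probs = discretize_item_values(rng.pareto(2.5, size=100_000), k=8)
print(len(support), support[:3], probs[:3])
```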

We analyze a general Implicit-Explicit (IMEX) time discretization for the compressible Euler equations of gas dynamics, showing that it is asymptotic-preserving (AP) in the low Mach number limit. The analysis is carried out for a general equation of state (EOS). We consider both a single asymptotic length scale and two length scales. We then show that, when coupling these time discretizations with a Discontinuous Galerkin (DG) space discretization with appropriate fluxes, an all-Mach-number numerical method is obtained. A number of relevant benchmarks for ideal gases and their non-trivial extension to non-ideal EOS validate the performed analysis.
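The essence of an IMEX splitting is to treat the non-stiff part of the dynamics explicitly and the stiff part (e.g., acoustic terms in the low Mach regime) implicitly. The following Python sketch shows a first-order IMEX Euler step on a stiff toy ODE; the splitting and parameter names are illustrative assumptions, far simpler than the paper's schemes for the Euler equations.

```python
import numpy as np

def imex_euler_step(u, dt, f_explicit, solve_implicit):
    """One first-order IMEX step: u_new = u + dt*(f_E(u) + f_I(u_new)).

    f_explicit     -- non-stiff part, treated explicitly (e.g. convection)
    solve_implicit -- solves u_new - dt*f_I(u_new) = rhs for u_new
    """
    rhs = u + dt * f_explicit(u)
    return solve_implicit(rhs, dt)

# Toy split u' = i*omega*u - u/eps, with the stiff term -u/eps implicit.
eps, omega = 1e-3, 1.0
f_E = lambda u: 1j * omega * u                    # non-stiff, explicit
solve_I = lambda rhs, dt: rhs / (1.0 + dt / eps)  # (1 + dt/eps) u_new = rhs

u, dt = 1.0 + 0j, 0.1                             # dt >> eps, yet stable
for _ in range(10):
    u = imex_euler_step(u, dt, f_E, solve_I)
print(abs(u))
```

The point of the example is that the time step is chosen relative to the slow (explicit) dynamics and remains stable even though `dt` is one hundred times larger than the stiff time scale `eps`.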

This paper addresses structured normwise, mixed, and componentwise condition numbers (CNs) for a linear function of the solution to the generalized saddle point problem (GSPP). We present a general framework enabling us to measure the structured CNs of the individual solution components and derive their explicit formulae when the input matrices have symmetric, Toeplitz, or some general linear structures. In addition, compact formulae for the unstructured CNs are obtained, which recover previous results on CNs for GSPPs for specific choices of the linear function. Furthermore, an application of the derived structured CNs is provided to determine the structured CNs for the weighted Toeplitz regularized least-squares problems and Tikhonov regularization problems, which retrieves some previous studies in the literature.
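To make the notion of a structured CN concrete, the Python sketch below estimates, by Monte Carlo sampling, the worst-case relative change in the solution of a linear system under small componentwise perturbations that preserve a Toeplitz structure. This is only an empirical lower estimate for illustration; the paper derives explicit closed-form expressions instead, and for the GSPP rather than a plain linear system.

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def mixed_cn_toeplitz(c, r, b, eps=1e-7, trials=200, seed=0):
    """Monte Carlo lower estimate of a structured mixed condition number:
    worst observed relative infinity-norm change in x = A^{-1} b over small
    componentwise perturbations that keep A = toeplitz(c, r) Toeplitz."""
    rng = np.random.default_rng(seed)
    A = toeplitz(c, r)
    x = solve(A, b)
    worst = 0.0
    for _ in range(trials):
        dc = eps * c * rng.choice([-1.0, 1.0], size=c.shape)  # structured,
        dr = eps * r * rng.choice([-1.0, 1.0], size=r.shape)  # componentwise
        dx = solve(toeplitz(c + dc, r + dr), b) - x
        worst = max(worst, np.abs(dx).max() / np.abs(x).max() / eps)
    return worst

c = np.array([4.0, 1.0, 0.5])   # first column of A
r = np.array([4.0, 0.3, 0.2])   # first row of A (same diagonal entry)
print(mixed_cn_toeplitz(c, r, np.ones(3)))
```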

Mesh-based Graph Neural Networks (GNNs) have recently shown the capability to simulate complex multiphysics problems with accelerated performance. However, mesh-based GNNs require a large number of message-passing (MP) steps and suffer from over-smoothing for problems involving very fine meshes. In this work, we develop a multiscale mesh-based GNN framework that mimics a conventional iterative multigrid solver, coupled with adaptive mesh refinement (AMR), to mitigate these challenges with conventional mesh-based GNNs. We use the framework to accelerate phase field (PF) fracture problems involving coupled partial differential equations with a near-singular operator due to the near-zero modulus inside the crack. We define the initial graph representation using all mesh resolution levels. We perform a series of downsampling steps using Transformer MP GNNs to reach the coarsest graph, followed by upsampling steps to return to the original graph. We use skip connections from the embeddings generated during coarsening to prevent over-smoothing. We use Transfer Learning (TL) to significantly reduce the size of the training datasets needed to simulate different crack configurations and loading conditions. The trained framework showed accelerated simulation times while maintaining high accuracy for all cases compared to the physics-based PF fracture model. Finally, this work provides a new approach to accelerating a variety of mesh-based engineering multiphysics problems.
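A schematic numpy sketch of the multigrid-style V-cycle described above: coarsen the graph level by level, process the coarsest graph, then refine back, adding the stored fine-level embeddings as skip connections. The mean-aggregation `message_pass` stands in for the paper's Transformer MP layers, and the fixed pooling matrices stand in for AMR-derived level maps; both are simplifying assumptions.

```python
import numpy as np

def message_pass(h, A):
    """One mean-aggregation message-passing step (a stand-in for the
    paper's Transformer MP layers)."""
    deg = A.sum(axis=1, keepdims=True) + 1e-9
    return np.tanh(h + (A @ h) / deg)

def v_cycle(h, adjs, pools):
    """U-Net-style V-cycle over mesh levels: downsample to the coarsest
    graph, then upsample, adding skip connections to fight over-smoothing.

    adjs  -- adjacency matrix per level, finest first
    pools -- pools[l] maps level-l nodes to level-(l+1) nodes (rows sum to 1)
    """
    skips = []
    for A, P in zip(adjs[:-1], pools):     # coarsening sweep
        h = message_pass(h, A)
        skips.append(h)                    # stash embedding for the skip
        h = P @ h                          # downsample to next level
    h = message_pass(h, adjs[-1])          # coarsest level
    for A, P, s in zip(adjs[-2::-1], pools[::-1], skips[::-1]):
        h = P.T @ h + s                    # upsample + skip connection
        h = message_pass(h, A)
    return h

# Tiny two-level example: 4-node path graph pooled onto 2 coarse nodes.
A0 = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
A1 = np.array([[0,1],[1,0]], float)
P = np.array([[.5,.5,0,0],[0,0,.5,.5]])   # 2x4 pooling (rows sum to 1)
print(v_cycle(np.random.randn(4, 3), [A0, A1], [P]).shape)
```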

This paper considers the problem of manifold functional multiple regression with functional response, time-varying scalar regressors, and a functional error term displaying Long Range Dependence (LRD) in time. Specifically, the error term is given by a manifold multifractionally integrated functional time series (see, e.g., Ovalle-Mu\~noz \& Ruiz-Medina, 2024). The manifold is defined by a connected and compact two-point homogeneous space. The functional regression parameters have support in the manifold. The Generalized Least-Squares (GLS) estimator of the vector functional regression parameter is computed, and its asymptotic properties are analyzed under totally specified and misspecified model scenarios. A multiscale residual correlation analysis in the simulation study undertaken illustrates the empirical distributional properties of the errors at different spherical resolution levels.
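For reference, in the classical finite-dimensional linear model $\mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$ with error covariance $\boldsymbol{\Sigma}$, the GLS estimator is

$$\widehat{\boldsymbol{\beta}}_{\mathrm{GLS}} = \left(\mathbf{X}^{\top}\boldsymbol{\Sigma}^{-1}\mathbf{X}\right)^{-1}\mathbf{X}^{\top}\boldsymbol{\Sigma}^{-1}\mathbf{Y}.$$

The paper's estimator can be read as a functional analogue of this formula, with $\boldsymbol{\Sigma}$ replaced by the covariance operator of the LRD functional error term; the precise operator inversion in the manifold setting is the paper's contribution and is not captured by this finite-dimensional display.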

A recently emerged area of application for methods of the algebra of logic and of k-valued logic is the recognition of various objects and phenomena, medical or technical diagnostics, the design of modern machines, the checking of test problems, etc., all of which can be reduced to constructing an optimal extension of a logical function to the entire feature space. For example, logical recognition systems build their recognition algorithms using logical methods based on discrete analysis and the propositional calculus built upon it. In the general case, a logical recognition method presupposes logical connections, expressed by the optimal continuation of a k-valued function over the entire feature space, in which the variables are the logical features of the objects or phenomena being recognized. The goal of this work is to develop a logical method for object recognition based on a reference table with logical features and non-intersecting classes of objects, which are specified as vectors from a given feature space. The method treats the reference table as a logical function that is not defined everywhere and constructs an optimal continuation of this function to the entire feature space, which determines the extension of the classes to the entire space.
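As a toy illustration of the continuation idea, the Python sketch below treats a reference table as a partially defined k-valued function and extends it to the whole feature space by assigning each point the class of its Hamming-nearest reference row. This is one simple notion of continuation chosen for illustration; the paper's optimality criterion is assumed to be different.

```python
import numpy as np

def extend_partial_function(table_X, table_y):
    """Treat the reference table as a partial k-valued function and return
    a total function on the feature space.

    The continuation used here assigns each point the class of the
    Hamming-nearest reference row -- an illustrative choice, not
    necessarily the paper's optimality criterion."""
    table_X = np.asarray(table_X)

    def recognize(x):
        dists = (table_X != np.asarray(x)).sum(axis=1)   # Hamming distance
        return table_y[int(dists.argmin())]
    return recognize

# Reference table: 3 logical features (values in {0,1,2}), 2 disjoint classes.
X = [[0, 0, 1], [2, 1, 0], [1, 2, 2]]
y = ["class_A", "class_B", "class_B"]
recognize = extend_partial_function(X, y)
print(recognize([0, 1, 1]))   # nearest row is [0, 0, 1] -> class_A
```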

Karppa & Kaski (2019) proposed a novel ``broken'' or ``opportunistic'' matrix multiplication algorithm, based on a variant of Strassen's algorithm, and used it to develop new algorithms for Boolean matrix multiplication, among other tasks. Their algorithm can compute Boolean matrix multiplication in $O(n^{2.778})$ time. While asymptotically faster matrix multiplication algorithms exist, most such algorithms are infeasible for practical problems. We describe an alternative way to use the broken multiplication algorithm to approximately compute matrix multiplication, for either real-valued or Boolean matrices. In brief, instead of running multiple iterations of the broken algorithm on the original input matrix, we form a new larger matrix by sampling and run a single iteration of the broken algorithm on it. Asymptotically, our algorithm has runtime $O(n^{2.763})$, a slight improvement over the Karppa-Kaski algorithm. Since the goal is to obtain new practical matrix multiplication algorithms, we also estimate the concrete runtime of our algorithm for some large-scale sample problems. It appears that for these parameters, further optimizations are still needed to make our algorithm competitive.
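For context, here is one level of the classical Strassen recursion that the ``broken'' scheme builds on: seven block products instead of the naive eight. The Karppa-Kaski variant modifies this scheme (opportunistically dropping or perturbing parts of it); the sketch below shows only the classical base, not their algorithm.

```python
import numpy as np

def strassen_one_level(A, B):
    """One level of classical Strassen multiplication (7 block products
    instead of 8) for even-sized square matrices."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    C = np.empty_like(A)
    C[:n, :n] = M1 + M4 - M5 + M7
    C[:n, n:] = M3 + M5
    C[n:, :n] = M2 + M4
    C[n:, n:] = M1 - M2 + M3 + M6
    return C

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
print(np.allclose(strassen_one_level(A, B), A @ B))   # True
```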

We derive entropy bounds for the absolute convex hull of vectors $X= (x_1 , \ldots , x_p)\in \mathbb{R}^{n \times p} $ in $\mathbb{R}^n$ and apply this to the case where $X$ is the $d$-fold tensor matrix $$X = \underbrace{\Psi \otimes \cdots \otimes \Psi}_{d \ {\rm times} }\in \mathbb{R}^{m^d \times r^d },$$ with a given $\Psi = ( \psi_1 , \ldots , \psi_r ) \in \mathbb{R}^{m \times r} $, normalized so that $ \| \psi_j \|_2 \le 1$ for all $j \in \{1 , \ldots , r\}$. For $\epsilon >0$ we let ${\cal V} \subset \mathbb{R}^m$ be the linear space with smallest dimension $M ( \epsilon , \Psi)$ such that $ \max_{1 \le j \le r } \min_{v \in {\cal V} } \| \psi_j - v \|_2 \le \epsilon$. We call $M( \epsilon , \Psi)$ the $\epsilon$-approximation of $\Psi$ and assume it is -- up to log terms -- polynomial in $\epsilon$. We show that the entropy of the absolute convex hull of the $d$-fold tensor matrix $X$ is, up to log terms, of the same order as the entropy for the case $d=1$. The results are generalized to absolute convex hulls of tensors of functions in $L_2 (\mu)$, where $\mu$ is Lebesgue measure on $[0,1]$. As an application we consider the space of functions on $[0,1]^d$ with bounded $q$-th order Vitali total variation for a given $q \in \mathbb{N}$. As a by-product, we construct an orthonormal, piecewise polynomial, wavelet dictionary for functions that are well approximated by piecewise polynomials.
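The $d$-fold tensor matrix above is a repeated Kronecker product, which is easy to form explicitly for small sizes. A minimal Python illustration of its shape and of the column normalization (variable names are ours):

```python
import numpy as np
from functools import reduce

def d_fold_tensor(Psi, d):
    """Form X = Psi (x) ... (x) Psi with d Kronecker factors, giving an
    m^d x r^d matrix as in the abstract."""
    return reduce(np.kron, [Psi] * d)

m, r, d = 3, 2, 3
Psi = np.random.randn(m, r)
# Rescale columns so that ||psi_j||_2 <= 1, as the abstract assumes.
Psi /= np.maximum(np.linalg.norm(Psi, axis=0), 1.0)
X = d_fold_tensor(Psi, d)
print(X.shape)   # (27, 8) = (m**d, r**d)
```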

We investigate pointwise estimation of the function-valued velocity field of a second-order linear SPDE. Based on multiple spatially localised measurements, we construct a weighted augmented MLE and study its convergence properties as the spatial resolution of the observations tends to zero and the number of measurements increases. By imposing H\"older smoothness conditions, we recover the pointwise convergence rate known to be minimax-optimal in the linear regression framework. The optimality of the rate in the current setting is verified by adapting the lower bound ansatz based on the RKHS of local measurements to the nonparametric situation.

The angular halfspace depth (ahD) is a natural modification of the celebrated halfspace (or Tukey) depth to the setting of directional data. It allows us to define elements of nonparametric inference, such as the median, the inter-quantile regions, or rank statistics, for datasets supported on the unit sphere. Despite being introduced in 1987, ahD has never received ample recognition in the literature, mainly due to the lack of efficient algorithms for its computation. With recent progress on the computational front, however, ahD exhibits the potential for developing viable nonparametric statistics techniques for directional datasets. In this paper, we thoroughly treat the theoretical properties of ahD. We show that, similarly to the classical halfspace depth for multivariate data, ahD satisfies many desirable properties of a statistical depth function. Further, we derive uniform continuity/consistency results for the associated set of directional medians and for the central regions of ahD, the latter representing a depth-based analogue of the quantiles for directional data.
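A quick way to build intuition for ahD is a Monte Carlo approximation: take the smallest empirical mass among random closed hemispheres containing the query point. The Python sketch below does exactly that; it is a random-direction approximation for illustration, not one of the exact algorithms alluded to above.

```python
import numpy as np

def angular_halfspace_depth(x, data, n_dirs=20_000, seed=0):
    """Monte Carlo approximation of the angular halfspace depth ahD(x):
    the smallest empirical mass of a closed hemisphere containing x.
    Illustrative only; exact algorithms exist but are not shown here."""
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((n_dirs, x.size))
    V /= np.linalg.norm(V, axis=1, keepdims=True)   # random unit normals
    keep = V @ x >= 0                               # hemispheres containing x
    masses = (data @ V[keep].T >= 0).mean(axis=0)   # empirical hemisphere mass
    return masses.min()

# Toy directional sample clustered around the north pole of the 2-sphere.
rng = np.random.default_rng(1)
data = rng.standard_normal((500, 3)) + np.array([0, 0, 3.0])
data /= np.linalg.norm(data, axis=1, keepdims=True)
print(angular_halfspace_depth(np.array([0, 0, 1.0]), data))   # deep point
print(angular_halfspace_depth(np.array([0, 0, -1.0]), data))  # shallow point
```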
