
A fundamental issue in the $\lambda$-calculus is to find appropriate notions of meaningfulness. It is well known that in the call-by-name $\lambda$-calculus (CbN) the meaningful terms can be identified with the solvable ones, and that this notion is not appropriate in the call-by-value $\lambda$-calculus (CbV). This paper validates the challenging claim that yet another notion, previously introduced in the literature as potential valuability, appropriately represents meaningfulness in CbV. As in CbN, the claim is corroborated by proving two essential properties. The first one is genericity, stating that meaningless subterms have no bearing on evaluating normalizing terms. To prove this (which was an open problem), we use a novel approach based on stratified reduction, which applies uniformly to CbN and CbV and yields quantitative information. The second property concerns the consistency of the smallest congruence relation obtained by equating all meaningless terms. While the consistency result is not new, we provide the first direct operational proof of it. We also show that this congruence has a unique consistent and maximal extension, which coincides with a well-known notion of observational equivalence. Our results thus supply the formal concepts and tools that validate the informal notion of meaningfulness underlying CbN and CbV.
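
A standard worked example (folklore, not specific to this paper) illustrates why the two notions come apart, writing $\Delta$ and $\Omega$ as usual:

```latex
\[
\Delta \;=\; \lambda x.\,x\,x, \qquad
\Omega \;=\; \Delta\,\Delta \;\to_\beta\; \Omega \quad\text{(diverges forever)}.
\]
% \Omega is meaningless in both calculi. The term \lambda x.\Omega separates them:
% it has no head normal form, hence it is unsolvable and CbN-meaningless;
% yet it is a value, hence potentially valuable and CbV-meaningful.
```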

Related Content

Let $X$ and $Z$ be random vectors, and $Y=g(X,Z)$. In this paper, on the one hand, for the case in which $X$ and $Z$ are continuous, using ideas from the total variation and the flux of $g$, we develop a point of view in causal inference capable of dealing with a broad class of causal problems. Specifically, we focus on a functional, called the Probabilistic Easy Variational Causal Effect (PEACE), which measures the direct causal effect of $X$ on $Y$ with respect to continuously and interventionally changing the values of $X$ while keeping the value of $Z$ constant. PEACE is a function of $d\ge 0$, a degree that modulates the influence of the probability density values $f(x|z)$. On the other hand, we generalize the above idea to the discrete case and show its compatibility with the continuous case. Further, we investigate some properties of PEACE using measure-theoretic concepts, and we provide identifiability criteria and several examples showing the broad applicability of PEACE. We note that PEACE can handle causal problems in which micro-level or macro-level changes in the values of the input variables matter. Finally, PEACE is stable under small changes in $\partial g_{in}/\partial x$ and in the joint distribution of $X$ and $Z$, where $g_{in}$ is obtained from $g$ by removing all functional relationships defining $X$ and $Z$.
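
The exact definition of PEACE is given in the paper; purely to fix intuition, here is a hypothetical numerical toy in the spirit of the abstract, integrating a total-variation-style quantity $\int |\partial g/\partial x|\, f(x|z)^d\, dx$ over $x$ with $z$ held fixed. The weighting $f(x|z)^d$ and all names below are our assumptions, not the paper's formulas.

```python
# Hypothetical toy (NOT the paper's definition of PEACE): a total-variation-style
# direct effect of X on Y = g(X, Z), holding Z fixed, with density weight f(x|z)**d.
import numpy as np

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def tv_direct_effect(g, f_x_given_z, z, d=1.0, x_grid=None):
    """Approximate the integral of |dg/dx| * f(x|z)**d over a grid of x values."""
    if x_grid is None:
        x_grid = np.linspace(-5.0, 5.0, 2001)
    gx = np.array([g(x, z) for x in x_grid])
    dg_dx = np.gradient(gx, x_grid)            # numerical partial derivative in x
    weights = f_x_given_z(x_grid, z) ** d      # degree d tunes the density's influence
    return trapezoid(np.abs(dg_dx) * weights, x_grid)

# Toy model: Y = X**2 + Z with X | Z ~ N(0, 1), so |dg/dx| = |2x|.
g = lambda x, z: x ** 2 + z
f = lambda x, z: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
print(tv_direct_effect(g, f, z=0.0, d=1.0))    # ~ E|2X| = 2*sqrt(2/pi) ~ 1.596
```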

The extended persistence diagram is an invariant of piecewise linear functions, which is known to be stable under perturbations of functions with respect to the bottleneck distance as introduced by Cohen-Steiner, Edelsbrunner, and Harer. We address the question of universality, which asks for the largest possible stable distance on extended persistence diagrams, showing that a more discriminative variant of the bottleneck distance is universal. Our result applies more generally to settings where persistence diagrams are considered only up to a certain degree. We achieve our results by establishing a functorial construction and several characteristic properties of relative interlevel set homology, which mirror the classical Eilenberg--Steenrod axioms. Finally, we contrast the bottleneck distance with the interleaving distance of sheaves on the real line by showing that the latter is not intrinsic, let alone universal. This particular result has the further implication that the interleaving distance of Reeb graphs is not intrinsic either.
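
For intuition, the classical bottleneck distance between ordinary finite persistence diagrams can be computed by brute force on tiny examples. Note this sketch is the classical distance of Cohen-Steiner et al., not the more discriminative variant for extended diagrams that the paper proves universal.

```python
# Brute-force bottleneck distance between two small persistence diagrams;
# illustrative only (factorial time in the diagram size).
from itertools import permutations

def linf(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def to_diag(p):  # L-infinity distance from p = (birth, death) to the diagonal
    return (p[1] - p[0]) / 2.0

def bottleneck(D1, D2):
    # Pad each side with "diagonal" slots so any point may be matched to the
    # diagonal; diagonal-to-diagonal pairs cost nothing.
    A = list(D1) + ["diag"] * len(D2)
    B = list(D2) + ["diag"] * len(D1)
    best = float("inf")
    for perm in permutations(range(len(B))):
        cost = 0.0
        for i, j in enumerate(perm):
            p, q = A[i], B[j]
            if p == "diag" and q == "diag":
                c = 0.0
            elif p == "diag":
                c = to_diag(q)
            elif q == "diag":
                c = to_diag(p)
            else:
                c = linf(p, q)
            cost = max(cost, c)
        best = min(best, cost)
    return best

# (2.0, 3.0) is best matched to the diagonal, giving distance 0.5.
print(bottleneck([(1.0, 4.0), (2.0, 3.0)], [(1.2, 4.1)]))
```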

We develop a numerical method for the computation of a minimal convex and compact set, $\mathcal{B}\subset\mathbb{R}^N$, in the sense of mean width. This minimisation is constrained by the requirement that $\max_{b\in\mathcal{B}}\langle b , u\rangle\geq C(u)$ for all unit vectors $u\in S^{N-1}$ given some Lipschitz function $C$. This problem arises in the construction of environmental contours under the assumption of convex failure sets. Environmental contours offer descriptions of extreme environmental conditions commonly applied for reliability analysis in the early design phase of marine structures. Usually, they are applied in order to reduce the number of computationally expensive response analyses needed for reliability estimation. We solve this problem by reformulating it as a linear programming problem. Rigorous convergence analysis is performed, both in terms of convergence of mean widths and in the sense of the Hausdorff metric. Additionally, numerical examples are provided to illustrate the presented methods.
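
The paper's LP reformulation is not reproduced here; the following 2D sketch rests on our own discretization assumptions: pick $m$ unit directions $u_i$, one witness point $b_i$ per direction with $\langle b_i, u_i\rangle \geq C(u_i)$, and auxiliary support values $h_i$ dominating $\langle b_j, u_i\rangle$ for all $j$. Minimizing $\sum_i h_i$ then approximates the mean width of $\mathcal{B} = \mathrm{conv}\{b_1,\dots,b_m\}$.

```python
# Hedged 2D sketch of an LP discretization (the paper's exact formulation may differ).
import numpy as np
from scipy.optimize import linprog

m = 60
angles = 2 * np.pi * np.arange(m) / m
U = np.c_[np.cos(angles), np.sin(angles)]      # unit directions u_i
C = 1.0 + 0.3 * np.cos(3 * angles)             # example Lipschitz constraint C(u_i)

n_var = 3 * m                                  # variables: b (2m coords), then h (m values)
cost = np.r_[np.zeros(2 * m), np.ones(m)]      # minimize sum of h_i

A_ub, b_ub = [], []
for i in range(m):
    # -<b_i, u_i> <= -C(u_i): direction i's requirement, witnessed by point b_i.
    row = np.zeros(n_var)
    row[2 * i:2 * i + 2] = -U[i]
    A_ub.append(row); b_ub.append(-C[i])
    for j in range(m):
        # <b_j, u_i> - h_i <= 0: h_i dominates the support of conv{b_j} at u_i.
        row = np.zeros(n_var)
        row[2 * j:2 * j + 2] = U[i]
        row[2 * m + i] = -1.0
        A_ub.append(row); b_ub.append(0.0)

res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_var, method="highs")
mean_width = 2.0 / m * res.fun                 # 2D: mean width = (1/pi) * integral of h
print(res.status, mean_width)
```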

We investigate intuitionistic modal logics with locally interpreted $\square$ and $\lozenge$. The basic logic LIK is stronger than the constructive modal logic WK and incomparable with the intuitionistic modal logic IK. We propose an axiomatization of LIK and of some of its extensions, and we introduce bi-nested calculi for LIK and these extensions, thus providing both a decision procedure and a procedure for extracting finite countermodels.
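
As a sketch of what "locally interpreted" modalities can mean on a birelational model (worlds with an intuitionistic order $\leq$ and a modal relation $R$): $\square$ and $\lozenge$ quantify over $R$-successors of the current world only, rather than composing with $\leq$ as in IK. This reading and the toy model below are our assumptions; the frame conditions making such a semantics sound for LIK are the subject of the paper.

```python
# Toy forcing checker on a finite birelational model with *local* modal clauses.
def forces(w, A, le, R, V):
    op = A[0]
    if op == "var": return A[1] in V[w]
    if op == "and": return forces(w, A[1], le, R, V) and forces(w, A[2], le, R, V)
    if op == "or":  return forces(w, A[1], le, R, V) or forces(w, A[2], le, R, V)
    if op == "imp": # intuitionistic implication quantifies over <=-successors
        return all(not forces(v, A[1], le, R, V) or forces(v, A[2], le, R, V)
                   for v in le[w])
    if op == "box": # local: only R-successors of w itself
        return all(forces(v, A[1], le, R, V) for v in R[w])
    if op == "dia":
        return any(forces(v, A[1], le, R, V) for v in R[w])

# Two-world model: w0 <= w1, R relates w0 to w1; p holds at w1 only.
le = {0: [0, 1], 1: [1]}
R = {0: [1], 1: []}
V = {0: set(), 1: {"p"}}
print(forces(0, ("box", ("var", "p")), le, R, V))  # True: all R-successors force p
print(forces(1, ("dia", ("var", "p")), le, R, V))  # False: w1 has no R-successor
```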

Local search is a powerful heuristic in optimization and computer science, the complexity of which has been studied in the white-box and black-box models. In the black-box model, we are given a graph $G = (V,E)$ and oracle access to a function $f : V \to \mathbb{R}$. The local search problem is to find a vertex $v$ that is a local minimum, i.e., with $f(v) \leq f(u)$ for all $(u,v) \in E$, using as few queries to the oracle as possible. We show that if a graph $G$ admits a lazy, irreducible, and reversible Markov chain with stationary distribution $\pi$, then the randomized query complexity of local search on $G$ is $\Omega\left( \frac{\sqrt{n}}{t_{mix} \cdot \exp(3\sigma)}\right)$, where $t_{mix}$ is the mixing time of the chain and $\sigma = \max_{u,v \in V(G)} \frac{\pi(v)}{\pi(u)}.$ This theorem formally establishes a connection between the query complexity of local search and the mixing time of the fastest mixing Markov chain for the given graph. We also derive several corollaries that lower-bound the complexity as a function of the spectral gap, one of which slightly improves a result from prior work.
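
To make the black-box model concrete, here is a minimal query-counting sketch (our own illustration, not the paper's construction): greedy descent finds a local minimum on any finite graph, and the theorem above lower-bounds how few queries any randomized strategy can get away with.

```python
# Black-box local search: count oracle queries while descending to a local minimum.
class Oracle:
    """Wraps f and counts how many times it is queried."""
    def __init__(self, f):
        self.f, self.queries = f, 0
    def __call__(self, v):
        self.queries += 1
        return self.f(v)

def greedy_local_search(neighbors, oracle, start):
    v, fv = start, oracle(start)
    while True:
        fu, u = min((oracle(u), u) for u in neighbors(v))
        if fu >= fv:          # no strictly better neighbor: v is a local minimum
            return v, oracle.queries
        v, fv = u, fu

# Example: n x n grid graph, f = Manhattan distance to a hidden target vertex.
n, target = 50, (37, 4)
def neighbors(v):
    x, y = v
    cand = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(a, b) for a, b in cand if 0 <= a < n and 0 <= b < n]

orc = Oracle(lambda v: abs(v[0] - target[0]) + abs(v[1] - target[1]))
print(greedy_local_search(neighbors, orc, (0, 0)))  # finds target, reports query count
```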

We present a generalization of first-order unification to a term algebra where variable indexing is part of the object language. We exploit variable indexing by associating some sequences of variables ($X_0,\ X_1,\ X_2,\dots$) with a mapping $\sigma$ whose domain is the variable sequence and whose range consists of terms that may contain variables from the sequence. From a given term $t$, an infinite sequence of terms may be produced by iterative application of $\sigma$. Given a unification problem $U$ and a mapping $\sigma$, the \textit{schematic unification problem} asks whether all unification problems $U$, $\sigma(U)$, $\sigma(\sigma(U))$, $\dots$ are unifiable. We provide a terminating and sound algorithm. Our algorithm is \textit{complete} if we further restrict ourselves to so-called $\infty$-stable problems. We conjecture that this additional requirement is unnecessary for completeness. Schematic unification is related to methods of inductive proof transformation by resolution and to inductive reasoning.
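
The paper's terminating and sound algorithm decides the whole infinite family at once; by contrast, the naive semi-check below (our own sketch: standard first-order unification plus iteration of $\sigma$) only tests finitely many instances $U, \sigma(U), \dots, \sigma^{k-1}(U)$.

```python
# Terms: variables are ("var", name, i); applications are (fun, arg1, ..., argk).
def is_var(t): return isinstance(t, tuple) and t[0] == "var"

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if t == v: return True
    return not is_var(t) and any(occurs(v, a, s) for a in t[1:])

def unify(t1, t2, s):           # Robinson-style unification with occurs check
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2: return s
    if is_var(t1):
        return None if occurs(t1, t2, s) else {**s, t1: t2}
    if is_var(t2): return unify(t2, t1, s)
    if t1[0] != t2[0] or len(t1) != len(t2): return None
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None: return None
    return s

def apply_sigma(sigma, t):      # apply the index mapping sigma once, everywhere
    if is_var(t):
        r = sigma(t)
        return t if r is None else r
    return (t[0],) + tuple(apply_sigma(sigma, a) for a in t[1:])

def check_prefix(U, sigma, k):
    """Check unifiability of U, sigma(U), ..., sigma^(k-1)(U) only."""
    for _ in range(k):
        if any(unify(l, r, {}) is None for l, r in U): return False
        U = [(apply_sigma(sigma, l), apply_sigma(sigma, r)) for l, r in U]
    return True

# sigma maps X_i to f(X_{i+1}); problem: X_0 =? f(X_1).
X = lambda i: ("var", "X", i)
sigma = lambda v: ("f", X(v[2] + 1)) if v[1] == "X" else None
print(check_prefix([(X(0), ("f", X(1)))], sigma, 5))   # True for each finite prefix
```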

In this paper, we focus on the BDS test, a nonparametric test of independence whose null hypothesis $H_{0}$ is that the random sequence $\{u_{t}\}$ is i.i.d. (independent and identically distributed). The BDS test is widely used in economics and finance, but it has a weakness that cannot be ignored: it over-rejects $H_{0}$ even when the length $T$ of $\{u_{t}\}$ is as large as 100 to 2000. To mitigate this over-rejection, and since the correlation integral is the foundation of the BDS test, we not only accurately describe the expectation of the correlation integral under $H_{0}$, but also calculate all terms of the asymptotic variance of the correlation integral of order $O(T^{-1})$ and $O(T^{-2})$, which is essential for improving the finite-sample performance of the BDS test. On this basis, we propose a revised BDS (RBDS) test and prove its asymptotic normality under $H_{0}$. The RBDS test inherits all the advantages of the BDS test and effectively corrects its over-rejection problem, as our simulation results confirm. Moreover, the simulations show that, like the BDS test, the RBDS test is affected by the parameter estimates of ARCH-type models, resulting in size distortion; this can be alleviated by a logarithmic-transformation preprocessing of the estimated residuals of the model. Finally, on several real datasets that are well fitted by ARCH-type models, we compare the performance of the BDS and RBDS tests for evaluating the goodness of fit of the model, and the results show that, under the same conditions, the RBDS test performs more encouragingly.
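
To fix notation, here is a minimal sketch of the correlation integral that underlies the BDS statistic (standard material, not the paper's refined moment calculations): under $H_0$, $C_m(\varepsilon) \approx C_1(\varepsilon)^m$, and the BDS statistic normalizes the gap between the two.

```python
# Correlation integral C_m(eps): the fraction of pairs of m-histories of the
# series that lie within sup-norm distance eps of each other.
import numpy as np

def correlation_integral(u, m, eps):
    T = len(u) - m + 1
    H = np.column_stack([u[i:i + T] for i in range(m)])   # m-histories
    # Pairwise sup-norm distances between histories (O(T^2) memory; demo only).
    D = np.abs(H[:, None, :] - H[None, :, :]).max(axis=2)
    iu = np.triu_indices(T, k=1)
    return (D[iu] < eps).mean()

rng = np.random.default_rng(0)
u = rng.standard_normal(1200)
eps = 0.5 * u.std()
c1 = correlation_integral(u, 1, eps)
c2 = correlation_integral(u, 2, eps)
print(c2, c1 ** 2)   # approximately equal for i.i.d. data, as H0 predicts
```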

In the Pattern Masking for Dictionary Matching (PMDM) problem, we are given a dictionary $\mathcal{D}$ of $d$ strings, each of length $\ell$, a query string $q$ of length $\ell$, and a positive integer $z$, and we are asked to compute a smallest set $K\subseteq\{1,\ldots,\ell\}$, so that if $q[i]$, for all $i\in K$, is replaced by a wildcard, then $q$ matches at least $z$ strings from $\mathcal{D}$. The PMDM problem lies at the heart of two important applications featured in large-scale real-world systems: record linkage of databases that contain sensitive information, and query term dropping. In both applications, solving PMDM allows for providing data utility guarantees as opposed to existing approaches. We first show, through a reduction from the well-known $k$-Clique problem, that a decision version of the PMDM problem is NP-complete, even for strings over a binary alphabet. We present a data structure for PMDM that answers queries over $\mathcal{D}$ in time $\mathcal{O}(2^{\ell/2}(2^{\ell/2}+\tau)\ell)$ and requires space $\mathcal{O}(2^{\ell}d^2/\tau^2+2^{\ell/2}d)$, for any parameter $\tau\in[1,d]$. We also approach the problem from a more practical perspective. We show an $\mathcal{O}((d\ell)^{k/3}+d\ell)$-time and $\mathcal{O}(d\ell)$-space algorithm for PMDM if $k=|K|=\mathcal{O}(1)$. We generalize our exact algorithm to mask multiple query strings simultaneously. We complement our results by showing a two-way polynomial-time reduction between PMDM and the Minimum Union problem [Chlamt\'{a}\v{c} et al., SODA 2017]. This gives a polynomial-time $\mathcal{O}(d^{1/4+\epsilon})$-approximation algorithm for PMDM, which is tight under plausible complexity conjectures.
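
A brute-force sketch straight from the problem definition (exponential in $\ell$, which the NP-completeness result says is hard to avoid in general): enumerate candidate position sets $K$ in order of size and return the first one whose masking of $q$ matches at least $z$ dictionary strings.

```python
# PMDM by exhaustive search over mask sets K, smallest first; fine for tiny instances.
from itertools import combinations

def pmdm_bruteforce(D, q, z):
    ell = len(q)
    for k in range(ell + 1):
        for K in combinations(range(ell), k):
            Kset = set(K)
            matches = sum(
                all(i in Kset or s[i] == q[i] for i in range(ell)) for s in D
            )
            if matches >= z:
                return Kset
    return None

D = ["abcd", "abce", "abzz", "qbcd"]
print(pmdm_bruteforce(D, "abcd", 3))   # {0, 3}: "?bc?" matches abcd, abce, qbcd
```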

Recently, a considerable literature has grown up around the theme of Graph Convolutional Networks (GCNs). How to effectively leverage the rich structural information in complex graphs, such as knowledge graphs with heterogeneous types of entities and relations, is a primary open challenge in the field. Most GCN methods are either restricted to graphs with a homogeneous type of edges (e.g., citation links only), or focus on representation learning for nodes only, rather than jointly propagating and updating the embeddings of both nodes and edges for target-driven objectives. This paper addresses these limitations by proposing a novel framework, the Knowledge Embedding based Graph Convolutional Network (KE-GCN), which combines the power of GCNs in graph-based belief propagation with the strengths of advanced knowledge embedding (a.k.a. knowledge graph embedding) methods, and goes beyond both. Our theoretical analysis shows that KE-GCN offers an elegant unification of several well-known GCN methods as special cases, with a new perspective on graph convolution. Experimental results on benchmark datasets show the advantageous performance of KE-GCN over strong baseline methods on the tasks of knowledge graph alignment and entity classification.
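
As a schematic illustration of the joint propagation idea (our own simplification, not KE-GCN's actual update equations): one layer can compose neighbor messages with relation embeddings to update nodes, and aggregate incident node pairs to update relations. The elementwise product used as composition below is one common choice in knowledge-embedding GCNs, assumed here for simplicity.

```python
# One schematic layer jointly updating entity (node) and relation (edge) embeddings.
import numpy as np

def layer(E, R, triples, Wn, Wr):
    """E: node embeddings (n, d); R: relation embeddings (r, d);
    triples: list of (head, rel, tail); Wn, Wr: (d, d) weight matrices."""
    E_new, R_new = np.zeros_like(E), np.zeros_like(R)
    deg_n, deg_r = np.ones(len(E)), np.ones(len(R))
    for h, r, t in triples:
        E_new[t] += E[h] * R[r]          # relation-gated message to the tail
        E_new[h] += E[t] * R[r]          # and back to the head
        R_new[r] += E[h] * E[t]          # relations updated from incident node pairs
        deg_n[h] += 1; deg_n[t] += 1; deg_r[r] += 1
    E_out = np.tanh(((E + E_new) / deg_n[:, None]) @ Wn)
    R_out = np.tanh(((R + R_new) / deg_r[:, None]) @ Wr)
    return E_out, R_out

rng = np.random.default_rng(0)
n, r, d = 5, 2, 8
E, R = rng.normal(size=(n, d)), rng.normal(size=(r, d))
Wn, Wr = rng.normal(size=(d, d)), rng.normal(size=(d, d))
E1, R1 = layer(E, R, [(0, 0, 1), (1, 1, 2), (3, 0, 4)], Wn, Wr)
print(E1.shape, R1.shape)                # both node and edge embeddings evolve
```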

The information bottleneck (IB) method is a technique for extracting the information in a source random variable that is relevant for predicting a target random variable; it is typically implemented by optimizing the IB Lagrangian, which balances compression and prediction terms. However, the IB Lagrangian is hard to optimize, and multiple trials are required to tune the value of the Lagrangian multiplier. Moreover, we show that prediction performance strictly decreases as compression gets stronger while optimizing the IB Lagrangian. In this paper, we implement the IB method from the perspective of supervised disentangling. Specifically, we introduce the Disentangled Information Bottleneck (DisenIB), which consistently compresses the source maximally without loss of target prediction performance (maximum compression). Theoretical and experimental results demonstrate that our method consistently attains maximum compression and performs well in terms of generalization, robustness to adversarial attack, out-of-distribution detection, and supervised disentangling.
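
For reference, the trade-off the abstract criticizes looks as follows in a standard variational-IB-style implementation (a generic sketch with a Gaussian encoder and standard normal prior, not DisenIB itself): the multiplier $\beta$ weights compression against prediction, and tuning it requires repeated training runs.

```python
# Generic IB Lagrangian: L = cross-entropy(prediction) + beta * KL(q(z|x) || N(0, I)).
import numpy as np

def vib_loss(logits, labels, mu, logvar, beta):
    # Cross-entropy of softmax(logits) against integer labels.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -logp[np.arange(len(labels)), labels].mean()
    # KL between diagonal Gaussians q = N(mu, diag(exp(logvar))) and N(0, I).
    kl = 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar).sum(axis=1).mean()
    return ce + beta * kl   # larger beta => stronger compression, weaker prediction

rng = np.random.default_rng(0)
logits, labels = rng.normal(size=(4, 3)), np.array([0, 2, 1, 0])
mu, logvar = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(vib_loss(logits, labels, mu, logvar, beta=1e-2))
```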
