
Suszko's Sentential Calculus with Identity (SCI) results from classical propositional calculus (CPC) by adding a new connective $\equiv$ and axioms for identity $\varphi\equiv\psi$ (which we interpret here as `propositional identity'). We reformulate the original semantics of SCI in terms of Boolean prealgebras, establishing a connection to `hyperintensional semantics'. Furthermore, we define a general framework of dualities between certain SCI-theories and Lewis-style modal systems in the vicinity of S3. Suszko's original approach to two SCI-theories corresponding to S4 and S5 can be formulated as a special case. All these dualities rely in particular on the fact that Lewis' `strict equivalence' is axiomatized by the SCI-principles of `propositional identity'.
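
For reference, a minimal sketch of the identity axioms that SCI adds on top of CPC, in their standard Bloom--Suszko-style presentation (the exact formulation varies slightly across the literature; modus ponens remains the only rule):

$$
\begin{aligned}
&(\equiv_1)\ \varphi \equiv \varphi\\
&(\equiv_2)\ (\varphi \equiv \psi) \rightarrow (\varphi \rightarrow \psi)\\
&(\equiv_3)\ (\varphi \equiv \psi) \rightarrow (\neg\varphi \equiv \neg\psi)\\
&(\equiv_4)\ \big((\varphi \equiv \psi) \wedge (\chi \equiv \vartheta)\big) \rightarrow \big((\varphi \otimes \chi) \equiv (\psi \otimes \vartheta)\big),\quad \otimes \in \{\wedge, \vee, \rightarrow, \equiv\}
\end{aligned}
$$

Per the last sentence of the abstract, these are exactly the principles that axiomatize Lewis-style strict equivalence in the modal systems considered.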

Related content

We study the performance of Markov chains for the $q$-state ferromagnetic Potts model on random regular graphs. It is conjectured that their performance is dictated by metastability phenomena, i.e., the presence of "phases" (clusters) in the sample space where Markov chains with local update rules, such as the Glauber dynamics, are bound to take exponential time to escape, and therefore cause slow mixing. The phases that are believed to drive these metastability phenomena in the case of the Potts model emerge as local, rather than global, maxima of the so-called Bethe functional, and previous approaches to analysing these phases, based on optimisation arguments, fall short of the task. Our first contribution is to detail the emergence of the metastable phases for the $q$-state Potts model on the $d$-regular random graph for all integers $q,d\geq 3$, and establish that for an interval of temperatures, delineated by the uniqueness and the Kesten-Stigum thresholds on the $d$-regular tree, the two phases coexist. The proofs are based on a conceptual connection between spatial properties and the structure of the Potts distribution on the random regular graph, rather than complicated moment calculations. Based on this new structural understanding of the model, we obtain various algorithmic consequences. We first complement recent fast mixing results for Glauber dynamics by Blanca and Gheissari below the uniqueness threshold, showing an exponential lower bound on the mixing time above the uniqueness threshold. Then, we obtain tight results even for the non-local Swendsen-Wang chain, where we establish slow mixing/metastability for the whole interval of temperatures where the chain is conjectured to mix slowly on the random regular graph. The key is to bound the conductance of the chains using a random graph "planting" argument combined with delicate bounds on random-graph percolation.
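
As a point of reference, a minimal sketch of the Glauber dynamics discussed above: the chain targets the ferromagnetic Potts distribution $\mu(\sigma)\propto\exp\big(\beta\sum_{\{u,v\}\in E}\mathbf{1}[\sigma_u=\sigma_v]\big)$, and each step resamples one vertex from its conditional distribution. The graph, $q$, and $\beta$ below are toy placeholders:

```python
import random
import math

def glauber_step(colors, adj, q, beta):
    """One Glauber update: pick a uniformly random vertex and resample its
    color from the conditional Potts distribution given its neighbours."""
    v = random.randrange(len(colors))
    # Unnormalised conditional weights: exp(beta * #neighbours of each color).
    counts = [0] * q
    for u in adj[v]:
        counts[colors[u]] += 1
    weights = [math.exp(beta * c) for c in counts]
    colors[v] = random.choices(range(q), weights=weights)[0]

# Toy run on a small 3-regular graph (K_4), with q = 3 colors.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
colors = [random.randrange(3) for _ in range(4)]
for _ in range(1000):
    glauber_step(colors, adj, 3, beta=0.5)
print(colors)
```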

This paper focuses on quantifications whose nature, we believe, is generally undervalued within the Knowledge Representation community: they range over a set of concepts, i.e., of intensional objects identified in the ontology. Hence, we extend first-order logic to allow referring to the intension of a symbol, i.e., to the concept it represents. Our formalism is more elaboration tolerant than simpler formalisms that require reification, but also introduces the possibility of syntactically incorrect formulae. We introduce a guarding mechanism to make formulae syntactically correct, and present a method to verify correctness. The complexity of the method is linear in the length of the formula. We also extend FO($\cdot$) (aka FO-dot), a logic-based knowledge representation language, in a similar way, and show how it helped solve practical problems. The value of expressing intensional statements has been well established in modal logic. We show how our approach expands on the understanding of intensions as studied in modal settings by, e.g., Fitting, in a way that is of value in non-modal settings as well.
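
A minimal sketch of what a linear-time guard check could look like; the AST node names, the guard notation, and the arity-based correctness criterion below are invented for illustration and are not the paper's actual FO($\cdot$) syntax:

```python
from dataclasses import dataclass
from typing import Union

# Toy formula AST: a concept variable may be used in predicate position via
# Apply, which is only well-formed under a guard fixing its arity.

@dataclass
class Apply:            # c(x1, ..., xn): apply the concept bound to `c`
    concept_var: str
    args: tuple

@dataclass
class Guard:            # "c is a concept of arity n" guarding a subformula
    concept_var: str
    arity: int
    body: "Formula"

@dataclass
class ForallConcept:    # quantification over all concepts in the ontology
    concept_var: str
    body: "Formula"

@dataclass
class And:
    left: "Formula"
    right: "Formula"

Formula = Union[Apply, Guard, ForallConcept, And]

def well_guarded(phi: Formula, guards: dict = None) -> bool:
    """Single pass over the AST (linear in formula length, up to guard
    bookkeeping): every application of a quantified concept variable must
    occur under a guard declaring a matching arity."""
    guards = guards or {}
    if isinstance(phi, Apply):
        return guards.get(phi.concept_var) == len(phi.args)
    if isinstance(phi, Guard):
        return well_guarded(phi.body, {**guards, phi.concept_var: phi.arity})
    if isinstance(phi, ForallConcept):  # shadowing clears any earlier guard
        return well_guarded(phi.body, {k: v for k, v in guards.items()
                                       if k != phi.concept_var})
    if isinstance(phi, And):
        return well_guarded(phi.left, guards) and well_guarded(phi.right, guards)
    return True

# Guarded application: accepted.
print(well_guarded(ForallConcept("c", Guard("c", 1, Apply("c", ("x",))))))  # True
# Unguarded application: rejected.
print(well_guarded(ForallConcept("c", Apply("c", ("x", "y")))))             # False
```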

Systems consisting of spheres rolling on elastic membranes have been used as educational tools to introduce a core conceptual idea of General Relativity (GR): how curvature guides the movement of matter. However, previous studies have revealed that such schemes cannot accurately represent relativistic dynamics in the laboratory. Dissipative forces cause the initially GR-like dynamics to be transient and consequently restrict experimental study to only the beginnings of trajectories; the dominance of Earth's gravity obscures the distinction between spatial and temporal spacetime curvatures. Here, by developing a mapping between the dynamics of a wheeled vehicle on a spandex membrane and geodesic motion in a curved spacetime, we demonstrate that an active object that can prescribe its speed can not only attain steady-state orbits, but can also use additional parameters, such as speed, to tune the orbits towards relativistic dynamics. Our mapping demonstrates how activity mixes space and time in a metric, and shows how active particles do not necessarily follow geodesics in real space but instead follow geodesics in a fiducial spacetime. The mapping further reveals how parameters such as the membrane elasticity and the instantaneous speed allow programming a desired spacetime, such as the Schwarzschild metric near a non-rotating black hole. Our mapping and framework point the way to creating a robophysical analog gravity system in the laboratory at low cost, and provide insights into active matter in deformable environments and robot exploration in complex landscapes.
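
For concreteness, the Schwarzschild metric referenced above, in its standard form (Schwarzschild coordinates, with $r_s$ the Schwarzschild radius):

$$ds^{2} = -\Big(1-\frac{r_s}{r}\Big)c^{2}\,dt^{2} + \Big(1-\frac{r_s}{r}\Big)^{-1}dr^{2} + r^{2}\,d\Omega^{2}, \qquad r_s=\frac{2GM}{c^{2}}.$$

The distinct $dt^{2}$ and $dr^{2}$ factors are precisely the temporal and spatial curvature contributions that, per the abstract, a passive rolling sphere cannot separate but a speed-controlled vehicle can emulate.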

The existence of the {\em typical set} is key for data compression strategies and for the emergence of robust statistical observables in macroscopic physical systems. Standard approaches derive its existence from a restricted set of dynamical constraints. However, given the enormous consequences for the understanding of the system's dynamics, and its role underlying the presence of stable, almost deterministic statistical patterns, the question arises whether typical sets exist in much more general scenarios. We demonstrate here that the typical set can be defined and characterized from general forms of entropy for a much wider class of stochastic processes than previously thought. This includes processes showing arbitrary path dependence, long-range correlations or dynamic sampling spaces, suggesting that typicality is a generic property of stochastic processes, regardless of their complexity. Our results directly impact the understanding of the stability of complex systems, open the door to new data compression strategies, and point to the existence of statistical mechanics-like approaches to systems arbitrarily far from equilibrium with dynamic phase spaces. We argue that the potential emergence of robust properties in complex stochastic systems, provided by the existence of typical sets, has special relevance for biological systems.
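
As a baseline, the classical typical set for an i.i.d. source with entropy $H$, which the construction above generalizes:

$$\mathcal{T}^{(n)}_{\varepsilon} \;=\; \Big\{x^{n} : \Big|-\tfrac{1}{n}\log p(x^{n}) - H\Big| \le \varepsilon\Big\},$$

for which the asymptotic equipartition property gives $P\big(\mathcal{T}^{(n)}_{\varepsilon}\big)\to 1$ and $|\mathcal{T}^{(n)}_{\varepsilon}| \approx 2^{nH}$. The claim above is that an analogous set can still be defined and characterized when the i.i.d. assumption is dropped and $H$ is replaced by more general entropy forms.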

Let $\mathbf{X}$ be a random variable uniformly distributed on the discrete cube $\{ -1,1\} ^{n}$, and let $T_{\rho}$ be the noise operator acting on Boolean functions $f:\{ -1,1\} ^{n}\to\{ 0,1\} $, where $\rho\in[0,1]$ is the noise parameter, representing the correlation coefficient between each coordinate of $\mathbf{X}$ and its noise-corrupted version. Given a convex function $\Phi$ and the mean $\mathbb{E}f(\mathbf{X})=a\in[0,1]$, which Boolean function $f$ maximizes the $\Phi$-stability $\mathbb{E}[\Phi(T_{\rho}f(\mathbf{X}))]$ of $f$? Special cases of this problem include the (symmetric and asymmetric) $\alpha$-stability problems and the "Most Informative Boolean Function" problem. In this paper, we provide several upper bounds for the maximal $\Phi$-stability. When specializing $\Phi$ to certain particular forms, these upper bounds allow us to partially resolve Mossel and O'Donnell's conjecture on $\alpha$-stability with $\alpha>2$, Li and M\'edard's conjecture on $\alpha$-stability with $1<\alpha<2$, and Courtade and Kumar's conjecture on the "Most Informative Boolean Function" which corresponds to a conjecture on $\alpha$-stability with $\alpha=1$. Our proofs are based on discrete Fourier analysis, optimization theory, and improvements of the Friedgut--Kalai--Naor (FKN) theorem. Our improvements of the FKN theorem are sharp or asymptotically sharp for certain cases.
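
For reference, the standard definition of the noise operator assumed above, together with its Fourier expansion:

$$T_{\rho}f(x) \;=\; \mathbb{E}\big[f(\mathbf{Y})\,\big|\,\mathbf{X}=x\big] \;=\; \sum_{S\subseteq[n]}\rho^{|S|}\hat{f}(S)\,\chi_{S}(x), \qquad \chi_{S}(x)=\prod_{i\in S}x_{i},$$

where each coordinate of $\mathbf{Y}$ is independently set to $Y_{i}=X_{i}$ with probability $\tfrac{1+\rho}{2}$ and $Y_{i}=-X_{i}$ otherwise, so that $\mathbb{E}[X_{i}Y_{i}]=\rho$. Choosing $\Phi(t)=t^{\alpha}$ recovers the $\alpha$-stability problems mentioned above.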

Feature attribution is often loosely presented as the process of selecting a subset of relevant features as a rationale of a prediction. This lack of clarity stems from the fact that we usually do not have access to any notion of ground-truth attribution and from a more general debate on what good interpretations are. In this paper we propose to formalise feature selection/attribution based on the concept of relaxed functional dependence. In particular, we extend our notions to the instance-wise setting and derive necessary properties for candidate selection solutions, while leaving room for task-dependence. By computing ground-truth attributions on synthetic datasets, we evaluate many state-of-the-art attribution methods and show that, even when optimised, some fail to satisfy the proposed properties and provide wrong solutions.
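
A minimal sketch of the kind of ground-truth evaluation described above, under toy assumptions: a synthetic task where the label depends only on features 0 and 1, and a simple permutation-importance attribution standing in for the state-of-the-art methods actually evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task with known ground-truth attribution: y depends only on
# features {0, 1}; features 2..9 are pure noise.
X = rng.normal(size=(2000, 10))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10):
    """Toy global attribution: accuracy drop when one feature is shuffled."""
    base = (predict(X) == y).mean()
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            scores[j] += base - (predict(Xp) == y).mean()
    return scores / n_repeats

# A "perfect" model for this synthetic task.
predict = lambda X: (X[:, 0] * X[:, 1] > 0).astype(int)

scores = permutation_importance(predict, X, y)
top2 = set(np.argsort(scores)[-2:].tolist())
print("recovered:", top2, "ground truth: {0, 1}")
```

Because the ground-truth relevant set is known by construction, any attribution method can be scored on whether its top-ranked features recover it.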

Commonsense knowledge (CSK) about concepts and their properties is useful for AI applications such as robust chatbots. Prior works like ConceptNet, TupleKB and others compiled large CSK collections, but are restricted in their expressiveness to subject-predicate-object (SPO) triples with simple concepts for S and monolithic strings for P and O. Also, these projects have prioritized either precision or recall, but hardly reconcile these complementary goals. This paper presents a methodology, called Ascent, to automatically build a large-scale knowledge base (KB) of CSK assertions, with advanced expressiveness and both better precision and recall than prior works. Ascent goes beyond triples by capturing composite concepts with subgroups and aspects, and by refining assertions with semantic facets. The latter are important to express temporal and spatial validity of assertions and further qualifiers. Ascent combines open information extraction with judicious cleaning using language models. Intrinsic evaluation shows the superior size and quality of the Ascent KB, and an extrinsic evaluation for QA-support tasks underlines the benefits of Ascent.
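
A minimal sketch of what an assertion with composite concepts and facets could look like as a data structure; the field names and the example below are hypothetical, not Ascent's actual schema:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Assertion:
    """A CSK assertion beyond flat SPO triples: the subject may be a composite
    concept (primary concept plus subgroup or aspect), and the statement can
    be refined by semantic facets, e.g. temporal or spatial validity."""
    subject: str                      # primary concept, e.g. "elephant"
    subgroup: Optional[str] = None    # e.g. "baby elephant"
    aspect: Optional[str] = None      # e.g. "trunk" (part of the subject)
    predicate: str = ""
    obj: str = ""
    facets: Dict[str, str] = field(default_factory=dict)

# Hypothetical assertion in the spirit of the abstract:
a = Assertion(subject="elephant", aspect="trunk",
              predicate="used for", obj="drinking",
              facets={"location": "at waterholes"})
print(a)
```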

Corpus-based set expansion (i.e., finding the "complete" set of entities belonging to the same semantic class, based on a given corpus and a tiny set of seeds) is a critical task in knowledge discovery. It may facilitate numerous downstream applications, such as information extraction, taxonomy induction, question answering, and web search. To discover new entities in an expanded set, previous approaches either make one-time entity ranking based on distributional similarity, or resort to iterative pattern-based bootstrapping. The core challenge for these methods is how to deal with noisy context features derived from free-text corpora, which may lead to entity intrusion and semantic drifting. In this study, we propose a novel framework, SetExpan, which tackles this problem with two techniques: (1) a context feature selection method that selects clean context features for calculating entity-entity distributional similarity, and (2) a ranking-based unsupervised ensemble method for expanding the entity set based on denoised context features. Experiments on three datasets show that SetExpan is robust and outperforms previous state-of-the-art methods in terms of mean average precision.
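
A minimal sketch of one expansion iteration combining the two techniques, under simplifying assumptions (weighted-Jaccard similarity and a reciprocal-rank ensemble; the paper's actual scoring functions differ in detail). Here `strength` maps each entity to a dict of context-feature weights:

```python
import random
from collections import defaultdict

def similarity(e1, e2, features, strength):
    """Weighted-Jaccard distributional similarity of two entities,
    restricted to the given set of context features."""
    num = sum(min(strength[e1].get(f, 0.0), strength[e2].get(f, 0.0)) for f in features)
    den = sum(max(strength[e1].get(f, 0.0), strength[e2].get(f, 0.0)) for f in features)
    return num / den if den else 0.0

def expand_once(seeds, candidates, strength, n_features=50, n_ensemble=10, top_k=5):
    """One simplified SetExpan-style iteration: (1) keep only the context
    features scoring highest against the current seed set, then (2) rank
    candidates by an ensemble of rankings over random feature subsets."""
    # 1. Context feature selection: aggregate seed-feature strength.
    feat_score = defaultdict(float)
    for s in seeds:
        for f, w in strength[s].items():
            feat_score[f] += w
    selected = sorted(feat_score, key=feat_score.get, reverse=True)[:n_features]
    # 2. Rank ensemble over random subsets of the selected features,
    #    which damps the influence of any single noisy feature.
    scores = defaultdict(float)
    for _ in range(n_ensemble):
        subset = random.sample(selected, k=max(1, 2 * len(selected) // 3))
        ranked = sorted(candidates, reverse=True,
                        key=lambda e: sum(similarity(e, s, subset, strength)
                                          for s in seeds))
        for rank, e in enumerate(ranked, start=1):
            scores[e] += 1.0 / rank      # reciprocal-rank aggregation
    return sorted(candidates, key=lambda e: scores[e], reverse=True)[:top_k]
```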

Link prediction is critical for applying incomplete knowledge graphs (KGs) to downstream tasks. As a family of effective approaches for link prediction, embedding methods try to learn low-rank representations for both entities and relations such that the bilinear form defined therein is a well-behaved scoring function. Despite their successful performance, existing bilinear forms overlook the modeling of relation compositions, resulting in a lack of interpretability for reasoning on KGs. To fill this gap, we propose a new model called DihEdral, named after the dihedral symmetry group. This new model learns knowledge graph embeddings that can capture relation compositions by nature. Furthermore, our approach parametrizes relation embeddings by discrete values, thereby drastically reducing the solution space. Our experiments show that DihEdral is able to capture all desired properties such as (skew-) symmetry, inversion and (non-) Abelian composition, outperforms existing bilinear-form-based approaches, and is comparable to or better than deep learning models such as ConvE.
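
A minimal sketch of the underlying algebra: a relation is represented by a block-diagonal stack of elements of the dihedral group $D_K$ (rotations and reflections of the plane), scored by a bilinear form, and relation composition becomes matrix multiplication of group elements. The toy `score` function and the choice $K=4$ below are illustrative:

```python
import numpy as np

def dihedral_element(K, k, reflect=False):
    """2x2 real matrix for an element of the dihedral group D_K:
    rotation by 2*pi*k/K, optionally composed with a reflection."""
    t = 2 * np.pi * k / K
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    if reflect:
        R = R @ np.array([[1.0, 0.0], [0.0, -1.0]])
    return R

def score(h, blocks, t):
    """Bilinear score <h, R t> with the relation R acting as a list of
    2x2 dihedral blocks on consecutive 2-dimensional slices of h and t."""
    h2, t2 = h.reshape(-1, 2), t.reshape(-1, 2)
    return sum(hb @ Rb @ tb for hb, Rb, tb in zip(h2, blocks, t2))

# Rotations commute with each other, but rotation and reflection do not,
# which is how non-Abelian relation composition is captured:
A = dihedral_element(4, 1)                 # rotation by 90 degrees
B = dihedral_element(4, 0, reflect=True)   # pure reflection
print(np.allclose(A @ B, B @ A))           # False
```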

Unpaired image-to-image translation is the problem of mapping an image in the source domain to one in the target domain without requiring corresponding image pairs. To ensure the translated images are realistically plausible, recent works, such as Cycle-GAN, demand this mapping to be invertible. While this requirement demonstrates promising results when the domains are unimodal, its performance is unpredictable in a multi-modal scenario such as an image segmentation task. This is because invertibility does not necessarily enforce semantic correctness. To this end, we present a semantically-consistent GAN framework, dubbed Sem-GAN, in which the semantics are defined by the class identities of image segments in the source domain, as produced by a semantic segmentation algorithm. Our proposed framework includes consistency constraints on the translation task that, together with the GAN loss and the cycle-constraints, enforce that translated images inherit the appearance of the target domain while (approximately) maintaining their identities from the source domain. We present experiments on several image-to-image translation tasks and demonstrate that Sem-GAN improves the quality of the translated images significantly, sometimes by more than 20% on the FCN score. Further, we show that semantic segmentation models trained with synthetic images translated via Sem-GAN lead to significantly better segmentation results than other variants.
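
A minimal sketch of the generator-side objective for one translation direction, under assumed names (`G_st`, `G_ts`, `D_t`, `seg_s`, `seg_t`) and an assumed least-squares GAN term; the paper's exact losses and weights may differ:

```python
import torch
import torch.nn.functional as F

def sem_gan_generator_loss(G_st, G_ts, D_t, seg_s, seg_t, x_s,
                           lam_cyc=10.0, lam_sem=1.0):
    """Generator-side objective for the source -> target direction:
    GAN term + cycle-consistency term + semantic-consistency term."""
    x_st = G_st(x_s)                        # translate source -> target
    d_out = D_t(x_st)
    # 1. Least-squares GAN loss (an assumed, common choice).
    loss_gan = F.mse_loss(d_out, torch.ones_like(d_out))
    # 2. Cycle consistency: translating back should recover the input.
    loss_cyc = F.l1_loss(G_ts(x_st), x_s)
    # 3. Semantic consistency: per-pixel class identities predicted on the
    #    source must be reproduced by a target-domain segmenter on the
    #    translated image.
    labels_s = seg_s(x_s).argmax(dim=1)     # (N, H, W) hard segment labels
    loss_sem = F.cross_entropy(seg_t(x_st), labels_s)
    return loss_gan + lam_cyc * loss_cyc + lam_sem * loss_sem
```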
