
This paper considers parametricity and its consequent free theorems for nested data types. Rather than representing nested types via their Church encodings in a higher-kinded or dependently typed extension of System F, we adopt a functional programming perspective and design a Hindley-Milner-style calculus with primitives for constructing nested types directly as fixpoints. Our calculus can express all nested types appearing in the literature, including truly nested types. At the level of terms, it supports primitive pattern matching, map functions, and fold combinators for nested types. Our main contribution is the construction of a parametric model for our calculus. This is both delicate and challenging. In particular, to ensure the existence of semantic fixpoints interpreting nested types, and thus to establish a suitable Identity Extension Lemma for our calculus, our type system must explicitly track functoriality of types, and cocontinuity conditions on the functors interpreting them must be appropriately threaded throughout the model construction. We also prove that our model satisfies an appropriate Abstraction Theorem and validates all standard consequences of parametricity in the presence of primitive nested types. We give several concrete examples illustrating how our model can be used to derive useful free theorems, including a short cut fusion transformation, for programs over nested types. Finally, we consider generalizing our results to GADTs, and argue that no extension of our parametric model for nested types can give a functorial interpretation of GADTs in terms of left Kan extensions and still be parametric.
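
For readers who have not met nested types, a minimal Haskell sketch (illustrative only; the paper defines its own calculus rather than working in Haskell) shows the shape of such a type and why its map and fold already force polymorphic recursion and a rank-2 algebra:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The classic "nested" (non-regular) datatype: the element type changes from
-- a to (a, a) at each level, so a value stores 1 + 2 + 4 + ... elements.
data Nest a = NilN | ConsN a (Nest (a, a))

-- Mapping already needs polymorphic recursion: the recursive call is at the
-- instance Nest (a, a), not Nest a, so the type signature is mandatory.
mapNest :: (a -> b) -> Nest a -> Nest b
mapNest _ NilN         = NilN
mapNest f (ConsN x xs) = ConsN (f x) (mapNest (\(y, z) -> (f y, f z)) xs)

-- A fold must be given a *polymorphic* algebra (hence the rank-2 type):
-- the result r is indexed by the changing element type.
foldNest :: (forall b. r b)                    -- interpret NilN
         -> (forall b. b -> r (b, b) -> r b)   -- interpret ConsN
         -> Nest a -> r a
foldNest n _ NilN         = n
foldNest n c (ConsN x xs) = c x (foldNest n c xs)

-- Example use: count the ConsN spine cells with a constant result functor.
newtype K a = K { unK :: Int }

len :: Nest a -> Int
len = unK . foldNest (K 0) (\_ rest -> K (1 + unK rest))
```

Informally, the need to supply a polymorphic algebra is why such folds cannot be typed in plain Hindley-Milner and why the calculus described above provides map functions and fold combinators for nested types as primitives.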

Related content

App extension points introduced in iOS 8 for interaction between apps and between apps and the system:
  • Today (iOS and OS X): widgets for the Today view of Notification Center
  • Share (iOS and OS X): post content to web services or share content with others
  • Actions (iOS and OS X): app extensions to view or manipulate content inside another app
  • Photo Editing (iOS): edit a photo or video within Apple's Photos app using extensions from third-party apps
  • Finder Sync (OS X): remote file storage in the Finder with support for Finder content annotation
  • Storage Provider (iOS): an interface between files inside an app and other apps on a user's device
  • Custom Keyboard (iOS): system-wide alternative keyboards

The broad objective of this paper is to initiate, through a mathematical model, the study of the causes of wage inequality and to relate them to choices of consumption and the technologies of production in an economy. The paper constructs a simple Heterodox Model of a closed economy, in which the consumption and production parts are clearly separated and yet coupled through a tatonnement process. The equilibria of this process correspond directly to those of a related Arrow-Debreu model. The formulation allows us to identify the combinatorial data that link parameters of the economic system with its equilibria, in particular the impact of consumer preferences on wages. The Heterodox Model also allows the formulation and explicit construction of the consumer choice game, in which individual utilities serve as the strategies and total or relative wages as the pay-offs. We illustrate, through two examples, the mathematical details of the consumer choice game. We show that consumer preferences, expressed through modified utility functions, do indeed percolate through the economy and influence not only prices but also production and wages. Thus, consumer choice may serve as an effective tool for wage redistribution.
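
For intuition, a tatonnement process couples the two sides through iterative price adjustment. A standard schematic form (our notation, not necessarily the paper's exact specification) is:

```latex
% Schematic tatonnement step: prices move in the direction of excess demand.
% p^{(t)} is the price vector, z(\cdot) the excess-demand map induced by the
% consumption side, and \lambda > 0 an adjustment rate (illustrative only).
p^{(t+1)} \;=\; p^{(t)} + \lambda\, z\!\bigl(p^{(t)}\bigr),
\qquad
\text{equilibrium: } z(p^{\ast}) = 0 .
```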

Cryptographic protocols (CPs) are distributed algorithms intended for secure communication in an insecure environment. They are used, for example, in electronic payments, electronic voting procedures, systems of confidential data processing, etc. Errors in CPs can lead to great financial and social damage, so it is necessary to use mathematical methods to substantiate the correctness and safety of CPs. In this paper, a distributed process model of CPs is presented, which allows one to formally describe CPs and their properties. It is shown how verification problems for CPs can be solved on the basis of this model.

We study a class of non-cooperative aggregative games -- denoted as \emph{social purpose games} -- in which the payoffs depend separately on a player's own strategy (individual benefits) and on a function of the strategy profile that is common to all players (social benefits), weighted by an individual benefit parameter. This structure allows for an asymmetric assessment of the social benefit across players. We show that these games have a potential and we investigate its properties. We investigate the payoff structure and the uniqueness of Nash equilibria and social optima. Furthermore, following the literature on partial cooperation, we investigate leadership by a single coalition of cooperators while the rest of the players act as non-cooperative followers. In particular, we show that social purpose games admit the emergence of a stable coalition of cooperators for the subclass of \emph{strict} social purpose games. Due to the nature of the partial cooperative leadership equilibrium, stable coalitions of cooperators reflect a limited form of farsightedness in their formation. As a particular application, we study the tragedy of the commons game. We show that a single stable coalition of cooperators emerges to curb the over-exploitation of the resource.
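
To make the structure concrete, one schematic payoff form consistent with this description (our own hedged notation; the paper's exact formulation may differ) and the weighted potential it induces are:

```latex
% Player i's payoff: an individual part b_i plus a common social part H,
% weighted by an individual benefit parameter \alpha_i > 0 (schematic form).
u_i(x) \;=\; b_i(x_i) + \alpha_i\, H(x)
% One corresponding (weighted) potential and its defining property:
P(x) \;=\; H(x) + \sum_i \frac{b_i(x_i)}{\alpha_i},
\qquad
u_i(x_i, x_{-i}) - u_i(x_i', x_{-i})
  \;=\; \alpha_i \bigl( P(x_i, x_{-i}) - P(x_i', x_{-i}) \bigr).
```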

Traditional session types prescribe bidirectional communication protocols for concurrent computations, where well-typed programs are guaranteed to adhere to the protocols. However, simple session types cannot capture properties beyond the basic type of the exchanged messages. In response, recent work has extended session types with refinements from linear arithmetic, capturing intrinsic attributes of processes and data. These refinements then play a central role in describing sequential and parallel complexity bounds on session-typed programs. The Rast language provides an open-source implementation of session-typed concurrent programs extended with arithmetic refinements as well as ergometric and temporal types to capture work and span of program execution. To further support generic programming, Rast also enhances arithmetically refined session types with recently developed nested parametric polymorphism. Type checking relies on Cooper's algorithm for quantifier elimination in Presburger arithmetic with a few significant optimizations, and a heuristic extension to nonlinear constraints. Rast furthermore includes a reconstruction engine so that most program constructs pertaining to the layers of refinements and resources are inserted automatically. We provide a variety of examples to demonstrate the expressivity of the language.

We present a unification of refinement and Hoare-style reasoning in a foundational mechanized higher-order distributed separation logic. This unification enables us to prove formally in the Coq proof assistant that concrete implementations of challenging distributed systems refine more abstract models and to combine refinement-style reasoning with Hoare-style program verification. We use our logic to prove correctness of concrete implementations of two-phase commit and single-decree Paxos by showing that they refine their abstract TLA+ specifications. We further use our notion of refinement to transfer fairness assumptions on program executions to model traces and then transfer liveness properties of fair model traces back to program executions, which enables us to prove liveness properties such as strong eventual consistency of a concrete implementation of a Conflict-Free Replicated Data Type and fair termination of a concurrent program.

Time appears to pass irreversibly. In light of CPT symmetry, the Universe's initial condition is thought to be somehow responsible. We propose a model, the stochastic partitioned cellular automaton (SPCA), in which to study the mechanisms and consequences of emergent irreversibility. While SPCAs are most naturally defined probabilistically, we show that their dynamics can be made deterministic and reversible by attaching randomly initialized degrees of freedom. This property motivates analogies to classical field theories. We develop the foundations of non-equilibrium statistical mechanics on SPCAs. Of particular interest are the second law of thermodynamics and a mutual information law which proves fundamental in non-equilibrium settings. We believe that studying the dynamics of information on SPCAs will yield insights into foundational topics in computer engineering, the sciences, and the philosophy of mind. As evidence of this, we discuss several such applications, including an extension of Landauer's principle, and sketch a physical justification of the causal decision theory that underlies the so-called psychological arrow of time.
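
As a toy, single-cell illustration of the reversibility mechanism mentioned above (an assumed sketch, not the paper's construction): a stochastic bit flip becomes deterministic and reversible once the coin it would consume is attached to the state as a randomly initialized degree of freedom.

```haskell
-- Toy sketch: a "flip with probability 1/2" update becomes deterministic and
-- reversible when the random coin is carried as part of the state.
type Cell = (Bool, Bool)          -- (cell value, attached random bit)

step :: Cell -> Cell
step (c, r) = (c /= r, r)         -- XOR the cell with its attached bit

unstep :: Cell -> Cell
unstep = step                     -- the rule is an involution: unstep . step = id
```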

A recent spate of state-of-the-art semi- and un-supervised solutions disentangle and encode image "content" into a spatial tensor and image appearance or "style" into a vector, to achieve good performance in spatially equivariant tasks (e.g. image-to-image translation). To achieve this, they employ different model-design, learning-objective, and data biases. While considerable effort has been made to measure disentanglement in vector representations and to assess its impact on task performance, such analysis for (spatial) content-style disentanglement is lacking. In this paper, we conduct an empirical study to investigate the role of different biases in content-style disentanglement settings and unveil the relationship between the degree of disentanglement and task performance. In particular, we consider the setting where we: (i) identify key design choices and learning constraints for three popular content-style disentanglement models; (ii) relax or remove such constraints in an ablation fashion; and (iii) use two metrics to measure the degree of disentanglement and assess its effect on the performance of each task. Our experiments reveal that there is a "sweet spot" between disentanglement, task performance and, surprisingly, content interpretability, suggesting that blindly pushing for higher disentanglement can hurt model performance and the semantic meaningfulness of the content factors. Our findings, as well as the task-independent metrics we use, can guide the design and selection of new models for tasks where content-style representations are useful.

Federated learning is a distributed machine learning method that aims to preserve the privacy of sample features and labels. In a federated learning system, ID-based sample alignment approaches are usually applied, with little effort made to protect ID privacy. In real-life applications, however, the confidentiality of sample IDs, which are the strongest row identifiers, is also drawing much attention from many participants. To address these concerns about ID privacy, this paper formally proposes the notion of asymmetrical vertical federated learning and illustrates how to protect sample IDs. The standard private set intersection protocol is adapted to achieve the asymmetrical ID alignment phase in an asymmetrical vertical federated learning system. Correspondingly, a Pohlig-Hellman realization of the adapted protocol is provided. This paper also presents a genuine-with-dummy approach to achieving asymmetrical federated model training. To illustrate its application, a federated logistic regression algorithm is provided as an example. Experiments are also conducted to validate the feasibility of this approach.
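
For intuition, a Pohlig-Hellman-style realization of private set intersection rests on a commutative cipher: each party exponentiates (hashed) IDs with its own secret key modulo a shared prime, and because the two exponentiations commute, the intersection can be computed on doubly encrypted values without revealing plaintext IDs. A minimal sketch follows; the modulus, keys, and integer ID encoding are illustrative assumptions, and the interactive message exchange between parties is elided.

```haskell
import Data.Bits (shiftR, (.&.))
import Data.List (intersect)

-- Toy shared prime modulus (a Mersenne prime); a real deployment needs a
-- properly chosen safe prime and keys coprime to p - 1.
p :: Integer
p = 2 ^ (127 :: Int) - 1

-- Modular exponentiation by repeated squaring.
powMod :: Integer -> Integer -> Integer -> Integer
powMod _ 0 m = 1 `mod` m
powMod b e m
  | e .&. 1 == 1 = (b `mod` m * powMod (b * b `mod` m) (e `shiftR` 1) m) `mod` m
  | otherwise    = powMod (b * b `mod` m) (e `shiftR` 1) m

-- Pohlig-Hellman style commutative "encryption": E_k(x) = x^k mod p,
-- so E_k1 (E_k2 x) == E_k2 (E_k1 x).
encrypt :: Integer -> Integer -> Integer
encrypt k x = powMod x k p

-- Each party encrypts its own (hashed) IDs with its secret key, the parties
-- exchange and re-encrypt with the other key, and the intersection is taken
-- over the doubly encrypted values.
intersectIds :: Integer -> [Integer] -> Integer -> [Integer] -> [Integer]
intersectIds kA idsA kB idsB =
  map (encrypt kB . encrypt kA) idsA `intersect` map (encrypt kA . encrypt kB) idsB
```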

User engagement is a critical metric for evaluating the quality of open-domain dialogue systems. Prior work has focused on conversation-level engagement, using heuristically constructed features such as the number of turns and the total time of the conversation. In this paper, we investigate the possibility and efficacy of estimating utterance-level engagement and define a novel metric, {\em predictive engagement}, for automatic evaluation of open-domain dialogue systems. Our experiments demonstrate that (1) human annotators show high agreement when assessing utterance-level engagement scores; (2) conversation-level engagement scores can be predicted from properly aggregated utterance-level engagement scores. Furthermore, we show that utterance-level engagement scores can be learned from data. These scores can improve automatic evaluation metrics for open-domain dialogue systems, as shown by their correlation with human judgements. This suggests that predictive engagement can be used as real-time feedback for training better dialogue models.

Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling and prediction, among others. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, which is inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes, and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
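
Schematically, and in our own assumed notation (see the paper for the precise construction), the latent nested construction adds a shared completely random measure to a group-specific one and normalises:

```latex
% \mu_0 is a common completely random measure shared by all groups and
% \mu_\ell a group-specific one; the dependent random probability measure
% for group \ell is obtained by normalisation over the whole space \mathbb{X}.
\tilde p_\ell \;=\; \frac{\mu_0 + \mu_\ell}{\left(\mu_0 + \mu_\ell\right)(\mathbb{X})},
\qquad \ell = 1, \dots, d .
```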
