
Category theory is a branch of mathematics that provides a formal framework for understanding the relationships between mathematical structures. To this end, a category incorporates not only the data of the desired objects, but also "morphisms", which capture how different objects interact with each other. Category theory has found many applications in mathematics and computer science, for example in functional programming. Double categories are a natural generalization of categories which incorporate the data of two separate classes of morphisms, allowing a more nuanced representation of the relationships and interactions between objects. Like categories, double categories have been successfully applied in various situations in mathematics and computer science in which objects naturally exhibit two types of morphisms. Examples include categories themselves, but also lenses, Petri nets, and spans. While categories have already been formalized in a variety of proof assistants, double categories have received far less attention. In this paper we remedy this situation by presenting a formalization of double categories in the proof assistant Coq, relying on the Coq UniMath library. As part of this work we present two equivalent formalizations of the definition of a double category: an explicit, unfolded definition, and a second definition via 2-sided displayed categories, which exhibits excellent formal properties. As an application of the formal approach we establish a notion of univalent double category along with a univalence principle: equivalences of univalent double categories coincide with their identities.
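
The paper's development is in Coq/UniMath, which is not shown in the abstract; as a rough illustration of what the unfolded definition packages together, here is a sketch in Lean 4 of the data of a strict double category. The structure and field names are invented for this sketch, and all coherence laws are elided.

```lean
-- Illustrative only: the *data* of a strict double category, sketched in
-- Lean 4 (the paper's actual development is in Coq/UniMath).
structure DoubleCatData where
  Obj   : Type
  Hor   : Obj → Obj → Type                  -- horizontal morphisms
  Ver   : Obj → Obj → Type                  -- vertical morphisms
  -- a square fills a frame of two horizontal and two vertical morphisms
  Sq    : {a b c d : Obj} → Hor a b → Ver a c → Ver b d → Hor c d → Type
  hId   : (a : Obj) → Hor a a
  vId   : (a : Obj) → Ver a a
  hComp : {a b c : Obj} → Hor a b → Hor b c → Hor a c
  vComp : {a b c : Obj} → Ver a b → Ver b c → Ver a c
  -- omitted: identity squares, horizontal/vertical composition of squares,
  -- unit and associativity laws, and the interchange law
```

The point of the 2-sided displayed-category formulation mentioned above is precisely to avoid stating this bundle of data and laws all at once.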

Related Content


Constraint programming (CP) is a powerful tool for modeling mathematical concepts and objects and for finding both solutions and counterexamples. One of the major strengths of CP is that problems can easily be combined or expanded. In this paper, we illustrate that this versatility makes CP an ideal tool for exploring problems in permutation patterns. We declaratively define permutation properties, permutation pattern avoidance and containment constraints using CP and show how this allows us to solve a wide range of problems. We show how this approach enables the arbitrary composition of these conditions, and also allows the easy addition of extra conditions. We demonstrate the effectiveness of our techniques by modelling the containment and avoidance of six permutation patterns, eight permutation properties, and measuring five statistics on the resulting permutations. In addition to calculating properties and statistics for the generated permutations, we show that arbitrary additional constraints can also be added easily and efficiently. This approach enables mathematicians to investigate permutation pattern problems in a quick and efficient manner. We demonstrate the utility of constraint programming for permutation patterns by showing how we can easily and efficiently extend the known permutation counts for a conjecture involving the class of 1324-avoiding permutations. For this problem, we expand the enumeration of 1324-avoiding permutations with a fixed number of inversions to permutations of length 16, and show for the first time that a pattern emerges in the enumeration which matches a unique sequence in the Online Encyclopedia of Integer Sequences.
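
The abstract does not name the authors' solver or model; as a hypothetical illustration of the approach, the following sketch uses Google's OR-Tools CP-SAT to count 1324-avoiding permutations with a fixed number of inversions for small n, composing an all-different model with reified inversion and pattern constraints.

```python
# A minimal sketch (not the authors' model) of pattern avoidance as a CP
# problem, using OR-Tools CP-SAT: count 1324-avoiding permutations of
# length n with exactly k inversions.
from ortools.sat.python import cp_model

def count_1324_avoiders(n: int, k: int) -> int:
    model = cp_model.CpModel()
    p = [model.NewIntVar(0, n - 1, f"p{i}") for i in range(n)]
    model.AddAllDifferent(p)

    def lt_lit(x, y, name):
        # Boolean literal that is true iff x < y (x != y is guaranteed).
        b = model.NewBoolVar(name)
        model.Add(x < y).OnlyEnforceIf(b)
        model.Add(x > y).OnlyEnforceIf(b.Not())
        return b

    # Exactly k inversions: pairs i < j with p[i] > p[j].
    inv = [lt_lit(p[j], p[i], f"inv_{i}_{j}")
           for i in range(n) for j in range(i + 1, n)]
    model.Add(sum(inv) == k)

    # Avoid 1324: no positions i<j<a<l with p[i] < p[a] < p[j] < p[l];
    # at least one comparison in the chain must fail.
    for i in range(n):
        for j in range(i + 1, n):
            for a in range(j + 1, n):
                for l in range(a + 1, n):
                    tag = f"{i}_{j}_{a}_{l}"
                    lits = [lt_lit(p[i], p[a], "c1_" + tag),
                            lt_lit(p[a], p[j], "c2_" + tag),
                            lt_lit(p[j], p[l], "c3_" + tag)]
                    model.AddBoolOr([b.Not() for b in lits])

    class Counter(cp_model.CpSolverSolutionCallback):
        def __init__(self):
            super().__init__()
            self.count = 0
        def on_solution_callback(self):
            self.count += 1

    solver = cp_model.CpSolver()
    solver.parameters.enumerate_all_solutions = True
    counter = Counter()
    solver.Solve(model, counter)
    return counter.count
```

Adding a further property or statistic is one more loop of constraints over the same variables, which is the composability the paragraph above exploits.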

Subset Simulation is a Markov chain Monte Carlo method, initially conceived to compute small failure probabilities in structural reliability problems. This is done by iteratively sampling from nested subsets in the input space of a performance function. Subset Simulation has since been adapted as a sampler in other realms such as optimisation, Bayesian updating and history matching. In all of these contexts, it is not uncommon that either the geometry of the input domain or the nature of the corresponding performance function causes Subset Simulation to suffer from ergodicity problems. To address these problems, this paper proposes Branching Subset Simulation. The proposed framework dynamically partitions the input space, and recursively begins Branching Subset Simulation anew in each partition. It is shown that Branching Subset Simulation is less likely than Subset Simulation to suffer from ergodicity problems, and has improved sampling efficiency in the presence of multi-modality.
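
For context, here is a compact numpy sketch of standard Subset Simulation, the baseline that Branching Subset Simulation extends. Failure is taken as g(x) <= 0 under standard-normal inputs, and the level fraction p0 and random-walk proposal are conventional choices, not the paper's.

```python
# A minimal sketch of *standard* Subset Simulation, estimating
# P(g(x) <= 0) for a performance function g with standard-normal inputs.
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, rng=None, max_levels=20):
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal((n, dim))
    y = np.apply_along_axis(g, 1, x)
    p, n_seeds = 1.0, int(p0 * n)
    for _ in range(max_levels):
        # Intermediate threshold: the p0-quantile of current performances,
        # clipped at the true failure threshold 0.
        order = np.argsort(y)
        thresh = max(y[order[n_seeds - 1]], 0.0)
        p *= np.mean(y <= thresh)
        if thresh <= 0.0:
            return p
        # Regrow the population from the seeds with a Metropolis random
        # walk targeting the standard normal conditioned on y <= thresh.
        seeds_x = x[order[:n_seeds]]
        seeds_y = y[order[:n_seeds]]
        chains = n // n_seeds
        xs, ys = [], []
        for cx, cy in zip(seeds_x, seeds_y):
            cx = cx.copy()
            for _ in range(chains):
                prop = cx + rng.standard_normal(dim)
                # Metropolis ratio for a standard-normal target
                if rng.random() < np.exp(0.5 * (cx @ cx - prop @ prop)):
                    py_ = g(prop)
                    if py_ <= thresh:        # stay inside the subset
                        cx, cy = prop, py_
                xs.append(cx.copy()); ys.append(cy)
        x, y = np.array(xs), np.array(ys)
    return p
```

The ergodicity problems mentioned above arise when these chains cannot move between disconnected parts of an intermediate subset, which is what the branching-and-partitioning scheme is designed to mitigate.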

The Parameterized Inapproximability Hypothesis (PIH) asserts that no fixed-parameter tractable (FPT) algorithm can distinguish a satisfiable CSP instance, parameterized by the number of variables, from one where every assignment fails to satisfy an $\varepsilon$ fraction of constraints for some absolute constant $\varepsilon > 0$. PIH plays the role of the PCP theorem in parameterized complexity. However, PIH has only been established under Gap-ETH, a very strong assumption with an inherent gap. In this work, we prove PIH under the Exponential Time Hypothesis (ETH). This is the first proof of PIH from a gap-free assumption. Our proof is self-contained and elementary. We identify an ETH-hard CSP whose variables take vector values, and whose constraints are either linear or of a special parallel structure. Both kinds of constraints can be checked with constant soundness via a "parallel PCP of proximity" based on the Walsh-Hadamard code.
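
The parallel PCP of proximity itself is beyond an abstract-level sketch, but the Walsh-Hadamard machinery it builds on is standard. The following illustrative Python encodes a vector over F_2 as its Hadamard codeword and runs the classic BLR linearity test, the prototype of checking linear constraints with constant soundness; it is background, not the paper's reduction.

```python
# Background machinery: the Walsh-Hadamard code and the BLR linearity test.
import itertools, random

def wh_encode(a):
    """Walsh-Hadamard codeword of a in F_2^n: the table of all <a, x> mod 2."""
    n = len(a)
    return {x: sum(ai & xi for ai, xi in zip(a, x)) % 2
            for x in itertools.product((0, 1), repeat=n)}

def blr_test(f, n, trials=100, seed=0):
    """Accepts w.h.p. iff f is (close to) linear: f(x) + f(y) = f(x + y)."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = tuple(rng.randint(0, 1) for _ in range(n))
        y = tuple(rng.randint(0, 1) for _ in range(n))
        xy = tuple((a + b) % 2 for a, b in zip(x, y))
        if (f[x] + f[y]) % 2 != f[xy]:
            return False
    return True
```

Each BLR trial reads only three positions of the table, which is the locality property that PCP-style constraint checking relies on.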

We propose a novel algorithm for data augmentation in nonlinear over-parametrized regression. Our algorithm borrows from the literature on causality and extends the recently proposed Anchor regression (AR) method to data augmentation, in contrast to the current state-of-the-art domain-agnostic solutions, which rely on the Mixup literature. Our Anchor Data Augmentation (ADA) uses several replicas of the samples modified by AR to provide more training examples, leading to more robust regression predictions. We apply ADA to linear and nonlinear regression problems using neural networks. ADA is competitive with state-of-the-art C-Mixup solutions.
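
A rough numpy sketch of the anchor transformation as I read Anchor regression: replicas are generated by an affine shift along the projection onto the anchor subspace, one replica per value of gamma. The authors' exact construction may differ; the anchor matrix A and the gamma grid here are illustrative.

```python
# A sketch of AR-style replicas for data augmentation (assumed reading of
# ADA, not a verified reproduction of the authors' algorithm).
import numpy as np

def anchor_augment(X, y, A, gammas=(0.5, 1.0, 2.0, 4.0)):
    """Return augmented (X', y'), stacking one transformed replica per gamma.

    X: (n, d) features, y: (n,) targets, A: (n, q) anchor matrix.
    """
    # Projection onto the column space of the anchor matrix A
    P = A @ np.linalg.pinv(A.T @ A) @ A.T
    Xs, ys = [], []
    for g in gammas:
        # Anchor transform: shrink (g < 1) or inflate (g > 1) the component
        # of the data explained by the anchors.
        W = np.eye(len(X)) + (np.sqrt(g) - 1.0) * P
        Xs.append(W @ X)
        ys.append(W @ y)
    return np.vstack(Xs), np.concatenate(ys)
```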

We provide a simple and general solution to the fundamental open problem of inaccurate uncertainty quantification of Bayesian inference in misspecified or approximate models, and of generalized Bayesian posteriors more generally. While existing solutions are based on explicit Gaussian posterior approximations, or computationally onerous post-processing procedures, we demonstrate that correct uncertainty quantification can be achieved by substituting the usual posterior with an alternative posterior that conveys the same information. This solution applies to both likelihood-based and loss-based posteriors, and we formally demonstrate the reliable uncertainty quantification of this approach. The new approach is demonstrated through a range of examples, including generalized linear models, and doubly intractable models.
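
For contrast with the post-processing procedures mentioned above, here is a numpy sketch of the flavor of one such existing baseline: an affine "sandwich" recalibration that rescales posterior draws so their covariance matches a robust sandwich estimate. It is illustrative background, not the paper's proposed alternative posterior; H (the average Hessian of the loss) and J (the covariance of the score) are assumed to be supplied.

```python
# Sketch of an existing post-processing baseline: affinely map posterior
# draws so their covariance equals the sandwich covariance H^-1 J H^-1.
import numpy as np

def sandwich_adjust(draws, H, J):
    """draws: (m, d) posterior samples; H: mean loss Hessian; J: score cov."""
    center = draws.mean(axis=0)
    C_post = np.cov(draws, rowvar=False)          # posterior covariance
    Hinv = np.linalg.inv(H)
    C_sand = Hinv @ J @ Hinv                      # sandwich covariance
    # Affine map M with M C_post M^T = C_sand, via Cholesky factors
    M = np.linalg.cholesky(C_sand) @ np.linalg.inv(np.linalg.cholesky(C_post))
    return center + (draws - center) @ M.T
```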

We prove an inverse approximation theorem for the approximation of nonlinear sequence-to-sequence relationships using recurrent neural networks (RNNs). This is a so-called Bernstein-type result in approximation theory, which deduces properties of a target function under the assumption that it can be effectively approximated by a hypothesis space. In particular, we show that nonlinear sequence relationships that can be stably approximated by nonlinear RNNs must have an exponentially decaying memory structure, a notion that can be made precise. This extends the previously identified curse of memory in linear RNNs to the general nonlinear setting, and quantifies the essential limitations of the RNN architecture for learning sequential relationships with long-term memory. Based on the analysis, we propose a principled reparameterization method to overcome the limitations. Our theoretical results are confirmed by numerical experiments. The code is available at https://github.com/radarFudan/Curse-of-memory.
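
The curse of memory is easiest to see in the linear case; the numpy snippet below shows the impulse response of a linear RNN decaying like rho(W)^t once the spectral radius rho(W) is below one. The paper's contribution is the nonlinear analogue of this statement and a reparameterization that controls the decay; this demo only illustrates the linear phenomenon.

```python
# Exponential memory decay of a linear RNN h_t = W h_{t-1}: the influence
# of an impulse at time 0 shrinks like rho(W)^t when rho(W) < 1.
import numpy as np

rng = np.random.default_rng(0)
d = 32
W = rng.standard_normal((d, d))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9

h = rng.standard_normal(d)                  # impulse at time 0
for t in range(1, 61):
    h = W @ h
    if t % 10 == 0:
        print(t, np.linalg.norm(h))         # ~ C * 0.9**t: exponential decay
```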

We present λert, a type theory supporting refinement types with explicit proofs. Instead of solving refinement constraints with an SMT solver like DML and Liquid Haskell, our system requires and permits programmers to embed proofs of properties within the program text, letting us support a rich logic of properties including quantifiers and induction. We show that the type system is sound by showing that every refined program erases to a simply-typed program, and by means of a denotational semantics, we show that every erased program has all of the properties demanded by its refined type. All of our proofs are formalised in Lean 4.
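
λert's concrete syntax is not given in the abstract; as a rough analogue of refinement types with explicit, programmer-written proofs, here is the corresponding idiom in Lean 4 (the language the paper's metatheory is formalized in). The names here are invented for illustration.

```lean
-- A "refinement type" as a subtype whose values carry an explicit proof.
abbrev Pos := { n : Nat // 0 < n }   -- naturals n refined by 0 < n

-- The second component is a proof written in the program text, not
-- discharged by an external SMT solver.
def one : Pos := ⟨1, by omega⟩

def double (p : Pos) : Pos :=
  ⟨2 * p.val, by have := p.property; omega⟩
```

Erasing the proof components of such programs leaves simply-typed code, which mirrors the erasure-based soundness argument described above.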

Koopman representations aim to learn features of nonlinear dynamical systems (NLDS) which lead to linear dynamics in the latent space. Theoretically, such features can be used to simplify many problems in modeling and control of NLDS. In this work we study autoencoder formulations of this problem, and different ways they can be used to model dynamics, specifically for future state prediction over long horizons. We discover several limitations of predicting future states in the latent space and propose an inference-time mechanism, which we refer to as Periodic Reencoding, for faithfully capturing long-term dynamics. We justify this method both analytically and empirically via experiments in low- and high-dimensional NLDS.
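
A schematic sketch of Periodic Reencoding as the abstract describes it: roll the dynamics forward in the latent space, but every few steps decode back to state space and re-encode, keeping the latent trajectory tied to the learned representation. The encoder, decoder, and latent map K below are placeholders for trained components, and the period is a free parameter.

```python
# Latent rollout with Periodic Reencoding (schematic; encode/decode stand
# in for a trained Koopman autoencoder, K for its linear latent dynamics).
import numpy as np

def rollout(encode, decode, K, x0, horizon, period=10):
    xs = []
    z = encode(x0)
    for t in range(1, horizon + 1):
        z = K @ z                     # linear (Koopman) latent step
        x = decode(z)
        xs.append(x)
        if t % period == 0:
            z = encode(x)             # periodic reencoding step
    return np.stack(xs)
```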

Disentangled Representation Learning (DRL) aims to learn a model capable of identifying and disentangling the underlying factors hidden in observable data in representation form. Separating the underlying factors of variation into variables with semantic meaning benefits the learning of explainable representations of data, imitating the meaningful understanding process of humans when observing an object or relation. As a general learning strategy, DRL has demonstrated its power in improving model explainability, controllability, robustness, and generalization capacity in a wide range of scenarios such as computer vision, natural language processing, and data mining. In this article, we comprehensively review DRL from various aspects, including motivations, definitions, methodologies, evaluations, applications, and model designs. We discuss works on DRL based on two well-recognized definitions, i.e., the Intuitive Definition and the Group Theory Definition. We further categorize the methodologies for DRL into five groups: Traditional Statistical Approaches, Variational Auto-encoder Based Approaches, Generative Adversarial Networks Based Approaches, Hierarchical Approaches, and Other Approaches. We also analyze principles for designing different DRL models that may benefit different tasks in practical applications. Finally, we point out challenges in DRL as well as potential research directions deserving future investigation. We believe this work may provide insights for promoting DRL research in the community.
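
As a concrete instance of the Variational Auto-encoder Based Approaches category, here is a minimal sketch of the beta-VAE objective, in which a weight beta > 1 on the KL term pushes the latent code toward the factored prior and thereby encourages disentanglement. The reconstruction term is taken as a sum-of-squares loss for simplicity.

```python
# Minimal beta-VAE loss: reconstruction + beta * KL(q(z|x) || N(0, I)).
import torch

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    recon = torch.nn.functional.mse_loss(x_recon, x, reduction="sum")
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```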

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
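
The building block the architecture dispenses recurrence in favor of is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A compact numpy rendering of that formula:

```python
# Scaled dot-product attention, the core operation of the Transformer.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted value average
```

The full model stacks several such heads in parallel (multi-head attention) inside the encoder-decoder layers described above.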
