
Multiplicative linear logic is a very well studied formal system, and most such studies concern the one-sided sequent calculus. In this paper we examine in detail the existing translations between a deep inference system and the standard one-sided sequent calculus, give a simplified translation, and formally prove that a standard approach to modelling is invariant under all of these translations. En route we establish a necessary condition for provable sequents, relating the number of pars and tensors in a formula, that seems to be missing from the literature.
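
The abstract leaves the condition unstated; as a hedged illustration of what such a counting condition can look like, the invariant below holds for cut-free, unit-free one-sided MLL and is checked by induction on the three rules (the paper's exact statement may differ).

```latex
% Illustrative counting invariant (our notation, not necessarily the paper's
% exact statement); \parr requires \usepackage{cmll}.
%
%   axiom  \vdash a, a^{\perp}: two formulas, no connectives;
%   tensor: formula count drops by 1, \otimes-count rises by 1;
%   par:    formula count drops by 1, \parr-count rises by 1;
% so (#par - #tensor + #formulas) is preserved by every rule and equals 2:
\[
  \vdash \Gamma \text{ provable}
  \;\Longrightarrow\;
  \#\parr(\Gamma) \;-\; \#\otimes(\Gamma) \;=\; 2 - |\Gamma| .
\]
% In particular, a single provable formula has exactly one more par than tensor.
```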

Related content

Autoformalization involves automatically translating informal mathematics into formal theorems and proofs that are machine-verifiable. Euclidean geometry provides an interesting and controllable domain for studying autoformalization. In this paper, we introduce a neuro-symbolic framework for autoformalizing Euclidean geometry, which combines domain knowledge, SMT solvers, and large language models (LLMs). One challenge in Euclidean geometry is that informal proofs rely on diagrams, leaving gaps in the text that are hard to formalize. To address this issue, we use theorem provers to fill in such diagrammatic information automatically, so that the LLM only needs to autoformalize the explicit textual steps, simplifying its task. We also provide automatic semantic evaluation for autoformalized theorem statements. We construct LeanEuclid, an autoformalization benchmark consisting of problems from Euclid's Elements and the UniGeo dataset, formalized in the Lean proof assistant. Experiments with GPT-4 and GPT-4V show the capabilities and limitations of state-of-the-art LLMs in autoformalizing geometry problems. The data and code are available at //github.com/loganrjmurphy/LeanEuclid.
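
As a hedged sketch of how such a pipeline can be organized (all class and method names below, e.g. TextLLM.formalize_steps and GapProver.fill_gap, are hypothetical placeholders rather than the LeanEuclid interfaces):

```python
# All class and method names here (TextLLM, GapProver, etc.) are hypothetical
# placeholders for illustration, not the LeanEuclid interfaces.

class TextLLM:
    def formalize_steps(self, informal_text: str) -> list[str]:
        # Placeholder: a real system would prompt GPT-4 / GPT-4V here.
        return [line.strip() for line in informal_text.splitlines() if line.strip()]

class GapProver:
    def check(self, proof: list[str]) -> bool:
        # Placeholder: a real system would call an SMT-backed prover.
        return True

    def fill_gap(self, proof: list[str]) -> str:
        return "-- diagrammatic fact supplied by the prover"

def autoformalize(informal_text: str, llm: TextLLM, prover: GapProver) -> list[str]:
    """The LLM formalizes only the explicit textual steps; whenever a step
    does not follow, the gap is presumed diagrammatic and the prover fills it."""
    proof: list[str] = []
    for step in llm.formalize_steps(informal_text):
        proof.append(step)
        if not prover.check(proof):
            proof.append(prover.fill_gap(proof))
    return proof

print(autoformalize("angle ABC equals angle ACB", TextLLM(), GapProver()))
```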

Unoriented surface reconstruction based on the Gauss formula has attracted much attention due to its elegant mathematical formulation and excellent performance. However, the isotropic nature of the formulation limits its capacity to leverage the anisotropic information within the point cloud. In this work, we propose a novel anisotropic formulation by introducing a convection term into the original Laplace operator. By choosing different velocity vectors, the anisotropic features can be exploited to construct more effective linear equations. Moreover, an adaptive selection strategy for the velocity vector is introduced to further enhance the orientation and reconstruction of thin structures. Extensive experiments demonstrate that our method achieves state-of-the-art performance and handles various challenging cases, especially models with thin structures or small holes. The source code will be released on GitHub.
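
Schematically, and in our own notation rather than the paper's, the structural change is the replacement of the isotropic Laplacian by a convection-diffusion operator:

```latex
% Schematic form of the modification (our notation): the isotropic Laplace
% operator is augmented with a convection term driven by a velocity field v,
\[
  \Delta u \;\longrightarrow\; \Delta u + \mathbf{v} \cdot \nabla u ,
\]
% which breaks isotropy because the operator now distinguishes the direction
% of v at each point; v = 0 recovers the original formulation, and the
% adaptive strategy chooses v(x) pointwise, e.g. along thin structures.
```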

The success of the text-guided diffusion model has inspired the development and release of numerous powerful diffusion models within the open-source community. These models are typically fine-tuned on various expert datasets, showcasing diverse denoising capabilities. Leveraging multiple high-quality models to produce stronger generative capability is valuable, but has not been extensively studied. Existing methods primarily adopt parameter-merging strategies to produce a new static model. However, they overlook the fact that the divergent denoising capabilities of the models may change dynamically across different states, such as different prompts, initial noises, denoising steps, and spatial locations. In this paper, we propose a novel ensembling method, Adaptive Feature Aggregation (AFA), which dynamically adjusts the contributions of multiple models at the feature level according to these states (i.e., prompts, initial noises, denoising steps, and spatial locations), thereby retaining the advantages of multiple diffusion models while suppressing their disadvantages. Specifically, we design a lightweight Spatial-Aware Block-Wise (SABW) feature aggregator that adaptively aggregates the block-wise intermediate features from multiple U-Net denoisers into a unified one. The core idea is to dynamically produce an individual attention map for each model's features by comprehensively considering the various states. Notably, only SABW is trainable, with about 50 million parameters, while the other models remain frozen. Both quantitative and qualitative experiments demonstrate the effectiveness of our proposed Adaptive Feature Aggregation method. The code is available at //github.com/tenvence/afa/.
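
A minimal sketch of a spatial-aware block-wise aggregator in this spirit, conditioning only on the features themselves for brevity (the paper's SABW also accounts for prompts, noises, and timesteps); sizes and structure are illustrative, not the released code:

```python
import torch
import torch.nn as nn

class BlockAggregator(nn.Module):
    """Fuse one U-Net block's features from several frozen denoisers."""

    def __init__(self, channels: int, num_models: int):
        super().__init__()
        self.num_models = num_models
        # A 1x1 conv scores each model's features at every spatial location.
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        assert len(feats) == self.num_models
        # feats: N tensors of shape (B, C, H, W) -> logits (N, B, 1, H, W).
        logits = torch.stack([self.score(f) for f in feats], dim=0)
        attn = torch.softmax(logits, dim=0)     # per-pixel weights over models
        return (attn * torch.stack(feats, dim=0)).sum(dim=0)  # (B, C, H, W)

# Usage: one trainable aggregator per block; the denoisers stay frozen.
agg = BlockAggregator(channels=320, num_models=2)
f1, f2 = torch.randn(1, 320, 32, 32), torch.randn(1, 320, 32, 32)
print(agg([f1, f2]).shape)  # torch.Size([1, 320, 32, 32])
```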

Recent theoretical developments in coset coding theory have provided continuous-valued functions which give the equivocation and maximum likelihood (ML) decoding probability of coset secrecy codes. In this work, we develop a method for incorporating these functions, along with a complex set of constraints, into a gradient descent optimization algorithm. This algorithm employs a movement cost function and trigonometric update step to ensure that the continuous-valued code definition vector ultimately reaches a value which yields a realizable coset code. This algorithm is used to produce coset codes with blocklength up to a few thousand. These codes were compared against published codes, including both short-blocklength and capacity-achieving constructions. For most code sizes, codes generated using gradient descent outperformed all others, especially capacity-achieving constructions, which performed significantly worse than randomly-generated codes at short blocklength.
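
The paper's movement cost and update rule are not reproduced here; the toy sketch below illustrates one way a trigonometric parametrization keeps a continuous code-definition vector inside [0, 1] during gradient descent, so that it can be rounded to a realizable binary vector (the quadratic objective is a stand-in, not the equivocation/ML functions from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, np.pi, size=16)   # unconstrained angles
target = rng.integers(0, 2, size=16)       # stand-in objective: match a 0/1 vector

lr = 0.1
for _ in range(500):
    x = np.sin(theta) ** 2                 # x stays in [0, 1] by construction
    grad_x = 2.0 * (x - target)            # gradient of the stand-in objective
    grad_theta = grad_x * 2.0 * np.sin(theta) * np.cos(theta)  # chain rule
    theta -= lr * grad_theta

code_vector = (np.sin(theta) ** 2).round().astype(int)  # realizable 0/1 vector
print(code_vector, target, sep="\n")
```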

Change point detection (CPD) and anomaly detection (AD) are essential techniques in various fields for identifying abrupt changes or abnormal data instances. However, existing methods are often constrained to univariate data, face scalability challenges with large datasets due to computational demands, and experience reduced performance with high-dimensional or intricate data, as well as hidden anomalies. Furthermore, they often lack interpretability and adaptability to domain-specific knowledge, which limits their versatility across different fields. In this work, we propose a deep learning-based CPD/AD method called Probabilistic Predictive Coding (PPC) that jointly learns to encode sequential data into low-dimensional latent space representations and to predict the subsequent data representations as well as the corresponding prediction uncertainties. The model parameters are optimized with maximum likelihood estimation by comparing these predictions with the true encodings. At application time, the true and predicted encodings are used to determine the probability of conformity, an interpretable and meaningful anomaly score. Furthermore, our approach has linear time complexity, which avoids scalability issues, and the method can easily be adjusted to a wide range of data types and intricate applications. We demonstrate the effectiveness and adaptability of our proposed method on synthetic time series experiments, image data, and real-world magnetic resonance spectroscopic imaging data.
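
A minimal sketch of this training scheme, with illustrative layer sizes and a Gaussian negative log-likelihood standing in for the paper's exact conformity probability:

```python
import torch
import torch.nn as nn

# Hedged sketch of the PPC idea: encode each step to a latent, predict the
# next latent as a Gaussian, train by maximum likelihood, and score the
# observed next encoding under the predictive distribution.

class PPC(nn.Module):
    def __init__(self, in_dim: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Linear(in_dim, latent_dim)
        self.predictor = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.mean_head = nn.Linear(latent_dim, latent_dim)
        self.logvar_head = nn.Linear(latent_dim, latent_dim)

    def forward(self, x):                    # x: (B, T, in_dim)
        z = self.encoder(x)                  # true encodings (B, T, latent)
        h, _ = self.predictor(z[:, :-1])     # predict z_{t+1} from z_{<=t}
        return z[:, 1:], self.mean_head(h), self.logvar_head(h)

def nll(z_true, mean, logvar):
    # Gaussian negative log-likelihood (the maximum likelihood objective).
    return 0.5 * (logvar + (z_true - mean) ** 2 / logvar.exp()).mean()

model = PPC(in_dim=4)
x = torch.randn(2, 20, 4)
z_true, mean, logvar = model(x)
loss = nll(z_true, mean, logvar)
loss.backward()
# At test time, a low predictive density for the observed encoding (high
# per-step NLL) serves as the anomaly / change-point score.
```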

Standard common factor models, such as the linear normal factor model, rely on strict parametric assumptions, which require rigorous model-data fit assessment to prevent fallacious inferences. However, overall goodness-of-fit diagnostics conventionally used in factor analysis do not offer diagnostic information on where the misfit originates. In the current work, we propose a new fit assessment framework for common factor models by extending the theory of generalized residuals (Haberman & Sinharay, 2013). This framework allows for the flexible adaptation of test statistics to identify various sources of misfit. In addition, the resulting goodness-of-fit tests provide more informative diagnostics, as the evaluation is performed conditionally on latent variables. Several examples of test statistics suitable for assessing various model assumptions are presented within this framework, and their performance is evaluated by simulation studies and a real data example.
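
As a hedged toy illustration of a generalized-residual-style check (using the true latent variable for simplicity, where a real analysis would condition on estimated or integrated-out latents, and with a stand-in statistic rather than those proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
f = rng.normal(size=n)                       # latent factor
lam, psi = 0.8, 0.36
y = lam * f + rng.normal(scale=np.sqrt(psi), size=n)   # one indicator

resid = y - lam * f                          # residual given the latent
# Evaluate fit conditionally on the latent variable: within bins of f, the
# residual mean should be ~0 if the linear-normal assumption holds.
edges = np.quantile(f, np.linspace(0, 1, 6))
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (f >= lo) & (f <= hi)
    z = resid[m].mean() / (resid[m].std(ddof=1) / np.sqrt(m.sum()))
    print(f"f in [{lo:+.2f},{hi:+.2f}]  z = {z:+.2f}")   # |z| large => misfit
```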

Algorithmic stability is a central notion in learning theory that quantifies the sensitivity of an algorithm to small changes in the training data. If a learning algorithm satisfies certain stability properties, many important downstream implications follow, such as generalization, robustness, and reliable predictive inference. Verifying that stability holds for a particular algorithm is therefore an important and practical question. However, recent results establish that testing the stability of a black-box algorithm is impossible, given limited data from an unknown distribution, in settings where the data lies in an uncountably infinite space (such as real-valued data). In this work, we extend this question to a far broader range of settings, where the data may lie in any space -- for example, categorical data. We develop a unified framework for quantifying the hardness of testing algorithmic stability, which establishes that across all settings, if the available data is limited then exhaustive search is essentially the only universally valid mechanism for certifying algorithmic stability. Since any practical test of stability is subject to computational constraints, exhaustive search is infeasible, and this implies fundamental limits on our ability to test the stability of a black-box algorithm.
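
A toy illustration, ours rather than the paper's construction, of why even a small finite categorical space forces exhaustive enumeration: certifying a deliberately crude replace-one-point notion of stability already requires visiting every dataset and every perturbation.

```python
from itertools import product

ALPHABET = ["a", "b", "c"]          # finite categorical data space
N = 3                               # dataset size

def algorithm(dataset):             # a black-box "learner": predicts the mode
    return max(set(dataset), key=dataset.count)

def certify_stability(max_changes: int = 1) -> bool:
    """Exhaustively verify that, for every dataset, replacing any single
    point changes the output on at most `max_changes` perturbations."""
    for dataset in product(ALPHABET, repeat=N):
        base = algorithm(list(dataset))
        changes = 0
        for i, v in product(range(N), ALPHABET):
            perturbed = list(dataset)
            perturbed[i] = v
            if algorithm(perturbed) != base:
                changes += 1
        if changes > max_changes:
            return False
    return True

# |ALPHABET|**N datasets, each with N*|ALPHABET| perturbations: the cost of
# this certificate grows exponentially in N even for categorical data.
print(certify_stability())
```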

Recent work has proposed the linear representation hypothesis: that language models perform computation by manipulating one-dimensional representations of concepts ("features") in activation space. In contrast, we explore whether some language model representations may be inherently multi-dimensional. We begin by developing a rigorous definition of irreducible multi-dimensional features based on whether they can be decomposed into either independent or non-co-occurring lower-dimensional features. Motivated by these definitions, we design a scalable method that uses sparse autoencoders to automatically find multi-dimensional features in GPT-2 and Mistral 7B. These auto-discovered features include strikingly interpretable examples, e.g. circular features representing days of the week and months of the year. We identify tasks where these exact circles are used to solve computational problems involving modular arithmetic in days of the week and months of the year. Finally, through intervention experiments on Mistral 7B and Llama 3 8B, we provide evidence that these circular features are indeed the fundamental unit of computation in these tasks, and we find further circular representations by breaking down the hidden states for these tasks into interpretable components.
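
A toy illustration of why a circular feature makes modular arithmetic linear (synthetic coordinates, not extracted model activations): days sit on a circle in a 2-D subspace, and adding k days is a fixed rotation.

```python
import numpy as np

days = np.arange(7)                                         # 0 = Sunday, ...
angle = 2 * np.pi * days / 7
circle = np.stack([np.cos(angle), np.sin(angle)], axis=1)   # (7, 2) feature

def add_days(rep: np.ndarray, k: int) -> np.ndarray:
    # Adding k days is rotation by k * (2*pi/7) in the circular subspace.
    t = 2 * np.pi * k / 7
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return rep @ rot.T

# "Monday + 9 days" decodes to Wednesday by nearest point on the circle:
shifted = add_days(circle[1], 9)
print(days[np.argmin(np.linalg.norm(circle - shifted, axis=1))])  # 3
```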

We study the identification of causal effects, motivated by two improvements to identifiability which can be attained if one knows that some variables in a causal graph are functionally determined by their parents (without needing to know the specific functions). First, an unidentifiable causal effect may become identifiable when certain variables are functional. Second, certain functional variables can be excluded from being observed without affecting the identifiability of a causal effect, which may significantly reduce the number of needed variables in observational data. Our results are largely based on an elimination procedure which removes functional variables from a causal graph while preserving key properties in the resulting causal graph, including the identifiability of causal effects.
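
A hedged sketch of the elimination idea on directed edges only (the paper's procedure also preserves properties involving latent or bidirected structure, which this toy version ignores): removing a functional variable rewires its parents to its children.

```python
import networkx as nx

def eliminate_functional(g: nx.DiGraph, w) -> nx.DiGraph:
    """Remove functional variable w, connecting each parent to each child
    so the dependence that w mediated is preserved in the reduced graph."""
    h = g.copy()
    for p in g.predecessors(w):
        for c in g.successors(w):
            h.add_edge(p, c)
    h.remove_node(w)
    return h

g = nx.DiGraph([("X", "W"), ("Z", "W"), ("W", "Y")])
print(sorted(eliminate_functional(g, "W").edges()))
# [('X', 'Y'), ('Z', 'Y')]
```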

Proving compositionality of behavioral equivalence on state-based systems with respect to algebraic operations is a classical and widely studied problem. We study a categorical formulation of this problem, where operations on state-based systems modeled as coalgebras can be elegantly captured through distributive laws between functors. To prove compositionality, it then suffices to show that this distributive law lifts from sets to relations, giving an explanation of how behavioral equivalence on smaller systems can be combined to obtain behavioral equivalence on the composed system. In this paper, we refine this approach by focusing on so-called codensity lifting of functors, which gives a very generic presentation of various notions of (bi)similarity as well as quantitative notions such as behavioral metrics on probabilistic systems. The key idea is to use codensity liftings both at the level of algebras and coalgebras, using a new generalization of the codensity lifting. The problem of lifting distributive laws then reduces to the abstract problem of constructing distributive laws between codensity liftings, for which we propose a simplified sufficient condition. Our sufficient condition instantiates to concrete proof methods for compositionality of algebraic operations on various types of state-based systems. We instantiate our results to prove compositionality of qualitative and quantitative properties of deterministic automata. We also explore the limits of our approach by including an example of probabilistic systems, where it is unclear whether the sufficient condition holds, and instead we use our setting to give a direct proof of compositionality. ...
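
Schematically, in standard bialgebraic notation rather than the paper's exact setting, the setup is a distributive law between a syntax functor and a behaviour functor, and compositionality amounts to lifting it to relations:

```latex
% Schematic setup (standard bialgebraic notation, not the paper's exact
% statement): \Sigma is the syntax/operations functor, F the behaviour
% functor, and their interaction a distributive law
\[
  \lambda \colon \Sigma F \Longrightarrow F \Sigma .
\]
% Compositionality then amounts to lifting \lambda to the category of
% relations: given liftings \overline{\Sigma} and \overline{F} (here obtained
% by codensity lifting), one seeks
% \overline{\lambda} \colon \overline{\Sigma}\,\overline{F} \Rightarrow
% \overline{F}\,\overline{\Sigma} above \lambda, which makes behavioural
% equivalence (or a behavioural metric) a congruence for the operations.
```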
