
We define term rewriting systems on the vertices and faces of nestohedra, and show that the former are confluent and terminating. While the associated poset on vertices generalizes Barnard--McConville's flip order for graph-associahedra, the preorder on faces conjecturally generalizes the facial weak order for permutahedra. Moreover, we define and study contextual families of nestohedra, whose local confluence diagrams satisfy a certain uniformity condition. Among them are the associahedra and operahedra, for which the proofs of confluence of the associated rewriting systems reproduce proofs of categorical coherence theorems for monoidal categories and categorified operads.
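
As a concrete instance of the associahedron case invoked in the last sentence, the following Python sketch (our own encoding of parenthesizations as nested tuples, not taken from the paper) implements the reassociation rule $(x \cdot y) \cdot z \to x \cdot (y \cdot z)$ and checks by brute force on small sizes that it terminates and that every parenthesization reaches the same normal form, the right comb.

```python
from itertools import product

def trees(n):
    """All binary trees (parenthesizations) with n leaves, as nested tuples."""
    if n == 1:
        yield 0
        return
    for k in range(1, n):
        for left, right in product(trees(k), trees(n - k)):
            yield (left, right)

def rewrites(t):
    """All one-step rewrites (x*y)*z -> x*(y*z), applied at any position."""
    if not isinstance(t, tuple):
        return
    l, r = t
    if isinstance(l, tuple):           # rule applies at the root
        yield (l[0], (l[1], r))
    for l2 in rewrites(l):             # rule applies inside the left subtree
        yield (l2, r)
    for r2 in rewrites(r):             # rule applies inside the right subtree
        yield (l, r2)

def normal_form(t):
    """Rewrite until no rule applies; this terminates because every step
    strictly decreases the total number of leaves lying in left subtrees."""
    while True:
        nxt = next(rewrites(t), None)
        if nxt is None:
            return t
        t = nxt

# Sanity check: unique normal forms on all trees with up to 8 leaves.
for n in range(1, 9):
    assert len({normal_form(t) for t in trees(n)}) == 1
print("every parenthesization rewrites to the unique right comb")
```

Termination together with uniqueness of normal forms is what the paper's confluence arguments establish in far greater generality; the brute-force check above is only a sanity test on the classical associativity rule.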

Related content

We revisit and generalize inequalities for the summatory function of the sum of digits in a given integer base. We prove that several known results can be deduced from a theorem in a 2023 paper by Mohanty, Greenbury, Sarkany, Narayanan, Dingle, Ahnert, and Louis, whose primary focus is maximal mutational robustness in genotype-phenotype maps.
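
For orientation, here is a minimal Python sketch (function names ours) of the summatory function in question, with a sanity check against the elementary closed form obtained by summing over a full block of $b^k$ integers, where each of the $k$ digit positions takes every value $0,\dots,b-1$ equally often.

```python
def digit_sum(m: int, b: int) -> int:
    """Sum of the base-b digits of m."""
    s = 0
    while m:
        m, r = divmod(m, b)
        s += r
    return s

def summatory(N: int, b: int) -> int:
    """S_b(N) = sum_{m < N} s_b(m), computed naively."""
    return sum(digit_sum(m, b) for m in range(N))

# Closed form at powers of the base: S_b(b^k) = k * (b - 1) / 2 * b^k.
for b in (2, 3, 10):
    for k in (1, 2, 3):
        assert summatory(b ** k, b) == k * (b - 1) * b ** k // 2
print("closed form at powers of the base verified")
```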

Whether or not the Kronecker coefficients of the symmetric group count some set of combinatorial objects is a longstanding open question. In this work we show that a given Kronecker coefficient is proportional to the rank of a projector that can be measured efficiently using a quantum computer. In other words, a Kronecker coefficient counts the dimension of the vector space spanned by the accepting witnesses of a QMA verifier, where QMA is the quantum analogue of NP. This implies that approximating the Kronecker coefficients to within a given relative error is not harder than a certain natural class of quantum approximate counting problems that captures the complexity of estimating thermal properties of quantum many-body systems. A second consequence is that deciding positivity of Kronecker coefficients is contained in QMA, complementing a recent NP-hardness result of Ikenmeyer, Mulmuley and Walter. We obtain similar results for the related problem of approximating row sums of the character table of the symmetric group. Finally, we discuss an efficient quantum algorithm that approximates normalized Kronecker coefficients to inverse-polynomial additive error.
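
For context, a Kronecker coefficient $g(\lambda,\mu,\nu)$ of the symmetric group $S_n$ is the multiplicity of the irreducible module $S^{\nu}$ in the tensor product $S^{\lambda} \otimes S^{\mu}$, and admits the classical character-sum expression (standard material, not a formula introduced in the paper):
\[
g(\lambda,\mu,\nu)
  \;=\; \dim \operatorname{Hom}_{S_n}\!\bigl(S^{\nu},\, S^{\lambda} \otimes S^{\mu}\bigr)
  \;=\; \frac{1}{n!} \sum_{\sigma \in S_n} \chi^{\lambda}(\sigma)\,\chi^{\mu}(\sigma)\,\chi^{\nu}(\sigma),
\]
where $\lambda,\mu,\nu$ are partitions of $n$ and $\chi^{\lambda}$ denotes the irreducible character indexed by $\lambda$. It is this nonnegative integer whose combinatorial interpretation, approximation, and positivity are discussed above.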

The notion of Laplacian of a graph can be generalized to simplicial complexes and hypergraphs, and contains information on the topology of these structures. Even for a graph, considering associated simplicial complexes is interesting for understanding its shape. Whereas the Laplacian of a graph has a simple probabilistic interpretation as the generator of a continuous-time Markov chain on the graph, things are not so direct when considering simplicial complexes. We define here new Markov chains on simplicial complexes. For a given order~$k$, the state space is the set of $k$-cycles, that is, chains of $k$-simplices with null boundary. This new framework is a natural generalization of the canonical Markov chains on graphs. We show that the generator of our Markov chain is the upper Laplacian defined in algebraic topology for discrete structures. We establish several key properties of this new process: in particular, when the number of vertices is finite, the Markov chain is positive recurrent. This result is not trivial, since the cycles can loop over themselves an unbounded number of times. We study the diffusive limits when the simplicial complexes under scrutiny are a sequence of ever finer triangulations of the flat torus. Using the analogy between singular and Hodge homologies, we express this limit as a process with values in the space of currents. The proof of tightness and the identification of the limiting martingale problem make use of the flat norm and of a careful control of the error terms in the convergence of the generator. Uniqueness of the solution to the martingale problem is left open. An application to hole detection is carried out.
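
To make the algebraic object concrete, the following Python sketch (a toy example of ours, not the paper's construction) builds signed boundary matrices for a small simplicial complex and forms the upper Laplacian $L_k^{\mathrm{up}} = \partial_{k+1}\partial_{k+1}^{\top}$, the operator the abstract identifies as the generator of its Markov chain on $k$-cycles.

```python
import numpy as np

# A small complex: a filled triangle {0,1,2} glued to an empty triangle {1,2,3}.
vertices  = [0, 1, 2, 3]
edges     = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
triangles = [(0, 1, 2)]

def boundary(simplices, faces):
    """Signed boundary matrix sending k-simplices (columns) to (k-1)-faces (rows)."""
    B = np.zeros((len(faces), len(simplices)))
    index = {f: i for i, f in enumerate(faces)}
    for j, s in enumerate(simplices):
        for i in range(len(s)):
            face = s[:i] + s[i + 1:]      # drop the i-th vertex, sign (-1)^i
            B[index[face], j] = (-1) ** i
    return B

B1 = boundary(edges, [(v,) for v in vertices])   # edges -> vertices
B2 = boundary(triangles, edges)                  # triangles -> edges

L0_up = B1 @ B1.T   # upper Laplacian on 0-chains = usual graph Laplacian D - A
L1_up = B2 @ B2.T   # upper Laplacian on 1-chains

print(np.allclose(B1 @ B2, 0))   # the boundary of a boundary vanishes
print(L0_up.astype(int))         # degree matrix minus adjacency matrix
```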

This manuscript derives locally weighted ensemble Kalman methods from the point of view of ensemble-based function approximation. This is done by using pointwise evaluations to build up a local linear or quadratic approximation of a function, tapering off the effect of distant particles via local weighting. This introduces a candidate method (the locally weighted Ensemble Kalman method for inversion) with the motivation of combining some of the strengths of the particle filter (ability to cope with nonlinear maps and non-Gaussian distributions) and the Ensemble Kalman filter (no filter degeneracy).
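
For orientation, here is a minimal, globally weighted ensemble Kalman inversion step in Python (all names ours). The paper's locally weighted variant instead builds local linear or quadratic approximations around each particle, tapering off the influence of distant particles; that modification is deliberately not reproduced in this plain sketch.

```python
import numpy as np

def eki_step(U, G, y, Gamma, rng):
    """One plain ensemble Kalman inversion update for y = G(u) + noise.

    U: (J, d) parameter ensemble, G: forward map, y: (m,) data,
    Gamma: (m, m) observation noise covariance.
    """
    W = np.array([G(u) for u in U])                   # (J, m) predictions
    Um, Wm = U.mean(axis=0), W.mean(axis=0)
    Cuw = (U - Um).T @ (W - Wm) / (len(U) - 1)        # cross-covariance (d, m)
    Cww = (W - Wm).T @ (W - Wm) / (len(U) - 1)        # prediction covariance (m, m)
    K = Cuw @ np.linalg.inv(Cww + Gamma)              # Kalman gain (d, m)
    Y = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=len(U))
    return U + (Y - W) @ K.T                          # perturbed-observation update

# Toy usage: recover u from y = (u0 + u1^2, u1) observed with noise.
rng = np.random.default_rng(0)
G = lambda u: np.array([u[0] + u[1] ** 2, u[1]])
truth = np.array([1.0, -2.0])
Gamma = 0.01 * np.eye(2)
y = G(truth) + rng.multivariate_normal(np.zeros(2), Gamma)
U = rng.normal(size=(200, 2))
for _ in range(20):
    U = eki_step(U, G, y, Gamma, rng)
print(U.mean(axis=0))   # the ensemble mean should move toward the truth
```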

We study the properties of a family of distances between functions of a single variable. These distances are examples of integral probability metrics, and have been used previously for comparing probability measures on the line; special cases include the Earth Mover's Distance and the Kolmogorov Metric. We examine their properties for general signals, proving that they are robust to a broad class of deformations. We also establish corresponding robustness results for the induced sliced distances between multivariate functions. Finally, we establish error bounds for approximating the univariate metrics from finite samples, and prove that these approximations are robust to additive Gaussian noise. The results are illustrated in numerical experiments, which include comparisons with Wasserstein distances.
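
A minimal Python sketch of the two named special cases for empirical measures on the line, using pooled empirical CDFs (our own implementation, shown only to fix the objects being discussed):

```python
import numpy as np

def empirical_cdfs(x, y):
    """Empirical CDFs of two samples evaluated on their pooled sorted support."""
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return grid, Fx, Fy

def earth_mover(x, y):
    """Earth Mover's (1-Wasserstein) distance: integral of |Fx - Fy|."""
    grid, Fx, Fy = empirical_cdfs(x, y)
    return np.sum(np.abs(Fx - Fy)[:-1] * np.diff(grid))

def kolmogorov(x, y):
    """Kolmogorov metric: sup over t of |Fx(t) - Fy(t)|."""
    _, Fx, Fy = empirical_cdfs(x, y)
    return np.max(np.abs(Fx - Fy))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=5000)
y = rng.normal(0.5, 1.0, size=5000)
print(earth_mover(x, y))   # approximately 0.5, the mean shift
print(kolmogorov(x, y))    # approximately 2*Phi(0.25) - 1, about 0.197
```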

In the mixture of experts model, a common assumption is linearity between the response variable and the covariates. While this assumption has theoretical and computational benefits, it may lead to suboptimal estimates by overlooking potential nonlinear relationships among the variables. To address this limitation, we propose a partially linear structure that incorporates unspecified functions to capture nonlinear relationships. We establish the identifiability of the proposed model under mild conditions and introduce a practical estimation algorithm. We demonstrate the performance of our approach through numerical studies, including simulations and a real-data analysis.
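
One schematic way to write such a partially linear mixture-of-experts density is the following (notation ours; the paper's exact specification of the gating network and of which covariates enter the nonparametric components may differ):
\[
f(y \mid x, z) \;=\; \sum_{k=1}^{K} \pi_k(x, z)\,
  \phi\!\bigl(y;\; x^{\top}\beta_k + g_k(z),\; \sigma_k^{2}\bigr),
\]
where the gating probabilities $\pi_k(x,z)$ sum to one (for instance a softmax in the covariates), $\phi(\cdot\,; m, \sigma^2)$ is the Gaussian density, the $\beta_k$ are expert-specific coefficients of the linear part, and the $g_k$ are the unspecified functions capturing the nonlinear relationships.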

In some causal inference scenarios, the treatment variable is measured inaccurately, for instance in epidemiology or econometrics. Failure to correct for the effect of this measurement error can lead to biased causal effect estimates. Previous research has not studied methods that address this issue from a causal viewpoint while allowing for complex nonlinear dependencies and without assuming access to side information. For such a scenario, this study proposes a model that assumes a continuous treatment variable that is inaccurately measured. Building on existing results for measurement error models, we prove that our model's causal effect estimates are identifiable, even without knowledge of the measurement error variance or other side information. Our method relies on a deep latent variable model in which Gaussian conditionals are parameterized by neural networks, and we develop an amortized importance-weighted variational objective for training the model. Empirical results demonstrate the method's good performance with unknown measurement error. More broadly, our work extends the range of applications in which reliable causal inference can be conducted.
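
The amortized importance-weighted objective mentioned here is, generically, a multi-sample bound of the form introduced for importance-weighted autoencoders (our rendering of the generic bound for observed data $x$ and latents $z$; the paper's objective additionally involves the mismeasured treatment and the outcome of its causal model):
\[
\log p_\theta(x) \;\ge\; \mathcal{L}_K(\theta,\phi)
  \;=\; \mathbb{E}_{z_1,\dots,z_K \sim q_\phi(z \mid x)}
        \left[\, \log \frac{1}{K} \sum_{k=1}^{K}
        \frac{p_\theta(x, z_k)}{q_\phi(z_k \mid x)} \right],
\]
which is maximized jointly over the generative parameters $\theta$ and the amortized inference network $\phi$, and tightens as the number of importance samples $K$ grows.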

Neural fields provide a continuous scene representation of 3D geometry and appearance in a way which has great promise for robotics applications. One functionality that unlocks unique use-cases for neural fields in robotics is object 6-DoF registration. In this paper, we provide an expanded analysis of the recent Reg-NF neural field registration method and its use-cases within a robotics context. We showcase the scenario of determining the 6-DoF pose of known objects within a scene using scene and object neural field models. We show how this may be used to better represent objects within imperfectly modelled scenes and generate new scenes by substituting object neural field models into the scene.

We generalize McDiarmid's inequality for functions with bounded differences on a high probability set, using an extension argument. Those functions concentrate around their conditional expectations. We further extend the results to concentration in general metric spaces.
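
For reference, the classical inequality being generalized states that if $X_1,\dots,X_n$ are independent and $f$ satisfies the bounded-differences condition with constants $c_1,\dots,c_n$ (changing the $i$-th coordinate changes $f$ by at most $c_i$), then
\[
\mathbb{P}\bigl(f(X_1,\dots,X_n) - \mathbb{E} f(X_1,\dots,X_n) \ge t\bigr)
  \;\le\; \exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n} c_i^2}\right),
\qquad t > 0.
\]
As described above, the extension requires the bounded-differences condition only on a high-probability set, with the expectation replaced by a conditional expectation.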

Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks, and hence late-stage fusion of final representations or predictions from each modality (`late-fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and to share only what is necessary. We find that such a strategy improves fusion performance while reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
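
The following PyTorch sketch illustrates the bottleneck idea schematically (module and variable names are ours, and this is not the released implementation): each modality self-attends over its own tokens together with a small set of shared bottleneck tokens, and the bottleneck updates produced by the different modalities are then averaged, so all cross-modal information must pass through those few latents.

```python
import torch
import torch.nn as nn

class BottleneckFusionLayer(nn.Module):
    """One fusion layer: per-modality attention over [own tokens; bottlenecks],
    followed by averaging the updated bottleneck tokens across modalities."""

    def __init__(self, dim: int, heads: int, n_bottleneck: int):
        super().__init__()
        self.bottleneck = nn.Parameter(torch.randn(1, n_bottleneck, dim) * 0.02)
        self.attn = nn.ModuleDict({
            m: nn.MultiheadAttention(dim, heads, batch_first=True)
            for m in ("audio", "video")
        })

    def forward(self, tokens: dict) -> dict:
        batch = next(iter(tokens.values())).shape[0]
        z = self.bottleneck.expand(batch, -1, -1)       # shared bottleneck tokens
        out, z_updates = {}, []
        for m, x in tokens.items():
            seq = torch.cat([x, z], dim=1)              # modality tokens + bottlenecks
            seq, _ = self.attn[m](seq, seq, seq)        # attention stays within this modality
            out[m] = seq[:, : x.shape[1]]
            z_updates.append(seq[:, x.shape[1]:])       # this modality's view of the bottlenecks
        out["bottleneck"] = torch.stack(z_updates).mean(dim=0)   # the only cross-modal exchange
        return out

layer = BottleneckFusionLayer(dim=64, heads=4, n_bottleneck=4)
tokens = {"audio": torch.randn(2, 50, 64), "video": torch.randn(2, 196, 64)}
fused = layer(tokens)
print({k: tuple(v.shape) for k, v in fused.items()})
```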
