
We study a colored generalization of the famous simple-switch Markov chain for sampling the set of graphs with a fixed degree sequence. Here we consider the space of graphs with colored vertices, in which we fix the degree sequence and another statistic arising from the vertex coloring, and prove that the set can be connected with simple color-preserving switches or moves. These moves form a basis for defining an irreducible Markov chain necessary for testing statistical model fit to block-partitioned network data. Our methods further generalize well-known algebraic results from the 1990s: namely, that the corresponding moves can be used to construct a regular triangulation for a generalization of the second hypersimplex. In contrast to the monochromatic case, however, we show that for simple graphs the 1-norm of the moves necessary to connect the space increases with the number of colors.
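The switch move itself is simple to state. The following minimal Python sketch (our own illustration, with the colored statistic taken to be the number of edges between each pair of color classes; the paper's exact statistic may differ) attempts a single degree- and color-preserving double-edge switch on a simple graph:

import random

def color_preserving_switch(edges, color, rng=random):
    """Attempt one double-edge switch that preserves the degree sequence
    and the number of edges between each pair of color classes.

    edges : list of 2-tuples (u, v) describing a simple graph
    color : dict mapping each vertex to its color label
    Returns True if a switch was applied in place, False otherwise."""
    i, j = rng.sample(range(len(edges)), 2)
    a, b = edges[i]
    c, d = edges[j]
    # Any swap {a,b},{c,d} -> {a,d},{c,b} preserves the degree sequence;
    # it also preserves per-color-pair edge counts when b,d or a,c share a color.
    if color[b] != color[d] and color[a] != color[c]:
        return False
    # Reject swaps that would create loops or multi-edges (keep the graph simple).
    existing = {frozenset(e) for e in edges}
    if a == d or c == b or frozenset((a, d)) in existing or frozenset((c, b)) in existing:
        return False
    edges[i], edges[j] = (a, d), (c, b)
    return True

Repeatedly proposing such moves (and rejecting invalid ones) defines the color-preserving switch chain whose irreducibility is studied in the paper; as noted above, for simple graphs moves of larger 1-norm are also needed as the number of colors grows.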

Related Content

Motivated by growing demands for automated 3D content creation pipelines, we tackle the challenge of efficiently reconstructing a 3D asset from a single image. Previous methods primarily rely on Score Distillation Sampling (SDS) and Neural Radiance Fields (NeRF). Despite their significant success, these approaches encounter practical limitations due to lengthy optimization and considerable memory usage. In this report, we introduce Gamba, an end-to-end amortized 3D reconstruction model from single-view images, emphasizing two main insights: (1) 3D representation: leveraging a large number of 3D Gaussians for an efficient 3D Gaussian splatting process; (2) Backbone design: introducing a Mamba-based sequential network that facilitates context-dependent reasoning and scales linearly with the sequence (token) length, accommodating a substantial number of Gaussians. Gamba incorporates significant advancements in data preprocessing, regularization design, and training methodologies. We assessed Gamba against existing optimization-based and feed-forward 3D generation approaches using the real-world scanned OmniObject3D dataset. Gamba demonstrates competitive generation capabilities, both qualitatively and quantitatively, while achieving remarkable speed: approximately 0.6 seconds on a single NVIDIA A100 GPU.
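As background (this is the standard 3D Gaussian splatting parameterization, not Gamba's actual code), each primitive carries a position, anisotropic scale, rotation, opacity, and color, and a feed-forward model of the kind described would emit a fixed-length sequence of such parameter vectors, which is what makes a linear-time sequential backbone attractive. A minimal sketch of this token layout, with field names of our own choosing:

from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian3D:
    # One primitive in the standard 3D Gaussian splatting parameterization.
    position: np.ndarray   # (3,) center in world space
    scale: np.ndarray      # (3,) per-axis extents of the covariance
    rotation: np.ndarray   # (4,) unit quaternion orienting the covariance
    opacity: float         # in (0, 1)
    color: np.ndarray      # (3,) RGB (full pipelines use spherical-harmonic coefficients)

def pack_tokens(gaussians):
    # Flatten N Gaussians into an (N, 14) array: a token sequence that a
    # sequential backbone (e.g. a Mamba-style state-space model) can process
    # with cost linear in N.
    return np.stack([
        np.concatenate([g.position, g.scale, g.rotation, [g.opacity], g.color])
        for g in gaussians
    ])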

Due to their flexibility to represent almost any kind of relational data, graph-based models have enjoyed tremendous success over the past decades. While graphs are inherently only combinatorial objects, many prominent analysis tools are based on the algebraic representation of graphs via matrices such as the graph Laplacian, or on associated graph embeddings. Such embeddings associate to each node a set of coordinates in a vector space, a representation which can then be employed for learning tasks such as the classification or alignment of the nodes of the graph. As the geometric picture provided by embedding methods enables the use of a multitude of methods developed for vector space data, embeddings have gained interest from both a theoretical and a practical perspective. Inspired by trace-optimization problems, often encountered in the analysis of graph-based data, here we present a method to derive ellipsoidal embeddings of the nodes of a graph, in which each node is assigned a set of coordinates on the surface of a hyperellipsoid. Our method may be seen as an alternative to popular spectral embedding techniques, with which it shares certain similarities that we discuss. To illustrate the utility of the embedding, we conduct a case study in which we analyse synthetic and real-world networks with modular structure, and compare the results obtained with known methods in the literature.
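The authors' construction is derived from trace optimization; as a rough point of comparison only (not their algorithm), the sketch below computes a standard Laplacian spectral embedding and then radially rescales each node's coordinates onto the surface of a hyperellipsoid with prescribed semi-axes:

import numpy as np

def ellipsoidal_spectral_embedding(adjacency, dim, semi_axes):
    # adjacency : (n, n) symmetric adjacency matrix
    # dim       : embedding dimension
    # semi_axes : length-dim array of ellipsoid semi-axis lengths a_1, ..., a_dim
    # Returns an (n, dim) array whose rows satisfy sum_i (x_i / a_i)^2 = 1.
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A                 # combinatorial graph Laplacian
    _, eigvecs = np.linalg.eigh(L)
    X = eigvecs[:, 1:dim + 1]                      # skip the trivial constant eigenvector
    a = np.asarray(semi_axes, dtype=float)
    norms = np.sqrt(((X / a) ** 2).sum(axis=1, keepdims=True))
    norms[norms == 0] = 1.0                        # guard degenerate rows
    return X / norms                               # radial projection onto the ellipsoid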

The paper presents a new approach to the stability evaluation of approximate Riemann solvers based on the direct Lyapunov method. The present methodology offers a detailed understanding of the origins of numerical shock instability in approximate Riemann solvers. The pressure perturbation feeding the density and transverse momentum perturbations is identified as the cause of the numerical shock instabilities in complete approximate Riemann solvers, while the magnitude of the numerical shock instabilities is found to be proportional to the magnitude of the pressure perturbations. A shock-stable HLLEM scheme is proposed based on the insights obtained from this analysis about the origins of numerical shock instability in approximate Riemann solvers. A set of numerical test cases is solved to show that the proposed scheme is free from the numerical shock instability problems of the original HLLEM scheme at high Mach numbers.
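For context, the HLL family to which HLLEM belongs approximates the interface flux from left and right states $U_L, U_R$ and wave-speed estimates $S_L \le 0 \le S_R$ by the textbook formula
\[
F^{\mathrm{HLL}} \;=\; \frac{S_R\,F(U_L) \;-\; S_L\,F(U_R) \;+\; S_L S_R\,(U_R - U_L)}{S_R - S_L},
\]
with HLLEM adding antidiffusive terms that restore the contact and shear waves; the shock-stable variant proposed here is motivated by damping the pressure-perturbation feedback identified above (the precise modification is given in the paper).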

Recent works in Task and Motion Planning (TAMP) show that training control policies on language-supervised robot trajectories with quality labeled data markedly improves agent task success rates. However, the scarcity of such data presents a significant hurdle to extending these methods to general use cases. To address this concern, we present an automated framework to decompose trajectory data into temporally bounded, natural-language-based descriptive sub-tasks by leveraging recent prompting strategies for Foundation Models (FMs), including both Large Language Models (LLMs) and Vision Language Models (VLMs). Our framework provides both time-based and language-based descriptions for lower-level sub-tasks that comprise full trajectories. To rigorously evaluate the quality of our automatic labeling framework, we contribute an algorithm, SIMILARITY, that produces two novel metrics: temporal similarity and semantic similarity. The metrics measure the temporal alignment and semantic fidelity of language descriptions between two sub-task decompositions, namely an FM sub-task decomposition prediction and a ground-truth sub-task decomposition. We present scores for temporal similarity and semantic similarity above 90%, compared to 30% for a randomized baseline, across multiple robotic environments, demonstrating the effectiveness of our proposed framework. Our results enable building diverse, large-scale, language-supervised datasets for improved robotic TAMP.
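The abstract does not spell out the metric definitions; purely as an illustration of what temporal and semantic similarity between two sub-task decompositions might look like, one common realization is interval intersection-over-union for temporal alignment and cosine similarity of sentence embeddings for semantic fidelity:

import numpy as np

def temporal_iou(seg_a, seg_b):
    # Intersection-over-union of two (start, end) time intervals.
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0

def semantic_similarity(emb_a, emb_b):
    # Cosine similarity between embeddings of two sub-task descriptions
    # (produced by any off-the-shelf sentence encoder).
    a, b = np.asarray(emb_a, float), np.asarray(emb_b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_decomposition(predicted, ground_truth):
    # Average the two scores over matched sub-tasks (assumed already aligned);
    # each sub-task is a dict with keys 'interval' and 'embedding'.
    t = np.mean([temporal_iou(p['interval'], g['interval'])
                 for p, g in zip(predicted, ground_truth)])
    s = np.mean([semantic_similarity(p['embedding'], g['embedding'])
                 for p, g in zip(predicted, ground_truth)])
    return t, s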

One of the open problems in machine learning is whether any set-family of VC-dimension $d$ admits a sample compression scheme of size $O(d)$. In this paper, we study this problem for balls in graphs. For a ball $B=B_r(x)$ of a graph $G=(V,E)$, a realizable sample for $B$ is a signed subset $X=(X^+,X^-)$ of $V$ such that $B$ contains $X^+$ and is disjoint from $X^-$. A proper sample compression scheme of size $k$ consists of a compressor and a reconstructor. The compressor maps any realizable sample $X$ to a subsample $X'$ of size at most $k$. The reconstructor maps each such subsample $X'$ to a ball $B'$ of $G$ such that $B'$ includes $X^+$ and is disjoint from $X^-$. For balls of arbitrary radius $r$, we design proper labeled sample compression schemes of size $2$ for trees, of size $3$ for cycles, of size $4$ for interval graphs, of size $6$ for trees of cycles, and of size $22$ for cube-free median graphs. For balls of a given radius, we design proper labeled sample compression schemes of size $2$ for trees and of size $4$ for interval graphs. We also design approximate sample compression schemes of size 2 for balls of $\delta$-hyperbolic graphs.
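To make the definitions concrete (this is only the realizability check, not the compression scheme itself), the sketch below tests whether a ball $B_r(x)$ of a graph realizes a signed sample $(X^+, X^-)$ using BFS distances:

from collections import deque

def bfs_distances(adj, source):
    # Hop distances from source in an unweighted graph given as an adjacency dict.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def ball_realizes(adj, center, radius, positives, negatives):
    # True iff B_radius(center) contains every vertex of positives (X+)
    # and no vertex of negatives (X-).
    dist = bfs_distances(adj, center)
    inside = lambda v: v in dist and dist[v] <= radius
    return all(inside(v) for v in positives) and not any(inside(v) for v in negatives)

A compressor of size $k$ must select at most $k$ labeled vertices from any realizable sample so that the reconstructor can output some ball, possibly different from the original one, that is still consistent with the full sample.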

We compare three graphical methods for displaying evidence in a legal case: Wigmore Charts, Bayesian Networks and Chain Event Graphs. We find that these methods are aimed at three distinct audiences, respectively lawyers, forensic scientists and the police. The methods are illustrated using part of the evidence in the case of the murder of Meredith Kercher. More specifically, we focus on representing the list of propositions, evidence, testimony and facts given in the first trial against Raffaele Sollecito and Amanda Knox with these graphical methodologies.

In the search for highly efficient decoders for short LDPC codes approaching maximum likelihood performance, a relayed decoding strategy, which activates the ordered statistics decoding process upon failure of a neural min-sum decoder, is enhanced with three innovations. Firstly, soft information gathered at each step of the neural min-sum decoder is leveraged to forge a new reliability measure using a convolutional neural network. This measure aids in constructing the most reliable basis of ordered statistics decoding, bolstering the decoding process by excluding error-prone bits or concentrating them in a smaller area. Secondly, an adaptive ordered statistics decoding process is introduced, guided by a derived decoding path comprising prioritized blocks, each containing distinct test error patterns. The priority of these blocks is determined from the statistical data gathered during the query phase. Furthermore, effective complexity management methods are devised by adjusting the decoding path's length or refining constraints on the involved blocks. Thirdly, a simple auxiliary criterion is introduced to reduce computational complexity by minimizing the number of candidate codewords before selecting the optimal estimate. Extensive experimental results and complexity analysis strongly support the proposed framework, demonstrating its advantages in terms of high throughput, low complexity, and independence from noise variance, in addition to superior decoding performance.
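For reference, the vanilla min-sum check-to-variable message that such decoders iterate (before any learned scaling or offsets are applied) is
\[
L_{c \to v} \;=\; \Bigl(\prod_{v' \in N(c)\setminus\{v\}} \operatorname{sign}\!\bigl(L_{v' \to c}\bigr)\Bigr)\; \min_{v' \in N(c)\setminus\{v\}} \bigl|L_{v' \to c}\bigr|,
\]
and a neural min-sum decoder learns per-edge or per-iteration weights on these messages; the per-iteration soft outputs are the information the proposed convolutional network consumes to build its reliability measure.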

Reed--Solomon codes are a classic family of error-correcting codes consisting of evaluations of low-degree polynomials over a finite field on some sequence of distinct field elements. They are widely known for their optimal unique-decoding capabilities, but their list-decoding capabilities are not fully understood. Given the prevalence of Reed--Solomon codes, a fundamental question in coding theory is determining whether Reed--Solomon codes can optimally achieve list-decoding capacity. A recent breakthrough by Brakensiek, Gopi, and Makam established that Reed--Solomon codes are combinatorially list-decodable all the way to capacity. However, their results hold for randomly punctured Reed--Solomon codes over an exponentially large field size $2^{O(n)}$, where $n$ is the block length of the code. A natural question is whether Reed--Solomon codes can still achieve capacity over smaller fields. Recently, Guo and Zhang showed that Reed--Solomon codes are list-decodable to capacity with field size $O(n^2)$. We show that Reed--Solomon codes are list-decodable to capacity with linear field size $O(n)$, which is optimal up to the constant factor. We also give evidence that the ratio between the alphabet size $q$ and code length $n$ cannot be bounded by an absolute constant. Our techniques also show that random linear codes are list-decodable up to (the alphabet-independent) capacity with optimal list size $O(1/\varepsilon)$ and near-optimal alphabet size $2^{O(1/\varepsilon^2)}$, where $\varepsilon$ is the gap to capacity. As far as we are aware, list-decoding up to capacity with optimal list size $O(1/\varepsilon)$ was previously not known to be achievable with any linear code over a constant alphabet size (even non-constructively). Our proofs are based on the ideas of Guo and Zhang, and we additionally exploit symmetries of reduced intersection matrices.
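For readers outside coding theory, "list-decodable to capacity" here has the standard combinatorial meaning: a code $C \subseteq \Sigma^n$ of rate $R$ achieves capacity if for every $\varepsilon > 0$ it is $(1 - R - \varepsilon,\, L)$-list-decodable, i.e.
\[
\bigl|\{\, c \in C : d_H(c, y) \le (1 - R - \varepsilon)\,n \,\}\bigr| \;\le\; L \qquad \text{for every } y \in \Sigma^n,
\]
with the list size $L$ depending only on $\varepsilon$; the $O(1/\varepsilon)$ list size quoted above is optimal up to constants.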

We study the problem of testing and recovering the $k$-clique ferromagnetic mean shift in the planted Sherrington-Kirkpatrick model (pSK), a type of spin glass model, with $n$ spins. The planted SK model -- a stylized mixture of an uncountable number of Ising models -- allows us to study the fundamental limits of correlation analysis for dependent random variables under misspecification. Our paper makes three major contributions: (i) We identify the phase diagrams of the testing problem by providing minimax optimal rates for multiple different parameter regimes. We also provide minimax optimal rates for exact recovery in the high/critical and low temperature regimes. (ii) We prove a universality result implying that all the obtained rates still hold with non-Gaussian couplings. (iii) To achieve the major results, we also establish a family of novel concentration bounds and central limit theorems for the averaging statistics in the local and global phases of the planted SK model. These technical results shed new light on planted spin glass models. The pSK model also exhibits close connections with a binary variant of the single-spike Gaussian sparse principal component analysis model obtained by replacing the background identity precision matrix with a Wigner random matrix.
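For background (the planted parameterization is specified in the paper, not reproduced here), the null Sherrington-Kirkpatrick model places a Gibbs measure on spin configurations $\sigma \in \{\pm 1\}^n$ of the form
\[
\mathbb{P}(\sigma) \;\propto\; \exp\Bigl(\frac{\beta}{\sqrt{n}} \sum_{1 \le i < j \le n} J_{ij}\,\sigma_i \sigma_j\Bigr), \qquad J_{ij} \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, 1),
\]
where $\beta$ is the inverse temperature; roughly speaking, the planted variant introduces a ferromagnetic mean shift supported on a hidden set of $k$ spins, and the testing problem asks whether such a shift is present.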

We study the problem of testing and recovering the hidden $k$-clique ferromagnetic correlation in the planted Random Field Curie-Weiss model (the pRFCW model). The pRFCW model is a random-effect Ising model whose phase diagrams are richer, both statistically and physically, than those of the standard Curie-Weiss model. Using an alternative characterization of parameter regimes as 'temperatures' and of the mean values as 'outer magnetic fields,' we establish the minimax optimal detection and recovery rates. The results consist of $7$ distinctive phases for testing and $3$ phases for exact recovery. Our results also imply that the randomness of the outer magnetic field gives rise to countably many possible convergence rates, which are not observed in the fixed-field model. As a byproduct of the proof techniques, we provide two new mathematical results: (1) a family of tail bounds for the average magnetization of the Random Field Curie-Weiss model (the RFCW model) across all temperatures and arbitrary outer fields, and (2) a sharp estimate of the information divergence between RFCW models. These play pivotal roles in establishing the major theoretical results of this paper. Additionally, we show that the mathematical structure involved in the pRFCW hidden clique inference problem resembles a 'sparse PCA-like' problem for discrete data. The statistical phases, richer than those of the long-studied Gaussian counterpart, shed new light on sparse PCA for discrete data.
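For reference, the (non-planted) Random Field Curie-Weiss model is the Ising-type measure on $\sigma \in \{\pm 1\}^n$ given by
\[
\mathbb{P}(\sigma) \;\propto\; \exp\Bigl(\beta\Bigl[\frac{1}{2n}\Bigl(\sum_{i=1}^{n}\sigma_i\Bigr)^{2} + \sum_{i=1}^{n} h_i\,\sigma_i\Bigr]\Bigr),
\]
where $\beta$ is the inverse temperature and the $h_i$ are the random outer magnetic fields; the average magnetization $\frac{1}{n}\sum_i \sigma_i$ is the statistic for which the new tail bounds above are stated.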
