
We present a version of the matrix-tree theorem, which relates the determinant of a matrix to sums of weights of arborescences of its directed graph representation. Our treatment allows for non-zero column sums in the parent matrix by adding a root vertex to the usually considered matrix directed graph. We use our result to prove a version of the matrix-forest, or all-minors, theorem, which relates minors of the matrix to forests of arborescences of the matrix digraph. We then show that it is possible, when the source and target vertices of an arc are not strongly connected, to move the source of the arc in the matrix directed graph and leave the resulting matrix determinant unchanged, as long as the source and target vertices are not strongly connected after the move. This result enables graphical strategies for factoring matrix determinants.
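As a concrete point of reference, the classical directed matrix-tree theorem that this result generalizes can be checked numerically on a small example; the graph and weights below are invented for illustration, and the paper's root-vertex construction for non-zero column sums is not reproduced here.

    import numpy as np
    from itertools import product

    # w[i, j] is the weight of arc i -> j (0 means no arc); L is the out-degree Laplacian.
    w = np.array([[0.0, 2.0, 0.0, 1.0],
                  [3.0, 0.0, 1.0, 0.0],
                  [0.0, 4.0, 0.0, 2.0],
                  [1.0, 0.0, 5.0, 0.0]])
    n = w.shape[0]
    L = np.diag(w.sum(axis=1)) - w

    def arborescence_weight_sum(root):
        """Brute force: total weight of spanning arborescences oriented toward `root`
        (every non-root vertex keeps exactly one outgoing arc, and following the kept
        arcs from any vertex eventually reaches the root)."""
        others = [v for v in range(n) if v != root]
        total = 0.0
        for choice in product(range(n), repeat=len(others)):
            succ = dict(zip(others, choice))
            if any(w[v, succ[v]] == 0 for v in others):
                continue                      # a chosen arc does not exist
            ok = True
            for v in others:                  # reject choices that contain a cycle
                seen, cur = set(), v
                while cur != root:
                    if cur in seen:
                        ok = False
                        break
                    seen.add(cur)
                    cur = succ[cur]
                if not ok:
                    break
            if ok:
                total += np.prod([w[v, succ[v]] for v in others])
        return total

    # Classical statement: the minor of L obtained by deleting row and column r
    # equals the total weight of spanning arborescences rooted at r.
    for r in range(n):
        minor = np.delete(np.delete(L, r, axis=0), r, axis=1)
        assert np.isclose(np.linalg.det(minor), arborescence_weight_sum(r))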

Related Content

This paper extends the linear grouped fixed effects (GFE) panel model to allow for heteroskedasticity driven by a discrete latent group variable. Key features of GFE are preserved, such as individuals belonging to one of a finite number of groups, with group membership unrestricted and estimated from the data. Ignoring group heteroskedasticity may lead to poor classification, which worsens the finite-sample bias and standard errors of the estimators. I introduce the "weighted grouped fixed effects" (WGFE) estimator, which minimizes a weighted average of group-level sums of squared residuals. I establish $\sqrt{NT}$-consistency and asymptotic normality under a concept of group separation based on second moments. A test of group homoskedasticity is discussed. A fast computation procedure is provided. Simulations show that WGFE outperforms alternatives that exclude second-moment information. I demonstrate this approach by considering the link between income and democracy and the effect of unionization on earnings.
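Schematically, and in our own notation rather than the paper's, such a weighted criterion down-weights noisy groups by scaling each group's sum of squared residuals with an inverse variance estimate:

$$
(\hat{\theta},\hat{\alpha},\hat{\gamma}) \in \arg\min_{\theta,\,\alpha,\,\gamma}\ \sum_{g=1}^{G} \frac{1}{\hat{\sigma}_g^{2}} \sum_{i:\,\gamma_i = g}\ \sum_{t=1}^{T} \left(y_{it} - x_{it}'\theta - \alpha_{gt}\right)^{2},
$$

where $\gamma_i \in \{1,\dots,G\}$ assigns unit $i$ to a group, $\alpha_{gt}$ are group-time effects, and $\hat{\sigma}_g^{2}$ is an estimate of group $g$'s error variance.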

Trace theory is a principled framework for defining equivalence relations over concurrent program runs based on a commutativity relation over the set of atomic steps taken by individual program threads. Its simplicity, elegance, and algorithmic efficiency make it useful in many different contexts, including program verification and testing. We study relaxations of trace equivalence with the goal of maintaining its algorithmic advantages. We first prove that the largest appropriate relaxation of trace equivalence, an equivalence relation that preserves the order of steps taken by each thread and the write operation observed by each read operation, does not yield efficient algorithms. We prove a linear space lower bound for the problem of checking, in a streaming setting, whether two arbitrary steps of a concurrent program run are causally concurrent (i.e., they can be reordered in an equivalent run) or causally ordered (i.e., they always appear in the same order in all equivalent runs). The same problem can be decided in constant space under trace equivalence. Next, we propose a new commutativity-based notion of equivalence called grain equivalence that is strictly more relaxed than trace equivalence and yet yields a constant-space algorithm for the same problem. This notion of equivalence uses commutativity of grains, which are sequences of atomic steps, in addition to the standard commutativity from trace theory. We study the two distinct cases when the grains are contiguous subwords of the input program run and when they are not, formulate the precise definition of causal concurrency in each case, and show that both can be decided in constant space, despite being strict relaxations of the notion of causal concurrency based on trace equivalence.
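For reference, classical trace equivalence (the baseline that grain equivalence relaxes) can be illustrated by brute-force enumeration on a toy run; the threads, events, and dependence relation below are illustrative, and this is not the paper's constant-space streaming algorithm.

    from collections import deque

    # Toy run of a concurrent program: each event is (thread, operation, variable).
    # Standard trace-theory dependence: two events conflict if they come from the
    # same thread, or touch the same variable with at least one write.
    run = (("T1", "w", "x"), ("T2", "r", "y"), ("T1", "r", "y"), ("T2", "w", "x"))

    def dependent(e, f):
        return e[0] == f[0] or (e[2] == f[2] and "w" in (e[1], f[1]))

    def trace_class(word):
        """All runs reachable by repeatedly swapping adjacent independent events."""
        seen, frontier = {word}, deque([word])
        while frontier:
            cur = frontier.popleft()
            for i in range(len(cur) - 1):
                if not dependent(cur[i], cur[i + 1]):
                    nxt = cur[:i] + (cur[i + 1], cur[i]) + cur[i + 2:]
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
        return seen

    equiv = trace_class(run)
    # The two writes to x conflict, so they are causally ordered: they keep the same
    # relative order in every equivalent run.  The two reads of y do not.
    assert all(w.index(run[0]) < w.index(run[3]) for w in equiv)
    assert any(w.index(run[1]) > w.index(run[2]) for w in equiv)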

We provide several new results on the sample complexity of vector-valued linear predictors (parameterized by a matrix), and more generally of neural networks. Focusing on size-independent bounds, where only the Frobenius-norm distance of the parameters from some fixed reference matrix $W_0$ is controlled, we show that the sample complexity behavior can be surprisingly different from what one might expect based on the well-studied setting of scalar-valued linear predictors. This also leads to new sample complexity bounds for feed-forward neural networks, tackling some open questions in the literature, and establishing a new convex linear prediction problem that is provably learnable without uniform convergence.

In numerical linear algebra, considerable effort has been devoted to obtaining faster algorithms for linear systems whose underlying matrices exhibit structural properties. A prominent success story is the method of generalized nested dissection [Lipton-Rose-Tarjan'79] for separable matrices. On the other hand, the majority of recent developments in the design of efficient linear program (LP) solvers do not leverage the ideas underlying these faster linear system solvers nor consider the separable structure of the constraint matrix. We give a faster algorithm for separable linear programs. Specifically, we consider LPs of the form $\min_{\mathbf{A}\mathbf{x}=\mathbf{b}, \mathbf{l}\leq\mathbf{x}\leq\mathbf{u}} \mathbf{c}^\top\mathbf{x}$, where the graphical support of the constraint matrix $\mathbf{A} \in \mathbb{R}^{n\times m}$ is $O(n^\alpha)$-separable. These include flow problems on planar graphs and LPs with low-treewidth constraint matrices, among others. We present an $\tilde{O}((m+m^{1/2 + 2\alpha}) \log(1/\epsilon))$ time algorithm for these LPs, where $\epsilon$ is the relative accuracy of the solution. Our new solver has two important implications: for the $k$-multicommodity flow problem on planar graphs, we obtain an algorithm running in $\tilde{O}(k^{5/2} m^{3/2})$ time in the high accuracy regime; and when the support of $\mathbf{A}$ is $O(n^\alpha)$-separable with $\alpha \leq 1/4$, our algorithm runs in $\tilde{O}(m)$ time, which is nearly optimal. The latter significantly improves upon the natural approach of combining interior point methods and nested dissection, whose time complexity is lower bounded by $\Omega(\sqrt{m}(m+m^{\alpha\omega}))=\Omega(m^{3/2})$, where $\omega$ is the matrix multiplication exponent. Lastly, in the setting of low-treewidth LPs, we recover the results of [DLY,STOC21] and [GS,22] with significantly simpler data structure machinery.
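To make the problem class concrete, here is a toy member of it: a minimum-cost flow LP on a small planar network, written in the form $\min_{\mathbf{A}\mathbf{x}=\mathbf{b},\ \mathbf{l}\leq\mathbf{x}\leq\mathbf{u}} \mathbf{c}^\top\mathbf{x}$. The network, costs, and capacities are invented for illustration, and the instance is solved with SciPy's generic LP solver rather than the separator-based algorithm proposed here.

    import numpy as np
    from scipy.optimize import linprog

    arcs = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]          # directed arcs s -> t
    cost = np.array([1.0, 2.0, 1.0, 1.0, 0.5])                # per-unit arc costs
    cap  = np.array([2.0, 2.0, 2.0, 2.0, 1.0])                # arc capacities

    A = np.zeros((4, len(arcs)))                              # vertex-arc incidence matrix
    for j, (s, t) in enumerate(arcs):
        A[s, j], A[t, j] = 1.0, -1.0
    b = np.array([2.0, 0.0, 0.0, -2.0])                       # ship 2 units from vertex 0 to 3

    res = linprog(cost, A_eq=A, b_eq=b,
                  bounds=[(0.0, float(u)) for u in cap], method="highs")
    print(res.x, res.fun)                                     # optimal flow and its cost

Because the incidence matrix's support graph is planar, such instances fall under the $O(\sqrt{n})$-separable case discussed above.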

We propose an approach for the generation of topology-optimized structures with text-guided appearance stylization. This methodology aims to enrich the concurrent design of a structure's physical functionality and aesthetic appearance. Users can effortlessly input descriptive text to govern the style of the structure. Our system employs a hash-encoded neural network as the implicit structure representation backbone, which serves as the foundation for the co-optimization of structural mechanical performance, style, and connectivity, to ensure full-color, high-quality 3D-printable solutions. We substantiate the effectiveness of our system through extensive comparisons, demonstrations, and a 3D printing test.

Koopman representations aim to learn features of nonlinear dynamical systems (NLDS) which lead to linear dynamics in the latent space. Theoretically, such features can be used to simplify many problems in modeling and control of NLDS. In this work we study autoencoder formulations of this problem, and different ways they can be used to model dynamics, specifically for future state prediction over long horizons. We discover several limitations of predicting future states in the latent space and propose an inference-time mechanism, which we refer to as Periodic Reencoding, for faithfully capturing long term dynamics. We justify this method both analytically and empirically via experiments in low and high dimensional NLDS.
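A minimal sketch of the rollout logic behind Periodic Reencoding, assuming a trained Koopman autoencoder; `encode`/`decode` are hypothetical nonlinear maps and `K` is the learned linear latent operator, so the names and interface are ours rather than the paper's.

    import numpy as np

    def rollout_periodic_reencoding(x0, encode, decode, K, horizon, period):
        preds, z = [], encode(x0)
        for t in range(horizon):
            z = K @ z                      # one step of the linear latent dynamics
            x = decode(z)                  # map back to the original state space
            preds.append(x)
            if (t + 1) % period == 0:      # periodically re-encode from state space
                z = encode(x)              # instead of compounding latent-space error
        return np.stack(preds)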

The search efficiency of the quantum approximate optimization algorithm depends on both the classical and quantum sides of the algorithm. Recently, a quantum approximate Bayesian optimization algorithm (QABOA) that includes two mixers was developed, where surrogate-based Bayesian optimization is applied to improve the sampling efficiency of the classical optimizer. A continuous-time quantum walk mixer is used to enhance exploration, and the generalized Grover mixer is applied to improve exploitation. In this paper, an extension of QABOA is proposed to further improve its search efficiency, in two ways. First, the two mixers, one for exploration and the other for exploitation, are applied in an alternating fashion. Second, the uncertainty of the quantum circuit is quantified with a new quantum Mat\'ern kernel based on the kurtosis of the basis-state distribution, which increases the chance of obtaining the optimum. The proposed two-mixer QABOAs, with and without uncertainty quantification, are compared with three single-mixer QABOAs on five discrete and four mixed-integer problems. The results show that the proposed two-mixer QABOA with uncertainty quantification has the best performance in efficiency and consistency for five of the nine tested problems. The results also show that QABOA with the generalized Grover mixer performs best among the single-mixer algorithms, demonstrating the benefit of exploitation and the importance of a dynamic exploration-exploitation balance in improving search efficiency.
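A heavily simplified sketch of the two ingredients described above, alternating mixer assignment across layers and a kurtosis statistic computed from sampled objective values; the function names are placeholders, and no quantum circuit or Mat\'ern kernel is actually constructed here.

    import numpy as np
    from scipy.stats import kurtosis

    def mixer_schedule(depth):
        # Alternate an exploration mixer (continuous-time quantum walk) with an
        # exploitation mixer (generalized Grover) across the circuit layers.
        return ["quantum_walk" if k % 2 == 0 else "grover" for k in range(depth)]

    def circuit_kurtosis(objective_samples):
        # Fourth standardized moment of objective values measured from the circuit's
        # basis-state distribution, used as an uncertainty signal for the surrogate.
        return kurtosis(np.asarray(objective_samples), fisher=False)

    print(mixer_schedule(4), circuit_kurtosis([3.0, 3.5, 2.0, 7.0, 3.2]))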

We propose a new regret minimization algorithm for the episodic sparse linear Markov decision process (SMDP), in which the state-transition distribution is a linear function of observed features. The only previously known algorithm for SMDPs requires knowledge of the sparsity parameter and oracle access to an unknown policy. We overcome these limitations by combining the doubly robust method, which allows one to use feature vectors of \emph{all} actions, with a novel analysis technique that enables the algorithm to use data from all periods in all episodes. The regret of the proposed algorithm is $\tilde{O}(\sigma^{-1}_{\min} s_{\star} H \sqrt{N})$, where $\sigma_{\min}$ denotes the restricted minimum eigenvalue of the average Gram matrix of feature vectors, $s_\star$ is the sparsity parameter, $H$ is the length of an episode, and $N$ is the number of rounds. We provide a regret lower bound that matches the upper bound up to logarithmic factors on a newly identified subclass of SMDPs. Our numerical experiments support our theoretical results and demonstrate the superior performance of our algorithm.

Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks, and hence late-stage fusion of final representations or predictions from each modality (`late fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and only share what is necessary. We find that such a strategy improves fusion performance while reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
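A simplified sketch of the fusion-bottleneck idea in PyTorch, in which each modality attends only over its own tokens concatenated with a small set of shared bottleneck tokens; the layer sizes, the averaging of the bottleneck copies, and the interface are our assumptions, not the released implementation.

    import torch
    import torch.nn as nn

    class BottleneckFusionLayer(nn.Module):
        """One fusion layer: cross-modal information can only flow through the
        shared bottleneck tokens, since each modality never attends to the other."""

        def __init__(self, dim=256, heads=4):
            super().__init__()
            self.attn_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.attn_video = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, audio, video, bottleneck):
            xa = torch.cat([audio, bottleneck], dim=1)   # [B, Na + Nb, D]
            xv = torch.cat([video, bottleneck], dim=1)   # [B, Nv + Nb, D]
            a_out, _ = self.attn_audio(xa, xa, xa)
            v_out, _ = self.attn_video(xv, xv, xv)
            na, nv = audio.size(1), video.size(1)
            # Merge the two modalities' updated copies of the bottleneck tokens.
            new_bottleneck = 0.5 * (a_out[:, na:] + v_out[:, nv:])
            return a_out[:, :na], v_out[:, :nv], new_bottleneck

    # Usage sketch: 8 audio tokens, 16 video tokens, 4 bottleneck tokens, batch of 2.
    layer = BottleneckFusionLayer()
    a, v, b = torch.randn(2, 8, 256), torch.randn(2, 16, 256), torch.randn(2, 4, 256)
    a, v, b = layer(a, v, b)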

Embedding entities and relations into a continuous multi-dimensional vector space has become the dominant method for knowledge graph embedding in representation learning. However, most existing models fail to represent hierarchical knowledge, such as the similarities and dissimilarities of entities within one domain. We propose to learn domain representations on top of existing knowledge graph embedding models, such that entities with similar attributes are organized into the same domain. Such hierarchical knowledge of domains can provide further evidence for link prediction. Experimental results show that domain embeddings yield a significant improvement over the most recent state-of-the-art baseline knowledge graph embedding models.
