
Functional graphs (FGs) model the graph structures used to analyze the behavior of functions from a discrete set to itself. In turn, such functions are used to study real complex phenomena evolving in time. As the systems involved can be quite large, it is interesting to decompose and factorize them into several subgraphs acting together. Polynomial equations over functional graphs provide a formal way to represent this decomposition and factorization mechanism, and solving them validates or invalidates hypotheses on their decomposability. The current solution method breaks a single equation down into a series of \emph{basic} equations of the form $A\times X=B$ (with $A$, $X$, and $B$ being FGs) to identify the possible solutions. However, it can only handle FGs consisting of cycles. This work proposes an algorithm for solving these basic equations for general connected FGs. By exploiting a connection with the cancellation problem, we prove that the upper bound on the number of solutions is closely related to the size of the cycle in the coefficient $A$ of the equation. The cancellation problem is also involved in the main algorithms provided by the paper. We introduce a polynomial-time semi-decision algorithm that provides constraints which a potential solution will have to satisfy if it exists. Then, building on the ideas introduced in the first algorithm, we introduce a second, exponential-time algorithm capable of finding all solutions, integrating several `hacks' that try to keep the exponential as tight as possible.
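
To make the basic-equation setting concrete, here is a minimal brute-force sketch (ours, not the paper's algorithm), assuming the product is the usual direct product of functional graphs; the names fg_product, isomorphic, and solve_basic are hypothetical:

```python
from itertools import permutations, product as iproduct

# A functional graph (FG) on n nodes as a tuple f, where f[i] is the image of node i.

def fg_product(f, g):
    """Direct product of two FGs: node (i, j) maps to (f[i], g[j]).
    Nodes of the product are flattened as i * len(g) + j."""
    m = len(g)
    return tuple(f[i] * m + g[j] for i in range(len(f)) for j in range(m))

def isomorphic(f, g):
    """Brute-force FG isomorphism test (only viable for tiny examples)."""
    if len(f) != len(g):
        return False
    n = len(f)
    return any(all(p[f[i]] == g[p[i]] for i in range(n))
               for p in permutations(range(n)))

def solve_basic(A, B, max_n=4):
    """Enumerate all FGs X with at most max_n nodes such that A x X is isomorphic to B."""
    sols = []
    for n in range(1, max_n + 1):
        if len(A) * n != len(B):
            continue
        for X in iproduct(range(n), repeat=n):
            if isomorphic(fg_product(A, X), B):
                sols.append(X)
    return sols

# Example: A is a 2-cycle; search for X with A x X isomorphic to B.
A = (1, 0)            # 0 -> 1 -> 0
X_true = (1, 2, 0)    # a 3-cycle
B = fg_product(A, X_true)
print(solve_basic(A, B))
```

Even this toy search makes the role of the coefficient $A$ visible: for $A$ a 2-cycle and $B$ a 6-cycle, the only solutions found are the 3-cycles.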

Related content

A $k$-deck of a (coloured) graph is a multiset of its induced $k$-vertex subgraphs. Given a graph $G$, when is it possible to reconstruct with high probability a uniformly random colouring of its vertices in $r$ colours from its $k$-deck? In this paper, we study this question for grids and random graphs. Reconstruction of random colourings of $d$-dimensional $n$-grids from the deck of their $k$-subgrids is one of the most studied colour reconstruction questions. The 1-dimensional case is motivated by the problem of reconstructing DNA sequences from their `shotgunned' stretches; it was studied comprehensively, and the above reconstruction question completely answered, in the '90s. In this paper, we give a very precise answer for higher $d$. For every $d\geq 2$ and every $r\geq 2$, we present an almost-linear-time algorithm that reconstructs with high probability a random $r$-colouring of the vertices of a $d$-dimensional $n$-grid from the deck of all its $k$-subgrids for every $k\geq(d\log_r n)^{1/d}+1/d+\varepsilon$, and we prove that the random $r$-colouring is not reconstructible with high probability if $k\leq (d\log_r n)^{1/d}-\varepsilon$. This answers the question of Narayanan and Yap (asked for $d\geq 3$) on "two-point concentration" of the minimum $k$ such that the $k$-subgrids determine the entire colouring. Next, we prove that with high probability a uniformly random $r$-colouring of the vertices of a uniformly random graph $G(n,1/2)$ is reconstructible from its full $k$-deck if $k\geq 2\log_2 n+8$ and is not reconstructible with high probability if $k\leq\sqrt{2\log_2 n}$. We further show that the colour reconstruction algorithm for random graphs can be modified and used for graph reconstruction: we prove that with high probability $G(n,1/2)$ is reconstructible from its full $k$-deck if $k\geq 2\log_2 n+11$, while it is not reconstructible with high probability if $k\leq 2\sqrt{\log_2 n}$.
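
For $d=1$, the deck is just the multiset of contiguous length-$k$ windows. The following sketch (our illustration, not the paper's algorithm) computes that deck and exhibits two distinct colourings of a 7-grid sharing the same 2-deck, so a small $k$ cannot suffice for reconstruction:

```python
from collections import Counter

def k_deck_1d(colouring, k):
    """Multiset of contiguous length-k windows: the k-deck of a 1-D colouring."""
    return Counter(tuple(colouring[i:i + k])
                   for i in range(len(colouring) - k + 1))

# Two distinct 2-colourings of a 1-dimensional 7-grid with identical 2-decks,
# so neither is reconstructible from its deck of 2-subgrids.
c1 = (0, 1, 0, 0, 1, 1, 0)
c2 = (0, 0, 1, 1, 0, 1, 0)
print(k_deck_1d(c1, 2) == k_deck_1d(c2, 2))  # True, although c1 != c2
```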

Approximate forms of the RII and RIII redistribution matrices are frequently applied to simplify the numerical solution of the radiative transfer problem for polarized radiation, taking partial frequency redistribution (PRD) effects into account. A widely used approximation for RIII is to consider its expression under the assumption of complete frequency redistribution (CRD) in the observer frame (RIII-CRD). The adequacy of this approximation for modeling intensity profiles has been firmly established. By contrast, its suitability for modeling scattering polarization signals has been analyzed in only a few studies, considering simplified settings. In this work, we aim to quantitatively assess the impact and the range of validity of the RIII-CRD approximation in the modeling of scattering polarization. We first present an analytic comparison between RIII and RIII-CRD. We then compare the results of radiative transfer calculations, out of local thermodynamic equilibrium, performed with RIII and RIII-CRD in realistic 1D atmospheric models, focusing on the chromospheric Ca I line at 4227 Å and on the photospheric Sr I line at 4607 Å.
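
As a rough illustration of the factorized form (up to branching ratios and angular factors, which we omit), the CRD approximation couples the incoming and outgoing reduced frequencies only through the absorption profile. The sketch below, using a Voigt profile computed from the Faddeeva function, is ours and stands in for neither the paper's RIII expression nor its transfer calculations:

```python
import numpy as np
from scipy.special import wofz

def voigt(x, a):
    """Voigt profile H(a, x) / sqrt(pi), normalized to unit area in x."""
    return wofz(x + 1j * a).real / np.sqrt(np.pi)

def r3_crd(x_in, x_out, a):
    """Schematic angle-averaged RIII under CRD in the observer frame:
    the joint frequency dependence factorizes as phi(x_in) * phi(x_out)."""
    return voigt(x_in, a) * voigt(x_out, a)

x = np.linspace(-5.0, 5.0, 201)
print(voigt(x, 1e-3).sum() * (x[1] - x[0]))  # ~1: the profile is normalized
print(r3_crd(0.0, 1.5, 1e-3))                # joint weight for x_in=0, x_out=1.5
```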

Cross-modal representation learning has become the new normal for bridging the semantic gap between text and visual data. Learning modality-agnostic representations in a continuous latent space, however, is often treated as a black-box data-driven training process. It is well known that the effectiveness of representation learning depends heavily on the quality and scale of training data. For video representation learning, having a complete set of labels that annotate the full spectrum of video content for training is highly difficult, if not impossible. These issues, black-box training and dataset bias, make representation learning difficult to deploy for video understanding because of its unexplainable and unpredictable results. In this paper, we propose two novel training objectives, likelihood and unlikelihood functions, to unroll the semantics behind embeddings while addressing the label sparsity problem in training. The likelihood training aims to interpret the semantics of embeddings beyond the training labels, while the unlikelihood training leverages prior knowledge for regularization to ensure semantically coherent interpretation. With both training objectives, a new encoder-decoder network, which learns interpretable cross-modal representations, is proposed for ad-hoc video search. Extensive experiments on the TRECVid and MSR-VTT datasets show that the proposed network outperforms several state-of-the-art retrieval models with a statistically significant performance margin.
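
A minimal sketch of what such a likelihood/unlikelihood pair could look like (our PyTorch toy; the masks pos and neg are hypothetical stand-ins for annotated concepts and knowledge-excluded concepts, and the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def likelihood_loss(logits, positive_mask):
    """Encourage concepts annotated (or semantically implied) for the video:
    maximize log p(concept | embedding) over the positives."""
    logp = F.log_softmax(logits, dim=-1)
    return -(logp * positive_mask).sum(dim=-1).mean()

def unlikelihood_loss(logits, negative_mask, eps=1e-6):
    """Penalize concepts that prior knowledge rules out:
    maximize log(1 - p(concept | embedding)) over the negatives."""
    p = F.softmax(logits, dim=-1)
    return -(torch.log(1.0 - p + eps) * negative_mask).sum(dim=-1).mean()

# Toy batch: 2 embeddings scored against 5 candidate concepts.
logits = torch.randn(2, 5, requires_grad=True)
pos = torch.tensor([[1., 0., 0., 1., 0.], [0., 1., 0., 0., 0.]])
neg = torch.tensor([[0., 0., 1., 0., 0.], [1., 0., 0., 0., 1.]])
loss = likelihood_loss(logits, pos) + unlikelihood_loss(logits, neg)
loss.backward()
```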

Probabilistic dependency graphs (PDGs) are a flexible class of probabilistic graphical models, subsuming Bayesian Networks and Factor Graphs. They can also capture inconsistent beliefs and provide a way of measuring the degree of this inconsistency. We present the first tractable inference algorithm for PDGs with discrete variables, making the asymptotic complexity of PDG inference similar to that of the graphical models they generalize. The key components are: (1) the observation that, in many cases, the distribution a PDG specifies can be formulated as a convex optimization problem (with exponential cone constraints); (2) a construction that allows us to express these problems compactly for PDGs of bounded treewidth; (3) contributions to the theory of PDGs that justify the construction; and (4) an appeal to interior point methods that can solve such problems in polynomial time. We verify the correctness and complexity of our approach and provide an implementation of it. We then evaluate our implementation and demonstrate that it outperforms baseline approaches. Our code is available at https://github.com/orichardson/pdg-infer-uai.
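
To illustrate the kind of program involved (not the paper's treewidth-based construction), here is a toy exponential-cone problem in cvxpy: minimizing a relative entropy subject to linear constraints, which cvxpy compiles to exponential cone constraints and hands to an interior-point-style solver:

```python
import cvxpy as cp
import numpy as np

# Toy version of the convex-programming view: pick a distribution p
# minimizing KL(p || q) subject to a linear (marginal) constraint.
q = np.array([0.4, 0.3, 0.2, 0.1])    # reference distribution
p = cp.Variable(4, nonneg=True)
constraints = [cp.sum(p) == 1,
               p[0] + p[1] == 0.5]    # an imposed marginal
objective = cp.Minimize(cp.sum(cp.rel_entr(p, q)))  # exponential-cone representable
cp.Problem(objective, constraints).solve()
print(p.value)
```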

Homoscedastic regression error is a common assumption in many high-dimensional regression models and theories. Although heteroscedastic error commonly exists in real-world datasets, testing for heteroscedasticity remains largely underexplored in high-dimensional settings. We consider the heteroscedasticity test proposed by Newey and Powell (1987), whose asymptotic theory is well-established for the low-dimensional setting. We show that the Newey-Powell test can be developed for high-dimensional data. For the asymptotic theory, we consider the setting where the number of dimensions grows with the sample size at a linear rate. The asymptotic analysis of the test statistic utilizes the Approximate Message Passing (AMP) algorithm, from which we obtain the limiting distribution of the test. The numerical performance of the test is investigated through an extensive simulation study. As real-data applications, we present an analysis of the "international economic growth" data (Belloni et al., 2011), which is found to be homoscedastic, and of the "supermarket" data (Lan et al., 2016), which is found to be heteroscedastic.
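
For intuition only, a classical low-dimensional heteroscedasticity test in the auxiliary-regression (Breusch-Pagan) style can be sketched as below; the Newey-Powell statistic has a related but distinct form, and the paper's high-dimensional, AMP-based analysis is not reflected here:

```python
import numpy as np

def aux_regression_het_test(X, y):
    """Regress squared OLS residuals on the covariates and report n * R^2,
    asymptotically chi-squared with p degrees of freedom under homoscedasticity
    (Breusch-Pagan style; shown for intuition, not the Newey-Powell statistic)."""
    n, p = X.shape
    Z = np.column_stack([np.ones(n), X])
    beta = np.linalg.lstsq(Z, y, rcond=None)[0]
    e2 = (y - Z @ beta) ** 2                      # squared residuals
    gamma = np.linalg.lstsq(Z, e2, rcond=None)[0]  # auxiliary regression
    fitted = Z @ gamma
    r2 = 1 - np.sum((e2 - fitted) ** 2) / np.sum((e2 - e2.mean()) ** 2)
    return n * r2  # compare against a chi2(p) quantile

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y_hom = X @ np.ones(3) + rng.normal(size=500)
y_het = X @ np.ones(3) + np.exp(X[:, 0]) * rng.normal(size=500)
print(aux_regression_het_test(X, y_hom), aux_regression_het_test(X, y_het))
```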

In the realm of the Internet of Things (IoT), deploying deep learning models to process data generated or collected by IoT devices is a critical challenge. However, direct data transmission can cause network congestion and inefficient execution, given that IoT devices typically have limited computation and communication capabilities. Centralized data processing in data centers is also no longer feasible due to concerns over data privacy and security. To address these challenges, we present an innovative Edge-assisted U-Shaped Split Federated Learning (EUSFL) framework, which harnesses the high-performance capabilities of edge servers to assist IoT devices in the model training and optimization process. In this framework, we leverage Federated Learning (FL) to enable data holders to collaboratively train models without sharing their data, thereby enhancing data privacy protection by transmitting only model parameters. Additionally, inspired by Split Learning (SL), we split the neural network into three parts using U-shaped splitting for local training on IoT devices. By exploiting the greater computation capability of edge servers, our framework effectively reduces overall training time and allows IoT devices with varying capabilities to perform training tasks efficiently. Furthermore, we propose a novel noise mechanism called LabelDP to ensure that data features and labels can securely resist reconstruction attacks, eliminating the risk of privacy leakage. Our theoretical analysis and experimental results demonstrate that EUSFL can be integrated with various aggregation algorithms, maintains good performance across different computing capabilities of IoT devices, and significantly reduces training time and local computation overhead.
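
A minimal sketch of the U-shaped three-way split (our toy layer sizes, not EUSFL's architecture): the device keeps the first and last segments, so raw inputs and labels never leave it, and only intermediate activations cross the device-edge boundary:

```python
import torch
import torch.nn as nn

# U-shaped split into three parts: head and tail stay on the IoT device,
# the heavier middle segment runs on the edge server.
device_head = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # on IoT device
edge_body   = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                            nn.Linear(64, 64), nn.ReLU())   # on edge server
device_tail = nn.Linear(64, 10)                             # on IoT device

x, labels = torch.randn(8, 32), torch.randint(0, 10, (8,))
a1 = device_head(x)     # device -> edge: only activations a1 are transmitted
a2 = edge_body(a1)      # edge -> device: activations a2 come back
loss = nn.functional.cross_entropy(device_tail(a2), labels)
loss.backward()         # gradients flow back across both cut points
```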

We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order (or topological order) than separate estimations. Moreover, the joint estimator is able to recover non-identifiable DAGs by estimating them together with some identifiable DAGs. Lastly, our analysis also shows the consistency of union support recovery of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer is the same as the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
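
The $l_1/l_2$ regularizer itself is easy to state: an $l_2$ norm across the $K$ tasks for each candidate edge, summed ($l_1$-style) over edges, which encourages a shared sparse union of supports. A sketch of just the penalty (ours; the full estimator couples it with the Gaussian log-likelihoods of the $K$ linear SEMs):

```python
import numpy as np

def l1_l2_penalty(Ws):
    """Group penalty for joint support recovery: an l2 norm across the K tasks
    for each edge (i, j), summed (l1) over all edges. Ws has shape (K, d, d),
    where Ws[k] is the weighted adjacency matrix of task k's linear SEM."""
    return np.sum(np.sqrt(np.sum(Ws ** 2, axis=0)))

# Toy check: K = 3 tasks over d = 4 variables sharing one common edge 0 -> 2.
rng = np.random.default_rng(1)
Ws = np.zeros((3, 4, 4))
Ws[:, 0, 2] = rng.normal(size=3)
print(l1_l2_penalty(Ws))
```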

Human-in-the-loop approaches aim to train an accurate prediction model at minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing works on human-in-the-loop from a data perspective and classify them into three categories with a progressive relationship: (1) work on improving model performance through data processing, (2) work on improving model performance through interventional model training, and (3) the design of system-independent human-in-the-loop frameworks. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and we briefly classify and discuss work in natural language processing, computer vision, and other areas. We also present open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop research and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.

Knowledge graph completion aims to predict missing relations between entities in a knowledge graph. While many different methods have been proposed, there is a lack of a unifying framework that would lead to state-of-the-art results. Here we develop PathCon, a knowledge graph completion method that harnesses four novel insights to outperform existing methods. PathCon predicts relations between a pair of entities by: (1) considering the Relational Context of each entity, capturing the relation types adjacent to the entity, modeled through a novel edge-based message passing scheme; (2) considering the Relational Paths capturing all paths between the two entities; and (3) adaptively integrating the Relational Context and Relational Paths through a learnable attention mechanism. Importantly, (4) in contrast to conventional node-based representations, PathCon represents context and paths using only the relation types, which makes it applicable in an inductive setting. Experimental results on knowledge graph benchmarks as well as our newly proposed dataset show that PathCon outperforms state-of-the-art knowledge graph completion methods by a large margin. Finally, PathCon is able to provide interpretable explanations by identifying relations that provide the context and paths that are important for a given predicted relation.
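
A minimal sketch of relation-only, edge-based message passing in the spirit of insight (1) (our own simplified variant, not PathCon's exact update rule): each edge state starts from its relation-type embedding, with no node features, and is refreshed from the states of edges sharing an endpoint:

```python
import torch

# One round of edge-based message passing over relation types only.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # (head, tail) entity pairs
rel_type = torch.tensor([0, 1, 1, 2])       # relation type of each edge
emb = torch.nn.Embedding(3, 8)              # relation-type embeddings
h = emb(rel_type)                           # initial edge states, no node features

msgs = torch.zeros_like(h)
for i, (u, v) in enumerate(edges):
    for j, (a, b) in enumerate(edges):
        if i != j and {u, v} & {a, b}:      # edges sharing an endpoint with edge i
            msgs[i] += h[j]
h = torch.relu(h + msgs)                    # updated edge representations
print(h.shape)
```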

Due to their inherent capability for semantic alignment of aspects with their context words, attention mechanisms and Convolutional Neural Networks (CNNs) are widely applied for aspect-based sentiment classification. However, these models lack a mechanism to account for relevant syntactical constraints and long-range word dependencies, and hence may mistakenly treat syntactically irrelevant context words as clues for judging aspect sentiment. To tackle this problem, we propose building a Graph Convolutional Network (GCN) over the dependency tree of a sentence to exploit syntactical information and word dependencies. On this basis, a novel aspect-specific sentiment classification framework is proposed. Experiments on three benchmark collections illustrate that our proposed model is comparable in effectiveness to a range of state-of-the-art models, and further demonstrate that both syntactical information and long-range word dependencies are properly captured by the graph convolution structure.
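
A minimal sketch of one graph-convolution step over a dependency parse (ours; the parse arcs and layer sizes here are hypothetical and for illustration only):

```python
import torch

def gcn_layer(H, A, W):
    """One graph-convolution step: A is the adjacency of the dependency parse,
    augmented with self-loops and normalized by node degree."""
    A_hat = A + torch.eye(A.size(0))
    deg = A_hat.sum(dim=1, keepdim=True)
    return torch.relu((A_hat / deg) @ H @ W)

# Toy sentence of 5 tokens; arcs follow a made-up dependency parse,
# treated as an undirected graph as is common in dependency-GCN models.
A = torch.zeros(5, 5)
for i, j in [(1, 0), (1, 3), (3, 2), (3, 4)]:  # head -> dependent arcs
    A[i, j] = A[j, i] = 1.0
H = torch.randn(5, 16)   # token representations (e.g., BiLSTM states)
W = torch.randn(16, 16)
print(gcn_layer(H, A, W).shape)
```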
