
Let $X = \{X_{u}\}_{u \in U}$ be a real-valued Gaussian process indexed by a set $U$. It can be thought of as an undirected graphical model with every random variable $X_{u}$ serving as a vertex. We characterize this graph in terms of the covariance of $X$ through its reproducing kernel property. Unlike other characterizations in the literature, our characterization does not restrict the index set $U$ to be finite or countable, and hence can be used to model the intrinsic dependence structure of stochastic processes in continuous time/space. Consequently, this characterization is not in terms of the zero entries of an inverse covariance. This poses novel challenges for the problem of recovery of the dependence structure from a sample of independent realizations of $X$, also known as structure estimation. We propose a methodology that circumvents these issues, by targeting the recovery of the underlying graph up to a finite resolution, which can be arbitrarily fine and is limited only by the available sample size. The recovery is shown to be consistent so long as the graph is sufficiently regular in an appropriate sense. We derive corresponding convergence rates and finite sample guarantees. Our methodology is illustrated by means of a simulation study and two data analyses.
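The characterization above deliberately avoids inverse covariances, which need not exist when $U$ is uncountable, and the proposed estimator instead targets the graph up to a finite resolution. For orientation only, here is a minimal sketch of the classical finite-index baseline that such a discretization reduces to: for a Gaussian vector observed at finitely many indices, missing edges correspond to zero partial correlations, which can be estimated from i.i.d. realizations and thresholded. All names and the kernel choice are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def finite_resolution_graph(samples, threshold=0.1):
    """Estimate a conditional-independence graph for a Gaussian vector.

    samples: (n, p) array of n i.i.d. realizations observed at p indices
    (e.g. the process X evaluated on a finite grid over U).
    Returns a boolean (p, p) adjacency matrix.
    """
    cov = np.cov(samples, rowvar=False)    # empirical covariance
    prec = np.linalg.pinv(cov)             # empirical precision matrix
    # Normalize to partial correlations so the threshold is scale-free.
    d = np.sqrt(np.diag(prec))
    partial_corr = -prec / np.outer(d, d)
    np.fill_diagonal(partial_corr, 0.0)
    return np.abs(partial_corr) > threshold

# Example: X sampled on a grid of 20 points in [0, 1], 500 realizations.
rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 20)
K = np.exp(-np.abs(grid[:, None] - grid[None, :]) / 0.2)  # OU-type kernel
X = rng.multivariate_normal(np.zeros(20), K, size=500)
print(finite_resolution_graph(X).sum())
```

With the exponential (Ornstein-Uhlenbeck-type) kernel above the process is Markov, so at any finite resolution the recovered graph should be close to a path.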

Related content

A new feature that lets you switch seamlessly between iOS 8 and OS X Yosemite. > Apple products have always been designed to work together beautifully. But now they may really surprise you. With iOS 8 and OS X Yosemite, you’ll be able to do more wonderful things than ever before.


Let $X$ be a $d$-dimensional simplicial complex. A function $F\colon X(k)\to \{0,1\}^k$ is said to be a direct product function if there exists a function $f\colon X(1)\to \{0,1\}$ such that $F(\sigma) = (f(\sigma_1), \ldots, f(\sigma_k))$ for each $k$-face $\sigma$. In an effort to simplify components of the PCP theorem, Goldreich and Safra introduced the problem of direct product testing, which asks whether one can test if $F\colon X(k)\to \{0,1\}^k$ is correlated with a direct product function by querying $F$ on only $2$ inputs. Dinur and Kaufman conjectured that there exist bounded-degree complexes with a direct product test in the small soundness regime. We resolve their conjecture by showing that for all $\delta>0$, there exists a family of high-dimensional expanders with degree $O_{\delta}(1)$ and a $2$-query direct product tester with soundness $\delta$. We use the characterization given by a subset of the authors, and independently by Dikstein and Dinur, who showed that a form of non-Abelian coboundary expansion (which they called "Unique-Games coboundary expansion") is a necessary and sufficient condition for a complex to admit such direct product testers. Our main technical contribution is a general technique for showing coboundary expansion of complexes with coefficients in a non-Abelian group. This allows us to prove that the high-dimensional expanders constructed by Chapman and Lubotzky satisfy the necessary conditions, and thus admit a $2$-query direct product tester with small soundness.

A $d$-dimensional simplicial complex $X$ is said to support a direct product tester if any locally consistent function defined on its $k$-faces (where $k\ll d$) necessarily comes from a function over its vertices. More precisely, a direct product tester has a distribution $\mu$ over pairs of $k$-faces $(A,A')$; given query access to $F\colon X(k)\to\{0,1\}^k$, it samples $(A,A')\sim \mu$ and checks that $F[A]|_{A\cap A'} = F[A']|_{A\cap A'}$. The tester should have (1) the ``completeness property'', meaning that any assignment $F$ which is a direct product function passes the test with probability $1$, and (2) the ``soundness property'', meaning that if $F$ passes the test with probability $s$, then $F$ must be correlated with a direct product function. Dinur and Kaufman showed that a sufficiently good spectrally expanding complex $X$ admits a direct product tester in the ``high soundness'' regime, where $s$ is close to $1$. They asked whether there are high-dimensional expanders that support direct product tests in the ``low soundness'' regime, where $s$ is close to $0$. We give a characterization of high-dimensional expanders that support a direct product tester in the low soundness regime. We show that spectral expansion is insufficient: the complex must additionally satisfy a variant of coboundary expansion, which we refer to as \emph{Unique-Games coboundary expansion}. Conversely, we show that this property is also sufficient to obtain direct product testers. This property can be seen as a high-dimensional generalization of the standard notion of coboundary expansion over non-Abelian groups for $2$-dimensional complexes. It asserts that any locally consistent Unique-Games instance obtained using the low-level faces of the complex must admit a good global solution.
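To make the test concrete, here is a toy sketch of the $2$-query agreement test on an explicit list of $k$-faces. It is a simplification: the distribution $\mu$ is taken uniform over intersecting pairs, whereas actual testers use a carefully chosen $\mu$ (e.g., pairs intersecting in a prescribed number of vertices), on which the soundness analysis crucially depends.

```python
import random
from itertools import combinations

def direct_product_tester(faces, F, trials=1000):
    """Estimate the acceptance probability of the 2-query agreement test.

    faces: list of k-faces, each a frozenset of vertices.
    F: dict mapping each face to a local assignment {vertex: bit}.
    The test picks two intersecting faces and checks that their labels
    agree on the shared vertices.
    """
    pairs = [(A, B) for A, B in combinations(faces, 2) if A & B]
    accepted = 0
    for _ in range(trials):
        A, B = random.choice(pairs)
        if all(F[A][v] == F[B][v] for v in A & B):
            accepted += 1
    return accepted / trials

# A genuine direct product F(sigma) = (f(v) for v in sigma) passes w.p. 1.
vertices = range(10)
f = {v: random.randint(0, 1) for v in vertices}
faces = [frozenset(c) for c in combinations(vertices, 3)]
F = {A: {v: f[v] for v in A} for A in faces}
print(direct_product_tester(faces, F))  # 1.0
```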

Unlabeled data is a key component of modern machine learning. In general, the role of unlabeled data is to impose a form of smoothness, usually via the similarity information encoded in a base kernel, such as the $\epsilon$-neighbor kernel or the adjacency matrix of a graph. This work revisits the classical idea of spectrally transformed kernel regression (STKR), and provides a new class of general and scalable STKR estimators able to leverage unlabeled data. Intuitively, the spectral transformation lets STKR exploit the data distribution, about which unlabeled data provides additional information. First, we show that STKR is a principled and general approach, by characterizing a universal type of "target smoothness" and proving that any sufficiently smooth function can be learned by STKR. Second, we provide scalable STKR implementations for the inductive setting and a general transformation function, whereas prior work is mostly limited to the transductive setting. Third, we derive statistical guarantees for two scenarios: STKR with a known polynomial transformation, and STKR with kernel PCA when the transformation is unknown. Overall, we believe that this work deepens our understanding of how to work with unlabeled data, and that its generality makes it a useful starting point for new methods.
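As a minimal illustration of the transductive, known-polynomial case (a sketch under assumed names, not the paper's scalable inductive implementation): the base kernel matrix over labeled and unlabeled points is spectrally transformed by a polynomial, and ordinary kernel ridge regression is run on the transformed kernel. A polynomial with nonnegative coefficients maps each eigenvalue $\lambda$ of $K$ to $\sum_j c_j \lambda^j$, so the transform stays positive semidefinite, and powers of $K$ propagate similarity through the unlabeled points.

```python
import numpy as np

def stkr_polynomial(K, y_labeled, labeled_idx, coeffs, ridge=1e-3):
    """Spectrally transformed kernel regression, polynomial transform.

    K: (n, n) base kernel over labeled + unlabeled points.
    coeffs: (c_1, ..., c_q); the transformed kernel is sum_j c_j K^j.
    Returns predictions on all n points (transductive setting).
    """
    K_s = sum(c * np.linalg.matrix_power(K, j + 1)
              for j, c in enumerate(coeffs))
    K_ll = K_s[np.ix_(labeled_idx, labeled_idx)]    # labeled x labeled block
    K_al = K_s[:, labeled_idx]                       # all x labeled block
    alpha = np.linalg.solve(K_ll + ridge * np.eye(len(labeled_idx)),
                            y_labeled)
    return K_al @ alpha
```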

A fundamental issue in the $\lambda$-calculus is to find appropriate notions of meaningfulness. It is well known that in the call-by-name $\lambda$-calculus (CbN) the meaningful terms can be identified with the solvable ones, and that this notion is not appropriate in the call-by-value $\lambda$-calculus (CbV). This paper validates the challenging claim that a different notion, previously introduced in the literature as potential valuability, appropriately represents meaningfulness in CbV. As in CbN, this claim is corroborated by proving two essential properties. The first one is genericity, stating that meaningless subterms have no bearing on the evaluation of normalizing terms. To prove genericity (previously an open problem), we use a novel approach based on stratified reduction, which applies uniformly to CbN and CbV and yields quantitative information. The second property concerns the consistency of the smallest congruence relation that equates all meaningless terms. While the consistency result is not new, we provide the first direct operational proof of it. We also show that this congruence has a unique consistent and maximal extension, which coincides with a well-known notion of observational equivalence. Our results thus supply the formal concepts and tools that validate the informal notion of meaningfulness underlying CbN and CbV.
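As a standard illustration (not drawn from the paper) of genericity and the CbN/CbV divide: let $\Omega = (\lambda x.\,x\,x)(\lambda x.\,x\,x)$, the paradigmatic meaningless term. In CbN, $(\lambda x.\,y)\,\Omega$ reduces directly to $y$, so the meaningless subterm $\Omega$ has no bearing on the result, exactly as genericity predicts. In CbV, the same term diverges, since the argument must first be reduced to a value; accordingly, $\Omega$ is not potentially valuable and counts as meaningless in CbV as well.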

In Linear Logic ($\mathsf{LL}$), the exponential modality $!$ brings forth a distinction between non-linear proofs and linear proofs, where linear means using an argument exactly once. Differential Linear Logic ($\mathsf{DiLL}$) is an extension of Linear Logic with additional rules for $!$ which encode differentiation and the ability to linearize proofs. On the other hand, Graded Linear Logic ($\mathsf{GLL}$) is a variation of Linear Logic in which $!$ is indexed by a semiring $R$. This $R$-grading allows for non-linear proofs of degree $r \in R$, with the linear proofs being those of degree $1 \in R$. There has been recent interest in combining these two variations of $\mathsf{LL}$ by developing Graded Differential Linear Logic ($\mathsf{GDiLL}$). In this paper we present a sequent calculus for $\mathsf{GDiLL}$ and introduce its categorical semantics, which we call graded differential categories, using both coderelictions and deriving transformations. We prove that symmetric powers always give graded differential categories, and provide other examples of graded differential categories. We also discuss graded versions of (monoidal) coalgebra modalities, additive bialgebra modalities, and the Seely isomorphisms, as well as their implementations in the sequent calculus of $\mathsf{GDiLL}$.
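For intuition about the grading (a standard illustration under the usual graded rules; the paper's exact sequent calculus may differ in presentation): taking $R = \mathbb{N}$, dereliction types $!_{1} A \vdash A$, weakening produces $!_{0} A$, and contraction splits grades, $!_{r+s} A \vdash\ !_{r} A \otimes\ !_{s} A$, so that $!_{2} A$ provides exactly two uses of $A$. The differential rules of $\mathsf{GDiLL}$ then extract a linear, degree-$1$ use out of a non-linear proof, mirroring how differentiation produces the best linear approximation of a function.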

Hybrid quantum-classical algorithms appear to be the most promising approach for near-term quantum applications. An important bottleneck is the classical optimization loop, where multiple local minima and the emergence of barren plateaus make these approaches less appealing. To improve the optimization, the Quantum Natural Gradient (QNG) method [Quantum 4, 269 (2020)] was introduced, which uses information about the local geometry of the quantum state-space. While QNG-based optimization is promising, each step requires more quantum resources: computing the QNG requires $O(m^2)$ quantum state preparations, where $m$ is the number of parameters in the parameterized circuit. In this work we propose two methods that reduce the resources/state preparations required for QNG, while keeping the advantages and performance of QNG-based optimization. First, we introduce the Random Natural Gradient (RNG), which uses random measurements and the classical Fisher information matrix (as opposed to the quantum Fisher information used in QNG). The required quantum resources reduce to linear $O(m)$, a quadratic "speed-up", while in our numerical simulations RNG matches QNG in terms of accuracy. We give theoretical arguments for RNG and then benchmark the method against QNG on both classical and quantum problems. Second, inspired by stochastic-coordinate methods, we propose a novel approximation to the QNG, which we call Stochastic-Coordinate Quantum Natural Gradient, that optimizes only a small (randomly sampled) fraction of the total parameters at each iteration. This method also performs equally well in our benchmarks, while using fewer resources than QNG.
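Both QNG and the proposed variants share the same classical update rule and differ in how the metric is estimated: quantum Fisher information for QNG at $O(m^2)$ state preparations, classical Fisher information from random measurements for RNG at $O(m)$. A minimal sketch of the shared update step, with the metric passed in as an argument (how it is estimated is the papers' contribution and is not reproduced here):

```python
import numpy as np

def natural_gradient_step(theta, grad, fisher, lr=0.1, damping=1e-6):
    """One natural-gradient update: theta <- theta - lr * F^{-1} grad.

    fisher: (m, m) metric estimate -- the quantum Fisher information for
    QNG, or the classical Fisher information built from random
    measurements for the RNG variant. Damping regularizes the inversion.
    """
    m = len(theta)
    step = np.linalg.solve(fisher + damping * np.eye(m), grad)
    return theta - lr * step

# Plain gradient descent is the special case fisher = identity:
# natural_gradient_step(theta, grad, np.eye(len(theta)))
```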

In open-set recognition (OSR), a promising strategy is to exploit pseudo-unknown data outside the given $K$ known classes as an additional ($K$+$1$)-th class to explicitly model potential open space. However, treating the unknown classes without distinction puts them on an unequal footing relative to the known classes, because this treatment is agnostic to both the categories and the scales of the unknowns. It not only disrupts the inherent distributions of the unknown classes but also incurs both class-wise and instance-wise imbalances between known and unknown classes. Ideally, the OSR problem should model the whole class space as $K$+$\infty$, but enumerating all unknowns is impractical. Since the core of OSR is to effectively model the boundaries of the known classes, focusing only on the unknowns near the boundaries of the targeted known classes seems sufficient. Thus, as a compromise, we reduce the open classes from infinite to $K$ with a novel concept, Target-Aware Universum (TAU), and propose a simple yet effective framework, Dual Contrastive Learning with Target-Aware Universum (DCTAU). In detail, guided by the targeted known classes, TAU automatically expands the number of unknown classes from the previous $1$ to $K$, effectively alleviating the distribution disruption and imbalance issues mentioned above. Then, a novel Dual Contrastive (DC) loss is designed, where all instances, whether known or TAU, are treated as positives to be contrasted with their respective negatives. Experimental results indicate that DCTAU sets a new state of the art.
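The abstract does not spell out the DC loss, so the following is only a generic supervised contrastive loss in the same spirit: every instance, known or TAU, acts as an anchor and is contrasted against its negatives, with TAU instances carrying one of the $K$ expanded unknown labels rather than a single shared unknown label. Names and the temperature are illustrative.

```python
import numpy as np

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """Generic supervised contrastive loss (illustrative, not the DC loss).

    z: (n, d) L2-normalized embeddings; labels: (n,) integer class ids,
    where TAU instances carry one of the K expanded unknown labels.
    Same-label pairs are positives, different-label pairs are negatives.
    """
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                 # exclude self-similarity
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos, False)
    # Mean log-probability over each anchor's positives, negated.
    per_anchor = (np.where(pos, log_prob, 0.0).sum(axis=1)
                  / np.maximum(pos.sum(axis=1), 1))
    return -per_anchor.mean()
```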

Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP, aimed at addressing limitations in existing frameworks while aligning with the ultimate goals of artificial intelligence. This paradigm considers language models as agents capable of observing, acting, and receiving feedback iteratively from external entities. Specifically, language models in this context can: (1) interact with humans for better understanding and addressing user needs, personalizing responses, aligning with human values, and improving the overall user experience; (2) interact with knowledge bases for enriching language representations with factual knowledge, enhancing the contextual relevance of responses, and dynamically leveraging external information to generate more accurate and informed responses; (3) interact with models and tools for effectively decomposing and addressing complex tasks, leveraging specialized expertise for specific subtasks, and fostering the simulation of social behaviors; and (4) interact with environments for learning grounded representations of language, and effectively tackling embodied tasks such as reasoning, planning, and decision-making in response to environmental observations. This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept. We then provide a systematic classification of iNLP, dissecting its various components, including interactive objects, interaction interfaces, and interaction methods. We proceed to delve into the evaluation methodologies used in the field, explore its diverse applications, scrutinize its ethical and safety issues, and discuss prospective research directions. This survey serves as an entry point for researchers who are interested in this rapidly evolving area and offers a broad view of the current landscape and future trajectory of iNLP.

2D industrial anomaly detection has been widely studied; however, multimodal industrial anomaly detection based on 3D point clouds and RGB images still has many unexplored aspects. Existing multimodal methods directly concatenate the multimodal features, which leads to strong interference between features and harms detection performance. In this paper, we propose Multi-3D-Memory (M3DM), a novel multimodal anomaly detection method with a hybrid fusion scheme: first, we design an unsupervised feature fusion with patch-wise contrastive learning to encourage the interaction of features from different modalities; second, we use decision-layer fusion with multiple memory banks to avoid loss of information, plus additional novelty classifiers to make the final decision. We further propose a point feature alignment operation to better align the point cloud and RGB features. Extensive experiments show that our multimodal industrial anomaly detection model outperforms the state-of-the-art (SOTA) methods in both detection and segmentation precision on the MVTec-3D AD dataset. Code is available at https://github.com/nomewang/M3DM.
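A minimal sketch of the memory-bank idea underlying the decision layer (illustrative only; per the abstract, M3DM keeps multiple banks and learns additional novelty classifiers rather than the fixed weighted fusion shown here):

```python
import numpy as np

def memory_bank_score(feat, bank):
    """Anomaly score as distance to the nearest memory-bank entry.

    feat: (n, d) test patch features; bank: (m, d) features stored from
    normal training data. A large nearest-neighbor distance marks a
    patch as anomalous.
    """
    d2 = ((feat[:, None, :] - bank[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1))

def fuse_scores(score_rgb, score_pc, w=0.5):
    """Toy decision-layer fusion of per-modality anomaly scores."""
    return w * score_rgb + (1 - w) * score_pc
```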

We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results.
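The lattice construction starts from all lexicon words matching contiguous substrings of the input; each match adds a word path alongside the character sequence. A toy sketch (the lexicon is hypothetical; the sentence is a classic Chinese segmentation-ambiguity example):

```python
def lattice_words(sentence, lexicon):
    """Enumerate all lexicon words matching substrings of the sentence.

    Each match (start, end, word) becomes an extra path in the lattice
    alongside the character-level sequence.
    """
    matches = []
    n = len(sentence)
    for i in range(n):
        for j in range(i + 1, n + 1):
            if sentence[i:j] in lexicon:
                matches.append((i, j, sentence[i:j]))
    return matches

lexicon = {"南京", "南京市", "市长", "长江", "长江大桥", "大桥"}
print(lattice_words("南京市长江大桥", lexicon))
# [(0, 2, '南京'), (0, 3, '南京市'), (2, 4, '市长'), (3, 5, '长江'), ...]
```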
