
In this paper we develop a novel nonparametric framework to test the independence of two random variables $\mathbf{X}$ and $\mathbf{Y}$ with unknown respective marginals $H(dx)$ and $G(dy)$ and joint distribution $F(dx\, dy)$, based on {\it Receiver Operating Characteristic} (ROC) analysis and bipartite ranking. The rationale behind our approach relies on the fact that the independence hypothesis $\mathcal{H}_0$ is necessarily false as soon as the optimal scoring function related to the pair of distributions $(H\otimes G,\; F)$, obtained from a bipartite ranking algorithm, has a ROC curve that deviates from the main diagonal of the unit square. We consider a wide class of rank statistics, encompassing many ways of deviating from the diagonal in the ROC space, to build tests of independence. Beyond its great flexibility, this new method has theoretical properties that far surpass those of its competitors: nonasymptotic bounds for the two types of testing errors are established. From an empirical perspective, the novel procedure we promote in this paper exhibits a remarkable ability to detect small departures, of various types, from the null assumption $\mathcal{H}_0$, even in high dimension, as supported by the numerical experiments presented here.
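
As a minimal sketch of the underlying principle (not the paper's actual test statistic or learning step), one can rank genuine pairs drawn from the joint $F$ against decoupled pairs emulating $H \otimes G$, here with a crude fixed scoring function standing in for a learned one; under independence the resulting AUC stays near $1/2$, and dependence pushes it away:

```python
import random

def auc(pos, neg):
    # Mann-Whitney estimate of the area under the ROC curve:
    # probability that a positive-scored item ranks above a negative one.
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def independence_auc(xs, ys, score, rng):
    # Positives: scores of genuine pairs drawn from the joint F.
    # Negatives: scores of decoupled pairs, emulating H (x) G by shuffling ys.
    ys_perm = ys[:]
    rng.shuffle(ys_perm)
    pos = [score(x, y) for x, y in zip(xs, ys)]
    neg = [score(x, y) for x, y in zip(xs, ys_perm)]
    return auc(pos, neg)

rng = random.Random(0)
n = 400
xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
ys_dep = [x + rng.gauss(0.0, 0.5) for x in xs]    # Y depends on X
ys_ind = [rng.gauss(0.0, 1.0) for _ in range(n)]  # Y independent of X

score = lambda x, y: x * y  # crude stand-in for a learned scoring function
auc_dep = independence_auc(xs, ys_dep, score, rng)
auc_ind = independence_auc(xs, ys_ind, score, rng)
```

In the dependent case the AUC deviates markedly from $1/2$; in the independent case it stays close to it, up to sampling noise.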


Given some binary matrix $M$, suppose we are presented with the collection of its rows and columns in independent arbitrary orderings. From this information, are we able to recover the unique original orderings and matrix? We present an algorithm that identifies whether there is a unique ordering associated with a set of rows and columns, and outputs either the unique correct orderings for the rows and columns or the full collection of all valid orderings and valid matrices. We show that there is a constant $c > 0$ such that the algorithm terminates in $O(n^2)$ time with high probability and in expectation for random $n \times n$ binary matrices with i.i.d.\ Bernoulli$(p)$ entries $(m_{ij})_{i,j=1}^n$ such that $\frac{c\log^2(n)}{n(\log\log(n))^2} \leq p \leq \frac{1}{2}$.
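
A brute-force sketch of the underlying consistency check (feasible only for tiny matrices, unlike the paper's $O(n^2)$ algorithm) enumerates row orderings and keeps those whose induced columns match the given column multiset:

```python
from itertools import permutations

def valid_orderings(rows, cols):
    # rows, cols: the matrix's rows and columns, each in arbitrary order.
    # Return every row ordering whose induced column multiset matches `cols`.
    target = sorted(cols)
    found = []
    for perm in permutations(rows):
        induced = [tuple(r[j] for r in perm) for j in range(len(rows[0]))]
        if sorted(induced) == target:
            found.append(perm)
    return found

# Example: this 2x2 matrix is recoverable from its rows and columns alone.
found = valid_orderings([(1, 1), (0, 1)], [(1, 0), (1, 1)])
```

Here the ordering is unique; a symmetric matrix such as the identity would instead yield several valid orderings.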

We develop a novel method to construct uniformly valid confidence bands for a nonparametric component $f_1$ in the sparse additive model $Y=f_1(X_1)+\ldots + f_p(X_p) + \varepsilon$ in a high-dimensional setting. Our method integrates sieve estimation into a high-dimensional Z-estimation framework, facilitating the construction of uniformly valid confidence bands for the target component $f_1$. To form these confidence bands, we employ a multiplier bootstrap procedure. Additionally, we provide rates for uniform lasso estimation in high dimensions, which may be of independent interest. Through simulation studies, we demonstrate that our proposed method delivers reliable results in terms of estimation and coverage, even in small samples.
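
A minimal sketch of the Gaussian-multiplier bootstrap step, assuming a precomputed matrix `infl` of per-observation influence terms evaluated on a grid (the name and setup are illustrative, not the paper's estimator): the critical value is a quantile of the bootstrapped sup-norm of the process.

```python
import math
import random

def multiplier_bootstrap_quantile(infl, alpha, B, rng):
    # infl[i][k]: influence of observation i at grid point k.
    # Draw Gaussian multipliers, rebuild the process, record its sup-norm,
    # and return the empirical (1 - alpha)-quantile over B replications.
    n, m = len(infl), len(infl[0])
    sups = []
    for _ in range(B):
        g = [rng.gauss(0.0, 1.0) for _ in range(n)]
        proc = [abs(sum(g[i] * infl[i][k] for i in range(n))) / math.sqrt(n)
                for k in range(m)]
        sups.append(max(proc))
    sups.sort()
    return sups[min(B - 1, int(math.ceil((1 - alpha) * B)) - 1)]

# Illustrative influence matrix on a 5-point grid for 50 observations.
infl = [[math.sin(i * k + 1.0) for k in range(5)] for i in range(50)]
c95 = multiplier_bootstrap_quantile(infl, 0.05, 200, random.Random(1))
c50 = multiplier_bootstrap_quantile(infl, 0.50, 200, random.Random(1))
```

The band is then the estimate of $f_1$ plus/minus the critical value times the pointwise standard error.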

Gaussian splatting methods are becoming popular. However, their loss function contains only the $\ell_1$ norm and the structural similarity between the rendered and input images, without considering the edges in these images. It is well known that the edges in an image carry important information. Therefore, in this paper, we propose an Edge Guided Gaussian Splatting (EGGS) method that leverages the edges in the input images. More specifically, we give the edge regions a higher weight than the flat regions. With such edge guidance, the resulting Gaussian particles focus more on the edges than on the flat regions. Moreover, this edge guidance does not increase the computation cost during the training and rendering stages. The experiments confirm that such a simple edge-weighted loss function indeed yields an improvement of about $1\sim2$ dB on several different data sets. By simply plugging in the edge guidance, the proposed method can improve all Gaussian splatting methods in different scenarios, such as human head modeling, building 3D reconstruction, etc.
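
A minimal sketch of such an edge-weighted $\ell_1$ term, using a Sobel gradient magnitude of the target image as the edge map (the exact weighting below is illustrative; the paper's weights may differ):

```python
def sobel_magnitude(img):
    # Approximate gradient magnitude of a grayscale image (list of lists);
    # border pixels are left at zero for simplicity.
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = (img[i-1][j+1] + 2*img[i][j+1] + img[i+1][j+1]
                  - img[i-1][j-1] - 2*img[i][j-1] - img[i+1][j-1])
            gy = (img[i+1][j-1] + 2*img[i+1][j] + img[i+1][j+1]
                  - img[i-1][j-1] - 2*img[i-1][j] - img[i-1][j+1])
            mag[i][j] = (gx * gx + gy * gy) ** 0.5
    return mag

def edge_weighted_l1(rendered, target, lam=1.0):
    # Per-pixel L1 loss, reweighted so edge pixels count more than flat ones.
    mag = sobel_magnitude(target)
    peak = max(max(row) for row in mag) or 1.0
    total = 0.0
    for i in range(len(target)):
        for j in range(len(target[0])):
            w = 1.0 + lam * mag[i][j] / peak   # weight in [1, 1 + lam]
            total += w * abs(rendered[i][j] - target[i][j])
    return total

# Step-edge target image; the rendered image is uniformly off by 0.1.
target = [[0.0, 0.0, 0.0, 1.0, 1.0] for _ in range(5)]
rendered = [[v + 0.1 for v in row] for row in target]
plain = edge_weighted_l1(rendered, target, lam=0.0)
weighted = edge_weighted_l1(rendered, target, lam=2.0)
```

With `lam=0` the loss reduces to the plain $\ell_1$ sum; with `lam>0`, errors near the step edge are penalized more.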

For any hereditary graph class $F$, we construct optimal adjacency labeling schemes for the classes of subgraphs and induced subgraphs of Cartesian products of graphs in $F$. As a consequence, we show that, if $F$ admits efficient adjacency labels (or, equivalently, small induced-universal graphs) meeting the information-theoretic minimum, then the classes of subgraphs and induced subgraphs of Cartesian products of graphs in $F$ do too. Our proof uses ideas from randomized communication complexity, hashing, and additive combinatorics, and improves upon recent results of Chepoi, Labourel, and Ratel [Journal of Graph Theory, 2020].
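
For intuition, adjacency in a Cartesian product decomposes coordinate-wise, which is what makes tuple-of-labels schemes natural; a small sketch of the decoding rule (illustrative, not the paper's optimal labeling):

```python
def product_adjacent(u, v, adj):
    # u, v: tuples of vertex labels, one coordinate per factor graph.
    # adj(a, b): adjacency oracle within a single factor.
    # In a Cartesian product, u ~ v iff the tuples differ in exactly one
    # coordinate and are adjacent in that factor.
    diff = [k for k in range(len(u)) if u[k] != v[k]]
    return len(diff) == 1 and adj(u[diff[0]], v[diff[0]])

# Factor: a path graph on the integers (a ~ b iff |a - b| = 1).
path_adj = lambda a, b: abs(a - b) == 1
```

The paper's contribution is compressing these coordinate labels down to the information-theoretic minimum; the decoder above only shows why the product structure localizes adjacency to one coordinate.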

Given a simple weighted directed graph $G = (V, E, \omega)$ on $n$ vertices as well as two designated terminals $s, t \in V$, our goal is to compute the shortest path from $s$ to $t$ avoiding any pair of presumably failed edges $f_1, f_2 \in E$, a natural generalization of the classical replacement path problem, which considers single edge failures only. This dual failure replacement paths problem was recently studied by Vassilevska Williams, Woldeghebriel and Xu [FOCS 2022], who designed a cubic time algorithm for general weighted digraphs which is conditionally optimal; in the same paper, for unweighted graphs where $\omega \equiv 1$, the authors presented an algebraic algorithm with runtime $\tilde{O}(n^{2.9146})$, as well as a conditional lower bound of $n^{8/3-o(1)}$ against combinatorial algorithms. However, their work left open whether fast matrix multiplication is necessary for a subcubic runtime in unweighted digraphs. As our primary result, we present the first truly subcubic combinatorial algorithm for dual failure replacement paths in unweighted digraphs, with runtime $\tilde{O}(n^{3-1/18})$. We also study algebraic algorithms for digraphs with small integer edge weights from $\{-M, -M+1, \cdots, M-1, M\}$. As our secondary result, we obtain a runtime of $\tilde{O}(Mn^{2.8716})$, which is faster than the previous bound of $\tilde{O}(M^{2/3}n^{2.9144} + Mn^{2.8716})$ of Vassilevska Williams, Woldeghebriel and Xu [FOCS 2022].
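
For reference, the naive baseline in the unweighted setting simply reruns one BFS per queried pair of failed edges; the contribution of the works above is avoiding this recomputation. A minimal sketch of one such query:

```python
from collections import deque

def shortest_avoiding(n, edges, s, t, banned):
    # BFS shortest s-t path length in an unweighted digraph on n vertices,
    # skipping the edges in `banned` (the presumed failures).
    # Returns -1 if t is unreachable.
    graph = [[] for _ in range(n)]
    for u, v in edges:
        if (u, v) not in banned:
            graph[u].append(v)
    dist = [-1] * n
    dist[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist[t]

edges = [(0, 1), (1, 3), (0, 2), (2, 3), (0, 3)]
d_no_fail = shortest_avoiding(4, edges, 0, 3, set())
d_two = shortest_avoiding(4, edges, 0, 3, {(0, 3), (0, 1)})
d_cut = shortest_avoiding(4, edges, 0, 3, {(0, 1), (0, 2), (0, 3)})
```

Answering all dual-failure queries this way costs one BFS per banned pair, which is what the subcubic algorithms above improve upon.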

An $(m,n,R)$-de Bruijn covering array (dBCA) is a doubly periodic $M \times N$ array over an alphabet of size $q$ such that the set of all its $m \times n$ windows forms a covering code with radius $R$. An upper bound on the smallest array area of an $(m,n,R)$-dBCA is provided using a probabilistic technique similar to the one used for an upper bound on the length of a de Bruijn covering sequence. A folding technique to construct a dBCA from a de Bruijn covering sequence or a de Bruijn covering sequences code is presented. Several new constructions that yield shorter de Bruijn covering sequences and $(m,n,R)$-dBCAs with smaller areas are also provided. These constructions are mainly based on sequences derived from cyclic codes, self-dual sequences, primitive polynomials, an interleaving technique, folding, and mutual shifts of sequences with the same covering radius. Finally, constructions of de Bruijn covering sequences codes are also discussed.
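
A small sketch of the defining property, checking by brute force whether the windows of a doubly periodic array form a radius-$R$ covering code (feasible only for tiny parameters, and purely illustrative of the definition):

```python
from itertools import product

def is_dbca(arr, m, n, R, q):
    # Check that every q-ary m x n pattern is within Hamming distance R
    # of some m x n window of the doubly periodic array `arr`.
    M, N = len(arr), len(arr[0])
    windows = set()
    for i in range(M):
        for j in range(N):
            windows.add(tuple(arr[(i + a) % M][(j + b) % N]
                              for a in range(m) for b in range(n)))
    for pat in product(range(q), repeat=m * n):
        if min(sum(x != y for x, y in zip(pat, w)) for w in windows) > R:
            return False
    return True

# A 2x2 binary array whose 1x2 windows cover radius 1 but not radius 0:
arr = [[0, 1], [1, 0]]
ok_r1 = is_dbca(arr, 1, 2, 1, 2)
bad_r0 = is_dbca(arr, 1, 2, 0, 2)
```

Here the windows are only $(0,1)$ and $(1,0)$, which miss $(0,0)$ and $(1,1)$ exactly (radius 0) but cover them within Hamming distance 1.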

This paper studies the strength of embedding Call-by-Name ({\tt dCBN}) and Call-by-Value ({\tt dCBV}) into a unifying framework called the Bang Calculus ({\tt dBANG}). These embeddings enable establishing (static and dynamic) properties of {\tt dCBN} and {\tt dCBV} through their respective counterparts in {\tt dBANG}. While some specific static properties have already been successfully studied in the literature, the dynamic ones are more challenging and have been left unexplored. We accomplish this by using a standard embedding for the (easy) {\tt dCBN} case, while a novel one must be introduced for the (difficult) {\tt dCBV} case. Moreover, a key point of our approach is the identification of {\tt dBANG} diligent reduction sequences, which eases the preservation of dynamic properties from {\tt dBANG} to {\tt dCBN}/{\tt dCBV}. We illustrate our methodology through two concrete applications: confluence and factorization for both {\tt dCBN} and {\tt dCBV} are respectively derived from confluence and factorization for {\tt dBANG}.

We formulate a novel approach to solve a class of stochastic problems, referred to as data-consistent inverse (DCI) problems, which involve the characterization of a probability measure on the parameters of a computational model whose subsequent push-forward matches an observed probability measure on specified quantities of interest (QoI) typically associated with the outputs from the computational model. Whereas prior DCI solution methodologies focused on either constructing non-parametric estimates of the densities or the probabilities of events associated with the pre-image of the QoI map, we develop and analyze a constrained quadratic optimization approach based on estimating push-forward measures using weighted empirical distribution functions. The method proposed here is more suitable for low-data regimes or high-dimensional problems than the density-based method, as well as for problems where the probability measure does not admit a density. Numerical examples are included to demonstrate the performance of the method and to compare with the density-based approach where applicable.
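
A minimal sketch of the idea, assuming scalar QoI samples and using a simple projected-gradient loop in place of a proper constrained quadratic solver (the function names and setup are illustrative, not the paper's formulation): choose weights on the model's samples so the weighted empirical CDF of the QoI matches a target CDF, under the constraints $w \geq 0$ and $\sum_i w_i = 1$.

```python
def dci_weights(qoi_samples, target_cdf, grid, steps=2000, lr=0.1):
    # Least-squares fit of a weighted empirical CDF to a target CDF on a
    # grid, via projected gradient descent onto the probability simplex.
    n = len(qoi_samples)
    w = [1.0 / n] * n
    for _ in range(steps):
        grad = [0.0] * n
        for g in grid:
            # Residual of the weighted empirical CDF at grid point g.
            resid = sum(w[i] for i in range(n)
                        if qoi_samples[i] <= g) - target_cdf(g)
            for i in range(n):
                if qoi_samples[i] <= g:
                    grad[i] += 2 * resid
        w = [max(0.0, w[i] - lr * grad[i] / len(grid)) for i in range(n)]
        s = sum(w) or 1.0
        w = [wi / s for wi in w]           # renormalize onto the simplex
    return w

# Target: all mass on QoI values {0, 1}; samples 2 and 3 should be down-weighted.
def target_cdf(g):
    return 1.0 if g >= 1 else 0.5

w = dci_weights([0, 1, 2, 3], target_cdf, [0, 1, 2, 3])
```

The fitted weights concentrate on the samples whose QoI values carry the target measure's mass, which is the push-forward matching the abstract describes.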

When translating a term calculus into a graphical formalism, many inessential details are abstracted away. In the case of the $\lambda$-calculus translated to proof-nets, these inessential details are captured by a notion of equivalence on $\lambda$-terms known as $\simeq_\sigma$-equivalence, in both the intuitionistic (due to Regnier) and classical (due to Laurent) cases. The purpose of this paper is to uncover a strong bisimulation behind $\simeq_\sigma$-equivalence, as formulated by Laurent for Parigot's $\lambda\mu$-calculus. This is achieved by introducing a relation $\simeq$, defined over a revised presentation of the $\lambda\mu$-calculus that we dub $\Lambda M$. More precisely, we first identify the reasons why Laurent's $\simeq_\sigma$-equivalence on $\lambda\mu$-terms fails to be a strong bisimulation. Inspired by Laurent's \emph{Polarized Proof-Nets}, this leads us to distinguish multiplicative and exponential reduction steps on terms. Second, we enrich the syntax of $\lambda\mu$ to allow us to track the exponential operations. These technical ingredients pave the way towards a strong bisimulation for the classical case. We show that $\simeq$ is a strong bisimulation with respect to reduction in $\Lambda M$, i.e. two $\simeq$-equivalent terms have exactly the same reduction semantics, a result which fails for Regnier's $\simeq_\sigma$-equivalence in the $\lambda$-calculus as well as for Laurent's $\simeq_\sigma$-equivalence in $\lambda\mu$. Although $\simeq$ is formulated over an enriched syntax and hence is not strictly included in Laurent's $\simeq_\sigma$, we show how it can be seen as a restriction of it.

Due to their inherent capability in semantic alignment of aspects and their context words, attention mechanisms and Convolutional Neural Networks (CNNs) are widely applied for aspect-based sentiment classification. However, these models lack a mechanism to account for relevant syntactical constraints and long-range word dependencies, and hence may mistakenly recognize syntactically irrelevant contextual words as clues for judging aspect sentiment. To tackle this problem, we propose to build a Graph Convolutional Network (GCN) over the dependency tree of a sentence to exploit syntactical information and word dependencies. Based on it, a novel aspect-specific sentiment classification framework is proposed. Experiments on three benchmark collections illustrate that our proposed model has comparable effectiveness to a range of state-of-the-art models, and further demonstrate that both syntactical information and long-range word dependencies are properly captured by the graph convolution structure.
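
A minimal sketch of one graph-convolution layer over an adjacency matrix (such as a sentence's dependency tree), with mean aggregation over each node's neighborhood plus itself, a linear map, and a ReLU; this is illustrative of the mechanism, not the paper's exact architecture:

```python
def gcn_layer(adj, feats, weight):
    # adj: n x n 0/1 adjacency matrix (e.g. the dependency tree).
    # feats: n x d_in node features; weight: d_in x d_out linear map.
    # Each node averages features over {itself} + neighbors, then applies
    # the linear map followed by a ReLU.
    n, d_in = len(feats), len(feats[0])
    d_out = len(weight[0])
    out = [[0.0] * d_out for _ in range(n)]
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j] or i == j]
        agg = [sum(feats[j][k] for j in neigh) / len(neigh)
               for k in range(d_in)]
        for o in range(d_out):
            out[i][o] = max(0.0, sum(agg[k] * weight[k][o]
                                     for k in range(d_in)))
    return out

# Two connected nodes with one-hot features and an identity weight matrix:
adj = [[0, 1], [1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0]]
eye = [[1.0, 0.0], [0.0, 1.0]]
h = gcn_layer(adj, feats, eye)
```

Stacking such layers propagates information along dependency edges, which is how long-range, syntactically linked words come to influence an aspect's representation.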
