
It is well known that independent random variables $X$ and $Y$ are uncorrelated in the sense that $E[XY]=E[X]\cdot E[Y]$, and that the converse implication holds only in very specific cases. This paper proves that, under general assumptions, conditional uncorrelation of random variables, where the conditioning takes place over a suitable class of test sets, is equivalent to independence. It is also shown that the mutual independence of $X_1,\dots,X_n$ is equivalent to the fact that every conditional correlation matrix equals the identity matrix.
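
As a quick numerical illustration of why plain uncorrelatedness is weaker than independence (a minimal sketch, not the paper's construction): $X \sim N(0,1)$ and $Y = X^2$ are uncorrelated but dependent, and correlating them conditionally on a test set such as $\{X > 0\}$ exposes the dependence.

```python
# X ~ N(0,1) and Y = X^2 are uncorrelated (E[X^3] = 0) but dependent;
# conditioning on the test set {X > 0} makes the dependence visible.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = x**2

def corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

print(f"unconditional corr(X, Y)     ~ {corr(x, y):+.3f}")  # ~ 0
mask = x > 0                                                # test set {X > 0}
print(f"conditional corr(X, Y | X>0) ~ {corr(x[mask], y[mask]):+.3f}")  # clearly nonzero
```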

Related content

We present a novel data-driven strategy to choose the hyperparameter $k$ in the $k$-NN regression estimator without using any hold-out data. We treat the problem of choosing the hyperparameter as an iterative procedure (over $k$) and propose a strategy, easily implemented in practice, based on the idea of early stopping and the minimum discrepancy principle. This model selection strategy is proven to be minimax-optimal over some smoothness function classes, for instance, the class of Lipschitz functions on a bounded domain. The novel method often improves statistical performance on artificial and real-world data sets in comparison to other model selection strategies, such as the hold-out method, 5-fold cross-validation, and the AIC criterion. The novelty of the strategy comes from reducing the computational time of the model selection procedure while preserving the statistical (minimax) optimality of the resulting estimator. More precisely, given a sample of size $n$, if one must choose $k$ among $\left\{ 1, \ldots, n \right\}$ and $\left\{ f^1, \ldots, f^n \right\}$ are the corresponding estimators of the regression function, the minimum discrepancy principle requires computing only a fraction of these estimators, which is not the case for generalized cross-validation, Akaike's AIC criterion, or the Lepskii principle.
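
A minimal sketch of the early-stopping idea on synthetic data, assuming homoscedastic noise with known variance `sigma2`; the sweep order (from the smoothest fit toward interpolation) and the threshold below are one common rendering of the minimum discrepancy principle, not necessarily the paper's exact rule:

```python
# Minimum-discrepancy early stopping for k-NN (sketch): sweep k from n down
# and stop the first time the training residual reaches the noise level,
# so the remaining estimators are never computed.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(0, 1, size=(n, 1))
sigma2 = 0.05                                   # assumed known noise variance
y = np.sin(4 * np.pi * x[:, 0]) + rng.normal(0, np.sqrt(sigma2), n)

k_hat = 1
for k in range(n, 0, -1):                       # smoothest fit first
    f_k = KNeighborsRegressor(n_neighbors=k).fit(x, y).predict(x)
    if np.mean((y - f_k) ** 2) <= sigma2:       # discrepancy hits noise level
        k_hat = k
        break                                   # smaller k never computed

print("selected k:", k_hat)
```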

We investigate shift-invariant vectorial Boolean functions on $n$~bits that are lifted from Boolean functions on $k$~bits, for $k\leq n$. We consider vectorial functions that are not necessarily permutations, but are, in some sense, almost bijective. In this context, we define an almost lifting as a Boolean function for which there is an upper bound on the number of collisions of its lifted functions that does not depend on $n$. We show that if a Boolean function with diameter $k$ is an almost lifting, then the maximum number of collisions of its lifted functions is $2^{k-1}$ for any $n$. Moreover, we search for functions in the class of almost liftings that have good cryptographic properties and for which the non-bijectivity does not cause major security weaknesses. These functions generalize the well-known map $\chi$ used in the Keccak hash function.
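
For concreteness, a small script (exhaustive search, so small $n$ only) that lifts the local rule of Keccak's $\chi$, defined on 3 consecutive bits, to $n$ bits and counts collisions; it exhibits the well-known fact that $\chi$ is bijective exactly for odd $n$:

```python
# Lift Keccak's chi shift-invariantly to n bits and count collisions.
from itertools import product

def chi(bits):
    n = len(bits)
    # local rule: y_i = x_i XOR ((NOT x_{i+1}) AND x_{i+2}), indices mod n
    return tuple(bits[i] ^ ((1 ^ bits[(i + 1) % n]) & bits[(i + 2) % n])
                 for i in range(n))

for n in range(3, 9):
    images = {chi(x) for x in product((0, 1), repeat=n)}
    collisions = 2**n - len(images)
    status = "bijective" if collisions == 0 else "not bijective"
    print(f"n={n}: {collisions} collisions ({status})")
```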

Predicting quantum operator matrices such as the Hamiltonian, overlap, and density matrices in the density functional theory (DFT) framework is crucial for understanding material properties. Current methods often focus on individual operators and struggle with efficiency and scalability for large systems. Here we introduce a novel deep learning model, SLEM (strictly localized equivariant message-passing), for predicting multiple quantum operators, which achieves state-of-the-art accuracy while dramatically improving computational efficiency. SLEM's key innovation is its strict locality-based design, which constructs local, equivariant representations for quantum tensors while preserving physical symmetries. This enables capturing complex many-body dependencies without expanding the effective receptive field, leading to superior data efficiency and transferability. Using an innovative SO(2) convolution technique, SLEM reduces the computational complexity of high-order tensor products and is therefore capable of handling systems that require $f$ and $g$ orbitals in their basis sets. We demonstrate SLEM's capabilities across diverse 2D and 3D materials, achieving high accuracy even with limited training data. SLEM's design facilitates efficient parallelization, potentially extending DFT simulations to systems of device-level size and opening new possibilities for large-scale quantum simulations and high-throughput materials discovery.
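
A heavily simplified toy of the strict-locality idea only (not SLEM's actual architecture; the names and the update rule here are illustrative): node features aggregate neighbors within a fixed cutoff once, and higher-body terms enter through products of those local sums, so the receptive field never grows with feature order:

```python
# Toy strict locality: one aggregation over a fixed cutoff neighborhood;
# "many-body" terms come from products of local sums, not extra hops.
import numpy as np

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 10.0, size=(20, 3))   # toy atomic positions
feat = rng.standard_normal((20, 8))          # toy per-atom features
cutoff = 3.0

dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
adj = ((dist < cutoff) & (dist > 0.0)).astype(float)  # fixed neighborhoods

msg = adj @ feat                             # single local aggregation
feat_new = feat + msg + msg * msg            # receptive field still one cutoff
print(feat_new.shape)                        # (20, 8)
```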

We consider the discretization of the $1d$ integral Dirichlet fractional Laplacian by $hp$-finite elements. We present quadrature schemes to set up the stiffness matrix and load vector that preserve the exponential convergence of $hp$-FEM on geometric meshes. The schemes are based on Gauss-Jacobi and Gauss-Legendre rules. We show that taking a number of quadrature points slightly exceeding the polynomial degree is enough to preserve root exponential convergence. The total number of algebraic operations needed to set up the system is $\mathcal{O}(N^{5/2})$, where $N$ is the problem size. Numerical examples illustrate the analysis. We also extend our analysis to the fractional Laplacian in higher dimensions for $hp$-finite element spaces based on shape-regular meshes.
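
A minimal sketch of the underlying quadrature idea, using SciPy: a Gauss-Jacobi rule absorbs an algebraic endpoint singularity, of the kind produced by the fractional kernel, into the quadrature weight, so a handful of nodes already gives near machine precision (the integrand and node count here are illustrative):

```python
# Integrate (1-x)^(-1/2) * cos(x) over (-1, 1) with an 8-point Gauss-Jacobi
# rule whose weight (1-x)^alpha (1+x)^beta, alpha=-1/2, beta=0, carries the
# singularity; compare against an adaptive reference with the same weight.
import numpy as np
from scipy.special import roots_jacobi
from scipy.integrate import quad

f = np.cos
nodes, weights = roots_jacobi(8, -0.5, 0.0)    # weight (1-x)^(-1/2)
gj = weights @ f(nodes)                        # only 8 evaluations

ref, _ = quad(f, -1, 1, weight="alg", wvar=(0.0, -0.5))
print(f"Gauss-Jacobi: {gj:.12f}  reference: {ref:.12f}  error: {abs(gj - ref):.1e}")
```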

The goal of trace reconstruction is to reconstruct an unknown $n$-bit string $x$ given only independent random traces of $x$, where a random trace of $x$ is obtained by passing $x$ through a deletion channel. A Statistical Query (SQ) algorithm for trace reconstruction is an algorithm which can only access statistical information about the distribution of random traces of $x$ rather than individual traces themselves. Such an algorithm is said to be $\ell$-local if each of its statistical queries corresponds to an $\ell$-junta function over some block of $\ell$ consecutive bits in the trace. Since several -- but not all -- known algorithms for trace reconstruction fall under the local statistical query paradigm, it is interesting to understand the abilities and limitations of local SQ algorithms for trace reconstruction. In this paper we establish nearly-matching upper and lower bounds on local Statistical Query algorithms for both worst-case and average-case trace reconstruction. For the worst-case problem, we show that there is an $\tilde{O}(n^{1/5})$-local SQ algorithm that makes all its queries with tolerance $\tau \geq 2^{-\tilde{O}(n^{1/5})}$, and also that any $\tilde{O}(n^{1/5})$-local SQ algorithm must make some query with tolerance $\tau \leq 2^{-\tilde{\Omega}(n^{1/5})}$. For the average-case problem, we show that there is an $O(\log n)$-local SQ algorithm that makes all its queries with tolerance $\tau \geq 1/\mathrm{poly}(n)$, and also that any $O(\log n)$-local SQ algorithm must make some query with tolerance $\tau \leq 1/\mathrm{poly}(n).$
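
To fix the setup (a toy illustration, not the paper's algorithm): a sketch of the deletion channel together with one local statistical query, the empirical mean of an $\ell$-junta evaluated on a block of $\ell$ consecutive trace bits; the junta, block position, and parameters below are arbitrary choices:

```python
# Deletion channel plus one local statistical query: estimate the mean of an
# l-junta (here the AND of l consecutive bits) over random traces of x.
import numpy as np

rng = np.random.default_rng(3)
n, delta, n_traces = 64, 0.3, 10_000
x = rng.integers(0, 2, n)

def trace(x, delta):
    keep = rng.random(len(x)) >= delta         # each bit deleted w.p. delta
    return x[keep]

l, start = 3, 5
junta = lambda block: block.prod()             # AND of l consecutive bits

vals = []
for _ in range(n_traces):
    t = trace(x, delta)
    if len(t) >= start + l:                    # query defined on this block
        vals.append(junta(t[start:start + l]))
print("estimated query value:", np.mean(vals))
```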

A fundamental quantity of interest in Shannon theory, classical or quantum, is the error exponent of a given channel $W$ and rate $R$: the constant $E(W,R)$ which governs the exponential decay of decoding error when using ever larger optimal codes of fixed rate $R$ to communicate over ever more (memoryless) instances of a given channel $W$. Nearly matching lower and upper bounds are well-known for classical channels. Here I show a lower bound on the error exponent of communication over arbitrary classical-quantum (CQ) channels which matches Dalai's sphere-packing upper bound [IEEE TIT 59, 8027 (2013)] for rates above a critical value, exactly analogous to the case of classical channels. Unlike the classical case, however, the argument does not proceed via a refined analysis of a suitable decoder, but instead by leveraging a bound by Hayashi on the error exponent of the cryptographic task of privacy amplification [CMP 333, 335 (2015)]. This bound is then related to the coding problem via tight entropic uncertainty relations and Gallager's method of constructing capacity-achieving parity-check codes for arbitrary channels. Along the way, I find a lower bound on the error exponent of the task of compression of classical information relative to quantum side information that matches the sphere-packing upper bound of Cheng et al. [IEEE TIT 67, 902 (2021)]. In turn, the polynomial prefactors to the sphere-packing bound found by Cheng et al. may be translated to the privacy amplification problem, sharpening a recent result by Li, Yao, and Hayashi [IEEE TIT 69, 1680 (2023)], at least for linear randomness extractors.
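
For orientation, the classical sphere-packing exponent that the abstract's CQ statement parallels has the Gallager form (this display is the standard classical formula, quoted for context rather than from the paper):
\[
E_{\mathrm{sp}}(R) \;=\; \sup_{\rho \ge 0}\,\Big[\,\max_{P} E_{0}(\rho, P) \;-\; \rho R\,\Big],
\qquad
E_{0}(\rho, P) \;=\; -\log \sum_{y} \Big( \sum_{x} P(x)\, W(y \mid x)^{\frac{1}{1+\rho}} \Big)^{1+\rho}.
\]
In the classical setting the random-coding lower bound on $E(W,R)$ meets $E_{\mathrm{sp}}(R)$ above the critical rate; Dalai's bound is the CQ analogue of this expression, and the result above shows it is likewise attained in that rate regime.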

Generalised quantifiers, which include Henkin's branching quantifiers, were introduced by Mostowski and Lindstr\"om and developed into a substantial application of logic, especially model theory, to linguistics, through the work of Barwise, Cooper, and Keenan. In this paper, we mainly study the proof theory of some non-standard quantifiers rendered as second order formulae. Our first example is the usual pair of first order quantifiers (for all / there exists) when individuals are viewed as individual concepts handled by second order deductive rules. Our second example is the simplest branching quantifier, as in ``A member of each team and a member of each board of directors know each other'', for which we propose a second order treatment.
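
For the branching-quantifier example, one standard second order (Skolem function) reading, with predicate names $T$, $B$, $M$, $K$ for team, board, membership, and knowing chosen here purely for illustration, is:
\[
\exists f\, \exists g\; \forall x\, \forall y\; \big[ (T(x) \wedge B(y)) \rightarrow (M(f(x), x) \wedge M(g(y), y) \wedge K(f(x), g(y))) \big].
\]
The essential feature is that $f$ depends only on $x$ and $g$ only on $y$, a pattern of dependence that no linear first order quantifier prefix expresses.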

It is a notorious open question whether integer programs (IPs), with an integer coefficient matrix $M$ whose subdeterminants are all bounded by a constant $\Delta$ in absolute value, can be solved in polynomial time. We answer this question in the affirmative if we further require that, by removing a constant number of rows and columns from $M$, one obtains a submatrix $A$ that is the transpose of a network matrix. Our approach focuses on the case where $A$ arises from $M$ after removing $k$ rows only, where $k$ is a constant. We achieve our result in two main steps, the first related to the theory of IPs and the second related to graph minor theory. First, we derive a strong proximity result for the case where $A$ is a general totally unimodular matrix: Given an optimal solution of the linear programming relaxation, an optimal solution to the IP can be obtained by finding a constant number of augmentations by circuits of $[A\; I]$. Second, for the case where $A$ is the transpose of a network matrix, we reformulate the problem as a maximum constrained integer potential problem on a graph $G$. We observe that if $G$ is $2$-connected, then it has no rooted $K_{2,t}$-minor for $t = \Omega(k \Delta)$. We leverage this to obtain a tree-decomposition of $G$ into highly structured graphs for which we can solve the problem locally. This allows us to solve the global problem via dynamic programming.
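
Schematically, the rows-only case discussed above concerns programs of the form (the choice of inequality form below is illustrative):
\[
\min\big\{\, c^{\top} x \;:\; M x \le b,\ x \in \mathbb{Z}^{n} \,\big\},
\qquad
M = \begin{pmatrix} A \\ B \end{pmatrix},
\]
where every subdeterminant of $M$ is at most $\Delta$ in absolute value, $A$ is the transpose of a network matrix, and $B$ collects the $k = O(1)$ removed rows.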

\v{C}ech persistence diagrams (PDs) are topological descriptors routinely used to capture the geometry of complex datasets. They are commonly compared using the Wasserstein distances $OT_{p}$; however, the extent to which PDs are stable with respect to these metrics remains poorly understood. We partially close this gap by focusing on the case where datasets are sampled on an $m$-dimensional submanifold of $\mathbb{R}^{d}$. Under this manifold hypothesis, we show that convergence with respect to the $OT_{p}$ metric happens exactly when $p > m$. We also provide improvements upon the bottleneck stability theorem in this case and prove new laws of large numbers for the total $\alpha$-persistence of PDs. Finally, we show how these theoretical findings shed new light on the behavior of the feature maps on the space of PDs that are used in ML-oriented applications of Topological Data Analysis.
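
As a small computational companion (a minimal sketch; conventions, e.g. the handling of infinite bars, vary across references), the total $\alpha$-persistence of a diagram is just the sum of the $\alpha$-th powers of its bar lengths:

```python
# Total alpha-persistence of a persistence diagram: sum of (death - birth)^alpha
# over finite points; infinite bars are dropped in this sketch.
import numpy as np

def total_persistence(diagram, alpha):
    """diagram: array-like of (birth, death) pairs."""
    d = np.asarray(diagram, dtype=float)
    finite = d[np.isfinite(d[:, 1])]
    return np.sum((finite[:, 1] - finite[:, 0]) ** alpha)

pd = [(0.0, 1.5), (0.2, 0.9), (0.1, np.inf)]
print(total_persistence(pd, alpha=2))   # 1.5^2 + 0.7^2 = 2.74
```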

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
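
A toy formalization of the theory's two ingredients (illustrative only, not the paper's fitted model): Shepard's law $s(a,b) = \exp(-c\, d(a,b))$ in a similarity space, used to score how close the AI's saliency map lies to the maps the explainee would give for each candidate label:

```python
# Shepard's universal law of generalization applied to saliency maps: the
# explainee's predicted reading of the AI is proportional to exp(-c * distance)
# between the AI's map and the explainee's own map for each label (toy data).
import numpy as np

def shepard_similarity(a, b, c=1.0):
    return np.exp(-c * np.linalg.norm(a - b))

ai_map = np.array([0.9, 0.1, 0.0, 0.4])             # AI's saliency map (toy)
own_maps = {"cat": np.array([0.8, 0.2, 0.1, 0.5]),  # explainee's own maps
            "dog": np.array([0.1, 0.9, 0.7, 0.0])}

sims = {lbl: shepard_similarity(ai_map, m) for lbl, m in own_maps.items()}
z = sum(sims.values())
for lbl, s in sims.items():
    print(f"P(explainee reads the AI as '{lbl}') ~ {s / z:.2f}")
```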
