
We study the problem of distinguishing between two independent samples $\mathbf{G}_n^1,\mathbf{G}_n^2$ of a binomial random graph $G(n,p)$ by first-order (FO) sentences. Shelah and Spencer proved that, for a constant $\alpha\in(0,1)$, $G(n,n^{-\alpha})$ obeys the FO zero-one law if and only if $\alpha$ is irrational. Therefore, for irrational $\alpha\in(0,1)$, no fixed FO sentence distinguishes between $\mathbf{G}_n^1$ and $\mathbf{G}_n^2$ with asymptotic probability 1 (w.h.p.) as $n\to\infty$. We show that the minimum quantifier depth $\mathbf{k}_{\alpha}$ of an FO sentence $\varphi=\varphi(\mathbf{G}_n^1,\mathbf{G}_n^2)$ distinguishing between $\mathbf{G}_n^1$ and $\mathbf{G}_n^2$ depends on how well $\alpha$ can be approximated by rationals: (1) for all non-Liouville $\alpha\in(0,1)$, $\mathbf{k}_{\alpha}=\Omega(\ln\ln\ln n)$ w.h.p.; (2) there are irrational $\alpha\in(0,1)$ for which $\mathbf{k}_{\alpha}$ grows arbitrarily slowly w.h.p.; (3) $\mathbf{k}_{\alpha}=O_p\left(\frac{\ln n}{\ln\ln n}\right)$ for all $\alpha\in(0,1)$. The main ingredients in our proofs are a novel randomized algorithm that generates asymmetric strictly balanced graphs and a new method for studying symmetry groups of randomly perturbed graphs.
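As a purely illustrative companion to this setup (not the paper's randomized algorithm), the sketch below samples two independent copies of $G(n,n^{-\alpha})$ with networkx and evaluates one fixed FO-expressible property on both: containment of a triangle, which an FO sentence of quantifier depth 3 expresses. The choices of $n$, $\alpha$, and the test property are my own assumptions.

```python
# Illustrative sketch: two independent samples of G(n, p) with p = n^{-alpha}
# agree w.h.p. on any fixed FO sentence; here we check one such sentence,
# "there exists a triangle" (quantifier depth 3).
import networkx as nx

n = 1000                 # illustrative size
alpha = 2 ** -0.5        # 1/sqrt(2), an irrational exponent in (0, 1)
p = n ** (-alpha)

G1 = nx.gnp_random_graph(n, p, seed=1)
G2 = nx.gnp_random_graph(n, p, seed=2)

has_triangle = lambda G: any(nx.triangles(G).values())
print(has_triangle(G1), has_triangle(G2))  # w.h.p. the two answers agree
```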

Related Content

In this work, we explore the dynamical sampling problem on $\ell^2(\mathbb{Z})$ driven by a convolution operator. This problem is inspired by the need to recover a bandlimited heat-diffusion field from space-time samples, together with its discrete analogue. In this book chapter, we review recent results in the finite-dimensional case and extend these findings to the infinite-dimensional case, focusing on the density of space-time sampling sets.
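A minimal numerical sketch of the sampling scheme, under my own simplifying assumptions (a finite cyclic approximation of $\ell^2(\mathbb{Z})$ and a heat-like kernel), is shown below: the state evolves by repeated circular convolution, and space-time samples are collected on a coarse spatial set at successive times.

```python
# Toy space-time sampling of a convolution-driven evolution on Z/NZ
# (a finite stand-in for l^2(Z)); not the chapter's general framework.
import numpy as np

N, T = 256, 4
a = np.zeros(N)
a[[0, 1, -1]] = [0.5, 0.25, 0.25]                # heat-like convolution kernel
x = np.random.default_rng(0).standard_normal(N)  # initial state

S = np.arange(0, N, 8)                           # coarse spatial sampling set
samples = []
for t in range(T + 1):
    samples.append(x[S])                         # space-time samples at time t
    x = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(x)))  # circular convolution

samples = np.stack(samples)                      # shape (T+1, |S|): recovery data
print(samples.shape)
```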

In 1998, Daemen et al. introduced a circulant Maximum Distance Separable (MDS) matrix in the diffusion layer of the Rijndael block cipher, drawing significant attention to circulant MDS matrices. This block cipher is now universally known as the AES block cipher. In 2016, Liu and Sim introduced cyclic matrices by modifying the permutation underlying circulant matrices and established the MDS property for orthogonal left-circulant matrices, a notable subclass of cyclic matrices. While circulant matrices have been well studied in the literature, the properties of cyclic matrices have not. Back in 1961, Friedman introduced $g$-circulant matrices, which form a subclass of cyclic matrices. In this article, we first establish a permutation equivalence between cyclic matrices and circulant matrices. We then explore properties of cyclic matrices analogous to those of $g$-circulant matrices. Additionally, we determine the determinant of $g$-circulant matrices of order $2^d \times 2^d$ and prove that they cannot be simultaneously orthogonal and MDS over a finite field of characteristic $2$. Furthermore, we prove that this result holds for any cyclic matrix.
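To make the shift structure concrete, the sketch below (my illustration over the integers, which shows only the permutation pattern and none of the paper's finite-field MDS computations) builds an ordinary circulant matrix and a $g$-circulant matrix from a first row: a $g$-circulant shifts each successive row $g$ positions instead of one.

```python
# Structural illustration of circulant vs. g-circulant matrices.
import numpy as np

def g_circulant(first_row, g):
    """Each row is the first row cyclically shifted right by g * (row index)."""
    n = len(first_row)
    return np.array([np.roll(first_row, g * i) for i in range(n)])

row = np.array([1, 2, 3, 4])
C = g_circulant(row, 1)   # ordinary circulant (g = 1)
C3 = g_circulant(row, 3)  # g-circulant with g = 3
print(C)
print(C3)
```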

Let $G$ be a group with undecidable domino problem (such as $\mathbb{Z}^2$). We prove that all nontrivial dynamical properties of sofic $G$-subshifts are undecidable, that this result fails for SFTs, and an undecidability result for dynamical properties of $G$-SFTs analogous to the Adian-Rabin theorem. For amenable $G$, we prove that topological entropy is not computable from presentations of SFTs, and a more general result for dynamical invariants taking values in partially ordered sets.

We consider Newton's method for finding zeros of mappings from a manifold $\mathcal{X}$ into a vector bundle $\mathcal{E}$. In this setting, a connection on $\mathcal{E}$ is required to render the Newton equation well defined, and a retraction on $\mathcal{X}$ is needed to compute a Newton update. We discuss local convergence in terms of suitable differentiability concepts, using a Banach space variant of a Riemannian distance. We also carry over an affine covariant damping strategy to our setting. Finally, we discuss some applications of our approach, namely finding fixed points of vector fields, solving variational problems on manifolds, and finding critical points of functionals.
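A hedged sketch of one classical instance of this setting, under assumptions of my own choosing (the sphere $S^{n-1}$, the tangent vector field $F(x) = P_x A x$ whose zeros are eigenvectors of a symmetric $A$, and the normalization retraction), is given below; the general vector-bundle machinery of the paper is not reproduced here.

```python
# Riemannian Newton iteration on the sphere: solve a projected Newton
# equation in the tangent space, then retract by renormalization.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                             # symmetric test matrix

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(20):
    P = np.eye(n) - np.outer(x, x)            # projector onto T_x S^{n-1}
    F = P @ A @ x                             # vector field to zero out
    lam = x @ A @ x
    J = P @ A @ P - lam * P                   # Jacobian restricted to T_x
    v = np.linalg.lstsq(J, -F, rcond=None)[0] # Newton equation in T_x
    v = P @ v                                 # keep the update tangent
    x = (x + v) / np.linalg.norm(x + v)       # retraction: normalize
print("residual:", np.linalg.norm((np.eye(n) - np.outer(x, x)) @ A @ x))
```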

Making inference with spatial extremal dependence models can be computationally burdensome since they involve intractable and/or censored likelihoods. Building on recent advances in likelihood-free inference with neural Bayes estimators, that is, neural networks that approximate Bayes estimators, we develop highly efficient estimators for censored peaks-over-threshold models that use data augmentation techniques to encode censoring information in the neural network input. Our new method provides a paradigm shift that challenges traditional censored likelihood-based inference methods for spatial extremal dependence models. Our simulation studies highlight significant gains in both computational and statistical efficiency, relative to competing likelihood-based approaches, when applying our novel estimators to make inference with popular extremal dependence models, such as max-stable, $r$-Pareto, and random scale mixture process models. We also illustrate that it is possible to train a single neural Bayes estimator for a general censoring level, obviating the need to retrain the network when the censoring level changes. We illustrate the efficacy of our estimators by making fast inference on hundreds of thousands of high-dimensional spatial extremal dependence models to assess extreme particulate matter 2.5 microns or less in diameter (${\rm PM}_{2.5}$) concentrations over the whole of Saudi Arabia.
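The sketch below is my guess at the general flavor of the augmentation step, not the authors' architecture: observations below a threshold $u$ are left-censored at $u$, and the censoring pattern is stacked as an extra input channel so a network can condition on which sites were censored.

```python
# Toy censoring-plus-indicator encoding for a neural estimator's input.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.gumbel(size=(64, 64))               # toy spatial field on a 64x64 grid
u = np.quantile(Z, 0.95)                    # censoring level (95% threshold)

censored = np.maximum(Z, u)                 # values below u are censored at u
indicator = (Z < u).astype(float)           # 1 where censored, 0 where observed

net_input = np.stack([censored, indicator]) # shape (2, 64, 64): augmented input
print(net_input.shape)
```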

Let $\mathrm{Sym}_q(m)$ be the space of symmetric matrices in $\mathbb{F}_q^{m\times m}$. A subspace of $\mathrm{Sym}_q(m)$ equipped with the rank distance is called a symmetric rank-metric code. In this paper we study the covering properties of symmetric rank-metric codes. First, we characterize symmetric rank-metric codes that are perfect, i.e., that meet the sphere-packing-like bound with equality. We show that, in contrast to the rank-metric case, there are nontrivial perfect codes. We also characterize families of codes that are quasi-perfect.
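For a concrete feel for the metric (my brute-force illustration, unrelated to the paper's constructions), the snippet below enumerates all symmetric $3\times 3$ matrices over $\mathbb{F}_2$ and computes the rank distance $d(A,B)=\mathrm{rank}(A-B)$ between pairs; over $\mathbb{F}_2$ the difference is an XOR.

```python
# Rank distances in Sym_2(3): enumerate all 2^6 = 64 symmetric binary
# matrices and find the minimum pairwise rank distance of the full space.
import itertools
import numpy as np

def rank_gf2(M):
    """Rank over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        rows = [i for i in range(r, M.shape[0]) if M[i, c]]
        if not rows:
            continue
        M[[r, rows[0]]] = M[[rows[0], r]]    # pivot to row r
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]                 # eliminate column c elsewhere
        r += 1
    return r

def sym_from_bits(b):
    """Build a symmetric 3x3 matrix from its 6 upper-triangular bits."""
    S = np.zeros((3, 3), dtype=int)
    idx = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    for (i, j), v in zip(idx, b):
        S[i, j] = S[j, i] = v
    return S

mats = [sym_from_bits(b) for b in itertools.product([0, 1], repeat=6)]
d = min(rank_gf2(A ^ B) for A, B in itertools.combinations(mats, 2))
print("minimum rank distance of the full space:", d)  # 1, as expected
```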

We study the computational and sample complexity of learning a target function $f_*:\mathbb{R}^d\to\mathbb{R}$ with additive structure, that is, $f_*(x) = \frac{1}{\sqrt{M}}\sum_{m=1}^M f_m(\langle x, v_m\rangle)$, where $f_1,f_2,\ldots,f_M:\mathbb{R}\to\mathbb{R}$ are nonlinear link functions of single-index models (ridge functions) with diverse and near-orthogonal index features $\{v_m\}_{m=1}^M$, and the number of additive tasks $M$ grows with the dimensionality as $M\asymp d^\gamma$ for $\gamma\ge 0$. This problem setting is motivated by the classical additive model literature, the recent representation learning theory of two-layer neural networks, and large-scale pretraining, where the model simultaneously acquires a large number of "skills" that are often localized in distinct parts of the trained network. We prove that a large subset of polynomial $f_*$ can be efficiently learned by gradient descent training of a two-layer neural network, with polynomial statistical and computational complexity that depends on the number of tasks $M$ and the information exponent of $f_m$, despite the link functions being unknown and $M$ growing with the dimensionality. We complement this learnability guarantee with a computational hardness result by establishing statistical query (SQ) lower bounds for both correlational SQ and full SQ algorithms.
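The following is a toy instantiation of the problem setting under my own choices (orthonormal $v_m$, the Hermite link $f_m(z)=z^2-1$ of information exponent 2, and plain full-batch gradient descent on a small ReLU network); it is not the paper's training procedure or its scaling regime.

```python
# Generate data from an additive model of single-index tasks and fit a
# two-layer ReLU network by gradient descent on the squared loss.
import numpy as np

rng = np.random.default_rng(0)
d, M, n = 32, 8, 4096
V = np.linalg.qr(rng.standard_normal((d, M)))[0].T  # orthonormal rows v_m
f = lambda z: z ** 2 - 1                            # He_2 link, info exponent 2

X = rng.standard_normal((n, d))
y = f(X @ V.T).sum(axis=1) / np.sqrt(M)             # f_*(x)

width, lr = 128, 0.05
W = rng.standard_normal((width, d)) / np.sqrt(d)
a = rng.standard_normal(width) / np.sqrt(width)
for _ in range(200):
    H = np.maximum(X @ W.T, 0.0)                    # hidden ReLU features
    g = (H @ a - y) / n                             # d(loss)/d(prediction)
    a -= lr * (H.T @ g)
    W -= lr * ((np.outer(g, a) * (H > 0)).T @ X)    # backprop through ReLU
pred = np.maximum(X @ W.T, 0.0) @ a
print("train MSE:", np.mean((pred - y) ** 2))
```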

We show that modeling a Grassmannian as symmetric orthogonal matrices $\operatorname{Gr}(k,\mathbb{R}^n) \cong\{Q \in \mathbb{R}^{n \times n} : Q^{\scriptscriptstyle\mathsf{T}} Q = I, \; Q^{\scriptscriptstyle\mathsf{T}} = Q,\; \operatorname{tr}(Q)=2k - n\}$ yields exceedingly simple matrix formulas for various curvatures and curvature-related quantities, both intrinsic and extrinsic. These include Riemann, Ricci, Jacobi, sectional, scalar, mean, principal, and Gaussian curvatures; Schouten, Weyl, Cotton, Bach, Pleba\'nski, cocurvature, nonmetricity, and torsion tensors; first, second, and third fundamental forms; Gauss and Weingarten maps; and upper and lower delta invariants. We derive explicit, simple expressions for the aforementioned quantities in terms of standard matrix operations that are stably computable with numerical linear algebra. Many of these quantities have never before been presented for the Grassmannian.
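One standard way to realize this model numerically (a sketch of mine, not taken from the paper) maps a $k$-dimensional subspace with orthonormal basis $U \in \mathbb{R}^{n\times k}$ to $Q = 2UU^{\mathsf{T}} - I$; the snippet below verifies that $Q$ satisfies the three defining constraints.

```python
# Check that Q = 2 U U^T - I is symmetric, orthogonal, with trace 2k - n.
import numpy as np

rng = np.random.default_rng(0)
n, k = 7, 3
U = np.linalg.qr(rng.standard_normal((n, k)))[0]  # orthonormal basis, n x k

Q = 2 * U @ U.T - np.eye(n)
print(np.allclose(Q, Q.T))                 # Q^T = Q
print(np.allclose(Q.T @ Q, np.eye(n)))     # Q^T Q = I
print(np.isclose(np.trace(Q), 2 * k - n))  # tr(Q) = 2k - n
```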

We determine the best $n$-term approximation of generalized Wiener model classes in a Hilbert space $H$. This theory is then applied to several special cases.
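As a worked toy example of the underlying notion (my illustration, not the paper's model classes): for $f=\sum_k c_k e_k$ in an orthonormal basis, the best $n$-term approximation keeps the $n$ largest coefficients in absolute value, and the squared error is the sum of the squares of the discarded coefficients.

```python
# Best n-term approximation in an orthonormal basis.
import numpy as np

c = np.array([3.0, -0.1, 1.5, 0.02, -2.0, 0.4])  # coefficients of f
n = 3

keep = np.argsort(-np.abs(c))[:n]     # indices of the n largest |c_k|
approx = np.zeros_like(c)
approx[keep] = c[keep]

err = np.sqrt(np.sum((c - approx) ** 2))  # norm of the discarded tail
print(keep, err)
```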

$\textbf{OBJECTIVE}$: Ensuring that machine learning (ML) algorithms are safe and effective within all patient groups, and do not disadvantage particular patients, is essential to clinical decision making and preventing the reinforcement of existing healthcare inequities. The objective of this tutorial is to introduce the medical informatics community to the common notions of fairness within ML, focusing on clinical applications and implementation in practice. $\textbf{TARGET AUDIENCE}$: As gaps in fairness arise in a variety of healthcare applications, this tutorial is designed to provide an understanding of fairness, without assuming prior knowledge, to researchers and clinicians who make use of modern clinical data. $\textbf{SCOPE}$: We describe the fundamental concepts and methods used to define fairness in ML, including an overview of why models in healthcare may be unfair, a summary and comparison of the metrics used to quantify fairness, and a discussion of some ongoing research. We illustrate some of the fairness methods introduced through a case study of mortality prediction in a publicly available electronic health record dataset. Finally, we provide a user-friendly R package for comprehensive group fairness evaluation, enabling researchers and clinicians to assess fairness in their own ML work.
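As a small illustration of the kind of group-fairness quantities such an evaluation computes (a Python sketch of standard metrics, not the authors' R package), the snippet below measures the demographic parity gap and the equalized-odds gaps (TPR and FPR differences) between two groups for a deliberately biased toy classifier.

```python
# Group-fairness metrics for binary predictions yhat, labels y, and a
# protected group indicator g; near-zero gaps indicate group fairness.
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)                            # true outcomes
g = rng.integers(0, 2, 1000)                            # protected group
yhat = (rng.random(1000) < 0.4 + 0.1 * g).astype(int)   # biased classifier

rate = lambda mask: yhat[mask].mean()
dp_gap = rate(g == 1) - rate(g == 0)                    # demographic parity gap
tpr_gap = rate((g == 1) & (y == 1)) - rate((g == 0) & (y == 1))
fpr_gap = rate((g == 1) & (y == 0)) - rate((g == 0) & (y == 0))
print(dp_gap, tpr_gap, fpr_gap)
```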
