
In this paper we study the random geometric graph $\mathsf{RGG}(n,\mathbb{T}^d,\mathsf{Unif},\sigma^q_p,p)$ with $L_q$ distance, where each vertex is sampled uniformly from the $d$-dimensional torus and the connection radius is chosen so that the marginal edge probability is $p$. In addition to results addressing other questions, we make progress on determining when it is possible to distinguish $\mathsf{RGG}(n,\mathbb{T}^d,\mathsf{Unif},\sigma^q_p,p)$ from the Erd\H{o}s--R\'enyi graph $\mathsf{G}(n,p)$. Our strongest result is in the extreme setting $q = \infty$, in which case $\mathsf{RGG}(n,\mathbb{T}^d,\mathsf{Unif},\sigma^\infty_p,p)$ is the $\mathsf{AND}$ of $d$ 1-dimensional random geometric graphs. We derive a formula similar to the cluster expansion from statistical physics, capturing the compatibility of subgraphs from each of the $d$ 1-dimensional copies, and use it to bound the signed expectations of small subgraphs. We show that counting signed 4-cycles is optimal among all low-degree tests, succeeding with high probability if and only if $d = \tilde{o}(np).$ In contrast, the signed triangle test is suboptimal and only succeeds when $d = \tilde{o}((np)^{3/4}).$ Our result stands in sharp contrast to the existing literature on random geometric graphs (mostly focused on $L_2$ geometry), where the signed triangle statistic is optimal.
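The signed subgraph statistics mentioned above have a simple concrete form: center each edge indicator by subtracting $p$ and sum the products over copies of the subgraph. The sketch below (not from the paper; a brute-force illustration) computes the signed triangle and signed 4-cycle statistics of an adjacency matrix.

```python
from itertools import combinations

import numpy as np


def signed_triangles(A, p):
    """Signed triangle statistic: sum over vertex triples of the
    product of centered edge indicators (A_ij - p)."""
    n = A.shape[0]
    B = A - p  # centered adjacency; the diagonal is never used below
    total = 0.0
    for i, j, k in combinations(range(n), 3):
        total += B[i, j] * B[j, k] * B[k, i]
    return total


def signed_four_cycles(A, p):
    """Signed 4-cycle statistic: every 4-subset of vertices supports
    exactly three distinct 4-cycles, enumerated explicitly here."""
    n = A.shape[0]
    B = A - p
    total = 0.0
    for a, b, c, d in combinations(range(n), 4):
        for w, x, y, z in ((a, b, c, d), (a, b, d, c), (a, c, b, d)):
            total += B[w, x] * B[x, y] * B[y, z] * B[z, w]
    return total
```

The brute-force enumeration is only meant to pin down the definition; in practice these statistics are computed from traces of powers of the centered adjacency matrix.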

Related content

In 2023, Kuznetsov and Speranski introduced infinitary action logic with multiplexing $!^m\nabla \mathrm{ACT}_\omega$ and proved that the derivability problem for it lies between the $\omega$ and $\omega^\omega$ levels of the hyperarithmetical hierarchy. We prove that this problem is $\Delta^0_{\omega^\omega}$-complete under Turing reductions. Namely, we show that it is recursively isomorphic to the satisfaction predicate for computable infinitary formulas of rank less than $\omega^\omega$ in the language of arithmetic. As a consequence we prove that the closure ordinal for $!^m\nabla \mathrm{ACT}_\omega$ equals $\omega^\omega$. We also prove that the fragment of $!^m\nabla \mathrm{ACT}_\omega$ where Kleene star is not allowed to be in the scope of the subexponential is $\Delta^0_{\omega^\omega}$-complete. Finally, we present a family of logics, which are fragments of $!^m\nabla \mathrm{ACT}_\omega$, such that the complexity of the $k$-th logic lies between $\Delta^0_{\omega^k}$ and $\Delta^0_{\omega^{k+1}}$.

We say that a (multi)graph $G = (V,E)$ has geometric thickness $t$ if there exists a straight-line drawing $\varphi : V \rightarrow \mathbb{R}^2$ and a $t$-coloring of its edges where no two edges sharing a point in their relative interior have the same color. The Geometric Thickness problem asks whether a given multigraph has geometric thickness at most $t$. This problem was shown to be NP-hard for $t=2$ [Durocher, Gethner, and Mondal, CG 2016]. In this paper, we settle the computational complexity of Geometric Thickness by showing that it is $\exists \mathbb{R}$-complete already for thickness $57$. Moreover, our reduction shows that the problem is $\exists \mathbb{R}$-complete for $8280$-planar graphs, where a graph is $k$-planar if it admits a topological drawing with at most $k$ crossings per edge. Along the way, we answer previously posed questions on geometric thickness and other related problems; in particular, we show that simultaneous graph embedding of $58$ edge-disjoint graphs and pseudo-segment stretchability with chromatic number $57$ are $\exists \mathbb{R}$-complete.

We introduce a numerical scheme that approximates solutions to linear PDEs by minimizing a residual in the $W^{-1,p'}(\Omega)$ norm with exponents $p> 2$. The resulting problem is solved by regularized Kacanov iterations, which allow computing the solution to the non-linear minimization problem even for large exponents $p\gg 2$. Such large exponents remedy instabilities of finite element methods for problems like convection-dominated diffusion.

In this paper, we investigate the problem of actively learning a threshold in latent space, where the unknown reward $g(\gamma, v)$ depends on the proposed threshold $\gamma$ and latent value $v$, and it can be achieved \emph{only} if the threshold is lower than or equal to the unknown latent value. This problem has broad applications in practical scenarios, e.g., reserve price optimization in online auctions, online task assignment in crowdsourcing, setting recruiting bars in hiring, etc. We first characterize the query complexity of learning a threshold with expected reward at most $\epsilon$ smaller than the optimum, and prove that the number of queries needed can be infinitely large even when $g(\gamma, v)$ is monotone with respect to both $\gamma$ and $v$. On the positive side, we provide a tight query complexity $\tilde{\Theta}(1/\epsilon^3)$ when $g$ is monotone and the CDF of the value distribution is Lipschitz. Moreover, we show that a tight $\tilde{\Theta}(1/\epsilon^3)$ query complexity can be achieved as long as $g$ satisfies one-sided Lipschitzness, which provides a complete characterization for this problem. Finally, we extend this model to an online learning setting and demonstrate a tight $\Theta(T^{2/3})$ regret bound using continuous-arm bandit techniques and the aforementioned query complexity results.

We introduce a \emph{gain function} viewpoint of information leakage by proposing \emph{maximal $g$-leakage}, a rich class of operationally meaningful leakage measures that subsumes recently introduced leakage measures -- {maximal leakage} and {maximal $\alpha$-leakage}. In maximal $g$-leakage, the gain of an adversary in guessing an unknown random variable is measured using a {gain function} applied to the probability of correctly guessing. In particular, maximal $g$-leakage captures the multiplicative increase, upon observing $Y$, in the expected gain of an adversary in guessing a randomized function of $X$, maximized over all such randomized functions. We also consider the scenario where an adversary can make multiple attempts to guess the randomized function of interest. We show that maximal leakage is an upper bound on maximal $g$-leakage under multiple guesses, for any non-negative gain function $g$. We obtain a closed-form expression for maximal $g$-leakage under multiple guesses for a class of concave gain functions. We also study maximal $g$-leakage measure for a specific class of gain functions related to the $\alpha$-loss. In particular, we first completely characterize the minimal expected $\alpha$-loss under multiple guesses and analyze how the corresponding leakage measure is affected with the number of guesses. Finally, we study two variants of maximal $g$-leakage depending on the type of adversary and obtain closed-form expressions for them, which do not depend on the particular gain function considered as long as it satisfies some mild regularity conditions. We do this by developing a variational characterization for the R\'{e}nyi divergence of order infinity which naturally generalizes the definition of pointwise maximal leakage to incorporate arbitrary gain functions.
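The abstract above builds on maximal leakage, which has a well-known closed form (due to Issa, Kamath, and Wagner): $\mathcal{L}(X \to Y) = \log \sum_y \max_x P_{Y|X}(y|x)$. As a concrete anchor, not part of the paper, a minimal sketch evaluating this closed form for a discrete channel:

```python
import math

import numpy as np


def maximal_leakage(channel):
    """Maximal leakage (in nats) of a discrete channel P_{Y|X}, given
    as a row-stochastic matrix channel[x, y].

    Uses the closed form log sum_y max_x P(y|x): for each output
    symbol, take the most likely input's conditional probability,
    sum over outputs, and take the log.
    """
    return math.log(np.max(channel, axis=0).sum())
```

For a noiseless channel (identity matrix on $k$ symbols) this gives $\log k$, the full entropy of a uniform guessing target, while a useless constant channel gives $0$.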

We classify the {\it Boolean degree $1$ functions} of $k$-spaces in a vector space of dimension $n$ (also known as {\it Cameron-Liebler classes}) over the field with $q$ elements for $n \geq n_0(k, q)$, a problem going back to work by Cameron and Liebler from 1982. This also implies that two-intersecting sets with respect to $k$-spaces do not exist for $n \geq n_0(k, q)$. Our main ingredient is Ramsey theory for geometric lattices.

Given a natural number $k\ge 2$, we consider the $k$-submodular cover problem ($k$-SC). The objective is to find a minimum cost subset of a ground set $\mathcal{X}$ subject to the value of a $k$-submodular utility function $g$ being at least a certain predetermined value $\tau$. For this problem, we design a bicriteria algorithm with cost at most $O(1/\epsilon)$ times the optimal value, while the utility is at least $(1-\epsilon)\tau/r$, where $r$ depends on the monotonicity of $g$.

We present $\mathcal{X}^3$ (pronounced XCube), a novel generative model for high-resolution sparse 3D voxel grids with arbitrary attributes. Our model can generate millions of voxels with a finest effective resolution of up to $1024^3$ in a feed-forward fashion without time-consuming test-time optimization. To achieve this, we employ a hierarchical voxel latent diffusion model which generates progressively higher resolution grids in a coarse-to-fine manner using a custom framework built on the highly efficient VDB data structure. Apart from generating high-resolution objects, we demonstrate the effectiveness of XCube on large outdoor scenes at scales of 100m$\times$100m with a voxel size as small as 10cm. We observe clear qualitative and quantitative improvements over past approaches. In addition to unconditional generation, we show that our model can be used to solve a variety of tasks such as user-guided editing, scene completion from a single scan, and text-to-3D. More results and details can be found at //research.nvidia.com/labs/toronto-ai/xcube/.

We study range spaces in which the ground set consists of either polygonal curves in $\mathbb{R}^d$ or polygonal regions in the plane (possibly with holes), and the ranges are balls defined by an elastic distance measure, such as the Hausdorff distance, the Fr\'echet distance, or the dynamic time warping distance. These range spaces appear in various applications like classification, range counting, density estimation, and clustering when the instances are trajectories, time series, or polygons. The Vapnik-Chervonenkis dimension (VC-dimension) plays an important role when designing algorithms for these range spaces. We show for the Fr\'echet distance of polygonal curves and the Hausdorff distance of polygonal curves and planar polygonal regions that the VC-dimension is upper-bounded by $O(dk\log(km))$, where $k$ is the complexity of the center of a ball, $m$ is the complexity of the polygonal curve or region in the ground set, and $d$ is the ambient dimension. For $d \geq 4$ this bound is tight in each of the parameters $d, k$ and $m$ separately. For the dynamic time warping distance of polygonal curves, our analysis directly yields an upper bound of $O(\min(dk^2\log(m),dkm\log(k)))$.
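For readers unfamiliar with these elastic distances, the discrete Fr\'echet distance (a standard proxy for the continuous one, not the measure analyzed in the paper itself) admits a short dynamic program due to Eiter and Mannila; a minimal sketch:

```python
import math


def discrete_frechet(P, Q):
    """Discrete Frechet distance between polygonal curves given as
    lists of vertices, via the Eiter-Mannila dynamic program.

    D[i][j] holds the distance for the prefixes P[:i+1], Q[:j+1]:
    the cheapest coupling must end at the pair (i, j), so we take
    the max of the current pairwise distance and the best way to
    reach one of the three predecessor cells.
    """
    n, m = len(P), len(Q)
    D = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = math.dist(P[i], Q[j])  # Euclidean distance
            if i == 0 and j == 0:
                D[i][j] = d
            elif i == 0:
                D[i][j] = max(D[i][j - 1], d)
            elif j == 0:
                D[i][j] = max(D[i - 1][j], d)
            else:
                D[i][j] = max(min(D[i - 1][j], D[i][j - 1],
                                  D[i - 1][j - 1]), d)
    return D[n - 1][m - 1]
```

A metric ball of radius $r$ around a center curve $C$, as used in the range spaces above, is then simply the set of curves $P$ with `discrete_frechet(C, P) <= r`.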

Suppose that $K\subset\C$ is compact and that $z_0\in\C\backslash K$ is an external point. An optimal prediction measure for regression by polynomials of degree at most $n$ is one for which the variance of the prediction at $z_0$ is as small as possible. Hoel and Levine (\cite{HL}) have considered the case of $K=[-1,1]$ and $z_0=x_0\in \R\backslash [-1,1],$ where they show that the support of the optimal measure is the $n+1$ extreme points of the Chebyshev polynomial $T_n(x)$, and characterize the optimal weights in terms of absolute values of fundamental interpolating Lagrange polynomials. More recently, \cite{BLO} has given the equivalence of the optimal prediction problem with that of finding polynomials of extremal growth. They also study in detail the case of $K=[-1,1]$ and $z_0=ia\in i\R,$ purely imaginary. In this work we generalize the Hoel-Levine formula to the general case when the support of the optimal measure is a finite set and give a formula for the optimal weights in terms of an $\ell_1$ minimization problem.
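For concreteness (standard background, not a claim of the paper beyond what the abstract states): the extreme points of $T_n$ that carry the Hoel-Levine optimal measure are the points where $|T_n|$ attains its maximum $1$ on $[-1,1]$,

```latex
T_n(\cos\theta) = \cos(n\theta), \qquad
x_k = \cos\!\left(\frac{k\pi}{n}\right), \quad k = 0, 1, \dots, n,
\qquad |T_n(x_k)| = 1,
```

giving exactly the $n+1$ support points referred to above.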
