
In this article, we explore the spectral properties of general random kernel matrices $[K(U_i,U_j)]_{1\leq i\neq j\leq n}$ built from a Lipschitz kernel $K$ and $n$ independent random variables $U_1,U_2,\ldots, U_n$ distributed uniformly over $[0,1]$. In particular, we identify a dichotomy in the extreme eigenvalue of the kernel matrix: if the kernel $K$ is degenerate, the largest eigenvalue of the kernel matrix (after proper normalization) converges weakly to a weighted sum of independent chi-squared random variables. In contrast, for non-degenerate kernels, it converges to a normal distribution, extending and reinforcing earlier results of Koltchinskii and Gin\'e (2000). Further, we apply this result to show a dichotomy in the asymptotic behavior of extreme eigenvalues of $W$-random graphs, which are pivotal in modeling complex networks and analyzing large-scale graph behavior. These graphs are generated from a kernel $W$, termed a graphon, by connecting vertices $i$ and $j$ with probability $W(U_i, U_j)$. Our results show that for a Lipschitz graphon $W$, if the degree function is constant, the fluctuation of the largest eigenvalue (after proper normalization) converges to a weighted sum of independent chi-squared random variables plus an independent normal distribution; otherwise, it converges to a normal distribution.
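As a small illustration of the object studied above, the following sketch (an assumption, not code from the paper) builds the matrix $[K(U_i,U_j)]$ for one concrete Lipschitz kernel, zeroes the diagonal as in the $i\neq j$ convention, and reads off the largest eigenvalue; the kernel choice `min(x, y) - x*y` is hypothetical and stands in for any Lipschitz $K$ on $[0,1]^2$.

```python
import numpy as np

# Illustrative sketch: form the random kernel matrix [K(U_i, U_j)] for a
# sample Lipschitz kernel and inspect its largest eigenvalue.
rng = np.random.default_rng(0)
n = 500
U = rng.uniform(0.0, 1.0, size=n)

# Example Lipschitz kernel on [0,1]^2 (Brownian-bridge covariance);
# any Lipschitz K could be substituted here.
K = lambda x, y: np.minimum(x, y) - x * y

M = K(U[:, None], U[None, :])   # pairwise evaluations via broadcasting
np.fill_diagonal(M, 0.0)        # the paper's matrix excludes i = j

top = np.linalg.eigvalsh(M).max()  # largest eigenvalue of the symmetric matrix
```

Since the diagonal is zeroed, the matrix has trace zero, so its largest eigenvalue is strictly positive; the theorem above describes the fluctuation of this quantity as $n\to\infty$.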

Related content

We study computationally-hard fundamental motion planning problems where the goal is to translate $k$ axis-aligned rectangular robots from their initial positions to their final positions without collision, and with the minimum number of translation moves. Our aim is to understand the interplay between the number of robots and the geometric complexity of the input instance measured by the input size, which is the number of bits needed to encode the coordinates of the rectangles' vertices. We focus on axis-aligned translations, and more generally, translations restricted to a given set of directions, and we study the two settings where the robots move in the free plane, and where they are confined to a bounding box. We obtain fixed-parameter tractable (FPT) algorithms parameterized by $k$ for all the settings under consideration. In the case where the robots move serially (i.e., one in each time step) and axis-aligned, we prove a structural result stating that every problem instance admits an optimal solution in which the moves are along a grid, whose size is a function of $k$, that can be defined based on the input instance. This structural result implies that the problem is fixed-parameter tractable parameterized by $k$. We also consider the case in which the robots move in parallel (i.e., multiple robots can move during the same time step), and which falls under the category of Coordinated Motion Planning problems. Finally, we show that, when the robots move in the free plane, the FPT results for the serial motion case carry over to the case where the translations are restricted to any given set of directions.

Just as the notion of computation via (strong) monads serves to classify various flavours of impurity, including exceptions, non-determinism, probability, and local and global store, the notion of guardedness classifies the well-behavedness of cycles in various settings. In its most general form, the guardedness discipline applies to general symmetric monoidal categories and further specializes to Cartesian and co-Cartesian categories, where it governs guarded recursion and guarded iteration respectively. Here, even more specifically, we deal with the semantics of call-by-value guarded iteration. It was shown by Levy, Power and Thielecke that call-by-value languages can be generally interpreted in Freyd categories, but in order to represent effectful function spaces, such a category must canonically arise from a strong monad. We generalize this fact by showing that representing guarded effectful function spaces calls for certain parametrized monads (in the sense of Uustalu). This provides a description of guardedness as an intrinsic categorical property of programs, complementing the existing description of guardedness as a predicate on a category.

We study symmetric tensor decompositions, i.e., decompositions of the form $T = \sum_{i=1}^r u_i^{\otimes 3}$ where $T$ is a symmetric tensor of order 3 and $u_i \in \mathbb{C}^n$. In order to obtain efficient decomposition algorithms, it is necessary to require additional properties from the $u_i$. In this paper we assume that the $u_i$ are linearly independent. This implies $r \leq n$, that is, the decomposition of $T$ is undercomplete. We give a randomized algorithm for the following problem in the exact arithmetic model of computation: Let $T$ be an order-3 symmetric tensor that has an undercomplete decomposition. Then given some $T'$ close to $T$, an accuracy parameter $\varepsilon$, and an upper bound $B$ on the condition number of the tensor, output vectors $u'_i$ such that $||u_i - u'_i|| \leq \varepsilon$ (up to permutation and multiplication by cube roots of unity) with high probability. The main novel features of our algorithm are: 1) We provide the first algorithm for this problem that runs in linear time in the size of the input tensor. More specifically, it requires $O(n^3)$ arithmetic operations for all accuracy parameters $\varepsilon =$ 1/poly(n) and B = poly(n). 2) Our algorithm is robust, that is, it can handle inverse-quasi-polynomial noise (in $n$, $B$, $\frac{1}{\varepsilon}$) in the input tensor. 3) We present a smoothed analysis of the condition number of the tensor decomposition problem. This guarantees that the condition number is low with high probability and further shows that our algorithm runs in linear time, except for some rare badly conditioned inputs. Our main algorithm is a reduction to the complete case ($r=n$) treated in our previous work [Koiran, Saha, CIAC 2023]. For efficiency reasons we cannot use this algorithm as a black box. Instead, we show that it can be run on an implicitly represented tensor obtained from the input tensor by a change of basis.
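The input object above is easy to construct concretely. The sketch below (a hypothetical illustration over the reals, not the paper's algorithm) forms $T = \sum_{i=1}^r u_i^{\otimes 3}$ from $r < n$ generic vectors, so the decomposition is undercomplete with linearly independent components.

```python
import numpy as np

# Hypothetical illustration: build T = sum_i u_i^{\otimes 3} with r < n.
# Generic Gaussian rows u_i are linearly independent with probability 1.
rng = np.random.default_rng(1)
n, r = 6, 4
U = rng.standard_normal((r, n))  # row i is the component vector u_i

# Sum of r rank-one cubes; T[a,b,c] = sum_i U[i,a] * U[i,b] * U[i,c].
T = np.einsum('ia,ib,ic->abc', U, U, U)
```

The resulting $n\times n\times n$ array is symmetric under every permutation of its three indices, and its mode-1 unfolding has rank $r$, reflecting undercompleteness.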

It is well known that any graph admits a crossing-free straight-line drawing in $\mathbb{R}^3$ and that any planar graph admits the same even in $\mathbb{R}^2$. For a graph $G$ and $d \in \{2,3\}$, let $\rho^1_d(G)$ denote the smallest number of lines in $\mathbb{R}^d$ whose union contains a crossing-free straight-line drawing of $G$. For $d=2$, $G$ must be planar. Similarly, let $\rho^2_3(G)$ denote the smallest number of planes in $\mathbb{R}^3$ whose union contains a crossing-free straight-line drawing of $G$. We investigate the complexity of computing these three parameters and obtain the following hardness and algorithmic results. - For $d\in\{2,3\}$, we prove that deciding whether $\rho^1_d(G)\le k$ for a given graph $G$ and integer $k$ is ${\exists\mathbb{R}}$-complete. - Since $\mathrm{NP}\subseteq{\exists\mathbb{R}}$, deciding $\rho^1_d(G)\le k$ is NP-hard for $d\in\{2,3\}$. On the positive side, we show that the problem is fixed-parameter tractable with respect to $k$. - Since ${\exists\mathbb{R}}\subseteq\mathrm{PSPACE}$, both $\rho^1_2(G)$ and $\rho^1_3(G)$ are computable in polynomial space. On the negative side, we show that drawings that are optimal with respect to $\rho^1_2$ or $\rho^1_3$ sometimes require irrational coordinates. - We prove that deciding whether $\rho^2_3(G)\le k$ is NP-hard for any fixed $k \ge 2$. Hence, the problem is not fixed-parameter tractable with respect to $k$ unless $\mathrm{P}=\mathrm{NP}$.

Let $(P,E)$ be a $(d+1)$-uniform geometric hypergraph, where $P$ is an $n$-point set in general position in $\mathbb{R}^d$ and $E\subseteq {P\choose d+1}$ is a collection of $\epsilon{n\choose d+1}$ $d$-dimensional simplices with vertices in $P$, for $0<\epsilon\leq 1$. We show that there is a point $x\in {\mathbb R}^d$ that pierces $\displaystyle \Omega\left(\epsilon^{(d^4+d)(d+1)+\delta}{n\choose d+1}\right)$ simplices in $E$, for any fixed $\delta>0$. This is a dramatic improvement in all dimensions $d\geq 3$, over the previous lower bounds of the general form $\displaystyle \epsilon^{(cd)^{d+1}}n^{d+1}$, which date back to the seminal 1991 work of Alon, B\'{a}r\'{a}ny, F\"{u}redi and Kleitman. As a result, any $n$-point set in general position in $\mathbb{R}^d$ admits only $\displaystyle O\left(n^{d-\frac{1}{d(d-1)^4+d(d-1)}+\delta}\right)$ halving hyperplanes, for any $\delta>0$, which is a significant improvement over the previously best known bound $\displaystyle O\left(n^{d-\frac{1}{(2d)^{d}}}\right)$ in all dimensions $d\geq 5$. An essential ingredient of our proof is the following semi-algebraic Tur\'an-type result of independent interest: Let $(V_1,\ldots,V_k,E)$ be a hypergraph of bounded semi-algebraic description complexity in ${\mathbb R}^d$ that satisfies $|E|\geq \varepsilon |V_1|\cdot\ldots\cdot |V_k|$ for some $\varepsilon>0$. Then there exist subsets $W_i\subseteq V_i$ that satisfy $W_1\times W_2\times\ldots\times W_k\subseteq E$, and $|W_1|\cdot\ldots\cdot|W_k|=\Omega\left(\varepsilon^{d(k-1)+1}|V_1|\cdot |V_2|\cdot\ldots\cdot|V_k|\right)$.

This paper applies the sample-average-approximation (SAA) scheme to solve a system of stochastic equations (SSE), which has many applications in a variety of fields. The SAA is an effective paradigm for addressing risk and uncertainty in stochastic models from the perspective of the Monte Carlo principle. Nonetheless, a numerical conflict arises from the sample size of SAA when one has to make a tradeoff between the accuracy of solutions and the computational cost. To alleviate this issue, we incorporate a gradually reinforced SAA scheme into a differentiable homotopy method and develop a gradually reinforced sample-average-approximation (GRSAA) differentiable homotopy method. By introducing a series of continuously differentiable functions of the homotopy parameter $t$ ranging between zero and one, we establish a differentiable homotopy system, which is able to gradually increase the sample size of SAA as $t$ descends from one to zero. The set of solutions to the homotopy system contains an everywhere smooth path, which starts from an arbitrary point and ends at a solution to the SAA with any desired accuracy. The GRSAA differentiable homotopy method serves as a bridge linking the gradually reinforced SAA scheme with a differentiable homotopy method: it retains the global convergence property of the homotopy method while greatly reducing the computational cost of attaining a desired solution to the original SSE. Several numerical experiments further confirm the effectiveness and efficiency of the proposed method.
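The basic SAA idea underlying the method can be sketched in a toy example (this is the plain SAA scheme, not the paper's GRSAA method): the expectation in a stochastic equation $\mathbb{E}[F(x,\xi)]=0$ is replaced by a sample average, and the resulting deterministic system is solved. With the hypothetical choice $F(x,\xi)=x-\xi$, the SAA root is simply the sample mean, which approaches the true root $\mathbb{E}[\xi]$ as the sample size grows, illustrating the accuracy/sample-size tradeoff discussed above.

```python
import numpy as np

# Toy SAA sketch: solve E[F(x, xi)] = 0 with F(x, xi) = x - xi and
# xi ~ N(mu, 1), whose true root is x* = mu.
rng = np.random.default_rng(2)
mu = 3.0

def saa_root(N):
    """Root of the sample-average equation (1/N) * sum_k (x - xi_k) = 0."""
    xi = rng.normal(mu, 1.0, size=N)
    return xi.mean()

coarse = saa_root(10)       # cheap but inaccurate approximation
fine = saa_root(100_000)    # accurate but computationally heavier
```

The GRSAA idea is to move between such approximations gradually along a homotopy path rather than solving the largest system from scratch.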

This paper deals with a nonparametric warped kernel estimator $\widehat b$ of the drift function computed from independent continuous observations of a diffusion process. A risk bound on $\widehat b$ is established. The paper also deals with an extension of the PCO bandwidth selection method for $\widehat b$. Finally, some numerical experiments are provided.

While there exists a rich array of matrix column subset selection problem (CSSP) algorithms for use with interpolative and CUR-type decompositions, their use can often become prohibitive as the size of the input matrix increases. In an effort to address these issues, the authors in \cite{emelianenko2024adaptive} developed a general framework that pairs a column-partitioning routine with a column-selection algorithm. Two of the four algorithms presented in that work paired the Centroidal Voronoi Orthogonal Decomposition (\textsf{CVOD}) and an adaptive variant (\textsf{adaptCVOD}) with the Discrete Empirical Interpolation Method (\textsf{DEIM}) \cite{sorensen2016deim}. In this work, we extend this framework and pair the \textsf{CVOD}-type algorithms with any CSSP algorithm that returns linearly independent columns. Our results include detailed error bounds for the solutions provided by these paired algorithms, as well as expressions that explicitly characterize how the quality of the selected column partition affects the resulting CSSP solution.

We initiate a systematic study of worst-group risk minimization under $(\epsilon, \delta)$-differential privacy (DP). The goal is to privately find a model that approximately minimizes the maximal risk across $p$ sub-populations (groups) with different distributions, where each group distribution is accessed via a sample oracle. We first present a new algorithm that achieves excess worst-group population risk of $\tilde{O}(\frac{p\sqrt{d}}{K\epsilon} + \sqrt{\frac{p}{K}})$, where $K$ is the total number of samples drawn from all groups and $d$ is the problem dimension. Our rate is nearly optimal when each distribution is observed via a fixed-size dataset of size $K/p$. Our result is based on a new stability-based analysis for the generalization error. In particular, we show that $\Delta$-uniform argument stability implies $\tilde{O}(\Delta + \frac{1}{\sqrt{n}})$ generalization error w.r.t. the worst-group risk, where $n$ is the number of samples drawn from each sample oracle. Next, we propose an algorithmic framework for worst-group population risk minimization using any DP online convex optimization algorithm as a subroutine. Hence, we give another excess risk bound of $\tilde{O}\left( \sqrt{\frac{d^{1/2}}{\epsilon K}} +\sqrt{\frac{p}{K\epsilon^2}} \right)$. Assuming the typical setting of $\epsilon=\Theta(1)$, this bound is more favorable than our first bound in a certain range of $p$ as a function of $K$ and $d$. Finally, we study differentially private worst-group empirical risk minimization in the offline setting, where each group distribution is observed by a fixed-size dataset. We present a new algorithm with nearly optimal excess risk of $\tilde{O}(\frac{p\sqrt{d}}{K\epsilon})$.

We consider the statistical linear inverse problem of making inference on an unknown source function in an elliptic partial differential equation from noisy observations of its solution. We employ nonparametric Bayesian procedures based on Gaussian priors, leading to convenient conjugate formulae for posterior inference. We review recent results providing theoretical guarantees on the quality of the resulting posterior-based estimation and uncertainty quantification, and we discuss the application of the theory to the important classes of Gaussian series priors defined on the Dirichlet-Laplacian eigenbasis and Mat\'ern process priors. We provide an implementation of posterior inference for both classes of priors, and investigate its performance in a numerical simulation study.
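The conjugacy mentioned above can be made concrete in a finite-dimensional toy version of the linear inverse problem (an assumption-laden sketch, not the paper's PDE setting): with observations $y = Af + \varepsilon$, Gaussian noise $\varepsilon\sim N(0,\sigma^2 I)$, and Gaussian prior $f\sim N(0, s^2 I)$, the posterior is Gaussian with explicit mean and covariance.

```python
import numpy as np

# Toy conjugate Gaussian computation for y = A f + noise.
rng = np.random.default_rng(3)
d, m = 5, 20
A = rng.standard_normal((m, d))       # discretized forward operator (hypothetical)
f_true = rng.standard_normal(d)
sigma, s = 0.1, 1.0                   # noise and prior standard deviations
y = A @ f_true + sigma * rng.normal(size=m)

# Conjugate formulae: posterior covariance and mean.
post_cov = np.linalg.inv(A.T @ A / sigma**2 + np.eye(d) / s**2)
post_mean = post_cov @ (A.T @ y) / sigma**2
```

The posterior mean is the Bayes estimator of the source, and the posterior covariance drives the uncertainty quantification whose frequentist validity the reviewed theory addresses.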
