
We study the problem of estimating the size of a maximum matching and a minimum vertex cover in sublinear time. Denoting the number of vertices by $n$ and the average degree in the graph by $\bar{d}$, we obtain the following results for both problems:
* A multiplicative $(2+\epsilon)$-approximation that takes $\tilde{O}(n/\epsilon^2)$ time using adjacency list queries.
* A multiplicative-additive $(2, \epsilon n)$-approximation in $\tilde{O}((\bar{d} + 1)/\epsilon^2)$ time using adjacency list queries.
* A multiplicative-additive $(2, \epsilon n)$-approximation in $\tilde{O}(n/\epsilon^{3})$ time using adjacency matrix queries.
All three results are provably time-optimal up to polylogarithmic factors, culminating a long line of work on these problems. Our main contribution, and the key ingredient leading to the bounds above, is a new and near-tight analysis of the average query complexity of the randomized greedy maximal matching algorithm, which improves upon a seminal result of Yoshida, Yamamoto, and Ito [STOC'09].
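For reference, the randomized greedy maximal matching algorithm analyzed here is simple to state. The Python sketch below builds the whole matching explicitly (the sublinear-time algorithms instead simulate it locally through queries); the adjacency-list representation `adj` is an illustrative assumption.

```python
import random

def randomized_greedy_maximal_matching(adj, seed=0):
    """Randomized greedy maximal matching: scan the edges in a uniformly
    random order and add each edge whose endpoints are both still unmatched.
    `adj` maps a vertex to its list of neighbours."""
    rng = random.Random(seed)
    edges = list({(min(u, v), max(u, v)) for u in adj for v in adj[u]})
    rng.shuffle(edges)                    # random permutation of the edges
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Toy example: a path on 5 vertices.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(randomized_greedy_maximal_matching(adj))
```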

Related Content

We propose the first algorithm for non-rigid 2D-to-3D shape matching, where the input is a 2D shape represented as a planar curve and a 3D shape represented as a surface; the output is a continuous curve on the surface. We cast the problem as finding the shortest circular path on the product 3-manifold of the surface and the curve. We prove that the optimal matching can be computed in polynomial time with a (worst-case) complexity of $O(mn^2\log(n))$, where $m$ and $n$ denote the number of vertices on the template curve and the 3D shape, respectively. We also demonstrate that, in practice, the runtime is essentially linear in $m \cdot n$, making it an efficient method for shape analysis and shape retrieval. Quantitative evaluation confirms that the method provides excellent results for sketch-based deformable 3D shape retrieval.
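A minimal sketch of the underlying idea: match a sampled template curve to a walk on the mesh graph via a shortest path in the product graph. It simplifies the method considerably (the curve is treated as open rather than circular, the mesh is a plain adjacency structure, and `cost(i, v)` is a placeholder for the descriptor-mismatch term), so it only conveys the product-space formulation.

```python
import heapq

def match_curve_to_surface(m, mesh_adj, cost):
    """Match a template curve with vertices 0..m-1 to a walk on a surface mesh
    by a shortest path in the product graph.  `mesh_adj[v]` lists the neighbours
    of mesh vertex v; `cost(i, v)` is a placeholder mismatch cost for assigning
    curve vertex i to mesh vertex v.  Simplification: open (non-circular) curve."""
    verts = list(mesh_adj)
    dist = {(0, v): cost(0, v) for v in verts}      # state = (curve index, mesh vertex)
    heap = [(d, s) for s, d in dist.items()]
    heapq.heapify(heap)
    while heap:
        d, (i, v) = heapq.heappop(heap)
        if d > dist.get((i, v), float("inf")):
            continue
        if i == m - 1:
            return d                                # cheapest complete matching found
        for w in [v] + mesh_adj[v]:                 # stay put or move to a neighbour
            nd = d + cost(i + 1, w)
            if nd < dist.get((i + 1, w), float("inf")):
                dist[(i + 1, w)] = nd
                heapq.heappush(heap, (nd, (i + 1, w)))
    return float("inf")

# Toy example: a 3-point curve against a triangle mesh graph.
mesh = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(match_curve_to_surface(3, mesh, lambda i, v: abs(i - v)))
```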

Modern 5G wireless cellular networks use massive multiple-input multiple-output (MIMO) technology. This concept entails using an antenna array at a base station to concurrently serve many mobile devices, each of which may itself have several antennas. In this field, a significant role is played by the precoding (beamforming) problem. During downlink, an important part of precoding is the power allocation problem, which distributes power between transmitted symbols. In this paper, we consider the power allocation problem for a class of precodings that asymptotically work as regularized zero-forcing. Under some realistic assumptions, we simplify the sum spectral efficiency functional and obtain tractable expressions for it. We prove that equal power allocation is optimal for the simplified functional under a total power constraint (TPC). We also propose low-complexity algorithms that improve upon equal power allocation in the case of per-antenna power constraints. In simulations using Quadriga, the proposed algorithms show a significant gain in sum spectral efficiency while using a similar computing time as the reference solutions.
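A small numpy sketch of the objects involved: a regularized zero-forcing precoder, the sum spectral efficiency functional, and equal power allocation under the total power constraint. The i.i.d. channel, regularization constant, and column normalization are illustrative assumptions, not the paper's exact system model.

```python
import numpy as np

def rzf_precoder(H, reg):
    """Regularized zero-forcing precoder for channel H (users x antennas)."""
    return H.conj().T @ np.linalg.inv(H @ H.conj().T + reg * np.eye(H.shape[0]))

def sum_spectral_efficiency(H, W, p, noise=1.0):
    """Sum SE with per-user powers p; columns of W are per-user precoding vectors."""
    G = H @ W                                    # effective channel after precoding
    se = 0.0
    for k in range(H.shape[0]):
        sig = p[k] * abs(G[k, k]) ** 2
        interf = sum(p[j] * abs(G[k, j]) ** 2 for j in range(H.shape[0]) if j != k)
        se += np.log2(1 + sig / (interf + noise))
    return se

rng = np.random.default_rng(0)
K, M, P_total = 4, 16, 10.0                      # users, antennas, total power budget
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)
W = rzf_precoder(H, reg=K / P_total)             # illustrative regularization choice
W /= np.linalg.norm(W, axis=0)                   # unit-norm precoding vectors
p_equal = np.full(K, P_total / K)                # equal power allocation under TPC
print(sum_spectral_efficiency(H, W, p_equal))
```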

The undecidability of basic decision problems for general FIFO machines, such as reachability and unboundedness, is well-known. In this paper, we provide an underapproximation for the general model by considering only runs that are input-bounded (i.e., the sequence of messages sent through a particular channel belongs to a given bounded language). We prove, by reducing this model to a counter machine with restricted zero tests, that the rational-reachability problem (and by extension, control-state reachability, unboundedness, deadlock, etc.) is decidable. This class of machines subsumes input-letter-bounded machines, flat machines, linear FIFO nets, and monogeneous machines, for which some of these problems were already shown to be decidable. These theoretical results can form the foundation of a tool for verifying general FIFO machines via the analysis of input-bounded machines.
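To fix the model, here is a toy explicit-state reachability check for a one-channel FIFO machine. The exploration is crudely underapproximated by capping the channel length; it is not the counter-machine reduction used in the paper, only an illustration of what configurations and control-state reachability mean here.

```python
from collections import deque

def reachable_states(transitions, init, cap=4):
    """Explicit-state reachability for a one-channel FIFO machine.
    `transitions` maps a control state to a list of (action, message, next_state),
    where action is '!' (send) or '?' (receive).  A crude underapproximation:
    channel contents longer than `cap` are not explored."""
    seen = {(init, "")}
    queue = deque(seen)
    while queue:
        state, chan = queue.popleft()
        for action, msg, nxt in transitions.get(state, []):
            if action == "!" and len(chan) < cap:
                conf = (nxt, chan + msg)            # append the sent message
            elif action == "?" and chan.startswith(msg):
                conf = (nxt, chan[len(msg):])       # consume from the channel head
            else:
                continue
            if conf not in seen:
                seen.add(conf)
                queue.append(conf)
    return {s for s, _ in seen}

# Producer/consumer toy: q0 sends 'a', q1 either consumes it or keeps sending.
trans = {"q0": [("!", "a", "q1")], "q1": [("?", "a", "q0"), ("!", "a", "q1")]}
print(reachable_states(trans, "q0"))
```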

Graph-SLAM is a well-established algorithm for constructing a topological map of the environment while simultaneously attempting the localisation of the robot. It relies on scan matching algorithms to align noisy observations collected along the robot's movements and to estimate the robot's current location. We propose a fundamentally different approach to scan matching tasks to improve the estimation of roto-translation displacements and therefore the performance of the full SLAM algorithm. A Monte-Carlo approach is used to generate weighted hypotheses of the geometrical displacement between two scans, and then we cluster these hypotheses to compute the displacement that results in the best alignment. To cluster in the space of roto-translations, we propose a novel approach that robustly extends Gaussian Mean-Shift to orientations by factorizing the kernel density over the roto-translation components. We demonstrate the effectiveness of our method in an extensive set of experiments using both synthetic data and the Intel Research Lab's benchmarking datasets. The results confirm that our approach outperforms state-of-the-art iterative point-based scan matching algorithms in both matching accuracy and runtime.
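A numpy sketch of one factorized mean-shift step on roto-translation hypotheses (tx, ty, theta): a Gaussian kernel on the translation part, a wrapped kernel on the angle, and a circular mean for the orientation update. The bandwidths, weights, and synthetic hypotheses are illustrative assumptions.

```python
import numpy as np

def mean_shift_step(hyps, weights, query, h_t=0.5, h_r=0.3):
    """One mean-shift step on roto-translation hypotheses (tx, ty, theta).
    The kernel factorizes: Gaussian on the translation, wrapped Gaussian on the
    angle; the orientation update uses the weighted circular mean."""
    t, theta = hyps[:, :2], hyps[:, 2]
    dt = np.linalg.norm(t - query[:2], axis=1)
    dr = np.angle(np.exp(1j * (theta - query[2])))       # wrapped angular difference
    k = weights * np.exp(-0.5 * (dt / h_t) ** 2) * np.exp(-0.5 * (dr / h_r) ** 2)
    new_t = (k[:, None] * t).sum(axis=0) / k.sum()
    new_theta = np.angle((k * np.exp(1j * theta)).sum()) # circular weighted mean
    return np.array([new_t[0], new_t[1], new_theta])

rng = np.random.default_rng(1)
# Weighted displacement hypotheses around (1, 2, 0.5 rad) plus uniform outliers.
hyps = np.vstack([rng.normal([1.0, 2.0, 0.5], 0.1, size=(50, 3)),
                  rng.uniform(-3, 3, size=(10, 3))])
w = np.ones(len(hyps))
mode = hyps[0].copy()                                    # start from one hypothesis
for _ in range(20):
    mode = mean_shift_step(hyps, w, mode)
print(mode)   # settles near the dominant cluster, roughly (1, 2, 0.5)
```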

Logistic Bandits have recently undergone careful scrutiny by virtue of their combined theoretical and practical relevance. This research effort delivered statistically efficient algorithms, improving the regret of previous strategies by exponentially large factors. Such algorithms are however strikingly costly as they require $\Omega(t)$ operations at each round. On the other hand, a different line of research focused on computational efficiency ($\mathcal{O}(1)$ per-round cost), but at the cost of letting go of the aforementioned exponential improvements. Obtaining the best of both worlds is unfortunately not a matter of marrying both approaches. Instead, we introduce a new learning procedure for Logistic Bandits. It yields confidence sets whose sufficient statistics can be easily maintained online without sacrificing statistical tightness. Combined with efficient planning mechanisms, we design fast algorithms whose regret performance still matches the problem-dependent lower bound of Abeille et al. (2021). To the best of our knowledge, those are the first Logistic Bandit algorithms that simultaneously enjoy statistical and computational efficiency.
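As a rough illustration of maintaining logistic sufficient statistics online at a per-round cost independent of $t$, the sketch below performs one Newton-style update per observation on a cumulative curvature matrix. It is a generic online-logistic sketch, not the paper's confidence-set construction; the regularization, environment, and names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnlineLogistic:
    """Online maintenance of logistic-regression statistics: one Newton-style
    step per observation, with memory and per-round cost independent of t."""
    def __init__(self, d, lam=1.0):
        self.theta = np.zeros(d)
        self.H = lam * np.eye(d)                       # regularized curvature matrix

    def update(self, x, r):
        p = sigmoid(x @ self.theta)
        self.H += p * (1 - p) * np.outer(x, x)         # rank-one curvature update
        grad = (p - r) * x
        self.theta -= np.linalg.solve(self.H, grad)    # single Newton-like step

rng = np.random.default_rng(0)
theta_star = np.array([1.0, -0.5, 0.2])
learner = OnlineLogistic(d=3)
for _ in range(2000):
    x = rng.normal(size=3)
    r = rng.random() < sigmoid(x @ theta_star)         # Bernoulli (logistic) reward
    learner.update(x, r)
print(learner.theta)    # should be close to theta_star
```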

Proximal Policy Optimization (PPO) is a highly popular model-free reinforcement learning (RL) approach. However, with continuous state and action spaces and a Gaussian policy -- common in computer animation and robotics -- PPO is prone to getting stuck in local optima. In this paper, we observe a tendency of PPO to prematurely shrink the exploration variance, which naturally leads to slow progress. Motivated by this, we borrow ideas from CMA-ES, a black-box optimization method designed for intelligent adaptive Gaussian exploration, to derive PPO-CMA, a novel proximal policy optimization approach that can expand the exploration variance on objective function slopes and shrink the variance when close to the optimum. This is implemented by using separate neural networks for policy mean and variance and training the mean and variance in separate passes. Our experiments demonstrate a clear improvement over vanilla PPO in many difficult OpenAI Gym MuJoCo tasks.
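A toy numpy illustration of the separate-pass idea on a one-dimensional black-box objective: the variance is fitted first, around the old mean, to the positive-advantage actions (so it can expand along the improvement direction), and the mean is updated afterwards. The actual method trains two state-conditioned neural networks; this state-free sketch only conveys the update logic and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
objective = lambda a: -(a - 3.0) ** 2               # optimum at a = 3

mean, log_std = 0.0, np.log(1.0)
for it in range(50):
    std = np.exp(log_std)
    actions = rng.normal(mean, std, size=64)        # sample from the Gaussian policy
    returns = objective(actions)
    adv = returns - returns.mean()                  # advantage = return minus baseline
    pos = adv > 0                                   # keep only positive-advantage actions
    w = adv[pos] / adv[pos].sum()                   # normalized positive-advantage weights
    # Pass 1: variance update around the *old* mean, so the variance can expand
    # along the improvement direction instead of collapsing prematurely.
    log_std = 0.5 * np.log(np.sum(w * (actions[pos] - mean) ** 2) + 1e-8)
    # Pass 2: mean update toward the advantage-weighted actions.
    mean = np.sum(w * actions[pos])

print(mean, np.exp(log_std))   # mean approaches 3; std shrinks near the optimum
```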

We show that for the problem of testing if a matrix $A \in F^{n \times n}$ has rank at most $d$, or requires changing an $\epsilon$-fraction of entries to have rank at most $d$, there is a non-adaptive query algorithm making $\widetilde{O}(d^2/\epsilon)$ queries. Our algorithm works for any field $F$. This improves upon the previous $O(d^2/\epsilon^2)$ bound (SODA'03), and bypasses an $\Omega(d^2/\epsilon^2)$ lower bound of (KDD'14) which holds if the algorithm is required to read a submatrix. Our algorithm is the first such algorithm which does not read a submatrix, and instead reads a carefully selected non-adaptive pattern of entries in rows and columns of $A$. We complement our algorithm with a matching query complexity lower bound for non-adaptive testers over any field. We also give tight bounds of $\widetilde{\Theta}(d^2)$ queries in the sensing model for which query access comes in the form of $\langle X_i, A\rangle := \mathrm{tr}(X_i^\top A)$; perhaps surprisingly, these bounds do not depend on $\epsilon$. We next develop a novel property testing framework for testing numerical properties of a real-valued matrix $A$ more generally, which includes the stable rank, Schatten-$p$ norms, and SVD entropy. Specifically, we propose a bounded entry model, where $A$ is required to have entries bounded by $1$ in absolute value. We give upper and lower bounds for a wide range of problems in this model, and discuss connections to the sensing model above.
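For context, the sketch below implements the naive tester that *does* read submatrices: sample random $(d+1) \times (d+1)$ submatrices and reject upon finding one of full rank. Its query cost is on the order of $d^2/\epsilon^2$, exactly the regime the new non-adaptive pattern improves upon; the sampling budget and test data are illustrative, and this is not the paper's algorithm.

```python
import numpy as np

def naive_rank_test(A, d, eps, trials=None, rng=None):
    """Baseline tester that reads submatrices: sample random (d+1) x (d+1)
    submatrices and reject if any has rank d+1.  Uses ~d^2/eps^2 entry queries."""
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    trials = trials or int(np.ceil(4 / eps ** 2))
    for _ in range(trials):
        rows = rng.choice(n, size=d + 1, replace=False)
        cols = rng.choice(n, size=d + 1, replace=False)
        if np.linalg.matrix_rank(A[np.ix_(rows, cols)]) > d:
            return "reject: rank > d witnessed"
    return "accept: looks rank <= d"

rng = np.random.default_rng(0)
low = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 100))    # exactly rank 2
noisy = low.copy()
mask = rng.random((100, 100)) < 0.2                            # corrupt 20% of entries
noisy[mask] = rng.normal(size=mask.sum())
print(naive_rank_test(low, d=2, eps=0.2))
print(naive_rank_test(noisy, d=2, eps=0.2))
```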

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
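The local-smoothing idea behind DRS can be illustrated in a centralized toy: estimate the gradient of the Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x + \gamma u)]$ with the standard two-point randomized estimator and run subgradient-style steps on a non-smooth objective. The objective, step sizes, and sample counts are illustrative, and the sketch deliberately omits the distributed aspect.

```python
import numpy as np

def smoothed_grad(f, x, gamma=0.1, samples=64, rng=None):
    """Gradient estimate of f_gamma(x) = E[f(x + gamma*u)], u ~ N(0, I),
    via the two-point randomized estimator (f(x + gamma*u) - f(x)) * u / gamma."""
    rng = rng or np.random.default_rng()
    g = np.zeros(x.size)
    for _ in range(samples):
        u = rng.normal(size=x.size)
        g += (f(x + gamma * u) - f(x)) / gamma * u
    return g / samples

# Non-smooth convex toy objective: f(x) = ||x - x*||_1.
x_star = np.array([1.0, -2.0, 0.5])
f = lambda x: np.abs(x - x_star).sum()
x = np.zeros(3)
for t in range(1, 501):
    x -= (0.5 / np.sqrt(t)) * smoothed_grad(f, x, rng=np.random.default_rng(t))
print(x)    # approaches x_star despite f being non-smooth
```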

This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
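A minimal sketch of a two-sided sketching scheme of this kind: a range sketch $Y = A\Omega$, a co-range sketch $W = \Psi A$, and a reconstruction $A \approx Q\,(\Psi Q)^{\dagger} W$ with $Q$ an orthonormal basis for $Y$. The sketch sizes and oversampling choices below are illustrative, not the paper's recommended parameters.

```python
import numpy as np

def sketch_low_rank(A, r, oversample=5, rng=None):
    """Low-rank approximation of A from a random linear sketch:
    range sketch Y = A @ Omega and co-range sketch W = Psi @ A."""
    rng = rng or np.random.default_rng(0)
    m, n = A.shape
    k, l = r + oversample, r + 2 * oversample         # sketch sizes (illustrative)
    Omega = rng.normal(size=(n, k))
    Psi = rng.normal(size=(l, m))
    Y, W = A @ Omega, Psi @ A                         # the only access to A
    Q, _ = np.linalg.qr(Y)                            # orthonormal range estimate
    X = np.linalg.lstsq(Psi @ Q, W, rcond=None)[0]    # solve (Psi Q) X ~= W
    return Q, X                                       # A ~= Q @ X, rank <= k

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 150))   # exactly rank 10
Q, X = sketch_low_rank(A, r=10)
print(np.linalg.norm(A - Q @ X) / np.linalg.norm(A))          # near machine precision
```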

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors) with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvements of the condition number.
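To make the setting concrete, the sketch below runs plain decentralized gradient descent on a ring network with a doubly stochastic gossip matrix, on quadratic local functions whose global optimum is the average of the local data. It is a simple baseline illustrating the communication model only, not the accelerated dual method analyzed in the paper; the graph, weights, and step size are illustrative.

```python
import numpy as np

# Each node i holds f_i(x) = 0.5 * ||x - b_i||^2, so the global optimum of
# sum_i f_i is the average of the b_i.  Nodes alternate a gossip (averaging)
# step with their neighbours and a local gradient step.
m, d, T, eta = 5, 3, 300, 0.05
rng = np.random.default_rng(0)
b = rng.normal(size=(m, d))                   # local data at each node

# Ring graph with doubly stochastic mixing weights (self 0.5, each neighbour 0.25).
W = np.zeros((m, m))
for i in range(m):
    W[i, i], W[i, (i - 1) % m], W[i, (i + 1) % m] = 0.5, 0.25, 0.25

x = np.zeros((m, d))                          # one local iterate per node
for _ in range(T):
    grads = x - b                             # local gradients of f_i
    x = W @ x - eta * grads                   # gossip step + local gradient step

# All nodes end up close to the global optimum (a small residual bias remains
# because of the constant step size; a diminishing step removes it).
print(np.abs(x - b.mean(axis=0)).max())
```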
