
We study the algorithm of Gurvich, Khachiyan and Karzanov (the GKK algorithm) when it is run on mean-payoff games with no simple cycle of weight zero. We propose a new symmetric analysis, lowering Pisaruk's $O(n^2 N)$ upper bound on the number of iterations down to $N + E_p + E_m$, which is smaller than $nN$, where $n$ is the number of vertices, $N$ is the largest absolute value of a weight, and $E_p$ and $E_m$ are respectively the largest finite energy and dual-energy values of the game. Since each iteration is computed in $O(m)$ time, this improves on the state-of-the-art pseudopolynomial $O(mnN)$ runtime bound of Brim, Chaloupka, Doyen, Gentilini and Raskin by taking into account the structure of the game graph. We complement our result by showing that the analysis of Dorfman, Kaplan and Zwick also applies to the GKK algorithm, which is thus also subject to the state-of-the-art combinatorial runtime bound of $O(m 2^{n/2})$.
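
For context on the quantities involved, here is a minimal sketch of value iteration for energy games, in the spirit of the $O(mnN)$ algorithm of Brim et al. cited above; it is NOT the GKK algorithm itself, and the graph representation and names are illustrative assumptions. The finite values it computes are exactly the energy values bounded by $E_p$ in the analysis.

```python
# Minimal sketch of energy-game value iteration (in the spirit of Brim et al.,
# NOT the GKK algorithm); graph representation and names are assumptions.

def energy_values(vertices, edges, owner, N):
    """vertices: list of vertex ids; edges: dict v -> nonempty list of (u, w);
    owner[v] in {'Min', 'Max'}; N: largest absolute weight.
    Returns the least energy progress measure; None encodes 'infinite'
    (no finite initial credit suffices from that vertex)."""
    n = len(vertices)
    top = n * N        # finite values never need to exceed n * N
    inf = top + N + 1  # sentinel: stays above `top` after any single edge

    f = {v: 0 for v in vertices}

    def val(u):
        return inf if f[u] is None else f[u]

    def lift(v):
        # Min picks the cheapest successor, Max the most expensive one;
        # max(0, .) reflects that surplus energy cannot be banked below zero.
        vals = [max(0, val(u) - w) for (u, w) in edges[v]]
        best = min(vals) if owner[v] == 'Min' else max(vals)
        return None if best > top else best

    changed = True
    while changed:  # Kleene iteration up to the least fixed point
        changed = False
        for v in vertices:
            new = lift(v)
            if new != f[v]:
                f[v], changed = new, True
    return f
```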

Related Content

We show how to translate a subset of RISC-V machine code compiled from a subset of C to quadratic unconstrained binary optimization (QUBO) models that may be solved by a quantum annealing machine: given a bound $n$, there is an input $I$ to a program $P$ such that $P$ runs into a given program state $E$ executing no more than $n$ machine instructions if and only if the QUBO model of $P$ for $n$ evaluates to 0 on $I$. Thus, with more qubits on the machine than variables in the QUBO model, quantum annealing the model reaches 0 (ground) energy in constant time with high probability on some input $I$ that is part of the ground state if and only if $P$ runs into $E$ on $I$ executing no more than $n$ instructions. Translation takes $\mathcal{O}(n^2)$ time, effectively turning a quantum annealer into a polynomial-time symbolic execution engine and bounded model checker and eliminating their path and state explosion problems. Here, we take advantage of the fact that any machine instruction may only increase the size of the program state by a constant number of bits. Translation time comes down from $\mathcal{O}(n^2)$ to $\mathcal{O}(n\cdot|P|)$ if the memory consumption of $P$ is bounded by a constant, establishing a linear (quadratic) upper bound on quantum space, in number of qubits on a quantum annealer, in terms of algorithmic time (space) in classical computing. The construction provides a non-relativizing argument for $NP\subseteq BQP$, without violating the optimality of Grover's algorithm, also on gate-model quantum machines, and motivates a temporal and spatial metric of quantum advantage. Our prototypical open-source toolchain translates machine code that runs on real RISC-V hardware to models that can be solved by real quantum annealing hardware, as shown in our experiments.
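
To make the flavor of such a translation concrete, here is a minimal, self-contained sketch (not the authors' toolchain) of the standard QUBO gadget for a single logical AND constraint $z = x \wedge y$: the penalty $xy - 2(x+y)z + 3z$ is zero exactly on consistent assignments and positive otherwise. Circuit- and instruction-level translations sum one such penalty per gate and step, so the whole model has energy 0 precisely on valid executions.

```python
from itertools import product

# Minimal sketch (not the authors' toolchain): the standard QUBO penalty
# enforcing z = x AND y. It is 0 on consistent assignments, > 0 otherwise.

def and_gate_penalty(x, y, z):
    return x * y - 2 * (x + y) * z + 3 * z

# Brute-force check: the ground states (energy 0) are exactly the rows of
# the AND truth table, so minimizing the QUBO "executes" the gate.
for x, y, z in product((0, 1), repeat=3):
    e = and_gate_penalty(x, y, z)
    assert (e == 0) == (z == (x & y))
print("ground states of the AND gadget match its truth table")
```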

Multiple antenna arrays play a key role in wireless networks, not only for communications but also for localization and sensing. The use of large antenna arrays pushes towards a propagation regime in which the wavefront is no longer plane but spherical. This makes it possible to infer the position and orientation of an arbitrary source from the received signal without the need for multiple anchor nodes. To understand the fundamental limits of large antenna arrays for localization, this paper fuses wave propagation theory with estimation theory and computes the Cram{\'e}r-Rao Bound (CRB) for the estimation of the three Cartesian coordinates of the source on the basis of the electromagnetic vector field, observed over a rectangular surface area. To simplify the analysis, we assume that the source is a dipole whose center is located on the line perpendicular to the surface center, with an a priori known orientation. Numerical and asymptotic results are given to quantify the CRBs and to gain insights into the effect of various system parameters on the ultimate estimation accuracy. It turns out that surfaces of practical size may guarantee centimeter-level accuracy in the mmWave bands.
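
As background for readers less familiar with the estimation-theoretic toolbox: the CRB is the inverse of the Fisher information matrix, and for a deterministic signal observed in white Gaussian noise the Fisher information takes the classical Slepian-Bangs form (a generic textbook statement, not the paper's specific derivation):

$$
\mathrm{CRB}(\boldsymbol{\theta}) = \mathbf{I}^{-1}(\boldsymbol{\theta}),
\qquad
[\mathbf{I}(\boldsymbol{\theta})]_{ij} = \frac{2}{\sigma^2}\,\operatorname{Re}\left\{ \frac{\partial \boldsymbol{\mu}^{H}(\boldsymbol{\theta})}{\partial \theta_i}\, \frac{\partial \boldsymbol{\mu}(\boldsymbol{\theta})}{\partial \theta_j} \right\},
$$

where $\boldsymbol{\mu}(\boldsymbol{\theta})$ is the noiseless observation as a function of the source coordinates $\boldsymbol{\theta} = (x, y, z)$ and $\sigma^2$ is the noise variance; in the paper, the role of $\boldsymbol{\mu}$ is played by the electromagnetic vector field observed over the surface.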

In this work, we introduce an inverse averaging finite element method (IAFEM) for solving the size-modified Poisson-Nernst-Planck (SMPNP) equations. Compared with the classical Poisson-Nernst-Planck (PNP) equations, the SMPNP equations add a nonlinear term to each of the Nernst-Planck (NP) fluxes to describe steric repulsion, which allows multiple nonuniform particle sizes to be treated in simulations. Since the new terms involve sums and gradients of ion concentrations, the nonlinear coupling of the SMPNP equations is much stronger than that of the PNP equations. By introducing a generalized Slotboom transform, each size-modified NP equation is transformed into a self-adjoint equation with an exponentially behaved coefficient, which has a simple form similar to that of the standard NP equation under the Slotboom transformation. This treatment enables our recently developed inverse averaging technique to handle the exponential coefficients of the reformulated equations, with the advantages of numerical stability and flux conservation, especially in strongly nonlinear and convection-dominated cases. Compared with previous stabilization methods, the IAFEM proposed in this paper remains numerically stable when dealing with convection-dominated problems, and it is more concise and easier to implement numerically. Numerical experiments on a model problem with analytic solutions are presented to verify the accuracy and convergence order of the IAFEM for the SMPNP equations. Studies of the size effects in a sphere model and an ion channel system show that our IAFEM is more effective and robust than the traditional finite element method (FEM) when solving the SMPNP equations in simulations of biological systems.
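
As background, the classical (un-generalized) Slotboom transformation illustrates the reformulation used above; the paper's generalized transform for the SMPNP equations differs in detail. For a steady-state NP flux $J = -D\left(\nabla c + z c \nabla\phi\right)$, the substitution $c = e^{-z\phi}\,u$ gives

$$
J \;=\; -D\left(\nabla c + z c \nabla\phi\right) \;=\; -D\,e^{-z\phi}\,\nabla u,
$$

since the drift term cancels against the gradient of the exponential factor. The conservation law $\nabla\cdot J = 0$ then becomes $\nabla\cdot\left(D\,e^{-z\phi}\,\nabla u\right) = 0$, a symmetric elliptic equation whose exponentially behaved coefficient is exactly what the inverse averaging technique is designed to handle (here $c$ is the ion concentration, $\phi$ the electrostatic potential in thermal units, $z$ the valence, and $D$ the diffusion coefficient).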

We analyze the orthogonal greedy algorithm when applied to dictionaries $\mathbb{D}$ whose convex hull has small entropy. We show that if the metric entropy of the convex hull of $\mathbb{D}$ decays at a rate of $O(n^{-\frac{1}{2}-\alpha})$ for $\alpha > 0$, then the orthogonal greedy algorithm converges at the same rate on the variation space of $\mathbb{D}$. This improves upon the well-known $O(n^{-\frac{1}{2}})$ convergence rate of the orthogonal greedy algorithm in many cases, most notably for dictionaries corresponding to shallow neural networks. These results hold under no additional assumptions on the dictionary beyond the decay rate of the entropy of its convex hull. In addition, they are robust to noise in the target function and can be extended to convergence rates on the interpolation spaces of the variation norm. Finally, we show that these improved rates are sharp and prove a negative result showing that the iterates generated by the orthogonal greedy algorithm cannot in general be bounded in the variation norm of $\mathbb{D}$.
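
For concreteness, here is a minimal NumPy sketch of the orthogonal greedy algorithm (also known as orthogonal matching pursuit) over a finite dictionary; the matrix dictionary and least-squares re-projection are illustrative simplifications of the general Hilbert-space setting analyzed above.

```python
import numpy as np

def orthogonal_greedy(D, f, n_iter):
    """Sketch of the orthogonal greedy algorithm (OGA/OMP).
    D: (dim, size) array whose columns are normalized dictionary elements;
    f: (dim,) target vector; n_iter: number of greedy steps."""
    residual, selected = f.copy(), []
    for _ in range(n_iter):
        # Greedy step: pick the element most correlated with the residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        selected.append(k)
        # Orthogonal step: re-project f onto the span of all selected elements.
        S = D[:, selected]
        coeffs, *_ = np.linalg.lstsq(S, f, rcond=None)
        residual = f - S @ coeffs
    return selected, residual

# Usage: approximate a random target with 10 elements of a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 500))
D /= np.linalg.norm(D, axis=0)  # normalize dictionary elements
f = rng.standard_normal(100)
idx, r = orthogonal_greedy(D, f, 10)
print(f"residual norm after 10 steps: {np.linalg.norm(r):.3f}")
```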

A matrix formalism for the determination of the best estimator in certain simulation-based parameter estimation problems is presented and discussed. The equations, termed the Linear Template Fit, combine a linear regression with a least-squares method and its optimization. The Linear Template Fit employs only predictions that are calculated beforehand and that are provided for a few values of the parameter of interest. It is therefore particularly suited for parameter estimation with computationally intensive simulations, which are otherwise often limited in their usability for statistical inference, and for performance-critical applications. Equations for error propagation are discussed, and the analytic form provides comprehensive insight into the parameter estimation problem. Furthermore, the quickly converging algorithm of the Quadratic Template Fit is presented, which is suitable for a nonlinear dependence on the parameters. As an example application, a determination of the strong coupling constant, $\alpha_s(m_Z)$, from inclusive jet cross section data at the CERN Large Hadron Collider is studied and compared with previously published results.
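
A minimal sketch of the core idea under simplifying assumptions (one parameter, uncorrelated Gaussian uncertainties, and a toy stand-in for the expensive simulation; not the paper's full matrix formalism): regress each bin of the precomputed templates linearly on the parameter, then solve the resulting weighted least-squares problem for the best-fit parameter in closed form.

```python
import numpy as np

# One-parameter toy sketch of the template-fit idea (illustrative only).
rng = np.random.default_rng(1)
alphas = np.array([0.110, 0.115, 0.120, 0.125])  # reference parameter values

def simulate(alpha, n_bins=8):
    """Stand-in for the expensive simulation, evaluated only at `alphas`."""
    bins = np.arange(1, n_bins + 1)
    return bins * (1.0 + 5.0 * alpha)

templates = np.stack([simulate(a) for a in alphas], axis=1)  # (n_bins, 4)
sigma = 0.05 * templates[:, 0]                               # toy uncertainties
data = simulate(0.118) + sigma * rng.standard_normal(len(sigma))

# Step 1: per-bin linear regression of the templates on the parameter:
# templates[i, :] ~ m0[i] + m1[i] * alpha.
A = np.vstack([np.ones_like(alphas), alphas]).T              # (4, 2) design
(m0, m1), *_ = np.linalg.lstsq(A, templates.T, rcond=None)

# Step 2: closed-form weighted least squares for alpha, minimizing
# sum_i ((data[i] - m0[i] - m1[i] * alpha) / sigma[i])**2.
w = 1.0 / sigma**2
alpha_hat = np.sum(w * m1 * (data - m0)) / np.sum(w * m1**2)
alpha_err = 1.0 / np.sqrt(np.sum(w * m1**2))                 # Gaussian errors
print(f"alpha = {alpha_hat:.4f} +/- {alpha_err:.4f}")
```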

Escaping saddle points is a central research topic in nonconvex optimization. In this paper, we propose a simple gradient-based algorithm such that, for a smooth function $f\colon\mathbb{R}^n\to\mathbb{R}$, it outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}(\log n/\epsilon^{1.75})$ iterations. Compared to the previous state-of-the-art algorithms by Jin et al. with $\tilde{O}((\log n)^{4}/\epsilon^{2})$ or $\tilde{O}((\log n)^{6}/\epsilon^{1.75})$ iterations, our algorithm is polynomially better in terms of $\log n$ and matches their complexities in terms of $1/\epsilon$. For the stochastic setting, our algorithm outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}((\log n)^{2}/\epsilon^{4})$ iterations. Technically, our main contribution is a way to implement a robust Hessian power method using only gradients, which can find negative curvature near saddle points and achieves a polynomial speedup in $\log n$ compared to perturbed gradient descent methods. Finally, we also perform numerical experiments that support our results.
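
One standard way to realize a Hessian power method with gradients only (a generic finite-difference sketch, not the authors' exact robust variant): approximate the Hessian-vector product by a difference of gradients, $\nabla^2 f(x)\,v \approx (\nabla f(x + r v) - \nabla f(x))/r$, and power-iterate on a shifted operator so the most negative eigendirection dominates.

```python
import numpy as np

# Generic sketch (not the authors' exact robust variant): find a negative
# curvature direction at x using only gradient evaluations.

def hvp(grad, x, v, r=1e-5):
    """Finite-difference Hessian-vector product: (grad(x + r*v) - grad(x)) / r."""
    return (grad(x + r * v) - grad(x)) / r

def negative_curvature(grad, x, dim, ell, n_iter=200, seed=0):
    """Power method on (ell*I - H): its top eigenvector is the eigenvector of
    H with the smallest eigenvalue, since H has spectrum in [-ell, ell] when
    f is ell-smooth."""
    v = np.random.default_rng(seed).standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = ell * v - hvp(grad, x, v)
        v /= np.linalg.norm(v)
    return v  # v.T @ H @ v < 0 indicates an escapable saddle direction

# Usage on the saddle of f(x) = x0^2 - x1^2 near the origin:
grad = lambda x: np.array([2 * x[0], -2 * x[1]])
v = negative_curvature(grad, x=np.array([1e-3, 1e-3]), dim=2, ell=4.0)
print("direction:", v)  # aligns with (0, +/-1), the negative-curvature axis
```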

We study the problem of learning in the stochastic shortest path (SSP) setting, where an agent seeks to minimize the expected cost accumulated before reaching a goal state. We design a novel model-based algorithm EB-SSP that carefully skews the empirical transitions and perturbs the empirical costs with an exploration bonus to guarantee both optimism and convergence of the associated value iteration scheme. We prove that EB-SSP achieves the minimax regret rate $\widetilde{O}(B_{\star} \sqrt{S A K})$, where $K$ is the number of episodes, $S$ is the number of states, $A$ is the number of actions and $B_{\star}$ bounds the expected cumulative cost of the optimal policy from any state, thus closing the gap with the lower bound. Interestingly, EB-SSP obtains this result while being parameter-free, i.e., it does not require any prior knowledge of $B_{\star}$, nor of $T_{\star}$ which bounds the expected time-to-goal of the optimal policy from any state. Furthermore, we illustrate various cases (e.g., positive costs, or general costs when an order-accurate estimate of $T_{\star}$ is available) where the regret only contains a logarithmic dependence on $T_{\star}$, thus yielding the first horizon-free regret bound beyond the finite-horizon MDP setting.
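
A schematic sketch of the two ingredients named in the abstract (illustrative only; the exact bonus shape and skewing schedule of EB-SSP differ): mixing a small probability of jumping to the goal into every empirical transition makes the value iteration operator a contraction, and subtracting an exploration bonus from the empirical costs keeps the values optimistic.

```python
import numpy as np

def optimistic_vi(P_hat, c_hat, bonus, goal, eta=1e-3, tol=1e-8):
    """Schematic optimistic value iteration for SSP (not the exact EB-SSP).
    P_hat: (S, A, S) empirical transitions; c_hat, bonus: (S, A) arrays;
    goal: index of the goal state (zero-cost, absorbing)."""
    # Skewing: route probability eta of every transition to the goal, which
    # makes the Bellman operator a contraction with modulus (1 - eta).
    P_tilde = (1 - eta) * P_hat
    P_tilde[:, :, goal] += eta
    V = np.zeros(P_hat.shape[0])
    while True:
        # Optimism: perturb empirical costs downward, clipped at zero.
        Q = np.maximum(c_hat - bonus, 0.0) + P_tilde @ V
        V_new = Q.min(axis=1)
        V_new[goal] = 0.0
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=1)  # optimistic values, greedy policy
        V = V_new
```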

In order to avoid the curse of dimensionality frequently encountered in Big Data analysis, there has been vast development in the field of linear and nonlinear dimension reduction techniques in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lies on a lower-dimensional manifold, so the high-dimensionality problem can be overcome by learning the lower-dimensional behavior. However, in real-life applications, data is often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower-dimensional manifold and suggest a nonlinear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework for performing operations directly on the high-dimensional data in a computationally efficient manner. In this way, the preparatory step of dimension reduction, which induces distortions in the data, can be avoided altogether.
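
A minimal sketch of a moving least-squares projection in its classical curve/surface form (weighted PCA for a local frame, then a weighted polynomial fit of local heights); the method in the paper is the manifold generalization of this idea, and the bandwidth below is an illustrative choice.

```python
import numpy as np

def mls_project(points, q, h):
    """Classical MLS projection of a query q onto a curve approximated from
    noisy 2-D samples `points` (shape (N, 2)); illustrative simplification of
    the d-dimensional-manifold-in-R^n setting. h is a bandwidth choice."""
    w = np.exp(-np.sum((points - q) ** 2, axis=1) / h ** 2)  # Gaussian weights
    mu = (w[:, None] * points).sum(0) / w.sum()              # weighted centroid
    # Stage 1: local frame from weighted PCA; the direction of smallest
    # weighted variance approximates the normal.
    C = (w[:, None] * (points - mu)).T @ (points - mu)
    _, vecs = np.linalg.eigh(C)                              # ascending order
    normal, tangent = vecs[:, 0], vecs[:, 1]
    t = (points - mu) @ tangent                              # local coordinate
    s = (points - mu) @ normal                               # local height
    # Stage 2: weighted quadratic fit of heights over the tangent coordinate.
    A = np.column_stack([np.ones_like(t), t, t ** 2])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * s, rcond=None)
    tq = (q - mu) @ tangent
    height = coef @ np.array([1.0, tq, tq ** 2])
    return mu + tq * tangent + height * normal

# Usage: project a noisy query onto a circle sampled with noise.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 400)
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * rng.standard_normal((400, 2))
p = mls_project(pts, q=np.array([1.1, 0.1]), h=0.3)
print(p, np.linalg.norm(p))  # lands close to the unit circle: norm ~ 1
```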

We propose a new method of estimation in topic models that is not a variation on the existing simplex-finding algorithms and that estimates the number of topics $K$ from the observed data. We derive new finite-sample minimax lower bounds for the estimation of the word-topic matrix $A$, as well as new upper bounds for our proposed estimator. We describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents ($n$), individual document length ($N_i$), dictionary size ($p$) and number of topics ($K$), and both $p$ and $K$ are allowed to increase with $n$, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though we start out with the computational and theoretical disadvantage of not knowing the correct number of topics $K$, while the competing methods are given the correct value in our simulations.

In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(x) \triangleq \sum_{i=1}^{m} f_i(x)$ is: strongly convex and smooth, strongly convex only, smooth only, or merely convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
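
For reference, here is a minimal sketch of Nesterov's accelerated gradient descent in its basic smooth, strongly convex form, the primitive that is run on the dual problem above (a textbook scheme; the distributed execution, constraint handling, and constants in the paper are more involved).

```python
import numpy as np

def nesterov_agd(grad, x0, L, mu, n_iter=500):
    """Textbook Nesterov acceleration for an L-smooth, mu-strongly convex
    objective (the centralized primitive; not the paper's distributed dual
    scheme)."""
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # momentum coefficient
    x, y = x0.copy(), x0.copy()
    for _ in range(n_iter):
        x_next = y - grad(y) / L          # gradient step at the lookahead point
        y = x_next + beta * (x_next - x)  # momentum extrapolation
        x = x_next
    return x

# Usage: minimize the strongly convex quadratic 0.5 * x^T Q x - b^T x.
rng = np.random.default_rng(3)
Q = np.diag(np.linspace(1.0, 100.0, 20))  # mu = 1, L = 100
b = rng.standard_normal(20)
x = nesterov_agd(lambda z: Q @ z - b, x0=np.zeros(20), L=100.0, mu=1.0)
print("error:", np.linalg.norm(x - np.linalg.solve(Q, b)))
```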
