
We lay the foundations of a new theory for algorithms and computational complexity by parameterizing the instances of a computational problem as a moduli scheme. Considering the geometry of the scheme associated to 3-SAT, we separate P and NP.

Related Content

CC (computational complexity) is a leading venue in computational complexity. Its subject lies at the intersection of mathematics and theoretical computer science, with a clear mathematical profile and a strict mathematical format.
November 10, 2021

We introduce a generic template for developing regret minimization algorithms in the Stochastic Shortest Path (SSP) model, which achieves minimax optimal regret as long as certain properties are ensured. The key to our analysis is a new technique called implicit finite-horizon approximation, which approximates the SSP model by a finite-horizon counterpart only in the analysis, without explicit implementation. Using this template, we develop two new algorithms: the first one is model-free (the first in the literature to our knowledge) and minimax optimal under strictly positive costs; the second one is model-based and minimax optimal even with zero-cost state-action pairs, matching the best existing result from [Tarbouriech et al., 2021b]. Importantly, both algorithms admit highly sparse updates, making them computationally more efficient than all existing algorithms. Moreover, both can be made completely parameter-free.

The Gaussian covariance graph model is a popular model for revealing underlying dependency structures among random variables. A Bayesian approach to the estimation of covariance structures uses priors that force zeros on some off-diagonal entries of the covariance matrix and impose a positive-definiteness constraint. In this paper, we consider a spike and slab prior on the off-diagonal entries, i.e., a mixture of a point mass and a normal distribution. The point mass naturally introduces sparsity into the covariance structure, so the resulting posterior enables covariance structure learning. Under this prior, we calculate posterior model probabilities of covariance structures using a Laplace approximation. We show that the error due to the Laplace approximation becomes asymptotically negligible at a rate depending on the posterior convergence rate of the covariance matrix under the Frobenius norm. With the approximated posterior model probabilities, we propose a new framework for estimating a covariance structure. Since the Laplace approximation is carried out around the mode of the conditional posterior of the covariance matrix, which is not available in closed form, we propose a block coordinate descent algorithm to find the mode, and show that the covariance matrix can be estimated with this algorithm once the structure is chosen. In a simulation study based on five numerical models, the proposed method outperforms the graphical lasso and the sample covariance matrix in terms of root mean squared error, max norm, spectral norm, specificity, and sensitivity. The proposed method is also more accurate than its competitors when applied to linear discriminant analysis (LDA) classification on a breast cancer diagnostic dataset.
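To make the Laplace step concrete, here is a minimal generic sketch in Python (all names hypothetical; the paper works in the covariance-matrix parameterization and finds the mode with its block coordinate descent algorithm, whereas this sketch uses BFGS and a finite-difference Hessian):

```python
import numpy as np
from scipy.optimize import minimize

def finite_diff_hessian(f, x, eps=1e-5):
    # Symmetric finite-difference Hessian of a scalar function f at x.
    d = x.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(i, d):
            ei = np.zeros(d); ei[i] = eps
            ej = np.zeros(d); ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps ** 2)
            H[j, i] = H[i, j]
    return H

def laplace_log_evidence(log_post, theta0):
    # Laplace approximation to log \int exp(log_post(theta)) dtheta:
    # expand log_post to second order around its mode and integrate the
    # resulting Gaussian exactly.
    mode = minimize(lambda t: -log_post(t), theta0, method="BFGS").x
    H = finite_diff_hessian(log_post, mode)   # Hessian at the mode
    _, logdet = np.linalg.slogdet(-H)         # -H should be positive definite
    d = mode.size
    return log_post(mode) + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet

# Sanity check: standard Gaussian integrand; exact answer is (d/2)*log(2*pi).
print(laplace_log_evidence(lambda t: -0.5 * t @ t, np.ones(3)))
```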

In this paper we propose a tool for high-dimensional approximation based on trigonometric polynomials where we allow only low-dimensional interactions of variables. In a general high-dimensional setting, it is already possible to deal with special sampling sets such as sparse grids or rank-1 lattices; this requires black-box access to the function, i.e., the ability to evaluate it at any point. Here, we focus on scattered data points and grouped frequency index sets along the dimensions. From there we derive a fast matrix-vector multiplication, the grouped Fourier transform, for high-dimensional grouped index sets. These transformations can be used in the previously introduced method for approximating functions with low superposition dimension based on the analysis of variance (ANOVA) decomposition, where there is a one-to-one correspondence between the ANOVA terms and our proposed groups. The method is able to dynamically detect important sets of ANOVA terms during the approximation. In this paper, we consider the involved least-squares problem and add different forms of regularization: classical Tikhonov regularization, namely regularized least squares, and the group lasso, which promotes sparsity across groups. Since the latter admits no explicit solution formula, we apply the fast iterative shrinkage-thresholding algorithm (FISTA) to obtain the minimizer. Moreover, we discuss the possibility of incorporating smoothness information into the least-squares problem. Numerical experiments in underdetermined, overdetermined, and noisy settings indicate the applicability of our algorithms. While we consider periodic functions, the idea generalizes directly to non-periodic functions as well.
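The group-lasso subproblem and its FISTA solver are easy to sketch. A minimal Python version follows (hypothetical names; a dense matrix A stands in for the grouped Fourier transform, which would supply the fast matrix-vector products):

```python
import numpy as np

def group_soft_threshold(x, groups, tau):
    # Proximal operator of tau * sum_g ||x_g||_2 (block soft-thresholding).
    z = np.zeros_like(x)
    for g in groups:                          # each g is an array of indices
        ng = np.linalg.norm(x[g])
        if ng > tau:
            z[g] = (1.0 - tau / ng) * x[g]
    return z

def fista_group_lasso(A, b, groups, lam, n_iter=500):
    # FISTA for  min_x  0.5 * ||A x - b||_2^2 + lam * sum_g ||x_g||_2 .
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    for _ in range(n_iter):
        x_new = group_soft_threshold(y - A.T @ (A @ y - b) / L, groups, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # extrapolation step
        x, t = x_new, t_new
    return x

# Example: 3 groups of 2 coefficients each; the middle group is inactive.
rng = np.random.default_rng(0)
A, x_true = rng.normal(size=(30, 6)), np.array([1.0, -2.0, 0, 0, 0.5, 1.5])
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
print(fista_group_lasso(A, A @ x_true, groups, lam=0.5).round(2))
```

Swapping the dense products `A @ y` for the grouped Fourier transform would yield the fast high-dimensional variant described above.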

We propose and analyze a pressure-stabilized projection Lagrange--Galerkin scheme for the transient Oseen problem. The proposed scheme inherits two advantages from the projection Lagrange--Galerkin scheme. The first is computational efficiency: the scheme decouples the computation of each component of the velocity and the pressure. The second is essentially unconditional stability. In addition, we use equal-order approximation for the velocity and pressure and add a symmetric pressure stabilization term. The resulting enriched pressure space enables us to obtain accurate solutions for small viscosity. We first show an error estimate for the velocity for small viscosity, and then show convergence results for the pressure. Numerical examples of a test problem confirm the higher accuracy of the proposed scheme for small viscosity.
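For readers unfamiliar with equal-order stabilization, the following sketch shows a typical symmetric pressure stabilization term of Brezzi--Pitkäranta type; the form and notation ($\delta_0$, $h_K$, $\mathcal{T}_h$) are illustrative, not necessarily the paper's exact term:

```latex
% Symmetric pressure stabilization (Brezzi--Pitkaranta type), added to the
% discrete problem to permit equal-order velocity/pressure pairs that would
% otherwise violate the discrete inf-sup (LBB) condition:
\begin{equation*}
  S_h(p_h, q_h) \;=\; \delta_0 \sum_{K \in \mathcal{T}_h}
  h_K^{2} \, (\nabla p_h, \nabla q_h)_{K},
  \qquad \delta_0 > 0,
\end{equation*}
% where h_K is the diameter of element K; the term is symmetric and vanishes
% at optimal order as h -> 0, so it does not break the stability argument.
```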

We consider an elliptic linear-quadratic parameter estimation problem with a finite number of parameters. An adaptive finite element method driven by an a posteriori error estimator for the error in the parameters is presented. Unlike prior results in the literature, our estimator, which is composed of standard energy-norm residual estimators for the state equation and suitable co-state problems, reflects the faster convergence of the parameter error compared to the (co-)state variables. We show optimal convergence rates of our method; in particular, and unlike prior works, we prove that the estimator decreases with a rate that is the sum of the best approximation rates of the state and co-state variables. Experiments confirm that our method matches the convergence rate of the parameter error.
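Schematically, and with hypothetical notation since the abstract fixes none, the rate claim can be read as follows:

```latex
% Let \eta_y be the residual estimator for the state y and \eta_z that for
% the co-states. A product-type bound of the form
\begin{equation*}
  \lvert q - q_h \rvert \;\lesssim\; \eta_y \, \eta_z
\end{equation*}
% explains the claimed rate: if \eta_y = O(N^{-s}) and \eta_z = O(N^{-t})
% in the number of degrees of freedom N, the parameter error decays like
% O(N^{-(s+t)}), i.e., with the sum of the two best approximation rates.
```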

Recently there has been significant theoretical progress on understanding the convergence and generalization of gradient-based methods on nonconvex losses with overparameterized models. Nevertheless, many aspects of optimization and generalization, and in particular the critical role of small random initialization, are not fully understood. In this paper, we take a step towards demystifying this role by proving that small random initialization followed by a few iterations of gradient descent behaves akin to popular spectral methods. We also show that this implicit spectral bias from small random initialization, which is provably more prominent for overparameterized models, puts the gradient descent iterations on a particular trajectory towards solutions that are not only globally optimal but also generalize well. Concretely, we focus on the problem of reconstructing a low-rank matrix from a few measurements via a natural nonconvex formulation. In this setting, we show that the trajectory of the gradient descent iterations from small random initialization can be approximately decomposed into three phases: (I) a spectral or alignment phase, where we show that the iterates have an implicit spectral bias akin to spectral initialization, allowing us to show that at the end of this phase the column space of the iterates and the underlying low-rank matrix are sufficiently aligned; (II) a saddle avoidance/refinement phase, where we show that the trajectory of the gradient iterates moves away from certain degenerate saddle points; and (III) a local refinement phase, where we show that after avoiding the saddles the iterates converge quickly to the underlying low-rank matrix. Underlying our analysis are insights for the analysis of overparameterized nonconvex optimization schemes that may have implications for computational problems beyond low-rank reconstruction.
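The phase decomposition is easy to observe numerically. Below is a minimal Python sketch, assuming (unlike the paper, which uses a few random measurements) full observation of a symmetric low-rank matrix; the alignment printed during the spectral phase rises toward 1 before the reconstruction error starts to fall:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, k = 50, 2, 10                     # dimension, true rank, overparameterized width
U_star = rng.normal(size=(n, r))
M = U_star @ U_star.T                   # ground-truth low-rank matrix

def alignment(A, B):
    # Smallest cosine of the principal angles between col(A) and col(B).
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False).min()

alpha, eta = 1e-6, 1e-3                 # small random init scale, step size
X = alpha * rng.normal(size=(n, k))
for t in range(3001):
    X -= eta * (X @ X.T - M) @ X        # gradient of 0.25*||X X^T - M||_F^2
    if t % 500 == 0:
        err = np.linalg.norm(X @ X.T - M) / np.linalg.norm(M)
        print(f"step {t:4d}  alignment {alignment(U_star, X):.3f}  rel. error {err:.2e}")
```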

Branchwidth determines how graphs, and more generally, arbitrary connectivity functions (basically, symmetric and submodular set functions), can be decomposed into a tree-like structure by specific cuts. We develop a general framework for designing fixed-parameter tractable (FPT) 2-approximation algorithms for branchwidth of connectivity functions. The first ingredient of our framework is combinatorial: we prove a structural theorem establishing that either a sequence of particular refinement operations can decrease the width of a branch decomposition, or the width of the decomposition is already within a factor of 2 of the optimum. The second ingredient is an efficient implementation of the refinement operations for branch decompositions that support efficient dynamic programming. We present two concrete applications of our general framework.
$\bullet$ An algorithm that, for a given $n$-vertex graph $G$ and integer $k$, in time $2^{2^{O(k)}} n^2$ either constructs a rank decomposition of $G$ of width at most $2k$ or concludes that the rankwidth of $G$ is more than $k$. It also yields a $(2^{2k+1}-1)$-approximation algorithm for cliquewidth within the same time complexity, which, in turn, improves the running times of various algorithms on graphs of cliquewidth $k$ to $f(k)n^2$. Breaking the "cubic barrier" for rankwidth and cliquewidth was an open problem in the area.
$\bullet$ An algorithm that, for a given $n$-vertex graph $G$ and integer $k$, in time $2^{O(k)} n$ either constructs a branch decomposition of $G$ of width at most $2k$ or concludes that the branchwidth of $G$ is more than $k$. This improves over the 3-approximation that follows from the recent treewidth 2-approximation of Korhonen [FOCS 2021].

We investigate identifying the boundary of a domain from sample points in the domain. We introduce new estimators for the normal vector to the boundary, the distance of a point to the boundary, and a test for whether a point lies within a boundary strip. The estimators can be computed efficiently and are more accurate than those present in the literature, and we provide rigorous error estimates for them. Furthermore, we use the detected boundary points to solve boundary-value problems for PDEs on point clouds, proving error estimates for the Laplace and eikonal equations. Finally, we provide a range of numerical experiments illustrating the performance of our boundary estimators, applications to PDEs on point clouds, and tests on image data sets.
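A simple first-order estimator in this spirit is easy to sketch (hypothetical names; the paper's estimators are more refined and come with error estimates). The idea: interior points have nearly symmetric neighborhoods, so the offset between a point and its local neighborhood mean is small in the interior and large, and roughly normal to the boundary, near it.

```python
import numpy as np
from scipy.spatial import cKDTree

def boundary_estimators(X, radius):
    # First-order boundary detection on a point cloud X (m x d): estimate
    # the outward normal at each point by the offset between the point and
    # its neighborhood mean, and score points by the offset magnitude.
    tree = cKDTree(X)
    neighbors = tree.query_ball_point(X, r=radius)
    normals = np.zeros_like(X)
    score = np.zeros(len(X))
    for i, idx in enumerate(neighbors):
        offset = X[i] - X[idx].mean(axis=0)   # points away from the dense side
        nrm = np.linalg.norm(offset)
        if nrm > 0:
            normals[i] = offset / nrm
        score[i] = nrm / radius               # large near the boundary
    return normals, score

# Example: uniform sample of the unit disk; points near the circle score high.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(4000, 2))
pts = pts[np.linalg.norm(pts, axis=1) < 1]
nv, s = boundary_estimators(pts, radius=0.1)
print("mean score, interior vs near-boundary:",
      s[np.linalg.norm(pts, axis=1) < 0.8].mean(),
      s[np.linalg.norm(pts, axis=1) > 0.95].mean())
```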

Online allocation problems with resource constraints are central problems in revenue management and online advertising. In these problems, requests arrive sequentially during a finite horizon, and for each request a decision maker needs to choose an action that consumes a certain amount of resources and generates a reward. The objective is to maximize cumulative rewards subject to a constraint on the total consumption of resources. In this paper, we consider a data-driven setting in which the reward and resource consumption of each request are generated using an input model that is unknown to the decision maker. We design a general class of algorithms that attain good performance across various input models without knowing which type of input they are facing. In particular, our algorithms are asymptotically optimal under independent and identically distributed inputs as well as various non-stationary stochastic input models, and they attain an asymptotically optimal fixed competitive ratio when the input is adversarial. Our algorithms operate in the Lagrangian dual space: they maintain a dual multiplier for each resource that is updated using online mirror descent. By choosing the reference function accordingly, we recover the dual subgradient descent and the dual multiplicative weights update algorithms. The resulting algorithms are simple, fast, and do not require convexity in the revenue function, consumption function, or action space, in contrast to existing methods for online allocation problems. We discuss applications to network revenue management, online bidding in repeated auctions with budget constraints, online proportional matching with high entropy, and personalized assortment optimization with limited inventory.
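A minimal Python sketch of the dual update, assuming the squared-norm reference function so that mirror descent reduces to projected dual subgradient descent (all names hypothetical):

```python
import numpy as np

def dual_subgradient_allocation(requests, rho, eta, T):
    # Online allocation via dual (sub)gradient descent: requests is a list of
    # (rewards, costs) pairs, where rewards[a] is the reward of action a and
    # costs[a] is its resource-consumption vector; rho is the per-period
    # resource budget and T the horizon length.
    m = len(rho)
    mu = np.zeros(m)                      # dual multipliers, one per resource
    B = np.asarray(rho, float) * T        # remaining total resource budgets
    total = 0.0
    for rewards, costs in requests:
        # Primal step: best action against Lagrangian-adjusted rewards.
        adj = [r - mu @ c for r, c in zip(rewards, costs)]
        a = int(np.argmax(adj))
        if np.all(costs[a] <= B):         # take the action only if affordable
            total += rewards[a]
            B = B - costs[a]
        # Dual step: descend along the per-period dual subgradient,
        # then project back onto the nonnegative orthant.
        g = rho - costs[a]
        mu = np.maximum(mu - eta * g, 0.0)
    return total
```

Replacing the Euclidean projection step with a multiplicative update (negative-entropy reference function) would recover the dual multiplicative weights variant.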

We propose a globally convergent numerical method, called the convexification, to numerically compute the viscosity solution to first-order Hamilton-Jacobi equations through the vanishing viscosity process, where the viscosity parameter is a fixed small number. By convexification, we mean that we employ a suitable Carleman weight function to convexify the cost functional defined directly from the form of the Hamilton-Jacobi equation under consideration. The strict convexity of this functional is rigorously proved using a new Carleman estimate. We also prove that the unique minimizer of this strictly convex functional can be reached by the gradient descent method. Moreover, we show that the minimizer well approximates the viscosity solution of the Hamilton-Jacobi equation as the noise contained in the boundary data tends to zero. Some interesting numerical illustrations are presented.
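The generic shape of such a weighted functional can be sketched as follows (illustrative notation only; the paper's exact functional, weight, and regularization term may differ):

```latex
% Write the viscous Hamilton--Jacobi equation as
%   F(u) = u_t + H(x, \nabla u) - \epsilon \Delta u = 0
% with a fixed small viscosity \epsilon > 0, and pick a Carleman weight
% e^{2\lambda\varphi(x,t)}. The convexified cost functional then reads
\begin{equation*}
  J_{\lambda,\beta}(u) \;=\; \int_{\Omega_T} e^{2\lambda \varphi(x,t)}
  \,\lvert F(u)(x,t) \rvert^{2} \, dx \, dt
  \;+\; \beta \,\lVert u \rVert_{H^{2}(\Omega_T)}^{2},
\end{equation*}
% minimized over functions matching the (noisy) boundary data. For
% sufficiently large \lambda, the Carleman estimate makes J strictly convex
% on a bounded set, so gradient descent reaches its unique minimizer.
```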
