
We approximate the d complex zeros of a univariate polynomial p(x) of degree d, or those zeros that lie in a fixed region of interest on the complex plane, such as a disc or a square. Our divide-and-conquer algorithm of STOC 1995 solves this problem in optimal Boolean time (up to a poly-logarithmic factor); that is, it runs nearly as fast as one can access the coefficients of p with the precision necessary to support the required accuracy of the output. That record complexity has not been matched by any other algorithm yet, but our root-finder of 1995 is quite involved and has never been implemented. We present alternative nearly optimal root-finders based on our novel variants of the classical subdivision iterations. Unlike our predecessor of 1995, we require randomization of Las Vegas type, which allows us to detect any output error at a dominated computational cost, but our new root-finders are much simpler to implement than their predecessor of 1995. According to the results of extensive tests with standard test polynomials of their preliminary version, which incorporates only a part of our novel techniques, the new root-finders compete with, and for a large class of inputs significantly supersede, the package of root-finding subroutines MPSolve, which has been the user's package of choice for decades. Unlike our predecessor of 1995 and all known fast algorithms for the cited tasks of polynomial root-finding, our new algorithms can also be applied to a polynomial given by a black-box oracle for its evaluation rather than by its coefficients. This makes our root-finders particularly efficient for polynomials p(x) that can be evaluated fast, such as the Mandelbrot polynomials or those given by the sum of a small number of shifted monomials. Our algorithms can readily be extended to fast approximation of the eigenvalues of a matrix or a matrix polynomial.
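To make the subdivision paradigm concrete (this is an illustrative toy, not the STOC 1995 algorithm or the new root-finders themselves), the sketch below counts zeros inside a disc via the argument principle and recursively discards zero-free squares; the test polynomial, the integration grid size m, and the stopping tolerance are all hypothetical choices.

```python
import numpy as np

def root_count(coeffs, center, radius, m=256):
    # number of zeros of p in |z - center| < radius by the argument
    # principle: (1 / 2*pi*i) * contour integral of p'(z)/p(z) dz
    p = np.poly1d(coeffs)
    dp = p.deriv()
    theta = 2 * np.pi * np.arange(m) / m
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta)          # dz/dtheta
    integral = np.sum(dp(z) / p(z) * dz) * (2 * np.pi / m)
    return int(round((integral / (2j * np.pi)).real))

def subdivide(coeffs, center, half_width, tol=1e-6, out=None):
    # keep a square only if the disc circumscribing it contains a zero;
    # otherwise discard it (a "soft" exclusion test, for illustration only)
    out = [] if out is None else out
    if root_count(coeffs, center, half_width * np.sqrt(2)) == 0:
        return out
    if half_width < tol:
        out.append(center)                          # tiny square ~ one zero
        return out
    h = half_width / 2
    for d in (h + 1j * h, h - 1j * h, -h + 1j * h, -h - 1j * h):
        subdivide(coeffs, center + d, h, tol, out)
    return out

# three generic zeros inside the unit square; neighboring reported centers
# may approximate the same zero (dedupe by clustering in practice)
zeros = [0.31 + 0.42j, -0.53 + 0.11j, 0.17 - 0.64j]
print(subdivide(np.poly(zeros), 0j, 1.0))
```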

Related Content


Dynamic ranking is becoming crucial in many applications, especially with the collection of voluminous time-dependent data. One such application is sports statistics, where dynamic ranking aids in forecasting the performance of competitive teams, drawing on historical and current data. Despite its usefulness, predicting and inferring rankings pose challenges in environments that necessitate time-dependent modeling. This paper introduces a spectral ranker called Kernel Rank Centrality, designed to rank items based on pairwise comparisons over time. The ranker operates via kernel smoothing in the Bradley-Terry model, utilizing a Markov chain model. Unlike the maximum likelihood approach, the spectral ranker is nonparametric, demands fewer model assumptions and less computation, and allows for real-time ranking. We establish the asymptotic distribution of the ranker by applying an innovative group inverse technique, resulting in a uniform and precise entrywise expansion. This result allows us to devise a new inferential method for predictive inference, previously unavailable in existing approaches. Our numerical examples showcase the ranker's utility in predictive accuracy and in constructing an uncertainty measure for prediction, leveraging data from the National Basketball Association (NBA). The results underscore our method's potential compared to the gold standard in sports, the Arpad Elo rating system.
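A minimal sketch of the spectral idea: kernel-weighted win counts feed a Markov chain whose stationary distribution ranks the items. This is an illustrative variant of Rank Centrality, not the paper's exact estimator; the Gaussian kernel, the normalization, and the toy data are assumptions.

```python
import numpy as np

def kernel_rank_centrality(comparisons, t_eval, h, n):
    # comparisons: (winner, loser, time) triples over items 0..n-1
    W = np.zeros((n, n))                      # W[i, j]: weighted wins of j over i
    for winner, loser, t in comparisons:
        W[loser, winner] += np.exp(-0.5 * ((t - t_eval) / h) ** 2)
    # random walk that drifts toward items beating the current item
    d = max((W + W.T).sum(axis=1).max(), 1e-12)
    P = W / d
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))  # self-loops keep rows stochastic
    vals, vecs = np.linalg.eig(P.T)           # stationary distribution of the chain
    pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return pi / pi.sum()                      # higher score = stronger item

# item 2 wins its recent games; old games are down-weighted by the kernel
games = [(2, 0, 9.0), (2, 1, 9.5), (1, 0, 8.0), (2, 0, 10.0), (0, 1, 1.0)]
print(kernel_rank_centrality(games, t_eval=10.0, h=2.0, n=3))
```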

Matrix factorization is an inference problem that has acquired importance due to its vast range of applications, from dictionary learning to recommendation systems and machine learning with deep networks. The study of its fundamental statistical limits represents a true challenge, and despite a decade-long history of efforts in the community, there is still no closed formula able to describe its optimal performance in the case where the rank of the matrix scales linearly with its size. In the present paper, we study this extensive-rank problem, extending the alternative 'decimation' procedure that we recently introduced, and carry out a thorough study of its performance. Decimation aims at recovering one column/line of the factors at a time, by mapping the problem into a sequence of neural network models of associative memory at a tunable temperature. Though sub-optimal, decimation has the advantage of being theoretically analyzable. We extend its scope and analysis to two families of matrices. For a large class of compactly supported priors, we show that the replica symmetric free entropy of the neural network models takes a universal form in the low temperature limit. For a sparse Ising prior, we show that the storage capacity of the neural network models diverges as sparsity in the patterns increases, and we introduce a simple algorithm based on a ground state search that implements decimation and performs matrix factorization, with no need for an informative initialization.
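A toy rendition of the decimation idea for an Ising-type prior (sizes, the greedy search, and the deflation step below are illustrative assumptions, not the paper's exact procedure): treat the data matrix as Hopfield-style couplings, find a ground state by greedy spin flips, peel the recovered pattern off, and repeat.

```python
import numpy as np

rng = np.random.default_rng(0)
n, P = 100, 3
X = rng.choice([-1, 1], size=(P, n))       # hidden Ising factors (patterns)
Y = X.T @ X / n                            # matrix to factorize: sum_k x_k x_k^T / n

def ground_state(J, restarts=10, max_flips=1000):
    # greedy single-spin descent on H(s) = -s^T J s; best of several restarts
    J = J - np.diag(np.diag(J))            # diagonal only shifts H by a constant
    best, best_e = None, np.inf
    for _ in range(restarts):
        s = rng.choice([-1, 1], size=J.shape[0])
        for _ in range(max_flips):
            unstable = np.flatnonzero(s * (J @ s) < 0)   # flips that lower H
            if unstable.size == 0:
                break                      # local minimum reached
            s[rng.choice(unstable)] *= -1
        e = -s @ J @ s
        if e < best_e:
            best, best_e = s.copy(), e
    return best

recovered = []
for _ in range(P):                         # decimation: one factor at a time
    s = ground_state(Y)
    recovered.append(s)
    Y = Y - np.outer(s, s) / n             # deflate the recovered pattern
# overlaps with the hidden factors (up to sign); ideally one per row near 1
print(np.round(np.abs(np.array(recovered) @ X.T) / n, 2))
```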

Robust iterative methods for solving large sparse systems of linear algebraic equations often suffer from the difficulty of optimizing the corresponding tuning parameters. To improve performance on the problem of interest, problem-specific parameter tuning is required, which in practice can be a time-consuming and tedious task. This paper proposes an optimization algorithm for tuning the numerical method parameters. The algorithm combines an evolution strategy with a pre-trained neural network used to filter the individuals when constructing the new generation. The proposed coupling of the two optimization approaches integrates the adaptivity of the evolution strategy with the a priori knowledge captured by the neural network. Using the neural network as a preliminary filter significantly weakens the prediction-accuracy requirements and allows the pre-trained network to be reused across a wide range of linear systems. A detailed evaluation of the algorithm's efficiency is performed for a set of model linear systems, including ones from the SuiteSparse Matrix Collection and systems from turbulent flow simulations. The obtained results show that the pre-trained neural network can be effectively reused to optimize parameters for various linear systems, and a significant speedup in the calculations can be achieved at the cost of about 100 trial solves. The hybrid evolution strategy decreases the calculation time by more than a factor of 6 for the black-box matrices from the SuiteSparse Matrix Collection and by a factor of 1.4-2 for the sequence of linear systems arising when modeling turbulent flows. This results in a speedup of up to 1.8 times for the turbulent flow simulations performed in the paper.
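A schematic of the coupling (the objective and the surrogate below are hypothetical stand-ins; the paper's network is trained on solver data, which we replace with a cheap noisy predictor): the network pre-filters the offspring so that only a shortlist pays for a real trial solve.

```python
import numpy as np

rng = np.random.default_rng(1)

def trial_solve(params):
    # stand-in for an expensive iterative solve; returns "runtime"
    # (hypothetical objective: a smooth bowl with a known optimum)
    return float(np.sum((params - np.array([0.7, 0.2])) ** 2))

def surrogate(params):
    # stand-in for the pre-trained network: cheap, rough runtime predictor;
    # only its ranking of candidates needs to be roughly right
    return trial_solve(params) + 0.1 * rng.normal()

def es_with_filter(mu=4, lam=24, keep=8, gens=15, dim=2):
    # (mu, lambda)-ES where the network filters offspring, so only
    # `keep` of `lam` candidates are evaluated with a real trial solve
    pop = rng.uniform(0, 1, size=(mu, dim))
    sigma = 0.3
    for _ in range(gens):
        parents = pop[rng.integers(0, mu, size=lam)]
        offspring = parents + sigma * rng.normal(size=(lam, dim))
        scores = np.array([surrogate(c) for c in offspring])   # cheap pass
        shortlist = offspring[np.argsort(scores)[:keep]]
        fitness = np.array([trial_solve(c) for c in shortlist])  # costly pass
        pop = shortlist[np.argsort(fitness)[:mu]]
        sigma *= 0.9
    return pop[0]

print(es_with_filter())  # should approach the optimum [0.7, 0.2]
```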

The Densest Subgraph Problem (DSP) is an important primitive with a wide range of applications, including fraud detection, community detection, and DNA motif discovery. Edge density is one of the most common objectives in DSP. Although a maximum-flow algorithm can solve it exactly in polynomial time, the increasing amount of data and the high complexity of exact algorithms motivate the search for approximation algorithms. Among these, the linear-programming dual of the problem gives rise to several iterative algorithms, including Greedy++, Frank-Wolfe, and FISTA, which redistribute edge weights to find the densest subgraph; however, these iterative algorithms oscillate around the optimal solution, which is unsatisfactory for fast convergence. We propose our main algorithm, Locally Optimal Weight Distribution (LOWD), which distributes the remaining edge weights in a locally optimal operation so as to converge to the optimal solution monotonically. Theoretically, we show that it reaches the optimal state of a specific linear program whose solution is called the locally-dense decomposition. Besides, we show that most of the edges in the original graph need not be considered. We therefore develop a pruning algorithm that uses a modified counting sort to prune graphs by removing unnecessary edges and nodes, after which the densest subgraph can be searched for in a much smaller graph.
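For reference, the classical baseline that these LP-based iterative methods refine is Charikar's greedy peeling, which guarantees density at least half of optimal: repeatedly delete a minimum-degree vertex and keep the densest intermediate subgraph. The sketch below is that baseline, not LOWD; the toy graph is an assumption.

```python
import heapq
from collections import defaultdict

def peel_densest(edges):
    # Charikar's peeling 1/2-approximation for edge density |E| / |V|
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n, m = len(adj), len(edges)
    heap = [(len(nbrs), u) for u, nbrs in adj.items()]
    heapq.heapify(heap)
    removed, best_density, best_size = set(), m / n, n
    order = []
    while heap:
        d, u = heapq.heappop(heap)
        if u in removed or d != len(adj[u]):
            continue                       # stale heap entry
        removed.add(u)
        order.append(u)
        m -= len(adj[u])                   # edges incident to u disappear
        n -= 1
        for v in adj[u]:
            adj[v].discard(u)
            heapq.heappush(heap, (len(adj[v]), v))
        adj[u].clear()
        if n > 0 and m / n > best_density:
            best_density, best_size = m / n, n
    kept = set(order[len(order) - best_size:])  # vertices surviving longest
    return kept, best_density

# a 4-clique plus a pendant path: the clique (density 6/4 = 1.5) should win
E = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
print(peel_densest(E))
```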

This note addresses the question of optimally estimating a linear functional of an object acquired through linear observations corrupted by random noise, where optimality pertains to a worst-case setting tied to a symmetric, convex, and closed model set containing the object. It complements the article "Statistical Estimation and Optimal Recovery" published in the Annals of Statistics in 1994. There, Donoho showed (among other things) that, for Gaussian noise, linear maps provide near-optimal estimation schemes relative to a performance measure relevant in Statistical Estimation. Here, we advocate for a different performance measure, arguably more relevant in Optimal Recovery. We show that, relative to this new measure, linear maps still provide near-optimal estimation schemes even if the noise is merely log-concave. Our arguments, which make a connection to the deterministic noise situation and bypass properties specific to the Gaussian case, offer an alternative to parts of Donoho's proof.
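In symbols (our notation, since the abstract fixes none, and only one plausible rendering of a worst-case criterion; the precise measure advocated in the note may differ): one observes $y = \Gamma x + e$ with $x$ in a symmetric, convex, closed set $\mathcal{K}$, seeks $\ell(x)$, and judges an estimation map $\Delta$ by

$$
\operatorname{err}(\Delta) \;=\; \sup_{x \in \mathcal{K}} \, \mathbb{E}\,\bigl|\Delta(\Gamma x + e) - \ell(x)\bigr|,
$$

the claim being that the infimum of $\operatorname{err}$ over linear maps $\Delta$ comes within an absolute constant factor of the infimum over all maps, even for merely log-concave noise $e$.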

The Fast Reciprocal Square Root Algorithm is a well-established approximation technique consisting of two stages: first, a coarse approximation is obtained by manipulating the bit pattern of the floating-point argument using integer instructions, and second, the coarse result is refined through one or more steps, traditionally using Newton iteration but alternatively using improved expressions with carefully chosen numerical constants found by other authors. The algorithm was widely used before microprocessors carried built-in hardware support for computing reciprocal square roots. At the time of writing, however, there is in general no hardware acceleration for computing other fixed fractional powers. This paper generalises the algorithm to cater for all rational powers and to support any polynomial degree(s) in the refinement step(s), and, under the assumption of unlimited floating-point precision, provides a procedure which automatically constructs provably optimal constants in all of these cases. It is also shown that, under certain assumptions, the use of monic refinement polynomials yields results which are much better placed with respect to the cost/accuracy tradeoff than those obtained using general polynomials. Further extensions are also analysed, and several new best approximations are given.
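For concreteness, here is the classical two-stage scheme for the power -1/2 (the well-known float32 "magic constant" variant); the paper's generalisation replaces the shift and the constant with values tailored to an arbitrary rational power and refines with optimal polynomials rather than a plain Newton step.

```python
import math
import struct

def fast_rsqrt(x, newton_steps=1):
    # stage 1: reinterpret the float32 bits as an integer and manipulate
    # them to get a coarse first approximation of x**(-1/2)
    i = struct.unpack('I', struct.pack('f', x))[0]
    i = 0x5F3759DF - (i >> 1)              # magic constant for the power -1/2
    y = struct.unpack('f', struct.pack('I', i))[0]
    # stage 2: refine with Newton's method for f(y) = y**-2 - x
    for _ in range(newton_steps):
        y = y * (1.5 - 0.5 * x * y * y)
    return y

print(fast_rsqrt(2.0), 1 / math.sqrt(2.0))   # one step: ~0.70693 vs 0.70711
```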

In this paper we give the first efficient algorithms for the $k$-center problem on dynamic graphs undergoing edge updates. In this problem, the goal is to partition the input into $k$ sets by choosing $k$ centers such that the maximum distance from any data point to the closest center is minimized. It is known to be NP-hard to obtain a better-than-$2$ approximation for this problem. While in many applications the input may naturally be modeled as a graph, all prior works on the $k$-center problem in dynamic settings operate on metric inputs. In this paper, we give a deterministic decremental $(2+\epsilon)$-approximation algorithm and a randomized incremental $(4+\epsilon)$-approximation algorithm, both with amortized update time $kn^{o(1)}$ for weighted graphs. Moreover, we show a reduction that leads to a fully dynamic $(2+\epsilon)$-approximation algorithm for the $k$-center problem, with worst-case update time that is within a factor $k$ of the state-of-the-art upper bound for maintaining $(1+\epsilon)$-approximate single-source distances in graphs. Matching this bound is a natural goalpost because the approximate distances of each vertex to its center can be used to maintain a $(2+\epsilon)$-approximation of the graph diameter, and the fastest known algorithms for such a diameter approximation also rely on maintaining approximate single-source distances.
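For context, the static baseline against which the $(2+\epsilon)$ bounds are measured is Gonzalez's farthest-point traversal, a simple $2$-approximation, sketched below for points in the plane (the paper's dynamic graph data structures are far more involved; the point cloud is an assumption).

```python
import numpy as np

def gonzalez_k_center(points, k, seed=0):
    # farthest-point traversal: each new center is the point currently
    # farthest from the chosen centers; a classical 2-approximation
    rng = np.random.default_rng(seed)
    first = int(rng.integers(len(points)))
    centers = [first]
    dist = np.linalg.norm(points - points[first], axis=1)
    for _ in range(k - 1):
        far = int(np.argmax(dist))                 # farthest remaining point
        centers.append(far)
        dist = np.minimum(dist, np.linalg.norm(points - points[far], axis=1))
    return points[centers], float(dist.max())     # centers, k-center radius

pts = np.random.default_rng(1).normal(size=(200, 2))
centers, radius = gonzalez_k_center(pts, k=3)
print(radius)
```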

Modelling in biology must adapt to increasingly complex and massive data, so the efficiency of the inference algorithms used to estimate model parameters is called into question. Many of these are based on stochastic optimization processes that waste a significant part of the computation time due to their rejection sampling approaches. We introduce the Fixed Landscape Inference MethOd (flimo), a new likelihood-free inference method for continuous state-space stochastic models. It applies deterministic gradient-based optimization algorithms to obtain a point estimate of the parameters, minimizing the difference between the data and some simulations according to some prescribed summary statistics. In this sense, it is analogous to Approximate Bayesian Computation (ABC). Like ABC, it can also provide an approximation of the distribution of the parameters. Three applications are proposed: a standard theoretical example, namely the inference of the parameters of g-and-k distributions; a population genetics problem that is not as simple as it seems, namely the inference of a selective value from time series in a Wright-Fisher model; and simulations from a Ricker model, representing chaotic population dynamics. In the first two applications, the results show a drastic reduction in the computation time needed for the inference phase compared to the other methods, with equivalent accuracy. Even when likelihood-based methods are applicable, the simplicity and efficiency of flimo make it a compelling alternative. Implementations in Julia and in R are available at //metabarcoding.org/flimo. To run flimo, the user must simply be able to simulate data according to the chosen model.
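The heart of the fixed-landscape idea in a toy location-scale example (the g-and-k and Wright-Fisher applications are more involved; the model and statistics below are illustrative assumptions): draw the randomness once, so simulation becomes a deterministic function of the parameters, and the mismatch in summary statistics can be minimized with a standard deterministic optimizer.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.5, size=500)        # observed data
data_stats = np.array([data.mean(), data.std()])       # summary statistics

u = rng.standard_normal(500)   # the key trick: fix the randomness ONCE

def objective(theta):
    mu, sigma = theta
    sim = mu + sigma * u       # deterministic simulation on a fixed landscape
    sim_stats = np.array([sim.mean(), sim.std()])
    return float(np.sum((sim_stats - data_stats) ** 2))

res = minimize(objective, x0=[0.0, 1.0], method='L-BFGS-B')
print(res.x)                   # should approach (3.0, 1.5)
```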

In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments of numerical matrix computation, which favored the (block) LU decomposition: the factorization of a matrix into the product of lower and upper triangular matrices. Matrix decomposition has since become a core technology in machine learning, largely due to the development of the back-propagation algorithm for fitting neural networks. The sole aim of this survey is to give a self-contained introduction to the concepts and mathematical tools of numerical linear algebra and matrix analysis, in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. We clearly cannot cover all the useful and interesting results concerning matrix decomposition within the limited scope of this discussion, e.g., a separate analysis of Euclidean space, Hermitian space, Hilbert space, and the complex domain. We refer the reader to the literature in the field of linear algebra for a more detailed introduction to these related fields.
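As a pointer to the survey's starting point, an LU factorization with partial pivoting takes a few lines of SciPy (our example, not the survey's code):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0, 2.0],
              [6.0, 3.0, 1.0],
              [2.0, 1.0, 3.0]])
P, L, U = lu(A)                       # partial pivoting: A = P @ L @ U
print(np.allclose(A, P @ L @ U))      # True: lower times upper recovers A
print(np.allclose(L, np.tril(L)), np.allclose(U, np.triu(U)))  # True True
```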

Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, robust principal component analysis, phase synchronization, and joint alignment. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings.
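A compact instance of the two-stage recipe, here on symmetric matrix completion (sizes, sampling rate, iteration count, and stepsize are illustrative assumptions): spectral initialization from the rescaled observations, then vanilla gradient descent on the factorized objective.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 60, 2, 0.5
U_star = rng.normal(size=(n, r))
M = U_star @ U_star.T                      # ground-truth rank-r PSD matrix
mask = rng.random((n, n)) < p
mask = np.triu(mask) | np.triu(mask).T     # symmetric observation pattern

# stage 1: spectral initialization from the rescaled observed matrix
vals, vecs = np.linalg.eigh(np.where(mask, M, 0.0) / p)
X = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0.0))

# stage 2: gradient descent on f(X) = ||P_Omega(X X^T - M)||_F^2 / (4 p)
eta = 0.2 / np.linalg.norm(X, 2) ** 2      # conservative constant stepsize
for _ in range(500):
    G = np.where(mask, X @ X.T - M, 0.0) / p
    X -= eta * (G @ X)
rel_err = np.linalg.norm(X @ X.T - M) / np.linalg.norm(M)
print(rel_err)                             # small: unseen entries recovered too
```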
