
It is shown how to compute quotients efficiently in non-commutative univariate polynomial rings. This expands on earlier work where generic efficient quotients were introduced with a primary focus on commutative domains. In this article, fast algorithms are given for left and right quotients when the polynomial variable commutes with coefficients. These algorithms are based on the concept of the ``whole shifted inverse'', which is a specialized quotient where the dividend is a power of the polynomial variable. When the variable does not commute with coefficients, that is for skew polynomials, left and right whole shifted inverses are defined and the left whole shifted inverse may be used to compute the right quotient. For skew polynomials, the computation of whole shifted inverses is not asymptotically fast, but once obtained, quotients may be computed with one multiplication. Examples are shown of polynomials with matrix coefficients and differential operators and a proof-of-concept Maple implementation is given.
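As a concrete illustration of the commutative case, the following Python sketch (my own toy code, not the paper's Maple implementation) computes the whole shifted inverse $\mathrm{winv}_h(p) = \mathrm{quo}(x^h, p)$ by naive division rather than by the fast algorithm, and then obtains a quotient with one multiplication plus a coefficient shift:

```python
from fractions import Fraction

def polymul(a, b):
    # product of coefficient lists (lowest degree first)
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyquo(a, b):
    # naive quotient of a by b over the rationals
    a, d = list(a), len(b) - 1
    q = [Fraction(0)] * max(len(a) - d, 1)
    for i in range(len(a) - 1, d - 1, -1):
        c = a[i] / b[-1]
        q[i - d] = c
        for j in range(d + 1):
            a[i - d + j] -= c * b[j]
    return q

def whole_shifted_inverse(p, h):
    # winv_h(p) = quo(x^h, p); computed naively here, not by the fast method
    return polyquo([Fraction(0)] * h + [Fraction(1)], p)

def quo_via_winv(f, p):
    # once winv_h(p) is known, quo(f, p) = quo(f * winv_h(p), x^h) for deg f = h
    h = len(f) - 1
    return polymul(f, whole_shifted_inverse(p, h))[h:]
```

For instance, with $p = x^2 + 1$ and $f = x^5 + 2x^3 + x + 3$, both `polyquo(f, p)` and `quo_via_winv(f, p)` return the quotient $x^3 + x$.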

Related content

FAST: the USENIX Conference on File and Storage Technologies.

We present a self-stabilizing algorithm for the unison problem which achieves an efficient trade-off between time, workload and space in a weak model. Our algorithm is defined in the atomic-state model and works in a simplified version of the \emph{stone age} model in which networks are anonymous and local ports are unlabelled. It makes no assumption on the daemon and thus stabilizes under the weakest one: the distributed unfair daemon. Assuming a period $B \geq 2D+2$, our algorithm stabilizes in at most $2D-2$ rounds and $O(\min(n^2B, n^3))$ moves, while using $\lceil\log B\rceil +2$ bits per node, where $D$ is the network diameter and $n$ the number of nodes. In particular, and to the best of our knowledge, it is the first self-stabilizing unison for arbitrary anonymous networks achieving an asymptotically optimal stabilization time in rounds while using a bounded memory at each node. Finally, we show that our solution makes it possible to efficiently simulate synchronous self-stabilizing algorithms in an asynchronous environment. This yields new state-of-the-art algorithms solving both the leader election and the spanning tree construction problems in any identified connected network which, to the best of our knowledge, beat all known solutions in the literature.

Matrix factorization is an inference problem that has acquired importance due to its vast range of applications that go from dictionary learning to recommendation systems and machine learning with deep networks. The study of its fundamental statistical limits represents a true challenge, and despite a decade-long history of efforts in the community, there is still no closed formula able to describe its optimal performances in the case where the rank of the matrix scales linearly with its size. In the present paper, we study this extensive rank problem, extending the alternative 'decimation' procedure that we recently introduced, and carry out a thorough study of its performance. Decimation aims at recovering one column/line of the factors at a time, by mapping the problem into a sequence of neural network models of associative memory at a tunable temperature. Though sub-optimal, decimation has the advantage of being theoretically analyzable. We extend its scope and analysis to two families of matrices. For a large class of compactly supported priors, we show that the replica-symmetric free entropy of the neural network models takes a universal form in the low temperature limit. For a sparse Ising prior, we show that the storage capacity of the neural network models diverges as sparsity in the patterns increases, and we introduce a simple algorithm based on a ground state search that implements decimation and performs matrix factorization, with no need for an informative initialization.

In this paper, we propose a test procedure based on the LASSO methodology to test the global null hypothesis of no dependence between a response variable and $p$ predictors, where $n$ observations with $n < p$ are available. The proposed procedure is similar to the F-test for a linear model, which evaluates significance based on the ratio of explained to unexplained variance. However, the F-test is not suitable for models where $p \geq n$. This limitation is due to the fact that when $p \geq n$, the unexplained variance is zero and thus the F-statistic can no longer be calculated. In contrast, the proposed extension of the LASSO methodology overcomes this limitation by using the number of non-zero coefficients in the LASSO model as a test statistic after suitably specifying the regularization parameter. The method allows reliable analysis of high-dimensional datasets with as few as $n = 40$ observations. The performance of the method is tested by means of a power study.
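To make the idea concrete, here is a minimal numpy sketch (an illustration under my own assumptions, not the authors' implementation; in particular the paper's rule for suitably specifying the regularization parameter is not reproduced, and $\lambda$ is simply passed in):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    # cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ b
    for _ in range(n_iter):
        for j in range(p):
            if col_sq[j] == 0:
                continue
            r += X[:, j] * b[j]              # partial residual excluding j
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

def lasso_count_statistic(X, y, lam):
    # test statistic: number of non-zero LASSO coefficients at penalty lam
    return int(np.sum(np.abs(lasso_cd(X, y, lam)) > 1e-8))
```

For a well-chosen $\lambda$ the count stays small under the global null while a true signal inflates it; calibrating $\lambda$ and the resulting rejection threshold is exactly the delicate step the paper addresses.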

Robust iterative methods for solving large sparse systems of linear algebraic equations often suffer from the problem of optimizing the corresponding tuning parameters. To achieve good performance on the problem of interest, problem-specific parameter tuning is required, which in practice can be a time-consuming and tedious task. This paper proposes an optimization algorithm for tuning the numerical method parameters. The algorithm combines an evolution strategy with a pre-trained neural network used to filter the individuals when constructing the new generation. The proposed coupling of two optimization approaches makes it possible to integrate the adaptivity properties of the evolution strategy with the a priori knowledge captured by the neural network. Using the neural network as a preliminary filter significantly weakens the prediction accuracy requirements and allows the pre-trained network to be reused with a wide range of linear systems. A detailed evaluation of the algorithm's efficiency is performed for a set of model linear systems, including ones from the SuiteSparse Matrix Collection and systems arising in turbulent flow simulations. The obtained results show that the pre-trained neural network can be effectively reused to optimize parameters for various linear systems, and a significant speedup in the calculations can be achieved at the cost of about 100 trial solves. The hybrid evolution strategy decreases the calculation time by more than 6 times for the black-box matrices from the SuiteSparse Matrix Collection and by a factor of 1.4-2 for the sequence of linear systems arising when modeling turbulent flows. This results in a speedup of up to 1.8 times for the turbulent flow simulations performed in the paper.
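The filtering idea can be sketched in a few lines of numpy (a toy stand-in under my own assumptions: a simple greedy (1,$\lambda$)-style strategy, and a generic callable in place of the paper's pre-trained network):

```python
import numpy as np

def surrogate_filtered_es(f, surrogate, x0, sigma=0.3, lam=20, keep=5,
                          iters=50, seed=0):
    # Evolution strategy where the surrogate pre-filters offspring:
    # only the `keep` candidates ranked best by the surrogate are
    # evaluated with the (expensive) true objective f.
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, dtype=float), f(x0)
    for _ in range(iters):
        cand = x + sigma * rng.standard_normal((lam, x.size))
        idx = np.argsort([surrogate(c) for c in cand])[:keep]
        for c in cand[idx]:
            fc = f(c)
            if fc < fx:          # greedy elitist replacement
                x, fx = c, fc
    return x, fx
```

Because the surrogate is only used to rank candidates, a rough model suffices, which is what lets a single pre-trained network be reused across many linear systems.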

The stochastic partial differential equation (SPDE) approach is widely used for modeling large spatial datasets. It is based on representing a Gaussian random field $u$ on $\mathbb{R}^d$ as the solution of an elliptic SPDE $L^\beta u = \mathcal{W}$ where $L$ is a second-order differential operator, $2\beta \in \mathbb{N}$ is a positive parameter that controls the smoothness of $u$ and $\mathcal{W}$ is Gaussian white noise. A few approaches have been suggested in the literature to extend the approach to allow for any smoothness parameter satisfying $\beta>d/4$. Even though those approaches work well for simulating SPDEs with general smoothness, they are less suitable for Bayesian inference since they do not provide approximations which are Gaussian Markov random fields (GMRFs) as in the original SPDE approach. We address this issue by proposing a new method based on approximating the covariance operator $L^{-2\beta}$ of the Gaussian field $u$ by a finite element method combined with a rational approximation of the fractional power. This results in a numerically stable GMRF approximation which can be combined with the integrated nested Laplace approximation (INLA) method for fast Bayesian inference. A rigorous convergence analysis of the method is performed and the accuracy of the method is investigated with simulated data. Finally, we illustrate the approach and corresponding implementation in the R package rSPDE via an application to precipitation data which is analyzed by combining the rSPDE package with the R-INLA software for full Bayesian inference.

We consider parametrized linear-quadratic optimal control problems and provide their online-efficient solutions by combining greedy reduced basis methods and machine learning algorithms. To this end, we first extend the greedy control algorithm, which builds a reduced basis for the manifold of optimal final time adjoint states, to the setting where the objective functional consists of a penalty term measuring the deviation from a desired state and a term describing the control energy. Afterwards, we apply machine learning surrogates to accelerate the online evaluation of the reduced model. The error estimates proven for the greedy procedure are further transferred to the machine learning models and thus allow for efficient a posteriori error certification. We discuss the computational costs of all considered methods in detail and show by means of two numerical examples the tremendous potential of the proposed methodology.

In this manuscript we derive the optimal out-of-sample causal predictor for a linear system that has been observed in $k+1$ within-sample environments. In this model we consider $k$ shifted environments and one observational environment. Each environment corresponds to a linear structural equation model (SEM) with its own shift and noise vector, both in $L^2$. The strength of the shifts can be put in a certain order, and we may therefore speak of all shifts that are less than or equally as strong as a given shift. We consider the space $C^\gamma$ of all shifts that are less than or equally as strong as $\gamma$ times any weighted average of the observed shift vectors, with weights on the unit sphere. For each $\beta\in\mathbb{R}^p$ we show that the supremum of the risk functions $R_{\tilde{A}}(\beta)$ over $\tilde{A}\in C^\gamma$ has a worst-risk decomposition into a (positive) linear combination of risk functions, depending on $\gamma$. We then define the causal regularizer, $\beta_\gamma$, as the argument $\beta$ that minimizes this worst risk. The main result of the paper is that this regularizer can be consistently estimated with a plug-in estimator outside a set of zero Lebesgue measure in the parameter space. A practical obstacle for such estimation is that it involves the solution of a general-degree polynomial equation, which cannot be done explicitly. Therefore we also prove that an approximate plug-in estimator using the bisection method is consistent. An interesting by-product of the proof of the main result is that the plug-in estimation of the argmin of the maxima of a finite set of quadratic risk functions is consistent outside a set of zero Lebesgue measure in the parameter space.
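The numeric ingredient of the approximate estimator is ordinary interval-halving root finding; the actual polynomial arising from the worst-risk decomposition is not reconstructed here, so the sketch below (my own toy, not the authors' code) only shows that generic step:

```python
def bisect_root(f, lo, hi, tol=1e-12, max_iter=200):
    # standard bisection: assumes f(lo) and f(hi) have opposite signs
    flo = f(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if fmid == 0.0 or hi - lo < tol:
            return mid
        if (flo < 0) == (fmid < 0):
            lo, flo = mid, fmid   # root lies in the upper half
        else:
            hi = mid              # root lies in the lower half
    return 0.5 * (lo + hi)
```

Bisection halves the bracketing interval at every step, so the approximation error decays geometrically, which is what makes the approximate plug-in estimator inherit consistency.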

The Weighted Path Order (WPO) of Yamada is a powerful technique for proving termination. It is also supported by CeTA, a certifier for checking untrusted termination proofs. To be more precise, CeTA contains a verified function that computes, for two terms, whether one of them is larger than the other for a given WPO, i.e., where all parameters of the WPO have been fixed. The problem with this verified function is its exponential worst-case runtime. Therefore, in this work we develop a polynomial-time implementation of WPO based on memoization. It also improves upon an earlier verified implementation of the Recursive Path Order (RPO): the RPO implementation uses full terms as keys for the memory, a design which simplified the soundness proofs but incurs some runtime overhead. In this work, keys are just numbers, so that lookup in the memory is faster. Although trivial on paper, this change introduces some challenges for the verification task.
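The integer-key idea can be illustrated with a small Python sketch (a toy of my own, not CeTA's verified Isabelle code): terms are hash-consed so that each distinct subterm gets a small integer id, and a memoized function then keys on ids instead of on whole terms.

```python
from functools import lru_cache

_ids, _terms = {}, []

def mk(sym, *children):
    # hash-consing: structurally identical terms share one integer id
    key = (sym, children)
    if key not in _ids:
        _ids[key] = len(_terms)
        _terms.append(key)
    return _ids[key]

@lru_cache(maxsize=None)
def term_size(t):
    # memoized on the integer id, so shared subterms are visited once
    _sym, children = _terms[t]
    return 1 + sum(term_size(c) for c in children)
```

Building `t = mk('f', t, t)` forty times on top of a variable yields a term of size $2^{41}-1$, yet `term_size` finishes after 41 distinct calls; without memoization the same recursion would take exponentially many. The verified WPO comparator faces the same blow-up, which is why memoization on numeric keys makes it polynomial.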

The notion of tail adversarial stability has proven useful in obtaining limit theorems for tail-dependent time series. Its implications and advantages over the classical strong mixing framework have been examined for max-linear processes, but not yet studied for additive linear processes. In this article, we fill this gap by verifying the tail adversarial stability condition for regularly varying additive linear processes. We additionally consider extensions of the result to a stochastic volatility generalization and to a max-linear counterpart. We also address the invariance of tail adversarial stability under monotone transforms. Some implications for limit theorems in a statistical context are also discussed.

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
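As a concrete instance of the uniform affine scheme that underlies much of this literature, here is a minimal 4-bit quantizer in numpy (an illustrative sketch, not tied to any particular framework's API):

```python
import numpy as np

def affine_quantize(x, num_bits=4):
    # map the observed range [min(x), max(x)] onto the integer grid {0, ..., 2^b - 1}
    qmax = 2 ** num_bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    zero_point = int(round(-lo / scale))          # integer representing real 0
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # recover an approximation of x; error is about scale/2 at worst inside the range
    return scale * (q.astype(np.float32) - zero_point)
```

Storing the codes packed at 4 bits plus one scale and zero point per tensor is what yields the memory reductions discussed above; finer-grained (e.g., per-channel) scales trade a little metadata for accuracy.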
