
A Las Vegas randomized algorithm is given to compute the Smith multipliers for a nonsingular integer matrix $A$, that is, unimodular matrices $U$ and $V$ such that $AV=US$, where $S$ is the Smith normal form of $A$. The expected running time of the algorithm is about the same as that required to multiply together two matrices with the same dimension and size of entries as $A$. Explicit bounds are given for the size of the entries in both unimodular multipliers. The main tool used by the algorithm is the Smith massager, a relaxed version of $V$, the unimodular matrix specifying the column operations of the Smith computation. From the perspective of efficiency, the main tools used are fast linear system solving and partial linearization of integer matrices. As an application of the Smith-with-multipliers algorithm, a fast algorithm is given to compute the fractional part of the inverse of the input matrix.
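As a point of reference (not the paper's algorithm), the sketch below uses SymPy's `smith_normal_form` to obtain $S$ for a small matrix and checks two properties that any valid pair of unimodular multipliers in $AV=US$ must respect: the divisibility chain of the invariant factors and $|\det A| = |\det S|$. SymPy does not return $U$ and $V$; computing them fast, with explicit size bounds, is the paper's contribution.

```python
# A minimal sketch (not the paper's algorithm): compute the Smith normal form S
# of a small nonsingular integer matrix and verify properties that the relation
# A V = U S with unimodular U, V implies.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, -4, -16]])

S = smith_normal_form(A, domain=ZZ)
print(S)  # diagonal entries are the invariant factors of A (up to sign)

# Each diagonal entry divides the next.
d = [S[i, i] for i in range(S.rows)]
assert all(d[i + 1] % d[i] == 0 for i in range(len(d) - 1))

# Unimodular multipliers have determinant +-1, so |det A| = |det S|.
assert abs(A.det()) == abs(S.det())
```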

Related Content


In this paper, we propose a variationally consistent technique for decreasing the maximum eigenfrequencies of finite element formulations for structural dynamics. Our approach is based on adding to the mass matrix a symmetric positive-definite term that follows from the integral of the traction jump across element boundaries. The added term is weighted by a small factor, for which we derive a suitable, and simple, element-local parameter choice. For linear problems, we show that our mass-scaling method produces no adverse effects in terms of spatial accuracy and orders of convergence. We illustrate these properties in one, two and three spatial dimensions, for quadrilateral and triangular elements, and for basis functions of polynomial order up to four. To extend the method to non-linear problems, we introduce a linear approximation and show that a sizeable increase in the critical time-step size can be achieved while having only a minor (even beneficial) influence on the dynamic response.
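A toy illustration of the mass-scaling idea under our own simplifications: the added SPD term is taken to be $h^2 K$ instead of the paper's traction-jump integral, and the model is a clamped 1D bar with linear elements. It shows how $M \mapsto M + \alpha C$ lowers the maximum eigenfrequency of $K\phi = \omega^2 M\phi$, and hence enlarges the critical time step $\Delta t_{\mathrm{crit}} = 2/\omega_{\max}$ of explicit central differences, while barely moving the lowest mode.

```python
# Toy 1D illustration (not the paper's variationally consistent term).
import numpy as np
from scipy.linalg import eigh

def bar_matrices(n_el, length=1.0, rho=1.0, E=1.0):
    """Assemble consistent mass M and stiffness K for a clamped 1D bar."""
    h = length / n_el
    ke = E / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    me = rho * h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    n = n_el + 1
    K = np.zeros((n, n)); M = np.zeros((n, n))
    for e in range(n_el):
        K[e:e + 2, e:e + 2] += ke
        M[e:e + 2, e:e + 2] += me
    return K[1:, 1:], M[1:, 1:]   # clamp the first node

K, M = bar_matrices(50)
h, alpha = 1.0 / 50, 0.05
C = h**2 * K                      # stand-in SPD scaling term

w2_orig = eigh(K, M, eigvals_only=True)
w2_scaled = eigh(K, M + alpha * C, eigvals_only=True)

print("dt_crit original:", 2.0 / np.sqrt(w2_orig[-1]))
print("dt_crit scaled:  ", 2.0 / np.sqrt(w2_scaled[-1]))
print("relative change of lowest frequency:",
      abs(np.sqrt(w2_scaled[0]) - np.sqrt(w2_orig[0])) / np.sqrt(w2_orig[0]))
```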

We determine the exact AND-gate cost of checking if $a\leq x < b$, where $a$ and $b$ are constant integers. Perhaps surprisingly, we find that the cost of interval checking never exceeds that of a single comparison and, in some cases, it is even lower.
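The familiar software counterpart of this observation (not the paper's gate-level construction) is the unsigned-subtraction trick: for $n$-bit operands with $0 \leq a \leq b \leq 2^n$, the two-sided test $a \leq x < b$ reduces to the single comparison $(x-a) \bmod 2^n < b-a$.

```python
# Interval check with a single comparison via the unsigned-subtraction trick.
def in_interval(x: int, a: int, b: int, n: int) -> bool:
    """Check a <= x < b for n-bit unsigned x, using one comparison."""
    assert 0 <= a <= b <= 2**n and 0 <= x < 2**n
    return (x - a) % 2**n < (b - a)

# Exhaustive check against the naive two-comparison test for small n.
n = 6
for a in range(2**n):
    for b in range(a, 2**n + 1):
        for x in range(2**n):
            assert in_interval(x, a, b, n) == (a <= x < b)
```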

Modern wireless cellular networks use massive multiple-input multiple-output (MIMO) technology. This technology involves operating an antenna array at a base station that simultaneously serves multiple mobile devices, which also use multiple antennas on their side. For this, various precoding and detection techniques are used, allowing each user to receive the signal intended for it from the base station. An important class of linear precoders is Regularized Zero-Forcing (RZF). In this work, we propose Adaptive RZF (ARZF) with a special kind of regularization matrix that has different coefficients for each layer of multi-antenna users. These regularization coefficients are defined by explicit formulas based on singular value decompositions (SVDs) of the user channel matrices. We study the optimization problem solved by the proposed algorithm and its connection to other possible problem statements. We also compare the proposed algorithm with state-of-the-art linear precoding algorithms in simulations with the Quadriga channel model. The proposed approach provides a significant increase in quality at the same computation cost as the reference methods.
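For orientation, here is a plain RZF precoder with a single scalar regularizer; the paper's ARZF replaces this scalar with per-user coefficients derived from SVDs of the individual channel matrices, which is not reproduced here. The dimensions, regularizer value, and power normalization below are illustrative assumptions.

```python
# Baseline Regularized Zero-Forcing (RZF) precoder, W = H^H (H H^H + alpha I)^-1.
import numpy as np

def rzf_precoder(H: np.ndarray, alpha: float, total_power: float = 1.0) -> np.ndarray:
    """H: (n_rx_streams, n_tx_antennas) aggregate downlink channel.
    Returns W: (n_tx_antennas, n_rx_streams)."""
    n_rx, _ = H.shape
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(n_rx))
    # Scale W to meet the total transmit power constraint.
    W *= np.sqrt(total_power / np.trace(W @ W.conj().T).real)
    return W

rng = np.random.default_rng(0)
H = (rng.standard_normal((8, 64)) + 1j * rng.standard_normal((8, 64))) / np.sqrt(2)
W = rzf_precoder(H, alpha=8 / 10.0)     # e.g. alpha ~ n_users / SNR
print(np.round(np.abs(H @ W), 2))       # near-diagonal effective channel
```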

We present and investigate a new type of implicit fractional linear multistep method of order two for fractional initial value problems. The method is obtained from the second-order superconvergence of the Gr\"unwald-Letnikov approximation of the fractional derivative at a non-integer shift point. The proposed method has order-two consistency and coincides with the backward difference method of order two for classical initial value problems when the order of the derivative is one. The weight coefficients of the proposed method are obtained from the Gr\"unwald weights and are hence computationally cheaper than those of the fractional backward difference formula of order two. The stability properties are analyzed, and it is shown that the stability region of the method is larger than those of the fractional Adams-Moulton method of order two and the fractional trapezoidal method. Numerical results and illustrations are presented to justify the analytical theory.
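For context, the sketch below implements the standard Gr\"unwald weights $w_k = (-1)^k\binom{\alpha}{k}$ via the usual recurrence and the plain (unshifted, first-order) Gr\"unwald-Letnikov approximation; the paper's second-order method reuses these same weights at a non-integer shift point, which is not reproduced here.

```python
# Gruenwald-Letnikov weights and the plain first-order GL approximation.
import numpy as np
from math import gamma

def grunwald_weights(alpha: float, n: int) -> np.ndarray:
    """w_k = (-1)^k * binom(alpha, k) via w_0 = 1, w_k = w_{k-1} (1 - (alpha+1)/k)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(f, alpha, x, h):
    """Unshifted GL approximation: D^alpha f(x) ~ h^-alpha * sum_k w_k f(x - k h)."""
    n = int(round(x / h))
    w = grunwald_weights(alpha, n)
    return h ** (-alpha) * np.dot(w, f(x - h * np.arange(n + 1)))

alpha, x = 0.5, 1.0
exact = 2 * x ** (2 - alpha) / gamma(3 - alpha)        # D^0.5 of f(t) = t^2
for h in (0.1, 0.05, 0.025):
    print(h, abs(gl_derivative(lambda t: t ** 2, alpha, x, h) - exact))  # error ~ O(h)
```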

Let $P$ be a polyhedron, defined by a system $A x \leq b$, where $A \in Z^{m \times n}$, $rank(A) = n$, and $b \in Z^{m}$. In the Integer Feasibility Problem, we need to decide whether $P \cap Z^n = \emptyset$ or to find some $x \in P \cap Z^n$ in the opposite case. Currently, the state-of-the-art algorithm, due to \cite{DadushDis,DadushFDim} (see also \cite{Convic,ConvicComp,DConvic} for more general formulations), has the complexity bound $O(n)^n \cdot poly(\phi)$, where $\phi = size(A,b)$. It is a long-standing open problem to break the $O(n)^n$ dimension-dependence in the complexity of ILP algorithms. We show that if the matrix $A$ has a small $l_1$ or $l_\infty$ norm, or $A$ is sparse and has bounded elements, then the integer feasibility problem can be solved faster. More precisely, we give the following complexity bounds: \begin{gather*} \min\{\|A\|_{\infty}, \|A\|_1\}^{5 n} \cdot 2^n \cdot poly(\phi), \\ \bigl( \|A\|_{\max} \bigr)^{5 n} \cdot \min\{cs(A),rs(A)\}^{3 n} \cdot 2^n \cdot poly(\phi). \end{gather*} Here $\|A\|_{\max}$ denotes the maximal absolute value of the elements of $A$, and $cs(A)$ and $rs(A)$ denote the maximal number of nonzero elements in a column and a row of $A$, respectively. We present similar results for the integer linear counting and optimization problems. Additionally, we apply the last result to multipacking and multicover problems on graphs and hypergraphs, where we need to choose a minimal/maximal multiset of vertices to cover/pack the edges a prescribed number of times. For example, we show that the stable multiset and vertex multicover problems on simple graphs admit FPT algorithms with the complexity bound $2^{O(|V|)} \cdot poly(\phi)$, where $V$ is the vertex set of the given graph.
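For orientation only, here is a brute-force feasibility check that enumerates integer points in a user-supplied box; it is exponential in $n$ and serves merely to fix the problem statement, not to reflect the algorithms discussed above.

```python
# Naive integer feasibility check for tiny instances of A x <= b.
import itertools
import numpy as np

def integer_feasible(A, b, lo, hi):
    """Return some x in Z^n with A x <= b and lo <= x_i <= hi, or None."""
    n = A.shape[1]
    for x in itertools.product(range(lo, hi + 1), repeat=n):
        if np.all(A @ np.array(x) <= b):
            return np.array(x)
    return None

A = np.array([[ 2,  3],
              [-1,  0],
              [ 0, -1],
              [ 1, -2]])
b = np.array([12, 0, 0, 1])
print(integer_feasible(A, b, lo=0, hi=6))   # (0, 0) satisfies all four rows
```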

The rapid development of high-throughput technologies has enabled the generation of data from biological or disease processes that span multiple layers, like genomic, proteomic or metabolomic data, and further pertain to multiple sources, like disease subtypes or experimental conditions. In this work, we propose a general statistical framework based on Gaussian graphical models for horizontal (i.e. across conditions or subtypes) and vertical (i.e. across different layers containing data on molecular compartments) integration of information in such datasets. We start with decomposing the multi-layer problem into a series of two-layer problems. For each two-layer problem, we model the outcomes at a node in the lower layer as dependent on those of other nodes in that layer, as well as all nodes in the upper layer. We use a combination of neighborhood selection and group-penalized regression to obtain sparse estimates of all model parameters. Following this, we develop a debiasing technique and asymptotic distributions of inter-layer directed edge weights that utilize already computed neighborhood selection coefficients for nodes in the upper layer. Subsequently, we establish global and simultaneous testing procedures for these edge weights. Performance of the proposed methodology is evaluated on synthetic and real data.
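A bare-bones sketch of the neighborhood-selection step on simulated two-layer data, using a plain $\ell_1$ penalty from scikit-learn in place of the group penalty; the debiasing and testing machinery of the paper is not reproduced, and all dimensions, sparsity levels, and penalty values below are illustrative assumptions.

```python
# Neighborhood selection for a two-layer problem: regress each lower-layer node
# on the other lower-layer nodes plus all upper-layer nodes with an l1 penalty.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p_upper, p_lower = 200, 10, 8
X_upper = rng.standard_normal((n, p_upper))                      # e.g. genomic layer
B = rng.standard_normal((p_upper, p_lower)) * (rng.random((p_upper, p_lower)) < 0.2)
X_lower = X_upper @ B + rng.standard_normal((n, p_lower))        # e.g. proteomic layer

directed, undirected = {}, {}
for j in range(p_lower):
    others = np.delete(X_lower, j, axis=1)
    design = np.hstack([others, X_upper])
    fit = Lasso(alpha=0.1).fit(design, X_lower[:, j])
    undirected[j] = fit.coef_[:p_lower - 1]   # within-layer neighborhood
    directed[j] = fit.coef_[p_lower - 1:]     # upper -> lower directed edge weights

print(np.round(directed[0], 2))
```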

We present a fundamentally new regularization method for the solution of the Fredholm integral equation of the first kind, in which we incorporate solutions corresponding to a range of Tikhonov regularizers into the end result. This method identifies solutions within a much larger function space, spanned by this set of regularized solutions, than is available to conventional regularization methods. Each of these solutions is regularized to a different extent. In effect, we combine the stability of solutions with greater degrees of regularization with the resolution of those that are less regularized. In contrast, current methods involve selection of a single, or in some cases several, regularization parameters that define an optimal degree of regularization. Because the identified solution is within the span of a set of differently-regularized solutions, we call this method \textit{span of regularizations}, or SpanReg. We demonstrate the performance of SpanReg through a non-negative least squares analysis employing a Gaussian basis, and show the improved recovery of bimodal Gaussian distribution functions as compared to conventional methods. We also demonstrate that this method exhibits decreased dependence of the end result on the optimality of regularization parameter selection. We further illustrate the method with an application to myelin water fraction mapping in the human brain from experimental magnetic resonance imaging relaxometry data. We expect SpanReg to be widely applicable as an effective new method for regularization of inverse problems.
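One minimal reading of the span idea, under our own assumptions rather than the authors' pipeline: compute Tikhonov solutions for a grid of regularization parameters and then search their span for the combination whose forward projection best fits the data (here by ordinary least squares, not the non-negative formulation used in the paper; the kernel and target below are toy choices).

```python
# Span-of-regularizations sketch: combine differently regularized Tikhonov solutions.
import numpy as np

def tikhonov(A, y, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)
m, n = 60, 40
A = np.array([[np.exp(-(i / m) * (j + 1)) for j in range(n)] for i in range(m)])
x_true = (np.exp(-0.5 * ((np.arange(n) - 12) / 3.0) ** 2)
          + np.exp(-0.5 * ((np.arange(n) - 28) / 3.0) ** 2))   # two Gaussian bumps
y = A @ x_true + 1e-3 * rng.standard_normal(m)

lams = np.logspace(-6, 0, 9)
X = np.column_stack([tikhonov(A, y, lam) for lam in lams])     # the span

# Coefficients of the combination: least squares of A @ X against the data.
c, *_ = np.linalg.lstsq(A @ X, y, rcond=None)
x_span = X @ c

for lam in (lams[0], lams[-1]):
    print(lam, np.linalg.norm(tikhonov(A, y, lam) - x_true))
print("span combination:", np.linalg.norm(x_span - x_true))
```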

In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments of matrix decomposition, favoring a (block) LU decomposition, the factorization of a matrix into the product of lower and upper triangular matrices. Matrix decomposition has since become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this survey is to give a self-contained introduction to concepts and mathematical tools in numerical linear algebra and matrix analysis in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. We recognize, however, that we cannot cover all the useful and interesting results concerning matrix decomposition within the limited scope of this discussion; for example, we omit a separate analysis of Euclidean spaces, Hermitian spaces, Hilbert spaces, and the complex domain. We refer the reader to the literature on linear algebra for a more detailed introduction to these related topics.
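The (block) LU factorization mentioned in the opening sentence, on a small example via SciPy, which returns $A = PLU$ with a permutation $P$, unit lower triangular $L$, and upper triangular $U$:

```python
# LU decomposition of a small matrix with partial pivoting.
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0, 2.0],
              [6.0, 3.0, 1.0],
              [2.0, 5.0, 7.0]])
P, L, U = lu(A)
assert np.allclose(P @ L @ U, A)
print(L)
print(U)
```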

Multiplying matrices is among the most fundamental and compute-intensive operations in machine learning. Consequently, there has been significant work on efficiently approximating matrix multiplies. We introduce a learning-based algorithm for this task that greatly outperforms existing methods. Experiments using hundreds of matrices from diverse domains show that it often runs $100\times$ faster than exact matrix products and $10\times$ faster than current approximate methods. In the common case that one matrix is known ahead of time, our method also has the interesting property that it requires zero multiply-adds. These results suggest that a mixture of hashing, averaging, and byte shuffling (the core operations of our method) could be a more promising building block for machine learning than the sparsified, factorized, and/or scalar quantized matrix products that have recently been the focus of substantial research and hardware investment.
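A much-simplified stand-in for the lookup-table flavor of such methods, not the paper's algorithm: quantize the rows of one matrix to a small codebook with k-means, precompute the codebook times the known matrix, and answer the product with table lookups. Unlike the paper's hash-based encoder, the k-means assignment below still performs multiply-adds, and on unstructured random data the approximation error is large.

```python
# Codebook-based approximate matrix multiplication (illustrative only).
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 64))
B = rng.standard_normal((64, 32))                      # known ahead of time

k = 64
codebook, codes = kmeans2(A, k, minit="++", seed=0)    # offline "training"
table = codebook @ B                                   # k x 32, precomputed once

approx = table[codes]                                  # pure lookups at query time
exact = A @ B
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```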

We show that for the problem of testing if a matrix $A \in F^{n \times n}$ has rank at most $d$, or requires changing an $\epsilon$-fraction of entries to have rank at most $d$, there is a non-adaptive query algorithm making $\widetilde{O}(d^2/\epsilon)$ queries. Our algorithm works for any field $F$. This improves upon the previous $O(d^2/\epsilon^2)$ bound (SODA'03), and bypasses an $\Omega(d^2/\epsilon^2)$ lower bound of (KDD'14) which holds if the algorithm is required to read a submatrix. Our algorithm is the first such algorithm which does not read a submatrix, and instead reads a carefully selected non-adaptive pattern of entries in rows and columns of $A$. We complement our algorithm with a matching query complexity lower bound for non-adaptive testers over any field. We also give tight bounds of $\widetilde{\Theta}(d^2)$ queries in the sensing model for which query access comes in the form of $\langle X_i, A\rangle:=tr(X_i^\top A)$; perhaps surprisingly, these bounds do not depend on $\epsilon$. We next develop a novel property testing framework for testing numerical properties of a real-valued matrix $A$ more generally, which includes the stable rank, Schatten-$p$ norms, and SVD entropy. Specifically, we propose a bounded entry model, where $A$ is required to have entries bounded by $1$ in absolute value. We give upper and lower bounds for a wide range of problems in this model, and discuss connections to the sensing model above.
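For contrast, here is the naive submatrix-sampling baseline that the query-pattern result improves upon: sample random $(d+1)\times(d+1)$ submatrices and reject as soon as one has full rank. The instance sizes and trial count below are illustrative, and the example works over the reals rather than an arbitrary field.

```python
# Naive rank tester that reads whole (d+1) x (d+1) submatrices.
import numpy as np

def naive_rank_tester(A: np.ndarray, d: int, trials: int, rng) -> bool:
    """Return True ('looks rank <= d') unless some sampled (d+1)x(d+1)
    submatrix is found to have full rank d+1."""
    n = A.shape[0]
    for _ in range(trials):
        rows = rng.choice(n, size=d + 1, replace=False)
        cols = rng.choice(n, size=d + 1, replace=False)
        if np.linalg.matrix_rank(A[np.ix_(rows, cols)]) == d + 1:
            return False
    return True

rng = np.random.default_rng(0)
n, d = 100, 3
low_rank = rng.standard_normal((n, d)) @ rng.standard_normal((d, n))
full_rank = rng.standard_normal((n, n))
print(naive_rank_tester(low_rank, d, trials=200, rng=rng))    # True
print(naive_rank_tester(full_rank, d, trials=200, rng=rng))   # almost surely False
```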
