
Randomized algorithms in numerical linear algebra can be fast, scalable, and robust. This paper examines the effect of sketching on the right singular vectors corresponding to the smallest singular values of a tall-skinny matrix. We analyze a fast randomized algorithm by Gilbert, Park, and Wakin for finding the trailing right singular vectors, assessing the quality of its solution with multiplicative perturbation theory. For an $m\times n$ ($m\geq n$) matrix, the algorithm runs in $O(mn\log n + n^3)$ operations, which is faster than the standard $O(mn^2)$ methods. In applications, numerical experiments show substantial speedups, including a $30\times$ speedup for the AAA algorithm and a $10\times$ speedup for the total least squares problem.
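
To make the sketch-then-solve idea concrete, here is a minimal NumPy sketch (not the authors' implementation): compress the tall matrix with a Gaussian sketching matrix and read the trailing right singular vectors off the small sketch. Replacing the Gaussian sketch with a subsampled randomized trigonometric transform is what brings the sketching cost down to $O(mn\log n)$; the function name and test setup are illustrative.

```python
import numpy as np

def trailing_right_singvecs(A, k, oversample=10, rng=None):
    """Approximate the k trailing right singular vectors of a tall-skinny A.

    Sketch A down to c = n + oversample rows, then take the trailing right
    singular vectors of the small sketch.  A dense Gaussian sketch is used
    here for simplicity (cost O(cmn)); a subsampled randomized trigonometric
    transform would reduce the sketching cost to O(mn log n).
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    c = min(m, n + oversample)
    S = rng.standard_normal((c, m)) / np.sqrt(c)           # sketching matrix
    _, _, Vt = np.linalg.svd(S @ A, full_matrices=False)   # SVD of the c x n sketch: O(n^3)
    return Vt[-k:].T                                       # right singular vectors of the
                                                           # k smallest singular values

# Sanity check: a 20000 x 50 matrix with a well-separated 3-dimensional trailing subspace.
rng = np.random.default_rng(0)
m, n, k = 20000, 50, 3
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
svals = np.concatenate([np.linspace(2.0, 1.0, n - k), np.full(k, 1e-8)])
A = (U * svals) @ V.T
V_hat = trailing_right_singvecs(A, k, rng=1)
# Cosines of the principal angles between the true and sketched trailing subspaces:
print(np.linalg.svd(V[:, -k:].T @ V_hat, compute_uv=False))  # all close to 1
```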

Related content

In the dynamic approximate maximum bipartite matching problem we are given a bipartite graph $G$ undergoing updates, and our goal is to maintain a matching of $G$ that is large compared to the maximum matching size $\mu(G)$. We call a dynamic matching algorithm $\alpha$-approximate (respectively $(\alpha, \beta)$-approximate) if it maintains a matching $M$ such that at all times $|M| \geq \mu(G) \cdot \alpha$ (respectively $|M| \geq \mu(G) \cdot \alpha - \beta$). We present the first deterministic $(1-\epsilon)$-approximate dynamic matching algorithm with $O(\mathrm{poly}(\epsilon^{-1}))$ amortized update time for graphs undergoing edge insertions. Previous solutions required update time that was either super-constant [Gupta FSTTCS'14, Bhattacharya-Kiss-Saranurak SODA'23] or exponential in $1/\epsilon$ [Grandoni-Leonardi-Sankowski-Schwiegelshohn-Solomon SODA'19]. Our implementation is arguably simpler than the aforementioned algorithms and its description is self-contained. Moreover, we show that if we allow for an additive $(1, \epsilon \cdot n)$-approximation, our algorithm seamlessly extends to also handle vertex deletions on top of edge insertions. This makes it one of the few small-update-time algorithms for $(1-\epsilon)$-approximate dynamic matching that support updates both increasing and decreasing the maximum matching size of $G$ in a fully dynamic manner.
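
For contrast with the paper's amortized $\mathrm{poly}(\epsilon^{-1})$ update time, the sketch below implements only the folklore baseline for the additive $(1, \epsilon \cdot n)$ guarantee: rebuild a maximum matching (Hopcroft-Karp via networkx) after every $\lceil\epsilon n\rceil$ updates. Between rebuilds each edge insertion raises $\mu(G)$ by at most one and each vertex deletion lowers $|M|$ by at most one, so $|M| \geq \mu(G) - \epsilon n$ always holds; the price is a rebuild-dominated update time far larger than that of the algorithms discussed in the abstract. Class and method names are illustrative.

```python
import math
import networkx as nx
from networkx.algorithms.bipartite import hopcroft_karp_matching

class PeriodicRebuildMatching:
    """Folklore (1, eps*n)-additive dynamic matching via periodic rebuilds."""

    def __init__(self, n_left, n_right, eps):
        self.G = nx.Graph()
        self.G.add_nodes_from((("L", i) for i in range(n_left)), bipartite=0)
        self.G.add_nodes_from((("R", j) for j in range(n_right)), bipartite=1)
        self.budget = max(1, math.ceil(eps * (n_left + n_right)))
        self.updates = 0
        self.matching = {}                      # node -> partner, both directions

    def _rebuild(self):
        left = [v for v in self.G if v[0] == "L"]
        self.matching = hopcroft_karp_matching(self.G, top_nodes=left)
        self.updates = 0

    def _count_update(self):
        self.updates += 1
        if self.updates >= self.budget:
            self._rebuild()

    def insert_edge(self, i, j):
        self.G.add_edge(("L", i), ("R", j))
        self._count_update()

    def delete_vertex(self, v):
        if v in self.matching:                  # lose at most one matched edge
            self.matching.pop(self.matching.pop(v), None)
        self.G.remove_node(v)
        self._count_update()

    def size(self):
        return len(self.matching) // 2
```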

The number of modes in a probability density function is representative of the model's complexity and can also be viewed as the number of existing subpopulations. Despite its relevance, little research has been devoted to its estimation. Focusing on the univariate setting, we propose a novel approach targeting prediction accuracy, inspired by some overlooked aspects of the problem. We argue for the need for structure in the solutions, the subjective and uncertain nature of modes, and the convenience of a holistic view blending global and local density properties. Our method builds upon a combination of flexible kernel estimators and parsimonious compositional splines. Feature exploration, model selection and mode testing are implemented in the Bayesian inference paradigm, providing soft solutions and allowing expert judgement to be incorporated in the process. The usefulness of our proposal is illustrated through a case study in sports analytics, showcasing multiple companion visualisation tools. A thorough simulation study demonstrates that traditional modality-driven approaches paradoxically struggle to provide accurate results. In this context, our method emerges as a top-tier alternative offering innovative solutions for analysts.
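
As a point of reference for the kind of instability the abstract alludes to, here is a deliberately naive plug-in sketch: count the local maxima of a Gaussian kernel density estimate on a grid. The reported mode count changes with the bandwidth, which is one motivation for the structured Bayesian treatment proposed in the paper. The function name and the synthetic two-component sample are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def count_modes(sample, bandwidth=None, grid_size=2048):
    """Count local maxima of a Gaussian kernel density estimate on a grid."""
    kde = gaussian_kde(sample, bw_method=bandwidth)
    lo, hi = sample.min(), sample.max()
    pad = 0.1 * (hi - lo)
    grid = np.linspace(lo - pad, hi + pad, grid_size)
    dens = kde(grid)
    interior = dens[1:-1]
    is_peak = (interior > dens[:-2]) & (interior > dens[2:])
    return int(is_peak.sum())

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 0.5, 200)])
for bw in (0.1, 0.3, 1.0):
    # The answer depends strongly on the bandwidth factor, which is exactly
    # the instability a purely plug-in mode count suffers from.
    print(bw, count_modes(sample, bw))
```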

We consider simple stochastic games $\mathcal G$ with energy-parity objectives, a combination of quantitative rewards with a qualitative parity condition. The Maximizer tries to avoid running out of energy while simultaneously satisfying a parity condition. We present an algorithm to approximate the value of a given configuration in 2-NEXPTIME. Moreover, $\varepsilon$-optimal strategies for either player require at most $O\!\left(\mathrm{2EXP}(|\mathcal G|)\cdot\log\frac{1}{\varepsilon}\right)$ memory modes.

The SHAP framework provides a principled method to explain the predictions of a model by computing feature importance. Motivated by applications in finance, we introduce the Top-k Identification Problem (TkIP), where the objective is to identify the $k$ features with the highest SHAP values. While any method that computes SHAP values with uncertainty estimates (such as KernelSHAP and SamplingSHAP) can be trivially adapted to solve TkIP, doing so is highly sample inefficient. The goal of our work is to improve the sample efficiency of existing methods in the context of solving TkIP. Our key insight is that TkIP can be framed as an Explore-m problem, a well-studied problem related to multi-armed bandits (MAB). This connection enables us to improve sample efficiency by leveraging two techniques from the MAB literature: (1) a better stopping condition (to stop sampling) that identifies when PAC (Probably Approximately Correct) guarantees have been met, and (2) a greedy sampling scheme that judiciously allocates samples between different features. By adopting these methods we develop KernelSHAP@k and SamplingSHAP@k to efficiently solve TkIP, offering an average improvement of $5\times$ in sample efficiency and runtime across the most common credit-related datasets.
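
To illustrate the Explore-m connection, here is a generic LUCB-style sketch of PAC top-k identification from noisy per-arm samples; it is not KernelSHAP@k or SamplingSHAP@k, and it assumes a black-box sampler of (sub-)Gaussian noisy estimates of each feature's SHAP value. The function names and the synthetic oracle are illustrative.

```python
import numpy as np

def top_k_identification(sample_arm, n_arms, k, eps=0.05, delta=0.05, batch=32):
    """PAC top-k selection from noisy per-arm samples (LUCB-style stopping rule).

    sample_arm(i, size) must return `size` i.i.d. noisy estimates of arm i's mean
    (here: of feature i's SHAP value).  Sampling stops once the empirical top-k
    and the rest are separated by the confidence-interval test, up to slack eps.
    """
    sums = np.zeros(n_arms)
    counts = np.zeros(n_arms, dtype=int)

    def pull(i):
        x = sample_arm(i, batch)
        sums[i] += x.sum()
        counts[i] += batch

    for i in range(n_arms):
        pull(i)

    while True:
        means = sums / counts
        # Hoeffding-style anytime radius; the constant assumes roughly bounded
        # or sub-Gaussian per-sample noise.
        rad = np.sqrt(np.log(4 * n_arms * counts**2 / delta) / (2 * counts))
        order = np.argsort(means)
        top, rest = order[-k:], order[:-k]
        h = top[np.argmin(means[top] - rad[top])]     # weakest apparent winner
        l = rest[np.argmax(means[rest] + rad[rest])]  # strongest apparent loser
        if means[h] - rad[h] >= means[l] + rad[l] - eps:
            return set(top.tolist()), int(counts.sum())
        pull(h); pull(l)                              # greedy: refine the two arms in doubt

# Toy usage with synthetic "SHAP value" means.
rng = np.random.default_rng(0)
true_vals = np.array([0.40, 0.35, 0.10, 0.05, 0.02, 0.01])
oracle = lambda i, size: true_vals[i] + 0.2 * rng.standard_normal(size)
print(top_k_identification(oracle, n_arms=6, k=2))
```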

Coded distributed computing (CDC) was introduced to greatly reduce the communication load for MapReduce computing systems. Such a system has $K$ nodes, $N$ input files, and $Q$ Reduce functions. Each input file is mapped by $r$ nodes and each Reduce function is computed by $s$ nodes. The architecture must allow for coding techniques that achieve the maximum multicast gain. Some CDC schemes that achieve optimal communication load have been proposed before. The parameters $N$ and $Q$ in those schemes, however, grow too fast with respect to $K$ to be of great practical value. To improve the situation, researchers have come up with some asymptotically optimal cascaded CDC schemes with $s+r=K$ from symmetric designs. In this paper, we propose new asymptotically optimal cascaded CDC schemes. Akin to known schemes, ours have $r+s=K$ and make use of symmetric designs as construction tools. Unlike previous schemes, ours have much smaller communication loads, given the same set of parameters $K$, $r$, $N$, and $Q$. We also expand the construction tools to include almost difference sets. Using them, we have managed to construct a new asymptotically optimal cascaded CDC scheme.
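
For orientation, the classical non-cascaded benchmark (the $s=1$ tradeoff of Li et al., stated here only for context and not a result of this paper) shows where the multicast gain comes from: repeating each Map task at $r$ of the $K$ nodes reduces the shuffle load by a factor of $r$,

$$ L_{\mathrm{uncoded}}(r) \;=\; 1-\frac{r}{K}, \qquad L_{\mathrm{coded}}(r) \;=\; \frac{1}{r}\left(1-\frac{r}{K}\right). $$

The cascaded regime of the abstract, in which each Reduce function is additionally computed at $s$ nodes with $r+s=K$, optimizes a more involved load expression, but schemes are compared by the same yardstick of communication load for given $K$, $r$, $N$ and $Q$.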

For many statistical experiments, there exists a multitude of optimal designs. If we consider models with uncorrelated observations and adopt the approach of approximate experimental design, the set of all optimal designs typically forms a multivariate polytope. In this paper, we mathematically characterize the polytope of optimal designs. In particular, we show that its vertices correspond to the so-called minimal optimum designs. Consequently, we compute the vertices for several classical multifactor regression models of the first and the second degree. To this end, we use software tools based on rational arithmetic; therefore, the computed list is accurate and complete. The polytope of optimal experimental designs, and its vertices, can be applied in several ways. For instance, it can aid in constructing cost-efficient and efficient exact designs.
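
A toy illustration of the polytope structure (a standard textbook example, not one of the models computed in the paper): for the first-degree model in three factors on the cube, the full $2^3$ factorial with equal weights and the regular half fraction $x_3=x_1x_2$ both have information matrix equal to the identity, hence the maximal D-criterion value $\det M = 1$ (no design on the cube can exceed it, since each diagonal moment is at most one and Hadamard's inequality bounds the determinant by their product), and so does every convex combination of the two. The half fractions play the role of minimal optimum designs, i.e. vertices of the optimal polytope.

```python
import numpy as np
from itertools import product

# First-degree model in three factors: f(x) = (1, x1, x2, x3).
def info_matrix(points, weights):
    F = np.array([[1.0, *p] for p in points])
    return F.T @ (np.asarray(weights)[:, None] * F)

cube = list(product([-1, 1], repeat=3))
full = info_matrix(cube, [1 / 8] * 8)

# Regular half fraction defined by x3 = x1 * x2 (a minimal optimum design).
half_pts = [p for p in cube if p[2] == p[0] * p[1]]
half = info_matrix(half_pts, [1 / 4] * 4)

# Any convex combination of the two optimal designs is again optimal.
mix_wts = [0.5 * (1 / 8) + 0.5 * (1 / 4 if p[2] == p[0] * p[1] else 0.0) for p in cube]
mix = info_matrix(cube, mix_wts)

for name, M in [("full factorial", full), ("half fraction", half), ("50/50 mixture", mix)]:
    print(name, np.linalg.det(M))   # each prints 1 (up to rounding): the maximal D-criterion value
```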

For any Boolean functions $f$ and $g$, the question of whether $R(f\circ g) = \tilde{\Theta}(R(f)\cdot R(g))$ holds is known as the composition question for randomized query complexity. Similarly, the composition question for approximate degree asks whether $\widetilde{\mathrm{deg}}(f\circ g) = \tilde{\Theta}(\widetilde{\mathrm{deg}}(f)\cdot\widetilde{\mathrm{deg}}(g))$. These are two of the most important and well-studied problems in the area, and yet we are far from answering them satisfactorily. It is known that the measures compose if one assumes various properties of the outer function $f$ (or the inner function $g$). This paper extends the class of outer functions for which $R$ and $\widetilde{\mathrm{deg}}$ compose. A recent landmark result (Ben-David and Blais, 2020) showed that $R(f \circ g) = \Omega(\mathrm{noisyR}(f)\cdot R(g))$. This implies that composition holds whenever $\mathrm{noisyR}(f) = \tilde{\Theta}(R(f))$. We show two results: (1) when $R(f) = \Theta(n)$, then $\mathrm{noisyR}(f) = \Theta(R(f))$; (2) if $R$ composes with respect to an outer function, then $\mathrm{noisyR}$ also composes with respect to the same outer function. On the other hand, no result of the type $\widetilde{\mathrm{deg}}(f \circ g) = \Omega(M(f) \cdot \widetilde{\mathrm{deg}}(g))$ (for some non-trivial complexity measure $M(\cdot)$) was known to the best of our knowledge. We prove that $\widetilde{\mathrm{deg}}(f\circ g) = \widetilde{\Omega}(\sqrt{\mathrm{bs}(f)} \cdot \widetilde{\mathrm{deg}}(g))$, where $\mathrm{bs}(f)$ is the block sensitivity of $f$. This implies that $\widetilde{\mathrm{deg}}$ composes whenever $\widetilde{\mathrm{deg}}(f)$ is asymptotically equal to $\sqrt{\mathrm{bs}(f)}$. It is already known that both $R$ and $\widetilde{\mathrm{deg}}$ compose when the outer function is symmetric. We also extend these results to weaker notions of symmetry with respect to the outer function.
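
To make the reasoning step behind that implication explicit (the second bound below is the generic robust-composition upper bound of Sherstov, invoked here only for context): whenever $\widetilde{\mathrm{deg}}(f)=\Theta(\sqrt{\mathrm{bs}(f)})$,

$$ \widetilde{\mathrm{deg}}(f\circ g)\;=\;\widetilde{\Omega}\!\bigl(\sqrt{\mathrm{bs}(f)}\cdot\widetilde{\mathrm{deg}}(g)\bigr)\;=\;\widetilde{\Omega}\!\bigl(\widetilde{\mathrm{deg}}(f)\cdot\widetilde{\mathrm{deg}}(g)\bigr) \qquad\text{and}\qquad \widetilde{\mathrm{deg}}(f\circ g)\;=\;O\!\bigl(\widetilde{\mathrm{deg}}(f)\cdot\widetilde{\mathrm{deg}}(g)\bigr), $$

so the two bounds match up to polylogarithmic factors and $\widetilde{\mathrm{deg}}$ composes for such outer functions.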

The theory of mixed finite element methods for solving different types of elliptic partial differential equations in saddle-point formulation has been well established for many decades. However, this topic has mostly been studied for variational formulations defined upon the same finite-element product space for both the shape- and test-pairs of primal variable and multiplier. Whenever these two product spaces differ, the saddle point problem is asymmetric. It turns out that the conditions on the finite element product spaces stipulated in the few works on this case may be of limited use in practice. The purpose of this paper is to provide an in-depth analysis of the well-posedness and the uniform stability of asymmetric approximate saddle point problems, based on the theory of continuous linear operators on Hilbert spaces. Our approach leads to necessary and sufficient conditions for these properties to hold, expressed in a readily exploitable form with fine constants. In particular, standard interpolation theory suffices to estimate the error of a conforming method.
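
As background for readers less familiar with the asymmetric (Petrov-Galerkin) setting, the classical starting point is the Banach-Nečas-Babuška conditions, recalled below for Hilbert spaces; they characterize well-posedness with different trial and test spaces, and the paper refines this picture for approximate saddle point problems with readily exploitable conditions and fine constants. For a trial space $X$, a test space $Y$ and a bounded bilinear form $b$ on $X\times Y$, the problem of finding $u\in X$ with $b(u,v)=\ell(v)$ for all $v\in Y$ is well posed if and only if

$$ \inf_{0\neq x\in X}\,\sup_{0\neq y\in Y}\,\frac{b(x,y)}{\|x\|_X\,\|y\|_Y}\;\geq\;\beta>0 \qquad\text{and}\qquad \sup_{x\in X} b(x,y)>0\quad\text{for every }y\in Y\setminus\{0\}, $$

and uniform stability of a family of discretizations additionally requires the discrete inf-sup constants to be bounded away from zero independently of the discretization parameter.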

In this work, we investigate the interval generalized Sylvester matrix equation ${\bf{A}}X{\bf{B}}+{\bf{C}}X{\bf{D}}={\bf{F}}$ and develop techniques for obtaining outer estimates of the so-called united solution set of this interval system. First, we propose a modified variant of the Krawczyk operator which reduces the computational complexity to cubic, compared with the Kronecker-product formulation. We then propose an iterative technique for enclosing the solution set. Both approaches are based on spectral decompositions of the midpoints of ${\bf{A}}$, ${\bf{B}}$, ${\bf{C}}$ and ${\bf{D}}$, and in both we assume that the midpoints of ${\bf{A}}$ and ${\bf{C}}$ are simultaneously diagonalizable, and likewise for the midpoints of ${\bf{B}}$ and ${\bf{D}}$. Some numerical experiments are given to illustrate the performance of the proposed methods.
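
To fix ideas, here is a minimal center-radius implementation of the classical (unmodified) Krawczyk operator applied to a generic interval linear system, the formulation to which the Kronecker-vectorized Sylvester equation reduces and whose cost the paper's modified operator is designed to avoid. It is a baseline sketch, not the proposed method, and the helper names are illustrative.

```python
import numpy as np

# Intervals in center-radius form: (C, R) encodes {M : |M - C| <= R entrywise}.

def imul(Ac, Ar, Bc, Br):
    """Enclosure of the product of two interval matrices (center-radius arithmetic)."""
    return Ac @ Bc, np.abs(Ac) @ Br + Ar @ np.abs(Bc) + Ar @ Br

def iadd(Ac, Ar, Bc, Br):
    return Ac + Bc, Ar + Br

def krawczyk_enclosure(Ac, Ar, bc, br, max_iters=50, inflate=1e-3):
    """Classical Krawczyk iteration for the interval linear system [A] x = [b].

    For the interval Sylvester equation AXB + CXD = F this would be applied to
    the Kronecker-vectorized system, which is exactly the costly route the
    paper's modified operator avoids; shown here only as the textbook baseline.
    """
    n = len(bc)
    C = np.linalg.inv(Ac)                                 # midpoint inverse as preconditioner
    zc, zr = imul(C, np.zeros_like(C), bc[:, None], br[:, None])                      # C * [b]
    Ec, Er = iadd(np.eye(n), np.zeros((n, n)), *imul(-C, np.zeros_like(C), Ac, Ar))   # I - C * [A]
    xc, xr = zc.copy(), zr.copy()
    for _ in range(max_iters):
        yc, yr = xc, xr * (1 + inflate) + 1e-12           # epsilon inflation
        pc, pr = imul(Ec, Er, yc, yr)                     # (I - C[A]) * [y]
        kc, kr = iadd(zc, zr, pc, pr)                     # Krawczyk image K([y])
        if np.all(kc - kr > yc - yr) and np.all(kc + kr < yc + yr):
            # K([y]) inside the interior of [y]: the united solution set is enclosed by K([y]).
            return kc.ravel(), kr.ravel()
        xc, xr = kc, kr
    raise RuntimeError("no verified enclosure found (interval matrix may be too wide)")

# Toy 2x2 interval system.
Ac = np.array([[4.0, 1.0], [1.0, 3.0]]);  Ar = 0.01 * np.ones((2, 2))
bc = np.array([1.0, 2.0]);                br = 0.01 * np.ones(2)
print(krawczyk_enclosure(Ac, Ar, bc, br))
```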
