
Deflation techniques are typically used to shift isolated clusters of small eigenvalues in order to obtain a tighter distribution and a smaller condition number. Such changes have a positive effect on the convergence behavior of Krylov subspace methods, which are among the most popular iterative solvers for large sparse linear systems. We develop a deflation strategy for symmetric saddle point matrices by taking advantage of their underlying block structure. The vectors used for deflation come from an elliptic singular value decomposition relying on the generalized Golub-Kahan bidiagonalization process. The block targeted by deflation is the off-diagonal one, since it features a problematic singular value distribution for certain applications. One example is Stokes flow in elongated channels, where the off-diagonal block has several small, isolated singular values whose number depends on the length of the channel. Applying deflation to specific parts of the saddle point system is important when using solvers such as CRAIG, which operates on individual blocks rather than on the whole system. The theory is developed by extending the existing framework for deflating square matrices before applying a Krylov subspace method such as MINRES. Numerical experiments confirm the merits of our strategy and raise interesting questions about using approximate vectors for deflation.
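To make the general idea concrete, the following sketch applies generic deflation by projection to a symmetric positive definite model problem and compares plain and deflated MINRES. It is only an illustration: the matrix, the exact-eigenvector deflation space `W`, and all sizes are made up here, whereas the paper builds its deflation vectors from a generalized Golub-Kahan / elliptic SVD of the off-diagonal block of a saddle point matrix.

```python
# Illustrative deflation-by-projection for a symmetric system (not the paper's
# block-structured construction; W here holds exact eigenvectors of the small,
# isolated eigenvalues, which the paper instead obtains from a generalized
# Golub-Kahan bidiagonalization of the off-diagonal block).
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(0)
n, k = 200, 3

# SPD test matrix with three isolated small eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([[1e-6, 1e-5, 1e-4], np.linspace(1.0, 10.0, n - k)])
A = (Q * eigs) @ Q.T
b = rng.standard_normal(n)

W = Q[:, :k]                                       # deflation space
E = W.T @ A @ W                                    # coarse (Galerkin) matrix
P = np.eye(n) - A @ W @ np.linalg.solve(E, W.T)    # deflation projector; P @ A is symmetric

it_plain, it_defl = [], []
x_plain, _ = minres(A, b, callback=lambda xk: it_plain.append(1))
PA = LinearOperator((n, n), matvec=lambda v: P @ (A @ v))
y, _ = minres(PA, P @ b, callback=lambda xk: it_defl.append(1))
x_defl = W @ np.linalg.solve(E, W.T @ b) + P.T @ y   # add back the coarse correction

print("iterations (plain vs deflated):", len(it_plain), len(it_defl))
print("residual norms:", np.linalg.norm(A @ x_plain - b), np.linalg.norm(A @ x_defl - b))
```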

Related content

In mathematics, a saddle point or minimax point is a point on the surface of the graph of a function where the slopes (derivatives) in orthogonal directions are all zero, but which is not a local extremum of the function. A saddle point is a critical point that is a relative minimum along one axial direction (between peaks) and a relative maximum along the crossing axis; the classic example is f(x, y) = x^2 - y^2 at the origin.

Explanations in XAI are typically developed by AI experts and focus on algorithmic transparency and the inner workings of AI systems. Research has shown that such explanations do not meet the needs of users who do not have AI expertise. As a result, explanations are often ineffective in making system decisions interpretable and understandable. We aim to strengthen a socio-technical view of AI by following a Human-Centered Explainable Artificial Intelligence (HC-XAI) approach, which investigates the explanation needs of end-users (i.e., subject matter experts and lay users) in specific usage contexts. One of the most influential works in this area is the XAI Question Bank (XAIQB) by Liao et al. The authors propose a set of questions that end-users might ask when using an AI system, which in turn is intended to help developers and designers identify and address explanation needs. Although the XAIQB is widely referenced, there are few reports of its use in practice. In particular, it is unclear to what extent the XAIQB sufficiently captures the explanation needs of end-users and what potential problems exist in the practical application of the XAIQB. To explore these open questions, we used the XAIQB as the basis for analyzing 12 think-aloud software explorations with subject matter experts. We investigated the suitability of the XAIQB as a tool for identifying explanation needs in a specific usage context. Our analysis revealed a number of explanation needs that were missing from the question bank, but that emerged repeatedly as our study participants interacted with an AI system. We also found that some of the XAIQB questions were difficult to distinguish and required interpretation during use. Our contribution is an extension of the XAIQB with 11 new questions. In addition, we have expanded the descriptions of all new and existing questions to facilitate their use.

We investigate a fundamental vertex-deletion problem called (Induced) Subgraph Hitting: given a graph $G$ and a set $\mathcal{F}$ of forbidden graphs, the goal is to compute a minimum-sized set $S$ of vertices of $G$ such that $G-S$ does not contain any graph in $\mathcal{F}$ as an (induced) subgraph. This is a generic problem that encompasses many well-known problems that have been extensively studied on their own, particularly (but not only) from the perspectives of approximation and parameterization. In this paper, we study the approximability of the problem on a large variety of graph classes. Our first result is a linear-time $(1+\varepsilon)$-approximation reduction from (Induced) Subgraph Hitting on any graph class $\mathcal{G}$ of bounded expansion to the same problem on bounded-degree graphs within $\mathcal{G}$. This directly yields linear-size $(1+\varepsilon)$-approximation lossy kernels for the problems on any graph class of bounded expansion. Our second result is a linear-time approximation scheme for (Induced) Subgraph Hitting on any graph class $\mathcal{G}$ of polynomial expansion, based on the local-search framework of Har-Peled and Quanrud [SICOMP 2017]. This approximation scheme applies to a more general family of problems that aim to hit all subgraphs satisfying a certain property $\pi$, provided that $\pi$ is efficiently testable and has bounded diameter. Both of our results have applications to Subgraph Hitting (not induced) on wide classes of geometric intersection graphs, yielding linear-size lossy kernels and (near-)linear time approximation schemes for the problem.
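As a concrete (if very small) instance of the problem, the sketch below hits all triangles ($\mathcal{F} = \{K_3\}$) in a graph with a simple greedy heuristic. This is only meant to make the problem statement tangible; it is not the local-search approximation scheme or the lossy kernelization described above, and the graph and the heuristic are arbitrary illustration choices.

```python
# Toy Subgraph Hitting instance with F = {K3}: delete vertices until the graph
# is triangle-free. Greedy heuristic only; not the paper's (1+eps)-approximation.
import networkx as nx

def greedy_triangle_hitting(G):
    G = G.copy()
    S = []
    while True:
        tri = nx.triangles(G)                 # vertex -> number of triangles through it
        v = max(tri, key=tri.get)
        if tri[v] == 0:                       # graph is already triangle-free
            return S
        G.remove_node(v)
        S.append(v)

G = nx.karate_club_graph()
S = greedy_triangle_hitting(G)
H = G.copy(); H.remove_nodes_from(S)
print(len(S), "deletions; triangle-free:", sum(nx.triangles(H).values()) == 0)
```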

We develop a framework for algorithms that compute the diameter of graphs of bounded distance Vapnik-Chervonenkis (VC) dimension in (parametrized) sub-quadratic time. The class of graphs of bounded distance VC-dimension is wide and includes, e.g., all minor-free graphs. We build on the work of Ducoffe et al. and improve their technique. With our approach the algorithms become simpler and faster, running in $\widetilde{\mathcal{O}}(k \cdot V^{1-1/d} \cdot E)$ time, where $k$ is the diameter and $d$ is the VC-dimension. Furthermore, the framework can be used in a more general setting. In particular, we apply it to geometric intersection graphs, i.e., graphs whose vertices are identical geometric objects in the plane and whose adjacency is defined by intersection. For these graphs we answer a question posed by Bringmann et al., obtaining a $\widetilde{\mathcal{O}}(n^{7/4})$ parametrized diameter algorithm for unit square intersection graphs of size $n$, as well as a more general algorithm for convex polygon intersection graphs.
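For contrast, here is the textbook quadratic-time baseline that the framework above improves on: one BFS per vertex. The grid graph used as a stand-in example is an arbitrary choice for illustration.

```python
# Baseline diameter computation: BFS from every vertex, O(V * E) time. The
# framework above replaces this with a parametrized sub-quadratic algorithm for
# graphs of bounded distance VC-dimension.
import networkx as nx

def diameter_bfs(G):
    return max(max(nx.single_source_shortest_path_length(G, v).values()) for v in G)

G = nx.grid_2d_graph(20, 20)      # 20x20 grid; its diameter is 19 + 19 = 38
print(diameter_bfs(G))
```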

We consider gradient-related methods for low-rank matrix optimization with a smooth cost function. The methods operate on single factors of the low-rank factorization and share aspects of both alternating and Riemannian optimization. Two possible choices for the search directions based on Gauss-Southwell type selection rules are compared: one using the gradient of a factorized non-convex formulation, the other using the Riemannian gradient. While both methods provide gradient convergence guarantees that are similar to the unconstrained case, numerical experiments on a quadratic cost function indicate that the version based on the Riemannian gradient is significantly more robust with respect to small singular values and the condition number of the cost function. As a side result of our approach, we also obtain new convergence results for the alternating least squares method.
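As a minimal illustration of the factored (non-convex) formulation mentioned above, the sketch below runs block gradient steps on the two factors of a rank-$r$ approximation problem with a quadratic cost, using a step of one over the block Lipschitz constant. The problem sizes, step rule, and iteration count are arbitrary, and the sketch does not implement the Riemannian-gradient variant or the Gauss-Southwell selection rules studied in the paper.

```python
# Factor-wise gradient descent on f(L, R) = 0.5 * ||L R^T - M||_F^2 (the factored,
# non-convex formulation; the paper compares such steps with Riemannian-gradient
# steps on the fixed-rank manifold).
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 60, 40, 5
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # exactly rank-r target
L, R = rng.standard_normal((m, r)), rng.standard_normal((n, r))

for _ in range(3000):
    E = L @ R.T - M
    L = L - (E @ R) / np.linalg.norm(R, 2) ** 2    # step = 1 / Lipschitz constant of the L-block
    E = L @ R.T - M
    R = R - (E.T @ L) / np.linalg.norm(L, 2) ** 2  # step = 1 / Lipschitz constant of the R-block

print("relative error:", np.linalg.norm(L @ R.T - M) / np.linalg.norm(M))
```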

In this article, we propose two kinds of neural networks, inspired by the power method and the inverse power method, to solve linear eigenvalue problems. These neural networks share the ideas of the traditional methods, with the differential operator realized by automatic differentiation. The eigenfunction of the eigenvalue problem is learned by the neural network, and the iterative algorithms are implemented by optimizing a specially defined loss function. The largest positive eigenvalue, the smallest eigenvalue, and interior eigenvalues for which prior knowledge is given can all be computed efficiently. We examine the applicability and accuracy of our methods in numerical experiments in one, two, and higher dimensions. The numerical results show that accurate eigenvalue and eigenfunction approximations can be obtained by our methods.
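For reference, these are the two classical iterations the networks are modeled on, applied to a 1D discrete Laplacian. In the paper the iterate is replaced by a network-parametrized eigenfunction and the update by minimization of a loss; the matrix and iteration counts below are arbitrary illustration choices.

```python
# Classical power and inverse power iterations (the templates for the proposed
# neural networks), applied to the 1D discrete Laplacian.
import numpy as np

def power_method(A, iters=1000, seed=0):
    v = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v               # Rayleigh quotient ~ largest-magnitude eigenvalue

def inverse_power_method(A, iters=1000, seed=1):
    Ainv = np.linalg.inv(A)        # in practice: factorize once and reuse the factors
    v = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(iters):
        v = Ainv @ v
        v /= np.linalg.norm(v)
    return v @ A @ v               # ~ eigenvalue of smallest magnitude

n = 100                            # 1D Laplacian: eigenvalues 2 - 2*cos(k*pi/(n+1))
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
print(power_method(A), inverse_power_method(A))   # extreme eigenvalues, close to 4 and 0
```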

In this paper we derive tight lower bounds resolving the hardness status of several fundamental weighted matroid problems. One notable example is budgeted matroid independent set, for which we show that there is no fully polynomial-time approximation scheme (FPTAS), indicating that the Efficient PTAS of [Doron-Arad, Kulik and Shachnai, SOSA 2023] is the best possible. Furthermore, we show that there is no pseudo-polynomial time algorithm for exact weight matroid independent set, implying that the algorithm of [Camerini, Galbiati and Maffioli, J. Algorithms 1992] for representable matroids cannot be generalized to arbitrary matroids. Similarly, we show that there is no FPTAS for constrained minimum basis of a matroid or for knapsack cover with a matroid, implying that the existing Efficient PTAS for the former is optimal. For all of the above problems, we obtain unconditional lower bounds in the oracle model, where the independent sets of the matroid can be accessed only via a membership oracle. We complement these results by showing that the same lower bounds hold under standard complexity assumptions, even if the matroid is encoded as part of the instance. All of our bounds are based on a specifically structured family of paving matroids.

In this study, a novel preconditioner based on an absolute-value block $\alpha$-circulant matrix approximation is developed for nonsymmetric dense block lower triangular Toeplitz (BLTT) systems that arise from the numerical discretization of evolutionary equations. The preconditioner is constructed by taking the absolute value of a block $\alpha$-circulant approximation of the BLTT matrix. To apply it, the original BLTT linear system is first converted into a symmetric form by a time-reversing permutation transformation; the preconditioned minimal residual method (MINRES) is then employed to solve the symmetrized system. With a properly chosen $\alpha$, the eigenvalues of the preconditioned matrix are proven to be clustered around $\pm1$ without any significant outliers. With this clustered spectrum, we show that the preconditioned MINRES solver has a convergence rate independent of the system size. To the best of our knowledge, this is the first preconditioned MINRES method with a size-independent convergence rate for dense BLTT systems. The efficacy of the proposed preconditioner is corroborated by numerical experiments, which show that it attains optimal convergence.
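A heavily simplified scalar analogue of the idea is sketched below: a symmetric tridiagonal Toeplitz system is solved by MINRES with a circulant (Strang-type) preconditioner applied through FFTs. The matrix, sizes, and the Strang construction are illustration choices only; the paper's preconditioner is the block, $\alpha$-circulant, absolute-value version for the symmetrized BLTT system.

```python
# Scalar toy version of circulant preconditioning for a Toeplitz system, solved
# with MINRES. Not the paper's absolute-value block alpha-circulant preconditioner.
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import minres, LinearOperator

n = 512
col = np.zeros(n); col[0], col[1] = 2.1, -1.0          # SPD tridiagonal Toeplitz matrix
T = toeplitz(col)
b = np.ones(n)

# Strang-type circulant approximation: wrap the central diagonals around.
c = np.zeros(n); c[0], c[1], c[-1] = 2.1, -1.0, -1.0
lam = np.fft.fft(c).real                               # circulant eigenvalues (all positive)
M = LinearOperator((n, n), matvec=lambda v: np.fft.ifft(np.fft.fft(v) / lam).real)

it_plain, it_prec = [], []
x0, _ = minres(T, b, callback=lambda xk: it_plain.append(1))
x1, _ = minres(T, b, M=M, callback=lambda xk: it_prec.append(1))
print("MINRES iterations (plain vs circulant-preconditioned):", len(it_plain), len(it_prec))
print("residual norms:", np.linalg.norm(T @ x0 - b), np.linalg.norm(T @ x1 - b))
```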

In this paper, we introduce novel fast matrix inversion algorithms that leverage triangular decomposition and a recurrent formalism, incorporating Strassen's fast matrix multiplication. We place particular emphasis on triangular matrices, for which we propose a novel computational approach based on combinatorial techniques for finding the inverse of a general non-singular triangular matrix. Unlike iterative methods, our combinatorial approach for (block) triangular-type matrices enables direct computation of the matrix inverse through a nonlinear combination of carefully selected combinatorial entries of the initial matrix. This characteristic makes the proposed method fully parallelizable, offering significant potential for efficient implementation on parallel computing architectures. Our approach also allows the derivation of recurrence relations for constructing the matrix inverse. By combining the (block) combinatorial approach with a recursive triangular split method for inverting triangular matrices, we develop potentially competitive algorithms that strike a balance between efficiency and accuracy. We provide rigorous mathematical proofs of the newly presented method and conduct extensive numerical tests to showcase its applicability and efficiency. The comprehensive evaluation and experimental results presented in this paper confirm the practical utility of the proposed algorithms, demonstrating their superiority over classical approaches in terms of computational efficiency.
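The recursive triangular split mentioned above can be sketched in a few lines; the block identity below is standard, and the matrix size and recursion base case are arbitrary choices (the paper additionally uses a combinatorial formula for the entries and Strassen multiplication for the block products).

```python
# Recursive block inversion of a nonsingular lower triangular matrix, based on
#   inv([[A, 0], [C, B]]) = [[inv(A), 0], [-inv(B) @ C @ inv(A), inv(B)]].
import numpy as np

def lower_tri_inv(L):
    n = L.shape[0]
    if n == 1:
        return np.array([[1.0 / L[0, 0]]])
    k = n // 2
    A, C, B = L[:k, :k], L[k:, :k], L[k:, k:]
    Ai, Bi = lower_tri_inv(A), lower_tri_inv(B)
    out = np.zeros((n, n))
    out[:k, :k] = Ai
    out[k:, k:] = Bi
    out[k:, :k] = -Bi @ C @ Ai        # the only block products; Strassen could be used here
    return out

L = np.tril(np.random.default_rng(2).standard_normal((8, 8))) + 4.0 * np.eye(8)
print(np.allclose(lower_tri_inv(L) @ L, np.eye(8)))     # True
```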

Finding diverse solutions to optimization problems has been of practical interest for several decades, and recently enjoyed increasing attention in research. While submodular optimization has been rigorously studied in many fields, its diverse solutions extension has not. In this study, we consider the most basic variants of submodular optimization, and propose two simple greedy algorithms, which are known to be effective at maximizing monotone submodular functions. These are equipped with parameters that control the trade-off between objective and diversity. Our theoretical contribution shows their approximation guarantees in both objective value and diversity, as functions of their respective parameters. Our experimental investigation with maximum vertex coverage instances demonstrates their empirical differences in terms of objective-diversity trade-offs.
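For readers unfamiliar with the baseline, the sketch below is the standard greedy for monotone submodular maximization under a cardinality constraint, run on a tiny maximum coverage instance. The paper's algorithms extend this single-solution greedy with parameters that trade objective value against the diversity of a set of solutions; the instance and budget below are made up for illustration.

```python
# Standard greedy for monotone submodular maximization (maximum coverage), the
# single-solution baseline that diversity-aware greedy variants build on.
def coverage(sets, chosen):
    return len(set().union(*(sets[i] for i in chosen))) if chosen else 0

def greedy_max_coverage(sets, k):
    chosen = []
    for _ in range(k):
        gain = {i: coverage(sets, chosen + [i]) - coverage(sets, chosen)
                for i in sets if i not in chosen}
        chosen.append(max(gain, key=gain.get))   # pick the largest marginal gain
    return chosen

sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}, 4: {2, 5}}
S = greedy_max_coverage(sets, k=2)
print(S, coverage(sets, S))   # the greedy enjoys a (1 - 1/e) approximation guarantee
```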

We consider a cooperative multi-agent system consisting of a team of agents with decentralized information. Our focus is on the design of symmetric (i.e. identical) strategies for the agents in order to optimize a finite horizon team objective. We start with a general information structure and then consider some special cases. The constraint of using symmetric strategies introduces new features and complications in the team problem. For example, we show in a simple example that randomized symmetric strategies may outperform deterministic symmetric strategies. We also discuss why some of the known approaches for reducing agents' private information in teams may not work under the constraint of symmetric strategies. We then adopt the common information approach for our problem and modify it to accommodate the use of symmetric strategies. This results in a common information based dynamic program where each step involves minimization over a single function from the space of an agent's private information to the space of probability distributions over actions. We present specialized models where private information can be reduced using simple dynamic program based arguments.
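The advantage of randomization under the symmetry constraint can be seen in a deliberately simple constructed example (not the one from the paper): two agents with identical, trivial information are rewarded only if they pick different actions, so any deterministic symmetric strategy earns 0 while independent uniform randomization earns 1/2 in expectation.

```python
# Constructed toy example: identical information, reward 1 only when the two
# agents choose different actions, so deterministic symmetric strategies fail.
from itertools import product

actions = (0, 1)
reward = lambda a1, a2: 1.0 if a1 != a2 else 0.0

# Deterministic symmetric strategy: both agents play the same fixed action.
best_deterministic = max(reward(a, a) for a in actions)              # = 0.0

# Symmetric randomized strategy: each agent independently plays 1 with probability p.
def expected_reward(p):
    prob = lambda a: p if a == 1 else 1 - p
    return sum(reward(a1, a2) * prob(a1) * prob(a2) for a1, a2 in product(actions, actions))

best_randomized = max(expected_reward(i / 100) for i in range(101))  # = 0.5 at p = 0.5

print(best_deterministic, best_randomized)
```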
