
We consider the tensor equation whose coefficient tensor is a nonsingular M-tensor and whose right-hand side vector is nonnegative. Such a tensor equation may have a large number of nonnegative solutions. It is already known that the tensor equation has a maximal nonnegative solution and a minimal nonnegative solution (called extremal solutions collectively). However, the existing proofs do not show how the extremal solutions can be computed. The existing numerical methods can find one of the nonnegative solutions, without knowing whether the computed solution is an extremal solution. In this paper, we present new proofs for the existence of extremal solutions. Our proofs are much shorter than the existing ones and, more importantly, they yield numerical methods that can compute the extremal solutions. Linear convergence of these numerical methods is also proved under mild assumptions. Some of our discussions also allow the coefficient tensor to be a Z-tensor or allow the right-hand side vector to have some negative entries.
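
As an illustration of the kind of monotone iteration involved (an assumed splitting and toy data, not necessarily the scheme analyzed in the paper), a classical fixed-point iteration for $\mathcal{A}x^{m-1} = b$ writes the nonsingular M-tensor as $\mathcal{A} = s\mathcal{I} - \mathcal{B}$ with $\mathcal{B} \ge 0$ and $s > \rho(\mathcal{B})$, and iterates from $x_0 = 0$:

```python
import numpy as np

# Hedged sketch: a monotone fixed-point iteration for A x^{m-1} = b with m = 3,
# using the splitting A = s*I - B of a nonsingular M-tensor (B >= 0, s > rho(B)).
# Starting from x = 0, the iterates increase toward a minimal nonnegative solution;
# this is one classical construction, not necessarily the scheme of the paper.

def apply_B(B, x):
    """(B x^2)_i = sum_{j,k} B[i, j, k] * x[j] * x[k]."""
    return np.einsum('ijk,j,k->i', B, x, x)

def minimal_nonneg_solution(s, B, b, tol=1e-12, max_iter=10000):
    x = np.zeros_like(b)                          # x_0 = 0
    for _ in range(max_iter):
        # solve s * x_new^{[2]} = b + B x^2 componentwise
        x_new = np.sqrt((b + apply_B(B, x)) / s)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Toy data (assumed, for illustration): s exceeds the spectral radius of B.
n, s = 3, 2.0
B = np.full((n, n, n), 0.05)
b = np.array([1.0, 0.5, 0.2])
x = minimal_nonneg_solution(s, B, b)
print(x, s * x**2 - apply_B(B, x) - b)   # residual of the tensor equation, ~ 0
```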

Related content

We propose a new paradigm for designing efficient p-adaptive arbitrary high order methods. We consider arbitrary high order iterative schemes that gain one order of accuracy at each iteration and modify them so that the accuracy achieved at a given iteration matches the discretization accuracy of that same iteration. Apart from the computational advantage, the modified methods naturally allow p-adaptivity: the iterations are stopped as soon as appropriate conditions are met. Moreover, the modification is easy to include in an existing implementation of an arbitrary high order iterative scheme, and it does not preclude parallelization if the original method allowed it. An application to the ADER method for hyperbolic Partial Differential Equations (PDEs) is presented here. We explain how this framework can be interpreted as an arbitrary high order iterative scheme, by recasting it as a Deferred Correction (DeC) method, and how it can easily be modified to obtain a more efficient formulation, in which a local a posteriori limiter can be naturally integrated, leading to p-adaptivity and structure-preserving properties. Finally, the novel approach is extensively tested against classical benchmarks for compressible gas dynamics to show its robustness and computational efficiency.
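
As a toy illustration of the "one order of accuracy per iteration" mechanism and of the p-adaptive stopping (an assumed example, not the ADER/DeC scheme of the paper), consider Picard iteration for the scalar ODE $u' = \lambda u$ over one time step: iterate $k$ reproduces the degree-$k$ Taylor polynomial of the exact flow, so the iterations can be stopped as soon as the last correction drops below a tolerance matched to the discretization accuracy.

```python
import numpy as np

# Toy illustration (assumed example, not the paper's ADER/DeC scheme):
# Picard iteration for u' = lam*u, u(0) = u0, over one step of size dt.
# Iterate k equals u0 * sum_{j<=k} (lam*dt)^j / j!, so each iteration gains
# one order of accuracy. p-adaptivity: stop once the latest correction is
# below a tolerance, instead of always performing a fixed number of iterations.

def adaptive_picard_step(u0, lam, dt, tol=1e-10, p_max=15):
    u, term = u0, u0
    for k in range(1, p_max + 1):
        term *= lam * dt / k           # next Picard/Taylor correction
        u += term
        if abs(term) < tol * abs(u0):  # a posteriori stopping criterion
            return u, k                # effective local order used
    return u, p_max

u0, lam, dt = 1.0, -2.0, 0.1
u_num, order_used = adaptive_picard_step(u0, lam, dt)
print(u_num, np.exp(lam * dt) * u0, order_used)   # numerical vs exact, order used
```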

For the intersection of the Stiefel manifold and the set of nonnegative matrices in $\mathbb{R}^{n\times r}$, we present global and local error bounds with easily computable residual functions and explicit coefficients. Moreover, we show that the error bounds cannot be improved except for the coefficients, which explains why, when $1 < r < n$, two square-root terms are necessary in the bounds, one for nonnegativity and one for orthogonality. The error bounds are applied to penalty methods for minimizing a Lipschitz continuous function with nonnegative orthogonality constraints. Under only the Lipschitz continuity of the objective function, we prove the exactness of penalty problems that penalize the nonnegativity constraint, or the orthogonality constraint, or both constraints. Our results cover both global and local minimizers.
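
As a small illustration of the quantities involved (the residual functions and penalty weights below are assumptions, not the paper's exact residuals or coefficients), one can evaluate natural nonnegativity and orthogonality residuals of a matrix $X \in \mathbb{R}^{n\times r}$ and form a penalized objective of the kind discussed above:

```python
import numpy as np

# Illustrative residuals for X near the set {X : X^T X = I, X >= 0}
# (assumptions for illustration, not the paper's residual functions or constants).
def orth_residual(X):
    r = X.shape[1]
    return np.linalg.norm(X.T @ X - np.eye(r), 'fro')    # orthogonality violation

def nonneg_residual(X):
    return np.linalg.norm(np.minimum(X, 0.0), 'fro')      # negativity violation

# A generic penalty objective for min f(X) s.t. X^T X = I, X >= 0:
# both constraints are penalized; rho1, rho2 are user-chosen penalty weights.
def penalized_objective(f, X, rho1, rho2):
    return f(X) + rho1 * nonneg_residual(X) + rho2 * orth_residual(X)

X = np.random.default_rng(0).standard_normal((6, 2))   # random, generally infeasible
f = lambda X: np.sum(X)                                  # any Lipschitz objective
print(nonneg_residual(X), orth_residual(X), penalized_objective(f, X, 10.0, 10.0))
```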

A path query extracts vertex tuples from a labeled graph, based on the words that are formed by the paths connecting the vertices. We study the computational complexity of measuring the contribution of edges and vertices to an answer to a path query, focusing on the class of conjunctive regular path queries. To measure this contribution, we adopt the traditional Shapley value from cooperative game theory. This value has been recently proposed and studied in the context of relational database queries and has uses in a plethora of other domains. We first study the contribution of edges and show that the exact Shapley value is almost always hard to compute. Specifically, it is #P-hard to calculate the contribution of an edge whenever at least one (non-redundant) conjunct allows for a word of length three or more. In the case of regular path queries (i.e., no conjunction), the problem is tractable if the query has only words of length at most two; hence, this property fully characterizes the tractability of the problem. On the other hand, if we allow for an approximation error, then it is straightforward to obtain an efficient scheme (FPRAS) for an additive approximation. Yet, a multiplicative approximation is harder to obtain. We establish that in the case of conjunctive regular path queries, a multiplicative approximation of the Shapley value of an edge can be computed in polynomial time if and only if all query atoms are finite languages (assuming non-redundancy and conventional complexity limitations). We also study the analogous situation where we wish to determine the contribution of a vertex, rather than an edge, and establish complexity results of similar nature.
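
For the additive approximation, the standard permutation-sampling estimator already provides such a scheme. The sketch below is a generic Monte Carlo estimator of the Shapley value of one edge with respect to a Boolean query; the `answers` function is a hypothetical placeholder standing in for the evaluation of a (conjunctive) regular path query over a subgraph.

```python
import random

# Monte Carlo (permutation-sampling) estimate of the Shapley value of one edge.
# `answers(edge_subset)` is a hypothetical placeholder: it should return the
# quantity whose credit is being shared, e.g. 1.0 if the path query has an
# answer over the subgraph induced by `edge_subset` and 0.0 otherwise.
def shapley_edge(edges, target, answers, num_samples=10000, seed=0):
    rng = random.Random(seed)
    others = [e for e in edges if e != target]
    total = 0.0
    for _ in range(num_samples):
        rng.shuffle(others)
        cut = rng.randrange(len(others) + 1)   # position of `target` in a random permutation
        before = set(others[:cut])             # coalition of edges preceding the target
        total += answers(before | {target}) - answers(before)
    return total / num_samples                 # unbiased, additive-error estimate

# Toy usage: 3 labeled edges; the "query" holds iff both a-labeled edges are present.
edges = [(1, 2, 'a'), (2, 3, 'a'), (1, 3, 'b')]
answers = lambda es: 1.0 if {(1, 2, 'a'), (2, 3, 'a')} <= es else 0.0
print(shapley_edge(edges, (1, 2, 'a'), answers))   # approximately 0.5
```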

In this paper, we propose a low-cost, parameter-free, and pressure-robust Stokes solver based on the enriched Galerkin (EG) method with a discontinuous velocity enrichment function. The EG method employs the interior penalty discontinuous Galerkin (IPDG) formulation to weakly impose the continuity of the velocity function. However, the symmetric IPDG formulation, despite its advantage of symmetry, requires considerable computational effort to choose an optimal penalty parameter and to compute the various trace terms. In order to reduce such effort, we replace the derivatives of the velocity function with its weak derivatives computed from the geometric data of the elements. Therefore, our modified EG (mEG) method is a parameter-free numerical scheme with reduced computational complexity as well as optimal rates of convergence. Moreover, we achieve pressure-robustness for the mEG method by employing a velocity reconstruction operator on the load vector on the right-hand side of the discrete system. The theoretical results are confirmed through numerical experiments with two- and three-dimensional examples.

Polynomial kernels are widely used in machine learning and are one of the default choices for developing kernel-based classification and regression models. However, they are rarely used and considered in numerical analysis due to their lack of strict positive definiteness. In particular, they do not enjoy the usual property of unisolvency for arbitrary point sets, which is one of the key properties used to build kernel-based interpolation methods. This paper is devoted to establishing some initial results for the study of these kernels, and their related interpolation algorithms, in the context of approximation theory. We first prove necessary and sufficient conditions on point sets which guarantee the existence and uniqueness of an interpolant. We then study the Reproducing Kernel Hilbert Spaces (or native spaces) of these kernels and their norms, and provide inclusion relations between spaces corresponding to different kernel parameters. With these spaces at hand, it is further possible to derive generic error estimates which apply to sufficiently smooth functions, thus escaping the native space. Finally, we show how to employ an efficient and stable algorithm for these kernels to obtain accurate interpolants, and we test them in some numerical experiments. After this analysis several computational and theoretical aspects remain open, and we outline possible further research directions in a concluding section. This work builds some bridges between kernel and polynomial interpolation, two topics to which the authors, to different extents, have been introduced under the supervision or through the work of Stefano De Marchi. For this reason, they wish to dedicate this work to him on the occasion of his 60th birthday.
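
A minimal sketch of the interpolation scheme under discussion (an assumed setup, not the stable algorithm referred to above): assemble the kernel matrix of the polynomial kernel $K(x,y) = (1 + \langle x, y\rangle)^d$ on a point set, solve for the coefficients, and evaluate the interpolant. For unfavourable point sets the kernel matrix is singular, which is exactly the unisolvency issue studied here; the stable algorithm mentioned above would replace the plain linear solve.

```python
import numpy as np

# Polynomial kernel K(x, y) = (1 + <x, y>)^d; only positive semi-definite,
# so the interpolation matrix below can be singular for some point sets.
def poly_kernel(X, Y, d=3):
    return (1.0 + X @ Y.T) ** d

def fit_interpolant(X, f_vals, d=3):
    K = poly_kernel(X, X, d)
    coeffs = np.linalg.solve(K, f_vals)        # fails / is ill-posed if K is singular
    return lambda Xnew: poly_kernel(Xnew, X, d) @ coeffs

# 1D toy example (assumed): interpolate f(x) = sin(x) at 4 distinct nodes,
# for which the degree-3 polynomial kernel matrix is nonsingular.
X = np.linspace(-1, 1, 4).reshape(-1, 1)
f_vals = np.sin(X).ravel()
s = fit_interpolant(X, f_vals, d=3)
Xtest = np.array([[0.3], [0.7]])
print(s(Xtest), np.sin(Xtest).ravel())   # interpolant vs exact values
```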

We tackle the problem of computing counterfactual explanations -- minimal changes to the features that flip an undesirable model prediction. We propose a solution to this problem for linear Support Vector Machine (SVM) models. Moreover, we introduce a way to account for weighted actions that allow for more changes in certain features than in others. In particular, we show how to find counterfactual explanations with the purpose of increasing model interpretability. These explanations are valid, change only actionable features, are close to the data distribution, are sparse, and take into account correlations between features. We cast this as a mixed integer programming optimization problem. Additionally, we introduce two novel scale-invariant cost functions for assessing the quality of counterfactual explanations and use them to evaluate the quality of our approach on a real medical dataset. Finally, we build a support vector machine model to predict whether law students will pass the Bar exam using protected features, and use our algorithms to uncover the inherent biases of the SVM.
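
For a plain linear SVM with no actionability, sparsity, or integrality constraints, the minimum-$\ell_2$ counterfactual has a closed form: move $x$ along $w$ to the other side of the decision hyperplane. The sketch below shows only this unconstrained special case as an illustration (an assumption, not the mixed integer programming formulation of the paper); the weighted actions, sparsity, plausibility, and correlation requirements described above are what motivate the MIP.

```python
import numpy as np

# Unconstrained illustrative special case (not the paper's MIP formulation):
# for a linear SVM score f(x) = w.x + b, the smallest L2 change that flips
# the sign of f is a step along w across the hyperplane, plus a small margin.
def l2_counterfactual(x, w, b, margin=1e-6):
    f = w @ x + b
    delta = -(f + np.sign(f) * margin) * w / (w @ w)
    return x + delta

w = np.array([2.0, -1.0, 0.5])
b = -0.3
x = np.array([0.2, 0.9, 0.1])
x_cf = l2_counterfactual(x, w, b)
print(w @ x + b, w @ x_cf + b)   # original score and flipped (near-zero, opposite sign) score
```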

Given an $n\times n$ matrix with integer entries in the range $[-h,h]$, how close can two of its distinct eigenvalues be? The best previously known examples have a minimum gap of $h^{-O(n)}$. Here we give an explicit construction of matrices with entries in $[0,h]$ with two eigenvalues separated by at most $h^{-n^2/16+o(n^2)}$. Up to a constant in the exponent, this agrees with the known lower bound of $\Omega((2\sqrt{n})^{-n^2}h^{-n^2})$ \cite{mahler1964inequality}. Bounds on the minimum gap are relevant to the worst case analysis of algorithms for diagonalization and computing canonical forms of integer matrices (e.g. \cite{dey2021bit}). In addition to our explicit construction, we show there are many matrices with a slightly larger gap of roughly $h^{-n^2/32}$. We also construct 0-1 matrices which have two eigenvalues separated by at most $2^{-n^2/64+o(n^2)}$.
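
To make the quantity concrete, the minimum gap of a given integer matrix can be computed numerically with a short helper such as the one below (an illustrative utility applied to a random matrix, not the explicit constructions of the paper).

```python
import numpy as np
from itertools import combinations

# Minimum pairwise distance between the eigenvalues of a matrix
# (numerical illustration only; the paper's matrices are explicit constructions).
def min_eigenvalue_gap(A):
    eig = np.linalg.eigvals(A.astype(float))
    return min(abs(a - b) for a, b in combinations(eig, 2))

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(6, 6))   # random integer matrix with entries in [-5, 5]
print(min_eigenvalue_gap(A))
```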

In this paper, we consider a fully discrete approximation of an abstract evolution equation that employs a non-conforming spatial approximation and finite differences in time (Rothe-Galerkin method). The main result is the convergence of the discrete solutions to a weak solution of the continuous problem. Therefore, the result can be interpreted either as a justification of the numerical method or as an alternative way of constructing weak solutions. We formulate the problem in the very general and abstract setting of so-called non-conforming Bochner pseudo-monotone operators, which allows for a unified treatment of several evolution problems. Our abstract results for non-conforming Bochner pseudo-monotone operators allow us to establish (weak) convergence just by verifying a few natural assumptions on the operators at each time and on the discretization spaces. Hence, applications and extensions to several other evolution problems can be performed easily. We exemplify the applicability of our approach with several DG schemes for the unsteady $p$-Navier-Stokes problem. The results of some numerical experiments are reported in the final section.

In this article we present a numerical analysis for a third-order differential equation with non-periodic boundary conditions and time-dependent coefficients, namely, the linear Korteweg-de Vries-Burgers equation. This numerical analysis is motivated by the dispersive and dissipative phenomena that govern this kind of equation. This work builds on previous methods for dispersive equations with constant coefficients, expanding the field to a class of equations with time-dependent coefficients that had so far not been addressed. More precisely, through the Legendre-Petrov-Galerkin method we prove stability and convergence results for the approximation in appropriate weighted Sobolev spaces. These results allow us to show the role and trade-off of these temporal parameters in the model. Afterwards, we numerically investigate the dispersion-dissipation relation for several profiles, and we further provide insights into the implementation of the method, which exhibit the accuracy and efficiency of our numerical algorithms.
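
For orientation, a representative form of such an equation (the exact form, signs, and boundary conditions in the paper may differ; this is only an assumed illustration) is $u_t + a(t)\,u_x - \nu(t)\,u_{xx} + \mu(t)\,u_{xxx} = f$ on $x \in (-1,1)$, where the second-order term with coefficient $\nu(t) \ge 0$ carries the dissipative (Burgers-type) effect and the third-order term with coefficient $\mu(t)$ carries the dispersive (KdV-type) effect; the time dependence of these coefficients is precisely what distinguishes this setting from the constant-coefficient case.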

A critical problem in post hoc explainability is the lack of a common foundational goal among methods. For example, some methods are motivated by function approximation, some by game theoretic notions, and some by obtaining clean visualizations. This fragmentation of goals causes not only an inconsistent conceptual understanding of explanations but also the practical challenge of not knowing which method to use when. In this work, we begin to address these challenges by unifying eight popular post hoc explanation methods (LIME, C-LIME, SHAP, Occlusion, Vanilla Gradients, Gradients x Input, SmoothGrad, and Integrated Gradients). We show that these methods all perform local function approximation of the black-box model, differing only in the neighbourhood and loss function used to perform the approximation. This unification enables us to (1) state a no free lunch theorem for explanation methods which demonstrates that no single method can perform optimally across all neighbourhoods, and (2) provide a guiding principle to choose among methods based on faithfulness to the black-box model. We empirically validate these theoretical results using various real-world datasets, model classes, and prediction tasks. By bringing diverse explanation methods into a common framework, this work (1) advances the conceptual understanding of these methods, revealing their shared local function approximation objective, properties, and relation to one another, and (2) guides the use of these methods in practice, providing a principled approach to choose among methods and paving the way for the creation of new ones.
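
The shared recipe identified by the unification -- sample a neighbourhood around the input, weight the samples, and fit an interpretable surrogate by a (weighted) regression -- can be written in a few lines. The sketch below is a generic local linear approximation in the spirit of LIME/C-LIME (an illustration of the common objective, not the implementation of any of the eight methods); the toy black box and all parameters are assumptions.

```python
import numpy as np

# Local linear approximation of a black-box model f around a point x0:
# sample perturbations, weight them by a proximity kernel, and solve a
# weighted least-squares fit. This is the shared "local function approximation"
# template; the eight methods differ in the neighbourhood and loss they use.
def local_linear_explanation(f, x0, n_samples=2000, sigma=0.3, kernel_width=0.5, seed=0):
    rng = np.random.default_rng(seed)
    d = x0.shape[0]
    X = x0 + sigma * rng.standard_normal((n_samples, d))          # neighbourhood samples
    y = np.array([f(x) for x in X])                               # black-box queries
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / kernel_width**2)  # proximity weights
    A = np.hstack([X - x0, np.ones((n_samples, 1))])              # linear model around x0
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(W * A, np.sqrt(w) * y, rcond=None) # weighted least squares
    return coef[:-1], coef[-1]            # feature attributions and local intercept

# Toy black box (assumed): attributions should be roughly proportional to [2, -1],
# the direction of the local gradient of tanh(2*x1 - x2) at x0.
f = lambda x: np.tanh(2.0 * x[0] - x[1])
x0 = np.array([0.3, -0.2])
slopes, intercept = local_linear_explanation(f, x0)
print(slopes)
```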
