We extend the framework of a posteriori error estimation by preconditioning in [Li, Y., Zikatanov, L.: Computers \& Mathematics with Applications. \textbf{91}, 192-201 (2021)] and derive new a posteriori error estimates for H(curl)-elliptic two-phase interface problems. The proposed error estimator provides two-sided bounds for the discretization error and is robust with respect to coefficient variation under mild assumptions. For H(curl) problems with constant coefficients, the performance of this estimator is numerically compared with the one analyzed in [Sch\"oberl, J.: Math.~Comp. \textbf{77}(262), 633-649 (2008)].

Related content

We consider the classic online problem of scheduling on a single machine to minimize total flow time. In STOC 2021, the concept of robustness to distortion in processing times was introduced: for every distortion factor $\mu$, an $O(\mu^2)$-competitive algorithm $\operatorname{ALG}_{\mu}$ that handles distortions up to $\mu$ was presented. However, using that result requires knowing the distortion of the input in advance, which is impractical. We present the first \emph{distortion-oblivious} algorithms: algorithms which are competitive for \emph{every} input of \emph{every} distortion, and thus do not require knowledge of the distortion in advance. Moreover, the competitive ratios of our algorithms are $\tilde{O}(\mu)$, a quadratic improvement over the algorithm from STOC 2021, and nearly optimal (we show an $\Omega(\mu)$ lower bound on the competitive ratio of any randomized algorithm).
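
To make the setting concrete, the following toy simulator (our own construction, not the paper's algorithm) runs a preemptive shortest-estimated-job-first policy on one machine when the scheduler only sees distorted processing times, and reports the total flow time it actually incurs.

```python
import heapq
import random

def total_flow_time(jobs, estimate):
    """Preemptive 'shortest estimated job first' on a single machine.

    jobs: list of (release_time, true_size); estimate(true_size) is what the
    scheduler sees, assumed within a factor mu of the truth. Toy model of
    the distorted-processing-times setting only.
    """
    jobs = sorted(jobs)              # by release time
    pending = []                     # heap of (estimated_size, job_id)
    true_rem, release = {}, {}
    t, i, flow = 0.0, 0, 0.0
    while i < len(jobs) or pending:
        if not pending:              # machine idle: jump to next release
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:   # admit released jobs
            r, p = jobs[i]
            heapq.heappush(pending, (estimate(p), i))
            true_rem[i], release[i] = p, r
            i += 1
        _, j = pending[0]
        # run the chosen job until it finishes or the next job arrives
        horizon = jobs[i][0] - t if i < len(jobs) else float("inf")
        run = min(true_rem[j], horizon)
        t += run
        true_rem[j] -= run
        if true_rem[j] <= 1e-12:     # finished: add its flow time
            heapq.heappop(pending)
            flow += t - release[j]
    return flow

jobs = [(0.0, 8.0), (1.0, 2.0), (2.0, 4.0)]
mu = 4.0
rng = random.Random(0)
print(total_flow_time(jobs, lambda p: p))                        # true sizes
print(total_flow_time(jobs, lambda p: p * rng.uniform(1, mu)))   # distorted
```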

We propose a computationally friendly adaptive learning-rate schedule, "AdaLoss", which directly uses the value of the loss function to adjust the step size in gradient descent methods. We prove that this schedule enjoys linear convergence for linear regression. Moreover, we provide a linear convergence guarantee in the non-convex regime, in the context of two-layer over-parameterized neural networks. If the width of the first hidden layer in the two-layer network is sufficiently large (polynomially), then AdaLoss converges robustly \emph{to the global minimum} in polynomial time. We numerically verify the theoretical results and extend the scope of the numerical experiments by considering applications in LSTM models for text classification and policy gradients for control problems.
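
As a concrete illustration, here is a minimal Python sketch of a loss-driven step size for linear regression: a running sum of observed losses sets the denominator of the step, so no gradient-norm accumulation is needed. The normalizations and the exact accumulator update here are our own; the paper's schedule and constants may differ.

```python
import numpy as np

def adaloss_gd(X, y, eta=1.0, b0=1.0, steps=500):
    """Gradient descent on the least-squares loss with an AdaLoss-style
    step size eta / sqrt(b_t^2), where b_t^2 accumulates observed losses.
    Sketch under our own normalizations, not the paper's exact schedule."""
    n, d = X.shape
    w = np.zeros(d)
    b2 = b0 ** 2
    for _ in range(steps):
        r = X @ w - y
        loss = 0.5 * np.mean(r ** 2)
        b2 += loss                          # the loss drives the step size
        w -= (eta / np.sqrt(b2)) * (X.T @ r) / n
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
w_hat = adaloss_gd(X, X @ w_true)
print(np.linalg.norm(w_hat - w_true))       # -> close to 0
```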

In this paper we consider the semi-discretization in space of a first-order scalar transport equation. For the space discretization we use standard continuous finite elements, and to obtain stability we add a penalty on the jump of the gradient over element faces. We recall some global error estimates for smooth and rough solutions and then prove a new local error estimate for the transient linear transport equation. In particular, we show that in the stabilized method the effect of non-smooth features in the solution decays exponentially away from the space-time zone where the solution is rough, so that smooth features are transported unperturbed. Locally, the $L^2$-norm of the error converges with the expected order $O(h^{k+\frac12})$. We then illustrate the results numerically. In particular, we show the good local accuracy of the stabilized method in the smooth zone, and that the standard Galerkin method fails to approximate a solution that is smooth at the final time if discontinuities were present in the solution at some earlier time during the evolution.
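
The following self-contained 1D sketch (our construction, with illustrative parameter values) shows the method class in action: continuous P1 elements for $u_t + a u_x = 0$ on a periodic interval, stabilized by the penalty $\gamma \sum_F h^2 [u'][v']$ on gradient jumps across element faces. Setting $\gamma = 0$ recovers the standard Galerkin method, whose oscillations from the initial discontinuity pollute the solution, while the stabilized run keeps them small and localized.

```python
import numpy as np

def cip_transport(N=200, T=0.5, a=1.0, gamma=0.01, cfl=0.2):
    """P1 continuous FEM for u_t + a u_x = 0 on the periodic unit interval,
    with a continuous interior penalty (CIP) on gradient jumps at element
    faces. gamma = 0 gives the plain Galerkin method. Illustrative sketch."""
    h = 1.0 / N
    idx = np.arange(N)
    def circulant(vals, offsets):
        A = np.zeros((N, N))
        for v, o in zip(vals, offsets):
            A[idx, (idx + o) % N] = v
        return A
    M = circulant([4 * h / 6, h / 6, h / 6], [0, 1, -1])   # mass matrix
    C = circulant([a / 2, -a / 2], [1, -1])                # convection
    S = circulant([-2.0, 1.0, 1.0], [0, 1, -1])            # h*[u'] at nodes
    J = gamma * S.T @ S         # penalty gamma * sum_faces h^2 [u'][v']
    A = np.linalg.solve(M, -(C + J))
    u = (np.abs(idx * h - 0.25) < 0.1).astype(float)       # discontinuous data
    dt = cfl * h / a
    for _ in range(int(round(T / dt))):                    # classical RK4
        k1 = A @ u
        k2 = A @ (u + 0.5 * dt * k1)
        k3 = A @ (u + 0.5 * dt * k2)
        k4 = A @ (u + dt * k3)
        u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

u_stab = cip_transport(gamma=0.01)
u_gal = cip_transport(gamma=0.0)
print(max(u_stab) - 1.0, -min(u_stab))   # small, localized over/undershoot
print(max(u_gal) - 1.0, -min(u_gal))     # larger Galerkin oscillations
```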

We propose and study a new multilevel method for the numerical approximation of a Gibbs distribution $\pi$ on $\mathbb{R}^d$, based on (over-damped) Langevin diffusions. This method, inspired by both [PP18] and [GMS+20], relies on a multilevel occupation measure, i.e., on an appropriate combination of $R$ occupation measures of (constant-step) discretized schemes of the Langevin diffusion with respective steps $\gamma_r = \gamma_0 2^{-r}$, $r = 0, \ldots, R$. For a given diffusion, we first state a result under general assumptions which guarantees an $\epsilon$-approximation (in an $L^2$ sense) with a cost proportional to $\epsilon^{-2}$ (i.e., proportional to a Monte Carlo method without bias), or to $\epsilon^{-2}|\log \epsilon|^3$ under less contractive assumptions. This general result is then applied to over-damped Langevin diffusions in a strongly convex setting, with a study of the dependence on the dimension $d$ and on the spectrum of the Hessian matrix $D^2 U$ of the potential $U : \mathbb{R}^d \to \mathbb{R}$ involved in the Gibbs distribution. This leads to strategies with cost in $O(d\epsilon^{-2}\log^3(d\epsilon^{-2}))$, and in $O(d\epsilon^{-2})$ under an additional condition on the third derivatives of $U$. In particular, in our last main result, we show that, up to universal constants, an appropriate choice of the diffusion coefficient and of the parameters of the procedure leads to a cost controlled by $(\bar{\lambda}_U \vee 1)^2 \underline{\lambda}_U^{-3}\, d\epsilon^{-2}$, where $\bar{\lambda}_U$ and $\underline{\lambda}_U$ respectively denote the supremum of the largest and the infimum of the smallest eigenvalue of $D^2 U$. Our numerical illustrations confirm these theoretical bounds in practice, and we finally discuss theoretical and numerical strategies to increase the robustness of the procedure when the largest and smallest eigenvalues of $D^2 U$ are respectively too large or too small.
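
A compact Python sketch of the multilevel construction (level budgets, burn-in handling, and the test potential are ours and purely illustrative): level 0 is the occupation measure of a coarse Euler scheme, and each correction level couples a fine chain of step $\gamma_r$ with a coarse chain of step $2\gamma_r$ through shared Brownian increments.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)   # diffusion coefficient of the over-damped dynamics

def occupation(f, grad_U, x0, gamma, n, rng):
    """Time average of f along an Euler scheme of dX = -grad U dt + sqrt(2) dW."""
    x, acc = np.array(x0, float), 0.0
    for _ in range(n):
        x = x - gamma * grad_U(x) + SQRT2 * np.sqrt(gamma) * rng.normal(size=x.shape)
        acc += f(x)
    return acc / n

def correction(f, grad_U, x0, gamma, n, rng):
    """nu_fine(f) - nu_coarse(f): fine step gamma vs coarse step 2*gamma,
    with shared Brownian increments (two fine increments per coarse step)."""
    xf, xc = np.array(x0, float), np.array(x0, float)
    acc_f, acc_c = 0.0, 0.0
    for _ in range(n // 2):
        dw = SQRT2 * np.sqrt(gamma) * rng.normal(size=(2,) + xf.shape)
        for k in range(2):
            xf = xf - gamma * grad_U(xf) + dw[k]
            acc_f += f(xf)
        xc = xc - 2 * gamma * grad_U(xc) + dw.sum(axis=0)
        acc_c += f(xc)
    return acc_f / (2 * (n // 2)) - acc_c / (n // 2)

def multilevel_estimate(f, grad_U, x0, gamma0=0.5, R=4, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    est = occupation(f, grad_U, x0, gamma0, n, rng)
    for r in range(1, R + 1):
        est += correction(f, grad_U, x0, gamma0 * 2.0 ** -r, n, rng)
    return est

# pi = N(0, I_2) for U(x) = ||x||^2 / 2; exact value of pi(f) is 1
print(multilevel_estimate(lambda x: x[0] ** 2, lambda x: x, np.zeros(2)))
```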

We study the problem of approximating the eigenspectrum of a symmetric matrix $A \in \mathbb{R}^{n \times n}$ with bounded entries (i.e., $\|A\|_{\infty} \leq 1$). We present a simple sublinear time algorithm that approximates all eigenvalues of $A$ up to additive error $\pm \epsilon n$ using those of a randomly sampled $\tilde{O}(\frac{1}{\epsilon^4}) \times \tilde O(\frac{1}{\epsilon^4})$ principal submatrix. Our result can be viewed as a concentration bound on the full eigenspectrum of a random principal submatrix. It significantly extends existing work which shows concentration of just the spectral norm [Tro08]. It also extends work on sublinear time algorithms for testing the presence of large negative eigenvalues in the spectrum [BCJ20]. To complement our theoretical results, we provide numerical simulations, which demonstrate the effectiveness of our algorithm in approximating the eigenvalues of a wide range of matrices.
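
In code the estimator is very short; the sketch below (our variable names) samples an $s \times s$ principal submatrix, scales its eigenvalues by $n/s$, and pads with zeros so that the output has one estimate per eigenvalue of $A$. The paper's alignment of the two spectra is more careful; zero-padding is a coarse stand-in.

```python
import numpy as np

def approx_spectrum(A, s, rng=None):
    """Estimate all eigenvalues of symmetric A (||A||_inf <= 1) up to
    +- eps*n from a random s x s principal submatrix, s ~ poly(1/eps).
    Sketch of the sampling scheme described above."""
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    idx = rng.choice(n, size=s, replace=False)
    sub = np.linalg.eigvalsh(A[np.ix_(idx, idx)])
    est = (n / s) * sub                       # rescale submatrix spectrum
    return np.sort(np.concatenate([est, np.zeros(n - s)]))

rng = np.random.default_rng(1)
n = 2000
u = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)   # planted spike direction
A = 0.9 * n * np.outer(u, u) + 0.05 * rng.uniform(-1, 1, size=(n, n))
A = np.clip((A + A.T) / 2, -1.0, 1.0)              # symmetric, entries in [-1,1]
est = approx_spectrum(A, s=200, rng=rng)
print(est[-1], np.linalg.eigvalsh(A)[-1])          # both close to 0.9*n
```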

Given two sets $S$ and $T$ of points in the plane, of total size $n$, a \emph{many-to-many matching} between $S$ and $T$ is a set of pairs $(p,q)$ such that $p\in S$, $q\in T$, and each $r\in S\cup T$ appears in at least one such pair. The \emph{cost} of a pair $(p,q)$ is the (Euclidean) distance between $p$ and $q$. In the \emph{minimum-cost many-to-many matching} problem, the goal is to compute a many-to-many matching that minimizes the sum of the costs of its pairs. This problem is a restricted version of minimum-weight edge cover in a bipartite graph, and hence can be solved in $O(n^3)$ time. In the more restricted setting where all the points lie on a line, the problem can be solved in $O(n\log n)$ time [Colannino, Damian, Hurtado, Langerman, Meijer, Ramaswami, Souvaine, Toussaint; Graphs Comb., 2007]. However, no progress had been made on improving the cubic time bound in the general planar case. In this paper, we obtain an $O(n^2\cdot \mathrm{poly}(\log n))$-time exact algorithm and an $O(n^{3/2}\cdot \mathrm{poly}(\log n))$-time $(1+\epsilon)$-approximation in the planar case. Our results affirmatively address an open problem posed in [Colannino et al., Graphs Comb., 2007].
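
For intuition, here is the cubic-time-style baseline in Python via the bipartite edge-cover formulation: the covering constraint matrix of a bipartite graph is totally unimodular, so the LP optimum is integral and rounding is exact. This is the reduction named above, not the faster algorithms of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def many_to_many(S, T):
    """Minimum-cost many-to-many matching as a min-weight bipartite
    edge-cover LP (integral by total unimodularity). Simple baseline."""
    S, T = np.asarray(S, float), np.asarray(T, float)
    m, k = len(S), len(T)
    cost = np.linalg.norm(S[:, None, :] - T[None, :, :], axis=2).ravel()
    A = np.zeros((m + k, m * k))      # one row per point: covered >= once
    for i in range(m):
        A[i, i * k:(i + 1) * k] = 1.0
    for j in range(k):
        A[m + j, j::k] = 1.0
    res = linprog(cost, A_ub=-A, b_ub=-np.ones(m + k), bounds=(0, 1))
    x = np.round(res.x).reshape(m, k)
    pairs = [(i, j) for i in range(m) for j in range(k) if x[i, j] > 0]
    return pairs, res.fun

S = [(0.0, 0.0), (2.0, 0.0)]
T = [(1.0, 0.0), (3.0, 0.0), (2.0, 1.0)]
print(many_to_many(S, T))   # three unit-cost pairs, total cost 3.0
```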

In this paper, we study from a theoretical perspective how powerful graph neural networks (GNNs) can be at learning approximation algorithms for combinatorial problems. To this end, we first establish a new class of GNNs that can solve a strictly wider variety of problems than existing GNNs. Then, we bridge the gap between GNN theory and the theory of distributed local algorithms to show that the most powerful GNN in this class can learn approximation algorithms for the minimum dominating set problem and the minimum vertex cover problem with certain approximation ratios, and that no GNN can achieve better ratios. This paper is the first to elucidate approximation ratios of GNNs for combinatorial problems. Furthermore, we prove that adding coloring or weak-coloring to each node feature improves these approximation ratios. This indicates that preprocessing and feature engineering theoretically strengthen model capabilities.
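
The correspondence with local algorithms is easy to see in code: one message-passing layer only moves information across one edge, so an $L$-layer GNN computes an $L$-round distributed local algorithm. Below is a minimal sum-aggregation layer in Python; the paper's most powerful class is strictly more expressive (e.g., via port numbering) but has the same locality.

```python
import numpy as np

def gnn_layer(H, adj, W_self, W_nbr, b):
    """One synchronous message-passing round: each node combines its own
    state with the sum of its neighbors' states, then applies ReLU.
    After L such layers, a node's output depends only on its radius-L
    neighborhood -- exactly the information available to an L-round
    distributed local algorithm."""
    return np.maximum(0.0, H @ W_self + (adj @ H) @ W_nbr + b)

# toy run: a 4-cycle with 2-dimensional node features
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 2))
for _ in range(2):   # a 2-layer (= 2-round) network
    H = gnn_layer(H, adj, rng.normal(size=(2, 2)), rng.normal(size=(2, 2)),
                  np.zeros(2))
print(H)
```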

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (for example, robustness to bounded-norm adversarial perturbations). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a general class of activation functions and of specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking the largest violation of the specification) and solve a Lagrangian relaxation of this problem to obtain an upper bound on the worst-case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation is obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
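
For contrast, here is the simplest sound-but-loose verifier, interval bound propagation, on a toy network of our own: it upper-bounds the worst-case violation of a linear specification over an $\ell_\infty$ ball, which is the same kind of guarantee the Lagrangian relaxation above provides with much tighter bounds. This is a swapped-in simpler technique, not the paper's method.

```python
import numpy as np

def interval_bounds(layers, x, eps):
    """Sound elementwise output bounds for a ReLU network on the l_inf ball
    of radius eps around x. layers = [(W1, b1), ..., (WL, bL)]; ReLU is
    applied after every layer except the last."""
    lo, hi = x - eps, x + eps
    for t, (W, b) in enumerate(layers):
        mid, rad = (lo + hi) / 2, (hi - lo) / 2
        center = W @ mid + b
        radius = np.abs(W) @ rad        # worst-case spread through W
        lo, hi = center - radius, center + radius
        if t < len(layers) - 1:
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# certify "output 0 stays above output 1" on a random toy network
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), np.zeros(8)),
          (rng.normal(size=(2, 8)), np.array([5.0, 0.0]))]
lo, hi = interval_bounds(layers, x=np.zeros(4), eps=0.05)
print("certified margin:", lo[0] - hi[1])   # > 0 means the spec is verified
```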

Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems in which the requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community; however, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP, based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear programming formulation is required: only this formulation, which accounts for the structure of the request graphs, enables the computation of convex combinations of valid mappings. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notions of extraction orders and extraction width, and show that our algorithms have runtime exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show (i) that computing extraction orders of minimal width is NP-hard and (ii) that computing decomposable LP solutions is in general NP-hard, even when request graphs are restricted to planar ones.
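
The rounding step itself is simple once the hard part, decomposing the LP solution into valid mappings, is done. The sketch below uses a hypothetical input format we invented for illustration: per request, a convex decomposition `(weight, profit, load)` of its LP solution; it samples one candidate mapping per request according to its LP weight and keeps it only if the substrate capacities still admit it.

```python
import random

def round_embeddings(requests, capacities, seed=0):
    """Randomized-rounding step for the profit variant. `requests` maps a
    request id to a list of (weight, profit, load) triples whose weights
    sum to <= 1, where `load` maps substrate resources to usage.
    Hypothetical data format; producing the decomposition is the paper's
    LP-based contribution."""
    rng = random.Random(seed)
    used = {res: 0.0 for res in capacities}
    accepted, profit = [], 0.0
    for rid, decomposition in requests.items():
        u, acc = rng.random(), 0.0
        for weight, p, load in decomposition:
            acc += weight
            if u < acc:                   # this mapping was sampled
                if all(used[res] + q <= capacities[res]
                       for res, q in load.items()):
                    for res, q in load.items():
                        used[res] += q
                    accepted.append(rid)
                    profit += p
                break                     # at most one mapping per request
    return accepted, profit

capacities = {"node_u": 2.0, "edge_uv": 1.0}
requests = {
    "r1": [(0.6, 5.0, {"node_u": 1.0, "edge_uv": 0.5})],
    "r2": [(0.5, 3.0, {"node_u": 1.0}), (0.3, 3.0, {"edge_uv": 0.5})],
}
print(round_embeddings(requests, capacities))
```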

This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
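
As a pointer to how short these methods are, here is the basic single-view variant in Python: the matrix is touched only to form two random sketches, and a rank-$r$ approximation is recovered from the sketches alone. Sketch sizes and the truncation step follow common heuristics; the suite above adds structure-preserving and error-bounded refinements.

```python
import numpy as np

def single_view_lowrank(A, r, p=10, rng=None):
    """Rank-r approximation of A recovered from two random sketches
    Y = A @ Om and W = Psi @ A. Basic unstructured variant; the sketch
    sizes k, l and the final truncation are heuristic choices."""
    rng = rng or np.random.default_rng()
    m, n = A.shape
    k, l = r + p, 2 * (r + p)
    Om = rng.normal(size=(n, k))            # right sketch operator
    Psi = rng.normal(size=(l, m))           # left sketch operator
    Y, W = A @ Om, Psi @ A                  # the only accesses to A
    Q, _ = np.linalg.qr(Y)                  # orthonormal range estimate
    X, *_ = np.linalg.lstsq(Psi @ Q, W, rcond=None)   # A ~ Q X
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # truncate to rank r
    return (Q @ U[:, :r]) @ (s[:r, None] * Vt[:r])

rng = np.random.default_rng(0)
A = rng.normal(size=(300, 20)) @ rng.normal(size=(20, 200))  # exactly rank 20
A_hat = single_view_lowrank(A, r=20, rng=rng)
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))  # tiny: exact-rank input
```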
