
We propose a deterministic Kaczmarz algorithm for solving linear systems $Ax=b$. Unlike previous Kaczmarz algorithms, we use reflections in each step of the iteration. This generates a sequence of points distributed with patterns on a sphere centered at a solution. First, we prove that taking the average of $O(\eta/\epsilon)$ such points leads to an effective approximation of the solution up to relative error $\epsilon$, where $\eta$ is a parameter depending on $A$ that is bounded above by the square of the condition number. We also show how to select these points efficiently. In numerical tests, our Kaczmarz algorithm usually converges more quickly than the (block) randomized Kaczmarz algorithms. Second, when the linear system is consistent, the Kaczmarz algorithm returns the solution with minimal distance to the initial vector; this yields a method for solving the least-norm problem. Finally, we prove that our Kaczmarz algorithm in fact solves the linear system $A^TW^{-1}A x = A^TW^{-1} b$, where $W$ is the lower-triangular matrix such that $W+W^T=2AA^T$, and we study the relationship between this linear system and the original one.
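The basic building block described above can be sketched in a few lines. This is a minimal toy illustration, not the authors' algorithm: it uses a naive cyclic row order rather than the paper's efficient point-selection scheme, and simply averages all iterates. Each step reflects the current iterate across the row hyperplane $\{x : a_i \cdot x = b_i\}$, so for a consistent system every iterate stays on a sphere centered at the solution, and the running average drifts toward the center.

```python
# Toy sketch of a reflection-based Kaczmarz iteration (cyclic row order and
# plain averaging are our simplifying assumptions, not the paper's scheme).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reflect(x, a, b):
    """Reflect x across the hyperplane {y : a . y = b}."""
    t = 2.0 * (b - dot(a, x)) / dot(a, a)
    return [xi + t * ai for xi, ai in zip(x, a)]

def reflection_kaczmarz(A, b, x0, sweeps=200):
    x = list(x0)
    avg = [0.0] * len(x0)
    count = 0
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):      # cyclic sweep over the rows
            x = reflect(x, a_i, b_i)    # iterate stays on a sphere around x*
            count += 1
            # running average of all iterates approximates the center x*
            avg = [s + (xi - s) / count for s, xi in zip(avg, x)]
    return avg

# Consistent 2x2 system with exact solution (1, 2).
A = [[2.0, 1.0], [1.0, 3.0]]
b = [4.0, 7.0]
x = reflection_kaczmarz(A, b, [0.0, 0.0])
```

Because each reflection is an isometry fixing the solution, the iterates never leave the sphere of radius $\|x_0 - x^*\|$; it is only the averaging that produces convergence.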


In the graphical calculus of planar string diagrams, equality is generated by exchange moves, which swap the heights of adjacent vertices. We show that left- and right-handed exchanges each give strongly normalizing rewrite strategies for connected string diagrams. We use this result to give a linear-time solution to the equivalence problem in the connected case, and a quadratic solution in the general case. We also give a stronger proof of the Joyal-Street coherence theorem, settling Selinger's conjecture on recumbent isotopy.

Probabilistic zero forcing is a coloring game played on a graph where the goal is to color every vertex blue starting from an initial blue vertex set. As long as the graph is connected, if at least one vertex is blue then eventually all of the vertices will be colored blue. The most studied parameter in probabilistic zero forcing is the expected propagation time starting from a given vertex of $G$. In this paper we improve on the upper bounds for the expected propagation time due to Geneson and Hogben and to Chan et al. in terms of a graph's order and radius. In particular, for a connected graph $G$ of order $n$ and radius $r$, we prove the bound $\text{ept}(G) = O(r\log(n/r))$. We also show, using Doob's Optional Stopping Theorem and a combinatorial object known as a cornerstone, that $\text{ept}(G) \le n/2 + O(\log n)$. Finally, we derive the explicit lower bound $\text{ept}(G)\ge \log_2 \log_2 n$.
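The expected propagation time can be estimated empirically by simulation. The sketch below assumes the standard propagation rule of Kang and Yi (each blue vertex $u$ forces each white neighbor independently with probability $|N[u] \cap B| / \deg(u)$, where $B$ is the current blue set and $N[u]$ the closed neighborhood); it is an illustration of the quantity being bounded, not of the paper's proofs.

```python
# Monte Carlo estimate of expected propagation time for probabilistic zero
# forcing, assuming the standard Kang-Yi rule described in the lead-in.
import random

def pzf_time(adj, start, rng):
    """One run of probabilistic zero forcing; returns rounds until all blue."""
    blue = {start}
    rounds = 0
    while len(blue) < len(adj):
        newly = set()
        for u in blue:
            # |N[u] ∩ B|: u itself plus its blue neighbors
            p_num = sum(1 for v in adj[u] if v in blue) + 1
            for w in adj[u]:
                if w not in blue and rng.random() < p_num / len(adj[u]):
                    newly.add(w)
        blue |= newly
        rounds += 1
    return rounds

# Path P_5 on vertices 0..4, starting from the center vertex 2 (radius r = 2,
# so every run takes at least 2 rounds).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
rng = random.Random(0)
est = sum(pzf_time(adj, 2, rng) for _ in range(2000)) / 2000
```

On this example the estimate sits a little above the radius, consistent with the $O(r \log(n/r))$ upper bound.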

In the $(1+\varepsilon,r)$-approximate near-neighbor problem for curves (ANNC) under some distance measure $\delta$, the goal is to construct a data structure for a given set $\mathcal{C}$ of curves that supports approximate near-neighbor queries: Given a query curve $Q$, if there exists a curve $C\in\mathcal{C}$ such that $\delta(Q,C)\le r$, then return a curve $C'\in\mathcal{C}$ with $\delta(Q,C')\le(1+\varepsilon)r$. There exists an efficient reduction from the $(1+\varepsilon)$-approximate nearest-neighbor problem to ANNC, where in the former problem the answer to a query is a curve $C\in\mathcal{C}$ with $\delta(Q,C)\le(1+\varepsilon)\cdot\delta(Q,C^*)$, where $C^*$ is the curve of $\mathcal{C}$ closest to $Q$. Given a set $\mathcal{C}$ of $n$ curves, each consisting of $m$ points in $d$ dimensions, we construct a data structure for ANNC that uses $n\cdot O(\frac{1}{\varepsilon})^{md}$ storage space and has $O(md)$ query time (for a query curve of length $m$), where the similarity between two curves is their discrete Fr\'echet or dynamic time warping distance. Our method is simple to implement, deterministic, and results in an exponential improvement in both query time and storage space compared to all previous bounds. Further, we also consider the asymmetric version of ANNC, where the length of the query curves is $k \ll m$, and obtain essentially the same storage and query bounds as above, except that $m$ is replaced by $k$. Finally, we apply our method to a version of approximate range counting for curves and achieve similar bounds.
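The discrete Fréchet distance used as the similarity measure above can be computed exactly by the classic Eiter–Mannila dynamic program; a minimal sketch (this is the standard textbook recurrence, not the paper's data structure):

```python
# Discrete Frechet distance via the Eiter-Mannila dynamic program:
# dp[i][j] = max(d(P_i, Q_j), min of the three predecessor entries).
from math import dist, inf

def discrete_frechet(P, Q):
    n, m = len(P), len(Q)
    dp = [[inf] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                dp[i][j] = d
            else:
                best = min(
                    dp[i - 1][j] if i > 0 else inf,
                    dp[i][j - 1] if j > 0 else inf,
                    dp[i - 1][j - 1] if i > 0 and j > 0 else inf,
                )
                dp[i][j] = max(best, d)
    return dp[n - 1][m - 1]

# Two parallel horizontal curves at vertical distance 1.
P = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
Q = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
```

This exact computation costs $O(nm)$ per pair of curves, which is precisely the per-query cost that the ANNC data structure above avoids.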

We first consider the problem of approximating a few eigenvalues of a rational matrix-valued function closest to a prescribed target. It is assumed that the proper rational part of the rational matrix-valued function is expressed in the transfer function form $H(s) = C (sI - A)^{-1} B$, where the middle factor is large, whereas the number of rows of $C$ and the number of columns of $B$ are equal and small. We propose a subspace framework that performs two-sided or one-sided projections on the state-space representation of $H(\cdot)$, commonly employed in model reduction and giving rise to a reduced transfer function. At every iteration, the projection subspaces are expanded to attain Hermite interpolation conditions at the eigenvalues of the reduced transfer function closest to the target, which in turn leads to a new reduced transfer function. We prove that, when a sequence of eigenvalues of the reduced transfer functions converges to an eigenvalue of the full problem, it converges at least at a quadratic rate. In the second part, we extend the proposed framework to locate the eigenvalues of a general square large-scale nonlinear meromorphic matrix-valued function $T(\cdot)$, where we exploit a representation $\mathcal{R}(s) = C(s) A(s)^{-1} B(s) - D(s)$ defined in terms of the block components of $T(\cdot)$. The numerical experiments illustrate that the proposed framework is reliable in locating a few eigenvalues closest to the target point, and that, with respect to runtime, it is competitive with established methods for nonlinear eigenvalue problems.

This paper focuses on stochastic saddle point problems with decision-dependent distributions in both the static and time-varying settings. These are problems whose objective is the expected value of a stochastic payoff function, where random variables are drawn from a distribution induced by a distributional map. For general distributional maps, the problem of finding saddle points is in general computationally burdensome, even if the distribution is known. To enable a tractable solution approach, we introduce the notion of equilibrium points -- which are saddle points for the stationary stochastic minimax problem that they induce -- and provide conditions for their existence and uniqueness. We demonstrate that the distance between the two classes of solutions is bounded provided that the objective has a strongly-convex-strongly-concave payoff and Lipschitz continuous distributional map. We develop deterministic and stochastic primal-dual algorithms and demonstrate their convergence to the equilibrium point. In particular, by modeling errors emerging from a stochastic gradient estimator as sub-Weibull random variables, we provide error bounds in expectation and in high probability that hold for each iteration; moreover, we show convergence to a neighborhood in expectation and almost surely. Finally, we investigate a condition on the distributional map -- which we call opposing mixture dominance -- that ensures the objective is strongly-convex-strongly-concave. Under this assumption, we show that primal-dual algorithms converge to the saddle points in a similar fashion.
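The primal-dual template underlying the algorithms above is plain gradient descent-ascent. The following sketch runs it on a toy strongly-convex-strongly-concave payoff of our own choosing (no decision-dependent distribution, so this illustrates only the deterministic algorithmic skeleton, not the paper's setting): $f(x, y) = x^2/2 + xy - y^2/2$, whose unique saddle point is $(0, 0)$.

```python
# Deterministic gradient descent-ascent on the toy payoff
# f(x, y) = x^2/2 + x*y - y^2/2 (strongly convex in x, strongly concave in y),
# a stand-in for the paper's primal-dual iteration.

def gda(x, y, step=0.1, iters=500):
    for _ in range(iters):
        gx = x + y                            # df/dx
        gy = x - y                            # df/dy
        x, y = x - step * gx, y + step * gy   # descend in x, ascend in y
    return x, y

x, y = gda(1.0, 1.0)
```

For a strongly-convex-strongly-concave payoff with a sufficiently small step size, this iteration contracts geometrically toward the saddle point; the paper's stochastic version replaces the exact gradients with sub-Weibull-error estimators.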

This paper considers the temporal discretization of an inverse problem subject to a time-fractional diffusion equation. First, the convergence of the L1 scheme is established for an arbitrary sectorial operator of spectral angle $< \pi/2$, that is, an operator whose resolvent set contains $\{z\in\mathbb C\setminus\{0\}:\ |\operatorname{Arg} z|< \theta\}$ for some $\pi/2 < \theta < \pi$. The relationship between the time fractional order $\alpha \in (0, 1)$ and the constants in the error estimates is characterized precisely, revealing that the L1 scheme is robust as $\alpha$ approaches $1$. Then an inverse problem for a fractional diffusion equation is analyzed, and a convergence analysis of its temporal discretization is given. Finally, numerical results are provided to confirm the theoretical results.
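For concreteness, the L1 scheme can be written down for the scalar test problem $D_t^\alpha u = -\lambda u$, $u(0)=1$ (a toy stand-in for the sectorial operator above). The weights $b_j = (j+1)^{1-\alpha} - j^{1-\alpha}$ are the standard L1 weights; solving the implicit step for $u_n$ gives the update below.

```python
# L1 scheme for the scalar Caputo-fractional test problem
#   D_t^alpha u = -lam * u,  u(0) = 1,
# on a uniform grid with step tau. At step n the Caputo derivative is
# approximated by (1 / (Gamma(2-alpha) tau^alpha)) * sum_k b_{n-1-k} (u_{k+1} - u_k).
from math import gamma

def l1_scheme(alpha, lam, tau, nsteps):
    u = [1.0]
    c = 1.0 / (gamma(2.0 - alpha) * tau**alpha)
    b = [(j + 1)**(1.0 - alpha) - j**(1.0 - alpha) for j in range(nsteps)]
    for n in range(1, nsteps + 1):
        # history sum over the already-computed increments u_{k+1} - u_k
        s = sum(b[n - 1 - k] * (u[k + 1] - u[k]) for k in range(n - 1))
        # implicit step: c*(u_n - u_{n-1}) + c*s = -lam*u_n, solved for u_n
        u.append(c * (u[-1] - s) / (c + lam))
    return u

u = l1_scheme(alpha=0.5, lam=1.0, tau=0.01, nsteps=100)
```

The exact solution here is the Mittag-Leffler function $E_\alpha(-\lambda t^\alpha)$, which decays monotonically from $1$; the discrete solution tracks it.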

This paper presents local minimax regret lower bounds for adaptively controlling linear-quadratic-Gaussian (LQG) systems. We consider smoothly parametrized instances and provide an understanding of when logarithmic regret is impossible that is both instance-specific and flexible enough to take problem structure into account. This understanding relies on two key notions: local uninformativeness, when the optimal policy does not provide sufficient excitation for identification of the optimal policy and hence yields a degenerate Fisher information matrix; and information-regret-boundedness, when the small eigenvalues of a policy-dependent information matrix can be bounded in terms of the regret of that policy. Combined with a reduction to Bayesian estimation and an application of Van Trees' inequality, these two conditions are sufficient for proving regret lower bounds of order $\sqrt{T}$ in the time horizon $T$. This method yields lower bounds that exhibit tight dimensional dependencies and scale naturally with control-theoretic problem constants. For instance, we are able to prove that systems operating near marginal stability are fundamentally hard to learn to control. We further show that large classes of systems satisfy these conditions, among them any state-feedback system with both the $A$- and $B$-matrices unknown. Most importantly, we also establish that a nontrivial class of partially observable systems, essentially those that are over-actuated, satisfies these conditions, thus providing a $\sqrt{T}$ lower bound that is also valid for partially observable systems. Finally, we turn to two simple examples which demonstrate that our lower bound captures classical control-theoretic intuition: our lower bounds diverge for systems operating near marginal stability or with large filter gain, which can be arbitrarily hard to (learn to) control.

Given an $n$-point metric space $(M,d)$, {\sc metric $1$-median} asks for a point $p\in M$ minimizing $\sum_{x\in M}\,d(p,x)$. We show that for each computable function $f\colon \mathbb{Z}^+\to\mathbb{Z}^+$ satisfying $f(n)=\omega(1)$, {\sc metric $1$-median} has a deterministic, nonadaptive, $o(n)$-query, $o(f(n)\cdot\log n)$-approximation algorithm. Previously, no deterministic $o(n)$-query $o(n)$-approximation algorithms were known for {\sc metric $1$-median}. On the negative side, we prove that, for a sufficiently small constant $\delta>0$, no deterministic $O(n)$-query algorithm for {\sc metric $1$-median} is $(\delta\log n)$-approximate. We also refute the existence of deterministic $o(n)$-query $O(\log n)$-approximation algorithms.
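As a point of contrast for the query bounds above, the exhaustive exact algorithm makes $\Theta(n^2)$ distance queries; a minimal baseline sketch (our own illustration of the problem, not the paper's sublinear-query algorithm):

```python
# Exact metric 1-median by exhaustive search: Theta(n^2) distance queries.
# The paper studies what is achievable with only o(n) queries.

def one_median(points, d):
    """Return the point minimizing the sum of distances to all points."""
    return min(points, key=lambda p: sum(d(p, x) for x in points))

# Points on the real line with the usual metric; the 1-median of
# [0, 1, 2, 3, 10] is 2 (distance sums: 16, 13, 12, 13, 34).
pts = [0, 1, 2, 3, 10]
med = one_median(pts, lambda a, b: abs(a - b))
```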

Consider using the right-preconditioned GMRES for obtaining the minimum-norm solution of underdetermined inconsistent least squares problems. Morikuni (Ph.D. thesis, 2013) showed that for some inconsistent and ill-conditioned problems, the iterates may diverge. This is mainly because the Hessenberg matrix in the GMRES method becomes very ill-conditioned so that the backward substitution of the resulting triangular system becomes numerically unstable. We propose a stabilized GMRES based on solving the normal equations corresponding to the above triangular system using the standard Cholesky decomposition. This has the effect of shifting upwards the tiny singular values of the Hessenberg matrix which lead to an inaccurate solution. Thus, the process becomes numerically stable and the system becomes consistent, rendering better convergence and a more accurate solution. Numerical experiments show that the proposed method is robust and efficient. The method can be considered as a way of making GMRES stable for highly ill-conditioned inconsistent problems.
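The stabilization step described above reduces to a small dense computation: instead of back substitution on the ill-conditioned triangular least-squares system $Hy \approx g$, form the normal equations $(H^TH)y = H^Tg$ and solve them with a Cholesky factorization. The sketch below is a toy dense implementation of that pattern (the matrices arising inside GMRES are small); it is illustrative only and omits the pivoting/shifting subtleties of the actual method.

```python
# Solve the small least-squares system H y ~ g via normal equations and a
# dense Cholesky factorization, as in the stabilization step above.
from math import sqrt

def cholesky(A):
    """Lower-triangular L with L L^T = A, for symmetric positive definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_normal_equations(H, g):
    m, n = len(H), len(H[0])
    HtH = [[sum(H[k][i] * H[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Htg = [sum(H[k][i] * g[k] for k in range(m)) for i in range(n)]
    L = cholesky(HtH)
    # forward substitution: L z = H^T g
    z = [0.0] * n
    for i in range(n):
        z[i] = (Htg[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i]
    # back substitution: L^T y = z
    y = [0.0] * n
    for i in reversed(range(n)):
        y[i] = (z[i] - sum(L[k][i] * y[k] for k in range(i + 1, n))) / L[i][i]
    return y

# 3x2 consistent least-squares example with exact solution y = (1, 2).
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
g = [1.0, 2.0, 3.0]
y = solve_normal_equations(H, g)
```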

In this paper, we study arbitrary infinite binary information systems, each consisting of an infinite set called the universe and an infinite set of two-valued functions (attributes) defined on the universe. We consider the notion of a problem over an information system, described by a finite number of attributes and a mapping that assigns a decision to each tuple of attribute values. As algorithms for problem solving, we use deterministic and nondeterministic decision trees; as measures of time and space complexity, we study the depth and the number of nodes of these trees. In the worst case, as the number of attributes in the problem description grows, (i) the minimum depth of deterministic decision trees grows either almost logarithmically or linearly, (ii) the minimum depth of nondeterministic decision trees is either bounded from above by a constant or grows linearly, (iii) the minimum number of nodes in deterministic decision trees grows either polynomially or exponentially, and (iv) the minimum number of nodes in nondeterministic decision trees grows either polynomially or exponentially. Based on these results, we divide the set of all infinite binary information systems into five complexity classes and, for each class, study issues related to the time-space trade-off for decision trees.
