
In this work we revisit the arithmetic and bit complexity of Hermitian eigenproblems. We first analyze the divide-and-conquer tridiagonal eigensolver of Gu and Eisenstat [GE95] in the Real RAM model, when accelerated with the Fast Multipole Method, and confirm the claimed nearly-$O(n^2)$ complexity for computing a full diagonalization of a symmetric tridiagonal matrix. Combined with the tridiagonal reduction algorithm of Sch\"onhage [Sch72], this implies that a Hermitian matrix can be diagonalized deterministically in $O(n^{\omega}\log(n)+n^2\mathrm{polylog}(n/\epsilon))$ arithmetic operations, where $\omega\lesssim 2.371$ is the square matrix multiplication exponent. This improves on the classic deterministic $O(n^3)$ diagonalization algorithms, and derandomizes the $O(n^{\omega}\log^2(n/\epsilon))$ algorithm of [BGVKS, FOCS '20]. It also has a direct application to the SVD, which is widely used as a subroutine in advanced algorithms but whose complexity and approximation guarantees are often left unspecified. In finite precision, we show that Sch\"onhage's algorithm is stable in floating point using $O(\log(n/\epsilon))$ bits. Combined with the (rational arithmetic) algorithm of Bini and Pan [BP91], it yields a deterministic algorithm to compute all the eigenvalues of a Hermitian matrix in $O\left(n^{\omega}F\left(\log(n/\epsilon)\right)+n^2\mathrm{polylog}(n/\epsilon)\right)$ bit operations, where $F(b)\in\widetilde{O}(b)$ is the bit complexity of a single floating point operation on $b$ bits. This improves on the best known $\widetilde{O}(n^3)$ deterministic and $O\left(n^{\omega}\log^2(n/\epsilon)F\left(\log^4(n/\epsilon)\log(n)\right)\right)$ randomized complexities. We conclude with some other useful subroutines, such as computing spectral gaps, condition numbers, and spectral projectors, and a few open problems.
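As a point of reference for the tridiagonal step, the sketch below uses SciPy's classical (LAPACK-backed) solver `eigh_tridiagonal` to fully diagonalize a random symmetric tridiagonal matrix; this is the textbook baseline, not the FMM-accelerated Gu-Eisenstat algorithm analyzed above, and the size `n = 200` is an arbitrary choice:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(0)
n = 200
d = rng.standard_normal(n)       # main diagonal of T
e = rng.standard_normal(n - 1)   # off-diagonal of T

# w: eigenvalues in ascending order; V: orthonormal eigenvectors (columns)
w, V = eigh_tridiagonal(d, e)

# sanity check the diagonalization T V = V diag(w)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
assert np.allclose(T @ V, V * w, atol=1e-8)
```

Note that `V * w` scales the columns of `V`, i.e. it equals `V @ np.diag(w)`.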

Related content

In this work, we present a novel variant of the stochastic gradient descent method, termed the iteratively regularized stochastic gradient descent (IRSGD) method, to solve nonlinear ill-posed problems in Hilbert spaces. Under standard assumptions, we demonstrate that the mean square iteration error of the method converges to zero for exact data. In the presence of noisy data, we first propose a heuristic parameter choice rule (HPCR) based on the method suggested by Hanke and Raus, and then apply the IRSGD method in combination with HPCR. Notably, HPCR selects the regularization parameter without requiring any a-priori knowledge of the noise level. We show that the method terminates in finitely many steps in the case of noisy data and has regularizing features. Further, we discuss the convergence rates of the method under well-known source and other related conditions, both under HPCR and under the discrepancy principle. To the best of our knowledge, this is the first work that establishes both the regularization properties and convergence rates of a stochastic gradient method using a heuristic-type rule in the setting of infinite-dimensional Hilbert spaces. Finally, we provide numerical experiments to showcase the practical efficacy of the proposed method.
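To make the shape of the iteration concrete, here is a minimal sketch of iterative regularization in the stochastic gradient setting, on a finite-dimensional linear problem with exact data (the paper treats nonlinear operators in Hilbert spaces with noisy data); the step size, the vanishing schedule `alpha_k`, and the problem sizes are illustrative assumptions, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 400, 50
A = rng.standard_normal((m, d))      # forward operator (rows = measurements)
x_true = rng.standard_normal(d)
y = A @ x_true                       # exact data

# IRSGD-style iteration: a stochastic gradient step on one random
# measurement plus a vanishing Tikhonov term alpha_k * x.
x = np.zeros(d)
lr = 0.01
for k in range(20000):
    i = rng.integers(m)
    alpha_k = 1.0 / (1.0 + k)        # regularization parameter -> 0
    x -= lr * (A[i] * (A[i] @ x - y[i]) + alpha_k * x)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With exact, consistent data the stochastic gradients vanish at the solution, so the iterates recover `x_true` despite the single-sample updates.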

In this manuscript, we study the stability of the origin for the multivariate geometric Brownian motion. More precisely, under suitable sufficient conditions, we construct a Lyapunov function such that the origin of the multivariate geometric Brownian motion is globally asymptotically stable in probability. Moreover, we show that these conditions can be rewritten as a Bilinear Matrix Inequality (BMI) feasibility problem. We stress that no commutativity relations between the drift matrix and the noise dispersion matrices are assumed, so the so-called Magnus representation of the solution of the multivariate geometric Brownian motion is complicated. In addition, we exemplify our method on numerous specific models from the literature, such as random linear oscillators, satellite dynamics, inertia systems, diagonal and non-diagonal noise systems, cancer self-remission, and smoking.
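The scalar case (where commutativity is trivial, precisely the situation the multivariate analysis goes beyond) illustrates the stability phenomenon: for $dX_t=\mu X_t\,dt+\sigma X_t\,dW_t$ the exact solution is $X_t=X_0\exp((\mu-\sigma^2/2)t+\sigma W_t)$, so the origin is almost surely stable iff the Lyapunov exponent $\mu-\sigma^2/2$ is negative. A minimal simulation with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, x0 = 0.1, 1.0, 1.0    # Lyapunov exponent mu - sigma^2/2 = -0.4 < 0
T, n = 200.0, 20000
dt = T / n
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))   # sampled Brownian path on (0, T]
t = dt * np.arange(1, n + 1)
X = x0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)  # exact GBM solution
# despite the positive drift mu, the noise drives X_t to 0 almost surely
```

The same criterion fails for $\sigma^2<2\mu$, in which case $X_t\to\infty$ almost surely.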

This paper proposes an efficient and stable quasi-interpolation based method for numerically computing the Helmholtz-Hodge decomposition of a vector field. To this end, we first explicitly construct a matrix kernel in a general form from polyharmonic splines such that it includes divergence-free/curl-free/harmonic matrix kernels as special cases. Then we apply the matrix kernel to vector decomposition via the convolution technique together with the Helmholtz-Hodge decomposition. More precisely, we show that if we convolve a vector field with a scaled divergence-free (curl-free) matrix kernel, then the resulting divergence-free (curl-free) convolution sequence converges to the corresponding divergence-free (curl-free) part of the Helmholtz-Hodge decomposition of the field. Finally, by discretizing the convolution sequence via a certain quadrature rule, we construct a family of (divergence-free/curl-free) quasi-interpolants for the Helmholtz-Hodge decomposition (defined both in the whole space and over a bounded domain). Corresponding error estimates derived in the paper show that our quasi-interpolation based method yields convergent approximants to both the vector field and its Helmholtz-Hodge decomposition.
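For intuition about the decomposition itself, a standard way to compute it on a periodic grid is a Fourier-space projection: the curl-free part is the projection of $\hat{\mathbf{u}}(\mathbf{k})$ onto $\mathbf{k}$, and the divergence-free part is the remainder. This spectral sketch is not the quasi-interpolation construction of the paper, only the target object it approximates:

```python
import numpy as np

def helmholtz_hodge_2d(u, v):
    """Split a periodic field (u, v) on an n x n grid over [0, 2*pi)^2
    into divergence-free and curl-free parts by Fourier projection."""
    n = u.shape[0]
    k = np.fft.fftfreq(n) * n                 # integer wavenumbers
    KX, KY = np.meshgrid(k, k, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                            # avoid 0/0 at the mean mode
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    dot = KX * uh + KY * vh                   # k . u_hat
    ucf = np.fft.ifft2(KX * dot / k2).real    # curl-free (gradient) part
    vcf = np.fft.ifft2(KY * dot / k2).real
    return u - ucf, v - vcf, ucf, vcf
```

On band-limited test fields (e.g. a stream-function flow plus a gradient field) the two parts are recovered to machine precision.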

Logistic regression is a classical model for describing the probabilistic dependence of binary responses on multivariate covariates. We consider the predictive performance of the maximum likelihood estimator (MLE) for logistic regression, assessed in terms of logistic risk. We consider two questions: first, that of the existence of the MLE (which occurs when the dataset is not linearly separated), and second, that of its accuracy when it exists. These properties depend on both the dimension of the covariates and on the signal strength. In the case of Gaussian covariates and a well-specified logistic model, we obtain sharp non-asymptotic guarantees for the existence and excess logistic risk of the MLE. We then generalize these results in two ways: first, to non-Gaussian covariates satisfying a certain two-dimensional margin condition, and second, to the general case of statistical learning with a possibly misspecified logistic model. Finally, we consider the case of a Bernoulli design, where the behavior of the MLE is highly sensitive to the parameter direction.
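The existence question can be seen numerically: on a linearly separated dataset the logistic likelihood has no maximizer, and gradient descent drives the coefficient norm to infinity, whereas on overlapping data the iterates converge to the MLE. A small illustration with synthetic Gaussian covariates (the step size and iteration counts are arbitrary choices, not from the paper):

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

def logistic_fit_norm(X, y, steps, lr=0.5):
    """Gradient descent on the average logistic loss; returns ||w||."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (expit(X @ w) - y) / len(y)
    return np.linalg.norm(w)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y_sep = (X[:, 0] > 0).astype(float)                             # separated: no MLE
y_mix = (X[:, 0] + rng.standard_normal(200) > 0).astype(float)  # overlapping: MLE exists
# ||w|| keeps growing in the separated case and stabilizes in the other
```

In the separated case the iterate direction converges while its norm diverges (roughly logarithmically in the iteration count), which is exactly the non-existence of the MLE.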

Understanding adversarial examples is crucial for improving model robustness, as they introduce imperceptible perturbations that deceive models. Effective adversarial examples therefore offer the potential to train more robust models by removing their singularities. We propose NODE-AdvGAN, a novel approach that treats adversarial generation as a continuous process and employs a Neural Ordinary Differential Equation (NODE) to simulate the dynamics of the generator. By mimicking the iterative nature of traditional gradient-based methods, NODE-AdvGAN generates smoother and more precise perturbations that preserve high perceptual similarity when added to benign images. We also propose a new training strategy, NODE-AdvGAN-T, which enhances transferability in black-box attacks by effectively tuning noise parameters during training. Experiments demonstrate that NODE-AdvGAN and NODE-AdvGAN-T generate more effective adversarial examples, achieving higher attack success rates while preserving better perceptual quality than traditional GAN-based methods.

Overlaps between words are crucial in many areas of computer science, such as code design, stringology, and bioinformatics. A self-overlapping word is characterized by its periods and borders. A period of a word $u$ is the starting position of a suffix of $u$ that is also a prefix of $u$, and such a suffix is called a border. Each word of length $n>0$ has a set of periods, but not all sets of integers are sets of periods. Computing the period set of a word $u$ takes time linear in the length of $u$. We address the question of computing the set, denoted $\Gamma_n$, of all period sets of words of length $n$. Although period sets have been characterized, there is no formula for the cardinality of $\Gamma_n$ (which is exponential in $n$), and the known dynamic programming algorithm to enumerate $\Gamma_n$ suffers from its space complexity. We present an incremental approach to compute $\Gamma_n$ from $\Gamma_{n-1}$, which reduces the space complexity, and then a constructive certification algorithm useful for verification purposes. The incremental approach defines a parental relation between sets in $\Gamma_{n-1}$ and $\Gamma_n$, enabling one to investigate the dynamics of period sets and their intriguing statistical properties. Moreover, the period set of a word $u$ is the key to computing the absence probability of $u$ in random texts. Thus, knowing $\Gamma_n$ is useful for assessing the significance of word statistics, such as the number of missing words in a random text.
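As a concrete illustration of the definitions, the period set of a single word can be read off the Knuth-Morris-Pratt failure function in linear time: the nonempty borders of $u$ form a chain under the failure function, and a border of length $b$ corresponds to the period $|u|-b$. A minimal sketch:

```python
def period_set(u: str) -> set[int]:
    """All periods of u in O(|u|) time via the KMP failure function.
    A period p corresponds to a nonempty border of length |u| - p."""
    n = len(u)
    f = [0] * n                 # f[i] = length of longest border of u[:i+1]
    k = 0
    for i in range(1, n):
        while k > 0 and u[i] != u[k]:
            k = f[k - 1]
        if u[i] == u[k]:
            k += 1
        f[i] = k
    periods = set()
    b = f[n - 1]                # walk the chain of borders of u
    while b > 0:
        periods.add(n - b)
        b = f[b - 1]
    return periods

# "abaaba" has borders "aba" and "a", hence periods {3, 5}
```

Here a period counts as a position $0 < p < n$, i.e. the trivial empty border is excluded.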

In this study, we present an optimal implicit algorithm designed to accurately solve the multi-species nonlinear 0D-2V axisymmetric Fokker-Planck-Rosenbluth (FPR) collision equation while preserving mass, momentum, and energy. Our approach relies on the nonlinear Shkarofsky formulation of the FPR (FPRS) collision operator in spherical-polar coordinates. The key innovation lies in the introduction of a new function, named King, together with a Legendre polynomial expansion in the angular coordinate and a King function expansion in the speed coordinate. The Legendre polynomial expansion converges exponentially, and the King method, a moment convergence algorithm, ensures conservation with high precision in discrete form. Additionally, post-step projection onto manifolds is employed to exactly enforce the symmetries of the collision operator. By solving several typical problems across various nonequilibrium configurations, we demonstrate the high accuracy and superior performance of the presented algorithm for weakly anisotropic plasmas.

We consider the application of the generalized Convolution Quadrature (gCQ) to approximate the solution of an important class of sectorial problems. The gCQ is a generalization of Lubich's Convolution Quadrature (CQ) that allows for variable time steps. The available stability and convergence theory for the gCQ requires unrealistic regularity assumptions on the data, which do not hold in many applications of interest, such as the approximation of subdiffusion equations. It is well known that for insufficiently smooth data the original CQ, with uniform steps, suffers an order reduction close to the singularity. We generalize the analysis of the gCQ to data satisfying realistic regularity assumptions and provide sufficient conditions for stability and convergence on arbitrary sequences of time points. We consider the particular case of graded meshes and show how to choose them optimally, according to the behaviour of the data. An important advantage of the gCQ method is that it allows for a fast and memory-reduced implementation. We describe how the fast and oblivious gCQ can be implemented and illustrate our theoretical results with several numerical experiments.

We carry out a stability and convergence analysis for the fully discrete scheme obtained by combining a finite or virtual element spatial discretization with upwind-discontinuous Galerkin time-stepping applied to the time-dependent advection-diffusion equation. A space-time streamline-upwind Petrov-Galerkin term is used to stabilize the method. More precisely, we show that the method is inf-sup stable with a constant independent of the diffusion coefficient, which ensures the robustness of the method in both the convection-dominated and diffusion-dominated regimes. Moreover, we prove optimal convergence rates in both regimes for the error in the energy norm. An important feature of the presented analysis is the control of the full $L^2(0,T;L^2(\Omega))$ norm without the need to introduce an artificial reaction term in the model. We finally present some numerical experiments in $(3+1)$ dimensions that validate our theoretical results.

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
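Shepard's law states that generalization from one stimulus to another decays exponentially with distance in psychological similarity space. The sketch below shows only this core kernel that the comparison step builds on; the paper's full model of explainee inference involves considerably more than this (the `scale` parameter is an illustrative free parameter):

```python
import numpy as np

def shepard_generalization(x, y, scale=1.0):
    """Generalization strength between two points in similarity space,
    per Shepard's universal law: exp(-d / scale)."""
    d = np.linalg.norm(np.asarray(x, float) - np.asarray(y, float))
    return np.exp(-d / scale)
```

Identical stimuli generalize perfectly (strength 1), and the strength decays monotonically as the stimuli move apart in similarity space.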
