
Given a finite set of matrices with integer entries, the matrix mortality problem asks whether some product of these matrices equals the zero matrix. We consider the special case of this problem in which all entries of the matrices are nonnegative. This case is equivalent to the NFA mortality problem, which, given an NFA, asks for a word $w$ such that the image of every state under $w$ is the empty set. The size of the alphabet of the NFA equals the number of matrices in the set. We study how the length of the shortest such word depends on the size of the alphabet. We show that for an NFA with $n$ states this length can be at least $2^n - 1$ for an alphabet of size $n$, $2^{(n - 4)/2}$ for an alphabet of size $3$, and $2^{(n - 2)/3}$ for an alphabet of size $2$. We also discuss further open problems related to the mortality of NFAs and DFAs.
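As a minimal illustration of the equivalence, a letter of an NFA over states $\{0,\dots,n-1\}$ corresponds to a nonnegative $n \times n$ matrix, and a mortal word to a product equal to zero; a shortest mortal word can be found by breadth-first search over subsets of states, as in the following sketch (the two-state example NFA is our own, not one from the paper).

```python
from collections import deque

def shortest_mortal_word(n, letters):
    """Breadth-first search over subsets of states of an n-state NFA.

    `letters` maps each letter to its transition relation, given as a
    dict state -> set of successor states (states absent from the dict
    map to the empty set).  Returns a shortest word w whose image on
    every state is empty, i.e. delta(Q, w) = {}, or None if the NFA is
    not mortal.  The subset graph has 2^n nodes, which matches the
    exponential lower bounds discussed above.
    """
    def step(subset, delta):
        image = set()
        for q in subset:
            image |= delta.get(q, set())
        return frozenset(image)

    start = frozenset(range(n))
    seen = {start: ""}
    queue = deque([start])
    while queue:
        subset = queue.popleft()
        if not subset:
            return seen[subset]
        for a, delta in letters.items():
            nxt = step(subset, delta)
            if nxt not in seen:
                seen[nxt] = seen[subset] + a
                queue.append(nxt)
    return None

# Tiny two-state, two-letter example (our own, not from the paper):
# 'a' deletes state 0 and maps 1 to 0; 'b' deletes state 1.
letters = {"a": {1: {0}}, "b": {0: {0, 1}}}
print(shortest_mortal_word(2, letters))  # prints "aa"
```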

Related content


Approximating a univariate function on the interval $[-1,1]$ with a polynomial is among the most classical problems in numerical analysis. When the function evaluations come with noise, a least-squares fit is known to reduce the effect of noise as more samples are taken. The generic algorithm for the least-squares problem requires $O(Nn^2)$ operations, where $N+1$ is the number of sample points and $n$ is the degree of the polynomial approximant. This algorithm is unstable when $n$ is large, for example $n\gg \sqrt{N}$ for equispaced sample points. In this study, we blend numerical analysis and statistics to introduce a stable and fast $O(N\log N)$ algorithm called NoisyChebtrunc based on Chebyshev interpolation. It has the same error-reduction effect as least-squares, and the convergence is spectral until the error reaches $O(\sigma \sqrt{{n}/{N}})$, where $\sigma$ is the noise level, after which the error continues to decrease at the Monte Carlo rate $O(1/\sqrt{N})$. To determine the polynomial degree, NoisyChebtrunc employs a statistical criterion, namely Mallows' $C_p$. We analyze NoisyChebtrunc in terms of the variance and the concentration, in the infinity norm, of its output around the underlying noiseless function. These results show that, with high probability, the infinity-norm error is bounded by a small constant times $\sigma \sqrt{{n}/{N}}$ when the noise is independent and follows a subgaussian or subexponential distribution. We illustrate the performance of NoisyChebtrunc with numerical experiments.
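As a rough illustration of the mechanism (not the paper's implementation), the following sketch computes the Chebyshev coefficients of the degree-$N$ interpolant through noisy samples with a DCT-I in $O(N \log N)$ and then truncates to a fixed degree $n$; the paper instead selects $n$ automatically via Mallows' $C_p$, and the test function and parameters below are arbitrary choices.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)

def cheb_coeffs(y):
    """Chebyshev coefficients of the degree-N interpolant through the
    values y at the Chebyshev points x_k = cos(pi*k/N), k = 0..N,
    computed in O(N log N) with a DCT of type I."""
    N = len(y) - 1
    c = dct(y, type=1) / N
    c[0] /= 2
    c[-1] /= 2
    return c

# Noisy samples of a smooth function at N+1 Chebyshev points.
f = lambda x: np.exp(x) * np.sin(5 * x)
N, sigma = 2000, 1e-2
x = np.cos(np.pi * np.arange(N + 1) / N)
y = f(x) + sigma * rng.standard_normal(N + 1)

c = cheb_coeffs(y)

# Truncation to degree n acts like a least-squares fit and averages
# the noise down to O(sigma * sqrt(n / N)).  Here n is fixed by hand
# for illustration, not chosen by Mallows' C_p.
n = 40
p = np.polynomial.chebyshev.Chebyshev(c[: n + 1])
t = np.linspace(-1, 1, 1001)
print("max error:", np.max(np.abs(p(t) - f(t))))
```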

In this work, the infinite GMRES algorithm, recently proposed by Correnty et al., is employed in contour integral-based nonlinear eigensolvers, avoiding the computation of costly factorizations at each quadrature node when solving the linear systems. Several techniques are applied to make infinite GMRES memory-friendly, computationally efficient, and numerically stable in practice. More specifically, we analyze the relationship between polynomial eigenvalue problems and their scaled linearizations, and provide a novel weighting strategy which can significantly accelerate the convergence of infinite GMRES in this particular context. We also adapt the TOAR technique to infinite GMRES to reduce the memory footprint. Theoretical analysis and numerical experiments are provided to illustrate the efficiency of the proposed algorithm.
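For context, a bare-bones contour-integral nonlinear eigensolver of Beyn type looks as follows; the dense solve at each quadrature node is the per-node factorization cost that the infinite GMRES approach avoids. The quadratic test problem and all parameters here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def beyn(T, n, center, radius, K=64, ell=8, tol=1e-10, seed=0):
    """Beyn-type contour-integral eigensolver sketch.

    T(z) returns the n x n matrix of the nonlinear eigenproblem
    T(z) v = 0.  Eigenvalues inside the circle |z - center| = radius
    are recovered from two quadrature moments; ell must be at least
    the number of enclosed eigenvalues.  Each node costs one dense
    solve here -- the step the paper replaces with infinite GMRES.
    """
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((n, ell))
    A0 = np.zeros((n, ell), dtype=complex)
    A1 = np.zeros((n, ell), dtype=complex)
    for k in range(K):  # trapezoidal rule on the circle
        phase = np.exp(2j * np.pi * k / K)
        zk = center + radius * phase
        X = np.linalg.solve(T(zk), V)          # costly per-node solve
        A0 += (radius * phase / K) * X
        A1 += (radius * phase / K) * zk * X
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))            # numerical rank
    U, s, Wh = U[:, :r], s[:r], Wh[:r]
    B = (U.conj().T @ A1 @ Wh.conj().T) / s    # r x r reduced problem
    return np.linalg.eigvals(B)

# Quadratic eigenvalue problem T(z) = z^2 M + z C + K0 (toy data).
n = 50
rng = np.random.default_rng(1)
M, C, K0 = np.eye(n), 0.1 * rng.standard_normal((n, n)), rng.standard_normal((n, n))
T = lambda z: z * z * M + z * C + K0
print(beyn(T, n, center=0.0, radius=1.0))
```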

We explore a scaled spectral preconditioner for the efficient solution of sequences of symmetric positive-definite linear systems. We design the scaled preconditioner not only as an approximation of the inverse of the linear system but also with consideration of its use within the conjugate gradient (CG) method. We propose three strategies for selecting a scaling parameter, each aiming to position the eigenvalues of the preconditioned matrix so as to reduce the energy norm of the error, the quantity that CG monotonically decreases at each iteration. Our focus is on accelerating convergence especially in the early iterations, which is particularly important when CG is truncated due to computational cost constraints. Numerical experiments in data assimilation confirm that the scaled spectral preconditioner can significantly improve early CG convergence with negligible computational cost.
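One common form of spectral preconditioner, sketched below under the assumption that a few dominant eigenpairs of the system matrix are available, deflates those directions and scales the remainder of the spectrum by a parameter $\alpha$. The values of $\alpha$ compared here are purely illustrative and do not reproduce the paper's three selection strategies.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg, eigsh

rng = np.random.default_rng(0)

# SPD test matrix: a well-conditioned bulk plus 10 large outliers.
n = 400
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = np.concatenate([rng.uniform(1, 10, n - 10), np.geomspace(1e2, 1e4, 10)])
A = Q @ np.diag(d) @ Q.T
b = rng.standard_normal(n)

# A few dominant eigenpairs: the directions to deflate.
lam, U = eigsh(A, k=10, which="LM")

def make_prec(alpha):
    # M^{-1} x = alpha * (x - U U^T x) + U diag(1/lam) U^T x:
    # deflated directions are mapped to 1, and the rest of the
    # spectrum is scaled by alpha.
    def apply(x):
        y = U.T @ x
        return alpha * (x - U @ y) + U @ (y / lam)
    return LinearOperator((n, n), matvec=apply)

def cg_iters(M):
    count = []
    cg(A, b, M=M, callback=lambda xk: count.append(1))
    return len(count)

print("no preconditioner :", cg_iters(None))
print("alpha = 1         :", cg_iters(make_prec(1.0)))
print("alpha = 2/(1+10)  :", cg_iters(make_prec(2.0 / 11.0)))
```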

We propose a new step-wise approach to proving observational equivalence, and in particular to reasoning about fragility of observational equivalence. Our approach is based on what we call local reasoning, which exploits the graphical concept of neighbourhood and extracts a new, formal concept of robustness as a key sufficient condition for observational equivalence. Moreover, our proof methodology is capable of proving a generalised notion of observational equivalence, which can be quantified over syntactically restricted contexts instead of all contexts, and also quantitatively constrained in terms of the number of reduction steps. The operational machinery we use is given by a hypergraph-rewriting abstract machine inspired by Girard's Geometry of Interaction. The behaviour of language features, including function abstraction and application, is provided by hypergraph-rewriting rules. We demonstrate our proof methodology using the call-by-value lambda-calculus equipped with (higher-order) state.

Neutron noise analysis is a predominant technique for fissile matter identification with passive methods. Quantifying the uncertainties associated with the estimated nuclear parameters is crucial for decision-making. A conservative uncertainty quantification procedure is possible by solving a Bayesian inverse problem with the help of statistical surrogate models but generally leads to large uncertainties due to the surrogate models' errors. In this work, we develop two methods for robust uncertainty quantification in neutron and gamma noise analysis based on the resolution of Bayesian inverse problems. We show that the uncertainties can be reduced by including information on gamma correlations. The investigation of a joint analysis of the neutron and gamma observations is also conducted with the help of active learning strategies to fine-tune surrogate models. We test our methods on a model of the SILENE reactor core, using simulated and real-world measurements.
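The following toy sketch illustrates the general idea of conservative surrogate-based Bayesian inversion (the forward model, surrogate, and error bound are made up and unrelated to the paper's reactor model or observables): folding a surrogate error bound into the likelihood variance keeps the posterior valid at the price of wider credible intervals, which is the source of the large uncertainties mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

G = lambda t: t + 0.3 * np.sin(t)              # "expensive" model
G_hat = lambda t: G(t) + 0.05 * np.cos(7 * t)  # cheap surrogate
surrogate_err = 0.05                           # sup-norm error bound

theta_true, sigma = 0.8, 0.02
y = G(theta_true) + sigma * rng.standard_normal()

grid = np.linspace(0.0, 2.0, 2001)             # uniform prior on [0, 2]
dx = grid[1] - grid[0]

def posterior(model, noise_var):
    logp = -0.5 * (y - model(grid)) ** 2 / noise_var
    p = np.exp(logp - logp.max())
    return p / (p.sum() * dx)

def std(p):
    m = (grid * p).sum() * dx
    return np.sqrt(((grid - m) ** 2 * p).sum() * dx)

# Conservative treatment: inflate the likelihood variance by the
# surrogate error bound, widening the posterior instead of biasing it.
print("exact model posterior std  :", std(posterior(G, sigma**2)))
print("conservative surrogate std :",
      std(posterior(G_hat, sigma**2 + surrogate_err**2)))
```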

We propose a fast scheme for approximating the Mittag-Leffler function by an efficient sum-of-exponentials (SOE), and apply the scheme to the viscoelastic model of wave propagation, with mixed finite element methods for the spatial discretization and the Newmark-beta scheme for the second-order temporal derivative. Compared with the traditional L1 scheme for the fractional derivative, our fast scheme reduces the memory complexity from $\mathcal O(N_sN)$ to $\mathcal O(N_sN_{exp})$ and the computational complexity from $\mathcal O(N_sN^2)$ to $\mathcal O(N_sN_{exp}N)$, where $N$ denotes the total number of temporal grid points, $N_{exp}$ is the number of exponentials in the SOE, and $N_s$ represents the complexity of memory and computation related to the spatial discretization. Numerical experiments are provided to verify the theoretical results.
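The memory saving comes from the standard SOE recursion: each exponential mode of the history integral is updated in $O(1)$ work per time step, so the full history never needs to be stored. The sketch below uses a made-up three-term SOE kernel in place of the paper's Mittag-Leffler approximation, and a first-order rectangle rule for the local update.

```python
import numpy as np

# Toy SOE kernel K(t) ~ sum_j w_j * exp(-s_j * t); the nodes and
# weights are invented for illustration, not the paper's SOE for
# the Mittag-Leffler function.
s = np.array([0.5, 5.0, 50.0])
w = np.array([0.6, 0.3, 0.1])
K = lambda t: (w * np.exp(-np.outer(t, s))).sum(axis=1)

f = lambda t: np.sin(t)
T, N = 4.0, 2000
dt = T / N
t = np.linspace(0, T, N + 1)

# Direct evaluation of the history integral: O(N^2) work,
# O(N) memory for the stored history.
I_direct = np.array(
    [np.sum(K(tn - t[: k + 1]) * f(t[: k + 1])) * dt
     for k, tn in enumerate(t)]
)

# SOE recursion: O(N * N_exp) work, O(N_exp) memory.  Each mode
# obeys u_j(t + dt) = exp(-s_j dt) u_j(t) + w_j f(t + dt) dt.
decay = np.exp(-s * dt)
u = np.zeros_like(s)
I_fast = np.empty(N + 1)
I_fast[0] = 0.0
for k in range(1, N + 1):
    u = decay * u + w * f(t[k]) * dt
    I_fast[k] = u.sum()

print("max discrepancy:", np.abs(I_direct - I_fast).max())
```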

We formalize a complete proof of the regular case of Fermat's Last Theorem in the Lean 4 theorem prover. Our formalization includes a proof of Kummer's lemma, which is the main obstruction to Fermat's Last Theorem for regular primes. Rather than following the modern proof of Kummer's lemma via class field theory, we prove it by using Hilbert's Theorems 90-94 in a way that is more amenable to formalization.
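For readers unfamiliar with such formalizations, a Lean 4 statement of this result has roughly the following shape; the identifiers and hypotheses below are illustrative placeholders, not the declarations of the actual development.

```lean
import Mathlib

-- Illustrative shape only: `IsRegularPrime` and the hypothesis
-- names are placeholders, not the formalization's declarations.
def IsRegularPrime (p : ℕ) : Prop :=
  sorry -- p does not divide the class number of ℚ(ζ_p)

theorem flt_regular {p : ℕ} (hp : p.Prime) (hreg : IsRegularPrime p)
    (hodd : p ≠ 2) {a b c : ℤ} (h : a ^ p + b ^ p = c ^ p) :
    a * b * c = 0 := by
  sorry
```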

Randomized matrix algorithms have become workhorse tools in scientific computing and machine learning. To use these algorithms safely in applications, they should be coupled with posterior error estimates to assess the quality of the output. To meet this need, this paper proposes two diagnostics: a leave-one-out error estimator for randomized low-rank approximations and a jackknife resampling method to estimate the variance of the output of a randomized matrix computation. Both of these diagnostics are rapid to compute for randomized low-rank approximation algorithms such as the randomized SVD and randomized Nystr\"om approximation, and they provide useful information that can be used to assess the quality of the computed output and guide algorithmic parameter choices.
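In the spirit of the proposal, the sketch below forms jackknife replicates of a randomized SVD approximation by deleting one column of the random test matrix at a time. For clarity it recomputes each replicate naively; the paper's estimators reuse the already-computed factors and are therefore cheap.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_svd_approx(A, Omega):
    """Rank-k approximation Q Q^T A from the single sketch A @ Omega."""
    Q, _ = np.linalg.qr(A @ Omega)
    return Q @ (Q.T @ A)

# Toy matrix with rapidly decaying spectrum.
m = 200
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
A = U @ np.diag(0.8 ** np.arange(m)) @ U.T

k = 20
Omega = rng.standard_normal((m, k))
Ahat = rand_svd_approx(A, Omega)

# Jackknife: leave out one column of the test matrix at a time.
# (Naive recomputation for clarity only.)
reps = np.stack(
    [rand_svd_approx(A, np.delete(Omega, j, axis=1)) for j in range(k)]
)
mean_rep = reps.mean(axis=0)
jack_var = (k - 1) / k * sum(
    np.linalg.norm(R - mean_rep, "fro") ** 2 for R in reps
)

print("actual error :", np.linalg.norm(A - Ahat, "fro"))
print("jackknife SE :", np.sqrt(jack_var))
```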

We study high-dimensional, ridge-regularized logistic regression in a setting in which the covariates may be missing or corrupted by additive noise. When both the covariates and the additive corruptions are independent and normally distributed, we provide exact characterizations of both the prediction error as well as the estimation error. Moreover, we show that these characterizations are universal: as long as the entries of the data matrix satisfy a set of independence and moment conditions, our guarantees continue to hold. Universality, in turn, enables the detailed study of several imputation-based strategies when the covariates are missing completely at random. We ground our study by comparing the performance of these strategies with the conjectured performance -- stemming from replica theory in statistical physics -- of the Bayes optimal procedure. Our analysis yields several insights including: (i) a distinction between single imputation and a simple variant of multiple imputation and (ii) that adding a simple ridge regularization term to single-imputed logistic regression can yield an estimator whose prediction error is nearly indistinguishable from the Bayes optimal prediction error. We supplement our findings with extensive numerical experiments.
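A minimal experiment in this setting (synthetic MCAR data, single mean-imputation, scikit-learn's ridge-penalized logistic regression) might look as follows; the dimensions, missingness rate, and regularization grid are arbitrary choices for illustration, and none of the paper's exact characterizations are reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic logistic data with covariates missing completely at random.
n, d, miss_prob = 2000, 50, 0.3
beta = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta))).astype(int)

X_obs = X.copy()
mask = rng.random((n, d)) < miss_prob
X_obs[mask] = np.nan

# Single imputation: replace each missing entry by the column mean.
col_mean = np.nanmean(X_obs, axis=0)
X_imp = np.where(np.isnan(X_obs), col_mean, X_obs)

# Ridge-regularized logistic regression (C = 1 / regularization strength).
for C in [0.01, 0.1, 1.0, 100.0]:
    clf = LogisticRegression(penalty="l2", C=C, max_iter=1000).fit(X_imp, y)
    est_err = np.linalg.norm(clf.coef_.ravel() - beta)
    print(f"C = {C:6.2f}: train acc = {clf.score(X_imp, y):.3f}, "
          f"||beta_hat - beta|| = {est_err:.3f}")
```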

We construct an interpolatory high-order cubature rule to compute integrals of smooth functions over self-affine sets with respect to an invariant measure. The main difficulty is the computation of the cubature weights, which we characterize algebraically, by exploiting a self-similarity property of the integral. We propose an $h$-version and a $p$-version of the cubature, present an error analysis and conduct numerical experiments.
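To make the self-similarity property concrete, the sketch below computes the polynomial moments of the invariant measure of an affine IFS from the fixed-point identity $\int f \, d\mu = \sum_i p_i \int f \circ S_i \, d\mu$, and then obtains interpolatory cubature weights by matching those moments on a set of nodes. The Cantor measure and equispaced nodes are our illustrative choices; the paper characterizes the weights algebraically.

```python
import numpy as np
from math import comb, factorial

def moments(a, b, p, n):
    """Moments M_k = integral of x^k d(mu) for the invariant measure
    of the IFS  S_i(x) = a[i]*x + b[i]  with probabilities p[i].
    Applying the self-similarity identity to f(x) = x^k gives a
    triangular linear system, solved here degree by degree."""
    M = np.zeros(n + 1)
    M[0] = 1.0
    for k in range(1, n + 1):
        rhs = sum(
            pi * comb(k, j) * ai**j * bi ** (k - j) * M[j]
            for ai, bi, pi in zip(a, b, p)
            for j in range(k)
        )
        M[k] = rhs / (1.0 - sum(pi * ai**k for ai, pi in zip(a, p)))
    return M

# Middle-thirds Cantor measure: two maps with equal weights.
a, b, p = [1 / 3, 1 / 3], [0.0, 2 / 3], [0.5, 0.5]
n = 8
M = moments(a, b, p, n)

# Interpolatory cubature: the weights w solve V w = M, where row k of
# V contains the k-th powers of the nodes.
nodes = np.linspace(0, 1, n + 1)
V = np.vander(nodes, increasing=True).T
w = np.linalg.solve(V, M)

# Sanity check on a smooth integrand: integral of exp(x) d(mu).
print("cubature:", w @ np.exp(nodes))
print("series  :", sum(M[k] / factorial(k) for k in range(n + 1)))
```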
