
This paper combines modern numerical computation with theoretical results to improve our understanding of the growth factor problem for Gaussian elimination. On the computational side we obtain lower bounds on the maximum growth factor for complete pivoting for $n=1:75$ and $n=100$ using the Julia JuMP optimization package. At $n=100$ we obtain a growth factor bigger than $3n$. The numerical evidence suggests that the maximum growth factor is bigger than $n$ if and only if $n \ge 11$. We also present a number of theoretical results. We show that the maximum growth factor over matrices with entries restricted to a subset of the reals is nearly equal to the maximum growth factor over all real matrices. We also show that the growth factors under floating point arithmetic and exact arithmetic are nearly identical. Finally, through numerical search, and stability and extrapolation results, we provide improved lower bounds for the maximum growth factor. Specifically, we find that the largest growth factor is bigger than $1.0045n$, and the lim sup of the ratio of the maximum growth factor to $n$ is greater than or equal to $3.317$. In contrast to the old conjecture that growth might never be bigger than $n$, it seems likely that the maximum growth divided by $n$ goes to infinity as $n \rightarrow \infty$.
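
For concreteness, here is a minimal sketch (using numpy, not the JuMP optimization of the paper) of how the growth factor of Gaussian elimination with complete pivoting can be measured for a given matrix; the function name and the random test matrix are purely illustrative.

```python
import numpy as np

def growth_factor_complete_pivoting(A):
    """Growth factor of Gaussian elimination with complete pivoting:
    the largest absolute entry appearing in any intermediate matrix,
    divided by the largest absolute entry of the original matrix."""
    U = A.astype(float).copy()
    n = U.shape[0]
    max_orig = np.abs(U).max()
    max_seen = max_orig
    for k in range(n - 1):
        # Complete pivoting: bring the largest remaining entry to position (k, k).
        sub = np.abs(U[k:, k:])
        i, j = np.unravel_index(sub.argmax(), sub.shape)
        U[[k, k + i], :] = U[[k + i, k], :]
        U[:, [k, k + j]] = U[:, [k + j, k]]
        # Eliminate below the pivot and track the largest entry seen so far.
        pivot = U[k, k]
        U[k + 1:, k:] -= np.outer(U[k + 1:, k] / pivot, U[k, k:])
        max_seen = max(max_seen, np.abs(U[k:, k:]).max())
    return max_seen / max_orig

# Random matrices exhibit only modest growth under complete pivoting; the
# lower bounds quoted above come from matrices found by numerical search.
print(growth_factor_complete_pivoting(np.random.randn(100, 100)))
```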

Related content

In the MAXSAT problem, we are given a set $V$ of $m$ variables and a collection $C$ of $n$ clauses over $V$, and we seek a truth assignment that maximizes the number of satisfied clauses. This problem is $\textit{NP}$-hard even in its restricted version, the 2-MAXSAT problem, in which every clause contains at most 2 literals. In this paper, we discuss a polynomial time algorithm to solve this problem, with time complexity bounded by O($n^2m^3$). Hence, we provide a proof of $P$ = $\textit{NP}$.
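
To make the problem statement concrete, the following is a small brute-force reference sketch of the 2-MAXSAT objective (exponential in the number of variables); it only illustrates the quantity being maximized and is not the polynomial-time algorithm discussed in the paper. The encoding of literals as signed integers is an assumption made for this illustration.

```python
from itertools import product

def max_sat_brute_force(num_vars, clauses):
    """Exhaustively find the maximum number of simultaneously satisfiable clauses.
    A clause is a tuple of literals; literal +i means variable i is true,
    -i means variable i is false (variables are numbered from 1)."""
    best = 0
    for assignment in product([False, True], repeat=num_vars):
        satisfied = sum(
            any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        best = max(best, satisfied)
    return best

# 2-MAXSAT instance: every clause has at most two literals.
clauses = [(1, 2), (-1, 2), (1, -2), (-1, -2)]  # any assignment satisfies exactly 3
print(max_sat_brute_force(2, clauses))          # -> 3
```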

In this article on variational regularization for ill-posed nonlinear problems, we revisit the consequences of an oversmoothing penalty term. In our model, this means that the sought solution of the considered nonlinear operator equation does not belong to the domain of definition of the penalty functional. In recent years, such variational regularization has been investigated comprehensively in Hilbert scales, but rarely in a Banach space setting. Our present results aim to establish a theoretical justification of oversmoothing regularization in Banach scales. This new study includes convergence rate results for a priori and a posteriori choices of the regularization parameter, both for H\"older-type and for low order-type smoothness. An illustrative example indicates the specific issues that arise with non-reflexive Banach spaces.

In this paper, we find a sample complexity bound for learning a simplex from noisy samples. Assume a dataset of size $n$ is given which includes i.i.d. samples drawn from a uniform distribution over an unknown simplex in $\mathbb{R}^K$, where the samples are corrupted by multivariate additive Gaussian noise of arbitrary magnitude. We prove the existence of an algorithm that with high probability outputs a simplex having an $\ell_2$ distance of at most $\varepsilon$ from the true simplex (for any $\varepsilon>0$). Also, we theoretically show that in order to achieve this bound, it is sufficient to have $n\ge\left(K^2/\varepsilon^2\right)e^{\Omega\left(K/\mathrm{SNR}^2\right)}$ samples, where $\mathrm{SNR}$ stands for the signal-to-noise ratio. This result solves an important open problem and shows that as long as $\mathrm{SNR}\ge\Omega\left(K^{1/2}\right)$, the sample complexity of the noisy regime is of the same order as that of the noiseless case. Our proofs combine the so-called sample compression technique of \citep{ashtiani2018nearly}, mathematical tools from high-dimensional geometry, and Fourier analysis. In particular, we propose a general Fourier-based technique for recovering a more general class of distribution families from additive Gaussian noise, which can be further used in a variety of other related problems.

We study numerical integration over bounded regions in $\mathbb{R}^s, s\ge1$, with respect to some probability measure. We replace random sampling with quasi-Monte Carlo methods, where the underlying point set is derived from deterministic constructions that aim to fill the space more evenly than random points. Such quasi-Monte Carlo point sets are ordinarily designed for the uniform measure, and the theory only works for product measures when a coordinate-wise transformation is applied. Going beyond this setting, we first consider the case where the target density is a mixture distribution in which each term in the mixture comes from a product distribution. Next we consider target densities that can be approximated with such mixture distributions. We require the approximation to be a sum of coordinate-wise products and to be positive everywhere (so that it can be re-scaled to a probability density function). We use tensor product hat function approximations for this purpose, since a hat function approximation of a positive function is itself positive. We also study more complex algorithms, where we first approximate the target density with a general Gaussian mixture distribution and then approximate the mixture with an adaptive hat function approximation on rotated intervals. The Gaussian mixture approximation allows us to locate the essential parts of the target density, whereas the adaptive hat function approximation allows us to approximate the finer structure of the target density. We prove convergence rates for each of the integration techniques based on quasi-Monte Carlo sampling for integrands with bounded partial mixed derivatives. The employed algorithms are based on digital $(t,s)$-sequences over the finite field $\mathbb{F}_2$ and an inversion method. Numerical examples illustrate the performance of the algorithms for some target densities and integrands.
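
As a toy illustration of the inversion-based quasi-Monte Carlo approach described above, the following sketch integrates a smooth function against a product Gaussian density; scipy's scrambled Sobol generator is used here as a stand-in for the digital $(t,s)$-sequences over $\mathbb{F}_2$ employed in the paper, and the integrand is an arbitrary example.

```python
import numpy as np
from scipy.stats import norm, qmc

# Quasi-Monte Carlo estimate of E[f(X)] for X with independent N(0,1)
# coordinates, obtained by mapping a low-discrepancy point set through
# the inverse CDF (inversion method).
s = 3                                          # dimension
f = lambda x: np.exp(-np.sum(x**2, axis=1))    # example integrand

sobol = qmc.Sobol(d=s, scramble=True, seed=0)
u = sobol.random_base2(m=12)                   # 2^12 points in [0,1)^s
x = norm.ppf(u)                                # inversion: uniform -> Gaussian
qmc_estimate = f(x).mean()

# Exact value: E[exp(-X1^2 - ... - Xs^2)] = 3^(-s/2) for standard normals.
print(qmc_estimate, 3.0 ** (-s / 2))
```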

As phasor measurement units (PMUs) become more widely used in transmission power systems, a fast state estimation (SE) algorithm that can take advantage of their high sample rates is needed. To accomplish this, we present a method that uses graph neural networks (GNNs) to learn complex bus voltage estimates from PMU voltage and current measurements. We propose an original implementation of GNNs over the power system's factor graph to simplify the integration of various types and quantities of measurements on power system buses and branches. Furthermore, we augment the factor graph to improve the robustness of GNN predictions. This model is highly efficient and scalable, as its computational complexity is linear with respect to the number of nodes in the power system. Training and test examples were generated by randomly sampling sets of power system measurements and annotated with the exact solutions of linear SE with PMUs. The numerical results demonstrate that the GNN model provides an accurate approximation of the SE solutions. Furthermore, errors caused by PMU malfunctions or communication failures that would normally make the SE problem unobservable have a local effect and do not deteriorate the results in the rest of the power system.
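
The following toy sketch shows one round of message passing over a bipartite factor graph (measurement factors connected to bus variable nodes) in plain numpy; it is only meant to convey the structure of GNN inference over a factor graph, not the paper's actual architecture, graph augmentation, or training setup, and all dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy factor graph: 4 bus (variable) nodes, 5 measurement (factor) nodes.
# edges[i] lists the bus nodes that measurement i touches.
edges = [(0,), (0, 1), (1, 2), (2, 3), (3,)]

d = 8                                   # embedding dimension
bus_h = rng.normal(size=(4, d))         # initial bus embeddings
meas_h = rng.normal(size=(5, d))        # measurement embeddings (encoded PMU values)
W = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)

# One round of message passing: each bus aggregates the embeddings of the
# measurements incident to it, then updates its own embedding.
new_bus_h = np.empty_like(bus_h)
for b in range(bus_h.shape[0]):
    incident = [meas_h[i] for i, e in enumerate(edges) if b in e]
    msg = np.mean(incident, axis=0)
    new_bus_h[b] = np.maximum(np.concatenate([bus_h[b], msg]) @ W, 0.0)  # ReLU

# A per-bus read-out layer would then map each embedding to the real and
# imaginary parts of the complex voltage estimate.
print(new_bus_h.shape)   # (4, 8)
```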

In this paper we study the type IV Knorr Held space time models. Such models typically apply intrinsic Markov random fields and constraints are imposed for identifiability. INLA is an efficient inference tool for such models where constraints are dealt with through a conditioning by kriging approach. When the number of spatial and/or temporal time points become large, it becomes computationally expensive to fit such models, partly due to the number of constraints involved. We propose a new approach, HyMiK, dividing constraints into two separate sets where one part is treated through a mixed effect approach while the other one is approached by the standard conditioning by kriging method, resulting in a more efficient procedure for dealing with constraints. The new approach is easy to apply based on existing implementations of INLA. We run the model on simulated data, on a real data set containing dengue fever cases in Brazil and another real data set of confirmed positive test cases of Covid-19 in the counties of Norway. For all cases we get very similar results when comparing the new approach with the tradition one while at the same time obtaining a significant increase in computational speed, varying on a factor from 2 to 4, depending on the sizes of the data sets.

We consider sequential state and parameter learning in state-space models with intractable state transition and observation processes. By exploiting low-rank tensor-train (TT) decompositions, we propose new sequential learning methods for joint parameter and state estimation under the Bayesian framework. Our key innovation is the introduction of scalable function approximation tools such as TT for recursively learning the sequentially updated posterior distributions. The function approximation perspective of our methods offers tractable error analysis and potentially alleviates the particle degeneracy faced by many particle-based methods. In addition to the new insights into algorithmic design, our methods complement conventional particle-based methods. Our TT-based approximations naturally define conditional Knothe--Rosenblatt (KR) rearrangements that lead to filtering, smoothing and path estimation accompanying our sequential learning algorithms, which open the door to removing potential approximation bias. We also explore several preconditioning techniques based on either linear or nonlinear KR rearrangements to enhance the approximation power of TT for practical problems. We demonstrate the efficacy and efficiency of our proposed methods on several state-space models, in which our methods achieve state-of-the-art estimation accuracy and computational performance.
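
As background for the tensor-train machinery, the sketch below implements the basic TT-SVD construction (sequential truncated SVDs of matrix unfoldings) for a small dense tensor; it is a building block only, not the paper's sequential posterior-approximation scheme, and the test tensor and rank cap are arbitrary choices.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Tensor-train decomposition via sequential truncated SVDs (TT-SVD).
    Returns a list of cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    dims = [core.shape[1] for core in cores]
    out = cores[0].reshape(dims[0], -1)          # (n_1, r_1)
    for core in cores[1:]:
        r_prev, n, r = core.shape
        out = out @ core.reshape(r_prev, n * r)  # contract over the TT rank
        out = out.reshape(-1, r)                 # (n_1*...*n_k, r_k)
    return out.reshape(dims)

# Low-rank test tensor: T(i,j,k) = sin(i)*cos(j) + exp(-k) has TT rank 2.
i, j, k = np.meshgrid(np.arange(6), np.arange(7), np.arange(8), indexing="ij")
T = np.sin(i) * np.cos(j) + np.exp(-k)
cores = tt_svd(T, max_rank=4)
print(np.max(np.abs(T - tt_reconstruct(cores))))   # close to machine precision
```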

This paper investigates the mean square error (MSE)-optimal conditional mean estimator (CME) in one-bit quantized systems in the context of channel estimation with jointly Gaussian inputs. We analyze the relationship of the generally nonlinear CME to the linear Bussgang estimator, a well-known method based on Bussgang's theorem. We highlight a novel observation that the Bussgang estimator is equal to the CME in several special cases, including the case of univariate Gaussian inputs and the case of multiple pilot signals in the absence of additive noise prior to the quantization. For the general case, we conduct numerical simulations to quantify the gap between the Bussgang estimator and the CME; this gap increases for higher dimensions and longer pilot sequences. We propose an optimal pilot sequence, motivated by insights from the CME, and derive a novel closed-form expression of the MSE for that case. Afterwards, we find a closed-form limit of the MSE in the regime of an asymptotically large number of pilots that also holds for the Bussgang estimator. Lastly, we present numerical experiments for various system parameters and different performance metrics which illuminate the behavior of the optimal channel estimator in the quantized regime. In this context, the well-known stochastic resonance effect that appears in quantized systems can be quantified.
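
The univariate special case mentioned above can be checked directly: for a scalar Gaussian input quantized to one bit with no additive noise, the Bussgang (LMMSE) estimator and the CME coincide. The following Monte Carlo sketch, with an arbitrary choice of the input variance, verifies this numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-bit quantization of a univariate Gaussian input x ~ N(0, sigma^2),
# r = sign(x), no additive noise.  The CME is E[x | r] = r * sigma * sqrt(2/pi)
# (the mean of a half-normal), and the Bussgang/LMMSE estimate
# x_hat = (E[x r] / E[r^2]) * r takes exactly the same value.
sigma = 2.0
x = sigma * rng.standard_normal(1_000_000)
r = np.sign(x)

cme = r * sigma * np.sqrt(2 / np.pi)              # closed form, per sample
bussgang_gain = np.mean(x * r) / np.mean(r ** 2)  # sample estimate of E[xr]/E[r^2]
bussgang = bussgang_gain * r

print(bussgang_gain, sigma * np.sqrt(2 / np.pi))  # ~1.5958 for sigma = 2
print(np.max(np.abs(cme - bussgang)))             # small Monte Carlo gap
```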

We present a dimension-incremental algorithm for the nonlinear approximation of high-dimensional functions in an arbitrary bounded orthonormal product basis. Our goal is to detect a suitable truncation of the basis expansion of the function, where the corresponding basis support is assumed to be unknown. Our method is based on point evaluations of the considered function and adaptively builds an index set of a suitable basis support such that the approximately largest basis coefficients are still included. For this purpose, the algorithm only needs a suitable search space that contains the desired index set. Throughout the work, we also discuss various minor modifications of the algorithm that may yield additional benefits in several situations. For the first time, we provide a proof of a detection guarantee for such an index set in the function approximation case under certain assumptions on the sub-methods used within our algorithm, which can serve as a foundation for similar statements in various other situations. Some numerical examples in different settings underline the effectiveness and accuracy of our method.
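
The sketch below illustrates the core idea in a heavily simplified, one-shot form: estimate basis coefficients from point evaluations over a candidate search space and keep only the approximately largest ones. It uses a small trigonometric example and plain least squares, and is not the dimension-incremental algorithm itself; all names, sizes, and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse trigonometric example on [0,1)^2: only three active frequencies.
true_coeffs = {(0, 0): 1.0, (3, 0): 0.5, (1, 2): -0.25}
f = lambda x: sum(c * np.cos(2 * np.pi * (k[0] * x[:, 0] + k[1] * x[:, 1]))
                  for k, c in true_coeffs.items())

# Candidate search space: all frequencies in a small box.
candidates = [(k1, k2) for k1 in range(4) for k2 in range(4)]

# Estimate coefficients from random point evaluations by least squares,
# then keep only the approximately largest ones.
x = rng.random((200, 2))
A = np.column_stack([np.cos(2 * np.pi * (k1 * x[:, 0] + k2 * x[:, 1]))
                     for k1, k2 in candidates])
coeffs, *_ = np.linalg.lstsq(A, f(x), rcond=None)

detected = {k: c for k, c in zip(candidates, coeffs) if abs(c) > 0.05}
print(detected)   # recovers (0,0), (3,0), (1,2) with coefficients close to the truth
```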

In this paper, we propose, analyze and implement efficient time-parallel methods for the Cahn-Hilliard (CH) equation. Developing efficient numerical methods for the CH equation is of great importance, given its wide range of applicability. The CH equation generally needs to be simulated for a very long time to reach the phase-coarsening stage of the solution, so it is desirable to accelerate the computation using methods that are parallel in time. We present linear and nonlinear Parareal methods for the CH equation, depending on the choice of fine approximation. We illustrate our results by numerical experiments.
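
For illustration, the following is a minimal Parareal sketch on a scalar linear test ODE rather than the CH equation; the coarse and fine propagators, step counts, and iteration number are arbitrary choices made for this example.

```python
import numpy as np

# Minimal Parareal sketch on the scalar test problem u' = lam * u: a cheap
# coarse propagator G (one backward-Euler step per time slice) is corrected
# by an accurate fine propagator F (many backward-Euler steps) via
#   U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k).
lam, T, N, u0 = -1.0, 2.0, 10, 1.0
dT = T / N

def G(u):                      # coarse: one implicit Euler step over a slice
    return u / (1 - lam * dT)

def F(u, substeps=100):        # fine: many implicit Euler steps over a slice
    dt = dT / substeps
    for _ in range(substeps):
        u = u / (1 - lam * dt)
    return u

U = np.empty(N + 1)
U[0] = u0
for n in range(N):             # initial guess from the coarse propagator alone
    U[n + 1] = G(U[n])

for k in range(5):             # Parareal iterations (the F calls are parallel over n)
    F_old = np.array([F(U[n]) for n in range(N)])
    G_old = np.array([G(U[n]) for n in range(N)])
    for n in range(N):         # cheap sequential correction sweep
        U[n + 1] = G(U[n]) + F_old[n] - G_old[n]

print(U[-1], u0 * np.exp(lam * T))   # close to the exact solution exp(-2)
```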
