
Traditional finite element approaches are well known to introduce spurious oscillations when applied to advection-dominated problems. We explore alleviation of this issue from the perspective of a generalized finite element formulation, which enables stabilization through an enrichment process. The presented work uses solution-tailored enrichments for the numerical solution of the one-dimensional, unsteady Burgers equation. In particular, generalizable exponential and hyperbolic tangent enrichments effectively capture local, steep boundary-layer/shock features. Results show natural alleviation of oscillations and yield smooth numerical solutions over coarse grids. Additionally, significantly improved error levels are observed compared to Lagrangian finite element methods.
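A toy illustration of why a tanh enrichment helps on coarse grids: the sketch below compares plain piecewise-linear interpolation of a steep viscous-shock profile against an interpolant whose approximation space also contains a (deliberately detuned) tanh enrichment function. The profile, mesh, and enrichment width are invented for the demonstration and are not taken from the paper.

```python
import math

NU = 0.05  # viscosity: the profile has a layer of width O(NU) (made-up test case)

def u_exact(x):
    # steady viscous "shock" profile of Burgers type
    return -math.tanh(x / (2 * NU))

def enrichment(x):
    # tanh enrichment with a deliberately detuned width, so it is not the exact solution
    return -math.tanh(x / 0.12)

H = 0.25
nodes = [-1 + i * H for i in range(9)]  # coarse uniform mesh on [-1, 1]

def pw_linear(f, x):
    """Piecewise-linear interpolant of f on the coarse mesh."""
    i = min(int((x + 1) / H), len(nodes) - 2)
    t = (x - nodes[i]) / H
    return (1 - t) * f(nodes[i]) + t * f(nodes[i + 1])

samples = [-1 + k / 500 for k in range(1001)]
err_fem = max(abs(pw_linear(u_exact, x) - u_exact(x)) for x in samples)
# enriched interpolant: the enrichment plus a piecewise-linear correction of the remainder
err_gfem = max(
    abs(enrichment(x)
        + pw_linear(lambda y: u_exact(y) - enrichment(y), x)
        - u_exact(x))
    for x in samples)
print(err_fem, err_gfem)  # the enriched error is several times smaller
```

Even though the enrichment does not match the exact layer width, it absorbs most of the steep variation, leaving only a smooth remainder for the coarse piecewise-linear space.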



We consider Broyden's method and some accelerated schemes for nonlinear equations having a strongly regular singularity of first order with a one-dimensional nullspace. Our two main results are as follows. First, we show that the use of a preceding Newton-like step ensures convergence for starting points in a starlike domain with density 1. This extends the domain of convergence of these methods significantly. Second, we establish that the matrix updates of Broyden's method converge q-linearly with the same asymptotic factor as the iterates. This contributes to the long-standing question of whether the Broyden matrices converge by showing that this is indeed the case in the setting at hand. Furthermore, we prove that the Broyden directions violate uniform linear independence, which implies that existing results for convergence of the Broyden matrices cannot be applied. Numerical experiments of high precision confirm the enlarged domain of convergence, the q-linear convergence of the matrix updates, and the lack of uniform linear independence. In addition, they suggest that these results can be extended to singularities of higher order and that Broyden's method can converge r-linearly without converging q-linearly. The underlying code is freely available.
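As a minimal illustration of the quasi-Newton mechanics discussed here, the sketch below runs "good" Broyden updates in inverse form (via the Sherman-Morrison formula) on a made-up regular 2x2 system. It starts from the exact inverse Jacobian at the initial point, loosely mirroring the preceding Newton-like step, but it does not reproduce the singular setting studied in the paper.

```python
def F(v):
    # hypothetical test system with roots (0, 3) and (3, 0)
    x, y = v
    return [x + y - 3.0, x * x + y * y - 9.0]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def broyden(v0, H0, tol=1e-10, maxit=50):
    """Good Broyden's method with the inverse-Jacobian (Sherman-Morrison) update."""
    H = [row[:] for row in H0]
    v, Fv = v0[:], F(v0)
    for k in range(maxit):
        if max(abs(c) for c in Fv) < tol:
            return v, k
        s = [-c for c in matvec(H, Fv)]          # quasi-Newton step
        v = [v[0] + s[0], v[1] + s[1]]
        F_new = F(v)
        y = [F_new[0] - Fv[0], F_new[1] - Fv[1]]
        Hy = matvec(H, y)
        denom = s[0] * Hy[0] + s[1] * Hy[1]
        u = [s[0] - Hy[0], s[1] - Hy[1]]
        sH = [s[0] * H[0][0] + s[1] * H[1][0], s[0] * H[0][1] + s[1] * H[1][1]]
        for i in range(2):                        # H += (s - H y) s^T H / (s^T H y)
            for j in range(2):
                H[i][j] += u[i] * sH[j] / denom
        Fv = F_new
    return v, maxit

# exact inverse Jacobian at the start: Jacobian [[1, 1], [2x, 2y]] at (1, 5)
H0 = [[1.25, -0.125], [-0.25, 0.125]]
root, iters = broyden([1.0, 5.0], H0)
print(root, iters)
```

On a regular root the iterates converge superlinearly; the paper's contribution concerns what survives of this behavior at a singularity.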

The expansion of Fiber-To-The-Home (FTTH) networks creates high costs due to expensive excavation procedures. Optimizing the planning process and minimizing the cost of the earth excavation work therefore lead to large savings. Mathematically, the FTTH network problem can be described as a minimum Steiner Tree problem. Even though the Steiner Tree problem has already been investigated intensively in the last decades, it might be further optimized with the help of new computing paradigms and emerging approaches. This work studies upcoming technologies, such as Quantum Annealing, Simulated Annealing and nature-inspired methods like Evolutionary Algorithms or slime-mold-based optimization. Additionally, we investigate partitioning and simplifying methods. Evaluated on several real-life problem instances, our approaches outperform a traditional, widely-used baseline (the NetworkX approximate solver) on most of the domains. Prior partitioning of the initial graph and the presented slime-mold-based approach were especially valuable for a cost-efficient approximation. Quantum Annealing seems promising, but was limited by the number of available qubits.
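For context, the minimum Steiner Tree connects a set of terminals at minimum total edge weight, possibly through extra (Steiner) vertices. Below is a stdlib-only sketch of the classic metric-closure 2-approximation, the same flavor of algorithm as the NetworkX approximate solver used as the baseline; the example graph and terminals are invented.

```python
import heapq
from itertools import combinations

def dijkstra(adj, src):
    """Shortest-path distances and predecessors from src."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def steiner_approx(edges, terminals):
    """Metric-closure 2-approximation for the Steiner Tree problem."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    sp = {t: dijkstra(adj, t) for t in terminals}
    # Kruskal MST on the terminal-to-terminal shortest-path distances
    parent = {t: t for t in terminals}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = set()
    for d, a, b in sorted((sp[a][0][b], a, b) for a, b in combinations(terminals, 2)):
        if find(a) == find(b):
            continue
        parent[find(a)] = find(b)
        node = b                      # expand the closure edge into its shortest path
        while node != a:
            p = sp[a][1][node]
            tree.add(tuple(sorted((p, node))))
            node = p
    w = {tuple(sorted((u, v))): wt for u, v, wt in edges}
    return tree, sum(w[e] for e in tree)

EDGES = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (1, 4, 1), (4, 5, 1), (0, 5, 5)]
tree, total = steiner_approx(EDGES, [0, 3, 5])
print(total)  # 5: terminals 3 and 5 are both routed through the shared Steiner vertex 1
```

The heuristics studied in the paper (annealing, evolutionary, slime-mold) search the same solution space but are not reproduced here.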

We develop a generalized hybrid iterative approach for computing solutions to large-scale Bayesian inverse problems. We consider a hybrid algorithm based on the generalized Golub-Kahan bidiagonalization for computing Tikhonov regularized solutions to problems where explicit computation of the square root and inverse of the covariance kernel for the prior covariance matrix is not feasible. This is useful for large-scale problems where covariance kernels are defined on irregular grids or are only available via matrix-vector multiplication, e.g., those from the Matérn class. We show that iterates are equivalent to LSQR iterates applied to a directly regularized Tikhonov problem, after a transformation of variables, and we provide connections to a generalized singular value decomposition filtered solution. Our approach shares many benefits of standard hybrid methods such as avoiding semi-convergence and automatically estimating the regularization parameter. Numerical examples from image processing demonstrate the effectiveness of the described approaches.
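The change-of-variables equivalence invoked here can be checked on a tiny dense example: generalized Tikhonov with prior covariance Q = L Lᵀ gives the same solution as standard Tikhonov on B = A L after mapping x = L s. The 2x2 matrices, data, and parameter below are arbitrary, and direct normal-equations solves stand in for the iterative (LSQR / Golub-Kahan) machinery.

```python
def solve2(M, b):
    """Solve a 2x2 linear system via Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def mat_t(M):
    return [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

A = [[2.0, 1.0], [1.0, 3.0]]   # forward operator (invented)
L = [[1.0, 0.0], [0.5, 2.0]]   # prior covariance factor: Q = L L^T
b = [1.0, 2.0]
lam2 = 0.1                     # squared regularization parameter

# (1) generalized Tikhonov: (A^T A + lam2 * Q^{-1}) x = A^T b
Q = mat_mul(L, mat_t(L))
detQ = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
Qinv = [[Q[1][1] / detQ, -Q[0][1] / detQ],
        [-Q[1][0] / detQ, Q[0][0] / detQ]]
AtA = mat_mul(mat_t(A), A)
M1 = [[AtA[i][j] + lam2 * Qinv[i][j] for j in range(2)] for i in range(2)]
x_direct = solve2(M1, mat_vec(mat_t(A), b))

# (2) transformed variables x = L s: standard Tikhonov on B = A L
B = mat_mul(A, L)
BtB = mat_mul(mat_t(B), B)
M2 = [[BtB[i][j] + lam2 * (i == j) for j in range(2)] for i in range(2)]
s = solve2(M2, mat_vec(mat_t(B), b))
x_transformed = mat_vec(L, s)
print(x_direct, x_transformed)  # the two solutions agree
```

The point of the paper's approach is that this equivalence can be exploited without ever forming L or Q⁻¹ explicitly, which the dense sketch above does only for checkability.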

Intelligent reflecting surfaces (IRSs) are emerging as promising enablers for the next generation of wireless communication systems, because of their ability to customize favorable radio propagation environments. However, with the conventional passive architecture, IRSs can only adjust the phase of the incident signals, which limits the achievable beamforming gain. To fully unleash the potential of IRSs, in this paper, we consider a more general IRS architecture, i.e., active IRSs, which can adapt the phase and amplify the magnitude of the reflected incident signal simultaneously with the support of an additional power source. To realize green communication in active IRS-assisted multiuser systems, we jointly optimize the reflection matrix at the IRS and the beamforming vector at the base station (BS) for the minimization of the BS transmit power. The resource allocation algorithm design is formulated as an optimization problem taking into account the maximum power budget of the active IRS and the quality-of-service (QoS) requirements of the users. To handle the non-convex design problem, we develop a novel and computationally efficient algorithm based on the bilinear transformation and inner approximation methods. The proposed algorithm is guaranteed to converge to a locally optimal solution of the considered problem. Simulation results illustrate the effectiveness of the proposed scheme compared to two baseline schemes. Moreover, the results unveil that deploying active IRSs is a promising approach to enhance the system performance compared to conventional passive IRSs, especially when strong direct links exist.
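The passive-versus-active distinction can be made concrete with a single-user cascaded channel: a passive IRS can only co-phase the per-element reflections, while an active IRS additionally scales their magnitude. The channel coefficients, element count, and amplification factor below are invented, and no power budget or multiuser optimization is modeled.

```python
import cmath
import math
import random

random.seed(0)
M = 16  # number of IRS elements (illustrative)

# hypothetical BS->IRS and IRS->user channel coefficients
g = [random.uniform(0.5, 1.0) * cmath.exp(1j * random.uniform(0, 2 * math.pi))
     for _ in range(M)]
h = [random.uniform(0.5, 1.0) * cmath.exp(1j * random.uniform(0, 2 * math.pi))
     for _ in range(M)]

def gain(amps, phases):
    """Magnitude of the cascaded channel for a diagonal reflection matrix."""
    return abs(sum(a * cmath.exp(1j * p) * gi * hi
                   for a, p, gi, hi in zip(amps, phases, g, h)))

unopt = gain([1.0] * M, [0.0] * M)                   # no phase optimization
aligned = [-cmath.phase(gi * hi) for gi, hi in zip(g, h)]
passive = gain([1.0] * M, aligned)                   # phase-only (passive) IRS
active = gain([2.0] * M, aligned)                    # amplifying (active) IRS
print(unopt, passive, active)
```

Co-phasing turns the random-phase sum into a coherent sum of magnitudes, and amplification scales that gain further, which is the lever the paper's joint BS/IRS optimization trades against the IRS power budget.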

In this paper, we study smooth stochastic multi-level composition optimization problems, where the objective function is a nested composition of $T$ functions. We assume access to noisy evaluations of the functions and their gradients, through a stochastic first-order oracle. For solving this class of problems, we propose two algorithms using moving-average stochastic estimates, and analyze their convergence to an $\epsilon$-stationary point of the problem. We show that the first algorithm, which is a generalization of [GhaRuswan20] to the $T$ level case, can achieve a sample complexity of $\mathcal{O}(1/\epsilon^6)$ by using mini-batches of samples in each iteration. By modifying this algorithm using linearized stochastic estimates of the function values, we improve the sample complexity to $\mathcal{O}(1/\epsilon^4)$. This modification not only removes the requirement of having a mini-batch of samples in each iteration, but also makes the algorithm parameter-free and easy to implement. To the best of our knowledge, this is the first time that such an online algorithm designed for the (un)constrained multi-level setting obtains the same sample complexity as the smooth single-level setting, under standard assumptions (unbiasedness and boundedness of the second moments) on the stochastic first-order oracle.
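A minimal $T=2$ sketch of the moving-average idea: an auxiliary variable tracks the inner function value, and SGD runs on the chain-rule gradient evaluated at that tracked value rather than at a fresh noisy sample. The toy objective, noise level, and step sizes are all invented and do not reflect the paper's algorithms or rates.

```python
import random

random.seed(1)

# toy two-level problem: minimize f2(f1(x)) with f1(x) = x - 1, f2(u) = u^2,
# accessed only through noisy oracles (assumed Gaussian noise, sigma = 0.1)
SIGMA = 0.1

def noisy_f1(x):
    return x - 1 + random.gauss(0, SIGMA)

def noisy_df1(x):
    return 1.0 + random.gauss(0, SIGMA)

def noisy_df2(u):
    return 2 * u + random.gauss(0, SIGMA)

x, u = 3.0, 0.0
step, tau = 0.01, 0.05
for _ in range(2000):
    u = (1 - tau) * u + tau * noisy_f1(x)   # moving-average tracker of the inner value
    grad = noisy_df1(x) * noisy_df2(u)      # chain-rule estimate at the tracked value
    x -= step * grad
print(x)  # close to 1, the minimizer of (x - 1)^2
```

The tracker damps the oracle noise in the inner evaluation, which is what lets the analysis avoid mini-batches; the linearized variant of the paper refines this same idea.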

In this paper, by using linear-algebra techniques, the parity-check matrices for twisted generalized Reed-Solomon codes with any given hook $h$ and twist $t$ are presented, and then a necessary and sufficient condition for a twisted generalized Reed-Solomon code of dimension $h+t$ ($h\ge t$) to be self-dual is given. Furthermore, several classes of self-dual codes with small singleton defect are constructed based on twisted generalized Reed-Solomon codes.
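To make the objects concrete, the sketch below builds the generator matrix of a small twisted Reed-Solomon code, whose codewords are evaluations of $f(x) = \sum_{i<k} a_i x^i + \eta a_h x^{k-1+t}$, and verifies its dimension by a rank computation over GF(p). The field, hook, twist, and $\eta$ are arbitrary example choices, and no self-duality is checked.

```python
P = 13                  # prime field size (illustrative)
N, K = 8, 4             # code length and dimension
HOOK, TWIST = 1, 2      # hook h and twist t (example values)
ETA = 3                 # twist coefficient eta
alphas = list(range(1, N + 1))  # distinct evaluation points in GF(P)

# row i encodes the monomial x^i, with the twist term added on the hook row
G = [[(pow(a, i, P) + (ETA * pow(a, K - 1 + TWIST, P) if i == HOOK else 0)) % P
      for a in alphas] for i in range(K)]

def rank_mod_p(rows, p):
    """Row rank over GF(p) by Gaussian elimination."""
    M = [r[:] for r in rows]
    rank = 0
    for col in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)   # Fermat inverse of the pivot
        M[rank] = [v * inv % p for v in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                f = M[r][col]
                M[r] = [(v - f * w) % p for v, w in zip(M[r], M[rank])]
        rank += 1
        if rank == len(M):
            break
    return rank

print(rank_mod_p(G, P))  # 4: the twisted code indeed has dimension K
```

The paper's parity-check matrices are the corresponding null-space description, from which the self-duality condition is read off.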

In this report, we present and compare the results of an improved fractional and integer order partial differential equation (PDE)-based binarization scheme. The improved model incorporates a diffusion term in addition to the edge and binary source terms from the previous formulation. Furthermore, logarithmic local contrast edge normalization and combined isotropic and anisotropic edge detection enable simultaneous bleed-through elimination and faded-text restoration for degraded document images. Comparisons with state-of-the-art PDE methods show superior results.
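The model itself is not reproduced here, but the flavor of diffusion-plus-source binarization can be sketched on a 1D scanline: an explicit Euler scheme combines a smoothing Laplacian with a double-well reaction that pushes each pixel toward 0 (ink) or 1 (background). The intensity levels, coefficients, and reaction term are invented stand-ins, not the paper's fractional-order model.

```python
# synthetic scanline: background 0.9, bleed-through 0.6, genuine text strokes 0.15
u = [0.9] * 10 + [0.6] * 5 + [0.9] * 5 + [0.15] * 5 + [0.9] * 10
DT, DIFF = 0.1, 0.2

for _ in range(400):
    lap = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        lap[i] = u[i - 1] - 2 * u[i] + u[i + 1]   # discrete Laplacian (diffusion term)
    # double-well reaction: drives values below 0.5 toward 0 and above 0.5 toward 1
    u = [min(1.0, max(0.0, ui + DT * (DIFF * li + 4 * ui * (1 - ui) * (ui - 0.5))))
         for ui, li in zip(u, lap)]

print(u[12], u[22])  # bleed-through pixel near 1 (removed), text pixel near 0 (kept)
```

Because bleed-through is lighter than genuine ink, a well-placed threshold inside the reaction term sends it to the background basin while true strokes are retained and sharpened.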

Interpretability is becoming increasingly important for predictive model analysis. Unfortunately, as remarked by many authors, there is still no consensus regarding this notion. The goal of this paper is to propose the definition of a score that allows quick comparison of interpretable algorithms. This definition consists of three terms, each quantitatively measured with a simple formula: predictivity, stability, and simplicity. While predictivity has been extensively studied to measure the accuracy of predictive algorithms, stability is based on the Dice-Sorensen index, comparing two rule sets generated by an algorithm from two independent samples. Simplicity is based on the sum of the lengths of the rules derived from the predictive model. The proposed score is a weighted sum of the three terms mentioned above. We use this score to compare the interpretability of a set of rule-based algorithms and tree-based algorithms for both regression and classification.
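A minimal sketch of the three terms, with rules represented as tuples of condition strings; the inverse-simplicity weighting in the final score is an assumption for illustration, as the paper's exact aggregation may differ.

```python
def dice_sorensen(rules_a, rules_b):
    """Stability term: Dice-Sorensen index between two generated rule sets."""
    a, b = set(rules_a), set(rules_b)
    return 2 * len(a & b) / (len(a) + len(b))

def simplicity(rules):
    """Sum of rule lengths (number of conditions in each rule)."""
    return sum(len(r) for r in rules)

def interpretability_score(predictivity, stability, simp, weights=(1/3, 1/3, 1/3)):
    # simplicity enters inversely so that shorter rule lists score higher (assumption)
    w1, w2, w3 = weights
    return w1 * predictivity + w2 * stability + w3 / (1 + simp)

# two rule sets produced by the same algorithm on two independent samples
rules_1 = [("x1 < 2", "x2 > 0"), ("x1 >= 2",)]
rules_2 = [("x1 < 2", "x2 > 0"), ("x3 < 1",)]
stab = dice_sorensen(rules_1, rules_2)
score = interpretability_score(0.8, stab, simplicity(rules_1))
print(stab, score)  # stability = 0.5: one of two rules is shared
```

Any rule-producing learner (RIPPER, trees flattened to rules, etc.) can be slotted into this comparison once its output is serialized as condition tuples.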

A new conservative finite element solver for the three-dimensional steady magnetohydrodynamic (MHD) kinematics equations is presented. The solver utilizes the magnetic vector potential and current density as solution variables, which are discretized by H(curl)-conforming edge elements and H(div)-conforming face elements, respectively. As a result, the divergence-free constraints of the discrete current density and magnetic induction are both satisfied. Moreover, the solutions also preserve the total magnetic helicity. The generated linear algebraic equation is a typical dual saddle-point problem that is ill-conditioned and indefinite. To solve it efficiently, we develop a block preconditioner based on the constraint preconditioning framework and devise a preconditioned FGMRES solver. Numerical experiments verify the conservative properties, the convergence rate of the discrete solutions, and the robustness of the preconditioner.
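The structure-preservation claim — that a discrete magnetic induction defined as the edge-to-face curl of a vector potential is exactly divergence-free — is a generic identity of such staggered (mimetic) discretizations and can be checked combinatorially on a tiny grid. The sketch below uses random edge data on lowest-order edge/face unknowns; it is an illustration of the identity, not the paper's solver.

```python
import random

random.seed(0)
n = 2  # cells per dimension; unit spacing

# vector potential A on edges: A[d][(i, j, k)] is the edge from (i, j, k) along axis d
A = {d: {} for d in range(3)}
for i in range(n + 1):
    for j in range(n + 1):
        for k in range(n + 1):
            if i < n: A[0][(i, j, k)] = random.random()
            if j < n: A[1][(i, j, k)] = random.random()
            if k < n: A[2][(i, j, k)] = random.random()

# face fluxes of B = curl A: circulation of A around each face's four edges
def Bx(i, j, k):
    return A[1][(i, j, k)] + A[2][(i, j + 1, k)] - A[1][(i, j, k + 1)] - A[2][(i, j, k)]

def By(i, j, k):
    return A[2][(i, j, k)] + A[0][(i, j, k + 1)] - A[2][(i + 1, j, k)] - A[0][(i, j, k)]

def Bz(i, j, k):
    return A[0][(i, j, k)] + A[1][(i + 1, j, k)] - A[0][(i, j + 1, k)] - A[1][(i, j, k)]

# per-cell divergence: net face flux out of each cell (telescopes to zero edge by edge)
max_div = 0.0
for i in range(n):
    for j in range(n):
        for k in range(n):
            div = (Bx(i + 1, j, k) - Bx(i, j, k)
                   + By(i, j + 1, k) - By(i, j, k)
                   + Bz(i, j, k + 1) - Bz(i, j, k))
            max_div = max(max_div, abs(div))
print(max_div)  # zero up to rounding, for any edge data
```

Each edge value appears in exactly two of a cell's face circulations with opposite signs, so the discrete div-curl identity holds exactly, independently of the data — the finite element analogue underlies the solver's conservation properties.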

We consider the problem of describing the typical (possibly non-linear) code of minimum distance bounded from below over a large alphabet. We concentrate on block codes with the Hamming metric and on subspace codes with the injection metric. In sharp contrast with the behavior of linear block codes, we show that the typical non-linear code in the Hamming metric of cardinality $q^{n-d+1}$ is far from having minimum distance $d$, i.e., from being MDS. We also give more precise results about the asymptotic proportion of block codes with good distance properties within the set of codes having a certain cardinality. We then establish the analogous results for subspace codes with the injection metric, showing also an application to the theory of partial spreads in finite geometry.
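The MDS gap is easy to observe experimentally: sample a uniformly random code of the MDS cardinality $q^{n-d+1}$ and measure its minimum Hamming distance. The parameters below are small illustrative choices, far from the asymptotic regime the paper analyzes.

```python
import random
from itertools import combinations

random.seed(0)
Q, N, D = 4, 5, 3
size = Q ** (N - D + 1)  # 64 words: the cardinality an MDS code of distance D would have

# a uniformly random (non-linear) block code: distinct words in base-Q representation
words = random.sample(range(Q ** N), size)

def hamming(a, b):
    """Hamming distance between two words written in base Q."""
    d = 0
    while a or b:
        d += (a % Q) != (b % Q)
        a //= Q
        b //= Q
    return d

min_dist = min(hamming(a, b) for a, b in combinations(words, 2))
print(min_dist)  # far below the MDS value D = 3 for typical random codes
```

With 64 words of length 5 over a 4-letter alphabet, close pairs are abundant (a birthday-type effect), which is the finite-size shadow of the paper's statement that the typical code of MDS cardinality is far from MDS.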
