
The Scheduled Relaxation Jacobi (SRJ) method is a linear solver algorithm which greatly improves the convergence of the Jacobi iteration through the use of judiciously chosen relaxation factors (an SRJ scheme) which attenuate the solution error. Until now, the method has primarily been used to accelerate the solution of elliptic PDEs (e.g. the Laplace and Poisson equations), as the currently available schemes are restricted to this class of problems. The goal of this paper is to present a methodology for constructing SRJ schemes which are suitable for solving non-elliptic PDEs (or, equivalently, the nonsymmetric linear systems arising from the discretization of these PDEs), thereby extending the applicability of this method to a broader class of problems. These schemes are obtained by numerically solving a constrained minimization problem which guarantees that the solution error will not grow as long as the linear system has eigenvalues lying in certain regions of the complex plane. We demonstrate that these schemes are able to accelerate the convergence of standard Jacobi iteration for the nonsymmetric linear systems arising from discretization of the 1D and 2D steady advection-diffusion equations.
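To make the scheduled-relaxation idea concrete, here is a minimal sketch of an SRJ-style sweep, assuming a simple symmetric test matrix; the function name `srj_solve` and the two-level scheme `[1.8, 0.6]` are illustrative placeholders, not one of the optimized schemes constructed in the paper.

```python
import numpy as np

def srj_solve(A, b, omegas, n_cycles=100, x0=None):
    """Cycle through a schedule of relaxation factors, applying one
    weighted-Jacobi update  x <- x + omega * D^{-1} (b - A x)  per factor."""
    x = np.zeros_like(b) if x0 is None else np.array(x0, dtype=float)
    d = np.diag(A)                      # Jacobi only needs the diagonal of A
    for _ in range(n_cycles):
        for omega in omegas:            # the SRJ scheme: scheduled factors
            x = x + omega * (b - A @ x) / d
    return x

# Toy usage on a 1D Poisson-type system; the two-level scheme [1.8, 0.6]
# is purely illustrative, not an optimized SRJ scheme from the literature.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = srj_solve(A, b, omegas=[1.8, 0.6], n_cycles=500)
print(np.linalg.norm(b - A @ x))        # residual norm after 500 cycles
```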

Related content

We present two strategies for designing passivity-preserving higher order discretization methods for Maxwell's equations in nonlinear Kerr-type media. Both approaches are based on variational approximation schemes in space and time. This makes it possible to rigorously prove energy conservation or dissipation, and thus passivity, on the fully discrete level. For linear media, the proposed methods coincide with certain combinations of mixed finite element and implicit Runge-Kutta schemes. The order-optimal convergence rates, which can thus be expected for linear problems, are also observed for nonlinear problems in the numerical tests.
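As a loose, ODE-level illustration of why implicit Runge-Kutta time stepping can yield exact energy conservation for linear problems (the paper's setting is Maxwell's equations with mixed finite elements, not this toy system), consider the implicit midpoint rule applied to a linear skew-symmetric system:

```python
import numpy as np

# Skew-symmetric generator: harmonic oscillator d/dt (q, p) = (p, -q).
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def implicit_midpoint(x, dt, n_steps):
    """One-stage implicit Runge-Kutta (midpoint) stepping for dx/dt = J x.
    Because the problem is linear, each step reduces to one linear solve."""
    I = np.eye(2)
    step = np.linalg.solve(I - 0.5 * dt * J, I + 0.5 * dt * J)
    energies = []
    for _ in range(n_steps):
        x = step @ x
        energies.append(0.5 * float(x @ x))   # quadratic energy 0.5*(q^2 + p^2)
    return x, energies

x0 = np.array([1.0, 0.0])
_, energies = implicit_midpoint(x0, dt=0.1, n_steps=1000)
print(max(energies) - min(energies))          # ~1e-16: energy conserved exactly
```

The update matrix is a Cayley transform of the skew-symmetric generator and is therefore orthogonal, so the quadratic energy is preserved to machine precision regardless of the step size.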

This paper extends the high-order entropy stable (ES) adaptive moving mesh finite difference schemes developed in [14] to the two- and three-dimensional (multi-component) compressible Euler equations with the stiffened equation of state. The two-point entropy conservative (EC) flux is first constructed in curvilinear coordinates. The high-order semi-discrete EC schemes are obtained with the aid of the two-point EC flux and a high-order discretization of the geometric conservation laws, and the high-order semi-discrete ES schemes satisfying the entropy inequality are then derived by adding to the EC schemes a high-order dissipation term based on the multi-resolution weighted essentially non-oscillatory (WENO) reconstruction of the scaled entropy variables. Explicit strong-stability-preserving Runge-Kutta methods are used for the time discretization, and the mesh points are adaptively redistributed by iteratively solving the mesh redistribution equations with an appropriately chosen monitor function. Several 2D and 3D numerical tests are conducted on a parallel computer system using MPI to validate the accuracy of the proposed schemes and their ability to effectively capture localized structures.
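For reference, the explicit strong-stability-preserving Runge-Kutta integrator mentioned above is, in its classical third-order (Shu-Osher) form, a convex combination of forward-Euler stages. The sketch below applies it to a toy first-order upwind semi-discretization of linear advection, as a stand-in for the paper's high-order EC/ES spatial operator.

```python
import numpy as np

def ssp_rk3_step(u, dt, L):
    """One step of the third-order strong-stability-preserving (Shu-Osher)
    Runge-Kutta method for the semi-discrete system du/dt = L(u)."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# Toy spatial operator: periodic linear advection u_t + u_x = 0 discretized
# with first-order upwind differences (illustrative only).
n = 200
dx = 1.0 / n
x = np.arange(n) * dx
u = np.exp(-200 * (x - 0.5) ** 2)
L = lambda v: -(v - np.roll(v, 1)) / dx
dt = 0.5 * dx
for _ in range(200):
    u = ssp_rk3_step(u, dt, L)
print(u.max())
```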

In scheduling, we are given a set of jobs together with a number of machines, and our goal is to decide, for every job, when and on which machine(s) it should be scheduled in order to minimize some objective function. Different machine models, job characteristics and objective functions result in a multitude of scheduling problems, and many of them are NP-hard, even for a fixed number of identical machines. However, using pseudo-polynomial or approximation algorithms, we can still hope to solve some of these problems efficiently. In this work, we give conditional running time lower bounds for a large number of scheduling problems, indicating the optimality of some classical algorithms. In particular, we show that the dynamic programming algorithm by Lawler and Moore is probably optimal for $1||\sum w_jU_j$ and $Pm||C_{max}$. Moreover, the FPTAS by Gens and Levner for $1||\sum w_jU_j$ and the algorithm by Lee and Uzsoy for $P2||\sum w_jC_j$ are probably optimal as well. There is still some room for improvement for the $1|Rej\leq Q|\sum w_jU_j$ algorithm by Zhang et al. and the algorithm for $1||\sum T_j$ by Lawler. We also give a lower bound for $P2|any|C_{max}$ and improve the dynamic program by Du and Leung from $\mathcal{O}(nP^2)$ to $\mathcal{O}(nP)$ to match this lower bound. Here, $P$ is the sum of all processing times. The same idea also improves the algorithm for $P3|any|C_{max}$ by Du and Leung from $\mathcal{O}(nP^5)$ to $\mathcal{O}(nP^2)$. The lower bounds in this work all rely on either the (Strong) Exponential Time Hypothesis or the $(\min,+)$-conjecture. While our results suggest the optimality of some classical algorithms, they also motivate future research in cases where the best known algorithms do not quite match the lower bounds.
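For context, a minimal sketch of the Lawler-Moore dynamic program for $1||\sum w_jU_j$ (minimizing the total weight of late jobs on one machine) is given below; it runs in $\mathcal{O}(nP)$ time, the kind of pseudo-polynomial bound the conditional lower bounds address. The instance in the usage line is purely illustrative.

```python
def lawler_moore(jobs):
    """Sketch of the Lawler-Moore dynamic program for 1||sum w_j U_j:
    minimize the total weight of late jobs on a single machine.

    jobs: list of (p_j, d_j, w_j) = processing time, due date, weight.
    Runs in O(n * P) time, where P is the sum of processing times.
    """
    jobs = sorted(jobs, key=lambda j: j[1])        # EDD order of due dates
    P = sum(p for p, _, _ in jobs)
    NEG = float("-inf")
    # f[t] = max weight of on-time jobs finishing exactly at time t.
    f = [NEG] * (P + 1)
    f[0] = 0
    for p, d, w in jobs:
        for t in range(min(d, P), p - 1, -1):      # on time iff completion t <= d
            if f[t - p] != NEG:
                f[t] = max(f[t], f[t - p] + w)
    total_w = sum(w for _, _, w in jobs)
    return total_w - max(v for v in f if v != NEG)

# Toy instance: two of the three jobs can finish on time; the weight-1 job is late.
print(lawler_moore([(2, 2, 3), (2, 3, 1), (1, 3, 2)]))   # -> 1
```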

This paper studies the performance of Newton's iteration combined with Anderson acceleration for solving the incompressible steady Navier-Stokes equations. We show that this method converges superlinearly with a good initial guess and, moreover, that a large Anderson depth slows convergence compared to a small Anderson depth. Numerical tests confirm these analytical convergence results and, in addition, show that Anderson acceleration sometimes enlarges the domain of convergence for Newton's method.
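As background, a minimal sketch of Anderson acceleration for a generic fixed-point map $x = g(x)$ is shown below, in the common least-squares (Walker-Ni) form. It is not the paper's Newton-Anderson solver for the Navier-Stokes equations; the `depth` parameter plays the role of the Anderson depth discussed above, and the usage example is a toy scalar problem.

```python
import numpy as np

def anderson_accelerate(g, x0, depth=3, n_iter=50, tol=1e-10):
    """Anderson acceleration for a fixed-point map x = g(x).
    `depth` is the Anderson depth: how many past residual differences
    enter the least-squares problem."""
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    Gx, F = [gx], [gx - x]              # histories of g(x_k) and residuals
    x = gx                              # first update is plain fixed-point
    for _ in range(n_iter):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return x
        Gx.append(gx)
        F.append(f)
        m = min(depth, len(F) - 1)
        # Columns are differences of consecutive residuals / g-values.
        dF = np.column_stack([F[-m + j] - F[-m + j - 1] for j in range(m)])
        dG = np.column_stack([Gx[-m + j] - Gx[-m + j - 1] for j in range(m)])
        gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
        x = gx - dG @ gamma             # accelerated fixed-point update
    return x

# Toy usage: the fixed point of cos, x* ~ 0.739085, with Anderson depth 1.
print(anderson_accelerate(np.cos, np.array([1.0]), depth=1))
```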

We study the generalized load-balancing (GLB) problem, where we are given $n$ jobs, each of which needs to be assigned to one of $m$ unrelated machines with processing times $\{p_{ij}\}$. Under a job assignment $\sigma$, the load of each machine $i$ is $\psi_i(\mathbf{p}_{i}[\sigma])$, where $\psi_i:\mathbb{R}^n\rightarrow\mathbb{R}_{\geq0}$ is a symmetric monotone norm and $\mathbf{p}_{i}[\sigma]$ is the $n$-dimensional vector $\{p_{ij}\cdot \mathbf{1}[\sigma(j)=i]\}_{j\in [n]}$. Our goal is to minimize the generalized makespan $\phi(\mathsf{load}(\sigma))$, where $\phi:\mathbb{R}^m\rightarrow\mathbb{R}_{\geq0}$ is another symmetric monotone norm and $\mathsf{load}(\sigma)$ is the $m$-dimensional machine load vector. This problem significantly generalizes many classic optimization problems, e.g., makespan minimization, set cover, and minimum-norm load-balancing. We obtain a polynomial time randomized algorithm that achieves an approximation factor of $O(\log n)$, matching the lower bound of set cover up to a constant factor. We achieve this by rounding a novel configuration LP relaxation with an exponential number of variables. To approximately solve the configuration LP, we design an approximate separation oracle for its dual program. In particular, the separation oracle can be reduced to the norm minimization with a linear constraint (NormLin) problem, and we devise a polynomial time approximation scheme (PTAS) for it, which may be of independent interest.
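To illustrate the definitions, the toy computation below evaluates the machine loads $\psi_i(\mathbf{p}_i[\sigma])$ and the generalized makespan $\phi(\mathsf{load}(\sigma))$ for a hypothetical instance, with both inner norms taken to be the Euclidean norm and the outer norm taken to be the max norm; replacing the inner norms by the $\ell_1$ norm recovers classical makespan minimization.

```python
import numpy as np

# Hypothetical GLB instance: m = 2 machines, n = 3 jobs, times p[i][j].
p = np.array([[2.0, 1.0, 4.0],
              [3.0, 2.0, 1.0]])
sigma = [0, 0, 1]                       # job j is assigned to machine sigma[j]

# Inner norms psi_i (here both the Euclidean norm) and outer norm phi
# (here the max norm, i.e. the ordinary makespan of the machine loads).
psi = [np.linalg.norm, np.linalg.norm]
phi = lambda loads: max(loads)

m, n = p.shape
loads = []
for i in range(m):
    p_i_sigma = np.array([p[i, j] if sigma[j] == i else 0.0 for j in range(n)])
    loads.append(psi[i](p_i_sigma))     # psi_i applied to the masked row of p
print(loads, phi(loads))                # [sqrt(5), 1.0] -> objective sqrt(5)
```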

The aim of this work is to devise and analyse an accurate numerical scheme for solving the Erd\'elyi-Kober fractional diffusion equation. Its solution can be thought of as the marginal pdf of the stochastic process called the generalized grey Brownian motion (ggBm). The ggBm includes some well-known stochastic processes: Brownian motion, fractional Brownian motion and grey Brownian motion. To obtain a convergent numerical scheme we transform the fractional diffusion equation into its weak form and apply the discretization of the Erd\'elyi-Kober fractional derivative. We prove the stability of the solution of the semi-discrete problem and its convergence to the exact solution. Due to the term in the main equation that is singular in time, the proposed method converges more slowly than first order. Finally, we provide the numerical analysis of the fully discrete problem using an orthogonal expansion in terms of Hermite functions.

In this paper, we introduce classically time-controlled quantum automata (CTQA), a reasonable modification of Moore-Crutchfield quantum finite automata that uses time-dependent evolution and a "scheduler" defining how long each Hamiltonian will run. Surprisingly enough, time-dependent evolution provides a significant change in the computational power of quantum automata with respect to a discrete quantum model. Indeed, we show that if the scheduler is not computationally restricted, then a CTQA can decide the Halting problem. In order to unearth the computational capabilities of CTQAs we study the case of a computationally restricted scheduler. In particular, we show that, depending on the type of restriction imposed on the scheduler, a CTQA can (i) recognize non-regular languages with cutpoint, even in the presence of Karp-Lipton advice, and (ii) recognize non-regular promise languages with bounded error. Furthermore, we study the cutpoint-union of cutpoint languages by introducing a new model of Moore-Crutchfield quantum finite automata with a rotating tape head. The CTQA presents itself as a new model of computation that provides a different approach to a formal study of "classical control, quantum data" schemes in quantum computing.

In this paper, we propose a framework of mutual information-maximizing (MIM) quantized decoding for low-density parity-check (LDPC) codes using simple mappings and fixed-point additions. Our decoding method is generic in the sense that it can be applied to LDPC codes with arbitrary degree distributions, and can be implemented based on either the belief propagation (BP) algorithm or the min-sum (MS) algorithm. In particular, we propose the MIM density evolution (MIM-DE) to construct the lookup tables (LUTs) for the node updates. The computational complexity and memory requirements are discussed and compared to those of the LUT decoder variants. For applications with low-latency requirements, we consider the layered schedule to accelerate the convergence of decoding quasi-cyclic LDPC codes. In particular, we develop the layered MIM-DE to design the LUTs based on the MS algorithm, leading to the MIM layered quantized MS (MIM-LQMS) decoder. An optimization method is further introduced to reduce the memory requirement for storing the LUTs. Simulation results show that the MIM quantized decoders outperform the state-of-the-art LUT decoders in the waterfall region with both 3-bit and 4-bit precision over additive white Gaussian noise (AWGN) channels. For small numbers of decoding iterations, the MIM quantized decoders also achieve faster convergence compared to the benchmarks. Moreover, the 4-bit MIM-LQMS decoder approaches the error performance of the floating-point layered BP decoder within 0.3 dB in the moderate-to-high SNR regions, over both AWGN channels and fast fading channels.
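For orientation, the sketch below implements a plain floating-point min-sum (MS) decoder with a flooding schedule, i.e. the unquantized baseline that the MIM quantized decoders approximate with lookup tables; the parity-check matrix and LLRs in the usage lines are toy values, and the LUT construction via MIM-DE is not reproduced here.

```python
import numpy as np

def min_sum_decode(H, llr, max_iter=20):
    """Minimal floating-point min-sum LDPC decoder (flooding schedule).

    H: (m, n) parity-check matrix of 0/1 entries; llr: channel LLRs for n bits.
    This is the unquantized baseline, not the MIM-LQMS decoder itself.
    """
    m, n = H.shape
    M = np.zeros((m, n))                      # check-to-variable messages
    for _ in range(max_iter):
        # Variable-to-check: channel LLR plus all other incoming messages.
        V = H * (llr + M.sum(axis=0)) - M
        # Check-to-variable min-sum update: sign product times the minimum
        # magnitude over the other edges of each check.
        for i in range(m):
            idx = np.flatnonzero(H[i])
            v = V[i, idx]
            for k, j in enumerate(idx):
                others = np.delete(v, k)
                M[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
        hard = ((llr + M.sum(axis=0)) < 0).astype(int)   # posterior -> bits
        if not np.any((H @ hard) % 2):                   # all checks satisfied
            break
    return hard

# Toy parity-check matrix and a noisy all-zeros codeword (positive LLR = bit 0).
H = np.array([[1, 1, 0, 1, 0, 0], [0, 1, 1, 0, 1, 0], [1, 0, 1, 0, 0, 1]])
llr = np.array([2.1, -0.4, 1.9, 1.3, 0.8, 1.7])
print(min_sum_decode(H, llr))                            # -> all zeros
```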

Backward Stochastic Differential Equations (BSDEs) have been widely employed in various areas of the social and natural sciences, such as the pricing and hedging of financial derivatives, stochastic optimal control problems, optimal stopping problems and gene expression. Most BSDEs cannot be solved analytically, so numerical methods must be applied to approximate their solutions. A variety of numerical methods have been proposed over the past few decades, and many more are currently being developed. For the most part, they exist in a complex and scattered manner, with each requiring its own, partly overlapping, assumptions and conditions. The aim of the present work is thus to systematically survey various numerical methods for BSDEs and, in particular, to compare and categorize them for further developments and improvements. To achieve this goal, we focus primarily on the core features of each method on the basis of an exhaustive collection of 289 references: the main assumptions, the numerical algorithm itself, key convergence properties, and advantages and disadvantages, in order to provide full, up-to-date coverage of numerical methods for BSDEs, with insightful summaries of each and a useful comparison and categorization.
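As a point of reference for the survey, one representative discretization that many numerical methods for BSDEs build on is the backward Euler-type scheme: for a BSDE $Y_t = \xi + \int_t^T f(s,Y_s,Z_s)\,ds - \int_t^T Z_s\,dW_s$ and a time grid $0 = t_0 < \dots < t_N = T$ with $\Delta t_i = t_{i+1} - t_i$ and $\Delta W_i = W_{t_{i+1}} - W_{t_i}$,
$$
Y_{t_N} = \xi, \qquad
Z_{t_i} = \frac{1}{\Delta t_i}\,\mathbb{E}\!\left[\,Y_{t_{i+1}}\,\Delta W_i \mid \mathcal{F}_{t_i}\right], \qquad
Y_{t_i} = \mathbb{E}\!\left[\,Y_{t_{i+1}} \mid \mathcal{F}_{t_i}\right] + f\!\left(t_i, Y_{t_i}, Z_{t_i}\right)\Delta t_i,
$$
where the conditional expectations must in turn be approximated, for instance by least-squares regression on simulated paths. This is offered only as a hedged illustration of the kind of scheme being surveyed, not as a summary of any particular method in the survey.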

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
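A minimal sketch of the "residual networks are discretised ODEs" observation: integrating a neural vector field $dx/dt = f_\theta(x)$ with the explicit Euler method yields updates of exactly residual-block form. The tiny MLP below uses arbitrary random weights and performs only a forward pass; training, adjoint backpropagation through the solver, and the controlled/stochastic variants discussed in the thesis are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny vector field f_theta: one hidden layer with tanh activation.
# The weights are arbitrary placeholders, not a trained model.
W1, b1 = rng.normal(size=(16, 2)) * 0.5, np.zeros(16)
W2, b2 = rng.normal(size=(2, 16)) * 0.5, np.zeros(2)
f = lambda x: W2 @ np.tanh(W1 @ x + b1) + b2

def neural_ode_euler(x0, n_steps=100, T=1.0):
    """Explicit-Euler integration of dx/dt = f_theta(x): each step
    x <- x + dt * f_theta(x) has exactly the form of a residual block."""
    x, dt = np.asarray(x0, dtype=float), T / n_steps
    for _ in range(n_steps):
        x = x + dt * f(x)
    return x

print(neural_ode_euler(np.array([1.0, -1.0])))
```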
