
This paper proposes high-order accurate, well-balanced (WB), energy-stable (ES) adaptive moving mesh finite difference schemes for the shallow water equations (SWEs) with non-flat bottom topography. To enable the construction of ES schemes on moving meshes, a reformulation of the SWEs is introduced in which the bottom topography is treated as an additional conservative variable that evolves in time. The corresponding energy inequality is derived from a modified energy function, and the reformulated SWEs and energy inequality are then transformed into curvilinear coordinates. A two-point energy-conservative (EC) flux is constructed, and high-order EC schemes based on this flux are proved to be WB in the sense that they preserve the lake-at-rest steady state. High-order ES schemes are then derived by adding suitable dissipation terms to the EC schemes; these terms are newly designed to maintain the WB and ES properties simultaneously. The adaptive moving mesh strategy iteratively solves the Euler-Lagrange equations of a mesh adaptation functional. The fully discrete schemes are obtained with the explicit strong-stability-preserving third-order Runge-Kutta method. Several numerical tests validate the accuracy, WB and ES properties, shock-capturing ability, and high efficiency of the schemes.
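
As a concrete illustration of the time integrator named in the abstract, here is a minimal sketch of one explicit strong-stability-preserving third-order Runge-Kutta (SSP-RK3, Shu-Osher form) step for a generic semi-discretization du/dt = L(u); the spatial operator `L` below is a placeholder toy advection stencil, not the paper's EC/ES flux discretization.

```python
import numpy as np

def ssp_rk3_step(u, L, dt):
    """One SSP-RK3 (Shu-Osher) step for du/dt = L(u)."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# Toy usage: linear advection u_t + u_x = 0 with a periodic upwind stencil.
def L(u, dx=0.01):
    return -(u - np.roll(u, 1)) / dx

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.exp(-100.0 * (x - 0.5) ** 2)
for _ in range(50):
    u = ssp_rk3_step(u, L, dt=0.005)
```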

Related Content

Patient-specific brain mesh generation from MRI can be time-consuming and may require manual corrections, e.g., for meshing the ventricular system or defining subdomains. To address this issue, we consider an image registration approach: an input magnetic resonance image (MRI) is registered to a respective target image, and the resulting transformation is used to obtain a new mesh from a high-quality template mesh. To obtain the transformation, we solve an optimization problem constrained by a linear hyperbolic transport equation. We use a higher-order discontinuous Galerkin finite element method for discretization and show that, under a restrictive assumption, the numerical upwind scheme can be derived from the continuous weak formulation of the transport equation. We present a numerical implementation that builds on the established finite element packages FEniCS and dolfin-adjoint. To demonstrate the efficacy of the proposed approach, we present numerical results for the registration of an input MRI to a target MRI of two distinct individuals. Moreover, we show that the registration transforms a manually crafted input mesh into a new mesh for the target subject whilst preserving mesh quality. Challenges the algorithm faces with the complex cortical folding structure are discussed.
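
To make the transport constraint concrete, the sketch below advances a 1D "image" profile with a first-order upwind step. This is only an analogue of the upwinding idea; the paper itself uses a higher-order discontinuous Galerkin scheme in FEniCS, and the velocity field here is an illustrative assumption.

```python
import numpy as np

def upwind_transport_step(img, v, dt, dx):
    """Advance I_t + v I_x = 0 one step; v may vary in space."""
    # Choose the one-sided difference according to the sign of v (upwinding).
    fwd = (np.roll(img, -1) - img) / dx      # forward difference
    bwd = (img - np.roll(img, 1)) / dx       # backward difference
    Ix = np.where(v > 0, bwd, fwd)
    return img - dt * v * Ix

# Toy usage: transport a 1D intensity profile with a smooth velocity field.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
img = np.exp(-200.0 * (x - 0.3) ** 2)
v = 0.5 + 0.2 * np.sin(2 * np.pi * x)
for _ in range(100):
    img = upwind_transport_step(img, v, dt=0.002, dx=x[1] - x[0])
```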

In this paper we discuss potentially practical ways to produce expander graphs with good spectral properties and a compact description. We focus on several classes of uniform and bipartite expander graphs defined as random Schreier graphs of the general linear group over the finite field of size two. We perform numerical experiments showing that such constructions produce spectral expanders useful for practical applications. To find a theoretical explanation of the observed experimental results, we use the method of moments to prove upper bounds on the expected second largest eigenvalue of the random Schreier graphs used in our constructions. We focus on bounds whose asymptotic behaviour is difficult to study but which yield non-trivial conclusions for relatively small graphs with parameters matching our numerical experiments (e.g., fewer than 2^200 vertices and degree at least logarithmic in the number of vertices).
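
A small sketch of the construction described: a random Schreier graph of GL(n, F_2) acting on the nonzero vectors of F_2^n, with the second eigenvalue estimated numerically. The dimension n and generator count k below are chosen tiny for illustration; the paper's experiments use far larger parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_gf2(M):
    """Rank over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        rows = np.nonzero(M[r:, c])[0]
        if rows.size == 0:
            continue
        M[[r, r + rows[0]]] = M[[r + rows[0], r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

def random_gl(n):
    """Rejection-sample a uniformly random invertible matrix over F_2."""
    while True:
        M = rng.integers(0, 2, size=(n, n), dtype=np.uint8)
        if rank_gf2(M) == n:
            return M

n, k = 6, 4                      # 2^6 - 1 = 63 vertices, degree 2k = 8
N = 2 ** n - 1
vecs = ((np.arange(1, N + 1)[:, None] >> np.arange(n)) & 1).astype(np.uint8)
A = np.zeros((N, N))
for _ in range(k):
    M = random_gl(n)
    images = (vecs @ M.T) % 2                    # group action v -> Mv
    idx = (images * (1 << np.arange(n))).sum(1) - 1
    for v, w in enumerate(idx):                  # undirected edge {v, Mv}
        A[v, w] += 1
        A[w, v] += 1

eig = np.sort(np.abs(np.linalg.eigvalsh(A / (2 * k))))
print("second largest |eigenvalue| of the walk matrix:", eig[-2])
```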

We prove that the sum of $t$ Boolean-valued random variables sampled by a random walk on a regular expander converges in total variation distance to a discrete normal distribution at a rate of $O(\lambda/t^{1/2-o(1)})$, where $\lambda$ is the second largest eigenvalue of the random walk matrix in absolute value. To the best of our knowledge, among known Berry-Esseen bounds for Markov chains, our result is the first to show convergence in total variation distance, and the first to incorporate a linear dependence on the expansion $\lambda$. In contrast, prior Markov chain Berry-Esseen bounds showed a convergence rate of $O(1/\sqrt{t})$ in weaker metrics such as the Kolmogorov distance. Our result also improves upon prior work in the pseudorandomness literature, which showed that the total variation distance is $O(\lambda)$ when the approximating distribution is taken to be a binomial distribution. We achieve the faster $O(\lambda/t^{1/2-o(1)})$ convergence rate by generalizing the binomial distribution to discrete normals of arbitrary variance, which we construct using random walks on an appropriate 2-state Markov chain. Our bound can therefore be viewed as a regularity lemma that reduces the study of arbitrary expanders to a small class of particularly simple expanders.
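
An empirical sketch of the setting: sum Boolean states along a random walk on a symmetric 2-state chain with second eigenvalue lam, and measure the total variation distance of the sum's distribution from a binomial reference. The particular chain and the binomial comparison are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
lam, t, trials = 0.3, 40, 200_000
# Symmetric 2-state chain: stay with prob (1 + lam)/2, switch otherwise;
# its transition matrix has eigenvalues 1 and lam.
p_stay = (1 + lam) / 2

sums = np.zeros(trials, dtype=int)
state = rng.integers(0, 2, size=trials)          # stationary (uniform) start
for _ in range(t):
    sums += state
    switch = rng.random(trials) > p_stay
    state = np.where(switch, 1 - state, state)

emp = np.bincount(sums, minlength=t + 1) / trials
binom = np.array([comb(t, s) for s in range(t + 1)]) / 2.0 ** t
print("TV distance to Binomial(t, 1/2):", 0.5 * np.abs(emp - binom).sum())
```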

We analyse an algorithm for solving stochastic mean-payoff games that combines the ideas of relative value iteration and Krasnoselskii-Mann damping. We derive parameterized complexity bounds for several classes of games satisfying irreducibility conditions. In particular, we show that an $\epsilon$-approximation of the value of an irreducible concurrent stochastic game can be computed in $O(|\log\epsilon|)$ iterations, where the constant in the $O(\cdot)$ is explicit and depends on the smallest non-zero transition probabilities. This should be compared with a bound of $O(\epsilon^{-1}|\log\epsilon|)$ obtained by Chatterjee and Ibsen-Jensen (ICALP 2014) for the same class of games, and with a bound of $O(\epsilon^{-1})$ obtained by Allamigeon, Gaubert, Katz and Skomra (ICALP 2022) for turn-based games. We also establish parameterized complexity bounds for entropy games, a class of matrix multiplication games introduced by Asarin, Cervelle, Degorre, Dima, Horn and Kozyakin. We derive these results by methods of variational analysis, establishing contraction properties of the relative Krasnoselskii-Mann iteration with respect to Hilbert's semi-norm.
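
The sketch below shows relative value iteration with Krasnoselskii-Mann damping on a one-player average-reward MDP rather than a two-player concurrent game; the Bellman operator `T` stands in for the paper's Shapley operator, and the random instance is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 3                                  # states, actions
P = rng.random((m, n, n)); P /= P.sum(2, keepdims=True)
r = rng.random((m, n))

def T(x):
    """Average-reward Bellman operator: max over actions of r + P x."""
    return (r + P @ x).max(axis=0)

theta, x = 0.5, np.zeros(n)
for k in range(2000):
    y = (1 - theta) * x + theta * T(x)       # Krasnoselskii-Mann damping
    y -= y.mean()                            # "relative" normalization
    if np.abs(y - x).max() < 1e-12:
        break
    x = y

g = T(x) - x                                 # at the fixed point, g is constant
print("mean-payoff estimate:", (g.max() + g.min()) / 2)
```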

Anderson acceleration (AA) is a popular method for accelerating fixed-point iterations, but it may suffer from instability and stagnation. We propose a globalization method for AA that improves stability and achieves unified global and local convergence. Unlike existing AA globalization approaches, which rely on safeguarding operations that may hinder fast local convergence, we adopt a nonmonotone trust-region framework and introduce an adaptive quadratic regularization together with a tailored acceptance mechanism. We prove global convergence and show that our algorithm attains the same local convergence rate as AA under appropriate assumptions. The effectiveness of our method is demonstrated in several numerical experiments.
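
For reference, here is plain (non-globalized) Type-II Anderson acceleration, the update that the paper's trust-region machinery safeguards; the window size, mixing parameter beta = 1, and the toy fixed-point map are illustrative choices.

```python
import numpy as np

def anderson(g, x0, m=5, iters=50, tol=1e-12):
    """Type-II Anderson acceleration (beta = 1) for the fixed point of g."""
    x = x0.copy()
    X, F = [x.copy()], [g(x) - x]            # iterate and residual histories
    for _ in range(iters):
        mk = len(F)
        if mk == 1:
            x = x + F[-1]                    # plain fixed-point step
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(mk - 1)])
            dX = np.column_stack([X[i + 1] - X[i] for i in range(mk - 1)])
            gamma = np.linalg.lstsq(dF, F[-1], rcond=None)[0]
            x = x + F[-1] - (dX + dF) @ gamma
        f = g(x) - x
        X.append(x.copy()); F.append(f)
        if len(F) > m + 1:                   # keep a window of m differences
            X.pop(0); F.pop(0)
        if np.linalg.norm(f) < tol:
            break
    return x

# Toy usage: solve x = cos(x) componentwise.
sol = anderson(np.cos, np.ones(3))
print(sol, np.cos(sol) - sol)
```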

In this paper, we propose an efficient quadratic interpolation formula utilizing solution gradients computed and stored at nodes, and demonstrate its application to a third-order cell-centered finite-volume discretization on tetrahedral grids. The proposed quadratic formula is constructed from an efficient formula for computing a projected derivative. It is efficient in that it completely eliminates the need to compute and store second derivatives of solution variables or any other quantities, which are typically required when upgrading a second-order cell-centered unstructured-grid finite-volume discretization to third-order accuracy. Moreover, the high-order flux quadrature formula required for third-order accuracy can also be simplified by the efficient projected-derivative formula, yielding a numerical flux at a face centroid plus a curvature correction that does not involve second derivatives of the flux. Similarly, a source term can be integrated over a cell to high order as the source term evaluated at the cell centroid plus a curvature correction, again without second derivatives of the source term. The discretization is defined as an approximation to an integral form of a conservation law, but the numerical solution is defined as a point value at a cell center; consequently, there is no need to compute and store the geometric moments that a quadratic polynomial would otherwise require to preserve a cell average. Third-order accuracy and improved second-order accuracy are demonstrated and investigated for simple but illustrative test cases in three dimensions.
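
A hedged 1D-along-the-edge analogue of the idea: the directional second derivative is approximated from the difference of the two nodal gradients projected onto the edge, so a quadratic interpolant needs no stored second derivatives. This illustrates the principle, not the paper's exact formula; the quadratic test function is an assumption for the exactness check.

```python
import numpy as np

def quadratic_edge_interp(xa, xb, ua, ga, gb, s):
    """Quadratic along the segment xa -> xb at parameter s in [0, 1]."""
    d = xb - xa
    dua = ga @ d                    # projected first derivative at node a
    ddu = (gb - ga) @ d             # projected-derivative curvature estimate
    return ua + s * dua + 0.5 * s ** 2 * ddu

# Verify exactness on a quadratic u(x) = x^T H x / 2 + b^T x.
H = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.3], [0.0, 0.3, 3.0]])
b = np.array([1.0, -2.0, 0.5])
u = lambda x: 0.5 * x @ H @ x + b @ x
grad = lambda x: H @ x + b

xa, xb = np.zeros(3), np.array([1.0, 0.2, -0.4])
xm = 0.5 * (xa + xb)               # e.g., a face centroid on the edge
approx = quadratic_edge_interp(xa, xb, u(xa), grad(xa), grad(xb), 0.5)
print(approx, u(xm))               # identical for quadratic data
```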

Piecewise constant curvature is a popular kinematics framework for continuum robots. Computing the model parameters from a desired end pose, known as the inverse kinematics problem, is fundamental in manipulation, tracking, and planning tasks. In this paper, we propose an efficient multi-solution solver for the inverse kinematics problem of 3-section constant-curvature robots that bridges theoretical reduction and numerical correction. We derive analytical conditions that simplify the original problem to a one-dimensional problem, and we formalize the equivalence of the two problems. In addition, we introduce an approximation with bounded error so that the remaining dimension becomes traversable while the other parameters are analytically solvable. Building on these theoretical results, we implement the solver using global search and numerical correction. Experiments validate that our solver is more efficient and has a higher success rate than purely numerical methods when a single solution is required, and demonstrate its ability to obtain multiple solutions, enabling optimal path planning in a space with obstacles.
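
For context, below is the forward map such a solver must invert: standard piecewise-constant-curvature (PCC) kinematics with parameters (kappa, phi, ell) per section, composed over three sections. This is the forward model only, under the common PCC convention, not the paper's reduction-based inverse solver; the sample parameters are arbitrary.

```python
import numpy as np

def section_transform(kappa, phi, ell):
    """Homogeneous transform of one constant-curvature section."""
    theta = kappa * ell
    if abs(kappa) < 1e-12:                       # straight-section limit
        p = np.array([0.0, 0.0, ell])
    else:
        p = np.array([(1 - np.cos(theta)) / kappa, 0.0, np.sin(theta) / kappa])
    c, s = np.cos(theta), np.sin(theta)
    Ry = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    cp, sp = np.cos(phi), np.sin(phi)
    Rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rz.T                   # bend in the plane at angle phi
    T[:3, 3] = Rz @ p
    return T

def fk_3_sections(params):
    """params = [(kappa, phi, ell), ...] for the three sections."""
    T = np.eye(4)
    for kappa, phi, ell in params:
        T = T @ section_transform(kappa, phi, ell)
    return T

T_end = fk_3_sections([(2.0, 0.0, 0.5), (1.0, np.pi / 3, 0.4),
                       (0.5, -np.pi / 4, 0.3)])
print(T_end[:3, 3])                              # end-effector position
```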

State-of-the-art machine-learning models are a popular choice for modeling and forecasting energy behavior in buildings because, given enough data, they are good at finding spatiotemporal patterns and structures even when the complexity prohibits analytical descriptions. However, their architecture typically bears no physical correspondence to the mechanistic structures of the governing physical phenomena. As a result, their ability to generalize to unobserved timesteps depends on how well the data represent the dynamics of the underlying system, which is difficult to guarantee in real-world engineering problems such as control and energy management in digital twins. In response, we present a framework that combines lumped-parameter models, in the form of linear time-invariant (LTI) state-space models (SSMs), with unsupervised reduced-order modeling in a subspace-based domain adaptation (SDA) framework. SDA is a transfer-learning (TL) technique typically adopted to exploit labeled data from one domain in order to predict in a different but related target domain for which labeled data are limited. We introduce a novel SDA approach in which, instead of labeled data, we leverage the geometric structure of the LTI SSM, governed by well-known heat transfer ordinary differential equations, to forecast unobserved timesteps beyond the measurement data. Fundamentally, our approach geometrically aligns the physics-derived and data-derived embedded subspaces. In this initial exploration, we evaluate the physics-based SDA framework on a demonstrative heat-conduction scenario, varying the thermophysical properties of the source and target systems to demonstrate the transferability of mechanistic models from a physics-based domain to a data domain.
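
A minimal sketch of the subspace-alignment step, assuming a classic alignment transform: dominant subspaces of a physics-derived (discrete heat-conduction LTI SSM) trajectory matrix and a data-derived one are extracted by SVD, and the source basis is rotated toward the target basis. The toy rod model, noise levels, and dimension choices are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n, steps, d = 20, 200, 4                     # states, timesteps, subspace dim

# Physics domain: discrete-time LTI SSM from a 1D heat-conduction rod.
alpha = 0.2
A = (1 - 2 * alpha) * np.eye(n) + alpha * (np.eye(n, k=1) + np.eye(n, k=-1))
x = rng.random(n)
X_src = np.column_stack([x := A @ x for _ in range(steps)])

# Data domain: trajectories of a perturbed system with measurement noise.
A_t = A + 0.01 * rng.standard_normal((n, n))
x = rng.random(n)
X_tgt = np.column_stack([(x := A_t @ x) + 0.001 * rng.standard_normal(n)
                         for _ in range(steps)])

Us = np.linalg.svd(X_src, full_matrices=False)[0][:, :d]   # source basis
Ut = np.linalg.svd(X_tgt, full_matrices=False)[0][:, :d]   # target basis
M = Us.T @ Ut                                # alignment transform (d x d)
Us_aligned = Us @ M                          # source subspace moved to target

# Alignment shrinks the distance between the embedded subspaces.
print(np.linalg.norm(Us - Ut), np.linalg.norm(Us_aligned - Ut))
```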

Graph Neural Networks (GNNs) have been studied through the lens of expressive power and generalization, but their optimization properties are less well understood. We take a first step towards analyzing GNN training by studying the gradient dynamics of GNNs. First, we analyze linearized GNNs and prove that, despite the non-convexity of training, convergence to a global minimum at a linear rate is guaranteed under mild assumptions that we validate on real-world graphs. Second, we study what may affect GNN training speed. Our results show that the training of GNNs is implicitly accelerated by skip connections, more depth, and/or a good label distribution. Empirical results confirm that our theoretical findings for linearized GNNs align with the training behavior of nonlinear GNNs. Our results provide the first theoretical support for the success of GNNs with skip connections in terms of optimization, and suggest that deep GNNs with skip connections are promising in practice.
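
A sketch of the linearized setting analyzed: a two-layer GCN with the nonlinearity removed, trained by full-batch gradient descent on a squared loss. The random graph, targets, and learning rate are illustrative assumptions; under such assumptions the printed loss typically decays geometrically toward its minimum, consistent with a linear convergence rate.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, h = 30, 8, 16                          # nodes, features, hidden width
adj = (rng.random((n, n)) < 0.2).astype(float)
adj = np.minimum(adj + adj.T + np.eye(n), 1.0)   # symmetric, with self-loops
deg = adj.sum(1)
A_hat = adj / np.sqrt(np.outer(deg, deg))    # symmetrically normalized adjacency

X = rng.standard_normal((n, d)) / np.sqrt(d)
y = rng.standard_normal((n, 1))
W1 = rng.standard_normal((d, h)) / np.sqrt(d)
W2 = rng.standard_normal((h, 1)) / np.sqrt(h)

lr = 0.01
for step in range(1001):
    H = A_hat @ X @ W1                       # first layer, nonlinearity removed
    err = A_hat @ H @ W2 - y                 # second layer and residual
    G = A_hat @ err                          # shared backprop term (A_hat symmetric)
    grad_W1 = X.T @ A_hat @ (G @ W2.T)
    grad_W2 = H.T @ G
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
    if step % 200 == 0:
        print(step, 0.5 * (err ** 2).sum())
```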

Normalization is known to help the optimization of deep neural networks. Curiously, different architectures require specialized normalization methods. In this paper, we study which normalization is effective for Graph Neural Networks (GNNs). First, we adapt existing methods from other domains to GNNs and evaluate them; InstanceNorm achieves faster convergence than BatchNorm and LayerNorm. We explain this by showing that InstanceNorm serves as a preconditioner for GNNs, whereas the preconditioning effect of BatchNorm is weaker due to heavy batch noise in graph datasets. Second, we show that the shift operation in InstanceNorm degrades the expressiveness of GNNs on highly regular graphs. We address this issue by proposing GraphNorm, which uses a learnable shift. Empirically, GNNs with GraphNorm converge faster than GNNs with other normalization methods. GraphNorm also improves the generalization of GNNs, achieving better performance on graph classification benchmarks.
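
A sketch of the forward pass of the GraphNorm idea described above: per-graph, per-feature normalization in which the mean is subtracted only partially through a learnable shift alpha (alpha = 1 recovers InstanceNorm-style behavior). This follows the stated formula, not the authors' reference implementation; the toy inputs are assumptions.

```python
import numpy as np

def graph_norm(X, alpha, gamma, beta, eps=1e-5):
    """X: (num_nodes, num_features) node features of a single graph."""
    mu = X.mean(axis=0)                       # per-feature mean over the graph
    shifted = X - alpha * mu                  # learnable partial mean shift
    sigma = np.sqrt((shifted ** 2).mean(axis=0) + eps)
    return gamma * shifted / sigma + beta

# Toy usage on one graph with 5 nodes and 3 features.
rng = np.random.default_rng(5)
X = rng.standard_normal((5, 3))
d = X.shape[1]
out = graph_norm(X, alpha=np.ones(d), gamma=np.ones(d), beta=np.zeros(d))
print(out.mean(axis=0), out.std(axis=0))      # ~0 mean, ~1 std per feature
```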
