
The method of harmonic balance (HB) is a spectrally accurate method used to obtain periodic steady state solutions to dynamical systems subjected to periodic perturbations. We adapt HB to solve for the stress response of the Giesekus model under large amplitude oscillatory shear (LAOS) deformation. HB transforms the system of differential equations to a set of nonlinear algebraic equations in the Fourier coefficients. Convergence studies find that the difference between the HB and true solutions decays exponentially with the number of harmonics ($H$) included in the ansatz as $e^{-m H}$. The decay coefficient $m$ decreases with increasing strain amplitude, and exhibits a "U" shaped dependence on applied frequency. The computational cost of HB increases slightly faster than linearly with $H$. The net result of rapid convergence and modest increase in computational cost with increasing $H$ implies that HB outperforms the conventional method of using numerical integration to solve differential constitutive equations under oscillatory shear. Numerical experiments find that HB is simultaneously about three orders of magnitude cheaper, and several orders of magnitude more accurate than numerical integration. Thus, it offers a compelling value proposition for parameter estimation or model selection.
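
As a hedged illustration of the harmonic-balance idea (not the authors' implementation, and for a scalar toy ODE rather than the full Giesekus tensor equations), the sketch below posits a truncated Fourier ansatz with $H$ harmonics and forces the residual to vanish at $2H+1$ collocation points, which converts the differential equation into a nonlinear algebraic system for the Fourier coefficients. The toy equation, its parameters, and the collocation (rather than Galerkin) flavour of HB are all assumptions made for brevity.

```python
# Minimal harmonic-balance sketch for a toy forced nonlinear ODE,
#   dy/dt + y + eps*y**3 = gamma0 * cos(omega * t),
# standing in for the Giesekus stress equations (assumed toy problem).
# Ansatz: y(t) ~ a0 + sum_k [a_k cos(k w t) + b_k sin(k w t)], k = 1..H.
# Forcing the ODE residual to zero at 2H+1 collocation points yields a
# nonlinear algebraic system for the Fourier coefficients.
import numpy as np
from scipy.optimize import fsolve

H = 5                      # number of harmonics retained in the ansatz
omega = 1.0                # forcing frequency (assumed)
eps, gamma0 = 0.5, 1.0     # nonlinearity and forcing amplitude (assumed)

t = np.linspace(0.0, 2 * np.pi / omega, 2 * H + 1, endpoint=False)
k = np.arange(1, H + 1)

def eval_series(c, t):
    """Evaluate the truncated Fourier series and its time derivative."""
    a0, a, b = c[0], c[1:H + 1], c[H + 1:]
    cos_kt = np.cos(np.outer(t, k) * omega)
    sin_kt = np.sin(np.outer(t, k) * omega)
    y = a0 + cos_kt @ a + sin_kt @ b
    dy = (-sin_kt * (k * omega)) @ a + (cos_kt * (k * omega)) @ b
    return y, dy

def residual(c):
    y, dy = eval_series(c, t)
    return dy + y + eps * y**3 - gamma0 * np.cos(omega * t)

coeffs = fsolve(residual, np.zeros(2 * H + 1))
print("first harmonic (a1, b1):", coeffs[1], coeffs[H + 1])
```

Repeating such a computation for increasing $H$ is the analogue of the convergence study described above; the Giesekus case additionally couples the shear and normal stress components, which the scalar toy problem omits.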

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modelling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modelling and model-driven software and systems. This year's edition will offer the modelling community further opportunities to advance the foundations of modelling and to propose innovative applications of modelling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
March 17, 2023

We introduce and analyze a discontinuous Galerkin method for the numerical modelling of the equations of Multiple-Network Poroelastic Theory (MPET) in the dynamic formulation. The MPET model can comprehensively describe functional changes in the brain, accounting for fluid flow at multiple scales. For the spatial discretization, we employ a high-order discontinuous Galerkin method on polygonal and polyhedral grids, and we derive stability and a priori error estimates. The temporal discretization is based on a coupling between a Newmark $\beta$-method for the momentum equation and a $\theta$-method for the pressure equations. After presenting some verification tests, we perform a convergence analysis on an agglomerated mesh of a brain-slice geometry. Finally, we present a simulation in a three-dimensional patient-specific brain geometry reconstructed from magnetic resonance images. The model presented in this paper can be regarded as a preliminary attempt to model perfusion in the brain.
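
For orientation, the single-step updates underlying the time discretizations named above are recalled below in their generic textbook form (the specific coupling to the MPET unknowns follows the paper; $G$ denotes a generic right-hand side for the pressure equations and $\beta$, $\gamma$, $\theta$ are the usual method parameters):

\[
\begin{aligned}
u^{n+1} &= u^{n} + \Delta t\,\dot{u}^{n} + \Delta t^{2}\Big[\big(\tfrac{1}{2}-\beta\big)\,\ddot{u}^{n} + \beta\,\ddot{u}^{n+1}\Big],\\
\dot{u}^{n+1} &= \dot{u}^{n} + \Delta t\Big[(1-\gamma)\,\ddot{u}^{n} + \gamma\,\ddot{u}^{n+1}\Big],\\
\frac{p^{n+1}-p^{n}}{\Delta t} &= \theta\,G\big(p^{n+1},u^{n+1}\big) + (1-\theta)\,G\big(p^{n},u^{n}\big).
\end{aligned}
\]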

The organiser of the UEFA Champions League, one of the most prestigious football tournaments in the world, faces a non-trivial mechanism design problem each autumn: how to choose a perfect matching in a balanced bipartite graph at random. For the sake of credibility and transparency, the Round of 16 draw consists of a sequence of discrete uniform choices from two urns whose compositions are dynamically updated with computer assistance. Even though the adopted mechanism is not uniformly distributed over all valid assignments, a recent result shows that it resembles the fairest possible lottery. We challenge this finding by analysing the effect of reversing the order of the urns. The optimal draw procedure is shown to depend primarily on the lexicographic order of the degree sequences of the two sets of teams. An example is provided in which exchanging the urns reduces unfairness by one-third on average and almost halves the worst bias over all pairs of teams. Nonetheless, the current policy of starting the draw with the runners-up remains the best option if the draw order must be fixed before the national associations of the clubs are known.
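
The sketch below is a hypothetical rendering of the draw mechanism described above: a runner-up is drawn uniformly at random from the first urn, and then a group winner is drawn uniformly among those opponents that still allow the remaining teams to be completed into a valid perfect matching. The feasibility matrix `allowed` (runners-up by winners, encoding the same-group and same-association restrictions) is an assumed input, and the initial matrix is assumed to admit at least one valid assignment; the physical ball-drawing ceremony is of course not modelled.

```python
# Hypothetical simulation of the two-urn draw with dynamically updated urns.
import random

def has_perfect_matching(allowed, free_rows, free_cols):
    """Kuhn's augmenting-path algorithm on the remaining bipartite graph."""
    match_of_col = {}

    def try_row(r, seen):
        for c in free_cols:
            if allowed[r][c] and c not in seen:
                seen.add(c)
                if c not in match_of_col or try_row(match_of_col[c], seen):
                    match_of_col[c] = r
                    return True
        return False

    return all(try_row(r, set()) for r in free_rows)

def draw(allowed, rng=random):
    n = len(allowed)
    free_rows, free_cols = set(range(n)), set(range(n))
    pairs = []
    while free_rows:
        r = rng.choice(sorted(free_rows))            # urn 1: remaining runners-up
        candidates = [c for c in free_cols
                      if allowed[r][c]
                      and has_perfect_matching(allowed,
                                               free_rows - {r},
                                               free_cols - {c})]
        c = rng.choice(candidates)                   # urn 2: eligible group winners
        pairs.append((r, c))
        free_rows.discard(r)
        free_cols.discard(c)
    return pairs
```

Reversing the order of the urns, as analysed above, amounts to transposing `allowed` and swapping the roles of rows and columns in this routine.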

The present study extends the work of Parareal convergence for oscillatory PDEs with finite time-scale separation (2019) by A. G. Peddle, T. Haut, and B. Wingate [16], and An asymptotic parallel-in-time method for highly oscillatory PDEs (2014) by T. Haut and B. Wingate [10], where a two-level Parareal method with averaging is examined. The method proposed in this paper is a multi-level Parareal method with arbitrarily many levels, not restricted to the two-level case. We give an asymptotic error estimate that reduces to the two-level estimate when only two levels are considered. Introducing more than two levels has important consequences for the averaging procedure, as we choose a separate averaging window for each level, which is an additional new feature of the present study. The separate averaging windows make the proposed method especially appropriate for multi-scale problems, because a level can be introduced for each intrinsic scale of the problem and the averaging procedure adapted so that it reproduces the behaviour of the model on the particular scale resolved by that level. The computational complexity of the new method is investigated and its efficiency is studied on several examples.
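
For reference, the two-level correction that the multi-level method generalizes is the standard Parareal update over one time slice, with $\mathcal{F}$ a fine (accurate) propagator and $\mathcal{G}$ a coarse (here, averaged) propagator:

\[
U_{n+1}^{k+1} \;=\; \mathcal{G}\big(U_{n}^{k+1}\big) \;+\; \mathcal{F}\big(U_{n}^{k}\big) \;-\; \mathcal{G}\big(U_{n}^{k}\big),
\]

where $n$ indexes time slices and $k$ the Parareal iteration. Roughly speaking, in the multi-level variant described above each additional level contributes its own averaged coarse propagator, with an averaging window chosen for the scale that level resolves.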

The weak maximum principle of the isoparametric finite element method is proved for the Poisson equation under the Dirichlet boundary condition in a (possibly concave) curvilinear polyhedral domain with edge openings smaller than $\pi$, a class that includes smooth domains and smooth deformations of convex polyhedra. The proof relies on the analysis of a dual elliptic problem with a discontinuous coefficient matrix arising from the isoparametric finite elements; consequently, the standard $H^2$ elliptic regularity required in existing proofs of the weak maximum principle does not hold for this dual problem. To overcome this difficulty, we decompose the solution into a smooth part and a nonsmooth part, and estimate the two parts by $H^2$ and $W^{1,p}$ estimates, respectively. As an application of the weak maximum principle, we prove a maximum-norm best approximation property of the isoparametric finite element method for the Poisson equation in a curvilinear polyhedron. The proof contains non-trivial modifications of Schatz's argument to account for the non-conformity of the isoparametric finite elements, which requires constructing a globally smooth flow map that maps the curvilinear polyhedron to a perturbed larger domain on which the $W^{1,\infty}$ regularity estimate for the Poisson equation can be established uniformly with respect to the perturbation.
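
For context, a weak maximum principle of this kind typically states that a discretely harmonic finite element function is controlled, up to a constant, by its boundary values. In generic form (not the precise isoparametric statement, whose spaces and bilinear form are those of the paper):

\[
\|u_h\|_{L^\infty(\Omega)} \;\le\; C\,\|u_h\|_{L^\infty(\partial\Omega)}
\qquad\text{whenever}\qquad a_h(u_h, v_h)=0 \;\;\text{for all discrete } v_h \text{ vanishing on } \partial\Omega,
\]

with $C$ independent of the mesh size.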

The time-marching strategy, which propagates the solution from one time step to the next, is a natural strategy for solving time-dependent differential equations on classical computers, as well as for solving the Hamiltonian simulation problem on quantum computers. For more general linear differential equations, a time-marching based quantum solver can suffer from exponentially vanishing success probability with respect to the number of time steps and is thus considered impractical. We solve this problem by repeatedly invoking a technique called uniform singular value amplification, so that the overall success probability can be lower bounded by a quantity independent of the number of time steps. The success probability can be further improved using a compression gadget lemma. This provides a path for designing quantum differential equation solvers that is an alternative to those based on quantum linear systems algorithms (QLSA). We demonstrate the performance of the time-marching strategy with a high-order integrator based on the truncated Dyson series. The complexity of the algorithm depends linearly on the amplification ratio, which quantifies the deviation from unitary dynamics. We prove that the linear dependence on the amplification ratio attains the query complexity lower bound and thus cannot be improved in the worst case. This algorithm also surpasses existing QLSA based solvers in three aspects: (1) the coefficient matrix $A(t)$ does not need to be diagonalizable; (2) $A(t)$ can be non-smooth and only of bounded variation; (3) it can use fewer queries to the initial state. Finally, we demonstrate the time-marching strategy with a first-order truncated Magnus series, while retaining the aforementioned benefits. Our analysis also raises some open questions concerning the differences between time-marching and QLSA based methods for solving differential equations.
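
For concreteness, the exact short-time propagator that time marching applies step by step, and its truncated Dyson-series approximation of order $K$, are (in generic form; the quantum algorithm block-encodes such an operator and, roughly speaking, amplifies its singular values to control the success probability):

\[
u(t_{k+1}) \;=\; \mathcal{T}\exp\!\Big(\int_{t_k}^{t_{k+1}} A(s)\,\mathrm{d}s\Big)\,u(t_k)
\;\approx\; \sum_{j=0}^{K} \int_{t_k}^{t_{k+1}}\!\!\mathrm{d}s_j \int_{t_k}^{s_j}\!\!\mathrm{d}s_{j-1}\cdots\int_{t_k}^{s_2}\!\!\mathrm{d}s_1\; A(s_j)\cdots A(s_1)\, u(t_k),
\]

where $\mathcal{T}$ denotes time ordering.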

When solving compressible multi-material flow problems, an unresolved challenge is the computation of advective fluxes across material interfaces that separate drastically different thermodynamic states and relations. A popular idea in this regard is to locally construct bimaterial Riemann problems and to apply their exact solutions in flux computation. For general equations of state, however, finding the exact solution of a Riemann problem is expensive, as it requires nested loops. Multiplied by the large number of Riemann problems constructed during a simulation, the computational cost often becomes prohibitive. The work presented in this paper aims to accelerate the solution of bimaterial Riemann problems without introducing approximations or offline precomputation tasks. The basic idea is to exploit some special properties of the Riemann problem equations and to recycle previous solutions as much as possible. Following this idea, four acceleration methods are developed: (1) a change of integration variable through rarefaction fans, (2) storing and reusing integration trajectory data, (3) step size adaptation, and (4) constructing an R-tree on the fly to generate initial guesses. The performance of these acceleration methods is assessed using four example problems in underwater explosion, laser-induced cavitation, and hypervelocity impact. These problems exhibit strong shock waves, large interface deformations, contact of multiple (more than two) interfaces, and interactions between gases and condensed matter. In these challenging cases, the solution of bimaterial Riemann problems is accelerated by 37 to 83 times. As a result, the total cost of advective flux computation, which includes the exact Riemann problem solution at material interfaces and the numerical flux calculation over the entire computational domain, is accelerated by 18 to 79 times.
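
The sketch below illustrates only the spirit of acceleration idea (4): previously computed star states are cached and the one whose left/right input states are nearest to the current query is reused as the initial guess for the iterative exact solver. A scipy `cKDTree` (rebuilt periodically, so lookups may be slightly stale) stands in for the R-tree built on the fly in the paper, and `exact_solver` is an assumed placeholder for an exact bimaterial Riemann solver that accepts an initial guess.

```python
# Hypothetical initial-guess cache for an exact (bimaterial) Riemann solver.
import numpy as np
from scipy.spatial import cKDTree

class RiemannGuessCache:
    def __init__(self, rebuild_every=256):
        self.keys, self.values = [], []      # query points and cached star pressures
        self.tree = None
        self.rebuild_every = rebuild_every

    def query(self, key, default_guess):
        if self.tree is None:
            return default_guess
        _, idx = self.tree.query(key)        # nearest previously solved problem
        return self.values[idx]

    def insert(self, key, p_star):
        self.keys.append(key)
        self.values.append(p_star)
        # Rebuild the search structure only occasionally to amortize its cost.
        if self.tree is None or len(self.keys) % self.rebuild_every == 0:
            self.tree = cKDTree(np.asarray(self.keys))

def solve_with_cache(left, right, cache, exact_solver, default_guess):
    key = np.concatenate([left, right])      # e.g. (rho, u, p) on each side (assumed)
    guess = cache.query(key, default_guess)
    p_star, iters = exact_solver(left, right, guess)   # placeholder solver
    cache.insert(key, p_star)
    return p_star, iters
```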

We propose a new randomized method for solving systems of nonlinear equations, which can find sparse solutions or solutions under certain simple constraints. The scheme only takes gradients of component functions and uses Bregman projections onto the solution space of a Newton equation. In the special case of Euclidean projections, the method is known as the nonlinear Kaczmarz method. Furthermore, if the component functions are nonnegative, we are in the setting of optimization under the interpolation assumption, and the method reduces to SGD with the recently proposed stochastic Polyak step size. For general Bregman projections, our method is a stochastic mirror descent with a novel adaptive step size. We prove that in the convex setting each iteration of our method results in a smaller Bregman distance to exact solutions than the standard Polyak step. Our generalization to Bregman projections comes at the price that a convex one-dimensional optimization problem needs to be solved in each iteration. This can typically be done with globalized Newton iterations. Convergence is proved in two classical settings of nonlinearity: for convex nonnegative functions and, locally, for functions which fulfill the tangential cone condition. Finally, we show examples in which the proposed method outperforms similar methods with the same memory requirements.
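
The Euclidean special case mentioned above admits a very short sketch: at each step a component equation $f_i(x)=0$ is sampled and the iterate is projected onto the solution set of its linearization, which is exactly a Polyak-type step. The two-equation toy system below is assumed for illustration only; the Bregman generalization would replace the Euclidean projection by a mirror step with the adaptive step size of the paper.

```python
# Minimal sketch of the nonlinear Kaczmarz method (Euclidean projections).
import numpy as np

def nonlinear_kaczmarz(funcs, grads, x0, n_iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        i = rng.integers(len(funcs))         # sample a component equation f_i(x) = 0
        fi, gi = funcs[i](x), grads[i](x)
        gnorm2 = gi @ gi
        if gnorm2 > 0:
            x = x - (fi / gnorm2) * gi       # project onto the linearized solution set
    return x

# Assumed toy system: x0^2 + x1^2 = 4 and x0 * x1 = 1.
funcs = [lambda x: x[0]**2 + x[1]**2 - 4.0, lambda x: x[0] * x[1] - 1.0]
grads = [lambda x: np.array([2 * x[0], 2 * x[1]]),
         lambda x: np.array([x[1], x[0]])]
print(nonlinear_kaczmarz(funcs, grads, [2.0, 0.5]))
```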

Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of the network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
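
The link in question can be summarized by a generic scaling ansatz (not the specific parametrizations studied in the paper): writing the residual update with a depth-dependent scale and assuming the trained weights approach a smooth function of the layer index,

\[
x_{\ell+1} \;=\; x_{\ell} \;+\; \frac{1}{L^{\alpha}}\, f\big(x_{\ell}, W_{\ell}\big), \qquad W_{\ell} \approx w(\ell/L),
\]

with $\alpha = 1$ formally yields the neural ODE limit $\frac{\mathrm{d}x}{\mathrm{d}s} = f\big(x(s), w(s)\big)$ as $L \to \infty$. The experiments above probe whether trained weights actually satisfy such a scaling, and identify regimes where they do not.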

Over the past few years, we have seen fundamental breakthroughs in core problems in machine learning, largely driven by advances in deep neural networks. At the same time, the amount of data collected in a wide array of scientific domains is dramatically increasing in both size and complexity. Taken together, this suggests many exciting opportunities for deep learning applications in scientific settings. But a significant challenge is simply knowing where to start. The sheer breadth and diversity of deep learning techniques makes it difficult to determine which scientific problems might be most amenable to these methods, or which specific combination of methods might offer the most promising first approach. In this survey, we focus on addressing this central issue, providing an overview of many widely used deep learning models, spanning visual, sequential and graph-structured data, associated tasks and different training methods, along with techniques to use deep learning with less data and to better interpret these complex models, two central considerations for many scientific use cases. We also include overviews of the full design process, implementation tips, and links to a plethora of tutorials, research summaries and open-source deep learning pipelines and pretrained models developed by the community. We hope that this survey will help accelerate the use of deep learning across different scientific domains.

In this paper, we propose a deep reinforcement learning framework called GCOMB to learn algorithms that can solve combinatorial problems over large graphs. GCOMB mimics the greedy algorithm for the original problem and incrementally constructs a solution. The proposed framework utilizes a Graph Convolutional Network (GCN) to generate node embeddings that predict which nodes from the entire node set are likely to belong to the solution set. These embeddings enable an efficient training process for learning the greedy policy via Q-learning. Through extensive evaluation on several real and synthetic datasets containing up to a million nodes, we establish that GCOMB is up to 41% better than the state of the art, up to seven times faster than the greedy algorithm, and robust and scalable to large dynamic networks.
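
A highly simplified sketch of the pipeline described above follows: a toy, untrained graph-convolution step produces node embeddings, a linear head scores nodes (standing in for learned Q-values), and a greedy loop builds the solution set one node at a time. The adjacency matrix, features, weights, budget, and the crude "node already covered" signal are all assumed placeholders; GCOMB itself trains these components with Q-learning on the target combinatorial objective.

```python
# Toy greedy construction guided by GCN-style node scores (illustration only).
import numpy as np

def gcn_layer(adj, feats, weight):
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    agg = (adj @ feats + feats) / deg            # mean aggregation with self-loop
    return np.maximum(agg @ weight, 0.0)         # ReLU

def greedy_select(adj, feats, w1, w_out, budget):
    chosen = []
    for _ in range(budget):
        emb = gcn_layer(adj, feats, w1)
        scores = emb @ w_out                     # stand-in for learned Q-values
        scores[chosen] = -np.inf                 # never re-pick a node
        v = int(np.argmax(scores))
        chosen.append(v)
        feats[v] = 0.0                           # crude "already covered" signal
    return chosen

rng = np.random.default_rng(0)
n, d, h = 50, 8, 16
adj = (rng.random((n, n)) < 0.1).astype(float)
adj = np.maximum(adj, adj.T)
feats = rng.random((n, d))
print(greedy_select(adj, feats, rng.normal(size=(d, h)), rng.normal(size=h), budget=5))
```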
