
The relativistic hydrodynamics (RHD) equations have three crucial intrinsic physical constraints on the primitive variables: positivity of pressure and density, and subluminal fluid velocity. However, numerical simulations can violate these constraints, leading to nonphysical results or even simulation failure. Designing genuinely physical-constraint-preserving (PCP) schemes is very difficult, as the primitive variables cannot be explicitly reformulated using conservative variables due to relativistic effects. In this paper, we propose three efficient Newton--Raphson (NR) methods for robustly recovering primitive variables from conservative variables. Importantly, we rigorously prove that these NR methods are always convergent and PCP, meaning they preserve the physical constraints throughout the NR iterations. The discovery of these robust NR methods and their PCP convergence analyses are highly nontrivial and technical. As an application, we apply the proposed NR methods to design PCP finite volume Hermite weighted essentially non-oscillatory (HWENO) schemes for solving the RHD equations. Our PCP HWENO schemes incorporate high-order HWENO reconstruction, a PCP limiter, and strong-stability-preserving time discretization. We rigorously prove the PCP property of the fully discrete schemes using convex decomposition techniques. Moreover, we suggest the characteristic decomposition with rescaled eigenvectors and scale-invariant nonlinear weights to enhance the performance of the HWENO schemes in simulating large-scale RHD problems. Several demanding numerical tests are conducted to demonstrate the robustness, accuracy, and high resolution of the proposed PCP HWENO schemes and to validate the efficiency of our NR methods.
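The recovery problem at the heart of the abstract can be made concrete for one-dimensional special RHD with an ideal-gas equation of state: given conservatives (D, m, E), solve a scalar equation for the pressure. The sketch below is a generic textbook Newton--Raphson iteration with a finite-difference derivative and an illustrative initial guess; it is not one of the three provably convergent PCP methods the paper constructs.

```python
import numpy as np

def recover_primitives(D, m, E, gamma=5.0 / 3.0, tol=1e-12, max_iter=200):
    """Recover primitives (rho, v, p) from conservatives (D, m, E) for 1D
    special RHD with an ideal-gas EOS, where D = rho*W, m = rho*h*W^2*v,
    E = rho*h*W^2 - p, by Newton-Raphson on the pressure.  A generic
    textbook iteration with a finite-difference derivative, NOT one of the
    three provably PCP-convergent NR methods of the paper."""

    def residual(p):
        v = m / (E + p)                    # subluminal whenever p > |m| - E
        W = 1.0 / np.sqrt(1.0 - v * v)     # Lorentz factor
        eps = (E + p * (1.0 - W * W) - D * W) / (D * W)  # internal energy
        return (gamma - 1.0) * (D / W) * eps - p         # EOS mismatch

    p = max(abs(m) - E, 0.0) + 1e-8        # initial guess keeping |v| < 1
    for _ in range(max_iter):
        f = residual(p)
        h = 1e-7 * max(p, 1.0)
        p_next = p - f * h / (residual(p + h) - f)
        if abs(p_next - p) <= tol * max(abs(p), 1.0):
            p = p_next
            break
        p = p_next
    v = m / (E + p)
    return D * np.sqrt(1.0 - v * v), v, p
```

A round trip (primitives to conservatives and back) recovers the original state to high accuracy for well-behaved inputs; the paper's contribution is precisely to guarantee such convergence while preserving the physical constraints in all cases.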

Related content

This work proposes four novel hybrid quadrature schemes for the efficient and accurate evaluation of weakly singular boundary integrals (1/r kernel) on arbitrary smooth surfaces. Such integrals appear in boundary element analysis for several partial differential equations, including the Stokes equation for viscous flow and the Helmholtz equation for acoustics. The proposed quadrature schemes apply a Duffy transform-based quadrature rule to surface elements containing the singularity and classical Gaussian quadrature to the remaining elements. Two of the four schemes additionally provide a special treatment for elements near the singularity, where refined Gaussian quadrature and a new moment-fitting quadrature rule are used. The hybrid quadrature schemes are systematically studied on flat B-spline patches and on NURBS spheres, considering two different sphere discretizations: an exact single-patch sphere with degenerate control points at the poles and an approximate discretization that consists of six patches with regular elements. The efficiency of the quadrature schemes is further demonstrated in boundary element analysis for Stokes flow, where steady problems with rotating and translating curved objects are investigated in convergence studies for both mesh and quadrature refinement. Much higher convergence rates are observed for the proposed new schemes in comparison to classical schemes.
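The effect of the Duffy transform on a 1/r kernel can be seen on a flat reference triangle with the singularity at a vertex (the paper's schemes operate on curved NURBS elements; this is a minimal planar sketch). The map x = u, y = u*v has Jacobian u, which exactly cancels the 1/r singularity, so plain Gauss-Legendre quadrature in (u, v) converges rapidly.

```python
import numpy as np

def duffy_quadrature(f, n=12):
    """Integrate f(x, y) over the reference triangle (0,0)-(1,0)-(1,1),
    allowing a 1/r singularity at the origin.  The Duffy map x = u,
    y = u*v has Jacobian u, which cancels the singularity, so a plain
    tensor-product Gauss-Legendre rule in (u, v) converges rapidly."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    u = 0.5 * (nodes + 1.0)            # map [-1, 1] -> [0, 1]
    w = 0.5 * weights
    total = 0.0
    for ui, wi in zip(u, w):
        for vj, wj in zip(u, w):
            total += wi * wj * ui * f(ui, ui * vj)   # factor ui = Jacobian
    return total

# For the 1/r kernel the exact value is arcsinh(1) = ln(1 + sqrt(2)).
approx = duffy_quadrature(lambda x, y: 1.0 / np.hypot(x, y))
```

After the substitution the integrand for the 1/r kernel reduces to 1/sqrt(1 + v^2), which is analytic on the unit square, explaining the fast convergence.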

The abundance of observed data in recent years has increased the number of statistical augmentations to complex models across science and engineering. By augmentation we mean coherent statistical methods that incorporate measurements upon arrival and adjust the model accordingly. However, in this research area methodological developments tend to be central, with important assessments of model fidelity often taking second place. Recently, the statistical finite element method (statFEM) has been posited as a potential solution to the problem of model misspecification when the data are believed to be generated from an underlying partial differential equation system. Bayesian nonlinear filtering permits data-driven finite element discretised solutions that are updated to give a posterior distribution that quantifies the uncertainty over model solutions. The statFEM has shown great promise in systems subject to mild misspecification, but its ability to handle scenarios of severe model misspecification has not yet been demonstrated. In this paper we fill this gap, studying statFEM in the context of the shallow water equations, chosen for their oceanographic relevance. By deliberately misspecifying the governing equations, via linearisation, viscosity, and bathymetry, we systematically analyse misspecification through studying how the resultant approximate posterior distribution is affected, under additional regimes of decreasing spatiotemporal observational frequency. Results show that statFEM performs well with reasonable accuracy, as measured by theoretically sound proper scoring rules.
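The statistical core of a statFEM update is standard Gaussian conditioning of a prior over the discretised solution on noisy point observations. The sketch below shows only this conditioning step; the full method also constructs the prior mean and covariance from a (possibly misspecified) finite element model, which is not reproduced here, and all names and numbers are illustrative.

```python
import numpy as np

def statfem_update(mu, Sigma, H, y, sigma_obs):
    """Condition a Gaussian prior over a discretised PDE solution,
    u ~ N(mu, Sigma), on noisy point observations y = H u + e with
    e ~ N(0, sigma_obs^2 I).  This standard Gaussian conditioning is the
    statistical core of a statFEM update; building mu and Sigma from a
    finite element discretisation is not reproduced in this sketch."""
    S = H @ Sigma @ H.T + sigma_obs ** 2 * np.eye(len(y))  # innovation cov
    K = Sigma @ H.T @ np.linalg.inv(S)                     # gain
    mu_post = mu + K @ (y - H @ mu)
    Sigma_post = Sigma - K @ H @ Sigma
    return mu_post, Sigma_post
```

With accurate observations the posterior mean is pulled toward the data and the posterior variance at observed nodes collapses, which is the mechanism that lets the method compensate for model misspecification.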

In this paper, we provide a novel framework for the analysis of the generalization error of first-order optimization algorithms for statistical learning when the gradient can only be accessed through partial observations given by an oracle. Our analysis relies on the regularity of the gradient w.r.t. the data samples, and allows us to derive near-matching upper and lower bounds for the generalization error of multiple learning problems, including supervised learning, transfer learning, robust learning, distributed learning, and communication-efficient learning using gradient quantization. These results hold for smooth and strongly-convex optimization problems, as well as smooth non-convex optimization problems verifying a Polyak-Lojasiewicz assumption. In particular, our upper and lower bounds depend on a novel quantity that extends the notion of conditional standard deviation, and is a measure of the extent to which the gradient can be approximated by having access to the oracle. As a consequence, our analysis provides a precise meaning to the intuition that optimization of the statistical learning objective is as hard as the estimation of its gradient. Finally, we show that, in the case of standard supervised learning, mini-batch gradient descent with increasing batch sizes and a warm start can reach a generalization error that is optimal up to a multiplicative factor, thus motivating the use of this optimization scheme in practical applications.
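The optimization scheme highlighted at the end, mini-batch gradient descent with increasing batch sizes and a warm start, can be sketched on a least-squares toy problem. The step size, batch schedule, and problem sizes below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 4096
w_star = rng.normal(size=d)                 # ground-truth weights
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)   # noisy labels

def minibatch_gd(w0, lr=0.1, batch0=8):
    """Mini-batch gradient descent on least squares with geometrically
    increasing batch sizes: each stage doubles the batch and warm-starts
    from the previous iterate.  A schematic toy mirroring the scheme the
    abstract advocates, with illustrative hyperparameters."""
    w = np.array(w0, float)
    batch = batch0
    while batch <= n:
        for _ in range(n // batch):
            idx = rng.integers(0, n, size=batch)          # draw mini-batch
            g = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / batch
            w -= lr * g
        batch *= 2                                        # warm start kept
    return w

w_hat = minibatch_gd(np.zeros(d))
```

Early stages take many cheap, noisy steps to cover most of the distance; later stages use large batches, hence low-variance gradients, to polish the iterate, which is the intuition behind the near-optimal generalization guarantee.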

Individuals' socio-demographic and economic characteristics crucially shape the spread of an epidemic, largely determining the level of exposure to the virus and the severity of the disease for those who get infected. While the complex interplay between individual characteristics and epidemic dynamics is widely recognized, traditional mathematical models often overlook these factors. In this study, we examine two important aspects of human behavior relevant to epidemics: contact patterns and vaccination uptake. Using data collected during the Covid-19 pandemic in Hungary, we first identify the dimensions along which individuals exhibit the greatest variation in their contact patterns and vaccination attitudes. We find that more privileged groups of the population generally have a higher number of contacts and a higher vaccination uptake than disadvantaged groups. Subsequently, we propose a data-driven epidemiological model that incorporates these behavioral differences. Finally, we apply our model to analyze the fourth wave of Covid-19 in Hungary, providing valuable insights into real-world scenarios. By bridging the gap between individual characteristics and epidemic spread, our research contributes to a more comprehensive understanding of disease dynamics and informs effective public health strategies.
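A minimal way to encode group-dependent contacts and vaccination in a compartmental model is a multi-group SIR system with a contact matrix and group-specific coverage. The sketch below is a schematic stand-in for the paper's data-driven model, whose contact and uptake parameters are estimated from Hungarian survey data; all numbers here are illustrative.

```python
import numpy as np

def two_group_sir(beta, C, gamma, vacc, S0, I0, days, dt=0.1):
    """Forward-Euler two-group SIR model in which the groups differ in
    contact rates (matrix C) and pre-epidemic vaccination coverage (vacc).
    A schematic stand-in for the data-driven model of the paper; all
    parameters are illustrative, not estimated values."""
    vacc = np.asarray(vacc, float)
    S = np.asarray(S0, float) * (1.0 - vacc)   # vaccinated start immune
    R = np.asarray(S0, float) * vacc
    I = np.asarray(I0, float)
    N = S + I + R
    for _ in range(int(days / dt)):
        foi = beta * (C @ (I / N))             # group-wise force of infection
        new_inf = foi * S * dt
        rec = gamma * I * dt
        S -= new_inf
        I += new_inf - rec
        R += rec
    return S, I, R
```

Because infections only move individuals between compartments, the total population is conserved by the scheme, a useful sanity check for any such implementation.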

Directed acyclic graphs (DAGs) are directed graphs in which there is no path from a vertex to itself. DAGs are an omnipresent data structure in computer science, and the problems of counting the DAGs with a given number of vertices and of sampling them uniformly at random were solved in the 1970s and the 2000s, respectively. In this paper, we propose to explore a new variation of this model where DAGs are endowed with an independent ordering of the out-edges of each vertex, thus allowing us to model a wide range of existing data structures. We provide efficient algorithms for sampling objects of this new class, both with and without control on the number of edges, and obtain an asymptotic equivalent of their number. We also show the applicability of our method by providing an effective algorithm for the random generation of classical labelled DAGs with a prescribed number of vertices and edges, based on a similar approach. This is the first known algorithm for sampling labelled DAGs with full control on the number of edges, and it meets a need, already acknowledged in the literature, in terms of applications.
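The classical counting result from the 1970s that the abstract refers to is an inclusion-exclusion recurrence over the set of source vertices (vertices of in-degree zero). The sketch below computes these counts (OEIS A003024); it covers only the plain labelled-DAG counting problem, not the new ordered-out-edge variant introduced in the paper.

```python
from math import comb

def count_labelled_dags(n_max):
    """Numbers of labelled DAGs on 0..n_max vertices (OEIS A003024) via
    the classical recurrence: inclusion-exclusion over the nonempty set of
    k sources, each of which may point to any subset of the remaining
    n - k vertices,
        a_n = sum_{k=1}^{n} (-1)^(k+1) * C(n, k) * 2^(k*(n-k)) * a_{n-k}.
    """
    a = [1]
    for n in range(1, n_max + 1):
        a.append(sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * a[n - k]
                     for k in range(1, n + 1)))
    return a
```

Counting recurrences of this shape also underpin recursive-method samplers: the summands give the probabilities with which a uniform sampler should choose the number of sources.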

In this paper, we consider the estimation of a low Tucker rank tensor from a number of noisy linear measurements. The general problem covers many specific examples arising from applications, including tensor regression, tensor completion, and tensor PCA/SVD. We consider an efficient Riemannian Gauss-Newton (RGN) method for low Tucker rank tensor estimation. Different from the generic (super)linear convergence guarantee of RGN in the literature, we prove the first local quadratic convergence guarantee of RGN for low-rank tensor estimation in the noisy setting under some regularity conditions and provide the corresponding estimation error upper bounds. A deterministic estimation error lower bound matching the upper bound is provided, demonstrating the statistical optimality of RGN. The merit of RGN is illustrated through two machine learning applications: tensor regression and tensor SVD. Finally, we provide simulation results to corroborate our theoretical findings.
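The quadratic convergence phenomenon is easiest to see in the Euclidean special case: for a zero-residual nonlinear least-squares problem, plain Gauss-Newton converges quadratically. The sketch below shows that Euclidean analogue on a tiny square system; the Riemannian retraction and Tucker tensor structure of the paper's method are not reproduced.

```python
import numpy as np

def gauss_newton(r, J, x0, iters=10):
    """Plain Euclidean Gauss-Newton, x <- x - (J^T J)^{-1} J^T r(x).  In
    the zero-residual regime it converges quadratically, which is the
    behaviour the paper establishes for its Riemannian variant on the
    Tucker manifold; this toy omits all Riemannian geometry."""
    x = np.array(x0, float)
    for _ in range(iters):
        Jx = J(x)
        x = x - np.linalg.solve(Jx.T @ Jx, Jx.T @ r(x))
    return x
```

For a square system Gauss-Newton reduces to Newton's method, so a handful of iterations from a nearby start reach machine precision.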

The origins of proof-theoretic semantics lie in the question of what constitutes the meaning of the logical connectives and its response: the rules of inference that govern the use of the connective. However, what if we go a step further and ask about the meaning of a proof as a whole? In this paper we address this question and lay out a framework to distinguish sense and denotation of proofs. Two questions are central here. First of all, if we have two (syntactically) different derivations, does this always lead to a difference, firstly, in sense, and secondly, in denotation? The other question is about the relation between different kinds of proof systems (here: natural deduction vs. sequent calculi) with respect to this distinction. Do the different forms of representing a proof necessarily correspond to a difference in how the inferential steps are given? In our framework it will be possible to identify denotation as well as sense of proofs not only within one proof system but also between different kinds of proof systems. Thus, we give an account to distinguish a mere syntactic divergence from a divergence in meaning and a divergence in meaning from a divergence of proof objects analogous to Frege's distinction for singular terms and sentences.

In this paper, we consider a new approach for semi-discretization in time and spatial discretization of a class of semi-linear stochastic partial differential equations (SPDEs) with multiplicative noise. The drift term of the SPDEs is only assumed to satisfy a one-sided Lipschitz condition and the diffusion term is assumed to be globally Lipschitz continuous. Our new strategy for time discretization is based on the Milstein method from stochastic differential equations. We use the energy method for its error analysis and show a strong convergence order of nearly $1$ for the approximate solution. The proof is based on new H\"older continuity estimates of the SPDE solution and the nonlinear term. For a general polynomial-type drift term, there are difficulties in deriving even the stability of the numerical solutions. We propose an interpolation-based finite element method for spatial discretization to overcome these difficulties. We then obtain $H^1$ stability, higher-moment $H^1$ stability, $L^2$ stability, and higher-moment $L^2$ stability results using numerical and stochastic techniques. The nearly optimal convergence orders in time and space are hence obtained by coupling all previous results. Numerical experiments are presented to demonstrate the proposed numerical scheme and to validate the theoretical results.
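The Milstein method that underlies the time discretization is easiest to state for a scalar SDE. The sketch below applies it to geometric Brownian motion, for which an exact solution driven by the same Brownian path is available, so the strong (pathwise) error can be inspected; the paper itself treats multiplicative-noise SPDEs, not this scalar toy.

```python
import numpy as np

def milstein_gbm(x0, a, b, T, n_steps, rng):
    """Milstein scheme for geometric Brownian motion dX = a X dt + b X dW,
    the SDE time-stepping that motivates the paper's SPDE discretisation.
    Returns the Milstein endpoint and the exact endpoint driven by the
    same Brownian increments."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)
    x, w = x0, 0.0
    for k in range(n_steps):
        # Milstein correction 0.5 * b * b'(x) * (dW^2 - dt), with b(x) = b*x
        x += a * x * dt + b * x * dW[k] + 0.5 * b * b * x * (dW[k] ** 2 - dt)
        w += dW[k]
    exact = x0 * np.exp((a - 0.5 * b * b) * T + b * w)
    return x, exact
```

The (dW^2 - dt) correction term is what lifts the strong order from 1/2 (Euler-Maruyama) to 1, matching the nearly order-$1$ strong convergence established in the paper.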

Time is a crucial factor in modelling dynamic behaviours of intelligent agents: activities have a determined temporal duration in a real-world environment, and previous actions influence agents' behaviour. In this paper, we propose a language for modelling concurrent interaction between agents that also allows the specification of temporal intervals in which particular actions occur. Such a language exploits a timed version of Abstract Argumentation Frameworks to realise a shared memory used by the agents to communicate and reason about the acceptability of their beliefs with respect to a given time interval. An interleaving model on a single processor is used for basic computation steps, with maximum parallelism for time elapsing. Following this approach, only one of the enabled agents is executed at each moment. To demonstrate the capabilities of the language, we also show how it can be used to model interactions such as debates and dialogue games taking place between intelligent agents. Lastly, we present an implementation of the language that can be accessed via a web interface. Under consideration in Theory and Practice of Logic Programming (TPLP).
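The acceptability reasoning that the shared memory performs can be sketched with the untimed core computation: the grounded extension of an abstract argumentation framework. The timed frameworks of the paper additionally attach availability intervals to arguments; restricting the arguments and attacks to those active at a given instant reduces acceptability at that instant to the computation below (a minimal sketch, not the paper's language).

```python
def grounded_extension(args, attacks):
    """Grounded extension of an abstract argumentation framework (args,
    attacks) by least-fixpoint iteration of the characteristic function:
    an argument is accepted once every one of its attackers is itself
    attacked by an already-accepted argument."""
    accepted = set()
    while True:
        new = {a for a in args
               if all(any((d, b) in attacks for d in accepted)
                      for (b, t) in attacks if t == a)}
        if new == accepted:
            return accepted
        accepted = new
```

Starting from the empty set, the iteration first admits unattacked arguments and then everything they defend, so it always terminates with the unique grounded extension.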

We present an efficient matrix-free geometric multigrid method for the elastic Helmholtz equation, together with a suitable discretization. Many discretization methods have been considered in the literature for the Helmholtz equation, as well as many solvers and preconditioners, some of which are adapted to the elastic version of the equation. However, there is very little work considering the interplay between the discretization and the solver. In this work, we aim to bridge this gap. By choosing an appropriate stencil for re-discretization of the equation on the coarse grid, we develop a multigrid method that can be easily implemented matrix-free, relying on stencils rather than sparse matrices. This is crucial for efficient implementation on modern hardware. Using two-grid local Fourier analysis, we validate the compatibility of our discretization with our solver, and tune the choice of stencil weights for which the convergence rate of the multigrid cycle is optimal. The result is a scalable multigrid preconditioner that can tackle large real-world 3D scenarios.
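The stencil-based, matrix-free structure described above can be illustrated with a two-grid cycle for the 1D Poisson equation, where the operator is applied via its [-1, 2, -1]/h^2 stencil and the coarse grid uses a re-discretised stencil. This is a 1D Poisson stand-in for the paper's 3D elastic Helmholtz setting, with illustrative smoother and grid sizes.

```python
import numpy as np

def two_grid_poisson(n_fine=63, sweeps=2, cycles=20):
    """Matrix-free two-grid solver for -u'' = f on (0,1) with zero
    Dirichlet data: damped-Jacobi smoothing, full-weighting restriction,
    linear interpolation, and re-discretisation of the same stencil on the
    coarse grid (a tiny matrix is assembled only for the exact coarse
    solve).  Toy analogue of the paper's stencil-based multigrid."""
    h = 1.0 / (n_fine + 1)
    f = np.ones(n_fine)
    u = np.zeros(n_fine)

    def apply_A(v, h):                      # stencil application, no matrix
        av = 2.0 * v
        av[:-1] -= v[1:]
        av[1:] -= v[:-1]
        return av / h ** 2

    def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
        for _ in range(sweeps):
            u = u + omega * (h ** 2 / 2.0) * (f - apply_A(u, h))
        return u

    for _ in range(cycles):
        u = jacobi(u, f, h, sweeps)                          # pre-smooth
        r = f - apply_A(u, h)
        rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])  # full weighting
        nc = rc.size                                         # coarse solve
        Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1)
              - np.eye(nc, k=-1)) / (2.0 * h) ** 2           # re-discretised
        ec = np.linalg.solve(Ac, rc)
        ecp = np.concatenate(([0.0], ec, [0.0]))             # zero boundary
        e = np.zeros(n_fine)
        e[1::2] = ec                                         # coarse points
        e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])                 # interpolation
        u = u + e
        u = jacobi(u, f, h, sweeps)                          # post-smooth
    return u, np.linalg.norm(f - apply_A(u, h))
```

All fine-grid work happens through `apply_A`, i.e. through the stencil alone; only the small coarse system is ever assembled, which is the property that makes such cycles attractive on modern hardware.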
