
Accurate evaluation of nearly singular integrals plays an important role in many numerical methods based on boundary integral equations. In this paper, we propose a variant of the singularity swapping method to accurately evaluate layer potentials for arbitrarily close targets. Our method is based on the global trapezoidal rule and trigonometric interpolation, resulting in an explicit quadrature formula, and achieves spectral accuracy for nearly singular integrals on closed analytic curves. To extract the singularity from the complexified distance function, an efficient root-finding method based on contour integration is proposed. Through a change of variables, we also extend the quadrature method to integrals on piecewise analytic curves. Numerical examples for Laplace's and Helmholtz equations show that high-order accuracy can be achieved for arbitrarily close field evaluation.
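The root-finding ingredient can be illustrated with a generic contour-integration sketch (an illustrative assumption, not the paper's specific algorithm): for an analytic function with a single simple zero inside a circle, the argument principle expresses the zero itself as a contour integral, and the periodic trapezoidal rule evaluates that integral to spectral accuracy.

```python
import numpy as np

def root_by_contour(f, fp, center, radius, n=256):
    """Locate a single simple zero z0 of analytic f inside a circular contour
    via the argument principle: z0 = (1/(2*pi*i)) * integral of z*f'(z)/f(z).
    The trapezoidal rule on the periodic contour converges spectrally."""
    theta = 2.0 * np.pi * np.arange(n) / n
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta)          # dz/dtheta
    vals = z * fp(z) / f(z) * dz
    return np.sum(vals) * (2.0 * np.pi / n) / (2.0j * np.pi)

# Example: the zero of z^2 - 2 near 1.5 (i.e. sqrt(2)).
z0 = root_by_contour(lambda z: z**2 - 2, lambda z: 2 * z, center=1.5, radius=0.5)
```

The same quadrature that drives the nearly singular evaluation (the trapezoidal rule on a closed analytic curve) is what makes this root finder spectrally accurate.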

Related content

This paper presents an efficient algorithm for the sequential positioning, also called nested dissection, of two planes in an arbitrary polyhedron. Two planar interfaces are positioned such that the first plane truncates a given volume from this arbitrary polyhedron and the next plane truncates a second given volume from the residual polyhedron. This is a relevant task in the numerical simulation of three-phase flows when resorting to the geometric Volume-of-Fluid (VoF) method with a Piecewise Linear Interface Calculation (PLIC). An efficient algorithm for this task significantly speeds up the three-phase PLIC algorithm. The present study describes a method based on a recursive application of the Gaussian divergence theorem, where the fact that the truncated polyhedron shares multiple faces with the original polyhedron can be exploited to reduce the computational effort. A careful choice of the coordinate system origin for the volume computation allows for successive positioning of two planes without reestablishing polyhedron connectivity. Combined with a highly efficient root-finding procedure, this results in a significant performance gain in the reconstruction of the three-phase interface configurations. The performance of the new method is assessed in a series of carefully designed numerical experiments. Compared to a conventional decomposition-based approach, the number of iterations and, thus, of the required truncations was reduced by up to an order of magnitude. The PLIC positioning run-time was reduced by about 90% in our reference implementation. Integrated into the multi-phase flow solver Free Surface 3D (FS3D), an overall performance gain of about 20% was achieved. Allowing for simple integration into existing numerical schemes, the proposed algorithm is self-contained (for an example Fortran module, see //doi.org/10.18419/darus-2488) and requires no external decomposition libraries.
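The core PLIC positioning task can be sketched in a simplified 2D analog (an illustration under stated assumptions, not the paper's 3D algorithm): clip a convex polygon against a half-plane, measure the clipped area with the shoelace formula (the 2D instance of the divergence theorem), and iterate on the plane offset until the truncated area matches the prescribed value.

```python
import numpy as np

def clip_halfplane(poly, n, d):
    # Sutherland-Hodgman clip of a convex polygon (list of 2D vertices)
    # against the half-plane {x : n.x <= d}.
    out, m = [], len(poly)
    for i in range(m):
        p, q = poly[i], poly[(i + 1) % m]
        sp, sq = np.dot(n, p) - d, np.dot(n, q) - d
        if sp <= 0:
            out.append(p)
        if sp * sq < 0:                       # edge crosses the plane
            t = sp / (sp - sq)
            out.append(p + t * (q - p))
    return out

def area(poly):
    # Shoelace formula: 2D instance of the Gaussian divergence theorem.
    a, m = 0.0, len(poly)
    for i in range(m):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % m]
        a += x0 * y1 - x1 * y0
    return 0.5 * abs(a)

def position_plane(poly, n, target):
    # Bisection on the offset d so the truncated area equals the target
    # (the clipped area is monotone in d, so bisection always converges).
    ds = [np.dot(n, p) for p in poly]
    lo, hi = min(ds), max(ds)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area(clip_halfplane(poly, n, mid)) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

square = [np.array(v, float) for v in [(0, 0), (1, 0), (1, 1), (0, 1)]]
d = position_plane(square, np.array([1.0, 0.0]), target=0.25)
```

The paper's contribution lies in making the 3D version of this loop cheap: reusing shared faces, avoiding connectivity rebuilds, and replacing plain bisection with a much more efficient root-finding scheme.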

We consider isogeometric discretizations of the Poisson model problem, focusing on high polynomial degrees and strong hierarchical refinements. We derive a posteriori error estimates by equilibrated fluxes, i.e., vector-valued mapped piecewise polynomials lying in the $\boldsymbol{H}({\rm div})$ space which appropriately approximate the desired divergence constraint. Our estimates are constant-free in the leading term, locally efficient, and robust with respect to the polynomial degree. They are also robust with respect to the number of hanging nodes arising in adaptive mesh refinement employing hierarchical B-splines. Two partitions of unity are designed, one with larger supports corresponding to the mapped splines, and one with small supports corresponding to mapped piecewise multilinear finite element hat basis functions. The equilibration is only performed on the small supports, avoiding the higher computational price of equilibration on the large supports or even the solution of a global system. Thus, the derived estimates are also as inexpensive as possible. An abstract framework for such a setting is developed, whose application to a specific situation only requires the verification of a few clearly identified assumptions. Numerical experiments illustrate the theoretical developments.
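For orientation, the constant-free leading term in equilibrated-flux estimates of this kind typically rests on a Prager-Synge-type identity; a minimal sketch for the model problem $-\Delta u = f$, with a conforming approximation $u_h$ and any flux $\sigma_h \in \boldsymbol{H}({\rm div})$ satisfying $\nabla \cdot \sigma_h = f$, reads:

```latex
\|\nabla(u - u_h)\|^2 + \|\nabla u + \sigma_h\|^2 = \|\nabla u_h + \sigma_h\|^2
\quad\Longrightarrow\quad
\|\nabla(u - u_h)\| \le \|\nabla u_h + \sigma_h\|.
```

The flux misfit $\|\nabla u_h + \sigma_h\|$ is thus a guaranteed upper bound with constant one; the hierarchical B-spline setting changes how $\sigma_h$ is constructed (locally, on the small supports), not this underlying identity.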

Statistical analysis of extremes can be used to predict the probability of future extreme events, such as large rainfalls or devastating windstorms. The quality of these forecasts can be measured through scoring rules. Locally scale-invariant scoring rules put equal importance on the forecasts at different locations regardless of differences in the prediction uncertainty. This can be an unnecessarily strict requirement when mostly concerned with extremes. We propose the concept of local tail-scale invariance, describing scoring rules fulfilling local scale invariance for large events. Moreover, a new version of the weighted Continuous Ranked Probability Score (wCRPS) called the scaled wCRPS (swCRPS) that possesses this property is developed and studied. We show that the score is a suitable alternative to the wCRPS for scoring extreme value models over areas with varying scales of extreme events, and we derive explicit formulas for the score for the Generalised Extreme Value distribution. The scoring rules are compared through simulation, and their usage is illustrated in the modelling of extreme water levels in the Great Lakes and annual maximum rainfalls in the Northeastern United States.
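As background for the score being modified, the plain (unweighted, unscaled) CRPS has a standard sample-based estimator; the sketch below shows that estimator on a toy point-mass forecast, and is not the paper's swCRPS.

```python
import numpy as np

def crps_samples(samples, y):
    # Monte Carlo CRPS estimator: CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|,
    # with X, X' independent draws from the forecast distribution F.
    x = np.asarray(samples, float)
    return np.mean(np.abs(x - y)) - 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))

# A point-mass forecast at 1.0 scored against an observation of 3.0
# reduces to the absolute error |1 - 3| = 2.
score = crps_samples([1.0] * 5, 3.0)
```

The weighted and scaled variants studied in the paper modify this basic integral with a weight on the tail region and a local scale normalization, respectively.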

The AI community is increasingly focused on merging logic with deep learning to create Neuro-Symbolic (NeSy) paradigms and assist neural approaches with symbolic knowledge. A significant trend in the literature involves integrating axioms and facts in loss functions by grounding logical symbols with neural networks and operators with fuzzy semantics. Logic Tensor Networks (LTN) is one of the main representatives in this category, known for its simplicity, efficiency, and versatility. However, it has been previously shown that not all fuzzy operators perform equally when applied in a differentiable setting. Researchers have proposed several configurations of operators, trading off between effectiveness, numerical stability, and generalization to different formulas. This paper presents a configuration of fuzzy operators for grounding formulas end-to-end in logarithm space. Our goal is to develop a configuration that is more effective than previous proposals, able to handle any formula, and numerically stable. To achieve this, we propose semantics that are best suited for the logarithm space and introduce novel simplifications and improvements that are crucial for optimization via gradient descent. We use LTN as the framework for our experiments, but the conclusions of our work apply to any similar NeSy framework. Our findings, both formal and empirical, show that the proposed configuration outperforms the state-of-the-art and that each of our modifications is essential in achieving these results.
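To make the log-space idea concrete, here is a generic sketch (not the paper's specific configuration) of the product t-norm and its probabilistic-sum dual evaluated entirely on log-truth-values, the representation in which gradients stay numerically stable:

```python
import numpy as np

def log1mexp(x):
    # Numerically stable log(1 - exp(x)) for x < 0, using the standard
    # two-branch trick to avoid catastrophic cancellation.
    return np.where(x < -np.log(2.0),
                    np.log1p(-np.exp(x)),
                    np.log(-np.expm1(x)))

def log_and(la, lb):
    # Product t-norm in log space: log(a * b) = log a + log b.
    return la + lb

def log_or(la, lb):
    # Probabilistic sum a + b - a*b, computed without leaving log space:
    # log(1 - (1 - a)(1 - b)).
    return log1mexp(log1mexp(la) + log1mexp(lb))
```

For truth values a = 0.3 and b = 0.4 this yields log(0.12) for the conjunction and log(0.58) for the disjunction, with no underflow even for long conjunctions of small truth values.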

In clinical follow-up studies with a time-to-event endpoint, the difference in the restricted mean survival time (RMST) is a suitable substitute for the hazard ratio (HR). However, the RMST only measures the survival of patients over a period of time from the baseline and cannot reflect changes in life expectancy over time. Based on the RMST, we study the conditional restricted mean survival time (cRMST) by estimating life expectancy in the future according to the time that patients have survived, reflecting the dynamic survival status of patients during follow-up. In this paper, we introduce an estimation method for the cRMST based on pseudo-observations, construct test statistics based on the difference in the cRMST (cRMSTd), and establish a robust dynamic prediction model using the landmark method. Simulation studies are employed to evaluate the statistical properties of these methods, which are also applied to two real examples. The simulation results show that the estimation of the cRMST is accurate and the cRMSTd test performs well. In addition, the dynamic RMST model has high accuracy in coefficient estimation and better predictive performance than the static RMST model. The hypothesis test proposed in this paper has a wide range of applicability, and the dynamic RMST model can predict patients' life expectancy from any prediction time, accounting for time-dependent covariates and time-varying covariate effects.
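The baseline quantity here, the RMST, is simply the area under the survival curve up to a horizon tau. A minimal sketch (assuming distinct event times and right censoring; not the paper's pseudo-observation estimator) combines the Kaplan-Meier estimator with that integral:

```python
import numpy as np

def km_rmst(times, events, tau):
    # Kaplan-Meier survival curve and restricted mean survival time:
    # RMST(tau) = integral_0^tau S(t) dt, the area under the step curve.
    order = np.argsort(times)
    times = np.asarray(times, float)[order]
    events = np.asarray(events)[order]         # 1 = event, 0 = censored
    s, prev_t, rmst = 1.0, 0.0, 0.0
    at_risk = len(times)
    for t, e in zip(times, events):
        if t > tau:
            break
        rmst += s * (t - prev_t)               # area of the step up to t
        if e:
            s *= 1.0 - 1.0 / at_risk           # KM multiplicative update
        at_risk -= 1
        prev_t = t
    rmst += s * (tau - prev_t)                 # remaining area up to tau
    return rmst

# Four subjects, all events at t = 1, 2, 3, 4; RMST up to tau = 4 is 2.5.
rmst = km_rmst([1, 2, 3, 4], [1, 1, 1, 1], tau=4.0)
```

The cRMST studied in the paper conditions this quantity on survival up to a landmark time, so it updates as follow-up accrues.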

An implicit variable-step BDF2 scheme is established for solving the space fractional Cahn-Hilliard equation, involving the fractional Laplacian, derived from a gradient flow in the negative order Sobolev space $H^{-\alpha}$, $\alpha\in(0,1)$. The Fourier pseudo-spectral method is applied for the spatial approximation. The proposed scheme inherits the energy dissipation law in the form of a modified discrete energy under a sufficient restriction on the time-step ratios. The convergence of the fully discrete scheme is rigorously established using a newly proven discrete embedding-type convolution inequality for the fractional Laplacian. Besides, the mass conservation and the unique solvability are also theoretically guaranteed. Numerical experiments are carried out to demonstrate the accuracy and the energy dissipation for various interface widths. In particular, the multiple-time-scale evolution of the solution is captured by an adaptive time-stepping strategy in the short-to-long time simulation.
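The variable-step BDF2 formula itself is standard and worth displaying; with step ratio r = tau_n / tau_{n-1} it reads (1+2r)/(1+r) y_n - (1+r) y_{n-1} + r^2/(1+r) y_{n-2} = tau_n f(y_n). A minimal sketch on the scalar test equation y' = lambda*y (an assumption for illustration; the paper applies the scheme to the fractional Cahn-Hilliard equation) is:

```python
import numpy as np

def bdf2_variable(lam, t):
    # Variable-step BDF2 for y' = lam * y with y(t[0]) = 1.
    # With r = tau_n / tau_{n-1} the implicit step is
    #   (1+2r)/(1+r)*y_n - (1+r)*y_{n-1} + r^2/(1+r)*y_{n-2} = tau_n*lam*y_n,
    # which for a linear problem can be solved for y_n in closed form.
    y = np.empty(len(t))
    y[0] = 1.0
    tau1 = t[1] - t[0]
    y[1] = y[0] / (1.0 - tau1 * lam)           # backward Euler startup
    for n in range(2, len(t)):
        tau = t[n] - t[n - 1]
        r = tau / (t[n - 1] - t[n - 2])
        a0 = (1.0 + 2.0 * r) / (1.0 + r)
        rhs = (1.0 + r) * y[n - 1] - r**2 / (1.0 + r) * y[n - 2]
        y[n] = rhs / (a0 - tau * lam)
    return y

# Mildly non-uniform grid on [0, 1]; exact solution is exp(-1) at t = 1.
t = np.cumsum(np.r_[0.0, 0.004 + 0.002 * np.sin(np.arange(200))])
t = t / t[-1]
y = bdf2_variable(-1.0, t)
```

Setting r = 1 recovers the familiar constant-step coefficients 3/2, -2, 1/2; the paper's energy-dissipation analysis hinges on bounding the ratios r.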

The effect of higher-order continuity in the solution field, obtained by using NURBS basis functions in isogeometric analysis (IGA), is investigated for an efficient mixed finite element formulation for elastostatic beams. The formulation is based on the Hu-Washizu variational principle and accounts for geometrical and material nonlinearities. Here we present a reduced degree of basis functions for the additional fields of the stress resultants and strains of the beam, which are allowed to be discontinuous across elements. This approach turns out to significantly improve the computational efficiency and the accuracy of the results. We consider a beam formulation with extensible directors, where cross-sectional strains are enriched to avoid Poisson locking by an enhanced assumed strain method. In numerical examples, we show the superior per degree-of-freedom accuracy of IGA over conventional finite element analysis, due to the higher-order continuity in the displacement field. We further verify the efficient rotational coupling between beams, as well as the path-independence of the results.
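The higher-order continuity exploited here comes from the B-spline basis underlying NURBS: a degree-p basis is C^(p-1) across interior knots, unlike C^0 finite element hats. A minimal sketch of the basis via the Cox-de Boor recursion (with an illustrative open knot vector, unrelated to the paper's meshes):

```python
def bspline_basis(i, p, knots, x):
    # Cox-de Boor recursion for the i-th B-spline basis function of degree p.
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + p] > knots[i]:                      # guard repeated knots
        val += (x - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, knots, x)
    if knots[i + p + 1] > knots[i + 1]:
        val += (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
               * bspline_basis(i + 1, p - 1, knots, x)
    return val

# Open knot vector for degree 2: five basis functions on [0, 3],
# C^1 across the interior knots 1 and 2, forming a partition of unity.
knots = [0, 0, 0, 1, 2, 3, 3, 3]
vals = [bspline_basis(i, 2, knots, 1.5) for i in range(len(knots) - 3)]
```

Repeating an interior knot lowers the continuity there, which is exactly the mechanism the mixed formulation uses to allow discontinuous stress-resultant and strain fields across elements.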

The equilibrium configuration of a plasma in an axially symmetric reactor is described mathematically by a free boundary problem associated with the celebrated Grad--Shafranov equation. The presence of uncertainty in the model parameters introduces the need to quantify the variability in the predictions. This is often done by computing a large number of model solutions on a computational grid for an ensemble of parameter values and then obtaining estimates for the statistical properties of solutions. In this study, we explore the savings that can be obtained using multilevel Monte Carlo methods, which reduce costs by performing the bulk of the computations on a sequence of spatial grids that are coarser than the one that would typically be used for a simple Monte Carlo simulation. We examine this approach using both a set of uniformly refined grids and a set of adaptively refined grids guided by a discrete error estimator. Numerical experiments show that multilevel methods dramatically reduce the cost of simulation, with cost reductions typically by factors of 60 or more and possibly as large as 200. Adaptive gridding results in more accurate computation of geometric quantities such as x-points associated with the model.
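The multilevel Monte Carlo mechanism can be sketched on a toy problem (an illustrative stand-in, not the Grad-Shafranov solver): the fine-level expectation is written as a coarse-level expectation plus a telescoping sum of level corrections, and each correction uses the same random input on both of its levels so that its variance, and hence the number of expensive fine-grid samples, is small.

```python
import numpy as np

def level_payoff(x0, l):
    # Level-l "simulation": explicit Euler for x' = -x on [0, 1] with
    # n = 2**(l+1) steps has the closed form x0 * (1 - 1/n)**n.
    n = 2 ** (l + 1)
    return x0 * (1.0 - 1.0 / n) ** n

def mlmc(L, N, rng):
    # Telescoping MLMC estimator:
    #   E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}],
    # with each correction sampled using the SAME random input x0 on both
    # levels, so its variance shrinks as the levels get finer.
    x0 = rng.uniform(0.0, 1.0, N)
    est = np.mean(level_payoff(x0, 0))
    for l in range(1, L + 1):
        x0 = rng.uniform(0.0, 1.0, N)
        est += np.mean(level_payoff(x0, l) - level_payoff(x0, l - 1))
    return est

rng = np.random.default_rng(0)
estimate = mlmc(L=6, N=100_000, rng=rng)   # target: E[x(1)] = 0.5 * exp(-1)
```

In practice the sample count N is chosen per level from the measured variances, which is where the bulk of the cost savings over single-level Monte Carlo comes from.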

The estimation of unknown parameters in simulations, also known as calibration, is crucial for practical management of epidemics and prediction of pandemic risk. A simple yet widely used approach is to estimate the parameters by minimizing the sum of the squared distances between actual observations and simulation outputs. It is shown in this paper that this method is inefficient, particularly when the epidemic models are developed based on certain simplifications of reality, also known as imperfect models, which are commonly used in practice. To address this issue, a new estimator is introduced that is asymptotically consistent, has a smaller estimation variance than the least squares estimator, and achieves the semiparametric efficiency. Numerical studies are performed to examine the finite sample performance. The proposed method is applied to the analysis of the COVID-19 pandemic for 20 countries based on the SEIR (Susceptible-Exposed-Infectious-Recovered) model with both deterministic and stochastic simulations. The estimation of the parameters, including the basic reproduction number and the average incubation period, reveals the risk of disease outbreaks in each country and provides insights into the design of public health interventions.
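The least-squares calibration baseline that the paper critiques can be sketched on a toy one-parameter growth model (an illustrative assumption, not the SEIR model or the paper's proposed estimator): simulate the model over a parameter grid and keep the value minimizing the sum of squared distances to the observations.

```python
import numpy as np

def least_squares_calibrate(t, y, r_grid):
    # Least-squares calibration: choose the parameter whose simulated
    # trajectory exp(r * t) minimizes the sum of squared distances to data.
    sse = [np.sum((np.exp(r * t) - y) ** 2) for r in r_grid]
    return float(r_grid[int(np.argmin(sse))])

# Synthetic observations from exponential growth with rate 0.3 plus noise.
t = np.arange(10, dtype=float)
rng = np.random.default_rng(1)
y = np.exp(0.3 * t) + 0.05 * rng.standard_normal(10)
r_hat = least_squares_calibrate(t, y, np.linspace(0.0, 1.0, 1001))
```

The paper's point is that when the simulator is an imperfect model of the data-generating process, this estimator is statistically inefficient, and a semiparametrically efficient alternative has strictly smaller variance.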

Within the next decade the Laser Interferometer Space Antenna (LISA) is due to be launched, providing the opportunity to extract physics from stellar objects and systems, such as \textit{Extreme Mass Ratio Inspirals} (EMRIs), that are otherwise undetectable to ground-based interferometers and Pulsar Timing Arrays (PTAs). Unlike sources detected by currently available observational methods, these sources can \textit{only} be simulated using an accurate computation of the gravitational self-force. Whereas the field has seen outstanding progress in the frequency domain, metric reconstruction and self-force calculations are still an open challenge in the time domain. Such computations would not only further corroborate frequency-domain calculations and models, but also allow for fully self-consistent evolution of the orbit under the effect of the self-force. Given that we have \textit{a priori} information about the local structure of the discontinuity at the particle, we show how to construct discontinuous spatial and temporal discretisations by operating on discontinuous Lagrange and Hermite interpolation formulae, and hence recover higher-order accuracy. In this work we demonstrate how this technique, in conjunction with a well-suited gauge choice (hyperboloidal slicing) and numerical methods (discontinuous collocation with time-symmetric integration), provides a relatively simple method-of-lines numerical algorithm for the problem. This is the first of a series of papers studying the behaviour of a point particle in circular geodesic motion in Schwarzschild spacetime in the \textit{time domain}. We describe the numerical machinery necessary for these computations and show that our approach is not only capable of highly accurate radiation flux measurements but is also suitable for evaluating the field and its derivatives at the particle limit.
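The discontinuous-interpolation idea can be sketched in one dimension (a simplified illustration under stated assumptions, not the paper's scheme): if the jumps in a function and its derivatives at a known point x0 are available \textit{a priori}, the nodal data on the far side of x0 can be shifted by the jump Taylor series, after which ordinary Lagrange interpolation recovers its full order of accuracy across the discontinuity.

```python
import math
import numpy as np

def lagrange_eval(xs, fs, x):
    # Standard Lagrange interpolation through the nodes (xs, fs).
    p = 0.0
    for i in range(len(xs)):
        li = 1.0
        for j in range(len(xs)):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        p += fs[i] * li
    return p

def discontinuous_lagrange(xs, fs, x, x0, jumps):
    # Nodes on the opposite side of x0 from the evaluation point x are
    # shifted by the known jump Taylor series
    #   sum_m [f^(m)] * (xi - x0)**m / m!,
    # making the corrected data smooth before interpolating.
    side = 1.0 if x > x0 else -1.0
    g = list(fs)
    for i, xi in enumerate(xs):
        if (xi > x0) != (x > x0):
            corr = sum(J * (xi - x0) ** m / math.factorial(m)
                       for m, J in enumerate(jumps))
            g[i] += side * corr
    return lagrange_eval(xs, g, x)

# Toy example: sin(x) plus a known jump of 1 in value and 1 in slope at x0.
x0, jumps = 0.45, [1.0, 1.0]
f = lambda x: np.sin(x) + (1.0 + (x - x0) if x > x0 else 0.0)
xs = [0.1, 0.3, 0.5, 0.7, 0.9]
fs = [f(xi) for xi in xs]
val = discontinuous_lagrange(xs, fs, 0.6, x0, jumps)
```

For a point-particle source the jumps follow from the field equations, which is what makes this correction available \textit{a priori} in the self-force setting.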
