
This work presents GALAEXI as a novel, energy-efficient flow solver for the simulation of compressible flows on unstructured meshes, leveraging the parallel computing power of modern Graphics Processing Units (GPUs). GALAEXI implements the high-order Discontinuous Galerkin Spectral Element Method (DGSEM) and uses shock capturing with a finite-volume subcell approach to ensure the stability of the high-order scheme near shocks. This work provides details on the general code design, the parallelization strategy, and the implementation approach for the compute kernels, with a focus on the element-local mappings between volume and surface data that arise on unstructured meshes. GALAEXI exhibits excellent strong scaling properties up to 1024 GPUs if each GPU is assigned a minimum of one million degrees of freedom. To verify the implementation, a convergence study is performed that recovers the theoretical order of convergence of the implemented numerical schemes. Moreover, the solver is validated using both the incompressible and compressible formulations of the Taylor-Green vortex at Mach numbers of 0.1 and 1.25, respectively. A mesh convergence study shows that the results converge to the high-fidelity reference solution and match the original CPU implementation. Finally, GALAEXI is applied to a large-scale wall-resolved large eddy simulation of a linear cascade of the NASA Rotor 37. Here, the supersonic region and shocks at the leading edge are captured accurately and robustly by the implemented shock-capturing approach. It is demonstrated that GALAEXI requires less than half the energy of the reference CPU implementation to carry out this simulation. This renders GALAEXI a potent tool for accurate and efficient simulations of compressible flows in the realm of exascale computing and the associated new HPC architectures.
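The abstract highlights the element-local mappings between volume and surface data as a kernel-design focus. The sketch below illustrates, in plain NumPy, the kind of volume-to-surface gather a nodal DGSEM code performs per hexahedral element face; the array layout, names, and the use of simple slices (rather than the orientation-aware index maps an unstructured mesh requires) are illustrative assumptions, not GALAEXI's actual implementation.

```python
import numpy as np

# Schematic of the volume-to-surface gather in a DGSEM-style code.
# N, n_elems, n_var and the face labels are illustrative choices.
N, n_elems, n_var = 3, 16, 5
u = np.random.rand(n_elems, N + 1, N + 1, N + 1, n_var)   # volume DOFs per element

def face_dofs(u_elem, face):
    """Extract the (N+1)^2 surface nodes of one hexahedral element face.
    On an unstructured mesh, a precomputed per-face orientation/index map
    replaces the simple reference-element slices used here."""
    slices = {
        "xi_minus":   u_elem[0, :, :],  "xi_plus":   u_elem[N, :, :],
        "eta_minus":  u_elem[:, 0, :],  "eta_plus":  u_elem[:, N, :],
        "zeta_minus": u_elem[:, :, 0],  "zeta_plus": u_elem[:, :, N],
    }
    return slices[face]

surf = face_dofs(u[0], "xi_plus")   # one (N+1, N+1, n_var) surface patch
print(surf.shape)
```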

Related Content

We propose a method to generate statistically representative synthetic data from a given dataset. The main goal of our method is for the created dataset to mimic the between-feature correlations present in the original data, while also offering a tunable parameter to influence the privacy level. In particular, our method constructs a statistical map by using the empirical conditional distributions between the features of the original dataset. We describe in detail the algorithms used both to construct the statistical map and to generate synthetic observations from it. This approach is tested in three different ways: with a hand-calculated example; a manufactured dataset; and a real-world energy-related dataset of consumption/production of households on Madeira Island. We assess our method's performance by comparing the datasets using the Pearson correlation matrix. The proposed methodology is general in the sense that it does not rely on the particular test dataset used, and we expect it to be applicable in a much broader context than indicated here.
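As a rough illustration of the chained empirical-conditional idea, the sketch below fits per-feature conditional pools binned on the previous feature and samples from them. The binning scheme, the choice of a single conditioning feature, and the use of the bin count as the tunable coarseness knob are simplifying assumptions, not the paper's actual statistical map or privacy parameter.

```python
import numpy as np

def fit_conditional_map(X, n_bins=10):
    """Crude stand-in for a statistical map: quantile bin edges per feature.
    Fewer bins = coarser conditionals, loosely mimicking a privacy knob."""
    edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
             for j in range(X.shape[1])]
    return X, edges

def sample_synthetic(model, n_samples, rng=np.random.default_rng(0)):
    X, edges = model
    out = np.empty((n_samples, X.shape[1]))
    out[:, 0] = rng.choice(X[:, 0], size=n_samples)       # marginal of feature 0
    for j in range(1, X.shape[1]):
        nb = len(edges[j - 1]) - 2
        bins = np.clip(np.digitize(out[:, j - 1], edges[j - 1]) - 1, 0, nb)
        src = np.clip(np.digitize(X[:, j - 1], edges[j - 1]) - 1, 0, nb)
        for b in np.unique(bins):
            pool = X[src == b, j]                         # empirical conditional pool
            idx = bins == b
            out[idx, j] = rng.choice(pool, size=idx.sum())
    return out

X = np.random.default_rng(1).multivariate_normal([0, 0], [[1, .8], [.8, 1]], 2000)
X_syn = sample_synthetic(fit_conditional_map(X), 2000)
print(np.corrcoef(X.T)[0, 1], np.corrcoef(X_syn.T)[0, 1])  # correlations should be close
```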

The doubly type-II censored scheme is an important sampling scheme in life-testing experiments and reliability engineering. In the present communication, we have considered estimating the ordered scale parameters of two exponential distributions based on doubly type-II censored samples. For this estimation problem, we have considered a general scale-invariant loss function. We have obtained several estimators using the technique of Stein (1964) that improve upon the best affine equivariant estimator (BAEE). We have also obtained estimators that improve upon the restricted MLE. A class of improved estimators has been derived using Kubokawa's IERD approach, and it is shown that the boundary estimator of this class is generalized Bayes. As an application, we have also obtained improved estimators with respect to three special loss functions, namely the quadratic, entropy, and symmetric loss functions. We have applied these results to special life-testing sampling schemes.
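As a toy illustration of why exploiting the known order can help, the simulation below compares unrestricted scale MLEs with a simple isotonized (pooled) version under quadratic loss. It uses complete samples and equal-weight pooling, not the paper's doubly type-II censored setting or its Stein/IERD-based estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma1, sigma2 = 1.0, 1.5          # true ordered scales, sigma1 <= sigma2
n, reps = 20, 20000

x1 = rng.exponential(sigma1, (reps, n))
x2 = rng.exponential(sigma2, (reps, n))
s1, s2 = x1.mean(axis=1), x2.mean(axis=1)    # unrestricted MLEs

# Isotonized estimates: pool the two means whenever the unrestricted
# estimates violate the known order sigma1 <= sigma2.
pooled = (s1 + s2) / 2
r1 = np.where(s1 <= s2, s1, pooled)
r2 = np.where(s1 <= s2, s2, pooled)

def risk(est, true):                          # quadratic loss, a special
    return np.mean((est / true - 1) ** 2)     # case of scale-invariant loss

print(risk(s1, sigma1), risk(r1, sigma1))     # restricted risk typically no larger
print(risk(s2, sigma2), risk(r2, sigma2))
```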

Stochastic gradient descent (SGD) is a workhorse algorithm for solving large-scale optimization problems in data science and machine learning. Understanding the convergence of SGD is hence of fundamental importance. In this work we examine the convergence of SGD (with various step sizes) when applied to unconstrained convex quadratic programming (essentially least-squares (LS) problems), and in particular analyze the error components with respect to the eigenvectors of the Hessian. The main message is that the convergence depends largely on the corresponding eigenvalues (singular values of the coefficient matrix in the LS context): the components for the large singular values converge faster in the initial phase. We then show there is a phase transition in the convergence, where the convergence speed of the components, especially those corresponding to the larger singular values, decreases. Finally, we show that the convergence rate of the overall error (in the solution) tends to decay as more iterations are run; that is, the initial convergence is faster than the asymptotic one.
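A minimal sketch of the phenomenon, assuming a consistent least-squares problem with a prescribed singular-value spread (all names and parameter choices are illustrative): running row-sampled SGD and projecting the error onto the right singular vectors shows the component for the largest singular value shrinking far faster than the one for the smallest.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 500, 10
# Build A = U diag(svals) V^T with singular values from 1 down to 0.01.
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
svals = np.logspace(0, -2, n)
A = (U * svals) @ V.T
x_star = rng.standard_normal(n)
b = A @ x_star                                 # consistent system: SGD noise vanishes at x_star

x = np.zeros(n)
eta = 0.1                                      # fixed step size (illustrative)
for it in range(2000):
    i = rng.integers(m)                        # sample one row of 0.5*||Ax-b||^2
    a_i = A[i]
    x -= eta * m * (a_i @ x - b[i]) * a_i      # factor m makes the expected step the full gradient
    if it % 500 == 0:
        comp = V.T @ (x - x_star)              # error along right singular vectors
        print(it, abs(comp[0]), abs(comp[-1])) # large-sigma component shrinks first
```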

Generative diffusion models and many stochastic models in science and engineering naturally live in infinite dimensions before discretisation. To incorporate observed data for statistical and learning tasks, one needs to condition on observations. While recent work has treated conditioning linear processes in infinite dimensions, conditioning non-linear processes in infinite dimensions has not been explored. This paper conditions function-valued stochastic processes without prior discretisation. To do so, we use an infinite-dimensional version of Girsanov's theorem to condition a function-valued stochastic process, leading to a stochastic differential equation (SDE) for the conditioned process that involves the score. We apply this technique to perform time-series analysis of the shapes of organisms in evolutionary biology, where we discretise via the Fourier basis and then learn the coefficients of the score function with score-matching methods.
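The sketch below conveys the flavour in the simplest linear setting, where the conditional (bridge) drift, i.e. the score term, is known in closed form: a function-valued Brownian motion, truncated to a cosine basis, is conditioned to hit a target function at time T. In the non-linear setting the paper addresses, this drift is not available and must be learned by score matching; the basis, truncation level, and step sizes here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T, n_steps = 16, 1.0, 400
dt = T / n_steps

s = np.linspace(0, 2 * np.pi, 128, endpoint=False)
target = np.exp(-((s - np.pi) ** 2))            # shape to condition on
basis = np.cos(np.outer(np.arange(K), s))       # truncated cosine "Fourier" basis
v = basis @ target / len(s)                     # target coefficients

x = np.zeros(K)                                 # start from the zero function
for k in range(n_steps - 1):
    t = k * dt
    drift = (v - x) / (T - t)                   # closed-form bridge/score drift, per coefficient
    x = x + drift * dt + np.sqrt(dt) * rng.standard_normal(K)

# Endpoint coefficients approach the target (up to O(sqrt(dt)) endpoint noise).
print(np.max(np.abs(x - v)))
```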

This study examines the effect that different feature selection methods have on models created with XGBoost, a popular machine learning algorithm with strong regularization methods. It shows that three different ways of reducing the dimensionality of the features produce no statistically significant change in the prediction accuracy of the model. This suggests that the traditional idea of removing noisy training features to keep models from overfitting may not apply to XGBoost, although such reduction may still be viable as a way to lower computational cost.
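A sketch of how such a comparison might be set up, using scikit-learn pipelines around XGBoost on synthetic data; the dataset, the two reduction methods shown, and all hyperparameters are stand-ins for the study's actual experimental design.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=50,
                           n_informative=10, random_state=0)
model = lambda: XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")

candidates = {
    "all features":    make_pipeline(model()),
    "SelectKBest(20)": make_pipeline(SelectKBest(f_classif, k=20), model()),
    "PCA(20)":         make_pipeline(PCA(n_components=20), model()),
}
# With XGBoost's regularization, accuracies often end up statistically close.
for name, pipe in candidates.items():
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```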

We first present a simple recursive algorithm that generates cyclic rotation Gray codes for stamp foldings and semi-meanders, where consecutive strings differ by a stamp rotation. These are the first known Gray codes for stamp foldings and semi-meanders, and we thus solve an open problem posed by Sawada and Li in [Electron. J. Comb. 19(2), 2012]. We then introduce an iterative algorithm that generates the same rotation Gray codes for stamp foldings and semi-meanders. The recursive and iterative algorithms generate stamp foldings and semi-meanders in constant amortized time and $O(n)$ amortized time per string, respectively, using a linear amount of memory.
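For context, a stamp folding of n stamps can be identified with a stack order (a permutation) in which no two folds on the same side of the strip cross. The brute-force checker below enumerates foldings from that non-crossing characterization; it recovers the counts 1, 2, 6, 16, 50 (OEIS A000136) but is only a definition check, not the paper's Gray-code or constant-amortized-time algorithms.

```python
from itertools import permutations

def is_stamp_folding(stack):
    """stack[k] = label of the stamp at level k (top to bottom).
    Folds between stamps i and i+1 alternate sides of the strip; two folds
    on the same side must be nested or disjoint, never interleaved."""
    n = len(stack)
    pos = {stamp: level for level, stamp in enumerate(stack)}
    folds = [(min(pos[i], pos[i + 1]), max(pos[i], pos[i + 1]), i % 2)
             for i in range(1, n)]                     # (lo, hi, side)
    for a in range(len(folds)):
        for b in range(a + 1, len(folds)):
            (lo1, hi1, s1), (lo2, hi2, s2) = folds[a], folds[b]
            if s1 == s2 and (lo1 < lo2 < hi1 < hi2 or lo2 < lo1 < hi2 < hi1):
                return False                           # interleaved folds cross
    return True

for n in range(1, 6):
    count = sum(is_stamp_folding(p) for p in permutations(range(1, n + 1)))
    print(n, count)   # expected: 1, 2, 6, 16, 50
```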

In this paper, we propose high-order numerical methods to solve a 2D advection-diffusion equation in the highly oscillatory regime. We use an integrator strategy that allows the construction of arbitrarily high-order schemes, leading to an accurate approximation of the solution without any time step-size restriction. This paper focuses on the multiscale challenges in time of the problem, which come from the velocity, an $\varepsilon$-periodic function whose expression is explicitly known. $\varepsilon$-uniform third-order-in-time numerical approximations are obtained. For the space discretization, this strategy is combined with high-order finite difference schemes. Numerical experiments show that the proposed methods achieve the expected order of accuracy, which is validated by several tests across diverse domains and boundary conditions. The novelty of the paper consists in introducing a numerical scheme that is high-order accurate in space and time, with particular attention to the dependency on a small parameter in the time scale. The high order in space is obtained by enlarging the interpolation stencil already established in [44], and further refined in [46], with a special emphasis on the boundary of the square domain, especially when a Dirichlet condition is assigned. In that case, we compute an \textit{ad hoc} Taylor expansion of the solution to ensure that there is no degradation of the accuracy order at the boundary. On the other hand, the high accuracy in time is obtained by extending the work proposed in [19]. The combination of high-order accuracy in both space and time is particularly significant due to the presence of two small parameters, $\delta$ in space and $\varepsilon$ in time.
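As a simplified 1D analogue (the paper treats the 2D problem with $\varepsilon$-uniform integrators), the sketch below solves $u_t + a(t/\varepsilon)\,u_x = \delta\,u_{xx}$ on a periodic domain with fourth-order central differences in space and classical RK4 in time. Unlike the paper's schemes, this naive integrator is not uniform in $\varepsilon$: its step size must resolve the fast scale.

```python
import numpy as np

eps, delta = 1e-2, 1e-3
a = lambda t: 1.0 + 0.5 * np.cos(2 * np.pi * t / eps)   # eps-periodic velocity

N = 128
x = np.linspace(0, 1, N, endpoint=False)
dx = 1.0 / N
u = np.sin(2 * np.pi * x)

def rhs(t, u):
    # Fourth-order central differences for u_x and u_xx on the periodic grid.
    ux  = (np.roll(u, 2) - 8*np.roll(u, 1) + 8*np.roll(u, -1) - np.roll(u, -2)) / (12*dx)
    uxx = (-np.roll(u, 2) + 16*np.roll(u, 1) - 30*u
           + 16*np.roll(u, -1) - np.roll(u, -2)) / (12*dx**2)
    return -a(t) * ux + delta * uxx

t, dt = 0.0, eps / 50                                   # dt must resolve the fast scale here
while t < 0.1:
    k1 = rhs(t, u);          k2 = rhs(t + dt/2, u + dt/2*k1)
    k3 = rhs(t + dt/2, u + dt/2*k2); k4 = rhs(t + dt, u + dt*k3)
    u += dt/6 * (k1 + 2*k2 + 2*k3 + k4); t += dt
print(float(u.max()))                                   # amplitude slightly damped by diffusion
```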

Macroscopic surface shapes, such as bumps and dents, as well as microscopic surface features, like texture, can be identified solely through lateral resistive force cues when a stylus moves across them. This perceptual phenomenon has been utilized to advance tactile presentation techniques for surface tactile displays. However, the effects on shape recognition when microscopic textures and macroscopic shapes coexist have not been thoroughly investigated. This study reveals that macroscopic surface shapes can be recognized independently of the presence of microscopic textures. These findings enhance our understanding of human perceptual properties and contribute to the development of tactile displays.

In the present work, strong approximation errors are analyzed for both the spatial semi-discretization and the spatio-temporal full discretization of stochastic wave equations (SWEs) with cubic polynomial nonlinearities and additive noise. The full discretization is achieved by the standard Galerkin finite element method in space and a novel exponential time integrator combined with the averaged vector field approach. The newly proposed scheme is proved to exactly satisfy a trace formula based on an energy functional. Recovering the convergence rates of the scheme, however, meets essential difficulties due to the lack of a global monotonicity condition. To overcome this issue, we derive the exponential integrability property of the considered numerical approximations via the energy functional. Armed with these properties, we obtain the strong convergence rates of the approximations in both the spatial and temporal directions. Finally, numerical results are presented to verify the theoretical findings.
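To convey what a trace formula asserts, the toy below checks the linear analogue: truncated to K Fourier modes, a stochastic wave equation becomes independent oscillators, an exponential integrator rotates each mode exactly, and the expected energy grows linearly at a rate set by the noise trace. The frequencies, noise weights, and Monte Carlo sizes are arbitrary, and the sketch omits the cubic nonlinearity that the paper handles via the averaged vector field approach.

```python
import numpy as np

# Each mode k: dx_k = v_k dt, dv_k = -om_k^2 x_k dt + sig_k dW_k, so the
# expected energy H = 0.5*sum(v^2 + (om*x)^2) grows at rate 0.5*sum(sig^2).
rng = np.random.default_rng(0)
K, h, n_steps, n_paths = 32, 1e-2, 500, 2000
om = np.arange(1, K + 1).astype(float)          # mode frequencies
sig = 1.0 / om                                  # trace-class noise: sum(sig^2) finite

x = np.zeros((n_paths, K)); v = np.zeros((n_paths, K))
c, s = np.cos(om * h), np.sin(om * h)
for _ in range(n_steps):
    x, v = c * x + (s / om) * v, -om * s * x + c * v   # exact free rotation per step
    v += sig * np.sqrt(h) * rng.standard_normal((n_paths, K))  # additive noise kick

H = 0.5 * (v ** 2 + (om * x) ** 2).sum(axis=1).mean()
T = n_steps * h
print(H, 0.5 * (sig ** 2).sum() * T)            # Monte Carlo energy vs trace formula
```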

Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks, including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored. Furthermore, there is a lack of systematic evaluation across diverse domains. In this work, we propose pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective. In PEGASUS, important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. We evaluated our best PEGASUS model on 12 downstream summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills. Experiments demonstrate that it achieves state-of-the-art performance on all 12 downstream datasets, as measured by ROUGE scores. Our model also shows surprising performance on low-resource summarization, surpassing previous state-of-the-art results on 6 datasets with only 1000 examples. Finally, we validated our results using human evaluation and show that our model summaries achieve human performance on multiple datasets.
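The sketch below illustrates the gap-sentence masking idea behind this objective: score sentences by how much they overlap with the rest of the document, mask the top ones, and use them as the generation target. The mask token, the plain word-overlap score (PEGASUS selects sentences by ROUGE), and the regex sentence splitter are simplifications, not the paper's preprocessing.

```python
import re

def gap_sentence_mask(document, mask_ratio=0.3):
    """Simplified gap-sentence generation (GSG) preprocessing.
    Returns (masked input, target sequence of removed sentences)."""
    sents = [s for s in re.split(r"(?<=[.!?])\s+", document.strip()) if s]

    def score(i):  # word-overlap proxy for the paper's ROUGE-based importance
        rest = set(" ".join(sents[:i] + sents[i + 1:]).lower().split())
        return len(set(sents[i].lower().split()) & rest)

    k = max(1, int(len(sents) * mask_ratio))
    picked = sorted(sorted(range(len(sents)), key=score, reverse=True)[:k])
    masked = " ".join("<mask_1>" if i in picked else s for i, s in enumerate(sents))
    target = " ".join(sents[i] for i in picked)
    return masked, target

doc = ("PEGASUS pre-trains on gap sentences. Important sentences are masked. "
       "The model must generate them from the remaining text.")
print(gap_sentence_mask(doc))
```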
