
In high performance computing environments, we observe an ongoing increase in the available number of cores. This development calls for re-emphasizing performance (scalability) analysis and speedup laws as suggested in the literature (e.g., Amdahl's law and Gustafson's law), with a focus on asymptotic performance. Understanding the speedup and efficiency issues of algorithmic parallelism is useful for several purposes, including the optimization of system operations, temporal predictions on the execution of a program, and the analysis of asymptotic properties and the determination of speedup bounds. However, the literature is fragmented and shows a large diversity and heterogeneity of speedup models and laws. This makes it challenging to obtain an overview of the models and their relationships, to identify the determinants of performance in a given algorithmic and computational context, and, finally, to determine the applicability of performance models and laws to a particular parallel computing setting. In this work, we provide a generic speedup (and thus also efficiency) model for homogeneous computing environments. Our approach generalizes many prominent models suggested in the literature and allows us to show that they can be considered special cases of a unifying approach. The genericity of the unifying speedup model is achieved through parameterization. Considering combinations of parameter ranges, we identify six different asymptotic speedup cases and eight different asymptotic efficiency cases. Jointly applying these speedup and efficiency cases, we derive eleven scalability cases, from which we build a scalability typology. Researchers can draw upon our typology to classify their speedup model and to determine the asymptotic behavior when the number of parallel processing units increases. In addition, our results may be used to address various extensions of our setting.
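For readers who want a concrete handle on the two classical laws mentioned above, the following minimal Python sketch evaluates Amdahl's and Gustafson's speedups and the corresponding efficiencies as the number of processing units grows. It illustrates the kind of asymptotic behavior the abstract refers to, but it is not the paper's unified, parameterized model, and the 5% serial fraction is an assumed example value.

```python
# Illustrative sketch (not the paper's unified model): the two classical
# speedup laws mentioned above, and their asymptotic behaviour as the
# number of processing units p grows.

def amdahl_speedup(p: int, serial_fraction: float) -> float:
    """Amdahl's law: fixed problem size, serial fraction s."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / p)

def gustafson_speedup(p: int, serial_fraction: float) -> float:
    """Gustafson's law: problem size scaled with the number of units p."""
    s = serial_fraction
    return s + (1.0 - s) * p

if __name__ == "__main__":
    s = 0.05  # assumed 5% serial fraction
    for p in (1, 16, 256, 4096):
        amdahl = amdahl_speedup(p, s)
        gustafson = gustafson_speedup(p, s)
        print(f"p={p:5d}  Amdahl={amdahl:8.2f} (eff {amdahl / p:.3f})  "
              f"Gustafson={gustafson:9.2f} (eff {gustafson / p:.3f})")
    # Asymptotically, Amdahl speedup -> 1/s (bounded) with efficiency -> 0,
    # while Gustafson speedup grows linearly in p with efficiency -> 1 - s.
```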

Related content


The distribution of the minimum of Brownian motion or the Cauchy process is well known via the reflection principle. Here we consider the problem of finding the sample-by-sample minimum, which we call the online minimum search. We consider the golden-section search method, but we show quantitatively that the bisection method is more efficient. In the bisection method there is a hierarchical parameter, which tunes the depth to which each sub-search is conducted, somewhat similarly to how a depth-first search works to generate a topological ordering on nodes. Finally, we consider the possibility of using harmonic measure, a novel idea that has so far been unexplored.
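The reflection-principle distribution mentioned in the first sentence can be checked numerically. The sketch below is a hedged illustration under standard assumptions (standard Brownian motion on [0, T], simple Euler discretisation); it is not the online minimum search procedure proposed in the abstract.

```python
# For a <= 0, the reflection principle gives
#   P(min_{0<=s<=T} W_s <= a) = 2 * Phi(a / sqrt(T)),
# which we compare against a Monte Carlo estimate of the discretised minimum.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 2000, 20000
dt = T / n_steps

increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)
running_min = np.minimum(paths.min(axis=1), 0.0)  # include W_0 = 0

for a in (-0.5, -1.0, -1.5):
    empirical = np.mean(running_min <= a)
    exact = 2.0 * norm.cdf(a / np.sqrt(T))
    print(f"a={a:5.2f}  MC={empirical:.4f}  reflection={exact:.4f}")
```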

Quasiperiodic systems, related to irrational numbers, are space-filling structures without decay or translation invariance. Accurately recovering these systems, especially in non-smooth cases, presents a significant challenge in numerical computation. In this paper, we propose a new algorithm, the finite points recovery (FPR) method, which is applicable to both smooth and non-smooth cases, to address this challenge. The FPR method first establishes a homomorphism between the lower-dimensional definition domain of the quasiperiodic function and the higher-dimensional torus, then recovers the global quasiperiodic system by employing an interpolation technique with finite points in the definition domain, without dimensional lifting. Furthermore, we develop accurate and efficient strategies for selecting finite points according to the arithmetic properties of irrational numbers. The corresponding mathematical theory, convergence analysis, and computational complexity analysis for choosing finite points are presented. Numerical experiments demonstrate the effectiveness and superiority of the FPR approach in recovering both smooth quasiperiodic functions and piecewise constant Fibonacci quasicrystals, whereas existing spectral methods encounter difficulties in accurately recovering non-smooth quasiperiodic functions.
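To make the relationship between the lower-dimensional domain and the higher-dimensional torus concrete, the following hedged sketch shows the standard "torus lift" viewpoint for a simple one-dimensional quasiperiodic function with frequency ratio sqrt(2). The function `parent_F` and the example itself are assumptions for illustration and are not the FPR algorithm.

```python
# Minimal sketch of the torus-lift idea behind quasiperiodic functions
# (not the paper's FPR algorithm): a 1D quasiperiodic function f(x) is the
# restriction of a 2D periodic parent function F(y1, y2) along the
# irrational direction (1, sqrt(2)), i.e. f(x) = F(x, sqrt(2) * x).
import numpy as np

ALPHA = np.sqrt(2.0)  # irrational frequency ratio (assumed example)

def parent_F(y1, y2):
    """Periodic parent function on the 2-torus (period 2*pi in each variable)."""
    return np.cos(y1) + np.cos(y2)

def quasiperiodic_f(x):
    """Quasiperiodic restriction along the slope (1, ALPHA)."""
    return parent_F(x, ALPHA * x)

x = np.linspace(0.0, 50.0, 5001)
f = quasiperiodic_f(x)
# f never exactly repeats, yet it is fully determined by the smooth periodic
# parent F on the torus; recovering f from finitely many samples can thus be
# viewed as recovering F on (a subset of) the torus.
print(f[:5])
```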

We introduce two iterative methods, GPBiLQ and GPQMR, for solving unsymmetric partitioned linear systems. The basic mechanism underlying GPBiLQ and GPQMR is a novel simultaneous tridiagonalization via biorthogonality that allows for short-recurrence iterative schemes. Similar to the biconjugate gradient method, it is possible to develop another method, GPBiCG, whose iterate (if it exists) can be obtained inexpensively from the GPBiLQ iterate. Whereas the iterate of GPBiCG may not exist, the iterates of GPBiLQ and GPQMR are always well defined as long as the biorthogonal tridiagonal reduction process does not break down. We discuss connections between the proposed methods and some existing methods, and give numerical experiments to illustrate the performance of the proposed methods.

Gradient-enhanced Kriging (GE-Kriging) is a well-established surrogate modelling technique for approximating expensive computational models. However, it tends to become impractical for high-dimensional problems due to the size of the inherent correlation matrix and the associated high-dimensional hyper-parameter tuning problem. To address these issues, a new method, called sliced GE-Kriging (SGE-Kriging), is developed in this paper for reducing both the size of the correlation matrix and the number of hyper-parameters. We first split the training sample set into multiple slices and invoke Bayes' theorem to approximate the full likelihood function via a sliced likelihood function, in which multiple small correlation matrices are utilized to describe the correlation of the sample set rather than one large one. Then, we replace the original high-dimensional hyper-parameter tuning problem with a low-dimensional counterpart by learning the relationship between the hyper-parameters and the derivative-based global sensitivity indices. The performance of SGE-Kriging is finally validated by means of numerical experiments with several benchmarks and a high-dimensional aerodynamic modeling problem. The results show that the SGE-Kriging model features accuracy and robustness comparable to those of the standard model, but at much lower training cost. The benefits are most evident for high-dimensional problems with tens of variables.
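A hedged sketch of the sliced-likelihood idea is given below. The zero-mean Gaussian kernel, the slicing by index, and the concentrated log-likelihood form are assumptions made for illustration; the exact construction in SGE-Kriging (which also incorporates gradient information) may differ.

```python
# Hedged sketch of a sliced likelihood (assumed form, not the paper's exact
# construction): instead of one large n x n correlation matrix, the training
# set is split into slices and the full log-likelihood is approximated by a
# sum of small per-slice terms.
import numpy as np

def gaussian_corr(X, theta):
    """Gaussian correlation matrix R_ij = exp(-sum_k theta_k (x_ik - x_jk)^2)."""
    d2 = np.sum(theta * (X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2)

def slice_log_likelihood(X, y, theta, nugget=1e-10):
    """Concentrated zero-mean Gaussian-process log-likelihood for one slice."""
    n = len(y)
    R = gaussian_corr(X, theta) + nugget * np.eye(n)
    L = np.linalg.cholesky(R)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    sigma2 = float(y @ alpha) / n
    log_det_R = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (n * np.log(sigma2) + log_det_R)

def sliced_log_likelihood(X, y, theta, n_slices):
    """Approximate the full log-likelihood as a sum over independent slices."""
    idx = np.array_split(np.arange(len(y)), n_slices)
    return sum(slice_log_likelihood(X[i], y[i], theta) for i in idx)

# Toy usage: 200 samples in 10 dimensions, 5 slices of 40 points each,
# so each slice only needs a 40 x 40 Cholesky factorisation.
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 10))
y = np.sin(X.sum(axis=1)) + 0.01 * rng.normal(size=200)
theta = np.full(10, 2.0)
print(sliced_log_likelihood(X, y, theta, n_slices=5))
```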

This work introduces UstanceBR, a multimodal corpus in the Brazilian Portuguese Twitter domain for target-based stance prediction. The corpus comprises 86.8k labelled stances towards selected target topics, along with extensive network information about the users who published these stances on social media. In this article we describe the corpus's multimodal data and a number of usage examples in both in-domain and zero-shot stance prediction based on text- and network-related information, which are intended to provide initial baseline results for future studies in the field.

The increased demand for cyber security professionals has also driven the development of new platforms and tools that help those professionals improve their offensive skills. One of these platforms is HackTheBox, an online cyber security training platform that delivers a controlled and safe environment in which professionals can explore virtual machines in a Capture the Flag (CTF) competition style. Most of the tools used in a CTF, or even in real-world Penetration Testing (Pentest), were developed for specific purposes, so each tool usually has different input and output formats. These differing formats make it hard for cyber security professionals and CTF competitors to develop an attack graph. To help cyber security professionals and CTF competitors discover, select, and exploit an attack vector, this paper presents Shadow Blade, a tool to aid users in interacting with their attack vectors.

In many jurisdictions, forensic evidence is presented in the form of categorical statements by forensic experts. Several large-scale performance studies have been conducted that report error rates to elucidate the uncertainty associated with such categorical statements. There is growing scientific consensus that the likelihood ratio (LR) framework is the logically correct form of presentation for forensic evidence evaluation. Yet, results from the large-scale performance studies have not been cast in this framework. Here, I show how to straightforwardly calculate an LR for any given categorical statement using data from the performance studies. This number quantifies how much more we should believe the hypothesis of same source versus different source when provided with a particular expert witness statement. LRs are reported for categorical statements resulting from the analysis of latent fingerprints, bloodstain patterns, handwriting, footwear and firearms. The highest LR found for statements of identification was 376 (fingerprints); the lowest found for statements of exclusion was 1/28 (handwriting). The LRs found may be more insightful for those used to this framework than the various error rates reported previously. An additional advantage of using the LR in this way is its relative simplicity; there are no decisions necessary about which error rate to focus on or how to handle inconclusive statements. The values found are closer to 1 than many would have expected. One possible explanation for this mismatch is that we undervalue numerical LRs. Finally, a note of caution: the LR values reported here come from a simple calculation that does not do justice to the nuances of the large-scale studies and their differences from casework, and they should be treated as ball-park figures rather than definitive statements on the evidential value of entire forensic scientific fields.
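The calculation described above can be made explicit with a short example. The counts below are made up for illustration only and are not taken from any of the performance studies; the LR for a given statement is simply the rate of that statement in same-source trials divided by its rate in different-source trials.

```python
# Hedged sketch of the likelihood-ratio calculation, using hypothetical counts.
from collections import Counter

# Hypothetical outcome counts from a performance study (NOT real data).
same_source = Counter({"identification": 350, "inconclusive": 120, "exclusion": 30})
diff_source = Counter({"identification": 2, "inconclusive": 150, "exclusion": 348})

def likelihood_ratio(statement: str) -> float:
    """LR = P(statement | same source) / P(statement | different source)."""
    p_given_same = same_source[statement] / sum(same_source.values())
    p_given_diff = diff_source[statement] / sum(diff_source.values())
    return p_given_same / p_given_diff

for s in ("identification", "inconclusive", "exclusion"):
    print(f"LR for '{s}' statements: {likelihood_ratio(s):.2f}")
# An LR > 1 means the statement supports the same-source hypothesis;
# an LR < 1 (e.g. for exclusions) supports the different-source hypothesis.
```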

This research explores the reliability of deep learning, specifically Long Short-Term Memory (LSTM) networks, for estimating the Hurst parameter in fractional stochastic processes. The study focuses on three types of processes: fractional Brownian motion (fBm), the fractional Ornstein-Uhlenbeck (fOU) process, and linear fractional stable motion (lfsm). The work involves the fast generation of extensive datasets for fBm and fOU to train the LSTM network on a large volume of data in a feasible time. The study analyses the accuracy of the LSTM network's Hurst parameter estimation in terms of various performance measures such as RMSE, MAE, MRE, and quantiles of the absolute and relative errors. It finds that the LSTM outperforms traditional statistical methods for fBm and fOU processes; however, it has limited accuracy on lfsm processes. The research also examines the implications of training length and evaluation sequence length for the LSTM's performance. The methodology is applied by estimating the Hurst parameter in Li-ion battery degradation data and obtaining confidence bounds for the estimate. The study concludes that while deep learning methods show promise for parameter estimation of fractional processes, their effectiveness is contingent on the process type and the quality of the training data.
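As a hedged illustration of the overall pipeline (path generation plus LSTM regression of H), the sketch below uses an assumed small architecture and a Cholesky-based fBm generator; the paper's actual network, data volumes, and generation method may differ.

```python
# Hedged sketch (assumed architecture and training setup): generate fBm paths
# via the Cholesky factor of the fractional-Gaussian-noise covariance, then
# regress the Hurst parameter H with a small LSTM.
import numpy as np
import torch
import torch.nn as nn

def fgn_cov(n: int, H: float) -> np.ndarray:
    """Autocovariance matrix of fractional Gaussian noise with Hurst index H."""
    k = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def sample_fbm(n: int, H: float, rng) -> np.ndarray:
    """One fBm path of length n (cumulative sum of correlated Gaussian noise)."""
    L = np.linalg.cholesky(fgn_cov(n, H) + 1e-10 * np.eye(n))
    return np.cumsum(L @ rng.standard_normal(n))

class HurstLSTM(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1, :])).squeeze(-1)  # H in (0, 1)

rng = np.random.default_rng(0)
seq_len, n_train = 128, 512
H_true = rng.uniform(0.05, 0.95, size=n_train)
# Use the increments (fGn) as the network input sequence.
X = np.stack([np.diff(sample_fbm(seq_len + 1, h, rng)) for h in H_true])
X = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)
y = torch.tensor(H_true, dtype=torch.float32)

model = HurstLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(20):                        # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```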

The coupling effects in multiphysics processes are often neglected in the design of multiscale methods. The coupling may be described by a non-positive definite operator, which in turn brings significant challenges to multiscale simulations. In this paper, we develop a regularized coupling multiscale method based on the generalized multiscale finite element method (GMsFEM) to solve coupled thermomechanical problems, referred to as the coupling generalized multiscale finite element method (CGMsFEM). The method consists of defining the coupling multiscale basis functions through local regularized coupling spectral problems in each coarse-grid block, which can be implemented by a novel design of two relaxation parameters. Compared to the standard GMsFEM, the proposed method can not only accurately capture the multiscale coupling correlation effects of multiphysics problems but also greatly improve computational efficiency with fewer multiscale basis functions. In addition, the convergence analysis is established and optimal error estimates are derived, where the upper bound of the errors is independent of the magnitude of the relaxation coefficient. Several numerical examples with periodic microstructure, random microstructure, and random material coefficients are presented to validate the theoretical analysis. The numerical results show that CGMsFEM exhibits better robustness and efficiency than the uncoupled GMsFEM.

Starting from the Kirchhoff-Huygens representation and Duhamel's principle for time-domain wave equations, we propose novel butterfly-compressed Hadamard integrators for self-adjoint wave equations in both the time and frequency domains in an inhomogeneous medium. First, we incorporate the leading term of Hadamard's ansatz into the Kirchhoff-Huygens representation to develop a short-time valid propagator. Second, using the Fourier transform in time, we derive the corresponding Eulerian short-time propagator in the frequency domain; on top of this propagator, we further develop a time-frequency-time (TFT) method for the Cauchy problem of time-domain wave equations. Third, we propose the time-frequency-time-frequency (TFTF) method for the corresponding point-source Helmholtz equation, which provides Green's functions of the Helmholtz equation for all angular frequencies within a given frequency band. Fourth, to implement the TFT and TFTF methods efficiently, we introduce butterfly algorithms to compress oscillatory integral kernels at different frequencies. As a result, the proposed methods can construct wave fields beyond caustics implicitly and advance spatially overturning waves in time naturally, with quasi-optimal computational complexity and memory usage. Furthermore, once constructed, the Hadamard integrators can be employed to solve both time-domain wave equations with various initial conditions and frequency-domain wave equations with different point sources. Numerical examples for two-dimensional wave equations illustrate the accuracy and efficiency of the proposed methods.
