Fiber metal laminates (FML) are of high interest for lightweight structures as they combine the advantageous material properties of metals and fiber-reinforced polymers. However, low-velocity impacts can lead to complex internal damage, and structural health monitoring with guided ultrasonic waves (GUW) is a methodology to identify such damage. Numerical simulations form the basis for corresponding investigations, but experimental validation of dispersion diagrams over a wide frequency range is scarce in the literature. In this work, the dispersion relation of GUWs is experimentally determined for an FML made of carbon fiber-reinforced polymer and steel. For this purpose, multi-frequency excitation signals are used to generate GUWs, and the resulting wave field is measured via laser scanning vibrometry. The data are processed by means of a non-uniform discrete 2D Fourier transform and analyzed in the frequency-wavenumber domain. The experimental data are in excellent agreement with data from a numerical solution of the analytical framework. In conclusion, this work presents a highly automatable method to experimentally determine dispersion diagrams of GUWs in FML over large frequency ranges with high accuracy.
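
To make the processing step concrete, the following sketch maps a space-time wave field into the frequency-wavenumber domain with a plain 2D FFT. It assumes a uniformly sampled grid and synthetic single-mode data as stand-ins for the vibrometer measurements; the paper itself uses a non-uniform discrete 2D Fourier transform, and all grid parameters below are illustrative.

```python
import numpy as np

# Hypothetical measurement grid: wave field u(x, t) sampled along a line of
# 256 points over 1024 time samples (uniform spacing assumed; the paper uses
# a non-uniform discrete 2D Fourier transform instead).
nx, nt = 256, 1024
dx, dt = 1e-3, 1e-7            # 1 mm spatial step, 0.1 us temporal step
x = np.arange(nx) * dx
t = np.arange(nt) * dt

# Synthetic single-mode traveling wave as a stand-in for measured data.
c = 3000.0                     # assumed phase velocity in m/s
f0 = 200e3                     # assumed excitation center frequency in Hz
u = np.sin(2 * np.pi * f0 * (t[None, :] - x[:, None] / c))

# 2D FFT: space axis -> wavenumber k, time axis -> frequency f.
U = np.fft.fftshift(np.fft.fft2(u))
k = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))   # cycles per meter
f = np.fft.fftshift(np.fft.fftfreq(nt, d=dt))   # Hz

# Dispersion diagram: |U| over (k, f); the GUW modes appear as ridges.
dispersion = np.abs(U)
ki, fi = np.unravel_index(np.argmax(dispersion), dispersion.shape)
# For a real-valued field, a mirrored peak appears at (-f, -k) as well.
print(f"dominant ridge near f = {f[fi]/1e3:.0f} kHz, k = {k[ki]:.0f} 1/m")
```

With non-uniformly spaced scan points, the FFT call would be replaced by an explicit non-uniform DFT sum over the measured positions, which is what makes the method robust to irregular vibrometer grids.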

Related content

> The Metal framework supports GPU-accelerated advanced 3D graphics rendering and data-parallel computation workloads. Metal provides a modern and streamlined API for fine-grain, low-level control of the organization, processing, and submission of graphics and computation commands and the management of the associated data and resources for these commands. A primary goal of Metal is to minimize the CPU overhead necessary for executing these GPU workloads.

This paper proposes a flexible framework for inferring large-scale time-varying and time-lagged correlation networks from multivariate or high-dimensional non-stationary time series with piecewise smooth trends. Built on a novel and unified multiple-testing procedure for time-lagged cross-correlation functions with a fixed or diverging number of lags, our method can accurately recover flexible time-varying network structures associated with complex functional structures at all time points. We broaden the applicability of our method to settings with structural breaks by developing difference-based nonparametric estimators of cross-correlations, achieve accurate family-wise error control via a bootstrap-assisted procedure adaptive to the complex temporal dynamics, and enhance the probability of recovering the time-varying network structures using a new uniform variance reduction technique. We prove the asymptotic validity of the proposed method and demonstrate its effectiveness in finite samples through simulation studies and empirical applications.
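
The raw ingredient of such a network is the lagged cross-correlation between pairs of series. Below is a minimal numpy sketch using a plain estimator rather than the paper's difference-based one; the multiple-testing calibration, bootstrap procedure, and variance-reduction steps are omitted.

```python
import numpy as np

def lagged_crosscorr(x, y, max_lag):
    """Sample cross-correlation of two series for lags 0..max_lag.

    A plain estimator for illustration; the paper instead uses
    difference-based estimators robust to piecewise smooth trends.
    """
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    return np.array([np.dot(x[: n - h], y[h:]) / n for h in range(max_lag + 1)])

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
y = np.roll(x, 3) + 0.5 * rng.standard_normal(n)  # y lags x by 3 steps

r = lagged_crosscorr(x, y, max_lag=10)
print("correlations by lag:", np.round(r, 2))
print("strongest lag:", int(np.argmax(np.abs(r))))  # expect 3

# A time-varying network would repeat this on rolling windows and keep an
# edge (i, j, lag) only where the multiple-testing procedure flags the
# correlation as significant; that calibration step is omitted here.
```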

The complexity and dynamism of microservices pose significant challenges to system reliability, making automated troubleshooting crucial. Effective root cause localization after anomaly detection is essential for ensuring the reliability of microservice systems. However, two significant issues remain in existing approaches: (1) Microservices generate traces, system logs, and key performance indicators (KPIs), but existing approaches usually consider traces only, failing to understand the system fully, as traces cannot depict all anomalies; (2) Troubleshooting microservices generally contains two main phases, i.e., anomaly detection and root cause localization. Existing studies regard these two phases as independent, ignoring their close correlation. Even worse, inaccurate detection results can deeply affect localization effectiveness. To overcome these limitations, we propose Eadro, the first end-to-end framework to integrate anomaly detection and root cause localization based on multi-source data for troubleshooting large-scale microservices. The key insights behind Eadro are that anomalies manifest differently across data sources and that detection and localization are closely connected. Thus, Eadro models intra-service behaviors and inter-service dependencies from traces, logs, and KPIs, all the while leveraging the shared knowledge of the two phases via multi-task learning. Experiments on two widely-used microservice benchmarks demonstrate that Eadro outperforms state-of-the-art approaches by a large margin. The results also show the usefulness of integrating multi-source data. We also release our code and data to facilitate future research.
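
The multi-task coupling can be pictured with a schematic PyTorch sketch, which is not Eadro's actual architecture: a shared encoder over fused trace/log/KPI features feeds a detection head and a localization head, and a single joint loss lets the two phases share knowledge. All dimensions and names below are hypothetical.

```python
import torch
import torch.nn as nn

class JointTroubleshooter(nn.Module):
    """Schematic multi-task model: shared encoder, two task heads.

    Illustrative only; Eadro's real design models intra-service behaviors
    and inter-service dependencies from traces, logs, and KPIs.
    """

    def __init__(self, feat_dim, hidden, n_services):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.detect_head = nn.Linear(hidden, 2)             # anomalous or not
        self.localize_head = nn.Linear(hidden, n_services)  # culprit service

    def forward(self, x):
        z = self.encoder(x)                # shared representation
        return self.detect_head(z), self.localize_head(z)

model = JointTroubleshooter(feat_dim=64, hidden=32, n_services=10)
x = torch.randn(8, 64)                     # fused multi-source features
is_anom = torch.randint(0, 2, (8,))        # detection labels
culprit = torch.randint(0, 10, (8,))       # localization labels

det_logits, loc_logits = model(x)
# Joint loss: the shared encoder is trained by both phases at once.
loss = nn.functional.cross_entropy(det_logits, is_anom) \
     + nn.functional.cross_entropy(loc_logits, culprit)
loss.backward()
```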

Deep reinforcement learning has repeatedly succeeded in closed, well-defined domains such as games (Chess, Go, StarCraft). The next frontier is real-world scenarios, where setups are numerous and varied. For this, agents need to learn the underlying rules governing the environment, so as to robustly generalise to conditions that differ from those they were trained on. Model-based reinforcement learning algorithms, such as the highly successful MuZero, aim to accomplish this by learning a world model. However, leveraging a world model has not consistently shown greater generalisation capabilities compared to model-free alternatives. In this work, we propose improving the data efficiency and generalisation capabilities of MuZero by explicitly incorporating the symmetries of the environment in its world-model architecture. We prove that, so long as the neural networks used by MuZero are equivariant to a particular symmetry group acting on the environment, the entirety of MuZero's action-selection algorithm will also be equivariant to that group. We evaluate Equivariant MuZero on procedurally generated MiniPacman and on Chaser from the ProcGen suite: training on a set of mazes, and then testing on unseen rotated versions, demonstrating the benefits of equivariance. Further, we verify that our performance improvements hold even when only some of the components of Equivariant MuZero obey strict equivariance, which highlights the robustness of our construction.
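
The claimed property is easy to probe on a toy example. The sketch below builds a hand-made C4-equivariant policy, not Equivariant MuZero's learned networks, and checks that rotating the observation by 90 degrees cyclically permutes the action logits:

```python
import numpy as np

def features(obs):
    # Any fixed scalar feature of the grid; a weighted sum that is
    # deliberately not rotation-invariant, so the test is non-trivial.
    w = np.arange(obs.size).reshape(obs.shape)
    return float((w * obs).sum())

def toy_policy(obs):
    """C4-equivariant by construction: the logit for action k is a fixed
    feature of the observation rotated into that action's frame."""
    return np.array([features(np.rot90(obs, k)) for k in range(4)])

rng = np.random.default_rng(0)
obs = rng.standard_normal((5, 5))          # toy maze observation

logits = toy_policy(obs)
logits_rot = toy_policy(np.rot90(obs, 1))  # rotate the environment by 90 deg

# Equivariance: rotating the observation cyclically permutes the actions,
# which is exactly the property the paper proves for action selection.
assert np.allclose(logits_rot, np.roll(logits, -1))
print("policy is equivariant under 90-degree rotations")
```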

We present ATC, a C++ library for advanced Tucker-based lossy compression of dense multidimensional numerical data in a shared-memory parallel setting, based on the sequentially truncated higher-order singular value decomposition (ST-HOSVD) and bit plane truncation. Several techniques are proposed to improve speed, memory usage, error control and compression rate. First, a hybrid truncation scheme is described which combines Tucker rank truncation and TTHRESH quantization [Ballester-Ripoll et al., IEEE Trans. Visual. Comput. Graph., 2020]. We derive a novel expression to approximate the error of truncated Tucker decompositions in the case of core and factor perturbations. Furthermore, we parallelize the quantization and encoding scheme and adjust this phase to improve error control. Moreover, implementation aspects are described, such as an ST-HOSVD procedure using only a single transposition. We also discuss several usability features of ATC, including the presence of multiple interfaces, extensive data type support and integrated downsampling of the decompressed data. Numerical results show that ATC maintains state-of-the-art Tucker compression rates, while providing average speed-up factors of 2.2-3.5 and halving memory usage. Furthermore, our compressor provides precise error control, only deviating 1.4% from the requested error on average. Finally, ATC often achieves higher compression than non-Tucker-based compressors in the high-error domain.
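
The ST-HOSVD at the heart of the pipeline is compact enough to sketch in numpy. The version below performs rank truncation only; ATC's hybrid TTHRESH-style bit-plane quantization, parallel encoding, and single-transposition optimization are not shown, and the ranks and test data are illustrative.

```python
import numpy as np

def st_hosvd(tensor, ranks):
    """Sequentially truncated HOSVD: truncate one mode at a time.

    Rank-only truncation for illustration; ATC additionally quantizes and
    encodes the core and factors bit plane by bit plane.
    """
    core = tensor
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold the current core along this mode.
        moved = np.moveaxis(core, mode, 0)
        mat = moved.reshape(moved.shape[0], -1)
        # Leading left singular vectors give the factor matrix.
        u = np.linalg.svd(mat, full_matrices=False)[0][:, :r]
        factors.append(u)
        # Project immediately, so later modes work on the smaller core.
        core = np.moveaxis((u.T @ mat).reshape(r, *moved.shape[1:]), 0, mode)
    return core, factors

# Smooth synthetic data compresses well under low Tucker ranks.
x, y, z = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 30),
                      np.linspace(0, 1, 20), indexing="ij")
data = np.sin(4 * x) * np.cos(3 * y) + np.exp(-z * y)

core, factors = st_hosvd(data, ranks=(10, 8, 6))

# Reconstruct and measure the relative error of the lossy approximation.
approx = core
for mode, u in enumerate(factors):
    moved = np.moveaxis(approx, mode, 0)
    approx = np.moveaxis((u @ moved.reshape(moved.shape[0], -1))
                         .reshape(u.shape[0], *moved.shape[1:]), 0, mode)
print("relative error:", np.linalg.norm(approx - data) / np.linalg.norm(data))
```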

Motivated by a real-world application, we model and solve a complex staff scheduling problem. Tasks are to be assigned to workers for supervision. Multiple tasks can be covered in parallel by a single worker, with worker shifts being flexible within availabilities. Each worker has a different skill set, enabling them to cover different tasks. Tasks require assignment according to priority and skill requirements. The objective is to maximize the number of assigned tasks weighted by their priorities, while minimizing assignment penalties. We develop an adaptive large neighborhood search (ALNS) algorithm, relying on tailored destroy and repair operators. It is tested on benchmark instances derived from real-world data and compared to optimal results obtained by means of a commercial MIP-solver. Furthermore, we analyze the impact of considering three additional alternative objective functions. When applied to large-scale company data, the developed ALNS outperforms the previously applied solution approach.
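
The ALNS control loop itself is generic; a hypothetical Python skeleton is shown below, with roulette-wheel operator selection, adaptive weights, and simulated-annealing acceptance. The destroy and repair operators and the toy objective are placeholders, not the tailored operators from the paper.

```python
import math
import random

def alns(initial, destroy_ops, repair_ops, objective, iters=1000, seed=0):
    """Generic ALNS skeleton for a maximization problem.

    Operators are problem-specific callables; the paper uses tailored
    destroy/repair operators for the staff scheduling problem.
    """
    rng = random.Random(seed)
    best = current = initial
    weights = {op: 1.0 for op in destroy_ops + repair_ops}
    temp = 1.0
    for _ in range(iters):
        # Roulette-wheel selection proportional to adaptive weights.
        destroy = rng.choices(destroy_ops, [weights[o] for o in destroy_ops])[0]
        repair = rng.choices(repair_ops, [weights[o] for o in repair_ops])[0]
        candidate = repair(destroy(current, rng), rng)
        delta = objective(candidate) - objective(current)
        if delta > 0 or rng.random() < math.exp(delta / temp):
            current = candidate                  # accept
            # Reward operators more if they produced a new global best.
            reward = 5.0 if objective(candidate) > objective(best) else 1.0
            for op in (destroy, repair):
                weights[op] = 0.8 * weights[op] + 0.2 * reward
            if objective(candidate) > objective(best):
                best = candidate
        temp *= 0.999                            # cool down
    return best

# Toy usage: maximize the number of "assigned" slots in a 0/1 vector.
def destroy_random(sol, rng):
    s = list(sol)
    for i in rng.sample(range(len(s)), k=3):
        s[i] = 0
    return s

def repair_greedy(sol, rng):
    return [1 if rng.random() < 0.9 else v for v in sol]

best = alns([0] * 20, [destroy_random], [repair_greedy], objective=sum, iters=500)
print("assigned:", sum(best), "of", len(best))
```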

We address the weak numerical solution of stochastic differential equations driven by independent Brownian motions (SDEs for short). This paper develops a new methodology to design adaptive strategies for determining automatically the step-sizes of the numerical schemes that compute the mean values of smooth functions of the solutions of SDEs. First, we introduce a general method for constructing variable step-size weak schemes for SDEs, which is based on controlling the match between the first conditional moments of the increments of the numerical integrator and the ones corresponding to an additional weak approximation. To this end, we use certain local discrepancy functions that do not involve sampling random variables. Precise directions for designing suitable discrepancy functions and for selecting starting step-sizes are given. Second, we introduce a variable step-size Euler scheme, together with a variable step-size second order weak scheme via extrapolation. Finally, numerical simulations are presented to show the potential of the introduced variable step-size strategy and the adaptive scheme to overcome known instability problems of the conventional fixed step-size schemes in the computation of diffusion functional expectations.
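
The flavor of the approach can be conveyed with a simplified sketch: a weak Euler scheme whose step size is controlled by a deterministic discrepancy between the one-step conditional mean of the increment and that of two half steps, so that no random variables are sampled to choose the step. This is a stand-in heuristic, not the paper's discrepancy functions, and the controller constants below are arbitrary.

```python
import numpy as np

def adaptive_weak_euler(a, b, x0, t_end, tol=1e-3, h0=0.1, rng=None):
    """Weak Euler scheme for dX = a(X) dt + b(X) dW with a naive step
    controller based on a sampling-free local discrepancy."""
    rng = rng or np.random.default_rng(0)
    t, x, h = 0.0, x0, h0
    while t < t_end:
        h = min(h, t_end - t)
        # Discrepancy between conditional means of one step vs two half steps.
        mid = x + a(x) * h / 2
        disc = abs(a(x) * h - (a(x) * h / 2 + a(mid) * h / 2))
        if disc > tol:                       # reject: shrink the step
            h *= 0.5
            continue
        x += a(x) * h + b(x) * np.sqrt(h) * rng.standard_normal()
        t += h
        # Proportional controller: grow or shrink toward the tolerance.
        h *= min(2.0, max(0.5, 0.9 * np.sqrt(tol / max(disc, 1e-16))))
    return x

# Weak approximation of E[X_1] for dX = -X dt + 0.5 dW, X_0 = 1 (exact: e^-1).
vals = [adaptive_weak_euler(lambda x: -x, lambda x: 0.5, 1.0, 1.0,
                            rng=np.random.default_rng(i)) for i in range(2000)]
print("estimated E[X_1]:", np.mean(vals), "exact:", np.exp(-1))
```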

Over the past decade, predictive language modeling for code has proven to be a valuable tool for enabling new forms of automation for developers. More recently, we have seen the advent of general-purpose "large language models", based on neural transformer architectures, that have been trained on massive datasets of human-written text spanning code and natural language. However, despite the demonstrated representational power of such models, interacting with them has historically been constrained to specific task settings, limiting their general applicability. Many of these limitations were recently overcome with the introduction of ChatGPT, a language model created by OpenAI and trained to operate as a conversational agent, enabling it to answer questions and respond to a wide variety of commands from end-users. The introduction of models such as ChatGPT has already spurred fervent discussion from educators, ranging from fear that students could use these AI tools to circumvent learning, to excitement about the new types of learning opportunities that they might unlock. However, given the nascent nature of these tools, we currently lack fundamental knowledge related to how well they perform in different educational settings, and the potential promise (or danger) that they might pose to traditional forms of instruction. As such, in this paper, we examine how well ChatGPT performs when tasked with solving common questions in a popular software testing curriculum. Our findings indicate that ChatGPT can provide correct or partially correct answers in 44% of cases, provide correct or partially correct explanations of answers in 57% of cases, and that prompting the tool in a shared question context leads to a marginally higher rate of correct answers. Based on these findings, we discuss the potential promise of, and dangers related to, the use of ChatGPT by students and instructors.

The Morse-Smale complex is a well-studied topological structure that represents the gradient flow behavior between critical points of a scalar function. It supports multi-scale topological analysis and visualization of feature-rich scientific data. Several parallel algorithms have been proposed towards the fast computation of the 3D Morse-Smale complex, but its computation continues to pose significant algorithmic challenges. In particular, the non-trivial structure of the connections between the saddle critical points is not amenable to parallel computation. This paper describes a fine-grained parallel algorithm for computing the Morse-Smale complex and a GPU implementation, gMSC. The algorithm first determines saddle-saddle reachability via a transformation into a sequence of vector operations, and next computes the paths between saddles by transforming the problem into a sequence of matrix operations. Computational experiments show that the method achieves up to 8.6x speedup over pyms3d and 6x speedup over TTK, the current shared-memory implementations. The paper also presents a comprehensive experimental analysis of different steps of the algorithm and reports on their contribution towards runtime performance. Finally, it introduces a CPU-based data-parallel algorithm for simplifying the Morse-Smale complex via iterative critical point pair cancellation.
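
The reachability reformulation can be illustrated on a small graph: encode one-step gradient connections as a boolean adjacency matrix and obtain all saddle-saddle connections by repeated boolean matrix multiplication. This is a schematic of the idea only; gMSC's GPU kernels operate on the discrete gradient field, and the graph and saddle indices below are made up.

```python
import numpy as np

# Hypothetical adjacency matrix A of the directed graph of gradient paths:
# A[i, j] = True if node j is reachable from node i in one step.
rng = np.random.default_rng(1)
n = 8
A = np.triu(rng.random((n, n)) < 0.3, k=1)   # random DAG for illustration

# Transitive closure by repeated boolean matrix multiplication: after
# squaring ceil(log2(n)) times, R[i, j] says whether j is reachable from i.
R = A | np.eye(n, dtype=bool)
for _ in range(int(np.ceil(np.log2(n)))):
    R = R | (R.astype(np.uint8) @ R.astype(np.uint8)).astype(bool)

# Saddle-saddle reachability is then a sub-block of R restricted to rows
# of 2-saddles and columns of 1-saddles (indices here are invented).
saddles_2, saddles_1 = [0, 1, 2], [5, 6, 7]
print(R[np.ix_(saddles_2, saddles_1)])
```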

The potential of Model Predictive Control in buildings has been shown many times, being successfully used to achieve various goals, such as minimizing energy consumption or maximizing thermal comfort. However, mass deployment has thus far failed, in part because of the high engineering cost of obtaining and maintaining a sufficiently accurate model. This can be addressed by using adaptive data-driven approaches. The idea of using behavioral systems theory for this purpose has recently found traction in the academic community. In this study, we compare variations thereof with different amounts of data used, different regularization weights, and different methods of data selection. Autoregressive models with exogenous inputs (ARX) are used as a well-established reference. All methods are evaluated by performing iterative system identification on two long-term data sets from real occupied buildings, neither of which include artificial excitation for the purpose of system identification. We find that: (1) Sufficient prediction accuracy is achieved with all methods. (2) The ARX models perform slightly better, while having the additional advantages of fewer tuning parameters and faster computation. (3) Adaptive and non-adaptive schemes perform similarly. (4) The regularization weights of the behavioral systems theory methods show the expected trade-off characteristic with an optimal middle value. (5) Using the most recent data yields better performance than selecting data with similar weather as the day to be predicted. (6) More data improves the model performance.
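
The ARX baseline is simple enough to state in full: stack lagged outputs and inputs into a regressor matrix and solve a least-squares problem. The sketch below uses a synthetic single-input system with made-up coefficients; the study's models are fit to real building data and re-identified iteratively.

```python
import numpy as np

def fit_arx(y, u, na, nb):
    """Least-squares fit of an ARX model
    y[t] = sum_i a_i * y[t-i] + sum_j b_j * u[t-j] + e[t]."""
    start = max(na, nb)
    rows = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
            for t in range(start, len(y))]
    theta = np.linalg.lstsq(np.array(rows), y[start:], rcond=None)[0]
    return theta[:na], theta[na:]

def predict_one_step(y, u, a, b, t):
    return y[t - len(a):t][::-1] @ a + u[t - len(b):t][::-1] @ b

# Toy data: room "temperature" y driven by a heating input u plus noise.
rng = np.random.default_rng(0)
u = rng.standard_normal(600)
y = np.zeros(600)
for t in range(2, 600):
    y[t] = 0.7 * y[t - 1] + 0.2 * y[t - 2] + 0.5 * u[t - 1] \
           + 0.05 * rng.standard_normal()

a, b = fit_arx(y, u, na=2, nb=1)
print("a:", np.round(a, 2), "b:", np.round(b, 2))  # expect ~[0.7, 0.2], [0.5]
print("one-step prediction at t=500:", predict_one_step(y, u, a, b, 500))
```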

Parallel-in-time integration has been the focus of intensive research efforts over the past two decades due to the advent of massively parallel computer architectures and the scaling limits of purely spatial parallelization. Various iterative parallel-in-time (PinT) algorithms have been proposed, like Parareal, PFASST, MGRIT, and Space-Time Multi-Grid (STMG). These methods have been described using different notations, and the convergence estimates that are available are difficult to compare. We describe Parareal, PFASST, MGRIT and STMG for the Dahlquist model problem using a common notation and give precise convergence estimates using generating functions. This allows us, for the first time, to directly compare their convergence. We prove that all four methods eventually converge super-linearly, and also compare them numerically. The generating function framework provides further opportunities to explore and analyze existing and new methods.
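
The common setting is concrete on the Dahlquist problem y' = λy. A minimal Parareal sketch, with backward Euler as both coarse and fine propagator, is shown below; PFASST, MGRIT, and STMG share the coarse/fine structure but differ in the hierarchy, and all parameters here are illustrative.

```python
import numpy as np

lam = -1.0                     # Dahlquist test problem: y' = lam * y, y(0) = 1
T, N = 2.0, 10                 # time domain split into N coarse slices
dT = T / N
m = 20                         # fine steps per slice

def coarse(y, dt):             # one backward-Euler step (coarse propagator G)
    return y / (1 - lam * dt)

def fine(y, dt, steps):        # many small backward-Euler steps (fine F)
    for _ in range(steps):
        y = y / (1 - lam * dt / steps)
    return y

# Initial coarse sweep to seed the slice interface values.
U = np.empty(N + 1); U[0] = 1.0
for n in range(N):
    U[n + 1] = coarse(U[n], dT)

# Parareal iterations: U_{n+1}^{k+1} = F(U_n^k) + G(U_n^{k+1}) - G(U_n^k).
for k in range(5):
    F = np.array([fine(U[n], dT, m) for n in range(N)])    # parallel in n
    G_old = np.array([coarse(U[n], dT) for n in range(N)])
    for n in range(N):                                     # sequential update
        U[n + 1] = F[n] + coarse(U[n], dT) - G_old[n]
    err = abs(U[-1] - np.exp(lam * T))
    print(f"iteration {k + 1}: error at T = {err:.2e}")
```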
