
Widely present in the primary circuit of Nuclear Power Plants (NPP), Dissimilar Metal Welds (DMW) are inspected using Ultrasonic nondestructive Testing (UT) techniques to ensure the integrity of the structure and to detect defects such as Stress Corrosion Cracking (SCC). In a previous collaborative research program, CRIEPI and CEA worked on understanding the propagation of ultrasonic waves in complex materials: the anisotropic and inhomogeneous properties of the medium can disturb the ultrasonic propagation, making inspection results difficult to interpret. An analytical model based on dynamic ray theory, developed by CEA-LIST and implemented in the CIVA software, was used to predict the ultrasonic propagation in a DMW. The model evaluates the ray trajectories, the travel times, and the amplitude along the ray tube in a medium described by a continuously varying description of its physical properties. In that study, the weld was described by an analytical law for the crystallographic orientation, and the simulated detection of calibrated notches located in the buttering and the weld showed good agreement with experimental data. The new collaborative program presented in this paper aims at detecting a real SCC defect located close to the root of the DMW. Simulations have been performed for a DMW described with both an analytical law and a smooth cartography of the crystallographic orientation. Furthermore, advanced ultrasonic testing methods have been used to inspect the specimen and detect the real SCC defect. Experimental and simulated results of the mock-up inspection have been compared.
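The abstract mentions describing the weld through an analytical law of the crystallographic orientation. As a hedged illustration of what such a law can look like, the sketch below implements an Ogilvy-type orientation model, a common analytical description of grain orientation in V-butt welds; the function name and all parameter values are illustrative and not taken from the CIVA implementation.

```python
import numpy as np

def ogilvy_orientation(x, z, T=1.0, D=2.0, alpha=np.radians(10.0), eta=1.0):
    """Ogilvy-type analytical law for crystallographic orientation in a
    V-butt weld (illustrative parameters, not the CIVA implementation).

    x : horizontal distance from the weld centre line (mm)
    z : height above the weld root (mm)
    Returns the grain orientation angle theta (radians) at (x, z).
    """
    # Grains grow perpendicular to the fusion face and bend towards the
    # weld centre; this tangent law reproduces that qualitative shape.
    tan_theta = T * (D + z * np.tan(alpha)) / np.maximum(np.abs(x), 1e-9) ** eta
    return np.sign(x) * np.arctan(tan_theta)

# Sample the orientation on a coarse grid to mimic a smooth cartography.
xs = np.linspace(-10.0, 10.0, 21)
zs = np.linspace(0.0, 30.0, 31)
theta = ogilvy_orientation(xs[None, :], zs[:, None])  # shape (31, 21)
```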

Related Content

Today, services account for a major share of the IT industry. Companies increasingly prefer to focus on their core areas of expertise and rely on IT services for all their peripheral needs. Services computing is a new science that aims to study and better understand the foundations of this highly popular industry. It covers the science and technology of leveraging computing and information technology to model, create, operate, and manage business services. SCC 2019 will also contribute to building the pillars of this important science and to shaping the future of services computing.
May 3, 2024

Formalizing creativity-related concepts has been a long-term goal of Computational Creativity. To the same end, we explore Formal Learning Theory in the context of creativity. We provide an introduction to the main concepts of this framework and a re-interpretation of terms commonly found in creativity discussions, proposing formal definitions for novelty and transformational creativity. This formalisation marks the beginning of a research branch we call Formal Creativity Theory, exploring how learning can be included as preparation for exploratory behaviour and how learning is a key part of transformational creative behaviour. By employing these definitions, we argue that, while novelty is neither necessary nor sufficient for transformational creativity in general, an agent that works from an inspiring set, rather than from a sequence of experiences, does require novelty for transformational creativity to occur.
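As an informal illustration of the inspiring-set notion of novelty, the toy sketch below treats the agent's conceptual space as the closure of an inspiring set under a generative rule and calls an artefact novel when it lies outside that closure. The helper names and the depth bound are hypothetical and far simpler than the formal definitions proposed in the paper.

```python
def closure(inspiring_set, produce, max_depth=3):
    """Expand an inspiring set under a production rule up to max_depth.

    A toy stand-in for 'derivable from the inspiring set'; `produce`
    and `max_depth` are illustrative choices, not the paper's formalism.
    """
    known = set(inspiring_set)
    frontier = set(inspiring_set)
    for _ in range(max_depth):
        frontier = {y for x in frontier for y in produce(x)} - known
        known |= frontier
    return known

def is_novel(artefact, inspiring_set, produce):
    # An artefact is novel iff it lies outside the closure of the
    # inspiring set under the agent's current generative rules.
    return artefact not in closure(inspiring_set, produce)

# Toy conceptual space: strings closed under doubling.
produce = lambda s: {s + s}
print(is_novel("ab", {"a", "b"}, produce))   # True: not derivable
print(is_novel("aa", {"a", "b"}, produce))   # False: derivable by doubling
```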

We present an approach for the efficient implementation of self-adjusting multi-rate Runge-Kutta methods and we extend the previously available stability analyses of these methods to the case of an arbitrary number of sub-steps for the active components. We propose a physically motivated model problem that can be used to assess the stability of different multi-rate versions of standard Runge-Kutta methods and the impact of different interpolation methods for the latent variables. Finally, we present the results of several numerical experiments, performed with implementations of the proposed methods in the framework of the \textit{OpenModelica} open-source modelling and simulation software, which demonstrate the efficiency gains deriving from the use of the proposed multi-rate approach for physical modelling problems with multiple time scales.
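To make the mechanism concrete, here is a minimal multi-rate sketch using forward Euler (the simplest Runge-Kutta method): the slow "latent" component takes one macro-step while the fast "active" component takes several sub-steps, with the latent variable linearly interpolated in between. All function names and step counts are illustrative; the self-adjusting partitioning and higher-order stages of the paper's methods are omitted.

```python
import numpy as np

def multirate_euler(f_fast, f_slow, y_fast, y_slow, t0, t1, n_macro, m_sub):
    """Multi-rate forward Euler: the slow component takes one macro-step
    of size H while the fast component takes m_sub sub-steps of size H/m_sub.
    Illustrative only; real self-adjusting multi-rate RK methods choose the
    partitioning and m_sub adaptively."""
    H = (t1 - t0) / n_macro
    h = H / m_sub
    t = t0
    for _ in range(n_macro):
        slow_old = y_slow
        # One macro-step for the slow ('latent') component.
        y_slow = y_slow + H * f_slow(t, y_fast, y_slow)
        # m_sub sub-steps for the fast ('active') component, with the
        # slow variable linearly interpolated across the macro-step.
        for k in range(m_sub):
            theta = (k + 0.5) / m_sub          # midpoint of the sub-step
            slow_interp = (1 - theta) * slow_old + theta * y_slow
            y_fast = y_fast + h * f_fast(t + k * h, y_fast, slow_interp)
        t += H
    return y_fast, y_slow

# Toy two-time-scale pair: fast relaxation towards a slowly decaying target.
f_fast = lambda t, yf, ys: -50.0 * (yf - ys)
f_slow = lambda t, yf, ys: -0.5 * ys
yf, ys = multirate_euler(f_fast, f_slow, 1.0, 1.0, 0.0, 1.0, 20, 10)
```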

Magnetic sounding using data collected from the Juno mission can be used to provide constraints on Jupiter's interior. However, inwards continuation of reconstructions assuming zero electrical conductivity and a representation in spherical harmonics are limited by the enhancement of noise at small scales. Here we describe new reconstructions of Jupiter's internal magnetic field based on physics-informed neural networks and either the first 33 (PINN33) or the first 50 (PINN50) of Juno's orbits. The method can resolve local structures, and allows for weak ambient electrical currents. Our models are not hampered by noise amplification at depth, and offer a much clearer picture of the interior structure. We estimate that the dynamo boundary is at a fractional radius of 0.8. At this depth, the magnetic field is arranged into longitudinal bands, and strong local features such as the great blue spot appear to be rooted in neighbouring structures of oppositely signed flux.
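A minimal sketch of the physics-informed ingredient, assuming a standard PINN setup: a small network maps position to the field B, a data term fits (synthetic stand-in) observations, and soft penalties keep div B near zero while only weakly penalizing the curl, i.e. allowing weak ambient currents as in the abstract. The architecture, loss weights, and data below are placeholders, far smaller than the PINN33/PINN50 models.

```python
import torch

class FieldNet(torch.nn.Module):
    """Maps Cartesian position -> magnetic field B (hypothetical,
    much smaller architecture than the paper's models)."""
    def __init__(self, width=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 3))
    def forward(self, x):
        return self.net(x)

def physics_residuals(model, x):
    """Soft physics penalties: div(B) = 0 always, and curl(B) small where
    ambient currents are weak (they are allowed, not forbidden)."""
    x = x.requires_grad_(True)
    B = model(x)
    grads = [torch.autograd.grad(B[:, i].sum(), x, create_graph=True)[0]
             for i in range(3)]                # grads[i][:, j] = dB_i/dx_j
    div = grads[0][:, 0] + grads[1][:, 1] + grads[2][:, 2]
    curl = torch.stack([grads[2][:, 1] - grads[1][:, 2],
                        grads[0][:, 2] - grads[2][:, 0],
                        grads[1][:, 0] - grads[0][:, 1]], dim=1)
    return div.pow(2).mean(), curl.pow(2).sum(dim=1).mean()

model = FieldNet()
x_obs = torch.randn(256, 3)                    # stand-in for orbit positions
B_obs = torch.randn(256, 3)                    # stand-in for Juno measurements
div_loss, curl_loss = physics_residuals(model, x_obs)
data_loss = (model(x_obs) - B_obs).pow(2).mean()
loss = data_loss + div_loss + 0.1 * curl_loss  # weights are illustrative
loss.backward()
```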

We consider the problem of the discrete-time approximation of the solution of a one-dimensional SDE with piecewise locally Lipschitz drift and continuous diffusion coefficients with polynomial growth. In this paper, we study the strong convergence of a (semi-explicit) exponential-Euler scheme previously introduced in Bossy et al. (2021). We show the usual 1/2 rate of convergence for the exponential-Euler scheme when the drift is continuous. When the drift is discontinuous, the convergence rate is penalised by a factor $\epsilon$ decreasing with the time-step. We examine the case of the diffusion coefficient vanishing at zero, which adds a positivity-preservation condition and requires a convergence analysis that exploits the negative moments and exponential moments of the scheme, with the help of the change-of-time technique introduced in Berkaoui et al. (2008). The asymptotic behaviour and theoretical stability of the exponential scheme, as well as numerical experiments, are also presented.
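For orientation, the sketch below shows one common construction of a semi-explicit exponential-Euler step for a positive SDE of the form dX = X f(X) dt + sigma X dW: freeze the drift coefficient at X_n and integrate the resulting geometric SDE exactly, so the exponential preserves positivity regardless of the step size. This is a generic illustration under these assumptions, not necessarily the exact scheme of Bossy et al. (2021).

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_euler(f, sigma, x0, T, n_steps):
    """Semi-explicit exponential-Euler step for dX = X f(X) dt + sigma X dW
    (a sketch; the scheme of Bossy et al. 2021 may differ in detail).
    Freezing f at X_n gives a geometric SDE over the step, solved exactly,
    so positivity is preserved by construction."""
    h = T / n_steps
    x = x0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))
        x = x * np.exp((f(x) - 0.5 * sigma**2) * h + sigma * dW)
    return x

# Piecewise drift coefficient, discontinuous at x = 1, as in the abstract.
f = lambda x: 2.0 if x < 1.0 else -1.0
samples = [exponential_euler(f, 0.5, 0.8, 1.0, 200) for _ in range(1000)]
print(np.mean(samples))
```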

We consider the problem of linearly ordered (LO) coloring of hypergraphs. A hypergraph has an LO coloring if there is a vertex coloring, using a set of ordered colors, so that (i) no edge is monochromatic, and (ii) each edge has a unique maximum color. It is an open question whether a 2-LO colorable 3-uniform hypergraph can be LO colored with 3 colors in polynomial time. Nakajima and Živný recently gave a polynomial-time algorithm to color such hypergraphs with $\widetilde{O}(n^{1/3})$ colors and asked if SDP methods can be used directly to obtain improved bounds. Our main result is to show how to use SDP-based rounding methods to produce an LO coloring with $\widetilde{O}(n^{1/5})$ colors for such hypergraphs. We first show that we can reduce the problem to cases with highly structured SDP solutions, which we call balanced hypergraphs. Then we show how to apply classic SDP-rounding tools in this case. We believe that the reduction to balanced hypergraphs is novel and could be of independent interest.
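The two LO conditions are easy to state in code. The checker below verifies, for integer colors ordered by <, that no edge is monochromatic and that the maximum color on each edge is attained exactly once; it is a validity check only, not the SDP-rounding algorithm of the paper.

```python
from collections import Counter

def is_lo_coloring(edges, coloring):
    """Check the LO conditions: (i) no edge is monochromatic, and
    (ii) the maximum color on each edge is achieved by exactly one vertex."""
    for edge in edges:
        colors = [coloring[v] for v in edge]
        if len(set(colors)) == 1:
            return False                      # (i) monochromatic edge
        if Counter(colors)[max(colors)] != 1:
            return False                      # (ii) max color not unique
    return True

# A 3-uniform hypergraph LO colored with 2 ordered colors.
edges = [(0, 1, 2), (1, 2, 3), (0, 2, 3)]
print(is_lo_coloring(edges, {0: 1, 1: 1, 2: 2, 3: 1}))  # True
```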

The Hierarchy Of Time-Surfaces (HOTS) algorithm, a neuromorphic approach for feature extraction from event data, presents promising capabilities but faces challenges in accuracy and compatibility with neuromorphic hardware. In this paper, we introduce Sup3r, a semi-supervised algorithm aimed at addressing these challenges. Sup3r enhances sparsity, stability, and separability in HOTS networks. By leveraging semi-supervised learning, it enables end-to-end online training of HOTS networks, replacing external classifiers. Sup3r learns class-informative patterns, mitigates confounding features, and reduces the number of processed events. Moreover, Sup3r facilitates continual and incremental learning, allowing adaptation to data-distribution shifts and learning of new tasks without forgetting. Preliminary results on N-MNIST demonstrate that Sup3r achieves accuracy comparable to similarly sized Artificial Neural Networks trained with back-propagation. This work showcases the potential of Sup3r to advance the capabilities of HOTS networks, offering a promising avenue for neuromorphic algorithms in real-world applications.
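For readers unfamiliar with HOTS, the sketch below computes its basic building block, the exponential time surface around each event. Polarity handling, the layer hierarchy, and everything Sup3r adds (semi-supervised training, mitigation of confounding features) are omitted, and the parameter values are illustrative.

```python
import numpy as np

def time_surfaces(events, width, height, tau=50e3, radius=2):
    """Exponential time surface per event (single polarity, no hierarchy).

    events: iterable of (x, y, t), time-ordered, t in microseconds.
    Returns one (2*radius+1) x (2*radius+1) patch per interior event.
    """
    last_t = np.full((height, width), -np.inf)   # last spike time per pixel
    patches = []
    for x, y, t in events:
        last_t[y, x] = t
        y0, y1 = y - radius, y + radius + 1
        x0, x1 = x - radius, x + radius + 1
        if y0 < 0 or x0 < 0 or y1 > height or x1 > width:
            continue                              # skip border events for brevity
        # Each pixel decays exponentially with its time since last spike.
        patches.append(np.exp((last_t[y0:y1, x0:x1] - t) / tau))
    return patches

events = [(10, 10, 0), (11, 10, 1000), (10, 11, 2500)]
for p in time_surfaces(events, 34, 34):           # 34x34: N-MNIST resolution
    print(p.max())                                # the event pixel itself is 1.0
```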

Natural Language Processing (NLP) has been revolutionized by the use of Pre-trained Language Models (PLMs) such as BERT. Despite setting new records in nearly every NLP task, PLMs still face a number of challenges, including poor interpretability, weak reasoning capability, and the need for large amounts of expensive annotated data when applied to downstream tasks. By integrating external knowledge into PLMs, \textit{\underline{K}nowledge-\underline{E}nhanced \underline{P}re-trained \underline{L}anguage \underline{M}odels} (KEPLMs) have the potential to overcome the above-mentioned limitations. In this paper, we examine KEPLMs systematically through a series of studies. Specifically, we outline the common types and different formats of knowledge to be integrated into KEPLMs, detail the existing methods for building and evaluating KEPLMs, present the applications of KEPLMs in downstream tasks, and discuss future research directions. Researchers will benefit from this survey by gaining a quick and comprehensive overview of the latest developments in this field.

Pre-trained Language Models (PLMs) have achieved great success in various Natural Language Processing (NLP) tasks under the pre-training and fine-tuning paradigm. With large quantities of parameters, PLMs are computation-intensive and resource-hungry. Hence, model pruning has been introduced to compress large-scale PLMs. However, most prior approaches only consider task-specific knowledge towards downstream tasks, but ignore the essential task-agnostic knowledge during pruning, which may cause catastrophic forgetting and lead to poor generalization ability. To maintain both task-agnostic and task-specific knowledge in the pruned model, we propose ContrAstive Pruning (CAP) under the paradigm of pre-training and fine-tuning. It is designed as a general framework, compatible with both structured and unstructured pruning. Unified in contrastive learning, CAP enables the pruned model to learn from the pre-trained model for task-agnostic knowledge and from the fine-tuned model for task-specific knowledge. Besides, to better retain the performance of the pruned model, the snapshots (i.e., the intermediate models at each pruning iteration) also serve as effective supervision for pruning. Our extensive experiments show that adopting CAP consistently yields significant improvements, especially in extremely high sparsity scenarios. With only 3% of model parameters reserved (i.e., 97% sparsity), CAP successfully achieves 99.2% and 96.3% of the original BERT performance on the QQP and MNLI tasks. In addition, our probing experiments demonstrate that the model pruned by CAP tends to achieve better generalization ability.
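A hedged sketch of the contrastive ingredient: an InfoNCE-style loss that pulls each pruned-model representation towards the matching teacher representation (pre-trained or fine-tuned) and pushes it away from other examples in the batch. The tensors and temperature below are placeholders, and the full CAP objective combines several such terms, including the pruning snapshots as teachers.

```python
import torch
import torch.nn.functional as F

def contrastive_distill(pruned_repr, teacher_repr, temperature=0.1):
    """InfoNCE-style loss: each pruned-model representation should be most
    similar to its own teacher representation among the batch. A sketch of
    the contrastive ingredient in CAP-like pruning, not the full objective."""
    z = F.normalize(pruned_repr, dim=-1)
    t = F.normalize(teacher_repr, dim=-1)
    logits = z @ t.T / temperature            # (batch, batch) similarities
    labels = torch.arange(z.size(0))          # positives on the diagonal
    return F.cross_entropy(logits, labels)

pruned = torch.randn(16, 768)                 # e.g. pruned BERT [CLS] states
pretrained = torch.randn(16, 768)             # task-agnostic teacher
finetuned = torch.randn(16, 768)              # task-specific teacher
loss = contrastive_distill(pruned, pretrained) + \
       contrastive_distill(pruned, finetuned)
```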

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (a 7.7 percentage point absolute improvement), MultiNLI accuracy to 86.7% (a 4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (a 1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (a 5.1 point absolute improvement).
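As a usage illustration of the "one additional output layer" claim, the snippet below fine-tunes BERT for binary classification via the third-party Hugging Face transformers library (not part of the original paper): a linear classifier is placed on top of the pre-trained representation and trained with the standard loss.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# BertForSequenceClassification is exactly "one additional output layer":
# a linear classifier on top of the pre-trained pooled representation.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

batch = tokenizer(["the movie was great", "the movie was terrible"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])
loss = model(**batch, labels=labels).loss     # standard fine-tuning loss
loss.backward()                               # then step an optimizer as usual
```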
