Underground gas storage (UGS) is a well-established technology worldwide that is becoming even more important for coping with seasonal peaks in gas consumption, given the growing uncertainties of the energy market. Safety issues concerning the reactivation of pre-existing faults may arise if the target reservoir is located in a faulted basin, where human activities can trigger (micro-)seismic events. In the Netherlands, it has been observed that fault activation can occur somewhat "unexpectedly" after primary production (PP), i.e., during cushion gas injection (CGI) and UGS cycles, when the stress regime should be on the unloading/reloading path. To understand the physical mechanisms responsible for such occurrences, a 3D mathematical model coupling frictional contact mechanics in faulted porous rocks with fluid flow is developed, implemented and tested. The final aim of this two-part work is to define a safe operational bandwidth for the pore pressure range in UGS activities in the faulted reservoirs of the Rotliegend formation. Part I of this work concerns the development of the mathematical and numerical model of frictional contact mechanics and flow in faulted porous rocks. A mixed discretization of the governing PDEs under frictional contact constraints along the faults is used. A slip-weakening constitutive law governing the macroscopic fault behavior is also presented. The model is tested in the setting of an idealized reservoir located in the Rotliegend formation. The analyses point out how fault reactivation during PP can lead to a stress redistribution, giving rise to a new equilibrium configuration. When the fault is reloaded in the opposite direction during the CGI and/or UGS stages, further activation events can occur even if the stress range does not exceed either the undisturbed initial value or the maximum strength ever experienced by the formation.
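
The slip-weakening behavior mentioned above is often written in a linear form; the following is the common textbook formulation, and the paper's exact constitutive law may differ in its details:

```latex
\tau_{\max}(\delta) = \mu(\delta)\,\sigma'_n,
\qquad
\mu(\delta) = \mu_s - (\mu_s - \mu_d)\,\min\!\left(\frac{\delta}{\delta_c},\, 1\right),
```

where \(\mu_s\) and \(\mu_d\) are the static and dynamic friction coefficients, \(\delta\) the accumulated slip, \(\delta_c\) the critical slip distance, and \(\sigma'_n\) the effective normal stress on the fault.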


Thanks to their dependency structure, non-parametric Hidden Markov Models (HMMs) are able to handle model-based clustering without specifying the group distributions. The aim of this work is to study the Bayes risk of clustering when using HMMs and to propose associated clustering procedures. We first give a result linking the Bayes risk of classification and the Bayes risk of clustering, which we use to identify the key quantity determining the difficulty of the clustering task. We also give a proof of this result in the i.i.d. framework, which might be of independent interest. Then we study the excess risk of the plug-in classifier. All these results are shown to remain valid in the online setting, where observations are clustered sequentially. Simulations illustrate our findings.
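
For reference, the Bayes risk of classification mentioned above is, in the standard i.i.d. setting, the smallest achievable misclassification probability; the paper's HMM-specific quantities build on but may differ from this textbook definition:

```latex
R^{*} = \inf_{g}\, \mathbb{P}\big(g(X) \neq Y\big)
      = \mathbb{P}\big(g^{*}(X) \neq Y\big),
\qquad
g^{*}(x) = \arg\max_{y}\, \mathbb{P}(Y = y \mid X = x),
```

and the excess risk of a plug-in classifier \(\hat{g}\) is \(R(\hat{g}) - R^{*}\).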

Photoplethysmography is a non-invasive optical technique that measures changes in blood volume within tissues. It is commonly and increasingly used in a variety of research and clinical applications to assess vascular dynamics and physiological parameters. Yet, contrary to heart rate variability measures, a field which has seen the development of stable standards and advanced toolboxes and software, no such standards and open tools exist for continuous photoplethysmogram (PPG) analysis. Consequently, the primary objective of this research was to identify, standardize, implement and validate key digital PPG biomarkers. This work describes the creation of a standard Python toolbox, denoted pyPPG, for long-term continuous PPG time series analysis recorded using a standard finger-based transmission pulse oximeter. The improved PPG peak detector had an F1-score of 88.19% on the state-of-the-art benchmark when evaluated on 2,054 adult polysomnography recordings totaling over 91 million reference beats. This algorithm outperformed the original open-source Matlab implementation by ~5% when benchmarked on a subset of 100 randomly selected MESA recordings. More than 3,000 fiducial points were manually annotated by two annotators in order to validate the fiducial point detector. The detector consistently demonstrated high performance, with a mean absolute error of less than 10 ms for all fiducial points. Based on these fiducial points, pyPPG engineers a set of 74 PPG biomarkers. Studying PPG time series variability using pyPPG can enhance our understanding of the manifestations and etiology of diseases. This toolbox can also be used for biomarker engineering in training data-driven models. pyPPG is available on physiozoo.org
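
In beat-detection benchmarks of this kind, the F1-score is typically computed by matching detected beats to reference beats within a tolerance window. A minimal sketch of such an evaluation follows; the function name, tolerance value, and greedy matching rule are illustrative assumptions, not pyPPG's actual API:

```python
def beat_f1(reference, detected, tol=0.15):
    """Greedily match detected beat times (seconds) to reference beats
    within a tolerance window and return precision, recall and F1."""
    ref, det = sorted(reference), sorted(detected)
    tp = 0
    i = j = 0
    while i < len(ref) and j < len(det):
        if abs(ref[i] - det[j]) <= tol:      # match within tolerance
            tp += 1
            i += 1
            j += 1
        elif det[j] < ref[i]:                # spurious detection
            j += 1
        else:                                # missed reference beat
            i += 1
    precision = tp / len(det) if det else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1
```

A detector that finds two of three reference beats, plus one false alarm, scores precision = recall = F1 = 2/3 under this rule.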

AI-generated content (AIGC) presents considerable challenges to educators around the world. Instructors need to be able to detect text generated by large language models, either with the naked eye or with the help of tools. There is also a growing need to understand the lexical, syntactic and stylistic features of AIGC. To address these challenges in English language teaching, we first present ArguGPT, a balanced corpus of 4,038 argumentative essays generated by 7 GPT models in response to essay prompts from three sources: (1) in-class or homework exercises, (2) TOEFL and (3) GRE writing tasks. Machine-generated texts are paired with a roughly equal number of human-written essays at three score levels, matched by essay prompt. We then hire English instructors to distinguish machine essays from human ones. Results show that, when first exposed to machine-generated essays, the instructors detect them with an accuracy of only 61%. The number rises to 67% after one round of minimal self-training. Next, we perform linguistic analyses of these essays, which show that machines produce sentences with more complex syntactic structures, while human essays tend to be lexically more complex. Finally, we test existing AIGC detectors and build our own detectors using SVMs and RoBERTa. Results suggest that a RoBERTa model fine-tuned on the ArguGPT training set achieves above 90% accuracy in both essay- and sentence-level classification. To the best of our knowledge, this is the first comprehensive analysis of argumentative essays produced by generative large language models. Machine-authored essays in ArguGPT and our models will be made publicly available at //github.com/huhailinguist/ArguGPT
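
The lexical-complexity contrast reported above can be illustrated with a crude lexical-diversity measure such as the type-token ratio; this is a generic illustration of one such feature, not the paper's actual feature set:

```python
import re

def type_token_ratio(text):
    """Crude lexical-diversity measure: number of unique word forms
    divided by the total number of words in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0
```

On this measure, text that repeats the same words (as more formulaic machine output might) scores lower than text drawing on a wider vocabulary.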

Saliency maps have become one of the most widely used interpretability techniques for convolutional neural networks (CNNs) due to their simplicity and the quality of the insights they provide. However, there are still doubts about whether these insights are a trustworthy representation of what CNNs actually use to come up with their predictions. This paper explores how rescuing the sign of the gradients from the saliency map can lead to a deeper understanding of multi-class classification problems. Using both pretrained CNNs and CNNs trained from scratch, we show that considering the sign and the effect not only of the correct class, but also the influence of the other classes, allows us to better identify the pixels of the image that the network is really focusing on. Furthermore, how occluding or altering those pixels is expected to affect the outcome also becomes clearer.
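
The intuition of keeping the gradient's sign and contrasting classes can be seen in miniature with a linear classifier, where the gradient of each logit with respect to the input is simply a weight row. This is a generic gradient-sign illustration under that assumption, not the paper's exact procedure:

```python
import numpy as np

def contrastive_signed_saliency(W, x, target):
    """For a linear classifier with logits z = W @ x, the gradient of
    logit c w.r.t. the input is the row W[c]. Contrasting the target
    class against its strongest rival keeps the sign: positive entries
    push the prediction toward the target, negative ones toward the rival."""
    z = W @ x
    rival = np.argsort(z)[-1]
    if rival == target:              # target is already top: compare to runner-up
        rival = np.argsort(z)[-2]
    return W[target] - W[rival]
```

For a deep CNN the weight rows are replaced by input gradients, but the sign retains the same "supports target vs. supports rival" reading.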

Surface defect inspection is of great importance for industrial manufacturing and production. Though defect inspection methods based on deep learning have made significant progress, some challenges remain for these methods, such as indistinguishable weak defects and defect-like interference in the background. To address these issues, we propose a transformer network with multi-stage CNN (Convolutional Neural Network) feature injection for surface defect segmentation, a UNet-like structure named CINFormer. CINFormer presents a simple yet effective feature integration mechanism that injects the multi-level CNN features of the input image into different stages of the transformer encoder. This maintains the merit of CNNs in capturing detailed features and that of transformers in suppressing background noise, which facilitates accurate defect detection. In addition, CINFormer presents a Top-K self-attention module that focuses on the tokens carrying the most information about the defects, so as to further reduce the impact of the redundant background. Extensive experiments conducted on the surface defect datasets DAGM 2007, Magnetic tile, and NEU show that the proposed CINFormer achieves state-of-the-art performance in defect detection.
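
The Top-K idea (keep only each query's k strongest attention scores and mask the rest before the softmax) can be sketched in a few lines of NumPy. This is a generic single-head sketch of the technique, not CINFormer's exact module:

```python
import numpy as np

def topk_self_attention(q, k, v, topk):
    """Single-head attention that keeps, for each query, only its
    top-k scores and masks the rest to -inf before the softmax."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # threshold = k-th largest score in each query row
    thresh = np.sort(scores, axis=-1)[:, -topk][:, None]
    masked = np.where(scores >= thresh, scores, -np.inf)
    # softmax over the surviving scores; masked entries get weight 0
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights
```

Because masked entries receive exactly zero weight, background tokens outside the top-k cannot contribute to the output at all, rather than merely being down-weighted.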

Promoting sustainable transportation options is increasingly crucial in the pursuit of environmentally friendly and efficient campus mobility systems. Among these options, bike-sharing programs have garnered substantial attention for their capacity to mitigate traffic congestion, decrease carbon emissions, and enhance overall campus sustainability. However, improper selection of bike-sharing sites has led to growing unsustainable practices on campus, including disorderly parking and the indiscriminate placement of shared bicycles. To this end, this paper proposes a novel sustainable-development-oriented campus bike-sharing site evaluation model integrating the improved Delphi and fuzzy comprehensive evaluation approaches. Fourteen evaluation metrics are first selected from four dimensions: user features, implementation and usage characteristics of parking spots, environmental sustainability, and social sustainability, through the combination of expert experience and the improved Delphi method. Then, the analytic hierarchy process and the entropy weight method are employed to determine the weights of the evaluation indices, ensuring a robust and objective assessment framework. The fuzzy comprehensive evaluation method is finally applied to evaluate the quality of the location selection. The South Campus of Henan Polytechnic University is selected as a case study for the proposed evaluation system. This work contributes to the existing body of knowledge by presenting a comprehensive location selection evaluation system for campus bike-sharing, informed by the principles of sustainable development.
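
The entropy weight method mentioned above derives objective criterion weights directly from the data: criteria whose scores vary little across alternatives carry little information and receive low weight. A minimal sketch of the standard algorithm (the matrix layout and variable names are illustrative):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method on an (alternatives x criteria) matrix of
    positive scores: normalize each column, compute its Shannon entropy,
    and weight criteria by their degree of divergence (1 - entropy)."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    P = X / X.sum(axis=0)                      # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)   # 0*log(0) := 0
    e = -plogp.sum(axis=0) / np.log(n)         # normalized entropy per criterion
    d = 1.0 - e                                # degree of divergence
    return d / d.sum()                         # weights summing to 1
```

A criterion on which every site scores identically has entropy 1 and weight 0, which is what makes the resulting assessment "objective" relative to purely expert-assigned weights.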

We present NeuralLabeling, a labeling approach and toolset for annotating a scene using either bounding boxes or meshes and generating segmentation masks, affordance maps, 2D bounding boxes, 3D bounding boxes, 6DOF object poses, depth maps and object meshes. NeuralLabeling uses Neural Radiance Fields (NeRF) as the renderer, allowing labeling to be performed using 3D spatial tools while incorporating geometric clues such as occlusions, relying only on images captured from multiple viewpoints as input. To demonstrate the applicability of NeuralLabeling to a practical problem in robotics, we added ground-truth depth maps to 30,000 frames of RGB images and noisy depth maps of transparent glasses placed in a dishwasher, captured using an RGBD sensor, yielding the Dishwasher30k dataset. We show that training a simple deep neural network with supervision from the annotated depth maps yields higher reconstruction performance than training with the previously applied weakly supervised approach.
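
Supervised depth training of the kind described above typically minimizes an error restricted to pixels with valid ground truth (transparent regions often lack a reliable sensor depth). A minimal sketch of such a masked loss, with an assumed L1 form that may differ from the paper's actual objective:

```python
import numpy as np

def masked_l1_depth_loss(pred, gt, valid_mask):
    """Mean absolute depth error over pixels where the ground-truth
    depth is valid; invalid pixels contribute nothing to the loss."""
    diff = np.abs(pred - gt)[valid_mask]
    return diff.mean() if diff.size else 0.0
```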

Owing to its effective and flexible data acquisition, the unmanned aerial vehicle (UAV) has recently become a hotspot across the fields of computer vision (CV) and remote sensing (RS). Inspired by the recent success of deep learning (DL), many advanced object detection and tracking approaches have been widely applied to various UAV-related tasks, such as environmental monitoring, precision agriculture, and traffic management. This paper provides a comprehensive survey of the research progress and prospects of DL-based UAV object detection and tracking methods. More specifically, we first outline the challenges and statistics of existing methods, and provide solutions from the perspective of DL-based models in three research topics: object detection from images, object detection from videos, and object tracking from videos. Open datasets related to UAV-dominated object detection and tracking are exhaustively compiled, and four benchmark datasets are employed for performance evaluation using some state-of-the-art methods. Finally, prospects and considerations for future work are discussed and summarized. We expect this survey to provide researchers from the remote sensing field with an overview of DL-based UAV object detection and tracking methods, along with some thoughts on their further development.

We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We will share our code based on the Timm library and pre-trained models.
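
The two alternating operations described above can be written down almost verbatim for a single block acting on a (patches × channels) matrix. This sketch omits ResMLP's affine normalization layers and replaces GELU with ReLU for brevity, so it illustrates the structure rather than reproducing the published model:

```python
import numpy as np

def resmlp_block(x, W_patch, W1, W2):
    """One simplified ResMLP block on x of shape (num_patches, channels):
    (i)  a cross-patch linear layer applied identically to every channel,
    (ii) a per-patch two-layer feed-forward network across channels.
    Both sub-layers are wrapped in residual connections."""
    # (i) patches interact: linear mixing along the patch dimension
    x = x + W_patch @ x
    # (ii) channels interact independently per patch (ReLU stands in for GELU)
    x = x + np.maximum(x @ W1, 0.0) @ W2
    return x
```

Stacking such blocks, with a patch-embedding layer in front and average pooling plus a linear classifier behind, gives the overall architecture the abstract describes.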

Although measuring held-out accuracy has been the primary approach to evaluate generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors. Inspired by principles of behavioral testing in software engineering, we introduce CheckList, a task-agnostic methodology for testing NLP models. CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly. We illustrate the utility of CheckList with tests for three tasks, identifying critical failures in both commercial and state-of-the-art models. In a user study, a team responsible for a commercial sentiment analysis model found new and actionable bugs in an extensively tested model. In another user study, NLP practitioners with CheckList created twice as many tests, and found almost three times as many bugs, as users without it.
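
One way CheckList-style tools generate many diverse test cases quickly is by expanding templates over slot fillers; a model is then asserted to behave consistently (e.g., keep the same sentiment label) across all expansions. A minimal sketch of such template expansion, with an assumed function name that is not CheckList's actual API:

```python
from itertools import product

def expand_template(template, slots):
    """Expand a CheckList-style template into concrete test cases by
    filling each {slot} with every value in its candidate list."""
    names = list(slots)
    return [template.format(**dict(zip(names, combo)))
            for combo in product(*(slots[n] for n in names))]
```

For a minimum functionality test, each generated sentence would be fed to the model together with its expected label, and every mismatch counted as a failure.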
