
In response to the transformation towards Industry 5.0, there is a growing call for manufacturing systems that prioritize environmental sustainability alongside the emerging application of digital tools. Extended Reality (XR) - including Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) - is one of the technologies identified as an enabler of Industry 5.0. XR could also be a driver of more sustainable manufacturing; however, its potential environmental benefits have received limited attention. This paper explores current manufacturing applications and research on XR technology connected to the environmental sustainability principle. The objectives of this paper are twofold: (1) identify the use cases of XR technology currently explored in the literature and research that address environmental sustainability in manufacturing; (2) provide industry and companies with guidance and references to use cases, toolboxes, methodologies, and workflows for implementing XR in environmentally sustainable manufacturing practices. Based on the categorization of sustainability indicators developed by the National Institute of Standards and Technology (NIST), the authors analyzed and mapped the current literature, using pragmatic XR use cases for manufacturing as the selection criterion. The exploration resulted in a mapping of current applications and use cases of XR technology within manufacturing that have the potential to drive environmental sustainability. The results are presented as stated use cases with references to the literature, serving as guidance and inspiration for future research or industrial implementations that use XR as a driver of environmental sustainability. Furthermore, the authors open a discussion of future work and research to increase attention to XR as a driver of environmental sustainability.
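As an illustration only, the sketch below shows one way a literature mapping of this kind could be recorded programmatically. It is not the authors' actual mapping or the NIST indicator list; all paper titles, XR types, and indicator-category names are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' mapping): recording surveyed XR use cases
# against environmental-sustainability indicator categories.
# All titles and category names below are hypothetical placeholders.
from collections import defaultdict

papers = [
    {"title": "VR-based assembly training", "xr_type": "VR",
     "indicator_categories": ["energy use", "waste reduction"]},
    {"title": "AR-assisted maintenance", "xr_type": "AR",
     "indicator_categories": ["resource efficiency"]},
]

mapping = defaultdict(list)
for paper in papers:
    for category in paper["indicator_categories"]:
        mapping[category].append((paper["xr_type"], paper["title"]))

for category, use_cases in sorted(mapping.items()):
    print(f"{category}: {use_cases}")
```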

Related content

Retinopathy of prematurity (ROP) is a severe condition affecting premature infants, leading to abnormal retinal blood vessel growth, retinal detachment, and potential blindness. While semi-automated systems have been used in the past to diagnose ROP-related plus disease by quantifying retinal vessel features, traditional machine learning (ML) models face challenges such as limited accuracy and overfitting. Recent advances in deep learning (DL), especially convolutional neural networks (CNNs), have significantly improved ROP detection and classification. The i-ROP deep learning (i-ROP-DL) system also shows promise in detecting plus disease, offering the potential for reliable ROP diagnosis. This research comprehensively examines the contemporary progress and challenges associated with using retinal imaging and artificial intelligence (AI) to detect ROP, offering valuable insights that can guide further investigation in this domain. Based on 89 original studies in this field (out of 1487 studies that were comprehensively reviewed), we concluded that traditional methods of ROP diagnosis suffer from subjectivity and manual analysis, leading to inconsistent clinical decisions. AI holds great promise for improving ROP management. This review explores AI's potential in ROP detection, classification, diagnosis, and prognosis.
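For readers unfamiliar with the CNN-based approach mentioned here, the following is a minimal, generic transfer-learning sketch for a three-class classifier. It is not the i-ROP-DL system; the class labels (normal / pre-plus / plus), backbone, and hyperparameters are assumptions for illustration (torchvision >= 0.13 weights API).

```python
# Hypothetical sketch of CNN-based plus-disease classification via transfer learning.
# NOT the i-ROP-DL system; labels, backbone, and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # assumed labels: normal / pre-plus / plus

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace final layer

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One supervised step on a batch of fundus images shaped (B, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```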

Connected and autonomous vehicles (CAVs) will greatly impact the lives of individuals with visual impairments, but how their expectations differ from those of sighted individuals is not clear. The present research reports results based on survey responses from 114 visually impaired participants and 117 panel-recruited participants without visual impairments from Germany. Their attitudes towards autonomous vehicles and their expectations for the consequences of widespread adoption of CAVs are assessed. Results indicate significantly more positive CAV attitudes in participants with visual impairments than in those without visual impairments. Mediation analyses indicate that visually impaired individuals' more positive CAV attitudes (compared to sighted individuals') are largely explained by higher hopes for independence and more optimistic expectations regarding safety and sustainability. Policy makers should ensure accessibility without sacrificing goals of higher safety and lower ecological impact to make CAVs an acceptable, inclusive mobility solution.
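To make the "mediation analyses" concrete, here is a minimal product-of-coefficients sketch on synthetic data using statsmodels. Variable names, coding, and effect sizes are hypothetical; the published analysis may use a different model and inference procedure.

```python
# Illustrative mediation sketch (product of coefficients, OLS); synthetic data,
# hypothetical variable names -- not the paper's actual analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)                                  # 0 = sighted, 1 = visually impaired (assumed coding)
mediator = 0.8 * group + rng.normal(size=n)                    # "hope for independence" (synthetic)
outcome = 0.5 * mediator + 0.2 * group + rng.normal(size=n)    # "CAV attitude" (synthetic)
df = pd.DataFrame({"visual_impairment": group,
                   "hope_independence": mediator,
                   "cav_attitude": outcome})

# Path a: group -> mediator; paths b and c': mediator + group -> outcome
a = smf.ols("hope_independence ~ visual_impairment", data=df).fit().params["visual_impairment"]
fit_y = smf.ols("cav_attitude ~ hope_independence + visual_impairment", data=df).fit()
b, c_prime = fit_y.params["hope_independence"], fit_y.params["visual_impairment"]

print("indirect effect (a*b):", a * b, "  direct effect (c'):", c_prime)
```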

We present an information-theoretic lower bound for the problem of parameter estimation with time-uniform coverage guarantees. Via a new reduction to sequential testing, we obtain stronger lower bounds that capture the hardness of the time-uniform setting. For location model estimation, logistic regression, and exponential family models, our $\Omega(\sqrt{n^{-1}\log \log n})$ lower bound is sharp to within constant factors in typical settings.
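As an informal illustration only (our notation and phrasing, not necessarily the paper's statement), a lower bound of the stated order for time-uniform coverage could be phrased as follows.

```latex
% One plausible phrasing (our notation; the paper's exact statement may differ):
% any confidence sequence (C_n) with time-uniform coverage,
%   P_\theta(\theta \in C_n \ \forall n) \ge 1 - \alpha for every \theta,
% cannot shrink faster than the iterated-logarithm rate:
\[
  \sup_{n \ge n_0} \sqrt{\frac{n}{\log\log n}}\,\operatorname{diam}(C_n)
  \;\ge\; c_\alpha \;>\; 0,
\]
% matching the stated \Omega(\sqrt{n^{-1}\log\log n}) lower bound.
```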

Quality assessment, including inspecting images for artifacts, is a critical step during MRI data acquisition to ensure data quality and the success of downstream analysis or interpretation. This study demonstrates a deep learning model to detect rigid motion in T1-weighted brain images. We leveraged a 2D CNN for three-class classification and tested it on publicly available retrospective and prospective datasets. Grad-CAM heatmaps enabled identification of failure modes and provided an interpretation of the model's results. The model achieved average precision and recall of 85% and 80% on six motion-simulated retrospective datasets. Additionally, the model's classifications on the prospective dataset showed a strong inverse correlation (-0.84) with average edge strength, an image quality metric indicative of motion. This model is part of the ArtifactID tool, aimed at inline automatic detection of Gibbs ringing, wrap-around, and motion artifacts. This tool automates part of the time-consuming QA process and augments expertise on-site, which is particularly relevant in low-resource settings where local MR knowledge is scarce.
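For orientation, the sketch below shows how Grad-CAM heatmaps of the kind described can be computed for a toy 2D CNN in PyTorch. This is a minimal illustration, not the ArtifactID implementation; the architecture, input size, and class labels are assumptions.

```python
# Minimal Grad-CAM sketch for a small 2D CNN (illustrative; not the ArtifactID code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy three-class classifier (assumed labels, e.g. no motion / mild / severe)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.features(x)                                    # (B, 32, H/4, W/4)
        logits = self.classifier(F.adaptive_avg_pool2d(fmap, 1).flatten(1))
        return logits, fmap

def grad_cam(model, image, target_class):
    """Heatmap highlighting regions that drive the score of `target_class`."""
    logits, fmap = model(image.unsqueeze(0))                       # image: (1, H, W)
    grads, = torch.autograd.grad(logits[0, target_class], fmap)    # d score / d feature map
    weights = grads.mean(dim=(2, 3), keepdim=True)                 # global-average-pool the gradients
    cam = F.relu((weights * fmap).sum(dim=1))[0]                   # weighted channel sum + ReLU
    return (cam / (cam.max() + 1e-8)).detach()                     # normalised to [0, 1]

model = SmallCNN()
heatmap = grad_cam(model, torch.randn(1, 64, 64), target_class=2)
```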

Signal processing in the time-frequency plane has a long history and remains a field of methodological innovation. For instance, detection and denoising based on the zeros of the spectrogram have been proposed since 2015, contrasting with a long history of focusing on larger values of the spectrogram. Yet, unlike neighboring fields like optimization and machine learning, time-frequency signal processing lacks widely-adopted benchmarking tools. In this work, we contribute an open-source, Python-based toolbox termed MCSM-Benchs for benchmarking multi-component signal analysis methods, and we demonstrate our toolbox on three time-frequency benchmarks. First, we compare different methods for signal detection based on the zeros of the spectrogram, including unexplored variations of previously proposed detection tests. Second, we compare zero-based denoising methods to both classical and novel methods based on large values and ridges of the spectrogram. Finally, we compare the denoising performance of these methods against typical spectrogram thresholding strategies, in terms of post-processing artifacts commonly referred to as musical noise. At a low level, the obtained results provide new insight into the assessed approaches and, in particular, point to research directions for further developing zero-based methods. At a higher level, our benchmarks exemplify the benefits of using a public, collaborative, common framework for benchmarking.
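To illustrate the central object (the zeros of the spectrogram), here is a short, self-contained SciPy sketch that locates candidate zeros as strict local minima of the spectrogram on the STFT grid. It is independent of MCSM-Benchs; the signal, window, and grid parameters are arbitrary choices for demonstration.

```python
# Minimal sketch (independent of MCSM-Benchs): locate spectrogram zeros as
# local minima of the squared-magnitude STFT, a common proxy for the true zeros.
import numpy as np
from scipy.signal import stft
from scipy.ndimage import minimum_filter

fs = 1024
t = np.arange(2 * fs) / fs
x = np.cos(2 * np.pi * (100 * t + 50 * t**2)) + 0.5 * np.random.randn(t.size)  # chirp + noise

f, tt, Zxx = stft(x, fs=fs, window="hann", nperseg=128, noverlap=96)
S = np.abs(Zxx) ** 2                                   # spectrogram

# A point is a candidate zero if it is the minimum of its 3x3 neighbourhood.
is_min = S == minimum_filter(S, size=3, mode="nearest")
zero_freqs, zero_times = np.nonzero(is_min)
print(f"{zero_freqs.size} candidate zeros on a {S.shape} time-frequency grid")
```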

We show that a previously introduced key exchange based on a congruence-simple semiring action is not secure by providing an attack that reveals the shared key from the distributed public information for any such semiring.

We investigate pointwise estimation of the function-valued velocity field of a second-order linear SPDE. Based on multiple spatially localised measurements, we construct a weighted augmented MLE and study its convergence properties as the spatial resolution of the observations tends to zero and the number of measurements increases. By imposing Hölder smoothness conditions, we recover the pointwise convergence rate known to be minimax-optimal in the linear regression framework. The optimality of the rate in the current setting is verified by adapting the lower bound ansatz based on the RKHS of local measurements to the nonparametric situation.

Artificial intelligence (AI) advances and the rapid adoption of generative AI tools like ChatGPT present new opportunities and challenges for higher education. While substantial literature discusses AI in higher education, there is a lack of a systemic approach that captures a holistic view of the AI transformation of higher education institutions (HEIs). To fill this gap, this article, taking a complex systems approach, develops a causal loop diagram (CLD) to map the causal feedback mechanisms of AI transformation in a typical HEI. Our model accounts for the forces that drive the AI transformation and for its consequences for value creation in a typical HEI. The article identifies and analyzes several reinforcing and balancing feedback loops, showing how, motivated by AI technology advances, the HEI invests in AI to improve student learning, research, and administration. The HEI must take measures to deal with academic integrity problems and adapt to changes in available jobs due to AI, emphasizing AI-complementary skills for its students. However, HEIs face a competitive threat and several policy traps that may lead to decline. HEI leaders need to become systems thinkers to manage the complexity of the AI transformation and benefit from the AI feedback loops while avoiding the associated pitfalls. We also discuss long-term scenarios, the notion of HEIs influencing the direction of AI, and directions for future research on AI transformation.

We adopt the integral definition of the fractional Laplace operator and study an optimal control problem on Lipschitz domains that involves a fractional elliptic partial differential equation (PDE) as state equation and a control variable that enters the state equation as a coefficient; pointwise constraints on the control variable are considered as well. We establish the existence of optimal solutions and analyze first-order optimality conditions as well as necessary and sufficient second-order optimality conditions. Regularity estimates for optimal variables are also analyzed. We develop two finite element discretization strategies: a semidiscrete scheme in which the control variable is not discretized, and a fully discrete scheme in which the control variable is discretized with piecewise constant functions. For both schemes, we analyze the convergence properties of the discretizations and derive error estimates.
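For reference, the integral (pointwise) definition of the fractional Laplacian referred to here is the standard one below; the normalizing constant is written generically rather than in closed form.

```latex
% Standard integral definition of the fractional Laplacian, s in (0,1):
\[
  (-\Delta)^s u(x)
  \;=\; C(n,s)\,\mathrm{p.v.}\!\int_{\mathbb{R}^n}
  \frac{u(x) - u(y)}{|x - y|^{\,n + 2s}} \, dy,
  \qquad s \in (0,1),
\]
% where C(n,s) > 0 is a normalising constant depending only on n and s.
```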

In large-scale systems, there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, which is itself difficult to scale. In this paper we present four algorithms to address these problems. The combination of these algorithms enables each agent to improve its task allocation strategy through reinforcement learning, while changing how much it explores the system in response to how optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, limiting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum within the system configurations considered. It provides 5x better performance recovery than no-knowledge-retention approaches when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
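As a generic illustration of the idea of learning an allocation strategy while adapting how much to explore (not the paper's four algorithms), the sketch below implements an epsilon-greedy allocator whose exploration rate shrinks when its value estimates stabilise. Agent names, rewards, and the adaptation rule are assumptions.

```python
# Generic sketch (not the paper's algorithms): epsilon-greedy allocation whose
# exploration rate adapts to how much the value estimates are still changing.
import random

class TaskAllocator:
    def __init__(self, agents, alpha=0.1, epsilon=0.5):
        self.q = {a: 0.0 for a in agents}       # estimated value of delegating to each agent
        self.alpha, self.epsilon = alpha, epsilon

    def choose(self):
        if random.random() < self.epsilon:                       # explore
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)                       # exploit best-known agent

    def update(self, agent, reward):
        old = self.q[agent]
        self.q[agent] = old + self.alpha * (reward - old)        # incremental value update
        surprise = abs(self.q[agent] - old)
        # Shrink exploration when estimates barely move; raise it after large surprises.
        self.epsilon = min(1.0, max(0.05, 0.9 * self.epsilon + 0.1 * surprise))

allocator = TaskAllocator(agents=["a1", "a2", "a3"])
for _ in range(100):
    agent = allocator.choose()
    reward = random.gauss({"a1": 0.2, "a2": 0.8, "a3": 0.5}[agent], 0.1)  # simulated outcome
    allocator.update(agent, reward)
print(allocator.q, allocator.epsilon)
```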
