
This work conducts a comprehensive exploration into the proficiency of OpenAI's ChatGPT-4 in sourcing scientific references within an array of research disciplines. Our in-depth analysis encompasses a wide scope of fields including Computer Science (CS), Mechanical Engineering (ME), Electrical Engineering (EE), Biomedical Engineering (BME), and Medicine, as well as their more specialized sub-domains. Our empirical findings indicate a significant variance in ChatGPT-4's performance across these disciplines. Notably, the validity rate of suggested articles in CS, BME, and Medicine surpasses 65%, whereas for ME and EE, none of the suggested articles could be verified as valid. Further, in the context of retrieving articles pertinent to niche research topics, ChatGPT-4 tends to yield references that align with the broader thematic areas as opposed to the narrowly defined topics of interest. This observed disparity underscores the pronounced variability in accuracy across diverse research fields, indicating the potential requirement for model refinement to enhance its functionality in academic research. Our investigation offers valuable insights into the current capacities and limitations of AI-powered tools in scholarly research, thereby emphasizing the indispensable role of human oversight and rigorous validation in leveraging such models for academic pursuits.
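One way to operationalize "validity of a suggested reference" is to check whether the citation resolves to a real bibliographic record. The sketch below is purely illustrative and not the paper's protocol: it queries the public Crossref API for a suggested title and applies an assumed string-similarity threshold.

```python
# Illustrative sketch (not the study's verification protocol): check whether a
# model-suggested reference can be matched to a real record via the Crossref API.
# The similarity threshold and title-matching heuristic are assumptions.
import requests
from difflib import SequenceMatcher

def crossref_match(title: str, threshold: float = 0.9) -> bool:
    """Return True if Crossref has a work whose title closely matches `title`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        candidate = (item.get("title") or [""])[0]
        if SequenceMatcher(None, title.lower(), candidate.lower()).ratio() >= threshold:
            return True
    return False

# A suggested citation counts as "valid" only if it resolves to a matching record.
print(crossref_match("Attention Is All You Need"))
```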

Related content

Engineering (《工程》) is an international open-access journal launched by the Chinese Academy of Engineering (CAE) in 2015. Its aim is to provide a high-level platform for disseminating and sharing frontier advances, major current research results, and key achievements in engineering R&D; to report progress in the engineering sciences; to discuss hot topics, areas of interest, challenges, and prospects in engineering development; to take human and environmental well-being and ethics into account in engineering; and to encourage engineering breakthroughs and innovations of far-reaching economic and social significance, so that they reach an internationally advanced level, become a new productive force, change the world, benefit humanity, and create a new future.
August 8, 2023

We study a generalization of the classic Spanning Tree problem that allows for a non-uniform failure model. More precisely, edges are either \emph{safe} or \emph{unsafe} and we assume that failures only affect unsafe edges. In Unweighted Flexible Graph Connectivity we are given an undirected graph $G = (V,E)$ in which the edge set $E$ is partitioned into a set $S$ of safe edges and a set $U$ of unsafe edges and the task is to find a set $T$ of at most $k$ edges such that $T - \{u\}$ is connected and spans $V$ for any unsafe edge $u \in T$. Unweighted Flexible Graph Connectivity generalizes both Spanning Tree and Hamiltonian Cycle. We study Unweighted Flexible Graph Connectivity in terms of fixed-parameter tractability (FPT). We show an almost complete dichotomy on which parameters lead to fixed-parameter tractability and which lead to hardness. To this end, we obtain FPT-time algorithms with respect to the vertex deletion distance to cluster graphs and with respect to the treewidth. By exploiting the close relationship to Hamiltonian Cycle, we show that FPT-time algorithms for many smaller parameters are unlikely under standard parameterized complexity assumptions. Regarding problem-specific parameters, we observe that Unweighted Flexible Graph Connectivity admits an FPT-time algorithm when parameterized by the number of unsafe edges. Furthermore, we investigate a below-upper-bound parameter for the number of edges of a solution. We show that this parameter also leads to an FPT-time algorithm.
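To make the problem definition concrete, the following minimal verifier (not one of the paper's FPT algorithms) checks a candidate solution: $T$ must have at most $k$ edges, $(V, T)$ must connect all of $V$, and removing any single unsafe edge of $T$ must leave it connected. Vertex labels $0, \dots, n-1$ are an assumption of this sketch.

```python
# Minimal feasibility check for Unweighted Flexible Graph Connectivity, as defined above.
def connected(n, edges):
    """Union-find check that `edges` connect all n vertices labelled 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in range(n)}) == 1

def feasible(n, T, unsafe, k):
    """T is feasible iff |T| <= k, (V, T) spans and connects V,
    and (V, T - {u}) stays connected for every unsafe edge u in T."""
    T = list(T)
    if len(T) > k or not connected(n, T):
        return False
    return all(connected(n, [e for e in T if e != u]) for u in T if u in unsafe)

# Example: a triangle of unsafe edges tolerates the failure of any single one.
T = [(0, 1), (1, 2), (2, 0)]
print(feasible(3, T, unsafe=set(T), k=3))  # True
```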

Online Food Recommendation Service (OFRS) has remarkable spatiotemporal characteristics and the advantage of being able to conveniently satisfy users' needs in a timely manner. A variety of studies have begun to explore its spatiotemporal properties, but a comprehensive and in-depth analysis of the OFRS spatiotemporal features is yet to be conducted. Therefore, this paper studies OFRS through three questions: how spatiotemporal features play a role; why self-attention cannot be used to model the spatiotemporal sequences of OFRS; and how to combine spatiotemporal features to improve the efficiency of OFRS. Firstly, through experimental analysis, we systematically extracted the spatiotemporal features of OFRS, identified the most valuable features, and designed an effective combination method. Secondly, we conducted a detailed analysis of the spatiotemporal sequences, which revealed the shortcomings of self-attention in OFRS, and proposed a spatiotemporal sequence modeling method better suited to replacing self-attention. In addition, we designed a Dynamic Context Adaptation Model to further improve the efficiency and performance of OFRS. Through offline experiments on two large datasets and a week-long online experiment, the feasibility and superiority of our model were demonstrated.
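For intuition only, the sketch below shows one simple way to exploit spatiotemporal structure in a short order history without self-attention: weighting past orders by recency and geographic proximity to the current request. This is an illustrative assumption, not the paper's Dynamic Context Adaptation Model; the decay constants and feature layout are placeholders.

```python
# Purely illustrative alternative to self-attention over a short order history:
# a fixed spatiotemporal decay over past orders. Constants below are assumptions.
import numpy as np

def decay_weights(time_gaps_h, dist_km, tau_t=6.0, tau_d=2.0):
    """Weights that decay with the time gap (hours) and distance (km) of past orders."""
    w = np.exp(-np.asarray(time_gaps_h) / tau_t) * np.exp(-np.asarray(dist_km) / tau_d)
    return w / w.sum()

def aggregate(order_embeddings, time_gaps_h, dist_km):
    """Weighted average of past-order embeddings as the user's context vector."""
    w = decay_weights(time_gaps_h, dist_km)
    return (w[:, None] * np.asarray(order_embeddings)).sum(axis=0)

ctx = aggregate(np.random.randn(5, 16), [1, 3, 12, 24, 48], [0.5, 1.0, 4.0, 0.8, 6.0])
print(ctx.shape)  # (16,)
```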

Retrofitting and thermographic survey (TS) companies in Scotland collaborate with social housing providers to tackle fuel poverty. They employ ground-level infrared (IR) camera-based TSs (GIRTSs) for collecting thermal images to identify the heat loss sources resulting from poor insulation. However, this identification process is labor-intensive and time-consuming, necessitating extensive data processing. To automate this, an AI-driven approach is necessary. Therefore, this study proposes a deep learning (DL)-based segmentation framework using the Mask Region Proposal Convolutional Neural Network (Mask RCNN) to validate its applicability to these thermal images. The objective of the framework is to automatically identify and crop heat loss sources caused by weak insulation, while also eliminating obstructive objects present in those images. By doing so, it minimizes labor-intensive tasks and provides an automated, consistent, and reliable solution. To validate the proposed framework, approximately 2500 thermal images were collected in collaboration with an industrial TS partner. Then, 1800 representative images were carefully selected with the assistance of experts and annotated to highlight the target objects (TO) to form the final dataset. Subsequently, a transfer learning strategy was employed to train on the dataset, progressively augmenting the training data volume and fine-tuning the pre-trained baseline Mask RCNN. As a result, the final fine-tuned model achieved a mean average precision (mAP) score of 77.2% for segmenting the TO, demonstrating the significant potential of the proposed framework in accurately quantifying energy loss in Scottish homes.
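A hedged sketch of this kind of transfer-learning setup, assuming a recent torchvision release, is shown below: a COCO-pretrained Mask R-CNN has its box and mask heads replaced for the thermal-image classes. The class count and hidden size are placeholders, not values from the study.

```python
# Sketch of fine-tuning a pre-trained Mask R-CNN for new segmentation classes.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_model(num_classes: int):
    # Baseline Mask R-CNN pre-trained on COCO.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box head for the target classes (background + heat-loss sources, etc.).
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # Replace the mask head as well.
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

model = build_model(num_classes=3)  # placeholder class count
```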

We propose a study of the constitution of meaning in human-computer interaction based on Turing's and Wittgenstein's definitions of thought, understanding, and decision. Through a comparative analysis of the conceptual similarities and differences between the two authors, we show that the common sense shared by humans and machines is co-constituted in and through action, and that it is precisely in this co-constitution that the social value of their interaction lies. This involves problematizing human-machine interaction around the question of what it means to "follow a rule", in order to define and distinguish the interpretative modes and decision-making behaviors of each. We conclude that the mutualization of signs that takes place through the human-machine dialogue is at the foundation of the constitution of a computerized society.

The combination of Visual Guidance and Extended Reality (XR) technology holds the potential to greatly improve the performance of human workforces in numerous areas, particularly industrial environments. Focusing on virtual assembly tasks and making use of different forms of supportive visualisations, this study investigates the potential of XR Visual Guidance. Set in a web-based immersive environment, our results draw from a heterogeneous pool of 199 participants. This research is designed to significantly differ from previous exploratory studies, which yielded conflicting results on user performance and associated human factors. Our results clearly show the advantages of XR Visual Guidance based on an over 50\% reduction in task completion times and mistakes made; this may further be enhanced and refined using specific frameworks and other forms of visualisations/Visual Guidance. Discussing the role of other factors, such as cognitive load, motivation, and usability, this paper also seeks to provide concrete avenues for future research and practical takeaways for practitioners.

This thesis delves into the intricate world of Deep Neural Networks (DNNs), focusing on the Lottery Ticket Hypothesis (LTH). The LTH posits that within extensive DNNs, smaller trainable subnetworks, termed "winning tickets", can achieve performance comparable to the full model. A key process in the LTH, Iterative Magnitude Pruning (IMP), incrementally eliminates the smallest-magnitude weights, emulating stepwise learning in DNNs. Once we identify these winning tickets, we further investigate their "universality": in other words, we check whether a winning ticket that works well for one specific problem also works well for other, similar problems. We also bridge the divide between IMP and Renormalisation Group (RG) theory in physics, promoting a more rigorous understanding of IMP.
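A minimal sketch of the IMP loop described above is shown below: train, prune a fraction of the smallest-magnitude surviving weights per layer, rewind the survivors to their initial values, and repeat. The user-supplied `train_fn` (which must apply the masks during training) and the pruning fraction are assumptions of this sketch, not details from the thesis.

```python
# Iterative Magnitude Pruning (IMP) sketch with rewinding to initialization.
import copy
import torch

def imp(model, train_fn, rounds=5, prune_frac=0.2):
    init_state = copy.deepcopy(model.state_dict())            # weights at initialization
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}
    for _ in range(rounds):
        train_fn(model, masks)                                 # train with masks applied
        for name, p in model.named_parameters():
            if name not in masks:
                continue
            alive = p.detach().abs()[masks[name].bool()]       # currently surviving weights
            if alive.numel() == 0:
                continue
            threshold = torch.quantile(alive, prune_frac)      # prune smallest survivors
            masks[name] *= (p.detach().abs() > threshold).float()
        model.load_state_dict(init_state)                      # rewind to initialization
        for name, p in model.named_parameters():               # re-apply the current mask
            if name in masks:
                p.data *= masks[name]
    return masks                                               # the "winning ticket" mask
```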

Variational inference has recently emerged as a popular alternative to the classical Markov chain Monte Carlo (MCMC) in large-scale Bayesian inference. The core idea is to trade statistical accuracy for computational efficiency. In this work, we study these statistical and computational trade-offs in variational inference via a case study in inferential model selection. Focusing on Gaussian inferential models (or variational approximating families) with diagonal plus low-rank precision matrices, we initiate a theoretical study of the trade-offs in two aspects, Bayesian posterior inference error and frequentist uncertainty quantification error. From the Bayesian posterior inference perspective, we characterize the error of the variational posterior relative to the exact posterior. We prove that, given a fixed computation budget, a lower-rank inferential model produces variational posteriors with a higher statistical approximation error, but a lower computational error; it reduces variance in stochastic optimization and, in turn, accelerates convergence. From the frequentist uncertainty quantification perspective, we consider the precision matrix of the variational posterior as an uncertainty estimate, which involves an additional statistical error originating from the sampling uncertainty of the data. As a consequence, for small datasets, the inferential model need not be full-rank to achieve optimal estimation error (even with unlimited computation budget).
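As a concrete instance of the inferential family discussed above, a Gaussian variational approximation with diagonal-plus-low-rank precision can be written as follows; the notation ($\mu$, $D$, $B$, rank $r$) is chosen here for illustration and need not match the paper's.

```latex
% Gaussian inferential family with a diagonal-plus-low-rank precision matrix:
% the rank r is the knob trading statistical approximation error against
% per-iteration computational cost.
\[
  q_{\mu, D, B}(\theta)
  = \mathcal{N}\!\left(\theta \,\middle|\, \mu,\ \bigl(D + B B^{\top}\bigr)^{-1}\right),
  \qquad
  D = \operatorname{diag}(d_1, \dots, d_p) \succ 0,
  \quad
  B \in \mathbb{R}^{p \times r},\ r \le p .
\]
```

Setting $r = 0$ gives a mean-field (diagonal-precision) approximation, while $r = p$ is rich enough to effectively recover a full-covariance Gaussian; intermediate ranks interpolate between the two, which is exactly the dial the statistical-computational trade-off analysis turns.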

Numerical methods for Inverse Kinematics (IK) employ iterative, linear approximations of the IK until the end-effector is brought from its initial pose to the desired final pose. These methods require computing the Jacobian of the Forward Kinematics (FK) and its inverse in the linear approximation of the IK. Despite all the successful implementations reported in the literature, Jacobian-based IK methods can still fail to preserve certain useful properties if an improper matrix inverse, e.g. the Moore-Penrose (MP) inverse, is employed for incommensurate robotic systems. In this paper, we propose a systematic, robust and accurate numerical solution for the IK problem using the Mixed (MX) Generalized Inverse (GI) applied to any type of Jacobian (e.g., analytical, numerical or geometric) derived for any commensurate or incommensurate robot. This approach is robust whether the system is under-determined (fewer than 6 DoF) or over-determined (more than 6 DoF). We investigate six robotic manipulators with various Degrees of Freedom (DoF) to demonstrate that commonly used GIs fail to guarantee the same system behavior when the units are varied for incommensurate robotic manipulators. In addition, we evaluate the proposed methodology as a global IK solver and compare it against well-known IK methods for redundant manipulators. Based on the experimental results, we conclude that the right choice of GI is crucial for preserving certain properties of the system (i.e., unit-consistency).
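For reference, the sketch below shows the generic Jacobian-based IK iteration that the paper builds on, using the standard Moore-Penrose pseudoinverse, i.e. the baseline whose unit-sensitivity the paper critiques; the Mixed Generalized Inverse itself is not reproduced here. The `fk_fn` and `jacobian_fn` callables and the step size are assumptions of this sketch.

```python
# Generic linearized IK iteration: q <- q + step * J^+ * task-space error.
import numpy as np

def ik_step(q, pose_error, jacobian_fn, step=0.5):
    """One linearized IK update using the Moore-Penrose pseudoinverse."""
    J = jacobian_fn(q)                       # m x n Jacobian of forward kinematics at q
    dq = np.linalg.pinv(J) @ pose_error      # least-squares joint update
    return q + step * dq

def solve_ik(q0, target, fk_fn, jacobian_fn, tol=1e-6, max_iter=200):
    """Iterate linear approximations until the end-effector reaches the target pose."""
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        err = np.asarray(target) - fk_fn(q)  # task-space error
        if np.linalg.norm(err) < tol:
            break
        q = ik_step(q, err, jacobian_fn)
    return q
```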

We conducted ethnographic research with 31 misinformation creators and consumers in Brazil and the US before, during, and after a major election to understand the consumption and production of election and medical misinformation. This study contributes to research on misinformation ecosystems by focusing on poorly understood small players, or "micro-influencers", who create misinformation in peer-to-peer networks. We detail four key tactics that micro-influencers use. First, they typically disseminate misleading "gray area" content rather than falsifiable claims, using subtle aesthetic and rhetorical tactics to evade moderation. Second, they post in small, closed groups where members feel safe and predisposed to trust content. Third, they explicitly target misinformation consumers' emotional and social needs. Finally, they post a high volume of short, repetitive content to plant seeds of doubt and build trust in influencers as unofficial experts. We discuss the implications these micro-influencers have for misinformation interventions and platforms' efforts to moderate misinformation.

Graph Neural Networks (GNNs) have gained momentum in graph representation learning and boosted the state of the art in a variety of areas, such as data mining (\emph{e.g.,} social network analysis and recommender systems), computer vision (\emph{e.g.,} object detection and point cloud learning), and natural language processing (\emph{e.g.,} relation extraction and sequence learning), to name a few. With the emergence of Transformers in natural language processing and computer vision, graph Transformers embed a graph structure into the Transformer architecture to overcome the limitations of local neighborhood aggregation while avoiding strict structural inductive biases. In this paper, we present a comprehensive review of GNNs and graph Transformers in computer vision from a task-oriented perspective. Specifically, we divide their applications in computer vision into five categories according to the modality of input data, \emph{i.e.,} 2D natural images, videos, 3D data, vision + language, and medical images. In each category, we further divide the applications according to a set of vision tasks. Such a task-oriented taxonomy allows us to examine how each task is tackled by different GNN-based approaches and how well these approaches perform. Based on the necessary preliminaries, we provide the definitions and challenges of the tasks, in-depth coverage of the representative approaches, as well as discussions regarding insights, limitations, and future directions.
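As background for the local neighborhood aggregation that graph Transformers aim to go beyond, the sketch below shows one GCN-style layer, $H' = \sigma(\hat{A} H W)$ with $\hat{A}$ the symmetrically normalized adjacency with self-loops. It is purely illustrative and not tied to any particular model surveyed in the review.

```python
# One GCN-style message-passing layer: normalize, aggregate neighbors, transform, activate.
import numpy as np

def gcn_layer(adj, features, weight):
    """H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    a_hat = adj + np.eye(adj.shape[0])                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt            # symmetric normalization
    return np.maximum(a_norm @ features @ weight, 0.0)  # aggregate, transform, ReLU

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
h = gcn_layer(adj, np.random.randn(3, 8), np.random.randn(8, 4))
print(h.shape)  # (3, 4)
```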
