
Public managers lack feedback on the effectiveness of public investments, policies, and programs instituted to build and use research capacity. Numerous reports rank countries on global performance in innovation and competitiveness, but their highly globalized data do not distinguish country contributions from global ones. We suggest improving upon such global reports by removing globalized measures and combining a reliable set of national indicators into an index. We factor-analyze 14 variables for 172 countries from 2013 to 2021. Two factors emerge, one capturing raw or core research capacity and the other capturing the wider governance context. Analysis shows convergent validity within the two factors and divergent validity between them. Nations rank differently on capacity, on governance context, and on the product of the two. Ranks also vary as a function of the chosen aggregation method. Finally, as a test of the predictive validity of the capacity index, we implement a regression analysis predicting national citation strength. Policymakers and analysts may find stronger feedback from this approach to quantifying national research strength.
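
To make the construction concrete, here is a minimal, hypothetical sketch of the pipeline the abstract describes: factor-analyzing standardized national indicators into two factors and comparing additive versus multiplicative aggregation. The data, scaling choices, and variable names below are illustrative placeholders, not the study's 14 indicators or exact method.

```python
# Hypothetical sketch: factor-analyze national indicators and build a composite index.
# The data are random placeholders, not the study's indicators.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
countries = [f"country_{i}" for i in range(172)]
X = pd.DataFrame(rng.normal(size=(172, 14)), index=countries)  # placeholder indicators

# Extract two factors (capacity, governance context) from standardized indicators.
scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(
    StandardScaler().fit_transform(X))
scaled = MinMaxScaler().fit_transform(scores)      # rescale factor scores to [0, 1]
capacity, governance = scaled[:, 0], scaled[:, 1]

# Rankings differ depending on the aggregation rule chosen.
ranking = pd.DataFrame(
    {"capacity": capacity, "governance": governance,
     "additive": capacity + governance, "product": capacity * governance},
    index=countries).rank(ascending=False)
print(ranking.head())
```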

Related content

This study employs counterfactual explanations to explore "what if?" scenarios in medical research, with the aim of expanding our understanding beyond existing boundaries. Specifically, we focus on utilizing MRI features for diagnosing pediatric posterior fossa brain tumors as a case study. The field of artificial intelligence and explainability has witnessed a growing number of studies and increasing scholarly interest. However, the lack of human-friendly interpretations in explaining the outcomes of machine learning algorithms has significantly hindered the acceptance of these methods by clinicians in their clinical practice. To address this, our approach incorporates counterfactual explanations, providing a novel way to examine alternative decision-making scenarios. These explanations offer personalized and context-specific insights, enabling the validation of predictions and clarification of variations under diverse circumstances. Importantly, our approach maintains both statistical and clinical fidelity, allowing for the examination of distinct tumor features through alternative realities. Additionally, we explore the potential use of counterfactuals for data augmentation and evaluate their feasibility as an alternative approach in medical research. The results demonstrate the promising potential of counterfactual explanations to enhance trust and acceptance of AI-driven methods in clinical settings.
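
As a rough illustration of what a counterfactual explanation involves, the sketch below searches for a minimal perturbation of a single instance that flips a classifier's prediction. The synthetic "MRI-like" features, the logistic model, and the distance-plus-likelihood objective are assumptions for illustration only, not the study's clinical data or method.

```python
# Minimal counterfactual search on synthetic tabular features (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))                       # placeholder imaging-derived features
y = (X @ np.array([1.5, -1.0, 0.8, 0.2]) + rng.normal(scale=0.5, size=300) > 0).astype(int)
clf = LogisticRegression().fit(X, y)

x0 = X[0]                                           # instance to explain
target = 1 - clf.predict(x0.reshape(1, -1))[0]      # desired (flipped) class

def objective(x):
    # Stay close to the original instance while pushing the model toward the target class.
    closeness = np.sum((x - x0) ** 2)
    p_target = clf.predict_proba(x.reshape(1, -1))[0, target]
    return closeness - 5.0 * np.log(p_target + 1e-9)

counterfactual = minimize(objective, x0, method="Nelder-Mead").x
print("original prediction:      ", clf.predict(x0.reshape(1, -1))[0])
print("counterfactual prediction:", clf.predict(counterfactual.reshape(1, -1))[0])
print("feature changes:          ", counterfactual - x0)
```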

Collecting traffic volume data is a vital but costly part of transportation engineering and urban planning. In recent years, efforts have been made to estimate traffic volumes using passively collected probe data that contain spatiotemporal information. However, the feasibility and underlying principles of traffic volume estimation based on probe data without pseudonyms have not been examined thoroughly. In this paper, we present the exact distribution of the estimated probe traffic volume passing through a road segment based on probe point data without trajectory reconstruction. The distribution of the estimated probe traffic volume can exhibit multimodality, without necessarily being line-symmetric with respect to the actual probe traffic volume. As more probes are present, the distribution approaches a normal distribution. The agreement of the derived distribution with observation was demonstrated through numerical and microscopic traffic simulations. Theoretically, with a well-calibrated probe penetration rate, traffic volumes in a road segment can be estimated from probe point data with high precision even at a low probe penetration rate. Furthermore, there is sometimes a local optimum cordon length that maximises estimation precision. The theoretical variance of the estimated probe traffic volume can address heteroscedasticity in the modelling of traffic volume estimates.
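
For intuition only, a simplified sketch follows: if each vehicle is independently a probe with penetration rate p, the observed probe count m on a segment is binomial, giving a point estimate m / p and a count-dependent (heteroscedastic) variance. This is a stylized approximation, not the exact distribution derived in the paper.

```python
# Stylized probe-based volume estimation: m ~ Binomial(N, p), N_hat = m / p,
# with plug-in variance m * (1 - p) / p**2 (illustrative, not the paper's derivation).
import numpy as np

rng = np.random.default_rng(2)
p = 0.05                      # assumed, well-calibrated probe penetration rate
N_true = 1200                 # actual traffic volume on the segment (unknown in practice)

m = rng.binomial(N_true, p, size=10_000)     # simulated probe counts across repetitions
N_hat = m / p                                # point estimates of traffic volume
var_theoretical = m * (1 - p) / p**2         # heteroscedastic variance of each estimate

print("mean estimate:            ", N_hat.mean())          # close to N_true
print("empirical std:            ", N_hat.std())
print("theoretical std (average):", np.sqrt(var_theoretical).mean())
```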

This study compares the performance of a causal and a predictive model in modeling travel mode choice in three neighborhoods in Chicago. A causal discovery algorithm and a causal inference technique were used, directly on observational data, to extract the causal relationships in the mode choice decision-making process and to estimate the quantitative causal effects between the variables. The model results reveal that trip distance and vehicle ownership are the direct causes of mode choice in the three neighborhoods. Artificial neural network models were estimated to predict mode choice. Their accuracy exceeded 70%, and SHAP values were used to measure the importance of each variable. We find that both the causal and predictive modeling approaches are useful for the purposes they serve. We also note that the study of mode choice behavior through causal modeling is mostly unexplored, yet it could transform our understanding of mode choice behavior. Further research is needed to realize the full potential of these techniques in modeling mode choice.
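
The sketch below illustrates only the predictive side described in the abstract: a small neural network classifier with SHAP-based variable importance. The variables, data, and model settings are invented placeholders rather than the Chicago data or the causal models used in the study; it assumes the scikit-learn and shap packages are available.

```python
# Predictive mode-choice sketch with SHAP importances (synthetic placeholder data).
import numpy as np
import pandas as pd
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = pd.DataFrame({
    "trip_distance_km": rng.gamma(2.0, 3.0, 500),
    "vehicle_ownership": rng.integers(0, 3, 500),
    "age": rng.integers(18, 80, 500),
})
# Synthetic rule: long trips plus car ownership push choices toward driving (1).
y = ((X["trip_distance_km"] > 5) & (X["vehicle_ownership"] > 0)).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", model.score(X, y))

# Kernel SHAP on a small background sample; mean |SHAP| serves as variable importance.
explainer = shap.KernelExplainer(lambda a: model.predict_proba(a)[:, 1], X.iloc[:50])
shap_values = explainer.shap_values(X.iloc[:100], nsamples=100)
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns))
```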

With the increased deployment of machine learning models in various real-world applications, researchers and practitioners alike have emphasized the need for explanations of model behaviour. To this end, two broad strategies have been outlined in prior literature to explain models. Post hoc explanation methods explain the behaviour of complex black-box models by highlighting features that are critical to model predictions; however, prior work has shown that these explanations may not be faithful, and even more concerning is our inability to verify them. Specifically, it is nontrivial to evaluate if a given attribution is correct with respect to the underlying model. Inherently interpretable models, on the other hand, circumvent these issues by explicitly encoding explanations into model architecture, meaning their explanations are naturally faithful and verifiable, but they often exhibit poor predictive performance due to their limited expressive power. In this work, we aim to bridge the gap between the aforementioned strategies by proposing Verifiability Tuning (VerT), a method that transforms black-box models into models that naturally yield faithful and verifiable feature attributions. We begin by introducing a formal theoretical framework to understand verifiability and show that attributions produced by standard models cannot be verified. We then leverage this framework to propose a method to build verifiable models and feature attributions out of fully trained black-box models. Finally, we perform extensive experiments on semi-synthetic and real-world datasets, and show that VerT produces models that (1) yield explanations that are correct and verifiable and (2) are faithful to the original black-box models they are meant to explain.
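
As a toy illustration of what "verifying" an attribution could mean, the sketch below keeps only the claimed important features, masks the rest with a baseline value, and checks whether the prediction is preserved. This masking test is a simplified stand-in inspired by the abstract, not the VerT procedure or the paper's formal framework.

```python
# Toy verification check for a feature attribution (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 2 * X[:, 3] > 0).astype(int)          # only features 0 and 3 matter
model = RandomForestClassifier(random_state=0).fit(X, y)

def is_attribution_verified(x, important_idx, baseline=0.0):
    """Prediction should survive masking every feature outside the attribution set."""
    masked = np.full_like(x, baseline)
    masked[important_idx] = x[important_idx]
    return model.predict(x.reshape(1, -1))[0] == model.predict(masked.reshape(1, -1))[0]

x = X[0]
print("claimed attribution {0, 3}:", is_attribution_verified(x, [0, 3]))
print("spurious attribution {5, 7}:", is_attribution_verified(x, [5, 7]))
```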

The growing adoption of the Internet of Things (IoT) has brought a significant increase in attacks targeting those devices. Machine learning (ML) methods have shown promising results for intrusion detection; however, the scarcity of IoT datasets remains a limiting factor in developing ML-based security systems for IoT scenarios. Static datasets get outdated due to evolving IoT architectures and threat landscape; meanwhile, the testbeds used to generate them are rarely published. This paper presents the Gotham testbed, a reproducible and flexible security testbed extendable to accommodate new emulated devices, services or attackers. Gotham is used to build an IoT scenario composed of 100 emulated devices communicating via MQTT, CoAP and RTSP protocols, among others, in a topology composed of 30 switches and 10 routers. The scenario presents three threat actors, including the entire Mirai botnet lifecycle and additional red-teaming tools performing DoS, scanning, and attacks targeting IoT protocols. The testbed has many purposes, including a cyber range, testing security solutions, and capturing network and application data to generate datasets. We hope that researchers can leverage and adapt Gotham to include other devices, state-of-the-art attacks and topologies to share scenarios and datasets that reflect the current IoT settings and threat landscape.
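
For flavor, here is a hedged sketch of the kind of emulated device such a testbed scenario might run: a fake sensor publishing readings over MQTT. The broker address, topic, and client settings are placeholders and the snippet is not taken from the Gotham codebase; it assumes the paho-mqtt client library (version 1.x; version 2.x additionally requires a CallbackAPIVersion argument to Client()).

```python
# Placeholder emulated IoT sensor publishing fake temperature readings over MQTT.
import json
import random
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "192.0.2.10"        # placeholder broker address inside an emulated topology
TOPIC = "gotham/device/temperature"

client = mqtt.Client(client_id="emulated-sensor-01")
client.connect(BROKER_HOST, 1883)
client.loop_start()

try:
    while True:
        reading = {"sensor": "emulated-sensor-01", "temp_c": round(random.gauss(21.0, 0.5), 2)}
        client.publish(TOPIC, json.dumps(reading), qos=1)
        time.sleep(5)
except KeyboardInterrupt:
    client.loop_stop()
    client.disconnect()
```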

Towards safe autonomous driving (AD), we consider the problem of learning models that accurately capture the diversity and tail quantiles of human driver behavior probability distributions, in interaction with an AD vehicle. Such models, which predict drivers' continuous actions from their states, are particularly relevant for closing the gap between AD agent simulations and reality. To this end, we adapt two flexible quantile learning frameworks for this setting that avoid strong distributional assumptions: (1) quantile regression (based on the tilted absolute loss), and (2) autoregressive quantile flows (a version of normalizing flows). Training happens in a behavior-cloning fashion. We use the highD dataset consisting of driver trajectories on several highways. We evaluate our approach in a one-step acceleration prediction task and in multi-step driver simulation rollouts. We report quantitative results using the tilted absolute loss as a metric, give qualitative examples showing that realistic extremal behavior can be learned, and discuss the main insights.
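
To make the loss concrete, the sketch below defines the tilted absolute (pinball) loss and fits models for several quantiles on synthetic acceleration data. The features and gradient-boosted models are stand-ins for illustration; the paper's models are quantile regression networks and autoregressive quantile flows trained on highD.

```python
# Tilted absolute (pinball) loss and quantile fits on synthetic data (illustrative).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def tilted_absolute_loss(y_true, y_pred, tau):
    """Pinball loss: weight tau on under-prediction, 1 - tau on over-prediction."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 3))                       # placeholder state features (e.g. speed, gap)
y = 0.5 * X[:, 1] - 0.3 * X[:, 0] + rng.normal(scale=0.4, size=2000)   # placeholder acceleration

for tau in (0.05, 0.5, 0.95):
    model = GradientBoostingRegressor(loss="quantile", alpha=tau, random_state=0).fit(X, y)
    print(f"tau={tau:.2f}  pinball loss={tilted_absolute_loss(y, model.predict(X), tau):.4f}")
```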

Large language models of artificial intelligence (AI), such as ChatGPT, find remarkable but controversial applicability in science and research. This paper reviews the epistemological challenges and the ethical and integrity risks in the conduct of science with the advent of generative AI, with the aim of laying timely foundations for high-quality research ethics review. The role of AI language models as a research instrument and subject is scrutinized along with the ethical implications for scientists, participants and reviewers. New emerging practices for research ethics review are discussed, concluding with ten recommendations that shape a response for more responsible research conduct in the era of AI.

Autonomous driving has achieved a significant milestone in research and development over the last decade. There is increasing interest in the field as the deployment of self-operating vehicles on roads promises safer and more ecologically friendly transportation systems. With the rise of computationally powerful artificial intelligence (AI) techniques, autonomous vehicles can sense their environment with high precision, make safe real-time decisions, and operate more reliably without human interventions. However, intelligent decision-making in autonomous cars is not generally understandable by humans in the current state of the art, and such deficiency hinders this technology from being socially acceptable. Hence, aside from making safe real-time decisions, the AI systems of autonomous vehicles also need to explain how these decisions are constructed in order to be regulatory compliant across many jurisdictions. Our study sheds comprehensive light on developing explainable artificial intelligence (XAI) approaches for autonomous vehicles. In particular, we make the following contributions. First, we provide a thorough overview of the present gaps with respect to explanations in the state-of-the-art autonomous vehicle industry. We then show the taxonomy of explanations and explanation receivers in this field. Third, we propose a framework for an architecture of end-to-end autonomous driving systems and justify the role of XAI in both debugging and regulating such systems. Finally, as future research directions, we provide a field guide on XAI approaches for autonomous driving that can improve operational safety and transparency towards achieving public approval by regulators, manufacturers, and all engaged stakeholders.

Meta-learning has gained wide popularity as a training framework that is more data-efficient than traditional machine learning methods. However, its generalization ability in complex task distributions, such as multimodal tasks, has not been thoroughly studied. Recently, some studies on multimodality-based meta-learning have emerged. This survey provides a comprehensive overview of the multimodality-based meta-learning landscape in terms of the methodologies and applications. We first formalize the definition of meta-learning and multimodality, along with the research challenges in this growing field, such as how to enrich the input in few-shot or zero-shot scenarios and how to generalize the models to new tasks. We then propose a new taxonomy to systematically discuss typical meta-learning algorithms combined with multimodal tasks. We investigate the contributions of related papers and summarize them by our taxonomy. Finally, we propose potential research directions for this promising field.

Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference. We study the impact of model size in this setting, focusing on Transformer models for NLP tasks that are limited by compute: self-supervised pretraining and high-resource machine translation. We first show that even though smaller Transformer models execute faster per iteration, wider and deeper models converge in significantly fewer steps. Moreover, this acceleration in convergence typically outpaces the additional computational overhead of using larger models. Therefore, the most compute-efficient training strategy is to counterintuitively train extremely large models but stop after a small number of iterations. This leads to an apparent trade-off between the training efficiency of large Transformer models and the inference efficiency of small Transformer models. However, we show that large models are more robust to compression techniques such as quantization and pruning than small models. Consequently, one can get the best of both worlds: heavily compressed, large models achieve higher accuracy than lightly compressed, small models.
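
As a rough sketch of the "train large, then compress" recipe, the snippet below applies PyTorch's built-in magnitude pruning and dynamic int8 quantization to a toy Transformer encoder. Model sizes, pruning ratio, and quantization choices are illustrative assumptions, not the paper's experimental configuration.

```python
# Toy "train large, then compress" pipeline: prune then dynamically quantize.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# "Large" model: wider/deeper than strictly needed, assumed trained for few iterations.
large_model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, dim_feedforward=2048, batch_first=True),
    num_layers=6,
)

# 1) Prune 50% of the smallest-magnitude weights in every linear layer.
for module in large_model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")            # make the pruning permanent

# 2) Quantize the plain linear layers to int8 for inference.
compressed = torch.quantization.quantize_dynamic(large_model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(2, 16, 512)                       # (batch, sequence, d_model)
with torch.no_grad():
    out = compressed(x)
print(out.shape)                                  # same interface, smaller and faster model
```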
