
[Background] Hackathons are gaining increasing prominence in Software Engineering (SE) education, lauded for their ability to elevate students' skill sets. [Objective] This paper investigates whether hackathons can impact the motivation of SE students. [Method] We conducted an evaluative case study assessing students' motivations before and after a hackathon, combining quantitative analysis using the Academic Motivation Scale (AMS) with qualitative coding of open-ended responses. [Results] Pre-hackathon findings reveal a diverse range of motivations and an overall acceptance of the event, while post-hackathon responses show no statistically significant shift in participants' perceptions. Qualitative findings uncovered themes related to networking, team dynamics, and skill development. From a practical perspective, these themes point to the potential of hackathons to impact participants' motivation. [Conclusion] While our study enhances the comprehension of hackathons as a motivational tool, it also underscores the need for further exploration of psychometric dimensions in SE educational research.
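
As an illustration of the pre/post comparison described above, the following is a minimal sketch, not the authors' analysis code, of a paired non-parametric test on per-participant AMS scores; the variable names, sample size, and the choice of the Wilcoxon signed-rank test are assumptions made for illustration.

```python
# Minimal sketch of a pre/post motivation comparison (illustrative only;
# variable names and the test choice are assumptions, not the study's code).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical per-participant AMS scores (Likert-derived means),
# measured before and after the hackathon for the same 30 participants.
pre_scores = rng.uniform(3.0, 6.0, size=30)
post_scores = pre_scores + rng.normal(0.0, 0.5, size=30)

# Paired, non-parametric test: appropriate when normality of the
# differences is doubtful, as is common with Likert-derived scores.
stat, p_value = wilcoxon(pre_scores, post_scores)
print(f"Wilcoxon W = {stat:.1f}, p = {p_value:.3f}")
# A p-value above the chosen alpha (e.g., 0.05) would be consistent with
# the paper's finding of no statistically significant pre/post shift.
```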

Related content

The amount of information in the Boolean satisfiability problem (SAT) is considered. SAT can be solved in polynomial time when the solving algorithm holds an exponential amount of information. It is also established that the Kolmogorov complexity of SAT is constant. It is argued that the amount of information in SAT grows at least exponentially with the size of the input instance. Finally, the amount of information in SAT is compared with the amount of information held in fixed-code algorithms and with the information generated at runtime.
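
The claim that SAT becomes polynomial-time solvable for an algorithm holding an exponential amount of information can be made concrete with a table-lookup "solver": if the answers for all formulas of interest are precomputed (an exponentially large advice table), each query reduces to a lookup. The sketch below is an illustrative toy, not the paper's construction.

```python
# Toy illustration: a "solver" that holds exponential information (a
# precomputed answer table) and answers each SAT query by lookup.
# Illustrative sketch only, not the paper's construction.
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Check satisfiability by enumerating all 2^n assignments."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# "Advice": precomputed answers for every formula we expect to see.
# Its size grows exponentially with the instance size.
known_instances = {
    # frozenset of clauses (each clause a tuple of signed variable indices)
    frozenset({(1, 2), (-1, 2)}): True,
    frozenset({(1,), (-1,)}): False,
}

def lookup_solver(clauses):
    """Fast per query, but only because the table already encodes the
    exponential work done up front."""
    return known_instances[frozenset(map(tuple, clauses))]

print(lookup_solver([(1, 2), (-1, 2)]))   # True
print(brute_force_sat([(1,), (-1,)], 1))  # False
```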

Inverse problems, particularly those governed by Partial Differential Equations (PDEs), are prevalent in various scientific and engineering applications, and uncertainty quantification (UQ) of solutions to these problems is essential for informed decision-making. This second part of a two-paper series builds upon the foundation set by the first part, which introduced CUQIpy, a Python software package for computational UQ in inverse problems using a Bayesian framework. In this paper, we extend CUQIpy's capabilities to solve PDE-based Bayesian inverse problems through a general framework that allows the integration of PDEs in CUQIpy, whether expressed natively or using third-party libraries such as FEniCS. CUQIpy offers concise syntax that closely matches mathematical expressions, streamlining the modeling process and enhancing the user experience. The versatility and applicability of CUQIpy to PDE-based Bayesian inverse problems are demonstrated on examples covering parabolic, elliptic and hyperbolic PDEs. This includes problems involving the heat and Poisson equations and application case studies in electrical impedance tomography and photo-acoustic tomography, showcasing the software's efficiency, consistency, and intuitive interface. This comprehensive approach to UQ in PDE-based inverse problems provides accessibility for non-experts and advanced features for experts.
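
To make the Bayesian UQ setting concrete, the following minimal sketch works through the simplest case, a linear forward model with Gaussian prior and noise, where the posterior is available in closed form. It is a self-contained NumPy illustration of the underlying mathematics, not CUQIpy code; the smoothing operator stands in for a discretized PDE solve.

```python
# Minimal NumPy sketch of Bayesian UQ for a linear inverse problem
# (illustrative only; this is not CUQIpy code). The smoothing operator A
# plays the role of a discretized PDE forward map.
import numpy as np

n = 50
x_grid = np.linspace(0, 1, n)

# Forward operator: Gaussian blur (stand-in for, e.g., a heat-equation solve).
A = np.exp(-0.5 * ((x_grid[:, None] - x_grid[None, :]) / 0.05) ** 2)
A /= A.sum(axis=1, keepdims=True)

# Synthetic data from a known truth, with additive Gaussian noise.
rng = np.random.default_rng(1)
x_true = np.sin(2 * np.pi * x_grid) ** 2
sigma = 0.01
y_obs = A @ x_true + sigma * rng.normal(size=n)

# Gaussian prior x ~ N(0, delta^2 I) and likelihood y | x ~ N(Ax, sigma^2 I)
# give a Gaussian posterior with closed-form mean and covariance:
#   Sigma_post = (A^T A / sigma^2 + I / delta^2)^{-1}
#   mu_post    = Sigma_post A^T y / sigma^2
delta = 0.5
Sigma_post = np.linalg.inv(A.T @ A / sigma**2 + np.eye(n) / delta**2)
mu_post = Sigma_post @ (A.T @ y_obs) / sigma**2

# Pointwise uncertainty: posterior standard deviation at each grid node.
post_std = np.sqrt(np.diag(Sigma_post))
print(mu_post[:3], post_std[:3])
```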

With the advancement of medicine, alternative exposures or interventions are emerging with respect to a common outcome, and there is a need to formally test the difference in the associations of multiple exposures with that outcome. We propose a duplication-method-based multivariate Wald test in Cox proportional hazards regression to test the difference in the associations of multiple exposures with the same outcome. The proposed method applies to linear or categorical exposures. To illustrate the method, we applied it to compare the associations between alignment with two different dietary patterns, modeled either as continuous or quartile exposures, and incident chronic disease, defined as a composite of CVD, cancer, and diabetes, in the Health Professionals Follow-up Study. Sample R code implementing the proposed approach is provided. The proposed duplication-method-based approach offers a flexible, formal statistical test of multiple exposures for a common outcome with minimal assumptions.
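
The paper provides its own R code; the sketch below gives one plausible Python rendering of the duplication idea using the lifelines package, on simulated data. The data layout, the contrast, and the attribute access (which follows recent lifelines versions) are assumptions, not the paper's implementation.

```python
# Hedged sketch of a duplication-method Wald test (one plausible reading;
# the paper's own implementation is in R and may differ in details).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 500

# Simulated cohort: two dietary-pattern scores, follow-up time, event flag.
df = pd.DataFrame({
    "id": np.arange(n),
    "diet_a": rng.normal(size=n),
    "diet_b": rng.normal(size=n),
    "time": rng.exponential(10, size=n),
    "event": rng.integers(0, 2, size=n),
})

# Duplication: each subject contributes two rows, one per exposure, with the
# "other" exposure zeroed out so each copy carries its own coefficient.
copy_a = df.assign(x_a=df["diet_a"], x_b=0.0)
copy_b = df.assign(x_a=0.0, x_b=df["diet_b"])
stacked = pd.concat([copy_a, copy_b], ignore_index=True)

# Cox model on the stacked data with a robust, cluster-by-subject variance,
# since the two rows per subject are not independent.
cph = CoxPHFitter()
cph.fit(stacked[["x_a", "x_b", "time", "event", "id"]],
        duration_col="time", event_col="event", cluster_col="id", robust=True)

# Wald test of H0: beta_a = beta_b using the contrast c = (1, -1).
beta = cph.params_[["x_a", "x_b"]].to_numpy()
V = cph.variance_matrix_.loc[["x_a", "x_b"], ["x_a", "x_b"]].to_numpy()
c = np.array([1.0, -1.0])
wald = (c @ beta) ** 2 / (c @ V @ c)
p_value = chi2.sf(wald, df=1)
print(f"Wald chi-square = {wald:.2f}, p = {p_value:.3f}")
```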

We present a general refinement of the Cauchy-Schwarz inequality over complete inner product spaces and show that it can be of interest for some statistical applications. This generalizes and simplifies previous results on the same subject.
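
For reference, the classical inequality being refined reads as follows in any inner product space; the refinement itself is the paper's contribution and is not reproduced here.

```latex
% Classical Cauchy-Schwarz inequality in an inner product space (H, <.,.>):
% for all x, y in H,
\[
  \lvert \langle x, y \rangle \rvert \le \lVert x \rVert \, \lVert y \rVert ,
\]
% with equality if and only if x and y are linearly dependent.
```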

The integration of generative Artificial Intelligence (AI) chatbots in higher education institutions (HEIs) is reshaping the educational landscape, offering opportunities for enhanced student support and greater administrative and research efficiency. This study explores the future implications of generative AI chatbots in HEIs, aiming to understand their potential impact on teaching, learning, and research processes. Utilizing a narrative literature review (NLR) methodology, this study synthesizes existing research on generative AI chatbots in higher education from diverse sources, including academic databases and scholarly publications. The findings highlight the transformative potential of generative AI chatbots in streamlining administrative tasks, enhancing student learning experiences, and supporting research activities. However, challenges such as academic integrity concerns, limitations in understanding user input, and resource allocation pose significant obstacles to the effective integration of generative AI chatbots in HEIs. This study underscores the importance of proactive measures to address ethical considerations, provide comprehensive training for stakeholders, and establish clear guidelines for the responsible use of generative AI chatbots in higher education. By navigating these challenges and leveraging the benefits of generative AI technologies, HEIs can harness the full potential of generative AI chatbots to create a more efficient, effective, inclusive, and innovative educational environment.

Widely present in the primary circuit of Nuclear Power Plants (NPP), Dissimilar Metal Welds (DMW) are inspected using ultrasonic nondestructive testing (UT) techniques to ensure the integrity of the structure and to detect defects such as Stress Corrosion Cracking (SCC). In a previous collaborative research program, CRIEPI and CEA worked on understanding the propagation of ultrasonic waves in complex materials. Indeed, ultrasonic propagation can be disturbed by the anisotropic and inhomogeneous properties of the medium, and the interpretation of inspection results can then be difficult. An analytical model based on dynamic ray theory, developed by CEA-LIST and implemented in the CIVA software, was used to predict ultrasonic propagation in a DMW. The model evaluates the ray trajectories, the travel time, and the amplitude along the ray tube in a medium whose physical properties are described as varying continuously. In that study, the weld was described by an analytical law for the crystallographic orientation. Simulated results for the detection of calibrated notches located in the buttering and the weld were compared with experimental data and showed good agreement. The new collaborative program presented in this paper aims to detect a real SCC defect located close to the root of the DMW. Thus, simulations have been performed for a DMW described with an analytical law and with a smooth cartography of the crystallographic orientation. Furthermore, advanced ultrasonic testing methods have been used to inspect the specimen and detect the real SCC defect. Experimental and simulated results of the mock-up inspection have been compared.

Ever since the seminal work of R. A. Fisher and F. Yates, factorial designs have been an important experimental tool to simultaneously estimate the effects of multiple treatment factors. In factorial designs, the number of treatment combinations grows exponentially with the number of treatment factors, which motivates the forward selection strategy based on the sparsity, hierarchy, and heredity principles for factorial effects. Although this strategy is intuitive and has been widely used in practice, its rigorous statistical theory has not been formally established. To fill this gap, we establish design-based theory for forward factor selection in factorial designs based on the potential outcome framework. We not only prove a consistency property for the factor selection procedure but also discuss statistical inference after factor selection. In particular, with selection consistency, we quantify the advantages of forward selection based on asymptotic efficiency gain in estimating factorial effects. With inconsistent selection in higher-order interactions, we propose two strategies and investigate their impact on subsequent inference. Our formulation differs from the existing literature on variable selection and post-selection inference because our theory is based solely on the physical randomization of the factorial design and does not rely on a correctly specified outcome model.
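
To illustrate the forward selection strategy the abstract refers to, the sketch below runs a generic forward search over factorial effects that respects strong heredity (an interaction becomes eligible only after its parent main effects are selected). It is an illustrative toy on a 2^4 design with a simple stopping threshold, not the paper's design-based procedure or its post-selection inference.

```python
# Generic sketch of forward selection for factorial effects under the
# heredity principle. Illustrative only; not the paper's design-based
# procedure or inference.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# A 2^4 full factorial design with +/-1 coding and a sparse true model.
factors = ["A", "B", "C", "D"]
design = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))
cols = {f: design[:, i] for i, f in enumerate(factors)}
y = (2.0 * cols["A"] + 1.5 * cols["B"] + 1.0 * cols["A"] * cols["B"]
     + rng.normal(0.0, 0.5, size=len(design)))

def effect_column(name):
    """Column for a main effect ("A") or two-factor interaction ("AB")."""
    out = np.ones(len(design))
    for f in name:
        out = out * cols[f]
    return out

candidates = factors + ["".join(p) for p in itertools.combinations(factors, 2)]

def heredity_ok(name, selected):
    # Strong heredity: every parent main effect must already be selected.
    return len(name) == 1 or all(f in selected for f in name)

selected, residual = [], y - y.mean()
for _ in range(len(candidates)):
    eligible = [c for c in candidates
                if c not in selected and heredity_ok(c, selected)]
    if not eligible:
        break
    # Score each eligible effect by the magnitude of its estimated coefficient
    # (columns are orthogonal, so this is a simple projection).
    scores = {c: abs(effect_column(c) @ residual) / len(design) for c in eligible}
    best = max(scores, key=scores.get)
    if scores[best] < 0.25:          # simple stopping threshold (assumption)
        break
    selected.append(best)
    residual = residual - (effect_column(best) @ residual) / len(design) * effect_column(best)

print("Selected effects:", selected)   # expected to recover A, B, then AB
```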

The rapid integration of Generative AI (GenAI) and Large Language Models (LLMs) in sectors such as education and healthcare has marked a significant advancement in technology. However, this growth has also drawn attention to a largely unexplored aspect: their security vulnerabilities. As the ecosystem that includes both offline and online models, various tools, browser plugins, and third-party applications continues to expand, it significantly widens the attack surface, thereby escalating the potential for security breaches. These expansions in the 6G and beyond landscape provide new avenues for adversaries to manipulate LLMs for malicious purposes. We focus on the security aspects of LLMs from the viewpoint of potential adversaries. We aim to dissect their objectives and methodologies, providing an in-depth analysis of known security weaknesses. This will include the development of a comprehensive threat taxonomy, categorizing various adversary behaviors. Our research will also concentrate on how LLMs can be integrated into cybersecurity efforts by defense teams, also known as blue teams. We will explore the potential synergy between LLMs and blockchain technology, and how this combination could lead to the development of next-generation, fully autonomous security solutions. This approach aims to establish a unified cybersecurity strategy across the entire computing continuum, enhancing overall digital security infrastructure.

The ability to learn and compose functions is foundational to efficient learning and reasoning in humans, enabling flexible generalizations such as creating new dishes from known cooking processes. Beyond sequential chaining of functions, existing linguistics literature indicates that humans can grasp more complex compositions with interacting functions, where output production depends on context changes induced by different function orderings. Extending the investigation into the visual domain, we developed a function learning paradigm to explore the capacity of humans and neural network models in learning and reasoning with compositional functions under varied interaction conditions. Following brief training on individual functions, human participants were assessed on composing two learned functions, in ways covering four main interaction types, including instances in which the application of the first function creates or removes the context for applying the second function. Our findings indicate that humans can make zero-shot generalizations on novel visual function compositions across interaction conditions, demonstrating sensitivity to contextual changes. A comparison with a neural network model on the same task reveals that, through the meta-learning for compositionality (MLC) approach, a standard sequence-to-sequence Transformer can mimic human generalization patterns in composing functions.
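
A toy example of "interacting" compositions, where one function creates or removes the context another function needs, can make the ordering sensitivity concrete. The functions below are illustrative stand-ins, not the visual stimuli or tasks used in the study.

```python
# Toy illustration of interacting function composition: g only applies when
# a context marker is present, and f creates (or h removes) that marker.
# These functions are illustrative stand-ins, not the study's visual tasks.

def f(scene):
    """Creates context: adds a 'highlight' marker to the scene."""
    return scene | {"highlight"}

def h(scene):
    """Removes context: deletes the 'highlight' marker if present."""
    return scene - {"highlight"}

def g(scene):
    """Context-dependent: duplicates the object only if a highlight is present."""
    if "highlight" in scene:
        return scene | {"object_copy"}
    return scene

start = {"object"}

# Order matters: f then g lets g act; g then f leaves g without its context.
print(g(f(start)))    # {'object', 'highlight', 'object_copy'}
print(f(g(start)))    # {'object', 'highlight'}  (no copy was made)
# And h can remove the context that f created, blocking g downstream:
print(g(h(f(start)))) # {'object'}  (highlight removed before g applies)
```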

The use of Artificial Intelligence (AI) based on data-driven algorithms has become ubiquitous in today's society. Yet, in many cases, and especially when the stakes are high, humans still make the final decisions. The critical question, therefore, is whether AI helps humans make better decisions compared to a human alone or AI alone. We introduce a new methodological framework that can be used to answer this question experimentally with no additional assumptions. We measure a decision maker's ability to make correct decisions using standard classification metrics based on the baseline potential outcome. We consider a single-blinded experimental design, in which the provision of AI-generated recommendations is randomized across cases, with a human making the final decisions. Under this experimental design, we show how to compare the performance of three alternative decision-making systems: human-alone, human-with-AI, and AI-alone. We apply the proposed methodology to data from our own randomized controlled trial of a pretrial risk assessment instrument. We find that AI recommendations do not improve the classification accuracy of a judge's decision to impose cash bail. Our analysis also shows that AI-alone decisions generally perform worse than human decisions with or without AI assistance. Finally, AI recommendations tend to impose cash bail on non-white arrestees more often than necessary when compared to white arrestees.
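
The comparison of the three decision systems can be illustrated with a simple accuracy computation under the single-blinded design described above: AI provision is randomized, the human decision is observed in both arms, and the AI-alone decision can be scored on every case. The sketch below uses simulated data and deliberately ignores the potential-outcome subtleties (e.g., which outcome is observable under which decision) that the paper's methodology is designed to handle.

```python
# Illustrative comparison of human-alone, human-with-AI, and AI-alone
# decisions under randomized provision of AI recommendations.
# Simplified: it ignores the potential-outcome issues the proposed
# methodology addresses, and all data here are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000

cases = pd.DataFrame({
    "truth": rng.integers(0, 2, size=n),      # baseline potential outcome
    "ai_shown": rng.integers(0, 2, size=n),   # randomized treatment arm
})
# Hypothetical decisions: the AI and the human each get the right answer
# with some probability; the human leans toward the AI when it is shown.
cases["ai_decision"] = np.where(rng.random(n) < 0.70, cases["truth"], 1 - cases["truth"])
human_own = np.where(rng.random(n) < 0.75, cases["truth"], 1 - cases["truth"])
follows_ai = (cases["ai_shown"] == 1) & (rng.random(n) < 0.5)
cases["human_decision"] = np.where(follows_ai, cases["ai_decision"], human_own)

def accuracy(pred, truth):
    return float(np.mean(pred == truth))

control = cases[cases["ai_shown"] == 0]
treated = cases[cases["ai_shown"] == 1]
print("human alone   :", accuracy(control["human_decision"], control["truth"]))
print("human with AI :", accuracy(treated["human_decision"], treated["truth"]))
print("AI alone      :", accuracy(cases["ai_decision"], cases["truth"]))
```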
