
Contemporary practices such as InnerSource and DevOps promote software reuse. This study investigates the implications of using contemporary practices on software reuse. In particular, we investigate the costs, benefits, challenges, and potential improvements in contemporary reuse at Ericsson. We performed the study in two phases: a) the initial data collection based on a combination of data collection methods (e.g., interviews, discussions, company portals), and b) a follow-up group discussion after a year to understand the status of the challenges and improvements identified in the first phase. Our results indicate that developing reusable assets resulted in upfront costs, such as additional effort in ensuring compliance. Furthermore, development with reuse also resulted in additional effort, for example, in integrating and understanding reusable assets. Ericsson perceived the additional effort as an investment resulting in long-term benefits such as improved quality, productivity, customer experience, and way of working. Ericsson's main challenge was increased pressure on the producers of reusable assets, which was mitigated by scaling the InnerSource adoption. InnerSource success is evident from the increase in the contributions to reusable assets. In addition, Ericsson implemented measures such as automating the compliance check, which enhanced the maturity of reusable assets and resulted in increased reuse.

Related content

The ACM SIGACCESS Conference on Computers and Accessibility (ASSETS) is the premier forum for research on the design, evaluation, use, and education of computing for people with disabilities and older adults. We welcome submissions of original, high-quality work on topics related to computing and accessibility. This year, for the first time, ASSETS expands its scope to include original, high-quality research on topics related to accessible computing education.
November 10, 2023

This study compares the performance of (1) fine-tuned models and (2) extremely large language models on the task of check-worthy claim detection. For the comparison, we composed a multilingual and multi-topical dataset comprising texts of various sources and styles. Building on this, we performed a benchmark analysis to determine the most general multilingual and multi-topical claim detector. We chose three state-of-the-art models in the check-worthy claim detection task and fine-tuned them. Furthermore, we selected three state-of-the-art extremely large language models without any fine-tuning. We adapted all models to multilingual settings and conducted extensive experimentation and evaluation. We assessed the performance of all models in terms of accuracy, recall, and F1-score in in-domain and cross-domain scenarios. Our results demonstrate that, despite the technological progress in the area of natural language processing, models fine-tuned for the task of check-worthy claim detection still outperform the zero-shot approaches in cross-domain settings.
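As a hypothetical illustration of the evaluation protocol described above (the labels below are toy values, not the paper's data), the in-domain versus cross-domain comparison amounts to scoring the same detector on two test sets with the three stated metrics:

```python
# Hedged sketch of the evaluation protocol: score a claim detector on a test
# set from its training domain, and again on a set from an unseen domain.
def scores(y_true, y_pred):
    """Accuracy, recall, and F1 for binary labels (1 = check-worthy claim)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "recall": recall, "f1": f1}

# Toy predictions standing in for a fine-tuned detector's outputs.
in_dom = scores([1, 0, 1, 1, 0], [1, 0, 1, 0, 0])
cross_dom = scores([1, 1, 0, 1, 0], [0, 1, 0, 0, 0])
print(in_dom)     # in-domain metrics
print(cross_dom)  # cross-domain metrics, typically lower
```

The cross-domain gap between the two score dictionaries is exactly the quantity the benchmark analysis compares across models.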

In this paper, we develop an arbitrary-order locking-free enriched Galerkin method for the linear elasticity problem using the stress-displacement formulation in both two and three dimensions. The method is based on the mixed discontinuous Galerkin method in [30], but with a different stress approximation space that enriches the arbitrary order continuous Galerkin space with some piecewise symmetric-matrix valued polynomials. We prove that the method is well-posed and provide a parameter-robust error estimate, which confirms the locking-free property of the EG method. We present some numerical examples in two and three dimensions to demonstrate the effectiveness of the proposed method.
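For orientation, a standard stress-displacement (Hellinger-Reissner) weak form of linear elasticity, on which such mixed and enriched Galerkin discretizations are built, can be sketched as follows (sign and scaling conventions vary across references; this is a textbook form, not copied from the paper):

```latex
% Find (\sigma, u) \in H(\operatorname{div},\Omega;\mathbb{S}) \times L^2(\Omega;\mathbb{R}^d) such that
\begin{aligned}
(\mathcal{A}\sigma, \tau) + (u, \operatorname{div}\tau) &= 0
  && \forall\, \tau \in H(\operatorname{div},\Omega;\mathbb{S}), \\
(\operatorname{div}\sigma, v) &= -(f, v)
  && \forall\, v \in L^2(\Omega;\mathbb{R}^d),
\end{aligned}
\qquad
\mathcal{A}\tau = \frac{1}{2\mu}\left(\tau - \frac{\lambda}{2\mu + d\lambda}\,\operatorname{tr}(\tau)\, I\right),
```

where $\mathcal{A}$ is the compliance tensor with Lame parameters $\mu, \lambda$. A parameter-robust ($\lambda$-independent) error estimate is precisely what "locking-free" means here: as $\lambda \to \infty$ (the nearly incompressible limit), $\mathcal{A}$ degenerates on the trace part, and non-robust methods lose accuracy.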

We present a framework for the efficient computation of optimal Bayesian decisions under intractable likelihoods, by learning a surrogate model for the expected utility (or its distribution) as a function of the action and data spaces. We leverage recent advances in simulation-based inference and Bayesian optimization to develop active learning schemes to choose where in parameter and action spaces to simulate. This allows us to learn the optimal action in as few simulations as possible. The resulting framework is extremely simulation efficient, typically requiring fewer model calls than the associated posterior inference task alone, and a factor of $100-1000$ more efficient than Monte Carlo-based methods. Our framework opens up new capabilities for performing Bayesian decision making, particularly in the previously challenging regime where likelihoods are intractable and simulations are expensive.
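A minimal sketch of the idea, assuming a one-dimensional action space, a toy simulator, and a quadratic surrogate in place of the Gaussian-process surrogates typically used in Bayesian optimization (all names and numbers below are illustrative, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_utility(action):
    # Stand-in for an expensive simulator: noisy utility peaking at 0.6.
    return -(action - 0.6) ** 2 + 0.05 * rng.standard_normal()

candidates = np.linspace(0.0, 1.0, 101)   # discretized action space
X = list(np.linspace(0.0, 1.0, 5))        # small initial design
y = [simulate_utility(a) for a in X]

for _ in range(20):
    # Surrogate for the expected utility: quadratic fit to simulations so far.
    surrogate = np.polyval(np.polyfit(X, y, 2), candidates)
    # Crude acquisition rule: exploit the surrogate optimum, with mild
    # exploration favoring candidates far from existing simulations.
    dist = np.min(np.abs(candidates[:, None] - np.array(X)[None, :]), axis=1)
    a_next = candidates[np.argmax(surrogate + 0.1 * dist)]
    X.append(a_next)
    y.append(simulate_utility(a_next))

best = candidates[np.argmax(np.polyval(np.polyfit(X, y, 2), candidates))]
print(f"estimated optimal action: {best:.2f}")
```

The point of the active-learning loop is that simulations concentrate where they most improve the surrogate near its optimum, which is why far fewer simulator calls are needed than for a Monte Carlo estimate of the expected utility at every candidate action.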

This study explores the intersection of information technology-based self-monitoring (ITSM) and emotional responses in chronic care. It critiques the lack of theoretical depth in current ITSM research and proposes a dynamic emotion process theory to understand ITSM's impact on users' emotions. Utilizing computational grounded theory and machine learning analysis of hypertension app reviews, the research seeks to extend emotion theory by examining ITSM stimuli and their influence on emotional episodes, moving beyond discrete emotion models towards a continuous, nuanced understanding of emotional responses.

Bayesian inference and kernel methods are well established in machine learning. The neural network Gaussian process in particular provides a concept to investigate neural networks in the limit of infinitely wide hidden layers by using kernel and inference methods. Here we build upon this limit and provide a field-theoretic formalism which covers the generalization properties of infinitely wide networks. We systematically compute generalization properties of linear, non-linear, and deep non-linear networks for kernel matrices with heterogeneous entries. In contrast to currently employed spectral methods we derive the generalization properties from the statistical properties of the input, elucidating the interplay of input dimensionality, size of the training data set, and variability of the data. We show that data variability leads to a non-Gaussian action reminiscent of a ($\varphi^3+\varphi^4$)-theory. Using our formalism on a synthetic task and on MNIST we obtain a homogeneous kernel matrix approximation for the learning curve as well as corrections due to data variability which allow the estimation of the generalization properties and exact results for the bounds of the learning curves in the case of infinitely many training data points.
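A small illustration of the infinite-width limit referenced above (not the paper's field-theoretic formalism): the NNGP kernel of an infinitely wide one-hidden-layer ReLU network is the degree-1 arc-cosine kernel, and inference in that limit reduces to Gaussian-process regression with this kernel. All data below are synthetic:

```python
import numpy as np

def relu_nngp_kernel(X1, X2):
    """Arc-cosine kernel of degree 1: the NNGP kernel of an infinitely wide
    one-hidden-layer ReLU network, up to the weight-variance scale."""
    n1 = np.linalg.norm(X1, axis=1)[:, None]
    n2 = np.linalg.norm(X2, axis=1)[None, :]
    cos = np.clip(X1 @ X2.T / (n1 * n2), -1.0, 1.0)
    theta = np.arccos(cos)
    return n1 * n2 * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi

rng = np.random.default_rng(1)
X_train = rng.standard_normal((30, 5))
y_train = np.sin(X_train[:, 0])          # toy targets
X_test = rng.standard_normal((5, 5))

# GP posterior mean: exact Bayesian inference in the infinite-width limit.
K = relu_nngp_kernel(X_train, X_train) + 1e-3 * np.eye(30)  # noise regularizer
K_star = relu_nngp_kernel(X_test, X_train)
y_pred = K_star @ np.linalg.solve(K, y_train)
print(y_pred.shape)
```

The learning curves the abstract analyzes describe how the error of exactly this kind of kernel predictor decays with the number of training points, and how variability in the kernel matrix entries perturbs that decay.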

The expanding hardware diversity in high performance computing adds enormous complexity to scientific software development. Developers who aim to write maintainable software have two options: 1) use a so-called data locality abstraction that handles portability internally, so that performance and productivity become a trade-off; such abstractions usually come in the form of libraries, domain-specific languages, and run-time systems; 2) use generic programming, where performance, productivity, and portability are subject to software design. Following the second option, this work describes a design approach that allows the integration of low-level and verbose programming tools into high-level generic algorithms based on template meta-programming in C++. This enables the development of performance-portable applications targeting host-device computer architectures, such as CPUs and GPUs. With a suitable design in place, extending generic algorithms to new hardware becomes a well-defined procedure that can be developed in isolation from other parts of the code. This allows scientific software to remain maintainable and efficient in a period of diversifying hardware in HPC. As proof of concept, a finite-difference modelling algorithm for the acoustic wave equation is developed and benchmarked using roofline model analysis on an Intel Xeon Gold 6248 CPU, an Nvidia Tesla V100 GPU, and an AMD MI100 GPU.
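The paper's mechanism is C++ template meta-programming; purely as a language-neutral analog of the design idea (a generic high-level algorithm parameterized by interchangeable low-level backends), a sketch might look like this, with all names hypothetical:

```python
# Python analog of the design idea only -- the paper works in C++ templates.
# A generic high-level algorithm (1-D finite-difference wave update) is
# parameterized over interchangeable low-level backends.
import numpy as np

class NumpyBackend:
    """Stand-in for a 'host' backend; a GPU backend would expose the same
    interface (here, a Laplacian stencil) with device-specific internals."""
    @staticmethod
    def laplacian(u, dx):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        return lap

def wave_step(backend, u_prev, u_curr, c, dt, dx):
    # Generic second-order leapfrog update; only the backend knows the
    # hardware-specific way to evaluate the stencil.
    return 2.0 * u_curr - u_prev + (c * dt) ** 2 * backend.laplacian(u_curr, dx)

x = np.linspace(0.0, 1.0, 101)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)   # initial Gaussian pulse, zero velocity
u1 = wave_step(NumpyBackend, u0, u0, c=1.0, dt=0.004, dx=0.01)
print(u1.shape)
```

In the C++ setting described by the abstract, the backend parameter is a template argument resolved at compile time, so the dispatch shown here costs nothing at run time and new hardware support means writing one new backend in isolation.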

This study investigates the influence of varying illumination levels on architectural experiences by employing a comprehensive approach that combines self-reported assessments and neurophysiological measurements. Thirty participants were exposed to nine distinct illumination conditions in a controlled virtual reality environment. Subjective assessments were collected through questionnaires in which participants rated how pleasant, interesting, exciting, calming, complex, bright, and spacious they found the space. Objective measurements of brain activity were collected by electroencephalogram (EEG). Data analysis demonstrated that illumination levels significantly influenced cognitive engagement and different architectural experience indicators. This alignment between subjective assessment and EEG data underscores the relationship between illuminance and architectural experiences. The study bridges the gap between quantitative and qualitative assessments, providing a deeper understanding of the intricate connection between lighting conditions and human responses. These findings contribute to the enhancement of environmental design based on neuroscientific insights, emphasizing the critical role of well-considered daylighting design in positively influencing occupants' cognitive and emotional states within built environments.

Causal investigations in observational studies pose a great challenge in scientific research where randomized trials or intervention-based studies are not feasible. Leveraging Shannon's seminal work on information theory, we consider a framework of asymmetry where any causal link between putative cause and effect must be explained through a mechanism governing the cause as well as a generative process yielding an effect of the cause. Under weak assumptions, this framework enables the assessment of whether X is a stronger predictor of Y or vice-versa. Under stronger identifiability assumptions, our framework is able to distinguish between cause and effect using observational data. We establish key statistical properties of this framework. Our proposed methodology relies on scalable non-parametric density estimation using the fast Fourier transform. The resulting estimation method is many times faster than classical bandwidth-based density estimation while maintaining comparable mean integrated squared error rates. We investigate key asymptotic properties of our methodology and introduce a data-splitting technique to facilitate inference. The key attraction of our framework is its inference toolkit, which allows researchers to quantify uncertainty in causal discovery findings. We illustrate the performance of our methodology through simulation studies as well as multiple real data examples.
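A toy illustration of the asymmetry principle (not the paper's FFT-based entropy estimator): regress each variable on the other and compare how much variability remains. The mechanism below is deliberately non-invertible, so the causal direction is easier to predict than the anti-causal one:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground truth: X causes Y through a non-invertible mechanism plus noise.
X = rng.uniform(0.0, np.pi, 2000)
Y = np.sin(X) + 0.05 * rng.standard_normal(2000)

def residual_variance(cause, effect, degree=5):
    """Variance left in `effect` after polynomial regression on `cause` --
    a crude proxy for the conditional-entropy comparison in the text."""
    fit = np.polyval(np.polyfit(cause, effect, degree), cause)
    return np.var(effect - fit)

forward = residual_variance(X, Y)   # Y given X: near the noise floor
backward = residual_variance(Y, X)  # X given Y: large, mechanism folds over
print(f"X->Y residual variance: {forward:.3f}")
print(f"Y->X residual variance: {backward:.3f}")
print("inferred direction:", "X->Y" if forward < backward else "Y->X")
```

The framework in the abstract formalizes this comparison information-theoretically and, crucially, attaches uncertainty quantification to the inferred direction rather than reporting a bare point decision.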

The optimization of open-loop shallow geothermal systems, which includes both design and operational aspects, is an important research area aimed at improving their efficiency and sustainability and the effective management of groundwater as a shallow geothermal resource. This paper investigates various approaches to address optimization problems arising from these research and implementation questions about groundwater heat pump (GWHP) systems. The identified optimization approaches are thoroughly analyzed based on criteria such as computational cost and applicability. Moreover, a novel classification scheme is introduced that categorizes the approaches according to the types of groundwater simulation model and the optimization algorithm used. Simulation models are divided into two types: numerical and simplified (analytical or data-driven) models, while optimization algorithms are divided into gradient-based and derivative-free algorithms. Finally, a comprehensive review of existing approaches in the literature is provided, highlighting their strengths and limitations and offering recommendations for both the use of existing approaches and the development of new, improved ones in this field.

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communications and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. The combination of these algorithms enables each agent to improve their task allocation strategy through reinforcement learning, while changing how much they explore the system in response to how optimal they believe their current strategy is, given their past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, limiting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, to then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum in the system configurations considered. It provides 5x better performance recovery than no-knowledge-retention approaches when system connectivity is impacted, and was tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
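A single-agent toy sketch of the adaptive-exploration idea (hypothetical worker names and success rates; the paper's four algorithms, multi-agent setting, and resource limits are not modeled): exploration shrinks as the agent's current strategy stops producing surprises.

```python
import random

random.seed(0)

# Toy task allocation: an agent repeatedly assigns a subtask to one of
# several workers whose (unknown) success rates differ.
success_rate = {"worker_a": 0.9, "worker_b": 0.5, "worker_c": 0.2}
q = {w: 0.0 for w in success_rate}   # learned value of each allocation
recent_error = 1.0                   # running measure of strategy surprise
alpha, smoothing = 0.1, 0.9

for step in range(2000):
    # Exploration scales with perceived sub-optimality of the strategy,
    # echoing the adaptive-exploration idea in the abstract.
    epsilon = min(1.0, recent_error)
    if random.random() < epsilon:
        choice = random.choice(list(q))          # explore
    else:
        choice = max(q, key=q.get)               # exploit
    reward = 1.0 if random.random() < success_rate[choice] else 0.0
    td_error = reward - q[choice]
    q[choice] += alpha * td_error                # value update
    recent_error = smoothing * recent_error + (1 - smoothing) * abs(td_error)

print("learned preference:", max(q, key=q.get))
```

Because the learned values persist across disturbances, an agent that retains this knowledge recovers its strategy quickly when connectivity returns, which is the behaviour behind the reported 5x recovery advantage over no-knowledge-retention baselines.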

Beijing Abit Technology Co., Ltd.