
The intersection of ageism and sexism can create a hostile environment for veteran software developers belonging to marginalized genders. In this study, we conducted 14 interviews to examine the experiences of people at this intersection, primarily women, in order to discover the strategies they employed to successfully remain in the field. We identified 283 codes, which fell into three main categories: Strategies, Experiences, and Perception. Several strategies we identified, such as (Deliberately) Not Trying to Look Younger, were not previously described in the software engineering literature. We found that, in some companies, older women developers are recognized as having particular value, further strengthening the known benefits of diversity in the workforce. Based on these experiences and strategies, we suggest that organizations employing software developers consider the benefits of hiring veteran women software developers. For example, companies can draw upon the life experiences of older women developers to better understand the needs of customers from a similar demographic. While we recognize that many of the strategies employed by our study participants are a response to systemic issues, we still consider that, in the short term, there is benefit in describing these strategies for developers who are experiencing such issues today.

Related Content

Natural interaction between multiple users within a shared virtual environment (VE) relies on the users' mutual awareness of each interaction partner's current position. This, however, cannot be guaranteed when users employ non-continuous locomotion techniques, such as teleportation, which may cause confusion among bystanders. In this paper, we pursue two approaches to create a pleasant experience for both the moving user and the bystanders observing that movement. First, we introduce a Smart Avatar system that delivers continuous full-body human representations for non-continuous locomotion in shared virtual reality (VR) spaces. Smart Avatars imitate their assigned user's real-world movements when nearby and autonomously navigate to their user when the distance between them exceeds a certain threshold, i.e., after the user teleports. As part of the Smart Avatar system, we implemented four avatar transition techniques and compared them to conventional avatar locomotion in a user study, revealing significant positive effects on the observers' spatial awareness, as well as on pragmatic and hedonic quality scores. Second, we introduce the concept of Stuttered Locomotion, which can be applied to any continuous locomotion method. By converting a continuous movement into short-interval teleport steps, we provide the merits of non-continuous locomotion for the moving user while observers can easily keep track of the user's path. Thus, while the experience for observers is similarly positive to that with continuous motion, a user study confirmed that Stuttered Locomotion can significantly reduce the occurrence of cybersickness symptoms for the moving user, making it an attractive choice for shared VEs. We discuss the potential of Smart Avatars and Stuttered Locomotion for shared VR experiences, both when applied individually and in combination.
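To make the Stuttered Locomotion idea concrete, the following is a minimal sketch of how a continuous locomotion input could be quantized into short-interval teleport steps. It is an engine-agnostic Python sketch; the class name, `step_interval`, and `min_step_distance` are illustrative assumptions, not values or APIs from the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class StutteredLocomotion:
    """Minimal sketch: quantize continuous motion into short-interval teleport steps.

    step_interval (seconds) and min_step_distance (meters) are illustrative
    parameters; a real system would live inside a VR engine's update loop.
    """
    step_interval: float = 0.25      # time between teleport "stutters"
    min_step_distance: float = 0.05  # ignore jitter below this distance
    _elapsed: float = 0.0
    _pending: tuple = (0.0, 0.0, 0.0)

    def update(self, avatar_pos, target_pos, dt):
        """Accumulate continuous input; emit a discrete jump every step_interval."""
        self._elapsed += dt
        self._pending = target_pos
        if self._elapsed < self.step_interval:
            return avatar_pos                   # hold position between steps
        self._elapsed = 0.0
        if math.dist(avatar_pos, self._pending) < self.min_step_distance:
            return avatar_pos
        return self._pending                    # teleport to the latest target

# Usage: feed per-frame continuous targets; the avatar only moves in steps.
loco = StutteredLocomotion()
pos = (0.0, 0.0, 0.0)
for frame in range(30):
    target = (0.1 * (frame + 1), 0.0, 0.0)      # continuous forward motion
    pos = loco.update(pos, target, dt=1 / 90)   # 90 Hz frame rate
print(pos)                                      # position after the first stutter step
```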

In multivariate time series analysis, the coherence measures the linear dependency between two time series at different frequencies. However, real data applications often exhibit nonlinear dependency in the frequency domain, which conventional coherence analysis fails to capture. The quantile coherence, on the other hand, characterizes nonlinear dependency by defining the coherence at a set of quantile levels based on trigonometric quantile regression. Although quantile coherence is a more powerful tool, its estimation remains challenging due to the high level of noise. This paper introduces a new estimation technique for quantile coherence. The proposed method is semi-parametric: it uses the parametric form of the spectrum of the vector autoregressive (VAR) model as an approximation to the quantile spectral matrix, along with nonparametric smoothing across quantiles. For each fixed quantile level, we obtain the VAR parameters from the quantile periodograms and then, using the Durbin-Levinson algorithm, calculate a preliminary estimate of the quantile coherence from these parameters. Finally, we smooth the preliminary estimate across quantiles using a nonparametric smoother. Numerical results show that the proposed estimation method outperforms nonparametric methods. We also show that quantile-coherence-based bivariate time series clustering has advantages over clustering based on the ordinary VAR coherence. As an application, clusters of financial stocks identified by their quantile coherence with a market benchmark are shown to exhibit an intriguing and more accurate structure of diversified investment portfolios, which investors may use to make better decisions.
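As a rough illustration of the parametric building block of this estimator, the sketch below computes coherence from a VAR spectral matrix with NumPy. The per-quantile fitting from quantile periodograms (via Durbin-Levinson) and the nonparametric smoothing across quantile levels are only indicated in comments; all function names and parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def var_spectral_matrix(A_list, sigma, freqs):
    """Spectral density matrix of a VAR(p) process.

    A_list : list of (d, d) coefficient matrices A_1, ..., A_p
    sigma  : (d, d) innovation covariance
    freqs  : frequencies in cycles per unit time, in (0, 0.5)
    """
    d = sigma.shape[0]
    spec = np.empty((len(freqs), d, d), dtype=complex)
    for i, f in enumerate(freqs):
        # A(f) = I - sum_k A_k * exp(-2*pi*1j*f*k)
        Af = np.eye(d, dtype=complex)
        for k, Ak in enumerate(A_list, start=1):
            Af -= Ak * np.exp(-2j * np.pi * f * k)
        Ainv = np.linalg.inv(Af)
        spec[i] = Ainv @ sigma @ Ainv.conj().T / (2 * np.pi)
    return spec

def coherence_from_spectrum(spec):
    """Squared coherence between series 1 and 2 from a bivariate spectral matrix."""
    return np.abs(spec[:, 0, 1]) ** 2 / (spec[:, 0, 0].real * spec[:, 1, 1].real)

# In the proposed method, A_list and sigma would be fitted at each quantile level
# from the quantile periodograms (via Durbin-Levinson), and the resulting
# coherence curves would then be smoothed across quantile levels.
A1 = np.array([[0.5, 0.2], [0.1, 0.4]])
sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
freqs = np.linspace(0.01, 0.49, 50)
coh = coherence_from_spectrum(var_spectral_matrix([A1], sigma, freqs))
print(coh[:3])
```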

Interacting with other human road users is one of the most challenging tasks for autonomous vehicles. Generating congruent driving behaviors requires awareness and understanding of sociality, which includes the implicit social customs and individualized social preferences of human drivers. To understand and quantify the complex sociality in driving interactions, we propose a Virtual-Game-based Interaction Model (VGIM) that is explicitly parameterized by a social preference measurement, the Interaction Preference Value (IPV), which is designed to capture a driver's relative preference for individual rewards over group rewards. A method for identifying the IPV from observed driving trajectories is also provided. We then analyze human drivers' IPV with driving data recorded in a typical interactive driving scenario, the unprotected left turn. The results show that (1) human drivers express varied social preferences in executing different tasks (turning left or going straight); and (2) competitive actions are strategically conducted by human drivers in order to coordinate with others. Finally, we implement the humanlike IPV-expressing strategy with a rule-based method and embed it into VGIM and optimization-based motion planners. Controlled simulation experiments are conducted, and the results demonstrate that (1) IPV identification could improve motion prediction performance in interactive driving scenarios and (2) the dynamic IPV-expressing strategy extracted from human driving data makes it possible to reproduce humanlike coordination patterns in the driving interaction.
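As an illustration of how a scalar social-preference parameter such as the IPV might enter a reward function, the toy sketch below blends individual and group rewards with a preference angle and identifies the angle that best explains observed choices by grid search. The cosine/sine blend and the helper names are assumptions made for illustration; the exact functional form and identification procedure in VGIM may differ.

```python
import numpy as np

def blended_reward(r_individual, r_group, ipv):
    """Weigh individual vs. group reward by a scalar preference angle (radians).

    ipv = 0 is purely individualistic, ipv = pi/2 purely group-oriented, and
    negative values are competitive. This cosine/sine blend is one common way
    to express such a preference; VGIM's exact form may differ.
    """
    return np.cos(ipv) * r_individual + np.sin(ipv) * r_group

def identify_ipv(observed_choices, r_individual, r_group, grid=None):
    """Toy identification: grid-search the IPV that best explains observed choices.

    observed_choices       : (T,) indices of the actions a driver actually took
    r_individual, r_group  : (T, n_actions) reward tables per time step
    """
    grid = np.linspace(-np.pi / 2, np.pi / 2, 181) if grid is None else grid
    hit_rate = [np.mean(blended_reward(r_individual, r_group, ipv).argmax(axis=1)
                        == observed_choices) for ipv in grid]
    return grid[int(np.argmax(hit_rate))]
```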

Advancements in the software industry, along with changing technologies, methods, and conditions, have brought forth a perspective that prioritizes improving all stages of the software development lifecycle through automation. In particular, methods such as agile methodologies and DevOps, which focus on collaboration, automation, and efficient software production, have become crucial for the software industry. From this shift has emerged an understanding of applying principles such as distribution management, collaboration, parallel development, and end-to-end automation in agile software development and DevOps techniques. This study addresses one of these areas, software configuration management, and its integration with modern software development practices such as agile and DevOps. The aim of this study is to examine the differences and benefits that these innovative methods bring to the software configuration management field compared to traditional methods. To this end, a project is taken as a basis, improvements are made through the integration of DevOps and agile methodologies, and the results are compared with the previous state. Monitoring software configuration management with the integration of DevOps and agile methodologies yields improvements in build and deployment time, automated report generation, more accurate and fault-free version management, more complete control over the software system, and working-time and workforce efficiency.

Deaf or hard-of-hearing (DHH) speakers typically have atypical speech caused by deafness. With the growing support of speech-based devices and software applications, more work needs to be done to make these devices inclusive of everyone. To this end, we analyze the use of openly available automatic speech recognition (ASR) tools with a DHH Japanese speaker dataset. As these out-of-the-box ASR models typically do not perform well on DHH speech, we provide a thorough analysis of creating personalized ASR systems. We collected a large DHH speaker dataset of four speakers totaling around 28.05 hours and thoroughly analyzed the performance of different training frameworks by varying the training data sizes. Our findings show that 1000 utterances (or 1-2 hours) from a target speaker can already significantly improve model performance with a minimal amount of work, so we recommend that researchers collect at least 1000 utterances to build an efficient personalized ASR system. In cases where 1000 utterances are difficult to collect, we also discover significant improvements from using previously proposed data augmentation techniques, such as intermediate fine-tuning, when only 200 utterances are available.
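A minimal sketch of the data-size ablation described above is given below: it evaluates word error rate (WER) on a held-out set while fine-tuning on progressively larger subsets of a target speaker's utterances. The `fine_tune` and `transcribe` callables are placeholders to be supplied by whatever ASR toolkit is used; they are not a specific library API, and the sizes shown are only examples.

```python
def word_error_rate(reference, hypothesis):
    """Word error rate via Levenshtein distance over whitespace-separated tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def ablate_training_size(train_pool, held_out, fine_tune, transcribe,
                         sizes=(200, 500, 1000)):
    """Data-size ablation sketch: fine-tune on the first n utterances, score on a
    held-out set. train_pool and held_out are lists of (audio, transcript) pairs;
    fine_tune(data) -> model and transcribe(model, audio) -> text are supplied by
    whatever ASR toolkit is used -- they are placeholders, not a specific API.
    """
    results = {}
    for n in sizes:
        model = fine_tune(train_pool[:n])
        results[n] = sum(word_error_rate(ref, transcribe(model, audio))
                         for audio, ref in held_out) / len(held_out)
    return results
```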

This paper introduces general methodologies for constructing closed-form solutions to several important partial differential equations (PDEs) with polynomial right-hand sides in two and three spatial dimensions. The covered equations include the isotropic and anisotropic Poisson, Helmholtz, Stokes, and elastostatic equations, as well as the time-harmonic linear elastodynamic and Maxwell equations. Polynomial solutions have recently regained significance in the development of numerical techniques for evaluating volume integral operators and have potential applications in certain kinds of Trefftz finite element methods. For all of these PDEs, our approach relates the particular solution to polynomial particular solutions of the Poisson and Helmholtz equations, which can in turn be obtained, respectively, from expansions in homogeneous polynomials and from the Neumann series expansion of the operator $(k^2+\Delta)^{-1}$. No matrix inversion is required to compute the solution. The method naturally incorporates divergence constraints on the solution, as in the case of the Maxwell and Stokes equations. This work is accompanied by a freely available Julia library, \texttt{PolynomialSolutions.jl}, which implements the proposed methodology in a non-symbolic format, efficiently constructing the desired solutions and providing rapid access to their evaluation.
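To illustrate the Neumann-series construction for the Helmholtz case, the small SymPy sketch below computes a polynomial particular solution of $(k^2+\Delta)u = p$ by expanding $(k^2+\Delta)^{-1} = k^{-2}\sum_{j\geq 0}(-\Delta/k^2)^j$, which terminates because the Laplacian lowers the polynomial degree by two. The function name and interface are illustrative only and are unrelated to the actual \texttt{PolynomialSolutions.jl} API.

```python
import sympy as sp

x, y, z, k = sp.symbols("x y z k")

def helmholtz_particular_solution(p, variables=(x, y, z), k=k, max_terms=50):
    """Particular solution u of (k**2 + Laplacian) u = p for a polynomial p, via the
    terminating Neumann series u = sum_{j>=0} (-1)**j Laplacian**j p / k**(2j+2).
    The series terminates because the Laplacian lowers the polynomial degree by 2.
    """
    laplacian = lambda f: sum(sp.diff(f, v, 2) for v in variables)
    u, term, j = 0, p, 0
    while term != 0 and j < max_terms:
        u += (-1) ** j * term / k ** (2 * j + 2)
        term = laplacian(term)
        j += 1
    return sp.expand(u)

# Example with p(x, y, z) = x**2 * y:
p = x**2 * y
u = helmholtz_particular_solution(p)
print(u)                                                       # x**2*y/k**2 - 2*y/k**4
residual = k**2 * u + sum(sp.diff(u, v, 2) for v in (x, y, z)) - p
print(sp.simplify(residual))                                   # 0
```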

This paper introduces a novel data-driven approach to address challenges faced by city policymakers concerning the distribution of public funds. Grounding budgeting processes for improving quality of life in objective (data-driven) evidence has so far been a missing element in policy-making. This paper focuses on a case study of 1,204 citizens in the city of Aarau, Switzerland, and analyzes survey data containing insightful indicators that can impact the legitimacy of decision-making. Our approach is twofold. On the one hand, we aim to optimize the legitimacy of policymakers' decisions by identifying the level of investment in neighborhoods and projects that offers the greatest return in legitimacy. To do so, we introduce a new context-independent legitimacy metric for policymakers. This metric allows us to distinguish decisive from indecisive collective preferences for the neighborhoods or projects in which to invest, enabling policymakers to prioritize impactful bottom-up consultations and participatory initiatives (e.g., participatory budgeting). The metric also allows policymakers to identify the optimal number of investments in various project sectors and neighborhoods (in terms of legitimacy gain). On the other hand, we aim to offer guidance to policymakers concerning which satisfaction and participation factors influence citizens' quality of life, through an accurate classification model and an evaluation of relocations. By doing so, policymakers may be able to further refine their strategy, making targeted investments with significant benefits to citizens' quality of life. These findings are expected to provide transformative insights for practicing direct democracy in Switzerland and a blueprint for policy-making that can be adopted worldwide.

The optimal branch number of MDS matrices makes them a preferred choice for designing diffusion layers in many block ciphers and hash functions. However, in lightweight cryptography, Near-MDS (NMDS) matrices with sub-optimal branch numbers offer a better balance between security and efficiency as a diffusion layer, compared to MDS matrices. In this paper, we study NMDS matrices, exploring their construction in both recursive and nonrecursive settings. We provide several theoretical results and explore the hardware efficiency of the construction of NMDS matrices. Additionally, we compare the results for NMDS and MDS matrices whenever possible. For the recursive approach, we study DLS matrices and provide some theoretical results on their use. Some of these results are used to restrict the search space of DLS matrices. We also show that over a field of characteristic 2, any sparse matrix of order $n\geq 4$ with a fixed XOR value of 1 cannot be NMDS when raised to a power of $k\leq n$. Following that, we use generalized DLS (GDLS) matrices to provide some lightweight recursive NMDS matrices of several orders that perform better than existing matrices in terms of hardware cost or the number of iterations. For the nonrecursive construction of NMDS matrices, we study various structures, such as circulant and left-circulant matrices, and their generalizations: Toeplitz and Hankel matrices. In addition, we prove that Toeplitz matrices of order $n>4$ cannot be simultaneously NMDS and involutory over a field of characteristic 2. Finally, we use GDLS matrices to provide some lightweight NMDS matrices that can be computed in one clock cycle. The proposed nonrecursive NMDS matrices of orders 4, 5, 6, 7, and 8 can be implemented with 24, 50, 65, 96, and 108 XORs over $\mathbb{F}_{2^4}$, respectively.
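For reference, the branch numbers invoked above have the standard definitions $\mathcal{B}_d(M)=\min_{x\neq 0}\{\operatorname{wt}(x)+\operatorname{wt}(Mx)\}$ and $\mathcal{B}_\ell(M)=\min_{x\neq 0}\{\operatorname{wt}(x)+\operatorname{wt}(M^{\top}x)\}$, where $\operatorname{wt}(x)$ counts the nonzero entries of $x$ over $\mathbb{F}_q$; an $n\times n$ matrix is MDS when $\mathcal{B}_d(M)=n+1$ (the optimum) and NMDS when $\mathcal{B}_d(M)=\mathcal{B}_\ell(M)=n$.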

In recent years, artificial intelligence (AI) products and services have been offered to potential users as pilots. The acceptance intention towards AI is greatly influenced by experience with current AI products and services, expectations for AI, and past experiences with ICT technology. This study aims to explore the factors that impact AI acceptance intention and to understand the process of its formation. The analysis results reveal that AI experience and past ICT experience affect AI acceptance intention in two ways. Through the direct path, higher AI experience and ICT experience are associated with a greater intention to accept AI. Additionally, there is an indirect path in which AI experience and ICT experience contribute to increased expectations for AI, and these expectations, in turn, elevate acceptance intention. Based on the findings, several recommendations are suggested for companies and public organizations planning to implement artificial intelligence in the future. It is crucial to manage the user experience of ICT services and of pilot AI products and services so as to deliver positive experiences. It is also essential to provide potential AI users with specific information about the features and benefits of AI products and services, enabling them to develop realistic expectations of AI technology.
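As a sketch of the direct/indirect path structure described above, the following illustrates a simple two-equation mediation analysis with statsmodels on synthetic data. The column names and the OLS estimator are assumptions made for illustration; the study's actual constructs and estimation method may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def mediation_paths(df):
    """Direct and indirect (via expectation) effects of AI experience on acceptance.

    Column names ('ai_exp', 'ict_exp', 'expectation', 'acceptance') are
    hypothetical; the study's actual measurement model may differ.
    """
    # Path a: experience -> expectations for AI
    m_expect = smf.ols("expectation ~ ai_exp + ict_exp", data=df).fit()
    # Paths b and c': expectations and experience -> acceptance intention
    m_accept = smf.ols("acceptance ~ expectation + ai_exp + ict_exp", data=df).fit()
    return {
        "direct_ai": m_accept.params["ai_exp"],                                      # c'
        "indirect_ai": m_expect.params["ai_exp"] * m_accept.params["expectation"],   # a*b
    }

# Usage with synthetic data (illustration only).
rng = np.random.default_rng(0)
n = 300
ai_exp, ict_exp = rng.normal(size=n), rng.normal(size=n)
expectation = 0.5 * ai_exp + 0.3 * ict_exp + rng.normal(size=n)
acceptance = 0.4 * expectation + 0.2 * ai_exp + rng.normal(size=n)
df = pd.DataFrame(dict(ai_exp=ai_exp, ict_exp=ict_exp,
                       expectation=expectation, acceptance=acceptance))
print(mediation_paths(df))
```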

This paper presents a comprehensive and practical guide for practitioners and end-users working with Large Language Models (LLMs) in their downstream natural language processing (NLP) tasks. We provide discussions and insights into the usage of LLMs from the perspectives of models, data, and downstream tasks. First, we offer an introduction and brief summary of current GPT- and BERT-style LLMs. Then, we discuss the influence of pre-training data, training data, and test data. Most importantly, we provide a detailed discussion of the use and non-use cases of large language models for various NLP tasks, such as knowledge-intensive tasks, traditional natural language understanding tasks, natural language generation tasks, emergent abilities, and considerations for specific tasks. We present various use cases and non-use cases to illustrate the practical applications and limitations of LLMs in real-world scenarios. We also try to understand the importance of data and the specific challenges associated with each NLP task. Furthermore, we explore the impact of spurious biases on LLMs and delve into other essential considerations, such as efficiency, cost, and latency, to ensure a comprehensive understanding of deploying LLMs in practice. This comprehensive guide aims to provide researchers and practitioners with valuable insights and best practices for working with LLMs, thereby enabling the successful implementation of these models in a wide range of NLP tasks. A curated and regularly updated list of practical guide resources for LLMs can be found at \url{//github.com/Mooler0410/LLMsPracticalGuide}.
