
Many systems and services rely on timing assumptions for performance and availability in critical aspects of their operation, such as timeouts for failure detectors or optimizations to concurrency control mechanisms. Many such assumptions rely on the ability of different components to communicate on time -- a delay in communication may trigger a failure detector or cause the system to fall back to a less-optimized execution mode. Unfortunately, these timing assumptions are often set with little regard to the actual communication guarantees of the underlying infrastructure -- in particular, the variability of communication delays between processes on different nodes/servers. This variability is especially pronounced for systems deployed in the public cloud, since the cloud is a utility shared by many users and organizations, making it prone to higher performance variance due to the noisy-neighbor problem. In this work, we present Cloud Latency Tester (CLT), a simple tool that can help measure the variability of communication delays between nodes so that engineers can set proper values for their timing assumptions. We also provide an observational analysis of running CLT in three major cloud providers and share the lessons we learned.
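The paper's tool is not reproduced here, but a minimal sketch conveys the kind of measurement CLT performs: repeatedly echo a small payload between two nodes and summarize the round-trip-time distribution. Everything below is illustrative, including the function name and the echo peer at `10.0.0.2:9000`; CLT's actual wire protocol and options may differ.

```python
# Minimal sketch of a round-trip latency probe in the spirit of CLT
# (illustrative only; assumes a peer that echoes each byte back).
import socket
import statistics
import time

def measure_rtts(host: str, port: int, n: int = 1000) -> list[float]:
    """Send n one-byte echo probes and record round-trip times in ms."""
    rtts = []
    with socket.create_connection((host, port)) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(n):
            t0 = time.perf_counter()
            sock.sendall(b"x")
            sock.recv(1)                      # peer echoes the byte back
            rtts.append((time.perf_counter() - t0) * 1000.0)
    return rtts

if __name__ == "__main__":
    rtts = measure_rtts("10.0.0.2", 9000)     # hypothetical peer address
    qs = statistics.quantiles(rtts, n=100)    # 99 percentile cut points
    print(f"p50={qs[49]:.3f}ms p99={qs[98]:.3f}ms max={max(rtts):.3f}ms")
```

Tail percentiles (p99, max), rather than the mean, are what matter when choosing failure-detector timeouts, since a timeout below the tail will fire on healthy nodes.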

Related Content

Modern multi-centre randomized controlled trials (MCRCTs) collect massive amounts of tabular data and are monitored intensively by humans for irregularities. We begin by empirically evaluating six modern machine-learning-based outlier detection algorithms on the task of identifying irregular data in 838 datasets from 7 real-world MCRCTs, with a total of 77,001 patients from over 44 countries. Our results reinforce key findings from prior work in the outlier detection literature on data from other domains. Existing algorithms often succeed at identifying irregularities without any supervision, with at least one algorithm exhibiting positive performance 70.6% of the time. However, performance across datasets varies substantially, with no single algorithm performing consistently well, motivating new techniques for unsupervised model selection or other means of aggregating potentially discordant predictions from multiple candidate models. We propose the Meta-learned Probabilistic Ensemble (MePE), a simple algorithm for aggregating the predictions of multiple unsupervised models, and show that it performs favourably compared to recent meta-learning approaches for outlier detection model selection. While meta-learning shows promise, small ensembles outperform all forms of meta-learning on average, a negative result that may guide the application of current outlier detection approaches in healthcare and other real-world domains.
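The abstract does not specify MePE's probabilistic weighting, but the ensemble idea it builds on can be sketched simply: rank-normalize the scores of several unsupervised detectors so they are comparable, then average. The two scikit-learn detectors below are stand-ins, not necessarily those evaluated in the paper.

```python
# Minimal sketch of aggregating unsupervised outlier scores from several
# detectors (illustrative; MePE's actual meta-learned weighting differs).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

def ensemble_outlier_scores(X: np.ndarray) -> np.ndarray:
    """Average rank-normalized scores so detectors on different scales agree."""
    raw = [
        # both negated so that higher always means "more anomalous"
        -IsolationForest(random_state=0).fit(X).score_samples(X),
        -LocalOutlierFactor().fit(X).negative_outlier_factor_,
    ]
    ranks = [np.argsort(np.argsort(s)) / (len(s) - 1) for s in raw]  # to [0, 1]
    return np.mean(ranks, axis=0)  # higher = more anomalous
```

Rank normalization is one common choice for making discordant detectors comparable; it also makes the ensemble insensitive to each detector's raw score scale.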

Prediction markets have long been known for their prediction accuracy. This study systematically explores the fundamental properties of prediction markets, addressing questions about their information aggregation process and the factors contributing to their remarkable efficacy. We propose a novel multivariate utility (MU)-based mechanism that unifies several existing automated market-making schemes. Using this mechanism, we establish convergence results for markets composed of risk-averse traders who have heterogeneous beliefs and repeatedly interact with the market maker. We demonstrate that the resulting limiting wealth distribution aligns with the Pareto efficient frontier defined by the utilities of all market participants. With the help of this result, we establish analytical and numerical results for the limiting price in different market models. Specifically, we show that the limiting price converges to the geometric mean of agent beliefs in exponential utility-based markets. In risk-measure-based markets, we construct a family of risk measures that satisfy the convergence criteria and prove that the price can converge to a unique level represented by the weighted power mean of agent beliefs. In broader markets with Constant Relative Risk Aversion (CRRA) utilities, we reveal that the limiting price can be characterized by systems of equations that encapsulate agent beliefs, risk parameters, and wealth. Despite the potential impact of traders' trading sequences on the limiting price, we establish a price invariance result for markets with a large trader population. Using this result, we propose an efficient approximation scheme for the limiting price.
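As a quick numerical illustration of the limiting-price formulas stated above, the snippet below evaluates the geometric mean of a set of agent beliefs (the exponential-utility case) and a weighted power mean, M_r(p; w) = (sum_i w_i p_i^r)^(1/r) (the risk-measure-based case). The beliefs, weights, and exponent are made up for illustration.

```python
# Worked check of the limiting-price formulas stated above (illustrative;
# all numbers are hypothetical, not taken from the paper).
import numpy as np

beliefs = np.array([0.60, 0.70, 0.80])   # hypothetical agent beliefs
weights = np.array([0.2, 0.3, 0.5])      # hypothetical weights summing to 1

# Exponential-utility case: geometric mean of beliefs.
geo_mean = np.exp(np.mean(np.log(beliefs)))

def weighted_power_mean(p: np.ndarray, w: np.ndarray, r: float) -> float:
    """M_r(p; w) = (sum_i w_i * p_i**r) ** (1/r)."""
    return float((w @ p**r) ** (1.0 / r))

print(f"geometric mean      : {geo_mean:.4f}")
print(f"weighted power mean : {weighted_power_mean(beliefs, weights, -1.0):.4f}")
```

Note that the power mean recovers familiar special cases: r = 1 gives the weighted arithmetic mean, r = -1 the weighted harmonic mean, and r -> 0 the weighted geometric mean.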

For approximate inference in high-dimensional generalized linear models (GLMs), the performance of an estimator may degrade significantly when there is a mismatch between the postulated model and the ground truth. In mismatched GLMs with rotation-invariant measurement matrices, Kabashima et al. proved that vector approximate message passing (VAMP) computes exactly the optimal estimator if the replica symmetry (RS) ansatz is valid, but that it becomes inappropriate if RS breaking (RSB) appears. Although the one-step RSB (1RSB) saddle-point equations were given for the optimal estimator, the question remains: how can the 1RSB prediction be achieved? This paper answers the question by proposing a new algorithm, vector approximate survey propagation (VASP). VASP derives from a reformulation of Kabashima's extremum conditions, which links the theoretical equations to survey propagation in vector form and, finally, to the algorithm itself. VASP has a complexity as low as VAMP's, while embracing VAMP as a special case. The state evolution (SE) derived for VASP captures precisely the per-iteration behavior of the simulated algorithm, and the SE's fixed-point equations match Kabashima's 1RSB prediction exactly, which indicates that VASP can achieve optimal performance even in a model-mismatched setting with 1RSB. Simulation results confirm that VASP outperforms many state-of-the-art algorithms.
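For context, since VASP embraces VAMP as a special case, it may help to recall the standard VAMP recursion for y = Ax + w with noise precision gamma_w (Rangan, Schniter, and Fletcher), which alternates a prior denoiser g_1 with an LMMSE stage. The sketch below is plain VAMP only; VASP's vector survey-propagation messages generalize this scheme and are not shown.

```latex
% Standard VAMP recursion, which VASP generalizes; g_1 is the prior
% denoiser, <g_1'> its average divergence, and the second stage is LMMSE.
\begin{align*}
  \hat{x}_1 &= g_1(r_1, \gamma_1), &
  \alpha_1 &= \langle g_1'(r_1, \gamma_1) \rangle, \\
  r_2 &= \frac{\hat{x}_1 - \alpha_1 r_1}{1 - \alpha_1}, &
  \gamma_2 &= \gamma_1 \, \frac{1 - \alpha_1}{\alpha_1}, \\
  \hat{x}_2 &= \left(\gamma_w A^{\mathsf{T}} A + \gamma_2 I\right)^{-1}
               \left(\gamma_w A^{\mathsf{T}} y + \gamma_2 r_2\right), &
  \alpha_2 &= \frac{\gamma_2}{N}\,
              \operatorname{tr}\!\left[\left(\gamma_w A^{\mathsf{T}} A
              + \gamma_2 I\right)^{-1}\right], \\
  r_1 &\leftarrow \frac{\hat{x}_2 - \alpha_2 r_2}{1 - \alpha_2}, &
  \gamma_1 &\leftarrow \gamma_2 \, \frac{1 - \alpha_2}{\alpha_2}.
\end{align*}
```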

Many NLP researchers are experiencing an existential crisis triggered by the astonishing success of ChatGPT and other systems based on large language models (LLMs). After such a disruptive change to our understanding of the field, what is left to do? Taking a historical lens, we look for guidance from the first era of LLMs, which began in 2005 with large $n$-gram models for machine translation. We identify durable lessons from the first era, and more importantly, we identify evergreen problems where NLP researchers can continue to make meaningful contributions in areas where LLMs are ascendant. Among these lessons, we discuss the primacy of hardware advancement in shaping the availability and importance of scale, as well as the urgent challenge of quality evaluation, both automated and human. We argue that disparities in scale are transient and that researchers can work to reduce them; that data, rather than hardware, is still a bottleneck for many meaningful applications; that meaningful evaluation informed by actual use is still an open problem; and that there is still room for speculative approaches.

The accurate evaluation of the quality of driving behavior is crucial for optimizing and implementing autonomous driving technology in practice. However, there is currently no comprehensive understanding of what constitutes good driving behavior. In this paper, we sought to understand driving behaviors from the perspectives of both drivers and passengers. We invited 10 expert drivers and 14 novice drivers to complete a 5.7-kilometer urban road driving task. After the experiments, we conducted semi-structured interviews with the 24 drivers and 48 of their passengers (two passengers per driver). Through analysis of the interview data, we identified the logic passengers use to assess driving behaviors, the considerations and efforts drivers make to achieve good driving, and the gaps between these two perspectives. Our research provides insights into the systematic evaluation of autonomous driving and design implications for future autonomous vehicles.

We propose a Holistic Return on Ethics (HROE) framework for understanding the return on organizational investments in artificial intelligence (AI) ethics efforts. This framework is useful for organizations that wish to quantify the return for their investment decisions. The framework identifies the direct economic returns of such investments, the indirect paths to return through intangibles associated with organizational reputation, and real options associated with capabilities. The holistic framework ultimately provides organizations with the competency to employ and justify AI ethics investments.

The pursuit of fairness in machine learning models has emerged as a critical research challenge in applications ranging from bank loan approval to face detection. Despite the widespread adoption of artificial intelligence algorithms across various domains, concerns persist regarding the presence of biases and discrimination within these models. To address this pressing issue, this study introduces a novel method called "The Fairness Stitch" (TFS) to enhance fairness in deep learning models. This method jointly combines model stitching and training while incorporating fairness constraints. We assess the effectiveness of our proposed method by conducting a comprehensive evaluation on two well-known datasets, CelebA and UTKFace, and systematically compare the performance of our approach with the existing baseline method. Our findings reveal a notable improvement in achieving a balanced trade-off between fairness and performance, highlighting the promising potential of our method to address bias-related challenges and foster equitable outcomes in machine learning models. This paper challenges the conventional wisdom about the effectiveness of the last layer in deep learning models for de-biasing.
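The abstract does not give TFS's exact objective; the following is a minimal sketch of the general idea of stitching a trainable layer between frozen model parts while penalizing a fairness gap. The demographic-parity penalty, the stitch placement, and all names are illustrative assumptions, not the paper's specification.

```python
# Minimal sketch of training a "stitch" layer under a fairness penalty
# (illustrative; TFS's actual objective and stitch placement may differ).
import torch
import torch.nn as nn

class StitchedModel(nn.Module):
    def __init__(self, backbone: nn.Module, head: nn.Module, dim: int):
        super().__init__()
        self.backbone, self.head = backbone, head
        for p in self.backbone.parameters():
            p.requires_grad = False          # freeze: only the stitch trains
        for p in self.head.parameters():
            p.requires_grad = False
        self.stitch = nn.Linear(dim, dim)    # trainable stitching layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.stitch(self.backbone(x)))

def fairness_gap(logits: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
    """Demographic-parity gap: difference in mean positive score by group."""
    p = torch.sigmoid(logits).squeeze(-1)
    return (p[group == 0].mean() - p[group == 1].mean()).abs()

# Hypothetical training objective: task loss plus weighted fairness penalty.
# loss = nn.functional.binary_cross_entropy_with_logits(logits, y) \
#        + lam * fairness_gap(logits, group)
```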

Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents--computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent's experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty-five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture--observation, planning, and reflection--each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.
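As a rough sketch of the retrieval component described above, one can score each memory by a combination of recency, importance, and relevance and return the top-k. The equal weighting, hourly decay constant, and data layout below are illustrative assumptions rather than the paper's exact design.

```python
# Minimal sketch of memory retrieval: score each memory by recency,
# importance, and relevance, then take the top-k (illustrative constants).
import numpy as np

def retrieve(memories, query_vec: np.ndarray, now: float,
             k: int = 5, decay: float = 0.995) -> list[str]:
    """memories: list of (text, embedding, importance in [0,1], last_access);
    embeddings are numpy vectors; now/last_access are UNIX timestamps."""
    scored = []
    for text, emb, importance, last_access in memories:
        recency = decay ** ((now - last_access) / 3600.0)   # hourly decay
        relevance = float(np.dot(emb, query_vec) /
                          (np.linalg.norm(emb) * np.linalg.norm(query_vec)))
        scored.append((recency + importance + relevance, text))
    return [text for _, text in sorted(scored, reverse=True)[:k]]
```

Retrieved memories are then placed into the language model's prompt, so the scoring function effectively decides which slice of the agent's experience conditions its next action.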

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.

Compared with the cheap addition operation, multiplication is of much higher computational complexity. The widely used convolutions in deep neural networks are exactly cross-correlations measuring the similarity between input features and convolution filters, which involves massive multiplications between float values. In this paper, we present adder networks (AdderNets) to trade these massive multiplications in deep neural networks, especially convolutional neural networks (CNNs), for much cheaper additions to reduce computation costs. In AdderNets, we take the $\ell_1$-norm distance between filters and input features as the output response. The influence of this new similarity measure on the optimization of neural networks is thoroughly analyzed. To achieve better performance, we develop a special back-propagation approach for AdderNets by investigating the full-precision gradient. We then propose an adaptive learning rate strategy to enhance the training procedure of AdderNets according to the magnitude of each neuron's gradient. As a result, the proposed AdderNets achieve 74.9% Top-1 accuracy and 91.7% Top-5 accuracy using ResNet-50 on the ImageNet dataset without any multiplication in the convolution layers.
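A minimal sketch of the core adder operation follows: replace the multiply-accumulate of cross-correlation with the negative $\ell_1$ distance between each filter and each input patch. This forward pass ignores the paper's special back-propagation and adaptive learning rate, and the helper name is illustrative.

```python
# Minimal sketch of an adder layer's forward pass: output = negative l1
# distance between filters and input patches, replacing multiply-accumulate
# cross-correlation (illustrative; omits the paper's custom backward pass).
import torch
import torch.nn.functional as F

def adder2d(x: torch.Tensor, weight: torch.Tensor,
            stride: int = 1, padding: int = 0) -> torch.Tensor:
    """x: (N, C, H, W); weight: (O, C, kH, kW) -> output (N, O, H', W')."""
    n, c, h, w = x.shape
    o, _, kh, kw = weight.shape
    # Extract sliding patches: (N, C*kh*kw, L) where L = H' * W'.
    patches = F.unfold(x, (kh, kw), stride=stride, padding=padding)
    flt = weight.view(o, -1)                        # (O, C*kh*kw)
    # Broadcast to (N, O, C*kh*kw, L), then sum |patch - filter| over dim 2.
    dist = (patches.unsqueeze(1) - flt.unsqueeze(0).unsqueeze(-1)).abs().sum(2)
    h_out = (h + 2 * padding - kh) // stride + 1
    w_out = (w + 2 * padding - kw) // stride + 1
    return (-dist).view(n, o, h_out, w_out)        # higher = more similar
```

Since only subtractions, absolute values, and additions appear, the inner loop avoids float multiplications entirely, which is the source of the claimed computation savings.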
