
Open software development gives software organizations access to a virtually unlimited supply of online resources. This supply is a pool of unknown workers who work from different locations and time zones and are interested in performing various types of tasks. In such a dynamic and competitive environment, improper task execution leads to tasks that receive no registrations, no submissions, or only low-quality submissions, owing to unforeseen factors such as uncertainty in workers' behavior and performance. Therefore, to ensure the effectiveness of open software development, there is a need for better understanding of, and visibility into, the characteristics associated with attracting reliable workers, eliciting qualified submissions, and reducing task failure.


In this paper, we enable automated property verification of deliberative components in robot control architectures. We focus on formalizing the execution context of Behavior Trees (BTs) to provide a scalable, yet formally grounded, methodology for runtime verification and for preventing unexpected robot behaviors. To this end, we consider a message-passing model that accommodates both synchronous and asynchronous composition of parallel components, in which BTs and other components execute and interact according to the communication patterns commonly adopted in robotic software architectures. We introduce a formal property specification language to encode requirements and build runtime monitors. We performed a set of experiments, both in simulation and on a real robot, demonstrating the feasibility of our approach in a realistic application and its integration in a typical robot software architecture. We also provide an OS-level virtualization environment to reproduce the experiments in the simulated scenario.
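
The following is a minimal sketch of what a runtime monitor over a message-passing robot architecture might look like. It checks a simple bounded-response property ("every trigger message is followed by a response within N messages"); the message format, topic names, and property are illustrative assumptions, not the paper's specification language or tooling.

```python
# Minimal runtime-monitor sketch over an assumed message stream.
# Topics such as "/bt/start_grasp" are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Message:
    topic: str      # e.g. "/bt/start_grasp", "/arm/feedback"
    payload: str

class BoundedResponseMonitor:
    """Flags a violation as soon as the bounded-response property fails."""

    def __init__(self, trigger: str, response: str, bound: int):
        self.trigger, self.response, self.bound = trigger, response, bound
        self._pending = None  # messages observed since the last open trigger

    def observe(self, msg: Message) -> str:
        if msg.topic == self.trigger and self._pending is None:
            self._pending = 0                      # open an obligation
        elif msg.topic == self.response and self._pending is not None:
            self._pending = None                   # obligation discharged
        elif self._pending is not None:
            self._pending += 1
            if self._pending > self.bound:
                return "violated"
        return "ok"

# Usage: feed every message passing through the architecture to the monitor.
monitor = BoundedResponseMonitor("/bt/start_grasp", "/arm/grasp_done", bound=5)
for m in [Message("/bt/start_grasp", ""), Message("/arm/feedback", "moving"),
          Message("/arm/grasp_done", "")]:
    print(monitor.observe(m))
```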

Deep learning recommendation models (DLRMs) are used across many business-critical services at Facebook and are the single largest AI application in terms of infrastructure demand in its data centers. In this paper we discuss the SW/HW co-designed solution for high-performance distributed training of large-scale DLRMs. We introduce a high-performance, scalable software stack based on PyTorch and pair it with the new evolution of the Zion platform, namely ZionEX. We demonstrate the capability to train very large DLRMs with up to 12 trillion parameters and show that we can attain a 40X speedup in time to solution over previous systems. We achieve this by (i) designing the ZionEX platform with a dedicated scale-out network, provisioned with high bandwidth, optimal topology, and efficient transport; (ii) implementing an optimized PyTorch-based training stack supporting both model and data parallelism; (iii) developing sharding algorithms capable of hierarchically partitioning the embedding tables along row and column dimensions and load-balancing them across multiple workers; (iv) adding high-performance core operators while retaining flexibility to support optimizers with fully deterministic updates; and (v) leveraging reduced-precision communications, a multi-level memory hierarchy (HBM+DDR+SSD), and pipelining. Furthermore, we develop and briefly comment on the distributed data ingestion and other supporting services that are required for robust and efficient end-to-end training in production environments.
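
To make the embedding-table sharding idea concrete, here is a minimal sketch of a greedy, load-balanced placement in which oversized tables are first row-partitioned and the resulting shards are assigned to the least-loaded worker. The cost model (rows times embedding dimension) and the greedy heuristic are illustrative assumptions, not the sharding algorithms used in the paper.

```python
# Illustrative greedy sharding sketch; table names and budgets are made up.
import heapq

def shard_tables(tables, num_workers, max_rows_per_shard=1_000_000):
    """tables: list of (name, num_rows, dim). Returns worker -> list of shards."""
    shards = []
    for name, rows, dim in tables:
        parts = -(-rows // max_rows_per_shard)          # ceil division
        for p in range(parts):                          # row-partition big tables
            part_rows = min(max_rows_per_shard, rows - p * max_rows_per_shard)
            shards.append((part_rows * dim, f"{name}[{p}]"))
    # Longest-processing-time placement: biggest shard goes to lightest worker.
    shards.sort(reverse=True)
    heap = [(0, w, []) for w in range(num_workers)]
    heapq.heapify(heap)
    for cost, shard in shards:
        load, w, assigned = heapq.heappop(heap)
        assigned.append(shard)
        heapq.heappush(heap, (load + cost, w, assigned))
    return {w: assigned for _, w, assigned in heap}

print(shard_tables([("ads", 3_000_000, 64), ("users", 800_000, 128)], num_workers=4))
```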

Developing sustainable scientific software for the needs of the scientific community requires expertise in both software engineering and domain science. This can be challenging due to the unique needs of scientific software, the insufficient resources for modern software engineering practices in the scientific community, and the complexity of evolving scientific contexts for developers. These difficulties can be reduced if scientists and developers collaborate. We present a case study wherein scientists from the SuperNova Early Warning System collaborated with software developers from the Scalable Cyberinfrastructure for Multi-Messenger Astrophysics project. The collaboration addressed the difficulties of scientific software development, but presented additional risks to each team. For the scientists, there was a concern about relying on external systems and lacking control over the development process. For the developers, there was a risk in supporting the needs of a user group while maintaining core development. We mitigated these issues by using an Agile Scrum framework to orchestrate the collaboration. This promoted communication and cooperation, ensuring that the scientists had an active role in development while allowing the developers to quickly evaluate and implement the scientists' software requirements. While each system was still at an early stage, the collaboration provided benefits for each group: the scientists kick-started their development by using an existing platform, and the developers used the scientists' use case to improve their systems. This case study suggests that scientists and software developers can avoid some of the difficulties of scientific computing by collaborating, and can address emergent concerns using Agile Scrum methods.

Simulation experiments are typically conducted repeatedly during the model development process, for example, to re-validate whether a behavioral property still holds after several model changes. Approaches for automatically reusing and generating simulation experiments can support modelers in conducting simulation studies in a more systematic and effective manner. They rely on explicit experiment specifications and, so far, on user interaction for initiating the reuse. As a result, they are constrained to supporting the reuse of simulation experiments in a specific setting. Our approach goes one step further by automatically identifying and adapting the experiments to be reused for a variety of scenarios. To achieve this, we exploit provenance graphs of simulation studies, which provide valuable information about previous modeling and experimenting activities and contain meta-information about the different entities that were used or produced during the simulation study. We define provenance patterns and associate them with a semantics, which allows us to interpret the different activities and to construct transformation rules for provenance graphs. Our approach is implemented in a Reuse and Adapt framework for Simulation Experiments (RASE), which can interface with various modeling and simulation tools. In the case studies, we demonstrate the utility of our framework for a) the repeated sensitivity analysis of an agent-based model of migration routes, and b) the cross-validation of two models of a cell signaling pathway.
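
As a minimal illustration of provenance-pattern matching, the sketch below finds experiments that used a model which has since been revised, flagging them as candidates for reuse. The node/edge schema, relation names, and the pattern itself are illustrative assumptions, not RASE's actual pattern language or transformation rules.

```python
# Hypothetical provenance graph: typed nodes plus "used"/"wasDerivedFrom" edges.
nodes = {
    "m1": {"type": "Model"}, "m2": {"type": "Model"},
    "e1": {"type": "Experiment", "spec": "sensitivity_analysis.yml"},
}
edges = [
    ("e1", "used", "m1"),           # experiment e1 was run on model m1
    ("m2", "wasDerivedFrom", "m1"), # m2 is a later revision of m1
]

def experiments_to_rerun(nodes, edges):
    """Return (experiment, old_model, new_model) triples matching the pattern."""
    derived = {src: dst for src, rel, dst in edges if rel == "wasDerivedFrom"}
    revised = {old: new for new, old in derived.items()}  # old model -> revision
    matches = []
    for exp, rel, model in edges:
        if rel == "used" and nodes[exp]["type"] == "Experiment" and model in revised:
            matches.append((exp, model, revised[model]))
    return matches

print(experiments_to_rerun(nodes, edges))  # [('e1', 'm1', 'm2')]
```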

The construction of effective Recommender Systems (RS) is a complex process, mainly due to the nature of RSs, which involve large-scale software systems and human interactions. Iterative development processes require a deep understanding of the current baseline as well as the ability to estimate the impact of changes in multiple variables of interest. Simulations are well suited to address both challenges and can potentially lead to a high-velocity construction process, a fundamental requirement in commercial contexts. Recently, there has been significant interest in RS Simulation Platforms, which allow RS developers to easily craft simulated environments where their systems can be analysed. In this work we discuss how simulations help to increase velocity, review the literature on RS Simulation Platforms, analyse strengths and gaps, and distill a set of guiding principles for the design of RS Simulation Platforms that we believe will maximize the velocity of iterative RS construction processes.

Recent developments in the mobile app industry have resulted in various types of mobile apps, each targeting a different need and a specific audience. Consequently, users access distinct apps to complete their information-seeking tasks, using various apps not only separately but also collaboratively in the same session to achieve a single goal. Recent work has argued for a unified mobile search system that would act as a metasearch layer on users' mobile devices: the system would identify the target apps for the user's query, submit the query to those apps, and present the results to the user in a unified way. In this work, we aim to deepen our understanding of how users access information on their mobile phones by conducting an extensive analysis of various aspects of the search process. In particular, we study the effect of task type and user demographics on how users interact with mobile apps. Our findings reveal trends and patterns that can inform the design of a more effective mobile information access environment.

Edge-exchangeable probabilistic network models generate edges as an i.i.d. sequence from a discrete measure, providing a simple means for statistical inference of latent network properties. The measure is often constructed using the self-product of a realization from a Bayesian nonparametric (BNP) discrete prior; but unlike in standard BNP models, the self-product measure prior is not conjugate to the likelihood, hindering the development of exact simulation and inference algorithms. Approximation via finite truncation of the discrete measure is a straightforward alternative, but incurs an unknown approximation error. In this paper, we develop methods for forward simulation and posterior inference in random self-product-measure models based on truncation, and provide theoretical guarantees on the quality of the results as a function of the truncation level. The techniques we present are general and extend to the broader class of discrete Bayesian nonparametric models.
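
The sketch below shows what truncated forward simulation of such an edge-exchangeable model can look like: atom weights come from a stick-breaking (Dirichlet-process style) construction truncated at K atoms, and each edge is an i.i.d. draw of a vertex pair from the normalized self-product of those weights. The choice of prior, the truncation level, and the renormalization step are illustrative assumptions rather than the paper's exact constructions or guarantees.

```python
# Truncated stick-breaking weights and i.i.d. edge draws from their self-product.
import numpy as np

def truncated_stick_breaking(alpha, K, rng):
    betas = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    w = betas * remaining
    return w / w.sum()          # renormalize to absorb the truncation error

def simulate_edges(num_edges, alpha=2.0, K=50, seed=0):
    rng = np.random.default_rng(seed)
    w = truncated_stick_breaking(alpha, K, rng)
    # Self-product measure: P(edge = (i, j)) = w[i] * w[j].
    senders = rng.choice(K, size=num_edges, p=w)
    receivers = rng.choice(K, size=num_edges, p=w)
    return list(zip(senders.tolist(), receivers.tolist()))

print(simulate_edges(5))
```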

The difficulty in specifying rewards for many real-world problems has led to an increased focus on learning rewards from human feedback, such as demonstrations. However, there are often many different reward functions that explain the human feedback, leaving agents with uncertainty over what the true reward function is. While most policy optimization approaches handle this uncertainty by optimizing for expected performance, many applications demand risk-averse behavior. We derive a novel policy gradient-style robust optimization approach, PG-BROIL, that optimizes a soft-robust objective that balances expected performance and risk. To the best of our knowledge, PG-BROIL is the first policy optimization algorithm robust to a distribution of reward hypotheses which can scale to continuous MDPs. Results suggest that PG-BROIL can produce a family of behaviors ranging from risk-neutral to risk-averse and outperforms state-of-the-art imitation learning algorithms when learning from ambiguous demonstrations by hedging against uncertainty, rather than seeking to uniquely identify the demonstrator's reward function.
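
To illustrate the soft-robust idea, the sketch below evaluates a policy's return under a posterior sample of reward hypotheses and blends the expected return with the conditional value at risk (CVaR) of that return distribution. The weighting scheme, CVaR estimator, and the random returns used in the example are illustrative; see the paper for the exact PG-BROIL objective and its policy-gradient estimator.

```python
# Soft-robust objective over sampled reward hypotheses: lam * mean + (1 - lam) * CVaR.
import numpy as np

def soft_robust_objective(returns_per_hypothesis, lam=0.5, alpha=0.95):
    """returns_per_hypothesis: the policy's return under each sampled reward.
    lam = 1 recovers the risk-neutral expectation; lam = 0 is pure CVaR."""
    r = np.asarray(returns_per_hypothesis, dtype=float)
    expected = r.mean()
    # CVaR_alpha: mean of the worst (1 - alpha) fraction of returns.
    k = max(1, int(np.ceil((1.0 - alpha) * len(r))))
    cvar = np.sort(r)[:k].mean()
    return lam * expected + (1.0 - lam) * cvar

# Example: returns of one policy under 100 posterior reward samples (stand-in data).
rng = np.random.default_rng(0)
print(soft_robust_objective(rng.normal(10.0, 3.0, size=100)))
```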

State-of-the-art recommendation algorithms -- especially the collaborative filtering (CF) based approaches with shallow or deep models -- usually work with various unstructured information sources, such as textual reviews, visual images, and various kinds of implicit or explicit feedback. Although structured knowledge bases were considered in content-based approaches, they have largely been neglected recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When explicit knowledge about users and items is considered for recommendation, the system can provide highly customized recommendations based on users' historical behaviors. A great challenge in using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge base embedding shed light on this problem, making it possible to learn user and item representations while preserving the structure of their relationship with external knowledge. In this work, we propose to reason over knowledge base embeddings for personalized recommendation. Specifically, we propose a knowledge base representation learning approach to embed heterogeneous entities for recommendation. Experimental results on a real-world dataset verify the superior performance of our approach compared with state-of-the-art baselines.
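
As a minimal, translation-style illustration of scoring items through a knowledge base embedding, the sketch below ranks items by how close their embeddings are to a user embedding translated by a "purchase" relation vector. The random embeddings, the relation name, and the TransE-style scoring function are assumptions for illustration, not necessarily the exact model proposed in the work.

```python
# TransE-style scoring sketch: score(u, i) = -|| e_u + r_purchase - e_i ||.
import numpy as np

rng = np.random.default_rng(0)
dim, num_items = 16, 5
user_emb = rng.normal(size=dim)              # stand-in for a learned user embedding
purchase_rel = rng.normal(size=dim)          # stand-in for a learned relation vector
item_emb = rng.normal(size=(num_items, dim)) # stand-ins for learned item embeddings

def recommend(user_emb, purchase_rel, item_emb, top_k=3):
    # Items whose embeddings are closest to (user + relation) score highest.
    scores = -np.linalg.norm(user_emb + purchase_rel - item_emb, axis=1)
    return np.argsort(-scores)[:top_k]

print(recommend(user_emb, purchase_rel, item_emb))
```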

Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods makes it possible to get the best of both worlds. We demonstrate that both off- and on-policy methods benefit from this approach through an experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete-action environments as well as continuous control tasks. Our results show that RL with parameter noise learns more efficiently than both traditional RL with action-space noise and evolutionary strategies individually.
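
The sketch below shows the core of parameter-space exploration: before each episode, a copy of the policy's parameters is perturbed with Gaussian noise and the perturbed policy acts deterministically for the whole episode, instead of injecting noise into each action. The linear policy, fixed noise scale, and stand-in observations are simplifications; the paper additionally adapts the noise scale to match a target action-space distance.

```python
# Parameter-space noise sketch: one parameter perturbation per episode.
import numpy as np

class LinearPolicy:
    def __init__(self, obs_dim, act_dim, rng):
        self.w = rng.normal(scale=0.1, size=(act_dim, obs_dim))

    def act(self, obs):
        return self.w @ obs

def perturbed_copy(policy, sigma, rng):
    noisy = LinearPolicy(policy.w.shape[1], policy.w.shape[0], rng)
    noisy.w = policy.w + rng.normal(scale=sigma, size=policy.w.shape)
    return noisy

rng = np.random.default_rng(0)
policy = LinearPolicy(obs_dim=4, act_dim=2, rng=rng)
for episode in range(3):
    explorer = perturbed_copy(policy, sigma=0.05, rng=rng)  # one draw per episode
    obs = rng.normal(size=4)                                # stand-in for env.reset()
    action = explorer.act(obs)      # consistent exploration within the episode
    print(episode, action)
```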
