
We present IOHexperimenter, the experimentation module of the IOHprofiler project, which aims at providing an easy-to-use and highly customizable toolbox for benchmarking iterative optimization heuristics such as evolutionary and genetic algorithms, local search algorithms, Bayesian optimization techniques, etc. IOHexperimenter can be used as a stand-alone tool or as part of a benchmarking pipeline that uses other components of IOHprofiler such as IOHanalyzer, the module for interactive performance analysis and visualization. IOHexperimenter provides an efficient interface between optimization problems and their solvers while allowing for granular logging of the optimization process. These logs are fully compatible with existing tools for interactive data analysis, which significantly speeds up the deployment of a benchmarking pipeline. The main components of IOHexperimenter are the environment to build customized problem suites and the various logging options that allow users to steer the granularity of the data records.
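
As a minimal sketch of the interface described above, the following runs a random search on a BBOB problem through the Python bindings of IOHexperimenter (the `ioh` package). Exact signatures vary between package versions, and the folder and algorithm names are placeholders.

```python
# Minimal sketch, assuming the `ioh` Python bindings of IOHexperimenter
# (pip install ioh); exact signatures may differ between versions.
import random
import ioh

# Fetch a BBOB problem: 5-dimensional Sphere, instance 1.
problem = ioh.get_problem("Sphere", instance=1, dimension=5)

# Attach an Analyzer logger; the resulting folder can be loaded
# directly into IOHanalyzer for interactive inspection.
logger = ioh.logger.Analyzer(
    root="ioh_data",              # placeholder output directory
    folder_name="random_search",  # placeholder run folder
    algorithm_name="RandomSearch",
)
problem.attach_logger(logger)

# A solver only needs the problem's call interface: x -> f(x).
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(problem.meta_data.n_variables)]
    problem(x)  # each evaluation is logged automatically

problem.reset()  # flush the log and reset evaluation counters
```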

Related Content

This study introduces an ability-based method for personalized keyboard generation, wherein an individual's own movement and human-computer interaction data are used to automatically compute a personalized virtual keyboard layout. Our approach integrates a multidirectional point-select task to characterize cursor control over time, distance, and direction. This characterization is then used automatically to derive a computationally efficient keyboard layout that prioritizes each user's movement abilities by capturing directional constraints and preferences. We evaluated our approach in a study with 16 participants without motor impairments, using inertial sensing and facial electromyography as an access method; the personalized keyboard yielded significantly higher communication rates (52.0 bits/min) than a generically optimized keyboard (47.9 bits/min). Our results demonstrate that an individual's movement abilities can be characterized effectively to design a personalized keyboard that improves communication. This work underscores the importance of accounting for a user's motor abilities when designing virtual interfaces.
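
The core idea lends itself to a compact illustration. The sketch below is not the paper's method but a hypothetical stand-in: a per-direction movement-cost model (the `direction_cost` weights would, in practice, be fitted from the point-select task data) drives a greedy assignment of frequent characters to low-cost key slots.

```python
# Illustrative sketch (not the paper's exact method): place frequent
# characters on key slots with low predicted movement cost, where cost
# depends on distance and direction from a home point.
import math

# Hypothetical per-direction cost weights (8 compass sectors), standing
# in for weights fitted from a user's multidirectional point-select data.
direction_cost = [1.0, 1.3, 1.1, 1.6, 1.2, 1.5, 1.0, 1.4]

def slot_cost(x, y, home=(0.0, 0.0)):
    """Movement cost of a key slot: distance scaled by direction weight."""
    dx, dy = x - home[0], y - home[1]
    dist = math.hypot(dx, dy)
    sector = int(((math.atan2(dy, dx) + math.pi) / (2 * math.pi)) * 8) % 8
    return dist * direction_cost[sector]

# Key slots on a 6x5 grid; characters ordered by descending frequency.
slots = [(c, r) for r in range(5) for c in range(6)]
chars = "etaoinshrdlcumwfgypbvkjxqz_.,'"

# Greedy assignment: most frequent character -> cheapest remaining slot.
layout = dict(zip(chars, sorted(slots, key=lambda s: slot_cost(*s))))
print(layout["e"], layout["q"])  # frequent chars land near the home point
```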

Intelligent techniques are needed to automate the allocation of computing resources in Open Radio Access Networks (O-RAN), in order to conserve computing resources, increase their utilization, and reduce delay. However, the existing formulation of this resource allocation problem is unsuitable, as it defines the capacity utility of resources inappropriately and tends to incur substantial delay. Moreover, previous attempts to solve it rely on greedy search, which can become stuck in local optima. We therefore propose a new formulation that better describes the problem. In addition, we design an evolutionary algorithm (EA), a well-known global-search metaheuristic, tailored to the new formulation, to find resource allocation schemes that proactively and dynamically deploy computing resources for processing upcoming traffic. Experimental studies on several real-world datasets, and on newly generated artificial datasets with properties beyond those of the real-world data, demonstrate the significant superiority of the proposed EA over a baseline greedy algorithm under different parameter settings. Further experiments compare the proposed EA with two variants to examine the impact of different algorithmic choices.
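
As a hedged illustration of the algorithmic template (not the paper's actual formulation or operators), the sketch below evolves assignments of traffic demands to servers with a (mu + lambda) loop; `TRAFFIC`, `CAPACITY`, and the fitness weights are invented toy values.

```python
# Toy evolutionary algorithm: assign traffic units to computing servers
# so predicted load stays under capacity while using few servers.
import random

TRAFFIC = [4, 7, 2, 9, 5, 3, 6]  # hypothetical per-cell traffic demand
CAPACITY = 10                    # hypothetical per-server capacity
N_SERVERS = 4

def fitness(assign):
    """Penalize over-capacity load (a proxy for delay) and servers used."""
    load = [0] * N_SERVERS
    for demand, srv in zip(TRAFFIC, assign):
        load[srv] += demand
    overload = sum(max(0, l - CAPACITY) for l in load)
    used = sum(1 for l in load if l > 0)
    return 10 * overload + used  # lower is better

def mutate(assign):
    child = list(assign)
    child[random.randrange(len(child))] = random.randrange(N_SERVERS)
    return child

# (mu + lambda) evolution loop with mutation only.
pop = [[random.randrange(N_SERVERS) for _ in TRAFFIC] for _ in range(20)]
for _ in range(200):
    pop += [mutate(random.choice(pop)) for _ in range(20)]
    pop = sorted(pop, key=fitness)[:20]

print(pop[0], fitness(pop[0]))
```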

Graph transformation formalisms have proven to be suitable tools for the modelling of chemical reactions. They are well established in theoretical studies and, increasingly, in practical applications in chemistry. The latter is made feasible by programming frameworks that make the formalisms executable. Applying such frameworks to large networks of chemical reactions, however, poses unique computational challenges. One such challenge is the inherent combinatorial nature of the graphs involved: the graphs consist of many connected components, each representing an individual molecule. While existing methods for implementing graph transformations can be applied to such graphs, the combinatorics of constructing graph matches quickly becomes a computational bottleneck as the chemical reaction network grows. In this contribution, we develop a new method for enumerating graph transformation rule applications, designed to improve performance in these scenarios. The method constructs graph matches component-wise, in an iterative fashion, allowing redundant applications to be detected and pruned early. We further extend the algorithm with an efficient heuristic, based on local symmetries of the graphs, which allows us to detect and discard isomorphic applications early. Finally, we conduct chemical network generation experiments on real-life as well as synthetic data and compare against the state of the art in the field.
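
To make the pruning idea concrete, here is a heavily simplified sketch (components reduced to labels, matching reduced to label equality, all molecule names invented): identical pattern components are forced into increasing host indices, so each unordered combination of host molecules is enumerated exactly once rather than once per permutation.

```python
# Simplified component-wise match enumeration: pattern components are
# matched one at a time, and branches that could only reproduce an
# already-seen combination are pruned early.

# Hypothetical host graph: molecules keyed by a label (their "type").
host_components = ["H2O", "H2O", "CO2", "H2", "H2"]

def enumerate_matches(pattern, chosen=(), start_per_label=None):
    """Yield index tuples of host components matching the pattern labels."""
    if start_per_label is None:
        start_per_label = {}
    if not pattern:
        yield chosen
        return
    label, rest = pattern[0], pattern[1:]
    # Symmetry pruning: identical pattern components are forced into
    # increasing host indices, so each unordered combination appears once.
    start = start_per_label.get(label, 0)
    for i in range(start, len(host_components)):
        if i in chosen or host_components[i] != label:
            continue
        nxt = dict(start_per_label, **{label: i + 1})
        yield from enumerate_matches(rest, chosen + (i,), nxt)

# A rule needing two H2 molecules and one CO2: each match is found once,
# instead of once per ordering of the two identical H2 components.
for match in enumerate_matches(["H2", "H2", "CO2"]):
    print(match)
```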

This paper presents a benchmarking study of some state-of-the-art reinforcement learning algorithms for solving two simulated vision-based robotics problems. The algorithms considered in this study include soft actor-critic (SAC), proximal policy optimization (PPO), interpolated policy gradients (IPG), and their variants with Hindsight Experience Replay (HER). The performance of these algorithms is compared on two PyBullet simulation environments, KukaDiverseObjectEnv and RacecarZEDGymEnv. In these environments, the state observations are RGB images and the action space is continuous, making them difficult to solve. A number of strategies are suggested for providing the intermediate hindsight goals required to apply the HER algorithm to these problems, which are essentially single-goal environments. In addition, a number of feature extraction architectures are proposed to incorporate spatial and temporal attention into the learning process. Through rigorous simulation experiments, the improvements achieved with these components are established. To the best of our knowledge, no such benchmarking study exists for these two vision-based robotics problems, making this a novel contribution to the field.
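
For readers unfamiliar with HER, the following sketch shows the goal-relabeling step in isolation, using the common "future" strategy; the episode format and `reward_fn` are simplified stand-ins, not the paper's implementation.

```python
# Sketch of hindsight goal relabeling as in HER: failed episodes are
# replayed as if a state actually reached had been the goal. For
# single-goal tasks, an intermediate achieved state can serve as the
# substituted goal.
import random

def her_relabel(episode, reward_fn, k=4):
    """Augment an episode with up to k hindsight transitions per step.

    episode: list of (state, action, achieved_goal, desired_goal) tuples.
    reward_fn(achieved, goal) -> reward under the substituted goal.
    """
    transitions = []
    for t, (s, a, achieved, desired) in enumerate(episode):
        transitions.append((s, a, desired, reward_fn(achieved, desired)))
        # 'future' strategy: sample goals achieved later in the episode.
        future = episode[t:]
        for _ in range(min(k, len(future))):
            new_goal = random.choice(future)[2]
            transitions.append((s, a, new_goal, reward_fn(achieved, new_goal)))
    return transitions

# Toy usage: goals are 2-D positions, reward 0 within tolerance, else -1.
reward = lambda ag, g: 0.0 if abs(ag[0]-g[0]) + abs(ag[1]-g[1]) < 0.1 else -1.0
ep = [((0,), 0, (0.0, 0.0), (1.0, 1.0)), ((1,), 1, (0.5, 0.5), (1.0, 1.0))]
print(len(her_relabel(ep, reward)))
```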

Comprehensive benchmarking of clustering algorithms is rendered difficult by two key factors: (i) the elusiveness of a unique mathematical definition of this unsupervised learning approach and (ii) dependencies between the generating models or clustering criteria adopted by some clustering algorithms and the indices used for internal cluster validation. Consequently, there is no consensus regarding best practice for rigorous benchmarking, nor on whether this is possible at all outside the context of a given application. Here, we argue that synthetic datasets must continue to play an important role in the evaluation of clustering algorithms, but that this necessitates constructing benchmarks that appropriately cover the diverse set of properties that impact clustering algorithm performance. Through our framework, HAWKS, we demonstrate the important role evolutionary algorithms play in supporting the flexible generation of such benchmarks, allowing simple modification and extension. We illustrate two possible uses of our framework: (i) the evolution of benchmark data consistent with a set of hand-derived properties and (ii) the generation of datasets that tease out performance differences between a given pair of algorithms. Our work has implications for the design of clustering benchmarks that sufficiently challenge a broad range of algorithms, and for furthering insight into the strengths and weaknesses of specific approaches.
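
A toy version of this evolutionary generation loop might look as follows. Here the evolved genotype is just the cluster means, and the "property" is a target minimum separation between clusters, a deliberately simplified stand-in for HAWKS's actual representation and objectives.

```python
# Illustrative sketch: evolve the means of Gaussian clusters so the
# generated dataset matches a target degree of cluster separation.
import random, math

TARGET_SEP = 3.0   # desired minimum distance between cluster centres
N_CLUSTERS = 4

def min_separation(centres):
    return min(math.dist(a, b) for i, a in enumerate(centres)
               for b in centres[i + 1:])

def fitness(centres):
    # Lower is better: squared deviation from the target separation.
    return (min_separation(centres) - TARGET_SEP) ** 2

def mutate(centres, sigma=0.3):
    return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
            for x, y in centres]

pop = [[(random.uniform(-5, 5), random.uniform(-5, 5))
        for _ in range(N_CLUSTERS)] for _ in range(30)]
for _ in range(300):
    pop += [mutate(random.choice(pop)) for _ in range(30)]
    pop = sorted(pop, key=fitness)[:30]

best = pop[0]
# Sample the actual benchmark dataset from the evolved cluster means.
data = [(random.gauss(cx, 1.0), random.gauss(cy, 1.0))
        for cx, cy in best for _ in range(50)]
print(round(min_separation(best), 2), len(data))
```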

In this paper, we study the problem of designing experiments that are conducted on a set of units, such as users or groups of users in an online marketplace, over multiple time periods such as weeks or months. These experiments are particularly useful for studying treatments that have causal effects on both current and future outcomes (instantaneous and lagged effects). The design problem involves selecting a treatment time for each unit, before or during the experiment, in order to estimate the instantaneous and lagged effects as precisely as possible after the experiment. Optimizing these treatment decisions can directly reduce the opportunity cost of the experiment by lowering its sample-size requirement. The optimization is an NP-hard integer program, for which we provide a near-optimal solution when all design decisions are made at the beginning (fixed-sample-size designs). Next, we study sequential experiments that allow adaptive decisions during the experiment and can potentially be stopped early, further reducing their cost. However, the sequential nature of these experiments complicates both the design phase and the estimation phase. We propose a new algorithm, PGAE, that addresses these challenges by adaptively making treatment decisions, estimating the treatment effects, and drawing valid post-experiment inference. PGAE combines ideas from Bayesian statistics, dynamic programming, and sample splitting. Using synthetic experiments on real datasets from multiple domains, we demonstrate that our proposed solutions for fixed-sample-size and sequential experiments reduce the opportunity cost of the experiments by over 50% and 70%, respectively, compared to benchmarks.
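
To sketch where the instantaneous and lagged effects come from, the toy regression below simulates a staggered design and recovers both effects by least squares. All names and the data-generating process are hypothetical, and PGAE itself is considerably more involved.

```python
# Toy staggered-adoption panel: regress outcomes on "treated this period"
# and "treated in an earlier period" indicators to separate the
# instantaneous effect from the lagged effect.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_periods = 200, 6
treat_time = rng.integers(1, n_periods + 1, size=n_units)  # staggered start

rows, y = [], []
for i in range(n_units):
    unit_effect = rng.normal()
    for t in range(n_periods):
        now = 1.0 if t == treat_time[i] else 0.0  # instantaneous exposure
        lag = 1.0 if t > treat_time[i] else 0.0   # lagged exposure
        rows.append([1.0, now, lag])
        # True instantaneous effect 2.0, lagged effect 0.5 (invented).
        y.append(unit_effect + 2.0 * now + 0.5 * lag + rng.normal(scale=0.5))

X = np.array(rows)
beta, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
print("instantaneous ~", round(beta[1], 2), " lagged ~", round(beta[2], 2))
```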

We present a comprehensive benchmark of JSON-compatible binary serialization specifications using the SchemaStore open-source test suite, a collection of over 400 JSON documents that match their respective schemas and are representative of use across industries. We benchmark a set of schema-driven (ASN.1, Apache Avro, Microsoft Bond, Cap'n Proto, FlatBuffers, Protocol Buffers, and Apache Thrift) and schema-less (BSON, CBOR, FlexBuffers, MessagePack, Smile, and UBJSON) JSON-compatible binary serialization specifications. Existing literature on benchmarking JSON-compatible binary serialization specifications exhibits extensive gaps in coverage of the specifications, in reproducibility and representativeness, in the treatment of data compression, and in the choice and use of obsolete versions of the specifications. We introduce a tiered taxonomy for JSON documents, consisting of 36 categories classified as Tier 1, Tier 2, and Tier 3, as a common basis for classifying JSON documents by size, type of content, structural characteristics, and redundancy criteria. We built and published a free-to-use online tool that automatically categorizes JSON documents according to our taxonomy and generates related summary statistics. In the interest of fairness and transparency, we adhere to reproducible software development standards and publicly host the benchmark software and results on GitHub.
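
The size dimension of such a benchmark reduces to a small measurement loop. The sketch below compares raw and gzip-compressed sizes of one document under three of the schema-less specifications; it assumes the third-party `msgpack` and `cbor2` packages are installed, and the document is an invented toy.

```python
# Serialize the same JSON document with several schema-less
# specifications and compare raw and gzip-compressed sizes.
import json, gzip
import msgpack  # pip install msgpack
import cbor2    # pip install cbor2

doc = {"user": "ada", "scores": [1, 2, 3], "active": True,
       "tags": ["benchmark", "serialization"] * 8}

encodings = {
    "json": json.dumps(doc).encode(),
    "msgpack": msgpack.packb(doc),
    "cbor": cbor2.dumps(doc),
}

for name, blob in encodings.items():
    print(f"{name:8s} raw={len(blob):4d}B gzip={len(gzip.compress(blob)):4d}B")
```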

Benchmarking and performance analysis play an important role in understanding the behaviour of iterative optimization heuristics (IOHs) such as local search algorithms, genetic and evolutionary algorithms, Bayesian optimization algorithms, etc. This task, however, involves manual setup, execution, and analysis of experiments on an individual basis, which is laborious and can be mitigated by a generic, well-designed platform. For this purpose, we propose IOHanalyzer, a new user-friendly tool for the analysis, comparison, and visualization of performance data of IOHs. Implemented in R and C++, IOHanalyzer is fully open source. It is available on CRAN and GitHub. IOHanalyzer provides detailed statistics about the fixed-target running times and the fixed-budget performance of the benchmarked algorithms on single-objective optimization tasks with real-valued codomains. Performance aggregation over several benchmark problems is possible, for example in the form of empirical cumulative distribution functions. Key advantages of IOHanalyzer over other performance analysis packages are its highly interactive design, which allows users to specify the performance measures, ranges, and granularity that are most useful for their experiments, and the possibility to analyze not only performance traces, but also the evolution of dynamic state parameters. IOHanalyzer can directly process performance data from the main benchmarking platforms, including the COCO platform, Nevergrad, the SOS platform, and IOHexperimenter. An R programming interface is provided for users preferring finer control over the implemented functionalities.
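
As one concrete example of the aggregation mentioned above, the sketch below computes a fixed-target ECDF from per-run hitting times; this mirrors the idea, not IOHanalyzer's exact implementation.

```python
# Fixed-target ECDF: for a grid of budgets, the fraction of (run, target)
# pairs for which the target was hit within the budget.
def ecdf(hitting_times, budgets):
    """hitting_times: per run, a list of evaluations needed per target
    (None if the target was never reached). Returns ECDF over budgets."""
    pairs = [t for run in hitting_times for t in run]
    return [sum(1 for t in pairs if t is not None and t <= b) / len(pairs)
            for b in budgets]

# Two runs, three targets each; run 2 never reaches the hardest target.
runs = [[10, 40, 90], [15, 55, None]]
print(ecdf(runs, budgets=[10, 25, 50, 100]))  # [0.167, 0.333, 0.5, 0.833]
```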

In many applications, such as recommender systems, online advertising, and product search, click-through rate (CTR) prediction is a critical task, because its accuracy has a direct impact on both platform revenue and user experience. In recent years, with the prevalence of deep learning, CTR prediction has been widely studied in both academia and industry, resulting in an abundance of deep CTR models. Unfortunately, there is still a lack of a standardized benchmark and uniform evaluation protocols for CTR prediction. This leads to non-reproducible and even inconsistent experimental results among these studies. In this paper, we present an open benchmark (namely FuxiCTR) for reproducible research and provide a rigorous comparison of different models for CTR prediction. Specifically, we ran over 4,600 experiments, totalling more than 12,000 GPU hours, in a uniform framework to re-evaluate 24 existing models on two widely used datasets, Criteo and Avazu. Surprisingly, our experiments show that many models differ less than expected and are sometimes even inconsistent with what is reported in the literature. We believe that our benchmark will not only allow researchers to gauge the effectiveness of new models conveniently, but also establish good practices for fair comparison with the state of the art. We will release all the code and benchmark settings.
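
The essence of such a uniform protocol is that every model runs through an identical split/seed/metric pipeline. The sketch below illustrates this with scikit-learn on synthetic data; the single logistic-regression entry is a placeholder where the deep CTR models would be registered, and the feature generator is invented.

```python
# Uniform evaluation protocol sketch: a fixed split, fixed seed, and
# fixed metrics (AUC and log loss) shared by every registered model.
import numpy as np
from sklearn.metrics import roc_auc_score, log_loss
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 20))  # stand-in features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=42)

models = {"LR": LogisticRegression(max_iter=1000)}  # deep models go here
for name, model in models.items():
    model.fit(X_tr, y_tr)
    p = model.predict_proba(X_te)[:, 1]
    print(f"{name}: AUC={roc_auc_score(y_te, p):.4f} "
          f"logloss={log_loss(y_te, p):.4f}")
```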

Sentiment Analysis (SA) is a major field of study in natural language processing, computational linguistics, and information retrieval. Interest in SA has been growing steadily in both academia and industry in recent years. Moreover, there is an increasing need for appropriate resources and datasets, particularly for low-resource languages such as Persian. Such datasets play an important role in designing and developing opinion mining platforms based on supervised, semi-supervised, or unsupervised methods. In this paper, we outline the entire process of developing SentiPers, a manually annotated sentiment corpus covering formal and informal written contemporary Persian. To the best of our knowledge, SentiPers is unique among Persian sentiment corpora in providing rich annotation at three levels: document, sentence, and entity/aspect. The corpus contains more than 26,000 sentences of user opinions from the digital-product domain and has distinctive characteristics, such as quantifying the positivity or negativity of an opinion by assigning each sentence a number within a specified range. Furthermore, we present statistics on various components of the corpus and study the inter-annotator agreement. Finally, we discuss some of the challenges we faced during the annotation process.
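
Inter-annotator agreement of the kind reported here is typically quantified with Cohen's kappa. A self-contained sketch for two annotators over a -2..+2 polarity scale follows; the labels are toy values, not drawn from SentiPers.

```python
# Cohen's kappa between two annotators' sentence-level polarity labels:
# observed agreement corrected for agreement expected by chance.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[l] * cb[l] for l in ca) / (n * n)  # chance agreement
    return (observed - expected) / (1 - expected)

# Toy labels on a -2..+2 polarity scale, as in sentence-level annotation.
a = [2, 1, 0, -1, -2, 1, 0, 2, -1, 0]
b = [2, 1, 0, -1, -1, 1, 0, 1, -1, 0]
print(round(cohens_kappa(a, b), 3))  # ~0.74: substantial agreement
```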
