
Evolutionary computation-based neural architecture search (ENAS) is a popular technique for automating the architecture design of deep neural networks. Despite its groundbreaking applications, there has been no theoretical study of ENAS. The expected hitting time (EHT) is one of the most important theoretical issues, since it reflects the average computational time complexity. This paper proposes a general method that integrates theory and experiment to estimate the EHT of ENAS algorithms, comprising common configuration, search space partition, transition probability estimation, population distribution fitting, and hitting time analysis. Using the proposed method, we consider ($\lambda$+$\lambda$)-ENAS algorithms with different mutation operators and estimate lower bounds on the EHT. Furthermore, we study the EHT on the NAS-Bench-101 problem, and the results demonstrate the validity of the proposed method. To the best of our knowledge, this work is the first attempt to establish a theoretical foundation for ENAS algorithms.
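A minimal sketch, in Python, of the hitting-time-analysis step under illustrative assumptions: a hypothetical three-level partition of the search space and a toy estimated transition matrix (neither taken from the paper); the level containing the optimum is treated as absorbing, and expected hitting times follow from solving $(I-Q)\,t=\mathbf{1}$ for the transient sub-matrix $Q$.

```python
# Toy hitting-time analysis over a partitioned search space (not the paper's code).
import numpy as np

def expected_hitting_times(P, absorbing):
    """P: row-stochastic transition matrix over partition levels.
    absorbing: index of the level containing the target architecture.
    Returns the expected hitting time from every level."""
    n = P.shape[0]
    transient = [i for i in range(n) if i != absorbing]
    Q = P[np.ix_(transient, transient)]           # transitions among transient levels
    t = np.linalg.solve(np.eye(len(transient)) - Q, np.ones(len(transient)))
    times = np.zeros(n)
    times[transient] = t                          # absorbing level has hitting time 0
    return times

# Hypothetical 3-level partition; the last level contains the optimum.
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
print(expected_hitting_times(P, absorbing=2))
```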

Related Content

Web search engines are arguably the most popular data-driven systems in contemporary society. They wield considerable power by functioning as gatekeepers of the Web, with most user journeys on the Web beginning with them. Since the late 1990s, search engines have been dominated by the paradigm of link-based web search. In this paper, we critically analyze the political economy of this paradigm, drawing upon insights and methodologies from critical political economy. We derive several insights into how link-based web search has led to phenomena that favor capital through long-term structural changes on the Web, and how it has accentuated unpaid digital labor and ecologically unsustainable practices, among several others. We show how contemporary observations on the degrading quality of link-based web search can be traced back to internal contradictions within the paradigm, and how such socio-technical phenomena may lead to a disutility of the link-based web search model. Our contribution lies primarily in enhancing the understanding of the political economy of link-based web search, laying bare the phenomena at work, and implicitly catalyzing the search for alternative models.

Software engineers develop, fine-tune, and deploy deep learning (DL) models using a variety of development frameworks and runtime environments. DL model converters move models between frameworks and to runtime environments. Conversion errors compromise model quality and disrupt deployment. However, the failure characteristics of DL model converters are unknown, adding risk when using DL interoperability technologies. This paper analyzes failures in DL model converters. We survey software engineers about DL interoperability tools, use cases, and pain points (N=92). Then, we characterize failures in model converters associated with the main interoperability tool, ONNX (N=200 issues in PyTorch and TensorFlow). Finally, we formulate and test two hypotheses about structural causes of the failures we studied. We find that the node conversion stage of a model converter accounts for ~75% of the defects and that 33% of reported failures are related to semantically incorrect models. The cause of semantically incorrect models is elusive, but models with behavioural inconsistencies share operator sequences. Our results motivate future research on making DL interoperability software simpler to maintain, extend, and validate. Research into behavioural tolerances and architectural coverage metrics could be fruitful.
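To make the studied pipeline concrete, here is a minimal, hedged sketch of a typical PyTorch-to-ONNX conversion followed by a structural check and a behavioural comparison; the toy model and tolerance are illustrative and not drawn from the studied issues.

```python
# Sketch of the kind of conversion pipeline whose failures the paper characterizes:
# export a PyTorch model to ONNX, validate the graph, and compare outputs to
# catch semantically incorrect conversions.
import numpy as np
import torch
import onnx
import onnxruntime as ort

model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU()).eval()
dummy = torch.randn(1, 8)

torch.onnx.export(model, dummy, "model.onnx")      # node conversion happens here
onnx.checker.check_model(onnx.load("model.onnx"))  # structural validation

# Behavioural check: the converted model should match the original numerically.
sess = ort.InferenceSession("model.onnx")
onnx_out = sess.run(None, {sess.get_inputs()[0].name: dummy.numpy()})[0]
torch_out = model(dummy).detach().numpy()
assert np.allclose(onnx_out, torch_out, atol=1e-5)
```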

The computer program "Histropy" is an interactive Python program for the quantification of selected features of two-dimensional (2D) images/patterns (in JPG/JPEG, PNG, GIF, BMP, or baseline TIF/TIFF formats) using calculations based on the pixel intensities in the image data, their histograms, and user-selected sections of those histograms. The histograms of these images display pixel-intensity values along the x-axis (of a 2D Cartesian plot), with the frequency of each intensity value within the image represented along the y-axis. The images need to have 8-bit information depth and can be of arbitrary size. Histropy generates an image's histogram surrounded by a graphical user interface that allows one to select any range of image-pixel intensity levels, i.e. sections along the histogram's x-axis, using either the computer mouse or numerical text entries. The program subsequently calculates the (so-called Monkey Model) Shannon entropy and root-mean-square contrast for the selected section and displays them as part of what we call a "histogram-workspace-plot." To support the visual identification of small peaks in the histograms, the user can switch between a linear and a log-base-10 display scale for the y-axis of the histograms. Pixel-intensity data from different images can be overlaid onto the same histogram-workspace-plot for visual comparisons. The visual outputs of the program can be saved as histogram-workspace-plots in the PNG format for future use. The source code of the program and a brief user manual are published in the supporting materials and on GitHub. Its functionality is currently being extended to 16-bit unsigned TIF/TIFF images. Instead of taking only 2D images as inputs, the program's functionality could also be extended, with a few lines of code, to other uses employing one- or two-dimensional data tables in the CSV format.
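A minimal sketch, assuming an 8-bit greyscale input and illustrative normalization conventions (not necessarily Histropy's exact ones), of the two quantities computed for a user-selected histogram section:

```python
# Toy computation of the (Monkey Model) Shannon entropy and RMS contrast of a
# selected pixel-intensity range; conventions here are assumptions, not Histropy's.
import numpy as np
from PIL import Image

def section_stats(path, lo=0, hi=255):
    pixels = np.asarray(Image.open(path).convert("L")).ravel()
    selected = pixels[(pixels >= lo) & (pixels <= hi)]       # histogram section
    counts = np.bincount(selected, minlength=256)[lo:hi + 1]
    p = counts[counts > 0] / counts.sum()
    entropy = -np.sum(p * np.log2(p))                        # bits per pixel
    rms_contrast = np.std(selected / 255.0)                  # std of normalized intensities
    return entropy, rms_contrast

print(section_stats("pattern.png", lo=32, hi=200))           # hypothetical input file
```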

Auction-based federated learning (AFL) is an important emerging category of FL incentive mechanism design, due to its ability to fairly and efficiently motivate high-quality data owners to join data consumers' (i.e., servers') FL training tasks. To enhance the efficiency in AFL decision support for stakeholders (i.e., data consumers, data owners, and the auctioneer), intelligent agent-based techniques have emerged. However, due to the highly interdisciplinary nature of this field and the lack of a comprehensive survey providing an accessible perspective, it is a challenge for researchers to enter and contribute to this field. This paper bridges this important gap by providing a first-of-its-kind survey on the Intelligent Agents for AFL (IA-AFL) literature. We propose a unique multi-tiered taxonomy that organises existing IA-AFL works according to 1) the stakeholders served, 2) the auction mechanism adopted, and 3) the goals of the agents, to provide readers with a multi-perspective view into this field. In addition, we analyse the limitations of existing approaches, summarise the commonly adopted performance evaluation metrics, and discuss promising future directions leading towards effective and efficient stakeholder-oriented decision support in IA-AFL ecosystems.

We investigate the computational power of particle methods, a well-established class of algorithms with applications in scientific computing and computer simulation. The computational power of a compute model determines the class of problems it can solve. Automata theory allows describing the computational power of abstract machines (automata) and the problems they can solve. At the top of the Chomsky hierarchy of formal languages and grammars are Turing machines, which resemble the concept on which most modern computers are built. Although particle methods can be interpreted as automata based on their formal definition, their computational power has so far not been studied. We address this by analyzing the Turing completeness of particle methods. In particular, we prove two sets of restrictions under which a particle method remains Turing powerful, and we show when it loses Turing powerfulness. This contributes to understanding the theoretical foundations of particle methods and provides insight into the computational power of computer simulations.

This work initiates the study of a beyond-diagonal reconfigurable intelligent surface (BD-RIS)-aided transmitter architecture for integrated sensing and communication (ISAC) in the millimeter-wave (mmWave) frequency band. Deploying BD-RIS at the transmitter side not only alleviates the need for extensive fully digital radio frequency (RF) chains but also enhances both communication and sensing performance. These benefits are facilitated by the additional design flexibility introduced by the fully-connected scattering matrix of BD-RIS. To achieve the aforementioned benefits, in this work, we propose an efficient two-stage algorithm to design the digital beamforming of the transmitter and the scattering matrix of the BD-RIS, with the aim of jointly maximizing the sum rate for multiple communication users and minimizing the largest eigenvalue of the Cramér-Rao bound (CRB) matrix for multiple sensing targets. Numerical results show that the transmitter-side BD-RIS-aided mmWave ISAC outperforms conventional diagonal-RIS-aided counterparts in both communication and sensing performance.
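A schematic form of this kind of joint design problem is sketched below; the scalarized weighting, variable names, and constraints are illustrative assumptions rather than the paper's exact formulation.

$$\max_{\mathbf{W},\,\boldsymbol{\Theta}}\ \sum_{k=1}^{K}\log_2\!\bigl(1+\mathrm{SINR}_k(\mathbf{W},\boldsymbol{\Theta})\bigr)\;-\;\rho\,\lambda_{\max}\!\bigl(\mathbf{C}_{\mathrm{CRB}}(\mathbf{W},\boldsymbol{\Theta})\bigr)\quad\text{s.t.}\ \ \boldsymbol{\Theta}^{\mathsf{H}}\boldsymbol{\Theta}=\mathbf{I},\ \ \|\mathbf{W}\|_F^2\le P_{\max},$$

where $\mathbf{W}$ denotes the transmitter's digital beamformer, $\boldsymbol{\Theta}$ the fully-connected BD-RIS scattering matrix (constrained to be unitary), and $\rho$ a trade-off weight between the communication and sensing objectives.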

User interface (UI) design is a difficult yet important task for ensuring the usability, accessibility, and aesthetic qualities of applications. In our paper, we develop a machine-learned model, UIClip, for assessing the design quality and visual relevance of a UI given its screenshot and natural language description. To train UIClip, we used a combination of automated crawling, synthetic augmentation, and human ratings to construct a large-scale dataset of UIs, collated by description and ranked by design quality. Through training on the dataset, UIClip implicitly learns properties of good and bad designs by i) assigning a numerical score that represents a UI design's relevance and quality and ii) providing design suggestions. In an evaluation that compared the outputs of UIClip and other baselines to UIs rated by 12 human designers, we found that UIClip achieved the highest agreement with ground-truth rankings. Finally, we present three example applications that demonstrate how UIClip can facilitate downstream applications that rely on instantaneous assessment of UI design quality: i) UI code generation, ii) UI design tips generation, and iii) quality-aware UI example search.
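A minimal sketch of the screenshot-versus-description scoring idea using a generic, publicly available CLIP model; UIClip's own weights and interface are not assumed here, only the general contrastive-scoring mechanism it builds on.

```python
# Score a UI screenshot against candidate descriptions with a generic CLIP model.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

screenshot = Image.open("login_screen.png")                 # hypothetical UI screenshot
descriptions = ["a clean, accessible login screen for a banking app",
                "a cluttered settings page with poor contrast"]

inputs = processor(text=descriptions, images=screenshot,
                   return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image                   # image-text similarity scores
print(logits.softmax(dim=-1))
```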

Analog Computing-in-Memory (ACIM) is an emerging architecture for efficient AI edge computing. However, current ACIM designs usually have unscalable topology and still rely heavily on manual effort. These drawbacks limit the ACIM application scenarios and lead to an undesirably long time-to-market. This work proposes EasyACIM, an end-to-end automated ACIM design framework based on a synthesizable architecture. With a given array size and a customized cell library, EasyACIM can generate layouts for ACIMs with various design specifications fully automatically. Leveraging a multi-objective genetic algorithm (MOGA)-based design space explorer, EasyACIM can obtain high-quality ACIM solutions based on the proposed synthesizable architecture, targeting versatile application scenarios. The ACIM solutions given by EasyACIM have a wide design space and competitive performance compared to state-of-the-art (SOTA) ACIMs.
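As a toy illustration of the design-space-exploration idea (with invented design knobs and cost proxies, not EasyACIM's actual models), the sketch below samples candidate ACIM configurations and keeps the non-dominated set, the Pareto-selection step at the core of a MOGA.

```python
# Toy non-dominated selection over hypothetical ACIM design knobs.
import random

def evaluate(cfg):
    rows, cols, adc_bits = cfg
    area = rows * cols * 1.0 + adc_bits * 50.0           # hypothetical area proxy
    energy = (rows * cols) / adc_bits + 0.1 * adc_bits   # hypothetical energy proxy
    return area, energy

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

population = [(random.choice([64, 128, 256]),
               random.choice([64, 128, 256]),
               random.choice([4, 6, 8])) for _ in range(40)]
scores = {cfg: evaluate(cfg) for cfg in set(population)}
pareto = [c for c in scores
          if not any(dominates(scores[o], scores[c]) for o in scores if o != c)]
print(pareto)   # Pareto-optimal (area, energy) trade-offs among sampled configs
```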

Knowledge graph embedding (KGE) is an increasingly popular technique that aims to represent the entities and relations of knowledge graphs in low-dimensional semantic spaces for a wide spectrum of applications such as link prediction, knowledge reasoning and knowledge completion. In this paper, we provide a systematic review of existing KGE techniques based on representation spaces. In particular, we build a fine-grained classification that categorises the models based on three mathematical perspectives of the representation spaces: (1) the algebraic perspective, (2) the geometric perspective, and (3) the analytical perspective. We introduce rigorous definitions of the fundamental mathematical spaces before diving into KGE models and their mathematical properties. We further discuss different KGE methods across the three categories, and summarise how the advantages of each space serve different embedding needs. By collating the experimental results from downstream tasks, we also explore the advantages of each mathematical space in different scenarios and the reasons behind them. We conclude with some promising research directions from a representation space perspective, with which we hope to inspire researchers to design their KGE models, as well as related applications, with more consideration of the properties of the underlying mathematical space.
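As a concrete instance of the algebraic perspective, the following sketch shows the well-known TransE scoring function, which treats a relation as a translation between head and tail embeddings in Euclidean space; the embeddings here are random placeholders rather than trained vectors.

```python
# TransE scores a triple (h, r, t) by how well h + r lands on t.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
entity_emb = {e: rng.normal(size=dim) for e in ["Paris", "France", "Berlin", "Germany"]}
relation_emb = {"capital_of": rng.normal(size=dim)}

def transe_score(h, r, t, p=1):
    # Higher (less negative) score = more plausible triple; in practice the
    # embeddings would be learned with a margin-based ranking loss.
    return -np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t], ord=p)

print(transe_score("Paris", "capital_of", "France"))
print(transe_score("Paris", "capital_of", "Germany"))
```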

We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.
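A minimal sketch (not the SciIE implementation) of the shared-span-representation idea: a single span encoder feeds separate heads for entity typing, relation classification, and coreference scoring, so the three tasks share one representation.

```python
# Toy multi-task model over shared span representations.
import torch
import torch.nn as nn

class SharedSpanModel(nn.Module):
    def __init__(self, hidden=256, n_entity_types=6, n_relations=7):
        super().__init__()
        self.span_encoder = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.entity_head = nn.Linear(hidden, n_entity_types)
        self.relation_head = nn.Linear(2 * hidden, n_relations)
        self.coref_head = nn.Linear(2 * hidden, 1)

    def forward(self, span_endpoints):               # (num_spans, 2*hidden)
        spans = self.span_encoder(span_endpoints)    # shared span representations
        pairs = torch.cat([spans.unsqueeze(1).expand(-1, spans.size(0), -1),
                           spans.unsqueeze(0).expand(spans.size(0), -1, -1)], dim=-1)
        return self.entity_head(spans), self.relation_head(pairs), self.coref_head(pairs)

model = SharedSpanModel()
ents, rels, corefs = model(torch.randn(4, 512))
print(ents.shape, rels.shape, corefs.shape)          # (4, 6), (4, 4, 7), (4, 4, 1)
```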
