
Bayesian optimization (BO) is an approach to globally optimizing black-box objective functions that are expensive to evaluate. BO-powered experimental design has found wide application in materials science, chemistry, experimental physics, and drug development. This work aims to bring attention to the benefits of applying BO in designing experiments and to provide a BO manual, covering both methodology and software, for anyone who wants to apply or learn BO. In particular, we briefly explain the BO technique, review all the applications of BO in additive manufacturing, compare and exemplify the features of different open BO libraries, and unlock new potential applications of BO to other types of data (e.g., preferential output). This article is aimed at readers with some understanding of Bayesian methods, but not necessarily with knowledge of additive manufacturing; the software performance overview and implementation instructions are instrumental for any experimental-design practitioner. Moreover, our review of the additive-manufacturing literature highlights the current knowledge and technological trends of BO. Supplementary material for this article is available online.
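The BO loop the abstract refers to — fit a surrogate to the evaluations so far, maximize an acquisition function, evaluate the objective at the chosen point, repeat — can be sketched in a few dozen lines. The following minimal illustration on a toy 1-D objective uses a Gaussian-process surrogate with a fixed RBF kernel and the expected-improvement acquisition; all names, kernel settings, and the objective are our own illustrative choices, not from the article or any particular library.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Toy "expensive" black-box function; true minimum at x = 0.3.
    return (x - 0.3) ** 2

def rbf(a, b, ell=0.15):
    # Squared-exponential kernel with unit signal variance.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean and std at test points Xs given data (X, y).
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)  # k(x, x) = 1
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI acquisition for minimization.
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.array([math.erf(t / math.sqrt(2.0)) for t in z]))
    pdf = np.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best - mu) * cdf + sigma * pdf

grid = np.linspace(0.0, 1.0, 201)
X = rng.uniform(0.0, 1.0, 3)        # small initial design
y = objective(X)
for _ in range(15):                 # BO loop: fit, maximize EI, evaluate
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

x_best = X[np.argmin(y)]
```

After 15 acquisitions the best sampled point sits close to the true minimizer; in a real experimental-design setting each `objective` call would be a physical experiment.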

Related content

In this paper, we are interested in the performance of a variable-length stop-feedback (VLSF) code with $m$ optimal decoding times for the binary-input additive white Gaussian noise (BI-AWGN) channel. We first develop tight approximations to the tail probability of the length-$n$ cumulative information density. Building on the work of Yavas \emph{et al.}, we formulate the problem of minimizing the upper bound on average blocklength subject to error-probability, minimum-gap, and integer constraints. For this integer program, we show that for a given error constraint, a VLSF code that decodes after every symbol attains the maximum achievable rate. We also present a greedy algorithm that yields possibly suboptimal integer decoding times. By allowing positive real-valued decoding times, we develop the gap-constrained sequential differential optimization (SDO) procedure. Numerical evaluation shows that the gap-constrained SDO provides a good estimate of the achievable rate of VLSF codes with $m$ optimal decoding times, and that a finite $m$ suffices to attain Polyanskiy's bound for VLSF codes with $m = \infty$.
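For readers unfamiliar with the quantity being bounded: for the BI-AWGN channel with equiprobable BPSK inputs, the per-symbol information density has the closed form $i(x;y) = 1 - \log_2(1 + e^{-2xy/\sigma^2})$ bits. The sketch below (our illustration, not the paper's analytical procedure) simulates the length-$n$ cumulative information density by Monte Carlo; the tail probability of this sum is what the paper approximates.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 128, 2000
sigma2 = 1.0                        # noise variance, i.e. SNR = 0 dB

# BPSK over AWGN: Y = X + Z, X in {-1, +1} equiprobable, Z ~ N(0, sigma2).
x = rng.choice([-1.0, 1.0], size=(trials, n))
y = x + rng.normal(scale=np.sqrt(sigma2), size=(trials, n))

# Per-symbol information density: i(x; y) = 1 - log2(1 + exp(-2 x y / sigma2)).
i_dens = 1.0 - np.log2(1.0 + np.exp(-2.0 * x * y / sigma2))
S_n = i_dens.sum(axis=1)            # length-n cumulative information density

capacity_est = i_dens.mean()        # sample mean, close to BI-AWGN capacity
tail = np.mean(S_n <= 0.5 * n * capacity_est)  # empirical tail probability
```

The sample mean of the information density is near the BI-AWGN capacity (roughly 0.49 bits/use at 0 dB), and the empirical tail at half the mean is already tiny at $n = 128$, which is why tight analytical tail approximations are needed.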

The Industrial Internet of Things (IIoT) has sparked key revolutions in several leading industries, such as energy, agriculture, mining, transportation, and healthcare. Owing to its high capacity and fast transmission speed, 5G plays a pivotal role in enhancing industrial procedures, practices, and guidelines, such as crowdsourcing, cloud outsourcing, and platform subcontracting. Spatial crowdsourcing (SC) servers (such as those operated by DiDi, MeiTuan, and Uber) assign tasks based on workers' location information. However, SC servers are often untrustworthy and threaten to reveal workers' privacy. In this paper, we introduce Geo-MOEA, a Multi-Objective Evolutionary Algorithm framework to protect the location privacy of workers on SC platforms in 5G environments. We propose an adaptive regionalized obfuscation mechanism with inference error bounds based on geo-indistinguishability (a strong notion of differential privacy), which is suitable for large-scale location data and task allocation. It lets workers report locally generated pseudo-locations instead of their actual locations. Further, to optimize the trade-off between SC service availability and privacy protection, we use MOEA to improve the global applicability of the mechanism in 5G environments. Finally, simulations of location scenarios show that the mechanism not only protects location privacy but also achieves the desired high service availability.
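Geo-indistinguishability is typically achieved with the planar Laplace mechanism: the true location is perturbed by a random vector whose angle is uniform and whose radius has density $\varepsilon^2 r e^{-\varepsilon r}$, i.e. a Gamma(2, 1/ε) distribution. A minimal sketch in abstract plane coordinates follows; the paper's adaptive regionalized mechanism is more involved, and the units, names, and ε value here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def planar_laplace(loc, eps):
    """Return a pseudo-location under eps-geo-indistinguishability.

    Planar Laplace noise: uniform angle, radius with density
    eps^2 * r * exp(-eps * r), i.e. Gamma(shape=2, scale=1/eps).
    Coordinates are abstract plane units, not latitude/longitude.
    """
    theta = rng.uniform(0.0, 2.0 * np.pi)
    r = rng.gamma(shape=2.0, scale=1.0 / eps)
    return (loc[0] + r * np.cos(theta), loc[1] + r * np.sin(theta))

eps = 0.5                                  # privacy level per unit distance
true_loc = (10.0, 20.0)
reported = [planar_laplace(true_loc, eps) for _ in range(5000)]
mean_err = float(np.mean([np.hypot(px - 10.0, py - 20.0)
                          for px, py in reported]))
# Expected distortion is E[r] = 2 / eps = 4.0 plane units.
```

The expected distortion `2 / eps` makes the privacy-utility trade-off explicit: a smaller ε means stronger privacy but pseudo-locations that are, on average, farther from the truth, which is precisely the tension the MOEA is used to balance.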

We develop an efficient Bayesian sequential inference framework for factor analysis models observed via various data types, such as continuous, binary, and ordinal data. In the continuous case, where it is possible to marginalise over the latent factors, the proposed methodology tailors the Iterated Batch Importance Sampling (IBIS) of Chopin (2002) to such models, and we incorporate Hamiltonian Markov chain Monte Carlo. For binary and ordinal data, we develop an efficient IBIS scheme that handles both the parameters and the latent factors, combined with Laplace or variational Bayes approximations. The methodology can be used for sequential hypothesis testing via Bayes factors, which are known to have advantages over traditional null-hypothesis testing. Moreover, the developed sequential framework offers multiple benefits even in non-sequential settings: it provides the posterior distribution, model evidence, and (prequential) scoring rules in one go, and it offers a more robust alternative to Markov chain Monte Carlo for problematic target distributions.
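The IBIS recipe of Chopin (2002) that this methodology builds on — reweight the particles by each new likelihood term, then resample and rejuvenate with an MCMC move whenever the effective sample size degenerates — can be illustrated on a toy conjugate model. The sketch below uses a Gaussian mean with a random-walk Metropolis move step; it is a didactic stand-in, not the paper's factor-analysis implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model: y_t ~ N(theta, 1) arriving sequentially, prior theta ~ N(0, 10).
true_theta = 1.5
data = rng.normal(true_theta, 1.0, size=200)

P = 2000
theta = rng.normal(0.0, np.sqrt(10.0), size=P)   # particles from the prior
logw = np.zeros(P)

def log_post(th, y_seen):
    # Unnormalised log posterior of theta given the data observed so far.
    return -th**2 / 20.0 - 0.5 * ((y_seen[None, :] - th[:, None]) ** 2).sum(axis=1)

for t, y in enumerate(data):
    logw += -0.5 * (y - theta) ** 2                  # incremental log weight
    w = np.exp(logw - logw.max())
    ess = w.sum() ** 2 / (w * w).sum()               # effective sample size
    if ess < P / 2:                                  # resample-move step
        idx = rng.choice(P, size=P, p=w / w.sum())
        theta, logw = theta[idx], np.zeros(P)
        y_seen = data[: t + 1]
        prop = theta + rng.normal(0.0, 0.2, size=P)  # random-walk MH move
        accept = (np.log(rng.uniform(size=P))
                  < log_post(prop, y_seen) - log_post(theta, y_seen))
        theta = np.where(accept, prop, theta)

w = np.exp(logw - logw.max())
post_mean = float(np.sum(w * theta) / w.sum())
```

A side benefit visible even in this toy version: the incremental weights are exactly the one-step-ahead predictive densities, which is what makes model evidence and prequential scoring rules available "in one go".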

Online controlled experimentation is widely adopted for evaluating new features in the rapid development cycle for web products and mobile applications. Measurement over the overall experiment sample is a common practice to quantify the overall treatment effect. To understand why the treatment effect occurs in a certain way, segmentation becomes a valuable approach to a finer analysis of experiment results. This paper introduces a framework for creating and utilizing user behavioral segments in online experimentation. Using data on user engagement with individual product components as input, this method defines segments that are closely related to the features being evaluated in the product development cycle. With a real-world example, we demonstrate that analysis with such behavioral segments offered deep, actionable insights that successfully informed product decision-making.
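To make the idea concrete, here is a toy version of behavioral segmentation on synthetic experiment data: users are segmented by their engagement with a product component, and the treatment effect is then estimated within each segment. The threshold, field layout, and effect sizes are all invented for illustration.

```python
import random
from statistics import mean

random.seed(3)

# Synthetic experiment log: (group, component_uses, metric). Heavy users of
# the touched component respond to the treatment; light users do not.
rows = []
for _ in range(4000):
    uses = random.randint(0, 20)
    group = random.choice(["control", "treatment"])
    lift = 2.0 if (group == "treatment" and uses >= 10) else 0.0
    rows.append((group, uses, random.gauss(5.0 + lift, 1.0)))

def segment(uses):
    # Behavioral segment defined by engagement with the product component.
    return "heavy" if uses >= 10 else "light"

effects = {}
for seg in ("light", "heavy"):
    treat = [m for g, u, m in rows if g == "treatment" and segment(u) == seg]
    ctrl = [m for g, u, m in rows if g == "control" and segment(u) == seg]
    effects[seg] = mean(treat) - mean(ctrl)
# The overall average treatment effect would blur these two very
# different responses together.
```

The segment-level estimates recover that the lift is concentrated among heavy users of the component, which is the kind of finer-grained, actionable insight the overall sample average hides.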

Motivated by the serious problem that hospitals in rural areas suffer from a shortage of residents, we study the Hospitals/Residents model in which hospitals are associated with lower quotas and the objective is to satisfy them as much as possible. When preference lists are strict, the number of residents assigned to each hospital is the same in any stable matching because of the well-known rural hospitals theorem; thus there is no room for algorithmic interventions. However, when ties are introduced to preference lists, this will no longer apply because the number of residents may vary over stable matchings. In this paper, we formulate an optimization problem to find a stable matching with the maximum total satisfaction ratio for lower quotas. We first investigate how the total satisfaction ratio varies over choices of stable matchings in four natural scenarios and provide the exact values of these maximum gaps. Subsequently, we propose a strategy-proof approximation algorithm for our problem; in one scenario it solves the problem optimally, and in the other three scenarios, which are NP-hard, it yields a better approximation factor than that of a naive tie-breaking method. Finally, we show inapproximability results for the above-mentioned three NP-hard scenarios.
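For context, the baseline mechanism in the Hospitals/Residents model is resident-proposing deferred acceptance, which with strict preference lists yields a stable matching (and, by the rural hospitals theorem, fixes each hospital's intake across all stable matchings). The sketch below implements that classic algorithm with upper quotas only; it is background, not the paper's lower-quota optimization or its tie-handling.

```python
def deferred_acceptance(res_prefs, hosp_prefs, quotas):
    """Resident-proposing deferred acceptance with upper quotas.

    res_prefs:  resident -> list of hospitals, most preferred first
    hosp_prefs: hospital -> list of residents, most preferred first
    quotas:     hospital -> upper quota
    """
    rank = {h: {r: i for i, r in enumerate(p)} for h, p in hosp_prefs.items()}
    assigned = {h: [] for h in hosp_prefs}
    nxt = {r: 0 for r in res_prefs}          # next preference index to try
    free = list(res_prefs)
    while free:
        r = free.pop()
        if nxt[r] >= len(res_prefs[r]):
            continue                          # r has exhausted their list
        h = res_prefs[r][nxt[r]]
        nxt[r] += 1
        assigned[h].append(r)
        assigned[h].sort(key=lambda x: rank[h][x])
        if len(assigned[h]) > quotas[h]:
            free.append(assigned[h].pop())    # reject the worst resident
    return assigned

match = deferred_acceptance(
    {"r1": ["h1", "h2"], "r2": ["h1", "h2"], "r3": ["h1"]},
    {"h1": ["r3", "r1", "r2"], "h2": ["r1", "r2"]},
    {"h1": 1, "h2": 2},
)
# h1 keeps only its favourite r3; r1 and r2 end up at h2.
```

Once ties are allowed in the preference lists, different tie-breakings of this procedure can fill hospitals to different levels, which is exactly the degree of freedom the paper optimizes over.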

An increasing number of publications present the joint application of Design of Experiments (DOE) and machine learning (ML) as a methodology to collect and analyze data on a specific industrial phenomenon. However, the literature shows that the choice of design for data collection and of model for data analysis is often driven by incidental factors rather than by statistical or algorithmic advantages; thus, studies providing guidelines on which designs and ML models to use jointly for data collection and analysis are lacking. To the best of our knowledge, this is the first paper to discuss the choice of design in relation to ML model performance. An extensive study is conducted that considers 12 experimental designs, 7 families of predictive models, 7 test functions that emulate physical processes, and 8 noise settings, both homoscedastic and heteroscedastic. The results of the research can have an immediate impact on the work of practitioners, providing guidelines for practical applications of DOE and ML.
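A staple among space-filling designs in such comparisons is the Latin hypercube design, which stratifies every input dimension into n equal-width strata and places exactly one point in each. A minimal generator is sketched below (our illustration; the 12 designs compared in the paper are not specified here), together with a simple space-filling diagnostic.

```python
import numpy as np

rng = np.random.default_rng(11)

def latin_hypercube(n, d):
    """n points in [0, 1]^d with exactly one point per axis-aligned stratum."""
    samples = np.empty((n, d))
    for j in range(d):
        # One draw per stratum [i/n, (i+1)/n), strata visited in random order.
        samples[:, j] = (rng.permutation(n) + rng.uniform(size=n)) / n
    return samples

def min_pairwise_dist(X):
    # Space-filling diagnostic: smallest distance between any two points.
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return dist[np.triu_indices(len(X), k=1)].min()

design = latin_hypercube(50, 2)
spread = min_pairwise_dist(design)
```

Each 1-D projection of the design covers all 50 strata exactly once, which is what makes Latin hypercubes attractive when an ML model must be trained on few, expensive runs.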

Research on machine learning for channel estimation, especially neural-network solutions for wireless communications, is attracting significant interest, because conventional methods cannot meet the present demands of high-speed communication. In this paper, we deploy a general residual convolutional neural network for channel estimation of orthogonal frequency-division multiplexing (OFDM) signals in a downlink scenario. Our method uses a simple interpolation layer in place of the transposed convolutional layer used in other networks, reducing the computation cost. The proposed method also adapts more easily to different pilot patterns and packet sizes. Compared with other deep-learning methods for channel estimation, our results on 3GPP channel models suggest improved mean-squared-error performance.
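As a point of reference for the interpolation idea, the classical pilot-based baseline estimates the channel at pilot subcarriers by least squares and interpolates to the remaining subcarriers. The sketch below uses an invented smooth channel response; it is the conventional baseline such networks are compared against, not the proposed residual CNN.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sc = 64
k = np.arange(n_sc)
# Comb-type pilot pattern, plus a pilot on the last subcarrier so the
# interpolation never has to extrapolate.
pilots = np.append(np.arange(0, n_sc, 8), n_sc - 1)

# Invented smooth channel frequency response over the subcarriers.
h_true = (1.0 + 0.3 * np.cos(2 * np.pi * k / n_sc)) * np.exp(1j * 2 * np.pi * k / n_sc)

x_pilot = np.ones(len(pilots), dtype=complex)     # known pilot symbols
noise = 0.01 * (rng.normal(size=len(pilots)) + 1j * rng.normal(size=len(pilots)))
y_pilot = h_true[pilots] * x_pilot + noise

h_ls = y_pilot / x_pilot                          # least-squares estimate at pilots
# Linear interpolation of real and imaginary parts to every subcarrier.
h_hat = np.interp(k, pilots, h_ls.real) + 1j * np.interp(k, pilots, h_ls.imag)

mse = float(np.mean(np.abs(h_hat - h_true) ** 2))
```

For a smooth channel the interpolated estimate is already accurate; learned methods aim to beat this baseline on realistic (e.g., 3GPP) channels with sparser pilots and lower SNR.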

Accurate and interpretable prediction of future events in time-series data often requires capturing the representative patterns (referred to as states) underpinning the observed data. To this end, most existing studies focus on the representation and recognition of states but ignore the changing transitional relations among them. In this paper, we present the evolutionary state graph, a dynamic graph structure designed to systematically represent the evolving relations (edges) among states (nodes) over time. We analyze dynamic graphs constructed from time-series data and show that changes in graph structure (e.g., edges connecting certain state nodes) can signal the occurrence of events (i.e., time-series fluctuations). Inspired by this, we propose a novel graph neural network model, the Evolutionary State Graph Network (EvoNet), which encodes the evolutionary state graph for accurate and interpretable time-series event prediction. Specifically, EvoNet models both node-level (state-to-state) and graph-level (segment-to-segment) propagation, and captures node-graph (state-to-segment) interactions over time. Experiments on five real-world datasets show that our approach not only achieves clear improvements over 11 baselines but also provides more insight into explaining the predicted events.
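The construction of a state graph from a time series can be illustrated simply: split the series into segments, assign each segment a discrete state, and count state-to-state transitions as weighted edges; changes in this graph then track regime shifts. The discretization below (quantized segment means) is a deliberately crude stand-in for the state-recognition step, and the synthetic series is ours.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic series: a calm regime followed by an oscillating regime (the "event").
series = np.concatenate([
    rng.normal(0.0, 0.2, 300),
    np.sin(np.linspace(0.0, 30.0, 300)) + rng.normal(0.0, 0.2, 300),
])

seg_len, n_states = 10, 4
segments = series.reshape(-1, seg_len)            # 60 segments of length 10

# Crude state recognition: quantize each segment's mean into n_states bins.
bounds = np.linspace(series.min(), series.max(), n_states + 1)[1:-1]
states = np.digitize(segments.mean(axis=1), bounds)

def transition_graph(state_seq):
    # Weighted adjacency matrix: counts of consecutive state transitions.
    g = np.zeros((n_states, n_states))
    for a, b in zip(state_seq[:-1], state_seq[1:]):
        g[a, b] += 1
    return g

g_calm = transition_graph(states[:30])            # graph for the calm half
g_event = transition_graph(states[30:])           # graph after the regime shift
graph_change = float(np.abs(g_event - g_calm).sum())
```

The edge weights shift markedly between the two halves, which is the signal EvoNet learns to exploit: its node-level and graph-level propagation replace these raw transition counts with learned representations.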

Deep neural networks can achieve great successes when presented with large data sets and sufficient computational resources. However, their ability to learn new concepts quickly is quite limited. Meta-learning is one approach to address this issue, by enabling the network to learn how to learn. The exciting field of Deep Meta-Learning advances at great speed, but lacks a unified, insightful overview of current techniques. This work presents just that. After providing the reader with a theoretical foundation, we investigate and summarize key methods, which are categorized into i) metric-, ii) model-, and iii) optimization-based techniques. In addition, we identify the main open challenges, such as performance evaluations on heterogeneous benchmarks, and reduction of the computational costs of meta-learning.

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy, computation and memory intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
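Category (1), parameter quantization and pruning, can be demonstrated on a single weight matrix: magnitude pruning zeroes the smallest-magnitude weights, and uniform symmetric quantization maps the survivors to 8-bit integers plus one scale factor. A minimal numpy sketch follows; the 90% sparsity target and 8-bit width are illustrative choices, not values from the survey.

```python
import numpy as np

rng = np.random.default_rng(9)
W = rng.normal(0.0, 1.0, size=(64, 64)).astype(np.float32)  # dense layer weights

# (1a) Magnitude pruning: zero the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0).astype(np.float32)

# (1b) Uniform symmetric 8-bit quantization of the surviving weights.
scale = float(np.abs(W_pruned).max()) / 127.0
W_q = np.round(W_pruned / scale).astype(np.int8)   # what would be stored
W_deq = W_q.astype(np.float32) * scale             # dequantized for compute

sparsity = float(np.mean(W_pruned == 0.0))
max_err = float(np.abs(W_deq - W_pruned).max())    # bounded by scale / 2
```

Stored as sparse int8 values plus one float scale, this layer needs a small fraction of its original 32-bit footprint, while the round-trip error per surviving weight stays below half a quantization step; in practice accuracy is then recovered by fine-tuning, as the surveyed methods do.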
