
Application services often support mobile and web applications through REST interfaces, implemented as a set of distributed components that interact with each other. This approach allows services to achieve high availability and performance at a lower cost than a monolithic system. However, the existence of multiple components makes the development of these systems more complex and therefore more error-prone. In this paper, we present JepREST, a system that automates the use of the Jepsen libraries to test the correctness of distributed applications that provide a REST interface. Based on a service interface specification, JepREST generates and executes a set of tests in which multiple clients perform operations concurrently, subsequently verifying whether the system behaviour is linearizable. A preliminary evaluation shows that JepREST simplifies the testing of REST applications.
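
A minimal sketch of the testing pattern described above, assuming a hypothetical register-style REST service at `http://localhost:8080/register`: concurrent clients issue reads and writes while every invocation and response is recorded into a timed history, which a linearizability checker (Knossos, in Jepsen's case) would then analyse. JepREST's actual generator and checker are considerably richer.

```python
import threading, time, requests  # `requests` assumed available

BASE = "http://localhost:8080/register"   # hypothetical REST service under test
history, lock = [], threading.Lock()

def record(client, op, value, start, result):
    with lock:
        history.append({"client": client, "op": op, "value": value,
                        "start": start, "end": time.monotonic(), "result": result})

def client(cid):
    for i in range(5):
        start = time.monotonic()
        if i % 2 == 0:   # alternate writes and reads on a shared register
            r = requests.put(BASE, json={"value": cid * 10 + i}, timeout=2)
            record(cid, "write", cid * 10 + i, start, r.status_code)
        else:
            r = requests.get(BASE, timeout=2)
            record(cid, "read", None, start, r.json().get("value"))

threads = [threading.Thread(target=client, args=(c,)) for c in range(3)]
for t in threads: t.start()
for t in threads: t.join()

# The recorded history would then be handed to a linearizability checker,
# which searches for a legal sequential ordering of the operations that is
# consistent with the observed real-time intervals.
print(sorted(history, key=lambda e: e["start"]))
```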

Related content

Service-oriented standards for front-end/back-end communication (Not React)

Hazard rate functions of natural and manufactured systems often show a bathtub-shaped failure rate: a high early rate of failures is followed by an extended period of useful working life where failures are rare, and finally the failure rate increases as the system reaches the end of its life. Parametric modelling of such hazard rate functions can lead to unnecessarily restrictive assumptions on the function shape; however, the most common non-parametric estimator (the Kaplan-Meier estimator) does not allow the requirement that the estimate be bathtub-shaped to be specified. In this paper we extend the approach of Lo and Weng (1989) and specify four non-parametric bathtub hazard rate functions drawn from Gamma process priors. We implement and demonstrate simulation for these four models.
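
One illustrative way to draw a bathtub-shaped hazard from Gamma process priors, assuming a superposition of a decreasing burn-in piece, a constant useful-life level, and an increasing wear-out piece; the paper's four models differ in their exact constructions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 201)          # time grid
dt = np.diff(t)

# Gamma-process increments: independent Gamma(shape = a*dt, scale = b) steps
# yield a nondecreasing cumulative path, usable as a monotone hazard piece.
def gamma_process_path(a, b):
    inc = rng.gamma(shape=a * dt, scale=b)
    return np.concatenate([[0.0], np.cumsum(inc)])

burn_in  = gamma_process_path(0.8, 0.5)[::-1]   # reversed: decreasing early-failure piece
wear_out = gamma_process_path(0.5, 0.4)         # increasing end-of-life piece
base     = 0.05                                  # constant useful-life level

hazard = base + burn_in + wear_out               # one bathtub-shaped draw
print(hazard[:5], hazard[len(hazard) // 2], hazard[-1])
```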

Research challenges such as climate change and the search for habitable planets increasingly use academic and commercial computing resources distributed across different institutions and physical sites. Furthermore, such analyses often require a level of automation that precludes direct human interaction, and securing these workflows involves adherence to security policies across institutions. In this paper, we present a decentralized authorization and security framework that enables researchers to utilize resources across different sites while allowing service providers to maintain autonomy over their secrets and authorization policies. We describe this framework as part of the Tapis platform, a web-based, hosted API used by researchers from multiple institutions, and we measure the performance of various authorization and security queries, including cross-site queries. We conclude with two use case studies -- a project at the University of Hawaii to study climate change and the NASA NEID telescope project that searches the galaxy for exoplanets.
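
A toy sketch of the decentralized decision pattern, with hypothetical site names, users, and a dict-based policy store standing in for Tapis's actual authorization services: each site evaluates requests against its own policies, so secrets and rules never leave the owning site.

```python
# Minimal sketch of decentralized authorization: each site evaluates requests
# against its *own* policy store, so no site has to share secrets or rules.
# Site names, roles, and the policy format are hypothetical, not Tapis's API.

SITE_POLICIES = {
    "site-hawaii": {("alice", "climate-data", "read"): True,
                    ("alice", "climate-data", "write"): False},
    "site-neid":   {("bob", "telescope-queue", "submit"): True},
}

def authorize(site, user, resource, action):
    """Route the decision to the site that owns the resource."""
    policy = SITE_POLICIES.get(site, {})
    return policy.get((user, resource, action), False)   # default deny

# A cross-site query simply asks the remote site's authorization service;
# here that round trip is simulated with a local lookup.
assert authorize("site-hawaii", "alice", "climate-data", "read")
assert not authorize("site-hawaii", "alice", "climate-data", "write")
assert not authorize("site-neid", "alice", "telescope-queue", "submit")
```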

We propose an integration of distributionally robust risk allocation into sampling-based motion planning algorithms for robots operating in uncertain environments. We perform non-uniform risk allocation by decomposing the distributionally robust joint risk constraints, defined over the entire planning horizon, into individual risk constraints given the total risk budget. Specifically, the deterministic tightening induced by the individual risk constraints is leveraged to define our proposed exact risk allocation procedure. Embedding this risk allocation technique into sampling-based motion planning algorithms yields guaranteed conservative, yet increasingly risk-feasible, trajectories for efficient state-space exploration.
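
A small sketch of the risk decomposition step under a simplifying Gaussian assumption (the paper works with distributionally robust bounds instead): the joint risk budget is split into per-step budgets, each of which induces a deterministic tightening of the corresponding constraint. All numbers are illustrative.

```python
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf
TOTAL_RISK = 0.05          # joint risk budget over the horizon
H = 10                     # planning horizon
sigma = [0.1 + 0.02 * k for k in range(H)]   # per-step uncertainty (assumed)

# Uniform allocation: delta_k = Delta / H for every step.
uniform = [TOTAL_RISK / H] * H

# Non-uniform allocation (illustrative heuristic): assign more of the budget
# to steps with larger uncertainty; the paper's exact procedure differs.
weights = [s / sum(sigma) for s in sigma]
nonuniform = [TOTAL_RISK * w for w in weights]

def tightening(delta_k, sigma_k):
    """Deterministic tightening of a Gaussian chance constraint: keep a
    margin of sigma_k * Phi^{-1}(1 - delta_k) from the constraint boundary."""
    return sigma_k * Phi_inv(1.0 - delta_k)

for name, alloc in [("uniform", uniform), ("non-uniform", nonuniform)]:
    margins = [tightening(d, s) for d, s in zip(alloc, sigma)]
    print(name, [round(m, 3) for m in margins])
```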

Several kernel-based testing procedures are proposed to solve the problem of model selection in the presence of parameter estimation in a family of candidate models. Extending the two-sample test of Gretton et al. (2006), we first provide a way of testing whether some data is drawn from a given parametric model (model specification). Second, we provide a test statistic to decide whether two parametric models are equally valid to describe some data (model comparison), in the spirit of Vuong (1989). All our tests are asymptotically standard normal under the null, even when the true underlying distribution belongs to the competing parametric families. Some simulations illustrate the performance of our tests in terms of power and level.
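
For concreteness, here is the classic unbiased squared-MMD statistic of Gretton et al. (2006) that these tests build on; the paper's statistics additionally account for estimated parameters, which this sketch omits.

```python
import numpy as np

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased squared Maximum Mean Discrepancy with a Gaussian kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))  # drop diagonal terms
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 1))   # sample from the data distribution
Y = rng.normal(0.3, 1.0, size=(200, 1))   # sample from a candidate model
print(mmd2_unbiased(X, Y))                # near 0 iff the distributions match
```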

In recent years, distributed learning schemes have received increasing attention for their strong advantages in handling large-scale data. To address the big-data challenges that arise in functional data analysis, we propose a novel distributed gradient descent functional learning (DGDFL) algorithm that tackles functional data across numerous local machines (processors) in the framework of reproducing kernel Hilbert spaces. Based on integral operator approaches, we provide the first theoretical understanding of the DGDFL algorithm across many different aspects. As a first step, a data-based gradient descent functional learning (GDFL) algorithm associated with a single-machine model is proposed and comprehensively studied. Under mild conditions, confidence-based optimal learning rates of DGDFL are obtained without the saturation boundary on the regularity index suffered in previous works on functional regression. We further provide a semi-supervised DGDFL approach to weaken the restriction on the maximal number of local machines and ensure optimal rates. To the best of our knowledge, DGDFL provides the first distributed iterative training approach to functional learning and enriches the toolkit of functional data analysis.
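
A minimal divide-and-conquer sketch of the DGDFL idea, with plain least-squares regression in R^d standing in for functional regression in an RKHS: each local machine runs gradient descent on its own data, and the local estimators are then averaged.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_gdfl(X, y, lr=0.1, steps=200):
    """Single-machine gradient descent for least squares; a stand-in for
    GDFL, whose iterates live in an RKHS of functions rather than R^d."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

# Divide-and-conquer: train on each local machine, then average estimators.
d, m, n_local = 20, 5, 100
w_true = rng.normal(size=d)
local_estimators = []
for _ in range(m):
    X = rng.normal(size=(n_local, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_local)
    local_estimators.append(local_gdfl(X, y))

w_bar = np.mean(local_estimators, axis=0)    # the distributed estimator
print(np.linalg.norm(w_bar - w_true))        # small: averaging recovers w_true
```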

NUBO, short for Newcastle University Bayesian Optimisation, is a Bayesian optimisation framework for optimising expensive-to-evaluate black-box functions, such as physical experiments and computer simulators. Bayesian optimisation is a cost-efficient optimisation strategy that uses surrogate modelling via Gaussian processes to represent the objective function and acquisition functions to guide the selection of candidate points towards the global optimum. NUBO focuses on transparency and user experience to make Bayesian optimisation easily accessible to researchers from all disciplines. Clean and understandable code, precise references, and thorough documentation ensure transparency, while a modular and flexible design, easy-to-write syntax, and careful selection of Bayesian optimisation algorithms ensure a good user experience. NUBO allows users to tailor Bayesian optimisation to their specific problem by writing the optimisation loop themselves using the provided building blocks. It supports sequential single-point, parallel multi-point, and asynchronous optimisation of bounded, constrained, and/or mixed (discrete and continuous) parameter spaces. Only algorithms and methods that have been extensively tested and validated are included in NUBO, so the package remains compact and does not overwhelm users with an unnecessarily large number of options. The package is written in Python but does not require expert knowledge of Python to optimise simulators and experiments. NUBO is distributed as open-source software under the BSD 3-Clause licence.
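
An illustrative user-written Bayesian optimisation loop of the kind NUBO enables, using a hand-rolled Gaussian process and expected improvement; this is not NUBO's actual API, for which its documentation should be consulted.

```python
import numpy as np
from scipy.stats import norm   # assumed available

def rbf(A, B, ls=0.2):
    return np.exp(-((A[:, None] - B[None, :]) ** 2) / (2 * ls ** 2))

def gp_posterior(Xt, yt, Xq, noise=1e-6):
    """Posterior mean and std of a zero-mean GP with an RBF kernel."""
    K = rbf(Xt, Xt) + noise * np.eye(len(Xt))
    Ks, Kss = rbf(Xt, Xq), rbf(Xq, Xq)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ yt
    var = np.clip(np.diag(Kss - Ks.T @ sol), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, best):
    z = (best - mu) / sd          # minimisation convention
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

f = lambda x: np.sin(3 * x) + 0.5 * x        # toy stand-in for the black box
X = np.array([0.1, 0.5, 0.9]); y = f(X)      # initial design
grid = np.linspace(0, 1, 200)
for _ in range(10):                          # the user-written loop
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
print("best x:", X[np.argmin(y)], "f:", y.min())
```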

By virtue of its technological and economic advantages, cloud computing has attracted a large number of potential cloud consumers (PCCs) planning to migrate their traditional business to cloud services. However, trust has become one of the most challenging issues preventing PCCs from adopting cloud services, especially in trustworthy cloud service selection. Besides, due to the diversity and dynamics of quality of service (QoS) in the cloud environment, existing trust assessment methods based on a single constant value per QoS attribute and on subjective weight assignment are not good enough to help PCCs identify and select a trustworthy cloud service among a wide range of functionally equivalent cloud service providers (CSPs). To address this challenge, a novel assessment and selection framework for trustworthy cloud services, FASTCloud, is proposed in this study. The framework helps PCCs select a trustworthy cloud service based on their actual QoS requirements. To assess the trust level of cloud services accurately and efficiently, a QoS-based trust assessment model is proposed. This model combines a trust level assessment method based on interval-valued multiple attributes with an objective weight assignment method based on deviation maximization, in order to adaptively determine the trust level of the cloud services provisioned by candidate CSPs. The advantage of the proposed trust level assessment method in time complexity is demonstrated by a performance analysis and comparison. The experimental results of a case study on an open-source dataset show that the trust model is efficient in cloud service trust assessment and that FASTCloud can effectively help PCCs select a trustworthy cloud service.
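
A compact sketch of the deviation-maximization weighting idea, with interval QoS attributes reduced to midpoints and all attributes treated as benefit-type for brevity; the paper's interval arithmetic and assessment model are richer.

```python
import numpy as np

# QoS matrix for 4 candidate CSPs x 3 attributes, given as [low, high]
# intervals (illustrative values, e.g. availability, response time, throughput).
qos = np.array([
    [[0.90, 0.99], [120, 180], [0.7, 0.8]],
    [[0.85, 0.95], [100, 140], [0.6, 0.9]],
    [[0.95, 0.99], [150, 200], [0.8, 0.9]],
    [[0.80, 0.90], [ 90, 130], [0.5, 0.7]],
])

mid = qos.mean(axis=2)                         # interval midpoints
# Normalise each attribute to [0, 1] so deviations are comparable
# (cost-type attributes such as response time would be inverted in practice).
normed = (mid - mid.min(0)) / (mid.max(0) - mid.min(0))

# Deviation maximisation: weight each attribute by how much it discriminates
# between providers, i.e. the total pairwise deviation along that attribute.
dev = np.abs(normed[:, None, :] - normed[None, :, :]).sum(axis=(0, 1))
weights = dev / dev.sum()                      # objective attribute weights

trust = normed @ weights                       # aggregate trust level per CSP
print("weights:", weights.round(3), "trust:", trust.round(3))
```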

We study best arm identification in a variant of the multi-armed bandit problem where the learner has limited precision in arm selection. The learner can only sample arms via certain exploration bundles, which we refer to as boxes. In particular, at each sampling epoch, the learner selects a box, which in turn causes an arm to get pulled as per a box-specific probability distribution. The pulled arm and its instantaneous reward are revealed to the learner, whose goal is to find the best arm by minimising the expected stopping time, subject to an upper bound on the error probability. We present an asymptotic lower bound on the expected stopping time, which holds as the error probability vanishes. We show that the optimal allocation suggested by the lower bound is, in general, non-unique and therefore challenging to track. We propose a modified tracking-based algorithm to handle non-unique optimal allocations, and demonstrate that it is asymptotically optimal. We also present non-asymptotic lower and upper bounds on the stopping time in the simpler setting when the arms accessible from one box do not overlap with those of others.
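
A toy simulation of the sampling model, with assumed Bernoulli arms and box-to-arm distributions; the uniform plug-in allocation below is only a stand-in for the lower-bound allocation, which in the paper is non-unique and requires a modified tracking rule.

```python
import numpy as np

rng = np.random.default_rng(2)

mu = np.array([0.2, 0.5, 0.8])          # true arm means (Bernoulli, assumed)
# Box-to-arm distributions: selecting a box pulls an arm at random from it.
boxes = np.array([[0.7, 0.3, 0.0],
                  [0.0, 0.4, 0.6]])

counts, sums = np.zeros(3), np.zeros(3)
for t in range(5000):
    # Tracking-style rule (simplified): force initial exploration of both
    # boxes, then follow a plug-in allocation; a uniform allocation stands
    # in here for the optimal one computed from the lower bound.
    b = t % 2 if t < 20 else rng.integers(2)
    arm = rng.choice(3, p=boxes[b])      # the box, not the arm, is chosen
    counts[arm] += 1
    sums[arm] += rng.binomial(1, mu[arm])

means = sums / np.maximum(counts, 1)
print("empirical arm means:", means.round(3), "-> best arm:", means.argmax())
```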

Given a poorly documented neural network model, we take the perspective of a forensic investigator who wants to find out the model's data domain (e.g. whether it was trained on face images or traffic signs). Although existing methods such as membership inference and model inversion can uncover some information about an unknown model, they still require knowledge of the data domain to start with. In this paper, we propose solving this problem by leveraging a comprehensive corpus such as ImageNet to select a meaningful distribution that is close to the original training distribution and leads to high performance in follow-up investigations. The corpus comprises two components: a large dataset of samples, and meta-information such as hierarchical structure and textual descriptions of the samples. Our goal is to select a set of samples from the corpus for the given model. The core of our method is an objective function that considers two criteria on the selected samples: the model's functional properties (derived from the dataset) and semantics (derived from the metadata). We also give an algorithm to efficiently search the large space of all possible subsets with respect to the objective function. Experimental results show that the proposed method is effective. For example, cloning a given model (originally trained with CIFAR-10) by using Caltech 101 achieves 45.5% accuracy; using datasets selected by our method improves the accuracy to 72.0%.
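
A sketch of a greedy subset search over such an objective, with hypothetical per-sample scores: `func` stands in for the model-derived functional criterion and a label-overlap bonus stands in for the metadata-derived semantic criterion.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scores for 1000 corpus samples: `func` measures how strongly
# the unknown model reacts to each sample (e.g. prediction confidence).
func = rng.random(1000)
labels = rng.integers(0, 20, size=1000)        # fake metadata classes

def semantic_bonus(selected, idx, labels):
    """Toy semantics term: prefer samples sharing metadata labels with the
    already-selected set (stand-in for the corpus hierarchy/text signals)."""
    if not selected:
        return 0.0
    return np.mean(labels[list(selected)] == labels[idx])

selected, alpha = set(), 0.5                   # alpha trades off the criteria
for _ in range(50):                            # greedy search over subsets
    best, best_score = None, -np.inf
    for i in set(range(1000)) - selected:
        score = func[i] + alpha * semantic_bonus(selected, i, labels)
        if score > best_score:
            best, best_score = i, score
    selected.add(best)
print(len(selected), "samples selected")
```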

Federated learning is a new distributed machine learning framework in which a set of heterogeneous clients collaboratively train a model without sharing training data. In this work, we consider a practical and ubiquitous issue in federated learning: intermittent client availability, where the set of eligible clients may change during the training process. Such intermittent availability significantly deteriorates the performance of the classical Federated Averaging algorithm (FedAvg for short). We propose a simple distributed non-convex optimization algorithm, called Federated Latest Averaging (FedLaAvg for short), which leverages the latest gradients of all clients, even when the clients are not available, to jointly update the global model in each iteration. Our theoretical analysis shows that FedLaAvg attains a convergence rate of $O(1/(N^{1/4} T^{1/2}))$, achieving sublinear speedup with respect to the total number of clients. We implement and evaluate FedLaAvg on the CIFAR-10 dataset. The evaluation results demonstrate that FedLaAvg indeed achieves a sublinear speedup and 4.23% higher test accuracy than FedAvg.
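
A minimal simulation of the latest-averaging rule on toy quadratic client objectives (the objectives and the availability model are illustrative): each round refreshes gradients only for available clients but averages the latest stored gradients of all clients, including stale ones.

```python
import numpy as np

rng = np.random.default_rng(4)
N, d, T, lr = 10, 5, 200, 0.05
w = np.zeros(d)
latest = np.zeros((N, d))                 # last observed gradient per client
                                          # (zero-initialised before first visit)
def local_grad(c, w):
    """Toy heterogeneous quadratic objective for client c (illustrative)."""
    target = np.full(d, c / N)
    return w - target

for t in range(T):
    # Intermittent availability: each client is online with probability 0.5.
    online = rng.random(N) < 0.5
    for c in np.flatnonzero(online):
        latest[c] = local_grad(c, w)      # refresh only the available clients
    # FedLaAvg-style step: average the *latest* gradients of ALL clients,
    # including stale ones from clients that are currently offline.
    w -= lr * latest.mean(axis=0)

print("final model:", w.round(3))         # approaches the mean client target
```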
