
The change-point detection problem has been widely studied in the time series and signal processing literature. Current methods can be summarized as the search for an appropriate partition of the whole series, so that the problem can be cast as one of optimization; however, exact optimization can be computationally expensive, and approximate approaches discard potential change-point configurations in a non-rigorous manner. We therefore present a framework to detect change-points in a univariate time series using a decision criterion based on the Minimum Description Length (MDL), modified to incorporate a Bayesian analysis. To search for change-points, the times at which deviations from the mean occur (exceedances) are identified, and a genetic algorithm, using the modified MDL as its fitness function, evaluates which of these constitute change-points. The effectiveness of the method is assessed through a simulation study, and its practical validity is examined on a real dataset of Particulate Matter smaller than 2.5 microns (PM2.5) in Bogot\'a, Colombia, over the 2018-2020 period, under different settings chosen to understand the algorithm's convergence. We find that this objective function tends to recover both the number of change-points and their locations more accurately in most cases, reducing the error compared to other methods available in the literature.
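As a rough illustration of the search strategy this abstract describes, the sketch below pairs a two-part, Gaussian-coded MDL cost (a simplified stand-in for the Bayesian-modified criterion, whose exact form is not given here) with a small genetic algorithm over binary masks of candidate exceedance times. All function names and GA settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mdl_cost(x, cps):
    """Two-part MDL score: Gaussian code length per segment plus a crude
    penalty for encoding the number and locations of change-points.
    A simplified stand-in for the Bayesian-modified MDL in the abstract."""
    n = len(x)
    bounds = [0] + sorted(cps) + [n]
    cost = np.log2(n)                          # bits for the count (crude)
    for a, b in zip(bounds[:-1], bounds[1:]):
        seg = x[a:b]
        if len(seg) < 2:
            return np.inf                      # reject degenerate segments
        var = max(seg.var(), 1e-12)
        cost += 0.5 * len(seg) * np.log2(2 * np.pi * np.e * var)
        cost += np.log2(n)                     # bits for this location
    return cost

def ga_search(x, candidates, pop=40, gens=100, rng=np.random.default_rng(0)):
    """Genetic algorithm over binary masks of candidate exceedance times."""
    k = len(candidates)
    P = rng.random((pop, k)) < 0.1             # sparse initial population
    for _ in range(gens):
        fit = np.array([mdl_cost(x, candidates[m]) for m in P])
        P = P[np.argsort(fit)][: pop // 2]     # truncation selection
        cross = rng.integers(0, len(P), (pop - len(P), 2))
        mix = rng.random((pop - len(P), k)) < 0.5
        children = np.where(mix, P[cross[:, 0]], P[cross[:, 1]])  # crossover
        children ^= rng.random(children.shape) < 1.0 / max(k, 1)  # mutation
        P = np.vstack([P, children])
    fit = np.array([mdl_cost(x, candidates[m]) for m in P])
    return candidates[P[np.argmin(fit)]]

# Usage: x is the series; candidates are exceedance times (indices).
x = np.r_[np.random.normal(0, 1, 200), np.random.normal(3, 1, 200)]
print(ga_search(x, np.array([150, 180, 200, 220, 300])))
```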

Related content

The unit commitment (UC) problem, which determines operating schedules of generation units to meet demand, is a fundamental task in power systems operation. Existing UC methods using mixed-integer programming are not well-suited to highly stochastic systems. Approaches which more rigorously account for uncertainty could yield large reductions in operating costs by reducing spinning reserve requirements; operating power stations at higher efficiencies; and integrating greater volumes of variable renewables. A promising approach to solving the UC problem is reinforcement learning (RL), a methodology for optimal decision-making which has been used to conquer long-standing grand challenges in artificial intelligence. This thesis explores the application of RL to the UC problem and addresses challenges including robustness under uncertainty; generalisability across multiple problem instances; and scaling to larger power systems than previously studied. To tackle these issues, we develop guided tree search, a novel methodology combining model-free RL and model-based planning. The UC problem is formalised as a Markov decision process and we develop an open-source environment based on real data from Great Britain's power system to train RL agents. In problems of up to 100 generators, guided tree search is shown to be competitive with deterministic UC methods, reducing operating costs by up to 1.4\%. An advantage of RL is that the framework can be easily extended to incorporate considerations important to power systems operators such as robustness to generator failure, wind curtailment or carbon prices. When generator outages are considered, guided tree search saves over 2\% in operating costs as compared with methods using conventional $N-x$ reserve criteria.
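To make the MDP formulation concrete, here is a minimal toy environment in the spirit of the one the thesis describes (the real open-source environment is built on Great Britain data; everything below, including the greedy dispatch rule and the lost-load penalty constant, is an illustrative assumption):

```python
import numpy as np

class UnitCommitmentEnv:
    """Toy UC MDP: the action is an on/off vector for the generators; the
    reward is the negative operating cost after a greedy economic dispatch
    against demand that deviates stochastically from its forecast."""

    def __init__(self, min_out, max_out, cost_per_mw, demand_forecast, sigma=20.0):
        self.min_out = np.asarray(min_out, float)
        self.max_out = np.asarray(max_out, float)
        self.cost = np.asarray(cost_per_mw, float)
        self.forecast = np.asarray(demand_forecast, float)
        self.sigma = sigma                   # std. dev. of forecast error
        self.t = 0

    def step(self, commit, rng=np.random.default_rng()):
        demand = self.forecast[self.t] + rng.normal(0.0, self.sigma)
        out = self.min_out * commit          # committed units at minimum
        residual = demand - out.sum()
        for i in np.argsort(self.cost):      # merit-order dispatch
            if commit[i] and residual > 0:
                extra = min(residual, self.max_out[i] - self.min_out[i])
                out[i] += extra
                residual -= extra
        penalty = 1e4 * max(residual, 0.0)   # lost-load penalty
        reward = -(out @ self.cost + penalty)
        self.t += 1
        return (self.t, out), reward, self.t >= len(self.forecast)
```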

Background: Continuous experimentation (CE) has been proposed as a data-driven approach to software product development. Several challenges with this approach have been described in large organisations, but its application in smaller companies with early-stage products remains largely unexplored. Aims: The goal of this study is to understand what factors could affect the adoption of CE in early-stage software startups. Method: We present a descriptive multiple-case study of five startups in Finland which differ in their utilisation of experimentation. Results: We find that practices often mentioned as prerequisites for CE, such as iterative development and continuous integration and delivery, were used in the case companies. CE was not widely recognised or used as described in the literature. Only one company performed experiments and used experimental data systematically. Conclusions: Our study indicates that small companies may be unlikely to adopt CE unless 1) at least some company employees have prior experience with the practice, 2) the company's limited available resources are not exceeded by its adoption, and 3) the practice solves a problem currently experienced by the company, or the company perceives almost immediate benefit of adopting it. We discuss implications for advancing CE in early-stage startups and outline directions for future research on the approach.

Machine learning models are known to be susceptible to adversarial perturbation. One famous attack is the adversarial patch, a sticker with a particularly crafted pattern that makes the model incorrectly predict the object it is placed on. This attack presents a critical threat to cyber-physical systems that rely on cameras, such as autonomous cars. Despite the significance of the problem, conducting research in this setting has been difficult: evaluating attacks and defenses in the real world is exceptionally costly, while synthetic data are unrealistic. In this work, we propose the REAP (REalistic Adversarial Patch) benchmark, a digital benchmark that allows the user to evaluate patch attacks on real images and under real-world conditions. Built on top of the Mapillary Vistas dataset, our benchmark contains over 14,000 traffic signs. Each sign is augmented with a pair of geometric and lighting transformations, which can be used to apply a digitally generated patch realistically onto the sign. Using our benchmark, we perform the first large-scale assessment of adversarial patch attacks under realistic conditions. Our experiments suggest that adversarial patch attacks may present a smaller threat than previously believed and that the success rate of an attack on simpler digital simulations is not predictive of its actual effectiveness in practice. We release our benchmark publicly at //github.com/wagner-group/reap-benchmark.
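The per-sign augmentation could look roughly like the following sketch, which warps a patch onto a sign quadrilateral with a homography and relights it with a per-channel affine transform before compositing; the function and its parameters are hypothetical, not the benchmark's API:

```python
import numpy as np
import cv2  # OpenCV, for the perspective warp

def apply_patch(image, patch, corners, gain, bias):
    """Warp a square patch onto the sign quadrilateral `corners`
    (4x2, clockwise from top-left), relight it with a per-channel affine
    transform (gain, bias), and composite it onto the scene image.
    A sketch of the geometric + lighting augmentation described above."""
    h, w = patch.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, np.float32(corners))
    dsize = image.shape[1::-1]               # (width, height)
    warped = cv2.warpPerspective(patch, H, dsize)
    mask = cv2.warpPerspective(np.ones((h, w), np.float32), H, dsize)
    relit = np.clip(warped.astype(np.float32) * gain + bias, 0, 255)
    mask = mask[..., None]
    return (image * (1 - mask) + relit * mask).astype(image.dtype)
```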

We formulate a class of angular Gaussian distributions that allows different degrees of isotropy for directional random variables of arbitrary dimension. Through a series of novel reparameterizations, this distribution family is indexed by parameters with meaningful statistical interpretations that can range over the entire real space of an adequate dimension. The new parameterization greatly simplifies maximum likelihood estimation of all model parameters, which in turn leads to theoretically sound and numerically stable procedures for inferring key features of the distribution. Byproducts of the likelihood-based inference are used to develop graphical and numerical diagnostic tools for assessing the goodness of fit of this distribution in a data application. A simulation study and an application to data from a hydrogeology study demonstrate the implementation and performance of the inference procedures and diagnostic methods.
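For intuition, the angular (projected) Gaussian on the unit sphere is the law of z/||z|| for a multivariate normal z, so sampling is straightforward; a minimal sketch (the paper's reparameterization itself is not reproduced here):

```python
import numpy as np

def sample_angular_gaussian(mu, sigma, n, rng=np.random.default_rng(0)):
    """Directional samples on the unit sphere: the angular (projected)
    Gaussian is the law of z/||z|| for z ~ N(mu, Sigma). The degree of
    (an)isotropy is controlled through mu and Sigma."""
    z = rng.multivariate_normal(mu, sigma, size=n)
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Example: nearly isotropic directions in R^3 versus concentrated ones.
iso = sample_angular_gaussian(np.zeros(3), np.eye(3), 1000)
conc = sample_angular_gaussian(np.array([5.0, 0.0, 0.0]), 0.5 * np.eye(3), 1000)
```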

The problem of adversarial defenses for image classification, where the goal is to robustify a classifier against adversarial examples, is considered. Inspired by the hypothesis that these examples lie beyond the natural image manifold, a novel aDversarIal defenSe with local impliCit functiOns (DISCO) is proposed to remove adversarial perturbations by localized manifold projections. DISCO consumes an adversarial image and a query pixel location and outputs a clean RGB value at that location. It is implemented with an encoder and a local implicit module, where the former produces per-pixel deep features and the latter uses the features in the neighborhood of the query pixel to predict the clean RGB value. Extensive experiments demonstrate that both DISCO and its cascade version outperform prior defenses, regardless of whether the defense is known to the attacker. DISCO is also shown to be data- and parameter-efficient and to mount defenses that transfer across datasets, classifiers and attacks.
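A minimal PyTorch sketch of this encoder-plus-local-implicit design, with illustrative layer sizes rather than the paper's architecture, might look like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalImplicitDefense(nn.Module):
    """Sketch of a DISCO-style pipeline: a convolutional encoder yields
    per-pixel deep features; a small MLP maps the feature sampled at a
    continuous query location, plus the coordinates, to a clean RGB value."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.implicit = nn.Sequential(
            nn.Linear(feat_dim + 2, 128), nn.ReLU(),
            nn.Linear(128, 3), nn.Sigmoid(),   # clean RGB in [0, 1]
        )

    def forward(self, adv_image, coords):
        # adv_image: (B, 3, H, W); coords: (B, Q, 2) in [-1, 1].
        feats = self.encoder(adv_image)
        # Bilinear sampling aggregates the neighborhood of each query pixel.
        sampled = F.grid_sample(feats, coords.unsqueeze(1),
                                align_corners=False)       # (B, C, 1, Q)
        sampled = sampled.squeeze(2).transpose(1, 2)        # (B, Q, C)
        return self.implicit(torch.cat([sampled, coords], dim=-1))
```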

Nonlinear Markov chains (nMCs) can be regarded as (linear) Markov chains subject to small nonlinear perturbations. They fit real-world data better, but their properties are difficult to describe. A new approach is proposed to analyze the ergodicity of nMCs and to estimate their convergence bounds, and it is more precise than existing results. In the new method, a coupling of homogeneous Markov chains is used to relate the distribution at any time to the limiting distribution, and the convergence bounds are obtained from the transition probability matrix of the coupled chain. Moreover, a new volatility measure, called TV Volatility, can be computed from the convergence bounds together with wavelet analysis and a Gaussian HMM. The method is tested by estimating the volatility of two securities (TSLA and AMC); the results show that TV Volatility reflects the magnitude of changes in squared returns over a period remarkably well.
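For the linear backbone chain, a classical coupling argument yields a geometric total-variation bound through the Dobrushin contraction coefficient; the short sketch below illustrates that standard bound (it is not the paper's sharper nonlinear analysis):

```python
import numpy as np

def dobrushin(P):
    """Dobrushin contraction coefficient delta(P) = 0.5 * max_{i,j} ||P_i - P_j||_1.
    A coupling argument gives ||mu P^n - pi||_TV <= delta(P)^n * ||mu - pi||_TV."""
    n = len(P)
    return 0.5 * max(np.abs(P[i] - P[j]).sum()
                     for i in range(n) for j in range(n))

def tv_bound(P, mu, pi, n):
    """Geometric bound on total-variation distance after n steps."""
    tv0 = 0.5 * np.abs(mu - pi).sum()
    return dobrushin(P) ** n * tv0

P = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([2 / 3, 1 / 3])            # stationary distribution of P
print(tv_bound(P, np.array([1.0, 0.0]), pi, n=10))
```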

Polynomials are common algebraic structures that are often used to approximate functions, including probability distributions. This paper proposes to define polynomial distributions directly, in order to describe the stochastic properties of systems, rather than to use polynomials only for approximating known or empirically estimated distributions. Polynomial distributions offer great modeling flexibility and, often, mathematical tractability as well. However, unlike canonical distributions, polynomial functions may take negative values on the interval of support for some parameter values, the number of their parameters is usually much larger than for canonical distributions, and the interval of support must be finite. In particular, polynomial distributions are defined here assuming three forms of polynomial function. The transformation of polynomial distributions and the fitting of a histogram to a polynomial distribution are considered. The key properties of polynomial distributions are derived in closed form. A piecewise polynomial distribution construction is devised to ensure non-negativity over the support interval. Finally, the problems of estimating the parameters of polynomial distributions and generating polynomially distributed samples are also studied.
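A small sketch of the basic workflow, normalizing a polynomial into a density on a finite support after checking non-negativity and sampling via a numerical inverse CDF (the paper's closed-form estimators are not reproduced):

```python
import numpy as np

def make_polynomial_pdf(coeffs, a, b):
    """Normalize p(x) = c0 + c1 x + ... into a density on [a, b], after
    verifying it is non-negative there (polynomials can dip below zero
    for some parameter values, as the abstract notes)."""
    p = np.polynomial.Polynomial(coeffs)
    grid = np.linspace(a, b, 2001)
    if (p(grid) < 0).any():
        raise ValueError("polynomial is negative on the support interval")
    Z = p.integ()(b) - p.integ()(a)          # closed-form normalizer
    return lambda x: p(x) / Z

def sample(coeffs, a, b, n, rng=np.random.default_rng(0)):
    """Inverse-CDF sampling on a grid (numerical, for illustration)."""
    pdf = make_polynomial_pdf(coeffs, a, b)
    grid = np.linspace(a, b, 2001)
    cdf = np.cumsum(pdf(grid)); cdf /= cdf[-1]
    return np.interp(rng.random(n), cdf, grid)

# Example: the quadratic density p(x) = 0.5 + 1.5 x^2 on [0, 1].
draws = sample([0.5, 0.0, 1.5], 0.0, 1.0, n=1000)
```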

This article develops a convex description of a classical or quantum learner's or agent's state of knowledge about its environment, presented as a convex subset of a commutative R-algebra. With caveats, this leads to a generalization of certain semidefinite programs in quantum information (such as those describing the universal query algorithm dual to the quantum adversary bound, related to optimal learning or control of the environment) to the classical and faulty-quantum setting, which would not be possible with a naive description via joint probability distributions over environment and internal memory. More philosophically, it also makes an interpretation of the set of reduced density matrices as "states of knowledge" of an observer of its environment, related to these techniques, more explicit. As another example, I describe and solve a formal differential equation of states of knowledge in that algebra, where an agent obtains experimental data in a Poissonian process, and its state of knowledge evolves as an exponential power series. However, this framework currently lacks impressive applications, and I post it in part to solicit feedback and collaboration on those. In particular, it may be possible to develop it into a new framework for the design of experiments, e.g. the problem of finding maximally informative questions to ask human labelers or the environment in machine-learning problems. The parts of the article not related to quantum information don't assume knowledge of it.

Image captioning is a challenging task that combines the fields of computer vision and natural language processing. A variety of approaches have been proposed to achieve the goal of automatically describing an image, and recurrent neural network (RNN) or long short-term memory (LSTM) based models dominate this field. However, RNNs and LSTMs cannot be computed in parallel and ignore the underlying hierarchical structure of a sentence. In this paper, we propose a framework that employs only convolutional neural networks (CNNs) to generate captions. Owing to parallel computing, our basic model is around 3 times faster than NIC (an LSTM-based model) during training, while also providing better results. We conduct extensive experiments on MSCOCO and investigate the influence of model width and depth. Compared with LSTM-based models that apply similar attention mechanisms, our proposed model achieves comparable BLEU-1,2,3,4 and METEOR scores and higher CIDEr scores. We also test our model on a paragraph annotation dataset and obtain a higher CIDEr score than hierarchical LSTMs.
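A schematic causal-convolution decoder of the kind such CNN captioners use, with illustrative sizes rather than the paper's configuration, could be sketched as follows; unlike an LSTM, all time steps are computed in parallel during training:

```python
import torch
import torch.nn as nn

class ConvCaptionDecoder(nn.Module):
    """Sketch of a CNN caption decoder: stacked left-padded (causal) 1-D
    convolutions over word embeddings, conditioned on an image feature.
    Sizes are illustrative, not the paper's."""

    def __init__(self, vocab, dim=256, layers=3, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.img_proj = nn.Linear(2048, dim)   # e.g. a CNN image feature
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel) for _ in range(layers))
        self.pad = kernel - 1                  # causal left padding
        self.out = nn.Linear(dim, vocab)

    def forward(self, tokens, img_feat):
        # tokens: (B, T) word ids; img_feat: (B, 2048).
        x = self.embed(tokens) + self.img_proj(img_feat).unsqueeze(1)
        x = x.transpose(1, 2)                  # (B, dim, T)
        for conv in self.convs:
            h = torch.relu(conv(nn.functional.pad(x, (self.pad, 0))))
            x = x + h                          # residual connection
        return self.out(x.transpose(1, 2))     # (B, T, vocab) logits
```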

Salient object detection is a problem that has been studied in detail, with many solutions proposed. In this paper, we argue that work to date has addressed a problem that is relatively ill-posed. Specifically, there is no universal agreement about what constitutes a salient object when multiple observers are queried. This implies that some objects are more likely to be judged salient than others, and that a relative rank exists among salient objects. The solution presented in this paper addresses this more general problem of relative rank, and we propose data and metrics suitable for measuring success in this relative-saliency setting. A novel deep learning solution is proposed based on a hierarchical representation of relative saliency and stage-wise refinement. We also show that the problem of salient object subitizing can be addressed with the same network, and our approach exceeds the performance of all prior work across every metric considered (both traditional and newly proposed).
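One plausible reading of a rank-aware metric for this setting is a normalized Spearman rank-order correlation between ground-truth and predicted object orderings; the sketch below is a hedged illustration, not necessarily the paper's exact metric definition:

```python
import numpy as np
from scipy.stats import spearmanr

def rank_agreement(gt_order, pred_saliency):
    """Rank-aware evaluation sketch: Spearman's rank-order correlation
    between ground-truth saliency order (higher = more salient) and the
    predicted per-object saliency (e.g. mean mask value), rescaled from
    [-1, 1] to [0, 1]."""
    rho, _ = spearmanr(gt_order, pred_saliency)
    return (rho + 1) / 2

# Example: three objects; perfect agreement scores 1.0.
print(rank_agreement([3, 2, 1], [0.9, 0.6, 0.2]))
```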
