
Testing is a significant aspect of software development. As systems become more complex and more critical to the security and functioning of society, the need for testing methodologies that ensure reliability and detect faults as early as possible becomes crucial. The most promising approach is model-based testing, in which a model defines how the system is expected to behave and react; tests are derived from the model, and the test results are analyzed against it. We will investigate the prospects of using Behavioral Programming (BP) for a model-based testing (MBT) approach that we will develop. We will design a natural language for representing the requirements, and the resulting model will be fed to algorithms that we will develop, including algorithms for the automatic creation of minimal sets of test cases that cover all of the system's requirements, for analyzing the results of the tests, and other tools that support the testing process. Our methodology will focus on finding faults caused by interactions between different requirements in ways that are difficult for testers to detect. Specifically, we will focus our attention on concurrency issues such as deadlocks and logical race conditions. We will use a variety of methods made possible by BP, such as non-deterministic execution of scenarios and in-code model checking, for building test scenarios and for finding a minimal coverage of the system requirements by test scenarios using Combinatorial Test Design (CTD) methodologies. We will develop a proof-of-concept toolkit that will allow us to demonstrate and evaluate the above-mentioned capabilities. We will compare the performance of our tools with that of manual testers and of other model-based tools, using comparison criteria that we will define and develop.
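To make the request/wait/block idiom underlying BP concrete, here is a minimal, hypothetical sketch (not the proposed toolkit): b-threads are Python generators that yield synchronization points, and a naive scheduler picks an event that is requested and not blocked. A full BP framework would add non-deterministic event selection and model checking; the event names and the deterministic choice below are purely illustrative.

```python
def run(bthreads):
    """Naive BP scheduler: repeatedly trigger a requested, un-blocked event."""
    sync = {bt: next(bt) for bt in bthreads}   # advance each b-thread to its first sync point
    log = []
    while sync:
        blocked = set().union(*(s.get("block", set()) for s in sync.values()))
        requested = set().union(*(s.get("request", set()) for s in sync.values()))
        enabled = sorted(requested - blocked)
        if not enabled:
            break                              # no event can be triggered
        event = enabled[0]                     # a real scheduler could choose non-deterministically
        log.append(event)
        next_sync = {}
        for bt, s in sync.items():
            if event in s.get("request", set()) | s.get("wait", set()):
                try:
                    next_sync[bt] = bt.send(event)   # resume b-threads affected by the event
                except StopIteration:
                    pass                             # b-thread finished
            else:
                next_sync[bt] = s                    # unaffected b-threads keep their sync point
        sync = next_sync
    return log

def three(event):
    """B-thread that requests `event` three times."""
    for _ in range(3):
        yield {"request": {event}}

def interleave():
    """B-thread that forces strict HOT/COLD alternation via blocking."""
    while True:
        yield {"wait": {"HOT"}, "block": {"COLD"}}
        yield {"wait": {"COLD"}, "block": {"HOT"}}
```

Running `run([three("HOT"), three("COLD"), interleave()])` interleaves the two requesting b-threads, illustrating how cross-cutting requirements are kept in separate scenarios.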


This new edition of the TOOLS conference series revives a tradition of 50 conferences held from 1989 to 2012. TOOLS originally stood for "Technology of Object-Oriented Languages and Systems" and later grew to cover all innovative aspects of software technology. Many of today's most important software concepts were first introduced there. TOOLS 50+1, held in 2019 near Kazan, Russia, continued the series in the same spirit of innovation, with enthusiasm for all things software, a combination of scientific rigor and industrial applicability, and openness to all trends and communities in the field.
February 7, 2022

Probabilistic pushdown automata (pPDA) are a standard operational model for programming languages involving discrete random choices, procedures, and returns. Temporal properties are useful for gaining insight into the chronological order of events during program execution. Existing approaches in the literature have focused mostly on $\omega$-regular and LTL properties. In this paper, we study the model checking problem of pPDA against $\omega$-visibly pushdown languages, which can be described by specification logics such as CaRet and are strictly more expressive than $\omega$-regular properties. With these logical formulae, it is possible to specify properties that explicitly take into account the structured computations arising from procedural programs. For example, CaRet is able to match procedure calls with their corresponding future returns, and thus makes it possible to express fundamental program properties like total and partial correctness.

In these lecture notes, we review our recent work addressing various problems of finding the nearest stable system to an unstable one. After the introduction, we provide some preliminary background, namely, defining port-Hamiltonian systems and dissipative Hamiltonian systems and their properties, briefly discussing matrix factorizations, and describing the optimization methods that we will use in these notes. In the third chapter, we present our approach to tackling the distance to stability for standard continuous linear time-invariant (LTI) systems. The main idea is to rely on the characterization of stable systems as dissipative Hamiltonian systems. We show how this idea can be generalized to compute the nearest $\Omega$-stable matrix, where the eigenvalues of the sought system matrix $A$ are required to belong to a rather general set $\Omega$. We also show how these ideas can be used to compute minimal-norm static feedbacks, that is, to stabilize a system by choosing a proper input $u(t)$ that depends linearly on $x(t)$ (static-state feedback) or on $y(t)$ (static-output feedback). In the fourth chapter, we present our approach to tackling the distance to passivity. The main idea is to rely on the characterization of passive systems as port-Hamiltonian systems. We also discuss in more detail the special case of computing the nearest stable matrix pair. In the last chapter, we focus on discrete-time LTI systems. As in the continuous case, we propose a parametrization that allows us to efficiently compute the nearest stable system (for matrices and matrix pairs), and thus to compute the distance to stability. We show how this idea can be used in data-driven system identification, that is, given a set of input-output pairs, to identify the system matrix $A$.
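The dissipative Hamiltonian characterization referred to above can, roughly, be stated as follows (notation ours; a hedged summary, not the notes' exact formulation): a matrix $A$ is stable if and only if it admits a factorization

$$A = (J - R)Q, \qquad J = -J^{\top}, \quad R \succeq 0, \quad Q \succ 0,$$

so that the nearest-stable-matrix problem can be reformulated as the smooth, parametrized optimization problem

$$\inf_{J = -J^{\top},\; R \succeq 0,\; Q \succ 0} \; \|A - (J - R)Q\|_F^2,$$

which replaces the highly non-convex set of stable matrices by the (still non-convex but much better behaved) set of DH factorizations.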

Sorting is needed in many application domains. The data is read from memory and sent to a general-purpose processor or application-specific hardware for sorting, and the sorted data is then written back to the memory. Reading/writing data from/to memory and transferring data between memory and the processing unit incur large latency and energy overheads. In this work, we develop, to the best of our knowledge, the first architectures for in-memory sorting of data. We propose two architectures. The first architecture is applicable to the conventional format of representing data, weighted binary radix. The second architecture is proposed for developing unary processing systems, where data is encoded as uniform unary bitstreams. The two architectures have different advantages and disadvantages, making one or the other more suitable for a specific application. However, the common property of both is a significant reduction in processing time compared to prior sorting designs. Our evaluations show, on average, 37x and 138x energy reductions for the binary and unary designs, respectively, as compared to conventional CMOS off-memory sorting systems.
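A property that makes unary (thermometer-coded) bitstreams attractive for sorting hardware is that a compare-and-swap unit reduces to two gates: bitwise AND yields the minimum and bitwise OR the maximum of two unary values. The sketch below (our illustration, not the paper's architecture) builds a standard 4-input sorting network from such units in software:

```python
def unary_encode(value, n):
    """Thermometer/unary encoding: `value` ones followed by zeros (length n)."""
    return [1] * value + [0] * (n - value)

def cas(a, b):
    """Compare-and-swap on unary bitstreams: AND gives min, OR gives max."""
    lo = [x & y for x, y in zip(a, b)]
    hi = [x | y for x, y in zip(a, b)]
    return lo, hi

def sort4(vals, n=8):
    """Classic 4-input sorting network, each comparator a (min, max) gate pair."""
    a, b, c, d = (unary_encode(v, n) for v in vals)
    a, b = cas(a, b)
    c, d = cas(c, d)
    a, c = cas(a, c)
    b, d = cas(b, d)
    b, c = cas(b, c)
    return [sum(x) for x in (a, b, c, d)]   # decode back to integers
```

In hardware, the same network needs only AND/OR gates per comparator, which is what makes in-memory realizations with bitwise operations plausible.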

The stock market can easily be seen as one of the most attractive places for investors, but it is also very complex in terms of making trading decisions. Predicting the market is a risky venture because of its uncertainties and nonlinear nature. Deciding on the right time to trade is key for every successful trader, as it can lead either to a huge gain or to a total loss of the investment, recorded as a careless trade. The aim of this research is to develop a prediction system for the stock market using Type-2 Fuzzy Logic, which can handle these uncertainties and the complexities of human behaviour in buy, hold, or sell decision making in stock trading. The proposed system was developed using the VB.NET programming language as the frontend and Microsoft SQL Server as the backend. Four technical indicators were selected for this research: the Relative Strength Index (RSI), William Average (WA), Moving Average Convergence Divergence (MACD), and Stochastic Oscillator (SO). These indicators serve as input variables to the fuzzy system. The MACD and SO are deployed as primary indicators, while the RSI and WA are used as secondary indicators. Fibonacci retracement ratios were adopted for the secondary indicators to determine their support and resistance levels for making trading decisions. The input variables to the fuzzy system are fuzzified to Low, Medium, and High using triangular and Gaussian membership functions, and Mamdani-type fuzzy inference rules are used to combine the trading rules for each input variable. The developed system was tested using sample data collected from ten different companies listed on the Nigerian Stock Exchange over a total of fifty-two periods. The data collected are the opening, high, low, and closing prices of each security.
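The fuzzification and rule-firing steps can be sketched as follows. This is a hypothetical type-1 simplification (a full type-2 system would carry an interval of membership grades); the breakpoints and the single rule are illustrative, not those of the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_rsi(rsi):
    """Fuzzify an RSI reading (0-100) into Low/Medium/High (breakpoints assumed)."""
    return {"low": tri(rsi, -1, 0, 40),
            "medium": tri(rsi, 30, 50, 70),
            "high": tri(rsi, 60, 100, 101)}

def buy_strength(rsi, macd_bullish):
    """Toy Mamdani-style rule: IF RSI is Low AND MACD is bullish THEN Buy.
    Firing strength uses the min t-norm; `macd_bullish` is a membership grade."""
    return min(fuzzify_rsi(rsi)["low"], macd_bullish)
```

A Mamdani system would aggregate the clipped consequents of many such rules with max and defuzzify (e.g., centroid) to obtain the final buy/hold/sell signal.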

Collaborative robotic systems will be a key enabling technology for current and future industrial applications. The main aspect of such applications is to guarantee safety for humans. To detect hazardous situations, current commercially available robotic systems rely on direct physical contact with the co-working person. To further advance this technology, there are multiple efforts to develop predictive capabilities for such systems. Using motion tracking sensors and pose estimation systems combined with adequate predictive models, potential episodes of hazardous collisions between humans and robots can be predicted. Based on the provided predictive information, the robotic system can avoid physical contact by adjusting speed or position. A potential approach for such systems is to perform human motion prediction with machine learning methods like Artificial Neural Networks. In our approach, the motion patterns of the past seconds are used to predict future ones by applying a linear Tensor-on-Tensor regression model, selected according to a similarity measure between motion sequences obtained by Dynamic Time Warping. For testing and validation of our proposed approach, industrial pseudo-assembly tasks were recorded with a motion capture system, providing unique traceable Cartesian coordinates $(x, y, z)$ for each human joint. The prediction of repetitive human motions associated with assembly tasks, whose data vary significantly in length and have highly correlated variables, has been achieved in real time.
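The Dynamic Time Warping similarity measure used for model selection can be sketched with the textbook dynamic program; this is the standard algorithm on scalar sequences (for joint trajectories, the per-frame cost would be a vector distance), not code from the paper:

```python
import math

def dtw(s, t):
    """Dynamic Time Warping distance between two sequences of scalars.
    D[i][j] = cost of best alignment of s[:i] and t[:j]."""
    n, m = len(s), len(t)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # extend the cheaper of: insertion, deletion, or match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because DTW aligns sequences of different lengths, it is a natural choice for comparing repetitive assembly motions that vary in duration.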

"Program sensitivity" measures the distance between the outputs of a program when it is run on two related inputs. This notion, which plays an important role in areas such as data privacy and optimization, has been the focus of several program analysis techniques introduced in recent years. One approach that has proved particularly fruitful for this domain is the use of type systems inspired by linear logic, as pioneered by Reed and Pierce in the Fuzz programming language. In Fuzz, each type is equipped with its own notion of distance, and the typing rules explain how those distances can be treated soundly when analyzing the sensitivity of a program. In particular, Fuzz features two products types, corresponding to two different sensitivity analyses: the "tensor product" combines the distances of each component by adding them, while the "with product" takes their maximum. In this work, we show that products in Fuzz can be generalized to arbitrary $L^p$ distances, metrics that are often used in privacy and optimization. The original Fuzz products, tensor and with, correspond to the special cases $L^1$ and $L^\infty$. To simplify the handling of such products, we extend the Fuzz type system with bunches -- as in the logic of bunched implications -- where the distances of different groups of variables can be combined using different $L^p$ distances. We show that our extension can be used to reason about important examples of metrics between probability distributions in a natural way.

We present the design and implementation of a taskable reactive mobile manipulation system. In contrast to related work, we treat the arm and base degrees of freedom as a holistic structure, which greatly improves the speed and fluidity of the resulting motion. At the core of this approach is a robust and reactive motion controller which can achieve a desired end-effector pose while avoiding joint position and velocity limits and ensuring the mobile manipulator is manoeuvrable throughout the trajectory. This can support sensor-based behaviours such as closed-loop visual grasping. As no planning is involved in our approach, the robot is never stationary thinking about what to do next. We show the versatility of our holistic motion controller by implementing a pick-and-place system using behaviour trees and demonstrate this task on a 9-degree-of-freedom mobile manipulator. Additionally, we provide an open-source implementation of our motion controller for both non-holonomic and omnidirectional mobile manipulators, available at jhavl.github.io/holistic.

Specialized domain knowledge is often necessary to accurately annotate training sets for in-depth analysis, but it can be burdensome and time-consuming to acquire from domain experts. This issue arises prominently in automated behavior analysis, in which agent movements or actions of interest are detected from video tracking data. To reduce annotation effort, we present TREBA: a method to learn an annotation-sample-efficient trajectory embedding for behavior analysis, based on multi-task self-supervised learning. The tasks in our method can be efficiently engineered by domain experts through a process we call "task programming", which uses programs to explicitly encode structured knowledge from domain experts. Total domain expert effort can be reduced by exchanging data annotation time for the construction of a small number of programmed tasks. We evaluate this trade-off using data from behavioral neuroscience, in which specialized domain knowledge is used to identify behaviors. We present experimental results on three datasets across two domains: mice and fruit flies. Using embeddings from TREBA, we reduce annotation burden by up to a factor of 10 without compromising accuracy compared to state-of-the-art features. Our results thus suggest that task programming and self-supervision can be an effective way to reduce annotation effort for domain experts.
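To make "task programming" concrete, a programmed task is essentially a function over trajectory data whose output serves as a self-supervised training target in place of manual labels. The example below is hypothetical (the function name and the two-agent setup are our illustration, not a task from the paper):

```python
import math

def inter_agent_distance_task(traj_a, traj_b):
    """A hypothetical 'programmed task': the per-frame Euclidean distance
    between two tracked agents (e.g., two mice). Trajectories are lists of
    (x, y) keypoint coordinates; the output is a regression target that
    encodes the expert intuition that proximity is behaviorally relevant."""
    return [math.dist(p, q) for p, q in zip(traj_a, traj_b)]
```

A handful of such programs, each cheap to write for a domain expert, replaces many hours of frame-by-frame annotation when used as auxiliary decoding tasks during embedding training.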

The problem of Approximate Nearest Neighbor (ANN) search is fundamental in computer science and has benefited from significant progress in the past couple of decades. However, most work has been devoted to pointsets, whereas complex shapes have not been sufficiently treated. Here, we focus on distance functions between discretized curves in Euclidean space: they appear in a wide range of applications, from road segments to time-series in general dimension. For $\ell_p$-products of Euclidean metrics, for any $p$, we design simple and efficient data structures for ANN, based on randomized projections, which are of independent interest. They serve to solve proximity problems under a notion of distance between discretized curves which generalizes both the discrete Fr\'echet and Dynamic Time Warping distances, the most popular and practical approaches to comparing such curves. We offer the first data structures and query algorithms for ANN with arbitrarily good approximation factor, at the expense of increased space usage and preprocessing time over existing methods. Query time complexity is comparable to, or significantly improved over, existing methods; our algorithms are especially efficient when the length of the curves is bounded.
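The randomized projections underlying such data structures are, in spirit, Johnson-Lindenstrauss-style Gaussian projections, which map high-dimensional points to a low dimension while approximately preserving pairwise Euclidean distances. A minimal, stdlib-only sketch (ours, not the paper's construction):

```python
import random

def jl_project(points, k, seed=0):
    """Project d-dimensional points to k dimensions with a Gaussian matrix
    scaled by 1/sqrt(k), so squared norms are preserved in expectation.
    Pairwise distances are preserved up to (1 +/- eps) with high probability
    when k = O(log n / eps^2)."""
    rng = random.Random(seed)
    d = len(points[0])
    R = [[rng.gauss(0, 1 / k ** 0.5) for _ in range(d)] for _ in range(k)]
    return [[sum(row[i] * p[i] for i in range(d)) for row in R] for p in points]
```

An ANN structure can then index the projected points, answering queries in the reduced dimension and paying only the approximation factor guaranteed by the projection.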

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is strongly convex and smooth, strongly convex, smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvements of the condition numbers.
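For reference, Nesterov's accelerated gradient iteration for an $L$-smooth, $\mu$-strongly convex objective is short enough to sketch. This is the generic scalar-case method applied directly to a function (in the paper it would run on the dual of the network-constrained problem); the momentum coefficient below is the standard strongly convex choice:

```python
def nesterov(grad, x0, L, mu, iters):
    """Nesterov's accelerated gradient for an L-smooth, mu-strongly convex f.
    Momentum beta = (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu))."""
    x, y = x0, x0
    beta = (L ** 0.5 - mu ** 0.5) / (L ** 0.5 + mu ** 0.5)
    for _ in range(iters):
        x_new = y - grad(y) / L          # gradient step from the extrapolated point
        y = x_new + beta * (x_new - x)   # momentum extrapolation
        x = x_new
    return x
```

The accelerated rate $O(\sqrt{L/\mu}\,\log(1/\epsilon))$ is what carries over, up to the spectral-gap factor, to the distributed dual execution.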
