
A trend has emerged over the past decades toward harnessing structural instability in movable, programmable, and transformable mechanisms. Inspired by a steel hair clip, we combine in-plane assembly with a bistable structure to build a compliant flapping mechanism from semi-rigid plastic sheets, and we install it on both a tethered pneumatic soft robotic fish and an untethered motor-driven one to demonstrate its advantages. Design rules are proposed, supported by theory and experimental verification. The pneumatic fish swims twice as fast as the reference design, and a further study of the untethered fish demonstrates a record-breaking velocity of 2.03 BL/s (43.6 cm/s) for untethered compliant swimmers, outperforming the previously reported fastest one by a significant margin of 194%. This work may herald a structural revolution for next-generation compliant robotics.
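
To make the reported figures concrete, here is a minimal Python sketch of the unit conversion behind the 2.03 BL/s (43.6 cm/s) result; the body length is inferred from those two numbers and is not stated in the text.

```python
# Relative swimming speed in body lengths per second (BL/s).
# The body length below is inferred from 2.03 BL/s = 43.6 cm/s as an
# illustration of the conversion; it is not a stated specification.

def speed_in_bl_per_s(speed_cm_s: float, body_length_cm: float) -> float:
    """Convert an absolute speed to body lengths per second."""
    return speed_cm_s / body_length_cm

body_length_cm = 43.6 / 2.03                     # ~21.5 cm, implied
print(speed_in_bl_per_s(43.6, body_length_cm))   # ~2.03 BL/s
```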

Related Content

A robot is any machine that simulates human behavior or thought, or that simulates other living creatures (such as robotic dogs or robotic cats). In the narrow sense, the definition of a robot is subject to many classification schemes and debates, and some computer programs are even called robots. In modern industry, robots are man-made machines that carry out tasks automatically, used to replace or assist human work; they are typically electromechanical devices controlled by computer programs or electronic circuits.


Bounded model checking (BMC) is an efficient formal verification technique that allows desired properties of a software system to be checked on bounded runs of an abstract model of the system. The properties are frequently described in some temporal logic, and the system is modeled as a state transition system. In this paper we propose a novel counting logic, $\mathcal{L}_{C}$, to describe the temporal properties of client-server systems with an unbounded number of clients. We also propose a two-dimensional bounded model checking ($2D$-BMC) strategy that uses two distinguishable parameters, one for execution steps and another for the number of tokens in the net representing a client-server system; these two evolve separately, which differs from standard BMC techniques in the Petri net formalism. This $2D$-BMC strategy is implemented in a tool called DCModelChecker, which couples the $2D$-BMC technique with a state-of-the-art satisfiability modulo theories (SMT) solver, Z3. The system is given as a Petri net, and properties specified using $\mathcal{L}_{C}$ are encoded into formulas that are checked by the solver. Our tool can also work on industrial benchmarks from the Model Checking Contest (MCC). We report on these experiments to illustrate the applicability of the $2D$-BMC strategy.
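
As a concrete illustration of how such an encoding can be handed to an SMT solver, here is a minimal sketch using the real Z3 Python bindings: a two-place Petri net is unrolled with one bound on execution steps and another on the token count, echoing the two dimensions of $2D$-BMC. The net, the bounds, and the reachability query are toy assumptions; the paper's $\mathcal{L}_{C}$ encoding and DCModelChecker are not reproduced here.

```python
# Minimal 2D-BMC-style unrolling with Z3 (pip install z3-solver).
from z3 import Int, Solver, Or, And, sat

K, N = 5, 3                     # step bound and token bound (the two dimensions)
s = Solver()

# marking of two places at each step: p[i][k] = tokens in place i at step k
p = [[Int(f"p{i}_{k}") for k in range(K + 1)] for i in range(2)]

# initial marking: N clients waiting in place 0, none served in place 1
s.add(p[0][0] == N, p[1][0] == 0)

for k in range(K):
    # tokens stay non-negative and within the token bound (second dimension)
    s.add(And(p[0][k] >= 0, p[1][k] >= 0, p[0][k] + p[1][k] <= N))
    # single transition "serve": move one token from place 0 to place 1,
    # or stutter when it is not enabled
    fire = And(p[0][k] >= 1,
               p[0][k + 1] == p[0][k] - 1,
               p[1][k + 1] == p[1][k] + 1)
    stutter = And(p[0][k + 1] == p[0][k], p[1][k + 1] == p[1][k])
    s.add(Or(fire, stutter))

# bounded reachability query: can all N clients be served within K steps?
s.add(p[1][K] == N)
print("reachable within bounds" if s.check() == sat else "not reachable")
```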

The time needed to apply a hierarchical clustering algorithm is most often dominated by the number of computations of a pairwise dissimilarity measure. For larger data sets, such a constraint puts all the classical linkage criteria except single linkage at a disadvantage. However, the single linkage clustering algorithm is known to be very sensitive to outliers and to produce highly skewed dendrograms, and therefore usually does not reflect the true underlying data structure unless the clusters are well separated. To overcome its limitations, we propose a new hierarchical clustering linkage criterion called Genie. Namely, our algorithm links two clusters in such a way that a chosen economic inequity measure (e.g., the Gini or Bonferroni index) of the cluster sizes does not drastically increase above a given threshold. The presented benchmarks indicate a high practical usefulness of the introduced method: it most often outperforms Ward and average linkage in terms of clustering quality while retaining single linkage's speed. The Genie algorithm is easily parallelizable and may thus be run on multiple threads to speed up its execution even further. Its memory overhead is small: there is no need to precompute the complete distance matrix in order to obtain the desired clustering. It can be applied to arbitrary spaces equipped with a dissimilarity measure, e.g., real vectors, DNA or protein sequences, images, rankings, informetric data, etc. A reference implementation of the algorithm is included in the open source 'genie' package for R. See also //genieclust.gagolewski.com for a newer implementation (genieclust), available for both R and Python.
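
For readers who want to try the criterion, a minimal usage sketch of the genieclust package mentioned above might look as follows; the toy data and parameter values are illustrative, and the parameter names follow the package's documented scikit-learn-style API at the time of writing.

```python
# Minimal genieclust usage sketch (pip install genieclust numpy).
import numpy as np
import genieclust

rng = np.random.default_rng(0)
# two well-separated Gaussian blobs as toy data
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(8, 1, (100, 2))])

# gini_threshold caps the Gini index of cluster sizes, per the Genie criterion
g = genieclust.Genie(n_clusters=2, gini_threshold=0.3)
labels = g.fit_predict(X)
print(np.bincount(labels))   # roughly balanced cluster sizes
```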

Robots working in real environments need to adapt to unexpected changes to avoid failures. This is an open and complex challenge that requires robots to timely predict and identify the causes of failures in order to prevent them. In this paper, we present a causal method that enables robots to predict when errors are likely to occur and to prevent them from happening by executing a corrective action. First, we propose a causal-based method to detect the cause-effect relationships between task executions and their consequences by learning a causal Bayesian network (BN). The obtained model is transferred from simulated data to real scenarios to demonstrate the robustness and generalization of the obtained models. Based on the causal BN, the robot can predict if and why the executed action will succeed or not in its current state. Then, we introduce a novel method that finds the closest state alternatives through a contrastive breadth-first search if the current action is predicted to fail. We evaluate our approach on the problem of stacking cubes in two cases: a) single stacks (stacking one cube) and b) multiple stacks (stacking three cubes). In the single-stack case, our method was able to reduce the error rate by 97%. We also show that our approach can scale to capture multiple actions in one model, allowing it to measure time-shifted action effects, such as the impact of an imprecise stack of the first cube on the stacking success of the third cube. For these complex situations, our model was able to prevent around 75% of the stacking errors, even in the challenging multiple-stack scenario. This demonstrates that our method is able to explain, predict, and prevent execution failures, and that it scales to complex scenarios that require an understanding of how the action history impacts future actions.
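
The contrastive search can be pictured with a small sketch: starting from a state the model predicts to fail, a breadth-first search over nearby discretized states returns the closest one predicted to succeed. The state encoding and the `predict_success` stub below are hypothetical placeholders, not the paper's causal model.

```python
# Illustrative contrastive breadth-first search over discretized states.
from collections import deque
from typing import Callable, Optional, Tuple

State = Tuple[int, ...]   # discretized state, e.g. (x_offset, y_offset)

def closest_success(state: State,
                    predict_success: Callable[[State], bool],
                    max_radius: int = 3) -> Optional[State]:
    """BFS over single-dimension perturbations; returns the nearest state
    predicted to succeed, or None within the search radius."""
    seen, queue = {state}, deque([(state, 0)])
    while queue:
        s, depth = queue.popleft()
        if predict_success(s):
            return s
        if depth == max_radius:
            continue
        for i in range(len(s)):             # perturb one dimension at a time
            for delta in (-1, 1):
                nxt = s[:i] + (s[i] + delta,) + s[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return None

# toy predictor: stacking succeeds when the gripper offset is exactly zero
print(closest_success((2, -1), lambda s: abs(s[0]) + abs(s[1]) == 0))  # (0, 0)
```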

The escalating gas-supply crisis in the EU calls for immediate political and regulatory actions to improve energy security. Currently, all member states aim to fill their reservoirs independently, while it is not clear how solidarity will, or even could, be put into practice in the future, i.e., how the accumulated reserves of one or more members might be redistributed to help others in need. In this paper we formalize a simple game-theoretic model to capture the basic features of the problem, accounting for the uncertainty of future conditions such as reservoir levels and possible transmission bottlenecks. We propose a mechanism for supply-security-related cooperation that is based on voluntary participation and may contribute to more efficient utilization of storage capacities if its principles are later implemented. We demonstrate the operation of the proposed framework on a simple example and show that, under the assumption of risk-averse participants, the concept exhibits potential.
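
The intuition behind the risk-aversion result can be illustrated with a toy Monte Carlo comparison: with a concave utility, two members who pool independent shortfall risks obtain higher expected utility than they would alone. The two-member setup, the log utility, and all numbers are illustrative assumptions, not the paper's mechanism.

```python
# Toy illustration of risk pooling under a concave (risk-averse) utility.
import math
import random

random.seed(0)
N_TRIALS = 100_000

def utility(gas: float) -> float:
    return math.log(gas)            # concave => risk-averse

standalone, pooled = 0.0, 0.0
for _ in range(N_TRIALS):
    # each member independently holds 10 units but loses 6 with prob. 0.5
    a = 10 - (6 if random.random() < 0.5 else 0)
    b = 10 - (6 if random.random() < 0.5 else 0)
    standalone += utility(a)
    pooled += utility((a + b) / 2)  # shortfalls are shared equally

print(f"standalone: {standalone / N_TRIALS:.3f}")   # ~1.84
print(f"pooled:     {pooled / N_TRIALS:.3f}")       # ~1.90, higher by Jensen
```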

We propose a novel deep neural network (DNN) based approximation architecture to learn estimates of measurements, and we detail an algorithm that enables training of the DNN. The DNN estimator only uses measurements if and when they are received over a communication network. The measurements are communicated over the network as packets, at a rate unknown to the estimator; packets may suffer drops and need retransmission, and they may suffer waiting delays as they traverse a network path. Works on estimation often assume knowledge of the dynamic model of the measured system, which may not be available in practice. The DNN estimator assumes knowledge of neither the dynamic system model nor the communication network, and it does not require a history of measurements, which is often used by other works. In simulations of linear and nonlinear dynamic systems, the DNN estimator achieves significantly smaller average estimation error than the commonly used time-varying Kalman filter and the unscented Kalman filter. The DNN need not be trained separately for different communication network settings, and it is robust to errors in the estimation of network delays that occur due to imperfect time synchronization between the measurement source and the estimator. Last but not least, our simulations shed light on the rate of updates that results in low estimation error.
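
A minimal PyTorch sketch of an estimator in this spirit is given below, assuming the network sees only the latest received measurement, an estimated delay, and a freshness flag; the layer sizes and input layout are our assumptions, not the paper's architecture.

```python
# Minimal model-free estimator sketch (pip install torch).
import torch
import torch.nn as nn

class DNNEstimator(nn.Module):
    def __init__(self, meas_dim: int = 2, hidden: int = 64):
        super().__init__()
        # input: measurement, estimated delay, and a "fresh packet" flag
        self.net = nn.Sequential(
            nn.Linear(meas_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, meas_dim),     # state estimate
        )

    def forward(self, meas, delay, received):
        x = torch.cat([meas, delay, received], dim=-1)
        return self.net(x)

est = DNNEstimator()
meas = torch.randn(8, 2)                  # batch of last-received measurements
delay = torch.rand(8, 1)                  # estimated network delay (s)
received = torch.ones(8, 1)               # 1 if a new packet arrived this step
print(est(meas, delay, received).shape)   # torch.Size([8, 2])
```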

We propose quantum subroutines for the simplex method that avoid classical computation of the basis inverse. We show how to quantize all steps of the simplex algorithm, including checking optimality, unboundedness, and identifying a pivot (i.e., pricing the columns and performing the ratio test) according to Dantzig's rule or the steepest edge rule. The quantized subroutines obtain a polynomial speedup in the dimension of the problem, but have worse dependence on other numerical parameters. For example, for a problem with $m$ constraints, $n$ variables, at most $d_c$ nonzero elements per column of the constraint matrix, at most $d$ nonzero elements per column or row of the basis, basis condition number $\kappa$, and optimality tolerance $\epsilon$, pricing can be performed in $\tilde{O}(\frac{1}{\epsilon}\kappa d \sqrt{n}(d_c n + d m))$ time, where the $\tilde{O}$ notation hides polylogarithmic factors; classically, pricing requires $O(d_c^{0.7} m^{1.9} + m^{2 + o(1)} + d_c n)$ time in the worst case using the fastest known algorithm for sparse matrix multiplication. For well-conditioned sparse problems the quantum subroutines scale better in $m$ and $n$, and may therefore have an advantage for very large problems. The running time of the quantum subroutines can be improved if the constraint matrix admits an efficient algorithmic description, or if quantum RAM is available.
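
To get a feel for the quoted bounds, the following back-of-the-envelope script evaluates both expressions for one arbitrary choice of parameters, ignoring the polylogarithmic factors hidden by $\tilde{O}$; it only illustrates the trade-off between the dimension dependence and the $\kappa$ and $1/\epsilon$ factors, and proves nothing about crossover points.

```python
# Back-of-the-envelope comparison of the two pricing bounds quoted above.
m, n = 10**6, 10**6        # constraints and variables
d_c, d = 10, 10            # sparsity of constraint matrix and basis
kappa, eps = 100, 1e-2     # basis condition number, optimality tolerance

quantum   = (1 / eps) * kappa * d * n**0.5 * (d_c * n + d * m)
classical = d_c**0.7 * m**1.9 + m**2 + d_c * n   # m^{2+o(1)} taken as m^2

print(f"quantum   ~ {quantum:.2e}")    # grows mildly in m, n; steeply in kappa, 1/eps
print(f"classical ~ {classical:.2e}")  # dominated by the ~m^2 terms
```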

We present a new template for building oblivious transfer from quantum information that we call the "fixed basis" framework. Our framework departs from prior work (e.g., Crepeau and Kilian, FOCS '88) by fixing the correct choice of measurement basis used by each player, except for some hidden trap qubits that are intentionally measured in a conjugate basis. We instantiate this template in the quantum random oracle model (QROM) to obtain simple protocols that implement, with security against malicious adversaries: 1. Non-interactive random-input bit OT in a model where parties share EPR pairs a priori. 2. Two-round random-input bit OT without setup, obtained by showing that the protocol above remains secure even if the (potentially malicious) OT receiver sets up the EPR pairs. 3. Three-round chosen-input string OT from BB84 states without entanglement or setup. This improves upon natural variations of the CK88 template that require at least five rounds. Along the way, we develop technical tools that may be of independent interest. We prove that natural functions like XOR enable seedless randomness extraction from certain quantum sources of entropy. We also use idealized (i.e., extractable and equivocal) bit commitments, which we obtain by proving security of simple and efficient constructions in the QROM.
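
The XOR-extraction claim has a familiar classical analogue that is easy to simulate: XOR-ing blocks of independent biased bits drives the output distribution toward uniform as the block grows (the piling-up lemma). The sketch below shows only this classical stand-in; the paper's result concerns quantum sources of entropy.

```python
# Classical toy analogue of XOR-based randomness extraction.
import random

random.seed(1)

def weak_bit(bias: float = 0.3) -> int:
    """Independent bits that are 1 with probability 0.5 + bias."""
    return 1 if random.random() < 0.5 + bias else 0

def xor_extract(block_size: int, n_samples: int = 200_000) -> float:
    """Empirical probability that the XOR of a block equals 1."""
    ones = 0
    for _ in range(n_samples):
        bit = 0
        for _ in range(block_size):
            bit ^= weak_bit()
        ones += bit
    return ones / n_samples

for k in (1, 2, 4, 8):
    print(k, round(xor_extract(k), 3))   # P(1) approaches 1/2 as k grows
```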

This paper proposes a novel method for geo-tracking, i.e., continuous metric self-localization in outdoor environments, by registering a vehicle's sensor information with aerial imagery of an unseen target region. Geo-tracking methods offer the potential to supplant noisy signals from global navigation satellite systems (GNSS) and the expensive, hard-to-maintain prior maps that are typically used for this purpose. The proposed geo-tracking method aligns data from on-board cameras and lidar sensors with geo-registered orthophotos to continuously localize a vehicle. We train a model in a metric learning setting to extract visual features from ground and aerial images. The ground features are projected into a top-down perspective via the lidar points and are matched with the aerial features to determine the relative pose between vehicle and orthophoto. Our method is the first to utilize on-board cameras in an end-to-end differentiable model for metric self-localization on unseen orthophotos. It exhibits strong generalization, is robust to changes in the environment, and requires only geo-poses as ground truth. We evaluate our approach on the KITTI-360 dataset and achieve a mean absolute position error (APE) of 0.94 m. We further compare with previous approaches on the KITTI odometry dataset and achieve state-of-the-art results on the geo-tracking task.
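
The projection step can be sketched in a few lines: per-point visual features are splatted into a top-down grid using the lidar geometry, ready for matching against aerial features. The grid size, resolution, and feature dimension below are arbitrary choices, and the paper's learned features and matching are not reproduced.

```python
# Illustrative lidar-based top-down (bird's-eye-view) feature projection.
import numpy as np

def project_to_bev(points_xyz: np.ndarray, feats: np.ndarray,
                   grid: int = 128, res: float = 0.5) -> np.ndarray:
    """Average per-point features into a grid x grid top-down map
    (res meters per cell, vehicle at the grid center)."""
    bev = np.zeros((grid, grid, feats.shape[1]))
    cnt = np.zeros((grid, grid, 1))
    ij = np.floor(points_xyz[:, :2] / res).astype(int) + grid // 2
    ok = (ij >= 0).all(1) & (ij < grid).all(1)   # keep in-grid points only
    for (i, j), f in zip(ij[ok], feats[ok]):
        bev[i, j] += f
        cnt[i, j] += 1
    return bev / np.maximum(cnt, 1)

pts = np.random.randn(1000, 3) * 10          # stand-in lidar points (m)
ft = np.random.randn(1000, 16)               # stand-in per-point features
print(project_to_bev(pts, ft).shape)         # (128, 128, 16)
```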

This paper addresses efficient feasibility evaluation of possible emergency landing sites, online navigation, and path following for automatic landing under engine-out failure subject to turbulent weather. The proposed Multi-level Adaptive Safety Control framework enables unmanned aerial vehicles (UAVs) under large uncertainties to perform safety maneuvers traditionally reserved for human pilots with sufficient experience. In this framework, a simplified flight model is first used for time-efficient feasibility evaluation of a set of landing sites and trajectory generation. Then, an online path following controller is employed to track the selected landing trajectory. We used a high-fidelity simulation environment for a fixed-wing aircraft to test and validate the proposed approach under various weather uncertainties. For the case of emergency landing due to engine failure under severe weather conditions, the simulation results show that the proposed automatic landing framework is robust to uncertainties and adaptable at different landing stages while being computationally inexpensive for planning and tracking tasks.
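
A simplified flavor of the first-stage feasibility evaluation, assuming a basic engine-out glide model: a landing site is reachable only if the required glide ratio to it does not exceed the aircraft's best glide ratio, de-rated by a margin for turns and turbulence. The numbers and the margin are illustrative assumptions, not the paper's flight model.

```python
# Toy engine-out reachability check for candidate landing sites.
import math

def site_feasible(alt_m: float, dx_m: float, dy_m: float,
                  best_glide_ratio: float = 12.0,
                  margin: float = 0.8) -> bool:
    """True if the site at horizontal offset (dx, dy) is reachable from alt_m."""
    ground_dist = math.hypot(dx_m, dy_m)
    max_range = alt_m * best_glide_ratio * margin   # de-rated glide range
    return ground_dist <= max_range

sites = [(1500.0, 8000.0, 3000.0), (1500.0, 14000.0, 6000.0)]
for alt, dx, dy in sites:
    print((dx, dy), "feasible" if site_feasible(alt, dx, dy) else "out of range")
```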

Content-based image retrieval aims to find images similar to a query image in a large-scale dataset. Generally, the similarity between the representative features of the query image and the dataset images is used to rank the images for retrieval. In the early days, various hand-designed feature descriptors were investigated based on visual cues such as color, texture, and shape. However, over the past decade, deep learning has emerged as a dominant alternative to hand-designed feature engineering: it learns the features automatically from the data. This paper presents a comprehensive survey of deep learning based developments in content-based image retrieval over the past decade. Existing state-of-the-art methods are categorized from different perspectives for a greater understanding of the progress. The taxonomy used in this survey covers different kinds of supervision, networks, descriptor types, and retrieval types. A performance analysis of the state-of-the-art methods is also provided, along with insights to help researchers observe the progress and make the best choices. The survey presented in this paper will support further research progress in image retrieval using deep learning.
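
As one concrete instance of the pipeline this survey categorizes, the sketch below extracts deep features with a pretrained CNN and ranks a gallery by cosine similarity; the choice of ResNet-18 and cosine ranking is one common setup among the many surveyed, not a recommendation from the text.

```python
# Minimal deep-feature retrieval sketch (pip install torch torchvision).
import torch
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # keep the 512-d global feature
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 224, 224), normalized; returns L2-normalized features."""
    f = backbone(images)
    return torch.nn.functional.normalize(f, dim=1)

query = torch.randn(1, 3, 224, 224)       # stand-in for a preprocessed image
gallery = torch.randn(100, 3, 224, 224)   # stand-in for the dataset images
scores = embed(query) @ embed(gallery).T  # cosine similarity
print(scores.topk(5).indices)             # indices of the 5 most similar images
```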
