
This paper investigates the observational capabilities of monitors that can observe a system over multiple runs. We study how the augmented monitoring setup affects the class of properties that can be verified at runtime, focusing on branching-time properties expressed in the modal mu-calculus. Our results show that the setup can be used to systematically extend previously established monitorability limits. We also prove bounds that capture the correspondence between the syntactic structure of a branching-time property and the number of system runs required to conduct the verification.
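
To make the multi-run intuition concrete, consider the following illustrative property (our own example, not one taken from the paper):

```latex
% A conjunction of k diamond modalities at the initial state:
\varphi_k \;=\; \bigwedge_{i=1}^{k} \langle a_i \rangle \mathit{tt}
% The property holds iff the system can branch into each of the actions
% a_1, ..., a_k.  A single run witnesses at most one of these branches, so a
% monitor needs to observe at least k distinct runs to verify it positively.
```

Properties of this shape already suggest the kind of correspondence between syntactic structure (here, the number of conjoined diamonds) and the required number of runs that the paper's bounds formalise.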

Related Content

Modern ML prediction models are surprisingly accurate in practice and incorporating their power into algorithms has led to a new research direction. Algorithms with predictions have already been used to improve on worst-case optimal bounds for online problems and for static graph problems. With this work, we initiate the study of the complexity of {\em data structures with predictions}, with an emphasis on dynamic graph problems. Unlike the independent work of v.d.~Brand et al.~[arXiv:2307.09961] that aims at upper bounds, our investigation is focused on establishing conditional fine-grained lower bounds for various notions of predictions. Our lower bounds are conditioned on the Online Matrix Vector (OMv) hypothesis. First, we show that a prediction-based algorithm for OMv provides a smooth transition between the known bounds for the offline and the online setting, and then show that this algorithm is essentially optimal under the OMv hypothesis. Further, we introduce and study four different kinds of predictions. (1) For {\em $\varepsilon$-accurate predictions}, where $\varepsilon \in (0,1)$, we show that any lower bound from the non-prediction setting carries over, reduced by a factor of $1-\varepsilon$. (2) For {\em $L$-list accurate predictions}, we show that one can efficiently compute a $(1/L)$-accurate prediction from an $L$-list accurate prediction. (3) For {\em bounded delay predictions} and {\em bounded delay predictions with outliers}, we show that a lower bound from the non-prediction setting carries over if the reduction fulfills a certain reordering condition (which is fulfilled by many reductions from OMv for dynamic graph problems). This is demonstrated by showing lower and almost tight upper bounds for a concrete dynamic graph problem, called $\# s \textrm{-} \triangle$, where the number of triangles that contain a fixed vertex $s$ must be reported.
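
For concreteness, the $\# s \textrm{-} \triangle$ problem can be maintained with a naive baseline as sketched below; this is only meant to pin down the problem and is not the prediction-based algorithm analysed in the paper (class and method names are ours):

```python
class DynamicTriangleCounter:
    """Naive baseline for #s-triangle: maintain the number of triangles containing a
    fixed vertex s under edge insertions and deletions (not the paper's algorithm)."""

    def __init__(self, s):
        self.s = s
        self.adj = {}      # vertex -> set of neighbours
        self.count = 0     # current number of triangles through s

    def _nbrs(self, v):
        return self.adj.setdefault(v, set())

    def insert(self, u, v):
        if u == self.s or v == self.s:
            other = v if u == self.s else u
            # New triangles s-other-w for every w adjacent to both s and other.
            self.count += len(self._nbrs(self.s) & self._nbrs(other))
        elif u in self._nbrs(self.s) and v in self._nbrs(self.s):
            # The only triangle through s using edge (u, v) is {s, u, v}.
            self.count += 1
        self._nbrs(u).add(v)
        self._nbrs(v).add(u)

    def delete(self, u, v):
        self._nbrs(u).discard(v)
        self._nbrs(v).discard(u)
        if u == self.s or v == self.s:
            other = v if u == self.s else u
            self.count -= len(self._nbrs(self.s) & self._nbrs(other))
        elif u in self._nbrs(self.s) and v in self._nbrs(self.s):
            self.count -= 1
```

Each update here costs time proportional to a vertex degree; the paper's lower and almost tight upper bounds quantify how much the different kinds of predictions can or cannot improve on such update/query trade-offs.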

With the continuous advancement of robot teleoperation technology, shared control is used to reduce the physical and mental load of the operator in teleoperation systems. This paper proposes an alternating shared control framework for object grasping that considers both the operator's preferences, expressed through their manual manipulation, and the constraints of the follower robot. Switching between manual mode and automatic mode enables the operator to intervene in the task according to their wishes. The generation of the grasping pose takes into account the current state of the operator's hand pose, as well as the manipulability of the robot. The object grasping experiment indicates that the use of the proposed grasping pose selection strategy leads to smoother follower movements when switching from manual mode to automatic mode.
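
The alternating structure can be pictured as a small mode selector. The sketch below is purely schematic: the function name, inputs and the simplistic switching rule are ours and do not reproduce the paper's grasp-pose selection strategy.

```python
import numpy as np

def shared_control_step(manual_twist, auto_twist, operator_active, manipulability,
                        manip_threshold=0.05):
    """Schematic alternating shared control: follow the operator while they are
    actively commanding (manual mode); otherwise execute the autonomous grasp motion
    (automatic mode), but only while the robot keeps sufficient manipulability."""
    if operator_active:
        return manual_twist                  # manual mode: operator preference wins
    if manipulability > manip_threshold:
        return auto_twist                    # automatic mode: planned grasp motion
    return np.zeros_like(auto_twist)         # hold still near low-manipulability poses
```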

For industrial control systems (ICS), many existing defense solutions focus on detecting attacks only when they make the system behave anomalously. Instead, in this work, we study how to detect attackers who are still in their hiding phase. Specifically, we consider an off-path false-data-injection attacker who makes the original sensor's readings unavailable and then impersonates that sensor by sending out legitimate-looking fake readings, so that she can stay hidden in the system for a prolonged period of time (e.g., to gain more information or to launch the actual devastating attack at a specific time). To expose such hidden attackers, our approach relies on continuous injection of ``micro-distortions'' into the original sensor's readings, either through digital or physical means. We keep the distortions strictly within a small magnitude (e.g., $0.5\%$ of the possible operating value range) to ensure that they do not affect the normal functioning of the ICS. Micro-distortions are generated based on secret key(s) shared only between the targeted sensor and the defender. For digitally-inserted micro-distortions, we propose and discuss the pros and cons of a two-layer least-significant-bit-based detection algorithm. Alternatively, when the micro-distortions are added physically, a main design challenge is to ensure that the introduced micro-distortions do not get overwhelmed by the fluctuation of actual readings and can still provide accurate detection capability. Towards that, we propose a simple yet effective Filtered-$\Delta$-Mean-Difference algorithm that can expose the hidden attackers in a highly accurate and fast manner. We demonstrate the effectiveness and versatility of our defense using real-world sensor reading traces from different industrial control (including smart grid) systems.
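
As a rough illustration of the overall idea (keyed micro-distortions plus a mean-difference style check), consider the sketch below. The key derivation, filtering and threshold are placeholders of our own; they do not reproduce the paper's two-layer LSB scheme or the Filtered-$\Delta$-Mean-Difference algorithm.

```python
import hashlib
import hmac
import numpy as np

def keyed_signs(key: bytes, n: int) -> np.ndarray:
    """Derive a pseudo-random +/-1 sequence from the shared secret key (illustrative)."""
    bits = []
    counter = 0
    while len(bits) < n:
        digest = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        bits.extend((byte >> i) & 1 for byte in digest for i in range(8))
        counter += 1
    return np.where(np.array(bits[:n]) == 1, 1.0, -1.0)

def inject_micro_distortion(readings, key, value_range, magnitude=0.005):
    """Add keyed distortions bounded to `magnitude` (e.g. 0.5%) of the operating range."""
    return readings + keyed_signs(key, len(readings)) * magnitude * value_range

def looks_authentic(received, key, value_range, magnitude=0.005, threshold=0.5):
    """Check whether the received stream carries the keyed pattern: genuine readings
    correlate with it, while an impersonator who lacks the key produces no correlation.
    Differencing consecutive samples suppresses the slowly varying physical signal."""
    pattern = np.diff(keyed_signs(key, len(received)) * magnitude * value_range)
    score = np.mean(np.diff(received) * np.sign(pattern)) / (magnitude * value_range)
    return score > threshold    # genuine streams score near 1, forged ones near 0
```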

The majority of fault-tolerant distributed algorithms are designed assuming a nominal corruption model, in which at most a fraction $f_n$ of parties can be corrupted by the adversary. However, due to the infamous Sybil attack, nominal models are not sufficient to express the trust assumptions in open (i.e., permissionless) settings. Instead, permissionless systems typically operate in a weighted model, where each participant is associated with a weight and the adversary can corrupt a set of parties holding at most a fraction $f_w$ of the total weight. In this paper, we suggest a simple way to transform a large class of protocols designed for the nominal model into the weighted model. To this end, we formalize and solve three novel optimization problems, which we collectively call the weight reduction problems, that allow us to map large real weights into small integer weights while preserving the properties necessary for the correctness of the protocols. In all cases, we manage to keep the sum of the integer weights at most linear in the number of parties, resulting in extremely efficient protocols for the weighted model. Moreover, we demonstrate that, on weight distributions that emerge in practice, the sum of the integer weights tends to be far from the theoretical worst case and is often even smaller than the number of participants. While, for some protocols, our transformation requires an arbitrarily small reduction in resilience (i.e., $f_w = f_n - \epsilon$), surprisingly, for many important problems we manage to obtain weighted solutions with the same resilience ($f_w = f_n$) as nominal ones. Notable examples include asynchronous consensus, verifiable secret sharing, erasure-coded distributed storage, and broadcast protocols.
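
A naive form of weight reduction simply rescales and rounds, as in the sketch below; unlike the paper's optimization-based schemes, such rounding distorts relative weights, which is why it generally costs a small resilience loss $\epsilon$ rather than preserving $f_w = f_n$.

```python
import math

def reduce_weights(real_weights, factor=2):
    """Naive weight reduction (not the paper's optimised scheme): rescale the real
    weights so that the rounded integer weights sum to O(n), and give every party
    at least weight 1 so that nobody is silently excluded."""
    n = len(real_weights)
    scale = (factor * n) / sum(real_weights)
    return [max(1, math.floor(w * scale)) for w in real_weights]
```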

Detecting changes that occurred in a pair of 3D airborne LiDAR point clouds, acquired at two different times over the same geographical area, is a challenging task because of unmatching spatial supports and acquisition system noise. Most recent attempts to detect changes in point clouds are based on supervised methods, which require large amounts of labelled data that are unavailable in real-world applications. To address these issues, we propose an unsupervised approach that comprises two components: a Neural Field (NF) for continuous shape reconstruction and a Gaussian Mixture Model (GMM) for categorising changes. NFs offer a grid-agnostic representation for encoding bi-temporal point clouds with unmatched spatial support, and they can be regularised to increase high-frequency details and reduce noise. The reconstructions at each timestamp are compared at arbitrary spatial scales, leading to a significant increase in detection capabilities. We apply our method to a benchmark dataset of simulated LiDAR point clouds for urban sprawl. The dataset offers different challenging scenarios with different resolutions, input modalities and noise levels, allowing a multi-scenario comparison of our method with the current state of the art. We surpass the previous methods on this dataset by a 10% margin in the intersection-over-union metric. In addition, we apply our method to a real-world scenario to identify illegal excavation (looting) of archaeological sites and confirm that the detected changes match findings from field experts.
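
The second component can be approximated with an off-the-shelf mixture model. The snippet below is a simplified stand-in that assumes per-location difference features (e.g. signed height or occupancy differences between the two reconstructions) have already been extracted; the neural fields themselves and the feature extraction are omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def categorise_changes(diff_features: np.ndarray, n_classes: int = 3, seed: int = 0):
    """Fit a GMM to per-location difference features and use its components as
    unsupervised change categories (e.g. unchanged / addition / removal)."""
    gmm = GaussianMixture(n_components=n_classes, random_state=seed)
    labels = gmm.fit_predict(diff_features)   # one categorical label per location
    return labels, gmm
```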

An NP-complete graph decision problem, the "Multi-stage graph Simple Path" (abbr. MSP) problem, is introduced. The main contribution of this paper is a poly-time algorithm named the ZH algorithm for the problem, together with the proof of its correctness, which implies NP=P. (1) A crucial structural property is discovered, whereby all MSP instances are arranged into the sequence $G_{0}$,$G_{1}$,$G_{2}$,... ($G_{k}$ essentially stands for a group of graphs, for all $k\geq 0$). For each $G_{j}$ ($j>0$) in the sequence, there is a graph $G_{i}$ ($0\leq i<j$) that is "mathematically homomorphic" to $G_{j}$ and agrees completely with $G_{j}$ on the existence of global solutions. This naturally opens the possibility of proving an algorithm correct by mathematical induction. In previous attempts, algorithms used for making global decisions were mostly guided by heuristics and intuition. In contrast, the ZH algorithm is specifically designed to comply with the proposed proof framework of mathematical induction. (2) Although the ZH algorithm deals with paths, it always regards paths as collections of edge sets. This is the key to avoiding exponential complexity. (3) Any poly-time algorithm that seeks global information can barely avoid the error caused by localized computation. In the ZH algorithm, the proposed reachable-path edge-set $R(e)$ and the information computed for it carry all necessary contextual information, which can be utilized to summarize the "history" and to detect the "future" when searching for global solutions. (4) The relation between local strategies and global strategies is discovered and established, wherein preceding decisions can impose constraints on subsequent decisions (and vice versa). This interplay resembles the paradigm of dynamic programming, though it is much more convoluted. Nevertheless, the computation is always straightforward and decreases monotonically.

The use of multiple imputation (MI) is becoming increasingly popular for addressing missing data. Although some conventional MI approaches have been well studied and have shown empirical validity, they have limitations when processing large datasets with complex data structures. Their imputation performance usually relies on the proper specification of imputation models, which requires expert knowledge of the inherent relations among variables. Moreover, these standard approaches tend to be computationally inefficient for medium and large datasets. In this paper, we propose a scalable MI framework, mixgb, which is based on XGBoost, subsampling, and predictive mean matching. Our approach leverages the power of XGBoost, a fast implementation of gradient boosted trees, to automatically capture interactions and non-linear relations while achieving high computational efficiency. In addition, we incorporate subsampling and predictive mean matching to reduce bias and better account for appropriate imputation variability. The proposed framework is implemented in the R package mixgb. Supplementary materials for this article are available online.
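
The general recipe (gradient boosting on a subsample of the observed rows, followed by predictive mean matching against observed donors) can be sketched for a single numeric column as follows. This is an illustrative Python analogue, not the implementation in the R package mixgb.

```python
import numpy as np
import xgboost as xgb

def impute_once(X, col, pmm_k=5, subsample=0.7, rng=None):
    """One imputation pass for a single numeric column of a 2D array X with NaNs."""
    rng = np.random.default_rng() if rng is None else rng
    obs = ~np.isnan(X[:, col])
    miss = ~obs
    other = np.delete(np.arange(X.shape[1]), col)

    # Subsample observed rows to reflect sampling variability across imputations.
    idx = np.flatnonzero(obs)
    idx = rng.choice(idx, size=max(1, int(subsample * idx.size)), replace=False)

    model = xgb.XGBRegressor(n_estimators=100, max_depth=4, verbosity=0)
    model.fit(X[idx][:, other], X[idx, col])          # XGBoost handles NaN features

    pred_obs = model.predict(X[obs][:, other])        # predicted means for donors
    pred_miss = model.predict(X[miss][:, other])      # predicted means for missing rows

    donors = X[obs, col]
    imputed = X[:, col].copy()
    for i, p in zip(np.flatnonzero(miss), pred_miss):
        # Predictive mean matching: draw the observed value of one of the k donors
        # whose predicted mean is closest to this row's prediction.
        nearest = np.argsort(np.abs(pred_obs - p))[:pmm_k]
        imputed[i] = donors[rng.choice(nearest)]
    return imputed
```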

Camera-based autonomous systems that emulate human perception are increasingly being integrated into safety-critical platforms. Consequently, an established body of literature has emerged that explores adversarial attacks targeting the underlying machine learning models. Adapting adversarial attacks to the physical world is desirable for the attacker, as this removes the need to compromise digital systems. However, the real world poses challenges related to the "survivability" of adversarial manipulations given environmental noise in perception pipelines and the dynamicity of autonomous systems. In this paper, we take a sensor-first approach. We present EvilEye, a man-in-the-middle perception attack that leverages transparent displays to generate dynamic physical adversarial examples. EvilEye exploits the camera's optics to induce misclassifications under a variety of illumination conditions. To generate dynamic perturbations, we formalize the projection of a digital attack into the physical domain by modeling the transformation function of the captured image through the optical pipeline. Our extensive experiments show that EvilEye's generated adversarial perturbations are much more robust across varying environmental light conditions relative to existing physical perturbation frameworks, achieving a high attack success rate (ASR) while bypassing state-of-the-art physical adversarial detection frameworks. We demonstrate that the dynamic nature of EvilEye enables attackers to adapt adversarial examples across a variety of objects with a significantly higher ASR compared to state-of-the-art physical world attack frameworks. Finally, we discuss mitigation strategies against the EvilEye attack.
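
Projecting a digital attack through the optical pipeline can be set up as a standard gradient-based optimisation. The schematic below assumes a differentiable stand-in `optics` for the camera transformation (brightness, blur, colour response, and so on) and is not EvilEye's exact formulation.

```python
import torch
import torch.nn.functional as F

def optimise_display_pattern(model, optics, images, labels, steps=100, lr=0.01, eps=0.1):
    """Optimise a perturbation shown on a transparent display so that, after passing
    through the modelled optical pipeline, the captured image is misclassified."""
    delta = torch.zeros_like(images, requires_grad=True)
    optimiser = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        captured = optics(images + delta)                  # physical-to-digital transform
        loss = -F.cross_entropy(model(captured), labels)   # maximise classification loss
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                        # keep the pattern feasible
    return delta.detach()
```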

In this work, we bound a machine's ability to learn based on computational limitations implied by physicality. We start by considering the information processing capacity (IPC), a normalized measure of the expected squared error of a collection of signals to a complete basis of functions. We use the IPC to measure the degradation under noise of the performance of reservoir computers, a particular kind of recurrent network, when constrained by physical considerations. First, we show that the IPC is at most a polynomial in the system size $n$, even when considering the collection of $2^n$ possible pointwise products of the $n$ output signals. Next, we argue that this degradation implies that the family of functions represented by the reservoir requires an exponential number of samples to learn in the presence of the reservoir's noise. Finally, we conclude with a discussion of the performance of the same collection of $2^n$ functions without noise when being used for binary classification.
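
For reference, one common way the capacity for a single target signal is defined in the reservoir-computing literature is reproduced below; the paper's normalisation and noise model may differ in detail.

```latex
% Capacity of a reservoir with readout state x(t) for a target signal y(t),
% and the total IPC as a sum over a complete orthogonal basis {y_l}:
C[y] \;=\; 1 \;-\; \min_{w}\,
  \frac{\bigl\langle (\,y(t) - w^{\top}x(t)\,)^{2} \bigr\rangle_{t}}
       {\bigl\langle\, y(t)^{2} \,\bigr\rangle_{t}},
\qquad
\mathrm{IPC} \;=\; \sum_{\ell} C[y_{\ell}]
```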

While existing work in robust deep learning has focused on small pixel-level $\ell_p$ norm-based perturbations, this may not account for perturbations encountered in several real-world settings. In many such cases, although test data might not be available, broad specifications about the types of perturbations (such as an unknown degree of rotation) may be known. We consider a setup where robustness is expected over an unseen test domain that is not i.i.d. but deviates from the training domain. While this deviation may not be exactly known, its broad characterization is specified a priori in terms of attributes. We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attribute space, without having access to data from the test domain. Our adversarial training solves a min-max optimization problem, with the inner maximization generating adversarial perturbations and the outer minimization finding model parameters by optimizing the loss on the adversarial perturbations generated by the inner maximization. We demonstrate the applicability of our approach on three types of naturally occurring perturbations -- object-related shifts, geometric transformations, and common image corruptions. Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations. We demonstrate the usefulness of the proposed approach by showing the robustness gains of deep neural networks trained using our adversarial training on MNIST, CIFAR-10, and a new variant of the CLEVR dataset.
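
The training objective described above can be summarised as a min-max problem over the specified attribute space; the notation below is ours and only meant to make the setup explicit.

```latex
% T(x, delta) applies an attribute-parameterised transformation (e.g. a rotation
% by delta degrees) and A is the a-priori specified attribute range.
\min_{\theta}\;
  \mathbb{E}_{(x,y)\sim \mathcal{D}_{\mathrm{train}}}\;
  \max_{\delta \in \mathcal{A}}\;
  \mathcal{L}\bigl(f_{\theta}(T(x,\delta)),\, y\bigr)
```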
