Attestation is a fundamental building block for establishing trust in software systems. When used in conjunction with trusted execution environments, it guarantees that genuine code is executed even when facing strong attackers, paving the way for adoption in several sensitive application domains. This paper reviews existing remote attestation principles and compares the functionalities of current trusted execution environments such as Intel SGX, Arm TrustZone, and AMD SEV, as well as emerging RISC-V solutions.
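To make the remote attestation principle concrete, the following is a minimal sketch of the core flow: the attester measures its code, binds the measurement to a verifier-supplied nonce, and the verifier checks the evidence against a known-good measurement. Real TEEs such as SGX, TrustZone, and SEV use hardware-rooted asymmetric keys and certificate chains; the shared HMAC key here is a deliberate simplification, and all names are illustrative.

```python
# Minimal remote-attestation sketch: measurement + nonce-bound evidence + verification.
# The shared HMAC key stands in for a hardware-protected attestation key.
import hashlib, hmac, os

ATTESTATION_KEY = os.urandom(32)   # stand-in for a hardware-rooted key

def measure(code: bytes) -> bytes:
    """Measurement = cryptographic hash of the loaded code."""
    return hashlib.sha256(code).digest()

def attest(code: bytes, nonce: bytes) -> tuple[bytes, bytes]:
    """Attester: produce (measurement, evidence) bound to the verifier's nonce."""
    m = measure(code)
    evidence = hmac.new(ATTESTATION_KEY, m + nonce, hashlib.sha256).digest()
    return m, evidence

def verify(measurement: bytes, evidence: bytes, nonce: bytes,
           expected_measurement: bytes) -> bool:
    """Verifier: check authenticity/freshness of the evidence and the measurement value."""
    expected_evidence = hmac.new(ATTESTATION_KEY, measurement + nonce,
                                 hashlib.sha256).digest()
    return (hmac.compare_digest(evidence, expected_evidence)
            and measurement == expected_measurement)

genuine_code = b"print('hello enclave')"
nonce = os.urandom(16)
m, ev = attest(genuine_code, nonce)
assert verify(m, ev, nonce, measure(genuine_code))                    # genuine code passes
assert not verify(measure(b"tampered"), ev, nonce, measure(genuine_code))  # tampering fails
```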
The ability to accurately predict human behavior is central to the safety and efficiency of robot autonomy in interactive settings. Unfortunately, robots often lack access to key information on which these predictions may hinge, such as people's goals, attention, and willingness to cooperate. Dual control theory addresses this challenge by treating unknown parameters of a predictive model as stochastic hidden states and inferring their values at runtime using information gathered during system operation. While able to optimally and automatically trade off exploration and exploitation, dual control is computationally intractable for general interactive motion planning, mainly due to the fundamental coupling between robot trajectory optimization and human intent inference. In this paper, we present a novel algorithmic approach to enable active uncertainty reduction for interactive motion planning based on the implicit dual control paradigm. Our approach relies on sampling-based approximation of stochastic dynamic programming, leading to a model predictive control problem that can be readily solved by real-time gradient-based optimization methods. The resulting policy is shown to preserve the dual control effect for a broad class of predictive human models with both continuous and categorical uncertainty. The efficacy of our approach is demonstrated with simulated driving examples.
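The following is a minimal sketch, not the paper's algorithm, of the sampling-based approximation underlying the model predictive control formulation: the human's categorical intent is sampled from the current belief, and a single robot control sequence is optimized against the expectation over the sampled intent hypotheses. The within-horizon belief updates that produce the full implicit dual control effect are omitted for brevity, and all model parameters are hypothetical.

```python
# Scenario-based MPC sketch: optimize one control sequence against an
# expectation over sampled categorical human-intent hypotheses.
import numpy as np
from scipy.optimize import minimize

H, dt = 10, 0.2                       # horizon length and time step
belief = np.array([0.6, 0.4])         # P(human brakes), P(human keeps speed)
human_accels = [-2.0, 0.0]            # human acceleration under each intent
x_robot0, v_robot0 = 0.0, 10.0        # robot position / speed
x_human0, v_human0 = 12.0, 8.0        # human position / speed (ahead of robot)
v_des, d_safe = 12.0, 5.0             # desired speed and safety gap

def scenario_cost(u, a_human):
    """Cost of one intent hypothesis under robot accelerations u."""
    xr, vr, xh, vh, cost = x_robot0, v_robot0, x_human0, v_human0, 0.0
    for k in range(H):
        vr, xr = vr + u[k] * dt, xr + vr * dt
        vh, xh = max(vh + a_human * dt, 0.0), xh + vh * dt
        gap = xh - xr
        cost += (vr - v_des) ** 2 + 0.1 * u[k] ** 2
        cost += 100.0 * max(d_safe - gap, 0.0) ** 2    # soft collision penalty
    return cost

def expected_cost(u):
    return sum(p * scenario_cost(u, a) for p, a in zip(belief, human_accels))

res = minimize(expected_cost, np.zeros(H), method="L-BFGS-B",
               bounds=[(-3.0, 3.0)] * H)
print("first planned acceleration:", res.x[0])
```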
Introduction: Systems deployed in hospital or clinic settings are capable of providing services in the physical environment. These systems (e.g., Picture Archiving and Communication Systems) provide remote services for patients. Designing such systems requires structured approaches such as the software development life cycle and techniques such as prototyping. Clinical setting: This study designs an image exchange system for the private dental sector of Urmia city using user-centered methods and prototyping. Methods: Information was collected at each stage of the software development life cycle. Interviews and observations were used to gather data on user needs, and object-oriented programming was used to develop a prototype. Results: The users' needs were determined at the outset. Ease of use, security, and mobile apps were their most essential needs. The prototype was then designed and evaluated in focus group sessions; these steps continued until the users in the focus group were satisfied. Eventually, after the users' approval, the prototype became the final system. Discussion: Instant access to information, volunteering, user interface design, and usefulness were the most critical variables users considered. A further advantage of this system is reduced radiation exposure for patients, since images are no longer lost or misplaced and therefore do not need to be retaken. Conclusion: The success of such a system requires identifying end-users' needs and incorporating them into the system. Alongside such a system, an electronic health record can improve the treatment process and the work of the medical staff.
The reconfigurable intelligent surface (RIS) can effectively control the wavefront of impinging signals and has emerged as a cost-effective and promising solution to improve the spectrum and energy efficiency of wireless systems. Most existing research on RIS assumes that the hardware operations are perfect. However, both the physical transceiver and the RIS suffer from inevitable hardware impairments in practice, which can lead to severe system performance degradation and increase the complexity of beamforming optimization. Consequently, existing research on RIS, including channel estimation, beamforming optimization, and spectrum and energy efficiency analysis, cannot be directly applied to the case of hardware impairments. In this paper, taking hardware impairments into consideration, we conduct joint transmit and reflect beamforming optimization and reevaluate the system performance. First, we characterize the closed-form estimators of the direct and cascaded channels in both the single-user and multi-user cases, and analyze the impact of hardware impairments on channel estimation accuracy. Then, the optimal transmit beamforming solution is derived, and a gradient descent-based algorithm is proposed to optimize the reflect beamforming. Moreover, we analyze three types of asymptotic channel capacity, with respect to the transmit power, the number of antennas, and the number of reflecting elements. Finally, in terms of system energy consumption, we analyze the power scaling law and the energy efficiency. Our experimental results also reveal an encouraging phenomenon: an RIS-assisted wireless system with massive reflecting elements can achieve both high spectrum and energy efficiency without the need for massive antennas and without allocating excessive resources to optimizing the reflect beamforming.
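As a minimal, ideal-hardware illustration (not the paper's impairment-aware estimators or algorithms), the sketch below shows the two building blocks named in the abstract for a single-antenna link: least-squares estimation of the direct and cascaded channels from DFT-patterned RIS training, followed by gradient ascent on the reflect phase shifts. All parameters are hypothetical.

```python
# LS channel estimation with DFT reflection patterns, then gradient ascent on phases.
import numpy as np

rng = np.random.default_rng(0)
N = 16                                        # number of RIS elements
h_d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)   # direct channel
h_c = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2) # cascaded channel

# --- Channel estimation: T = N+1 pilots with DFT reflection patterns ---
F = np.exp(-2j * np.pi * np.outer(np.arange(N + 1), np.arange(N + 1)) / (N + 1))
noise = 0.01 * (rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1))
y = F @ np.concatenate(([h_d], h_c)) + noise   # pilot symbol s = 1
h_hat = np.linalg.solve(F, y)                  # closed-form LS estimate
h_d_hat, h_c_hat = h_hat[0], h_hat[1:]

# --- Reflect beamforming: gradient ascent on the phase shifts ---
theta = np.zeros(N)
for _ in range(200):
    g = h_d_hat + np.exp(1j * theta) @ h_c_hat           # effective channel
    grad = -2 * np.imag(np.conj(g) * np.exp(1j * theta) * h_c_hat)
    theta += 0.05 * grad                                  # ascend |g|^2

gain = abs(h_d_hat + np.exp(1j * theta) @ h_c_hat) ** 2 / abs(h_d_hat) ** 2
print(f"effective-channel power gain over the direct link: {gain:.1f}x")
```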
In many life science experiments or medical studies, subjects are repeatedly observed and measurements are collected in factorial designs with multivariate data. The analysis of such multivariate data is typically based on multivariate analysis of variance (MANOVA) or mixed models, which require complete data and certain assumptions on the underlying parametric distribution, such as continuity or a specific covariance structure, e.g., compound symmetry. However, these methods are usually not applicable when discrete or even ordered categorical data are present. In such cases, nonparametric rank-based methods that do not require stringent distributional assumptions are the preferred choice. However, in the multivariate case, most rank-based approaches have only been developed for complete observations. The aim of this work is to develop asymptotically correct procedures that can handle missing values, allow for singular covariance matrices, and are applicable to ordinal or ordered categorical data. This is achieved by applying a wild bootstrap procedure in combination with quadratic form-type test statistics. Beyond proving their asymptotic correctness, extensive simulation studies validate their applicability for small samples. Finally, two real data examples are analyzed.
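The following is a simplified illustrative sketch, not the paper's full procedure: a two-sample multivariate rank-based comparison in which a quadratic-form statistic is calibrated with a wild bootstrap. Missing values and the general factorial designs treated in the paper are omitted; component-wise mid-ranks handle ties and ordered categories, and the pseudoinverse allows singular empirical covariance matrices. The data are synthetic.

```python
# Wild bootstrap for a quadratic-form statistic built from component-wise mid-ranks.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(30, 3)).astype(float)      # ordinal scores, group 1
Y = rng.integers(1, 6, size=(25, 3)).astype(float) + 1  # ordinal scores, group 2
n1, n2 = len(X), len(Y)

# Pooled component-wise mid-ranks (ties get average ranks).
R = np.column_stack([rankdata(col, method="average")
                     for col in np.vstack([X, Y]).T])
R1, R2 = R[:n1], R[n1:]

def quad_stat(r1, r2):
    """Quadratic form in the difference of mean rank vectors."""
    d = r1.mean(axis=0) - r2.mean(axis=0)
    S = np.cov(r1, rowvar=False) / n1 + np.cov(r2, rowvar=False) / n2
    return d @ np.linalg.pinv(S) @ d                     # pinv tolerates singular S

T_obs = quad_stat(R1, R2)

# Wild bootstrap: re-randomize the signs of the centered rank vectors.
B, count = 2000, 0
C1, C2 = R1 - R1.mean(axis=0), R2 - R2.mean(axis=0)
for _ in range(B):
    s1 = rng.choice([-1.0, 1.0], size=(n1, 1))           # Rademacher weights
    s2 = rng.choice([-1.0, 1.0], size=(n2, 1))
    count += quad_stat(s1 * C1, s2 * C2) >= T_obs
print("wild-bootstrap p-value:", count / B)
```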
The CUR decomposition is a technique for low-rank approximation that selects small subsets of the columns and rows of a given matrix to use as bases for its column and row spaces. It has recently attracted much interest, as it has several advantages over traditional low-rank decompositions based on orthonormal bases. These include the preservation of properties such as sparsity or non-negativity, the ability to interpret data, and reduced storage requirements. The problem of finding the skeleton sets that minimize the norm of the residual error is known to be NP-hard, but classical pivoting schemes such as column-pivoted QR tend to work well in practice. When combined with randomized dimension reduction techniques, classical pivoting-based methods become particularly effective, and have proven capable of very rapidly computing approximate CUR decompositions of large, potentially sparse, matrices. Another class of popular algorithms for computing CUR decompositions is based on drawing the columns and rows randomly from the full index sets, using specialized probability distributions based on leverage scores. Such sampling-based techniques are particularly appealing for very large-scale problems, and are well supported by theoretical performance guarantees. This manuscript provides a comparative study of the various randomized algorithms for computing CUR decompositions that have recently been proposed. Additionally, it proposes some modifications and simplifications to the existing algorithms that lead to faster execution times.
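As a minimal sketch in the spirit of the pivoting-based randomized methods discussed above (details such as oversampling and extra orthonormalization passes vary between the published algorithms): Gaussian sketches reduce the dimension, column-pivoted QR on the sketches selects the column and row skeletons, and the middle factor is obtained from pseudoinverses.

```python
# Randomized CUR: sketch, pivoted QR for skeleton selection, pseudoinverse middle factor.
import numpy as np
from scipy.linalg import qr

def randomized_cur(A, k, rng=np.random.default_rng(0)):
    m, n = A.shape
    # Column selection: pivoted QR on a row-space sketch of A.
    X = rng.standard_normal((k, m)) @ A                  # k x n sketch
    _, _, col_piv = qr(X, pivoting=True, mode="economic")
    J = col_piv[:k]
    # Row selection: pivoted QR on a column-space sketch of A (transposed).
    Y = A @ rng.standard_normal((n, k))                  # m x k sketch
    _, _, row_piv = qr(Y.T, pivoting=True, mode="economic")
    I = row_piv[:k]
    C, R = A[:, J], A[I, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)        # Frobenius-optimal middle factor
    return C, U, R, J, I

# Usage on a synthetic matrix with rapidly decaying singular values.
rng = np.random.default_rng(1)
A = (rng.standard_normal((300, 200)) * (0.5 ** np.arange(200))) @ rng.standard_normal((200, 200))
C, U, R, J, I = randomized_cur(A, k=20)
print("relative CUR error with k=20:", np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))
```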
In data-driven grasping, closed-loop feedback and the prediction of 6-degree-of-freedom (DoF) grasps, rather than the conventionally used 4-DoF top-down grasps, have each been shown to improve performance individually, yet few systems combine both. Moreover, the sequential nature of the task is rarely addressed, even though the approach motion necessarily generates a series of observations. Therefore, this paper synthesizes three approaches and proposes a closed-loop framework that predicts 6-DoF grasps in a heavily cluttered environment from continuously received visual observations. This is realized by formulating the grasping problem as a hidden Markov model and applying a particle filter to infer the grasp. Additionally, we introduce a novel lightweight convolutional neural network (CNN) model that evaluates and initializes grasp samples in real time, making the particle filtering process feasible. Experiments conducted on a real robot in a heavily cluttered environment show that our framework not only quantitatively improves the grasp success rate significantly compared to the baseline algorithms, but also qualitatively reacts to dynamic changes in the environment and cleans up the table.
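The following is a minimal sketch of the particle-filter inference loop described above, reduced to a planar (x, y, yaw) grasp for brevity and with the learned CNN grasp evaluator replaced by a synthetic scoring function; all constants are hypothetical placeholders rather than values from the paper.

```python
# Particle-filter grasp inference: diffuse, weight by an evaluator, resample on low ESS.
import numpy as np

rng = np.random.default_rng(0)
TRUE_GRASP = np.array([0.40, 0.10, 0.6])        # unknown best grasp (x, y, yaw)

def grasp_score(particles, t):
    """Placeholder for the CNN evaluator run on the observation at time t:
    higher score for particles close to the (slowly drifting) best grasp."""
    target = TRUE_GRASP + 0.01 * t * np.array([1.0, 0.0, 0.0])   # scene changes over time
    d2 = np.sum((particles - target) ** 2, axis=1)
    return np.exp(-d2 / 0.1)

# Initialization (in the framework this would come from the grasp-sampling network).
N = 500
particles = rng.uniform([-0.5, -0.5, -np.pi], [0.5, 0.5, np.pi], size=(N, 3))
weights = np.full(N, 1.0 / N)

for t in range(20):                              # one iteration per new observation
    particles += rng.normal(scale=[0.01, 0.01, 0.05], size=(N, 3))  # process diffusion
    weights *= grasp_score(particles, t)         # measurement update
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < N / 2:       # resample when ESS drops
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

best = np.average(particles, axis=0, weights=weights)
print("inferred grasp (x, y, yaw):", np.round(best, 3))
```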
We consider the problem of controlling an unknown linear dynamical system under adversarially changing convex costs and full feedback of both the state and cost function. We present the first computationally-efficient algorithm that attains an optimal $\smash{\sqrt{T}}$-regret rate compared to the best stabilizing linear controller in hindsight, while avoiding stringent assumptions on the costs such as strong convexity. Our approach is based on a careful design of non-convex lower confidence bounds for the online costs, and uses a novel technique for computationally-efficient regret minimization of these bounds that leverages their particular non-convex structure.
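To make the comparator in the regret bound concrete, the following is a small illustrative sketch (not the paper's algorithm): it evaluates, in hindsight, the cumulative cost of fixed stabilizing linear state-feedback gains for a known scalar system under adversarially changing convex costs. The paper's contribution, an efficient algorithm achieving sqrt(T) regret for an unknown system via non-convex lower confidence bounds, is not reproduced here; all quantities below are hypothetical.

```python
# Best stabilizing linear controller in hindsight for a scalar system under time-varying costs.
import numpy as np

rng = np.random.default_rng(0)
A, B, T = 0.9, 1.0, 200                        # scalar dynamics x' = A x + B u + w
w = 0.1 * rng.standard_normal(T)               # disturbances
targets = np.sign(np.sin(0.1 * np.arange(T)))  # adversarially moving cost centers

def cumulative_cost(K):
    """Run u_t = -K x_t and sum the time-varying convex costs (x_t - target_t)^2 + u_t^2."""
    x, total = 0.0, 0.0
    for t in range(T):
        u = -K * x
        total += (x - targets[t]) ** 2 + u ** 2
        x = A * x + B * u + w[t]
    return total

grid = np.linspace(0.0, 1.8, 200)
stabilizing = grid[np.abs(A - B * grid) < 1.0]  # gains with a stable closed loop
costs = np.array([cumulative_cost(K) for K in stabilizing])
K_star = stabilizing[np.argmin(costs)]
print(f"best fixed gain in hindsight: K* = {K_star:.2f}, cost = {costs.min():.1f}")
# An online algorithm's regret is its own cumulative cost minus costs.min().
```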
We leverage the Neural Tangent Kernel and its equivalence to training infinitely-wide neural networks to devise $\infty$-AE: an autoencoder with infinitely-wide bottleneck layers. The outcome is a highly expressive yet simple recommendation model with a single hyper-parameter and a closed-form solution. Leveraging $\infty$-AE's simplicity, we also develop Distill-CF for synthesizing tiny, high-fidelity data summaries that distill the most important knowledge from the extremely large and sparse user-item interaction matrix for efficient and accurate downstream use, such as model training, inference, and architecture search. This takes a data-centric approach to recommendation, where we aim to improve the quality of logged user-feedback data for subsequent modeling, independent of the learning algorithm. We particularly utilize the concept of differentiable Gumbel sampling to handle the inherent data heterogeneity, sparsity, and semi-structuredness, while remaining scalable to datasets with hundreds of millions of user-item interactions. Both of our proposed approaches significantly outperform their respective state-of-the-art baselines, and when used together, we observe 96-105% of $\infty$-AE's performance on the full dataset with as little as 0.1% of the original dataset size, leading us to explore the counter-intuitive question: Is more data what you need for better recommendation?
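The following is a minimal sketch of the closed-form structure behind an infinitely-wide autoencoder: kernel ridge regression that reconstructs the user-item interaction matrix as $\hat{X} = K(K + \lambda I)^{-1}X$, with a single regularization hyper-parameter. A simple normalized linear kernel stands in for the actual Neural Tangent Kernel here, and the data are synthetic.

```python
# Closed-form kernelized reconstruction of a user-item interaction matrix.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 200, 50
X = (rng.random((n_users, n_items)) < 0.05).astype(float)    # sparse implicit feedback

# Kernel between users, computed from their interaction rows (placeholder for the NTK).
norms = np.linalg.norm(X, axis=1, keepdims=True) + 1e-8
K = (X / norms) @ (X / norms).T

lam = 0.1                                                     # the single hyper-parameter
scores = K @ np.linalg.solve(K + lam * np.eye(n_users), X)    # closed-form solution

# Recommend: rank the items a user has not interacted with by reconstructed score.
user = 0
unseen = np.where(X[user] == 0)[0]
top5 = unseen[np.argsort(-scores[user, unseen])[:5]]
print("top-5 recommended items for user 0:", top5)
```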
In the past few decades, artificial intelligence (AI) technology has experienced swift development, changing everyone's daily life and profoundly altering the course of human society. The intention behind developing AI is to benefit humans by reducing human labor, bringing everyday convenience to human lives, and promoting social good. However, recent research and AI applications show that AI can cause unintentional harm to humans, such as making unreliable decisions in safety-critical scenarios or undermining fairness by inadvertently discriminating against a group. Thus, trustworthy AI has recently attracted immense attention: it requires careful consideration to avoid the adverse effects that AI may bring to humans, so that humans can fully trust and live in harmony with AI technologies. Recent years have witnessed a tremendous amount of research on trustworthy AI. In this survey, we present a comprehensive overview of trustworthy AI from a computational perspective, to help readers understand the latest technologies for achieving it. Trustworthy AI is a large and complex area involving various dimensions. In this work, we focus on six of the most crucial dimensions for achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being. For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems. We also discuss the complementary and conflicting interactions among different dimensions and identify aspects of trustworthy AI that warrant future investigation.
Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, looking for explanations that consider trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of the explanation literature in Artificial Intelligence and closely related fields and use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for this style of explanation. We believe this set of explanation types will help future system designers in generating and prioritizing requirements and further help generate explanations that are better aligned with users' and situational needs.