The introduction of novel technology has often changed the concept of ownership. Non-fungible tokens are a recent example: they provide a decentralized way to generate and verify proof of ownership via distributed ledger technology. Despite crucial uncertainties, these tokens have generated great enthusiasm for the future of digital property and its surrounding economy. In this regard, I see an untapped opportunity in applying a hypertext approach to augment such highly structured ownership-based associations. To this end, in this work I propose hyperownership, based on the premises that property is the law of lists and ledgers, and that hypertext is an apt method to inquire into such a ledger system. In spite of the significant risks and challenges in realizing such a vision, I believe it has great potential to transform the way we interact with digital property.
Digital twins have recently gained significant interest in the simulation, optimization, and predictive maintenance of Industrial Control Systems (ICS). Recent studies discuss the possibility of using digital twins for intrusion detection in industrial systems. Accordingly, this study contributes a digital twin-based security framework for industrial control systems, extending its capabilities to the simulation of attacks and defense mechanisms. Four types of process-aware attack scenarios are implemented on a standalone open-source digital twin of an industrial filling plant: command injection, network Denial of Service (DoS), calculated measurement modification, and naive measurement modification. A stacked ensemble classifier is proposed as the real-time intrusion detection mechanism, based on the offline evaluation of eight supervised machine learning algorithms. By combining the predictions of several algorithms, the designed stacked model outperforms previous methods in terms of F1-score and accuracy, and it can detect and classify intrusions in near real time (0.1 seconds). This study also discusses the practicality and benefits of the proposed digital twin-based security framework.
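A stacked ensemble of this kind can be sketched with scikit-learn. The base learners, synthetic features, and five-class labeling below are illustrative assumptions standing in for the framework's actual process-aware features and attack classes:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for process-aware network/sensor features; the five
# classes model normal traffic plus the four attack scenarios.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners whose per-class predictions are combined by a meta-learner.
base = [("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("svm", SVC(probability=True, random_state=0))]
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
print(f"held-out accuracy: {stack.score(X_te, y_te):.3f}")
```

In a deployed detector, the fitted `stack` would score each incoming window of sensor/network features and map the predicted class to an alert type.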
Enhancing existing transmission lines is a useful tool to combat transmission congestion and guarantee transmission security under growing demand and increasing penetration of renewable energy sources. This study concerns the selection of lines whose capacity should be expanded, and by how much, from the perspective of an independent system operator (ISO), so as to minimize the system cost while accounting for transmission line constraints and electricity generation-demand balance conditions, and incorporating ramp-up and startup ramp rates, shutdown and ramp-down rate limits, and minimum up and down times. For that purpose, we develop the ISO unit commitment and economic dispatch model and cast it as a right-hand-side uncertainty multiple-parametric analysis of a mixed-integer linear programming (MILP) problem. We first relax the binary variables to continuous variables and employ the Lagrange method and the Karush-Kuhn-Tucker conditions to obtain optimal solutions (optimal decision variables and objective function) and the critical regions associated with active and inactive constraints. Further, we extend the traditional branch-and-bound method to the large-scale MILP problem by determining the upper bound of the problem at each node, then comparing the difference between the upper and lower bounds to reach an approximate optimal solution within the decision makers' tolerated error range. In addition, the first derivative of the objective function with respect to the parameters of each line is used to inform the selection of lines to ease congestion and maximize social welfare. Finally, the amount of capacity upgrade is chosen by balancing the rate at which the objective function decreases with the line parameters against the cost of the line upgrade. Our findings are supported by numerical simulation and provide transmission line planners with decision-making guidance.
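The sensitivity-guided line selection can be illustrated on a toy dispatch problem. The two-generator system, its costs, and the single line limit below are invented for illustration; the study's actual model is a full unit-commitment MILP:

```python
from scipy.optimize import linprog

# Toy economic dispatch: two generators (costs 10 and 30 $/MWh) must serve
# 100 MW of demand; cheap generator 1 is limited by a line capacity `cap`.
def system_cost(cap):
    # min 10*g1 + 30*g2  s.t.  g1 + g2 = 100,  g1 <= cap,  g1, g2 >= 0
    res = linprog(c=[10, 30], A_ub=[[1, 0]], b_ub=[cap],
                  A_eq=[[1, 1]], b_eq=[100], bounds=[(0, None)] * 2)
    return res.fun

# Numerical derivative of the system cost w.r.t. the line capacity: a large
# negative value marks a congested line whose upgrade reduces cost fastest.
cap, eps = 60.0, 1.0
sensitivity = (system_cost(cap + eps) - system_cost(cap)) / eps
print(system_cost(cap), sensitivity)  # cost ≈ 1800 $, sensitivity ≈ -20 $/MW
```

The upgrade decision then weighs this cost-reduction rate (here about 20 $ per MW of added capacity, valid within the current critical region) against the per-MW cost of reinforcing the line.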
Representation learning has in recent years been addressed with self-supervised learning methods: the input data is augmented into two distorted views, and an encoder learns representations that are invariant to the distortions -- cross-view prediction. Augmentation is one of the key components in cross-view self-supervised learning frameworks for learning visual representations. This paper presents ExAgt, a novel method that incorporates expert knowledge to augment traffic scenarios and improve the learnt representations without any human annotation. The expert-guided augmentations are generated automatically based on the infrastructure, the interactions between the EGO vehicle and the other traffic participants, and an ideal sensor model. The ExAgt method is applied in two state-of-the-art cross-view prediction methods, and the representations learnt are tested in downstream tasks like classification and clustering. Results show that ExAgt improves representation learning compared with using only standard augmentations, and that it yields a more stable representation space. The code is available at \url{//github.com/lab176344/ExAgt}.
Let $D$ be a digraph. A stable set $S$ of $D$ and a path partition $\mathcal{P}$ of $D$ are orthogonal if every path $P \in \mathcal{P}$ contains exactly one vertex of $S$. In 1982, Berge defined the class of $\alpha$-diperfect digraphs. A digraph $D$ is $\alpha$-diperfect if for every maximum stable set $S$ of $D$ there is a path partition $\mathcal{P}$ of $D$ orthogonal to $S$ and this property holds for every induced subdigraph of $D$. An anti-directed odd cycle is an orientation of an odd cycle $(x_0,\ldots,x_{2k},x_0)$ with $k\geq2$ in which each vertex $x_0,x_1,\ldots,x_{2k-1}$ is either a source or a sink. Berge conjectured that a digraph $D$ is $\alpha$-diperfect if and only if $D$ does not contain an anti-directed odd cycle as an induced subdigraph. In this paper, we show that this conjecture is false by exhibiting an infinite family of orientations of complements of odd cycles with at least seven vertices that are not $\alpha$-diperfect.
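The source/sink condition defining an anti-directed odd cycle can be checked mechanically. A small sketch, where the encoding of the orientation as a set of arc pairs is our own convention:

```python
def is_anti_directed(n, arcs):
    """Check whether an orientation of the odd cycle (x_0,...,x_{n-1},x_0)
    is anti-directed: each of x_0,...,x_{n-2} is a source or a sink
    (x_{n-1} is exempt, since an odd cycle cannot alternate perfectly).
    `arcs` is a set of directed pairs (u, v), one per cycle edge."""
    assert n % 2 == 1 and n >= 5          # n = 2k+1 with k >= 2
    for v in range(n - 1):
        prev, nxt = (v - 1) % n, (v + 1) % n
        out_deg = ((v, prev) in arcs) + ((v, nxt) in arcs)
        if out_deg == 1:                   # neither source (2) nor sink (0)
            return False
    return True

# Orientation of a 5-cycle where x_0, x_2 are sources and x_1, x_3 are sinks:
arcs = {(0, 1), (2, 1), (2, 3), (4, 3), (0, 4)}
print(is_anti_directed(5, arcs))  # True
```

By contrast, the directed 5-cycle `{(0,1), (1,2), (2,3), (3,4), (4,0)}` fails the test, since every vertex has one arc in and one arc out.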
Digital communication is often brisk and automated. From auto-completed messages to "likes," research has shown that such lightweight interactions can affect perceptions of authenticity and closeness. On the other hand, effort in relationships can forge emotional bonds by conveying a sense of caring and is essential in building and maintaining relationships. To explore effortful communication, we designed and evaluated Auggie, an iOS app that encourages partners to create digitally handcrafted Augmented Reality (AR) experiences for each other. Auggie is centered around crafting a 3D character with photos, animated movements, drawings, and audio for someone else. We conducted a two-week-long field study with 30 participants (15 pairs), who used Auggie with their partners remotely. Our qualitative findings show that Auggie participants engaged in meaningful effort through the handcrafting process, and felt closer to their partners, although the tool may not be appropriate in all situations. We discuss design implications and future directions for systems that encourage effortful communication.
Unlike traditional media, social media typically provides quantified metrics of how many users have engaged with each piece of content. Some have argued that the presence of these cues promotes the spread of misinformation. Here we investigate the causal effect of social cues on users' engagement with social media posts. We conducted an experiment with N=628 Americans on a custom-built newsfeed interface where we systematically varied the presence and strength of social cues. We find that when cues are shown, indicating that a larger number of others have engaged with a post, users were more likely to share and like that post. Furthermore, relative to a control without social cues, the presence of social cues increased the sharing of true relative to false news. The presence of social cues also made it more difficult to precisely predict how popular any given post would be. Together, our results suggest that -- instead of distracting users or causing them to share low-quality news -- social cues may, in certain circumstances, actually boost truth discernment and reduce the sharing of misinformation. Our work suggests that social cues play an important role in shaping users' attention and engagement on social media, and platforms should understand the effects of different cues before making changes to what cues are displayed and how.
Along with the massive growth of the Internet from the 1990s until now, various innovative technologies have been created to bring users breathtaking experiences with more virtual interactions in cyberspace. Many virtual environments with thousands of services and applications, from social networks to virtual gaming worlds, have been developed with immersive experiences and digital transformation, but most remain incoherent rather than integrated into one platform. In this context, the metaverse, a term formed by combining meta and universe, has been introduced as a shared virtual world fueled by many emerging technologies, such as fifth-generation networks and beyond, virtual reality, and artificial intelligence (AI). Among these technologies, AI has shown great importance in processing big data to enhance immersive experiences and enable human-like intelligence in virtual agents. In this survey, we explore the role of AI in the foundation and development of the metaverse. We first provide preliminaries on AI, including machine learning algorithms and deep learning architectures, and its role in the metaverse. We then present a comprehensive investigation of AI-based methods across six technical aspects with potential for the metaverse: natural language processing, machine vision, blockchain, networking, digital twins, and neural interfaces. Subsequently, several AI-aided applications, such as healthcare, manufacturing, smart cities, and gaming, are studied with respect to their deployment in virtual worlds. Finally, we conclude with the key contributions of this survey and open future research directions for AI in the metaverse.
We present a novel counterfactual framework for both Zero-Shot Learning (ZSL) and Open-Set Recognition (OSR), whose common challenge is generalizing to unseen classes by training only on seen classes. Our idea stems from the observation that the generated samples for unseen classes are often out of the true distribution, which causes a severe recognition-rate imbalance between the seen classes (high) and unseen classes (low). We show that the key reason is that the generation is not counterfactual faithful, and we thus propose a faithful one, whose generation answers the sample-specific counterfactual question: what would the sample look like if we set its class attribute to a certain class while keeping its sample attribute unchanged? Thanks to this faithfulness, we can apply the Consistency Rule to perform unseen/seen binary classification by asking: would its counterfactual still look like itself? If ``yes'', the sample is from that class, and ``no'' otherwise. Through extensive experiments on ZSL and OSR, we demonstrate that our framework effectively mitigates the seen/unseen imbalance and hence significantly improves the overall performance. Note that this framework is orthogonal to existing methods and can thus serve as a new baseline to evaluate how ZSL/OSR models generalize. Codes are available at //github.com/yue-zhongqi/gcm-cf.
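The Consistency Rule reduces to a distance test between a sample and its counterfactuals. A schematic sketch, in which the additive generator, the 2-D feature space, and the threshold are illustrative placeholders for the paper's trained generative model:

```python
import numpy as np

def counterfactual(z_sample, class_attr):
    """Placeholder generator: combines a sample attribute with a class
    attribute. A real model would be a trained conditional generator that
    is faithful to the data distribution."""
    return z_sample + class_attr

def is_seen(x, z_sample, seen_class_attrs, threshold=1.0):
    """Consistency Rule: x is labelled 'seen' iff some counterfactual,
    built from x's own sample attribute and a seen-class attribute,
    still looks like x itself (small distance)."""
    dists = [np.linalg.norm(x - counterfactual(z_sample, a))
             for a in seen_class_attrs]
    return min(dists) < threshold

# Toy check: a sample generated from a seen-class attribute is consistent;
# one generated from a far-away (unseen) attribute is not.
seen_attrs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
z = np.array([0.2, -0.1])             # sample attribute (style, pose, ...)
x_seen = z + seen_attrs[0]
x_unseen = z + np.array([3.0, 3.0])
print(is_seen(x_seen, z, seen_attrs), is_seen(x_unseen, z, seen_attrs))
# True False
```

Samples failing the test are routed to the unseen-class branch (ZSL) or rejected as open-set (OSR).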
Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity, and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predictions of performance metrics or other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies according to their target level of system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
Accurate and interpretable prediction of future events in time-series data often requires capturing the representative patterns (referred to as states) underpinning the observed data. To this end, most existing studies focus on the representation and recognition of states but ignore the changing transitional relations among them. In this paper, we present the evolutionary state graph, a dynamic graph structure designed to systematically represent the evolving relations (edges) among states (nodes) over time. We analyze the dynamic graphs constructed from time-series data and show that changes in the graph structure (e.g., edges connecting certain state nodes) can inform the occurrence of events (i.e., time-series fluctuations). Inspired by this, we propose a novel graph neural network model, the Evolutionary State Graph Network (EvoNet), that encodes the evolutionary state graph for accurate and interpretable time-series event prediction. Specifically, EvoNet models both node-level (state-to-state) and graph-level (segment-to-segment) propagation, and captures node-graph (state-to-segment) interactions over time. Experimental results on five real-world datasets show that our approach not only achieves clear improvements over 11 baselines but also provides more insight into explaining the results of event predictions.
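The construction of such a graph can be sketched as follows. The segment length, k-means state recognition, and window size are illustrative assumptions; EvoNet's actual state recognition and graph neural network are more involved:

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy series with a regime shift halfway through.
series = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])

# 1. Slice the series into fixed-length segments.
seg_len = 20
segments = series.reshape(-1, seg_len)          # 30 segments

# 2. Recognize states: cluster segments into K representative patterns.
K = 3
states = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(segments)

# 3. Build an evolutionary state graph per window of consecutive segments:
#    nodes are states, weighted edges count state-to-state transitions.
def state_graph(state_seq):
    return Counter(zip(state_seq[:-1], state_seq[1:]))

window = 10
graphs = [state_graph(states[i:i + window])
          for i in range(0, len(states) - window + 1, window)]
# Changes between successive graphs (new or vanishing edges) are the
# signal a model like EvoNet consumes to flag time-series events.
print(graphs)
```

Here the regime shift shows up as a change in which transition edges dominate between the first and last windows.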