
Smart Cities are happening everywhere around us, and yet they are still far from directly impacting everyday life. What needs to happen to make cities really smart? Digital Twins (DTs) represent their Physical Twin (PT) in the real world through models, sensed data, context awareness, and interactions. A Digital Twin of a city appears to offer the right combination to make the Smart City accessible and thus usable. However, without appropriate interfaces, the complexity of a city cannot be represented. Ultimately, fully leveraging the potential of Smart Cities requires going beyond the Digital Twin. Can this issue be addressed? I propose embedding the Digital Twin into the Physical Twin, i.e., Fused Twins. This fusion allows data to be accessed where it is generated, in a context that makes it easily understandable. The Fused Twins paradigm is the formalization of this vision. Prototypes of Fused Twins are appearing at breakneck speed across different domains, but Smart Cities will be the context where Fused Twins are predominantly seen in the future. This paper reviews Digital Twins to understand how Fused Twins can be constructed from Augmented Reality, Geographic Information Systems, Building/City Information Models, and Digital Twins, and provides an overview of current research and future directions.

Related content

A Smart City uses information technologies and innovative concepts to integrate a city's component systems and services, so as to improve the efficiency of resource use, optimize urban management and services, and raise residents' quality of life. Smart cities apply next-generation information technology across all sectors of the city, as an advanced form of urban informatization grounded in knowledge-society, next-generation innovation (Innovation 2.0). They deeply integrate informatization, industrialization, and urbanization, helping to relieve "big city diseases", raise the quality of urbanization, enable fine-grained and dynamic management, and improve both the effectiveness of city administration and residents' quality of life. Definitions of the smart city vary widely; the most broadly accepted international definition is that a smart city is a form of the city supported by next-generation information technology in a knowledge-society, Innovation 2.0 environment. It emphasizes that a smart city is not merely the application of next-generation information technologies such as the Internet of Things and cloud computing; more importantly, through the methodology of knowledge-society-oriented Innovation 2.0, it builds a sustainable urban innovation ecology characterized by user innovation, open innovation, mass innovation, and collaborative innovation.

We consider minimizing a smooth and strongly convex objective function using a stochastic Newton method. At each iteration, the algorithm is given an oracle access to a stochastic estimate of the Hessian matrix. The oracle model includes popular algorithms such as the Subsampled Newton and Newton Sketch, which can efficiently construct stochastic Hessian estimates for many tasks. Despite using second-order information, these existing methods do not exhibit superlinear convergence, unless the stochastic noise is gradually reduced to zero during the iteration, which would lead to a computational blow-up in the per-iteration cost. We address this limitation with Hessian averaging: instead of using the most recent Hessian estimate, our algorithm maintains an average of all past estimates. This reduces the stochastic noise while avoiding the computational blow-up. We show that this scheme enjoys local $Q$-superlinear convergence with a non-asymptotic rate of $(\Upsilon\sqrt{\log (t)/t}\,)^{t}$, where $\Upsilon$ is proportional to the level of stochastic noise in the Hessian oracle. A potential drawback of this (uniform averaging) approach is that the averaged estimates contain Hessian information from the global phase of the iteration, i.e., before the iterates converge to a local neighborhood. This leads to a distortion that may substantially delay the superlinear convergence until long after the local neighborhood is reached. To address this drawback, we study a number of weighted averaging schemes that assign larger weights to recent Hessians, so that the superlinear convergence arises sooner, albeit with a slightly slower rate. Remarkably, we show that there exists a universal weighted averaging scheme that transitions to local convergence at an optimal stage, and still enjoys a superlinear convergence rate nearly (up to a logarithmic factor) matching that of uniform Hessian averaging.
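The uniform-averaging idea can be sketched on a toy strongly convex quadratic. The dimensions, noise level, and oracle below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical strongly convex quadratic: f(x) = 0.5 x^T A x - b^T x
d = 5
A = np.diag(np.linspace(1.0, 10.0, d))  # true Hessian
b = rng.standard_normal(d)
x_star = np.linalg.solve(A, b)          # exact minimizer

def grad(x):
    return A @ x - b

def hessian_oracle():
    # Stochastic Hessian estimate: true Hessian plus symmetric noise
    noise = 0.1 * rng.standard_normal((d, d))
    return A + 0.5 * (noise + noise.T)

x = np.zeros(d)
H_avg = np.zeros((d, d))
for t in range(1, 51):
    # Uniform averaging of all past estimates:
    # H_avg_t = ((t - 1) * H_avg_{t-1} + H_t) / t
    H_avg += (hessian_oracle() - H_avg) / t
    x = x - np.linalg.solve(H_avg, grad(x))

print(np.linalg.norm(x - x_star))
```

The running average damps the oracle noise at a $1/\sqrt{t}$ rate without ever forming more than one Hessian estimate per iteration, which is the computational point of the scheme.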

Multilingual language models were shown to allow for nontrivial transfer across scripts and languages. In this work, we study the structure of the internal representations that enable this transfer. We focus on the representation of gender distinctions as a practical case study, and examine the extent to which the gender concept is encoded in shared subspaces across different languages. Our analysis shows that gender representations consist of several prominent components that are shared across languages, alongside language-specific components. The existence of language-independent and language-specific components provides an explanation for an intriguing empirical observation we make: while gender classification transfers well across languages, interventions for gender removal, trained on a single language, do not transfer easily to others.
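The kind of intervention studied here, removing a concept by projecting out an estimated direction, can be sketched on synthetic embeddings. The data and the mean-difference direction estimate below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
g = rng.standard_normal(d)
g /= np.linalg.norm(g)  # hidden "gender" direction baked into the data

# Synthetic embeddings: isotropic noise plus a label-dependent shift along g
labels = rng.integers(0, 2, size=200) * 2 - 1            # +1 / -1
X = rng.standard_normal((200, d)) + np.outer(labels, g)

# Estimate the concept direction as the difference of class means,
# then project every embedding onto its orthogonal complement
v = X[labels == 1].mean(axis=0) - X[labels == -1].mean(axis=0)
v /= np.linalg.norm(v)
P = np.eye(d) - np.outer(v, v)
X_clean = X @ P

# After removal, the two classes coincide along the removed direction
gap = abs((X_clean[labels == 1] @ v).mean() - (X_clean[labels == -1] @ v).mean())
print(gap)
```

Trained on one language, such a projection removes only that language's estimated direction, which is consistent with the paper's observation that removal interventions transfer poorly when part of the representation is language-specific.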

Multimodal AI advancements have presented people with powerful ways to create images from text. Recent work has shown that text-to-image generations are able to represent a broad range of subjects and artistic styles. However, translating text prompts into visual messages is difficult. In this paper, we address this challenge with Opal, a system that produces text-to-image generations for editorial illustration. Given an article text, Opal guides users through a structured search for visual concepts and provides pipelines allowing users to illustrate based on an article's tone, subjects, and intended illustration style. Our evaluation shows that Opal efficiently generates diverse sets of editorial illustrations, graphic assets, and concept ideas. Users with Opal were more efficient at generation and produced over two times more usable results than users without. We conclude with a discussion of how structured and rapid exploration can help users better understand the capabilities of human-AI co-creative systems.

To provide stronger protection against double-spending, we have implemented a system allowing for a web of trust. In this paper, we explore different approaches to preventing double-spending and implement our own version within TrustChain as part of the ecosystem of EuroToken, the digital version of the euro. We use the EVA protocol as a means of transferring data between users, building on the existing functionality for transferring money between users. This allows the sender of EuroTokens to leave recommendations about users based on their previous interactions. This dissemination of trust through the network allows users to make more trustworthy decisions. Although this provides an upgrade in terms of usability, the mathematical details of our implementation can be explored further in future research.

This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets, which unrealistically assume that each image should contain at least one clear and uncluttered salient object. This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets. However, these models are still far from satisfactory when applied to real-world scenes. Based on our analyses, we propose a new high-quality dataset and update the previous saliency benchmark. Specifically, our dataset, called Salient Objects in Clutter (SOC), includes images with both salient and non-salient objects from several common object categories. In addition to object category annotations, each salient image is accompanied by attributes that reflect common challenges in common scenes, which can help provide deeper insight into the SOD problem. Further, with a given saliency encoder, e.g., the backbone network, existing saliency models are designed to achieve mapping from the training image set to the training ground-truth set. We, therefore, argue that improving the dataset can yield higher performance gains than focusing only on the decoder design. With this in mind, we investigate several dataset-enhancement strategies, including label smoothing to implicitly emphasize salient boundaries, random image augmentation to adapt saliency models to various scenarios, and self-supervised learning as a regularization strategy to learn from small datasets. Our extensive results demonstrate the effectiveness of these tricks. We also provide a comprehensive benchmark for SOD, which can be found in our repository: https://github.com/DengPingFan/SODBenchmark.
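As a rough illustration of the label-smoothing strategy, hard 0/1 saliency labels can be softened toward eps/2 and 1 - eps/2 so that the loss never demands fully saturated predictions near boundaries. The exact scheme used in the paper may differ; the epsilon value and mask below are made up:

```python
import numpy as np

EPS = 0.1  # smoothing strength (illustrative)

def smooth_labels(mask, eps=EPS):
    """Map hard {0, 1} saliency labels to {eps/2, 1 - eps/2}."""
    return mask * (1.0 - eps) + eps / 2.0

# A tiny hypothetical binary saliency mask
mask = np.array([[0.0, 0.0, 1.0],
                 [0.0, 1.0, 1.0]])
print(smooth_labels(mask))
```

Softened targets keep gradients alive at pixels the hard labels would otherwise saturate, which is one way to make boundary regions matter more during training.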

We introduce the first algorithm for distributed decision-making that provably balances the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal on-board computation, communication, and memory resources. We are motivated by the future of autonomy that involves heterogeneous robots collaborating in complex~tasks, such as image covering, target tracking, and area monitoring. Current algorithms, such as consensus algorithms, are insufficient to fulfill this future: they achieve distributed communication only, at the expense of high communication, computation, and memory overloads. A shift to resource-aware algorithms is needed, that can account for each robot's on-board resources, independently. We provide the first resource-aware algorithm, Resource-Aware distributed Greedy (RAG). We focus on maximization problems involving monotone and "doubly" submodular functions, a diminishing returns property. RAG has near-minimal on-board resource requirements. Each agent can afford to run the algorithm by adjusting the size of its neighborhood, even if that means selecting actions in complete isolation. RAG has provable approximation performance, where each agent can independently determine its contribution. All in all, RAG is the first algorithm to quantify the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal on-board resource requirements. To capture the trade-off, we introduce the notion of Centralization Of Information among non-Neighbors (COIN). We validate RAG in simulated scenarios of image covering with mobile robots.
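The diminishing-returns structure RAG builds on can be sketched with the classic centralized greedy for monotone submodular maximization, shown here on a set-cover objective. This is not the RAG algorithm itself, which distributes the selection across agents with bounded neighborhoods; the candidate view sets below are made up:

```python
def greedy_cover(candidate_views, k):
    """Greedily pick up to k views maximizing the number of covered cells."""
    covered, chosen = set(), []
    for _ in range(k):
        # Marginal gain of each view shrinks as coverage grows: submodularity
        best = max(candidate_views, key=lambda v: len(set(v) - covered))
        if not set(best) - covered:
            break  # no view adds anything new
        chosen.append(best)
        covered |= set(best)
    return chosen, covered

# Hypothetical camera views, each covering a set of grid cells
views = [(1, 2, 3), (3, 4), (4, 5, 6), (1, 6)]
chosen, covered = greedy_cover(views, k=2)
print(chosen, covered)
```

For monotone submodular objectives, this greedy enjoys the well-known (1 - 1/e) approximation guarantee; resource-aware variants trade some of that centralized guarantee for bounded per-agent communication and memory.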

A digital twin contains up-to-date data-driven models of the physical world being studied and can use simulation to optimise the physical world. However, the analysis made by the digital twin is valid and reliable only when the model is equivalent to the physical world. Maintaining such an equivalent model is challenging, especially when the physical systems being modelled are intelligent and autonomous. The paper focuses in particular on digital twin models of intelligent systems where the systems are knowledge-aware but with limited capability. The digital twin improves the acting of the physical system at a meta-level by accumulating more knowledge in the simulated environment. The modelling of such an intelligent physical system requires replicating the knowledge-awareness capability in the virtual space. Novel equivalence maintaining techniques are needed, especially in synchronising the knowledge between the model and the physical system. This paper proposes the notion of knowledge equivalence and an equivalence maintaining approach based on knowledge comparison and updates. A quantitative analysis of the proposed approach confirms that, compared to state equivalence, knowledge-equivalence maintenance can tolerate deviations, thus reducing unnecessary updates, and achieves more Pareto-efficient solutions for the trade-off between update overhead and simulation reliability.
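The contrast between state equivalence and knowledge equivalence can be sketched as two different synchronisation triggers. The knowledge encoding (a plain dict) and the tolerance are illustrative assumptions, not the paper's formalism:

```python
def needs_update_state(model_state, physical_state, tol=0.0):
    # State equivalence: any numeric deviation beyond tol triggers a sync
    return abs(model_state - physical_state) > tol

def needs_update_knowledge(model_kb, physical_kb):
    # Knowledge equivalence: sync only when the knowledge the system
    # acts on (rules, facts) actually differs between twin and reality
    return model_kb != physical_kb

# The physical system drifted slightly, but learned nothing new:
state_sync = needs_update_state(1.00, 1.02)                      # would sync
kb_sync = needs_update_knowledge({"rule": "A"}, {"rule": "A"})   # no sync
print(state_sync, kb_sync)
```

Tolerating low-level state drift while synchronising only on knowledge changes is what lets the approach cut update overhead without giving up simulation reliability.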

Background. From information theory, surprisal is a measurement of how unexpected an event is. Statistical language models provide a probabilistic approximation of natural languages, and because surprisal is constructed from the probability of an event occurring, it is possible to determine the surprisal associated with English sentences. The issues and pull requests of software repository issue trackers give insight into the development process and likely contain the surprising events of this process. Objective. Prior works have identified that unusual events in software repositories are of interest to developers, and use simple code-metrics-based methods for detecting them. In this study we propose a new method for unusual event detection in software repositories using surprisal. With the ability to find surprising issues and pull requests, we intend to further analyse them to determine whether they actually hold importance in a repository, or whether they pose a significant challenge to address. If it is possible to find bad surprises early, or before they cause additional trouble, it is plausible that effort, cost, and time will be saved as a result. Method. After extracting the issues and pull requests from 5000 of the most popular software repositories on GitHub, we will train a language model to represent these issues. We will measure their perceived importance in the repository, measure their resolution difficulty using several analogues, measure the surprisal of each, and finally generate inferential statistics to describe any correlations.
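The surprisal measurement itself is simple to state: the surprisal of an event with probability p is -log2 p, and a sentence's surprisal is the sum over its tokens. A minimal sketch with a Laplace-smoothed unigram model follows; real studies would use a far stronger language model, and the corpus here is made up:

```python
import math
from collections import Counter

# Toy "issue title" corpus for the unigram model (illustrative)
corpus = "fix failing test fix broken build fix flaky test".split()
counts = Counter(corpus)
vocab = set(corpus) | {"<unk>"}
total = sum(counts.values())

def prob(word):
    # Laplace-smoothed unigram probability, so unseen words get mass
    return (counts.get(word, 0) + 1) / (total + len(vocab))

def surprisal(sentence):
    # Total surprisal in bits: sum of -log2 P(w) over the tokens
    return sum(-math.log2(prob(w)) for w in sentence.split())

common = surprisal("fix test")              # frequent words, low surprisal
rare = surprisal("segfault deadlock")       # unseen words, high surprisal
print(common, rare)
```

Under any language model with the same interface, an issue whose text the model finds improbable accumulates high surprisal, which is exactly the signal the study proposes to correlate with importance and resolution difficulty.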

The core of information retrieval (IR) is to identify relevant information from large-scale resources and return it as a ranked list to respond to the user's information need. Recently, the resurgence of deep learning has greatly advanced this field and led to a hot topic named NeuIR (i.e., neural information retrieval), especially the paradigm of pre-training methods (PTMs). Owing to sophisticated pre-training objectives and huge model size, pre-trained models can learn universal language representations from massive textual data, which are beneficial to the ranking task of IR. Since there have been a large number of works dedicated to the application of PTMs in IR, we believe it is the right time to summarize the current status, learn from existing research, and gain some insights for future development. In this survey, we present an overview of PTMs applied in different components of an IR system, including the retrieval component, the re-ranking component, and other components. In addition, we also introduce PTMs specifically designed for IR, and summarize available datasets as well as benchmark leaderboards. Moreover, we discuss some open challenges and envision some promising directions, with the hope of inspiring more work on these topics in future research.

Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.
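One of the rehearsal-style mechanisms such reviews discuss, memory replay, can be sketched as a small buffer that keeps a uniform sample of the training stream so that old-task examples can be interleaved with new ones. The buffer capacity and the reservoir-sampling policy below are illustrative choices, not a specific method from the review:

```python
import random

class ReplayBuffer:
    """Fixed-capacity buffer holding a uniform sample of a data stream."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        # Reservoir sampling: every example seen so far has equal
        # probability of being in the buffer, regardless of task order
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, n):
        return random.sample(self.data, min(n, len(self.data)))

buf = ReplayBuffer(capacity=5)
for task_id in range(3):          # three tasks arriving sequentially
    for i in range(10):
        buf.add((task_id, i))

# A rehearsal batch mixes examples from all tasks seen so far,
# counteracting the drift toward the most recent data distribution
print(buf.sample(3))
```

Interleaving replayed examples with the current task's batch is one of the simplest ways to approximate the stationary training regime that standard deep networks implicitly assume.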
