
We identify the algebraic structure of the material histories generated by concurrent processes. Specifically, we extend existing categorical theories of resource convertibility to capture concurrent interaction. Our formalism admits an intuitive graphical presentation via string diagrams for proarrow equipments. We also consider certain induced categories of resource transducers, which are of independent interest due to their unusual structure.
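
As background, the "existing categorical theories of resource convertibility" referenced here are usually presented as symmetric monoidal categories: objects are resources, morphisms are conversion processes, and the monoidal product combines resources in parallel. The LaTeX fragment below is a sketch of that baseline structure only; the paper's actual contribution (concurrent interaction via proarrow equipments) goes beyond it.

```latex
% Baseline structure assumed by resource-convertibility theories:
% a symmetric monoidal category (objects = resources, morphisms = conversions).
\[
  f : A \to B \qquad \text{(a process converting resource } A \text{ into } B\text{)}
\]
\[
  A \otimes B \quad \text{(resources held in parallel)}, \qquad
  I \quad \text{(the void resource, unit of } \otimes\text{)}
\]
\[
  g \circ f : A \to C \quad \text{(sequencing, for } g : B \to C\text{)}, \qquad
  f \otimes f' : A \otimes A' \to B \otimes B' \quad \text{(parallelism)}
\]
```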

Related Content

Microservice systems are driving digital transformation; however, fundamental tools and system-wide perspectives for observing, understanding, and managing these systems, their properties, and their dependencies are still missing. The microservices architecture pushes toward decentralization, which brings many advantages for system operation but introduces challenges for development. Microservice systems often lack a system-centric perspective that would help engineers better cope with system evolution and quality assessment. In this work, we explored microservice-specific architecture reconstruction based on static analysis. Such reconstruction typically yields system models that visualize selected system-centric perspectives. Conventional models use 2D visualization methods; however, these become less useful as services proliferate. We considered various architectural perspectives relevant to microservices and assessed the relevance of the traditional method by comparing it to an alternative visualization in 3D space. As a representative 3D method, we considered a 3D graph model presented in augmented reality. To begin testing the feasibility of deriving such perspectives from microservice systems, we developed prototype tools for software architecture reconstruction and for visualizing the compared perspectives. Using these prototypes, we performed a small user study with software practitioners to highlight the potential and limitations of these innovative visualizations for common practitioner reasoning and tasks.
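
To make the pipeline concrete, the sketch below assumes static analysis has already produced a service-call map and uses networkx to compute a 3D force-directed layout of the dependency graph. The service names and the layout choice are illustrative assumptions, not the authors' prototype tooling; an AR client would consume coordinates like these to place service nodes in space.

```python
# A minimal sketch: build a service-dependency graph from (hand-written,
# hypothetical) static-analysis output and compute a 3D layout for it.
import networkx as nx

# Hypothetical result of static analysis: service -> services it calls.
calls = {
    "orders":    ["inventory", "payments"],
    "payments":  ["ledger"],
    "inventory": ["ledger"],
    "frontend":  ["orders", "inventory"],
}

G = nx.DiGraph((src, dst) for src, dsts in calls.items() for dst in dsts)

# networkx computes force-directed layouts in three dimensions directly.
pos3d = nx.spring_layout(G, dim=3, seed=42)
for service, (x, y, z) in pos3d.items():
    print(f"{service:10s} -> ({x:+.2f}, {y:+.2f}, {z:+.2f})")
```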

In the analysis of large-scale structures, fine-scale heterogeneity must be taken into account for accurate failure prediction. Resolving fine-scale features in the numerical model drastically increases the number of degrees of freedom, making full fine-scale simulations infeasible, especially in cases where the model needs to be evaluated many times. In this paper, a methodology for fine-scale modeling of large-scale structures is proposed, which combines the variational multiscale method, domain decomposition, and model order reduction. To address applications where the assumption of scale separation does not hold, the influence of the fine scale on the coarse scale is modelled directly through an additive split of the displacement field. Possible coarse- and fine-scale solutions are exploited for a representative coarse grid element (RCE) to construct local approximation spaces. The local spaces are designed such that local contributions of RCE subdomains can be coupled in a conforming way. Consequently, the resulting global system of equations takes the effect of the fine scale on the coarse scale into account, is sparse, and is reduced in size compared to the full order model. Several numerical experiments show the accuracy and efficiency of the method.
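
The additive split and the projection-based reduction can be illustrated on a 1D toy problem. In the sketch below, one coarse mode plus a few fine-scale modes form a reduced basis B, and the reduced operator B^T K B is assembled as in any Galerkin reduction; the basis, the single-domain setting, and the Poisson problem are simplifying assumptions, and the paper's construction of local RCE spaces and their conforming coupling is considerably richer.

```python
# A minimal sketch of an additive coarse/fine split under Galerkin projection,
# on a 1D Poisson toy problem -u'' = 1 with homogeneous Dirichlet BCs.
import numpy as np

n = 101                       # fine-grid nodes on [0, 1]
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)

# Fine-scale stiffness matrix (standard P1 tridiagonal) and load vector.
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
f = h * np.ones(n)

# Additive split u ~ u_c + u_f: one coarse hat mode plus sine "bubble" modes.
coarse = np.minimum(x, 1.0 - x)                        # coarse mode
fine = [np.sin(k * np.pi * x) for k in (2, 3, 4, 5)]   # toy fine-scale modes
B = np.column_stack([coarse] + fine)                   # reduced basis

# Reduced system: far smaller than the n x n full-order model.
K_r, f_r = B.T @ K @ B, B.T @ f
u_reduced = B @ np.linalg.solve(K_r, f_r)

# Reference full-order solution on interior nodes for comparison.
u_full = np.zeros(n)
u_full[1:-1] = np.linalg.solve(K[1:-1, 1:-1], f[1:-1])
print("relative error:",
      np.linalg.norm(u_reduced - u_full) / np.linalg.norm(u_full))
```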

Providing security guarantees for embedded devices with limited interface capabilities is an increasingly crucial task. Although these devices do not expose traditional interfaces, they still emit unintentional electromagnetic (EM) signals that correlate with the instructions being executed. By collecting such traces with our methodology and training a random forest model on them, we built an EM side-channel-based instruction-level disassembler. The disassembler was tested on an Arduino UNO board, yielding 88.69% instruction-recognition accuracy for traces from twelve instructions captured at a single location on the device; this is an improvement over the 75.6% (for twenty instructions) reported in previous similar work.
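
The classification step can be sketched with scikit-learn. The data below is synthetic (random per-instruction templates plus noise) and stands in for real EM traces; the trace-collection methodology and feature engineering are the paper's contribution and are not reproduced here.

```python
# A minimal sketch: train a random forest to map (synthetic) EM trace windows
# to instruction labels, mirroring the 12-instruction setting of the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_instructions, traces_per_instr, trace_len = 12, 200, 64

# Synthetic stand-in: each instruction has a characteristic mean trace;
# observations are that template plus noise (real EM traces are far messier).
templates = rng.normal(size=(n_instructions, trace_len))
X = np.repeat(templates, traces_per_instr, axis=0) + \
    0.8 * rng.normal(size=(n_instructions * traces_per_instr, trace_len))
y = np.repeat(np.arange(n_instructions), traces_per_instr)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"synthetic accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```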

The automation and digitalization of business processes have resulted in large amounts of data captured in information systems, which can aid businesses in understanding their processes better, improving workflows, and providing operational support. By making predictions about ongoing processes, bottlenecks can be identified, resources can be reallocated, and insights can be gained into the state of a process instance (case). Traditionally, data is extracted from systems in the form of an event log with a single identifying case notion, such as an order id for an Order to Cash (O2C) process. However, real processes often involve multiple object types, for example, order, item, and package, so a format that forces the use of a single case notion does not reflect the underlying relations in the data. The Object-Centric Event Log (OCEL) format was introduced to capture this information correctly. State-of-the-art predictive methods, however, have so far been tailored only to traditional event logs. This thesis shows that a prediction method utilizing Generative Adversarial Networks (GAN), Long Short-Term Memory (LSTM) architectures, and Sequence-to-Sequence (Seq2seq) models can be augmented with the rich data contained in OCEL. Objects in OCEL can have attributes that are useful for predicting the next event and its timestamp, such as a priority-class attribute for an object of type package indicating slower or faster processing. In terms of the sequence similarity of predicted remaining events and the mean absolute error (MAE) of the timestamp, the approach in this thesis matches or exceeds previous research, depending on whether the selected object attributes are useful features for the model. Additionally, this thesis provides a web interface to predict the next sequence of activities from user input.
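
The core modelling idea, feeding object attributes alongside the activity sequence, can be sketched in PyTorch. Everything below (the dimensions, the single-layer LSTM, the toy batch) is an illustrative assumption; the thesis combines GAN, LSTM, and Seq2seq architectures rather than this bare model.

```python
# A minimal sketch: an LSTM that consumes activity embeddings concatenated
# with per-event object attributes and predicts the next activity and the
# time until the next event.
import torch
import torch.nn as nn

n_activities, attr_dim, emb_dim, hidden = 8, 1, 16, 32

class NextEventLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(n_activities, emb_dim)
        # LSTM input = activity embedding + object attributes (e.g. priority).
        self.lstm = nn.LSTM(emb_dim + attr_dim, hidden, batch_first=True)
        self.next_act = nn.Linear(hidden, n_activities)  # next-activity logits
        self.next_dt = nn.Linear(hidden, 1)              # time to next event

    def forward(self, acts, attrs):
        x = torch.cat([self.embed(acts), attrs], dim=-1)
        h, _ = self.lstm(x)
        last = h[:, -1]                                  # state after prefix
        return self.next_act(last), self.next_dt(last)

# Toy batch: 4 case prefixes of length 5, one attribute per event
# (say, the priority class of the package object involved).
acts = torch.randint(0, n_activities, (4, 5))
attrs = torch.rand(4, 5, attr_dim)
logits, dt = NextEventLSTM()(acts, attrs)
print(logits.shape, dt.shape)   # torch.Size([4, 8]) torch.Size([4, 1])
```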

In this paper, we propose a novel unified framework for highlight detection and removal across multiple scene types, including synthetic images, face images, natural images, and text images. The framework consists of three main components: a highlight feature extraction module, a coarse highlight removal module, and a refined highlight removal module. First, the highlight feature extraction module directly separates the highlight and non-highlight features from the original highlight image. A coarse highlight-removal image is then obtained using the coarse removal network. To further improve the result, a refined highlight-removal image is finally produced by the refinement module, which is based on a contextual highlight attention mechanism. Extensive experiments in multiple scenes indicate that the proposed framework obtains excellent visual highlight-removal results and achieves state-of-the-art performance on several quantitative evaluation metrics. We also apply our algorithm to video highlight removal for the first time, with promising results.
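
The three-module structure can be sketched with placeholder convolutional blocks. The module contents and shapes below are assumptions; the paper's extractor, removal networks, and contextual highlight attention are far more elaborate.

```python
# A minimal sketch of the extract -> coarse removal -> refinement pipeline.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class HighlightRemoval(nn.Module):
    def __init__(self):
        super().__init__()
        self.extract = conv_block(3, 16)          # highlight feature extractor
        self.to_highlight = nn.Conv2d(16, 3, 1)   # separated highlight component
        self.coarse = conv_block(6, 16)           # coarse removal (img + highlight)
        self.coarse_out = nn.Conv2d(16, 3, 1)
        self.refine = conv_block(6, 16)           # refinement (coarse + highlight)
        self.refine_out = nn.Conv2d(16, 3, 1)

    def forward(self, img):
        feat = self.extract(img)
        highlight = self.to_highlight(feat)
        coarse = self.coarse_out(self.coarse(torch.cat([img, highlight], 1)))
        refined = self.refine_out(self.refine(torch.cat([coarse, highlight], 1)))
        return highlight, coarse, refined

h, c, r = HighlightRemoval()(torch.rand(1, 3, 64, 64))
print(h.shape, c.shape, r.shape)
```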

Careful rational synthesis was defined in (Condurache et al. 2021) as a quantitative extension of Fisman et al.'s rational synthesis (Fisman et al. 2010): a model of multi-agent systems in which agents interact in a graph arena in a turn-based fashion. There is one common resource, and each action may decrease or increase it. Each agent has a qualitative temporal objective and wants to keep the value of the resource positive. The task is to find a Nash equilibrium, and this problem is decidable. In more practical settings, verifying the critical properties of multi-agent systems calls for models with many resources: agents and robots consume and produce more than one type of resource, such as electric energy, fuel, raw material, and manufactured goods. We thus explore the problem of careful rational synthesis with several resources and show that it is undecidable. We then propose a variant with bounded resources, motivated by the observation that in practical settings the storage of resources is limited. We show that the problem becomes decidable and is no harder than controller synthesis with Linear-time Temporal Logic objectives.
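
The decidability of the bounded variant rests on the observation that capping each resource makes the product of arena vertices and resource vectors finite, so it can be explored exhaustively. The toy two-resource arena below illustrates this finite product; it is an illustrative construction, not an example from the paper.

```python
# A minimal sketch: BFS over the finite (vertex, resources) product graph of a
# two-resource arena with each resource bounded by CAP.
from collections import deque

CAP = 3                                   # bound on each of the two resources
# vertex -> list of (next_vertex, resource_delta) moves
arena = {
    "s": [("a", (-1, 0)), ("b", (0, -1))],
    "a": [("s", (+1, -1)), ("goal", (-1, -1))],
    "b": [("s", (-1, +1)), ("goal", (0, -2))],
    "goal": [],
}

def reachable(start, r0):
    """Explore the finite product of vertices and resource vectors in
    [0, CAP]^2; moves that would drive a resource negative are forbidden."""
    seen, queue = {(start, r0)}, deque([(start, r0)])
    while queue:
        v, (r1, r2) = queue.popleft()
        for w, (d1, d2) in arena[v]:
            nr = (min(r1 + d1, CAP), min(r2 + d2, CAP))
            if min(nr) >= 0 and (w, nr) not in seen:
                seen.add((w, nr))
                queue.append((w, nr))
    return seen

states = reachable("s", (2, 2))
print(len(states), "states; goal reachable:",
      any(v == "goal" for v, _ in states))
```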

Fast-developing artificial intelligence (AI) technology has enabled various applied systems deployed in the real world, impacting people's everyday lives. However, many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, and so on, which not only degrades the user experience but also erodes society's trust in all AI systems. In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems. We first introduce the theoretical framework of important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, alignment with human values, and accountability. We then survey leading industry approaches in these aspects. To unify the currently fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems, ranging from data acquisition, to model development, to deployment, and finally to continuous monitoring and governance. In this framework, we offer concrete action items for practitioners and societal stakeholders (e.g., researchers and regulators) to improve AI trustworthiness. Finally, we identify key opportunities and challenges in the future development of trustworthy AI systems, pointing out the need for a paradigm shift towards comprehensively trustworthy AI systems.

Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems, and to let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predictions of performance metrics or some other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies according to their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a vision of future opportunities and potential directions, and we envision that applying ML to computer architecture and systems will thrive in the community.

Graph Neural Networks (GNNs), which generalize deep neural networks to graph-structured data, have drawn considerable attention and achieved state-of-the-art performance on numerous graph-related tasks. However, existing GNN models mainly focus on designing graph convolution operations, while graph pooling (or downsampling) operations, which play an important role in learning hierarchical representations, are usually overlooked. In this paper, we propose a novel graph pooling operator, called Hierarchical Graph Pooling with Structure Learning (HGP-SL), which can be integrated into various graph neural network architectures. HGP-SL incorporates graph pooling and structure learning into a unified module to generate hierarchical representations of graphs. More specifically, the graph pooling operation adaptively selects a subset of nodes to form an induced subgraph for the subsequent layers. To preserve the integrity of the graph's topological information, we further introduce a structure learning mechanism that learns a refined graph structure for the pooled graph at each layer. By combining the HGP-SL operator with graph neural networks, we perform graph-level representation learning with a focus on the graph classification task. Experimental results on six widely used benchmarks demonstrate the effectiveness of the proposed model.
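
The node-selection step can be sketched as top-k pooling over learned scores followed by extraction of the induced subgraph. The scoring projection below is a simplifying assumption, and HGP-SL's structure-learning refinement of the pooled adjacency is only indicated by a comment.

```python
# A minimal sketch of score-based top-k graph pooling on a dense adjacency.
import torch

def topk_pool(x, adj, ratio=0.5, w=None):
    """x: (n, d) node features; adj: (n, n) dense adjacency matrix."""
    n, d = x.shape
    w = w if w is not None else torch.randn(d)
    scores = (x @ w) / w.norm()            # per-node importance scores
    k = max(1, int(ratio * n))
    idx = scores.topk(k).indices           # nodes to keep
    # Gate kept features by their scores so the projection receives gradient.
    x_pool = x[idx] * torch.sigmoid(scores[idx]).unsqueeze(-1)
    adj_pool = adj[idx][:, idx]            # induced subgraph; HGP-SL would now
                                           # refine this structure by learning
    return x_pool, adj_pool, idx

x = torch.rand(6, 8)
adj = (torch.rand(6, 6) > 0.5).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)
x_p, a_p, idx = topk_pool(x, adj)
print(x_p.shape, a_p.shape, idx.tolist())
```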

Visual Question Answering (VQA) models have so far struggled to count objects in natural images. We identify the soft attention used in these models as a fundamental cause of this problem. To circumvent it, we propose a neural network component that enables robust counting from object proposals. Experiments on a toy task show the effectiveness of this component, and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced-pair metric, the component improves counting over a strong baseline by 6.6%.
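
The intuition can be sketched as follows: soft attention averages proposal features, so multiplicity is lost, whereas counting from proposals amounts to deduplicating overlapping boxes. The greedy IoU-based dedup below is an illustrative, non-differentiable stand-in for the paper's counting component.

```python
# A minimal sketch: count objects by keeping confident proposals that do not
# overlap an already-kept proposal.
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def count_objects(boxes, scores, iou_thresh=0.5, conf_thresh=0.5):
    """Greedy dedup: keep a confident box only if it overlaps no kept box."""
    kept = []
    for i in np.argsort(scores)[::-1]:
        if scores[i] > conf_thresh and \
           all(iou(boxes[i], k) < iou_thresh for k in kept):
            kept.append(boxes[i])
    return len(kept)

# Two real objects, each detected twice by overlapping proposals.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11],
                  [20, 20, 30, 30], [21, 19, 31, 29]], dtype=float)
scores = np.array([0.9, 0.8, 0.95, 0.7])
print(count_objects(boxes, scores))   # 2

# Soft attention, by contrast, would average the four proposal features and
# lose the multiplicity information needed to answer "how many".
```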
