
The Finite Volume Method (FVM) is widely adopted across many applications because of its built-in conservation properties, its ability to deal with arbitrary meshes, and its computational efficiency. In this work, we consider the Rhie-Chow stabilized Box Method (RCBM) for the approximation of the Stokes problem. The Box Method (BM) is a piecewise linear Petrov-Galerkin formulation on the Voronoi dual mesh of a Delaunay triangulation, whereas the Rhie-Chow (RC) stabilization is a well-known stabilization technique for FVM. The first part of the paper provides a variational formulation of the RC stabilization and discusses the validity of crucial properties relevant for the well-posedness and convergence of RCBM. Moreover, a numerical exploration of the convergence properties of the method on 2D and 3D test cases is presented. The last part of the paper considers the theoretical justification of the well-posedness of RCBM and of the experimentally observed convergence rates. This justification hinges upon suitable assumptions, whose validity is explored numerically.
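For orientation, a hedged sketch in standard notation: the Stokes problem and the generic shape of a stabilized equal-order mixed discretization. The specific stabilization bilinear form $s_h$ corresponding to the Rhie-Chow term is derived in the paper and is not reproduced here.

```latex
% Stokes problem: find velocity u and pressure p such that
%   -\Delta u + \nabla p = f,   \nabla\cdot u = 0   in \Omega,
% with homogeneous Dirichlet boundary conditions on u.
% A generic stabilized mixed discretization reads: find (u_h, p_h) with
\begin{align*}
  a(u_h, v_h) + b(v_h, p_h) &= (f, v_h) && \forall v_h,\\
  b(u_h, q_h) - s_h(p_h, q_h) &= 0 && \forall q_h,
\end{align*}
% where s_h(\cdot,\cdot) supplies the pressure stability that the
% equal-order piecewise linear pair lacks; the Rhie-Chow stabilization
% is one particular choice of s_h.
```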

Related content

CASES: International Conference on Compilers, Architectures, and Synthesis for Embedded Systems. Publisher: ACM. SIT:

Datacenter capacity is growing exponentially to satisfy the increasing demand for emerging computationally-intensive applications, such as deep learning. This trend has led to concerns over datacenters' increasing energy consumption and carbon footprint. The basic prerequisite for optimizing a datacenter's energy- and carbon-efficiency is accurately monitoring and attributing energy consumption to specific users and applications. Since datacenter servers tend to be multi-tenant, i.e., they host many applications, server- and rack-level power monitoring alone does not provide insight into their resident applications' energy usage and carbon emissions. At the same time, current application-level energy monitoring and attribution techniques are intrusive: they require privileged access to servers and coordinated support in hardware and software, which is not always possible in the cloud. To address the problem, we design WattScope, a system for non-intrusively estimating the power consumption of individual applications using external measurements of a server's aggregate power usage, without requiring direct access to the server's operating system or applications. Our key insight, based on an analysis of production traces, is that the power characteristics of datacenter workloads, e.g., low variability, low magnitude, and high periodicity, make a server's total power consumption highly amenable to disaggregation into application-specific values. WattScope adapts and extends a machine learning-based technique for disaggregating building power and applies it to server- and rack-level power meter measurements in datacenters. We evaluate WattScope's accuracy on a production workload and show that it yields high accuracy, e.g., often <10% normalized mean absolute error, and is thus a potentially useful tool for datacenters in externally monitoring application-level power usage.
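To make the disaggregation idea concrete, here is a toy sketch, not WattScope's actual algorithm: given hypothetical per-application power signatures, a non-negative least-squares fit attributes the metered aggregate power to individual applications. All names and data below are illustrative assumptions.

```python
# Toy sketch of power disaggregation (NOT WattScope's actual algorithm):
# given per-application power "signatures" over a window, estimate each
# application's contribution to the measured aggregate server power via
# non-negative least squares. All signatures here are hypothetical.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
T, n_apps = 120, 3                      # 120 samples, 3 resident apps

# Hypothetical per-app signatures: periodic, low-variability workloads,
# the regime the abstract identifies as amenable to disaggregation.
t = np.arange(T)
signatures = np.stack([
    50 + 5 * np.sin(2 * np.pi * t / 20),      # app 0
    30 + 3 * np.sin(2 * np.pi * t / 35 + 1),  # app 1
    20 + 2 * np.sin(2 * np.pi * t / 50 + 2),  # app 2
], axis=1)                              # shape (T, n_apps)

true_share = np.array([1.0, 0.6, 1.4])  # unknown per-app scaling
aggregate = signatures @ true_share + rng.normal(0, 1.0, T)  # metered power

share, _ = nnls(signatures, aggregate)  # non-negative per-app weights
per_app_power = signatures * share      # attributed power time series

err = np.abs(per_app_power.sum(axis=1) - aggregate).mean() / aggregate.mean()
print("estimated shares:", share.round(2), "| NMAE:", round(float(err), 3))
```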

Thanks to the mobility of their base, mobile manipulators have been employed in many applications that are usually performed by multiple fixed-base robots or a large-size system. However, the mobile base also introduces redundancies into the system, which makes trajectory planning more challenging. One class of problems recently arising from mobile 3D printing is trajectory-continuous tasks, in which the end-effector is required to follow a designed continuous trajectory (a time-parametrized path) in task space. This paper formulates and solves the optimal trajectory planning problem for mobile manipulators under an end-effector trajectory-continuity constraint, which allows other constraints and trajectory optimization to be taken into account. To demonstrate our method, we propose a discrete optimal trajectory planning algorithm and apply it to mobile 3D printing tasks in multiple experiments.
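As a hedged illustration of discrete planning under an end-effector trajectory-continuity constraint (the paper's actual formulation is richer), one can discretize the given end-effector trajectory, enumerate candidate base placements per timestamp, and run dynamic programming over the resulting layered graph. The reachability test and costs below are hypothetical stand-ins for full kinematics.

```python
# Hedged sketch: at each timestamp of the fixed end-effector path,
# enumerate candidate base positions that keep the target reachable,
# then pick the smoothest base sequence by dynamic programming over
# the layered graph. Kinematics are mocked.
import math

def reachable(base, ee):
    """Toy reachability test: planar arm reach between 0.3 and 1.0 m."""
    d = math.dist(base, ee[:2])
    return 0.3 <= d <= 1.0

def plan_base_trajectory(ee_path, candidates):
    """ee_path: list of (x, y, z) end-effector waypoints (fixed in time).
    candidates: list of (x, y) candidate base positions."""
    layers = [[c for c in candidates if reachable(c, ee)] for ee in ee_path]
    cost, back = [0.0] * len(layers[0]), []
    for k in range(1, len(layers)):
        new_cost, ptr = [], []
        for b in layers[k]:
            # transition cost = squared base displacement (smoothness)
            best = min(range(len(layers[k - 1])),
                       key=lambda i: cost[i] + math.dist(layers[k - 1][i], b) ** 2)
            new_cost.append(cost[best] + math.dist(layers[k - 1][best], b) ** 2)
            ptr.append(best)
        cost, back = new_cost, back + [ptr]
    # backtrack the optimal base sequence
    seq = [min(range(len(cost)), key=cost.__getitem__)]
    for ptr in reversed(back):
        seq.append(ptr[seq[-1]])
    return [layers[k][i] for k, i in enumerate(reversed(seq))]

ee_path = [(0.5 + 0.05 * k, 0.0, 0.4) for k in range(10)]  # printed bead
grid = [(x / 10, y / 10) for x in range(-10, 21) for y in range(-10, 11)]
print(plan_base_trajectory(ee_path, grid)[:3])
```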

This paper is concerned with the expressivity and denotational semantics of a functional higher-order reversible programming language based on Theseus. In this language, pattern-matching is used to ensure the reversibility of functions. We show how one can encode any Reversible Turing Machine in said language. We then build a sound and adequate categorical semantics based on join inverse categories, with additional structure to capture pattern-matching. Finally, we derive a full completeness result, stating that any computable partial injective function is the image of a term in the language.
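As a toy model of the pattern-matching discipline, not the paper's language itself: a Theseus-style iso is a partial injective function given by clauses whose left-hand sides do not overlap and whose right-hand sides do not overlap, so the same clause list read right-to-left runs the function backwards.

```python
# Toy model of a Theseus-style "iso" over Peano naturals: a partial
# injective function given by pattern clauses. Reversibility comes from
# the clauses being non-overlapping on BOTH sides.
from dataclasses import dataclass

@dataclass(frozen=True)
class Z: pass                  # zero
@dataclass(frozen=True)
class S:                       # successor
    pred: object

# Clauses of an iso on pairs:  (S n, m) <-> (n, S m)   and   (Z, m) <-> (m, Z)
# Left sides don't overlap, right sides don't overlap: the map is injective.
def shift_fwd(v):
    a, b = v
    if isinstance(a, S):
        return (a.pred, S(b))
    if isinstance(a, Z):
        return (b, Z())
    raise ValueError("no clause matches")  # partiality

def shift_bwd(v):              # same clauses, read right-to-left
    a, b = v
    if isinstance(b, S):
        return (S(a), b.pred)
    if isinstance(b, Z):
        return (Z(), a)
    raise ValueError("no clause matches")

x = (S(S(Z())), Z())
assert shift_bwd(shift_fwd(x)) == x     # backward inverts forward
```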

With the advent of the Internet of Things (IoT), AI, and ML/DL algorithms, data-driven medical applications have emerged as a promising avenue for designing robust and scalable diagnostic and prognostic models from medical data. Consequently, the realm of data-driven medical applications has garnered significant attention across academia and industry, ushering in marked enhancements in healthcare delivery quality. Despite these strides, the adoption of AI-driven medical applications remains hindered by formidable challenges, including the arduous task of meeting security, privacy, and quality-of-service (QoS) standards. Recent developments in federated learning (FL) have made it possible to train complex machine-learned models in a distributed manner, and FL has become an active research domain, particularly for processing medical data at the edge of the network in a decentralized way that preserves privacy and addresses security concerns. To this end, this survey paper highlights the current state and future of FL technology in medical applications where data sharing is a significant burden. We delve into contemporary research trends and their outcomes, unravelling the intricacies of designing reliable and scalable FL models. Our survey outlines the foundational statistical predicaments of FL, confronts device-related obstacles, delves into security challenges, and navigates the intricate terrain of privacy concerns, all while spotlighting FL's transformative potential within the medical domain. A primary focus of our study rests on medical applications, where we underscore the weighty burden of global cancer and illuminate the potency of FL in engendering computer-aided diagnosis tools that address this challenge with heightened efficacy.
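For readers new to FL, a minimal FedAvg sketch (the canonical FL algorithm, shown as background rather than as a method from the survey): clients fit models on their private data, and only model parameters, never raw medical records, are sent to the server for weighted averaging.

```python
# Minimal FedAvg sketch. Each "hospital" runs local gradient steps of
# logistic regression on its private data; the server averages the
# resulting weight vectors, weighted by local dataset size.
import numpy as np

rng = np.random.default_rng(1)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)   # gradient step
    return w

# Hypothetical private datasets for 3 clients (e.g. hospitals).
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 5))
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(5)
for rnd in range(20):                          # communication rounds
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_sgd(w_global.copy(), X, y))  # local training
        sizes.append(len(y))
    # server: size-weighted average of client models (FedAvg)
    w_global = np.average(updates, axis=0, weights=sizes)

acc = np.mean([
    ((1 / (1 + np.exp(-X @ w_global)) > 0.5) == y).mean() for X, y in clients
])
print("average local accuracy:", round(float(acc), 3))
```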

The Geographical and Temporal Weighted Regression (GTWR) model is an important local technique for exploring spatial heterogeneity in data relationships, as well as temporal dependence, owing to its high fitting capacity on real data. In this article, we consider a GTWR model driven by a spatio-temporal noise that is colored in space and fractional in time. Concerning the covariates, we assume they are correlated and take into account two types of interaction between them, weak and strong. Under these assumptions, the Weighted Least Squares (WLS) estimator is obtained, together with its rate of convergence. To demonstrate the good performance of the estimator, we provide a simulation study of four different scenarios, in which the residuals are observed to oscillate with small variation around zero. The STARMA package of the R software yields a variant of the $R^{2}$ coefficient, with values very close to 1, meaning that most of the variability is explained by the model.
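For orientation, the standard form of the GTWR weighted least-squares estimator (the article's contribution concerns its convergence rate under the noise and interaction assumptions above, not the shape of this formula):

```latex
% GTWR fits coefficients locally at each space-time location (u_i, v_i, t_i)
% using a diagonal kernel weight matrix W built from spatio-temporal
% distances to that location:
\[
  \hat{\beta}(u_i, v_i, t_i)
    = \bigl( X^{\top} W(u_i, v_i, t_i)\, X \bigr)^{-1}
      X^{\top} W(u_i, v_i, t_i)\, y ,
\]
% where X is the design matrix of covariates and y the response vector.
```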

Compartmentalization is a form of defensive software design in which an application is broken down into isolated but communicating components. Retrofitting compartmentalization into existing applications is often thought to be expensive in terms of engineering effort and performance overhead. Still, recent years have seen proposals of compartmentalization methods promising low engineering effort and reduced performance impact. ARM Morello combines a modern ARM processor with an implementation of Capability Hardware Enhanced RISC Instructions (CHERI), aiming to provide efficient and secure compartmentalization. Past works exploring CHERI-based compartmentalization were restricted to emulated/FPGA prototypes. In this paper, we explore possible compartmentalization schemes with CHERI on the Morello chip. We propose two approaches representing different trade-offs in terms of engineering effort, security, scalability, and performance impact. We describe and implement these approaches on a prototype OS running bare metal on the Morello chip, compartmentalize two popular applications, and investigate the performance overheads. Furthermore, we show that compartmentalization can be achieved at an engineering cost that is quite low if one is willing to trade off scalability and security, and with performance overheads similar to those of other intra-address-space isolation mechanisms.

Explainable recommender systems (RS) have traditionally followed a one-size-fits-all approach, delivering the same level of explanation detail to every user without considering their individual needs and goals. Further, explanations in RS have so far been presented mostly in a static and non-interactive manner. To fill these research gaps, we aim in this paper to adopt a user-centered, interactive explanation model that provides explanations with different levels of detail and empowers users to interact with, control, and personalize the explanations based on their needs and preferences. We followed a user-centered approach to design interactive explanations with three levels of detail (basic, intermediate, and advanced) and implemented them in the transparent Recommendation and Interest Modeling Application (RIMA). We conducted a qualitative user study (N=14) to investigate the impact of providing interactive explanations with varying levels of detail on users' perception of the explainable RS. Our study provided qualitative evidence that fostering interaction and giving users control over which explanation they see can meet the demands of users with different needs, preferences, and goals, and consequently can have positive effects on crucial aspects of explainable recommendation, including transparency, trust, satisfaction, and user experience.
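A hypothetical sketch of the three-level pattern described above (names and texts are illustrative, not RIMA's actual code): the system keeps one explanation per level of detail and lets the user switch among them.

```python
# Illustrative sketch of user-controlled explanation detail levels.
EXPLANATION_LEVELS = ("basic", "intermediate", "advanced")

explanations = {  # hypothetical explanation texts for one recommendation
    "basic": "Recommended because it matches your interest in 'machine learning'.",
    "intermediate": "Your interest profile weights 'machine learning' at 0.8, "
                    "inferred from 12 of your recent publications.",
    "advanced": "Similarity breakdown: keyword overlap 0.62, embedding "
                "similarity 0.74; interest weights derived from your publications.",
}

def explain(level: str) -> str:
    """Return the explanation at the user-chosen level of detail."""
    if level not in EXPLANATION_LEVELS:
        raise ValueError(f"level must be one of {EXPLANATION_LEVELS}")
    return explanations[level]

print(explain("basic"))       # the user can switch levels interactively
```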

Many protocols in distributed computing rely on a source of randomness, usually called a random beacon, both for their applicability and their security. This is especially true for proof-of-stake blockchain protocols, in which the next miner or set of miners has to be chosen randomly and each party's likelihood of being selected is proportional to its stake in the cryptocurrency. Current random beacons used in proof-of-stake protocols, such as Ouroboros and Algorand, have two fundamental limitations: either (i) they rely on pseudorandomness, e.g. assuming that the output of a hash function is uniform, which is a widely used but unproven assumption, or (ii) they generate their randomness using a distributed protocol in which several participants are required to submit random numbers that are then used to generate a final random result. However, in this case, there is no guarantee that the numbers provided by the parties are uniformly random and no incentive for the parties to honestly generate uniform randomness. Most random beacons have both limitations. In this thesis, we provide a protocol for distributed generation of randomness. Our protocol does not rely on pseudorandomness at all. Similar to some previous approaches, it uses random inputs by different participants to generate a final random result. However, the crucial difference is that we provide a game-theoretic guarantee showing that it is in everyone's best interest to submit uniformly random numbers. Hence, our approach is the first to incentivize honest behavior instead of just assuming it. Moreover, the approach is trustless and generates unbiased random numbers. It is also tamper-proof: no party can change the output or affect its distribution. Finally, it is designed with modularity in mind and can easily be plugged into existing distributed protocols such as proof-of-stake blockchains.
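For background, a sketch of the generic commit-reveal construction that such distributed beacons build on; the thesis's game-theoretic incentive layer, which is its actual contribution, is not reproduced here.

```python
# Generic commit-reveal randomness beacon sketch (NOT the thesis's
# protocol). Each party commits to a random value with a hash, then
# reveals; the output is the XOR of all reveals, which is uniform if
# at least one party's input is uniform.
import hashlib, secrets

def commit(value: bytes, nonce: bytes) -> bytes:
    return hashlib.sha256(nonce + value).digest()

# Commit phase: every participant publishes only the hash.
parties = []
for _ in range(4):
    value, nonce = secrets.token_bytes(32), secrets.token_bytes(16)
    parties.append({"value": value, "nonce": nonce,
                    "commitment": commit(value, nonce)})

# Reveal phase: check each reveal against its commitment, then combine.
output = bytes(32)
for p in parties:
    assert commit(p["value"], p["nonce"]) == p["commitment"], "invalid reveal"
    output = bytes(a ^ b for a, b in zip(output, p["value"]))

print("beacon output:", output.hex())
```

Plain commit-reveal is biasable by a participant who aborts after seeing others' reveals; incentive mechanisms of the kind the thesis proposes target exactly this sort of manipulation.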

Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in Natural Language Processing (NLP). However, although PLMs with huge numbers of parameters can effectively capture rich knowledge from massive training text and benefit downstream tasks at the fine-tuning stage, they still have limitations, such as poor reasoning ability, due to the lack of external knowledge. Research has been dedicated to incorporating knowledge into PLMs to tackle these issues. In this paper, we present a comprehensive review of Knowledge-Enhanced Pre-trained Language Models (KE-PLMs) to provide a clear insight into this thriving field. We introduce appropriate taxonomies for Natural Language Understanding (NLU) and Natural Language Generation (NLG), the two main tasks of NLP. For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG), and rule knowledge. The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods. Finally, we point out some promising future directions for KE-PLMs.

The recent proliferation of knowledge graphs (KGs), coupled with incomplete or partial information in the form of missing relations (links) between entities, has fueled a lot of research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to capture the complex and hidden information inherently implicit in the local neighborhood surrounding a triple. To this end, our paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we encapsulate relation clusters and multi-hop relations in our model. Our empirical study offers insights into the efficacy of our attention-based model, and we show marked performance gains over state-of-the-art methods on all datasets.
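A hedged numpy sketch of the general mechanism, attention over an entity's one-hop neighborhood (the paper's exact parametrization differs): each (neighbor, relation) pair is projected, scored, softmax-normalized, and aggregated into an updated entity embedding.

```python
# Sketch of neighborhood attention for KG embeddings: score each
# (entity, neighbor, relation) edge, normalize with softmax, and update
# the entity embedding as the attention-weighted sum of projected edge
# features. Parameter shapes and the scoring function are assumptions.
import numpy as np

rng = np.random.default_rng(2)
d = 8                                        # embedding dimension

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(entity, neighbors, relations, W, a):
    """entity: (d,); neighbors, relations: (n, d);
    W: (d, 3d) projection; a: (d,) scoring vector."""
    # concatenate [entity, neighbor, relation] features per edge
    feats = np.concatenate(
        [np.tile(entity, (len(neighbors), 1)), neighbors, relations], axis=1)
    c = feats @ W.T                          # (n, d) projected edge features
    alpha = softmax(np.maximum(c @ a, 0))    # attention over the neighborhood
    return alpha @ c                         # weighted aggregation -> new (d,)

entity = rng.normal(size=d)
neighbors = rng.normal(size=(5, d))          # 5 one-hop neighbors
relations = rng.normal(size=(5, d))          # their connecting relations
W = rng.normal(size=(d, 3 * d)) * 0.1
a = rng.normal(size=d)

print("updated embedding:", attend(entity, neighbors, relations, W, a).round(3))
```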
