
While the success of edge and fog computing has grown with the proliferation of Internet of Things (IoT) solutions, this novel computing paradigm, which moves compute resources closer to the source of data and services, must address many challenges, such as reducing communication overhead to and from datacenters, the latency to compute and receive results, and energy consumption at mobile and IoT devices. Fog-to-fog (f2f) cooperation has recently been proposed to increase the computation capacity at the network edge through cooperation across multiple stakeholders. In this paper we adopt an analytical approach to studying the f2f cooperation paradigm and highlight its benefits in comparison with traditional three-tier fog computing paradigms. We use a Continuous Time Markov Chain (CTMC) model for the N cooperating f2f nodes and cast cooperation as an optimization problem, which we solve using the proposed model.
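The abstract does not spell out the CTMC, so the following is a minimal, purely illustrative sketch: treat the pooled capacity of the N cooperating fog nodes as an M/M/N/K birth-death chain, build its generator matrix, and solve for the stationary distribution. The function name, rates, and admission bound below are assumptions, not the paper's model.

```python
# Minimal sketch (not the paper's exact model): the pooled capacity of N
# cooperating fog nodes is treated as an M/M/N/K queue. States count tasks
# in the system; we build the CTMC generator Q and solve pi Q = 0.
import numpy as np

def f2f_stationary_distribution(N=4, K=20, lam=10.0, mu=3.0):
    """N cooperating fog nodes, each with service rate mu; task arrivals at
    rate lam; at most K tasks admitted. Returns the stationary distribution."""
    Q = np.zeros((K + 1, K + 1))
    for k in range(K + 1):
        if k < K:                      # arrival: k -> k+1
            Q[k, k + 1] = lam
        if k > 0:                      # departure: k -> k-1, min(k, N) busy nodes
            Q[k, k - 1] = min(k, N) * mu
        Q[k, k] = -Q[k].sum()          # diagonal makes each row sum to zero
    # Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation.
    A = np.vstack([Q.T[:-1], np.ones(K + 1)])
    b = np.zeros(K + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = f2f_stationary_distribution()
print("blocking probability:", pi[-1])                      # fraction of rejected tasks
print("mean tasks in system:", (np.arange(len(pi)) * pi).sum())
```

A cost that trades the blocking probability against per-node energy could then be minimized over N or the admission bound, mirroring the optimization cast in the abstract.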

Related Content

The former CMS Run 2 High Level Trigger (HLT) farm is one of the largest contributors to CMS compute resources, providing about 25k job slots for offline computing. This CPU farm was initially employed as an opportunistic resource, exploited during inter-fill periods in LHC Run 2. Since then, it has become a nearly transparent extension of the CMS capacity at CERN, being located on-site at the LHC interaction point 5 (P5), where the CMS detector is installed. This resource has been configured to support the execution of critical CMS tasks, such as prompt detector data reconstruction. It can therefore be used in combination with the dedicated Tier 0 capacity at CERN, in order to process and absorb peaks in the stream of data coming from the CMS detector. The initial configuration for this resource, based on statically configured VMs, provided the required level of functionality. However, regular operations of this cluster revealed certain limitations compared to the resource provisioning and use model employed at WLCG sites. A new configuration, based on a vacuum-like model, has been implemented for this resource in order to address the detected shortcomings. This paper reports on this redeployment work on the permanent cloud for enhanced support of CMS offline computing, comparing the former and new models' respective functionalities, along with the commissioning effort for the new setup.

The total energy cost of computing activities is steadily increasing, and projections indicate that it will be one of the dominant global energy consumers in the coming decades. However, perhaps due to its relative youth, the video game sector has not yet developed the same level of environmental awareness as other computing technologies, despite the estimated three billion regular video game players in the world. This work evaluates the energy consumption of the most widely used industry-scale video game engines: Unity and Unreal Engine. Specifically, our work uses three scenarios representing relevant aspects of video games (Physics, Static Meshes, and Dynamic Meshes) to compare the energy consumption of the engines. The aim is to determine the influence of using each of the two engines on energy consumption. Our research has confirmed significant differences in the energy consumption of video game engines: 351% in Physics in favor of Unity, 17% in Static Meshes in favor of Unity, and 26% in Dynamic Meshes in favor of Unreal Engine. These results represent an opportunity for worldwide potential savings of at least 51 TWh per year, equivalent to the annual consumption of nearly 13 million European households, which might encourage a new branch of research on energy-efficient video game engines.
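As a quick sanity check of the household-equivalence claim (using an assumed figure of roughly 3,900 kWh of annual electricity use per European household, which is not taken from the paper):

```python
# Back-of-the-envelope check of the household-equivalence claim (the ~3,900 kWh
# per-household figure is an assumption, not taken from the paper).
savings_twh = 51                       # claimed potential savings per year
kwh_per_household = 3_900              # assumed annual EU household electricity use
households = savings_twh * 1e9 / kwh_per_household    # 1 TWh = 1e9 kWh
print(f"{households / 1e6:.1f} million households")   # ~13.1 million
```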

Minimum Weight Cycle (MWC) is the problem of finding a simple cycle of minimum weight in a graph $G=(V,E)$. This is a fundamental graph problem with classical sequential algorithms that run in $\tilde{O}(n^3)$ and $\tilde{O}(mn)$ time, where $n=|V|$ and $m=|E|$. In recent years this problem has received significant attention in the context of fine-grained sequential complexity as well as in the design of faster sequential approximation algorithms, though not much is known in the distributed CONGEST model. We present sublinear-round approximation algorithms for computing MWC in directed graphs and in weighted graphs. Our algorithms use a variety of techniques in non-trivial ways: our approximate directed unweighted MWC algorithm efficiently computes BFS from all vertices, restricted to certain implicitly computed neighborhoods, in sublinear rounds, and our weighted approximation algorithms use unweighted MWC algorithms on scaled graphs combined with a fast and streamlined method for computing multiple-source approximate SSSP. We also present $\tilde{\Omega}(\sqrt{n})$ lower bounds for arbitrary constant-factor approximation of MWC in directed graphs and in undirected weighted graphs.
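For context, here is a sketch of the classical $\tilde{O}(mn)$ sequential baseline mentioned above, not the paper's distributed CONGEST algorithms: the minimum-weight cycle through a vertex $v$ is the shortest path from $v$ to some $u$ plus the weight of an edge $(u, v)$ closing the cycle, so running Dijkstra from every vertex suffices.

```python
# Classical sequential O(mn log n) baseline for directed MWC (not the paper's
# distributed algorithm): for each vertex v, run Dijkstra from v and close a
# cycle with any edge (u, v) returning to v.
import heapq
from math import inf

def dijkstra(adj, src):
    dist = {v: inf for v in adj}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def min_weight_cycle(adj):
    """adj: {v: [(u, w), ...]} directed graph with non-negative edge weights."""
    best = inf
    for v in adj:
        dist = dijkstra(adj, v)                 # shortest paths out of v
        for u in adj:
            for x, w in adj[u]:
                if x == v and dist[u] < inf:    # edge (u, v) closes a cycle
                    best = min(best, dist[u] + w)
    return best

g = {1: [(2, 1)], 2: [(3, 2)], 3: [(1, 4), (2, 1)]}
print(min_weight_cycle(g))   # cycle 2 -> 3 -> 2 has weight 3
```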

In this paper, we generalize the Jacobi eigenvalue algorithm to compute all eigenvalues and eigenvectors of a dual quaternion Hermitian matrix and show its convergence. We also propose a three-step Jacobi eigenvalue algorithm to compute the eigenvalues when a dual quaternion Hermitian matrix has two eigenvalues with identical standard parts but different dual parts, and we prove its convergence. Numerical experiments are presented to illustrate the efficiency and stability of the proposed Jacobi eigenvalue algorithm compared to the power method and the Rayleigh quotient iteration method.
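For reference, below is a sketch of the classical Jacobi eigenvalue iteration on a real symmetric matrix, i.e., the algorithm being generalized here; it is not the dual quaternion Hermitian version from the paper.

```python
# Sketch of the classical Jacobi eigenvalue iteration on a real symmetric
# matrix: repeatedly apply Givens rotations that zero out off-diagonal entries.
import numpy as np

def jacobi_eigh(A, tol=1e-12, max_sweeps=100):
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)                                      # accumulated eigenvectors
    for _ in range(max_sweeps):
        off = np.linalg.norm(A - np.diag(np.diag(A)))  # off-diagonal norm
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                # Rotation angle chosen so that the new A[p, q] becomes zero.
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                R = np.eye(n)
                R[p, p] = R[q, q] = c
                R[p, q], R[q, p] = s, -s
                A = R.T @ A @ R
                V = V @ R
    return np.diag(A), V

M = np.array([[4.0, 1.0, 2.0], [1.0, 3.0, 0.5], [2.0, 0.5, 5.0]])
evals, evecs = jacobi_eigh(M)
print(np.sort(evals))
print(np.sort(np.linalg.eigvalsh(M)))   # agrees with the LAPACK reference
```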

While the increasing number of Vantage Points (VPs) in RIPE RIS and RouteViews improves our understanding of the Internet, the quadratically increasing volume of collected data poses a challenge to the scientific and operational use of the data. The design and implementation of BGP and BGP data collection systems lead to data archives with enormous redundancy, as there is substantial overlap in announced routes across many different VPs. Researchers thus often resort to arbitrary sampling of the data, which we demonstrate comes at a cost to the accuracy and coverage of previous works. The continued growth of the Internet, and of these collection systems, exacerbates this cost. The community needs a better approach to managing and using these data archives. We propose MVP, a system that scores VPs according to their level of redundancy with other VPs, allowing more informed sampling of these data archives. Our challenge is that the degree of redundancy between two updates depends on how we define redundancy, which in turn depends on the analysis objective. Our key contribution is a general framework and associated algorithms to assess redundancy between VP observations. We quantify the benefit of our approach for four canonical BGP routing analyses: AS relationship inference, AS rank computation, hijack detection, and routing detour detection. MVP improves the coverage or accuracy (or both) of all these analyses while processing the same volume of data.
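MVP's redundancy scoring is objective-dependent and more refined than any single formula, but as a rough illustration of the idea one could measure pairwise redundancy between VPs as the Jaccard overlap of the (prefix, AS-path) pairs they export, and select VPs greedily for coverage; the data layout and helper names below are hypothetical.

```python
# Illustration only: a simple redundancy notion between vantage points is the
# Jaccard overlap of the (prefix, AS-path) pairs they announce. MVP's actual
# scoring framework is objective-dependent and more refined than this sketch.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def greedy_vp_selection(vp_routes: dict[str, set], k: int) -> list[str]:
    """Pick k VPs that each add the most not-yet-covered routes (a coverage proxy)."""
    chosen, covered = [], set()
    for _ in range(min(k, len(vp_routes))):
        best = max((vp for vp in vp_routes if vp not in chosen),
                   key=lambda vp: len(vp_routes[vp] - covered))
        chosen.append(best)
        covered |= vp_routes[best]
    return chosen

vps = {
    "rrc00-peer1": {("10.0.0.0/8", "64500 64501"), ("192.0.2.0/24", "64500 64502")},
    "rrc00-peer2": {("10.0.0.0/8", "64500 64501")},
    "rv2-peer7":   {("198.51.100.0/24", "64510 64501")},
}
print(jaccard(vps["rrc00-peer1"], vps["rrc00-peer2"]))  # 0.5 -> highly redundant
print(greedy_vp_selection(vps, 2))                      # covers all routes with 2 VPs
```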

We present several results in the CONGEST model on the round complexity of Replacement Paths (RPaths), Minimum Weight Cycle (MWC), and All Nodes Shortest Cycles (ANSC). We study these fundamental problems in both directed and undirected graphs, both weighted and unweighted. Many of our results are optimal to within a polylog factor: for an $n$-node graph $G$ we establish near-linear lower and upper bounds for computing RPaths if $G$ is directed and weighted, and for computing MWC and ANSC if $G$ is weighted, directed or undirected; near-$\sqrt{n}$ lower and upper bounds for undirected weighted RPaths; and a $\Theta(D)$ bound for undirected unweighted RPaths. We also present lower and upper bounds for approximation versions of these problems, notably a $(2-(1/g))$-approximation algorithm for undirected unweighted MWC that runs in $\tilde{O}(\sqrt{n}+D)$ rounds, improving on the previous best bound of $\tilde{O}(\sqrt{ng}+D)$ rounds, where $g$ is the MWC length. We also present a $(1+\epsilon)$-approximation algorithm for directed weighted RPaths, which beats the linear lower bound for exact RPaths.

Selecting the best regularization parameter in inverse problems is a classical and yet challenging problem. Recently, data-driven approaches have become popular to tackle this challenge. These approaches are appealing since they require less a priori knowledge, but their theoretical analysis is limited. In this paper, we propose and study a statistical machine learning approach based on empirical risk minimization. Our main contribution is a theoretical analysis showing that, provided with enough data, this approach can reach sharp rates while being essentially adaptive to the noise and smoothness of the problem. Numerical simulations corroborate and illustrate the theoretical findings. Our results are a step towards theoretically grounding data-driven approaches to inverse problems.
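A minimal sketch of the general idea, not the paper's exact estimator or rates: given supervised pairs of ground-truth signals and noisy observations, choose the Tikhonov regularization parameter that minimizes the empirical reconstruction risk over a grid.

```python
# Minimal sketch of empirical-risk-minimization over the regularization
# parameter (illustrative; not the paper's exact estimator or theoretical rates).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50
A = rng.normal(size=(n, d)) / np.sqrt(n)           # forward operator
x_true = rng.normal(size=d)
y = A @ x_true + 0.1 * rng.normal(size=n)          # noisy observation to invert

def tikhonov(A, y, lam):
    p = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)

# Supervised training pairs (x_i, y_i) used to estimate the risk of each lambda.
train = []
for _ in range(20):
    x_i = rng.normal(size=d)
    train.append((x_i, A @ x_i + 0.1 * rng.normal(size=n)))

def empirical_risk(lam):
    return np.mean([np.sum((tikhonov(A, y_i, lam) - x_i) ** 2) for x_i, y_i in train])

grid = np.logspace(-4, 1, 30)
lam_star = grid[np.argmin([empirical_risk(lam) for lam in grid])]
print("data-driven lambda:", lam_star)
print("reconstruction error:", np.sum((tikhonov(A, y, lam_star) - x_true) ** 2))
```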

This paper examines the complex nature of cyber attacks through an analysis of the LastPass breach. It argues for the integration of human-centric considerations into cybersecurity measures, focusing on mitigating factors such as goal-directed behavior, cognitive overload, human biases (e.g., optimism, anchoring), and risky behaviors. Findings from an analysis of this breach offer support to the perspective that addressing both the human and technical dimensions of cyber defense can significantly enhance the resilience of cyber systems against complex threats. This means that maintaining a balanced approach, simplifying user interactions, making users aware of biases, and discouraging risky practices are all essential for preventing cyber incidents.

The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models.

While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by huge margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
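A sketch in the spirit of the metric studied here: compare channel-normalized VGG activations layer by layer and sum the averaged squared differences. The full metric additionally calibrates per-channel weights on the human-judgment dataset, which this sketch omits.

```python
# Sketch of a deep-feature perceptual distance: channel-normalized VGG16
# activations compared layer by layer (omits the learned per-channel weights).
import torch
import torchvision

vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()
LAYERS = {3, 8, 15, 22, 29}          # relu1_2 ... relu5_3 indices in vgg16.features

def deep_features(x):
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            # Unit-normalize along the channel dimension before comparing.
            feats.append(x / (x.norm(dim=1, keepdim=True) + 1e-10))
    return feats

@torch.no_grad()
def perceptual_distance(img0, img1):
    """img0, img1: (1, 3, H, W) float tensors (ImageNet-normalized in practice)."""
    d = 0.0
    for f0, f1 in zip(deep_features(img0), deep_features(img1)):
        d += ((f0 - f1) ** 2).sum(dim=1).mean()   # squared diff, spatial average
    return d.item()

x0 = torch.rand(1, 3, 224, 224)
x1 = x0 + 0.05 * torch.randn_like(x0)
print(perceptual_distance(x0, x0))   # 0.0 for identical images
print(perceptual_distance(x0, x1))   # grows with the perceptual difference
```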
