
Time-Sensitive Networking (TSN) is an enhancement of Ethernet which provides various mechanisms for real-time communication. Time-triggered (TT) traffic represents periodic data streams with strict real-time requirements. Among other features, TSN supports the scheduled transmission of TT streams, i.e., the transmission of their frames by end stations is coordinated in such a way that little or no queuing delay occurs in intermediate nodes. TSN supports multiple priority queues per egress port. The Time-Aware Shaper (TAS) uses so-called gates to explicitly allow and block these queues for transmission on a short periodic timescale. The TAS is utilized to protect scheduled traffic from other traffic and thereby minimize its queuing delay. In this work, we consider scheduling in TSN, which comprises the computation of periodic transmission instants at end stations and of the periodic opening and closing of queue gates. We first give a brief overview of TSN features and standards. We state the TSN scheduling problem and explain common extensions, which also include optimization problems. We review scheduling and optimization methods that have been used in this context. Then, the contributions of currently available research work are surveyed. We extract and compile optimization objectives, solved problem instances, and evaluation results. Research domains are identified, and specific contributions are analyzed. Finally, we discuss potential research directions and open problems.
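
To make the gate mechanism concrete, the following minimal sketch models a per-port gate control list (GCL) in the spirit of IEEE 802.1Qbv; the class and entry names are illustrative, not code from any surveyed work.

```python
# Minimal sketch of a per-port gate control list (GCL) in the spirit of
# IEEE 802.1Qbv. Names are illustrative, not from the surveyed papers.

from dataclasses import dataclass
from typing import List

@dataclass
class GclEntry:
    duration_us: int   # how long this entry stays active
    open_queues: set   # queue indices whose gates are open

class GateControlList:
    def __init__(self, entries: List[GclEntry]):
        self.entries = entries
        self.cycle_us = sum(e.duration_us for e in entries)

    def is_open(self, queue: int, t_us: int) -> bool:
        """Return True if the gate of `queue` is open at time t_us."""
        t = t_us % self.cycle_us          # the gate pattern repeats every cycle
        for e in self.entries:
            if t < e.duration_us:
                return queue in e.open_queues
            t -= e.duration_us
        return False                      # unreachable for a well-formed GCL

# Example: queue 7 (scheduled TT traffic) gets an exclusive 20 us window,
# all other queues share the remaining 80 us of a 100 us cycle.
gcl = GateControlList([
    GclEntry(20, {7}),
    GclEntry(80, {0, 1, 2, 3, 4, 5, 6}),
])
assert gcl.is_open(7, 10) and not gcl.is_open(7, 50)
```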

Related Content

Using blockwise matrix inversion, this article presents methods for inverting large matrices with different memory-handling strategies. An algorithm is presented for inverting a matrix partitioned into a large number of blocks, in which the inversions and multiplications involving the blocks can be carried out with parallel processing. The algorithm implements optimized memory handling and efficient methods for the intermediate multiplications among the partitioned blocks.
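
The classical 2x2 blockwise inversion identity that such algorithms build on can be sketched in a few lines of NumPy; this is a minimal illustration of the identity, not the article's parallel, memory-optimized algorithm.

```python
# Classical 2x2 blockwise inversion via the Schur complement. For
# M = [[A, B], [C, D]] with A and S = D - C A^{-1} B both invertible:
#   M^{-1} = [[A^{-1} + A^{-1} B S^{-1} C A^{-1},  -A^{-1} B S^{-1}],
#             [         -S^{-1} C A^{-1}        ,        S^{-1}   ]]

import numpy as np

def block_inverse(A, B, C, D):
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B                  # Schur complement of A in M
    Sinv = np.linalg.inv(S)
    top_left = Ainv + Ainv @ B @ Sinv @ C @ Ainv
    top_right = -Ainv @ B @ Sinv
    bot_left = -Sinv @ C @ Ainv
    return np.block([[top_left, top_right], [bot_left, Sinv]])

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)) + 6 * np.eye(6)   # well-conditioned test matrix
A, B, C, D = M[:3, :3], M[:3, 3:], M[3:, :3], M[3:, 3:]
assert np.allclose(block_inverse(A, B, C, D), np.linalg.inv(M))
```

Note that once A^{-1} and S^{-1} are known, the four output blocks involve independent multiplications, which is where block-parallel processing enters.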

This correspondence studies wireless-powered over-the-air computation (AirComp) for achieving sustainable wireless data aggregation (WDA) by integrating AirComp and wireless power transfer (WPT) into a joint design. In particular, we consider that a multi-antenna hybrid access point (HAP) employs transmit energy beamforming to charge multiple single-antenna low-power wireless devices (WDs) in the downlink, and the WDs use the harvested energy to simultaneously send their messages to the HAP for AirComp in the uplink. Under this setup, we minimize the computation mean square error (MSE) by jointly optimizing the transmit energy beamforming and the receive AirComp beamforming at the HAP, as well as the transmit power at the WDs, subject to the maximum transmit power constraint at the HAP and the wireless energy harvesting constraints at individual WDs. To tackle the non-convex computation MSE minimization problem, we present an efficient algorithm that uses the alternating optimization technique to find a high-quality converged solution. Numerical results show that the proposed joint WPT-AirComp approach significantly reduces the computation MSE compared to other benchmark schemes.
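
For concreteness, a commonly used uplink AirComp signal model and the resulting computation MSE can be written as follows; the notation is illustrative, and the paper's exact formulation may differ.

```latex
% Standard uplink AirComp model (illustrative notation): the HAP applies a
% receive beamformer a to the superposed transmissions of K single-antenna WDs.
\hat{y} = \mathbf{a}^{H} \Big( \sum_{k=1}^{K} \mathbf{h}_{k} \sqrt{p_{k}}\, s_{k} + \mathbf{n} \Big),
\qquad
\mathrm{MSE} = \mathbb{E}\Big[\big|\hat{y} - \sum_{k=1}^{K} s_{k}\big|^{2}\Big]
             = \sum_{k=1}^{K} \big|\mathbf{a}^{H}\mathbf{h}_{k}\sqrt{p_{k}} - 1\big|^{2}
             + \sigma^{2}\,\lVert\mathbf{a}\rVert^{2},
```

assuming independent, unit-variance messages s_k and noise power sigma^2. Alternating optimization then cycles through the blocks (energy beamformer, receive beamformer a, transmit powers p_k), optimizing one while fixing the others.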

Byzantine fault-tolerant (BFT) systems are able to maintain the availability and integrity of IoT systems in the presence of individual component failures, random data corruption, or malicious attacks. Fault-tolerant systems in general are essential for assuring continuity of service in mission-critical applications; however, their implementation may be challenging and expensive. In this study, IoT systems with Byzantine fault tolerance are considered. Analytical models and solutions are presented, along with a detailed analysis for the evaluation of availability. Byzantine fault tolerance is particularly important for blockchain mechanisms, and in turn for IoT, since it can provide a secure, reliable, and decentralized infrastructure through which IoT devices can communicate and transact with each other. The proposed model is based on continuous-time Markov chains and analyses the availability of Byzantine fault-tolerant systems. While the availability model is a continuous-time Markov chain in which the breakdown and repair times follow exponential distributions, the number of Byzantine nodes in the studied network follows various distributions. The numerical results presented report availability as a function of the number of participants and the relative number of honest actors in the system. It can be concluded from the model that there is a non-linear relationship between the number of servers and network availability, i.e., availability is inversely related to the number of nodes in the system. This relationship is further strengthened as the ratio of breakdown rate to repair rate increases.
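
A minimal sketch of such an availability computation is given below, under simplifying assumptions that are illustrative rather than the paper's exact model: a birth-death chain in which each working node fails at rate lam, a single repair facility restores nodes at rate mu, and the system counts as available while at most f = floor((n - 1) / 3) nodes are faulty (the usual BFT threshold n >= 3f + 1).

```python
# Birth-death CTMC availability sketch for an n-node BFT cluster.
# State i = number of failed nodes; failure rate from state i is (n - i) * lam,
# repair rate is mu. Assumptions are illustrative, not the paper's model.

from math import prod

def bft_availability(n: int, lam: float, mu: float) -> float:
    f = (n - 1) // 3                      # tolerated Byzantine faults
    # Unnormalized steady-state weights: pi_i ∝ prod_{j<i} (n - j) * lam / mu
    weights = [prod((n - j) * lam / mu for j in range(i)) for i in range(n + 1)]
    return sum(weights[: f + 1]) / sum(weights)   # P(at most f nodes down)

# Availability as a function of cluster size; it degrades as lam/mu grows.
print(bft_availability(4, lam=0.01, mu=1.0))
print(bft_availability(7, lam=0.01, mu=1.0))
```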

We consider a general queueing model with price-sensitive customers in which the service provider seeks to balance two objectives, maximizing the average revenue rate and minimizing the average queue length. Customers arrive according to a Poisson process, observe an offered price, and decide to join the queue if their valuation exceeds the price. The queue is operated first-in first-out, and the service times are exponential. Our model represents applications in areas like make-to-order manufacturing, cloud computing, and food delivery. The optimal solution for our model is dynamic; the price changes as the state of the system changes. However, such dynamic pricing policies may be undesirable for a variety of reasons. In this work, we provide performance guarantees for a simple and natural class of static pricing policies which charge a fixed price up to a certain occupancy threshold and then allow no more customers into the system. We provide a series of results showing that such static policies can simultaneously guarantee a constant fraction of the optimal revenue with at most a constant factor increase in expected queue length. For instance, our policy for the M/M/1 setting allows bi-criteria approximations of $(0.5, 1), (0.66, 1.16), (0.75, 1.54)$ and $(0.8, 2)$ for the revenue and queue length, respectively. We also provide guarantees for settings with multiple customer classes and multiple servers, as well as the expected sojourn time objective.
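
A static threshold policy is easy to evaluate in closed form. The sketch below computes the revenue rate and mean number in system for a fixed price p and occupancy threshold k (i.e., an M/M/1/k queue); the linear demand curve is a hypothetical stand-in, since the paper's guarantees hold for general valuation distributions.

```python
# Evaluate a static threshold pricing policy: charge price p, admit
# customers only while fewer than k are present (M/M/1/k dynamics).
# The demand curve lam(p) below is an illustrative assumption.

def static_policy_metrics(p: float, k: int, mu: float):
    lam = max(0.0, 1.0 - p) * 2.0        # hypothetical arrival rate at price p
    rho = lam / mu
    # Stationary distribution of M/M/1/k (truncated geometric).
    weights = [rho ** i for i in range(k + 1)]
    total = sum(weights)
    pi = [w / total for w in weights]
    blocked = pi[k]                      # arrivals turned away at the threshold
    revenue_rate = p * lam * (1.0 - blocked)
    mean_number = sum(i * q for i, q in enumerate(pi))
    return revenue_rate, mean_number

rev, L = static_policy_metrics(p=0.5, k=5, mu=1.0)
print(f"revenue rate {rev:.3f}, mean number in system {L:.3f}")
```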

The timely transportation of goods to customers is an essential component of economic activities. However, the heavy-duty diesel trucks that deliver goods contribute significantly to greenhouse gas emissions within many large metropolitan areas, including Los Angeles, New York, and San Francisco. To facilitate freight electrification, this paper proposes joint routing and charging (JRC) scheduling for electric trucks. The objective of the associated optimization problem is to minimize the cost of transportation, charging, and tardiness. Because of the large number of possible combinations of road segments, electric trucks can likewise take a large number of combinations of charging decisions and charging durations. The resulting mixed-integer linear programming (MILP) problem is extremely challenging because of its combinatorial complexity, even in the deterministic case. Therefore, a Level-Based Surrogate Lagrangian Relaxation method is employed to decompose the overall problem into truck subproblems that are significantly less complex and to coordinate them. In the coordination step, each truck subproblem is solved independently of the others based on charging cost, tardiness, and the values of the Lagrangian multipliers. In addition to serving as a means of guiding and coordinating trucks, the multipliers can also serve as a basis for transparent and explainable decision-making by the trucks. Testing results demonstrate that even small instances cannot be solved by the off-the-shelf solver CPLEX after several days of computation. The new method, on the other hand, obtains near-optimal solutions within a few minutes for small cases, and within 30 minutes for large ones. Furthermore, the results demonstrate that as battery capacity increases, the total cost decreases significantly; moreover, as charging power increases, the number of trucks required decreases as well.
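
The decomposition-and-coordination idea can be illustrated with a toy price-update loop: relax the coupling constraints (e.g., charger capacities) with multipliers, let each truck solve its own subproblem against the current prices, and update the prices with a subgradient step. The costs, capacity, and step-size rule below are simplified placeholders, not the paper's level-based method.

```python
# Toy Lagrangian-relaxation coordination: each truck picks one charging
# slot; the coupling constraint "at most cap trucks per slot" is relaxed
# with multipliers (prices). All numbers here are illustrative.

def coordinate(costs, cap=1, iterations=200):
    """costs[t][s]: truck t's own cost (travel + charging + tardiness proxy)
    for charging in slot s."""
    n_slots = len(costs[0])
    prices = [0.0] * n_slots
    for it in range(1, iterations + 1):
        # Truck subproblems decouple: each truck independently minimizes
        # its own cost plus the current price of its chosen slot.
        choice = [min(range(n_slots), key=lambda s: c[s] + prices[s])
                  for c in costs]
        # Subgradient of the relaxed constraints: slot usage minus capacity.
        usage = [sum(1 for ch in choice if ch == s) for s in range(n_slots)]
        step = 1.0 / it                   # diminishing step size
        prices = [max(0.0, p + step * (u - cap))
                  for p, u in zip(prices, usage)]
    return choice, prices

# Three trucks, three slots; all trucks prefer slot 0, and rising prices
# spread them out. A feasibility-restoration step would follow in practice.
costs = [[1.0, 2.0, 3.0], [1.2, 1.9, 2.5], [1.1, 2.2, 2.4]]
print(coordinate(costs, cap=1))
```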

Classic algorithms and machine learning systems like neural networks are both abundant in everyday life. While classic computer science algorithms are suitable for precise execution of exactly defined tasks such as finding the shortest path in a large graph, neural networks allow learning from data to predict the most likely answer in more complex tasks such as image classification, which cannot be reduced to an exact algorithm. To get the best of both worlds, this thesis explores combining both concepts leading to more robust, better performing, more interpretable, more computationally efficient, and more data efficient architectures. The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm. When integrating an algorithm into a neural architecture, it is important that the algorithm is differentiable such that the architecture can be trained end-to-end and gradients can be propagated back through the algorithm in a meaningful way. To make algorithms differentiable, this thesis proposes a general method for continuously relaxing algorithms by perturbing variables and approximating the expectation value in closed form, i.e., without sampling. In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable renderers, and differentiable logic gate networks. Finally, this thesis presents alternative training strategies for learning with algorithms.
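
As a concrete instance of such a relaxation, consider the hard comparator at the core of sorting networks: perturbing the comparison with logistic noise and taking the expectation in closed form replaces the hard step with a sigmoid, yielding differentiable soft min/max operators. The sketch below illustrates this standard construction.

```python
# Continuous relaxation of a hard compare-and-swap, the building block of
# differentiable sorting networks. Under logistic perturbations, the
# expectation of the hard step 1[a <= b] has the closed form sigmoid((b-a)/tau).

import math

def soft_compare_swap(a: float, b: float, tau: float = 1.0):
    p = 1.0 / (1.0 + math.exp(-(b - a) / tau))   # E[1{a <= b}] in closed form
    soft_min = p * a + (1.0 - p) * b
    soft_max = p * b + (1.0 - p) * a
    return soft_min, soft_max                     # -> (min, max) as tau -> 0

print(soft_compare_swap(3.0, 1.0, tau=0.1))       # approx (1.0, 3.0)
```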

Deep learning techniques have led to remarkable breakthroughs in the field of generic object detection and have spawned many scene-understanding tasks in recent years. The scene graph has been a focus of research because of its powerful semantic representation and its applications to scene understanding. Scene Graph Generation (SGG) refers to the task of automatically mapping an image into a semantic structural scene graph, which requires correctly labeling the detected objects and their relationships. Although this is a challenging task, the community has proposed many SGG approaches and achieved good results. In this paper, we provide a comprehensive survey of recent achievements in this field brought about by deep learning techniques. We review 138 representative works that cover different input modalities, and we systematically summarize existing methods of image-based SGG from the perspective of feature extraction and fusion. We attempt to connect and systematize the existing visual relationship detection methods, and to summarize and interpret the mechanisms and strategies of SGG in a comprehensive way. Finally, we close this survey with a deep discussion of current problems and future research directions. This survey will help readers develop a better understanding of the current research status and ideas.
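
The output structure of SGG can be illustrated with a minimal, hypothetical data model: labeled and localized objects as nodes, and subject-predicate-object triples as edges.

```python
# Minimal illustration of the structure an SGG model outputs (names are
# hypothetical): detected objects plus subject-predicate-object triples.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedObject:
    label: str
    box: Tuple[float, float, float, float]   # (x1, y1, x2, y2)

@dataclass
class Relationship:
    subject: int       # index into the objects list
    predicate: str
    object: int

@dataclass
class SceneGraph:
    objects: List[DetectedObject]
    relationships: List[Relationship]

g = SceneGraph(
    objects=[DetectedObject("person", (10, 20, 110, 220)),
             DetectedObject("horse", (90, 80, 300, 260))],
    relationships=[Relationship(0, "riding", 1)],
)
```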

Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems and to let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predictions of performance metrics or some other criterion of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies based on their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and we envision that applying ML to computer architecture and systems will thrive in the community.

Machine learning is completely changing trends in the fashion industry. From big to small, every brand is using machine learning techniques to improve its revenue, grow its customer base, and stay ahead of the trend. People are into fashion, and they want to know what looks best and how they can improve their style and elevate their personality. Deep learning technology infused with computer vision techniques makes this possible: utilizing brain-inspired deep networks, engaging with neuroaesthetics, training GANs, working with unstructured data, and incorporating the transformer architecture are just some of the highlights that touch the fashion domain. It is all about designing a system that can provide information on the fashion aspects that matter for the ever-growing demand. Personalization is a big factor that impacts the spending choices of customers. The survey also presents remarkable approaches to achieving this, delving deep into how visual data can be interpreted and leveraged in different models and approaches. Aesthetics play a vital role in clothing recommendation, as users' decisions depend largely on whether the clothing is in line with their aesthetics, yet conventional image features cannot portray this directly. To that end, the survey also highlights remarkable models, such as the tensor factorization model and the conditional random field model, among others, that cater to the need to acknowledge aesthetics as an important factor in apparel recommendation. These AI-inspired deep models can pinpoint exactly which styles resonate best with customers and can provide an understanding of how new designs will settle in with the community. With AI and machine learning, businesses can stay ahead of fashion trends.

Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
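
A schematic federated-averaging round illustrates the setting: clients train locally on data that never leaves the device, and the server only aggregates their model updates weighted by local dataset size. The sketch below makes simplifying assumptions (full client participation, least-squares local objectives) and is an illustration of the setting, not an algorithm prescribed by this paper.

```python
# Schematic FedAvg-style training: the server never sees raw client data,
# only locally trained model parameters, which it averages by dataset size.

import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few local gradient steps on a least-squares loss (stand-in for
    each client's private training)."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    updates, sizes = [], []
    for X, y in clients:                  # in practice: a sampled subset
        updates.append(local_update(w_global, X, y))
        sizes.append(len(y))
    total = sum(sizes)
    return sum(n / total * w for n, w in zip(sizes, updates))

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):                        # three clients with distinct data
    X = rng.standard_normal((50, 2))
    clients.append((X, X @ w_true + 0.1 * rng.standard_normal(50)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)                                  # approaches w_true
```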
