
With the rapid acceleration of transportation electrification, public charging stations are becoming vital infrastructure in a smart sustainable city to provide on-demand electric vehicle (EV) charging services. As more consumers seek to utilize public charging services, the pricing and scheduling of such services will become vital, complementary tools to mediate competition for charging resources. However, determining the right prices to post is difficult due to the online nature of EV arrivals. This paper studies a joint pricing and scheduling problem for the operator of EV charging networks with limited charging capacity and time-varying energy cost. Upon receiving a charging request, the operator offers a price, and the EV decides whether to accept the offer based on its own value and the posted price. If the EV accepts the offer, the operator then schedules the real-time charging process to satisfy the charging request. We propose an online pricing algorithm that determines the posted price and the EV charging schedule so as to maximize social welfare, i.e., the total value of EVs minus the energy cost of the charging stations. Theoretically, we prove that the devised algorithm achieves an order-optimal competitive ratio under the competitive analysis framework. Practically, we show that our algorithm empirically outperforms other benchmark algorithms in experiments using real EV charging data.
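
To make the pricing idea concrete, the sketch below shows one standard ingredient of online posted-price schemes: a price that grows exponentially with the station's current utilization, so that scarce capacity is sold only to high-value requests. The class name, the pricing form, and the parameters `p_min`/`p_max` are illustrative assumptions, not the paper's exact algorithm.

```python
import math

class OnlinePricer:
    """Posted-price charging: price rises with utilization (illustrative sketch)."""

    def __init__(self, capacity_kw, p_min, p_max):
        self.capacity_kw = capacity_kw      # total charging capacity
        self.load_kw = 0.0                  # currently committed load
        self.p_min = p_min                  # price per kW when the station is idle
        self.p_max = p_max                  # price per kW when the station is full

    def posted_price(self):
        # Exponential-in-utilization pricing, a standard device in
        # online-knapsack-style competitive analyses.
        u = self.load_kw / self.capacity_kw
        return self.p_min * (self.p_max / self.p_min) ** u

    def handle_request(self, demand_kw, value_per_kw):
        """The EV accepts iff its private value covers the posted price."""
        price = self.posted_price()
        if value_per_kw >= price and self.load_kw + demand_kw <= self.capacity_kw:
            self.load_kw += demand_kw       # admit and schedule the request
            return price                    # price per kW charged to this EV
        return None                         # offer rejected or capacity exceeded


pricer = OnlinePricer(capacity_kw=100.0, p_min=0.1, p_max=1.0)
for demand, value in [(20, 0.5), (30, 0.15), (40, 0.9)]:
    print(pricer.handle_request(demand, value))
```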

Related content

Networking: IFIP International Conferences on Networking. Explanation: International conference on networking. Publisher: IFIP. SIT:

In crowdsourcing, a group of ordinary people is asked to execute tasks and in return receives some incentive. This article studies a crowdsourcing scenario with multiple heterogeneous tasks and multiple IoT devices (as task executors), modeled as a two-tiered process. In the first tier of the proposed model, it is assumed that a substantial number of IoT devices are not aware of the hiring process and are made aware through their social connections. Each IoT device reports a cost (its private value) that it will charge in return for its services. The participating IoT devices are rational and strategic. The goal of the first tier is to select a subset of IoT devices as initial notifiers so as to maximize the number of IoT devices notified, under the constraint that the total payment made to the notifiers stays within the budget. For this purpose, an incentive-compatible mechanism is proposed. In the second tier, a set of quality IoT devices is determined by utilizing the idea of single-peaked preferences. The next objective of the second tier is to hire quality IoT devices for the floated tasks. For this purpose, each quality IoT device reports a private valuation along with its favorite bundle of tasks. In the second tier, it is assumed that the valuations of the IoT devices are private and satisfy the gross-substitutes criterion. For the second tier, truthful mechanisms are designed independently for determining the quality IoT devices and for hiring them and deciding their payments, respectively. Theoretical analysis is carried out for the two tiers independently, and it is shown that the proposed mechanisms are computationally efficient, truthful, correct, budget feasible, and individually rational. Further, a probabilistic analysis is carried out to estimate the expected number of IoT devices that are notified about the task-execution process.
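
As a rough illustration of what a budget-feasible, incentive-compatible selection rule can look like, the sketch below uses a proportional-share greedy rule over reported costs. The function name, tie-breaking, and payment rule are assumptions; the article's tier-1 mechanism additionally weighs how many devices each notifier can reach through its social connections.

```python
def budget_feasible_selection(bids, budget):
    """
    Greedy, budget-feasible selection sketch (proportional-share rule):
    sort reported costs ascending and keep device i while
    cost_i <= budget / (i + 1).  This is one standard template for
    truthful budget-feasible mechanisms.
    """
    order = sorted(bids.items(), key=lambda kv: kv[1])   # (device, reported cost)
    selected = []
    for i, (device, cost) in enumerate(order):
        if cost <= budget / (i + 1):
            selected.append(device)
        else:
            break
    # Threshold payment: every winner receives the same per-winner share,
    # which is at least its reported cost (individual rationality).
    payment = budget / len(selected) if selected else 0.0
    return selected, payment


winners, pay = budget_feasible_selection(
    {"d1": 2.0, "d2": 5.0, "d3": 1.0, "d4": 9.0}, budget=10.0)
print(winners, pay)   # ['d3', 'd1'] 5.0
```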

Thanks to the rapid growth in wearable technologies and recent advances in machine learning and signal processing, monitoring complex human contexts has become feasible, paving the way for human-in-the-loop IoT systems that naturally evolve to adapt to the human and environment state autonomously. Nevertheless, a central challenge in designing many of these IoT systems arises from the requirement to infer the human mental state, such as intention, stress, cognitive load, or learning ability. While different human contexts can be inferred from the fusion of different sensor modalities that correlate with a particular mental state, the human brain provides a richer sensor modality that gives more insight into the required human context. This paper proposes ERUDITE, a human-in-the-loop IoT system for learning environments that exploits recent wearable neurotechnology to decode brain signals. Through insights from concept learning theory, ERUDITE can infer the human state of learning and understand when human learning increases or declines. By quantifying human learning as an input sensory signal, ERUDITE can provide adequate personalized feedback to humans in a learning environment to enhance their learning experience. ERUDITE was evaluated across $15$ participants, and the results show that by using brain signals as a sensor modality to infer the human learning state and providing personalized adaptation to the learning environment, the participants' learning performance increased on average by $26\%$. Furthermore, we show that ERUDITE can be deployed on an edge-based prototype to evaluate its practicality and scalability.
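
A minimal sketch of the closed loop such a system implements is shown below: an entirely illustrative decoder maps EEG band-power features to a learning score, and a simple rule adapts the difficulty of the learning material. The weights, thresholds, and function names are assumptions, not ERUDITE's actual decoder.

```python
import math

def infer_learning_state(eeg_features):
    """
    Toy stand-in for the learning-state decoder: a linear score over EEG
    band-power features followed by a logistic squashing.  ERUDITE's actual
    decoder is grounded in concept-learning theory; the weights here are
    purely illustrative.
    """
    weights = {"theta": 0.8, "alpha": -0.3, "beta": 0.5}
    score = sum(weights[k] * eeg_features.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-score))       # in (0, 1): higher = learning


def adapt_environment(learning_score, difficulty):
    """Human-in-the-loop rule: ease off when learning declines, push on when it rises."""
    if learning_score < 0.4:
        return max(1, difficulty - 1)           # slow down / revisit material
    if learning_score > 0.7:
        return difficulty + 1                   # advance to harder content
    return difficulty                           # keep the current level


difficulty = 3
for features in [{"theta": 1.2, "alpha": 0.4, "beta": 0.9},
                 {"theta": 0.1, "alpha": 1.5, "beta": 0.2}]:
    score = infer_learning_state(features)
    difficulty = adapt_environment(score, difficulty)
    print(round(score, 2), difficulty)
```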

This report documents the programme and the outcomes of Dagstuhl Seminar 22382 "Machine Learning for Science: Bridging Data-Driven and Mechanistic Modelling". Today's scientific challenges are characterised by complexity. Interconnected natural, technological, and human systems are influenced by forces acting across time- and spatial-scales, resulting in complex interactions and emergent behaviours. Understanding these phenomena -- and leveraging scientific advances to deliver innovative solutions to improve society's health, wealth, and well-being -- requires new ways of analysing complex systems. The transformative potential of AI stems from its widespread applicability across disciplines, and will only be achieved through integration across research domains. AI for science is a rendezvous point. It brings together expertise from AI and application domains; combines modelling knowledge with engineering know-how; and relies on collaboration across disciplines and between humans and machines. Alongside technical advances, the next wave of progress in the field will come from building a community of machine learning researchers, domain experts, citizen scientists, and engineers working together to design and deploy effective AI tools. This report summarises the discussions from the seminar and provides a roadmap to suggest how different communities can collaborate to deliver a new wave of progress in AI and its application for scientific discovery.

Containerization is a virtualization technique that allows one to create and run executables consistently on any infrastructure. Compared to virtual machines, containers are lighter since they do not bundle a (guest) operating system but instead share the host's kernel, and they only include the files, libraries, and dependencies that are required to properly execute a process. In the past few years, multiple container engines (i.e., tools for configuring, executing, and managing containers) have been developed, ranging from "general purpose" engines, mostly employed for Cloud executions, to others built for specific contexts, namely the Internet of Things and High-Performance Computing. Given the importance of this technology for many practitioners and researchers, this paper analyses six state-of-the-art container engines and compares them through a comprehensive study of their characteristics and performance. The results are organized around 10 findings that aim to help readers understand the differences among the technologies and choose the best approach for their needs.
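
A minimal sketch of the kind of measurement such a performance comparison relies on is given below: timing cold container start-up through an engine's CLI. It assumes a Docker-compatible `run --rm` interface; the engine name, image, and number of runs are placeholders, and flags may differ across engines.

```python
import statistics
import subprocess
import time

def container_startup_time(engine, image, runs=5):
    """
    Time how long `<engine> run --rm <image> true` takes end to end.
    Docker-style `run --rm` semantics are assumed; other engines
    (e.g. podman) expose a similar CLI, but options can differ.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([engine, "run", "--rm", image, "true"],
                       check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)


if __name__ == "__main__":
    mean_s, std_s = container_startup_time("docker", "alpine:latest")
    print(f"startup: {mean_s:.3f}s +/- {std_s:.3f}s")
```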

Ridesharing has recently become a promising travel mode due to its economic and social benefits. As an essential operation, the "insertion operator" has been extensively studied over static road networks. When a new request appears, the insertion operator is used to find the optimal positions in a worker's current route at which to insert the origin and destination of this request so as to minimize the worker's travel time. Previous works study how to conduct the insertion operation efficiently in static road networks; in reality, however, route planning should account for dynamic traffic (i.e., a time-dependent road network). Unfortunately, existing solutions to the insertion operator become inefficient in this setting. Thus, this paper studies the insertion operator over time-dependent road networks. Specifically, to reduce the high $O(n^3)$ time complexity of the existing solution, we calculate compound travel-time functions along the route to speed up the calculation of the travel time between vertex pairs belonging to the route; as a result, the time complexity of an insertion can be reduced to $O(n^2)$. Finally, we further improve the method to a linear-time insertion algorithm by showing that only $O(1)$ time is needed to find the best position in the current route at which to insert the origin while linearly enumerating each possible position for the new request's destination. Evaluations on two real-world, large-scale datasets show that our methods accelerate the existing insertion algorithm by up to 25 times.
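
The sketch below illustrates the compound-function idea in isolation: each route leg is an arrival-time function of the departure time, and composing legs yields, for every pair of stops, a single callable that answers 'leaving stop i at time t, when do I reach stop j?'. The toy rush-hour legs and function names are assumptions; the paper's piecewise-linear compounds and the full insertion search are richer.

```python
def compose(f, g):
    """Compound arrival-time function: depart, traverse leg f, then leg g."""
    return lambda t: g(f(t))


def build_compounds(leg_arrival_fns):
    """
    compound[i][j](t) = arrival time at stop j when leaving stop i at time t,
    following the current route.  Building all of them costs O(n^2) function
    compositions, after which each travel-time lookup is a single call.
    """
    n = len(leg_arrival_fns) + 1
    compound = [[None] * n for _ in range(n)]
    for i in range(n):
        f = lambda t: t                      # identity: stay at stop i
        compound[i][i] = f
        for j in range(i, n - 1):
            f = compose(f, leg_arrival_fns[j])
            compound[i][j + 1] = f
    return compound


# Time-dependent legs: travel is slower during a "rush hour" window.
def leg(base):
    return lambda t: t + base * (1.5 if 7 <= t % 24 < 9 else 1.0)


legs = [leg(0.5), leg(0.8), leg(0.3)]        # a 4-stop route
compound = build_compounds(legs)
print(compound[0][3](6.0))                   # arrival at the last stop, departing at 6:00
```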

The timely transportation of goods to customers is an essential component of economic activity. However, the heavy-duty diesel trucks that deliver these goods contribute significantly to greenhouse gas emissions within many large metropolitan areas, including Los Angeles, New York, and San Francisco. To facilitate freight electrification, this paper proposes joint routing and charging (JRC) scheduling for electric trucks. The objective of the associated optimization problem is to minimize the cost of transportation, charging, and tardiness. Because electric trucks can choose among a large number of combinations of road segments, charging decisions, and charging durations, the resulting mixed-integer linear programming (MILP) problem is extremely challenging due to its combinatorial complexity, even in the deterministic case. Therefore, a Level-Based Surrogate Lagrangian Relaxation method is employed to decompose and coordinate the overall problem into truck subproblems that are significantly less complex. For coordination, each truck subproblem is solved independently of the other subproblems based on charging cost, tardiness, and the values of the Lagrangian multipliers. In addition to serving as a means of guiding and coordinating trucks, the multipliers can also serve as a basis for transparent and explainable decision-making by the trucks. Testing results demonstrate that even small instances cannot be solved with the off-the-shelf solver CPLEX after several days of computation. The new method, on the other hand, obtains near-optimal solutions within a few minutes for small cases, and within 30 minutes for large ones. Furthermore, it is demonstrated that as battery capacity increases, the total cost decreases significantly, and as the charging power increases, the number of trucks required decreases as well.
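
A stripped-down sketch of the decomposition idea follows: the shared charger-capacity constraints are priced by Lagrangian multipliers, each truck then solves its own small subproblem given those prices, and the multipliers are raised on overloaded time slots. The two-truck instance, step size, and plain subgradient update are assumptions; the paper's level-based surrogate update and MILP subproblems are considerably more involved.

```python
def solve_truck_subproblem(truck, prices):
    """
    Each truck independently picks its cheapest option once shared-resource
    usage is priced by the multipliers.  An option is (cost, usage per slot).
    """
    def priced_cost(option):
        cost, usage = option
        return cost + sum(prices[t] * u for t, u in enumerate(usage))
    return min(truck["options"], key=priced_cost)


def lagrangian_coordination(trucks, capacity, steps=50, step_size=0.5):
    """Dual (price) update on the coupling charger-capacity constraints."""
    n_slots = len(capacity)
    prices = [0.0] * n_slots
    for _ in range(steps):
        choices = [solve_truck_subproblem(tr, prices) for tr in trucks]
        load = [sum(ch[1][t] for ch in choices) for t in range(n_slots)]
        # Subgradient step: raise the price of overloaded slots, relax the rest.
        prices = [max(0.0, p + step_size * (load[t] - capacity[t]))
                  for t, p in enumerate(prices)]
    return choices, prices


# Two trucks, two time slots, one charger available per slot.
trucks = [
    {"options": [(3.0, (1, 0)), (4.0, (0, 1))]},   # (cost, charger usage per slot)
    {"options": [(2.0, (1, 0)), (5.0, (0, 1))]},
]
choices, prices = lagrangian_coordination(trucks, capacity=[1, 1])
print(choices, [round(p, 2) for p in prices])
```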

In the context of state-space models, skeleton-based smoothing algorithms rely on a backward sampling step which by default has $\mathcal O(N^2)$ complexity (where $N$ is the number of particles). Existing improvements in the literature are unsatisfactory: a popular rejection-sampling-based approach, as we shall show, might lead to badly behaved execution time; another rejection sampler with stopping lacks a complexity analysis; yet another MCMC-inspired algorithm comes with no stability guarantee. We provide several results that close these gaps. In particular, we prove a novel non-asymptotic stability theorem, thus enabling smoothing with truly linear complexity and adequate theoretical justification. We propose a general framework which unites most skeleton-based smoothing algorithms in the literature and allows us to simultaneously prove their convergence and stability, both in online and offline contexts. Furthermore, we derive, as a special case of that framework, a new coupling-based smoothing algorithm applicable to models with intractable transition densities. We provide practical recommendations and confirm them with numerical experiments.
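
For reference, the quadratic baseline being improved on looks roughly like the sketch below: standard backward sampling, where drawing each smoothing state requires reweighting all $N$ filtering particles by the transition density. The AR(1) toy model and function names are assumptions.

```python
import numpy as np

def backward_sampling(particles, weights, transition_pdf, n_trajectories, rng):
    """
    Baseline O(N^2) backward sampler for a state-space model.
    `particles[t]` and `weights[t]` are the filtering particles/weights at
    time t; `transition_pdf(x_prev, x_next)` is the state transition density.
    Returns smoothing trajectories of shape (n_trajectories, T).
    """
    T, N = len(particles), len(particles[0])
    traj = np.empty((n_trajectories, T))
    # Sample the final state from the terminal filtering distribution.
    idx = rng.choice(N, size=n_trajectories, p=weights[-1])
    traj[:, -1] = particles[-1][idx]
    for t in range(T - 2, -1, -1):
        for m in range(n_trajectories):
            # Reweight every time-t particle by its transition probability
            # towards the already-sampled time-(t+1) state: the O(N) inner
            # step that makes the default scheme quadratic.
            w = weights[t] * transition_pdf(particles[t], traj[m, t + 1])
            w /= w.sum()
            traj[m, t] = particles[t][rng.choice(N, p=w)]
    return traj


# Toy AR(1) model: x_t = 0.9 x_{t-1} + N(0, 1).
def transition_pdf(x_prev, x_next):
    return np.exp(-0.5 * (x_next - 0.9 * x_prev) ** 2)


rng = np.random.default_rng(0)
particles = [rng.normal(size=100) for _ in range(5)]
weights = [np.ones(100) / 100 for _ in range(5)]
print(backward_sampling(particles, weights, transition_pdf, 3, rng).shape)
```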

Neural architecture-based recommender systems have achieved tremendous success in recent years. However, when dealing with highly sparse data, they still fall short of expectations. Self-supervised learning (SSL), an emerging technique for learning from unlabeled data, has recently drawn considerable attention in many fields. A growing body of research is also applying SSL to recommendation in order to mitigate the data sparsity issue. In this survey, we present a timely and systematic review of the research efforts on self-supervised recommendation (SSR). Specifically, we propose an exclusive definition of SSR, on top of which we build a comprehensive taxonomy that divides existing SSR methods into four categories: contrastive, generative, predictive, and hybrid. For each category, the narrative unfolds along its concept and formulation, the involved methods, and its pros and cons. Meanwhile, to facilitate the development and evaluation of SSR models, we release an open-source library, SELFRec, which incorporates multiple benchmark datasets and evaluation metrics and implements a number of state-of-the-art SSR models for empirical comparison. Finally, we shed light on the limitations of current research and outline future research directions.
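
As a concrete instance of the contrastive category, the sketch below computes an InfoNCE-style loss between two stochastically augmented views of a batch of embeddings: the matching rows act as positives and all other rows as negatives. The augmentation, temperature, and function names are assumptions, not SELFRec's actual API.

```python
import numpy as np

def info_nce(view1, view2, temperature=0.2):
    """
    InfoNCE contrastive loss between two augmented views of the same batch
    of user (or item) embeddings: each row in view1 should be close to the
    matching row in view2 and far from every other row.
    """
    v1 = view1 / np.linalg.norm(view1, axis=1, keepdims=True)
    v2 = view2 / np.linalg.norm(view2, axis=1, keepdims=True)
    logits = v1 @ v2.T / temperature             # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))        # positives sit on the diagonal


rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))                   # base user embeddings
view1 = emb + 0.1 * rng.normal(size=emb.shape)   # e.g. an edge-dropout view
view2 = emb + 0.1 * rng.normal(size=emb.shape)   # a second stochastic view
print(round(info_nce(view1, view2), 3))
```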

Artificial Intelligence (AI) is rapidly becoming integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI promises to herald a significant leap in C2 agility through automation. However, realistic expectations need to be set on what AI can achieve in the foreseeable future. This paper argues that AI could lead to a fragility trap, whereby the delegation of C2 functions to an AI could increase the fragility of C2, resulting in catastrophic strategic failures. This calls for a new framework for AI in C2 that avoids this trap. We argue that antifragility, along with agility, should form the core design principles for AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only survive within a complex operational environment but also thrive, benefiting from the inevitable shocks and volatility of war.

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with limited memory or in applications with strict latency requirements. A natural approach, therefore, is to perform model compression and acceleration in deep networks without significantly decreasing model performance. During the past few years, tremendous progress has been made in this area. In this paper, we survey the recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, after which the other techniques are introduced. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages, and drawbacks. We then go through a few very recent, additional successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude the paper and discuss remaining challenges and possible future directions on this topic.
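
To ground the first scheme, the sketch below performs the simplest form of parameter pruning: zeroing the fraction of weights with the smallest magnitude and returning a mask so they stay zero during fine-tuning. The sparsity target and function name are assumptions; real pipelines typically prune iteratively and layer-wise.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """
    Magnitude-based pruning: zero out the fraction `sparsity` of weights with
    the smallest absolute value and return a binary mask so the pruned
    entries can be kept at zero during fine-tuning.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k] if k > 0 else -np.inf
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask


rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))                  # a dense layer's weight matrix
pruned, mask = magnitude_prune(w, sparsity=0.9)
print(f"kept {mask.mean():.1%} of weights")    # roughly 10% survive
```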
