
In recent years, there has been an increased emphasis on reducing the carbon emissions from electricity consumption. Many organizations have set ambitious targets to reduce the carbon footprint of their operations as part of their sustainability goals. The carbon footprint of any consumer of electricity is computed as the product of the total energy consumption and the carbon intensity of electricity. Third-party carbon information services provide information on carbon intensity across regions that consumers can leverage to modulate their energy consumption patterns and reduce their overall carbon footprint. In addition, to accelerate their decarbonization process, large electricity consumers increasingly acquire power purchase agreements (PPAs) from renewable power plants to obtain renewable energy credits that offset their "brown" energy consumption. There are primarily two methods for attributing carbon-free energy, or renewable energy credits, to electricity consumers: location-based and market-based. These two methods yield significantly different carbon intensity values for various consumers. As there is a lack of consensus on which method to use for carbon-free attribution, a concurrent application of both approaches is observed in practice. In this paper, we show that such concurrent application can cause discrepancies in the carbon savings reported by carbon optimization techniques. Our analysis across three state-of-the-art carbon optimization techniques shows possible overestimation of up to 55.1% in the carbon reductions reported by consumers, and even increased emissions for consumers in some cases. We also find that carbon optimization techniques make different decisions under the market-based and location-based methods, and the market-based method can yield up to 28.2% less carbon savings than claimed under the location-based method for consumers without PPAs.
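As a concrete illustration of the footprint formula and the two attribution methods, here is a minimal Python sketch with entirely hypothetical numbers: the location-based method applies the local grid's carbon intensity to every consumed kWh, while the market-based method first nets out consumption covered by PPA-backed renewable energy credits.

```python
# Minimal sketch (all values hypothetical): footprint = energy x carbon intensity,
# computed under the location-based and market-based attribution methods.

hourly_kwh = [120.0, 80.0, 150.0]          # consumer's energy use per hour
grid_gco2_per_kwh = [400.0, 350.0, 500.0]  # local grid carbon intensity
ppa_credits_kwh = [100.0, 100.0, 100.0]    # PPA-backed renewable credits per hour

# Location-based: every consumed kWh carries the local grid's intensity.
location_based = sum(e * ci for e, ci in zip(hourly_kwh, grid_gco2_per_kwh))

# Market-based: PPA credits offset consumption before grid intensity applies.
market_based = sum(max(e - c, 0.0) * ci
                   for e, c, ci in zip(hourly_kwh, ppa_credits_kwh, grid_gco2_per_kwh))

print(f"location-based: {location_based / 1000:.1f} kgCO2")  # 151.0 kgCO2
print(f"market-based:   {market_based / 1000:.1f} kgCO2")    # 33.0 kgCO2
```

The same consumer thus reports very different footprints depending on which method it adopts, which is the gap the paper's analysis exploits.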

Related content

High-resolution fMRI provides a window into the brain's mesoscale organization. Yet higher spatial resolution increases scan times, needed to compensate for the low signal- and contrast-to-noise ratios. This work introduces a deep learning-based 3D super-resolution (SR) method for fMRI. By incorporating a resolution-agnostic image augmentation framework, our method adapts to varying voxel sizes without retraining. We apply this technique to localize fine-scale motion-selective sites in the early visual areas. Detection of these sites typically requires a resolution higher than 1 mm isotropic, whereas here we visualize them based on lower-resolution (2-3 mm isotropic) fMRI data. Remarkably, the super-resolved fMRI is able to recover high-frequency detail of the interdigitated organization of these sites (relative to the color-selective sites), even with training data sourced from different subjects and experimental paradigms -- including non-visual resting-state fMRI -- underscoring its robustness and versatility. Quantitative and qualitative results indicate that our method has the potential to enhance the spatial resolution of fMRI, leading to a drastic reduction in acquisition time.
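To make the idea concrete, the following is an illustrative PyTorch sketch, not the paper's actual architecture: a tiny residual 3D CNN that upsamples a low-resolution volume with trilinear interpolation (which keeps the model agnostic to the input voxel size) and then learns only the missing high-frequency detail.

```python
# Illustrative 3D super-resolution sketch (not the paper's model): trilinear
# upsampling handles arbitrary voxel sizes; the CNN refines the result.
import torch
import torch.nn as nn

class SR3D(nn.Module):
    def __init__(self, scale=2, channels=32):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        # Upsample first, then predict a residual: the network only has to
        # learn the high-frequency detail missing from the interpolation.
        x = nn.functional.interpolate(x, scale_factor=self.scale,
                                      mode="trilinear", align_corners=False)
        return x + self.body(x)

vol = torch.randn(1, 1, 32, 32, 32)   # a toy low-resolution volume
print(SR3D()(vol).shape)              # torch.Size([1, 1, 64, 64, 64])
```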

The 6TiSCH protocol stack was proposed to ensure high-performance communications in the Industrial Internet of Things (IIoT). However, the lack of sufficient time slots for nodes outside the 6TiSCH Destination Oriented Directed Acyclic Graph (DODAG) to transmit their Destination Advertisement Object (DAO) messages and cell reservation requests significantly hinders their integration into the DODAG. This oversight not only prolongs a device's join time but also increases energy consumption during the network formation phase. Moreover, challenges emerge due to the substantial number of control packets employed by both the 6TiSCH Scheduling Function (SF) and the routing protocol (RPL), draining more energy, increasing medium contention, and decreasing spatial reuse. Furthermore, an SF that overlooks previously allocated slots when assigning new ones to the same node may increase jitter, and more complications ensue when it neglects the state of the TSCH queue, leading to packets being dropped due to queue saturation. Additional complexity arises when RPL disregards the new parent's schedule saturation during parent switching, resulting in inefficient energy and time usage. To address these issues, we introduce novel mechanisms, strategically situated at the intersection of the SF and RPL, designed to balance the control-packet distribution and adaptively manage parent switching. Our proposal, implemented within the 6TiSCH simulator, demonstrates significant improvements across vital performance metrics, such as nodes' joining time, jitter, latency, energy consumption, and traffic volume, in comparison to the conventional 6TiSCH benchmark.
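As a sketch of the kind of queue- and allocation-aware decision such mechanisms involve, consider the following hypothetical policy in Python (illustrative only, not the paper's exact algorithm): before requesting new cells, a node accounts for its already allocated cells and its TSCH queue occupancy, so it neither reserves redundant slots nor schedules traffic its queue would drop.

```python
# Hypothetical queue-aware cell-request policy (a sketch, not the proposal's
# mechanism): respect prior allocations and never out-schedule the queue.

def cells_to_request(queue_len, queue_cap, allocated_cells, traffic_rate):
    """Return how many extra TSCH cells a node should request from its parent."""
    if queue_len >= queue_cap:           # queue saturated: packets would be
        return 0                         # dropped, so don't reserve more cells
    demand = max(traffic_rate - allocated_cells, 0)  # cells still needed,
                                                     # counting existing ones
    headroom = queue_cap - queue_len                 # what the queue can absorb
    return min(demand, headroom)

# A node with 3 cells, needing 6, but only 2 free queue slots, requests 2.
print(cells_to_request(queue_len=8, queue_cap=10, allocated_cells=3, traffic_rate=6))
```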

Printed circuit boards (PCBs) are an integral part of electronic systems. Hence, verifying their physical integrity in the presence of supply chain attacks (e.g., tampering and counterfeiting) is of utmost importance. Recently, tamper detection techniques grounded in impedance characterization of the PCB's Power Delivery Network (PDN) have gained prominence due to their global detection coverage and non-invasive, low-cost nature. Similar to other physical verification methods, these techniques rely on the existence of a physical golden sample for signature comparisons. However, access to a physical golden sample for golden signature extraction is not feasible in many real-world scenarios. In this work, we assess the feasibility of eliminating the physical golden sample and replacing it with a simulated golden signature obtained from the PCB design files. By performing extensive simulations and measurements on an in-house designed PCB, we demonstrate how the parasitic impedance of the PCB components plays a major role in achieving successful verification. Based on the obtained results and using statistical metrics, we show that we can mitigate the discrepancy between signatures collected from simulation and measurements.
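The verification step can be pictured with a small Python sketch (data, noise level, and threshold are all hypothetical): a measured PDN impedance profile is compared against the simulated golden signature using a simple statistical deviation metric, and boards exceeding a calibrated threshold are flagged.

```python
# Hypothetical signature comparison sketch: flag a board when its measured
# |Z(f)| profile deviates too far from the simulated golden signature.
import numpy as np

rng = np.random.default_rng(0)
freqs = np.logspace(5, 8, 200)                         # 100 kHz .. 100 MHz sweep
golden = 0.1 + 1e-9 * freqs                            # toy simulated |Z(f)| curve
measured = golden * rng.normal(1.0, 0.02, freqs.size)  # genuine board + noise
tampered = golden.copy()
tampered[120:] *= 1.5                                  # extra component shifts |Z|

def deviation(sig, ref):
    # Mean absolute relative deviation between a signature and the golden one.
    return float(np.mean(np.abs(sig - ref) / ref))

THRESHOLD = 0.05   # would be calibrated on a population of genuine boards
for name, sig in [("genuine", measured), ("tampered", tampered)]:
    d = deviation(sig, golden)
    print(f"{name}: deviation={d:.3f} -> {'TAMPERED' if d > THRESHOLD else 'ok'}")
```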

Integration of unmanned aerial vehicles (UAVs) for surveillance or monitoring applications into fifth generation (5G) New Radio (NR) cellular networks is an intriguing problem that has recently attracted considerable interest in both academia and industry. For efficient spectrum usage, we consider a recently proposed sky-ground non-orthogonal multiple access (NOMA) scheme, where a cellular-connected UAV acting as an aerial user (AU) and a static terrestrial user (TU) are paired to simultaneously transmit their uplink signals to a base station (BS) in the same time-frequency resource blocks. In such a case, due to the highly dynamic nature of the UAV, the signal transmitted by the AU experiences both time dispersion due to multipath propagation effects and frequency dispersion caused by Doppler shifts. On the other hand, for a static ground network, frequency dispersion of the signal transmitted by the TU is negligible and only multipath effects have to be taken into account. To decode the superposed signals at the BS through successive interference cancellation, accurate estimates of both the AU and TU channels are needed. In this paper, we propose channel estimation procedures that suitably exploit the different circular/noncircular modulation formats (modulation diversity) and the different almost-cyclostationarity features (Doppler diversity) of the AU and TU by means of widely linear time-varying processing. Our estimation approach is semi-blind, since the Doppler shifts and time delays of the AU are estimated from the received data only, whereas the remaining relevant parameters of the AU and TU channels are acquired by also relying on the available training symbols. Monte Carlo numerical results demonstrate that the proposed channel estimation algorithms can satisfactorily acquire all the relevant parameters under different operating conditions.
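To give a flavor of how noncircularity enables blind Doppler estimation, here is a toy Python sketch (not the paper's estimator, and with hypothetical parameters): for a noncircular constellation such as BPSK, squaring the received signal strips the symbol signs and concentrates a spectral line at twice the Doppler shift, which a simple FFT peak search can then locate from the data alone.

```python
# Toy blind Doppler estimation via noncircularity (illustrative only):
# squaring a BPSK signal removes the +/-1 symbols, leaving a tone at 2*f_d.
import numpy as np

fs = 10_000.0                          # sample rate in Hz (hypothetical)
n = np.arange(4096)
f_doppler = 123.0                      # true Doppler shift to recover
symbols = np.random.default_rng(1).choice([-1.0, 1.0], n.size)   # BPSK symbols
rx = symbols * np.exp(2j * np.pi * f_doppler * n / fs)
rx += 0.1 * (np.random.default_rng(2).standard_normal(n.size)
             + 1j * np.random.default_rng(3).standard_normal(n.size))

spectrum = np.abs(np.fft.fft(rx ** 2))        # squaring: symbols**2 == 1
f_axis = np.fft.fftfreq(n.size, d=1.0 / fs)
f_est = f_axis[np.argmax(spectrum)] / 2.0     # spectral line sits at 2 * f_d
print(f"estimated Doppler: {abs(f_est):.1f} Hz")   # ~123 Hz
```

A circular constellation (e.g., QPSK) would leave no such line after squaring, which is exactly the modulation-diversity contrast the paper exploits to separate the AU and TU contributions.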

The field of efficient Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges. Although the field has expanded and is vibrant, there hasn't been a concise framework that analyzes the various methods of LLM inference to provide a clear understanding of this domain. Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on the roofline model for systematic analysis of LLM inference techniques. This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems, such as why LLMs are memory-bound, how much memory and computation they need, and how to choose the right hardware. We systematically collate the latest advancements in efficient LLM inference, covering crucial areas such as model compression (e.g., knowledge distillation and quantization), algorithm improvements (e.g., early exit and mixture-of-experts), and both hardware- and system-level enhancements. Analyzing these methods with the roofline model helps us understand their impact on memory access and computation. This distinctive approach not only showcases the current research landscape but also delivers valuable insights for practical implementation, positioning our work as an indispensable resource for researchers new to the field as well as for those seeking to deepen their understanding of efficient LLM deployment. The analysis tool, LLM-Viewer, is open-sourced.
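A minimal roofline calculation (hardware numbers assumed, loosely modeled on a modern accelerator) shows why token-by-token decoding is memory-bound while large-batch prefill is compute-bound: attainable throughput is the minimum of peak compute and arithmetic intensity times memory bandwidth.

```python
# Roofline sketch (hypothetical hardware): attainable throughput is capped by
# peak compute or by memory bandwidth times arithmetic intensity (FLOPs/byte).

PEAK_FLOPS = 312e12   # assumed peak FP16 compute, FLOP/s
PEAK_BW = 2.0e12      # assumed memory bandwidth, bytes/s

def attainable_tflops(flops, bytes_moved):
    intensity = flops / bytes_moved                    # arithmetic intensity
    return min(PEAK_FLOPS, intensity * PEAK_BW) / 1e12

d = 4096   # hidden size of a toy d x d weight matrix (fp16 = 2 bytes/weight)

# Decoding generates one token at a time: the whole weight matrix is read
# for only ~2*d*d FLOPs, so intensity is ~1 FLOP/byte -> memory-bound.
decode = attainable_tflops(flops=2 * d * d, bytes_moved=2 * d * d)

# Prefill processes many tokens at once: weights are reused across tokens,
# so intensity grows with the token count -> compute-bound.
tokens = 512
prefill = attainable_tflops(flops=2 * tokens * d * d,
                            bytes_moved=2 * d * d + 4 * tokens * d)

print(f"decode:  {decode:.0f} TFLOPS (memory-bound)")    # ~2 TFLOPS
print(f"prefill: {prefill:.0f} TFLOPS (compute-bound)")  # ~312 TFLOPS
```

The two-orders-of-magnitude gap between the two regimes is what makes compression (fewer bytes moved) so effective for decoding and batching so effective for prefill.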

Artificial intelligence (AI) has become a part of everyday conversation and our lives. It is considered the new electricity that is revolutionizing the world, and it attracts heavy investment from both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results on many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs due to over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI: its history, its potential, and its limitations. We believe that AI is a helper, not a ruler, of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations, methods, and forms of machine learning for AI are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but little use has been made of them in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what remains to be done in the future. In the appendix, we look at the development of AI education, especially from the perspective of course contents at our own university.

Owing to effective and flexible data acquisition, unmanned aerial vehicles (UAVs) have recently become a hotspot across the fields of computer vision (CV) and remote sensing (RS). Inspired by the recent success of deep learning (DL), many advanced object detection and tracking approaches have been widely applied to various UAV-related tasks, such as environmental monitoring, precision agriculture, and traffic management. This paper provides a comprehensive survey of the research progress and prospects of DL-based UAV object detection and tracking methods. More specifically, we first outline the challenges and statistics of existing methods, and provide solutions from the perspective of DL-based models in three research topics: object detection from images, object detection from videos, and object tracking from videos. Open datasets related to UAV-dominated object detection and tracking are exhaustively listed, and four benchmark datasets are employed for performance evaluation using several state-of-the-art methods. Finally, prospects and considerations for future work are discussed and summarized. We expect that this survey can provide researchers who come from the remote sensing field with an overview of DL-based UAV object detection and tracking methods, along with some thoughts on their further development.

Graph Neural Networks (GNNs) have been studied through the lens of expressive power and generalization. However, their optimization properties are less well understood. We take the first step towards analyzing GNN training by studying the gradient dynamics of GNNs. First, we analyze linearized GNNs and prove that, despite the non-convexity of training, convergence to a global minimum at a linear rate is guaranteed under mild assumptions that we validate on real-world graphs. Second, we study what may affect GNNs' training speed. Our results show that the training of GNNs is implicitly accelerated by skip connections, more depth, and/or a good label distribution. Empirical results confirm that our theoretical results for linearized GNNs align with the training behavior of nonlinear GNNs. Our results provide the first theoretical support for the success of GNNs with skip connections in terms of optimization, and suggest that deep GNNs with skip connections would be promising in practice.
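A toy numerical check of this setting (illustrative, not the paper's proof): train a linearized two-layer GNN, with prediction A X W1 W2 and no nonlinearity, by plain gradient descent and watch the squared loss decay steadily despite the non-convex product parameterization. The graph, features, and labels below are all synthetic.

```python
# Toy linearized GNN (synthetic data): the loss decreases at a steady rate
# under gradient descent even though W1 @ W2 is a non-convex factorization.
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 20, 8, 16
A = np.eye(n) + (rng.random((n, n)) < 0.2) * rng.random((n, n))  # random graph
A /= A.sum(axis=1, keepdims=True)                                # row-normalize
X = rng.standard_normal((n, d))                                  # node features
Y = rng.standard_normal((n, 1))                                  # node labels
W1 = rng.standard_normal((d, h)) / np.sqrt(d)
W2 = rng.standard_normal((h, 1)) / np.sqrt(h)

H = A @ X                              # one linear propagation step (no ReLU)
lr = 0.002
for step in range(2001):
    err = H @ W1 @ W2 - Y
    if step % 500 == 0:
        print(f"step {step:4d}  loss {0.5 * np.sum(err**2):.6f}")
    g1 = H.T @ err @ W2.T              # dL/dW1
    g2 = (H @ W1).T @ err              # dL/dW2
    W1 -= lr * g1
    W2 -= lr * g2
```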

Object detectors usually achieve promising results with the supervision of complete instance annotations. However, their performance is far from satisfactory with sparse instance annotations. Most existing methods for sparsely annotated object detection either re-weight the loss of hard negative samples or convert the unlabeled instances into ignored regions to reduce the interference of false negatives. We argue that these strategies are insufficient, since they can at most alleviate the negative effect caused by missing annotations. In this paper, we propose a simple but effective mechanism, called Co-mining, for sparsely annotated object detection. In Co-mining, the two branches of a Siamese network predict pseudo-label sets for each other. To enhance multi-view learning and better mine unlabeled instances, the original image and a corresponding augmented image are used as the inputs of the two branches of the Siamese network, respectively. Co-mining can serve as a general training mechanism applicable to most modern object detectors. Experiments are performed on the MS COCO dataset with three different sparsely annotated settings using two typical frameworks: the anchor-based detector RetinaNet and the anchor-free detector FCOS. Experimental results show that Co-mining with RetinaNet achieves 1.4%~2.1% improvements over different baselines and surpasses existing methods under the same sparsely annotated setting.
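A schematic Python sketch of one Co-mining step (with toy stand-ins for the detector and loss; the interfaces are hypothetical, not the authors' code): each branch's confident predictions are added as pseudo-labels to the sparse ground truth that supervises the other branch.

```python
# Schematic Co-mining step with toy stand-ins (hypothetical interfaces):
# confident predictions from one Siamese branch supervise the other.

def detector(view):
    # Toy stand-in: a real detector returns scored boxes for a view.
    return [{"box": (10, 10, 50, 50), "score": 0.9 if view == "orig" else 0.6},
            {"box": (60, 60, 90, 90), "score": 0.85}]

def detection_loss(preds, targets):
    # Toy stand-in: a real loss matches predictions to targets (e.g., focal
    # loss + box regression); here we just count unmatched target boxes.
    matched = {p["box"] for p in preds}
    return sum(1 for t in targets if t["box"] not in matched)

def co_mining_step(views, sparse_gt, conf_thresh=0.8):
    preds_a = detector(views[0])                 # branch A: original image
    preds_b = detector(views[1])                 # branch B: augmented image
    # Each branch's confident detections become pseudo-labels for the other,
    # alongside the incomplete set of boxes that were actually annotated.
    pseudo_for_b = [p for p in preds_a if p["score"] > conf_thresh]
    pseudo_for_a = [p for p in preds_b if p["score"] > conf_thresh]
    loss_a = detection_loss(preds_a, sparse_gt + pseudo_for_a)
    loss_b = detection_loss(preds_b, sparse_gt + pseudo_for_b)
    return loss_a + loss_b

gt = [{"box": (10, 10, 50, 50)}]                 # sparse annotation set
print(co_mining_step(("orig", "aug"), gt))
```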

ASR (automatic speech recognition) systems like Siri, Alexa, Google Voice, or Cortana have become quite popular recently. One of the key techniques enabling the practical use of such systems in people's daily lives is deep learning. Though deep learning in computer vision is known to be vulnerable to adversarial perturbations, little is known about whether such perturbations remain effective against practical speech recognition systems. In this paper, we not only demonstrate that such attacks can happen in reality, but also show that the attacks can be systematically conducted. To minimize users' attention, we choose to embed the voice commands into a song, called a CommanderSong. In this way, the song carrying the command can spread through radio, TV, or any media player installed on portable devices like smartphones, potentially impacting millions of users over long distances. In particular, we overcome two major challenges: minimizing the revision of a song in the process of embedding commands, and letting the CommanderSong spread through the air without losing the voice "command". Our evaluation demonstrates that we can craft random songs to "carry" any commands, and that the modification is extremely difficult to notice. Notably, the physical attack, in which we play the CommanderSongs over the air and record them, can succeed with a 94% success rate.
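Conceptually, embedding a command resembles constrained adversarial optimization. The following toy Python sketch (the surrogate loss, gradient, and budget are all stand-ins, not the paper's pipeline) perturbs song samples toward a target's features while projecting the perturbation onto a small "audibility" budget, which captures the trade-off between command recognition and minimal revision of the song.

```python
# Conceptual sketch only (toy surrogate, not the paper's attack): nudge the
# song toward a target's features under an L-infinity perturbation budget.
import numpy as np

rng = np.random.default_rng(0)
song = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)  # 1 s toy "song"
target_feat = rng.standard_normal(8000)   # stand-in for the command's features

def surrogate_loss(audio):
    # Toy stand-in for an ASR loss: distance to the target's features.
    return np.mean((audio - target_feat) ** 2)

delta = np.zeros_like(song)
EPS, LR = 0.05, 0.01                      # audibility budget and step size
for _ in range(200):
    grad = 2 * (song + delta - target_feat) / song.size  # analytic toy gradient
    delta = np.clip(delta - LR * grad * song.size, -EPS, EPS)  # project to budget

print(f"loss before: {surrogate_loss(song):.3f}  "
      f"after: {surrogate_loss(song + delta):.3f}")
```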
