
In the wake of the rapid deployment of large-scale low-Earth orbit satellite constellations, exploiting the full computing potential of Commercial Off-The-Shelf (COTS) devices in these environments has become a pressing issue. However, understanding this problem is far from straightforward due to the inherent differences between terrestrial infrastructure and the satellite platform in space. In this paper, we take an important step towards closing this knowledge gap by presenting the first measurement study on the thermal control, power management, and performance of COTS computing devices on satellites. Our measurements reveal that the satellite platform and COTS computing devices interact strongly in terms of temperature and energy, and that this interaction forms the main constraint on satellite computing. Further, we analyze the critical factors that shape the characteristics of onboard COTS computing devices and provide guidelines for future research on optimizing their use for computing. Finally, we have released the datasets to facilitate further study of satellite computing.

Related content

We present a flexible, deterministic numerical method for computing left-tail rare-event probabilities of sums of non-negative, independent random variables (RVs). The method is based on iterative numerical integration of linear convolutions by means of Newton-Cotes rules. The periodicity properties of convolved densities, combined with the trapezoidal rule, are exploited to produce a robust and efficient method, and the method is flexible in the sense that it can be applied to all kinds of non-negative continuous RVs. We present an error analysis and study the benefits of utilizing Newton-Cotes rules versus the fast Fourier transform (FFT) for numerical integration, showing that although FFT can be more efficient, Newton-Cotes rules tend to preserve the relative error better, and do so at an acceptable computational cost. Numerical studies on problems with both known and unknown rare-event probabilities showcase the method's performance and support our theoretical findings.
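
As a concrete illustration of the core mechanism described above, the following is a minimal numpy sketch of iterative grid convolution with trapezoidal-rule integration for a left-tail probability. The distribution, grid spacing, number of terms, and threshold are illustrative assumptions, and the sketch omits the periodicity tricks and error control that the paper develops.

```python
import numpy as np
from scipy import stats

# Sketch: estimate P(X_1 + ... + X_n <= gamma) for iid non-negative RVs by
# iteratively convolving the density with itself on a uniform grid.
n_terms, gamma = 4, 0.5             # illustrative values, not from the paper
h = 5e-3                            # grid spacing
x = np.arange(0.0, 20.0, h)         # truncated support of the sum
f = stats.lognorm(s=1.0).pdf(x)     # density of a single term (example choice)

dens = f.copy()
for _ in range(n_terms - 1):
    # discrete linear convolution approximates the continuous one up to a factor h
    dens = np.convolve(dens, f)[: x.size] * h

mask = x <= gamma
p_left_tail = np.trapz(dens[mask], x[mask])    # trapezoidal rule on the left tail
print(f"P(S <= {gamma}) ~ {p_left_tail:.3e}")
```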

High-speed interconnects, such as NVLink, are integral to modern multi-GPU systems, acting as a vital link between CPUs and GPUs. This study highlights the vulnerability of multi-GPU systems to covert and side channel attacks due to congestion on interconnects. An adversary can infer private information about a victim's activities by monitoring NVLink congestion without needing special permissions. Leveraging this insight, we develop a covert channel attack across two GPUs with a bandwidth of 45.5 kbps and a low error rate, and introduce a side channel attack enabling attackers to fingerprint applications through the shared NVLink interconnect.
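
To make the attack surface concrete, here is a minimal PyTorch sketch of the receiver side of a congestion-based timing channel under stated assumptions: two NVLink-connected GPUs visible as cuda:0 and cuda:1, a colluding sender that creates heavy peer-to-peer traffic to signal a 1 and stays idle for a 0, and threshold/window values that are placeholders rather than the calibrated parameters used in the paper.

```python
import time
import torch

# Receiver sketch: time small GPU-to-GPU copies and threshold the latency to
# decide whether the sender is currently congesting the shared interconnect.
assert torch.cuda.device_count() >= 2, "requires two NVLink-connected GPUs"

probe = torch.empty(1 << 20, device="cuda:0")   # ~4 MB probe buffer
THRESHOLD_S = 2e-3                               # assumed calibration value
PROBES_PER_BIT = 50                              # assumed sampling window

def receive_bit() -> int:
    latencies = []
    for _ in range(PROBES_PER_BIT):
        torch.cuda.synchronize()
        start = time.perf_counter()
        _ = probe.to("cuda:1")                   # transfer over the interconnect
        torch.cuda.synchronize()
        latencies.append(time.perf_counter() - start)
    median = sorted(latencies)[len(latencies) // 2]
    return int(median > THRESHOLD_S)             # congested -> sender encoded a 1

bits = [receive_bit() for _ in range(8)]
print("received bits:", bits)
```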

In the rapidly evolving landscape of computing disciplines, substantial efforts are being dedicated to unraveling the sociotechnical implications of generative AI (Gen AI). While existing research has manifested in various forms, there remains a notable gap concerning the direct engagement of knowledge workers in academia with Gen AI. We interviewed 18 knowledge workers, including faculty and students, to investigate the social and technical dimensions of Gen AI from their perspective. Our participants raised concerns about the opacity of the data used to train Gen AI. This lack of transparency makes it difficult to identify and address inaccurate, biased, and potentially harmful information generated by these models. Knowledge workers also expressed worries about Gen AI undermining trust in the relationship between instructor and student, and discussed potential solutions, such as pedagogy readiness, to mitigate these risks. Additionally, participants recognized Gen AI's potential to democratize knowledge by accelerating the learning process and acting as an accessible research assistant. However, there were also concerns about potential social and power imbalances stemming from unequal access to such technologies. Our study offers insights into the concerns and hopes of knowledge workers about the ethical use of Gen AI in educational settings and beyond, with implications for navigating this new landscape.

We explore how high-speed robot arm motions can dynamically manipulate cables to vault over obstacles, knock objects from pedestals, and weave between obstacles. In this paper, we propose a self-supervised learning framework that enables a UR5 robot to perform these three tasks. The framework finds a 3D apex point for the robot arm, which, together with a task-specific trajectory function, defines an arcing motion that dynamically manipulates the cable to perform tasks with varying obstacle and target locations. The trajectory function computes minimum-jerk motions that are constrained to remain within joint limits and to travel through the 3D apex point by repeatedly solving quadratic programs to find the shortest and fastest feasible motion. We experiment with 5 physical cables with different thicknesses and masses and compare performance against two baselines in which a human chooses the apex point. Results suggest that a baseline with a fixed apex across the three tasks achieves respective success rates of 51.7%, 36.7%, and 15.0%, and a baseline with human-specified, task-specific apex points achieves success rates of 66.7%, 56.7%, and 15.0%, respectively, while the robot using the learned apex point achieves success rates of 81.7% in vaulting, 65.0% in knocking, and 60.0% in weaving. Code, data, and supplementary materials are available at https://sites.google.com/berkeley.edu/dynrope/home.
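
The following is a minimal numpy sketch of one ingredient named in the abstract, a minimum-jerk (quintic) joint-space profile that passes through a specified apex configuration. The joint values, apex, and segment durations are illustrative assumptions, and stitching two rest-to-rest segments at the apex is a simplification: the paper instead solves quadratic programs for the shortest, fastest feasible motion subject to joint limits.

```python
import numpy as np

def min_jerk_segment(q_start, q_end, duration, steps=100):
    """Rest-to-rest minimum-jerk (quintic) interpolation between two joint configurations."""
    s = np.linspace(0.0, 1.0, steps)[:, None]         # normalized time
    shape = 10 * s**3 - 15 * s**4 + 6 * s**5           # minimum-jerk time scaling
    return q_start + (q_end - q_start) * shape, duration * s.ravel()

# Illustrative 6-DoF joint configurations in radians (not values from the paper).
q_home = np.zeros(6)
q_apex = np.array([0.0, -1.2, 1.0, -0.5, 1.57, 0.0])   # apex the arm must pass through
q_goal = np.array([0.8, -0.6, 0.4, -0.3, 1.57, 0.0])

seg1, _ = min_jerk_segment(q_home, q_apex, duration=0.4)
seg2, _ = min_jerk_segment(q_apex, q_goal, duration=0.4)
trajectory = np.vstack([seg1, seg2[1:]])               # joint waypoints to stream to the arm
print(trajectory.shape)                                # (199, 6)
```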

The problem of optimizing discrete phases in a reconfigurable intelligent surface (RIS) to maximize the received power at a user equipment is addressed. Necessary and sufficient conditions to achieve this maximization are given and employed in an algorithm to achieve the maximization. New versions of the algorithm are proven to achieve convergence in N or fewer steps whether or not the direct link is completely blocked, where N is the number of RIS elements, whereas previously published results achieve this in KN or 2N steps, where K is the number of discrete phases. Thus, for a discrete-phase RIS, the techniques presented in this paper achieve the optimum received power in the smallest number of steps published in the literature. In addition, in each of those N steps, the techniques presented in this paper determine only one or a small number of phase shifts with a simple elementwise update rule, which results in a substantial reduction of computation time compared to the algorithms in the literature. As a secondary result, we define the uniform polar quantization (UPQ) algorithm, an intuitive quantization algorithm that approximates the continuous solution with an approximation ratio of sinc^2(1/K) at low time complexity, given perfect knowledge of the channel.
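
The UPQ step described in the last sentence can be sketched directly: each RIS phase is snapped to the discrete value nearest the continuous-optimal phase that aligns its cascaded term with the direct link. The channel realizations below are random placeholders and the element counts are arbitrary; this illustrates the quantization rule only, not the paper's optimal N-step algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 4                                        # RIS elements, discrete phase levels
h_d = rng.normal() + 1j * rng.normal()              # direct link (placeholder)
g = rng.normal(size=N) + 1j * rng.normal(size=N)    # transmitter-to-RIS channel (placeholder)
h = rng.normal(size=N) + 1j * rng.normal(size=N)    # RIS-to-user channel (placeholder)

# Continuous optimum: rotate every cascaded term onto the phase of the direct link.
theta_cont = np.angle(h_d) - np.angle(h * g)

# Uniform polar quantization: snap to the nearest of K uniformly spaced phases.
step = 2 * np.pi / K
theta_upq = np.round(theta_cont / step) * step

received_power = np.abs(h_d + np.sum(h * g * np.exp(1j * theta_upq)))**2
continuous_power = (np.abs(h_d) + np.sum(np.abs(h * g)))**2
print(received_power / continuous_power)            # compare against sinc^2(1/K) ~ 0.81 for K = 4
```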

As the demand for digital information grows in fields like medicine, remote sensing, and archival storage, efficient image compression becomes crucial. This paper focuses on lossless image compression, which is vital for managing the increasing volume of image data without quality loss. Current research emphasizes techniques such as predictive coding, transform coding, and context modeling to improve compression ratios. This study evaluates lossless compression in JPEG XL, the latest standard in the JPEG family, and aims to enhance its compression ratio by modifying the codebase. Results show that while overall compression levels fall below those of the original codec, one prediction method improves compression for specific image types. This study offers insights into enhancing lossless compression performance and suggests possibilities for future advancements in this area.
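
To ground the predictive-coding idea mentioned above, here is a small numpy sketch using a generic gradient predictor (left + top - top-left). This is not JPEG XL's actual predictor or the modification studied in the paper; it only illustrates why prediction helps lossless compression: residuals of a smooth image concentrate near zero and need fewer bits per symbol to entropy-code than the raw pixels.

```python
import numpy as np

def gradient_residuals(img: np.ndarray) -> np.ndarray:
    """Predict each pixel as left + top - top-left and return the residuals."""
    img = img.astype(np.int64)
    pred = np.zeros_like(img)
    pred[1:, 1:] = img[1:, :-1] + img[:-1, 1:] - img[:-1, :-1]
    pred[0, 1:] = img[0, :-1]            # first row: predict from the left neighbour
    pred[1:, 0] = img[:-1, 0]            # first column: predict from the pixel above
    return img - pred                     # what an entropy coder would actually encode

def entropy_bits_per_symbol(values: np.ndarray) -> float:
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Smooth synthetic image (placeholder for a real photograph).
rng = np.random.default_rng(1)
img = np.add.outer(np.arange(256), np.arange(256)) // 2 + rng.integers(0, 4, size=(256, 256))

print("raw pixels:", entropy_bits_per_symbol(img), "bits/symbol")
print("residuals: ", entropy_bits_per_symbol(gradient_residuals(img)), "bits/symbol")
```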

With the emergence of Artificial Intelligence (AI)-based decision-making, explanations help increase the adoption of new technology through enhanced trust and reliability. However, our experimental study challenges the notion that every user universally values explanations. We argue that agreement with AI suggestions, whether accompanied by explanations or not, is influenced by individual differences in personality traits and the user's comfort with technology. We found that people with higher neuroticism and lower technological comfort showed more agreement with recommendations presented without explanations. As more users become exposed to eXplainable AI (XAI) and AI-based systems, we argue that XAI design should not provide explanations for users with high neuroticism and low technology comfort. Prioritizing user personalities in XAI systems will help users become better collaborators with AI systems.

With the exponential surge in diverse multi-modal data, traditional uni-modal retrieval methods struggle to meet the needs of users demanding access to data from various modalities. To address this, cross-modal retrieval has emerged, enabling interaction across modalities, facilitating semantic matching, and leveraging the complementarity and consistency between different modal data. Although prior literature has reviewed the cross-modal retrieval field, existing surveys fall short in timeliness, taxonomy, and comprehensiveness. This paper conducts a comprehensive review of cross-modal retrieval's evolution, spanning from shallow statistical analysis techniques to vision-language pre-training models. Commencing with a comprehensive taxonomy grounded in machine learning paradigms, mechanisms, and models, the paper then delves deeply into the principles and architectures underpinning existing cross-modal retrieval methods. Furthermore, it offers an overview of widely used benchmarks, metrics, and performance results. Lastly, the paper probes the prospects and challenges that confront contemporary cross-modal retrieval, while engaging in a discourse on potential directions for further progress in the field. To facilitate research on cross-modal retrieval, we develop an open-source code repository at https://github.com/BMC-SDNU/Cross-Modal-Retrieval.
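
For readers new to the area, the following is a minimal numpy sketch of the shared-representation matching that most methods surveyed here build on: modality-specific encoders map images and text into a common space, and retrieval reduces to cosine-similarity ranking. The embeddings below are random placeholders standing in for encoder outputs, and the dimension is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512                                        # shared embedding dimension (assumption)
image_embs = rng.normal(size=(1000, d))        # placeholder image-encoder outputs
text_query = rng.normal(size=d)                # placeholder text-encoder output

# L2-normalize so that the dot product equals cosine similarity.
image_embs /= np.linalg.norm(image_embs, axis=1, keepdims=True)
text_query /= np.linalg.norm(text_query)

scores = image_embs @ text_query               # semantic matching in the shared space
top10 = np.argsort(-scores)[:10]               # text-to-image retrieval: top-10 candidates
print(top10, scores[top10])
```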

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
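
As one concrete instance of category (1) above, here is a minimal numpy sketch of global magnitude pruning. The layer shape and target sparsity are illustrative assumptions; real pipelines typically fine-tune after pruning and store the result in a sparse format to realize the memory and energy savings.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until the requested sparsity is reached."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

w = np.random.default_rng(0).normal(size=(256, 256))    # placeholder layer weights
w_pruned = magnitude_prune(w, sparsity=0.9)              # keep only the largest 10%
print("nonzero fraction:", np.count_nonzero(w_pruned) / w_pruned.size)
```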
