
Sneakers have been designated the most counterfeited fashion item sold online, with a trade carrying three times more risk than any other fashion purchase. As the market expands, the current sneaker scene displays several vulnerabilities and trust flaws, mostly related to the legitimacy of assets or actors. In this paper, we investigate various blockchain-based mechanisms to address these large-scale trust issues. We argue that (i) pre-certifying and tracking assets through non-fungible tokens can ensure the genuine nature of an asset and authenticate its owner more effectively during peer-to-peer trading across a marketplace; (ii) a game-theoretic system with economic incentives for participating users can greatly reduce the rate of online fraud and address missed delivery deadlines; (iii) a decentralized dispute resolution system biased in favour of the honest party can resolve potential conflicts more reliably.
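As a rough illustration of point (i), the sketch below registers a pre-certified asset as a token and restricts transfers to the recorded owner. The class and method names (AssetRegistry, mint, transfer) are hypothetical and do not reflect any smart-contract implementation from the paper.

```python
# Minimal sketch of a pre-certified asset registry in the spirit of an NFT ledger.
# All names here (AssetRegistry, mint, transfer) are illustrative, not the paper's API.
from dataclasses import dataclass, field

@dataclass
class Asset:
    token_id: int
    description: str          # e.g. certified sneaker model and serial number
    owner: str                # current owner's identifier
    history: list = field(default_factory=list)  # provenance trail

class AssetRegistry:
    def __init__(self):
        self._assets = {}
        self._next_id = 0

    def mint(self, certifier: str, description: str, owner: str) -> int:
        """A trusted certifier tokenizes a physical asset after verifying it."""
        token_id = self._next_id
        self._next_id += 1
        self._assets[token_id] = Asset(token_id, description, owner,
                                       history=[(certifier, owner)])
        return token_id

    def transfer(self, token_id: int, seller: str, buyer: str) -> None:
        """Peer-to-peer trade: only the recorded owner can transfer the token."""
        asset = self._assets[token_id]
        if asset.owner != seller:
            raise PermissionError("seller does not own this asset")
        asset.owner = buyer
        asset.history.append((seller, buyer))

registry = AssetRegistry()
tid = registry.mint(certifier="brand-authenticator", description="Sneaker #123", owner="alice")
registry.transfer(tid, seller="alice", buyer="bob")
```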

Related Content

The ACM SIGACCESS Conference on Computers and Accessibility (ASSETS) is the premier forum for research on the design, evaluation, use, and education of computing for people with disabilities and older adults. We welcome submissions of original, high-quality work on topics related to computing and accessibility. This year, for the first time, ASSETS has broadened its scope to include original, high-quality research on topics related to accessible computing education.
August 2, 2023

Long-term tracking is a hot topic in computer vision. Competitive models are presented in this area every year, showing constant growth in performance, mainly measured on standardized protocols such as Visual Object Tracking (VOT) and the Object Tracking Benchmark (OTB). The fusion-tracker strategy has been applied over the last few years to overcome the known re-detection problem, and has turned out to be an important breakthrough. Following this approach, this work generalizes the fusion concept to an arbitrary number of baseline trackers in the pipeline, leveraging a learning phase to better understand how their outcomes correlate with each other, even when no target is present. A model- and data-independence conjecture is evidenced in the manuscript, yielding a recall of 0.738 on the LTB-50 dataset when learning from VOT-LT2022, and 0.619 when the two datasets are reversed. In both cases, the results are strongly competitive with the state of the art, and the recall places first on the podium.
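For intuition about the fusion step, the sketch below combines candidate boxes and confidence scores from several baseline trackers with a learned linear combiner; the combiner form, weights, and threshold are assumptions for illustration, not the model proposed in the work.

```python
# Illustrative fusion of an arbitrary number of baseline trackers: each tracker reports a
# candidate box and a confidence score; a learned combiner (a logistic model trained
# offline, standing in for the paper's learning phase) decides whether the target is
# present and which candidate to keep.
import numpy as np

class FusionTracker:
    def __init__(self, weights, bias):
        self.weights = np.asarray(weights, dtype=float)  # one weight per baseline tracker
        self.bias = float(bias)

    def step(self, candidates):
        """candidates: list of (box, score) tuples, one per baseline tracker."""
        scores = np.array([s for _, s in candidates], dtype=float)
        presence = 1.0 / (1.0 + np.exp(-(self.weights @ scores + self.bias)))
        if presence < 0.5:                            # fused decision: target absent
            return None, presence
        best = int(np.argmax(self.weights * scores))  # keep the most trusted candidate
        return candidates[best][0], presence

fusion = FusionTracker(weights=[0.9, 1.2, 0.6], bias=-1.0)
frame_candidates = [((10, 20, 50, 80), 0.7), ((12, 22, 48, 78), 0.9), ((200, 5, 40, 40), 0.2)]
box, confidence = fusion.step(frame_candidates)
```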

Recent diffusion probabilistic models (DPMs) have shown remarkable abilities in content generation; however, they often rely on complex forward processes, resulting in inefficient solutions for the reverse process and prolonged sampling times. In this paper, we address these challenges by focusing on the diffusion process itself: we propose to decouple the intricate diffusion process into two comparatively simpler processes to improve generative efficacy and speed. In particular, we present a novel diffusion paradigm named DDM (Decoupled Diffusion Models) based on the Ito diffusion process, in which the image distribution is approximated by an explicit transition probability while the noise path is controlled by the standard Wiener process. We find that decoupling the diffusion process reduces the learning difficulty and that the explicit transition probability significantly improves generation speed. We derive a new training objective for DPMs that enables the model to learn to predict the noise and image components separately. Moreover, given the novel forward diffusion equation, we derive the reverse denoising formula of DDM, which naturally supports fewer generation steps without ordinary differential equation (ODE) based accelerators. Our experiments demonstrate that DDM outperforms previous DPMs by a large margin in the few-function-evaluation setting and achieves comparable performance in the many-function-evaluation setting. We also show that our framework can be applied to image-conditioned generation and high-resolution image synthesis, and that it can generate high-quality images with only 10 function evaluations.
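The toy sketch below illustrates the idea of predicting the noise and image components with separate heads; the simplified forward process and tiny network are illustrative assumptions and do not reproduce DDM's Ito-based formulation or its actual training objective.

```python
# Toy two-headed denoiser that predicts the image component and the noise component
# separately, echoing (but not reproducing) DDM's decoupled objective. The forward
# process x_t = (1 - t) * x_0 + sqrt(t) * eps is chosen purely for illustration.
import torch
import torch.nn as nn

class TwoHeadDenoiser(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(),
                                      nn.Linear(128, 128), nn.SiLU())
        self.image_head = nn.Linear(128, dim)   # predicts x_0
        self.noise_head = nn.Linear(128, dim)   # predicts eps

    def forward(self, x_t, t):
        h = self.backbone(torch.cat([x_t, t], dim=-1))
        return self.image_head(h), self.noise_head(h)

def decoupled_loss(model, x0):
    t = torch.rand(x0.shape[0], 1)
    eps = torch.randn_like(x0)
    x_t = (1 - t) * x0 + t.sqrt() * eps          # simplified forward process
    x0_hat, eps_hat = model(x_t, t)
    return ((x0_hat - x0) ** 2).mean() + ((eps_hat - eps) ** 2).mean()

model = TwoHeadDenoiser(dim=32)
loss = decoupled_loss(model, torch.randn(16, 32))
loss.backward()
```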

Processing-in-memory (PIM) has been explored for decades by computer architects, yet it has never seen the light of day in real-world products due to its high design overheads and the lack of a killer application. With the advent of critical memory-intensive workloads, several commercial PIM technologies have been introduced to the market, ranging from domain-specific to more general-purpose PIM architectures. In this work, we deep-dive into UPMEM's commercial PIM technology, a general-purpose, highly programmable PIM-enabled parallel architecture. Our first key contribution is a flexible simulation framework for PIM. The simulator we developed (aka PIMulator) compiles UPMEM-PIM source code into machine-level instructions, which are subsequently consumed by our cycle-level performance simulator. Using PIMulator, we demystify UPMEM's PIM design through a detailed characterization study. Building on this characterization, we conduct a series of case studies to pathfind important architectural features that we deem critical for future PIM architectures to support.
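As a loose illustration of the consume-compiled-instructions idea, the sketch below charges a fixed latency per hypothetical opcode and totals the cycles; it is not PIMulator's design, and the opcodes and latencies are made up for the example.

```python
# Toy cycle-accounting loop: a "compiled" program is a list of (opcode, operands) tuples,
# and the simulator charges a hypothetical fixed latency per opcode.
def simulate(instructions):
    latency = {"LOAD": 4, "STORE": 4, "ADD": 1, "MUL": 3}  # hypothetical per-opcode costs
    cycles = 0
    for op, *_ in instructions:
        cycles += latency.get(op, 1)
    return cycles

program = [("LOAD", "r1", 0x100), ("LOAD", "r2", 0x104),
           ("ADD", "r3", "r1", "r2"), ("STORE", "r3", 0x108)]
print("total cycles:", simulate(program))
```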

Identification over quantum broadcast channels is considered. As opposed to the information transmission task, the decoder only identifies whether or not a message of its choosing was sent. This relaxation allows for a double-exponential code size. An achievable identification region is derived for a quantum broadcast channel, along with a full characterization for the class of classical-quantum broadcast channels. The identification capacity region of the single-mode pure-loss bosonic broadcast channel is obtained as a consequence. Furthermore, the results are demonstrated for the quantum erasure broadcast channel, where our region is suboptimal but improves on the best previously known bounds.
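For context on the double-exponential remark, the classical Ahlswede-Dueck scaling contrasts transmission and identification code sizes as follows (a standard fact, not a result specific to this paper):

```latex
% Transmission allows singly exponential code sizes in the blocklength n,
% whereas identification allows doubly exponential code sizes.
\[
  |\mathcal{M}_{\mathrm{transmission}}| \approx 2^{nR},
  \qquad
  |\mathcal{M}_{\mathrm{identification}}| \approx 2^{2^{nR}}.
\]
```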

This paper investigates adaptive bitrate (ABR) video semantic communication over wireless networks. In the considered model, video sensing devices must transmit video semantic information to an edge server, to facilitate ubiquitous video sensing services such as road environment monitoring at the edge server in autonomous driving scenarios. However, due to varying wireless network conditions, it is challenging to guarantee both low transmission delay and high semantic accuracy if devices continuously transmit video semantic information at a fixed bitrate. To address this challenge, we develop an adaptive bitrate video semantic communication (ABRVSC) system, in which devices adaptively adjust the bitrate of video semantic information according to network conditions. Specifically, we first define the quality of experience (QoE) for video semantic communication. Subsequently, a Swin Transformer-based semantic codec is proposed to extract semantic information while taking the influence of QoE into account. We then propose an Actor-Critic-based ABR algorithm for the semantic codec to enhance the robustness of the proposed ABRVSC scheme against network variations. Simulation results demonstrate that at low bitrates, the mean intersection over union (MIoU) of the proposed ABRVSC scheme is nearly twice that of the traditional scheme. Moreover, the proposed ABRVSC scheme, which increases the QoE in video semantic communication by 36.57%, exhibits more robustness against network variations than both fixed-bitrate schemes and traditional ABR schemes.
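A minimal actor-critic bitrate-selector sketch is given below; the state features, network sizes, and bitrate ladder are illustrative assumptions rather than the ABRVSC design.

```python
# Minimal actor-critic bitrate selector: the state (recent throughput, buffer level,
# last bitrate level) is mapped to a distribution over candidate semantic bitrates
# by the actor head, while the critic head estimates the QoE return.
import torch
import torch.nn as nn

BITRATES_KBPS = [300, 750, 1500, 3000]          # hypothetical bitrate ladder

class ActorCritic(nn.Module):
    def __init__(self, state_dim=3, n_actions=len(BITRATES_KBPS)):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.actor = nn.Linear(64, n_actions)   # policy head over bitrates
        self.critic = nn.Linear(64, 1)          # value head for the QoE return

    def forward(self, state):
        h = self.shared(state)
        return torch.distributions.Categorical(logits=self.actor(h)), self.critic(h)

model = ActorCritic()
state = torch.tensor([[2.1, 0.8, 1.0]])         # [throughput Mbps, buffer s, last level]
policy, value = model(state)
chosen_bitrate = BITRATES_KBPS[policy.sample().item()]
```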

Performing automatic reformulations of a user's query is a popular paradigm in information retrieval (IR) for improving effectiveness -- as exemplified by pseudo-relevance feedback approaches, which expand the query in order to alleviate the vocabulary mismatch problem. Recent advancements in generative language models have demonstrated their ability to generate responses that are relevant to a given prompt. In light of this success, we study the capacity of such models to perform query reformulation and how they compare with long-standing query reformulation methods that use pseudo-relevance feedback. In particular, we investigate two representative query reformulation frameworks, GenQR and GenPRF. GenQR directly reformulates the user's input query, while GenPRF provides additional context for the query by making use of pseudo-relevance feedback information. For each reformulation method, we leverage different techniques, including fine-tuning and direct prompting, to harness the knowledge of language models. The reformulated queries produced by the generative models are shown to markedly benefit the effectiveness of a state-of-the-art retrieval pipeline on four TREC test collections (ranging from TREC 2004 Robust to the TREC 2019 Deep Learning track). Furthermore, our results indicate that the studied generative models can outperform various statistical query expansion approaches while remaining comparable to existing complex neural query reformulation models, with the added benefit of being simpler to implement.
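The sketch below illustrates the two prompting styles in the spirit of GenQR and GenPRF; the prompt wording and the `generate` callable are placeholders, not the prompts or models evaluated in the paper.

```python
# Prompt templates in the spirit of GenQR (reformulate the query directly) and GenPRF
# (condition on pseudo-relevance feedback passages). `generate` stands for any text
# generation call (e.g. a fine-tuned or prompted LM); it is a placeholder, not a real API.
def genqr_prompt(query):
    return (f"Improve the following search query so that it retrieves more relevant documents.\n"
            f"Query: {query}\nImproved query:")

def genprf_prompt(query, feedback_passages):
    context = "\n".join(feedback_passages[:3])   # top pseudo-relevant passages
    return (f"Context passages:\n{context}\n\n"
            f"Using the context, rewrite the search query with helpful expansion terms.\n"
            f"Query: {query}\nRewritten query:")

def reformulate(query, generate, feedback_passages=None):
    prompt = genqr_prompt(query) if feedback_passages is None else genprf_prompt(query, feedback_passages)
    return generate(prompt).strip()

# Example with a dummy generator standing in for a real language model:
reformulated = reformulate("arrest warrant extradition",
                           generate=lambda p: "arrest warrant extradition treaty fugitive")
```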

In the digital age, data is a valuable commodity, and data marketplaces offer lucrative opportunities for data owners to monetize their private data. However, data privacy is a significant concern, and differential privacy has become a popular solution to address it. Private data trading systems (PDQS) facilitate the trade of private data by determining which data owners to purchase data from and how much privacy to purchase, and by providing specific aggregation statistics while protecting the privacy of data owners. However, existing PDQS with separate procurement and query processes are prone to over-perturbing private data and lack trustworthiness. To address this issue, this paper proposes a PDQS framework with an integrated procurement and query process that avoids excessive perturbation of private data. We also present two instances of this framework, one based on a greedy approach and another based on a neural network. Our experimental results show that, under the same budget, both of our mechanisms outperform the mechanism with separately conducted procurement and query in terms of accuracy.
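As a rough illustration of an integrated procurement-and-query flow, the sketch below greedily buys privacy budget from the cheapest owners and then answers a single noisy mean query; the pricing, selection rule, and noise calibration are simplified assumptions, not the paper's mechanisms.

```python
# Toy integrated procurement-and-query flow: greedily buy privacy budget (epsilon) from
# the cheapest owners until the payment budget is spent, then answer one mean query with
# Laplace noise calibrated to the purchased epsilon (values assumed bounded in [0, 1]).
import random

def greedy_procure(owners, budget):
    """owners: list of (value, price_per_epsilon, epsilon_offered)."""
    bought = []
    for value, price, eps in sorted(owners, key=lambda o: o[1]):   # cheapest privacy first
        cost = price * eps
        if cost <= budget:
            budget -= cost
            bought.append((value, eps))
    return bought

def noisy_mean(bought, sensitivity=1.0):
    values = [v for v, _ in bought]
    eps = min(e for _, e in bought)               # conservative: weakest purchased guarantee
    true_mean = sum(values) / len(values)
    scale = sensitivity / (len(values) * eps)     # Laplace scale for a bounded mean
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)  # Laplace sample
    return true_mean + noise

owners = [(0.4, 2.0, 0.5), (0.9, 1.0, 0.5), (0.7, 3.0, 0.5)]
print(noisy_mean(greedy_procure(owners, budget=2.0)))
```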

Learning-based approaches have achieved remarkable performance in the domain of autonomous driving. Leveraging the impressive ability of neural networks and large amounts of human driving data, complex patterns and rules of driving behavior can be encoded in a model to benefit the autonomous driving system. Moreover, a growing number of data-driven approaches have been studied for the decision-making and motion-planning modules. However, the reliability and stability of neural networks remain uncertain. In this paper, we introduce a hierarchical planning architecture consisting of a high-level grid-based behavior planner and a low-level trajectory planner, which is highly interpretable and controllable. While the high-level planner is responsible for finding a consistent route, the low-level planner generates a feasible trajectory along it. We evaluate our method both in closed-loop simulation and in real-world driving, and demonstrate that the neural network planner achieves outstanding performance in complex urban autonomous driving scenarios.
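The sketch below shows the two-level structure in miniature: a grid-based route search followed by trajectory densification. Both levels are deliberately simplistic stand-ins for the planners described above.

```python
# Minimal two-level planner sketch: a grid-based high-level planner finds a coarse route
# with BFS, and a low-level planner densifies it into a trajectory by linear interpolation.
from collections import deque

def high_level_route(grid, start, goal):
    """grid[r][c] == 0 means free; returns a list of grid cells from start to goal."""
    queue, parents = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    route, cell = [], goal
    while cell is not None:
        route.append(cell)
        cell = parents[cell]
    return route[::-1]

def low_level_trajectory(route, points_per_segment=5):
    traj = []
    for (r0, c0), (r1, c1) in zip(route, route[1:]):
        for i in range(points_per_segment):
            a = i / points_per_segment
            traj.append((r0 + a * (r1 - r0), c0 + a * (c1 - c0)))
    traj.append(route[-1])
    return traj

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(low_level_trajectory(high_level_route(grid, (0, 0), (2, 0))))
```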

Interdisciplinary collaboration has become a driving force for scientific breakthroughs, and evaluating scholars' performance in interdisciplinary research is essential for promoting such collaborations. However, traditional scholar evaluation methods based solely on individual achievements do not consider interdisciplinary cooperation, creating a challenge for interdisciplinary scholar evaluation and recommendation. To address this issue, we propose a scholar embedding model that quantifies and represents scholars based on global semantic information and social influence, enabling real-time tracking of scholars' research trends. Our model incorporates semantic information and social influence for interdisciplinary scholar evaluation, laying the foundation for future interdisciplinary collaboration discovery and recommendation projects. We demonstrate the effectiveness of our model on a sample of scholars from the Beijing University of Posts and Telecommunications.
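A crude sketch of combining semantic and influence signals into one scholar vector is shown below; the bag-of-words features and degree-centrality score are placeholder choices, not the paper's embedding model.

```python
# Sketch of combining semantic and social-influence signals into one scholar vector:
# a bag-of-words embedding of a scholar's abstracts is concatenated with a simple
# co-authorship degree-centrality score.
import numpy as np
from collections import Counter

def semantic_vector(abstracts, vocabulary):
    counts = Counter(word for text in abstracts for word in text.lower().split())
    vec = np.array([counts[w] for w in vocabulary], dtype=float)
    return vec / (np.linalg.norm(vec) + 1e-9)

def influence_score(scholar, coauthor_edges):
    degree = sum(1 for a, b in coauthor_edges if scholar in (a, b))
    return degree / max(1, len(coauthor_edges))

def scholar_embedding(scholar, abstracts, vocabulary, coauthor_edges):
    return np.concatenate([semantic_vector(abstracts, vocabulary),
                           [influence_score(scholar, coauthor_edges)]])

vocab = ["network", "semantic", "graph", "privacy"]
edges = [("alice", "bob"), ("alice", "carol"), ("bob", "dave")]
print(scholar_embedding("alice", ["Semantic graph models for networks"], vocab, edges))
```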

Generative models are now capable of producing highly realistic images that look nearly indistinguishable from the data on which they are trained. This raises the question: if we have good enough generative models, do we still need datasets? We investigate this question in the setting of learning general-purpose visual representations from a black-box generative model rather than directly from data. Given an off-the-shelf image generator without any access to its training data, we train representations from the samples output by this generator. We compare several representation learning methods that can be applied to this setting, using the latent space of the generator to generate multiple "views" of the same semantic content. We show that for contrastive methods, this multiview data can naturally be used to identify positive pairs (nearby in latent space) and negative pairs (far apart in latent space). We find that the resulting representations rival those learned directly from real data, but that good performance requires care in the sampling strategy applied and the training method. Generative models can be viewed as a compressed and organized copy of a dataset, and we envision a future where more and more "model zoos" proliferate while datasets become increasingly unwieldy, missing, or private. This paper suggests several techniques for dealing with visual representation learning in such a future. Code is released on our project page: //ali-design.github.io/GenRep/
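A minimal sketch of the positive/negative sampling idea follows: two nearby latents yield a positive pair while an independently drawn latent yields a negative. The generator G and the perturbation scale sigma are placeholders; the paper studies several sampling strategies in more depth.

```python
# Building contrastive "views" from a black-box generator G: a positive pair comes from
# two nearby latents, a negative from an independent latent draw.
import torch

def sample_views(G, latent_dim=128, sigma=0.2, batch=8):
    z_anchor = torch.randn(batch, latent_dim)
    z_pos = z_anchor + sigma * torch.randn_like(z_anchor)    # nearby in latent space
    z_neg = torch.randn(batch, latent_dim)                    # far apart (independent draw)
    return G(z_anchor), G(z_pos), G(z_neg)

# Dummy generator standing in for an off-the-shelf image generator:
G = lambda z: torch.tanh(z @ torch.randn(z.shape[1], 3 * 32 * 32)).view(-1, 3, 32, 32)
anchor, positive, negative = sample_views(G)
```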
