
Growing recognition of the potential for exploitation of personal data and of the shortcomings of prior privacy regimes has led to the passage of a multitude of new online privacy regulations. Some of these laws -- notably the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) -- have been the focus of large bodies of research by the computer science community, while others have received less attention. In this work, we analyze a set of Internet privacy and data protection regulations drawn from around the world -- both those that have frequently been studied by computer scientists and those that have not -- and develop a taxonomy of rights granted and obligations imposed by these laws. We then leverage this taxonomy to systematize 270 technical research papers published in computer science venues that investigate the impact of these laws and explore how technical solutions can complement legal protections. Finally, we analyze the results in this space through an interdisciplinary lens and make recommendations for future work at the intersection of computer science and legal privacy.


Bridging the gap between diffusion models and human preferences is crucial for their integration into practical generative workflows. While optimizing downstream reward models has emerged as a promising alignment strategy, concerns arise regarding the risk of excessive optimization with learned reward models, which potentially compromises ground-truth performance. In this work, we confront the reward overoptimization problem in diffusion model alignment through the lenses of both inductive and primacy biases. We first identify the divergence of current methods from the temporal inductive bias inherent in the multi-step denoising process of diffusion models as a potential source of overoptimization. Then, we surprisingly discover that dormant neurons in our critic model act as a regularization against overoptimization, while active neurons reflect primacy bias in this setting. Motivated by these observations, we propose Temporal Diffusion Policy Optimization with critic active neuron Reset (TDPO-R), a policy gradient algorithm that exploits the temporal inductive bias of intermediate timesteps, along with a novel reset strategy that targets active neurons to counteract the primacy bias. Empirical results demonstrate the superior efficacy of our algorithms in mitigating reward overoptimization.
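The active-neuron reset in TDPO-R can be illustrated with a minimal sketch. This is not the paper's implementation: the critic here is a single hypothetical hidden layer, neuron activity is approximated by a running mean of absolute activations, and the reset fraction is an assumed hyperparameter. The idea shown is only that the most-active critic neurons, which the paper links to primacy bias, are periodically reinitialized.

```python
import numpy as np

rng = np.random.default_rng(0)

def reset_active_neurons(W, b, activity, frac=0.25):
    """Reinitialize the most-active critic neurons (rows of W).

    `activity` holds the running mean |activation| of each hidden
    neuron; the top `frac` of neurons are reset to fresh random
    weights to counteract primacy bias. Returns the reset indices.
    """
    k = max(1, int(frac * len(activity)))
    idx = np.argsort(activity)[-k:]              # most active neurons
    W[idx] = rng.normal(0.0, 0.1, size=W[idx].shape)
    b[idx] = 0.0
    return idx

W = rng.normal(0.0, 0.1, size=(8, 4))            # toy critic layer
b = rng.normal(0.0, 0.1, size=8)
activity = np.array([0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3, 0.4])
reset = reset_active_neurons(W, b, activity, frac=0.25)
```

In an actual training loop this reset would run every few thousand gradient steps, with the activity statistics re-accumulated afterwards.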

Fervent calls for more robust governance of the harms associated with artificial intelligence (AI) are leading to the adoption around the world of what regulatory scholars have called a management-based approach to regulation. Recent initiatives in the United States and Europe, as well as the adoption of major self-regulatory standards by the International Organization for Standardization, share a core management-based paradigm. These management-based initiatives seek to motivate an increase in human oversight of how AI tools are trained and developed. Refinements and systematization of human-guided training techniques will thus be needed to fit within this emerging era of management-based regulation. If taken seriously, human-guided training can alleviate some of the technical and ethical pressures on AI, boosting AI performance with human intuition as well as better addressing the needs for fairness and effective explainability. In this paper, we discuss the connection between the emerging management-based regulatory frameworks governing AI and the need for human oversight during training. We broadly cover some of the technical components involved in human-guided training and then argue that the kinds of high-stakes use cases for AI that appear of most concern to regulators should lean more on human-guided training than on data-only training. We hope to foster a discussion between legal scholars and computer scientists on how to govern a domain of technology that is vast, heterogeneous, and dynamic in its applications and risks.

Many existing obstacle avoidance algorithms overlook the crucial balance between safety and agility, especially in environments of varying complexity. In our study, we introduce an obstacle avoidance pipeline based on reinforcement learning that enables drones to adapt their flying speed to the complexity of the environment. Moreover, to improve obstacle avoidance performance in cluttered environments, we propose a novel latent representation that is explicitly trained to retain memory of previous depth-map observations. Our findings confirm that varying speed leads to a superior balance of success rate and agility in cluttered environments. Additionally, our memory-augmented latent representation outperforms the latent representation commonly used in reinforcement learning. Finally, after minimal fine-tuning, we successfully deployed our network on a real drone for enhanced obstacle avoidance.
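The memory-augmented latent representation can be sketched as a simple recurrent encoder. This is an assumption-laden simplification, not the paper's architecture: the depth-map embedding is a random vector here, the recurrence is a plain tanh update, and the dimensions are arbitrary. It only illustrates how a hidden state can carry information from depth frames that have already left the field of view.

```python
import numpy as np

rng = np.random.default_rng(1)

def step_memory_latent(h, depth_embedding, W_h, W_x):
    """One step of a recurrent latent encoder (minimal sketch).

    The hidden state `h` fuses the current depth-map embedding with
    information from previous frames, so the policy's latent input
    reflects obstacles no longer visible in the current frame.
    """
    return np.tanh(W_h @ h + W_x @ depth_embedding)

latent_dim, obs_dim = 16, 32                     # assumed sizes
W_h = rng.normal(0.0, 0.1, (latent_dim, latent_dim))
W_x = rng.normal(0.0, 0.1, (latent_dim, obs_dim))

h = np.zeros(latent_dim)
for _ in range(5):                               # five consecutive depth frames
    depth_embedding = rng.normal(0.0, 1.0, obs_dim)
    h = step_memory_latent(h, depth_embedding, W_h, W_x)
```

In the paper's setting, such a latent state would be trained end-to-end so that past observations remain decodable from it.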

Sponge attacks aim to increase the energy consumption and computation time of neural networks deployed on hardware accelerators. Existing sponge attacks can be performed during inference via sponge examples or during training via Sponge Poisoning. Sponge examples add perturbations to the model's input to increase energy and latency, while Sponge Poisoning alters the objective function of a model to induce inference-time energy and latency effects. In this work, we propose a novel sponge attack called SpongeNet, the first sponge attack performed directly on the parameters of a pre-trained model. Our experiments show that SpongeNet can successfully increase the energy consumption of vision models while requiring fewer samples than Sponge Poisoning. They also indicate that poisoning defenses are ineffective against Sponge Poisoning unless adjusted specifically to counter it (i.e., by decreasing batch normalization bias values). Our work shows that SpongeNet is more effective on StarGAN than the state-of-the-art attack. Additionally, SpongeNet is stealthier than the previous Sponge Poisoning attack, as it does not require significant changes to the victim model's weights. Finally, our experiments indicate that the SpongeNet attack can be performed even when an attacker has access to only 1% of the entire dataset, achieving up to an 11% increase in energy consumption.
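The role of batch normalization biases can be made concrete with a toy sketch. Sparsity-aware accelerators save energy by skipping zero activations, so shifting a bias upward before a ReLU raises the fraction of neurons that fire and with it the energy cost. The numbers below are illustrative only: the pre-activations are synthetic Gaussian values and the bias shift of 2.0 is an assumed magnitude, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def relu_density(x, bias):
    """Fraction of nonzero activations after a bias shift and ReLU.

    On sparsity-aware hardware, a higher nonzero fraction means fewer
    skipped computations and therefore higher energy consumption.
    """
    return float(np.mean(np.maximum(x + bias, 0.0) > 0.0))

x = rng.normal(0.0, 1.0, size=10_000)        # toy pre-activation values
before = relu_density(x, bias=0.0)           # roughly half the neurons fire
after = relu_density(x, bias=2.0)            # shifted bias: almost all fire
```

This is also why the defenses mentioned above work by decreasing batch normalization bias values: a lower bias pushes activations back below zero and restores sparsity.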

Message brokers often mediate communication between data producers and consumers by adding variable-sized messages to ordered distributed queues. Our goal is to determine the number of consumers and the consumer-partition assignments needed to ensure that the rate of data consumption keeps up with the rate of data production. We model the problem as a variable item size bin packing problem. As the rate of production varies, new consumer-partition assignments are computed, which may require rebalancing a partition from one consumer to another. While a queue is being rebalanced, the data produced into it is not read, leading to additional latency costs. We therefore focus on the multi-objective optimization cost of minimizing both the number of consumers and the number of queue migrations. We present a variety of algorithms and compare them to established bin packing heuristics for this application. Comparing our proposed consumer group assignment strategy with Kafka's commonly employed strategy, ours achieves a 90th-percentile latency of 4.52 s versus Kafka's 217 s with the same number of consumers. Kafka's assignment strategy improved the consumer group's latency only in configurations that used at least 60% more resources than our approach.
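The bin packing view can be sketched with a classic first-fit-decreasing heuristic. This is one of the established baselines the abstract alludes to, not the paper's own algorithm: partitions (items) with given consumption rates are packed into consumers (bins) of fixed capacity, and migrations are counted against a hypothetical previous assignment. The rates, capacity, and partition names are all invented for illustration.

```python
def assign_partitions(rates, capacity, previous=None):
    """First-fit-decreasing assignment of partitions to consumers.

    `rates` maps partition -> required consumption rate; each consumer
    sustains at most `capacity`. Returns (assignment, #consumers,
    #migrations relative to `previous`). A sketch only; the paper's
    algorithms jointly minimize consumers and migrations.
    """
    consumers = []                        # list of [load, set_of_partitions]
    for part in sorted(rates, key=rates.get, reverse=True):
        for c in consumers:               # first consumer with spare capacity
            if c[0] + rates[part] <= capacity:
                c[0] += rates[part]
                c[1].add(part)
                break
        else:                             # no fit: open a new consumer
            consumers.append([rates[part], {part}])
    assignment = {p: i for i, (_, parts) in enumerate(consumers) for p in parts}
    migrations = 0
    if previous:
        migrations = sum(1 for p in assignment if previous.get(p) != assignment[p])
    return assignment, len(consumers), migrations

rates = {"p0": 6, "p1": 4, "p2": 3, "p3": 3, "p4": 2}
assignment, n_consumers, _ = assign_partitions(rates, capacity=10)
```

First-fit-decreasing ignores the migration objective entirely, which is exactly the gap the multi-objective formulation above is meant to close: a packing that is optimal in consumer count can still force many partitions to move.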

Data visualization (DV) systems are increasingly recognized for their profound capability to uncover insights from vast datasets, gaining attention across both industry and academia. Crafting data queries is an essential process within certain declarative visualization languages (DVLs; e.g., Vega-Lite, ECharts). The evolution of natural language processing (NLP) technologies has streamlined the use of natural language interfaces to visualize tabular data, offering a more accessible and intuitive user experience. However, current methods for converting natural language questions into data visualization queries, such as Seq2Vis, ncNet, and RGVisNet, still fall short of expectations despite their complex neural network architectures, leaving considerable room for improvement. Large language models (LLMs) such as ChatGPT and GPT-4 have established new benchmarks across a variety of NLP tasks, fundamentally altering the landscape of the field. Inspired by these advancements, we introduce Prompt4Vis, a novel framework that leverages LLMs and in-context learning to improve the generation of data visualization queries from natural language. Prompt4Vis comprises two key components: (1) a multi-objective example mining module, designed to identify the examples that genuinely strengthen the LLM's in-context learning capability for text-to-vis; and (2) a schema filtering module, which simplifies the schema of the database. Extensive experiments through 5-fold cross-validation on the NVBench dataset demonstrate the superiority of Prompt4Vis, which surpasses the state-of-the-art (SOTA) RGVisNet by approximately 35.9% and 71.3% on the dev and test sets, respectively. To the best of our knowledge, Prompt4Vis is the first work to introduce in-context learning into text-to-vis for generating data visualization queries.
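The interplay of the two components can be sketched as prompt assembly. This is a loose illustration, not Prompt4Vis itself: the example pair, table names, and query syntax are invented, and `keep_tables` stands in for the output of the schema filtering module while `examples` stands in for the output of the example mining module.

```python
def build_text_to_vis_prompt(question, schema, examples, keep_tables):
    """Assemble an in-context text-to-vis prompt (simplified sketch).

    `examples` are (NL question, visualization query) pairs selected
    upstream; `keep_tables` is the set of tables surviving schema
    filtering. Irrelevant tables are dropped before prompting.
    """
    filtered = {t: cols for t, cols in schema.items() if t in keep_tables}
    parts = [f"Q: {q}\nVIS: {vis}" for q, vis in examples]
    schema_txt = "; ".join(f"{t}({', '.join(cols)})" for t, cols in filtered.items())
    parts.append(f"Schema: {schema_txt}")
    parts.append(f"Q: {question}\nVIS:")
    return "\n\n".join(parts)

prompt = build_text_to_vis_prompt(
    "Show average price per category as a bar chart",
    {"products": ["name", "price", "category"], "logs": ["ts", "msg"]},
    [("Count orders per month as a line chart",
      "Visualize LINE SELECT month, COUNT(*) FROM orders GROUP BY month")],
    keep_tables={"products"},
)
```

Dropping the irrelevant `logs` table shortens the prompt and removes a distractor the model could otherwise latch onto, which is the intuition behind schema filtering.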

With the exponential surge in diverse multi-modal data, traditional uni-modal retrieval methods struggle to meet the needs of users demanding access to data across modalities. To address this, cross-modal retrieval has emerged, enabling interaction across modalities, facilitating semantic matching, and leveraging the complementarity and consistency between different modal data. Although prior literature has reviewed the cross-modal retrieval field, it exhibits numerous deficiencies in timeliness, taxonomy, and comprehensiveness. This paper conducts a comprehensive review of cross-modal retrieval's evolution, spanning from shallow statistical analysis techniques to vision-language pre-training models. Commencing with a comprehensive taxonomy grounded in machine learning paradigms, mechanisms, and models, the paper then delves deeply into the principles and architectures underpinning existing cross-modal retrieval methods. Furthermore, it offers an overview of widely used benchmarks, metrics, and performances. Lastly, the paper probes the prospects and challenges that confront contemporary cross-modal retrieval, while engaging in a discourse on potential directions for further progress in the field. To facilitate research on cross-modal retrieval, we provide an open-source code repository at https://github.com/BMC-SDNU/Cross-Modal-Retrieval.
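The semantic matching at the heart of most methods in this survey reduces, at inference time, to nearest-neighbor search in a shared embedding space. The sketch below assumes the hard part is already done: a text query and an image gallery have been projected into a common space by some trained encoders, and retrieval is cosine-similarity ranking. The three-dimensional vectors are toy values.

```python
import numpy as np

def retrieve(query_vec, gallery, k=2):
    """Rank gallery items of another modality by cosine similarity.

    Assumes both modalities were already projected into a shared
    embedding space by upstream (trained) encoders.
    """
    q = query_vec / np.linalg.norm(query_vec)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q                           # cosine similarity per item
    return np.argsort(scores)[::-1][:k]      # indices of top-k matches

text_query = np.array([1.0, 0.0, 1.0])       # toy text embedding
image_gallery = np.array([
    [0.9, 0.1, 1.1],                         # semantically close to the query
    [0.0, 1.0, 0.0],                         # unrelated
    [1.0, 0.0, 0.9],                         # also close
])
top = retrieve(text_query, image_gallery, k=2)
```

Everything interesting in the surveyed methods lives in how the shared space is learned (statistical alignment, metric learning, or vision-language pre-training); the retrieval step itself stays this simple.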

Graph neural networks generalize conventional neural networks to graph-structured data and have received widespread attention due to their impressive representation ability. In spite of these remarkable achievements, the performance of Euclidean models on graph-related learning tasks is still bounded by the representation ability of Euclidean geometry, especially for datasets with highly non-Euclidean latent anatomy. Recently, hyperbolic space has gained increasing popularity for processing graph data with tree-like structure and power-law distribution, owing to its exponential growth property. In this survey, we comprehensively revisit the technical details of current hyperbolic graph neural networks (HGNNs), unifying them into a general framework and summarizing the variants of each component. More importantly, we present various HGNN-related applications. Lastly, we identify several challenges that can serve as guidelines for further advancing graph learning in hyperbolic spaces.
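The exponential growth property can be made concrete with the two basic operations most HGNNs build on: the exponential map at the origin of the Poincaré ball, which carries Euclidean (tangent) features into hyperbolic space, and the hyperbolic distance. The sketch below fixes curvature -1 and uses toy 2-D vectors; it is a generic Poincaré-ball illustration, not any specific HGNN layer.

```python
import numpy as np

def exp_map_origin(v):
    """Exponential map at the origin of the Poincaré ball (curvature -1):
    exp_0(v) = tanh(||v||) * v / ||v||, carrying a tangent vector into the ball."""
    n = np.linalg.norm(v)
    if n == 0.0:
        return v
    return np.tanh(n) * v / n

def poincare_distance(x, y):
    """Hyperbolic distance on the Poincaré ball:
    d(x, y) = arcosh(1 + 2||x - y||^2 / ((1 - ||x||^2)(1 - ||y||^2)))."""
    num = 2.0 * np.sum((x - y) ** 2)
    den = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return float(np.arccosh(1.0 + num / den))

x = exp_map_origin(np.array([0.5, 0.0]))     # maps inside the unit ball
y = exp_map_origin(np.array([2.0, 0.0]))     # larger tangent norm, still inside
# On one geodesic through the origin, d = 2*artanh(r2) - 2*artanh(r1) = 4 - 1 = 3.
d = poincare_distance(x, y)
```

Note how `y` stays inside the unit ball however large the tangent vector grows, while distances keep increasing: volume near the boundary grows exponentially, which is why tree-like graphs embed with low distortion.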

Recently, Mutual Information (MI) has attracted attention in bounding the generalization error of Deep Neural Networks (DNNs). However, it is intractable to accurately estimate the MI in DNNs, thus most previous works have to relax the MI bound, which in turn weakens the information theoretic explanation for generalization. To address the limitation, this paper introduces a probabilistic representation of DNNs for accurately estimating the MI. Leveraging the proposed MI estimator, we validate the information theoretic explanation for generalization, and derive a tighter generalization bound than the state-of-the-art relaxations.
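To see why MI estimation is the crux, consider the crudest possible estimator: discretize both variables and compute MI from the joint histogram. This is emphatically not the paper's probabilistic-representation estimator; it is a baseline sketch (with invented sample sizes and bin counts) showing that even naive estimates separate dependent from independent variables, while accuracy, not direction, is what tight generalization bounds require.

```python
import numpy as np

def binned_mutual_information(x, y, bins=16):
    """Histogram-based MI estimate in nats (crude baseline sketch)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()                       # joint distribution
    p_x = p_xy.sum(axis=1, keepdims=True)            # marginals
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0                                  # avoid log(0)
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])))

rng = np.random.default_rng(3)
x = rng.normal(size=50_000)
noise = rng.normal(size=50_000)
mi_dependent = binned_mutual_information(x, x + 0.1 * noise)   # high MI
mi_independent = binned_mutual_information(x, noise)           # near zero
```

Such binned estimates are biased and resolution-limited, which is precisely the intractability in DNNs that the paper's probabilistic representation is designed to overcome.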

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.
