
Social media platforms, including Twitter (now X), have policies in place to maintain a safe and trustworthy advertising environment. However, the extent to which these policies are adhered to and enforced remains a subject of interest and concern. We present the first large-scale audit of advertising on Twitter focusing on compliance with the platform's advertising policies, particularly those related to political and adult content. We investigate the compliance of advertisements on Twitter with the platform's stated policies and the impact of the platform's recent acquisition on its advertising activity. By analyzing 34K advertisements from ~6M tweets, collected over six months, we find evidence of widespread noncompliance with Twitter's political and adult content advertising policies, suggesting a lack of effective ad content moderation. We also find that Elon Musk's acquisition of Twitter had a noticeable impact on the advertising landscape, with most existing advertisers either completely stopping their advertising activity or reducing it. Major brands decreased their advertising on Twitter, suggesting a negative immediate effect on the platform's advertising revenue. Our findings underscore the importance of external audits to monitor compliance and improve transparency in online advertising.
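As a rough illustration of how collected promoted tweets can be screened against restricted ad categories, the sketch below flags ads whose text matches political or adult-content keyword lists. The keyword lists, field names, and input format are illustrative assumptions; the paper's actual audit methodology is not described here.

```python
# Minimal sketch: keyword-based screening of promoted tweets for restricted
# ad categories (political / adult content). Keyword lists and the `text` /
# `advertiser` fields are hypothetical, not the paper's pipeline.
POLITICAL_TERMS = {"election", "vote", "ballot", "candidate", "senate"}
ADULT_TERMS = {"escort", "adult content", "xxx", "nsfw"}

def flag_ad(ad: dict) -> list[str]:
    """Return the restricted categories a promoted tweet may fall under."""
    text = ad.get("text", "").lower()
    flags = []
    if any(term in text for term in POLITICAL_TERMS):
        flags.append("political")
    if any(term in text for term in ADULT_TERMS):
        flags.append("adult")
    return flags

if __name__ == "__main__":
    ads = [
        {"advertiser": "ExamplePAC", "text": "Vote for our candidate this election!"},
        {"advertiser": "ShoeShop", "text": "New sneakers, 20% off."},
    ]
    for ad in ads:
        print(ad["advertiser"], flag_ad(ad) or ["no flags"])
```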

Related Content

Twitter is a social networking and microblogging service. It uses wireless networks, wired networks, and communication technologies to provide real-time messaging, and is a typical microblogging application.

Large Multimodal Models (LMMs) have demonstrated impressive performance across various vision and language tasks, yet their potential applications in recommendation tasks with visual assistance remain unexplored. To bridge this gap, we present a preliminary case study investigating the recommendation capabilities of GPT-4V(ision), a recently released LMM by OpenAI. We construct a series of qualitative test samples spanning multiple domains and employ these samples to assess the quality of GPT-4V's responses within recommendation scenarios. Evaluation results on these test samples show that GPT-4V has remarkable zero-shot recommendation abilities across diverse domains, thanks to its robust visual-text comprehension capabilities and extensive general knowledge. However, we have also identified some limitations in using GPT-4V for recommendations, including a tendency to provide similar responses when given similar inputs. This report concludes with an in-depth discussion of the challenges and research opportunities associated with utilizing GPT-4V in recommendation scenarios. Our objective is to explore the potential of extending LMMs from vision and language tasks to recommendation tasks. We hope to inspire further research into next-generation multimodal generative recommendation models, which can enhance user experiences by offering greater diversity and interactivity. All images and prompts used in this report will be accessible at //github.com/PALIN2018/Evaluate_GPT-4V_Rec.
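The sketch below shows the general shape of such a zero-shot visual recommendation query: a product image plus a textual prompt sent to a vision-capable chat model. The model name, prompt wording, and image URL are assumptions for illustration; the authors' actual prompts are in the linked repository.

```python
# Illustrative zero-shot visual recommendation query to a vision-capable chat
# model via the OpenAI Python client. Model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # any vision-capable chat model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Based on the clothing item in this photo, recommend "
                     "three complementary items and explain each choice."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/jacket.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)
```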

In this work we consider the hybrid Data-Driven Computational Mechanics (DDCM) approach, in which a smooth constitutive manifold is reconstructed to obtain a well-behaved nonlinear optimization problem (NLP) rather than the much harder discrete-continuous NLP (DCNLP) of the direct DDCM approach. The key focus is on the addition of geometric inequality constraints to the hybrid DDCM formulation. Therein, the required constraint force leads to a contact problem in the form of a mathematical program with complementarity constraints (MPCC), a problem class that is still less complex than the DCNLP. For this MPCC we propose a heuristic quick-shot solution approach, which can produce verifiable solutions by solving up to four NLPs. We perform various numerical experiments on three different contact problems of increasing difficulty to demonstrate the potential and limitations of this approach.
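To make the MPCC idea concrete, the sketch below solves a toy problem with a complementarity condition 0 <= x1 ⟂ x2 >= 0 by relaxing it to x1*x2 <= eps and solving a short sequence of standard NLPs with SciPy. This only illustrates the generic relaxation idea; it is not the paper's quick-shot scheme or its contact formulation.

```python
# Toy MPCC: minimize (x1-1)^2 + (x2-2)^2  s.t.  0 <= x1 complementary to x2 >= 0.
# The complementarity condition is relaxed to x1*x2 <= eps and tightened step by step.
import numpy as np
from scipy.optimize import minimize

def solve_relaxed(eps, x0):
    """One NLP solve with the complementarity constraint relaxed by eps."""
    objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
    constraints = [
        {"type": "ineq", "fun": lambda x: x[0]},               # x1 >= 0
        {"type": "ineq", "fun": lambda x: x[1]},               # x2 >= 0
        {"type": "ineq", "fun": lambda x: eps - x[0] * x[1]},  # x1*x2 <= eps
    ]
    return minimize(objective, x0, method="SLSQP", constraints=constraints)

x = np.array([0.5, 0.5])
for eps in (1.0, 1e-2, 1e-4, 1e-6):  # successively tighter relaxations
    x = solve_relaxed(eps, x).x
print("approximate solution:", x, "| complementarity residual:", x[0] * x[1])
# Expected to approach (0, 2), where the first variable vanishes and x1*x2 ~ 0.
```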

We propose a Holistic Return on Ethics (HROE) framework for understanding the return on organizational investments in artificial intelligence (AI) ethics efforts. This framework is useful for organizations that wish to quantify the return for their investment decisions. The framework identifies the direct economic returns of such investments, the indirect paths to return through intangibles associated with organizational reputation, and real options associated with capabilities. The holistic framework ultimately provides organizations with the competency to employ and justify AI ethics investments.

Submarine cables constitute the backbone of the Internet. However, these critical infrastructure components are vulnerable to several natural and man-made threats, and during failures, are difficult to repair in their remote oceanic environments. In spite of their crucial role, we have a limited understanding of the impact of submarine cable failures on global connectivity, particularly on the higher layers of the Internet. In this paper, we present Nautilus, a framework for cross-layer cartography of submarine cables and IP links. Using a corpus of public datasets and Internet cartographic techniques, Nautilus identifies IP links that are likely traversing submarine cables and maps them to one or more potential cables. Nautilus also gives each IP to cable assignment a prediction score that reflects the confidence in the mapping. Nautilus generates a mapping for 3.05 million and 1.43 million IPv4 and IPv6 links respectively, covering 91% of all active cables. In the absence of ground truth data, we validate Nautilus mapping using three techniques: analyzing past cable failures, using targeted traceroute measurements, and comparing with public network maps of two operators.
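A hedged sketch of the kind of scoring such a cross-layer mapping involves is given below: an IP link whose endpoints geolocate near the landing points of a cable receives a higher score for that cable. The data structures, distance scale, and scoring formula are illustrative assumptions, not Nautilus's actual algorithm.

```python
# Hypothetical scoring of an IP link against candidate submarine cables based on
# the distance of its geolocated endpoints to each cable's landing points.
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def score_cable(link_endpoints, landing_points, scale_km=500.0):
    """Higher score when both link endpoints lie near some landing point of the cable."""
    score = 1.0
    for endpoint in link_endpoints:
        d = min(haversine_km(endpoint, lp) for lp in landing_points)
        score *= math.exp(-d / scale_km)
    return score

# Hypothetical example: a link between hosts geolocated in Lisbon and Fortaleza,
# scored against two candidate cables described by their landing points.
link = [(38.72, -9.14), (-3.73, -38.52)]
cables = {
    "cable_A": [(38.70, -9.20), (-3.80, -38.50)],
    "cable_B": [(51.50, -0.12), (40.71, -74.01)],
}
for name, lps in cables.items():
    print(name, round(score_cable(link, lps), 4))
```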

Remote sensing and artificial intelligence are pivotal technologies of precision agriculture nowadays. The efficient retrieval of large-scale field imagery combined with machine learning techniques has shown success in various tasks like phenotyping, weeding, cropping, and disease control. This work introduces a machine learning framework for automated, large-scale, plant-specific trait annotation for the use case of disease severity scoring for Cercospora Leaf Spot (CLS) in sugar beet. Using concepts of Deep Label Distribution Learning (DLDL), special loss functions, and a tailored model architecture, we develop an efficient Vision Transformer based model for disease severity scoring called SugarViT. One novelty of this work is the combination of remote sensing data with environmental parameters of the experimental sites for disease severity prediction. Although the model is evaluated on this specific use case, it is kept as generic as possible so that it is also applicable to various image-based classification and regression tasks. With our framework, it is even possible to learn models on multi-objective problems, as we show by pretraining on environmental metadata.
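The sketch below illustrates the core DLDL idea the framework builds on: a scalar severity annotation is converted into a soft label distribution over discrete severity bins and the model is trained with a KL-divergence loss. The bin layout, sigma, and the tiny stand-in head are illustrative assumptions, not the SugarViT architecture.

```python
# Deep Label Distribution Learning (DLDL) sketch: Gaussian soft targets over
# severity bins plus a KL-divergence loss. Bins, sigma, and the linear head
# standing in for a ViT are illustrative choices.
import torch
import torch.nn.functional as F

BINS = torch.linspace(0.0, 100.0, steps=21)  # severity bins from 0% to 100%

def target_distribution(severity: torch.Tensor, sigma: float = 5.0) -> torch.Tensor:
    """Gaussian label distribution centered on the annotated severity."""
    logits = -((BINS.unsqueeze(0) - severity.unsqueeze(1)) ** 2) / (2 * sigma ** 2)
    return torch.softmax(logits, dim=1)

def dldl_loss(pred_logits: torch.Tensor, severity: torch.Tensor) -> torch.Tensor:
    """KL divergence between predicted and target label distributions."""
    target = target_distribution(severity)
    return F.kl_div(F.log_softmax(pred_logits, dim=1), target, reduction="batchmean")

if __name__ == "__main__":
    head = torch.nn.Linear(128, BINS.numel())    # stand-in for a ViT classification head
    features = torch.randn(8, 128)               # stand-in for image embeddings
    severity = torch.rand(8) * 100.0             # annotated severity in percent
    loss = dldl_loss(head(features), severity)
    loss.backward()
    print("DLDL loss:", loss.item())
```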

We propose VQ-NeRF, a two-branch neural network model that incorporates Vector Quantization (VQ) to decompose and edit reflectance fields in 3D scenes. Conventional neural reflectance fields use only continuous representations to model 3D scenes, despite the fact that objects are typically composed of discrete materials in reality. This lack of discretization can result in noisy material decomposition and complicated material editing. To address these limitations, our model consists of a continuous branch and a discrete branch. The continuous branch follows the conventional pipeline to predict decomposed materials, while the discrete branch uses the VQ mechanism to quantize continuous materials into individual ones. By discretizing the materials, our model can reduce noise in the decomposition process and generate a segmentation map of discrete materials. Specific materials can be easily selected for further editing by clicking on the corresponding area of the segmentation outcomes. Additionally, we propose a dropout-based VQ codeword ranking strategy to predict the number of materials in a scene, which reduces redundancy in the material segmentation process. To improve usability, we also develop an interactive interface to further assist material editing. We evaluate our model on both computer-generated and real-world scenes, demonstrating its superior performance. To the best of our knowledge, our model is the first to enable discrete material editing in 3D scenes.
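A minimal sketch of the vector-quantization step used by such a discrete branch is shown below: each continuous material embedding is snapped to its nearest codebook entry, with gradients passed through via the straight-through estimator. The codebook size, embedding dimension, and loss weights are illustrative assumptions, not the VQ-NeRF configuration.

```python
# Vector quantization with a learnable codebook and straight-through gradients.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes: int = 8, dim: int = 16, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, z: torch.Tensor):
        # z: (N, dim) continuous material embeddings
        dist = torch.cdist(z, self.codebook.weight)   # (N, num_codes) pairwise distances
        indices = dist.argmin(dim=1)                  # nearest codeword per sample
        z_q = self.codebook(indices)                  # quantized embeddings
        # Codebook + commitment losses; straight-through estimator for the output
        loss = ((z_q - z.detach()) ** 2).mean() + self.beta * ((z - z_q.detach()) ** 2).mean()
        z_q = z + (z_q - z).detach()
        return z_q, indices, loss

if __name__ == "__main__":
    vq = VectorQuantizer()
    z = torch.randn(32, 16, requires_grad=True)
    z_q, idx, vq_loss = vq(z)
    vq_loss.backward()
    print("discrete material ids:", idx[:8].tolist())
```

The discrete indices returned here are what make a segmentation map of materials possible: pixels sharing a codeword can be grouped and edited together.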

Internet of Things (IoT) technologies are the foundation of a fully connected world. Currently, IoT devices (or nodes) primarily use dedicated sensors to sense and collect data at large scales, and then transmit the data to target nodes or gateways through wireless communication for further processing and analytics. In recent years, research efforts have been made to explore the feasibility of using wireless communication for sensing (while assiduously improving the transmission performance of wireless signals), in an attempt to achieve integrated sensing and communication (ISAC) for smart IoT of the future. In this paper, we leverage the capabilities of LoRa, a long-range IoT communication technology, to explore the possibility of using LoRa signals for both sensing and communication. Based on LoRa, we propose ISAC designs in two typical scenarios of smart IoT, and verify the feasibility and effectiveness of our designs in soil moisture monitoring and human presence detection.
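As a hedged illustration of the sensing side of such a design, the sketch below infers human presence from fluctuations in received signal strength over a sliding window. The window length, threshold, and the use of plain RSSI variance are illustrative assumptions, not the paper's signal-processing pipeline.

```python
# Hypothetical presence detection from LoRa RSSI variance over a sliding window.
from collections import deque
import statistics

class PresenceDetector:
    def __init__(self, window: int = 20, variance_threshold: float = 4.0):
        self.rssi_window = deque(maxlen=window)
        self.variance_threshold = variance_threshold

    def update(self, rssi_dbm: float) -> bool:
        """Feed one RSSI sample; return True if motion/presence is suspected."""
        self.rssi_window.append(rssi_dbm)
        if len(self.rssi_window) < self.rssi_window.maxlen:
            return False
        return statistics.pvariance(self.rssi_window) > self.variance_threshold

if __name__ == "__main__":
    quiet = [-92.0 + 0.2 * (i % 3) for i in range(20)]                   # stable channel
    busy = [-92.0 + (5.0 if i % 4 == 0 else -3.0) for i in range(20)]    # perturbed channel
    detector = PresenceDetector()
    print("empty room   :", any(detector.update(r) for r in quiet))
    detector = PresenceDetector()
    print("person moving:", any(detector.update(r) for r in busy))
```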

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy, computation and memory intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically in regards to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
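The sketch below shows two standard techniques from category (1) using stock PyTorch utilities: global magnitude pruning followed by post-training dynamic quantization. The toy model and the 50% sparsity level are arbitrary choices for illustration.

```python
# Magnitude pruning + dynamic int8 quantization of a toy model with PyTorch.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# (1a) Prune 50% of the smallest-magnitude weights globally across both layers.
parameters_to_prune = [(m, "weight") for m in model if isinstance(m, nn.Linear)]
prune.global_unstructured(parameters_to_prune,
                          pruning_method=prune.L1Unstructured, amount=0.5)
for module, name in parameters_to_prune:
    prune.remove(module, name)  # make the sparsity permanent

# (1b) Quantize the remaining weights to int8 for inference.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 64)
print("pruned fp32 output:", model(x).shape, "| int8 output:", quantized(x).shape)
```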

Small data challenges have emerged in many learning problems, since the success of deep neural networks often relies on the availability of a huge amount of labeled data that is expensive to collect. To address this, many efforts have been made to train complex models with small data in an unsupervised or semi-supervised fashion. In this paper, we review the recent progress on these two major categories of methods. A wide spectrum of small data models is categorized in a big picture, where we show how they interplay with each other to motivate explorations of new ideas. We review the criteria for learning transformation-equivariant, disentangled, self-supervised and semi-supervised representations, which underpin the foundations of recent developments. Many instantiations of unsupervised and semi-supervised generative models have been developed on the basis of these criteria, greatly expanding the territory of existing autoencoders, generative adversarial nets (GANs) and other deep networks by exploring the distribution of unlabeled data for more powerful representations. While we focus on unsupervised and semi-supervised methods, we also provide a broader review of other emerging topics, from unsupervised and semi-supervised domain adaptation to the fundamental roles of transformation equivariance and invariance in training a wide spectrum of deep networks. It is impossible for us to write an exclusive encyclopedia covering all related works. Instead, we aim to explore the main ideas, principles and methods in this area to reveal where we are heading on the journey towards addressing the small data challenges in this big data era.
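As one concrete instance of the self-supervised representations discussed in this survey, the sketch below implements the rotation-prediction pretext task: a network learns features from unlabeled images by classifying which of four rotations was applied. The tiny CNN and training loop are a sketch, not any specific method from the survey.

```python
# Rotation-prediction pretext task: learn features from unlabeled images.
import torch
import torch.nn as nn

def rotate_batch(images: torch.Tensor):
    """Apply a random 0/90/180/270 degree rotation per image; return images and labels."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels

encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(16, 4)  # predicts the rotation class
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

unlabeled = torch.rand(32, 3, 32, 32)  # stands in for an unlabeled image batch
for step in range(5):                  # a few pretext-training steps
    x, y = rotate_batch(unlabeled)
    loss = criterion(head(encoder(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print("pretext loss:", loss.item())    # the encoder features can then be reused downstream
```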
