
The interference from active to passive users is a well-recognized challenge in millimeter-wave (mmWave) communications. We propose a method that limits the interference on passive users (whose presence may go undetected since they do not transmit) at a small cost in throughput for active users. Our approach abstracts away the physical-layer component in a simple yet informative way, and leverages the directivity of mmWave links and the available network path diversity. We provide linear programming formulations, lower bounds on active users' rates, and numerical evaluations, and we establish a connection with the problem of (information-theoretically) secure communication over mmWave networks.
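As a toy illustration of the linear-programming viewpoint (not the paper's actual formulation), the sketch below maximizes the sum rate of two active links subject to a cap on the aggregate interference seen at a passive user's location; the leakage gains and interference budget are made-up numbers.

```python
# Toy LP, not the paper's formulation: maximize the sum rate of two active
# mmWave links subject to a cap on the aggregate interference at a passive
# user's location. Gains and the budget are made-up numbers.
from scipy.optimize import linprog

c = [-1.0, -1.0]               # linprog minimizes, so negate to maximize x1 + x2
g = [0.3, 0.8]                 # hypothetical leakage gains toward the passive user
I_max = 0.5                    # hypothetical interference budget at the passive user
A_ub = [g]                     # constraint: g[0]*x1 + g[1]*x2 <= I_max
b_ub = [I_max]
bounds = [(0, 1.0), (0, 1.0)]  # per-link normalized rate caps

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("optimal rates:", res.x, "sum rate:", -res.fun)
```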

Related Content

Networking: IFIP International Conferences on Networking. Explanation: international conference series on networking. Publisher: IFIP. SIT:

In recent years, Low-Earth Orbit (LEO) mega-constellations have emerged as a promising network technology and have ushered in a new era for democratizing Internet access. The Starlink network from SpaceX stands out as the only consumer-facing LEO network, with over 2M customers and more than 4000 operational satellites. In this paper, we conduct a first-of-its-kind, extensive, multi-faceted analysis of Starlink network performance leveraging several measurement sources. First, based on 19.2M crowdsourced M-Lab speed test measurements from 34 countries since 2021, we analyze Starlink's global performance relative to terrestrial cellular networks. Second, we examine Starlink's ability to support real-time latency- and bandwidth-critical web applications by analyzing the performance of (i) Zoom video conferencing and (ii) Luna cloud gaming, comparing them to 5G and terrestrial fiber. Third, we orchestrate targeted measurements from Starlink-enabled RIPE Atlas probes to shed light on the last-mile Starlink access and other factors affecting its performance globally. Finally, we conduct controlled experiments from Starlink dishes in two countries and analyze the impact of the globally synchronized "15-second reconfiguration intervals" of the links, which cause substantial latency and throughput variations. Our analysis provides revealing insights into global Starlink functionality and paints the most comprehensive picture of the LEO network's operation to date.
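A minimal sketch of how such periodic reconfiguration effects could be detected in an RTT time series via autocorrelation; the trace below is synthetic (spikes injected every 15 s), not actual Starlink data.

```python
# Synthetic demo, not Starlink traces: detect a ~15 s periodicity in an RTT
# time series via autocorrelation, in the spirit of checking for globally
# synchronized reconfiguration intervals.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 300, 0.5)                    # 300 s of RTT samples at 2 Hz
rtt = 40 + rng.normal(0, 2, t.size)           # baseline ~40 ms RTT with noise
rtt[(t % 15) < 0.5] += 25                     # inject a latency spike every 15 s

x = rtt - rtt.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]
lag = np.argmax(acf[4:]) + 4                  # skip the trivial zero-lag lobe
print(f"dominant period: {lag * 0.5:.1f} s")  # prints 15.0 for this trace
```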

Recently, contrastive learning has become a key component in fine-tuning code search models for software development efficiency and effectiveness. Given a search query, it pulls positive code snippets together while pushing negative samples away. Among contrastive learning objectives, InfoNCE is the most widely used loss function due to its strong performance. However, two problems with the negative samples in InfoNCE may deteriorate its representation learning: 1) large code corpora contain false negative samples due to duplication, and 2) InfoNCE fails to explicitly differentiate between the potential relevance of negative samples. For example, for a quick-sort query, a bubble-sort implementation is less ``negative'' than a file-saving function. In this paper, we tackle these problems by proposing a simple yet effective Soft-InfoNCE loss that inserts weight terms into InfoNCE. We apply three methods to estimate the weights of negative pairs and show that the vanilla InfoNCE loss is a special case of Soft-InfoNCE. Theoretically, we analyze the effects of Soft-InfoNCE on controlling the distribution of learnt code representations and on deducing a more precise mutual information estimate. We further discuss the advantages of the proposed loss over alternative designs. Extensive experiments demonstrate the effectiveness of Soft-InfoNCE and the weight estimation methods under state-of-the-art code search models on a large-scale public dataset consisting of six programming languages. Source code is available at \url{//github.com/Alex-HaochenLi/Soft-InfoNCE}.
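A minimal PyTorch sketch of a weighted InfoNCE in the spirit of the abstract: per-pair weight terms scale the negative similarities, and setting all weights to 1 recovers vanilla InfoNCE. The exact placement of the weights and the weight-estimation methods are the paper's contribution; this sketch only assumes the general form.

```python
# Sketch of a "soft" InfoNCE: weights w_ij rescale negative pairs; an
# all-ones weight matrix recovers the vanilla InfoNCE loss. How the weights
# are estimated is left abstract here.
import torch
import torch.nn.functional as F

def soft_infonce(query, code, weights, tau=0.05):
    """query, code: (B, d) L2-normalized embeddings; weights: (B, B) with 1s
    on the diagonal; off-diagonal entries re-weight negative pairs."""
    sim = query @ code.t() / tau                 # (B, B) similarity logits
    exp_sim = torch.exp(sim) * weights           # soften negatives by weights
    pos = exp_sim.diag()                         # matched query-code pairs
    return (-torch.log(pos / exp_sim.sum(dim=1))).mean()

B, d = 8, 128
q = F.normalize(torch.randn(B, d), dim=1)
c = F.normalize(torch.randn(B, d), dim=1)
w = torch.ones(B, B)                             # all-ones weights = vanilla InfoNCE
print(soft_infonce(q, c, w).item())
```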

Fine-tuning large pre-trained computer vision models is infeasible for resource-limited users. Visual prompt learning (VPL) has thus emerged as an efficient and flexible alternative to model fine-tuning through Visual Prompt as a Service (VPPTaaS). Specifically, the VPPTaaS provider optimizes a visual prompt given downstream data, and downstream users can use this prompt together with the large pre-trained model for prediction. However, this new learning paradigm may also pose security risks when the VPPTaaS provider instead provides a malicious visual prompt. In this paper, we take the first step to explore such risks through the lens of backdoor attacks. Specifically, we propose BadVisualPrompt, a simple yet effective backdoor attack against VPL. For example, poisoning $5\%$ of the CIFAR10 training data leads to attack success rates above $99\%$ with only a negligible model accuracy drop of $1.5\%$. In particular, we identify and then address a new technical challenge related to interactions between the backdoor trigger and the visual prompt, which does not exist in conventional, model-level backdoors. Moreover, we provide in-depth analyses of seven backdoor defenses at the model, prompt, and input levels. Overall, all these defenses are either ineffective or impractical against BadVisualPrompt, highlighting the critical vulnerability of VPL.
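A hedged sketch of the generic data-poisoning step such an attack relies on (not the authors' code): stamp a small trigger patch on 5% of the training images and relabel them to an attacker-chosen target class. The patch shape, location, and target label below are illustrative choices.

```python
# Generic poisoning sketch (not BadVisualPrompt itself): stamp a trigger
# patch on a fraction of training images and relabel them to a target class.
import torch

def poison(images, labels, rate=0.05, target=0, patch=3):
    """images: (N, C, H, W) floats in [0, 1]; labels: (N,). Returns copies."""
    images, labels = images.clone(), labels.clone()
    idx = torch.randperm(images.size(0))[: int(rate * images.size(0))]
    images[idx, :, -patch:, -patch:] = 1.0  # white square trigger, bottom-right
    labels[idx] = target                    # attacker-chosen target label
    return images, labels

imgs = torch.rand(100, 3, 32, 32)           # stand-in for CIFAR10 images
lbls = torch.randint(0, 10, (100,))
p_imgs, p_lbls = poison(imgs, lbls)
print(int((p_imgs != imgs).flatten(1).any(dim=1).sum()), "images poisoned")
```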

Factors within a large-scale software system that interact and strongly impact the system's response under a configuration are often difficult to identify. Although screening such a system for the existence of such interactions is important, determining their location is more useful for system engineers. Combinatorial interaction testing (CIT) concerns the creation of test suites that nonadaptively either detect or locate the desired interactions, each of at most a specified size, or show that no such set exists. Under the assumption that at most a given number of such interactions cause the response, locating arrays (LAs) guarantee a unique location for every such set of interactions; to cope with outliers and nondeterministic behavior in real systems, we additionally require the LAs to have a "separation" between these collections. State-of-the-art approaches generate LAs that can locate at most one interaction of size at most three, due to the massive number of interaction combinations for larger parameters when no constraints are given. This paper presents LocAG, a two-stage algorithm that generates (unconstrained) LAs using a simple but powerful partitioning strategy of these combinations. In particular, we are able to generate LAs with more factors, with any desired separation, and with greater interaction size than existing approaches.
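A small illustrative helper (not LocAG itself) showing the property locating arrays revolve around: for each candidate interaction, compute the set of test-suite rows that cover it; interactions are locatable when these covering sets are distinct.

```python
# Illustrative helper, not LocAG: enumerate which rows of a test suite cover
# a given interaction. Locating arrays require the covering-row sets of
# distinct candidate interaction sets to differ.
from itertools import combinations  # handy for enumerating all t-way interactions

def rows_covering(suite, interaction):
    """interaction: set of (factor_index, level) pairs; returns the indices
    of rows in `suite` matching every pair."""
    return {r for r, row in enumerate(suite)
            if all(row[f] == v for f, v in interaction)}

suite = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]  # 4 tests, 3 binary factors
i1 = frozenset({(0, 1), (2, 1)})                      # factor0=1 AND factor2=1
i2 = frozenset({(1, 0)})                              # factor1=0
print(rows_covering(suite, i1), rows_covering(suite, i2))
# Distinct covering sets ({3} vs {0, 2}) are what make interactions locatable.
```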

Evaluating retrieval performance without editorial relevance judgments is challenging, but user interactions can be used as relevance signals instead. Living labs offer a way for small-scale platforms to validate information retrieval systems with real users. If enough user interaction data are available, click models can be parameterized from historical sessions to evaluate systems before exposing users to experimental rankings. However, interaction data are sparse in living labs, and little is known about how click models can be validated for reliable user simulations when only moderate amounts of click data are available. This work introduces an evaluation approach for validating synthetic usage data generated by click models in data-sparse human-in-the-loop environments like living labs. We ground our methodology in the click model's estimate of a system ranking compared to a reference ranking for which the relative performance is known. Our experiments compare the reliability and robustness of different click models as more session log data becomes available. In our setup, simple click models can reliably determine the relative system performance with as few as 20 logged sessions for 50 queries. In contrast, more complex click models require more session data for reliable estimates, but they are a better choice in simulated interleaving experiments when enough session data are available. While it is easier for click models to distinguish between more diverse systems, it is harder to reproduce the system ranking based on the same retrieval algorithm with different interpolation weights. Our setup is entirely open, and we share the code to reproduce the experiments.
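As a sketch of the simpler end of the click-model spectrum discussed here, the code below fits a rank-based model, estimating a smoothed P(click | rank) from logged sessions, and uses it to simulate clicks on a new ranking. The log format is an assumption.

```python
# Rank-based click model sketch: estimate P(click | rank) from logged
# sessions, then sample synthetic clicks from those estimates.
import numpy as np

def fit_rank_ctr(sessions, n_ranks=10):
    """sessions: list of per-SERP binary click vectors, position-ordered."""
    clicks, views = np.zeros(n_ranks), np.zeros(n_ranks)
    for s in sessions:
        k = min(len(s), n_ranks)
        clicks[:k] += s[:k]
        views[:k] += 1
    return (clicks + 1) / (views + 2)        # add-one smoothed per-rank CTR

def simulate(ctr, n_results, rng):
    return (rng.random(n_results) < ctr[:n_results]).astype(int)

rng = np.random.default_rng(42)
logged = [np.array(s) for s in [[1, 0, 1, 0, 0], [0, 1, 0, 0, 0], [1, 0, 0, 0, 0]]]
ctr = fit_rank_ctr(logged, n_ranks=5)
print(ctr, simulate(ctr, 5, rng))
```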

Recent work introduced an algorithm and tool in Coq to automatically repair broken proofs in response to changes that correspond to type equivalences. We report on case studies of manual proof repair across type equivalences using an adaptation of this algorithm in Cubical Agda. Crucially, these case studies capture proof repair use cases that were challenging or impossible in prior work in Coq due to type-theoretic limitations, highlighting three benefits of working in Cubical Agda: (1) quotient types enrich the space of repairs we can express as type equivalences, (2) dependent path equality makes it possible to internally state and prove the correctness of repaired proofs relative to the original proofs, and (3) functional extensionality and transport make it simple to move between slow and fast computations after repair. The case studies also highlight two challenges of working in Cubical Agda, namely those introduced by: (1) the lack of tools for automation, and (2) proof relevance, especially as it interacts with definitional equality. We detail these benefits and challenges in hopes of setting the stage for later work in proof repair that bridges the benefits of both languages.

Graph neural networks (GNNs) are widely used to learn powerful representations of graph-structured data. Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations. However, there is an inherent gap between self-supervised tasks and downstream tasks in terms of optimization objective and training data, so conventional pre-training methods may not be effective enough at knowledge transfer since they make no adaptation for downstream tasks. To solve this problem, we propose a new transfer learning paradigm for GNNs that effectively leverages self-supervised tasks as auxiliary tasks to help the target task. Our method adaptively selects and combines different auxiliary tasks with the target task in the fine-tuning stage. We design an adaptive auxiliary loss weighting model that learns the weights of auxiliary tasks by quantifying the consistency between each auxiliary task and the target task, and we learn the weighting model through meta-learning. Our method can be applied to various transfer learning settings; it performs well not only in multi-task learning but also in pre-training and fine-tuning. Comprehensive experiments on multiple downstream tasks demonstrate that the proposed method effectively combines auxiliary tasks with the target task and significantly improves performance compared to state-of-the-art methods.
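A hedged sketch of one simple way to realize adaptive auxiliary-task weighting, using gradient cosine similarity rather than the paper's meta-learned weighting model: auxiliary losses whose gradients conflict with the target task's gradient are down-weighted. It assumes every loss depends on all trainable parameters.

```python
# Gradient-similarity weighting, a simpler stand-in for the paper's
# meta-learned weighting model: down-weight auxiliary losses whose gradients
# conflict with the target task's gradient.
import torch

def weighted_multitask_loss(model, target_loss, aux_losses):
    params = [p for p in model.parameters() if p.requires_grad]
    g_t = torch.cat([g.flatten() for g in
                     torch.autograd.grad(target_loss, params, retain_graph=True)])
    total = target_loss
    for aux in aux_losses:
        g_a = torch.cat([g.flatten() for g in
                         torch.autograd.grad(aux, params, retain_graph=True)])
        w = torch.clamp(torch.cosine_similarity(g_t, g_a, dim=0), min=0.0)
        total = total + w.detach() * aux    # weight acts as a constant here
    return total

model = torch.nn.Linear(4, 2)
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
target = torch.nn.functional.cross_entropy(model(x), y)
aux = [model(x).pow(2).mean()]              # a toy auxiliary objective
weighted_multitask_loss(model, target, aux).backward()
```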

The notion of "in-domain data" in NLP is often over-simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style, or level of formality. In addition, domain labels are often unavailable, making it challenging to build domain-specific systems. We show that massive pre-trained language models implicitly learn sentence representations that cluster by domain without supervision, suggesting a simple data-driven definition of domains in textual data. We harness this property and propose domain data selection methods based on such models, which require only a small set of in-domain monolingual data. We evaluate our data selection methods for neural machine translation across five diverse domains, where they outperform an established approach as measured both by BLEU and by the precision and recall of sentence selection with respect to an oracle.
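A minimal sketch of the resulting data-selection recipe: embed sentences with a pre-trained model, cluster the corpus without supervision, and keep sentences that fall into the clusters occupied by the small in-domain seed set. Random vectors stand in for real LM embeddings to keep the snippet self-contained.

```python
# Cluster-based domain data selection sketch; random vectors stand in for
# pre-trained LM sentence embeddings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
corpus_emb = rng.normal(size=(1000, 768))       # stand-in for LM sentence embeddings
in_domain_emb = rng.normal(size=(50, 768))      # small in-domain monolingual seed set

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(corpus_emb)
seed_clusters = set(km.predict(in_domain_emb))  # clusters the seed data falls into
selected = np.isin(km.labels_, list(seed_clusters))
print(f"selected {selected.sum()} candidate in-domain sentences")
```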

This paper proposes a generic method to learn interpretable convolutional filters in a deep convolutional neural network (CNN) for object classification, where each interpretable filter encodes features of a specific object part. Our method does not require additional annotations of object parts or textures for supervision; instead, we use the same training data as traditional CNNs. Our method automatically assigns each interpretable filter in a high conv-layer to an object part of a certain category during the learning process. Such explicit knowledge representations in the conv-layers of a CNN help people clarify the logic encoded in the CNN, i.e., what patterns the CNN extracts from an input image and uses for prediction. We have tested our method on benchmark CNNs with various structures to demonstrate its broad applicability. Experiments show that our interpretable filters are much more semantically meaningful than traditional filters.
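A heavily simplified sketch of one ingredient such methods can use (not the paper's exact filter loss): a regularizer that pushes each filter's activation map toward a single spatial peak, so a filter tends to fire on one consistent image region.

```python
# Simplified "peakiness" regularizer, not the paper's mutual-information
# filter loss: low spatial entropy means a filter concentrates its
# activation on one location (one candidate object part).
import torch
import torch.nn.functional as F

def peakiness_loss(fmap):
    """fmap: (B, C, H, W) activations; returns mean spatial entropy."""
    B, C, H, W = fmap.shape
    p = F.softmax(fmap.view(B, C, H * W), dim=-1)    # spatial distribution
    entropy = -(p * torch.log(p + 1e-9)).sum(dim=-1)  # (B, C)
    return entropy.mean()                             # minimize => peaky maps

x = torch.randn(2, 16, 14, 14, requires_grad=True)
loss = peakiness_loss(torch.relu(x))
loss.backward()
print(loss.item())
```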

Detecting 3D objects in point clouds plays a pivotal role in many real-world applications. However, 3D object detection performance lags behind that of 2D object detection due to the lack of powerful 3D feature extraction methods. To address this issue, we propose a 3D backbone network that learns rich 3D feature maps using sparse 3D CNN operations for 3D object detection in point clouds. The 3D backbone network can inherently learn 3D features from almost raw data, without compressing the point cloud into multiple 2D images, and generates rich feature maps for object detection. The sparse 3D CNN takes full advantage of the sparsity of 3D point clouds to accelerate computation and save memory, which makes the 3D backbone network feasible. Experiments conducted on the KITTI benchmark show that the proposed method achieves state-of-the-art performance for 3D object detection.
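A small sketch of the preprocessing step a sparse 3D CNN consumes: voxelizing a raw point cloud into the set of occupied integer cells. The voxel size and spatial ranges below are illustrative (KITTI-like) choices, not the paper's exact settings.

```python
# Voxelization sketch: turn raw xyz points into unique occupied voxel
# coordinates, the sparse input a sparse 3D CNN operates on.
import numpy as np

def voxelize(points, voxel=0.2, bounds=((0, 70.4), (-40, 40), (-3, 1))):
    """points: (N, 3) xyz. Returns unique integer voxel coordinates."""
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    mask = np.all((points >= lo) & (points < hi), axis=1)  # crop to range
    coords = ((points[mask] - lo) / voxel).astype(np.int64)
    return np.unique(coords, axis=0)                       # occupied voxels only

pts = np.random.uniform([0, -40, -3], [70.4, 40, 1], size=(10000, 3))
print(voxelize(pts).shape)
```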
