
In cellular networks, authorities may need to physically locate user devices to track criminals or illegal equipment. This process involves authorized agents tracing devices by monitoring uplink signals with cellular operator assistance. However, tracking uncooperative uplink signal sources remains challenging, even for operators and authorities. Three key challenges persist for fine-grained localization: i) devices must generate sufficient, consistent uplink traffic over time, ii) target devices may transmit uplink signals at very low power, and iii) signals from cellular repeaters may hinder localization of the target device. While these challenges pose significant practical obstacles to localization, they have been largely overlooked in existing research. This work examines the impact of these real-world challenges on cellular localization and introduces the Uncooperative Multiangulation Attack (UMA) to address them. UMA can 1) force a target device to transmit traffic continuously, 2) boost the target's signal strength to maximum levels, and 3) uniquely differentiate between signals from the target and repeaters. Importantly, UMA operates without requiring privileged access to cellular operators or user devices, making it applicable to any LTE network. Our evaluations demonstrate that UMA effectively overcomes practical challenges in physical localization when devices are uncooperative.

Related Content

Academic research in static analysis produces software implementations. These implementations are time-consuming to develop, and some must be maintained so that further research can be built upon them. While necessary, these processes can quickly become challenging. This article documents the tools and techniques we have developed since 2017 to simplify the maintenance of Mopsa, a static analysis platform that aims to be sound. First, we describe an automated way to measure precision that does not require any baseline of true bugs obtained by manually inspecting the results; it also improves the transparency of the analysis and helps discover regressions during continuous integration. Second, we have taken inspiration from standard tools that observe the concrete execution of a program to design custom tools that observe the abstract execution of the analyzed program itself, such as abstract debuggers and profilers. Finally, we report on some cases of automated test-case reduction.

Graph diffusion, which iteratively propagates real-valued quantities across a graph, is used in numerous graph- and network-based applications. However, releasing diffusion vectors may reveal sensitive linking information in the data, such as transaction information in financial network data, and protecting the privacy of graph data is challenging due to its interconnected nature. This work proposes a novel graph diffusion framework with edge-level differential privacy guarantees based on noisy diffusion iterates. The algorithm injects Laplace noise at each diffusion iteration and adopts a degree-based thresholding function to mitigate the high sensitivity induced by low-degree nodes. Our privacy-loss analysis is based on Privacy Amplification by Iteration (PABI); to the best of our knowledge, this is the first effort that analyzes PABI with Laplace noise and provides relevant applications. We also introduce a novel Infinity-Wasserstein distance tracking method, which tightens the analysis of privacy leakage and makes PABI more applicable in practice. We evaluate this framework by applying it to Personalized PageRank computation for ranking tasks. Experiments on real-world network data demonstrate the superiority of our method under stringent privacy conditions.
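
As a rough illustration of the mechanism described above, the sketch below runs a personalized-PageRank-style diffusion in which Laplace noise is injected at every iteration and low-degree nodes are damped by a degree-based threshold. The damping rule, noise scale, and all parameter names (lam, deg_min, and so on) are illustrative assumptions of this sketch, not the paper's exact calibration or sensitivity analysis.

```python
import numpy as np

def noisy_diffusion(adj, seed, alpha=0.15, iters=20, lam=0.1, deg_min=5, rng=None):
    """Personalized-PageRank-style diffusion with per-iteration Laplace noise.

    adj     : (n, n) unweighted adjacency matrix (numpy array)
    seed    : index of the personalization node
    alpha   : teleport (restart) probability
    lam     : Laplace noise scale added at every iteration (assumed, not calibrated)
    deg_min : nodes below this degree are damped to curb their sensitivity
    """
    rng = np.random.default_rng() if rng is None else rng
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    # Degree-based thresholding: low-degree nodes get a damping factor < 1
    # (an illustrative stand-in for the paper's thresholding function).
    damp = np.minimum(1.0, deg / deg_min)
    # Row-stochastic transition matrix; isolated nodes handled crudely.
    P = adj / np.maximum(deg, 1)[:, None]
    s = np.zeros(n)
    s[seed] = 1.0
    x = s.copy()
    for _ in range(iters):
        x = (1 - alpha) * (P.T @ (damp * x)) + alpha * s   # diffusion step
        x = x + rng.laplace(scale=lam, size=n)             # noisy iterate
        x = np.clip(x, 0.0, None)                          # keep scores non-negative
    return x
```

PABI-style arguments concern releasing only the final iterate of such a noisy iteration rather than the full trajectory; the clipping step here is purely cosmetic and carries no privacy claim.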

Compromise estimation entails using a weighted average of outputs from several candidate models and is a viable alternative to model selection when the choice of model is not obvious. As such, it is a tool used by both frequentists and Bayesians, and in both cases the literature is vast, including studies of performance in simulations and applied examples. Frequentist researchers, however, often prove oracle properties, showing that a proposed average asymptotically performs at least as well as any other average comprising the same candidates; on the Bayesian side, such oracle properties are yet to be established. This paper considers Bayesian stacking estimators and evaluates their performance using frequentist asymptotics. Oracle properties are derived for estimators stacking Bayesian linear and logistic regression models, and they are combined with Monte Carlo experiments showing that Bayesian stacking may outperform the best candidate model included in the stack. The result is thus not only a frequentist motivation of a fundamentally Bayesian procedure, but also an extended range of methods available to frequentist practitioners.
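
For intuition about what stacking computes, here is a minimal sketch of the standard log-score stacking objective: weights on the probability simplex are chosen to maximize the held-out (e.g. leave-one-out) log predictive density of the mixture. The function name and the use of scipy's SLSQP solver are conveniences of this sketch, not the specific estimators analyzed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def stacking_weights(pred_dens):
    """Stack K candidate models by maximizing held-out log predictive density.

    pred_dens : (n, K) array; entry (i, k) is model k's predictive density
                evaluated at held-out observation i.
    Returns a weight vector on the probability simplex.
    """
    n, K = pred_dens.shape

    def neg_log_score(w):
        mix = pred_dens @ w                          # mixture predictive density
        return -np.sum(np.log(np.maximum(mix, 1e-300)))

    w0 = np.full(K, 1.0 / K)                         # start from equal weights
    res = minimize(
        neg_log_score, w0, method="SLSQP",
        bounds=[(0.0, 1.0)] * K,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x
```

The oracle results discussed above are statements about the asymptotic behavior of such weighted averages, not about any particular optimizer.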

As quantum computing advances, traditional cryptographic security measures, including token obfuscation, are increasingly vulnerable to quantum attacks. This paper introduces a quantum-enhanced approach to token obfuscation that leverages quantum superposition and multi-basis verification to establish a robust defense against these threats. In our method, tokens are encoded in superposition states, so that they exist in multiple states simultaneously until measured, which increases the complexity of the obfuscation. Multi-basis verification further secures these tokens by enforcing validation across multiple quantum bases, thwarting unauthorized access. Additionally, we incorporate a quantum decay protocol and a refresh mechanism to manage the token lifecycle securely. Our experimental results demonstrate significant improvements in token security and robustness, validating this approach as a promising solution for quantum-secure cryptographic applications. This work not only highlights the feasibility of quantum-based token obfuscation but also lays the foundation for future quantum-safe security architectures.
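
The sketch below is a loose, classical simulation of the basic idea, in the spirit of conjugate coding: each token bit is encoded in one of two conjugate single-qubit bases, so a verifier who knows the bases recovers it exactly, while a party guessing the bases gets random outcomes on mismatches. All names and parameters are assumptions of this illustration; the paper's actual encoding, decay protocol, and refresh mechanism are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Single-qubit basis states: Z basis {|0>, |1>} and X basis {|+>, |->}.
Z = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
X = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]
BASES = {"Z": Z, "X": X}

def encode(bit, basis):
    """Encode one token bit as a qubit state in the chosen basis."""
    return BASES[basis][bit]

def measure(state, basis):
    """Projective measurement of a qubit state in the given basis."""
    probs = np.array([abs(b @ state) ** 2 for b in BASES[basis]])
    return rng.choice([0, 1], p=probs / probs.sum())

# Token: random bits, each tagged with a secret, randomly chosen basis.
bits = rng.integers(0, 2, size=16)
bases = rng.choice(["Z", "X"], size=16)
qubits = [encode(b, ba) for b, ba in zip(bits, bases)]

# Verifier knows the bases: every bit is recovered exactly.
verified = all(measure(q, ba) == b for q, b, ba in zip(qubits, bits, bases))

# Attacker guesses bases: each wrong guess yields a uniformly random outcome,
# so roughly a quarter of the bits come out wrong on average.
guesses = rng.choice(["Z", "X"], size=16)
errors = sum(measure(q, g) != b for q, b, g in zip(qubits, bits, guesses))
print(verified, errors)
```

In a real quantum setting the encoded states cannot be copied or inspected without disturbance, which the classical arrays above cannot capture.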

Large Language Models (LLMs) have shown excellent generalization capabilities, which has led to the development of numerous models. These models introduce new architectures, tweak existing architectures with refined training strategies, increase context length, use higher-quality training data, and increase training time to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey comprehensively analyzes LLM architectures and their categorization, training strategies, training datasets, and performance evaluations, and discusses future research directions. Moreover, the paper discusses the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to regularly update this paper by incorporating new sections and featuring the latest LLM models.

Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training to large-scale and ever-growing graphs, the most promising solution is distributed training, which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training are still only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating the various optimization techniques it uses. First, distributed GNN training is classified into several categories according to workflow. The computational and communication patterns of each category, as well as the optimization techniques proposed by recent work, are then introduced. Second, the software frameworks and hardware platforms for distributed GNN training are introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.

We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, known as SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, SubgraphX explains the model's predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs by explicitly and directly identifying subgraphs. Experimental results show that SubgraphX achieves significantly improved explanations while keeping computations at a reasonable level.
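
To make the Shapley-value idea concrete, the sketch below estimates the Shapley value of a candidate subgraph by permutation sampling, treating the whole subgraph as a single player and the trained GNN as a black-box scoring function. This is a generic Monte Carlo estimator with assumed names (score, subgraph, others), not the paper's specific approximation scheme or its Monte Carlo tree search.

```python
import numpy as np

def shapley_of_subgraph(score, subgraph, others, n_samples=200, rng=None):
    """Monte Carlo estimate of the Shapley value of a subgraph.

    score    : black-box function mapping a set of kept nodes to the model's
               output (e.g. the GNN's probability for the predicted class
               when only those nodes are retained)
    subgraph : node ids treated together as a single coalition "player"
    others   : remaining node ids acting as individual players
    """
    rng = np.random.default_rng() if rng is None else rng
    others = list(others)
    target = set(subgraph)
    total = 0.0
    for _ in range(n_samples):
        perm = rng.permutation(others)
        cut = rng.integers(0, len(others) + 1)   # uniform insertion position
        coalition = set(perm[:cut].tolist())
        # Marginal contribution of the subgraph to this random coalition.
        total += score(coalition | target) - score(coalition)
    return total / n_samples
```

In practice, score() would mask or remove the excluded nodes, rerun the trained GNN, and read off the class probability; choosing which subgraphs to evaluate is where the Monte Carlo tree search of SubgraphX comes in.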

Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation or neither of these. These findings cast doubts on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
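
For concreteness, the link referred to above is usually stated through the following correspondence; the 1/L depth scaling is one commonly assumed normalization, and whether trained weights actually behave this way is exactly what the experiments probe.

```latex
% Residual update with an assumed 1/L depth scaling:
x_{k+1} = x_k + \tfrac{1}{L}\, f(x_k, \theta_k), \qquad k = 0, \dots, L-1,
% which, when \theta_k \approx \theta(k/L) for a smooth limiting function \theta,
% formally tends as L -> infinity to the neural ODE
\frac{\mathrm{d}x(t)}{\mathrm{d}t} = f\bigl(x(t), \theta(t)\bigr), \qquad t \in [0, 1].
```

The finding reported above is that trained networks may instead fall into scaling regimes where the better continuum description is a stochastic differential equation, or where no such limit exists.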

We describe the new field of mathematical analysis of deep learning. This field emerged around a list of research questions that were not answered within the classical framework of learning theory. These questions concern: the outstanding generalization power of overparametrized neural networks, the role of depth in deep architectures, the apparent absence of the curse of dimensionality, the surprisingly successful optimization performance despite the non-convexity of the problem, understanding what features are learned, why deep architectures perform exceptionally well in physical problems, and which fine aspects of an architecture affect the behavior of a learning task in which way. We present an overview of modern approaches that yield partial answers to these questions. For selected approaches, we describe the main ideas in more detail.

The attention model has become an important concept in neural networks and has been researched across diverse application domains. This survey provides a structured and comprehensive overview of developments in modeling attention. In particular, we propose a taxonomy that groups existing techniques into coherent categories. We review the different neural architectures in which attention has been incorporated and show how attention improves the interpretability of neural models. Finally, we discuss some applications in which modeling attention has a significant impact. We hope this survey will provide a succinct introduction to attention models and guide practitioners in developing approaches for their applications.
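
As a concrete anchor for readers new to the topic, one of the most widely used forms of attention is scaled dot-product attention, shown below; Q, K, and V denote the query, key, and value matrices and d_k the key dimension. The survey covers many variants beyond this one.

```latex
\mathrm{Attention}(Q, K, V) \;=\; \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```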
