
Inspired by the success of Geographically Weighted Regression and its accounting for spatial variations, we propose GeogGNN -- a graph neural network model that incorporates geographical latitude and longitude coordinates. Using a synthetically generated dataset, we apply the algorithm to a 4-class classification problem in cybersecurity with seemingly realistic geographic coordinates centered in the Gulf Cooperation Council region. We demonstrate that it achieves higher accuracy than standard neural networks and convolutional neural networks that treat the coordinates as ordinary features. Encouraged by the accuracy gains of the GeogGNN model, we provide a general mathematical result demonstrating that a geographically weighted neural network will, in principle, always achieve higher accuracy in the classification of spatially dependent data by exploiting spatial continuity and local averaging.
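
To make the idea of geographic weighting concrete, the following is a minimal sketch (not the paper's implementation) of GWR-style aggregation for a GNN layer: neighbor messages are down-weighted by a Gaussian kernel of the great-circle distance between node coordinates. The function names, the kernel form, and the bandwidth parameter h are illustrative assumptions.

```python
# Sketch of geographically weighted neighbor aggregation (illustrative only).
import numpy as np

def haversine(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

def geog_weighted_aggregate(x, coords, edges, h=50.0):
    """Aggregate neighbor features weighted by a Gaussian kernel of distance.

    x:      (n, d) node features
    coords: (n, 2) latitude/longitude in degrees
    edges:  iterable of (src, dst) index pairs
    h:      kernel bandwidth in km (hyperparameter)
    """
    out = np.zeros_like(x)
    wsum = np.zeros(len(x))
    for src, dst in edges:
        dist = haversine(*coords[src], *coords[dst])
        w = np.exp(-(dist / h) ** 2)          # spatial kernel, as in GWR
        out[dst] += w * x[src]
        wsum[dst] += w
    return out / np.maximum(wsum, 1e-12)[:, None]

# Tiny usage example with approximate GCC coordinates (Abu Dhabi, Doha, Manama).
coords = np.array([[24.47, 54.37], [25.28, 51.53], [26.23, 50.59]])
x = np.eye(3)
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
print(geog_weighted_aggregate(x, coords, edges, h=300.0))
```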

Related Content

The use of Large Language Models (LLMs) in mathematical reasoning has become a cornerstone of related research, demonstrating the intelligence of these models and enabling potential practical applications through their advanced performance, for example in educational settings. Despite the variety of datasets and in-context learning algorithms designed to improve the ability of LLMs to automate mathematical problem solving, the lack of comprehensive benchmarking across different datasets makes it difficult to select an appropriate model for a specific task. In this project, we present a benchmark that fairly compares seven state-of-the-art in-context learning algorithms for mathematical problem solving across five widely used mathematical datasets on four powerful foundation models. Furthermore, we explore the trade-off between efficiency and performance, highlighting the practical applications of LLMs for mathematical reasoning. Our results indicate that larger foundation models like GPT-4o and LLaMA 3-70B can solve mathematical reasoning problems largely independently of the concrete prompting strategy, while for smaller models the in-context learning approach significantly influences the performance. Moreover, the optimal prompt depends on the chosen foundation model. We open-source our benchmark code to support the integration of additional models in future research.
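
As a rough illustration of the benchmark's structure (not the released code), the sketch below evaluates every combination of foundation model, in-context learning strategy, and dataset and records accuracy; query_model and is_correct are placeholder helpers standing in for an actual LLM API call and answer checker.

```python
# Illustrative evaluation grid for models x prompting strategies x datasets.
from itertools import product

def query_model(model, prompt):           # placeholder for a real LLM call
    return "42"

def is_correct(prediction, reference):    # placeholder answer checker
    return prediction.strip() == reference.strip()

def run_benchmark(models, strategies, datasets):
    results = {}
    for model, strategy, (name, problems) in product(models, strategies, datasets.items()):
        hits = sum(is_correct(query_model(model, strategy(q)), a) for q, a in problems)
        results[(model, strategy.__name__, name)] = hits / max(len(problems), 1)
    return results

def zero_shot_cot(question):              # one illustrative prompting strategy
    return f"{question}\nLet's think step by step."

print(run_benchmark(["model-a"], [zero_shot_cot], {"toy": [("What is 6 * 7?", "42")]}))
```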

The objective of this research is to optimize the eleventh iteration of You Only Look Once (YOLOv11) by developing size-specific modified versions of the architecture. These modifications involve pruning unnecessary layers and reconfiguring the main architecture of YOLOv11. Each proposed version is tailored to detect objects of a specific size range, from small to large. To ensure proper model selection based on dataset characteristics, we introduce an object size classifier program that identifies the most suitable modified version for a given dataset. The proposed models were evaluated on various datasets and compared with the original YOLOv11 and YOLOv8 models. The experimental results highlight significant improvements in computational resource efficiency, with the proposed models maintaining the accuracy of the original YOLOv11. In some cases, the modified versions outperformed the original model in detection performance. Furthermore, the proposed models demonstrated reduced model sizes and faster inference times. Model weights and the object size classifier can be found in this repository.
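
The following is a hedged sketch of what such an object-size classifier could look like: it inspects the distribution of annotated object areas in a dataset and recommends a size-specific variant. The COCO-style area thresholds and the variant names are illustrative assumptions, not the authors' exact rules.

```python
# Illustrative object-size classifier for selecting a size-specific model variant.
import numpy as np

def recommend_variant(box_areas_px):
    """box_areas_px: iterable of annotated object areas in pixels (w * h)."""
    areas = np.asarray(list(box_areas_px), dtype=float)
    small = np.mean(areas < 32 ** 2)               # COCO "small" threshold (assumed)
    large = np.mean(areas > 96 ** 2)               # COCO "large" threshold (assumed)
    medium = 1.0 - small - large
    shares = {"small-object variant": small,
              "medium-object variant": medium,
              "large-object variant": large}
    return max(shares, key=shares.get), shares

variant, shares = recommend_variant([20 * 25, 48 * 50, 150 * 160, 30 * 28])
print(variant, shares)
```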

In this paper, we present generalization bounds for the unsupervised risk in the Deep Contrastive Representation Learning framework, which employs deep neural networks as representation functions. We approach this problem from two angles. On the one hand, we derive a parameter-counting bound that scales with the overall size of the neural networks. On the other hand, we provide a norm-based bound that scales with the norms of the neural networks' weight matrices. Ignoring logarithmic factors, the bounds are independent of $k$, the size of the tuples provided for contrastive learning. To the best of our knowledge, this property is shared by only one other work, which employed a different proof strategy and suffers from a very strong exponential dependence on the depth of the network due to its use of the peeling technique. Our results circumvent this by leveraging powerful results on covering numbers with respect to uniform norms over samples. In addition, we utilize loss augmentation techniques to further reduce the dependency on matrix norms and the implicit dependence on network depth. In fact, our techniques allow us to produce many bounds for the contrastive learning setting with architectural dependencies similar to those arising in the study of the sample complexity of ordinary loss functions, thereby bridging the gap between the learning theories of contrastive learning and DNNs.
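
For context, a common formulation of the unsupervised contrastive risk over tuples consisting of an anchor $x$, a positive $x^{+}$, and negatives $x_{1}^{-}, \dots, x_{k}^{-}$ (assumed here purely for illustration; the paper's exact tuple convention and loss may differ) is
$$
L_{\mathrm{un}}(f) \;=\; \mathbb{E}_{(x,\, x^{+},\, x_{1}^{-}, \dots, x_{k}^{-})}\left[\, \ell\Big(\big\{ f(x)^{\top}\big(f(x^{+}) - f(x_{i}^{-})\big) \big\}_{i=1}^{k}\Big) \right],
$$
and generalization bounds of the kind discussed above control the gap between such a population risk and its empirical counterpart over the sampled tuples.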

This paper presents the Byzantine fault-tolerant agreement protocols Flutter and Blink. Both algorithms are deterministic, leaderless and signature-free; both assume partial synchrony and at least $(5f + 1)$ servers, where $f$ bounds the number of faults. The main contribution, Flutter, is a Total-Order Broadcast implementation that achieves faster broadcast-to-delivery latency by removing the extra message delay associated with serializing messages through a leader. In the "good case" where all processes are correct, the network is synchronous, and local clocks are well-synchronized, Flutter delivers client requests in $(2\Delta + \epsilon)$ time units, $\Delta$ being the message delay and $\epsilon$ an arbitrarily small constant. Under the same conditions, state-of-the-art protocols require $3\Delta$ time units. Flutter's good-case latency is quasi-optimal, meaning it cannot be improved upon by any finite amount. Under the hood, Flutter builds upon Blink, a (Representative) Binary Consensus implementation whose fast path enables decisions in $\Delta$ time units when all correct servers propose the same value. Blink generalizes the existing Binary Consensus solution Bosco from the $(7f + 1)$ to the $(5f + 1)$ setting.
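
As background for Blink's fast path, below is a hedged sketch of a Bosco-style one-step decision rule: each server broadcasts its proposal, waits for $n - f$ proposals, and decides immediately when enough of them carry the same value. The threshold follows the commonly stated Bosco rule; Blink's actual thresholds in the $(5f + 1)$ setting may differ.

```python
# Illustrative one-step fast path for binary consensus (Bosco-style rule).
def fast_path(received, n, f):
    """received: list of 0/1 proposals collected from n - f distinct servers."""
    assert len(received) >= n - f
    for v in (0, 1):
        if received.count(v) > (n + 3 * f) / 2:   # decide after a single message delay
            return v
    return None                                    # otherwise fall back to the slow path
```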

This paper argues that existing governance mechanisms for mitigating risks from AI systems are based on the `Big Compute' paradigm -- a set of assumptions about the relationship between AI capabilities and infrastructure -- that may not hold in the future. To address this, the paper introduces the `Proliferation' paradigm, which anticipates the rise of smaller, decentralized, open-sourced AI models which are easier to augment, and easier to train without being detected. It posits that these developments are both probable and likely to introduce both benefits and novel risks that are difficult to mitigate through existing governance mechanisms. The final section explores governance strategies to address these risks, focusing on access governance, decentralized compute oversight, and information security. Whilst these strategies offer potential solutions, the paper acknowledges their limitations and cautions developers to weigh benefits against developments that could lead to a `vulnerable world'.

Existing quantum computers can only operate with hundreds of qubits in the Noisy Intermediate-Scale Quantum (NISQ) era, while quantum distributed computing (QDC) is regarded as a reliable way to address this limitation, allowing quantum computers to achieve their full computational potential. However, similar to classical distributed computing, QDC also faces the problem of privacy leakage. Existing research has introduced quantum differential privacy (QDP) for privacy protection in central quantum computing, but there is no dedicated privacy protection mechanism for QDC. To fill this research gap, our paper introduces a novel concept called quantum R\'enyi differential privacy (QRDP), which incorporates the advantages of classical R\'enyi DP and is applicable in the QDC domain. Based on the new quantum R\'enyi divergence, QRDP provides delicate and flexible privacy protection through the parameter $\alpha$. In particular, QRDP composition is well suited for QDC, since it allows more precise control of the total privacy budget in scenarios requiring multiple quantum operations. We analyze a variety of noise mechanisms that can implement QRDP and derive the lowest privacy budget provided by these mechanisms. Finally, we investigate the impact of different quantum parameters on QRDP. Our simulations also show that adding noise makes the data less usable but increases the level of privacy protection.
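
For reference, the classical notions that QRDP lifts to the quantum setting (stated here only as background; the quantum definition replaces probability distributions with the mechanisms' output states and a quantum R\'enyi divergence) are the R\'enyi divergence of order $\alpha > 1$,
$$
D_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}\,\log\, \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right],
$$
and the requirement that a mechanism $M$ be $(\alpha, \varepsilon)$-R\'enyi DP, i.e., $D_{\alpha}\big(M(D) \,\|\, M(D')\big) \le \varepsilon$ for all adjacent inputs $D, D'$. R\'enyi DP composes additively: an $(\alpha, \varepsilon_1)$-RDP mechanism followed by an $(\alpha, \varepsilon_2)$-RDP mechanism is $(\alpha, \varepsilon_1 + \varepsilon_2)$-RDP, which is the property that enables the fine-grained budget accounting across multiple operations mentioned above.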

2D-based industrial anomaly detection has been widely discussed; however, multimodal industrial anomaly detection based on 3D point clouds and RGB images still has many untouched fields. Existing multimodal industrial anomaly detection methods directly concatenate the multimodal features, which leads to strong disturbance between features and harms the detection performance. In this paper, we propose Multi-3D-Memory (M3DM), a novel multimodal anomaly detection method with a hybrid fusion scheme: first, we design an unsupervised feature fusion with patch-wise contrastive learning to encourage the interaction of different modal features; second, we use a decision-layer fusion with multiple memory banks to avoid loss of information, together with additional novelty classifiers to make the final decision. We further propose a point feature alignment operation to better align the point cloud and RGB features. Extensive experiments show that our multimodal industrial anomaly detection model outperforms the state-of-the-art (SOTA) methods in both detection and segmentation precision on the MVTec-3D AD dataset. Code is available at //github.com/nomewang/M3DM.
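
As a rough, hedged sketch of the decision-layer fusion idea (not the authors' code), each modality scores a test feature by its distance to the nearest stored normal feature in a per-modality memory bank, and the per-modality scores are then combined; M3DM additionally trains novelty classifiers for this final decision, which the fixed weights below merely stand in for.

```python
# Illustrative late fusion of RGB and point-cloud anomaly scores via memory banks.
import numpy as np

def nn_anomaly_score(feature, memory_bank):
    """Distance of a test feature to its nearest neighbor in the bank."""
    return np.linalg.norm(memory_bank - feature, axis=1).min()

def fused_score(rgb_feat, pc_feat, rgb_bank, pc_bank, w_rgb=0.5, w_pc=0.5):
    """Decision-layer fusion of the two per-modality scores."""
    s_rgb = nn_anomaly_score(rgb_feat, rgb_bank)
    s_pc = nn_anomaly_score(pc_feat, pc_bank)
    return w_rgb * s_rgb + w_pc * s_pc     # stands in for a learned novelty classifier

rng = np.random.default_rng(0)
rgb_bank, pc_bank = rng.normal(size=(128, 64)), rng.normal(size=(128, 32))
print(fused_score(rng.normal(size=64), rng.normal(size=32), rgb_bank, pc_bank))
```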

Graph Neural Networks (GNNs) have been successfully used in many problems involving graph-structured data, achieving state-of-the-art performance. GNNs typically employ a message-passing scheme, in which every node aggregates information from its neighbors using a permutation-invariant aggregation function. Standard well-examined choices such as the mean or sum aggregation functions have limited capabilities, as they are not able to capture interactions among neighbors. In this work, we formalize these interactions using an information-theoretic framework that notably includes synergistic information. Driven by this definition, we introduce the Graph Ordering Attention (GOAT) layer, a novel GNN component that captures interactions between nodes in a neighborhood. This is achieved by learning local node orderings via an attention mechanism and processing the ordered representations using a recurrent neural network aggregator. This design allows us to make use of a permutation-sensitive aggregator while maintaining the permutation-equivariance of the proposed GOAT layer. The GOAT model demonstrates improved performance in modeling graph metrics that capture complex information, such as the betweenness centrality and the effective size of a node. In practical use cases, its superior modeling capability is confirmed by its success on several real-world node classification benchmarks.
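
A hedged PyTorch sketch of the aggregation pattern described above: attention scores over (center, neighbor) pairs induce a local ordering, and an LSTM consumes the neighbors in that order. Class and parameter names are illustrative, and dimensions are simplified relative to the paper.

```python
# Illustrative attention-ordered, RNN-based neighborhood aggregation.
import torch
import torch.nn as nn

class OrderedNeighborAggregator(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)          # attention over (center, neighbor)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, h_center, h_neighbors):
        # h_center: (dim,), h_neighbors: (num_neighbors, dim)
        pair = torch.cat([h_center.expand_as(h_neighbors), h_neighbors], dim=-1)
        scores = self.score(pair).squeeze(-1)       # (num_neighbors,)
        order = torch.argsort(scores, descending=True)
        ordered = h_neighbors[order].unsqueeze(0)   # (1, num_neighbors, dim)
        _, (h_last, _) = self.rnn(ordered)          # permutation-sensitive aggregator
        return h_last.squeeze(0).squeeze(0)         # aggregated neighborhood vector

agg = OrderedNeighborAggregator(dim=16)
out = agg(torch.randn(16), torch.randn(5, 16))
print(out.shape)                                    # torch.Size([16])
```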

Technology ecosystems often undergo significant transformations as they mature. For example, telephony, the Internet, and PCs all started with a single provider, but in the United States each is now served by a competitive market that uses comprehensive and universal technology standards to provide compatibility. This white paper presents our view on how the cloud ecosystem, barely over fifteen years old, could evolve as it matures.

Recently, a considerable literature has grown up around the theme of Graph Convolutional Networks (GCNs). How to effectively leverage the rich structural information in complex graphs, such as knowledge graphs with heterogeneous types of entities and relations, remains a primary open challenge in the field. Most GCN methods are either restricted to graphs with a homogeneous type of edges (e.g., citation links only) or focus on representation learning for nodes only, instead of jointly propagating and updating the embeddings of both nodes and edges for target-driven objectives. This paper addresses these limitations by proposing a novel framework, the Knowledge Embedding based Graph Convolutional Network (KE-GCN), which combines the power of GCNs in graph-based belief propagation with the strengths of advanced knowledge embedding (a.k.a. knowledge graph embedding) methods, and goes beyond both. Our theoretical analysis shows that KE-GCN offers an elegant unification of several well-known GCN methods as special cases, with a new perspective on graph convolution. Experimental results on benchmark datasets show the advantageous performance of KE-GCN over strong baseline methods in the tasks of knowledge graph alignment and entity classification.
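
A hedged sketch of the joint node/edge update pattern described above: each tail node aggregates relation-conditioned messages from its heads (here a simple TransE-style translation), and each relation aggregates messages from the node pairs it connects. KE-GCN's actual update is driven by gradients of a knowledge-embedding score; this only illustrates the idea of propagating and updating both kinds of embeddings together.

```python
# Illustrative joint propagation of node and relation (edge) embeddings.
import numpy as np

def joint_update(node_emb, rel_emb, triples, step=0.5):
    """One illustrative layer; triples are (head, relation, tail) index triples."""
    node_msg, rel_msg = np.zeros_like(node_emb), np.zeros_like(rel_emb)
    node_deg = np.ones((len(node_emb), 1))   # start at 1 to avoid division by zero
    rel_deg = np.ones((len(rel_emb), 1))
    for h, r, t in triples:
        node_msg[t] += node_emb[h] + rel_emb[r]   # relation-conditioned message to tail
        node_deg[t] += 1
        rel_msg[r] += node_emb[t] - node_emb[h]   # node-conditioned message to relation
        rel_deg[r] += 1
    new_nodes = np.tanh(node_emb + step * node_msg / node_deg)
    new_rels = np.tanh(rel_emb + step * rel_msg / rel_deg)
    return new_nodes, new_rels

nodes, rels = joint_update(np.random.randn(4, 8), np.random.randn(2, 8),
                           [(0, 1, 2), (2, 0, 3)])
```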
