
This paper presents the Firefighter Optimization (FFO) algorithm as a new hybrid metaheuristic for optimization problems. The algorithm draws inspiration from the collaborative strategies firefighters deploy in firefighting activities. To evaluate the performance of FFO, extensive experiments were conducted in which FFO was compared against 13 commonly used optimization algorithms, including the Ant Colony Optimization (ACO), Bat Algorithm (BA), Biogeography-Based Optimization (BBO), Flower Pollination Algorithm (FPA), Genetic Algorithm (GA), Grey Wolf Optimizer (GWO), Harmony Search (HS), Particle Swarm Optimization (PSO), Simulated Annealing (SA), Tabu Search (TS), and Whale Optimization Algorithm (WOA), across 24 benchmark functions of various dimensions and complexities. The results demonstrate that FFO achieves competitive performance and, in some scenarios, outperforms commonly adopted optimization algorithms in terms of the obtained fitness, execution time, and search space covered per unit of time.
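
The abstract does not give FFO's update equations, so the following is only a minimal sketch of the kind of benchmarking protocol it describes: running an optimizer on a standard benchmark function while recording the best fitness, wall-clock time, and evaluations per unit of time. A generic random-search baseline stands in for FFO; all names here are illustrative.

```python
# Minimal benchmarking sketch; random search is a stand-in for FFO,
# whose update rules are not specified in the abstract.
import time
import numpy as np

def sphere(x):
    """Classic benchmark function: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def random_search(f, dim, bounds, pop_size=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    best_f, evals = np.inf, 0
    for _ in range(iters):
        pop = rng.uniform(lo, hi, size=(pop_size, dim))
        for x in pop:
            fx = f(x)
            evals += 1
            best_f = min(best_f, fx)
    return best_f, evals

start = time.perf_counter()
best, evals = random_search(sphere, dim=10, bounds=(-5.0, 5.0))
elapsed = time.perf_counter() - start
# Mirrors the reported metrics: fitness, time, evaluations per second.
print(f"best fitness: {best:.4e}, evals/sec: {evals / elapsed:.0f}")
```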

Related Content

Federated Learning (FL), a promising distributed machine learning paradigm, has been widely adopted in Artificial Intelligence of Things (AIoT) applications. However, the efficiency and inference capability of FL are seriously limited by the presence of stragglers and data imbalance across massive AIoT devices, respectively. To address these challenges, we present a novel asynchronous FL approach named CaBaFL, which includes a hierarchical Cache-based aggregation mechanism and a feature Balance-guided device selection strategy. CaBaFL maintains multiple intermediate models simultaneously for local training. The hierarchical cache-based aggregation mechanism enables each intermediate model to be trained on multiple devices, aligning training times and mitigating the straggler issue. Specifically, each intermediate model is stored in a low-level cache for local training, and once it has been trained by a sufficient number of devices, it is moved to a high-level cache for aggregation. To address the problem of imbalanced data, the feature balance-guided device selection strategy in CaBaFL adopts the activation distribution as a metric, so that each intermediate model is trained across devices whose combined data distribution is balanced before aggregation. Experimental results show that, compared with state-of-the-art FL methods, CaBaFL achieves up to 9.26x training acceleration and 19.71% accuracy improvement.
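
A minimal sketch of the two-level cache idea described above, assuming models are flat weight vectors and a fixed number of devices `K` must train a model before it is promoted for aggregation. The names (`K`, `low_cache`, `high_cache`) are illustrative, not CaBaFL's actual API.

```python
# Hypothetical illustration of hierarchical cache-based aggregation.
import numpy as np

K = 3  # illustrative number of devices required before promotion

def local_train(weights, rng):
    """Stand-in for one device's local training step."""
    return weights - 0.01 * rng.normal(size=weights.shape)

rng = np.random.default_rng(0)
low_cache = [np.zeros(4) for _ in range(2)]    # intermediate models in training
train_counts = [0, 0]
high_cache = []                                # models ready for aggregation

for step in range(10):
    i = step % len(low_cache)                  # pick an intermediate model
    low_cache[i] = local_train(low_cache[i], rng)
    train_counts[i] += 1
    if train_counts[i] >= K:                   # trained by enough devices
        high_cache.append(low_cache[i])
        low_cache[i], train_counts[i] = np.zeros(4), 0

if high_cache:
    global_model = np.mean(high_cache, axis=0) # aggregate promoted models
    print("aggregated model:", global_model)
```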

State-of-the-art Symbolic Regression (SR) methods currently build specialized models, while the application of Large Language Models (LLMs) remains largely unexplored. In this work, we introduce the first comprehensive framework that utilizes LLMs for the task of SR. We propose In-Context Symbolic Regression (ICSR), an SR method that iteratively refines a functional form with an LLM and determines its coefficients with an external optimizer. ICSR leverages the strong mathematical priors of LLMs both to propose an initial set of possible functions given the observations and to refine them based on their errors. Our findings reveal that LLMs are able to successfully find symbolic equations that fit the given data, matching or outperforming the overall performance of the best SR baselines on four popular benchmarks, while yielding simpler equations with better out-of-distribution generalization.
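
A sketch of the propose-then-fit loop described above. The LLM call is stubbed out (`propose_form` simply returns a fixed parametric candidate); in ICSR it would return a refined functional form given the current errors. Only the coefficient fitting with an external optimizer is shown concretely.

```python
# Sketch of iterative form proposal + coefficient fitting; the LLM is mocked.
import numpy as np
from scipy.optimize import curve_fit

def propose_form(feedback=None):
    """Hypothetical stand-in for the LLM: returns a parametric candidate."""
    return lambda x, a, b, c: a * np.sin(b * x) + c

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 50)
y = 2.0 * np.sin(1.2 * x) + 0.3 + rng.normal(scale=0.05, size=x.shape)

form = propose_form()
params, _ = curve_fit(form, x, y, p0=[1.0, 1.0, 0.0])  # external optimizer
residual = np.mean((form(x, *params) - y) ** 2)
print("fitted coefficients:", params, "MSE:", residual)
# In the full loop, the residual would be fed back to the LLM
# to propose a refined functional form.
```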

Many Haskell textbooks explain the evaluation of pure functional programs as a process of stepwise rewriting using equations. However, the usual implementation techniques perform program transformations that make it difficult to produce the corresponding evaluation traces. This paper presents a tracing interpreter for a subset of Haskell based on the pattern matching calculus of Kahl. We start from a big-step semantics in the style of Launchbury and develop a small-step semantics in the style of Sestoft's machines. This machine is used in the implementation of a step-by-step educational interpreter. We also discuss some implementation decisions and present illustrative examples.
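
To make "evaluation as stepwise rewriting" concrete, here is a toy small-step evaluator (in Python rather than Haskell, and far simpler than the paper's machine): each step rewrites the leftmost-innermost reducible subexpression, and the trace of intermediate terms is printed, which is the kind of output an educational tracing interpreter produces.

```python
# Toy illustration of small-step rewriting with a printed trace.
def step(e):
    """One rewrite step on nested tuples ('+', l, r) / ('*', l, r) or int."""
    if isinstance(e, int):
        return None                      # already a value: no step possible
    op, l, r = e
    if not isinstance(l, int):
        return (op, step(l), r)          # reduce inside the left operand
    if not isinstance(r, int):
        return (op, l, step(r))          # reduce inside the right operand
    return l + r if op == '+' else l * r # both operands are values: apply

def show(e):
    return str(e) if isinstance(e, int) else f"({show(e[1])} {e[0]} {show(e[2])})"

expr = ('+', ('*', 2, 3), ('*', 4, 5))
while expr is not None:                  # print each intermediate term
    print(show(expr))
    expr = step(expr)
# ((2 * 3) + (4 * 5)) -> (6 + (4 * 5)) -> (6 + 20) -> 26
```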

This paper presents a general scheme for enhancing the convergence and performance of DETR (DEtection TRansformer). We investigate the slow convergence problem in transformers from a new perspective, suggesting that it arises from self-attention introducing no structural bias over inputs. To address this issue, we explore incorporating a position relation prior as an attention bias to augment object detection, after verifying its statistical significance using a proposed quantitative macroscopic correlation (MC) metric. Our approach, termed Relation-DETR, introduces an encoder that constructs position relation embeddings for progressive attention refinement, which further extends the traditional streaming pipeline of DETR into a contrastive relation pipeline to resolve the conflict between non-duplicate predictions and positive supervision. Extensive experiments on both generic and task-specific datasets demonstrate the effectiveness of our approach. Under the same configurations, Relation-DETR achieves a significant improvement (+2.0% AP compared to DINO), state-of-the-art performance (51.7% AP for 1x and 52.1% AP for 2x settings), and remarkably faster convergence (over 40% AP with only 2 training epochs) compared with existing DETR detectors on COCO val2017. Moreover, the proposed relation encoder serves as a universal plug-and-play component, bringing clear improvements to virtually any DETR-like method. Furthermore, we introduce a class-agnostic detection dataset, SA-Det-100k. Experimental results on this dataset show that the proposed explicit position relation achieves a clear improvement of 1.3% AP, highlighting its potential for universal object detection. The code and dataset are available at //github.com/xiuqhou/Relation-DETR.
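
A sketch of the general mechanism of injecting pairwise box relations as an attention bias, assuming boxes in (cx, cy, w, h) format. The exact relation embedding and refinement scheme of Relation-DETR are not reproduced here; a toy scalar projection of log-scale relative geometry stands in for the learned embedding.

```python
# Illustrative position-relation attention bias (not the paper's encoder).
import torch

def relation_bias(boxes):
    """Log-scale relative geometry between all box pairs -> scalar bias."""
    cx, cy, w, h = boxes.unbind(-1)
    dx = torch.log(torch.abs(cx[:, None] - cx[None, :]) / w[:, None] + 1e-6)
    dy = torch.log(torch.abs(cy[:, None] - cy[None, :]) / h[:, None] + 1e-6)
    dw = torch.log(w[:, None] / w[None, :])
    dh = torch.log(h[:, None] / h[None, :])
    rel = torch.stack([dx, dy, dw, dh], dim=-1)   # (N, N, 4)
    return rel.sum(-1)                            # toy projection to a scalar

boxes = torch.tensor([[0.20, 0.2, 0.1, 0.1],
                      [0.25, 0.2, 0.1, 0.1],
                      [0.80, 0.8, 0.2, 0.2]])
q = k = torch.randn(3, 16)
logits = q @ k.T / 16 ** 0.5 + relation_bias(boxes)  # bias added to attention
attn = torch.softmax(logits, dim=-1)
print(attn)
```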

Encoding information from 2D views of an object into a 3D representation is crucial for generalized 3D feature extraction. Such features can then enable 3D reconstruction, 3D generation, and other applications. We propose GOEmbed (Gradient Origin Embeddings), which encodes input 2D images into any 3D representation without requiring a pre-trained image feature extractor. This contrasts with typical prior approaches, in which input images are either encoded using 2D features extracted from large pre-trained models, or customized features are designed to handle different 3D representations; worse, encoders may not yet be available for specialized 3D neural representations such as MLPs and hash-grids. We extensively evaluate GOEmbed under different experimental settings on the OmniObject3D benchmark. First, we evaluate how the mechanism compares against prior encoding mechanisms on multiple 3D representations using an illustrative experiment called Plenoptic-Encoding. Second, the efficacy of the GOEmbed mechanism is further demonstrated by achieving a new SOTA FID of 22.12 on the OmniObject3D generation task using a combination of GOEmbed and DFM (Diffusion with Forward Models), which we call GOEmbedFusion. Finally, we evaluate how the GOEmbed mechanism bolsters sparse-view 3D reconstruction pipelines.
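
A minimal sketch of the gradient-origin idea, assuming (as in Gradient Origin Networks, from which the name appears to derive) that the embedding is the gradient of a reconstruction loss taken at a zero-initialized latent. The toy `decode` linear map stands in for a renderer or decoder into observed 2D views; it is an assumption, not the paper's architecture.

```python
# Hypothetical gradient-origin embedding sketch.
import torch

decode = torch.nn.Linear(8, 32)             # toy decoder: latent -> "views"
target_views = torch.randn(32)              # observed 2D information

z0 = torch.zeros(8, requires_grad=True)     # the "origin" latent
loss = torch.nn.functional.mse_loss(decode(z0), target_views)
(grad,) = torch.autograd.grad(loss, z0)
embedding = -grad                           # gradient-origin embedding
print(embedding.shape)                      # torch.Size([8])

# The embedding can then be decoded (or further refined) as the 3D
# representation conditioned on the input views.
```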

In this paper, we present a convergence analysis of the Group Projected Subspace Pursuit (GPSP) algorithm proposed by He et al. [HKL+23] (Group Projected subspace pursuit for IDENTification of variable coefficient differential equations (GP-IDENT), Journal of Computational Physics, 494, 112526) and extend its application to general block sparse signal recovery tasks. We prove that when the sampling matrix satisfies the Block Restricted Isometry Property (BRIP) with a sufficiently small Block Restricted Isometry Constant (BRIC), GPSP exactly recovers the true block sparse signals. When the observations are noisy, this convergence property of GPSP remains valid if the magnitude of the true signal is sufficiently large. GPSP selects features using a subspace projection criterion (SPC) for candidate inclusion and a response magnitude criterion (RMC) for candidate exclusion. We compare these criteria with their counterparts in other state-of-the-art greedy algorithms. Our theoretical analysis and numerical ablation studies reveal that SPC is critical to the superior performance of GPSP, and that RMC can enhance the robustness of feature identification when observations contain noise. We test and compare GPSP with other methods in diverse settings, including heterogeneous random block matrices, inexact observations, face recognition, and PDE identification. We find that GPSP outperforms the other algorithms in most cases across various levels of block sparsity and block sizes, justifying its effectiveness for general applications.
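
A simplified sketch of greedy block-sparse recovery in the spirit of the inclusion loop described above. Blocks are scored by the norm of their correlation with the residual (a simplification of GPSP's subspace projection criterion, without the pruning step), and the selected support is refit by least squares. All dimensions are illustrative.

```python
# Simplified block-greedy recovery; not GPSP's exact SPC/RMC loop.
import numpy as np

rng = np.random.default_rng(0)
n_blocks, bs, m, k = 10, 4, 30, 2           # blocks, block size, rows, sparsity
A = rng.normal(size=(m, n_blocks * bs))
x_true = np.zeros(n_blocks * bs)
x_true[0:4] = rng.normal(size=4)            # two active blocks: 0 and 3
x_true[12:16] = rng.normal(size=4)
y = A @ x_true

support, x = [], np.zeros_like(x_true)
for _ in range(k):
    r = y - A @ x                           # current residual
    scores = [np.linalg.norm(A[:, b*bs:(b+1)*bs].T @ r) for b in range(n_blocks)]
    support.append(int(np.argmax(scores)))  # include the best-scoring block
    cols = np.concatenate([np.arange(b*bs, (b+1)*bs) for b in support])
    sol, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
    x[:] = 0.0
    x[cols] = sol                           # refit on the selected support
print("recovered blocks:", sorted(support), "error:", np.linalg.norm(x - x_true))
```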

This paper proposes a novel method for sparse latent factor modeling using a new sparse asymptotic Principal Component Analysis (APCA). This approach analyzes the co-movements of large-dimensional panel data systems over time horizons within a general approximate factor model framework. Unlike existing sparse factor modeling approaches based on sparse PCA, which assume sparse loading matrices, our sparse APCA assumes that factor processes are sparse over the time horizon, while the corresponding loading matrices are not necessarily sparse. This development is motivated by the observation that the assumption of sparse loadings may not be appropriate for financial returns, where exposure to market factors is generally universal and non-sparse. We propose a truncated power method to estimate the first sparse factor process and a sequential deflation method for multi-factor cases. Additionally, we develop a data-driven approach to identify the sparsity of risk factors over the time horizon using a novel cross-sectional cross-validation method. Theoretically, we establish that our estimators are consistent under mild conditions. Monte Carlo simulations demonstrate that the proposed method performs well in finite samples. Empirically, we analyze daily stock returns for a balanced panel of S&P 500 stocks from January 2004 to December 2016. Through textual analysis, we examine specific events associated with the identified sparse factors that systematically influence the stock market. Our approach offers a new pathway for economists to study and understand the systematic risks of economic and financial systems over time.
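
A sketch of a truncated power method for the first sparse factor, assuming (per the abstract) that the factor process is sparse over the time horizon, so the iteration runs on the T x T covariance across time and hard-truncates to the `s` largest entries. Dimensions, data, and the sparsity level are illustrative.

```python
# Illustrative truncated power iteration for a time-sparse factor.
import numpy as np

rng = np.random.default_rng(0)
T, N, s = 200, 50, 20                 # time points, assets, factor sparsity
X = rng.normal(size=(T, N))           # panel of returns (toy data)
S = X @ X.T / N                       # T x T covariance across time

v = rng.normal(size=T)
v /= np.linalg.norm(v)
for _ in range(100):
    w = S @ v                         # power step
    idx = np.argsort(np.abs(w))[:-s]  # indices of the T - s smallest entries
    w[idx] = 0.0                      # hard-truncate to s nonzeros
    v = w / np.linalg.norm(w)
print("nonzero time points in factor:", np.count_nonzero(v))
```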

We present a tool called HHLPar for verifying hybrid systems modelled in Hybrid Communicating Sequential Processes (HCSP). HHLPar is built upon a Hybrid Hoare Logic for HCSP, which can reason about continuous-time properties of differential equations, as well as communication and the parallel composition of HCSP processes, with the help of parameterised trace assertions and their synchronization. The logic was formalised and proved sound in Isabelle/HOL, which constitutes a trustworthy foundation for the verification conducted by HHLPar. HHLPar implements the Hybrid Hoare Logic in Python and supports automated verification: on the one hand, it provides functions for symbolically decomposing HCSP processes, generating specifications for the separate sequential processes, and then composing them via synchronization to obtain the final specification for the whole parallel HCSP process; on the other hand, it is integrated with external solvers for handling differential equations and real arithmetic properties. We have conducted experiments on a simplified cruise control system to validate the performance of the tool.
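
To give a flavor of the differential-equation reasoning that such tools delegate to external solvers, here is a tiny, self-contained check that x^2 + y^2 is invariant along the dynamics x' = -y, y' = x, by showing its Lie derivative is identically zero. This illustrates one kind of proof obligation only; it is not HHLPar's actual interface.

```python
# Checking a differential invariant via its Lie derivative (illustrative).
import sympy as sp

x, y = sp.symbols('x y')
dynamics = {x: -y, y: x}              # right-hand sides of the ODE
invariant = x**2 + y**2

# Lie derivative: sum of d(invariant)/dv * v' over all state variables.
lie = sum(sp.diff(invariant, v) * rhs for v, rhs in dynamics.items())
print("Lie derivative:", sp.simplify(lie))  # 0 => the invariant holds
```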

Text classification is one of the most essential and fundamental problems in Natural Language Processing. While numerous recent text classification models apply sequential deep learning techniques, graph neural network-based models can directly deal with complex structured text data and exploit global information. Many real text classification applications can be naturally cast as a graph, which captures words, documents, and corpus-level global features. In this survey, we bring the coverage of methods up to 2023, including corpus-level and document-level graph neural networks. We discuss each of these methods in detail, covering the graph construction mechanisms and the graph-based learning process. Beyond the technical survey, we examine the underlying issues and future directions of text classification using graph neural networks. We also cover datasets, evaluation metrics, and experiment design, and present a summary of published performance on publicly available benchmarks, including a comprehensive comparison between different techniques and an analysis of the pros and cons of various evaluation metrics.
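
A sketch of corpus-level graph construction in the TextGCN style: word and document nodes, with word-document edges weighted by TF-IDF (word-word co-occurrence edges are omitted here for brevity). The corpus and all names are illustrative.

```python
# Toy corpus-level word-document graph (TextGCN-style construction).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat", "dogs chase cats", "the mat is red"]
W = TfidfVectorizer().fit_transform(docs).toarray()  # docs x words TF-IDF
n_docs, n_words = W.shape

# Adjacency over (documents + words) nodes; word-doc edges from TF-IDF.
n = n_docs + n_words
A = np.zeros((n, n))
A[:n_docs, n_docs:] = W
A[n_docs:, :n_docs] = W.T
np.fill_diagonal(A, 1.0)                # self-loops, as in common GCN practice
print("graph nodes:", n, "nonzero edges:", int(np.count_nonzero(A)))
```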

Graph Neural Networks (GNNs) are widely used for analyzing graph-structured data. Most GNN methods are highly sensitive to the quality of the graph structure and usually require a perfect graph for learning informative embeddings. However, the pervasiveness of noise in graphs necessitates learning robust representations for real-world problems. To improve the robustness of GNN models, many studies have been conducted around the central concept of Graph Structure Learning (GSL), which aims to jointly learn an optimized graph structure and the corresponding representations. To this end, in this survey, we broadly review recent progress in GSL methods for learning robust representations. Specifically, we first formulate a general paradigm of GSL, then review state-of-the-art methods classified by how they model graph structures, followed by applications that incorporate the idea of GSL in other graph tasks. Finally, we point out some issues in current studies and discuss future directions.
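
A minimal sketch of the joint-learning paradigm described above: an adjacency matrix is derived from learnable node embeddings (one common metric-learning choice in GSL), used for a GCN-style propagation step, and optimized together with the downstream classifier. The dimensions, dense softmax adjacency, and loss are all illustrative simplifications.

```python
# Minimal joint structure-and-representation learning loop (illustrative).
import torch

X = torch.randn(20, 8)                         # noisy node features
y = torch.randint(0, 2, (20,))                 # toy node labels
proj = torch.nn.Linear(8, 8)                   # learns the graph metric
clf = torch.nn.Linear(8, 2)
opt = torch.optim.Adam(list(proj.parameters()) + list(clf.parameters()), lr=0.01)

for _ in range(100):
    Z = proj(X)
    A = torch.softmax(Z @ Z.T, dim=-1)         # learned (dense) structure
    H = A @ X                                  # one propagation step
    loss = torch.nn.functional.cross_entropy(clf(H), y)
    opt.zero_grad()
    loss.backward()                            # gradients flow into the structure
    opt.step()
print("final loss:", float(loss))
```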
