
Space-air-ground integrated networks (SAGINs) will play a key role in 6G communication systems. They are considered a promising technology for enhancing network capacity in highly dense agglomerations and for providing connectivity in rural areas. The multi-layer and heterogeneous nature of SAGINs calls for an innovative design of their multi-tier associations. We model the SAGIN association problem using multi-sided matching theory. Our aim is to provide a reliable, asynchronous, and fully distributed approach that associates nodes across the layers so that the total end-to-end rate of the assigned agents is maximized. To this end, the problem is modeled as a multi-sided many-to-one matching game, and a randomized matching algorithm with low information exchange is proposed. The algorithm is shown to reach an efficient and stable association between nodes in adjacent layers. Our simulation results show that the proposed approach achieves significant gains compared to greedy and distance-based algorithms.
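To make the association model more concrete, here is a minimal sketch of a randomized, low-information, better-response dynamic for a single many-to-one association layer. The function name, data layout, and update rule are illustrative assumptions, not the authors' algorithm, which operates across multiple tiers.

```python
import random

def randomized_association(rate, capacity, iters=10_000, seed=0):
    """Hedged sketch of a randomized many-to-one association.

    rate[u][a] : end-to-end rate user u would get if served by upper-layer node a
    capacity[a]: number of users node a can serve

    Repeatedly picks a random user and a random node; the user switches to the
    node if that improves its rate and the node has spare capacity. This mimics
    an asynchronous, low-information better-response dynamic.
    """
    rng = random.Random(seed)
    users = list(range(len(rate)))
    nodes = list(range(len(capacity)))
    match = {u: None for u in users}   # current association of each user
    load = {a: 0 for a in nodes}       # current number of users per node
    for _ in range(iters):
        u = rng.choice(users)
        a = rng.choice(nodes)
        current = match[u]
        cur_rate = rate[u][current] if current is not None else 0.0
        if rate[u][a] > cur_rate and load[a] < capacity[a]:
            if current is not None:
                load[current] -= 1
            match[u] = a
            load[a] += 1
    total = sum(rate[u][match[u]] for u in users if match[u] is not None)
    return match, total

# usage (rates in e.g. Mbit/s, capacities in number of users):
# match, total = randomized_association([[3.0, 1.0], [2.5, 2.0]], [1, 2])
```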

Related Content


In a multiple-partners matching problem, agents can have multiple partners up to their capacities. In this paper we consider both the two-sided many-to-many stable matching problem and the one-sided stable fixtures problem under lexicographic preferences. We study strong core and Pareto-optimal solutions for this setting from a computational point of view. First, we provide an example showing that the strong core can be empty even under these severe restrictions for many-to-many problems, and that deciding the non-emptiness of the strong core is NP-hard. We also show that, for a given matching, checking Pareto-optimality and the strong core property are co-NP-complete problems for the many-to-many problem, and that deciding the existence of a complete Pareto-optimal matching is NP-hard for the fixtures problem. On the positive side, we give efficient algorithms for finding a near-feasible strong core solution, where the capacities are violated by at most one unit for each agent, and for finding a half-matching in the strong core of fractional matchings. These polynomial-time algorithms are based on the Top Trading Cycle algorithm. Finally, we show that finding a maximum-size Pareto-optimal matching can be done efficiently for many-to-many problems, in contrast with the hardness result for the fixtures problem.
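Since the positive results above are built on the Top Trading Cycle algorithm, the following minimal sketch of classic TTC on a one-to-one housing market may help make it concrete. The paper adapts the idea to capacitated, multiple-partner settings; the names and data layout here are illustrative.

```python
def top_trading_cycles(preferences, endowment):
    """Classic Top Trading Cycles on a one-to-one housing market.

    preferences[a]: a's complete ranking of houses, most preferred first
    endowment[a]  : the house agent a initially owns
    Returns a dict mapping each agent to the house it ends up with.
    """
    owner = {house: agent for agent, house in endowment.items()}
    remaining = set(preferences)
    assignment = {}
    while remaining:
        # each remaining agent points to the current owner of its favourite remaining house
        points_to = {}
        for a in remaining:
            fav = next(h for h in preferences[a] if owner[h] in remaining)
            points_to[a] = (fav, owner[fav])
        # walk the pointers from an arbitrary agent until an agent repeats: that is a cycle
        path, seen = [], set()
        a = next(iter(remaining))
        while a not in seen:
            seen.add(a)
            path.append(a)
            a = points_to[a][1]
        cycle = path[path.index(a):]
        # every agent in the cycle trades into its favourite house and leaves the market
        for a in cycle:
            assignment[a] = points_to[a][0]
            remaining.remove(a)
    return assignment

# usage:
# preferences = {"a": ["h2", "h1", "h3"], "b": ["h1", "h2", "h3"], "c": ["h1", "h3", "h2"]}
# endowment   = {"a": "h1", "b": "h2", "c": "h3"}
# top_trading_cycles(preferences, endowment)   # a and b trade; c keeps h3
```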

Maximal exact matches (MEMs) have been widely used in bioinformatics at least since Li (2013) presented BWA-MEM. Building on work by Bannai, Gagie and I (2018), Rossi et al.\ (2022) recently built an index called MONI, based on the run-length compressed Burrows-Wheeler Transform, that can find MEMs efficiently with respect to pangenomes. In this paper we define $k$-MEMs to be maximal substrings of a pattern that each occur exactly at $k$ times in a text (so a MEM is a 1-MEM) and show that, when $k$ is given at construction time, MONI can find $k$-MEMs efficiently as well.
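As a concrete (and deliberately naive) reading of that definition, the sketch below enumerates $k$-MEMs by brute force. MONI achieves this efficiently with a run-length compressed BWT; this toy code makes no such attempt and is only meant to pin down the notion.

```python
def count_occurrences(text, s):
    """Number of (possibly overlapping) occurrences of s in text."""
    count, start = 0, 0
    while True:
        idx = text.find(s, start)
        if idx == -1:
            return count
        count += 1
        start = idx + 1

def k_mems(pattern, text, k):
    """Naive k-MEM finder: maximal substrings of `pattern` occurring
    at least k times in `text` (k = 1 gives ordinary MEMs)."""
    mems = []
    m = len(pattern)
    for i in range(m):
        # extend to the right while the substring still has >= k occurrences
        j = i
        while j < m and count_occurrences(text, pattern[i:j + 1]) >= k:
            j += 1
        if j == i:
            continue  # even pattern[i] alone occurs fewer than k times
        # left-maximal: extending by pattern[i-1] would drop below k occurrences
        if i == 0 or count_occurrences(text, pattern[i - 1:j]) < k:
            mems.append((i, j, pattern[i:j]))
    return mems

# usage: k_mems("GATTACA", "GATTAGATTACAT", k=2)
```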

A matching $M$ is a $\mathscr{P}$-matching if the subgraph induced by the endpoints of the edges of $M$ satisfies property $\mathscr{P}$. As examples, for appropriate choices of $\mathscr{P}$, the problems Induced Matching, Uniquely Restricted Matching, Connected Matching and Disconnected Matching arise. For many of these problems, finding a maximum $\mathscr{P}$-matching is known to be NP-hard, with few exceptions, such as Connected Matching, which has the same time complexity as the usual Maximum Matching problem. The weighted variant of Maximum Matching has been studied for decades, with many applications, including the well-known Assignment problem. Motivated by this fact, and by recent research on weighted versions of acyclic and induced matchings, we study Maximum Weight Connected Matching. In this problem, we want to find a matching $M$ such that the endpoints of its edges induce a connected subgraph and the sum of the edge weights of $M$ is maximum. Unlike the unweighted Connected Matching problem, which is in P for general graphs, we show that Maximum Weight Connected Matching is NP-hard even for bounded-diameter bipartite graphs, starlike graphs, planar bipartite graphs, and bounded-degree planar graphs, while it is solvable in linear time for trees and subcubic graphs. When edge weights are restricted to be non-negative, we show that the problem becomes polynomially solvable for chordal graphs, while it remains NP-hard for most of the cases where weights can be negative. Our final contributions are on parameterized complexity. On the positive side, we present a single-exponential-time algorithm when parameterized by treewidth. In terms of kernelization, we show that, even when restricted to binary weights, Weighted Connected Matching does not admit a polynomial kernel when parameterized by vertex cover, under standard complexity-theoretic hypotheses.
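To make the definition concrete, here is a brute-force sketch that finds a maximum-weight connected matching on a tiny graph. It runs in exponential time and is only meant to illustrate the problem; the edge/weight layout and the empty-matching fallback are our own choices, not the paper's algorithms.

```python
from itertools import combinations

def max_weight_connected_matching(edges, weight):
    """Brute force over all subsets of edges.

    edges : list of (u, v) tuples
    weight: dict mapping each edge tuple to its weight (may be negative)
    """
    def is_matching(M):
        used = set()
        for u, v in M:
            if u in used or v in used:
                return False
            used.update((u, v))
        return True

    def induced_connected(M):
        covered = {x for e in M for x in e}
        if not covered:
            return True
        adj = {v: set() for v in covered}
        for u, v in edges:
            if u in covered and v in covered:
                adj[u].add(v)
                adj[v].add(u)
        # DFS over the subgraph induced by the matched vertices
        start = next(iter(covered))
        seen, stack = {start}, [start]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen == covered

    best, best_w = [], 0.0  # fallback: the empty matching, weight 0
    for r in range(1, len(edges) + 1):
        for M in combinations(edges, r):
            if is_matching(M) and induced_connected(M):
                w = sum(weight[e] for e in M)
                if w > best_w:
                    best, best_w = list(M), w
    return best, best_w

# usage:
# E = [(1, 2), (2, 3), (3, 4)]
# w = {(1, 2): 5.0, (2, 3): -1.0, (3, 4): 4.0}
# max_weight_connected_matching(E, w)
```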

Siamese tracking has achieved groundbreaking performance in recent years, and its essence is the efficient matching operator, cross-correlation, and its variants. Despite this remarkable success, the heuristic design of matching networks relies heavily on expert experience. Moreover, we find experimentally that a single matching operator can hardly guarantee stable tracking in all challenging environments. Thus, in this work, we introduce six novel matching operators from the perspective of feature fusion instead of explicit similarity learning, namely Concatenation, Pointwise-Addition, Pairwise-Relation, FiLM, Simple-Transformer and Transductive-Guidance, to explore more possibilities for matching operator selection. Our analyses reveal that these operators adapt selectively to different types of environmental degradation, which inspires us to combine them to exploit complementary features. To this end, we propose binary channel manipulation (BCM) to search for the optimal combination of these operators. BCM decides whether to retain or discard an operator by learning its contribution to other tracking steps. By inserting the learned matching networks into a strong baseline tracker, Ocean, our model achieves favorable gains of $67.2 \rightarrow 71.4$, $52.6 \rightarrow 58.3$, and $70.3 \rightarrow 76.0$ in success on OTB100, LaSOT, and TrackingNet, respectively. Notably, our tracker, dubbed AutoMatch, uses less than half the training data/time of the baseline tracker and runs at 50 FPS using PyTorch. Code and model will be released at //github.com/JudasDie/SOTS.
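As an illustration of what "matching by feature fusion" can look like, below are minimal PyTorch modules for two of the listed operators, Concatenation and Pointwise-Addition. The channel sizes, the global pooling of the template, and the module names are our own assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConcatFusion(nn.Module):
    """Fuse template (z) and search-region (x) features by concatenation + 1x1 conv."""
    def __init__(self, channels=256):
        super().__init__()
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, z, x):
        # pool the template to one descriptor per channel, broadcast over the search map
        z = F.adaptive_avg_pool2d(z, 1).expand_as(x)
        return self.reduce(torch.cat([z, x], dim=1))

class PointwiseAddFusion(nn.Module):
    """Fuse by elementwise addition after a shared 1x1 projection."""
    def __init__(self, channels=256):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, z, x):
        z = F.adaptive_avg_pool2d(self.proj(z), 1)  # (B, C, 1, 1), broadcasts over x
        return self.proj(x) + z

# usage:
# z = torch.randn(1, 256, 7, 7); x = torch.randn(1, 256, 31, 31)
# ConcatFusion()(z, x).shape   # -> torch.Size([1, 256, 31, 31])
```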

Recent works leveraging Graph Neural Networks to approach graph matching tasks have shown promising results, and recent progress in learning discrete distributions poses new opportunities for learning graph matching models. In this work, we propose a new model, Stochastic Iterative Graph MAtching (SIGMA), to address the graph matching problem. Our model defines a distribution of matchings for a graph pair, so that it can explore a wide range of possible matchings. We further introduce a novel multi-step matching procedure, which learns how to refine a graph pair's matching results incrementally. The model also includes dummy nodes, so that it does not have to find matches for nodes without correspondence. We fit this model to data via scalable stochastic optimization. We conduct extensive experiments on synthetic graph datasets as well as biochemistry and computer vision applications. Across all tasks, our results show that SIGMA produces significantly better graph matching results than state-of-the-art models. Ablation studies verify that each of our components (stochastic training, iterative matching, and dummy nodes) offers a noticeable improvement.
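The sketch below illustrates the general idea of a distribution over matchings with dummy nodes: it Gumbel-perturbs an affinity matrix and solves a hard assignment per sample (a perturb-and-MAP stand-in). This is not SIGMA's actual stochastic training procedure; names and the dummy-node handling are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sample_matchings(scores, num_samples=10, dummy_score=0.0, seed=0):
    """Sample diverse matchings for a graph pair.

    scores[i, j]: affinity between node i of graph A and node j of graph B.
    A block of dummy columns lets any subset of A's nodes stay unmatched
    at a fixed score. Returns a list of dicts mapping i -> j or i -> None.
    """
    rng = np.random.default_rng(seed)
    n, m = scores.shape
    padded = np.concatenate([scores, np.full((n, n), dummy_score)], axis=1)
    samples = []
    for _ in range(num_samples):
        noisy = padded + rng.gumbel(size=padded.shape)      # perturb the affinities
        rows, cols = linear_sum_assignment(noisy, maximize=True)  # hard matching
        samples.append({int(i): (int(j) if j < m else None)
                        for i, j in zip(rows, cols)})
    return samples
```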

Click-through rate (CTR) prediction plays a critical role in recommender systems and online advertising. The data used in these applications are multi-field categorical data, where each feature belongs to one field. Field information has proven to be important, and several works take fields into account in their models. In this paper, we propose a novel approach to model field information effectively and efficiently. The proposed approach is a direct improvement of FwFM and is named Field-matrixed Factorization Machines (FmFM, or $FM^2$). We also propose a new interpretation of FM and FwFM within the FmFM framework and compare it with FFM. Besides pruning the cross terms, our model supports field-specific variable dimensions of embedding vectors, which acts as soft pruning. We also propose an efficient way to minimize the embedding dimensions while preserving model performance. The FmFM model can be optimized further by caching the intermediate vectors, so that it takes only thousands of floating-point operations (FLOPs) to make a prediction. Our experimental results show that it can outperform FFM, which is more complex. The FmFM model's performance is also comparable to that of DNN models, which require many more FLOPs at runtime.
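A minimal sketch of the field-matrixed interaction as we read it from the abstract: each pair of active features interacts through a matrix indexed by their two fields. Shapes, names, and the scoring function are illustrative, not the paper's implementation.

```python
import numpy as np

def fmfm_score(active, emb, field_of, field_matrix, bias=0.0, linear=None):
    """FmFM-style scoring sketch for one sample of one-hot multi-field data.

    active            : list of active feature ids (one per field)
    emb[i]            : embedding vector of feature i
    field_of[i]       : field id of feature i
    field_matrix[(f,g)]: matrix used when a field-f feature interacts with a field-g feature
    """
    score = bias
    if linear is not None:
        score += sum(linear[i] for i in active)
    for a in range(len(active)):
        for b in range(a + 1, len(active)):
            i, j = active[a], active[b]
            f, g = field_of[i], field_of[j]
            # field-matrixed interaction: <M_{f,g} v_i, v_j>
            score += float(emb[i] @ field_matrix[(f, g)] @ emb[j])
    return score

# usage:
# emb = {0: np.ones(4), 1: np.ones(4)}; field_of = {0: "user", 1: "item"}
# M = {("user", "item"): np.eye(4)}
# fmfm_score([0, 1], emb, field_of, M)   # -> 4.0
```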

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
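To illustrate the smoothing idea behind DRS (on a single machine, ignoring the network aspect entirely), here is a Monte-Carlo gradient estimator for the Gaussian-smoothed surrogate of a non-smooth objective. Parameter names and values are illustrative; this is not the distributed algorithm from the paper.

```python
import numpy as np

def smoothed_gradient(f, x, gamma=0.1, num_samples=100, rng=None):
    """Estimate the gradient of the smoothed surrogate
        f_gamma(x) = E[f(x + gamma * Z)],  Z ~ N(0, I),
    which is differentiable even when f is merely Lipschitz.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    fx = f(x)
    grad = np.zeros(d)
    for _ in range(num_samples):
        z = rng.standard_normal(d)
        grad += (f(x + gamma * z) - fx) / gamma * z
    return grad / num_samples

# usage on a non-smooth convex function, e.g. f(x) = ||x||_1:
# x = np.ones(5)
# g = smoothed_gradient(lambda v: np.abs(v).sum(), x)
```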

The field of Multi-Agent System (MAS) is an active area of research within Artificial Intelligence, with an increasingly important impact in industrial and other real-world applications. Within a MAS, autonomous agents interact to pursue personal interests and/or to achieve common objectives. Distributed Constraint Optimization Problems (DCOPs) have emerged as one of the prominent agent architectures to govern the agents' autonomous behavior, where both algorithms and communication models are driven by the structure of the specific problem. During the last decade, several extensions to the DCOP model have enabled them to support MAS in complex, real-time, and uncertain environments. This survey aims at providing an overview of the DCOP model, giving a classification of its multiple extensions and addressing both resolution methods and applications that find a natural mapping within each class of DCOPs. The proposed classification suggests several future perspectives for DCOP extensions, and identifies challenges in the design of efficient resolution algorithms, possibly through the adaptation of strategies from different areas.
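For readers new to the model, the tiny centralized brute-force solver below makes the DCOP formulation concrete (variables owned by agents, domains, and cost functions over subsets of variables). Real DCOP algorithms are distributed and far more scalable; all names here are illustrative.

```python
from itertools import product

def solve_dcop(variables, domains, constraints):
    """Brute-force minimization of a DCOP's total cost.

    variables  : list of variable names (each owned by one agent)
    domains    : dict variable -> iterable of possible values
    constraints: list of (scope, cost_fn) pairs, where cost_fn takes the
                 values of the scoped variables and returns a cost
    """
    best, best_cost = None, float("inf")
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        cost = sum(fn(*(assignment[v] for v in scope)) for scope, fn in constraints)
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best, best_cost

# usage:
# variables = ["x1", "x2"]; domains = {"x1": [0, 1], "x2": [0, 1]}
# constraints = [(("x1", "x2"), lambda a, b: 0 if a != b else 1)]
# solve_dcop(variables, domains, constraints)   # prefers x1 != x2
```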

We introduce negative binomial matrix factorization (NBMF), a matrix factorization technique specially designed for analyzing over-dispersed count data. It can be viewed as an extension of Poisson matrix factorization (PF) perturbed by a multiplicative term which models exposure. This term brings a degree of freedom for controlling the dispersion, making NBMF more robust to outliers. We show that NBMF makes it possible to skip traditional pre-processing stages, such as binarization, which lead to a loss of information. Two estimation approaches are presented: maximum likelihood and variational Bayes inference. We test our model on a recommendation task and show its ability to predict user tastes with better precision than PF.
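A minimal sketch of the generative view described above, writing the negative binomial as Poisson factorization with a multiplicative Gamma "exposure" term. The parameterization is our reading of the abstract, not the paper's exact notation.

```python
import numpy as np

def sample_nbmf(W, H, alpha, rng=None):
    """Sample over-dispersed count data from an NBMF-style generative model:

        A_fn ~ Gamma(alpha, rate=alpha)   (mean 1, variance 1/alpha)
        V_fn ~ Poisson(A_fn * [WH]_fn)

    Marginalizing the exposure A gives a negative binomial with mean [WH]_fn,
    so smaller alpha means more over-dispersion.
    """
    rng = np.random.default_rng() if rng is None else rng
    mean = W @ H
    exposure = rng.gamma(shape=alpha, scale=1.0 / alpha, size=mean.shape)
    return rng.poisson(exposure * mean)

# usage:
# W = np.random.gamma(1.0, size=(50, 5)); H = np.random.gamma(1.0, size=(5, 20))
# V = sample_nbmf(W, H, alpha=0.5)   # over-dispersed counts of shape (50, 20)
```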

This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
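A basic instance of the sketch-based low-rank approximation idea: form a random sketch of the input matrix, orthonormalize it, and project onto the resulting basis. The oversampling amount and the truncation step are illustrative choices, not the paper's specific fixed-rank methods.

```python
import numpy as np

def sketch_low_rank(A, rank, oversample=10, rng=None):
    """Rank-`rank` approximation of A from a random Gaussian sketch.

    Returns factors (U, s, Vt) such that U @ np.diag(s) @ Vt approximates A.
    """
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    # random test matrix and sketch Y = A @ Omega
    omega = rng.standard_normal((n, rank + oversample))
    Y = A @ omega
    # orthonormal basis for the range of the sketch
    Q, _ = np.linalg.qr(Y)
    # project A onto that basis and truncate to the requested rank
    B = Q.T @ A
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U[:, :rank], s[:rank], Vt[:rank]

# usage:
# A = np.random.standard_normal((1000, 300))
# U, s, Vt = sketch_low_rank(A, rank=20)
# A_approx = U @ np.diag(s) @ Vt
```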
