
The task of community detection, which aims to partition a network into clusters of nodes that summarize its large-scale structure, has spawned many competing algorithms with varying objectives. Some community detection methods are inferential, explicitly deriving the clustering objective from a probabilistic generative model, while others are descriptive, dividing a network according to an objective motivated by a particular application, which makes it challenging to compare these methods on the same scale. Here we present a solution to this problem that associates any community detection objective, inferential or descriptive, with its corresponding implicit network generative model. This allows us to compute the description length of a network and its partition under arbitrary objectives, providing a principled measure for comparing the performance of different algorithms without the need for "ground truth" labels. Our approach also gives access to instances of the community detection problem that are optimal for any given algorithm, and in this way reveals intrinsic biases in popular descriptive methods, explaining their tendency to overfit. Using our framework, we compare a number of community detection methods on artificial networks and on a corpus of over 500 structurally diverse empirical networks. We find that more expressive community detection methods exhibit consistently superior compression performance on structured data instances, without degraded performance in the minority of situations where more specialized algorithms perform optimally. Our results undermine the implications of the "no free lunch" theorem for community detection, both conceptually and in practice, since the theorem is confined to unstructured data instances, whereas relevant community detection problems are structured by requirement.

Related content

Discovering communities in networks (known as community detection or community discovery) is a fundamental problem in network science that has attracted much attention over the past few decades. In recent years, with the surge of research on big data, a related but distinct problem known as community search, which aims to find the most likely community containing a given query node, has drawn wide attention from both academia and industry; it is a query-dependent variant of the community detection problem.

Dealing with uncertainty in optimization parameters is an important and longstanding challenge. Typically, uncertain parameters are first predicted, and then a deterministic optimization problem is solved. However, the decisions produced by this so-called \emph{predict-then-optimize} procedure can be highly sensitive to the uncertain parameters. In this work, we contribute to recent efforts in producing \emph{decision-focused} predictions, i.e., building predictive models that are constructed with the goal of minimizing a \emph{regret} measure on the decisions taken with them. We formulate exact expected regret minimization as a pessimistic bilevel optimization model. Then, using duality arguments, we reformulate it as a non-convex quadratic optimization problem. Finally, we show various computational techniques to achieve tractability. We report extensive computational results on shortest-path instances with uncertain cost vectors. Our results indicate that our approach can improve training performance over that of Elmachtoub and Grigas (2022), a state-of-the-art method for decision-focused learning.
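To make the regret measure above concrete, here is a minimal sketch of how regret is typically evaluated in predict-then-optimize settings such as shortest path: the true cost of the decision induced by the prediction, minus the true cost of the truly optimal decision. The function name and interface are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def decision_regret(c_true, x_pred, x_opt):
    """Regret of acting on a predicted cost vector (illustrative sketch).

    c_true: true cost vector.
    x_pred: decision that is optimal under the *predicted* costs.
    x_opt:  decision that is optimal under the *true* costs.
    For a shortest-path instance, x_pred and x_opt would be 0/1
    edge-incidence vectors of paths. Regret is always >= 0.
    """
    return float(np.dot(c_true, x_pred) - np.dot(c_true, x_opt))
```

A prediction that induces the truly optimal decision has zero regret, regardless of how inaccurate the predicted costs themselves are; this is the sense in which decision-focused learning differs from minimizing prediction error.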

We prove that a maintenance problem on frequency-constrained maintenance jobs with a hierarchical structure is integer-factorization hard. This result holds even on simple systems with just two components to maintain. As a corollary, we provide a first hardness result for Levi et al.'s modular maintenance scheduling problem (Naval Research Logistics 61, 472-488, 2014).

Blockchain-based federated learning is a distributed learning scheme that allows model training without participants sharing their local data sets; the blockchain components eliminate the need for the trusted central server required by traditional federated learning algorithms. In this paper we propose a softmax-aggregation, blockchain-based federated learning framework. First, we propose a new blockchain-based federated learning architecture that uses the well-tested proof-of-stake consensus mechanism of an existing blockchain network to select validators and miners to aggregate the participants' updates and compute the blocks. Second, to ensure the robustness of the aggregation process, we design a novel softmax aggregation method based on approximated population loss values that relies on our specific blockchain architecture. Additionally, we show that our softmax aggregation technique converges to the global minimum in the convex setting under non-restrictive assumptions. Our comprehensive experiments show that our framework outperforms existing robust aggregation algorithms in various settings by large margins.
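The core idea of softmax aggregation over approximated population losses can be sketched as follows: participants whose updates achieve lower approximate loss receive exponentially higher weight in the aggregate. This is a minimal illustration under assumed interfaces (the function name, the temperature parameter, and the exact weighting of negated losses are not from the paper).

```python
import numpy as np

def softmax_aggregate(updates, losses, temperature=1.0):
    """Weight participant updates by a softmax over negated approximate
    population losses: lower loss -> exponentially higher weight.

    updates: list of 1-D parameter (or gradient) vectors, one per participant.
    losses:  matching approximated population loss values.
    Returns the weighted-average update and the weights themselves.
    Illustrative sketch only; names and temperature are assumptions.
    """
    losses = np.asarray(losses, dtype=float)
    z = -losses / temperature
    z -= z.max()                      # numerical stabilisation
    w = np.exp(z)
    w /= w.sum()                      # softmax weights, sum to 1
    return np.average(np.stack(updates), axis=0, weights=w), w
```

A participant reporting a very large loss (e.g. a poisoned or divergent update, as judged by the validators) receives a weight close to zero, which is how a softmax weighting can confer robustness compared to a plain average.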

Deep feedforward and recurrent rate-based neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.

We discuss a class of coupled systems of nonlocal balance laws modeling multilane traffic, with the nonlocality present in both the convective and source terms. The uniqueness and existence of the entropy solution are proven via doubling-of-the-variables arguments and convergent finite volume approximations, respectively. The numerical approximations are proven to converge to the unique entropy solution of the system at the rate $\sqrt{\Delta t}$. The applicability of the theory to a general class of systems of nonlocal balance laws coupled strongly through the convective part and weakly through the source part is also indicated. Numerical simulations illustrating the theory and the behavior of the entropy solution as the support of the kernel goes to zero (the nonlocal-to-local limit) are shown.

Measures of association between cortical regions based on activity signals provide useful information for studying brain functional connectivity. Difficulties occur with signals of electric neuronal activity, where an observed signal is a mixture, i.e. an instantaneous weighted average of the true, unobserved signals from all regions, due to volume conduction and low spatial resolution. This is why measures of lagged association are of interest, since, at least theoretically, "lagged association" is of physiological origin. In contrast, the actual physiological instantaneous zero-lag association is masked and confounded by the mixing artifact. A minimum requirement for a measure of lagged association is that it must not tend to zero with increasing strength of the true instantaneous physiological association. Such a biased measure cannot distinguish whether a change in its value is due to a change in lagged association or a change in instantaneous association. An explicit testable definition for frequency-domain lagged connectivity between two multivariate time series is proposed. It is endowed with two important properties: it is invariant to non-singular linear transformations of each vector time series separately, and it is invariant to instantaneous association. As a first sanity check: in the case of two univariate time series, the new definition leads back to the bivariate lagged coherence of 2007 (eqs 25 and 26 in //doi.org/10.48550/arXiv.0706.1776). As a second, stronger sanity check: in the case of a univariate and a multivariate vector time series, the new measure presented here leads back to the original multivariate lagged coherence in equation 31 of the same 2007 publication (which trivially includes the bivariate case).

A new loss function for speaker recognition with deep neural networks is proposed, based on the Jeffreys divergence. Adding this divergence to the cross-entropy loss function makes it possible to maximize the target value of the output distribution while smoothing the non-target values. This objective function provides highly discriminative features. Beyond this effect, we propose a theoretical justification of its effectiveness and try to understand how this loss function affects the model, in particular its impact across dataset types (i.e. in-domain or out-of-domain w.r.t. the training corpus). Our experiments show that the Jeffreys loss consistently outperforms the state of the art for speaker recognition, especially on out-of-domain data, and helps limit false alarms.
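The Jeffreys divergence is the symmetrized KL divergence, $J(p, q) = \mathrm{KL}(p\|q) + \mathrm{KL}(q\|p) = \sum_i (p_i - q_i)(\log p_i - \log q_i)$. The sketch below combines it with cross-entropy against a smoothed target distribution; the smoothing value, the weight `alpha`, and the exact way the two terms are combined are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def jeffreys_loss(logits, target_idx, alpha=0.1, smoothing=0.1):
    """Cross-entropy plus a weighted Jeffreys-divergence term between the
    predicted distribution q (softmax of logits) and a label-smoothed
    target p. Smoothing avoids the divergence blowing up on a hard
    one-hot target. All hyperparameter values are illustrative.
    """
    logits = np.asarray(logits, dtype=float)
    k = len(logits)
    q = np.exp(logits - logits.max())
    q /= q.sum()                              # softmax output distribution
    p = np.full(k, smoothing / k)             # smoothed one-hot target
    p[target_idx] += 1.0 - smoothing
    ce = -np.log(q[target_idx])               # standard cross-entropy
    jeffreys = np.sum((p - q) * (np.log(p) - np.log(q)))  # J(p, q) >= 0
    return ce + alpha * jeffreys
```

Because each term $(p_i - q_i)(\log p_i - \log q_i)$ is non-negative, the added divergence penalizes mass on non-target classes from both directions, which is consistent with the abstract's description of sharpening the target value while smoothing the non-target values.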

The statistical analysis of group studies in neuroscience is particularly challenging due to the complex spatio-temporal nature of the data, its multiple levels, and the inter-individual variability in brain responses. In this respect, traditional ANOVA-based studies and linear mixed-effects models typically provide only a limited exploration of the dynamics of group brain activity and the variability of individual responses, potentially leading to overly simplistic conclusions and/or missing more intricate patterns. In this study we propose a novel method based on functional Principal Components Analysis and Bayesian model-based clustering to simultaneously assess group effects and individual deviations over the most important temporal features in the data. This method provides a thorough exploration of group differences and individual deviations in neuroscientific group studies without compromising the spatio-temporal nature of the data. By means of a simulation study we demonstrate that the proposed model returns correct classifications in different clustering scenarios under low and high noise levels in the data. Finally, we consider a case study using electroencephalogram data recorded during an object-recognition task, where our approach provides new insights into the underlying brain mechanisms generating the data and their variability.

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. In combination, these algorithms enable each agent to improve its task-allocation strategy through reinforcement learning, while varying how much it explores the system in response to how optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum within the system configurations considered. It provides 5x better performance recovery than no-knowledge-retention approaches when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.

We hypothesize that due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate a model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm that balances the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
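The conditional utilization rate as described above reduces to simple accuracy differences. The sketch below makes this explicit for two modalities; the function name and the example accuracy values are illustrative, not results from the paper.

```python
def conditional_utilization(acc_both, acc_m1_only, acc_m2_only):
    """Conditional utilization rates for a two-modality model, following
    the abstract's definition: the accuracy gain when the model has
    access to one modality in addition to the other.

    acc_both:    accuracy with access to both modalities.
    acc_m1_only: accuracy with access to modality 1 only.
    acc_m2_only: accuracy with access to modality 2 only.
    Illustrative sketch; names are assumptions.
    """
    u1_given_2 = acc_both - acc_m2_only   # gain from adding modality 1
    u2_given_1 = acc_both - acc_m1_only   # gain from adding modality 2
    return u1_given_2, u2_given_1
```

With hypothetical accuracies of 0.95 (both), 0.93 (modality 1 alone), and 0.60 (modality 2 alone), the rates are 0.35 and 0.02: a large imbalance of the kind the abstract reports, indicating the model leans almost entirely on modality 1.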
