
Graph sampling plays an important role in data mining for large networks. In practice, larger networks often correspond to lower sampling rates, and under these conditions traditional traversal-based sampling methods exhibit an excessive preference for densely connected core nodes. To address this issue, this paper proposes a sampling method for unknown networks at low sampling rates, called SLSR. SLSR first uses random node sampling to estimate the average degree of the unknown network and a degree threshold that separates the core from the periphery, and then runs a double-layer sampling strategy on the two parts. The method is simple and therefore highly time-efficient, yet experimental evaluation confirms that it accurately preserves many critical structures of unknown large networks with low variance at low sampling rates.
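A minimal sketch of the two-phase idea described above, written against a networkx graph. For clarity it assumes full access to the node list, whereas SLSR itself targets unknown networks explored by traversal; the function name, the threshold rule (twice the estimated average degree), and the half/half budget split are illustrative stand-ins, not the paper's exact procedure.

```python
import random
import networkx as nx

def two_phase_sample(G, rate=0.01, probe=500, seed=0):
    rng = random.Random(seed)
    # Phase 1: probe random nodes to estimate the average degree and
    # derive a degree threshold separating core from periphery.
    probed = rng.sample(list(G.nodes), min(probe, G.number_of_nodes()))
    avg_deg = sum(G.degree(v) for v in probed) / len(probed)
    threshold = 2 * avg_deg  # illustrative choice of threshold

    budget = int(rate * G.number_of_nodes())
    core = [v for v in G.nodes if G.degree(v) > threshold]
    periphery = [v for v in G.nodes if G.degree(v) <= threshold]

    # Phase 2: split the budget between the two layers so that the
    # densely connected core is not over-represented in the sample.
    k_core = min(len(core), budget // 2)
    k_peri = min(len(periphery), budget - k_core)
    sample = rng.sample(core, k_core) + rng.sample(periphery, k_peri)
    return G.subgraph(sample).copy()
```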

Related content

Networking: IFIP International Conferences on Networking. Explanation: international conference on networking. Publisher: IFIP. SIT:

Hyperspectral images (HSIs) have drawn the attention of researchers because they are complex to classify: the relation between the materials and the spectral information provided by an HSI is nonlinear. Deep learning methods have shown superiority over traditional machine learning methods in learning this nonlinearity. Using a 3-D CNN together with a 2-D CNN has shown great success in learning spatial and spectral features; however, this approach uses a comparatively large number of parameters and is not effective at learning inter-layer information. Hence, this paper proposes a neural network combining a 3-D CNN, a 2-D CNN, and a Bi-LSTM. The performance of the model has been tested on the Indian Pines (IP), University of Pavia (PU), and Salinas Scene (SA) data sets, and the results are compared with state-of-the-art deep learning models. The proposed model performed better on all three data sets, achieving 99.83, 99.98, and 100 percent accuracy on IP, PU, and SA respectively while using only 30 percent of the trainable parameters of the state-of-the-art model.
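A minimal PyTorch sketch of the hybrid 3-D CNN, 2-D CNN, and Bi-LSTM pipeline described above. The class name, channel counts, kernel sizes, and default patch/band sizes are illustrative guesses, not the paper's exact configuration; the point is the shape handling between the three stages.

```python
import torch
import torch.nn as nn

class Hybrid3D2DBiLSTM(nn.Module):
    def __init__(self, bands=30, patch=11, num_classes=16):
        super().__init__()
        # 3-D convolutions learn joint spectral-spatial features.
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3)), nn.ReLU(),
        )
        d = bands - 6 - 4                   # remaining spectral depth
        # 2-D convolution refines spatial features on the stacked maps.
        self.conv2d = nn.Sequential(
            nn.Conv2d(16 * d, 64, kernel_size=3), nn.ReLU(),
        )
        # The Bi-LSTM reads spatial positions as a sequence, capturing
        # inter-layer dependencies with comparatively few parameters.
        self.lstm = nn.LSTM(64, 128, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 128, num_classes)

    def forward(self, x):                   # x: (B, 1, bands, H, W)
        x = self.conv3d(x)
        b, c, d, h, w = x.shape
        x = x.reshape(b, c * d, h, w)       # stack spectral maps as channels
        x = self.conv2d(x)                  # (B, 64, h-2, w-2)
        x = x.flatten(2).permute(0, 2, 1)   # (B, positions, 64)
        _, (h_n, _) = self.lstm(x)
        x = torch.cat([h_n[0], h_n[1]], dim=1)  # forward + backward states
        return self.fc(x)
```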

Mixed-signal neuromorphic processors provide extremely low-power operation for edge inference workloads by taking advantage of sparse asynchronous computation within Spiking Neural Networks (SNNs). However, deploying robust applications to these devices is complicated by limited controllability over analog hardware parameters, as well as unintended parameter and dynamical variations of analog circuits due to fabrication non-idealities. Here we demonstrate a novel methodology for offline training and deployment of SNNs to the mixed-signal neuromorphic processor DYNAP-SE2. The methodology uses gradient-based training on a differentiable simulation of the mixed-signal device, coupled with an unsupervised weight quantization method, to optimize the network's parameters. Parameter noise injection during training provides robustness to the effects of quantization and device mismatch, making the method a promising candidate for real-world applications under hardware constraints and non-idealities. This work extends Rockpool, an open-source deep-learning library for SNNs, with support for accurate simulation of mixed-signal SNN dynamics. Our approach simplifies the development and deployment process for the neuromorphic community, making mixed-signal neuromorphic processors more accessible to researchers and developers.
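A generic PyTorch sketch of the parameter-noise-injection mechanism mentioned above: each forward pass during training perturbs the weights with multiplicative Gaussian noise, so the optimizer converges to solutions that tolerate quantization and device mismatch. This is not Rockpool's actual API, just the underlying idea; the class name and the noise scale are illustrative.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer whose weights are perturbed by multiplicative
    Gaussian noise on every training forward pass; gradients flow
    through the perturbed weights back to the clean parameters."""
    def __init__(self, in_features, out_features, rel_sigma=0.2):
        super().__init__(in_features, out_features)
        self.rel_sigma = rel_sigma

    def forward(self, x):
        if self.training:
            # Fresh noise each step mimics fabrication mismatch.
            w = self.weight * (1 + self.rel_sigma * torch.randn_like(self.weight))
        else:
            w = self.weight
        return nn.functional.linear(x, w, self.bias)
```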

In coherent imaging systems, speckle is a signal-dependent noise that strongly degrades the visual appearance of images. A huge amount of SAR data has been acquired by different sensors with different wavelengths, resolutions, incidence angles, and polarizations. We extend the nonlocal filtering strategy to the temporal domain and propose a patch-based adaptive temporal filter (PATF) that takes advantage of well-registered multi-temporal SAR images. A patch-based generalised likelihood ratio test is used to suppress the effect of changed objects on the multi-temporal denoising results; the resulting similarities are then transformed into weights with an exponential function, and the denoised value is computed as a temporal weighted average. When the time series is short, a spatial adaptive denoising step can further improve the patch-based weighted temporal average image; this step is optional when the time series is long enough. Since no reference image is available, we propose a patch-based auto-covariance residual evaluation method that examines the ratio image between the noisy and denoised images and looks for possible remaining structural content. It runs automatically, does not rely on a supervised selection of homogeneous regions, and provides a global score for the whole image. Numerous results demonstrate the effectiveness of the proposed time-series denoising method and the usefulness of the residual evaluation method.
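A simplified numpy sketch of the temporal weighted average described above: patch similarities between the reference date and every other date are mapped to exponential weights. The similarity here is a plain mean squared log-ratio over each patch, a stand-in for the paper's patch-based generalised likelihood ratio test; the function name and the bandwidth h are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def temporal_weighted_average(stack, t_ref, patch=7, h=0.5):
    """stack: (T, H, W) well-registered SAR intensity images.
    Returns a temporally denoised image for date t_ref."""
    log_stack = np.log(stack + 1e-8)
    num = np.zeros_like(stack[0])
    den = np.zeros_like(stack[0])
    for t in range(stack.shape[0]):
        # Mean squared log-ratio over each patch: large where the
        # scene changed between date t and the reference date.
        d = uniform_filter((log_stack[t] - log_stack[t_ref]) ** 2, patch)
        w = np.exp(-d / h)          # similar dates receive large weights
        num += w * stack[t]
        den += w
    return num / den
```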

Following initial work by JaJa and Ahlswede/Cai, and inspired by a recent renewed surge in interest in deterministic identification via noisy channels, we consider the problem in its generality for memoryless channels with finite output, but arbitrary input alphabets. Such a channel is essentially given by (the closure of) the subset of its output distributions in the probability simplex. Our main findings are that the maximum number of messages thus identifiable scales super-exponentially as $2^{R\,n\log n}$ with the block length $n$, and that the optimal rate $R$ is upper and lower bounded in terms of the covering (aka Minkowski, or Kolmogorov, or entropy) dimension $d$ of the output set: $\frac14 d \leq R \leq d$. Leading up to the general case, we treat the important special case of the so-called Bernoulli channel with input alphabet $[0;1]$ and binary output, which has $d=1$, to gain intuition. Along the way, we show a certain Hypothesis Testing Lemma (generalising an earlier insight of Ahlswede regarding the intersection of typical sets) that implies that for the construction of a deterministic identification code, it is sufficient to ensure pairwise reliable distinguishability of the output distributions. These results are then shown to generalise directly to classical-quantum channels with finite-dimensional output quantum system (but arbitrary input alphabet), and in particular to quantum channels on finite-dimensional quantum systems under the constraint that the identification code can only use tensor product inputs.
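For concreteness, instantiating the stated bounds for the Bernoulli channel, which has covering dimension d = 1, gives the following rate window (a direct substitution, not an additional result):

```latex
N(n) = 2^{R\,n\log n}, \qquad \tfrac14 \;\le\; R \;\le\; 1 ,
% where N(n) is the maximum number of messages deterministically
% identifiable over n uses of the Bernoulli channel.
```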

The volumetric representation of human interactions is one of the fundamental domains in the development of immersive media productions and telecommunication applications. Particularly in the context of the rapid advancement of Extended Reality (XR) applications, such volumetric data has proven to be an essential technology for future XR development. In this work, we present a new multimodal database, HEADSET, to help advance the development of immersive technologies. The database provides ethically compliant and diverse volumetric data; in particular, 27 participants display posed facial expressions and subtle body movements while speaking, and a further 11 participants wear head-mounted displays (HMDs). The recording system consists of a volumetric capture (VoCap) studio comprising 31 synchronized modules with 62 RGB cameras and 31 depth cameras. In addition to textured meshes, point clouds, and multi-view RGB-D data, a Lytro Illum camera simultaneously provides light field (LF) data. Finally, we evaluate the use of the dataset on the tasks of facial expression classification, HMD removal, and point cloud reconstruction. The dataset can be helpful in the evaluation and performance testing of various XR algorithms, including but not limited to facial expression recognition and reconstruction, facial reenactment, and volumetric video. HEADSET, all its associated raw data, and a license agreement will be made publicly available for research purposes.

Reconciliation enforces coherence between hierarchical forecasts so that they satisfy a set of linear constraints. While most works focus on the reconciliation of point forecasts, we consider probabilistic reconciliation and analyze the properties of distributions reconciled via conditioning. We provide a formal analysis of the variance of the reconciled distribution, treating separately the cases of Gaussian forecasts and count forecasts. We also study the reconciled upper mean in the case of 1-level hierarchies, again treating the Gaussian and count cases separately. We then report experiments on the reconciliation of intermittent time series related to counts of extreme market events. The experiments confirm our theoretical results and show that reconciliation largely improves the performance of probabilistic forecasting.
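A minimal numpy example of Gaussian reconciliation via conditioning for a 1-level hierarchy: independent base forecasts for a total U and two bottom series B1, B2 are conditioned on the coherence constraint U - B1 - B2 = 0, using the standard formulas for conditioning a Gaussian on a linear functional. The numbers are illustrative, not taken from the paper's experiments.

```python
import numpy as np

mu = np.array([10.0, 6.0, 5.0])        # means of (U, B1, B2)
Sigma = np.diag([4.0, 1.0, 1.0])       # independent base forecasts
a = np.array([1.0, -1.0, -1.0])        # coherence constraint: a @ x = 0

s = Sigma @ a / (a @ Sigma @ a)        # gain vector
mu_rec = mu - s * (a @ mu)             # reconciled mean
Sigma_rec = Sigma - np.outer(s, a @ Sigma)  # reconciled covariance

print(mu_rec)  # [10.667, 5.833, 4.833]: now U = B1 + B2 exactly
```

Note how the incoherence (a @ mu = -1) is distributed across the hierarchy in proportion to each forecast's variance, and the reconciled covariance shrinks, consistent with the variance analysis discussed above.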

Graphs and networks play an important role in modeling and analyzing complex interconnected systems such as transportation networks, integrated circuits, power grids, citation graphs, and biological and artificial neural networks. Graph clustering algorithms can be used to detect groups of strongly connected vertices and to derive coarse-grained models. We define transfer operators such as the Koopman operator and the Perron-Frobenius operator on graphs, study their spectral properties, introduce Galerkin projections of these operators, and illustrate how reduced representations can be estimated from data. In particular, we show that spectral clustering of undirected graphs can be interpreted in terms of eigenfunctions of the Koopman operator and propose novel clustering algorithms for directed graphs based on generalized transfer operators. We demonstrate the efficacy of the resulting algorithms on several benchmark problems and provide different interpretations of clusters.
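A small sketch of the transfer-operator view of spectral clustering for an undirected graph, under the interpretation stated above: the random-walk transition matrix P = D^{-1} A acts as a discrete transfer operator, and k-means on its dominant eigenvectors (eigenfunctions) recovers clusters of strongly connected vertices. The function name is illustrative; this is not the paper's algorithm for directed graphs.

```python
import numpy as np
from scipy.linalg import eig
from sklearn.cluster import KMeans

def koopman_spectral_clustering(A, k):
    """A: dense symmetric adjacency matrix; k: number of clusters."""
    d = A.sum(axis=1)
    P = A / d[:, None]                 # row-stochastic transition matrix
    vals, vecs = eig(P)                # P is similar to a symmetric matrix,
    order = np.argsort(-vals.real)     # so its spectrum is real
    X = vecs[:, order[:k]].real        # dominant eigenfunctions
    return KMeans(n_clusters=k, n_init=10).fit_predict(X)
```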

This paper is concerned with the problem of sampling and interpolation involving derivatives in shift-invariant spaces and the error analysis of the derivative sampling expansions for fairly general classes of functions. A new type of polynomials based on derivative samples is introduced, which is different from the Euler-Frobenius polynomials for the multiplicity $r>1$. A complete characterization of uniform sampling with derivatives is given using Laurent operators. The rate of approximation of a signal (not necessarily continuous) by the derivative sampling expansions in shift-invariant spaces generated by compactly supported functions is established in terms of the $L^p$-average modulus of smoothness. Finally, several typical examples illustrating the various problems are discussed in detail.
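For orientation, an illustrative Hermite-type (multiplicity-$r$) sampling expansion in a shift-invariant space, the kind of object studied above; this is the generic form, not the paper's specific theorem:

```latex
% f is rebuilt from samples of f and its first r-1 derivatives
% at the integers,
f(x) \;=\; \sum_{k\in\mathbb{Z}} \sum_{j=0}^{r-1} f^{(j)}(k)\, s_j(x-k),
% where s_0, \dots, s_{r-1} are reconstruction functions belonging
% to the shift-invariant space.
```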

The routing protocol currently used in the internet backbone is based on manual configuration, making it susceptible to errors. To mitigate these configuration-related issues, it becomes imperative to validate the accuracy and convergence of the algorithm, ensuring seamless operation devoid of problems. However, the process of network verification faces challenges related to privacy and scalability. This paper addresses these challenges by introducing a novel approach: leveraging privacy-preserving computation, specifically multiparty computation (MPC), to verify the correctness of configurations in the internet backbone, which is governed by the BGP protocol. Not only does our proposed solution effectively address scalability concerns, but it also establishes a robust privacy framework. Through rigorous analysis, we demonstrate that our approach maintains privacy by not disclosing any information beyond the query result, thus providing a comprehensive and secure solution to the intricacies associated with routing protocol verification in large-scale networks.
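A toy local simulation of the MPC principle behind the approach: a private configuration value (say, a BGP session attribute) is split into additive shares modulo a prime, so no single share reveals it, yet the parties can jointly test whether two configurations agree by reconstructing only a randomly masked difference. This illustrates additive secret sharing in general, not the paper's protocol; the values and the three-share setup are hypothetical.

```python
import secrets

P = 2**61 - 1  # prime modulus for the field

def share(x, n=3):
    """Split x into n additive shares mod P; any n-1 shares are
    uniformly random and reveal nothing about x."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reveal(shares):
    return sum(shares) % P

# Two ASes share their configured value for the same session attribute.
a_shares = share(200)   # AS1's configured value
b_shares = share(200)   # AS2's expected value

# Each share-holder locally forms r * (a_i - b_i) for a jointly agreed
# random mask r (multiplication by a public constant is linear, so no
# interaction is needed). The reconstruction is 0 iff the values match,
# and a uniformly random field element otherwise.
r = secrets.randbelow(P - 1) + 1
diff_shares = [(r * (a - b)) % P for a, b in zip(a_shares, b_shares)]
print("configs match:", reveal(diff_shares) == 0)
```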

In large-scale studies with parallel signal-plus-noise observations, the local false discovery rate is a summary statistic that is often presumed to be equal to the posterior probability that the signal is null. We prefer to call the latter quantity the local null-signal rate to emphasize our view that a null signal and a false discovery are not identical events. The local null-signal rate is commonly estimated through empirical Bayes procedures that build on the "zero density assumption", which attributes the density of observations near zero entirely to null signals. In this paper, we argue that this strategy does not furnish estimates of the local null-signal rate, but instead of a quantity we call the complementary local activity rate (clar). Although it is likely to be small, an inactive signal is not necessarily zero. The local activity rate addresses two shortcomings of the local null-signal rate. First, it is a weakly continuous functional of the signal distribution, and second, it takes on sensible values when the signal is sparse but not exactly zero. Our findings clarify the interpretation of local false-discovery rates estimated under the zero density assumption.
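A small numerical sketch of an empirical Bayes estimate built on the zero density assumption discussed above: the marginal density at zero is attributed entirely to null signals, giving the familiar two-groups estimate fdr(z) = pi0 f0(z) / f(z). Here the inactive signals are small but not exactly zero, so the estimated pi0 tracks the inactive proportion rather than the (zero) null proportion, the phenomenon the paper analyzes. The simulation parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(0)
n, prop_active = 5000, 0.1
active = rng.random(n) < prop_active
# Inactive signals are small but not exactly zero.
theta = np.where(active, rng.normal(0, 3, n), rng.normal(0, 0.2, n))
z = theta + rng.normal(0, 1, n)          # signal-plus-noise observations

f = gaussian_kde(z)                      # marginal density estimate
# Zero density assumption: calibrate pi0 so that pi0 * f0(0) = f(0).
pi0 = min(1.0, f(0.0)[0] / norm.pdf(0.0))
local_fdr = np.clip(pi0 * norm.pdf(z) / f(z), 0, 1)

# pi0 lands near the inactive proportion (0.9), although no signal
# here is exactly zero, i.e. the true null proportion is 0.
print(f"estimated pi0 = {pi0:.3f}")
```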
