
Topological signal processing (TSP) uses simplicial complexes to model structures of higher order than vertices and edges. In this paper, we study the transferability of TSP via a generalized higher-order analogue of the graphon, known as the complexon. We recall the notion of a complexon as the limit of a sequence of simplicial complexes [1]. Inspired by the graphon shift operator and message-passing neural networks, we construct a marginal complexon and a complexon shift operator (CSO) from the components of all possible dimensions of the complexon. We investigate the eigenvalues and eigenvectors of the CSO and relate them to a new family of weighted adjacency matrices. We prove that when a sequence of simplicial complex signals converges to a complexon signal, the eigenvalues, eigenspaces, and Fourier transforms of the corresponding CSOs converge to those of the limit complexon signal. This conclusion is further verified by two numerical experiments. These results hint at learning transferability on large simplicial complexes or simplicial complex sequences, generalizing the graphon signal processing framework.
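To make the spectral-convergence claim concrete, here is a minimal sketch, not the paper's construction: it uses a plain graphon kernel in place of a full complexon, samples it on a grid to obtain weighted adjacency matrices, and shows that the leading eigenvalues of the induced shift operator stabilize as the sampled structure grows. The kernel `graphon` and the grid sampling are illustrative assumptions.

```python
import numpy as np

def graphon(x, y):
    # A toy symmetric kernel standing in for the (marginal) complexon;
    # the paper's actual CSO aggregates components of all dimensions.
    return np.exp(-np.abs(x - y))

def shift_eigenvalues(n, k=3):
    # Weighted adjacency sampled on a regular grid; the 1/n scaling makes
    # the spectrum approach that of the kernel's integral operator.
    pts = (np.arange(n) + 0.5) / n
    A = graphon(pts[:, None], pts[None, :]) / n
    return np.sort(np.linalg.eigvalsh(A))[::-1][:k]

# The leading eigenvalues stabilize as n grows, illustrating the kind of
# spectral transferability the abstract describes.
for n in (50, 200, 800):
    print(n, shift_eigenvalues(n))
```

The same experiment with two graphs sampled from one kernel at different sizes is the simplest instance of transferability: filters designed on the small graph behave consistently on the large one because both spectra approximate the limit operator's.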


Evaluating classifications is crucial in statistics and machine learning, as it influences decision-making across various fields, such as patient prognosis and therapy in critical conditions. The Matthews correlation coefficient (MCC) is recognized as a performance metric with high reliability, offering a balanced measurement even in the presence of class imbalances. Despite its importance, there remains a notable lack of comprehensive research on the statistical inference of MCC. This deficiency often leads to studies merely validating and comparing MCC point estimates, a practice that, while common, overlooks the statistical significance and reliability of results. Addressing this research gap, our paper introduces and evaluates several methods to construct asymptotic confidence intervals for the single MCC and the differences between MCCs in paired designs. Through simulations across various scenarios, we evaluate the finite-sample behavior of these methods and compare their performances. Furthermore, through real data analysis, we illustrate the potential utility of our findings in comparing binary classifiers, highlighting the possible contributions of our research in this field.
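As a concrete illustration of the quantities involved, the sketch below computes the MCC from confusion-matrix counts and builds a percentile bootstrap interval for it. The bootstrap is a stand-in assumption here; the paper's contribution is asymptotic (closed-form) confidence intervals, which would replace the resampling step.

```python
import numpy as np

def mcc(y_true, y_pred):
    # Matthews correlation coefficient from the 2x2 confusion counts.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def bootstrap_ci(y_true, y_pred, level=0.95, B=2000, seed=0):
    # Percentile bootstrap interval for a single MCC; an asymptotic
    # (delta-method) interval would avoid the resampling entirely.
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = [mcc(y_true[idx], y_pred[idx])
             for idx in (rng.integers(0, n, n) for _ in range(B))]
    lo, hi = np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi
```

Reporting such an interval, rather than the point estimate alone, is exactly the practice the abstract argues for when comparing binary classifiers.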

Causal dynamics models (CDMs) have demonstrated significant potential in addressing various challenges in reinforcement learning. To learn CDMs, recent studies have performed causal discovery to capture the causal dependencies among environmental variables. However, the learning of CDMs is still confined to small-scale environments due to computational complexity and sample efficiency constraints. This paper aims to extend CDMs to large-scale object-oriented environments, which consist of a multitude of objects classified into different categories. We introduce the Object-Oriented CDM (OOCDM) that shares causalities and parameters among objects belonging to the same class. Furthermore, we propose a learning method for OOCDM that enables it to adapt to a varying number of objects. Experiments on large-scale tasks indicate that OOCDM outperforms existing CDMs in terms of causal discovery, prediction accuracy, generalization, and computational efficiency.

Reduced order models are becoming increasingly important for rendering complex and multiscale spatio-temporal dynamics computationally tractable. The computational efficiency of such surrogate models is especially important for design, exhaustive exploration, and physical understanding. Plasma simulations, in particular those applied to the study of ${\bf E}\times {\bf B}$ plasma discharges and technologies, such as Hall thrusters, require substantial computational resources to resolve the multidimensional dynamics that span wide spatial and temporal scales. Although high-fidelity computational tools are available to simulate such systems over limited conditions and in highly simplified geometries, simulations of full-size systems and/or extensive parametric studies over many geometric configurations and under different physical conditions are computationally intractable with conventional numerical tools. Thus, scientific studies and industrially oriented modeling of plasma systems, including the important ${\bf E}\times {\bf B}$ technologies, stand to benefit significantly from reduced order modeling algorithms. We develop a model reduction scheme based upon a {\em Shallow REcurrent Decoder} (SHRED) architecture. The scheme uses a neural network to encode limited sensor measurements in time (sequence-to-sequence encoding) into full state-space reconstructions via a decoder network. Based upon the theory of separation of variables, the SHRED architecture is capable of (i) reconstructing full spatio-temporal fields with as few as three point sensors, including fields that are not measured by the sensor feeds but are dynamically coupled to the measured field, and (ii) forecasting the future state of the system using neural network roll-outs from the trained time-encoding model.

The exploration of network structures through the lens of graph theory has become a cornerstone in understanding complex systems across diverse fields. Identifying densely connected subgraphs within larger networks is crucial for uncovering functional modules in biological systems, cohesive groups within social networks, and critical paths in technological infrastructures. The most representative approach, the SM algorithm, cannot locate subgraphs of large size and therefore cannot identify dense subgraphs, while the SA algorithm previously used by researchers combines simulated annealing with efficient moves for the Markov chain. However, simulated annealing methods, including SA, cannot guarantee that the global optimum is located unless a logarithmic cooling schedule is used. To this end, our study introduces and evaluates the Simulated Annealing Algorithm (SAA), which combines simulated annealing with the stochastic approximation Monte Carlo algorithm. The performance of SAA against two other numerical algorithms, SM and SA, is examined in the context of identifying these critical subgraph structures using simulated graphs with embedded cliques. We find that SAA outperforms both SA and SM in terms of 1) the number of iterations needed to find the densest subgraph, 2) the percentage of runs in which a clique is found within 10,000 iterations, and 3) computation time. These promising results suggest that SAA could offer a robust tool for dissecting complex systems and potentially transform our approach to solving problems in interdisciplinary fields.
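For context on the baseline being improved upon, here is a minimal simulated-annealing sketch for the planted-clique setting: a size-k subset is evolved by single-vertex swaps, accepting worse moves with a temperature-dependent probability. This is a sketch of the SA baseline, not the SAA of the abstract (which adds stochastic approximation Monte Carlo); the density objective, move design, and geometric cooling schedule are illustrative choices.

```python
import math
import random

def density(S, adj):
    # Edge density of the subgraph induced by vertex set S.
    S = list(S)
    k = len(S)
    if k < 2:
        return 0.0
    e = sum(1 for i in range(k) for j in range(i + 1, k) if S[j] in adj[S[i]])
    return 2.0 * e / (k * (k - 1))

def anneal_densest(adj, k, iters=20000, t0=1.0, cooling=0.999, seed=0):
    # Simulated annealing over size-k vertex subsets with single-swap moves.
    rng = random.Random(seed)
    nodes = list(adj)
    S = set(rng.sample(nodes, k))
    best, best_d, t = set(S), density(S, adj), t0
    for _ in range(iters):
        out_v = rng.choice(list(S))
        in_v = rng.choice([v for v in nodes if v not in S])
        cand = (S - {out_v}) | {in_v}
        d_new, d_old = density(cand, adj), density(S, adj)
        # Always accept non-worsening moves; accept worsening moves with
        # probability exp(delta / t), so acceptance tightens as t cools.
        if d_new >= d_old or rng.random() < math.exp((d_new - d_old) / t):
            S = cand
        if density(S, adj) > best_d:
            best, best_d = set(S), density(S, adj)
        t *= cooling
    return best, best_d
```

On a graph with an embedded clique, this search typically recovers the clique (density 1.0); the abstract's point is that plain SA lacks a global-optimality guarantee under fast cooling, which motivates the SAA hybrid.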

Non-reversible parallel tempering (NRPT) is an effective algorithm for sampling from target distributions with complex geometry, such as those arising from posterior distributions of weakly identifiable and high-dimensional Bayesian models. In this work we establish the uniform (geometric) ergodicity of NRPT under a model of efficient local exploration. The logarithmic rates of uniform ergodicity are inversely proportional to an easily estimable divergence, the global communication barrier (GCB), recently introduced in the literature. We obtain analogous ergodicity results for classical reversible parallel tempering, providing new evidence that NRPT dominates its reversible counterpart. Our results are based on an analysis of the hitting time of a continuous-time persistent random walk, which is also of independent interest. The rates that we obtain reflect real experiments well for distributions where global exploration is not possible without parallel tempering.

The classical Kosambi-Cartan-Chern (KCC) theory developed in differential geometry provides a powerful method for analyzing the behavior of dynamical systems. In the KCC theory, the properties of a dynamical system are described in terms of five geometric invariants, the second of which corresponds to the so-called Jacobi stability of the system. Unlike Lyapunov stability, which has been studied extensively in the literature, Jacobi stability has been investigated only more recently, using geometric concepts and tools. The existing work on Jacobi stability analysis remains theoretical, and the problem of its algorithmic and symbolic treatment has yet to be addressed. In this paper, we initiate the study of this problem for a class of ODE systems of arbitrary dimension and propose two algorithmic schemes using symbolic computation to check whether a nonlinear dynamical system may exhibit Jacobi stability. The first scheme, based on the construction of the complex root structure of a characteristic polynomial and on the method of quantifier elimination, is capable of detecting the existence of Jacobi stability for the given dynamical system. The second scheme exploits methods for solving semi-algebraic systems and allows one to determine conditions on the parameters under which a given dynamical system has a prescribed number of Jacobi-stable fixed points. Several examples are presented to demonstrate the effectiveness of the proposed algorithmic schemes.

Copyright infringement may occur when a generative model produces samples substantially similar to copyrighted data that it had access to during the training phase. The notion of access usually refers to including copyrighted samples directly in the training dataset, which one may inspect to identify an infringement. We argue that such visual auditing largely overlooks a concealed form of copyright infringement, in which one constructs a disguise that looks drastically different from the copyrighted sample yet still induces the effect of training Latent Diffusion Models on it. Such disguises require only indirect access to the copyrighted material and cannot be distinguished visually, thus easily circumventing current auditing tools. In this paper, we provide a better understanding of such disguised copyright infringement by uncovering the disguise generation algorithm, showing how the disguises can be revealed, and, importantly, how to detect them so as to augment the existing toolbox. Additionally, we introduce a broader notion of acknowledgment for comprehending such indirect access.

Classical numerical schemes have traditionally been employed to solve partial differential equations (PDEs). Recently, neural network-based methods have emerged. Despite these advancements, neural network-based methods, such as physics-informed neural networks (PINNs) and neural operators, exhibit deficiencies in robustness and generalization. To address these issues, numerous studies have integrated classical numerical frameworks with machine learning techniques, incorporating neural networks into parts of traditional numerical methods. In this study, we focus on hyperbolic conservation laws and replace traditional numerical fluxes with neural operators. To this end, we developed loss functions inspired by established numerical schemes for conservation laws and approximated numerical fluxes using Fourier neural operators (FNOs). Our experiments demonstrate that our approach combines the strengths of traditional numerical schemes and FNOs, outperforming standard FNO methods in several respects. For instance, our method is robust, resolution-invariant, and feasible as a data-driven method. In particular, it can make continuous predictions in time and exhibits superior generalization on out-of-distribution (OOD) samples, challenges that existing neural operator methods struggle with.
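As a reference point for the kind of numerical flux a neural operator would replace, here is a minimal Lax-Friedrichs step for the inviscid Burgers equation on a periodic domain. This is the classical scheme only, a sketch under standard CFL assumptions; in the hybrid approach described above, a trained FNO would substitute for the hand-crafted flux function.

```python
import numpy as np

def lax_friedrichs_step(u, dx, dt, f=lambda u: 0.5 * u**2):
    # One explicit step of the Lax-Friedrichs scheme for u_t + f(u)_x = 0
    # (Burgers' flux by default) with periodic boundary conditions.
    alpha = np.max(np.abs(u))               # wave-speed bound for Burgers
    up, um = np.roll(u, -1), np.roll(u, 1)  # periodic neighbors
    flux_r = 0.5 * (f(u) + f(up)) - 0.5 * alpha * (up - u)
    flux_l = 0.5 * (f(um) + f(u)) - 0.5 * alpha * (u - um)
    return u - dt / dx * (flux_r - flux_l)

# Smooth initial data steepening toward a shock on a periodic domain.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x)
dx = 1.0 / n
dt = 0.4 * dx          # CFL-limited time step (|u| <= 1 here)
for _ in range(100):
    u = lax_friedrichs_step(u, dx, dt)
```

Because the scheme is conservative, the cell average of `u` is preserved to rounding error, and the monotone flux keeps the solution bounded; these are the structural properties a flux-learning loss function would aim to retain.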

Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.

High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristics of hyperspectral data. In addition, a large amount of unlabeled data remains an unexploited gold mine for efficient data use. Therefore, we propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify land-cover categories of hyperspectral cubes. Moreover, to take advantage of the large amount of unlabeled data, we adopt a conditional random field to refine the preliminary classification results generated by the GAN. Experimental results on two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy with only a small amount of training data.
