
Multilayer networks are a network data structure in which elements in a population of interest have multiple modes of interaction or relation, represented by multiple networks called layers. We propose a novel class of models for cross-layer dependence in multilayer networks, aiming to learn how interactions in one or more layers may influence interactions in other layers. To this end, we develop a class of network separable models, which separate the network formation process from the layer formation process. Within this framework, existing single-layer network models can be extended to multilayer network models with cross-layer dependence. We establish non-asymptotic bounds on the error of estimators and demonstrate rates of convergence for both maximum likelihood and maximum pseudolikelihood estimators in scenarios of increasing parameter dimension. We additionally establish non-asymptotic error bounds on the multivariate normal approximation and elaborate a method for model selection which controls the false discovery rate. Simulation studies demonstrate that our framework and method work well in realistic settings likely to be encountered in applications. Lastly, we illustrate the utility of our method through an application to the Lazega lawyers network.
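A toy simulation can make the separability idea concrete: dyads are first formed (network formation), and only then is the pattern of layers drawn (layer formation), with a single interaction parameter inducing cross-layer dependence. This is a hypothetical sketch for illustration only, not the model class proposed in the paper; the function name and parameterization are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_separable_multilayer(n, p_edge, a, b):
    """Toy 'separable' two-layer network: first decide whether a dyad is
    active at all, then draw its layer pattern z in {(0,1),(1,0),(1,1)}
    with probability proportional to exp(a[0]*z0 + a[1]*z1 + b*z0*z1),
    so b > 0 encodes positive cross-layer dependence."""
    patterns = np.array([[0, 1], [1, 0], [1, 1]])
    logw = patterns @ a + b * patterns[:, 0] * patterns[:, 1]
    prob = np.exp(logw) / np.exp(logw).sum()
    A = np.zeros((2, n, n), dtype=int)            # one adjacency matrix per layer
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:             # network formation step
                z = patterns[rng.choice(3, p=prob)]  # layer formation step
                A[:, i, j] = A[:, j, i] = z
    return A

A = sample_separable_multilayer(50, 0.3, a=np.array([0.0, 0.0]), b=2.0)
```

With `b = 2.0`, an active dyad is much more likely to carry a tie in both layers than in only one, which is the kind of cross-layer dependence the abstract describes.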

Related content


The universal approximation property of width-bounded networks has been studied as a dual of the classical universal approximation theorem for depth-bounded ones. There have been several attempts to characterize the minimum width $w_{\min}$ enabling the universal approximation property; however, only a few of them found the exact values. In this work, we show that the minimum width for the universal approximation of $L^p$ functions from $[0,1]^{d_x}$ to $\mathbb R^{d_y}$ is exactly $\max\{d_x,d_y,2\}$ if the activation function is ReLU-like (e.g., ReLU, GELU, Softplus). Compared to the known result $w_{\min}=\max\{d_x+1,d_y\}$ when the domain is $\mathbb R^{d_x}$, our result is the first to show that approximation on a compact domain requires smaller width than on $\mathbb R^{d_x}$. We next prove a lower bound on $w_{\min}$ for uniform approximation using general activation functions including ReLU: $w_{\min}\ge d_y+1$ if $d_x<d_y\le2d_x$. Together with our first result, this shows a dichotomy between $L^p$ and uniform approximations for general activation functions and input/output dimensions.
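The two exact values quoted above can be encoded directly; the function names are ours, and the formulas simply restate the abstract's results.

```python
def min_width_compact_Lp(d_x, d_y):
    """Exact minimum width for L^p universal approximation of functions
    [0,1]^{d_x} -> R^{d_y} with ReLU-like activations, as stated above."""
    return max(d_x, d_y, 2)

def min_width_unbounded_Lp(d_x, d_y):
    """Previously known exact minimum width when the domain is all of R^{d_x}."""
    return max(d_x + 1, d_y)

# On a compact domain the required width can be strictly smaller:
# e.g. d_x = 3, d_y = 1 gives width 3, versus width 4 on R^3.
```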

We investigate the dynamics of chemical reaction networks (CRNs) with the goal of deriving an upper bound on their reaction rates. This task is challenging due to the nonlinear nature and discrete structure inherent in CRNs. To address this, we employ an information geometric approach, using the natural gradient, to develop a nonlinear system that yields an upper bound for CRN dynamics. We validate our approach through numerical simulations, demonstrating faster convergence in a specific class of CRNs. This class is characterized by the number of chemicals, the maximum value of stoichiometric coefficients of the chemical reactions, and the number of reactions. We also compare our method to a conventional approach, showing that the latter cannot provide an upper bound on reaction rates of CRNs. While our study focuses on CRNs, the ubiquity of hypergraphs in fields from natural sciences to engineering suggests that our method may find broader applications, including in information science.
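The bound itself is not reproduced here, but the dynamics being bounded are standard mass-action kinetics, which can be written down directly from the stoichiometry: $v_j(x) = k_j \prod_i x_i^{\alpha_{ij}}$ and $\dot x = (\beta - \alpha)\,v(x)$. The sketch below encodes a hypothetical two-reaction CRN ($A + B \rightleftharpoons C$); the matrices and rate constants are illustrative choices, not taken from the paper.

```python
import numpy as np

# Toy CRN: A + B -> C (rate k1), C -> A + B (rate k2).
# Rows index species (A, B, C); columns index reactions.
alpha = np.array([[1, 0],    # reactant stoichiometry
                  [1, 0],
                  [0, 1]])
beta = np.array([[0, 1],     # product stoichiometry
                 [0, 1],
                 [1, 0]])
k = np.array([2.0, 1.0])     # rate constants

def rates(x):
    """Mass-action reaction rates v_j = k_j * prod_i x_i^{alpha_ij}."""
    return k * np.prod(x[:, None] ** alpha, axis=0)

def drift(x):
    """CRN dynamics dx/dt = (beta - alpha) @ v(x)."""
    return (beta - alpha) @ rates(x)
```

At unit concentrations `x = [1, 1, 1]`, the rates are `[2.0, 1.0]` and the drift is `[-1, -1, 1]`: species A and B are consumed and C is produced, as expected for the forward reaction dominating.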

The analysis of functional connectivity (FC) networks in resting-state functional magnetic resonance imaging (rs-fMRI) has recently evolved toward a dynamic FC (dFC) approach, in which the functional networks are presumed to vary throughout a scanning session. Central challenges in dFC analysis involve partitioning rs-fMRI into segments of static FC and achieving high replicability while controlling for false positives. In this work we propose Rank-Adaptive Covariance Changepoint detection (RACC), a changepoint detection method to address these challenges. RACC utilizes a binary segmentation procedure with novel test statistics able to detect changes in covariances driven by low-rank latent factors, which are useful for understanding changes occurring within and between functional networks. A permutation scheme is used to address the high dimensionality of the data and to provide false positive control. RACC improves upon existing rs-fMRI changepoint detection methods by explicitly controlling Type I error and improving sensitivity in estimating dFC at the whole-brain level. We conducted extensive simulation studies across a variety of data-generating scenarios, and applied RACC to an rs-fMRI dataset of subjects with schizophrenia spectrum disorder and healthy controls to highlight our findings.
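RACC's statistics and permutation scheme are more involved than can be shown here, but the overall recipe (scan candidate splits with a covariance-contrast statistic, then calibrate it by permutation) can be sketched in simplified form. The statistic below is a plain Frobenius-norm covariance CUSUM, not RACC's rank-adaptive statistic, and all names and tuning constants are illustrative.

```python
import numpy as np

def cov_cusum_stat(X, t):
    """Frobenius distance between covariances before and after time t,
    scaled by segment sizes (a simplified covariance CUSUM statistic)."""
    n = len(X)
    C1, C2 = np.cov(X[:t].T), np.cov(X[t:].T)
    return np.sqrt(t * (n - t) / n) * np.linalg.norm(C1 - C2)

def detect_changepoint(X, n_perm=100, min_seg=10, seed=0):
    """One binary-segmentation step with a permutation p-value.
    Returns (best split index, p-value)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    cands = range(min_seg, n - min_seg)
    stats = [cov_cusum_stat(X, t) for t in cands]
    t_hat, s_obs = min_seg + int(np.argmax(stats)), max(stats)
    # Permuting time points destroys any changepoint structure,
    # giving a null distribution for the maximal statistic.
    null = [max(cov_cusum_stat(X[rng.permutation(n)], t) for t in cands)
            for _ in range(n_perm)]
    pval = (1 + sum(s >= s_obs for s in null)) / (1 + n_perm)
    return t_hat, pval

# Simulated series with a covariance change at t = 100.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(0, 3, (100, 3))])
t_hat, pval = detect_changepoint(X)
```

A full binary segmentation would recurse on the two sub-segments whenever the p-value is significant.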

Ordinary differential equations (ODEs) are widely used to model complex dynamics that arise in biology, chemistry, engineering, finance, physics, and other fields. Calibrating a complicated ODE system from noisy data is generally very difficult. In this work, we propose a two-stage nonparametric approach to address this problem. We first extract the de-noised data and their higher-order derivatives using a boundary kernel method, and then feed them into a sparsely connected deep neural network with ReLU activation function. Our method is able to recover the ODE system without being subject to the curse of dimensionality or complicated ODE structure. When the ODE possesses a general modular structure, with each modular component involving only a few input variables, and the network architecture is properly chosen, our method is proven to be consistent. Theoretical properties are corroborated by an extensive simulation study that demonstrates the validity and effectiveness of the proposed method. Finally, we use our method to simultaneously characterize the growth rate of COVID-19 infection cases across the 50 states of the USA.
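The first stage (recovering smoothed values and derivatives from noisy observations) can be illustrated with a generic local polynomial smoother. This is a stand-in for the boundary kernel method described above, not the paper's estimator; the Gaussian weights and bandwidth are arbitrary illustrative choices.

```python
import numpy as np

def local_poly_smooth(t, y, t0, h, deg=2):
    """Local polynomial fit around t0 with bandwidth h (Gaussian weights).
    Returns (smoothed value, first derivative) at t0: the fitted
    coefficients of 1 and (t - t0) estimate f(t0) and f'(t0)."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)
    T = np.vander(t - t0, deg + 1, increasing=True)  # columns: 1, (t-t0), (t-t0)^2
    W = np.diag(w)
    coef = np.linalg.solve(T.T @ W @ T, T.T @ W @ y)
    return coef[0], coef[1]

# De-noise y = sin(t) + noise and estimate its derivative at t0 = 1.
t = np.linspace(0, 2, 200)
rng = np.random.default_rng(0)
y = np.sin(t) + 0.01 * rng.standard_normal(200)
f0, f1 = local_poly_smooth(t, y, 1.0, 0.2)
```

The recovered pair `(f0, f1)` approximates `(sin(1), cos(1))`; in the two-stage approach, such pairs over a grid of time points would form the inputs and targets fed to the neural network.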

In the analysis of spatial point patterns on linear networks, a critical statistical objective is estimating the first-order intensity function, representing the expected number of points within specific subsets of the network. Typically, non-parametric approaches employing heat kernels are used for this estimation. However, a significant challenge arises in selecting appropriate bandwidths before conducting the estimation. We study an intensity estimation mechanism that overcomes this limitation using adaptive estimators, where bandwidths adapt to the data points in the pattern. While adaptive estimators have been explored in other contexts, their application to linear networks remains underexplored. We investigate the adaptive intensity estimator in the linear network context and extend a partitioning technique based on bandwidth quantiles to expedite the estimation process significantly. Through simulations, we demonstrate the efficacy of this technique, showing that the partition estimator closely approximates the direct estimator while drastically reducing computation time. As a practical application, we employ our method to estimate the intensity of traffic accidents in a neighbourhood of Medellín, Colombia, showcasing its real-world relevance and efficiency.
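On a single segment, the adaptive idea reduces to giving each observed point its own bandwidth, inversely related to a pilot intensity estimate. The sketch below is a one-dimensional Abramson-style adaptive estimator used as a stand-in for the linear-network setting; it ignores network geometry and edge effects, and the bandwidth rule is one common convention rather than the paper's.

```python
import numpy as np

def gauss_kernel(u, h):
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))

def adaptive_intensity(points, grid, h0):
    """Adaptive kernel intensity on a line segment (a 1-D stand-in for a
    network edge): each data point gets its own bandwidth, inversely
    proportional to the square root of a pilot fixed-bandwidth estimate
    at that point, normalized by the geometric mean (Abramson's rule)."""
    pilot = gauss_kernel(points[:, None] - points[None, :], h0).sum(1)
    lam = np.maximum(pilot, 1e-12)
    g = np.exp(np.mean(np.log(lam)))              # geometric-mean normalizer
    h = h0 * np.sqrt(g / lam)                     # per-point bandwidths
    return gauss_kernel(grid[:, None] - points[None, :], h[None, :]).sum(1)

# A tight cluster near 0.3 plus sparse background points on [0, 1].
rng = np.random.default_rng(0)
points = np.concatenate([rng.normal(0.3, 0.02, 50), rng.uniform(0, 1, 10)])
grid = np.linspace(0, 1, 101)
lam = adaptive_intensity(points, grid, h0=0.05)
```

Because bandwidths shrink where the pilot intensity is high, the cluster is resolved sharply while the sparse background stays smooth; the quantile-partitioning speedup would group points with similar bandwidths before summing kernels.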

Graph sparsification is an area of interest in computer science and applied mathematics. Sparsification of a graph, in general, aims to reduce the number of edges in the network while preserving specific properties of the graph, like cuts and subgraph counts. Computing the sparsest cuts of a graph is known to be NP-hard, and sparsification routines exist for generating linear-sized sparsifiers in almost quadratic running time $O(n^{2 + \epsilon})$. Consequently, obtaining a sparsifier can be a computationally demanding task, and the complexity varies based on the level of sparsity required. In this study, we extend the concept of sparsification to the realm of reaction-diffusion complex systems. We aim to address the challenge of reducing the number of edges in the network while preserving the underlying flow dynamics. To tackle this problem, we adopt a relaxed approach considering only a subset of trajectories. We map the network sparsification problem to a data assimilation problem on a Reduced Order Model (ROM) space, with constraints targeted at preserving the eigenmodes of the Laplacian matrix under perturbations. The Laplacian matrix ($L = D - A$) is the difference between the diagonal matrix of degrees ($D$) and the graph's adjacency matrix ($A$). We propose approximations to the eigenvalues and eigenvectors of the Laplacian matrix subject to perturbations for computational feasibility, and include a custom function based on these approximations as a constraint in the data assimilation framework. We demonstrate the extension of our framework to achieve sparsity in parameter sets for Neural Ordinary Differential Equations (neural ODEs).
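The Laplacian construction and a first-order approximation to its eigenvalues under a small perturbation can be written compactly. The formula below is the standard first-order result $\lambda_i(L+\Delta L) \approx \lambda_i + v_i^\top \Delta L\, v_i$ for symmetric matrices with well-separated eigenvalues; the paper's actual approximations may differ.

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = D - A."""
    return np.diag(A.sum(1)) - A

def eig_first_order(L, dL):
    """First-order approximation to the eigenvalues of L + dL:
    lambda_i' ~= lambda_i + v_i^T dL v_i, valid for small symmetric
    perturbations when the eigenvalues are well separated."""
    lam, V = np.linalg.eigh(L)
    return lam + np.einsum('ij,jk,ki->i', V.T, dL, V)

# Path graph on 4 nodes; slightly strengthen the edge (0, 1).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
L = laplacian(A)
dA = np.zeros((4, 4)); dA[0, 1] = dA[1, 0] = 0.01
dL = laplacian(dA)
approx = eig_first_order(L, dL)
exact = np.linalg.eigvalsh(L + dL)
```

For a perturbation of size $\epsilon$, the first-order approximation errs only at order $\epsilon^2$, which is what makes it attractive as a cheap constraint inside an optimization loop.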

Complex networks are used to model many real-world systems. However, the dimensionality of these systems can make them challenging to analyze. Dimensionality reduction techniques like proper orthogonal decomposition (POD) can be used in such cases; however, the resulting surrogate models are susceptible to perturbations in the input data. We propose an algorithmic framework that combines techniques from pattern recognition (PR) and stochastic filtering theory to enhance the output of such models. The results of our study show that our method can improve the accuracy of the surrogate model under perturbed inputs. Deep neural networks (DNNs) are susceptible to adversarial attacks, but recent research has revealed that neural ordinary differential equations (neural ODEs) exhibit robustness in specific applications. We therefore benchmark our algorithmic framework against a neural ODE-based approach as a reference.
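POD itself is a small computation: the leading left singular vectors of a snapshot matrix give the optimal rank-$r$ linear basis for the snapshot set. A minimal sketch (only the reduction step, not the paper's filtering pipeline):

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper Orthogonal Decomposition: the r leading left singular
    vectors of the snapshot matrix form the best rank-r basis for the
    snapshots in the least-squares sense."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

def pod_project(snapshots, basis):
    """Reconstruct snapshots from their coordinates in the POD basis."""
    return basis @ (basis.T @ snapshots)

# Rank-2 snapshot matrix: POD with r = 2 reconstructs it exactly.
rng = np.random.default_rng(0)
X = (np.outer(rng.standard_normal(30), rng.standard_normal(100))
     + np.outer(rng.standard_normal(30), rng.standard_normal(100)))
basis, s = pod_basis(X, 2)
err = np.linalg.norm(X - pod_project(X, basis)) / np.linalg.norm(X)
```

In practice the singular values `s` indicate how much energy each mode captures, which guides the choice of `r`; the filtering step described above would then correct the reduced coordinates under perturbed inputs.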

We consider the weighted least squares spline approximation of a noisy dataset. By interpreting the weights as a probability distribution, we maximize the associated entropy subject to the constraint that the mean squared error is prescribed to a desired (small) value. Tuning this prescribed error yields a robust regression method that automatically detects and removes outliers from the data during the fitting procedure by assigning them a very small weight. We discuss the use of both spline functions and spline curves. A number of numerical illustrations are included to demonstrate the potential of the maximum-entropy approach in different application fields.
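Maximizing the entropy $-\sum_i w_i \log w_i$ subject to $\sum_i w_i = 1$ and $\sum_i w_i r_i^2 = E$ gives, by the standard Lagrangian argument, weights of the form $w_i \propto e^{-\lambda r_i^2}$, so outliers with large residuals receive exponentially small weight. The sketch below applies this idea to a polynomial fit rather than a spline, with the multiplier found by bisection; all names and tolerances are illustrative, and it assumes the target MSE exceeds the smallest squared residual.

```python
import numpy as np

def maxent_weights(res2, target_mse):
    """Maximum-entropy weights with sum(w) = 1 and sum(w * res2) = target_mse:
    w_i proportional to exp(-lam * res2_i), lam found by bisection."""
    def wmse(lam):
        w = np.exp(-lam * (res2 - res2.min()))   # shift for numerical stability
        w /= w.sum()
        return (w * res2).sum(), w
    lo, hi = 0.0, 1.0
    while wmse(hi)[0] > target_mse:              # grow bracket until feasible
        hi *= 2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if wmse(mid)[0] > target_mse else (lo, mid)
    return wmse(0.5 * (lo + hi))[1]

def robust_polyfit(x, y, deg, target_mse, iters=20):
    """Iteratively reweighted polynomial fit (a simple stand-in for the
    spline fit described above): outliers end up with vanishing weight."""
    w = np.full(len(x), 1.0 / len(x))
    for _ in range(iters):
        coef = np.polyfit(x, y, deg, w=np.sqrt(w))
        res2 = (np.polyval(coef, x) - y) ** 2
        w = maxent_weights(res2, target_mse)
    return coef, w

# Straight line y = 2x + 1 with small noise and two gross outliers.
x = np.linspace(0, 1, 50)
rng = np.random.default_rng(0)
y = 2 * x + 1 + 0.02 * rng.standard_normal(50)
y[10] += 5.0; y[30] -= 5.0
coef, w = robust_polyfit(x, y, deg=1, target_mse=0.01)
```

The fit recovers the slope and intercept of the clean line, and the two planted outliers end up with near-zero weight.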

Testing cross-sectional independence in panel data models is of fundamental importance in econometric analysis with high-dimensional panels. Recently, econometricians have begun to turn their attention to the problem in the presence of serial dependence. The existing procedure for testing cross-sectional independence under serial correlation is based on the sum of the sample cross-sectional correlations; it generally performs well when the alternative has dense cross-sectional correlations, but suffers from low power against sparse alternatives. To deal with sparse alternatives, we propose a test based on the maximum of the squared sample cross-sectional correlations. Furthermore, we propose a combined test that combines the p-values of the max-based and sum-based tests and performs well under both dense and sparse alternatives. The combined test relies on the asymptotic independence of the max-based and sum-based test statistics, which we establish rigorously. We show that the proposed max-based and combined tests have attractive theoretical properties and demonstrate their superior performance via extensive simulations. We apply the two new tests to the weekly returns on the securities in the S\&P 500 index under the Fama-French three-factor model, and confirm the usefulness of the proposed combined test in detecting cross-sectional dependence.
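When two test statistics are asymptotically independent, their p-values can be pooled with any standard combination rule. The sketch below uses Fisher's method as one such rule (the paper's combined test may use a different rule): under the null, $T = -2(\log p_1 + \log p_2)$ is chi-square with 4 degrees of freedom, whose survival function has the closed form $e^{-t/2}(1 + t/2)$.

```python
import math

def fisher_combine(p_max, p_sum):
    """Fisher's method for two independent p-values: T = -2(log p1 + log p2)
    is chi-square with 4 df under the null; its survival function is
    exp(-t/2) * (1 + t/2), so no special-function library is needed."""
    t = -2.0 * (math.log(p_max) + math.log(p_sum))
    return math.exp(-t / 2) * (1 + t / 2)

# Two moderately small p-values combine into strong evidence.
p_combined = fisher_combine(0.05, 0.05)
```

Two p-values of 0.05 combine to roughly 0.017, illustrating how agreement between the max-based and sum-based tests strengthens the overall conclusion.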

We are upgrading the Python version of RTNI, which symbolically integrates tensor networks over Haar-distributed unitary matrices. The new version, PyRTNI2, can also treat Haar-distributed orthogonal matrices and real and complex Gaussian tensors. Moreover, it can export tensor networks in the format of TensorNetwork, so that one can make further calculations with concrete tensors, even for low dimensions, where the Weingarten functions differ from those for high dimensions. The tutorial notebooks can be found at GitHub: https://github.com/MotohisaFukuda/PyRTNI2. In this paper, we explain the mathematics behind the program and show what kinds of tensor network calculations can be made with it. For the former, we interpret the element-wise moment calculus of the above random matrices and tensors in terms of tensor network diagrams, and argue that this view is natural, relating delta functions in the calculus to edges in tensor network diagrams.
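Symbolic Weingarten calculus is what RTNI automates, but the simplest moment identity can be checked numerically: for a Haar-distributed $d \times d$ unitary $U$, $\mathbb{E}[U_{ij}\overline{U_{kl}}] = \delta_{ik}\delta_{jl}/d$. The Monte Carlo sketch below samples Haar unitaries by QR decomposition of a complex Ginibre matrix (with the usual phase correction) and checks $\mathbb{E}[|U_{00}|^2] = 1/d$; it uses plain NumPy, not PyRTNI2.

```python
import numpy as np

def haar_unitary(d, rng):
    """Sample a Haar-distributed d x d unitary via the QR decomposition
    of a complex Ginibre matrix, with the standard phase correction
    that makes the distribution exactly Haar."""
    Z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    phases = np.diagonal(R) / np.abs(np.diagonal(R))
    return Q * phases                     # rescale columns by unit phases

# Monte Carlo estimate of E[|U_00|^2], which should be close to 1/d.
rng = np.random.default_rng(0)
d, n = 3, 10000
m = np.mean([abs(haar_unitary(d, rng)[0, 0]) ** 2 for _ in range(n)])
```

The symbolic output of RTNI/PyRTNI2 gives such moments exactly (as Weingarten-function expressions); the numerical check above is only a sanity test of the first-moment identity.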
