
Deep neural networks (DNNs) have garnered significant attention in various fields of science and technology in recent years. Activation functions define how neurons in a DNN process incoming signals; they are essential for learning non-linear transformations and for performing diverse computations across successive layers of neurons. In the last few years, researchers have investigated the approximation ability of DNNs to explain their power and success. In this paper, we explore the approximation ability of DNNs with a different activation function, called SignReLU. Our theoretical results demonstrate that SignReLU networks outperform rational and ReLU networks in approximation performance. Numerical experiments comparing SignReLU with existing activations such as ReLU, Leaky ReLU, and ELU illustrate its competitive practical performance.
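The abstract does not restate the activation's formula; one form appearing in the literature pairs the identity on the positive axis with the bounded softsign curve $x/(1-x)$ on the negative axis, scaled by a slope parameter $\alpha$. A minimal NumPy sketch under that assumption (check the paper for the exact definition):

```python
import numpy as np

def signrelu(x, alpha=1.0):
    # Assumed form: identity for x >= 0; bounded alpha * x / (1 - x) for x < 0,
    # saturating at -alpha as x -> -inf (check the paper for the exact definition).
    return np.where(x >= 0.0, x, alpha * x / (1.0 - np.minimum(x, 0.0)))

def relu(x):
    return np.maximum(x, 0.0)

def leaky_relu(x, alpha=0.01):
    return np.where(x >= 0.0, x, alpha * x)

def elu(x, alpha=1.0):
    return np.where(x >= 0.0, x, alpha * (np.exp(np.minimum(x, 0.0)) - 1.0))

x = np.linspace(-5.0, 2.0, 8)
for name, f in [("SignReLU", signrelu), ("ReLU", relu),
                ("LeakyReLU", leaky_relu), ("ELU", elu)]:
    print(f"{name:>9s}:", np.round(f(x), 3))
```

Unlike ReLU, the assumed negative branch keeps a non-zero gradient while saturating at $-\alpha$, the kind of behaviour Leaky ReLU and ELU also aim for.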

Related content


Sustainability of common-pool resources hinges on the interplay between human and environmental systems. However, we still lack a comprehensive framework for modelling the extraction of common-pool resources and the cooperation of human agents that can account for the different factors shaping system behavior and outcomes. In particular, we still lack critical values for ensuring resource sustainability under different scenarios. In this paper, we present a novel framework for studying resource extraction and cooperation in human-environmental systems for common-pool resources. We explore how different factors, such as resource availability and the conformity effect, influence the players' decisions and the resource outcomes. We identify critical values for ensuring resource sustainability under various scenarios. We demonstrate that the observed phenomena are robust to the complexity and assumptions of the models, and we discuss the implications of our study for policy and practice, as well as its limitations and directions for future research.
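The abstract leaves the model unspecified, so the following toy simulation is purely illustrative: agents choose a high or low extraction level, the resource regenerates logistically, and strategy updates mix an extraction payoff with a conformity term. Every functional form and parameter below is an assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500
K, r = 1.0, 0.25              # carrying capacity and regeneration rate (assumed)
e_low, e_high = 0.001, 0.004  # per-agent extraction fractions (assumed)
w_pay, beta = 500.0, 1.0      # payoff weight and conformity strength (assumed)

R = K                          # resource stock
coop = rng.random(N) < 0.5     # True = low extraction ("cooperate")
for t in range(T):
    extraction = np.where(coop, e_low, e_high) * R
    R = max(R + r * R * (1.0 - R / K) - extraction.sum(), 0.0)
    frac_coop = coop.mean()
    # Utility = extraction payoff + conformity with the current majority.
    u_coop = w_pay * e_low * R + beta * frac_coop
    u_defect = w_pay * e_high * R + beta * (1.0 - frac_coop)
    p_coop = 1.0 / (1.0 + np.exp(u_defect - u_coop))
    coop = rng.random(N) < p_coop

print(f"final stock {R:.3f}, cooperator fraction {coop.mean():.2f}")
```

Sweeping a parameter such as beta in a loop around this simulation is one way to locate the kind of critical sustainability values the abstract refers to.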

Although there is extensive literature on the application of artificial neural networks (NNs) to quality control (QC), monitoring the conformity of a process to quality specifications with these methods requires at least five QC measurements, which increases the related cost. To explore the application of neural networks to samples of QC measurements of very small size, four one-dimensional (1-D) convolutional neural networks (CNNs) were designed, trained, and tested with datasets of $n$-tuples of simulated standardized normally distributed QC measurements, for $1 \leq n \leq 4$. The designed neural networks were compared to statistical QC functions with equal probabilities of false rejection, applied to samples of the same size. When the $n$-tuples included at least two QC measurements distributed as $\mathcal{N}(\mu, \sigma^2)$, where $0.2 < |\mu| \leq 6.0$ and $1.0 < \sigma \leq 7.0$, the designed neural networks outperformed the respective statistical QC functions. Therefore, 1-D CNNs applied to samples of 2-4 QC measurements can be used to increase the probability of detecting the nonconformity of a process to quality specifications, at a lower cost.
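The network architectures are not described in the abstract; the sketch below is a plausible minimal 1-D CNN for $n$-tuples of standardized QC measurements (here $n = 4$) that outputs a logit for "process nonconforming". All layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class QCNet(nn.Module):
    """Toy 1-D CNN scoring an n-tuple of standardized QC measurements."""
    def __init__(self, n=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=2),   # (B, 1, n) -> (B, 8, n - 1)
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(8 * (n - 1), 16),
            nn.ReLU(),
            nn.Linear(16, 1),                 # logit of P(process nonconforming)
        )

    def forward(self, x):
        return self.net(x)

net = QCNet(n=4)
batch = torch.randn(32, 1, 4)  # 32 simulated 4-tuples, standardized
print(net(batch).shape)        # torch.Size([32, 1])
```

Training would pair this with `torch.nn.BCEWithLogitsLoss` on simulated conforming and nonconforming tuples, analogous to the simulated datasets described above.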

In Bayesian theory, the role of information is central. The influence exerted by prior information on posterior outcomes often jeopardizes Bayesian studies, owing to the potentially subjective nature of the prior choice. When the studied model is not enriched with sufficient a priori information, reference prior theory emerges as a proficient tool. Based on the mutual information criterion, the theory handles the construction of a non-informative prior whose choice can be called objective. We propose a generalization of the definition of mutual information, motivating our choice by an interpretation based on an analogy with Global Sensitivity Analysis. A class of our generalized metrics is studied, and our results reinforce the choice of Jeffreys' prior, which satisfies our extended definition of a reference prior.
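For context, the standard objects that the paper generalizes are the mutual information between parameter and data, which the reference prior maximizes, and Jeffreys' prior built from the Fisher information (shown here for a scalar parameter):

$$ I(\pi) = \int_{\Theta} \pi(\theta) \int_{\mathcal{X}} p(x \mid \theta) \log \frac{p(\theta \mid x)}{\pi(\theta)} \, dx \, d\theta, \qquad \pi_J(\theta) \propto \sqrt{\mathcal{I}(\theta)}, \quad \mathcal{I}(\theta) = -\mathbb{E}_{x \mid \theta}\!\left[\frac{\partial^2}{\partial \theta^2} \log p(x \mid \theta)\right]. $$

The abstract's claim is that Jeffreys' prior remains the reference prior when $I(\pi)$ is replaced by the proposed generalized metrics.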

Bipartite networks are a natural representation of the interactions between entities of two different types. The organization (or topology) of such networks gives insight into the systems they describe as a whole. Here, we rely on motifs, which provide a meso-scale description of the topology. Moreover, we consider the bipartite expected degree distribution (B-EDD) model, which accounts for both the density of the network and possible imbalances between the degrees of the nodes. Under the B-EDD model, we prove the asymptotic normality of the count of any given motif under sparsity conditions. We also provide closed-form expressions for the mean and the variance of this count, which allows us to avoid computationally prohibitive resampling procedures. Based on these results, we define a goodness-of-fit test for the B-EDD model and propose a family of tests for network comparisons. We assess the asymptotic normality of the test statistics and the power of the proposed tests in synthetic experiments and illustrate their use on ecological data sets.
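As a concrete instance, the sketch below counts the simplest non-trivial bipartite motif, the 4-cycle, and compares the observed count with a naive degree-based Monte Carlo null. The closed-form B-EDD moments from the paper replace exactly this kind of resampling; the null used here is only an illustration:

```python
import numpy as np
from scipy.special import comb

rng = np.random.default_rng(1)
A = (rng.random((30, 40)) < 0.15).astype(int)   # bipartite adjacency, rows x columns

def count_4cycles(A):
    M = A @ A.T                                  # shared-column counts per row pair
    iu = np.triu_indices_from(M, k=1)
    return comb(M[iu], 2).sum()                  # choose 2 shared columns per pair

# Naive degree-based null: P(edge ij) ~ d_i * d_j / total edges, clipped to [0, 1].
d_row, d_col = A.sum(axis=1, keepdims=True), A.sum(axis=0, keepdims=True)
P = np.clip(d_row * d_col / A.sum(), 0.0, 1.0)
null = [count_4cycles((rng.random(A.shape) < P).astype(int)) for _ in range(200)]
print(f"observed {count_4cycles(A):.0f}, null mean {np.mean(null):.1f} "
      f"+/- {np.std(null):.1f}")
```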

The elusive nature of gradient-based optimization in neural networks is tied to their loss landscape geometry, which is poorly understood. However, recent work has provided solid evidence that there is essentially no loss barrier between the local solutions of gradient descent, once one accounts for weight permutations that leave the network's computation unchanged. This raises questions for approximate inference in Bayesian neural networks (BNNs), where we are interested in marginalizing over multiple points in the loss landscape. In this work, we first extend the formalism of marginalized loss barriers and solution interpolation to BNNs, before proposing a matching algorithm to search for linearly connected solutions. This is achieved by aligning the distributions of two independent approximate Bayesian solutions with respect to permutation matrices. We build on the results of Ainsworth et al. (2023), reframing the problem as a combinatorial optimization one and using an approximation to the sum of bilinear assignment problems. We then experiment on a variety of architectures and datasets, finding nearly zero marginalized loss barriers for linearly connected solutions.
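The weight-matching building block from Ainsworth et al. reduces, for a single layer, to a linear assignment problem over hidden units; the paper's contribution is to align distributions of Bayesian solutions rather than point estimates. A single-layer point-estimate sketch:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
W_a = rng.normal(size=(64, 10))            # hidden-layer weights of network A
perm_true = rng.permutation(64)
W_b = W_a[perm_true] + 0.01 * rng.normal(size=(64, 10))  # permuted, noisy copy

# Find perm maximizing sum_i <W_b[i], W_a[perm[i]]>: a linear assignment problem.
similarity = W_b @ W_a.T                   # unit-by-unit similarity matrix
_, perm_hat = linear_sum_assignment(-similarity)  # minimize negative similarity
print("recovered the true permutation:", np.array_equal(perm_hat, perm_true))
```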

Identifying replicable signals across different studies provides stronger scientific evidence and more powerful inference. Existing literature on high-dimensional replicability analysis either imposes strong modeling assumptions or has low power. We develop a powerful and robust empirical Bayes approach for high-dimensional replicability analysis. Our method effectively borrows information from different features and studies while accounting for heterogeneity. We show, both empirically and theoretically, that the proposed method has better power than competing methods while controlling the false discovery rate. Analyzing datasets from genome-wide association studies reveals new biological insights that cannot otherwise be obtained using existing methods.
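For contrast, a common non-empirical-Bayes baseline for two-study replicability is the max-P rule: take the larger of the two per-feature p-values and feed it to Benjamini-Hochberg. The sketch below shows that baseline, not the proposed method:

```python
import numpy as np
from scipy.stats import norm

def benjamini_hochberg(p, q=0.05):
    # Standard BH step-up procedure; returns a boolean rejection mask.
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = np.nonzero(below)[0].max() + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

rng = np.random.default_rng(2)
m = 10000
signal = rng.random(m) < 0.05                  # features non-null in both studies
z1 = rng.normal(3.0 * signal, 1.0)             # study-1 z-scores
z2 = rng.normal(3.0 * signal, 1.0)             # study-2 z-scores
p_max = np.maximum(norm.sf(z1), norm.sf(z2))   # max-P combined p-value
print("replicable calls:", benjamini_hochberg(p_max).sum(), "| true:", signal.sum())
```

The conservativeness of max-P on such data is one motivation for information-borrowing approaches like the one the abstract proposes.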

Data on neighbourhood characteristics are not typically collected in epidemiological studies. They are, however, useful in the study of small-area health inequalities. Neighbourhood characteristics are collected in some surveys and could be linked to the data of other studies. We propose to use kriging, based on semi-variogram models, to predict values at non-observed locations, with the aim of constructing bespoke indices of neighbourhood characteristics to be linked to data from epidemiological studies. We perform a simulation study to assess the feasibility of the method, as well as a case study using data from the RECORD study. Apart from having enough observed data at small distances from the non-observed locations, a well-fitting semi-variogram, a larger range, and the absence of nugget effects in the semi-variogram models are factors leading to higher reliability.
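A minimal ordinary-kriging sketch with an exponential semi-variogram, predicting one non-observed location from scattered observations. In practice the nugget, sill, and range would be fitted to the data; here they are fixed assumptions:

```python
import numpy as np

def exp_semivariogram(h, nugget=0.0, sill=1.0, rng_par=2.0):
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng_par))

def ordinary_kriging(coords, values, target, **vg):
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    n = len(values)
    # Ordinary kriging system with a Lagrange multiplier for unbiasedness.
    G = np.empty((n + 1, n + 1))
    G[:n, :n] = exp_semivariogram(d, **vg)
    G[n, :], G[:, n], G[n, n] = 1.0, 1.0, 0.0
    g = np.empty(n + 1)
    g[:n] = exp_semivariogram(np.linalg.norm(coords - target, axis=1), **vg)
    g[n] = 1.0
    w = np.linalg.solve(G, g)
    prediction = w[:n] @ values
    variance = w @ g                 # kriging variance at the target
    return prediction, variance

rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, size=(25, 2))
values = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=25)
print(ordinary_kriging(coords, values, np.array([5.0, 5.0])))
```

The kriging variance returned here is what underlies the reliability considerations in the abstract: it grows when observations near the target are scarce or when the nugget is large relative to the sill.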

Recurrent neural networks (RNNs) have yielded promising results both for recognizing objects in challenging conditions and for modeling aspects of primate vision. However, the representational dynamics of recurrent computations remain poorly understood, especially in large-scale visual models. Here, we studied such dynamics in RNNs trained for object classification on MiniEcoset, a novel subset of ecoset. We report two main insights. First, during inference, representations continued to evolve after correct classification, suggesting a lack of the notion of being ``done with classification''. Second, focusing on ``readout zones'' as a way to characterize the activation trajectories, we observe that misclassified representations exhibit activation patterns with lower L2 norm and are positioned more peripherally in the readout zones. Such arrangements help the misclassified representations move into the correct zones as time progresses. Our findings generalize to networks with lateral and top-down connections, including both additive and multiplicative interactions with the bottom-up sweep. The results therefore contribute to a general understanding of RNN dynamics in naturalistic tasks. We hope that the analysis framework will aid future investigations of other types of RNNs, including the understanding of representational dynamics in primate vision.
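The norm-based observation can be probed with a few lines: given time-resolved representations and a linear readout, compare the L2 norms of correctly and incorrectly classified items. A sketch with random stand-in tensors (all shapes and names are assumptions):

```python
import torch

T, B, D, C = 8, 256, 128, 10       # time steps, batch, feature dim, classes
reps = torch.randn(T, B, D)         # stand-in for RNN representations over time
readout = torch.nn.Linear(D, C)     # linear readout defining the "zones"
labels = torch.randint(0, C, (B,))

final = reps[-1]                    # representations at the last time step
pred = readout(final).argmax(dim=1)
correct = pred == labels
norms = final.norm(dim=1)           # L2 norm of each item's representation
print(f"mean L2 norm - correct: {norms[correct].mean():.3f}, "
      f"misclassified: {norms[~correct].mean():.3f}")
```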

Dynamic networks consist of a sequence of time-varying networks, and detecting network change points is of great importance. Most existing methods focus on detecting abrupt change points, necessitating the assumption that the underlying network probability matrix remains constant between adjacent change points. This paper introduces a new model that allows the network probability matrix to shift continuously, while the latent network structure, represented via the embedding subspace, changes only at certain time points. Two novel statistics are proposed to jointly detect these network subspace change points, followed by a carefully refined detection procedure. Theoretically, we show that the proposed method is asymptotically consistent in terms of change point detection, and we also establish the impossibility region for detecting these network subspace change points. The advantage of the proposed method is further supported by extensive numerical experiments on both synthetic networks and a UK politician social network.
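The idea of a subspace change can be made concrete with a projection distance between the leading singular subspaces of adjacent snapshots; the paper's statistics and detection procedure are more refined, so treat this as an illustration only:

```python
import numpy as np

def subspace_distance(A1, A2, k=3):
    # Projection (sin-theta type) distance between leading-k singular subspaces.
    U1 = np.linalg.svd(A1)[0][:, :k]
    U2 = np.linalg.svd(A2)[0][:, :k]
    return np.linalg.norm(U1 @ U1.T - U2 @ U2.T, ord=2)

rng = np.random.default_rng(4)
n, k = 120, 3

def random_prob_matrix():
    U = np.abs(rng.normal(size=(n, k))) / np.sqrt(k)
    return np.clip(U @ U.T, 0.0, 1.0)

P1, P2 = random_prob_matrix(), random_prob_matrix()
snaps = [(rng.random((n, n)) < P).astype(float)
         for P in [P1, P1, P1, P2, P2, P2]]   # latent subspace changes at t = 3
print([round(subspace_distance(snaps[t], snaps[t + 1]), 3) for t in range(5)])
```

The distance spikes only at the transition between the two latent subspaces, even though every snapshot is a fresh random draw.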

In large-scale systems, centralised techniques for task allocation face fundamental challenges: the number of interactions is limited by resource constraints such as those on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is itself difficult to scale. In this paper we present four algorithms to solve these problems. Their combination enables each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum within the system configurations considered. It provides 5x better performance recovery than approaches with no knowledge retention when system connectivity is impacted, and it has been tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
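The "explore more when the current strategy seems far from optimal" idea can be sketched as an $\epsilon$-greedy learner whose exploration rate shrinks as its value estimates approach the best reward it has seen. All names and update rules below are illustrative, not the paper's four algorithms:

```python
import random

class AdaptiveAgent:
    def __init__(self, n_actions, lr=0.1):
        self.q = [0.0] * n_actions  # per-action value estimates
        self.lr = lr
        self.best_seen = 1e-9       # best reward observed so far

    def epsilon(self):
        # Explore more when value estimates lag the best reward seen.
        confidence = max(self.q) / self.best_seen
        return min(1.0, max(0.05, 1.0 - confidence))

    def act(self):
        if random.random() < self.epsilon():
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)

    def learn(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])
        self.best_seen = max(self.best_seen, reward)

# Toy environment: allocating to agent i pays a noisy reward around i / n.
agent, n = AdaptiveAgent(5), 5
for step in range(2000):
    a = agent.act()
    agent.learn(a, max(0.0, random.gauss(a / n, 0.1)))
print([round(v, 2) for v in agent.q], "epsilon:", round(agent.epsilon(), 2))
```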
