
Two-layer computer simulators are commonly used to mimic multi-physics phenomena or systems. Usually, the outputs of the first-layer simulator (also called the inner simulator) form part of the inputs of the second-layer simulator (also called the outer simulator). Designing experiments that account for the space-filling properties of the inner and outer simulators simultaneously is a significant challenge that has received scant attention in the literature. To address this problem, we propose a new sequential optimal Latin hypercube design (LHD) based on the maximin integrating mixed distance criterion, together with a sequential algorithm for efficiently generating such designs. Numerical simulation results show that the new method effectively improves the space-filling property of the outer computer inputs. A case study on composite structure assembly simulation demonstrates that the proposed method outperforms the benchmark methods.
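
For readers unfamiliar with the maximin ingredient that the proposed design builds on, the sketch below generates random Latin hypercube designs and keeps the one with the largest minimum pairwise distance. This is a minimal illustration of the classical maximin distance criterion only, not the paper's integrating mixed distance criterion or its sequential algorithm.

```python
# Minimal sketch: best-of-many random Latin hypercube designs under the
# classical maximin (separation) distance. The run count, dimensions, and
# random-search strategy are illustrative assumptions.
import numpy as np

def random_lhd(n, d, rng):
    """One random Latin hypercube design with n runs in [0, 1]^d."""
    # Each column is a random permutation of the n strata, jittered within strata.
    perms = np.array([rng.permutation(n) for _ in range(d)]).T
    return (perms + rng.random((n, d))) / n

def min_pairwise_distance(X):
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return dist[np.triu_indices(len(X), k=1)].min()

rng = np.random.default_rng(0)
best = max((random_lhd(20, 3, rng) for _ in range(500)), key=min_pairwise_distance)
print("maximin separation:", min_pairwise_distance(best))
```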

Related content


Leveraging low-power, event-driven computation and inherent temporal dynamics, spiking neural networks (SNNs) are potentially ideal for processing dynamic, asynchronous signals from event-based sensors. However, owing to challenges in training and restrictions in architectural design, there are few examples of SNNs that are competitive with artificial neural networks (ANNs) in event-based dense prediction. In this paper, we present an efficient spiking encoder-decoder network designed for large-scale event-based semantic segmentation tasks, obtained by optimizing the encoder with a hierarchical search method. To enhance learning from dynamic event streams, we harness the inherent adaptive threshold of spiking neurons to modulate network activation. Moreover, we introduce a dual-path Spiking Spatially-Adaptive Modulation (SSAM) block, specifically designed to enhance the representation of sparse events and thereby considerably improve network performance. Our proposed network achieves a 72.57% mean intersection over union (MIoU) on the DDD17 dataset and a 57.22% MIoU on the recently introduced, larger DSEC-Semantic dataset. This surpasses the current state-of-the-art ANNs by 4% while consuming significantly fewer computational resources. To the best of our knowledge, this is the first study to demonstrate SNNs outperforming ANNs on demanding event-based semantic segmentation tasks, establishing the vast potential of SNNs in event-based vision. Our source code will be made publicly accessible.
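
As background for the adaptive-threshold mechanism mentioned above, the sketch below shows a leaky integrate-and-fire neuron whose firing threshold rises after each spike and decays back to baseline. The constants and reset rule are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a leaky integrate-and-fire neuron with an adaptive
# threshold. All constants below are made-up illustrations.
import numpy as np

def adaptive_lif(inputs, tau_mem=0.9, tau_thr=0.95, thr0=1.0, thr_jump=0.5):
    v, thr, spikes = 0.0, thr0, []
    for x in inputs:
        v = tau_mem * v + x                  # leaky membrane integration
        s = float(v >= thr)                  # spike if potential crosses threshold
        v = v * (1.0 - s)                    # hard reset on spike
        # Threshold jumps up after a spike and decays back toward thr0,
        # which damps activation under dense input streams.
        thr = tau_thr * thr + (1 - tau_thr) * thr0 + thr_jump * s
        spikes.append(s)
    return np.array(spikes)

print(adaptive_lif(np.full(20, 0.4)))
```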

Exploring the origin and properties of magnetic fields is crucial to the development of fields such as physics, astronomy and meteorology. We focus on the edge element approximation and theoretical analysis of a celestial dynamo system with quasi-vacuum boundary conditions. The system not only ensures that the magnetic field on the spherical shell is generated by the dynamo model, but is also convenient for the application of the edge element method. We establish the existence, uniqueness and stability of the solution to the system via a fixed point theorem. We then approximate the system using the edge element method, which is particularly efficient for electromagnetic field problems, and discuss the stability of the corresponding discrete scheme; its convergence is demonstrated by subsequent numerical tests. Finally, we simulate the three-dimensional time evolution of the spherical interface dynamo model, and the characteristics of the simulated magnetic field are consistent with existing work.
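
As general background, dynamo models of this kind are built around the magnetic induction equation; the paper's full interface system and its quasi-vacuum boundary conditions are more detailed than this core equation, so the display below is context rather than the authors' exact formulation.

```latex
% Standard magnetic induction equation underlying spherical dynamo models,
% with velocity field u and magnetic diffusivity eta; the divergence-free
% constraint accompanies it.
\begin{aligned}
\frac{\partial \mathbf{B}}{\partial t}
  &= \nabla \times (\mathbf{u} \times \mathbf{B})
   - \nabla \times \left( \eta \, \nabla \times \mathbf{B} \right), \\
\nabla \cdot \mathbf{B} &= 0 .
\end{aligned}
```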

In this paper, we present a method to encrypt dynamic controllers that can be implemented through most homomorphic encryption schemes, including somewhat, leveled fully, and fully homomorphic encryption. To this end, we represent the output of the given controller as a linear combination of a fixed number of previous inputs and outputs. As a result, the encrypted controller involves only a limited number of homomorphic multiplications on each piece of encrypted data, assuming that the output is re-encrypted and transmitted back from the actuator. Guidance on parameter choice is also provided, ensuring that the encrypted controller achieves predefined performance over an infinite time horizon. Furthermore, we propose a customization of the method for Ring-Learning With Errors (Ring-LWE) based cryptosystems, in which a vector of messages can be encrypted into a single ciphertext and operated on simultaneously, reducing computation and communication loads. Unlike previous results, the proposed customization requires no extra algorithms such as rotation, beyond basic addition and multiplication. Simulation results demonstrate the effectiveness of the proposed method.
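
To make the key representation concrete, the sketch below realizes a dynamic controller as a fixed-length linear recurrence over past inputs and outputs (an ARX-style form). This is what bounds the number of homomorphic operations per encrypted value; the coefficients here are made-up illustrations, and the comments mark where homomorphic operations would replace plaintext arithmetic.

```python
# Hedged sketch: controller output as a linear combination of a fixed
# number of previous outputs u and measurements y. Coefficients a, b are
# hypothetical, not from the paper.
from collections import deque

a = [0.5, -0.1]          # weights on previous outputs u(t-1), u(t-2)
b = [1.2, 0.3]           # weights on previous measurements y(t-1), y(t-2)

u_hist = deque([0.0, 0.0], maxlen=2)
y_hist = deque([0.0, 0.0], maxlen=2)

def controller_step(y_new):
    # Over encrypted data, these products and sums become homomorphic ops;
    # re-encrypting u at the actuator keeps the multiplicative depth of
    # every ciphertext bounded, as the abstract describes.
    u = sum(ai * ui for ai, ui in zip(a, u_hist)) + \
        sum(bi * yi for bi, yi in zip(b, y_hist))
    u_hist.appendleft(u)
    y_hist.appendleft(y_new)
    return u

for y in [1.0, 0.8, 0.5]:
    print(controller_step(y))
```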

Graph neural networks (GNNs) have gained increasing popularity in recent years owing to their capability and flexibility in modeling complex graph-structured data. Among graph learning methods, hypergraph learning is a technique for exploring implicit higher-order correlations when training the embedding space of the graph. In this paper, we propose a hypergraph learning framework named LFH that is capable of dynamic hyperedge construction and attentive embedding updates that exploit the heterogeneous attributes of the graph. Specifically, in our framework, high-quality initial node embeddings are first generated by a pairwise fusion strategy that utilizes explicit graph structure information. Afterwards, a hypergraph is constructed through dynamic grouping of implicit hyperedges, followed by a type-specific hypergraph learning process. To evaluate the effectiveness of the proposed framework, we conduct comprehensive experiments on several popular datasets against eleven state-of-the-art models on both node classification and link prediction tasks, covering homogeneous pairwise graph learning, heterogeneous pairwise graph learning, and hypergraph learning. The experimental results demonstrate a significant performance gain (on average 12.5% in node classification and 13.3% in link prediction) over recent state-of-the-art methods.
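
One common way to construct implicit hyperedges dynamically is to group each node with its nearest neighbours in the current embedding space; the sketch below builds the resulting node-by-hyperedge incidence matrix. This illustrates the general idea only; LFH's grouping and type-specific learning are more elaborate.

```python
# Illustrative sketch: one hyperedge per node, containing the node and its
# k nearest neighbours in embedding space. k and the distance are assumptions.
import numpy as np

def knn_hyperedges(emb, k=3):
    # Pairwise Euclidean distances between node embeddings.
    d = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # exclude self from neighbour search
    n = len(emb)
    H = np.zeros((n, n))                     # incidence: rows = nodes, cols = hyperedges
    for e in range(n):
        members = np.argsort(d[e])[:k]       # k nearest neighbours of node e
        H[e, e] = 1.0
        H[members, e] = 1.0
    return H

emb = np.random.default_rng(1).normal(size=(6, 4))
print(knn_hyperedges(emb).astype(int))
```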

The $\boldsymbol{\beta}$-model for random graphs is commonly used for representing pairwise interactions in a network with degree heterogeneity. Going beyond pairwise interactions, Stasi et al. (2014) introduced the hypergraph $\boldsymbol{\beta}$-model for capturing degree heterogeneity in networks with higher-order (multi-way) interactions. In this paper we initiate the rigorous study of the hypergraph $\boldsymbol{\beta}$-model with multiple layers, which allows for hyperedges of different sizes across the layers. To begin with, we derive the rates of convergence of the maximum likelihood (ML) estimate and establish their minimax rate optimality. We also derive the limiting distribution of the ML estimate and construct asymptotically valid confidence intervals for the model parameters. Next, we consider the goodness-of-fit problem in the hypergraph $\boldsymbol{\beta}$-model. Specifically, we establish the asymptotic normality of the likelihood ratio (LR) test under the null hypothesis, derive its detection threshold, and also its limiting power at the threshold. Interestingly, the detection threshold of the LR test turns out to be minimax optimal, that is, all tests are asymptotically powerless below this threshold. The theoretical results are further validated in numerical experiments. In addition to developing the theoretical framework for estimation and inference for hypergraph $\boldsymbol{\beta}$-models, the above results fill a number of gaps in the graph $\boldsymbol{\beta}$-model literature, such as the minimax optimality of the ML estimates and the non-null properties of the LR test, which, to the best of our knowledge, have not been studied before.
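
For concreteness, in the k-uniform hypergraph β-model of Stasi et al. (2014) each potential hyperedge occurs independently with a logistic probability driven by the degree parameters of its member nodes; the multilayer model studied here, as the abstract states, allows a different hyperedge size in each layer. The display below is our paraphrase of that single-layer form, not a quotation of the paper's notation.

```latex
% Hyperedge probability in the k-uniform hypergraph beta-model
% (Stasi et al., 2014), for a potential hyperedge e = {i_1, ..., i_k}:
\mathbb{P}(e \in E)
  \;=\;
  \frac{\exp\!\big(\sum_{i \in e} \beta_i\big)}
       {1 + \exp\!\big(\sum_{i \in e} \beta_i\big)} .
```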

Time series data, with applications ranging from climatology to finance to healthcare, presents significant challenges in data mining due to its size and complexity. One open issue lies in time series clustering, which is crucial for processing large volumes of unlabeled time series data and unlocking valuable insights. Traditional and modern analysis methods, however, often struggle with these complexities. To address these limitations, we introduce R-Clustering, a novel method that utilizes convolutional architectures with randomly selected parameters. Through extensive evaluations, R-Clustering demonstrates superior performance over existing methods in terms of clustering accuracy, computational efficiency and scalability. Empirical results obtained using the UCR archive demonstrate the effectiveness of our approach across diverse time series datasets. The findings highlight the significance of R-Clustering in various domains and applications, contributing to the advancement of time series data mining.
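
The sketch below illustrates the general idea of clustering with randomly parameterized convolutional kernels: convolve each series with fixed random kernels, pool a simple statistic per kernel, then cluster the resulting features. Kernel sizes, the pooling statistic, and the use of k-means here are assumptions for illustration, not R-Clustering's exact configuration.

```python
# Toy sketch: random-kernel features + k-means clustering of time series.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
series = rng.normal(size=(30, 100))          # 30 toy time series of length 100
kernels = [rng.normal(size=rng.choice([7, 9, 11])) for _ in range(50)]

def features(x):
    # Proportion of positive values (PPV) of each convolution output.
    return np.array([(np.convolve(x, k, mode="valid") > 0).mean() for k in kernels])

X = np.stack([features(s) for s in series])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)
```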

Few-shot learning aims to learn novel categories from very few samples, given some base categories with sufficient training samples. The main challenge of this task is that the novel categories are prone to being dominated by the color, texture, and shape of the object or by background context (namely, specificity), which are distinctive for the given few training samples but not common to the corresponding categories (see Figure 1). Fortunately, we find that transferring information from the correlated base categories can help learn the novel concepts and thus prevent them from being dominated by specificity. Moreover, incorporating semantic correlations among different categories can effectively regularize this information transfer. In this work, we represent the semantic correlations in the form of a structured knowledge graph and integrate this graph into deep neural networks to promote few-shot learning via a novel Knowledge Graph Transfer Network (KGTN). Specifically, by initializing each node with the classifier weight of the corresponding category, a propagation mechanism is learned to adaptively propagate node messages through the graph, exploring node interactions and transferring classifier information from the base categories to the novel ones. Extensive experiments on the ImageNet dataset show significant performance improvement compared with current leading competitors. Furthermore, we construct an ImageNet-6K dataset that covers a larger set of categories, i.e., 6,000 categories, and experiments on this dataset further demonstrate the effectiveness of our proposed model.
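
The sketch below shows the core intuition in its simplest form: nodes are initialized with per-category classifier weights and repeatedly mixed with their neighbours over a semantic adjacency matrix, so base-class information flows to novel classes. KGTN's learned, adaptive propagation is replaced here by a fixed mixing rule for illustration.

```python
# Hedged sketch of classifier-weight propagation over a semantic graph.
# The adjacency, mixing rate, and step count are illustrative assumptions.
import numpy as np

def propagate(W, A, steps=2, alpha=0.5):
    # W: (num_classes, dim) classifier weights; A: row-normalized adjacency.
    for _ in range(steps):
        W = (1 - alpha) * W + alpha * A @ W   # mix each node with its neighbours
    return W

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
A = A / A.sum(1, keepdims=True)
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.2, 0.2]])  # third row: a novel class
print(propagate(W, A))
```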

Graph Neural Networks (GNNs) for representation learning of graphs broadly follow a neighborhood aggregation framework, where the representation vector of a node is computed by recursively aggregating and transforming feature vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs in capturing different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.
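
The maximally expressive aggregator this line of work argues for is an injective sum over neighbours followed by an MLP, which matches the discriminative power of the Weisfeiler-Lehman test. The sketch below shows one such layer; the epsilon term and MLP widths are illustrative.

```python
# Sketch of a sum-aggregation GNN layer of the maximally expressive kind
# described above (this mirrors the published GIN update rule).
import numpy as np

def gin_layer(H, A, W1, W2, eps=0.0):
    # H: (n, d) node features; A: (n, n) binary adjacency without self-loops.
    agg = (1 + eps) * H + A @ H               # injective sum aggregation
    return np.maximum(agg @ W1, 0) @ W2       # two-layer MLP with ReLU

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
H = rng.normal(size=(3, 4))
print(gin_layer(H, A, rng.normal(size=(4, 8)), rng.normal(size=(8, 4))))
```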

We propose a new method for the event extraction (EE) task based on an imitation learning framework; specifically, inverse reinforcement learning (IRL) via a generative adversarial network (GAN). The GAN estimates proper rewards according to the difference between the actions committed by the expert (or ground truth) and by the agent among complicated states in the environment. The EE task benefits from these dynamic rewards because instances and labels vary in difficulty and the gains are expected to be diverse -- e.g., an ambiguous but correctly detected trigger or argument should receive a high gain -- whereas traditional RL models usually neglect such differences and pay equal attention to all instances. Moreover, our experiments demonstrate that the proposed framework outperforms state-of-the-art methods without explicit feature engineering.
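
As a rough illustration of discriminator-derived rewards, the snippet below converts a discriminator's probability that an action is expert-like into a reward signal, in the style of adversarial imitation learning (GAIL); the paper's reward estimation for EE states and actions is more involved, and everything here is an assumption.

```python
# Toy GAIL-style reward from a discriminator probability D in (0, 1):
# more expert-like actions earn larger rewards.
import numpy as np

def reward(disc_score_expert_like):
    d = np.clip(disc_score_expert_like, 1e-6, 1 - 1e-6)
    return np.log(d) - np.log(1 - d)

print(reward(np.array([0.5, 0.9, 0.99])))
```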

Image segmentation is an important component of many image understanding systems. It aims to group pixels in a spatially and perceptually coherent manner. Typically, these algorithms have a collection of parameters that control the degree of over-segmentation produced. It remains a challenge to properly select such parameters for human-like perceptual grouping. In this work, we exploit the diversity of segments produced by different choices of parameters. We scan the segmentation parameter space and generate a collection of image segmentation hypotheses (from highly over-segmented to under-segmented). These are fed into a cost-minimization framework that produces the final segmentation by selecting segments that (1) better describe the natural contours of the image, and (2) are more stable and persistent among all the segmentation hypotheses. We compare our algorithm's performance with state-of-the-art algorithms, showing improved results. We also show that our framework is robust to the choice of segmentation kernel that produces the initial set of hypotheses.
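
The sketch below illustrates the hypothesis-sweep idea: run one segmentation kernel over a range of its scale parameter, then measure how persistently sampled pixel pairs stay co-segmented across hypotheses. The choice of Felzenszwalb as the kernel and this simple stability proxy are assumptions; the paper's cost-minimization selection is richer.

```python
# Hedged sketch: parameter sweep + a crude stability score across hypotheses.
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import felzenszwalb

img = astronaut()[::4, ::4]                      # downsampled image for speed
hypotheses = [felzenszwalb(img, scale=s) for s in (50, 100, 200, 400, 800)]

# Stability proxy: for a sample of pixel pairs, how often do the two pixels
# fall in the same segment across all hypotheses?
rng = np.random.default_rng(0)
h, w = img.shape[:2]
idx = rng.integers(0, h * w, size=(200, 2))
flat = [seg.ravel() for seg in hypotheses]
stability = np.mean([[f[i] == f[j] for f in flat] for i, j in idx])
print("mean pairwise co-segmentation rate:", stability)
```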
