
The Compact Muon Solenoid (CMS) experiment is a general-purpose detector for high-energy collisions at the Large Hadron Collider (LHC) at CERN. It employs an online data quality monitoring (DQM) system to promptly spot and diagnose particle data acquisition problems and avoid data quality loss. In this study, we present semi-supervised spatio-temporal anomaly detection (AD) monitoring for the physics particle reading channels of the hadronic calorimeter (HCAL) of the CMS using three-dimensional digi-occupancy map data of the DQM. We propose the GraphSTAD system, which employs convolutional and graph neural networks to learn, respectively, the local spatial characteristics induced by particles traversing the detector and the global behavior arising from the shared backend circuit connections and housing boxes of the channels. Recurrent neural networks capture the temporal evolution of the extracted spatial features. We have validated the accuracy of the proposed AD system in capturing diverse channel fault types using LHC Run-2 collision data sets. The GraphSTAD system has achieved production-level accuracy and is being integrated into the CMS core production system for real-time monitoring of the HCAL. We also provide a quantitative performance comparison with alternative benchmark models to demonstrate the advantages of the presented system.
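
As a rough illustration of the architecture described above (not the authors' implementation), the following PyTorch sketch combines a small CNN over per-time-step occupancy maps, a one-hop graph convolution over a coarse grid of channel nodes, and an LSTM over time, with reconstruction error serving as the anomaly score. All layer sizes, the placeholder channel graph, and the class name GraphSTADSketch are assumptions made for the example.

```python
# Illustrative sketch (not the authors' code) of a CNN + graph-convolution + LSTM
# autoencoder for spatio-temporal anomaly detection on occupancy maps.
import torch
import torch.nn as nn

class GraphSTADSketch(nn.Module):
    def __init__(self, depth=4, size=32, grid=4, hidden=64):
        super().__init__()
        self.depth, self.size = depth, size
        # Local spatial features from each per-time-step occupancy map (CNN).
        self.cnn = nn.Sequential(
            nn.Conv2d(depth, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(grid),                  # coarse grid -> graph nodes
        )
        # One-hop graph convolution over nodes sharing backend electronics (assumed graph).
        self.gcn = nn.Linear(hidden, hidden)
        # Temporal evolution of the fused spatial features (RNN).
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, depth * size * size)

    def forward(self, x, adj):
        # x: (batch, time, depth, size, size); adj: (n_nodes, n_nodes), row-normalized.
        b, t = x.shape[:2]
        h = self.cnn(x.flatten(0, 1))                    # (b*t, hidden, grid, grid)
        nodes = h.flatten(2).transpose(1, 2)             # (b*t, n_nodes, hidden)
        nodes = torch.relu(self.gcn(adj @ nodes))        # one round of message passing
        h = nodes.mean(dim=1).view(b, t, -1)             # pool nodes per time step
        h, _ = self.rnn(h)                               # (b, t, hidden)
        return self.decoder(h).view(b, t, self.depth, self.size, self.size)

# Anomaly score = per-channel reconstruction error on new data (toy example).
model = GraphSTADSketch()
x = torch.rand(2, 5, 4, 32, 32)
adj = torch.eye(16)                                      # placeholder channel graph
err = (model(x, adj) - x).pow(2)
print(err.mean().item())
```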

Related content

A sequential detection and tracking (SDT) approach is proposed for the detection and tracking of very low signal-to-noise ratio (SNR) objects. The proposed approach is compared with two existing particle filter track-before-detect (TBD) methods and is shown to outperform both. A conventional detection and tracking (CDT) approach, based on thresholding a single data frame, is also considered as a benchmark for comparison. Simulations demonstrate the performance of the proposed approach.

Traditional clustering algorithms often struggle to capture the complex relationships within graphs and to generalise to arbitrary clustering criteria. The emergence of graph neural networks (GNNs) as a powerful framework for learning representations of graph data provides new approaches to solving the problem. Previous work has shown GNNs to be capable of proposing partitionings using a variety of criteria; however, these approaches have not yet been extended to work on Markov chains or kinetic networks. These arise frequently in the study of molecular systems and are of particular interest to the biochemical modelling community. In this work, we propose several GNN-based architectures to tackle the graph partitioning problem for Markov chains described as kinetic networks. This approach aims to minimize how much a proposed partitioning changes the Kemeny constant. We propose using an encoder-decoder architecture and show how simple GraphSAGE-based GNNs with linear layers can outperform much larger and more expressive attention-based models in this context. As a proof of concept, we first demonstrate the method's ability to cluster randomly connected graphs. We also use a linear chain architecture corresponding to a 1D free energy profile as our kinetic network. Subsequently, we demonstrate the effectiveness of our method through experiments on a data set derived from molecular dynamics. We compare the performance of our method to other partitioning techniques such as PCCA+. We explore the importance of feature and hyperparameter selection and propose a general strategy for large-scale parallel training of GNNs for discovering optimal graph partitionings.
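
The sketch below, with assumed shapes and a toy chain, shows one way such a pipeline could be wired up: a GraphSAGE-style encoder with a linear decoding head emits soft cluster assignments, the chain is lumped with those assignments, and the training loss penalizes the change in the Kemeny constant. The eigenvalue-based Kemeny computation and the soft lumping formula are illustrative choices, not necessarily those of the paper.

```python
# Hedged sketch of a GraphSAGE-style partitioner trained to preserve the Kemeny constant.
import torch
import torch.nn as nn

class SAGELayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.self_lin = nn.Linear(d_in, d_out)
        self.neigh_lin = nn.Linear(d_in, d_out)

    def forward(self, h, adj_norm):
        # adj_norm is row-normalized, so adj_norm @ h is a weighted neighbor mean.
        return torch.relu(self.self_lin(h) + self.neigh_lin(adj_norm @ h))

class Partitioner(nn.Module):
    def __init__(self, d_in, hidden, k):
        super().__init__()
        self.l1 = SAGELayer(d_in, hidden)
        self.l2 = SAGELayer(hidden, hidden)
        self.head = nn.Linear(hidden, k)                    # decoder: node -> cluster logits

    def forward(self, h, adj_norm):
        h = self.l2(self.l1(h, adj_norm), adj_norm)
        return torch.softmax(self.head(h), dim=-1)          # soft assignment S (n x k)

def kemeny(P):
    # Kemeny constant via eigenvalues: K = sum_{i>=2} 1 / (1 - lambda_i).
    lam = torch.linalg.eigvals(P)
    lam = lam[torch.argsort(lam.real, descending=True)][1:]  # drop the unit eigenvalue
    return (1.0 / (1.0 - lam)).real.sum()

def kemeny_change_loss(P, pi, S, eps=1e-8):
    # Lump the chain with soft assignments and compare Kemeny constants.
    D = torch.diag(pi)
    mass = S.T @ D @ S + eps * torch.eye(S.shape[1])
    P_hat = torch.linalg.solve(mass, S.T @ D @ P @ S)
    return (kemeny(P) - kemeny(P_hat)) ** 2

# Toy usage on a random 20-state chain with 3 clusters (illustrative only).
n, k = 20, 3
P = torch.rand(n, n); P = P / P.sum(dim=1, keepdim=True)
vals, vecs = torch.linalg.eig(P.T)                          # stationary distribution of P
pi = vecs[:, torch.argmax(vals.real)].real
pi = pi / pi.sum()
model = Partitioner(d_in=n, hidden=32, k=k)
S = model(torch.eye(n), P)                                  # one-hot features; P as adjacency
print(kemeny_change_loss(P, pi, S).item())
```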

In stochastic zeroth-order optimization, a problem of practical relevance is understanding how to fully exploit the local geometry of the underlying objective function. We consider a fundamental setting in which the objective function is quadratic, and provide the first tight characterization of the optimal Hessian-dependent sample complexity. Our contribution is twofold. First, from an information-theoretic point of view, we prove tight lower bounds on Hessian-dependent complexities by introducing a concept called energy allocation, which captures the interaction between the search algorithm and the geometry of the objective function. A matching upper bound is obtained by solving for the optimal energy spectrum. Second, algorithmically, we show the existence of a Hessian-independent algorithm that universally achieves the asymptotically optimal sample complexities for all Hessian instances. The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions, a robustness enabled by a truncation method.
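
To make the algorithmic side concrete, here is a hedged sketch of a two-point zeroth-order update on a noisy quadratic in which function evaluations are truncated (clipped) before differencing. The step sizes, clipping threshold, and Student-t noise model are placeholder assumptions; this is only one simple way to instantiate a truncation-based estimator, not the paper's exact scheme.

```python
# Two-point zeroth-order descent on a noisy quadratic with truncated evaluations (sketch).
import numpy as np

def zo_descent(f_noisy, x0, steps=500, mu=1e-2, lr=1e-2, clip=10.0, seed=0):
    """f_noisy(x) returns f(x) + noise; clip truncates evaluations to [-clip, clip]."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)
        # Truncation guards the finite-difference estimate against heavy-tailed noise.
        fp = np.clip(f_noisy(x + mu * u), -clip, clip)
        fm = np.clip(f_noisy(x - mu * u), -clip, clip)
        g = (fp - fm) / (2.0 * mu) * u * len(x)       # directional estimate of grad f
        x -= lr * g
    return x

# Toy quadratic f(x) = 0.5 x^T H x with heavy-tailed (Student-t) observation noise.
rng = np.random.default_rng(1)
H = np.diag([1.0, 10.0])
f_noisy = lambda x: 0.5 * x @ H @ x + 0.2 * rng.standard_t(df=2)
print(zo_descent(f_noisy, x0=[3.0, -2.0]))
```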

One persistent challenge in deep-learning-based speech emotion recognition (SER) is the unconscious encoding of emotion-irrelevant factors (e.g., speaker or phonetic variability), which limits the generalization of SER in practical use. In this paper, we propose DSNet, a Disentangled Siamese Network with neutral calibration, to meet the demand for a more robust and explainable SER model. Specifically, we introduce an orthogonal feature disentanglement module to explicitly project the high-level representation into two distinct subspaces. We then propose a novel neutral calibration mechanism to encourage one subspace to capture sufficient emotion-irrelevant information. In this way, the other subspace can better isolate and emphasize the emotion-relevant information within speech signals. Experimental results on two popular benchmark datasets demonstrate the superiority of DSNet over various state-of-the-art methods for speaker-independent SER.
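
A minimal sketch of the kind of objective described above follows; the projection dimensions, the cosine-style orthogonality penalty, the MSE neutral-calibration term, and the loss weights are all assumptions for illustration rather than the DSNet specification.

```python
# Sketch of two-subspace disentanglement with an orthogonality penalty and a
# neutral-calibration term (assumed shapes and losses).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangleHead(nn.Module):
    def __init__(self, d_in=256, d_sub=128, n_classes=4):
        super().__init__()
        self.to_emo = nn.Linear(d_in, d_sub)   # emotion-relevant subspace
        self.to_irr = nn.Linear(d_in, d_sub)   # emotion-irrelevant subspace
        self.clf = nn.Linear(d_sub, n_classes)

    def forward(self, h):
        return self.to_emo(h), self.to_irr(h)

def disentangle_loss(head, h_utt, h_neutral, labels, w_orth=1.0, w_cal=1.0):
    z_e, z_i = head(h_utt)                     # emotional utterance branch
    _, z_i_neu = head(h_neutral)               # neutral utterance, same speaker (Siamese)
    ce = F.cross_entropy(head.clf(z_e), labels)
    # Orthogonality: the two subspaces should carry non-overlapping information.
    orth = (F.normalize(z_e, dim=-1) * F.normalize(z_i, dim=-1)).sum(-1).pow(2).mean()
    # Neutral calibration: emotion-irrelevant features should match neutral speech.
    cal = F.mse_loss(z_i, z_i_neu)
    return ce + w_orth * orth + w_cal * cal

head = DisentangleHead()
h_utt, h_neu = torch.randn(8, 256), torch.randn(8, 256)   # placeholder encoder outputs
labels = torch.randint(0, 4, (8,))
print(disentangle_loss(head, h_utt, h_neu, labels).item())
```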

Multiphysics incompressible fluid dynamics simulations play a crucial role in understanding intricate behaviors of many complex engineering systems that involve interactions between solids, fluids, and various phases like liquid and gas. Numerical modeling of these interactions has generated significant research interest in recent decades and has led to the development of open-source simulation tools and commercial software products targeting specific applications or general problem classes in computational fluid dynamics. As the demand increases for these simulations to adapt to platform heterogeneity, ensure composability between different physics models, and effectively utilize inheritance within systems of partial differential equations, a fundamental reconsideration of numerical solver design becomes imperative. The discussion presented in this paper emphasizes the importance of these considerations and introduces the Flash-X approach as a potential solution. The software design strategies outlined in the article serve as a guide for Flash-X developers, providing insights into complexities associated with performance portability, composability, and sustainable development. These strategies provide a foundation for improving the design of both new and existing simulation tools grappling with these challenges. By incorporating the principles outlined in the Flash-X approach, engineers and researchers can enhance the adaptability, efficiency, and overall effectiveness of their numerical solvers in the ever-evolving field of multiphysics simulations.

Many of the most fundamental laws of nature can be formulated as partial differential equations (PDEs). Understanding these equations is, therefore, of exceptional importance for many branches of modern science and engineering. However, since the general solution of many PDEs is unknown, the efficient approximate solution of these equations is one of humanity's greatest challenges. While multigrid represents one of the most effective methods for solving PDEs numerically, in many cases, the design of an efficient or at least working multigrid solver is an open problem. This thesis demonstrates that grammar-guided genetic programming, an evolutionary program synthesis technique, can discover multigrid methods of unprecedented structure that achieve a high degree of efficiency and generalization. For this purpose, we develop a novel context-free grammar that enables the automated generation of multigrid methods in a symbolically manipulable formal language, based on which we can apply the same multigrid-based solver to problems of different sizes without having to adapt its internal structure. Treating the automated design of an efficient multigrid method as a program synthesis task allows us to find novel sequences of multigrid operations, including the combination of different smoothing and coarse-grid correction steps on each level of the discretization hierarchy. To prove the feasibility of this approach, we present its implementation in the form of the Python framework EvoStencils, which is freely available as open-source software. This implementation comprises all steps from representing the algorithmic sequence of a multigrid method in the form of a directed acyclic graph of Python objects to its automatic generation and optimization using the capabilities of the code generation framework ExaStencils and the evolutionary computation library DEAP.
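
As a toy illustration of grammar-guided generation (deliberately much simpler than the EvoStencils grammar), the snippet below expands a tiny context-free grammar over multigrid building blocks into candidate cycle structures that an evolutionary search could then evaluate; the production rules and operator names are invented for the example.

```python
# Toy context-free grammar expansion producing candidate multigrid cycle structures.
import random

GRAMMAR = {
    "<cycle>":  [["<smooth>", ";", "<coarse>", ";", "<smooth>"]],
    "<smooth>": [["jacobi(", "<weight>", ")"], ["gauss_seidel()"],
                 ["<smooth>", ";", "<smooth>"]],
    "<coarse>": [["restrict()", ";", "<cycle>", ";", "prolongate()"], ["coarse_solve()"]],
    "<weight>": [["0.6"], ["0.8"], ["1.0"]],
}

def expand(symbol, rng, depth=0, max_depth=4):
    if symbol not in GRAMMAR:                     # terminal symbol
        return symbol
    rules = GRAMMAR[symbol]
    if depth >= max_depth:                        # force termination via the shortest rule
        rules = [min(rules, key=len)]
    rule = rng.choice(rules)
    return "".join(expand(s, rng, depth + 1, max_depth) for s in rule)

rng = random.Random(0)
for _ in range(3):
    print(expand("<cycle>", rng))                 # three random candidate cycles
```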

We consider optimal experimental design (OED) for nonlinear Bayesian inverse problems governed by large-scale partial differential equations (PDEs). As optimality criteria for Bayesian OED, we consider both the expected information gain and summary statistics, including the trace and determinant of the information matrix, all of which involve evaluating the parameter-to-observable (PtO) map and its derivatives. However, it is prohibitive to compute and optimize these criteria when the PDEs are very expensive to solve, the parameters to estimate are high-dimensional, and the optimization problem is combinatorial, high-dimensional, and non-convex. To address these challenges, we develop an accurate, scalable, and efficient computational framework to accelerate the solution of Bayesian OED. In particular, the framework is built on derivative-informed neural operator (DINO) surrogates with proper dimension reduction techniques and a modified swapping greedy algorithm. We demonstrate the high accuracy of the DINO surrogates in computing the PtO map and the optimality criteria compared to high-fidelity finite element approximations. We also show that the proposed method is scalable with increasing parameter dimensions. Moreover, we demonstrate that it achieves over a 1000x speedup compared to a high-fidelity Bayesian OED solution for a three-dimensional PDE example with tens of thousands of parameters, including both the online evaluation and the offline construction costs of the surrogates.
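
To illustrate the combinatorial optimization step, the sketch below implements a generic swapping greedy selection for a D-optimal-style log-determinant criterion over candidate sensor rows; the criterion, the placeholder sensitivity matrix, and the sweep schedule stand in for the surrogate-based criteria and the modified algorithm described in the abstract.

```python
# Generic swapping greedy sensor selection for a log-determinant criterion (sketch).
import numpy as np

def criterion(J, selected, reg=1e-6):
    # D-optimal style score: log det of the information matrix of the chosen rows.
    Js = J[list(selected)]
    M = Js.T @ Js + reg * np.eye(J.shape[1])
    return np.linalg.slogdet(M)[1]

def swapping_greedy(J, k, sweeps=3):
    n = J.shape[0]
    selected = []
    # Greedy construction: add the sensor with the largest marginal gain.
    for _ in range(k):
        best = max((i for i in range(n) if i not in selected),
                   key=lambda i: criterion(J, selected + [i]))
        selected.append(best)
    # Swapping passes: try replacing each chosen sensor with a better unchosen one.
    for _ in range(sweeps):
        improved = False
        for pos in range(k):
            rest = selected[:pos] + selected[pos + 1:]
            cand = max((i for i in range(n) if i not in rest),
                       key=lambda i: criterion(J, rest + [i]))
            if cand != selected[pos]:
                selected[pos] = cand
                improved = True
        if not improved:
            break
    return selected

# Toy example: 50 candidate sensors, 8-dimensional reduced parameter space, choose 5.
rng = np.random.default_rng(0)
J = rng.standard_normal((50, 8))      # placeholder sensitivity (Jacobian) rows
print(swapping_greedy(J, k=5))
```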

Meta-Bayesian optimisation (meta-BO) aims to improve the sample efficiency of Bayesian optimisation by leveraging data from related tasks. While previous methods successfully meta-learn either a surrogate model or an acquisition function independently, joint training of both components remains an open challenge. This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures. We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data. Early on, we notice that training transformer-based neural processes from scratch with RL is challenging due to insufficient supervision, especially when rewards are sparse. We formalise this claim with a combinatorial analysis showing that the widely used notion of regret as a reward signal exhibits a logarithmic sparsity pattern in trajectory lengths. To tackle this problem, we augment the RL objective with an auxiliary task that guides part of the architecture to learn a valid probabilistic model as an inductive bias. We demonstrate that our method achieves state-of-the-art regret results against various baselines in experiments on standard hyperparameter optimisation tasks and also outperforms others in the real-world problems of mixed-integer programming tuning, antibody design, and logic synthesis for electronic design automation.
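
The schematic sketch below shows the flavor of the combined objective described above: a REINFORCE-style policy term for the learned acquisition rule plus an auxiliary negative log-likelihood term that keeps a predictive head of the transformer a valid probabilistic model. Every module, shape, and weight here is a placeholder; the actual architecture and reward design are not specified by the abstract.

```python
# Schematic RL + auxiliary-likelihood objective for a transformer acquisition model.
import torch
import torch.nn as nn

class AcqTransformer(nn.Module):
    def __init__(self, d_model=32, n_cand=16):
        super().__init__()
        self.embed = nn.Linear(2, d_model)                  # embed (x, y) history pairs
        enc = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)
        self.acq_head = nn.Linear(d_model, n_cand)          # scores over candidate points
        self.pred_head = nn.Linear(d_model, 2)              # mean, log-variance (auxiliary)

    def forward(self, history):
        h = self.encoder(self.embed(history))               # (batch, T, d_model)
        ctx = h.mean(dim=1)
        return self.acq_head(ctx), self.pred_head(ctx)

def combined_loss(model, history, y_next, reward, aux_weight=1.0):
    logits, pred = model(history)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()                                   # pick a candidate to query
    # In a real loop the reward would come from evaluating the chosen candidate.
    policy_loss = -(reward * dist.log_prob(action)).mean()   # REINFORCE-style term
    mean, log_var = pred.unbind(-1)
    # Auxiliary term: Gaussian NLL so the predictive head stays a valid probabilistic model.
    nll = 0.5 * (log_var + (y_next - mean) ** 2 / log_var.exp()).mean()
    return policy_loss + aux_weight * nll

model = AcqTransformer()
history = torch.randn(4, 10, 2)        # 4 tasks, 10 (x, y) observations each (placeholder)
print(combined_loss(model, history, y_next=torch.randn(4), reward=torch.randn(4)).item())
```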

Super-resolution (SR) techniques have recently been proposed to upscale the outputs of neural radiance fields (NeRF) and generate high-quality images with enhanced inference speeds. However, existing NeRF+SR methods increase training overhead by using extra input features, loss functions, and/or expensive training procedures such as knowledge distillation. In this paper, we aim to leverage SR for efficiency gains without costly training or architectural changes. Specifically, we build a simple NeRF+SR pipeline that directly combines existing modules, and we propose a lightweight augmentation technique, random patch sampling, for training. Compared to existing NeRF+SR methods, our pipeline mitigates the SR computing overhead and can be trained up to 23x faster, making it feasible to run on consumer devices such as the Apple MacBook. Experiments show our pipeline can upscale NeRF outputs by 2-4x while maintaining high quality, increasing inference speeds by up to 18x on an NVIDIA V100 GPU and 12.8x on an M1 Pro chip. We conclude that SR can be a simple but effective technique for improving the efficiency of NeRF models for consumer devices.
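
A minimal sketch of the random patch sampling idea follows: aligned random crops are taken from a low-resolution render and its high-resolution ground truth, and a lightweight SR module is trained on those patches. The SR network, patch size, and scale factor are placeholder assumptions rather than the paper's pipeline.

```python
# Random aligned patch sampling for training a lightweight SR module (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_aligned_patches(lr_img, hr_img, lr_patch=32, scale=4):
    # lr_img: (3, h, w); hr_img: (3, h*scale, w*scale).
    _, h, w = lr_img.shape
    top = torch.randint(0, h - lr_patch + 1, (1,)).item()
    left = torch.randint(0, w - lr_patch + 1, (1,)).item()
    lp = lr_img[:, top:top + lr_patch, left:left + lr_patch]
    hp = hr_img[:, top * scale:(top + lr_patch) * scale,
                left * scale:(left + lr_patch) * scale]
    return lp, hp

sr = nn.Sequential(                        # stand-in for any lightweight SR module
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3 * 4 * 4, 3, padding=1), nn.PixelShuffle(4),
)
opt = torch.optim.Adam(sr.parameters(), lr=1e-4)

lr_render = torch.rand(3, 100, 100)        # placeholder low-res NeRF render
hr_gt = torch.rand(3, 400, 400)            # placeholder high-res ground-truth frame
for _ in range(10):                        # short illustrative training loop
    lp, hp = sample_aligned_patches(lr_render, hr_gt)
    loss = F.l1_loss(sr(lp.unsqueeze(0)), hp.unsqueeze(0))
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```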
