
Gaussian processes (GPs) are generally regarded as the gold-standard surrogate model for emulating computationally expensive computer simulators. However, training GPs as accurately as possible with a minimum number of model evaluations remains challenging. We address this problem by suggesting a novel adaptive sampling criterion called VIGF (variance of improvement for global fit). The improvement function at any point is a measure of the deviation of the GP emulator from the nearest observed model output. At each iteration of the proposed algorithm, a new model run is performed at the input location where VIGF is largest. The new sample is then added to the design and the emulator is updated accordingly. A batch version of VIGF is also proposed, which can save time when parallel computing is available. Additionally, VIGF is extended to the multi-fidelity case, where the expensive high-fidelity model is predicted with the assistance of a lower-fidelity simulator via hierarchical kriging. The applicability of our method is assessed on a range of benchmark test functions, and its performance is compared with several sequential sampling strategies. The results suggest that our method predicts the benchmark functions more accurately in most cases.
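As a concrete illustration, the sketch below runs one plausible reading of such a VIGF loop on a toy one-dimensional simulator: if the emulator at a candidate x is Gaussian with mean m and standard deviation s, and y_near is the output at the nearest design point, then the improvement I(x) = (Y(x) - y_near)^2 has variance 4 s^2 (m - y_near)^2 + 2 s^4. The exact form of the criterion in the paper may differ; the toy function f and all settings below are illustrative.

# One plausible VIGF-style loop (illustrative; the paper's exact criterion may differ).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):
    # Toy stand-in for an expensive simulator.
    return np.sin(6 * x) + 0.5 * x

X = np.array([[0.1], [0.5], [0.9]])              # initial design
y = f(X).ravel()
cand = np.linspace(0, 1, 201).reshape(-1, 1)     # candidate inputs

for it in range(10):
    gp = GaussianProcessRegressor(RBF(0.2), normalize_y=True).fit(X, y)
    m, s = gp.predict(cand, return_std=True)
    nearest = np.abs(cand - X.ravel()).argmin(axis=1)   # closest design point
    y_near = y[nearest]                                  # its observed output
    vigf = 4 * s**2 * (m - y_near)**2 + 2 * s**4         # Var[(Y(x) - y_near)^2]
    x_new = cand[np.argmax(vigf)]                        # run the model where VIGF peaks
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new))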

Related content

Linear structural vector autoregressive models can be identified statistically without imposing restrictions on the model if the shocks are mutually independent and at most one of them is Gaussian. We show that this result extends to structural threshold and smooth transition vector autoregressive models incorporating a time-varying impact matrix defined as a weighted sum of the impact matrices of the regimes. Our empirical application studies the effects of the climate policy uncertainty shock on the U.S. macroeconomy. In a structural logistic smooth transition vector autoregressive model consisting of two regimes, we find that a positive climate policy uncertainty shock decreases production in times of low economic policy uncertainty but slightly increases it in times of high economic policy uncertainty. The introduced methods are implemented in the accompanying R package sstvars.
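As a minimal numeric sketch of the time-varying impact matrix described above, the snippet below forms B_t as a weighted sum of two regime impact matrices with a logistic transition weight in a scalar switching variable; the matrices, weight parametrization, and values are illustrative assumptions, not the paper's estimates (the actual estimation is provided by the R package sstvars).

import numpy as np

def transition_weight(z, gamma=2.0, c=0.0):
    # Logistic weight of regime 2; regime 1 receives 1 - a.
    return 1.0 / (1.0 + np.exp(-gamma * (z - c)))

B1 = np.array([[1.0, 0.0], [0.3, 0.8]])   # regime-1 impact matrix (illustrative)
B2 = np.array([[0.7, 0.2], [0.1, 1.1]])   # regime-2 impact matrix (illustrative)

for z in (-2.0, 0.0, 2.0):                # values of the switching variable
    a = transition_weight(z)
    B_t = (1 - a) * B1 + a * B2           # time-varying impact matrix
    print(f"z = {z:+.1f}, weight = {a:.3f}\n{B_t}")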

The shift from monolithic applications to the composition of distributed software, initiated in the early 2000s, is based on the vision of software-as-service. This vision, embodied in technologies such as RESTful APIs, advocates globally available services cooperating through an infrastructure that provides (access to) distributed computational resources. Choreographies can support this vision by abstracting away local computation and rendering interoperability with message passing: cooperation is achieved by sending and receiving messages. Following this choreographic paradigm, we develop SEArch (Service Execution Architecture), a language-independent execution infrastructure capable of performing transparent dynamic reconfiguration of software artefacts. Choreographic mechanisms are used in SEArch to specify interoperability contracts, thus providing the support needed for automatic discovery and binding of services at runtime.
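The following toy sketch illustrates only the general idea of contract-based discovery and binding at runtime; the Contract and Registry types are hypothetical and do not reflect SEArch's actual contract language or API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    # A toy interoperability contract: an ordered sequence of send/receive actions.
    actions: tuple

class Registry:
    def __init__(self):
        self._services = {}

    def publish(self, name, contract, endpoint):
        self._services[name] = (contract, endpoint)

    def bind(self, required):
        # Discovery: return an endpoint whose published contract matches.
        for name, (contract, endpoint) in self._services.items():
            if contract == required:   # a real system would check compatibility, not equality
                return name, endpoint
        raise LookupError("no compatible service available")

reg = Registry()
reg.publish("echo", Contract(("?request", "!response")), "tcp://host:9000")
print(reg.bind(Contract(("?request", "!response"))))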

Realizing computationally complex quantum circuits in the presence of noise and imperfections is a challenging task. While fault-tolerant quantum computing provides a route to reducing noise, it requires a large overhead for generic algorithms. Here, we develop and analyze a hardware-efficient, fault-tolerant approach to realizing complex sampling circuits. We co-design the circuits with the appropriate quantum error correcting codes for efficient implementation in a reconfigurable neutral atom array architecture, constituting what we call a fault-tolerant compilation of the sampling algorithm. Specifically, we consider a family of $[[2^D , D, 2]]$ quantum error detecting codes whose transversal and permutation gate set can realize arbitrary degree-$D$ instantaneous quantum polynomial (IQP) circuits. Using native operations of the code and the atom array hardware, we compile a fault-tolerant and fast-scrambling family of such IQP circuits in a hypercube geometry, realized recently in the experiments by Bluvstein et al. [Nature 626, 7997 (2024)]. We develop a theory of second-moment properties of degree-$D$ IQP circuits for analyzing hardness and verification of random sampling by mapping to a statistical mechanics model. We provide evidence that sampling from hypercube IQP circuits is classically hard to simulate and analyze the linear cross-entropy benchmark (XEB) in comparison to the average fidelity. To realize a fully scalable approach, we first show that Bell sampling from degree-$4$ IQP circuits is classically intractable and can be efficiently validated. We further devise new families of $[[O(d^D),D,d]]$ color codes of increasing distance $d$, permitting exponential error suppression for transversal IQP sampling. Our results highlight fault-tolerant compiling as a powerful tool in co-designing algorithms with specific error-correcting codes and realistic hardware.
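For intuition about IQP sampling itself, the brute-force snippet below samples from a tiny degree-2 IQP circuit H^n D H^n |0>, with D a random degree-2 phase polynomial; it does not reproduce the paper's hypercube circuits, error-detecting codes, or hardness arguments.

import numpy as np

n = 4
rng = np.random.default_rng(0)
lin = rng.uniform(0, 2 * np.pi, n)                      # linear phase angles (Z terms)
quad = np.triu(rng.uniform(0, 2 * np.pi, (n, n)), 1)    # pairwise angles (ZZ terms)

# Enumerate all 2^n bitstrings as rows of 0/1.
basis = np.array([[(b >> i) & 1 for i in range(n)] for b in range(2 ** n)])

# |psi> = D H^n |0>: uniform superposition with a degree-2 phase per bitstring s.
phases = basis @ lin + np.einsum("bi,ij,bj->b", basis, quad, basis)
psi = np.exp(1j * phases) / np.sqrt(2 ** n)

# Final H^n: amplitude of x is 2^{-n/2} * sum_s (-1)^{x.s} psi_s.
amps = ((-1.0) ** (basis @ basis.T)) @ psi / np.sqrt(2 ** n)

p = np.abs(amps) ** 2
x = rng.choice(2 ** n, p=p / p.sum())
print(f"sampled bitstring: {x:0{n}b} (probabilities sum to {p.sum():.6f})")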

We develop a novel deep learning technique, termed Deep Orthogonal Decomposition (DOD), for dimensionality reduction and reduced order modeling of parameter-dependent partial differential equations. The approach constructs a deep neural network model that approximates the solution manifold through a continuously adaptive local basis. In contrast to global methods, such as Proper Orthogonal Decomposition (POD), this adaptivity allows the DOD to overcome the Kolmogorov barrier, making the approach applicable to a wide spectrum of parametric problems. Furthermore, due to its hybrid linear-nonlinear nature, the DOD can accommodate both intrusive and nonintrusive techniques, providing highly interpretable latent representations and tighter control on error propagation. For this reason, the proposed approach stands out as a valuable alternative to other nonlinear techniques, such as deep autoencoders. The methodology is discussed both theoretically and practically, evaluating its performance on problems featuring nonlinear PDEs, singularities, and parametrized geometries.
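A minimal architectural sketch of the idea of a parameter-adaptive local basis is given below, assuming a form u(mu) ~ V(mu) c(mu) with both the basis V and the coefficients c produced by small networks and the basis orthonormalized on the fly; the paper's exact architecture, training losses, and intrusive/nonintrusive variants are not reproduced.

import torch
import torch.nn as nn

class DOD(nn.Module):
    def __init__(self, n_mu, n_h, rank):
        super().__init__()
        self.n_h, self.rank = n_h, rank
        self.basis_net = nn.Sequential(nn.Linear(n_mu, 64), nn.Tanh(),
                                       nn.Linear(64, n_h * rank))
        self.coef_net = nn.Sequential(nn.Linear(n_mu, 64), nn.Tanh(),
                                      nn.Linear(64, rank))

    def forward(self, mu):
        V = self.basis_net(mu).reshape(-1, self.n_h, self.rank)
        Q, _ = torch.linalg.qr(V)               # parameter-dependent orthonormal basis
        c = self.coef_net(mu).unsqueeze(-1)     # parameter-dependent coefficients
        return (Q @ c).squeeze(-1)              # u(mu) ~ Q(mu) c(mu)

model = DOD(n_mu=2, n_h=100, rank=5)
u = model(torch.rand(8, 2))   # batch of 8 parameter samples
print(u.shape)                # torch.Size([8, 100])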

We provide rigorous theoretical bounds for Anderson acceleration (AA) that allow for approximate calculations when applied to solve linear problems. We show that, when the approximate calculations satisfy the provided error bounds, the convergence of AA is maintained while the computational time can be reduced. We also provide computable heuristic quantities, guided by the theoretical error bounds, which can be used to automate the tuning of accuracy while performing approximate calculations. For linear problems, the use of heuristics to monitor the error introduced by approximate calculations, combined with a check on the monotonicity of the residual, ensures the convergence of the numerical scheme within a prescribed residual tolerance. Motivated by the theoretical studies, we propose a reduced variant of AA, which consists in projecting the least-squares problem used to compute the Anderson mixing onto a subspace of reduced dimension. The dimensionality of this subspace adapts dynamically at each iteration, as prescribed by the computable heuristic quantities. We numerically assess the performance of AA with approximate calculations on: (i) linear deterministic fixed-point iterations arising from Richardson's scheme to solve linear systems with open-source benchmark matrices and various preconditioners, and (ii) non-linear deterministic fixed-point iterations arising from non-linear time-dependent Boltzmann equations.
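For reference, the snippet below implements textbook Anderson acceleration applied to a Richardson-type fixed-point map for a linear system; the paper's approximate-calculation bounds, heuristics, and reduced least-squares projection are not shown.

import numpy as np

def anderson(g, x0, m=5, tol=1e-10, maxit=200):
    x = x0.copy()
    G_hist, F_hist = [], []                    # histories of g(x) and residuals
    for k in range(maxit):
        gx = g(x)
        f = gx - x                             # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x, k
        G_hist.append(gx); F_hist.append(f)
        G_hist, F_hist = G_hist[-(m + 1):], F_hist[-(m + 1):]
        if len(F_hist) > 1:
            dF = np.column_stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)])
            dG = np.column_stack([G_hist[i + 1] - G_hist[i] for i in range(len(G_hist) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]   # Anderson mixing weights
            x = gx - dG @ gamma
        else:
            x = gx
    return x, maxit

# Richardson-type fixed point for Ax = b: g(x) = x + omega * (b - A x).
A = np.diag([2.0, 3.0, 4.0]); b = np.ones(3); omega = 0.2
x, iters = anderson(lambda v: v + omega * (b - A @ v), np.zeros(3))
print(x, iters)   # approx [0.5, 0.3333, 0.25]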

The complex information-processing system of humans generates numerous objective and subjective evaluations, making the exploration of human cognitive products of great theoretical value. In recent years, deep learning technologies, which are inspired by biological brain mechanisms, have made significant strides in psychological and cognitive scientific research, particularly in the memorization and recognition of facial data. This paper investigates experimentally how neural networks process and store facial expression data and associate these data with a range of psychological attributes produced by humans. Using the VGG16 deep learning model, the study demonstrates that neural networks can learn and reproduce key features of facial data and thereby store image memories. Moreover, the experimental results reveal the potential of deep learning models for understanding human emotions and cognitive processes, and establish a manifold-visualization interpretation of cognitive products or psychological attributes from a non-Euclidean-space perspective, offering new insights into enhancing the explainability of AI. This study not only advances the application of AI technology in the field of psychology but also provides a new psychological perspective on the information processing of AI. The code is available at: https://github.com/NKUShaw/Psychoinformatics.
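In the spirit of the study, the sketch below extracts VGG16 features from face images and projects them onto a two-dimensional manifold with t-SNE; the input images and any attribute labels are placeholders, and this is not the authors' pipeline.

import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.manifold import TSNE

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
extractor = torch.nn.Sequential(vgg.features, vgg.avgpool, torch.nn.Flatten())

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def embed_2d(images):
    # images: a list of PIL face images (placeholder input).
    batch = torch.stack([preprocess(im) for im in images])
    with torch.no_grad():
        feats = extractor(batch)                      # (N, 25088) VGG16 features
    return TSNE(n_components=2, perplexity=5).fit_transform(feats.numpy())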

This paper introduces an innovative approach to the design of efficient decoders that meet the rigorous requirements of modern communication systems, particularly ultra-reliability and low latency. We enhance an established hybrid decoding framework by proposing an ordered statistics decoding scheme augmented with a sliding-window technique. This novel component replaces a key element of the current architecture, significantly reducing the average complexity. A critical aspect of our scheme is the integration of a pre-trained neural network model that dynamically determines whether the sliding-window process should progress or halt. Furthermore, we present a user-defined soft margin mechanism that adeptly balances the trade-off between decoding accuracy and complexity. Empirical results, supported by a thorough complexity analysis, demonstrate that the proposed scheme holds a competitive advantage over existing state-of-the-art decoders, notably in addressing the decoding failures prevalent in neural min-sum decoders. Additionally, our research shows that short LDPC codes can deliver performance comparable to that of short classical linear codes within the critical waterfall region of the SNR curve, highlighting their potential for practical applications.
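For background, the snippet below implements plain order-0 ordered statistics decoding (hard-decide the most reliable independent positions and re-encode); the paper's sliding window, neural stopping rule, and soft margin mechanism are omitted.

import numpy as np

def osd0(G, llr):
    # Order positions from most to least reliable.
    perm = np.argsort(-np.abs(llr))
    Gp = G[:, perm] % 2
    k, n = Gp.shape
    cols, r = [], 0
    # Gaussian elimination over GF(2): make the most reliable
    # independent columns systematic.
    for c in range(n):
        piv = np.nonzero(Gp[r:, c])[0]
        if piv.size == 0:
            continue
        Gp[[r, r + piv[0]]] = Gp[[r + piv[0], r]]
        for rr in range(k):
            if rr != r and Gp[rr, c]:
                Gp[rr] ^= Gp[r]
        cols.append(c); r += 1
        if r == k:
            break
    # Hard-decide the k chosen positions and re-encode.
    hard = (llr[perm] < 0).astype(int)
    cw = (hard[cols] @ Gp) % 2
    out = np.empty(n, dtype=int)
    out[perm] = cw
    return out

# Example with a (7,4) Hamming code generator.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
llr = np.array([2.1, -1.8, 0.3, 1.5, -0.2, 0.9, -1.1])
print(osd0(G, llr))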

High-dimensional, higher-order tensor data are gaining prominence in a variety of fields, including but not limited to computer vision and network analysis. Tensor factor models, induced from noisy versions of tensor decompositions or factorizations, are natural and potent instruments for studying a collection of tensor-variate objects that may be dependent or independent. However, statistical inference theory for estimating the various low-rank structures, which customarily play the role of signals in tensor factor models, is still at an early stage of development. In this paper, we attempt to "decode" the estimation of a higher-order tensor factor model by leveraging tensor matricization. Specifically, we recast it into mode-wise traditional high-dimensional vector/fiber factor models, enabling the deployment of conventional principal components analysis (PCA) for estimation. Demonstrated on the Tucker tensor factor model (TuTFaM), which is induced from the noisy version of the widely used Tucker decomposition, we show that the estimators of the signal components are essentially mode-wise PCA techniques, and that incorporating projection and iteration enhances the signal-to-noise ratio to varying extents. We establish the inferential theory of the proposed estimators, conduct rich simulation experiments, and illustrate how the proposed estimators work in tensor reconstruction and in clustering for independent video and dependent economic datasets, respectively.
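The snippet below sketches the basic mode-wise PCA estimator suggested by this matricization view: unfold the tensor along each mode, take the top eigenvectors of the resulting sample second-moment matrix as loadings, and project to obtain the core. The projected and iterated refinements discussed in the paper are omitted, and the data here are pure noise for shape checking.

import numpy as np

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def tucker_pca(X, ranks):
    U = []
    for mode, r in enumerate(ranks):
        M = unfold(X, mode)
        _, vecs = np.linalg.eigh(M @ M.T)       # symmetric eigendecomposition
        U.append(vecs[:, -r:])                  # top-r eigenvectors as loadings
    core = X
    for mode, Um in enumerate(U):
        core = np.moveaxis(np.tensordot(Um.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return U, core

X = np.random.default_rng(1).normal(size=(20, 30, 40))
U, core = tucker_pca(X, ranks=(3, 4, 5))
print([u.shape for u in U], core.shape)   # [(20, 3), (30, 4), (40, 5)] (3, 4, 5)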

We study a query model of computation in which an n-vertex k-hypergraph can be accessed only via its independence oracle or via its colourful independence oracle, and each oracle query may incur a cost depending on the size of the query. In each of these models, we obtain oracle algorithms to approximately count the hypergraph's edges, and we unconditionally prove that no oracle algorithm for this problem can have significantly smaller worst-case oracle cost than our algorithms.
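The toy class below illustrates only the query model (a colourful independence oracle that answers whether any edge selects one vertex from each colour class, charging a size-dependent cost per query); the approximate-counting algorithms and lower bounds of the paper are not reproduced.

from itertools import product

class ColourfulIndependenceOracle:
    def __init__(self, edges, cost=len):
        self.edges = {frozenset(e) for e in edges}
        self.cost = cost          # oracle cost as a function of the queried set
        self.total_cost = 0

    def query(self, *colour_classes):
        # True iff no edge picks exactly one vertex from each colour class.
        self.total_cost += self.cost(set().union(*colour_classes))
        return not any(frozenset(choice) in self.edges
                       for choice in product(*colour_classes))

oracle = ColourfulIndependenceOracle(edges=[(0, 1, 2), (1, 3, 4)])
print(oracle.query({0}, {1}, {2}), oracle.total_cost)   # False 3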

We consider covariance parameter estimation for Gaussian processes with functional inputs. From an increasing-domain asymptotics perspective, we prove the asymptotic consistency and normality of the maximum likelihood estimator. We extend these theoretical guarantees to scenarios that account for approximation errors in the inputs, which ensures the robustness of practical implementations relying on conventional sampling methods or projections onto a functional basis. Loosely speaking, both consistency and normality hold when the approximation error becomes negligible, a condition that is often achieved as the number of samples or basis functions becomes large. These latter asymptotic properties are illustrated through analytical examples, including one covering the case of non-randomly perturbed grids, as well as several numerical illustrations.
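As an illustrative setup (not the paper's estimator or asymptotic framework), the sketch below projects functional inputs onto a two-element Fourier-type basis and then maximizes a standard Gaussian log-likelihood over covariance parameters computed from distances between the basis coefficients; the inputs and outputs are synthetic placeholders.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
basis = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
funcs = [basis @ rng.normal(size=2) for _ in range(30)]   # synthetic functional inputs

# Approximate each functional input by its least-squares basis coefficients.
coefs = np.stack([np.linalg.lstsq(basis, f, rcond=None)[0] for f in funcs])

D2 = ((coefs[:, None, :] - coefs[None, :, :]) ** 2).sum(-1)   # squared distances
y = rng.normal(size=30)                                       # placeholder outputs

def neg_log_lik(theta):
    sigma2, ell = np.exp(theta)       # variance and lengthscale (log-parametrized)
    K = sigma2 * np.exp(-D2 / (2 * ell ** 2)) + 1e-8 * np.eye(30)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

res = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
print(np.exp(res.x))   # estimated (sigma^2, lengthscale)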
