
In the classical context, the cooperative game theory concept of the Shapley value has been adapted for post hoc explanations of machine learning models. However, this approach does not translate easily to eXplainable Quantum ML (XQML), and computing Shapley values is in general highly computationally complex. We propose quantum algorithms which extract Shapley values to within a given confidence interval and which run in polynomial time. We demonstrate the validity of each approach on specific examples of cooperative voting games.
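For orientation, the classical Monte Carlo baseline below estimates a single Shapley value in a small weighted voting game and reports a Hoeffding confidence half-width. The game, weights, and sample budget are illustrative, and the sketch does not reproduce the paper's quantum algorithms.

```python
# Classical Monte Carlo baseline for estimating one Shapley value in a
# weighted voting game; for context only -- not the paper's quantum method.
import math
import random

weights = [4, 2, 2, 1]   # hypothetical player weights
quota = 5                # a coalition wins iff its total weight >= quota

def value(coalition):
    """Characteristic function: 1 if the coalition meets the quota."""
    return 1 if sum(weights[i] for i in coalition) >= quota else 0

def shapley_mc(player, n_samples=20_000, delta=0.05):
    """Estimate one Shapley value by sampling random player orderings.

    Marginal contributions lie in [0, 1], so Hoeffding's inequality gives
    a confidence half-width of sqrt(log(2/delta) / (2 * n_samples)).
    """
    players = list(range(len(weights)))
    total = 0.0
    for _ in range(n_samples):
        order = random.sample(players, len(players))  # random permutation
        before = set(order[:order.index(player)])
        total += value(before | {player}) - value(before)
    half_width = math.sqrt(math.log(2 / delta) / (2 * n_samples))
    return total / n_samples, half_width

est, hw = shapley_mc(0)
print(f"Shapley value of player 0: {est:.3f} +/- {hw:.3f}")  # exact value: 0.5
```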

Related content

A new online multiple testing procedure is described in the context of anomaly detection, which controls the False Discovery Rate (FDR). An accurate anomaly detector must control the false positive rate at a prescribed level while keeping the false negative rate as low as possible. However, in the online context, such a constraint remains highly challenging due to the usual lack of FDR control: the online framework makes it impossible to use classical multiple testing approaches such as the Benjamini-Hochberg (BH) procedure, which would require knowing the entire time series. The developed strategy relies on exploiting the local control of the "modified FDR" (mFDR) criterion. It turns out that local control of the mFDR enables global control of the FDR over the full series, up to additional modifications of the multiple testing procedures. An important ingredient in this control is the cardinality of the calibration dataset used to compute the empirical p-values. A dedicated strategy for tuning this parameter is designed to achieve the prescribed FDR control over the entire time series. The statistical performance of the full strategy is supported by theoretical guarantees, and its practical behavior is assessed by several simulation experiments which support our conclusions.
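The sketch below shows the calibration-based empirical p-values that serve as the basic ingredient above, whose calibration-set cardinality the paper tunes. The scores are synthetic, and the online mFDR-controlling procedure itself is not reproduced.

```python
# Empirical p-values from a calibration set of null scores; synthetic data.
import numpy as np

rng = np.random.default_rng(0)
calibration = rng.normal(size=200)                   # scores under the null
stream = np.concatenate([rng.normal(size=50),
                         rng.normal(loc=4.0, size=5)])  # mostly nulls + anomalies

def empirical_p_value(score, calib):
    """Rank the observed score among calibration scores.

    The +1 corrections make the p-value super-uniform under the null,
    which is what multiple-testing procedures require.
    """
    return (1 + np.sum(calib >= score)) / (len(calib) + 1)

p_values = np.array([empirical_p_value(s, calibration) for s in stream])
print("smallest p-values:", np.sort(p_values)[:5])
```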

We propose a multiscale approach to time series autoregression, in which linear regressors for the process in question include features of its own path that live on multiple timescales. We take these multiscale features to be the recent averages of the process over multiple timescales, whose number and spans are not known to the analyst and are estimated from the data via a change-point detection technique. The resulting construction, termed Adaptive Multiscale AutoRegression (AMAR), enables adaptive regularisation of linear autoregression of large orders. The AMAR model is designed to offer simplicity and interpretability on the one hand, and modelling flexibility on the other. Our theory permits the longest timescale to increase with the sample size. A simulation study is presented to show the usefulness of our approach. Some possible extensions are also discussed, including the Adaptive Multiscale Vector AutoRegressive model (AMVAR) for multivariate time series, which demonstrates promising performance in the data example on UK and US unemployment rates. The R package amar provides an efficient implementation of the AMAR framework.
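As a toy illustration of the multiscale regressors, the sketch below builds recent averages of a simulated AR(1) path over a few hand-picked spans and fits them by least squares. In AMAR the number and spans of the scales would instead be estimated via change-point detection, which is omitted here.

```python
# Multiscale averages of a series' own recent path used as regressors;
# spans are fixed by hand here, whereas AMAR estimates them from data.
import numpy as np

rng = np.random.default_rng(1)
n, scales = 500, [1, 5, 20]                  # hand-picked spans
x = np.zeros(n)
for t in range(1, n):                        # simulate an AR(1) path
    x[t] = 0.8 * x[t - 1] + rng.normal()

T = max(scales)
X = np.column_stack([                        # recent average over each span
    np.array([x[t - s:t].mean() for t in range(T, n)]) for s in scales
])
y = x[T:n]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
print("fitted multiscale coefficients:", np.round(coef, 3))
```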

Much like classical supercomputers, scaling up quantum computers requires an optical interconnect. However, signal attenuation leads to irreversible qubit loss, making quantum interconnect design guidelines and metrics different from conventional computing. Inspired by the classical Dragonfly topology, we propose a multi-group structure where the group switch routes photons emitted by computational end nodes to the group's shared pool of Bell state analyzers (which conduct the entanglement swapping that creates end-to-end entanglement) or across a low-diameter path to another group. We present a full-stack analysis of system performance, a combination of distributed and centralized protocols, and a resource scheduler that plans qubit placement and communications for large-scale, fault-tolerant systems. We implement a prototype three-node switched interconnect and create two-hop entanglement with fidelities of at least 0.6. Our design emphasizes reducing network hops and optical components to simplify system stabilization while flexibly adjusting optical path lengths. Based on evaluated loss and infidelity budgets, we find that moderate-radix switches enable systems meeting expected near-term needs, and large systems are feasible. Our design is expected to be effective for a variety of quantum computing technologies, including ion traps and superconducting qubits with appropriate wavelength transduction.
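As a back-of-the-envelope check on the reported two-hop fidelity, the sketch below composes link fidelities under a Werner-state assumption (an assumption of this note, not a claim of the paper), using the standard entanglement-swapping formula F_out = F1 F2 + (1 - F1)(1 - F2)/3.

```python
# Two-hop fidelity after entanglement swapping, assuming Werner states on
# both links -- an illustrative assumption, not stated by the paper.
def swap_fidelity(f1: float, f2: float) -> float:
    return f1 * f2 + (1 - f1) * (1 - f2) / 3

# Single-link fidelity needed for a two-hop fidelity of roughly 0.6:
f = 0.7623
print(f"two-hop fidelity for F_link={f}: {swap_fidelity(f, f):.3f}")  # ~0.600
```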

A challenge in high-dimensional inverse problems is developing iterative solvers that find accurate solutions of regularized optimization problems at low computational cost. An important example is computed tomography (CT), where both image and data sizes are large and the forward model is therefore costly to evaluate. For several years, algorithms from stochastic optimization have been used with great success for tomographic image reconstruction by subsampling the data. Here we propose a novel way in which stochastic optimization can speed up image reconstruction by means of image-domain sketching, such that a different image resolution is used at each iteration. Hence, we coin this algorithm ImaSk. By considering an associated saddle-point problem, we can formulate ImaSk as a gradient-based algorithm where the gradient is approximated in the same spirit as the stochastic average gradient amélioré (SAGA) and uses at each iteration one of these multiresolution operators at random. We prove that ImaSk converges linearly for linear forward models with strongly convex regularization functions. Numerical simulations on CT show that ImaSk is effective, and that increasing the number of multiresolution operators reduces the computational time needed to reach the solution.
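For context, the sketch below shows the plain SAGA gradient estimator on a least-squares problem split into blocks. ImaSk replaces such data blocks with randomly chosen multiresolution image-domain operators; that substitution, and the saddle-point formulation, are not reproduced here.

```python
# Standard SAGA on a blocked least-squares problem: keep a table of the
# last gradient seen per block and correct each stochastic gradient by it.
import numpy as np

rng = np.random.default_rng(2)
m, d, blocks = 120, 10, 4
A = rng.normal(size=(m, d))
x_true = rng.normal(size=d)
b = A @ x_true
A_blk, b_blk = np.split(A, blocks), np.split(b, blocks)

def block_grad(i, x):
    return A_blk[i].T @ (A_blk[i] @ x - b_blk[i])

x = np.zeros(d)
table = [block_grad(i, x) for i in range(blocks)]   # stored block gradients
avg = np.mean(table, axis=0)
step = 1e-3
for _ in range(2000):
    i = rng.integers(blocks)
    g = block_grad(i, x)
    x -= step * (g - table[i] + avg)                # variance-reduced step
    avg += (g - table[i]) / blocks                  # update running mean
    table[i] = g

print("recovery error:", np.linalg.norm(x - x_true))
```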

Feature selection is crucial for pinpointing relevant features in high-dimensional datasets, mitigating the 'curse of dimensionality,' and enhancing machine learning performance. Traditional feature selection methods for classification use data from all classes to select features for each class. This paper explores feature selection methods that select features for each class separately, using class models based on low-rank generative methods and introducing a signal-to-noise ratio (SNR) feature selection criterion. This novel approach provides theoretical guarantees of true feature recovery under certain assumptions and is shown to outperform some existing feature selection methods on standard classification datasets.
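One plausible instantiation of an SNR-style, per-class score is sketched below: the squared gap between the class mean and the rest-of-data mean, divided by a within-group variance term. The paper's exact criterion, built on low-rank generative class models, may differ.

```python
# An illustrative per-class SNR feature score: signal = squared mean gap
# between the class and the rest; noise = within-group variances.
import numpy as np

def snr_scores(X, y, cls):
    """Score each feature for how well it separates class `cls`."""
    in_cls, out_cls = X[y == cls], X[y != cls]
    signal = (in_cls.mean(axis=0) - out_cls.mean(axis=0)) ** 2
    noise = in_cls.var(axis=0) + out_cls.var(axis=0) + 1e-12
    return signal / noise

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 50))
y = rng.integers(0, 2, size=300)
X[y == 1, :3] += 2.0                      # only features 0-2 are informative

top = np.argsort(snr_scores(X, y, cls=1))[::-1][:5]
print("top features for class 1:", top)  # should include 0, 1, 2
```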

In structured additive distributional regression, the conditional distribution of the response variables given the covariate information and the vector of model parameters is modelled using a P-parametric probability density function where each parameter is modelled through a linear predictor and a bijective response function that maps the domain of the predictor into the domain of the parameter. We present a method to perform inference in structured additive distributional regression using stochastic variational inference. We propose two strategies for constructing a multivariate Gaussian variational distribution to estimate the posterior distribution of the regression coefficients. The first strategy leverages covariate information and hyperparameters to learn both the location vector and the precision matrix. The second strategy tackles the complexity challenges of the first by initially assuming independence among all smooth terms and then introducing correlations through an additional set of variational parameters. Furthermore, we present two approaches for estimating the smoothing parameters. The first treats them as free parameters and provides point estimates, while the second accounts for uncertainty by applying a variational approximation to the posterior distribution. Our model was benchmarked against state-of-the-art competitors in logistic and gamma regression simulation studies. Finally, we validated our approach by comparing its posterior estimates to those obtained using Markov Chain Monte Carlo on a dataset of patents from the biotechnology/pharmaceutics and semiconductor/computer sectors.
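To make the variational setup concrete, the sketch below runs stochastic variational inference with a diagonal Gaussian variational family over the coefficients of a plain logistic regression, using the reparameterization trick. The paper's structured additive predictors, precision-matrix parameterizations, and smoothing-parameter treatment are not reproduced.

```python
# SVI with a diagonal Gaussian variational family over regression
# coefficients, on a toy logistic regression; reparameterization trick.
import numpy as np

rng = np.random.default_rng(4)
n, d = 400, 3
X = rng.normal(size=(n, d))
beta_true = np.array([1.5, -2.0, 0.5])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

mu, log_sd = np.zeros(d), np.zeros(d)      # variational parameters
prior_sd, lr = 10.0, 0.005

for step in range(5000):
    eps = rng.normal(size=d)
    beta = mu + np.exp(log_sd) * eps       # reparameterized sample from q
    p = 1 / (1 + np.exp(-X @ beta))
    # gradient of the log joint (Bernoulli likelihood + N(0, prior_sd^2) prior)
    g_joint = X.T @ (y - p) - beta / prior_sd**2
    mu += lr * g_joint
    # chain rule through beta, plus +1 from the entropy of q
    log_sd += lr * (g_joint * eps * np.exp(log_sd) + 1.0)

print("posterior means:", np.round(mu, 2), " true:", beta_true)
```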

Holographic multiple-input multiple-output (HMIMO) utilizes a compact antenna array to form a nearly continuous aperture, thereby enabling higher capacity and more flexible configurations than conventional MIMO systems, which makes it an attractive topic of current research. Key questions naturally arise regarding the potential of HMIMO to surpass Shannon's theoretical limits and how far its capabilities can be extended. However, traditional Shannon information theory falls short in addressing these inquiries because it focuses only on the information itself while neglecting the underlying carrier, electromagnetic (EM) waves, and their environmental interactions. To bridge the gap between theoretical analysis and practical application for HMIMO systems, we introduce electromagnetic information theory (EIT) in this paper. The paper begins by laying the foundation for HMIMO-oriented EIT, encompassing EM wave equations and communication regions. In the context of HMIMO systems, the resulting physical limitations are presented, involving Chu's limit, Harrington's limit, Hannan's limit, and the evaluation of coupling effects. Field sampling and HMIMO-assisted oversampling are also discussed to guide optimal HMIMO design within the EIT framework. To comprehensively depict the EM-compliant propagation process, we present approximate and exact channel modelling approaches in the near- and far-field zones. Furthermore, we discuss both traditional Shannon information theory, employing the probabilistic method, and Kolmogorov information theory, utilizing functional analysis, for HMIMO-oriented EIT systems.
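For reference, Chu's limit in its commonly cited (McLean) form is quoted below; the paper's exact statement may differ.

```latex
% Chu's limit on the radiation quality factor of an antenna enclosed in a
% sphere of radius a, in McLean's commonly cited form:
\[
  Q \;\ge\; \frac{1}{(ka)^{3}} + \frac{1}{ka},
  \qquad k = \frac{2\pi}{\lambda},
\]
% so electrically small apertures pay a steep bandwidth (high-Q) penalty.
```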

Gaussian Process differential equations (GPODE) have recently gained momentum due to their ability to capture the dynamic behavior of systems and to represent uncertainty in predictions. Prior work has described the process of training the hyperparameters and thereby calibrating GPODE models to data; how to design efficient algorithms for collecting training data, however, remains an open field of research. Nevertheless, high-quality training data are key to model performance. Furthermore, data collection incurs time and financial costs, and in some domains may even be safety-critical for the system under test. Algorithms for safe and efficient data collection are therefore central to building high-quality GPODE models. Our novel Safe Active Learning (SAL) for GPODE algorithm addresses this challenge by proposing a mechanism for collecting efficient, non-safety-critical data. SAL GPODE does so by sequentially suggesting new data, measuring it, and updating the GPODE model with the new data; in this way, subsequent data points are suggested iteratively. The core of our SAL GPODE algorithm is a constrained optimization problem that maximizes the information of new data for GPODE model training, subject to the safety of the underlying system. We demonstrate the superiority of our novel SAL GPODE over a standard, non-active way of measuring new data on two relevant examples.
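A minimal sketch of the safe selection step is given below: among candidate inputs, pick the one with the largest predictive variance (an information proxy) whose lower confidence bound clears a safety threshold. A plain GP regressor stands in for the GPODE model, and the safety function, threshold, and beta are illustrative.

```python
# Safe active-learning selection step: maximize predictive uncertainty
# subject to a safety lower-confidence-bound constraint. Toy 1D example.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)
X_obs = rng.uniform(-3, 3, size=(8, 1))
y_obs = np.sin(X_obs).ravel() + 0.05 * rng.normal(size=8)

gp = GaussianProcessRegressor().fit(X_obs, y_obs)

candidates = np.linspace(-3, 3, 200).reshape(-1, 1)
mean, std = gp.predict(candidates, return_std=True)

beta, safety_threshold = 2.0, -0.8               # illustrative values
safe = (mean - beta * std) >= safety_threshold   # safety LCB constraint
masked_std = np.where(safe, std, -np.inf)        # assumes some candidate is safe
x_next = candidates[np.argmax(masked_std)]
print("next query point:", x_next)
```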

Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks, and hence late-stage fusion of final representations or predictions from each modality ('late fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses 'fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and only share what is necessary. We find that such a strategy improves fusion performance while at the same time reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks including AudioSet, Epic-Kitchens and VGGSound. All code and models will be released.
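The sketch below illustrates the bottleneck-fusion idea: modality tokens never attend to each other directly, but exchange information only through a handful of shared bottleneck latents. Dimensions and the single collect/distribute layer (with shared weights) are illustrative simplifications, not the paper's exact architecture.

```python
# Bottleneck fusion: a few shared latents condense information from both
# modalities, and each modality reads back only through those latents.
import torch
import torch.nn as nn

class BottleneckFusion(nn.Module):
    def __init__(self, dim=256, heads=4, n_bottleneck=4):
        super().__init__()
        self.bottleneck = nn.Parameter(torch.randn(1, n_bottleneck, dim))
        self.collect = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.distribute = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, video_tokens, audio_tokens):
        b = video_tokens.shape[0]
        z = self.bottleneck.expand(b, -1, -1)
        # bottlenecks attend over both modalities and condense them
        both = torch.cat([video_tokens, audio_tokens], dim=1)
        z, _ = self.collect(z, both, both)
        # each modality reads back only through the bottlenecks
        v, _ = self.distribute(video_tokens, z, z)
        a, _ = self.distribute(audio_tokens, z, z)
        return v, a

fusion = BottleneckFusion()
v = torch.randn(2, 50, 256)   # (batch, video tokens, dim)
a = torch.randn(2, 30, 256)   # (batch, audio tokens, dim)
v2, a2 = fusion(v, a)
print(v2.shape, a2.shape)
```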

Benefiting from the rapid development of deep learning techniques, salient object detection has achieved remarkable progress recently. However, two major challenges still hinder its application on embedded devices: low-resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while maintaining accuracy. Second, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the currently predicted salient regions from side-output features, the network eventually explores the missing object parts and details, yielding high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, with advantages in simplicity, efficiency (45 FPS) and model size (81 MB).
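The reverse-attention step can be sketched as follows: invert the upsampled coarse prediction and use it to weight side-output features, so the refinement focuses on regions not yet predicted salient. Shapes are illustrative, and the paper's residual refinement blocks are omitted.

```python
# Reverse attention: erase already-predicted salient regions so the
# side-output features attend to the missing object parts and details.
import torch
import torch.nn.functional as F

def reverse_attention(side_features, coarse_pred):
    """side_features: (B, C, H, W); coarse_pred: (B, 1, h, w) logits."""
    pred = F.interpolate(coarse_pred, size=side_features.shape[-2:],
                         mode="bilinear", align_corners=False)
    reverse = 1.0 - torch.sigmoid(pred)       # weight on not-yet-salient regions
    return side_features * reverse

feats = torch.randn(2, 64, 56, 56)
coarse = torch.randn(2, 1, 14, 14)
print(reverse_attention(feats, coarse).shape)  # torch.Size([2, 64, 56, 56])
```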
