
Continuous-time measurements are instrumental for a multitude of tasks in quantum engineering and quantum control, including the estimation of dynamical parameters of open quantum systems monitored through the environment. However, such measurements do not extract the maximum amount of information available in the output state, so finding alternative optimal measurement strategies is a major open problem. In this paper we solve this problem in the setting of discrete-time input-output quantum Markov chains. We present an efficient algorithm for optimal estimation of one-dimensional dynamical parameters which consists of an iterative procedure for updating a `measurement filter' operator and determining successive measurement bases for the output units. A key ingredient of the scheme is the use of a coherent quantum absorber as a way to post-process the output after the interaction with the system. This is designed adaptively such that the joint system and absorber stationary state is pure at a reference parameter value. The scheme offers an exciting prospect for optimal continuous-time adaptive measurements, but more work is needed to find realistic practical implementations.
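To make the discrete-time setting concrete, below is a minimal numpy sketch of the underlying input-output structure: a qubit system interacts with a sequence of fresh output units via a fixed unitary, each output unit is measured, and the conditional system state (a simple `measurement filter') is updated. This is only an illustration of the repeated-interaction model under assumed dimensions and a fixed measurement basis; the paper's adaptive basis selection and coherent-absorber post-processing are not reproduced here, and all names and parameters are illustrative.

```python
import numpy as np

# Minimal sketch (not the paper's scheme): a discrete-time input-output quantum
# Markov chain in which a qubit system interacts with a fresh output qubit via
# a unitary U, the output qubit is measured, and the conditional system state
# ("measurement filter") is updated. The adaptive basis choice and the coherent
# absorber post-processing described in the abstract are omitted.

rng = np.random.default_rng(0)

def random_unitary(dim, rng):
    """Random unitary from the QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

U = random_unitary(4, rng)                       # acts on system (qubit) x output unit (qubit)
ket0 = np.array([1.0, 0.0], dtype=complex)       # fresh output unit prepared in |0>
rho = np.array([[1, 0], [0, 0]], dtype=complex)  # initial system state |0><0|

def kraus_for_basis(U, basis):
    """Kraus operators on the system for measuring the output unit in `basis`."""
    embed = U @ np.kron(np.eye(2), ket0.reshape(2, 1))          # apply (I x |0>), then U
    return [np.kron(np.eye(2), b.conj().reshape(1, 2)) @ embed  # project onto <b| on the output
            for b in basis]

z_basis = [np.array([1.0, 0.0], dtype=complex), np.array([0.0, 1.0], dtype=complex)]

for step in range(10):
    K = kraus_for_basis(U, z_basis)   # fixed basis here; chosen adaptively in the paper
    probs = np.array([np.real(np.trace(k @ rho @ k.conj().T)) for k in K])
    outcome = rng.choice(len(K), p=probs / probs.sum())
    rho = K[outcome] @ rho @ K[outcome].conj().T / probs[outcome]  # filter update given outcome
```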

Related content

We study the fundamental limits of classical communication using quantum states that decohere as they traverse through a network of queues. We consider a network of Markovian queues, known as a Jackson network, with a single source or multiple sources and a single destination. Qubits are communicated through this network with inevitable buffering at intermediate nodes. We model each node as a `queue-channel,' wherein as the qubits wait in buffer, they continue to interact with the environment and suffer a waiting time-dependent noise. Focusing on erasures, we first obtain explicit classical capacity expressions for simple topologies such as tandem queue-channel and parallel queue-channel. Using these as building blocks, we characterize the classical capacity of a general quantum Jackson network with waiting time-dependent erasures. Throughout, we study two types of quantum networks, namely, (i) Repeater-assisted and (ii) Repeater-less. We also obtain optimal pumping rates and routing probabilities to maximize capacity in simple topologies. More broadly, our work quantifies the impact of delay-induced decoherence on the fundamental limits of classical communication over quantum networks.
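As a toy illustration of how waiting-time-dependent erasures enter the picture, the sketch below simulates sojourn times of a single M/M/1 queue via the Lindley recursion and estimates $E[1-p(W)]$ for an assumed exponential decoherence model $p(W)=1-e^{-\kappa W}$. The rates and the use of $E[1-p(W)]$ as a per-use capacity proxy are assumptions made for illustration, not the paper's exact capacity expressions.

```python
import numpy as np

# Toy sketch: estimate E[1 - p(W)] for a single M/M/1 queue-channel, where W is
# a qubit's sojourn time and p(W) = 1 - exp(-kappa * W) is an assumed
# waiting-time-dependent erasure probability. All rates are illustrative.

rng = np.random.default_rng(1)
lam, mu, kappa = 0.8, 1.0, 0.2          # arrival rate, service rate, decoherence rate
n = 200_000

inter_arrivals = rng.exponential(1 / lam, size=n)
services = rng.exponential(1 / mu, size=n)

# Lindley recursion for the waiting time in queue; sojourn time = wait + service.
wait = np.zeros(n)
for k in range(1, n):
    wait[k] = max(0.0, wait[k - 1] + services[k - 1] - inter_arrivals[k])
sojourn = wait + services

survival = np.exp(-kappa * sojourn)     # probability that a qubit is NOT erased
print(f"estimated E[1 - p(W)] = {survival.mean():.3f}")
```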

Graph-based data structures have drawn great attention in recent years. Most of the large and rapidly growing body of work on graph processing systems focuses on improving performance by preprocessing the input graph and modifying its layout. These systems usually take hours to days to process a single graph on high-end machines, and the preprocessing overhead is often dominant. Yet for many graph applications an exact answer is not crucial, and a rough estimate of the final result is adequate. Approximate computing trades off result accuracy for computation or energy savings that conventional techniques alone cannot achieve. In this work, we design, implement, and evaluate GraphGuess, which is inspired by approximate graph theory and extends it into a general, practical graph processing system. GraphGuess is an approximate graph processing technique with adaptive correction that can be implemented on top of any graph processing system. We build a vertex-centric processing system based on GraphGuess that allows the user to trade off accuracy for better performance. Our experimental studies show that GraphGuess significantly reduces processing time for large-scale graphs while maintaining high accuracy.
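The following is not GraphGuess itself, but a minimal vertex-centric sketch of the approximate-then-correct idea it builds on: run PageRank-style iterations on a randomly sampled subset of each vertex's edges, then refine the result with a few exact iterations. All function names and parameters are illustrative.

```python
import random
from collections import defaultdict

# Illustrative sketch (not GraphGuess): vertex-centric PageRank that uses only a
# sampled fraction of each vertex's out-edges, followed by a short exact
# "correction" phase on the full edge set.

def pagerank(edges, num_vertices, iters, damping=0.85, sample_frac=1.0, ranks=None):
    out_edges = defaultdict(list)
    for u, v in edges:
        out_edges[u].append(v)
    if ranks is None:
        ranks = [1.0 / num_vertices] * num_vertices
    for _ in range(iters):
        contrib = [0.0] * num_vertices
        for u, nbrs in out_edges.items():
            if sample_frac < 1.0:                        # approximate phase: sample edges
                k = max(1, int(sample_frac * len(nbrs)))
                nbrs = random.sample(nbrs, k)
            share = ranks[u] / len(nbrs)
            for v in nbrs:
                contrib[v] += share
        ranks = [(1 - damping) / num_vertices + damping * c for c in contrib]
    return ranks

edges = [(0, 1), (1, 2), (1, 3), (2, 0), (2, 3), (3, 0)]
approx = pagerank(edges, 4, iters=8, sample_frac=0.5)    # cheap approximate phase
refined = pagerank(edges, 4, iters=2, ranks=approx)      # exact correction phase
print(refined)
```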

Quantitative Ultrasound (QUS) provides important information about tissue properties. A QUS parametric image can be formed by dividing the envelope data into small overlapping patches and computing different speckle statistics, such as the parameters of the Nakagami and Homodyned K-distributions (HK-distribution). The resulting QUS parametric images can be erroneous because only a few independent samples are available inside each patch. Another challenge is that the envelope samples inside a patch are assumed to come from the same distribution, an assumption that is often violated given that tissue is usually not homogeneous. In this paper, we propose a method based on Convolutional Neural Networks (CNN) to estimate QUS parametric images without patching. We construct a large dataset sampled from the HK-distribution, with regions of random shapes and QUS parameter values. We then use a well-known network to estimate the QUS parameters in a multi-task learning fashion. Our results confirm that the proposed method reduces errors and improves border definition in QUS parametric images.
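As a rough illustration of the patch-free, multi-task idea, the sketch below defines a small fully convolutional PyTorch model with a shared trunk and one head per QUS parameter, trained with a summed per-pixel MSE loss. The architecture, shapes, and parameter count are placeholders and do not correspond to the well-known network used in the paper.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the paper's network): a fully convolutional model
# mapping an envelope image to pixel-wise QUS parameter maps, with a shared
# trunk and one head per parameter (multi-task learning, no patching).

class QUSNet(nn.Module):
    def __init__(self, num_params=2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # one 1x1-convolution head per QUS parameter map
        self.heads = nn.ModuleList(nn.Conv2d(32, 1, kernel_size=1) for _ in range(num_params))

    def forward(self, x):
        features = self.trunk(x)
        return [head(features) for head in self.heads]

model = QUSNet(num_params=2)
envelope = torch.rand(4, 1, 128, 128)                     # batch of simulated envelope images
targets = [torch.rand(4, 1, 128, 128) for _ in range(2)]  # per-parameter ground-truth maps
preds = model(envelope)
loss = sum(nn.functional.mse_loss(p, t) for p, t in zip(preds, targets))
loss.backward()
```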

Reconfigurable Intelligent Surfaces (RISs) are envisioned to play a key role in future wireless communications, enabling programmable radio propagation environments. They are usually considered as almost passive planar structures that operate as adjustable reflectors, giving rise to a multitude of implementation challenges, including the inherent difficulty in estimating the underlying wireless channels. In this paper, we focus on the recently conceived concept of Hybrid Reconfigurable Intelligent Surfaces (HRISs), which do not solely reflect the impinging waveform in a controllable fashion, but are also capable of sensing and processing an adjustable portion of it. We first present implementation details for this metasurface architecture and propose a convenient mathematical model for characterizing its dual operation. As an indicative application of HRISs in wireless communications, we formulate the individual channel estimation problem for the uplink of a multi-user HRIS-empowered communication system. Considering first a noise-free setting, we theoretically quantify the advantage of HRISs in notably reducing the number of pilots needed for channel estimation, compared to the case of purely reflective RISs. We then present closed-form expressions for the Mean Squared Error (MSE) performance in estimating the individual channels at the HRIS and the base station under the noisy model. Based on these derivations, we propose an automatic differentiation-based first-order optimization approach to efficiently determine the HRIS phase and power-splitting configurations that minimize the weighted sum-MSE performance. Our numerical evaluations demonstrate that HRISs not only enable the estimation of the individual channels in HRIS-empowered communication systems, but also improve the ability to recover the cascaded channel, compared to existing methods using passive and reflective RISs.
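To give a flavour of the automatic-differentiation step, the sketch below optimizes HRIS phase shifts and a power-splitting ratio by gradient descent in PyTorch. The objective is a generic differentiable placeholder balancing reflected beamforming gain against the power kept for sensing; it is not the paper's weighted sum-MSE expression, and all dimensions, channels, and weights are illustrative.

```python
import torch

# Illustrative sketch: first-order optimization of HRIS phases and a power
# splitting ratio via automatic differentiation. The loss is a generic
# placeholder, NOT the paper's weighted sum-MSE objective.

torch.manual_seed(0)
M = 32                                          # number of HRIS elements (illustrative)
h = torch.randn(M, dtype=torch.cfloat)          # stand-in channel to the HRIS
g = torch.randn(M, dtype=torch.cfloat)          # stand-in channel from the HRIS

theta = torch.zeros(M, requires_grad=True)      # per-element phase shifts
rho_logit = torch.zeros(1, requires_grad=True)  # power-splitting ratio, kept in (0, 1)

opt = torch.optim.Adam([theta, rho_logit], lr=0.05)
for step in range(300):
    opt.zero_grad()
    rho = torch.sigmoid(rho_logit)              # fraction of impinging power reflected
    phase = torch.polar(torch.ones(M), theta)   # e^{j*theta} per element
    reflected_gain = torch.abs(torch.sum(torch.sqrt(rho) * phase * h * g)) ** 2
    # placeholder trade-off: reflected gain vs. power (1 - rho) kept for sensing
    loss = -reflected_gain - 5.0 * (1.0 - rho.squeeze())
    loss.backward()
    opt.step()
print(float(torch.sigmoid(rho_logit)))          # learned power-splitting ratio
```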

Factorization of matrices where the rank of the two factors diverges linearly with their sizes has many applications in diverse areas such as unsupervised representation learning, dictionary learning, or sparse coding. We consider a setting where the two factors are generated from known component-wise independent prior distributions, and the statistician observes a (possibly noisy) component-wise function of their matrix product. In the limit where the dimensions of the matrices tend to infinity but their ratios remain fixed, we expect to be able to derive closed-form expressions for the optimal mean squared error on the estimation of the two factors. However, this remains a very involved mathematical and algorithmic problem. A related, but simpler, problem is extensive-rank matrix denoising, where one aims to reconstruct a matrix with extensive but typically small rank from noisy measurements. In this paper, we approach both problems using high-temperature expansions at fixed order parameters. This allows us to clarify how previous attempts at solving these problems failed to find an asymptotically exact solution. We provide a systematic way to derive the corrections to these existing approximations, taking into account the structure of correlations particular to the problem. Finally, we illustrate our approach in detail on the case of extensive-rank matrix denoising. We compare our results with known optimal rotationally-invariant estimators, and show how exact asymptotic calculations of the minimal error can be performed using extensive-rank matrix integrals.
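For concreteness, a standard formulation of the extensive-rank denoising problem (stated here only to fix notation; the scaling conventions are assumptions, not taken from the paper) observes a noisy symmetric $N\times N$ matrix $$Y = S + \sqrt{\Delta}\,Z,$$ where $S$ has rank proportional to $N$ and $Z$ has independent Gaussian entries of variance $O(1/N)$, and the quantity of interest is the minimal matrix mean squared error $\frac{1}{N^2}\,\mathbb E\,\|S-\mathbb E[S\mid Y]\|_F^2$ in the limit $N\to\infty$.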

In high-dimensional prediction settings, it remains challenging to reliably estimate test performance. To address this challenge, we present a novel performance estimation framework. This framework, called Learn2Evaluate, is based on learning curves: it fits a smooth monotone curve depicting test performance as a function of the sample size. Learn2Evaluate has several advantages compared to commonly applied performance estimation methodologies. Firstly, a learning curve offers a graphical overview of a learner. This overview helps assess the potential benefit of adding training samples and provides a more complete comparison between learners than performance estimates at a fixed subsample size. Secondly, a learning curve facilitates estimating the performance at the total sample size rather than at a subsample size. Thirdly, Learn2Evaluate allows the computation of a theoretically justified and useful lower confidence bound, which may be further tightened by a bias correction. The benefits of Learn2Evaluate are illustrated by a simulation study and applications to omics data.
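As a minimal sketch of the learning-curve idea, the snippet below fits an inverse power-law curve $y(n)=a-b\,n^{-c}$ to performance estimates at a few subsample sizes with scipy and extrapolates to the total sample size. The functional form, the data points, and the fitting routine are illustrative assumptions; Learn2Evaluate's monotone fit and lower confidence bound construction are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch (not Learn2Evaluate itself): fit a smooth monotone
# learning curve to performance estimates at a few subsample sizes and
# extrapolate to the total sample size.

def power_law(n, a, b, c):
    """Inverse power law: performance approaches the plateau `a` as n grows."""
    return a - b * np.power(n, -c)

subsample_sizes = np.array([25.0, 50.0, 100.0, 200.0, 400.0])
performance = np.array([0.61, 0.68, 0.73, 0.76, 0.78])   # e.g. cross-validated AUC (made-up)

params, _ = curve_fit(power_law, subsample_sizes, performance,
                      p0=[0.8, 1.0, 0.5], maxfev=10_000)
total_n = 600
print(f"extrapolated performance at n={total_n}: {power_law(total_n, *params):.3f}")
```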

In this work, we compare the performance of the Quantum Approximate Optimization Algorithm (QAOA) with state-of-the-art classical solvers such as Gurobi and MQLib on the combinatorial optimization problem MaxCut for 3-regular graphs. The goal is to identify under which conditions QAOA can achieve "quantum advantage" over classical algorithms, in terms of both solution quality and time to solution. Quantum advantage might be achievable with hundreds of qubits and moderate depth $p$ by sampling the QAOA state at a frequency of order 10 kHz. We observe, however, that classical heuristic solvers are capable of producing high-quality approximate solutions in $\textit{linear}$ time complexity. In order to match this quality for $\textit{large}$ graph sizes $N$, a quantum device must support depth $p>11$. Otherwise, we demonstrate that the number of required samples grows exponentially with $N$, hindering the scalability of QAOA with $p\leq11$. These results put challenging bounds on achieving quantum advantage for QAOA MaxCut on 3-regular graphs. Other problems, such as different graph families, weighted MaxCut, maximum independent set, and 3-SAT, may be better suited for achieving quantum advantage on near-term quantum devices.
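The sampling-rate argument can be made concrete with a small back-of-the-envelope calculation: given a hypothetical per-shot probability $p$ of drawing a cut of the target quality, the number of shots needed to see one such cut with 99% confidence, and the corresponding wall-clock time at a 10 kHz sampling rate, follow directly. The values of $p$ below are placeholders, not numbers from the paper.

```python
import math

# Back-of-the-envelope sketch: shots and wall-clock time needed to observe at
# least one solution of the target quality with 99% confidence, assuming a
# hypothetical per-shot success probability p and a 10 kHz sampling rate.

sampling_rate_hz = 10_000
confidence = 0.99

for p in (1e-2, 1e-4, 1e-6):            # placeholder per-shot success probabilities
    shots = math.log(1 - confidence) / math.log(1 - p)
    seconds = shots / sampling_rate_hz
    print(f"p = {p:.0e}: ~{shots:,.0f} shots, ~{seconds:,.2f} s at 10 kHz")
```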

The crude Monte Carlo method approximates the integral $$S(f)=\int_a^b f(x)\,\mathrm dx$$ with expected error (deviation) $\sigma(f)N^{-1/2},$ where $\sigma(f)^2$ is the variance of $f$ and $N$ is the number of random samples. If $f\in C^r,$ then special variance reduction techniques can lower this error to the level $N^{-(r+1/2)}.$ In this paper, we consider methods of the form $$\overline M_{N,r}(f)=S(L_{m,r}f)+M_n(f-L_{m,r}f),$$ where $L_{m,r}$ is the piecewise polynomial interpolation of $f$ of degree $r-1$ using a partition of the interval $[a,b]$ into $m$ subintervals, $M_n$ is a Monte Carlo approximation using $n$ samples of $f,$ and $N$ is the total number of function evaluations used. We derive asymptotic error formulas for the methods $\overline M_{N,r}$ that use nonadaptive as well as adaptive partitions. Although the convergence rate $N^{-(r+1/2)}$ cannot be beaten, the asymptotic constants make a huge difference. For example, for $\int_0^1(x+d)^{-1}\mathrm dx$ and $r=4,$ the best adaptive methods outperform the nonadaptive ones by a factor of roughly $10^{12}$ if $d=10^{-4},$ and $10^{29}$ if $d=10^{-8}.$ In addition, the proposed adaptive methods are easily implementable and well suited for automatic integration. We believe that the obtained results can be generalized to multivariate integration.
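A minimal implementation of $\overline M_{N,r}$ for $r=2$ (piecewise linear interpolation on a uniform, nonadaptive partition) is sketched below: the interpolant $L_{m,2}f$ is integrated exactly via the composite trapezoid rule on the knots, and crude Monte Carlo is applied only to the residual $f-L_{m,2}f$. The choices of $m$, $n$, and the test integrand are illustrative, and the adaptive partitioning discussed above is not implemented.

```python
import numpy as np

# Sketch of the variance-reduced estimator for r = 2: integrate the piecewise
# linear interpolant L_{m,2} f exactly (trapezoid sums on the knots) and apply
# crude Monte Carlo to the residual f - L_{m,2} f. Uniform (nonadaptive) partition.

rng = np.random.default_rng(0)

def mc_with_linear_control_variate(f, a, b, m, n):
    knots = np.linspace(a, b, m + 1)
    fk = f(knots)
    # exact integral of the piecewise linear interpolant = composite trapezoid rule
    s_interp = np.sum((fk[:-1] + fk[1:]) / 2.0 * np.diff(knots))
    # crude Monte Carlo on the residual f - L_{m,2} f
    x = rng.uniform(a, b, size=n)
    residual = f(x) - np.interp(x, knots, fk)
    return s_interp + (b - a) * residual.mean()

d = 1e-4
f = lambda x: 1.0 / (x + d)
estimate = mc_with_linear_control_variate(f, 0.0, 1.0, m=1000, n=1000)
print(f"estimate = {estimate:.6f}, exact = {np.log((1.0 + d) / d):.6f}")
```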

Dynamic time warping (DTW) is an effective dissimilarity measure in many time series applications. Despite its popularity, it is sensitive to noise and outliers, which lead to the singularity problem and bias in the measurement. Moreover, the time complexity of DTW is quadratic in the length of the time series, making it impractical for real-time applications. In this paper, we propose a novel time series dissimilarity measure named RobustDTW to reduce the effects of noise and outliers. Specifically, RobustDTW estimates the trend and optimizes the time warp in an alternating manner, using our proposed temporal graph trend filtering. To improve efficiency, we propose a multi-level framework that estimates the trend and the warp function at a lower resolution and then repeatedly refines them at higher resolutions. We further extend RobustDTW to periodicity detection and outlier time series detection. Experiments on real-world datasets demonstrate the superior performance of RobustDTW compared to DTW variants in both outlier time series detection and periodicity detection.
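For reference, the classical DTW primitive that RobustDTW builds on is the quadratic-time dynamic program sketched below; the trend estimation, temporal graph trend filtering, and multi-level refinement of RobustDTW are not shown.

```python
import numpy as np

# Classical DTW for reference: an O(len(x) * len(y)) dynamic program. RobustDTW's
# alternating trend/warp optimization and multi-level refinement build on this.

def dtw_distance(x, y):
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

t = np.linspace(0, 2 * np.pi, 100)
print(dtw_distance(np.sin(t), np.sin(t + 0.5)))       # small warp-invariant distance
```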

User engagement is a critical metric for evaluating the quality of open-domain dialogue systems. Prior work has focused on conversation-level engagement, using heuristically constructed features such as the number of turns and the total duration of the conversation. In this paper, we investigate the possibility and efficacy of estimating utterance-level engagement and define a novel metric, {\em predictive engagement}, for the automatic evaluation of open-domain dialogue systems. Our experiments demonstrate that (1) human annotators agree strongly when assessing utterance-level engagement scores, and (2) conversation-level engagement scores can be predicted from properly aggregated utterance-level engagement scores. Furthermore, we show that utterance-level engagement scores can be learned from data and, as shown by their correlation with human judgements, can improve automatic evaluation metrics for open-domain dialogue systems. This suggests that predictive engagement can be used as real-time feedback for training better dialogue models.
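A small sketch of the aggregation step is given below, assuming mean pooling of utterance-level engagement scores into a conversation-level score and a Pearson correlation against conversation-level human ratings. The scores are made-up placeholders, and mean pooling is only one possible aggregation choice.

```python
import numpy as np
from scipy.stats import pearsonr

# Sketch of the aggregation step: pool utterance-level engagement scores into a
# conversation-level score (mean pooling assumed) and correlate with human
# conversation-level ratings. All scores below are made-up placeholders.

utterance_scores = [
    [0.2, 0.4, 0.5, 0.3],         # conversation 1
    [0.7, 0.8, 0.6],              # conversation 2
    [0.1, 0.2, 0.15, 0.3, 0.2],   # conversation 3
]
human_ratings = [2.0, 4.5, 1.5]   # conversation-level human judgements

conversation_scores = [float(np.mean(s)) for s in utterance_scores]
r, p = pearsonr(conversation_scores, human_ratings)
print(f"Pearson r = {r:.2f} (p-value = {p:.2f})")
```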
