
We propose a quantum soft-covering problem for a given general quantum channel and one of its output states, which consists in finding the minimum rank of an input state needed to approximate the given channel output. We then prove a one-shot quantum covering lemma in terms of smooth min-entropies by leveraging decoupling techniques from quantum Shannon theory. This covering result is shown to be equivalent to a coding theorem for rate distortion under a posterior (reverse) channel distortion criterion [Atif, Sohail, Pradhan, arXiv:2302.00625]. Both one-shot results directly yield corollaries about the i.i.d. asymptotics, in terms of the coherent information of the channel. The power of our quantum covering lemma is demonstrated by two additional applications: first, we formulate a quantum channel resolvability problem, and provide one-shot as well as asymptotic upper and lower bounds; second, we provide new upper bounds on the unrestricted and simultaneous identification capacities of quantum channels, in particular separating for the first time the simultaneous identification capacity from the unrestricted one, proving a long-standing conjecture of the last author.
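
To fix ideas, the covering quantity described above can be written compactly; the notation below (a trace-distance error criterion and the symbol $r_\epsilon$) is ours, chosen for illustration rather than taken from the paper:

$$ r_\epsilon(\mathcal{N},\rho) \;=\; \min\Big\{ \operatorname{rank}(\sigma) \;:\; \sigma \text{ an input state},\ \tfrac{1}{2}\big\| \mathcal{N}(\sigma) - \rho \big\|_1 \le \epsilon \Big\}, $$

where $\mathcal{N}$ is the channel, $\rho$ is the target output state, and $\epsilon$ is the approximation error; the one-shot lemma then bounds $\log r_\epsilon$ in terms of smooth min-entropies of the relevant states.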

Related content

Multi-reference alignment (MRA) is the problem of recovering a signal from its multiple noisy copies, each acted upon by a random group element. MRA is mainly motivated by single-particle cryo-electron microscopy (cryo-EM) that has recently joined X-ray crystallography as one of the two leading technologies to reconstruct biological molecular structures. Previous papers have shown that in the high noise regime, the sample complexity of MRA and cryo-EM is $n=\omega(\sigma^{2d})$, where $n$ is the number of observations, $\sigma^2$ is the variance of the noise, and $d$ is the lowest-order moment of the observations that uniquely determines the signal. In particular, it was shown that in many cases, $d=3$ for generic signals, and thus the sample complexity is $n=\omega(\sigma^6)$. In this paper, we analyze the second moment of the MRA and cryo-EM models. First, we show that in both models the second moment determines the signal up to a set of unitary matrices, whose dimension is governed by the decomposition of the space of signals into irreducible representations of the group. Second, we derive sparsity conditions under which a signal can be recovered from the second moment, implying sample complexity of $n=\omega(\sigma^4)$. Notably, we show that the sample complexity of cryo-EM is $n=\omega(\sigma^4)$ if at most one third of the coefficients representing the molecular structure are non-zero; this bound is near-optimal. The analysis is based on tools from representation theory and algebraic geometry. We also derive bounds on recovering a sparse signal from its power spectrum, which is the main computational problem of X-ray crystallography.
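
As a concrete instance of the second-moment analysis, consider MRA over the cyclic group, where the mean power spectrum of the observations is a shift-invariant second-order moment. The numerical sketch below (our toy setup, with an illustrative signal length and noise level, not code from the paper) shows how the noise bias $L\sigma^2$ is removed; the resulting power spectrum determines the signal only up to phases, the cyclic analogue of the unitary ambiguity discussed above.

    import numpy as np

    rng = np.random.default_rng(0)
    L, n, sigma = 32, 50_000, 1.0           # signal length, #observations, noise std
    x = rng.standard_normal(L)              # unknown signal (known here for checking)

    # Observations: random cyclic shift of x plus white Gaussian noise.
    shifts = rng.integers(0, L, size=n)
    obs = np.stack([np.roll(x, s) for s in shifts])
    obs += sigma * rng.standard_normal((n, L))

    # The mean power spectrum is a second-order invariant of the observations.
    # For the unnormalized FFT, E|FFT(noise)_k|^2 = L * sigma^2: subtract it.
    ps_hat = np.mean(np.abs(np.fft.fft(obs, axis=1)) ** 2, axis=0) - L * sigma**2
    ps_true = np.abs(np.fft.fft(x)) ** 2

    print("relative error:", np.max(np.abs(ps_hat - ps_true)) / np.max(ps_true))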

We show that any application of the technique of unbiased simulation becomes perfect simulation when coalescence of the two coupled Markov chains can be practically assured in advance. This happens when a fixed number of iterations is high enough that the probability of needing any more to achieve coalescence is negligible; we suggest a value of $10^{-20}$. This finding enormously increases the range of problems for which perfect simulation, which exactly follows the target distribution, can be implemented. We design a new algorithm to make practical use of the high number of iterations by producing extra perfect sample points with little extra computational effort, at the cost of a small, controllable amount of serial correlation within sample sets of about 20 points. Different sample sets remain completely independent. The algorithm includes maximal coupling for continuous processes, to bring together chains that are already close. We illustrate the methodology on a simple, two-state Markov chain and on standard normal distributions in up to 20 dimensions. Our technical formulation involves a nonzero probability, which can be made arbitrarily small, that a single perfect sample point may have its place taken by a "string" of many points which are assigned weights, each equal to $\pm 1$, that sum to~$1$. A point with a weight of $-1$ is a "hole": an object that can be cancelled by a point that has the same value but opposite weight $+1$.
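
The two-state example can be used to illustrate the coalescence check at the heart of the method. The sketch below (with a transition matrix and iteration budget of our own choosing, not the paper's) couples two chains through common uniform draws, so that the probability of no coalescence within the fixed budget decays geometrically and can be bounded in advance.

    import numpy as np

    rng = np.random.default_rng(1)
    P = np.array([[0.9, 0.1],      # toy two-state transition matrix (illustrative)
                  [0.2, 0.8]])

    def step(state, u):
        """Advance one chain using a common uniform draw u (the coupling)."""
        return 0 if u < P[state, 0] else 1

    N = 200                        # fixed iteration budget, chosen in advance
    x, y = 0, 1                    # the coupled chains start in different states
    coalesced_at = None
    for t in range(N):
        u = rng.random()           # the same randomness drives both chains
        x, y = step(x, u), step(y, u)
        if x == y and coalesced_at is None:
            coalesced_at = t + 1

    print("coalesced after", coalesced_at, "steps; final state:", x)

With these numbers the per-step coalescence probability is $0.3$, so $N=200$ gives a failure probability of $0.7^{200} \approx 10^{-31}$, far below the suggested $10^{-20}$ threshold.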

Hopfield networks are an attractive choice for solving many types of computational problems because they provide a biologically plausible mechanism. The Self-Optimization (SO) model adds to the Hopfield network a biologically founded Hebbian learning rule, in combination with repeated network resets to arbitrary initial states, for optimizing its own behavior towards some desirable goal state encoded in the network. To better understand this process, we first demonstrate that the SO model can solve concrete combinatorial problems in SAT form, using two examples: the Liars problem and the map coloring problem. In addition, we show how under some conditions critical information may be lost forever, with the learned network producing seemingly optimal solutions that are in fact inappropriate for the problem it was tasked to solve. What appears to be an undesirable side effect of the SO model can thus provide insight into how it solves intractable problems.
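
For readers unfamiliar with the SO model, the loop it describes (converge to an attractor, apply Hebbian learning there, reset, repeat) can be sketched in a few lines. The network size, learning rate, and random symmetric weights below are illustrative choices of ours, not the paper's SAT encodings.

    import numpy as np

    rng = np.random.default_rng(2)
    N, resets, alpha = 50, 300, 0.0005        # network size, #resets, Hebbian rate

    W0 = rng.standard_normal((N, N))
    W0 = (W0 + W0.T) / 2                      # symmetric "problem" weights
    np.fill_diagonal(W0, 0)
    W = W0.copy()                             # learned weights start at the problem

    def converge(s, W, sweeps=30):
        """Asynchronous Hopfield updates until (approximately) an attractor."""
        for _ in range(sweeps):
            for i in rng.permutation(N):
                s[i] = 1 if W[i] @ s >= 0 else -1
        return s

    for _ in range(resets):
        s = rng.choice([-1, 1], size=N)       # reset to an arbitrary state
        s = converge(s, W)
        W += alpha * np.outer(s, s)           # Hebbian learning on the attractor
        np.fill_diagonal(W, 0)

    # Energy w.r.t. the ORIGINAL weights measures how good the learned
    # network's attractors are for the problem it was asked to solve.
    s = converge(rng.choice([-1, 1], size=N), W)
    print("original-energy of final attractor:", -0.5 * s @ W0 @ s)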

Venkatachala on the one hand, and Avdispahi\'c & Zejnulahi on the other, both studied integer sequences with an unusual sum property defined in a greedy way, and proved many results about them. However, their proofs were rather lengthy and required numerous cases. In this paper, I provide a different approach, via finite automata, that can prove the same results (and more) in a simple, unified way. Instead of case analysis, we use a decision procedure implemented in the free software Walnut. Using these ideas, we can prove a conjecture of Quet and find connections between Quet's sequence and the "married" functions of Hofstadter.
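
To make "defined in a greedy way" concrete, here is a greedy rule of exactly this flavor: take $a(n)$ to be the least unused positive integer keeping the running sum divisible by $n$. We do not claim this is precisely the sequence treated in the paper; it is a stand-in for illustration.

    def greedy_sum_sequence(n_terms):
        """Greedy sequence: a(n) is the least positive integer not yet used
        such that a(1) + ... + a(n) is divisible by n (illustrative rule)."""
        used, seq, total = set(), [], 0
        for n in range(1, n_terms + 1):
            k = 1
            while k in used or (total + k) % n != 0:
                k += 1
            used.add(k)
            seq.append(k)
            total += k
        return seq

    print(greedy_sum_sequence(20))   # starts 1, 3, 2, 6, 8, 4, ...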

We study the problem of estimating non-linear functionals of discrete distributions in the context of local differential privacy. The initial data $x_1,\ldots,x_n \in [K]$ are assumed to be i.i.d. and distributed according to an unknown discrete distribution $p = (p_1,\ldots,p_K)$. Only $\alpha$-locally differentially private (LDP) samples $z_1,\ldots,z_n$ are publicly available, where the term 'local' means that each $z_i$ is produced using one individual attribute $x_i$. We exhibit privacy mechanisms (PM) that are interactive (i.e., they are allowed to use already published confidential data) or non-interactive. We describe the behavior of the quadratic risk for estimating the power sum functional $F_{\gamma} = \sum_{k=1}^K p_k^{\gamma}$, $\gamma >0$, as a function of $K, \, n$ and $\alpha$. In the non-interactive case, we study two plug-in type estimators of $F_{\gamma}$, for all $\gamma >0$, that are similar to the MLE analyzed by Jiao et al. (2017) in the multinomial model. However, due to the privacy constraint the rates we attain are slower and similar to those obtained in the Gaussian model by Collier et al. (2020). In the interactive case, we introduce for all $\gamma >1$ a two-step procedure which attains the faster parametric rate $(n \alpha^2)^{-1/2}$ when $\gamma \geq 2$. We give lower bounds over all $\alpha$-LDP mechanisms and all estimators using the private samples.
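
To illustrate the non-interactive setting, one standard mechanism adds coordinate-wise Laplace noise to a one-hot encoding of $x_i$ (the $L_1$ sensitivity of a one-hot vector is 2, hence scale $2/\alpha$), followed by a plug-in estimate of $F_\gamma$. This mirrors the general recipe rather than the paper's exact estimators.

    import numpy as np

    rng = np.random.default_rng(3)
    K, n, alpha, gamma = 10, 100_000, 1.0, 2.0
    p = np.full(K, 1 / K)                      # true distribution (for simulation)
    x = rng.choice(K, size=n, p=p)             # confidential samples in [K]

    # Non-interactive alpha-LDP: one-hot encode, add Laplace(2/alpha) noise.
    one_hot = np.eye(K)[x]
    z = one_hot + rng.laplace(scale=2 / alpha, size=(n, K))

    # Plug-in estimate of F_gamma = sum_k p_k^gamma from the private samples.
    p_hat = np.clip(z.mean(axis=0), 0, None)
    F_hat = np.sum(p_hat ** gamma)
    print(F_hat, "vs true", np.sum(p ** gamma))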

Formation control of multi-agent systems has been a prominent research topic, spanning both theoretical and practical domains, over the past two decades. Our study considers the leader-follower framework and addresses two critical, previously overlooked aspects. First, we investigate the impact of an unknown nonlinear manifold, which adds complexity to the formation control challenge. Second, we address the practical constraint of a limited follower sensing range, which makes it difficult for followers to accurately localize the leader. Our core objective is to employ Koopman operator theory and Extended Dynamic Mode Decomposition (EDMD) to craft a reliable algorithm with which the follower robot can effectively predict the leader's position. Our experiments on an elliptical paraboloid manifold, using two omni-directional wheeled robots, validate the prediction algorithm's effectiveness.
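
The prediction pipeline rests on EDMD, which fits a linear Koopman matrix on lifted snapshot pairs and then iterates it forward. The sketch below uses a toy scalar system and a monomial dictionary of our choosing in place of the robots-on-a-manifold setup.

    import numpy as np

    rng = np.random.default_rng(4)

    def f(x):                       # toy nonlinear "leader" dynamics (illustrative)
        return 0.9 * x + 0.1 * x ** 2

    def lift(x):                    # dictionary of observables: [1, x, x^2, x^3]
        return np.stack([np.ones_like(x), x, x ** 2, x ** 3])

    # Collect snapshot pairs (x_t, x_{t+1}).
    x_t = rng.uniform(-1, 1, size=500)
    x_next = f(x_t)

    # EDMD: least-squares Koopman matrix K with lift(x_{t+1}) ~= K @ lift(x_t).
    Phi, Phi_next = lift(x_t), lift(x_next)
    K = Phi_next @ np.linalg.pinv(Phi)

    # Predict several steps ahead from a new initial condition.
    x0 = np.array([0.5])
    phi = lift(x0)
    for _ in range(3):
        phi = K @ phi
    print("EDMD 3-step prediction:", phi[1], "truth:", f(f(f(x0))))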

Operator splitting is a popular divide-and-conquer strategy for solving differential equations. Typically, the right-hand side of the differential equation is split into a number of parts that are then integrated separately. Many methods are known that split the right-hand side into two parts. This approach is limiting, however, and there are situations when 3-splitting is more natural and ultimately more advantageous. The second-order Strang operator-splitting method readily generalizes to a right-hand side split into any number of operators. It is arguably the most popular method for 3-splitting because of its efficiency, ease of implementation, and intuitive nature. Other 3-splitting methods exist, but they are less well known, and analysis and evaluation of their performance in practice are scarce. We demonstrate the effectiveness of some alternative second-order 3-splitting methods relative to Strang splitting on two problems: the reaction-diffusion Brusselator, which can be split into three parts that each have closed-form solutions, and the kinetic Vlasov--Poisson equations, which are used in semi-Lagrangian plasma simulations. We find alternative second-order 3-operator-splitting methods that realize efficiency gains of 10\%--20\% over traditional Strang splitting. Our analysis for the practical assessment of the efficiency of operator-splitting methods includes the computational cost of the integrators and can be used in method design.
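
For three operators $A$, $B$, $C$, the Strang generalization is the palindromic composition $e^{hA/2}\,e^{hB/2}\,e^{hC}\,e^{hB/2}\,e^{hA/2}$, which is second order. The sketch below verifies this convergence order on random linear operators, a toy stand-in for the Brusselator and Vlasov--Poisson problems.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(5)
    A, B, C = (rng.standard_normal((4, 4)) * 0.5 for _ in range(3))

    def strang3(h):
        """One step of 3-split Strang: A and B half-steps sandwich a full C step."""
        return (expm(h / 2 * A) @ expm(h / 2 * B) @ expm(h * C)
                @ expm(h / 2 * B) @ expm(h / 2 * A))

    exact = expm(A + B + C)          # exact flow over one unit of time
    for m in (10, 20, 40):           # halving h should quarter the error (order 2)
        h = 1.0 / m
        approx = np.linalg.matrix_power(strang3(h), m)
        print(m, np.linalg.norm(approx - exact))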

We present a robust deep incremental learning framework for regression on financial temporal tabular datasets. The framework is built upon the incremental use of commonly available tabular and time-series prediction models, so as to adapt to the distributional shifts typical of financial datasets. It uses a simple building block (decision trees) to construct self-similar models of any required complexity, delivering robust performance under adverse situations such as regime changes, fat-tailed distributions, and low signal-to-noise ratios. As a detailed study, we demonstrate our scheme using XGBoost models trained on the Numerai dataset and show that a two-layer deep ensemble of XGBoost models over different model snapshots delivers high-quality predictions under different market regimes. We also show that the performance of XGBoost models with different numbers of boosting rounds in three scenarios (small, standard, and large) increases monotonically with model size and converges towards the generalisation upper bound. We also evaluate the robustness of the model under variability of different hyperparameters, such as model complexity and data-sampling settings. Our model has low hardware requirements, as no specialised neural architectures are used, and each base model can be trained independently in parallel.
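
The two-layer idea, training several model snapshots in layer one and averaging them in layer two, can be sketched as follows. The synthetic features and hyperparameters stand in for the Numerai data and the paper's tuned settings.

    import numpy as np
    from xgboost import XGBRegressor

    rng = np.random.default_rng(6)
    X = rng.standard_normal((5000, 20))            # stand-in for Numerai features
    y = 0.1 * X[:, 0] + rng.standard_normal(5000)  # low signal-to-noise target

    # Layer 1: "snapshots" = models of increasing size (boosting rounds).
    snapshots = []
    for rounds in (100, 200, 400):
        m = XGBRegressor(n_estimators=rounds, max_depth=4,
                         learning_rate=0.01, subsample=0.5)
        m.fit(X, y)
        snapshots.append(m)

    # Layer 2: equal-weight ensemble over snapshots, a common robustness
    # device for data subject to regime shifts.
    preds = np.mean([m.predict(X) for m in snapshots], axis=0)
    print("ensembled prediction for first 3 rows:", preds[:3])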

We describe a novel algorithm for solving general parametric (nonlinear) eigenvalue problems. Our method has two steps: first, high-accuracy solutions of non-parametric versions of the problem are gathered at some values of the parameters; these are then combined to obtain global approximations of the parametric eigenvalues. To gather the non-parametric data, we use non-intrusive contour-integration-based methods, which, however, cannot track eigenvalues that migrate into or out of the contour as the parameter changes. Special strategies are described for performing the combination-over-parameter step despite having only partial information on such "migrating" eigenvalues. Moreover, we devote special attention to the approximation of eigenvalues that undergo bifurcations. Finally, we propose an adaptive strategy that allows one to apply our method effectively even without any a priori information on the behavior of the sought-after eigenvalues. Numerical tests show that our algorithm can achieve remarkably high approximation accuracy.
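
The two-step structure (gather eigen-solves at sample parameters, then combine over the parameter) can be mimicked in a few lines. The dense eigensolver and polynomial interpolation below are deliberate simplifications of the contour-integration and combination machinery, applied to a toy parametric matrix of our own.

    import numpy as np

    def M(t):                          # toy parametric matrix (illustrative)
        return np.array([[2.0 + t, 1.0],
                         [t ** 2, -1.0 + 0.5 * t]])

    # Step 1: gather non-parametric eigen-solves at a few parameter samples
    # (a dense solver stands in for the contour-integration step).
    ts = np.linspace(0.0, 1.0, 5)
    eigs = np.array([np.sort(np.linalg.eigvals(M(t)).real) for t in ts])

    # Step 2: combine over the parameter: sorting aligns the two (real,
    # separated) branches here; each branch gets a cubic interpolant.
    coeffs = [np.polyfit(ts, eigs[:, j], deg=3) for j in range(eigs.shape[1])]

    t_new = 0.3
    print("interpolated:", [np.polyval(c, t_new) for c in coeffs])
    print("reference:   ", np.sort(np.linalg.eigvals(M(t_new)).real))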

Stochastic inverse problems are typically encountered when one wants to quantify the uncertainty affecting the inputs of computer models. They consist in estimating input distributions from noisy, observable outputs, and such problems are increasingly examined in Bayesian contexts where the targeted inputs are affected by stochastic uncertainties. In this regard, a stochastic input can be qualified as meaningful if it explains most of the output uncertainty. While such inverse problems are characterized by identifiability conditions, "signal-to-noise" constraints that formalize this meaningfulness should be accounted for within the definition of the model, prior to inference. This article investigates the possibility of forcing a solution to be meaningful in the context of parametric uncertainty quantification, through the tools of global sensitivity analysis and information theory (variance, entropy, Fisher information). Such forcings mainly take the form of constraints placed on the input covariance, and they can be made explicit by considering linear or linearizable models. Simulated experiments indicate that, when injected into the modeling process, these constraints can limit the influence of measurement or process noise on the estimation of the input distribution, and they offer hope for future extensions to a fully non-linear framework, for example through the use of linear Gaussian mixtures.
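
In a linear(ized) model the meaningfulness constraint can be made explicit; with our notation (not necessarily the article's), writing $Y = a^\top X + \varepsilon$ with input covariance $\Sigma_X$ and noise variance $\sigma_\varepsilon^2$, a signal-to-noise forcing takes the form

$$ \frac{\operatorname{Var}(a^\top X)}{\operatorname{Var}(Y)} \;=\; \frac{a^\top \Sigma_X a}{a^\top \Sigma_X a + \sigma_\varepsilon^2} \;\ge\; \rho, $$

i.e., a lower bound $\rho$ on the share of output variance explained by the stochastic input $X$, which is precisely a constraint on the input covariance to be imposed prior to inference.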
