
We study the sample complexity of two prototypical tasks: quantum purity estimation and quantum inner product estimation. In purity estimation, the goal is to estimate $\mathrm{tr}(\rho^2)$ of an unknown $n$-qubit quantum state $\rho$ to additive error $\epsilon$. In quantum inner product estimation, Alice and Bob are to estimate $\mathrm{tr}(\rho\sigma)$ to additive error $\epsilon$ given copies of unknown quantum states $\rho$ and $\sigma$, using classical communication and restricted quantum communication. In this paper, we show a strong connection between the sample complexity of purity estimation with bounded quantum memory and that of inner product estimation with bounded quantum communication and unentangled measurements. We propose a protocol that solves quantum inner product estimation with $k$-qubit one-way quantum communication and unentangled local measurements using $O(\mathrm{median}\{1/\epsilon^2,\,2^{n/2}/\epsilon,\,2^{n-k}/\epsilon^2\})$ copies of $\rho$ and $\sigma$. Our protocol can be modified to estimate the purity of an unknown quantum state using $k$ qubits of quantum memory with the same complexity. We prove that any protocol with $k$-qubit quantum memory that estimates purity to error $\epsilon$ requires $\Omega(\mathrm{median}\{1/\epsilon^2,\,2^{n/2}/\sqrt{\epsilon},\,2^{n-k}/\epsilon^2\})$ copies of $\rho$. This implies the same lower bound for quantum inner product estimation with one-way $k$-qubit quantum communication, classical communication, and unentangled local measurements. For purity estimation, we further improve the lower bound to $\Omega(\max\{1/\epsilon^2,\,2^{n/2}/\epsilon\})$ for any protocol using an identical single-copy projection-valued measurement. Additionally, we investigate a decisional variant of quantum distributed inner product estimation without quantum communication for mixed states and provide a lower bound on the sample complexity.
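As a point of reference for the $1/\epsilon^2$ term in these bounds, the sketch below simulates the standard swap test, whose acceptance probability is $(1+\mathrm{tr}(\rho\sigma))/2$; this is a textbook baseline, not the paper's bounded-memory/bounded-communication protocol, and the state dimension and sampling routine are illustrative assumptions.

```python
# A minimal sketch: estimating tr(rho @ sigma) from simulated swap-test
# outcomes, which accept with probability (1 + tr(rho @ sigma)) / 2.
# This illustrates only the 1/eps^2 term; the memory/communication
# trade-off requires the paper's construction.
import numpy as np

rng = np.random.default_rng(0)

def random_state(dim):
    """Haar-random pure state as a density matrix."""
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def swap_test_estimate(rho, sigma, num_copies):
    """Estimate tr(rho @ sigma) from num_copies simulated swap tests."""
    p_accept = (1.0 + np.trace(rho @ sigma).real) / 2.0
    accepts = rng.random(num_copies) < p_accept
    return 2.0 * accepts.mean() - 1.0

n = 3                               # number of qubits (assumed for illustration)
rho, sigma = random_state(2**n), random_state(2**n)
est = swap_test_estimate(rho, sigma, num_copies=100_000)
print(est, np.trace(rho @ sigma).real)
```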

Related content

Confidence sequences are sequences of confidence sets that adapt to incoming data while maintaining validity. Recent advances have introduced an algorithmic formulation for constructing some of the tightest confidence sequences for bounded real random variables. These approaches use a coin-betting framework, where a player sequentially bets on differences between potential mean values and observed data. This letter establishes that such a coin-betting formulation is optimal among all possible algorithmic frameworks for constructing confidence sequences that build on e-variables and sequential hypothesis testing.
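A minimal sketch of a coin-betting confidence sequence in this spirit, assuming observations in $[0,1]$ and a simple fixed fractional bet; the letter's analysis concerns the optimality of strategies within this framework, which the naive bet below does not attain. The set of means whose wealth stays below $1/\alpha$ is a valid $(1-\alpha)$ confidence sequence by Ville's inequality, since each wealth process is a nonnegative martingale under the corresponding mean.

```python
# Two wealth processes per candidate mean m: one betting the true mean
# exceeds m, one betting it is below m. Their average is a nonnegative
# martingale when m is the true mean, so {m : avg wealth < 1/alpha} is a
# (1 - alpha) confidence sequence. The fixed bet lam = 0.5 keeps every
# wealth factor in [0.5, 1.5], hence nonnegative.
import numpy as np

def betting_cs(xs, alpha=0.05, grid_size=1000, lam=0.5):
    grid = np.linspace(1e-3, 1 - 1e-3, grid_size)  # candidate means m
    w_up = np.ones_like(grid)
    w_dn = np.ones_like(grid)
    lo, hi = [], []
    for x in xs:
        w_up *= 1 + lam * (x - grid)
        w_dn *= 1 - lam * (x - grid)
        keep = 0.5 * (w_up + w_dn) < 1 / alpha     # m's not yet rejected
        lo.append(grid[keep].min() if keep.any() else np.nan)
        hi.append(grid[keep].max() if keep.any() else np.nan)
    return np.array(lo), np.array(hi)

rng = np.random.default_rng(1)
xs = rng.beta(4, 6, size=2000)                     # true mean 0.4
lo, hi = betting_cs(xs)
print(lo[-1], hi[-1])                              # interval around 0.4
```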

Causality plays an important role in understanding intelligent behavior, and there is a wealth of literature on mathematical models for causality, most of which is focused on causal graphs. Causal graphs are a powerful tool for a wide range of applications, in particular when the relevant variables are known and at the same level of abstraction. However, the given variables can also be unstructured data, like pixels of an image. Meanwhile, the causal variables, such as the positions of objects in the image, can be arbitrary deterministic functions of the given variables. Moreover, the causal variables may form a hierarchy of abstractions, in which the macro-level variables are deterministic functions of the micro-level variables. Causal graphs are limited when it comes to modeling this kind of situation. In the presence of deterministic relationships there is generally no causal graph that satisfies both the Markov condition and the faithfulness condition. We introduce factored space models as an alternative to causal graphs which naturally represent both probabilistic and deterministic relationships at all levels of abstraction. Moreover, we introduce structural independence and establish that it is equivalent to statistical independence in every distribution that factorizes over the factored space. This theorem generalizes the classical soundness and completeness theorem for d-separation.
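To see concretely why deterministic relationships are problematic, consider a toy example (ours, for illustration): let $Z = X$ be a deterministic copy and let $Y$ depend on $X$ through noise. Then both $X \perp Y \mid Z$ and $Z \perp Y \mid X$ hold while $X$ and $Y$ are dependent, and no DAG over $\{X, Z, Y\}$ has a d-separation structure matching exactly these independencies. The sketch below checks this empirically with plug-in (conditional) mutual information.

```python
# Toy demonstration: with Z = X deterministic, conditioning on either
# variable renders the other independent of Y, yet all pairs are
# marginally dependent -- the Markov + faithfulness conflict.
import numpy as np

rng = np.random.default_rng(2)
x = rng.integers(0, 2, size=200_000)
z = x.copy()                           # deterministic macro/micro copy
y = x ^ (rng.random(len(x)) < 0.1)     # Y depends on X through noisy XOR

def mi(a, b):
    """Plug-in mutual information (nats) of two binary arrays."""
    joint = np.histogram2d(a, b, bins=2)[0] / len(a)
    pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return (joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])).sum()

def cmi(a, b, c):
    """Conditional mutual information I(A;B|C) for binary C."""
    return sum(mi(a[c == v], b[c == v]) * (c == v).mean() for v in (0, 1))

print("I(X;Y)   =", mi(x, y))      # clearly positive
print("I(X;Y|Z) =", cmi(x, y, z))  # ~0: independence induced by Z = X
print("I(Z;Y|X) =", cmi(z, y, x))  # ~0
```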

The thermal conductivity of Functionally Graded Materials (FGMs) can be efficiently designed through topology optimization to obtain thermal meta-structures that actively steer the heat flow. Compared to conventional analytical design methods, topology optimization allows handling arbitrary geometries, boundary conditions and design requirements; and producing alternate designs for non-unique problems. Additionally, as far as the design of meta-structures is concerned, topology optimization does not need intuition-based coordinate transformation or the form invariance of governing equations, as in the case of transformation thermotics. We explore isogeometric density-based topology optimization in the continuous setting, which perfectly aligns with FGMs. In this formulation, the density field, geometry and solution of the governing equations are parameterized using non-uniform rational basis spline entities. Accordingly, the heat conduction problem is solved using Isogeometric Analysis. We design various 2D & 3D thermal meta-structures under different design scenarios to showcase the effectiveness and versatility of our approach. We also design thermal meta-structures based on architected cellular materials, a special class of FGMs, using their empirical material laws calculated via numerical homogenization.
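For context, density-based topology optimization couples material properties to the density field through an interpolation law. The sketch below shows a common SIMP-style power law and its derivative (used in adjoint sensitivities) as an illustrative assumption; the paper itself parameterizes the density with NURBS entities and solves the conduction problem with Isogeometric Analysis, which this snippet does not reproduce.

```python
# SIMP-style density-to-conductivity interpolation for conduction
# topology optimization: k(rho) = k_min + rho^p * (k_max - k_min).
# The penalization exponent p > 1 pushes densities toward 0 or 1.
import numpy as np

def conductivity(rho, k_min=1e-3, k_max=1.0, p=3.0):
    """Interpolated conductivity for densities rho in [0, 1]."""
    return k_min + rho**p * (k_max - k_min)

def conductivity_grad(rho, k_min=1e-3, k_max=1.0, p=3.0):
    """d k / d rho, used in adjoint-based sensitivity analysis."""
    return p * rho**(p - 1) * (k_max - k_min)

rho = np.linspace(0, 1, 5)          # sample densities
print(conductivity(rho))
print(conductivity_grad(rho))
```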

We study a family of reachability problems under waiting-time restrictions in temporal and vertex-colored temporal graphs. Given a temporal graph and a set of source vertices, we find the set of vertices that are reachable from a source via a time-respecting path, where the difference in timestamps between consecutive edges is at most a resting time. Given a vertex-colored temporal graph and a multiset query of colors, we find the set of vertices reachable from a source via a time-respecting path such that the vertex colors of the path agree with the multiset query and the difference in timestamps between consecutive edges is at most a resting time. These kinds of problems have applications in understanding the spread of a disease in a network, tracing contacts in epidemic outbreaks, finding signaling pathways in the brain network, and recommending tours for tourists, among others. We present an algebraic algorithmic framework based on constrained multilinear sieving for solving the restless reachability problems we propose. In particular, parameterized by the length $k$ of a path sought, we show that the proposed problems can be solved in $O(2^k k m \Delta)$ time and $O(n \Delta)$ space, where $n$ is the number of vertices, $m$ the number of edges, and $\Delta$ the maximum resting time of an input temporal graph. In addition, we prove that our algorithms for the restless reachability problems in vertex-colored temporal graphs are optimal under plausible complexity-theoretic assumptions. Finally, with an open-source implementation, we demonstrate that our algorithm scales to large graphs with up to one billion temporal edges, despite the problems being NP-hard. Specifically, we present extensive experiments to evaluate our scalability claims both on synthetic and real-world graphs. Our implementation is efficiently engineered and highly optimized.
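For intuition, a minimal baseline (ours, not the paper's algebraic sieve) is sketched below: restless reachability via time-respecting walks, where successive edge timestamps may differ by at most a resting time delta. The paper's sieving framework additionally handles the path- and color-constrained variants, which are NP-hard in general.

```python
# A naive sweep over temporal edges in timestamp order. An edge (u, v, t)
# can extend a walk if u is a source (sources may depart at any time) or
# some walk reached u at a time t' with t - t' <= delta. Keeping only the
# latest arrival per vertex suffices for plain reachability via walks.
def restless_reachable(edges, sources, delta):
    """edges: iterable of (u, v, t) directed temporal edges."""
    sources = set(sources)
    arrival = {}                                   # latest known arrival time
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        if u in sources or (u in arrival and t - arrival[u] <= delta):
            arrival[v] = t                         # t is the latest time seen
    return sources | set(arrival)

edges = [("s", "a", 1), ("a", "b", 3), ("b", "c", 10)]
print(restless_reachable(edges, {"s"}, delta=2))   # c unreachable: 10 - 3 > 2
```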

We study the identification of binary choice models with fixed effects. We provide a condition called sign saturation and show that this condition is sufficient for the identification of the model. In particular, we can guarantee identification even with bounded regressors. We also show that without this condition, the model is not identified unless the error distribution belongs to a small class. The same sign saturation condition is also essential for identifying the sign of treatment effects. A test is provided to check the sign saturation condition and can be implemented using existing algorithms for the maximum score estimator.
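Since the proposed test can be implemented with existing maximum score machinery, a minimal sketch of Manski's maximum score objective is given below; the data-generating process and grid search are illustrative assumptions, not the authors' implementation or their test.

```python
# Maximum score: choose beta maximizing sum_i (2*y_i - 1) * sign(x_i' beta).
# Scale is not identified, so candidates are restricted to the unit circle.
import numpy as np

rng = np.random.default_rng(3)

def max_score(y, X, betas):
    scores = [((2 * y - 1) * np.sign(X @ b)).sum() for b in betas]
    return betas[int(np.argmax(scores))]

X = rng.normal(size=(5000, 2))
beta_true = np.array([1.0, -0.5])
y = (X @ beta_true + rng.logistic(size=5000) > 0).astype(int)

angles = np.linspace(0, 2 * np.pi, 720, endpoint=False)
candidates = np.stack([np.cos(angles), np.sin(angles)], axis=1)
print(max_score(y, X, candidates), beta_true / np.linalg.norm(beta_true))
```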

Several hypothesis testing methods have been proposed to validate the assumption of isotropy in spatial point patterns. A majority of these methods are characterised by an unknown distribution of the test statistic under the null hypothesis of isotropy. Parametric approaches to approximating the distribution involve simulation of patterns from a user-specified isotropic model. Alternatively, nonparametric replicates of the test statistic under isotropy can be used to waive the need for specifying a model. In this paper, we first develop a general framework which allows for the integration of a selected nonparametric replication method into isotropy testing. We then conduct a large simulation study comprising application-like scenarios to assess the performance of tests with different parametric and nonparametric replication methods. In particular, we explore distortions in test size and power caused by model misspecification, and demonstrate the advantages of nonparametric replication in such scenarios.
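One simple nonparametric replication scheme of the kind the framework accommodates is sketched below (our illustrative choice, not necessarily one studied in the paper): randomly rotating the observed pattern in a disc window, for which rotation is a symmetry, and recomputing the test statistic. Under the null of isotropy the rotated replicates are exchangeable with the data, so the Monte Carlo p-value is valid; the statistic (spread of pair-orientation counts across angular sectors) is likewise illustrative.

```python
# Monte Carlo isotropy test with rotation-based nonparametric replicates.
import numpy as np

rng = np.random.default_rng(4)

def pair_orientation_stat(pts, n_sectors=8):
    """Spread of pairwise-orientation counts across angular sectors."""
    d = pts[None, :, :] - pts[:, None, :]
    iu = np.triu_indices(len(pts), k=1)
    ang = np.arctan2(d[..., 1][iu], d[..., 0][iu]) % np.pi
    counts = np.histogram(ang, bins=n_sectors, range=(0, np.pi))[0]
    return counts.max() - counts.min()

def rotate(pts, theta):
    c, s = np.cos(theta), np.sin(theta)
    return pts @ np.array([[c, -s], [s, c]])

# pattern in the unit disc (isotropic here, so the test should accept)
pts = rng.uniform(-1, 1, size=(500, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 1]

obs = pair_orientation_stat(pts)
reps = [pair_orientation_stat(rotate(pts, rng.uniform(0, np.pi)))
        for _ in range(99)]
p_value = (1 + sum(r >= obs for r in reps)) / (1 + len(reps))
print(p_value)
```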

We present an efficient algorithm for the application of sequences of planar rotations to a matrix. Applying such sequences efficiently is important in many numerical linear algebra algorithms for eigenvalues. Our algorithm is novel in three main ways. First, we introduce a new kernel that is optimized for register reuse in a novel way. Second, we introduce a blocking and packing scheme that improves the cache efficiency of the algorithm. Finally, we thoroughly analyze the memory operations of the algorithm which leads to important theoretical insights and makes it easier to select good parameters. Numerical experiments show that our algorithm outperforms the state-of-the-art and achieves a flop rate close to the theoretical peak on modern hardware.
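For concreteness, a naive reference loop for the operation the paper accelerates (applying a sequence of plane rotations to adjacent rows of a matrix) is sketched below; the paper's optimized kernels replace exactly this loop with register- and cache-aware variants.

```python
# Apply a sequence of Givens rotations, each acting on rows i and i+1:
# (r_i, r_{i+1}) -> (c*r_i + s*r_{i+1}, -s*r_i + c*r_{i+1}).
import numpy as np

def apply_rotations(A, rotations):
    """rotations: list of (i, c, s), applied in order to rows i, i+1 of A."""
    for i, c, s in rotations:
        ri, rj = A[i].copy(), A[i + 1]
        A[i] = c * ri + s * rj
        A[i + 1] = -s * ri + c * rj
    return A

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 6))
theta = 0.3
rots = [(0, np.cos(theta), np.sin(theta)), (2, np.cos(theta), np.sin(theta))]
B = apply_rotations(A.copy(), rots)
# rotations are orthogonal: the joint norm of each affected row pair is kept
print(np.allclose(np.linalg.norm(A[[0, 1]]), np.linalg.norm(B[[0, 1]])))
```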

Block majorization-minimization (BMM) is a simple iterative algorithm for constrained nonconvex optimization that sequentially minimizes majorizing surrogates of the objective function in each block while the others are held fixed. BMM entails a large class of optimization algorithms such as block coordinate descent and its proximal-point variant, expectation-maximization, and block projected gradient descent. We first establish that for general constrained nonsmooth nonconvex optimization, BMM with $\rho$-strongly convex and $L_g$-smooth surrogates can produce an $\epsilon$-approximate first-order optimal point within $\widetilde{O}((1+L_g+\rho^{-1})\epsilon^{-2})$ iterations and asymptotically converges to the set of first-order optimal points. Next, we show that BMM combined with trust-region methods with diminishing radius has an improved complexity of $\widetilde{O}((1+L_g) \epsilon^{-2})$, independent of the inverse strong convexity parameter $\rho^{-1}$, allowing improved theoretical and practical performance with `flat' surrogates. Our results hold robustly even when the convex sub-problems are solved inexactly, as long as the optimality gaps are summable. Central to our analysis is a novel continuous first-order optimality measure, by which we bound the worst-case sub-optimality in each iteration by the first-order improvement the algorithm makes. We apply our general framework to obtain new results on various algorithms such as the celebrated multiplicative update algorithm for nonnegative matrix factorization by Lee and Seung, regularized nonnegative tensor decomposition, and the classical block projected gradient descent algorithm. Lastly, we numerically demonstrate that the additional use of diminishing radius can improve the convergence rate of BMM in many instances.
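As a concrete instance, below is a minimal sketch of the Lee-Seung multiplicative updates for nonnegative matrix factorization, one of the algorithms the abstract revisits; each block update can be read as minimizing a majorizing surrogate of the Frobenius objective with the other block held fixed. Dimensions and iteration counts are illustrative assumptions.

```python
# Lee-Seung multiplicative updates for min ||V - W H||_F^2, W, H >= 0.
# Each update is a BMM step: minimize a majorizer in one block.
import numpy as np

rng = np.random.default_rng(6)

def nmf_multiplicative(V, rank, n_iters=200, eps=1e-12):
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # majorize-minimize in H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # majorize-minimize in W
    return W, H

V = rng.random((30, 20))
W, H = nmf_multiplicative(V, rank=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative residual
```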

There exist multiple regression applications in engineering, industry and medicine where the outcomes follow an adaptive experimental design in which the next measurement depends on the previous observations, so that the observations are not conditionally independent given the covariates. In the existing literature on such adaptive designs, results asserting asymptotic normality of the maximum likelihood estimator require regularity conditions involving the second or third derivatives of the log-likelihood. Here we instead extend the theory of differentiability in quadratic mean (DQM) to the setting of adaptive designs, which requires strictly fewer regularity assumptions than the classical theory. In doing so, we discover a new DQM assumption, which we call summable differentiability in quadratic mean (S-DQM). As applications, we first verify asymptotic normality for a classical adaptive design, namely the Bruceton design, before moving on to a more complex problem, a Markovian version of the Langlie design.
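A hedged sketch of the Bruceton (up-and-down) design is given below to make the adaptivity concrete: the next stimulus level moves one step down after a response and one step up after a non-response, so observations are not conditionally independent given the levels. The step size and probit response curve are illustrative assumptions.

```python
# Simulate a Bruceton (up-and-down) adaptive design.
import numpy as np
from math import erf

rng = np.random.default_rng(7)

def prob_response(x, mu=0.0, sigma=1.0):
    """Probit response curve: P(response at stimulus level x)."""
    return 0.5 * (1 + erf((x - mu) / (sigma * np.sqrt(2))))

def bruceton(n_trials=50, x0=1.0, step=0.5):
    levels, outcomes = [], []
    x = x0
    for _ in range(n_trials):
        y = rng.random() < prob_response(x)
        levels.append(x)
        outcomes.append(y)
        x = x - step if y else x + step   # down after response, up otherwise
    return np.array(levels), np.array(outcomes)

levels, outcomes = bruceton()
print(levels[:10], outcomes[:10].astype(int))
```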

We study the numerical approximation of SDEs with singular drifts (including distributions) driven by a fractional Brownian motion. Under the Catellier-Gubinelli condition, which requires the regularity of the drift to be strictly greater than $1-1/(2H)$, we obtain an explicit rate of convergence of a tamed Euler scheme towards the SDE, extending results for bounded drifts. In the critical case, when the regularity of the drift equals $1-1/(2H)$, we derive a non-explicit rate. As a byproduct, strong well-posedness for these equations is recovered. The proofs use new regularising properties of discrete-time fBm and a new critical Gr\"onwall-type lemma. We present examples and simulations.
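A hedged sketch of a tamed Euler step driven by fractional Brownian motion follows; the taming below is the generic $b/(1+h|b|)$ truncation rather than the paper's scheme for distributional drifts, the fBm is sampled via a Cholesky factorization of its covariance (fine for small grids), and the drift is an illustrative non-Lipschitz example.

```python
# Tamed Euler scheme driven by fBm with Hurst parameter H.
import numpy as np

rng = np.random.default_rng(8)

def fbm_increments(n_steps, T=1.0, H=0.7):
    """Sample fBm increments on a uniform grid via the covariance matrix
    Cov(B_s, B_t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
    t = np.linspace(T / n_steps, T, n_steps)
    cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                 - np.abs(t[:, None] - t[None, :])**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
    path = L @ rng.normal(size=n_steps)
    return np.diff(np.concatenate([[0.0], path]))

def tamed_euler(b, x0, n_steps, T=1.0, H=0.7):
    h = T / n_steps
    x = x0
    dB = fbm_increments(n_steps, T, H)
    for k in range(n_steps):
        drift = b(x)
        x = x + h * drift / (1 + h * abs(drift)) + dB[k]   # tamed drift step
    return x

b = lambda x: -np.sign(x) * np.sqrt(abs(x))   # irregular, non-Lipschitz drift
print(tamed_euler(b, x0=1.0, n_steps=1000))
```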
