
Pseudorandom quantum states (PRSs) and pseudorandom unitaries (PRUs) possess the dual nature of being efficiently constructible while appearing completely random to any efficient quantum algorithm. In this study, we establish fundamental bounds on pseudorandomness. We show that PRSs and PRUs exist only when the probability that an error occurs is negligible, ruling out their generation on noisy intermediate-scale and early fault-tolerant quantum computers. Additionally, we derive lower bounds on the imaginarity and coherence of PRSs and PRUs, rule out the existence of sparse or real PRUs, and show that PRUs are more difficult to generate than PRSs. Our work also establishes rigorous bounds on the efficiency of property testing, demonstrating the exponential complexity in distinguishing real quantum states from imaginary ones, in contrast to the efficient measurability of unitary imaginarity. Furthermore, we prove lower bounds on the testing of coherence. Lastly, we show that the transformation from a complex to a real model of quantum computation is inefficient, in contrast to the reverse process, which is efficient. Overall, our results establish fundamental limits on property testing and provide valuable insights into quantum pseudorandomness.

Related content

Riemannian submanifold optimization with momentum is computationally challenging because, to ensure that the iterates remain on the submanifold, we often need to solve difficult differential equations. Here, we simplify such difficulties for a class of sparse or structured symmetric positive-definite matrices with the affine-invariant metric. We do so by proposing a generalized version of the Riemannian normal coordinates that dynamically orthonormalizes the metric and locally converts the problem into an unconstrained problem in the Euclidean space. We use our approach to simplify existing approaches for structured covariances and develop matrix-inverse-free $2^\text{nd}$-order optimizers for deep learning with low precision by using only matrix multiplications. Code: //github.com/yorkerlin/StructuredNGD-DL
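For context, the kind of Riemannian update that makes such optimization expensive can be sketched for the affine-invariant metric on symmetric positive-definite (SPD) matrices. The sketch below is our own illustration of the standard exponential-map step, not the paper's generalized normal coordinates; the matrix sizes and step direction are arbitrary. Note that each step needs matrix square roots and inverses, which is exactly the cost the paper's matrix-inverse-free approach is designed to avoid.

```python
import numpy as np

def sym_funcm(A, f):
    """Apply a scalar function f to a symmetric matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(A)
    return (Q * f(w)) @ Q.T

def spd_exp(S, V):
    """Geodesic step from SPD matrix S along symmetric tangent V under the
    affine-invariant metric: Exp_S(V) = S^{1/2} expm(S^{-1/2} V S^{-1/2}) S^{1/2}."""
    S_half = sym_funcm(S, np.sqrt)
    S_ihalf = sym_funcm(S, lambda w: 1.0 / np.sqrt(w))
    inner = sym_funcm(S_ihalf @ V @ S_ihalf, np.exp)
    return S_half @ inner @ S_half

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
S = A @ A.T + 4 * np.eye(4)      # an SPD starting point
G = rng.standard_normal((4, 4))
V = -0.5 * (G + G.T)             # a symmetric "gradient" step direction
S_new = spd_exp(S, V)
print(np.linalg.eigvalsh(S_new).min())  # positive: the iterate stays SPD
```

The congruence structure guarantees the update never leaves the SPD manifold, but the two matrix-root computations per step are what a local orthonormalized coordinate system can remove.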

Neural network approaches to approximating the ground state of quantum Hamiltonians require the numerical solution of a highly nonlinear optimization problem. We introduce a statistical learning approach that makes the optimization trivial by using kernel methods. Our scheme is an approximate realization of the power method, where supervised learning is used to learn the next step of the power iteration. We show that the ground-state properties of arbitrary gapped quantum Hamiltonians can be reached with polynomial resources under the assumption that the supervised learning is efficient. Using kernel ridge regression, we provide numerical evidence that this learning assumption holds by applying our scheme to find the ground states of several prototypical interacting many-body quantum systems, in both one and two dimensions, demonstrating the flexibility of our approach.
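The mechanism can be sketched in a toy setting: regress each shifted power-iteration step (shift·I − H)|ψ⟩ from a random subset of amplitudes with kernel ridge regression, then predict on the full basis. This is our own minimal illustration, not the paper's implementation; the transverse-field Ising test Hamiltonian, the Gaussian kernel, and the subset fraction are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Hamiltonian: transverse-field Ising chain on n spins (dense, illustrative).
n, g = 6, 1.0
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def site_op(op, i):
    """Embed a single-site operator at site i of the n-spin chain."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

H = -sum(site_op(sz, i) @ site_op(sz, i + 1) for i in range(n - 1)) \
    - g * sum(site_op(sx, i) for i in range(n))

# Features: each basis state is a spin configuration in {-1, +1}^n.
d = 2 ** n
X = np.array([[1 - 2 * ((b >> k) & 1) for k in range(n)] for b in range(d)], float)

def krr_fit_predict(X_tr, y_tr, X_te, lam=1e-6, gamma=0.5):
    """Gaussian-kernel ridge regression: fit on a subset, predict everywhere."""
    def K(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq / n)
    alpha = np.linalg.solve(K(X_tr, X_tr) + lam * np.eye(len(X_tr)), y_tr)
    return K(X_te, X_tr) @ alpha

# Learned power method: each step of (shift*I - H) |psi> is learned from a
# random 60% of the amplitudes instead of being applied exactly.
shift = np.abs(H).sum(axis=1).max()   # Gershgorin bound, so shift*I - H >= 0
psi = rng.standard_normal(d)
psi /= np.linalg.norm(psi)
for _ in range(200):
    target = shift * psi - H @ psi
    tr = rng.choice(d, size=int(0.6 * d), replace=False)
    psi = krr_fit_predict(X[tr], target[tr], X)
    psi /= np.linalg.norm(psi)

E, E0 = psi @ H @ psi, np.linalg.eigvalsh(H).min()
print(E, E0)   # variational energy of the learned state vs. exact ground energy
```

Because the iterate is normalized, its energy is always a variational upper bound on the true ground energy, so the quality of the supervised step can be read off directly from the gap between the two printed numbers.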

Virtual reality (VR) sickness, also known as cybersickness, is a serious usability problem. Postural (or balance) instability theory has emerged as one of the primary hypotheses for the cause of VR sickness. In this paper, we conducted a two-week-long experiment to observe trends in users' balance learning and sickness tolerance under different experimental conditions and to analyze the potential interrelationship between them. The experimental results show that, aside from the obvious improvement in balance performance itself, accompanying balance training had a stronger effect in increasing tolerance to VR sickness than mere exposure to VR. In addition, training in VR was found to be more effective than using a 2D-based medium, especially for the transfer effect to other, non-training VR content.

The problem of reconstructing brain activity from electric potential measurements performed on the surface of a human head is not an easy task: not only because the solution of the related inverse problem is fundamentally ill-posed (not unique), but also because the methods used to construct a synthetic forward solution themselves contain many inaccuracies. One of these is the fact that the usual method of modelling primary currents in the human head via dipoles introduces at least two modelling errors: one from the singularity introduced by the dipole, and one from placing such dipoles near conductivity discontinuities at the active brain layer boundaries. In this article we observe how the removal of possible source locations from the surfaces of active brain layers affects the localisation accuracy of two inverse methods, sLORETA and Dipole Scan, at different signal-to-noise ratios (SNRs), when the H(div) source model is used. We also describe the finite element forward solver used to construct the synthetic EEG data that was fed to the inverse methods as input, in addition to the meshes that were used as the domains of the forward and inverse solvers. Our results suggest a slight general improvement in the localisation results, especially at lower noise levels. The applied inverse algorithm and the brain compartment under observation also affect the accuracy.

This article re-examines Lawvere's abstract, category-theoretic proof of the fixed-point theorem whose contrapositive is a `universal' diagonal argument. The main result is that the necessary axioms for both the fixed-point theorem and the diagonal argument can be stripped back further, to a semantic analogue of a weak substructural logic lacking weakening or exchange.
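For context, the classical statement whose hypotheses the article weakens can be sketched as follows (in the usual cartesian closed setting; the article's contribution is showing how much of this structure can be dropped):

```latex
% Lawvere's fixed-point theorem, classical cartesian closed version.
\begin{theorem}[Lawvere]
Let $\mathcal{C}$ be cartesian closed and let $\varphi\colon A \to B^A$ be
point-surjective, i.e.\ every $g\colon 1 \to B^A$ factors as
$\varphi \circ a$ for some point $a\colon 1 \to A$. Then every
$f\colon B \to B$ has a fixed point.
\end{theorem}
\begin{proof}[Proof sketch]
Define $q = f \circ \mathrm{ev} \circ (\varphi \times \mathrm{id}_A) \circ
\Delta\colon A \to B$, so informally $q(a) = f(\varphi(a)(a))$. By
point-surjectivity the transpose of $q$ equals $\varphi \circ a_0$ for some
point $a_0$; evaluating both descriptions of $q$ at $a_0$ gives
$\varphi(a_0)(a_0) = f\bigl(\varphi(a_0)(a_0)\bigr),$
exhibiting a fixed point of $f$. The contrapositive is the `universal'
diagonal argument: if some $f\colon B \to B$ has no fixed point, no
$\varphi\colon A \to B^A$ is point-surjective.
\end{proof}
```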

Scientific optical 3D modeling requires the possibility to implement highly flexible and customizable mathematical models as well as high computing power. However, established ray tracing software for optical design and modeling purposes often has limitations in terms of access to underlying mathematical models and the possibility of accelerating the mostly CPU-based computation. To address these limitations, we propose the use of NVIDIA's OptiX Ray Tracing Engine as a highly flexible and high-performing alternative. OptiX offers a highly customizable ray tracing framework with onboard GPU support for parallel computing, as well as access to optimized ray tracing algorithms for accelerated computation. To demonstrate the capabilities of our approach, a realistic focus variation instrument is modeled, describing optical instrument components (light sources, lenses, detector, etc.) as well as the measuring sample surface mathematically or as meshed files. Using this focus variation instrument model, exemplary virtual measurements of arbitrary and standardized sample surfaces are carried out, generating image stacks of more than 100 images and tracing more than $10^9$ light rays per image. The performance and accuracy of the simulations are qualitatively evaluated, and virtually generated detector images are compared with images acquired by a respective physical measuring device.

Invariance against rotations of 3D objects is an important property in analyzing 3D point set data. Conventional 3D point set DNNs with rotation invariance typically obtain accurate 3D shape features via supervised learning, using labeled 3D point sets as training samples. However, due to the rapid increase in 3D point set data and the high cost of labeling, a framework to learn rotation-invariant 3D shape features from numerous unlabeled 3D point sets is required. This paper proposes a novel self-supervised learning framework for acquiring accurate and rotation-invariant 3D point set features at the object level. Our proposed lightweight DNN architecture decomposes an input 3D point set into multiple global-scale regions, called tokens, that preserve the spatial layout of the partial shapes composing the 3D object. We employ a self-attention mechanism to refine the tokens and aggregate them into an expressive rotation-invariant feature per 3D point set. Our DNN is effectively trained by using pseudo-labels generated by a self-distillation framework. To facilitate the learning of accurate features, we propose combining multi-crop and cut-mix data augmentation techniques to diversify 3D point sets for training. Through a comprehensive evaluation, we empirically demonstrate that (1) existing rotation-invariant DNN architectures designed for supervised learning do not necessarily learn accurate 3D shape features under a self-supervised learning scenario, and (2) our proposed algorithm learns rotation-invariant 3D point set features that are more accurate than those learned by existing algorithms. Code will be available at //github.com/takahikof/RIPT_SDMM

The curse of dimensionality (CoD) taxes computational resources heavily, with cost growing exponentially as the dimension increases. This poses great challenges in solving high-dimensional PDEs, as Richard Bellman first pointed out over 60 years ago. While there has been some recent success in numerically solving partial differential equations (PDEs) in high dimensions, such computations are prohibitively expensive, and true scaling of general nonlinear PDEs to high dimensions has never been achieved. In this paper, we develop a new method of scaling up physics-informed neural networks (PINNs) to solve arbitrary high-dimensional PDEs. The new method, called Stochastic Dimension Gradient Descent (SDGD), decomposes the gradient of the PDE loss into pieces corresponding to different dimensions and randomly samples a subset of these dimensional pieces in each iteration of training PINNs. We theoretically prove the convergence guarantee and other desired properties of the proposed method. We experimentally demonstrate that the proposed method allows us to solve many notoriously hard high-dimensional PDEs, including the Hamilton-Jacobi-Bellman (HJB) and the Schr\"{o}dinger equations, in thousands of dimensions very fast on a single GPU using the PINNs mesh-free approach. For instance, we solve nontrivial nonlinear PDEs (one HJB equation and one Black-Scholes equation) in 100,000 dimensions in 6 hours on a single GPU using SDGD with PINNs. Since SDGD is a general training methodology for PINNs, it can be applied to any current and future variant of PINNs to scale them up for arbitrary high-dimensional PDEs.
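The dimension-sampling idea can be illustrated on the Laplacian term of a PDE residual: sum the second derivatives over a random subset of k out of d dimensions and rescale by d/k, giving an unbiased estimator of the full Laplacian at a fraction of the cost. This is our own toy illustration, not the paper's PINN implementation; the test function is chosen so the exact Laplacian is known, and finite differences stand in for automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: u(x) = sum_i sin(x_i) in d dimensions, so Laplacian(u)(x) = -u(x).
d, k = 256, 64                  # total dimensions, dimensions sampled per step

def u(x):
    return np.sin(x).sum()

def second_partial(x, i, h=1e-4):
    """Central finite difference for d^2 u / dx_i^2 (stand-in for autodiff)."""
    e = np.zeros(d); e[i] = h
    return (u(x + e) - 2.0 * u(x) + u(x - e)) / h ** 2

x = rng.standard_normal(d)
lap_true = -u(x)                # exact Laplacian of this test function at x

def sampled_laplacian():
    """SDGD-style estimator: k of the d dimensional pieces, rescaled by d/k."""
    idx = rng.choice(d, size=k, replace=False)
    return (d / k) * sum(second_partial(x, i) for i in idx)

# Each draw touches only k second derivatives; averaged over many draws the
# estimator agrees with the full d-dimensional Laplacian.
est = np.mean([sampled_laplacian() for _ in range(500)])
print(est, lap_true)
```

The same decomposition applies to the gradient of the loss with respect to the network parameters, which is how the cost per training iteration is decoupled from the PDE dimension.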

Spiking Neural Networks (SNNs) are characterised by their unique temporal dynamics, but the properties and advantages of such computations are still not well understood. To provide answers, in this work we demonstrate how spiking neurons can enable temporal feature extraction in feed-forward neural networks without the need for recurrent synapses, showing how their bio-inspired computing principles can be successfully exploited beyond energy-efficiency gains and evidencing their differences with respect to conventional neurons. We do so by proposing a new task, DVS-Gesture-Chain (DVS-GC), which makes it possible, for the first time, to evaluate the perception of temporal dependencies in a real event-based action recognition dataset. Our study shows that the widely used DVS Gesture benchmark can be solved by networks without temporal feature extraction, unlike the new DVS-GC, which demands an understanding of the ordering of the events. Furthermore, this setup allowed us to unveil the role of the leakage rate in spiking neurons for temporal processing tasks and demonstrated the benefits of "hard reset" mechanisms. Additionally, we show how time-dependent weights and normalization can lead to understanding order by means of temporal attention.

The idea of the restricted mean has been used to establish a significantly improved version of Markov's inequality that does not require any new assumptions. The result immediately extends to Chebyshev's inequality and Chernoff's bound. The improved Markov inequality yields a bound that is hundreds or thousands of times more accurate than the original Markov bound for high quantiles in the most prevalent and diverse situations. The Markov inequality retains the benefit of being model-independent, while the long-standing issue of its imprecision is resolved. Practically speaking, avoiding model risk is decisive when multiple competing models are present in a real-world situation.
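The flavour of such a refinement can be seen in the standard restricted-mean sharpening of Markov's inequality: for nonnegative X and threshold a, subtracting the mean restricted to {X < a} gives P(X >= a) <= (E[X] - E[X·1{X<a}]) / a <= E[X]/a. We stress this sketch is a textbook refinement used for illustration, not necessarily the paper's exact bound; the exponential distribution and the threshold are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)  # nonnegative sample, mean ~ 1
a = 10.0                                        # a high quantile: P(X >= 10) = e^-10

p_true = np.mean(x >= a)
markov = x.mean() / a                           # classical Markov bound: E[X]/a
# Restricted-mean refinement: since E[X 1{X>=a}] >= a P(X >= a),
#   P(X >= a) <= (E[X] - E[X 1{X<a}]) / a
restricted = (x.mean() - np.mean(np.where(x < a, x, 0.0))) / a

print(p_true, markov, restricted)
```

On this example the classical bound is roughly 0.1 while the restricted-mean bound lands within an order of magnitude of the true tail probability (about 4.5e-5), a gap of roughly three orders of magnitude that matches the "hundreds or thousands of times" claim for high quantiles.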
