
We employ random matrix theory to establish consistency of generalized cross validation (GCV) for estimating prediction risks of sketched ridge regression ensembles, enabling efficient and consistent tuning of regularization and sketching parameters. Our results hold for a broad class of asymptotically free sketches under very mild data assumptions. For squared prediction risk, we provide a decomposition into an unsketched equivalent implicit ridge bias and a sketching-based variance, and prove that the risk can be globally optimized by only tuning sketch size in infinite ensembles. For general subquadratic prediction risk functionals, we extend GCV to construct consistent risk estimators, and thereby obtain distributional convergence of the GCV-corrected predictions in Wasserstein-2 metric. This in particular allows construction of prediction intervals with asymptotically correct coverage conditional on the training data. We also propose an "ensemble trick" whereby the risk for unsketched ridge regression can be efficiently estimated via GCV using small sketched ridge ensembles. We empirically validate our theoretical results using both synthetic and real large-scale datasets with practical sketches including CountSketch and subsampled randomized discrete cosine transforms.
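
As a point of reference, the sketch below shows the standard GCV criterion for plain (unsketched) ridge regression, which the paper's sketched-ensemble estimator generalizes; the ridge parameterization with penalty $n\lambda$ and all variable names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def ridge_gcv(X, y, lams):
    """Standard (unsketched) generalized cross-validation for ridge regression.

    Simplified illustration of the GCV functional the abstract builds on; the
    sketched-ensemble version replaces the ridge smoother with the ensemble of
    sketched ridge smoothers.
    """
    n, p = X.shape
    # The SVD gives the smoother trace and residuals cheaply for every lambda.
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Uty = U.T @ y
    scores = []
    for lam in lams:
        shrink = s**2 / (s**2 + n * lam)        # eigenvalues of the ridge smoother
        y_hat = U @ (shrink * Uty)
        df = shrink.sum()                        # effective degrees of freedom tr(S)
        gcv = np.mean((y - y_hat) ** 2) / (1.0 - df / n) ** 2
        scores.append(gcv)
    return np.array(scores)

# Example: pick the lambda minimizing GCV on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = X @ rng.standard_normal(50) + rng.standard_normal(200)
lams = np.logspace(-3, 1, 20)
best_lam = lams[np.argmin(ridge_gcv(X, y, lams))]
```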

Related content

Rate-splitting multiple access (RSMA) has proven to be an effective communication scheme for 5G and beyond, especially in vehicular scenarios. However, RSMA requires complicated iterative algorithms for proper resource allocation, which cannot fulfill the stringent latency requirements of resource-constrained vehicles. Although data-driven approaches can alleviate this issue, they suffer from poor generalizability and scarce training data. In this paper, we propose a fractional programming (FP) based deep unfolding (DU) approach to the resource allocation problem of weighted sum-rate optimization in RSMA. By carefully designing the penalty function, we couple the variable updates with the projected gradient descent (PGD) algorithm. Following the structure of PGD, we embed a few learnable parameters in each layer of the DU network. Through extensive simulations, we show that the proposed model-based neural network attains performance similar to the optimal results given by the traditional algorithm, but with much lower computational complexity, less training data, and higher resilience to test-set and out-of-distribution (OOD) data.
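
To make the unfolding idea concrete, here is a minimal, hedged sketch of a deep-unfolded PGD network in PyTorch: each layer is one projected gradient step with its own learnable step size. The quadratic objective and box projection below are illustrative stand-ins for the paper's weighted sum-rate problem and its penalty-based coupling, which the abstract does not spell out.

```python
import torch
import torch.nn as nn

class UnfoldedPGD(nn.Module):
    """Toy deep-unfolded projected gradient descent.

    Each 'layer' is one PGD iteration with its own learnable step size, mirroring
    the idea of embedding a few trainable parameters per unfolded layer. The
    quadratic objective with a box constraint (standing in for power limits) is an
    illustrative stand-in, not the paper's weighted sum-rate problem.
    """
    def __init__(self, num_layers: int, p_max: float = 1.0):
        super().__init__()
        self.steps = nn.Parameter(torch.full((num_layers,), 0.1))  # learnable step sizes
        self.p_max = p_max

    def forward(self, A: torch.Tensor, b: torch.Tensor, x0: torch.Tensor) -> torch.Tensor:
        x = x0
        for step in self.steps:
            grad = A @ x - b                 # gradient of 0.5 x'Ax - b'x
            x = x - step * grad              # gradient step with learned step size
            x = x.clamp(0.0, self.p_max)     # projection onto the feasible box
        return x

# Example usage with a random positive-definite quadratic.
torch.manual_seed(0)
A = torch.eye(4) + 0.1 * torch.randn(4, 4)
A = A @ A.T
b = torch.rand(4)
model = UnfoldedPGD(num_layers=5)
x = model(A, b, torch.zeros(4))
```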

Solutions to the governing partial differential equations obtained from a discrete numerical scheme can have significant errors, especially near shocks, where the discrete representation of the solution cannot fully capture the discontinuity. A recent approach to shock tracking [1, 2] has been to implicitly align the faces of mesh elements with the shock, yielding accurate solutions on coarse meshes. In engineering applications, the solution field is often used to evaluate a scalar functional of interest, such as lift or drag over an airfoil. While functionals are sensitive to errors in the flow solution, certain regions of the domain are more important for accurate evaluation of the functional than others. Using this fact, we formulate a goal-oriented implicit shock tracking approach that captures a segment of the shock that is important for evaluating the functional. Shock tracking is achieved using a Lagrange-Newton-Krylov-Schur (LNKS) full-space optimizer, with the objective of minimizing the adjoint-weighted residual error indicator. We also present a method to evaluate the sensitivity and the Hessian of the functional error. Using available block preconditioners for LNKS [3, 4] makes the full-space approach scalable. The method is applied to test cases of two-dimensional advection and inviscid compressible flows to demonstrate functional-dependent shock tracking. Tracking the entire shock without using artificial dissipation results in the error converging at a rate of $\mathcal{O}(h^{p+1})$.
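
For orientation, one plausible schematic of the full-space, goal-oriented formulation is written out below; the symbols (state $u$, mesh coordinates $x$, residual $r$, enriched residual $\bar r$, adjoint $\psi$, functional $J$) are our own illustrative notation, since the abstract does not fix one.

\begin{align*}
\min_{u,\,x}\quad & \tfrac{1}{2}\,\big|\psi(u,x)^{\top}\,\bar r(u,x)\big|^{2} \qquad \text{(adjoint-weighted residual error indicator for } J(u)\text{)}\\
\text{s.t.}\quad & r(u,x) = 0 \qquad \text{(discretized conservation law on the deforming mesh),}
\end{align*}

where $\bar r$ denotes the residual evaluated in an enriched test space, $\psi$ the corresponding adjoint for the functional $J$, and the LNKS solver treats the flow state $u$ and the mesh coordinates $x$ as simultaneous (full-space) unknowns, so that element faces can align with the shock segment that matters for $J$.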

We consider the problem of the discrete-time approximation of the solution of a one-dimensional SDE with piecewise locally Lipschitz drift and a continuous diffusion coefficient with polynomial growth. In this paper, we study the strong convergence of a (semi-explicit) exponential-Euler scheme previously introduced in Bossy et al. (2021). We show the usual 1/2 rate of convergence for the exponential-Euler scheme when the drift is continuous. When the drift is discontinuous, the convergence rate is penalised by a factor $\epsilon$ that decreases with the time step. We examine the case of a diffusion coefficient vanishing at zero, which adds a positivity-preservation condition and requires a convergence analysis that exploits the negative and exponential moments of the scheme, with the help of the change-of-time technique introduced in Berkaoui et al. (2008). Asymptotic behaviour and theoretical stability of the exponential scheme, as well as numerical experiments, are also presented.
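
Since the abstract does not reproduce the exponential-Euler update itself, the sketch below only illustrates how a strong convergence rate of 1/2 is measured empirically, using a placeholder Euler-Maruyama step on a toy SDE with coupled Brownian increments; substituting the paper's semi-explicit exponential-Euler step into `euler_maruyama_paths` would give the corresponding study for that scheme.

```python
import numpy as np

def euler_maruyama_paths(x0, b, sigma, T, n_steps, dW):
    """One path per column of dW, using a placeholder Euler-Maruyama step.

    The paper's semi-explicit exponential-Euler scheme would replace this stepping
    rule; the abstract does not spell it out, so we only show how a strong rate is
    estimated against a fine-grid reference.
    """
    h = T / n_steps
    x = np.full(dW.shape[1], x0, dtype=float)
    for k in range(n_steps):
        x = x + b(x) * h + sigma(x) * dW[k]
    return x

# Strong-error study on dX = -X dt + 0.5 X dW with coupled Brownian increments.
rng = np.random.default_rng(1)
T, n_fine, n_paths = 1.0, 2**10, 2000
dW_fine = rng.normal(0.0, np.sqrt(T / n_fine), size=(n_fine, n_paths))
b, sigma = (lambda x: -x), (lambda x: 0.5 * x)
x_ref = euler_maruyama_paths(1.0, b, sigma, T, n_fine, dW_fine)

for n_coarse in (2**4, 2**5, 2**6, 2**7):
    factor = n_fine // n_coarse
    dW_coarse = dW_fine.reshape(n_coarse, factor, n_paths).sum(axis=1)
    x_c = euler_maruyama_paths(1.0, b, sigma, T, n_coarse, dW_coarse)
    err = np.sqrt(np.mean((x_c - x_ref) ** 2))   # L2 (strong) error at time T
    print(f"h = {T/n_coarse:.4f}  strong error = {err:.4f}")
```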

Single-particle entangled states (SPES) can offer a more secure way of encoding and processing quantum information than their multi-particle counterparts. The SPES generated via a 2D alternate quantum-walk setup from initially separable states can be either 3-way or 2-way entangled. This letter shows that the generated genuine three-way and nonlocal two-way SPES can be used as cryptographic keys to securely encode two distinct messages simultaneously. We detail the message encryption-decryption steps and show the resilience of the 3-way and 2-way SPES-based cryptographic protocols against eavesdropping attacks such as intercept-and-resend and man-in-the-middle. We also detail how these protocols can be experimentally realized using single photons, with the three degrees of freedom being orbital angular momentum (OAM), path, and polarization, which gives them unparalleled security for quantum communication tasks. The ability to simultaneously encode two distinct messages using the generated SPES showcases the versatility and efficiency of the proposed cryptographic protocol, and could significantly improve the throughput of quantum communication systems.

The problem of optimizing discrete phases in a reconfigurable intelligent surface (RIS) to maximize the received power at a user equipment is addressed. Necessary and sufficient conditions to achieve this maximization are given, and they are employed in an algorithm that achieves it. New versions of the algorithm are given and proven to converge in N or fewer steps, whether or not the direct link is completely blocked, where N is the number of RIS elements; previously published results achieve this in KN or 2N steps, where K is the number of discrete phases. Thus, for a discrete-phase RIS, the techniques presented in this paper achieve the optimum received power in the smallest number of steps published in the literature. In addition, in each of those N steps, the techniques presented in this paper determine only one or a small number of phase shifts with a simple elementwise update rule, which results in a substantial reduction of computation time compared to the algorithms in the literature. As a secondary result, we define the uniform polar quantization (UPQ) algorithm, an intuitive quantization algorithm that approximates the continuous solution with an approximation ratio of $\mathrm{sinc}^2(1/K)$ and has low time complexity, given perfect knowledge of the channel.
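
The quantization step behind UPQ can be sketched as follows: snap the continuous co-phasing solution to the nearest of the K uniformly spaced discrete phases and compare the received power to the continuous optimum. Whether this matches the paper's exact UPQ definition is an assumption; the channel model and names below are purely illustrative.

```python
import numpy as np

def uniform_polar_quantize(theta_cont, K):
    """Round each continuous phase to the nearest of K uniformly spaced discrete phases.

    Mirrors the intuition behind the UPQ step: take the phases that would perfectly
    co-phase all reflected paths and snap them to {0, 2*pi/K, ..., 2*pi*(K-1)/K}.
    """
    step = 2 * np.pi / K
    return np.round(theta_cont / step) * step

# Toy example: co-phase RIS reflections against a direct link h0.
rng = np.random.default_rng(2)
N, K = 64, 4
h = rng.normal(size=N) + 1j * rng.normal(size=N)     # cascaded BS-RIS-user channel
h0 = rng.normal() + 1j * rng.normal()                # direct link
theta_cont = np.angle(h0) - np.angle(h)              # continuous co-phasing solution
theta_disc = uniform_polar_quantize(theta_cont, K)
power_cont = np.abs(h0 + np.sum(h * np.exp(1j * theta_cont))) ** 2
power_disc = np.abs(h0 + np.sum(h * np.exp(1j * theta_disc))) ** 2
print(power_disc / power_cont)                       # fraction of the continuous optimum retained
```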

Behavioural distances of transition systems modelled as coalgebras for endofunctors generalize the traditional notions of behavioural equivalence to a quantitative setting, in which states are equipped with a measure of how (dis)similar they are. Endowing transition systems with such distances essentially relies on the ability to lift functors describing the one-step behaviour of the transition systems to the category of pseudometric spaces. We consider the Kantorovich lifting of a functor on quantale-valued relations, which subsumes equivalences, preorders and (directed) metrics. We use tools from fibred category theory, which allow one to see the Kantorovich lifting as arising from an appropriate fibred adjunction. Our main contributions are compositionality results for the Kantorovich lifting, where we show that the lifting of a composed functor coincides with the composition of the liftings. In addition, we describe how to lift distributive laws in the case where one of the two functors is polynomial. These results are essential ingredients for adapting up-to techniques to the case of quantale-valued behavioural distances. Up-to techniques are a well-known coinductive technique for efficiently showing lower bounds for behavioural distances. We conclude by illustrating the results of our paper in two case studies.
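
A familiar concrete instance may help: with the finite-powerset functor and the supremum as evaluation map, the Kantorovich lifting of a $[0,\infty]$-valued distance on states is the Hausdorff distance on finite sets of states. The snippet below computes exactly that lifting; it is an illustration of the general construction, not code from the paper.

```python
def hausdorff(d, A, B):
    """Hausdorff lifting of a distance d to finite sets of states.

    Classic concrete instance of a Kantorovich-style lifting: the finite-powerset
    functor, with the supremum as evaluation map, lifts a [0, inf]-valued distance
    on states to the Hausdorff distance on subsets.
    """
    if not A and not B:
        return 0.0
    if not A or not B:
        return float("inf")
    return max(
        max(min(d(a, b) for b in B) for a in A),
        max(min(d(a, b) for a in A) for b in B),
    )

# Toy example on states embedded in the real line.
d = lambda x, y: abs(x - y)
print(hausdorff(d, {0.0, 1.0}, {0.2, 1.5}))   # 0.5
```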

We present a comparative computational study of two stabilized Reduced Order Models (ROMs) for the simulation of convection-dominated incompressible flow (Reynolds number of the order of a few thousand). Representative solutions in the parameter space, which includes either time only or time and the Reynolds number, are computed with a Finite Volume method and used to generate a reduced basis via Proper Orthogonal Decomposition (POD). Galerkin projection of the Navier-Stokes equations onto the reduced space is used to compute the ROM solution. To ensure computational efficiency, the number of POD modes is truncated, and the ROM solution accuracy is recovered through two stabilization methods: i) adding a global constant artificial viscosity to the reduced-dimensional model, and ii) adding a different value of artificial viscosity for each POD mode. We test the stabilized ROMs for fluid flow in an idealized medical device consisting of a conical convergent, a narrow throat, and a sudden expansion. Both stabilization methods significantly improve the accuracy of the ROM solution over a standard (non-stabilized) POD-Galerkin model.
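
The offline POD stage described above can be sketched in a few lines: collect snapshots, take an SVD, and truncate to r modes; the artificial-viscosity stabilizations then act on the reduced operators built from this basis. Matrix names and the synthetic data below are illustrative assumptions.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Compute an r-mode POD basis from a snapshot matrix (one solution per column).

    Minimal sketch of the offline stage; the Galerkin projection and the two
    artificial-viscosity stabilizations act on reduced operators built from this basis.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)   # retained POD "energy" per truncation level
    return U[:, :r], energy[r - 1]

# Example with synthetic snapshots of a 1000-dof field at 50 time instants.
rng = np.random.default_rng(3)
S = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 50))  # rank-5 snapshot data
Phi, kept = pod_basis(S, r=5)
print(Phi.shape, kept)   # (1000, 5), ~1.0

# Stabilization i) from the abstract amounts to augmenting the reduced viscous operator
# with a constant nu_art, e.g. A_r_stab = A_r + nu_art * D_r, where D_r is the projected
# diffusion operator (these operator names are illustrative, not the paper's notation).
```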

This paper develops an in-depth treatment of the problem of approximating the Gaussian smoothing and Gaussian derivative computations in scale-space theory for application to discrete data. With close connections to previous axiomatic treatments of continuous and discrete scale-space theory, we consider three main ways of discretizing these scale-space operations in terms of explicit discrete convolutions, based on (i) sampling the Gaussian kernels and the Gaussian derivative kernels, (ii) locally integrating the Gaussian kernels and the Gaussian derivative kernels over each pixel support region, or (iii) basing the scale-space analysis on the discrete analogue of the Gaussian kernel and then computing derivative approximations by applying small-support central difference operators to the spatially smoothed image data. We study the properties of these three main discretization methods both theoretically and experimentally, and characterize their performance by quantitative measures, including the results they give rise to with respect to the task of scale selection, investigated for four different use cases, with emphasis on the behaviour at fine scales. The results show that the sampled Gaussian kernels and derivatives, as well as the integrated Gaussian kernels and derivatives, perform very poorly at very fine scales; there, the discrete analogue of the Gaussian kernel with its corresponding discrete derivative approximations performs substantially better. On the other hand, the sampled Gaussian kernel and the sampled Gaussian derivatives lead to numerically very good approximations of the corresponding continuous results when the scale parameter is sufficiently large; in the experiments presented in the paper, this holds when the scale parameter is greater than about 1, in units of the grid spacing.
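
The fine-scale contrast between methods (i) and (iii) is easy to reproduce numerically: below, the sampled Gaussian is compared with the discrete analogue of the Gaussian kernel $T(n,t) = e^{-t} I_n(t)$ (computed via `scipy.special.ive`), showing that the sampled kernel deviates noticeably from unit mass at fine scales while the discrete analogue stays normalized. This is a minimal sketch of one of the fine-scale effects studied in the paper, not its full experimental protocol.

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function I_n(t)*exp(-t)

def sampled_gaussian(t, radius):
    """Kernel (i): the continuous Gaussian sampled at integer positions."""
    n = np.arange(-radius, radius + 1)
    return np.exp(-n**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def discrete_gaussian(t, radius):
    """Kernel (iii): the discrete analogue of the Gaussian, T(n, t) = exp(-t) I_n(t)."""
    n = np.arange(-radius, radius + 1)
    return ive(np.abs(n), t)

# At fine scales the sampled kernel's mass drifts away from 1, while the discrete
# analogue remains (numerically) normalized at all scales.
for t in (0.1, 1.0, 4.0):
    print(t, sampled_gaussian(t, 15).sum(), discrete_gaussian(t, 15).sum())
```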

We randomize the implicit two-stage Runge-Kutta scheme in order to improve the rate of convergence (with respect to a deterministic scheme) and the stability of the approximate solution (with respect to the solution generated by the explicit scheme). For the stability analysis, we use Dahlquist's concept of A-stability, adapted to randomized schemes by considering three notions of stability: asymptotic, mean-square, and in probability. The randomized implicit RK2 scheme proves to be A-stable asymptotically and in probability, but not in the mean-square sense.
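
For readers unfamiliar with the terminology, the claims refer to the Dahlquist test equation and to three standard randomized relaxations of A-stability; the notation below (random stability factors $R_n$) is ours and only indicative of the setting, not the paper's exact definitions.

\begin{align*}
& y'(t) = \lambda y(t), \quad \operatorname{Re}\lambda < 0, \qquad y_{n+1} = R_n(h\lambda)\, y_n \ \text{(random stability factor per step)},\\
& \text{asymptotic: } |y_n| \to 0 \ \text{a.s.}, \qquad \text{mean-square: } \mathbb{E}|y_n|^2 \to 0, \qquad \text{in probability: } |y_n| \xrightarrow{\ \mathbb{P}\ } 0,
\end{align*}

with A-stability in each sense requiring the corresponding convergence for every $\operatorname{Re}\lambda<0$ and every step size $h>0$; the factors $R_n$ are random, e.g. because the stage nodes of the scheme are randomized.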

Realizing computationally complex quantum circuits in the presence of noise and imperfections is a challenging task. While fault-tolerant quantum computing provides a route to reducing noise, it requires a large overhead for generic algorithms. Here, we develop and analyze a hardware-efficient, fault-tolerant approach to realizing complex sampling circuits. We co-design the circuits with the appropriate quantum error correcting codes for efficient implementation in a reconfigurable neutral atom array architecture, constituting what we call a fault-tolerant compilation of the sampling algorithm. Specifically, we consider a family of $[[2^D , D, 2]]$ quantum error detecting codes whose transversal and permutation gate set can realize arbitrary degree-$D$ instantaneous quantum polynomial (IQP) circuits. Using native operations of the code and the atom array hardware, we compile a fault-tolerant and fast-scrambling family of such IQP circuits in a hypercube geometry, realized recently in the experiments by Bluvstein et al. [Nature 626, 7997 (2024)]. We develop a theory of second-moment properties of degree-$D$ IQP circuits for analyzing hardness and verification of random sampling by mapping to a statistical mechanics model. We provide evidence that sampling from hypercube IQP circuits is classically hard to simulate and analyze the linear cross-entropy benchmark (XEB) in comparison to the average fidelity. To realize a fully scalable approach, we first show that Bell sampling from degree-$4$ IQP circuits is classically intractable and can be efficiently validated. We further devise new families of $[[O(d^D),D,d]]$ color codes of increasing distance $d$, permitting exponential error suppression for transversal IQP sampling. Our results highlight fault-tolerant compiling as a powerful tool in co-designing algorithms with specific error-correcting codes and realistic hardware.
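
As a small illustration of the benchmark mentioned above, the linear XEB estimator can be sketched as follows; the ideal output probabilities would come from a classical simulation of the (hypercube) IQP circuit, which is precisely what is conjectured to be hard at scale, and the toy Porter-Thomas-like distribution below is only for checking the estimator's behaviour.

```python
import numpy as np

def linear_xeb(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark: XEB = 2^n * E_samples[p_ideal(x)] - 1.

    Minimal sketch of the standard XEB estimator compared against the average
    fidelity in the abstract; `ideal_probs` is the full ideal output distribution.
    """
    return (2**n_qubits) * np.mean(ideal_probs[samples]) - 1.0

# Toy check: samples drawn from the ideal distribution give XEB close to 1 for a
# Porter-Thomas-like distribution, while uniform samples give XEB close to 0.
rng = np.random.default_rng(4)
n = 10
amps = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
p = np.abs(amps) ** 2
p /= p.sum()
ideal_samples = rng.choice(2**n, size=5000, p=p)
uniform_samples = rng.integers(0, 2**n, size=5000)
print(linear_xeb(ideal_samples, p, n), linear_xeb(uniform_samples, p, n))
```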
