
The Learning Parity with Noise (LPN) problem underlies several classic cryptographic primitives. Researchers have endeavored to demonstrate the algorithmic hardness of this problem by seeking a reduction from the decoding problem of linear codes, for which several hardness results exist. Earlier studies used code smoothing as a technical tool to achieve such reductions, showing that they are possible for codes of vanishing rate. This left open the question of obtaining a reduction with positive-rate codes. Addressing this case, we characterize the efficiency of the reduction in terms of the parameters of the decoding and LPN problems. As a conclusion, we isolate the parameter regimes for which a meaningful reduction is possible and the regimes for which its existence is unlikely.
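
To make the problem statement concrete, the following toy sketch generates LPN samples over GF(2); the dimension, sample count, and noise rate are arbitrary illustrative choices and play no role in the reduction studied here.

```python
import numpy as np

rng = np.random.default_rng(0)

def lpn_samples(n=128, m=512, tau=0.125):
    """Generate m LPN samples (a_i, <a_i, s> + e_i mod 2) with a secret s in GF(2)^n
    and Bernoulli(tau) noise. Returns (A, b, s)."""
    s = rng.integers(0, 2, size=n)            # secret vector
    A = rng.integers(0, 2, size=(m, n))       # uniformly random queries
    e = (rng.random(m) < tau).astype(int)     # Bernoulli noise of rate tau
    b = (A @ s + e) % 2                       # noisy inner products
    return A, b, s

A, b, s = lpn_samples()
# Sanity check: the empirical fraction of flipped labels should be close to tau.
print("empirical noise rate:", np.mean((A @ s) % 2 != b))
```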


For subjective tasks such as hate detection, where people perceive hate differently, the ability of large language models (LLMs) to represent diverse groups is unclear. By including additional context in prompts, we comprehensively analyze LLM sensitivity to geographical priming, persona attributes, and numerical information to assess how well the needs of various groups are reflected. Our findings across two LLMs, five languages, and six datasets reveal that mimicking persona-based attributes leads to annotation variability, while incorporating geographical signals leads to better regional alignment. We also find that LLMs are sensitive to numerical anchors, indicating both an ability to leverage community-based flagging efforts and exposure to adversaries. Our work provides preliminary guidelines and highlights the nuances of applying LLMs in culturally sensitive cases.
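
As an illustration of how such contextual signals can be injected into prompts, here is a hypothetical prompt-construction helper; the wording, field names, and anchoring format are assumptions made for illustration and are not the prompts used in the study.

```python
def build_prompt(text, country=None, persona=None, flag_count=None):
    """Assemble an annotation prompt with optional contextual signals.
    The exact wording is a hypothetical illustration, not the study's prompts."""
    parts = []
    if country:
        parts.append(f"You are answering from the perspective of an annotator in {country}.")
    if persona:
        parts.append(f"Adopt the following persona: {persona}.")
    if flag_count is not None:
        # Numerical anchor, e.g. from community-based flagging efforts.
        parts.append(f"This post has been flagged by {flag_count} community members.")
    parts.append(f'Does the following post contain hate speech? Answer "yes" or "no".\nPost: "{text}"')
    return "\n".join(parts)

print(build_prompt("example post", country="India",
                   persona="a 30-year-old teacher", flag_count=12))
```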

Quantum error correction (QEC) is essential for operating quantum computers in the presence of noise. Here, we accurately decode arbitrary Calderbank-Shor-Steane (CSS) codes via the maximum satisfiability (MaxSAT) problem. We show how to map the quantum maximum-likelihood decoding problem for CSS codes of arbitrary geometry and parity-check weight into a MaxSAT problem. We incorporate the syndrome measurements as hard clauses, while qubit and measurement error probabilities, including biased and non-uniform ones, are encoded as soft MaxSAT clauses. For the code capacity of color codes on a hexagonal lattice, our decoder has a higher threshold and superior scaling in noise suppression compared to belief propagation with ordered statistics post-processing (BP-OSD), while showing similar scaling in computational cost. Further, we decode surface codes and recently proposed bivariate quantum low-density parity-check (QLDPC) codes, where we find lower error rates than BP-OSD. Finally, we connect the complexity of MaxSAT decoding to a computational phase transition controlled by the clause density of the MaxSAT problem, and show that our mapping always lies in the computationally "easy" phase. Our MaxSAT decoder can be further parallelised or implemented on ASICs and FPGAs, promising potential further speedups of several orders of magnitude. Our work provides a flexible platform towards practical applications on quantum computers.
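
To make the hard/soft clause split concrete, the toy sketch below casts maximum-likelihood decoding of a 3-bit repetition code as a weighted constraint problem: the syndrome equations act as hard constraints and per-qubit log-likelihood weights play the role of soft clauses. Brute-force enumeration stands in for a real MaxSAT solver, and the code, probabilities, and syndrome are arbitrary; the mapping for general CSS codes and measurement errors described above is more involved.

```python
import itertools
import numpy as np

# Toy code-capacity example: 3-bit repetition code with checks Z1Z2 and Z2Z3.
H = np.array([[1, 1, 0],
              [0, 1, 1]])          # parity-check (stabilizer) matrix
syndrome = np.array([1, 0])        # measured syndrome (hard constraints)
p = np.array([0.05, 0.20, 0.05])   # possibly non-uniform flip probabilities

# Soft-clause weights: penalising "qubit i flipped" by log((1-p_i)/p_i) makes
# minimising the total violated soft weight equivalent to maximum-likelihood decoding.
w = np.log((1 - p) / p)

best_err, best_cost = None, np.inf
for e in itertools.product([0, 1], repeat=H.shape[1]):  # brute force replaces a MaxSAT solver
    e = np.array(e)
    if not np.array_equal(H @ e % 2, syndrome):          # hard clauses: syndrome must match
        continue
    cost = float(w @ e)                                   # weight of violated soft clauses
    if cost < best_cost:
        best_err, best_cost = e, cost

print("most likely error:", best_err)
```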

This is a preliminary version. Markov chain Monte Carlo samplers based on discretizations of (overdamped) Langevin dynamics are commonly used in the Bayesian inference and computational statistical physics literature to estimate high-dimensional integrals. One can introduce a non-constant diffusion matrix to precondition these dynamics, and recent works have optimized it in order to reach stationarity faster by overcoming entropic and energy barriers. However, the methodology introduced to compute these optimal diffusions is not suited to high-dimensional settings, as it relies on costly optimization procedures. In this work, we propose a class of diffusion matrices, based on one-dimensional collective variables (CVs), which helps the dynamics explore the latent space defined by the CV. The form of the diffusion matrix is such that the effective dynamics, which are approximations of the processes as observed on the latent space, are governed by the optimal effective diffusion coefficient in a homogenized limit, which possesses an analytical expression. We describe how this class of diffusion matrices can be constructed and learned during the simulation. We provide implementations of the Metropolis-Adjusted Langevin Algorithm and Riemann Manifold (Generalized) Hamiltonian Monte Carlo algorithms, and discuss numerical optimizations in the case where the CV depends only on a small number of components of the position of the system. We illustrate the efficiency gains of this class of diffusion matrices by computing mean transition durations between two configurations of a dimer in a solvent.
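
As a minimal sketch of the kind of scheme involved, the following implements a Metropolis-Adjusted Langevin step with a diagonal, position-dependent diffusion that depends only on a one-dimensional CV (here the first coordinate); the potential, the diffusion profile, and the step size are arbitrary illustrative choices, and the optimal homogenized diffusion coefficient derived in the work is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
h = 1e-3  # step size (arbitrary)

def V(x):            # double well in the CV x[0], harmonic in x[1]
    return (x[0]**2 - 1.0)**2 + 0.5 * x[1]**2

def grad_V(x):
    return np.array([4.0 * x[0] * (x[0]**2 - 1.0), x[1]])

def d(xi):           # CV-dependent diffusion coefficient (illustrative choice)
    return 1.0 + 4.0 * np.exp(-2.0 * xi**2)   # larger diffusion near the barrier at xi = 0

def d_prime(xi):
    return -16.0 * xi * np.exp(-2.0 * xi**2)

def drift_and_diag(x):
    D = np.array([d(x[0]), 1.0])                                  # diagonal of D(x)
    drift = -D * grad_V(x) + np.array([d_prime(x[0]), 0.0])       # D*grad(log pi) + div D
    return drift, D

def log_q(y, x):     # log density of the Gaussian proposal N(x + h*drift, 2h*D(x))
    drift, D = drift_and_diag(x)
    mu, var = x + h * drift, 2.0 * h * D
    return -0.5 * np.sum((y - mu)**2 / var) - 0.5 * np.sum(np.log(2.0 * np.pi * var))

def mala_step(x):
    drift, D = drift_and_diag(x)
    y = x + h * drift + np.sqrt(2.0 * h * D) * rng.standard_normal(2)
    log_alpha = (-V(y) + V(x)) + log_q(x, y) - log_q(y, x)        # Metropolis correction
    return y if np.log(rng.random()) < log_alpha else x

x = np.array([-1.0, 0.0])
samples = np.empty((20000, 2))
for k in range(samples.shape[0]):
    x = mala_step(x)
    samples[k] = x
# Rough diagnostic of barrier crossings between the two wells.
print("fraction of time spent in the right well:", np.mean(samples[:, 0] > 0))
```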

This work presents GALAEXI as a novel, energy-efficient flow solver for the simulation of compressible flows on unstructured meshes, leveraging the parallel computing power of modern Graphics Processing Units (GPUs). GALAEXI implements the high-order Discontinuous Galerkin Spectral Element Method (DGSEM) and uses shock capturing with a finite-volume subcell approach to ensure the stability of the high-order scheme near shocks. This work provides details on the general code design, the parallelization strategy, and the implementation approach for the compute kernels, with a focus on the element-local mappings between volume and surface data due to the unstructured mesh. GALAEXI exhibits excellent strong scaling properties up to 1024 GPUs if each GPU is assigned a minimum of one million degrees of freedom. To verify the implementation, a convergence study is performed that recovers the theoretical order of convergence of the implemented numerical schemes. Moreover, the solver is validated using both the incompressible and compressible formulations of the Taylor-Green vortex at Mach numbers of 0.1 and 1.25, respectively. A mesh convergence study shows that the results converge to the high-fidelity reference solution and that the results match the original CPU implementation. Finally, GALAEXI is applied to a large-scale wall-resolved large eddy simulation of a linear cascade of the NASA Rotor 37. Here, the supersonic region and shocks at the leading edge are captured accurately and robustly by the implemented shock-capturing approach. It is demonstrated that GALAEXI requires less than half of the energy to carry out this simulation in comparison to the reference CPU implementation. This renders GALAEXI a potent tool for accurate and efficient simulations of compressible flows in the realm of exascale computing and the associated new HPC architectures.

We introduce GPTreeO, a flexible R package for scalable Gaussian process (GP) regression, particularly tailored to continual learning problems. GPTreeO builds upon the Dividing Local Gaussian Processes (DLGP) algorithm, in which a binary tree of local GP regressors is dynamically constructed using a continual stream of input data. In GPTreeO we extend the original DLGP algorithm by allowing continual optimisation of the GP hyperparameters, incorporating uncertainty calibration, and introducing new strategies for how the local partitions are created. Moreover, the modular code structure allows users to interface their favourite GP library to perform the local GP regression in GPTreeO. The flexibility of GPTreeO gives the user fine-grained control of the balance between computational speed, accuracy, stability and smoothness. We conduct a sensitivity analysis to show how GPTreeO's configurable features impact the regression performance in a continual learning setting.
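
For intuition, here is a toy Python sketch of the dividing-local-GP idea: a binary tree whose leaves hold local GP regressors, with a leaf split at the median of its widest dimension once it exceeds a capacity. The splitting rule, capacity, and kernel are assumptions made for illustration and do not reproduce the DLGP/GPTreeO algorithm or its R interface.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

class Node:
    """One node of a toy dividing-local-GP tree (illustrative, not the GPTreeO rules)."""
    def __init__(self, cap=50):
        self.cap, self.X, self.y = cap, [], []
        self.dim = self.thr = self.left = self.right = self.gp = None

    def add(self, x, y):
        if self.left is not None:                       # internal node: route the point
            (self.left if x[self.dim] <= self.thr else self.right).add(x, y)
            return
        self.X.append(x); self.y.append(y); self.gp = None
        if len(self.X) > self.cap:                      # leaf is full: split at the median
            X = np.array(self.X)
            self.dim = int(np.argmax(X.max(0) - X.min(0)))
            self.thr = float(np.median(X[:, self.dim]))
            self.left, self.right = Node(self.cap), Node(self.cap)
            for xi, yi in zip(self.X, self.y):
                (self.left if xi[self.dim] <= self.thr else self.right).add(xi, yi)
            self.X, self.y = [], []

    def predict(self, x):
        if self.left is not None:
            return (self.left if x[self.dim] <= self.thr else self.right).predict(x)
        if self.gp is None:                             # refit the local GP lazily
            self.gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
            self.gp.fit(np.array(self.X), np.array(self.y))
        return float(self.gp.predict(np.array([x]))[0])

# Stream data in, as in a continual-learning setting.
rng = np.random.default_rng(0)
root = Node(cap=40)
for _ in range(500):
    x = rng.uniform(-3, 3, size=2)
    root.add(x, np.sin(x[0]) * np.cos(x[1]) + 0.05 * rng.standard_normal())
print(root.predict(np.array([1.0, -1.0])))
```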

The Polynomial Learning With Errors problem (PLWE) underlies two of the three cryptosystems standardized in August 2024 by the National Institute of Standards and Technology to replace current non-quantum-resistant primitives such as those based on RSA, Diffie-Hellman, or its elliptic curve analogue. Although PLWE is widely believed to be quantum resistant, this fact has not yet been established, in contrast to other post-quantum proposals such as multivariate and some code-based ones. Moreover, several vulnerabilities have been encountered for a number of specific instances. In the search for more flexibility, it becomes fully relevant to study the robustness of PLWE based on other polynomials, not necessarily cyclotomic. In 2015, Elias et al. found a number of attacks based on different features of the roots of the polynomial. In the present work we give an overview of the attacks against PLWE derived from this and subsequent works, along with several new attacks which refine those of Elias et al. by exploiting the order of the trace of the roots over finite extensions of the finite field under the three scenarios laid out by Elias et al., allowing us to generalize the setting in which the attacks can be carried out.
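
For concreteness, the sketch below generates toy PLWE samples in Z_q[x]/(x^n + 1), one possible (cyclotomic) choice of defining polynomial; the modulus, degree, and error distribution are arbitrary illustrative parameters, and the attacks discussed here target more general, not necessarily cyclotomic, polynomials.

```python
import numpy as np

rng = np.random.default_rng(0)
q, n = 12289, 8          # toy modulus and degree, chosen only for illustration

def mulmod(a, b):
    """Multiply two degree-<n polynomials and reduce modulo (x^n + 1, q)."""
    prod = np.convolve(a, b) % q
    res = prod[:n].copy()
    tail = prod[n:]                                    # coefficients of x^n, ..., x^(2n-2)
    res[:tail.size] = (res[:tail.size] - tail) % q     # use x^n = -1
    return res

def plwe_sample(s):
    a = rng.integers(0, q, size=n)        # uniformly random public polynomial
    e = rng.integers(-2, 3, size=n) % q   # small error polynomial
    return a, (mulmod(a, s) + e) % q      # b = a*s + e in Z_q[x]/(x^n + 1)

s = rng.integers(-2, 3, size=n) % q       # small secret polynomial
a, b = plwe_sample(s)
print("a =", a)
print("b =", b)
```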

The Gibbs sampler (a.k.a. Glauber dynamics and heat-bath algorithm) is a popular Markov Chain Monte Carlo algorithm which iteratively samples from the conditional distributions of a probability measure $\pi$ of interest. Under the assumption that $\pi$ is strongly log-concave, we show that the random scan Gibbs sampler contracts in relative entropy and provide a sharp characterization of the associated contraction rate. Assuming that evaluating conditionals is cheap compared to evaluating the joint density, our results imply that the number of full evaluations of $\pi$ needed for the Gibbs sampler to mix grows linearly with the condition number and is independent of the dimension. If $\pi$ is non-strongly log-concave, the convergence rate in entropy degrades from exponential to polynomial. Our techniques are versatile and extend to Metropolis-within-Gibbs schemes and the Hit-and-Run algorithm. A comparison with gradient-based schemes and the connection with the optimization literature are also discussed.
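
As a minimal illustration of the random scan Gibbs sampler on a strongly log-concave target, the sketch below samples a correlated bivariate Gaussian, for which the coordinate conditionals are available in closed form; the correlation level is an arbitrary choice and the sketch does not reproduce the entropy contraction analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.95                                   # strong correlation -> large condition number

def gibbs(n_iter=50000):
    """Random-scan Gibbs sampler for a bivariate Gaussian N(0, [[1, rho], [rho, 1]])."""
    x = np.zeros(2)
    out = np.empty((n_iter, 2))
    for k in range(n_iter):
        i = rng.integers(2)                  # pick a coordinate uniformly at random
        j = 1 - i
        # Exact conditional: x_i | x_j ~ N(rho * x_j, 1 - rho^2)
        x[i] = rho * x[j] + np.sqrt(1 - rho**2) * rng.standard_normal()
        out[k] = x
    return out

samples = gibbs()
print("empirical covariance:\n", np.cov(samples[5000:].T))
```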

Image edge detection (ED) is a fundamental task in computer vision. While the performance of ED algorithms has improved greatly with the introduction of CNN-based models, current models still suffer from unsatisfactory precision rates, especially when only a low error tolerance distance is allowed. Therefore, model architectures for more precise predictions still need investigation. On the other hand, the unavoidably noisy training data provided by human annotators leads to unsatisfactory model predictions even when the inputs are edge maps themselves, which also needs a solution. In this paper, more precise ED models are presented with cascaded skipping density blocks (CSDB). Our models obtain state-of-the-art (SOTA) predictions on several datasets, especially in average precision rate (AP), over a high-standard benchmark, which is confirmed by extensive experiments. Also, a novel modification of data augmentation for training is employed, which allows noiseless data to be used in model training for the first time and thus further improves model performance. The related Python code can be found at //github.com/Hao-B-Shu/SDPED.
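
For readers unfamiliar with densely connected blocks, the following is a generic dense block with concatenative skip connections in PyTorch; it is meant only to illustrate the general idea, and the channel counts, depth, and the precise CSDB design of the paper are not reproduced (all hyperparameters here are assumptions).

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """A generic dense block with skip (concatenative) connections; illustrative only,
    not the paper's CSDB design."""
    def __init__(self, in_ch, growth=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            ch += growth                      # every layer sees all previous feature maps
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

block = DenseBlock(in_ch=3)
y = block(torch.randn(1, 3, 64, 64))
print(y.shape)   # torch.Size([1, 3 + 4*16, 64, 64])
```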

Indoor positioning using UWB technology has gained interest due to its centimeter-level accuracy potential. However, multipath effects and non-line-of-sight conditions cause ranging errors between anchors and tags. Existing approaches for mitigating these ranging errors rely on collecting large labeled datasets, making them impractical for real-world deployments. This paper proposes a novel self-supervised deep reinforcement learning approach that does not require labeled ground-truth data. A reinforcement learning agent uses the channel impulse response as its state and predicts corrections that minimize the error between corrected and estimated ranges. The agent learns in a self-supervised manner by iteratively improving corrections that are generated by combining the predictability of trajectories with filtering and smoothing. Experiments on real-world UWB measurements demonstrate performance comparable to state-of-the-art supervised methods, overcoming their data-dependency and generalizability limitations. This makes self-supervised deep reinforcement learning a promising solution for practical and scalable UWB ranging error correction.
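
To illustrate the self-supervision signal, the toy sketch below builds pseudo-targets by robustly smoothing a simulated range trajectory with sporadic NLOS bias and rewards corrections that move raw ranges towards the smoothed reference; the simulated data, the median filter, and the reward shape are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated anchor-tag range estimates: a smooth trajectory plus noise, with a large
# positive bias on sporadic NLOS measurements (all values are made up for illustration).
t = np.linspace(0.0, 10.0, 200)
true_range = 5.0 + 2.0 * np.sin(0.5 * t)
raw_range = true_range + 0.05 * rng.standard_normal(t.size)
nlos = rng.random(t.size) < 0.2
raw_range[nlos] += 1.5

def median_smooth(x, w=15):
    """Robust smoother standing in for the filtering/smoothing used to build pseudo-targets."""
    half = w // 2
    return np.array([np.median(x[max(0, i - half):i + half + 1]) for i in range(len(x))])

reference = median_smooth(raw_range)

def reward(correction, k):
    """Self-supervised reward: the corrected range should agree with the smoothed reference."""
    return -abs((raw_range[k] + correction) - reference[k])

k = int(np.flatnonzero(nlos)[0])       # an NLOS-affected measurement
print("reward with no correction:      ", reward(0.0, k))
print("reward with a -1.5 m correction:", reward(-1.5, k))
```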

Degradation of image quality due to the presence of haze is a very common phenomenon. Existing methods such as DehazeNet [3] and MSCNN [11] tackled the drawbacks of hand-crafted haze-relevant features. However, these methods suffer from color distortion in gloomy (poorly illuminated) environments. In this paper, a cardinal (red, green, and blue) color fusion network for single-image haze removal is proposed. In the first stage, the network fuses the color information present in hazy images and generates multi-channel depth maps. The second stage estimates the scene transmission map from the generated dark channels using a multi-channel multi-scale convolutional neural network (McMs-CNN) to recover the original scene. To train the proposed network, we have used two standard datasets, namely ImageNet [5] and D-HAZY [1]. Performance evaluation of the proposed approach has been carried out using the structural similarity index (SSIM), mean square error (MSE), and peak signal-to-noise ratio (PSNR). The performance analysis shows that the proposed approach outperforms existing state-of-the-art methods for single-image dehazing.
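
Once a transmission map and atmospheric light are available, scene recovery in such dehazing pipelines typically inverts the standard atmospheric scattering model I = J*t + A*(1 - t). The sketch below shows this inversion on synthetic values; the image, transmission map, and airlight are made up, and the McMs-CNN that estimates the transmission is not reproduced.

```python
import numpy as np

def recover_scene(I, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t) to recover the
    haze-free scene J, clamping the transmission to avoid division by small values."""
    t = np.clip(t, t_min, 1.0)[..., None]        # broadcast transmission over RGB channels
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)

# Toy usage with a random hazy image, a random transmission map, and grey airlight.
rng = np.random.default_rng(0)
I = rng.random((4, 4, 3))                         # hazy RGB image in [0, 1]
t = rng.uniform(0.3, 1.0, size=(4, 4))            # estimated scene transmission map
A = np.array([0.9, 0.9, 0.9])                     # estimated atmospheric light
print(recover_scene(I, t, A).shape)
```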
