
The problem studied in this work is to determine the higher weight spectra of the Projective Reed-Muller codes associated to the Veronese $3$-fold $\mathcal V$ in $PG(9,q)$, which is the image of the quadratic Veronese embedding of $PG(3,q)$ in $PG(9,q)$. We reduce the problem to the following combinatorial problem in finite geometry: For each subset $S$ of $\mathcal V$, determine the dimension of the linear subspace of $PG(9,q)$ generated by $S$. We develop a systematic method to solve the latter problem. We implement the method for $q=3$, and use it to obtain the higher weight spectra of the associated code. The case of a general finite field $\mathbb F_q$ will be treated in a future work.
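
To make the combinatorial subproblem concrete, here is a minimal sketch, for the $q=3$ case treated in the paper, of the quadratic Veronese map and of computing the dimension of the projective span of a subset $S$ of $\mathcal V$; the point enumeration and the particular subset chosen are illustrative assumptions:

```python
import itertools
import numpy as np

q = 3  # the paper implements the method for q = 3

def pg_points(n, q):
    """Normalized representatives of PG(n, q): first nonzero coordinate is 1."""
    pts = []
    for v in itertools.product(range(q), repeat=n + 1):
        v = np.array(v)
        nz = np.nonzero(v)[0]
        if len(nz) and v[nz[0]] == 1:
            pts.append(v)
    return pts

def veronese(v):
    """Quadratic Veronese map PG(3,q) -> PG(9,q): the 10 monomials x_i x_j, i <= j."""
    return np.array([v[i] * v[j] for i in range(4) for j in range(i, 4)]) % q

def rank_mod_q(rows, q):
    """Gaussian elimination over GF(q), q prime."""
    M = np.array(rows) % q
    rank, col = 0, 0
    while rank < M.shape[0] and col < M.shape[1]:
        piv = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if piv is None:
            col += 1
            continue
        M[[rank, piv]] = M[[piv, rank]]
        M[rank] = (M[rank] * pow(int(M[rank, col]), q - 2, q)) % q  # scale pivot to 1
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] - M[r, col] * M[rank]) % q
        rank += 1
        col += 1
    return rank

# Dimension of the projective span of a subset S of the Veronese 3-fold:
S = [veronese(v) for v in pg_points(3, q)[:5]]  # a small sample subset
print("projective span dimension:", rank_mod_q(S, q) - 1)
```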

Related content

Pacific Graphics is the flagship conference of the Asia Graphics Association. As a highly successful conference series, Pacific Graphics provides a premier forum for researchers, developers, and practitioners from the Pacific Rim and around the world to present and discuss new problems, solutions, and technologies in computer graphics and related fields. The aim of Pacific Graphics is to bring together researchers from diverse areas to present their latest results, collaborate, and contribute to the advancement of the field. The conference will include regular paper sessions, work-in-progress sessions, tutorials, and talks by internationally renowned speakers in all areas related to computer graphics and interactive systems. Official website:

We introduce a new class of random Gottesman-Kitaev-Preskill (GKP) codes derived from the cryptanalysis of the so-called NTRU cryptosystem. The derived codes are good in the sense that they exhibit constant rate and, with high probability, average distance scaling $\Delta \propto \sqrt{n}$, where $n$ is the number of bosonic modes; this distance scaling is equivalent to that of a GKP code obtained by concatenating single-mode GKP codes into a qubit quantum error-correcting code with linear distance. The derived class of NTRU-GKP codes has the additional property that decoding under a stochastic displacement noise model is equivalent to decrypting the NTRU cryptosystem, so that every random instance of the code naturally comes with an efficient decoder. This construction highlights how the GKP code bridges aspects of classical error correction, quantum error correction, and post-quantum cryptography. We underscore this connection by discussing the computational hardness of decoding GKP codes and propose, as a new application, a simple public-key quantum communication protocol whose security is inherited from the NTRU cryptosystem.
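
For context, here is a hedged summary, in the standard lattice-theoretic language of GKP codes, of the quantity behind the stated distance scaling; normalization conventions vary across the literature, so treat the constants as assumptions:

```latex
% A GKP code on n modes can be specified by a symplectically integral lattice
% \Lambda \subset \mathbb{R}^{2n}: a generator matrix M satisfies
%   M \Omega M^{\mathsf T} \in \mathbb{Z}^{2n \times 2n},
% where \Omega is the standard symplectic form.  Logical operators correspond
% to cosets of the symplectic dual \Lambda^{\perp} modulo \Lambda, and the code
% distance is the length of the shortest nontrivial logical displacement:
\Delta \;=\; \min_{x \,\in\, \Lambda^{\perp} \setminus \Lambda} \lVert x \rVert .
% The abstract's claim \Delta \propto \sqrt{n} thus says that the NTRU-derived
% lattices have, with high probability, dual shortest vectors of length \Theta(\sqrt{n}).
```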

Machine Learning (ML) in low-data settings remains an underappreciated yet crucial problem. Data augmentation methods that increase the sample size of available datasets are therefore key to unlocking the transformative potential of ML in data-deprived regions and domains. Unfortunately, the limited training set constrains traditional tabular synthetic data generators in their ability to generate the large and diverse augmented datasets needed for ML tasks. To address this challenge, we introduce CLLM, which leverages the prior knowledge of Large Language Models (LLMs) for data augmentation in the low-data regime. However, as with any generative model, not all the data generated by LLMs will improve downstream utility. Consequently, we introduce a principled curation mechanism that leverages learning dynamics, coupled with confidence and uncertainty metrics, to obtain a high-quality dataset. Empirically, on multiple real-world datasets, we demonstrate the superior performance of CLLM in the low-data regime compared to conventional generators. Additionally, we provide insights into the LLM generation and curation mechanisms, shedding light on the features that enable them to output high-quality augmented datasets.
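
As an illustration of curation via learning dynamics, here is a minimal sketch in the spirit of the mechanism described above; the SGD classifier, the data-maps-style statistics, and the thresholds are assumptions, not CLLM's exact procedure:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def curate(X_synth, y_synth, X_real, y_real, epochs=20,
           conf_min=0.7, var_max=0.05):
    """Keep synthetic samples on which a model trained on the small real set
    is confidently and stably correct across training epochs.
    conf_min / var_max are illustrative thresholds."""
    clf = SGDClassifier(loss="log_loss", random_state=0)
    classes = np.unique(y_real)
    probs = []  # per-epoch probability assigned to each synthetic label
    for _ in range(epochs):
        clf.partial_fit(X_real, y_real, classes=classes)
        p = clf.predict_proba(X_synth)
        probs.append(p[np.arange(len(y_synth)), np.searchsorted(classes, y_synth)])
    probs = np.stack(probs)             # (epochs, n_synth)
    confidence = probs.mean(axis=0)     # average confidence in the proposed label
    variability = probs.var(axis=0)     # (in)stability across learning dynamics
    keep = (confidence >= conf_min) & (variability <= var_max)
    return X_synth[keep], y_synth[keep]
```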

We derive the Alternating-Direction Implicit (ADI) method based on a commuting operator split and apply the results to the continuous-time algebraic Lyapunov equation with low-rank constant term and approximate solution. Previously, it had been mandatory to start the low-rank ADI (LR-ADI) with an all-zero initial value. Our approach retains the known efficient iteration schemes of low-rank increments and residuals for arbitrary low-rank initial values of the LR-ADI method. We further generalize some of the known properties of the LR-ADI for Lyapunov equations to larger classes of algorithms or problems. We investigate the performance of arbitrary initial values within two outer iterations from which the LR-ADI is typically called: first, solving an algebraic Riccati equation with the Newton method; second, solving a differential Riccati equation with a first-order Rosenbrock method. Numerical experiments confirm that the proposed new ADI initial values can lead to a significant reduction in the total number of ADI steps, showing a 17% and an 8x speed-up over the zero initial value for the two equation types, respectively.
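
For reference, a minimal sketch of the residual-based LR-ADI iteration with the classical zero initial value (which the paper generalizes to arbitrary low-rank initial values); real negative shifts are assumed, and the test matrices and shifts are hypothetical:

```python
import numpy as np

def lr_adi(A, B, shifts, tol=1e-10):
    """Low-rank ADI for A X + X A^T + B B^T = 0 with real negative shifts.
    Returns Z with X ~= Z Z^T; W is the low-rank residual factor, R = W W^T.
    Classical zero-initial-value formulation."""
    n, _ = B.shape
    I = np.eye(n)
    W = B.copy()
    Z = np.zeros((n, 0))
    for p in shifts:
        V = np.linalg.solve(A + p * I, W)
        W = W - 2.0 * p * V                       # update residual factor
        Z = np.hstack([Z, np.sqrt(-2.0 * p) * V])  # append low-rank increment
        if np.linalg.norm(W.T @ W, 2) < tol * np.linalg.norm(B.T @ B, 2):
            break
    return Z

# Hypothetical usage on a small stable system:
rng = np.random.default_rng(0)
A = -np.eye(50) + 0.1 * rng.standard_normal((50, 50))
B = rng.standard_normal((50, 2))
Z = lr_adi(A, B, shifts=[-0.5, -1.0, -2.0, -4.0] * 5)
X = Z @ Z.T
print("Lyapunov residual:", np.linalg.norm(A @ X + X @ A.T + B @ B.T))
```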

We study the dual of Philo's shortest line segment problem, which asks for the optimal line segments passing through two given points, with a common endpoint and with the other endpoints on a given line. The provided solution uses multivariable calculus and geometric methods. Interesting connections with the angle bisector of the triangle are explored. A generalization of the problem to the $L_p$ ($p\ge 1$) norm is proposed. The particular case $p=\infty$ is studied. The interesting case $p=2$ is posed as an open problem, and a related property of a symmedian of the triangle is conjectured.
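
A minimal numerical sketch of the setup, assuming the objective is to minimize the total $L_p$ length, that the given line is the $x$-axis, and that the common endpoint is the intersection of the lines through each foot and its given point; the points and starting values are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: the given line is the x-axis, A and B lie above it.
A = np.array([1.0, 2.0])
B = np.array([4.0, 1.0])

def apex(t1, t2):
    """Common endpoint P: intersection of line Q1-A with line Q2-B,
    where Q1 = (t1, 0) and Q2 = (t2, 0) lie on the given line."""
    Q1, Q2 = np.array([t1, 0.0]), np.array([t2, 0.0])
    M = np.column_stack([A - Q1, -(B - Q2)])
    s, _ = np.linalg.solve(M, Q2 - Q1)   # Q1 + s(A-Q1) = Q2 + u(B-Q2)
    return Q1 + s * (A - Q1), Q1, Q2

def total_length(t, p=2):
    """Sum of the two segment lengths in the L_p norm (p >= 1)."""
    P, Q1, Q2 = apex(*t)
    return np.linalg.norm(P - Q1, p) + np.linalg.norm(P - Q2, p)

res = minimize(total_length, x0=[0.0, 5.0], method="Nelder-Mead")
print("feet on the line:", res.x, "minimal total length:", res.fun)
```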

An essential problem in statistics and machine learning is the estimation of expectations involving PDFs with intractable normalizing constants. The self-normalized importance sampling (SNIS) estimator, which normalizes the IS weights, has become the standard approach due to its simplicity. However, SNIS has been shown to exhibit high variance in challenging estimation problems, e.g., those involving rare events or posterior predictive distributions in Bayesian statistics. Further, most state-of-the-art adaptive importance sampling (AIS) methods adapt the proposal as if the weights had not been normalized. In this paper, we propose a framework that recasts the original task as the estimation of a ratio of two integrals. In our new formulation, we obtain samples from a joint proposal distribution in an extended space, with two of its marginals playing the role of proposals used to estimate each integral. Importantly, the framework allows us to induce and control a dependency between the two estimators. We propose a construction of the joint proposal that decomposes into two (multivariate) marginals and a coupling. This leads to a two-stage framework that can be integrated with existing or new AIS and/or variational inference (VI) algorithms: the marginals are adapted in the first stage, while the coupling can be chosen and adapted in the second stage. We show in several examples the benefits of the proposed methodology, including an application to Bayesian prediction with misspecified models.
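
A minimal sketch of the ratio-of-integrals view with the simplest possible coupling, common random numbers shared between two Gaussian proposals; the target, integrand, and proposal parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unnormalized target (a shifted Gaussian) and integrand; both hypothetical.
log_pi_tilde = lambda x: -0.5 * (x - 3.0) ** 2
f = lambda x: x

def coupled_ratio_estimate(n=10_000, mu1=3.0, mu2=2.5, sig=1.5):
    """Estimate E_pi[f] = (int f * pi_tilde) / (int pi_tilde) with two coupled
    Gaussian proposals q1, q2 sharing the same standard-normal draws."""
    z = rng.standard_normal(n)            # common randomness = the coupling
    x1, x2 = mu1 + sig * z, mu2 + sig * z
    logq = lambda x, mu: -0.5 * ((x - mu) / sig) ** 2 - np.log(sig)
    w1 = np.exp(log_pi_tilde(x1) - logq(x1, mu1))  # weights for the numerator
    w2 = np.exp(log_pi_tilde(x2) - logq(x2, mu2))  # weights for the denominator
    return np.mean(w1 * f(x1)) / np.mean(w2)       # shared constants cancel

print(coupled_ratio_estimate())   # ~3.0, the target mean
```

Because the two estimators share the same underlying draws, their errors are positively correlated and partially cancel in the ratio; the framework above generalizes this by adapting the marginals and then choosing or adapting the coupling.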

In 2021, Casares, Colcombet and Fijalkow introduced the Alternating Cycle Decomposition (ACD), a structure used to define optimal transformations of Muller into parity automata and to obtain theoretical results about the possibility of relabelling automata with different acceptance conditions. In this work, we study the complexity of computing the ACD and its DAG-version, proving that this can be done in polynomial time for suitable representations of the acceptance condition of the Muller automaton. As corollaries, we obtain that we can decide typeness of Muller automata in polynomial time, as well as the parity index of the languages they recognise. Furthermore, we show that we can minimise in polynomial time the number of colours (resp. Rabin pairs) defining a Muller (resp. Rabin) acceptance condition, but that these problems become NP-complete when taking into account the structure of an automaton using such a condition.
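
The ACD refines the Zielonka tree of the Muller condition. As a point of reference, here is a sketch that computes the Zielonka tree for a Muller condition given as an explicit list of accepting sets; this brute-force construction is exponential in the number of colours and is meant only to illustrate the object, not the paper's polynomial-time algorithms:

```python
from itertools import combinations

def nonempty_subsets(s):
    s = sorted(s)
    return (frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(s, r))

def zielonka_tree(colors, F):
    """Zielonka tree of a Muller condition given explicitly as the family F of
    accepting colour sets (the empty set is treated as rejecting, a common
    convention).  Children of a node S are the inclusion-maximal subsets of S
    whose acceptance status differs from that of S."""
    F = {frozenset(s) for s in F}
    def children(S):
        status = S in F
        cand = [T for T in nonempty_subsets(S) if (T in F) != status]
        return [T for T in cand if not any(T < U for U in cand)]
    def build(S):
        return (sorted(S), [build(T) for T in children(S)])
    return build(frozenset(colors))

# Toy Muller condition over colours {0, 1}: accept iff Inf(run) = {0}.
print(zielonka_tree({0, 1}, F=[{0}]))   # ([0, 1], [([0], [])])
```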

Weakly Supervised Semantic Segmentation (WSSS) employs weak supervision, such as image-level labels, to train the segmentation model. Despite the impressive achievements of recent WSSS methods, we identify that introducing weak labels with high mean Intersection over Union (mIoU) does not guarantee high segmentation performance. Existing studies have emphasized the importance of prioritizing precision and reducing noise to improve overall performance. In the same vein, we propose ORANDNet, an advanced ensemble approach tailored for WSSS. ORANDNet combines Class Activation Maps (CAMs) from two different classifiers to increase the precision of pseudo-masks (PMs). To further mitigate small noise in the PMs, we incorporate curriculum learning: the segmentation model is first trained with pairs of smaller-sized images and corresponding PMs, then gradually transitions to the original-sized pairs. By combining the original CAMs of ResNet-50 and ViT, we significantly improve segmentation performance over both the single-best model and the naive ensemble model. We further extend our ensemble method to CAMs from AMN (ResNet-like) and MCTformer (ViT-like) models, achieving performance benefits in advanced WSSS models. This highlights the potential of ORANDNet as a final add-on module for WSSS models.
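
A minimal sketch of precision-oriented CAM fusion in the OR/AND spirit suggested by the name; the exact combination rule, normalization, and threshold used by ORANDNet are not specified here and are assumptions:

```python
import numpy as np

def fuse_cams(cam_a, cam_b, threshold=0.3):
    """Fuse two CAMs: an OR (pixel-wise max) proposes candidate regions with
    high recall, while an AND (pixel-wise min) keeps only pixels on which both
    classifiers agree, favouring precision.  Illustrative sketch only."""
    norm = lambda c: (c - c.min()) / (c.max() - c.min() + 1e-8)
    a, b = norm(cam_a), norm(cam_b)
    cam_or = np.maximum(a, b)     # high recall, noisier
    cam_and = np.minimum(a, b)    # high precision, conservative
    pseudo_mask = (cam_and >= threshold).astype(np.uint8)
    return cam_or, cam_and, pseudo_mask

# Hypothetical CAMs from the two classifiers (e.g., ResNet-50 and ViT):
h = w = 32
cam_resnet, cam_vit = np.random.rand(h, w), np.random.rand(h, w)
_, _, pm = fuse_cams(cam_resnet, cam_vit)
```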

The partial information decomposition (PID) aims to quantify the amount of redundant information that a set of sources provides about a target. Here, we show that this goal can be formulated as a type of information bottleneck (IB) problem, termed the "redundancy bottleneck" (RB). The RB formalizes a tradeoff between prediction and compression: it extracts information from the sources that best predict the target, without revealing which source provided the information. It can be understood as a generalization of "Blackwell redundancy", which we previously proposed as a principled measure of PID redundancy. The "RB curve" quantifies the prediction--compression tradeoff at multiple scales. This curve can also be quantified for individual sources, allowing subsets of redundant sources to be identified without combinatorial optimization. We provide an efficient iterative algorithm for computing the RB curve.
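
For orientation, the classic IB is computed by alternating self-consistent updates; the sketch below implements those standard updates, which the redundancy bottleneck generalizes (the actual RB updates differ in the compression term):

```python
import numpy as np

def ib_iterations(p_xy, beta, n_t, iters=200, seed=0):
    """Blahut-Arimoto-style updates for the classic information bottleneck,
    min I(X;T) - beta * I(T;Y), given a joint distribution p(x, y)."""
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)                    # marginal p(x)
    p_y_x = p_xy / p_x[:, None]               # conditional p(y|x)
    q_t_x = rng.dirichlet(np.ones(n_t), size=len(p_x))  # random encoder p(t|x)
    for _ in range(iters):
        q_t = q_t_x.T @ p_x                   # marginal p(t)
        q_y_t = (q_t_x * p_x[:, None]).T @ p_y_x / q_t[:, None]  # decoder p(y|t)
        # KL(p(y|x) || q(y|t)) for every (x, t) pair
        kl = (p_y_x[:, None, :] *
              (np.log(p_y_x[:, None, :] + 1e-12) -
               np.log(q_y_t[None, :, :] + 1e-12))).sum(axis=2)
        q_t_x = q_t[None, :] * np.exp(-beta * kl)   # self-consistent encoder
        q_t_x /= q_t_x.sum(axis=1, keepdims=True)
    return q_t_x

# Toy joint distribution over (X, Y):
p_xy = np.array([[0.25, 0.05], [0.05, 0.25], [0.2, 0.2]])
p_xy /= p_xy.sum()
enc = ib_iterations(p_xy, beta=5.0, n_t=2)
```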

A non-linear complex system governed by physics on multiple spatial and temporal scales cannot be fully understood with a single diagnostic, as each provides only a partial view, and much information is lost during data extraction. Combining multiple diagnostics also yields imperfect projections of the system's physics. By identifying hidden inter-correlations between diagnostics, we can leverage their mutual support to fill in these gaps, but uncovering these inter-correlations analytically is too complex. We introduce a groundbreaking machine learning methodology to address this issue. Our multimodal approach generates super-resolution data encompassing multiple physics phenomena, capturing detailed structural evolution and responses to perturbations that were previously unobservable. This methodology addresses a critical problem in fusion plasmas: the Edge Localized Mode (ELM), a plasma instability that can severely damage reactor walls. One method of stabilizing ELMs is to use resonant magnetic perturbations to trigger magnetic islands. However, the low spatial and temporal resolution of measurements limits the analysis of these magnetic islands due to their small size, rapid dynamics, and complex interactions within the plasma. With super-resolution diagnostics, we can experimentally verify theoretical models of magnetic islands for the first time, providing unprecedented insights into their role in ELM stabilization. This advancement aids the development of effective ELM suppression strategies for future fusion reactors like ITER and has broader applications, potentially revolutionizing diagnostics in fields such as astronomy, astrophysics, and medical imaging.
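
Purely as an illustration of the multimodal super-resolution idea (the paper's actual architecture is not specified in the abstract), a generic sketch of a network that super-resolves one diagnostic signal conditioned on a second, temporally aligned diagnostic:

```python
import torch
import torch.nn as nn

class MultimodalSR(nn.Module):
    """Generic sketch: super-resolve a low-resolution diagnostic conditioned
    on a correlated second diagnostic.  Purely illustrative architecture."""
    def __init__(self, ch=16, scale=4):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Conv1d(1, ch, 5, padding=2), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Conv1d(1, ch, 5, padding=2), nn.ReLU())
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="linear"),
            nn.Conv1d(2 * ch, ch, 5, padding=2), nn.ReLU(),
            nn.Conv1d(ch, 1, 5, padding=2),
        )

    def forward(self, low_res_a, aligned_b):
        # Fuse the two modalities along the channel axis, then upsample.
        h = torch.cat([self.enc_a(low_res_a), self.enc_b(aligned_b)], dim=1)
        return self.up(h)

model = MultimodalSR()
out = model(torch.randn(8, 1, 64), torch.randn(8, 1, 64))
print(out.shape)  # torch.Size([8, 1, 256])
```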

This paper presents the first analysis of parameter-uniform convergence for a hybridizable discontinuous Galerkin (HDG) method applied to a singularly perturbed convection-diffusion problem in 2D using a Shishkin mesh. The primary difficulty lies in accurately estimating the convection term in the layer, where existing methods often fall short. To address this, a novel error control technique is employed, along with reasonable assumptions regarding the stabilization function. The results show that, with polynomial degrees not exceeding $k$, the method achieves supercloseness of almost $k+\frac{1}{2}$ order in an energy norm. Numerical experiments confirm the theoretical accuracy and efficiency of the proposed method.
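
For readers unfamiliar with the mesh, a hedged sketch of the standard 1D Shishkin construction that such analyses build on; the choice $\sigma = k+1$ and the layer location are conventional assumptions:

```latex
% Standard 1D building block of a Shishkin mesh for a convection-diffusion
% problem  -\varepsilon u'' + b u' + c u = f  with  b \ge \beta > 0  on (0,1),
% whose boundary layer sits at x = 1.  With N mesh intervals and polynomial
% degree k, a common transition point is (conventions for \sigma vary):
\tau \;=\; \min\Bigl\{ \tfrac{1}{2},\; \sigma\,\tfrac{\varepsilon}{\beta}\,\ln N \Bigr\},
\qquad \sigma = k + 1,
% with N/2 uniform cells on the coarse part [0, 1-\tau] and N/2 uniform cells
% on the fine part [1-\tau, 1] resolving the layer.  The 2D Shishkin mesh in
% the paper is the tensor product of such 1D meshes.
```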
