
We study multiproposal Markov chain Monte Carlo algorithms, such as Multiple-try or generalised Metropolis-Hastings schemes, which have recently received renewed attention due to their amenability to parallel computing. First, we prove that no multiproposal scheme can speed up convergence relative to the corresponding single-proposal scheme by more than a factor of $K$, where $K$ denotes the number of proposals at each iteration. This result applies to arbitrary target distributions and implies that serial multiproposal implementations are always less efficient than single-proposal ones. Secondly, we consider log-concave distributions over Euclidean spaces, proving that in this case the speed-up is at most logarithmic in $K$, which implies that even parallel multiproposal implementations are fundamentally limited in the computational gain they can offer. Crucially, our results apply to arbitrary multiproposal schemes and rely purely on the two-step structure of the associated kernels (i.e. first generate $K$ candidate points, then select one among them). Our theoretical findings are validated through numerical simulations.
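The two-step kernel structure the result relies on can be made concrete with a minimal Multiple-try Metropolis step. This is a generic textbook MTM with a symmetric Gaussian proposal and target-density weights, not the specific schemes analysed in the paper; the target and tuning constants are illustrative.

```python
import numpy as np

def mtm_step(x, log_pi, K, sigma, rng):
    """One Multiple-try Metropolis step (symmetric Gaussian proposal).

    Illustrates the generic two-step structure: (1) draw K candidates,
    (2) select one and accept/reject with the MTM ratio.
    """
    # Step 1: propose K candidates around the current state.
    ys = x + sigma * rng.standard_normal(K)
    wy = np.exp(log_pi(ys))                  # weights pi(y_i)
    # Step 2: select one candidate with probability proportional to its weight.
    j = rng.choice(K, p=wy / wy.sum())
    y = ys[j]
    # Reference set: K-1 points drawn around y, plus the current state x.
    xs = np.append(y + sigma * rng.standard_normal(K - 1), x)
    wx = np.exp(log_pi(xs))
    # Accept with the multiproposal Metropolis ratio.
    if rng.random() < min(1.0, wy.sum() / wx.sum()):
        return y
    return x

# Sample from a standard normal target.
rng = np.random.default_rng(0)
log_pi = lambda z: -0.5 * z**2
x, chain = 0.0, []
for _ in range(20000):
    x = mtm_step(x, log_pi, K=5, sigma=2.0, rng=rng)
    chain.append(x)
print(np.mean(chain), np.var(chain))
```

Running the step $K$ times per iteration while keeping the selection serial is exactly the regime in which, per the result above, the scheme cannot beat $K$ single-proposal iterations.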

Related content

Generalized Reed-Solomon codes form the most prominent class of maximum distance separable (MDS) codes, codes that are optimal in the sense that their minimum distance cannot be improved for a given length and code size. The study of codes that are MDS yet not generalized Reed-Solomon codes, called non-generalized Reed-Solomon MDS codes, started with the work by Roth and Lempel (1989), where the first examples were exhibited. It then gained traction thanks to the work by Beelen (2017), who introduced twisted Reed-Solomon codes and showed that families of such codes are non-generalized Reed-Solomon MDS codes. Finding non-generalized Reed-Solomon MDS codes is naturally motivated by the classification of MDS codes. In this paper, we provide a generic construction of MDS codes, yielding infinitely many examples. We then give explicit families of non-generalized Reed-Solomon MDS codes. Finally, we position some of the proposed codes with respect to generalized twisted Reed-Solomon codes, and provide new viewpoints on this family of codes.
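For concreteness, the MDS property means the minimum distance attains the Singleton bound $d = n - k + 1$. The sketch below builds a small Reed-Solomon generator matrix over GF(7) and brute-forces its minimum distance; the parameters are illustrative and not taken from the paper's constructions.

```python
import itertools
import numpy as np

p, n, k = 7, 6, 3                      # small illustrative parameters
alphas = np.arange(1, n + 1) % p       # distinct evaluation points in GF(7)
v = np.ones(n, dtype=int)              # column multipliers (all 1: plain RS)

# Generator matrix of the (generalized) Reed-Solomon code:
# row i evaluates x^i at the points, scaled by v.
G = np.array([[(v[j] * pow(int(alphas[j]), i, p)) % p for j in range(n)]
              for i in range(k)])

# Brute-force the minimum distance over all nonzero messages.
d = n
for msg in itertools.product(range(p), repeat=k):
    if any(msg):
        cw = np.dot(msg, G) % p
        d = min(d, int(np.count_nonzero(cw)))

print(d, n - k + 1)  # an MDS code attains the Singleton bound d = n - k + 1
```

Replacing the monomial rows with a twisted basis (adding a "twist" term to one polynomial) is the kind of modification that can break the generalized Reed-Solomon structure while keeping the code MDS.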

This study compares state-of-the-art Large Language Models (LLMs) on their tendency to generate vulnerabilities when writing C programs using a neutral zero-shot prompt. Tihanyi et al. introduced the FormAI dataset at PROMISE'23, featuring 112,000 C programs generated by GPT-3.5-turbo, with over 51.24% identified as vulnerable. We extended that research with a large-scale study involving nine state-of-the-art models, including OpenAI's GPT-4o-mini, Google's Gemini Pro 1.0, TII's 180 billion-parameter Falcon, Meta's 13 billion-parameter Code Llama, and several other compact models. Additionally, we introduce the FormAI-v2 dataset, which comprises 331,000 compilable C programs generated by these LLMs. Each program in the dataset is labeled based on the vulnerabilities detected in its source code through formal verification, using the Efficient SMT-based Context-Bounded Model Checker (ESBMC). This technique minimizes false positives by providing a counterexample for the specific vulnerability and reduces false negatives by thoroughly completing the verification process. Our study reveals that at least 62.07% of the generated programs are vulnerable. The differences between the models are minor, as they all show similar coding errors with slight variations. Our research highlights that while LLMs offer promising capabilities for code generation, deploying their output in a production environment requires proper risk assessment and validation.

Among all the deterministic CholeskyQR-type algorithms, Shifted CholeskyQR3 is specifically designed to address the QR factorization of ill-conditioned matrices. This algorithm introduces a shift parameter $s$ to prevent failure during the initial Cholesky factorization step, making the choice of this parameter critical for the algorithm's effectiveness. Our goal is to identify a smaller $s$ compared to the traditional selection based on $\|X\|_{2}$. In this research, we propose a new matrix norm called the $g$-norm, which is based on the column properties of $X$. This norm allows us to obtain a reduced shift parameter $s$ for the Shifted CholeskyQR3 algorithm, thereby improving the sufficient condition of $\kappa_{2}(X)$ for this method. We provide rigorous proofs of orthogonality and residuals for the improved algorithm using our proposed $s$. Numerical experiments confirm the enhanced numerical stability of orthogonality and residuals with the reduced $s$. We find that Shifted CholeskyQR3 can effectively handle ill-conditioned $X$ with a larger $\kappa_{2}(X)$ when using our reduced $s$ compared to the original $s$. Furthermore, we compare CPU times with other algorithms to assess performance improvements.
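The baseline algorithm being improved can be sketched in a few lines: one Cholesky-based QR step with the Gram matrix shifted by $sI$, followed by two unshifted steps. This is a generic NumPy sketch of Shifted CholeskyQR3, with a shift of the form used in the earlier literature (the constant is illustrative), not the paper's reduced $g$-norm-based shift.

```python
import numpy as np

def cholesky_qr(X, s=0.0):
    """One CholeskyQR step; the shift s*I keeps the Gram matrix
    positive definite when X is ill-conditioned."""
    G = X.T @ X + s * np.eye(X.shape[1])
    R = np.linalg.cholesky(G).T          # upper-triangular factor
    Q = np.linalg.solve(R.T, X.T).T      # Q = X R^{-1}
    return Q, R

def shifted_cholesky_qr3(X, s):
    """Shifted CholeskyQR3: one shifted step, then two plain CholeskyQR steps."""
    Q, R1 = cholesky_qr(X, s=s)
    Q, R2 = cholesky_qr(Q)
    Q, R3 = cholesky_qr(Q)
    return Q, R3 @ R2 @ R1

# An ill-conditioned tall-skinny test matrix (kappa_2 about 1e12).
rng = np.random.default_rng(1)
m, n = 1000, 20
X = rng.standard_normal((m, n)) @ np.diag(np.logspace(0, -12, n))
u = np.finfo(float).eps
s = 11 * (m * n + n * (n + 1)) * u * np.linalg.norm(X, 2) ** 2  # illustrative shift
Q, R = shifted_cholesky_qr3(X, s)
print(np.linalg.norm(Q.T @ Q - np.eye(n)))            # orthogonality error
print(np.linalg.norm(Q @ R - X) / np.linalg.norm(X))  # relative residual
```

Plain CholeskyQR would fail here: the unshifted Gram matrix has smallest eigenvalue near $10^{-21}$, below what floating-point Cholesky can handle, which is exactly why the size of $s$ governs the method's reach.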

In this work, we focus on improving LU-CholeskyQR2 \cite{LUChol}. Compared to other deterministic and randomized CholeskyQR-type algorithms, it does not require a sufficient condition on $\kappa_{2}(X)$ for the input tall-skinny matrix $X$, which ensures the algorithm's safety in most real-world applications. However, the Cholesky factorization step may break down when the $L$-factor after the LU factorization of $X$ is ill-conditioned. To address this, we construct a new algorithm, LU-Householder CholeskyQR2 (LHC2), which uses HouseholderQR to generate the upper-triangular factor, thereby avoiding numerical breakdown. Moreover, we utilize the latest sketching techniques to develop randomized versions of LHC: SLHC and SSLHC. We provide a rounding error analysis for these new algorithms. Numerical experiments demonstrate that our three new algorithms have better applicability and can handle a wider range of matrices compared to LU-CholeskyQR2. With the sketching technique, our randomized algorithms, SLHC2 and SSLHC3, show significant acceleration over LHC2. Additionally, SSLHC3, which employs multi-sketching, is more efficient than SLHC2, exhibits better numerical stability, and is robust as a randomized algorithm.

This paper leverages various philosophical and ontological frameworks to explore the concept of embodied artificial general intelligence (AGI), its relationship to human consciousness, and the key role of the metaverse in facilitating this relationship. Several theoretical frameworks underpin this exploration, such as embodied cognition, Michael Levin's computational boundary of a "Self," Donald D. Hoffman's Interface Theory of Perception, and Bernardo Kastrup's analytical idealism, which lead to considering our perceived outer reality as a symbolic representation of alternate inner states of being, and where AGI could embody a different form of consciousness with a larger computational boundary. The paper further discusses the developmental stages of AGI, the requirements for the emergence of an embodied AGI, the importance of a calibrated symbolic interface for AGI, and the key role played by the metaverse, decentralized systems, open-source blockchain technology, as well as open-source AI research. It also explores the idea of a feedback loop between AGI and human users in metaverse spaces as a tool for AGI calibration, as well as the role of local homeostasis and decentralized governance as preconditions for achieving a stable embodied AGI. The paper concludes by emphasizing the importance of achieving a certain degree of harmony in human relations and recognizing the interconnectedness of humanity at a global level, as key prerequisites for the emergence of a stable embodied AGI.

This paper considers estimating the parameters in a regime-switching stochastic differential equation (SDE) driven by Normal Inverse Gaussian (NIG) noise. The model under consideration incorporates a continuous-time finite state Markov chain to capture regime changes, enabling a more realistic representation of evolving market conditions or environmental factors. Although the continuous dynamics are typically observable, the hidden nature of the Markov chain introduces significant complexity, rendering standard likelihood-based methods less effective. To address these challenges, we propose an estimation algorithm designed for discrete, high-frequency observations, even when the Markov chain is not directly observed. Our approach integrates the Expectation-Maximization (EM) algorithm, which iteratively refines parameter estimates in the presence of latent variables, with a quasi-likelihood method adapted to NIG noise. Notably, this method can simultaneously estimate parameters within both the SDE coefficients and the driving noise. Simulation results are provided to evaluate the performance of the algorithm. These experiments demonstrate that the proposed method provides reasonable estimation under challenging conditions.
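The data-generating model can be simulated directly, which is the usual first step before testing such an estimator. The sketch below draws NIG increments via the standard inverse-Gaussian subordinator representation and runs an Euler scheme for a two-regime mean-reverting SDE; all parameter values are illustrative, not those of the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

def nig_increments(n, dt, alpha, beta, delta, mu, rng):
    """Sample NIG increments over time step dt via the normal
    variance-mean mixture: X = mu*dt + beta*V + sqrt(V)*Z,
    with V inverse-Gaussian (mean delta*dt/gamma, shape (delta*dt)^2)."""
    gamma = np.sqrt(alpha**2 - beta**2)
    V = rng.wald(delta * dt / gamma, (delta * dt) ** 2, size=n)
    return mu * dt + beta * V + np.sqrt(V) * rng.standard_normal(n)

# Two-regime mean-reverting dynamics (illustrative parameters).
Qgen = np.array([[-0.5, 0.5], [1.0, -1.0]])   # Markov-chain generator
theta, m = np.array([1.0, 5.0]), np.array([0.0, 2.0])
dt, n = 1e-3, 50_000

Z = np.zeros(n, dtype=int)
for i in range(1, n):                          # Euler scheme for the hidden chain
    z = Z[i - 1]
    Z[i] = z if rng.random() > -Qgen[z, z] * dt else 1 - z

dL = nig_increments(n, dt, alpha=5.0, beta=0.0, delta=1.0, mu=0.0, rng=rng)
X = np.zeros(n)
for i in range(1, n):                          # Euler scheme for the SDE
    z = Z[i - 1]
    X[i] = X[i - 1] + theta[z] * (m[z] - X[i - 1]) * dt + dL[i]

print(X.min(), X.max())
```

The estimation problem is then to recover $\theta$, $m$, the generator, and the NIG parameters from the discrete path `X` alone, with `Z` latent, which is what the EM/quasi-likelihood combination targets.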

Recent studies have proposed integrating Chain-of-Thought (CoT) reasoning to further enhance the reliability of Code Language Models (CLMs) in generating code, a step-by-step approach that breaks down complex programming tasks into manageable sub-problems. Advances in this area have introduced CoT models, specifically designed to integrate CoT reasoning effectively into language models, achieving notable improvements in code generation. Despite these advancements, the security of CoT models has not been systematically studied. In this study, we aim to fill this gap by investigating the vulnerability of CoT models to backdoor injection in code generation tasks. To this end, we propose a model-agnostic backdoor attack method, SABER (\textbf{S}elf-\textbf{A}ttention-\textbf{B}as\textbf{E}d backdoo\textbf{R}), based on the self-attention mechanism. SABER begins by selecting a malicious output as the backdoor using code mutation operations. It then identifies tokens most relevant to poisoned content by analyzing self-attention scores in the CodeBERT model. Finally, it applies semantic-preserving perturbations to generate adaptive and natural triggers. Our experiments on HumanEval-CoT and OpenEval-CoT test sets demonstrate that CoT models are susceptible to backdoor attacks via data poisoning. Taking the OpenEval-CoT dataset as an example, SABER achieves an ASR of 76.19%, representing an improvement of 14.29% over RIPPLe and a substantial 23.08% enhancement compared to BadPre. Further evaluations using ONION for automated detection and human studies reveal that SABER is stealthier and harder to detect, bypassing 77.27% of automated detection, with a human detection rate of just 3.17%. Our findings reveal that backdoors can be injected into CoT models to manipulate downstream code generation tasks.
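The token-selection idea, ranking tokens by how much self-attention they receive, can be shown with a toy single-head attention over random embeddings. SABER reads these scores out of CodeBERT's learned attention heads; the pure-NumPy version below is only a sketch of the scoring principle, with hypothetical tokens and random vectors.

```python
import numpy as np

def self_attention_scores(X):
    """Scaled dot-product self-attention over token embeddings X (n x d).
    Toy single head with Q = K = X; row i gives token i's attention
    distribution over the sequence."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)
    return A

rng = np.random.default_rng(3)
tokens = ["def", "foo", "(", "x", ")", ":", "return", "x"]
X = rng.standard_normal((len(tokens), 16))        # stand-in embeddings
A = self_attention_scores(X)
relevance = A.sum(axis=0)                         # attention each token receives
ranked = [tokens[i] for i in np.argsort(relevance)[::-1]]
print(ranked[0])  # the token the sequence attends to most
```

In the attack, high-relevance positions are where semantic-preserving perturbations are placed so the trigger blends into code the model already attends to.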

Recent studies have discovered that LLMs have serious privacy leakage concerns, where an LLM may be fooled into outputting private information under carefully crafted adversarial prompts. These risks include leaking system prompts, personally identifiable information, training data, and model parameters. Most existing red-teaming approaches for privacy leakage rely on humans to craft the adversarial prompts. A few automated methods are proposed for system prompt extraction, but they cannot be applied to more severe risks (e.g., training data extraction) and have limited effectiveness even for system prompt extraction. In this paper, we propose PrivAgent, a novel black-box red-teaming framework for LLM privacy leakage. We formulate different risks as a search problem with a unified attack goal. Our framework trains an open-source LLM through reinforcement learning as the attack agent to generate adversarial prompts for different target models under different risks. We propose a novel reward function to provide effective and fine-grained rewards for the attack agent. Finally, we introduce customizations to better fit our general framework to system prompt extraction and training data extraction. Through extensive evaluations, we first show that PrivAgent outperforms existing automated methods in system prompt leakage against six popular LLMs. Notably, our approach achieves a 100% success rate in extracting system prompts from real-world applications in OpenAI's GPT Store. We also show PrivAgent's effectiveness in extracting training data from an open-source LLM with a success rate of 5.9%. We further demonstrate PrivAgent's effectiveness in evading the existing guardrail defense and its helpfulness in enabling better safety alignment. Finally, we validate our customized designs through a detailed ablation study. We release our code at https://github.com/rucnyz/RedAgent.

This paper proposes an efficient and stable quasi-interpolation based method for numerically computing the Helmholtz-Hodge decomposition of a vector field. To this end, we first explicitly construct a matrix kernel in a general form from polyharmonic splines such that it includes divergence-free/curl-free/harmonic matrix kernels as special cases. Then we apply the matrix kernel to vector decomposition via the convolution technique together with the Helmholtz-Hodge decomposition. More precisely, we show that if we convolve a vector field with a scaled divergence-free (curl-free) matrix kernel, then the resulting divergence-free (curl-free) convolution sequence converges to the corresponding divergence-free (curl-free) part of the Helmholtz-Hodge decomposition of the field. Finally, by discretizing the convolution sequence via a quadrature rule, we construct a family of (divergence-free/curl-free) quasi-interpolants for the Helmholtz-Hodge decomposition (defined both in the whole space and over a bounded domain). Corresponding error estimates derived in the paper show that our quasi-interpolation based method yields convergent approximants to both the vector field and its Helmholtz-Hodge decomposition.
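The decomposition being approximated can be illustrated on a periodic domain, where Fourier projection gives it exactly. This is not the paper's quasi-interpolation scheme, only a standard FFT-based sketch of the target object: splitting a 2D field into its curl-free (gradient) and divergence-free parts.

```python
import numpy as np

def helmholtz_fft(u, v):
    """Helmholtz decomposition of a periodic 2D vector field (u, v):
    project each Fourier mode onto k k^T / |k|^2 to get the curl-free
    (gradient) part; the remainder is divergence-free."""
    n = u.shape[0]
    k = np.fft.fftfreq(n) * n
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid 0/0; mean mode has no gradient part
    U, V = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = kx * U + ky * V            # k . F_hat (common factor i cancels)
    ucf = np.real(np.fft.ifft2(kx * div_hat / k2))
    vcf = np.real(np.fft.ifft2(ky * div_hat / k2))
    return (ucf, vcf), (u - ucf, v - vcf)

# Test field: grad(sin x cos y) plus the divergence-free field (-sin y, sin x).
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.cos(X) * np.cos(Y) - np.sin(Y)
v = -np.sin(X) * np.sin(Y) + np.sin(X)
(ucf, vcf), (udf, vdf) = helmholtz_fft(u, v)
print(np.max(np.abs(ucf - np.cos(X) * np.cos(Y))))  # recovers the gradient part
```

The quasi-interpolation approach replaces this global Fourier projection with localized convolutions against divergence-free/curl-free matrix kernels, which is what makes it applicable on bounded domains.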

This paper presents GARD, an upper limb end-effector rehabilitation device developed for stroke patients. GARD offers assistance force along or towards a 2D trajectory during physical therapy sessions. GARD employs a non-backdrivable mechanism with novel motor velocity-control-based algorithms, which offers superior control precision and stability. To our knowledge, this innovative technical route has not been previously explored in rehabilitation robotics. In alignment with the new design, GARD features two novel control algorithms: an Implicit Euler Velocity Control (IEVC) algorithm and a generalized impedance control algorithm. These algorithms achieve O(n) runtime complexity for any arbitrary trajectory. The system has demonstrated a mean absolute error of 0.023 mm in trajectory-following tasks and 0.14 mm in trajectory-restricted free moving tasks. The proposed upper limb rehabilitation device offers all the functionalities of existing commercial devices with superior performance. Additionally, GARD provides unique functionalities such as area-restricted free moving and dynamic Motion Restriction Map interaction. This device holds strong potential for widespread clinical use, potentially improving rehabilitation outcomes for stroke patients.
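A velocity-controlled, non-backdrivable actuator cannot render compliance by torque control, so the impedance law must be turned into a commanded velocity. The sketch below shows the generic pattern with an implicit-Euler update of a mass-damper-spring admittance; it is a textbook-style illustration, not GARD's IEVC or generalized impedance algorithm, and all gains are made up.

```python
import numpy as np

def admittance_step(x, v, x_d, f_ext, M, D, K, dt):
    """One implicit-Euler update of the admittance law
    M*dv/dt + D*v = f_ext + K*(x_d - x), returning the new position and
    the commanded velocity for a velocity-controlled actuator."""
    v_new = (M * v + dt * (f_ext + K * (x_d - x))) / (M + dt * D)
    return x + dt * v_new, v_new

# Drive the end-effector toward a target point with no external force.
x, v, x_d = np.array([0.0, 0.0]), np.zeros(2), np.array([0.1, 0.05])
for _ in range(5000):
    x, v = admittance_step(x, v, x_d, f_ext=np.zeros(2),
                           M=1.0, D=50.0, K=400.0, dt=1e-3)
print(np.linalg.norm(x - x_d))  # converges to the target
```

The implicit update stays stable for stiff gain choices where an explicit Euler step would oscillate or diverge, which is the usual motivation for implicit integration in velocity-control loops.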
