
With the aim of analysing the performance of Markov chain Monte Carlo (MCMC) methods, in our recent work we derive a large deviation principle (LDP) for the empirical measures of Metropolis-Hastings (MH) chains on a continuous state space. One of the (sufficient) assumptions for the LDP involves the existence of a particular type of Lyapunov function, and it was left as an open question whether or not such a function exists for specific choices of MH samplers. In this paper we analyse the properties of such Lyapunov functions and investigate their existence for some of the most popular choices of MCMC samplers built on MH dynamics: Independent Metropolis-Hastings, Random Walk Metropolis, and the Metropolis-adjusted Langevin algorithm. We establish under what conditions such a Lyapunov function exists, and from this obtain LDPs for some instances of the MCMC algorithms under consideration. To the best of our knowledge, these are the first large deviation results for empirical measures associated with Metropolis-Hastings chains for specific choices of proposal and target distributions.
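For readers who want the concrete dynamics behind the three samplers named above, the following is a minimal sketch of Random Walk Metropolis and MALA proposals inside a Metropolis-Hastings loop; the standard Gaussian target, the step size h, and all function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def log_pi(x):
    # Illustrative target: standard Gaussian log-density (up to an additive constant).
    return -0.5 * np.dot(x, x)

def grad_log_pi(x):
    return -x

def mala_log_q(x_to, x_from, h):
    # Log-density (up to a constant) of the MALA proposal kernel q(x_to | x_from).
    mean = x_from + 0.5 * h * grad_log_pi(x_from)
    return -np.dot(x_to - mean, x_to - mean) / (2.0 * h)

def mh_chain(x0, n_steps, h=0.5, kind="rwm", seed=0):
    """Run a Metropolis-Hastings chain whose empirical measure approximates pi.
    kind is 'rwm' (Random Walk Metropolis) or 'mala' (Metropolis-adjusted Langevin)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for n in range(n_steps):
        if kind == "rwm":
            y = x + np.sqrt(h) * rng.standard_normal(x.shape)
            log_alpha = log_pi(y) - log_pi(x)  # symmetric proposal: q-terms cancel
        else:
            y = x + 0.5 * h * grad_log_pi(x) + np.sqrt(h) * rng.standard_normal(x.shape)
            log_alpha = (log_pi(y) - log_pi(x)
                         + mala_log_q(x, y, h) - mala_log_q(y, x, h))
        if np.log(rng.uniform()) < log_alpha:
            x = y
        samples[n] = x
    return samples
```

For instance, `mh_chain(np.zeros(2), 10_000, kind="mala")` produces a chain whose empirical measure approximates the illustrative Gaussian target.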

Related Content

In the present work, we examine and analyze an hp-version interior penalty discontinuous Galerkin finite element method for the numerical approximation of a steady fluid system on computational meshes consisting of polytopic elements on the boundary. This approach is based on the discontinuous Galerkin method, enriched by arbitrarily shaped element techniques, as introduced in [13]. In this framework, and employing extensions of trace, Markov-type, and H1/L2-type inverse estimates to arbitrary element shapes, we examine a stationary Stokes fluid system, proving the inf-sup condition and hp-version a priori error estimates, and we investigate the optimal convergence rates numerically. This approach recovers and integrates the flexibility and advantages of discontinuous Galerkin methods for fluid problems in which geometrical deformations occur, by degenerating the edges/facets of the polytopic elements only on the boundary, combined with the efficiency of hp-version techniques based on arbitrarily shaped elements, without requiring any mapping from a given reference frame.
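For orientation, the viscous part of such an interior penalty discretization is typically built from the standard symmetric interior penalty (SIPG) bilinear form, recalled here for a scalar model problem; the penalty parameter $\sigma$, the face set $\mathcal{F}_h$, and the scaling $h_F$ are generic choices, not the paper's specific polytopic constructions:

$$ a_h(u,v) = \sum_{K\in\mathcal{T}_h}\int_K \nabla u\cdot\nabla v\,\mathrm{d}x - \sum_{F\in\mathcal{F}_h}\int_F \big(\{\!\{\nabla u\}\!\}\cdot\mathbf{n}_F\,[\![v]\!] + [\![u]\!]\,\{\!\{\nabla v\}\!\}\cdot\mathbf{n}_F\big)\,\mathrm{d}s + \sum_{F\in\mathcal{F}_h}\int_F \frac{\sigma}{h_F}\,[\![u]\!]\,[\![v]\!]\,\mathrm{d}s, $$

with $\{\!\{\cdot\}\!\}$ and $[\![\cdot]\!]$ the usual average and jump operators; the Stokes system additionally carries the pressure-velocity coupling terms whose well-posedness rests on the inf-sup condition referred to above.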

In this work, we develop a new Bayesian method for variable selection in function-on-scalar regression (FOSR). Our method uses a hierarchical Bayesian structure and latent variables to enable an adaptive covariate selection process for FOSR. Extensive simulation studies show the proposed method's main properties, such as its accuracy in estimating the coefficients and its high capacity to select variables correctly. Furthermore, we conducted a substantial comparative analysis with the main competing methods: the BGLSS (Bayesian Group Lasso with Spike and Slab prior) method, the group LASSO (Least Absolute Shrinkage and Selection Operator), the group MCP (Minimax Concave Penalty), and the group SCAD (Smoothly Clipped Absolute Deviation). Our results demonstrate that the proposed methodology is superior in correctly selecting covariates compared with the existing competing methods while maintaining a satisfactory level of goodness of fit. In contrast, the competing methods could not balance selection accuracy with goodness of fit. We also considered a COVID-19 dataset and some socioeconomic data from Brazil as an application and obtained satisfactory results. In short, the proposed Bayesian variable selection model is highly competitive, showing significant predictive and selective quality.
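As a point of reference (a generic formulation, not the paper's exact hierarchical prior), function-on-scalar regression models a functional response through scalar covariates as

$$ y_i(t) = \sum_{j=1}^{p} x_{ij}\,\beta_j(t) + \varepsilon_i(t), \qquad \beta_j(t) = \sum_{k=1}^{K} b_{jk}\,\phi_k(t), $$

so that selecting the covariate $x_j$ amounts to deciding whether the whole coefficient vector $(b_{j1},\dots,b_{jK})$ is zero, e.g. through a spike-and-slab prior placed on the group of basis coefficients.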

If AI is the new electricity, what should we do to keep ourselves from getting electrocuted? In this work, we explore factors related to the potential of large language models (LLMs) to manipulate human decisions. We describe the results of two experiments designed to determine which characteristics of humans are associated with their susceptibility to LLM manipulation, and which characteristics of LLMs are associated with their manipulative potential. We explore human factors by conducting user studies in which participants answer general-knowledge questions using LLM-generated hints, and LLM factors by provoking language models to create manipulative statements. We then analyze the models' obedience, the persuasion strategies used, and the choice of vocabulary. Based on these experiments, we discuss two actions that can protect us from LLM manipulation. In the long term, we put AI literacy at the forefront, arguing that educating society would minimize the risk of manipulation and its consequences. We also propose an ad hoc solution: a classifier that detects manipulation by LLMs, a Manipulation Fuse.
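As an illustration of what a detection "fuse" could look like (a deliberately simple baseline under assumed labelled data, not the classifier proposed in the paper), one could train a bag-of-words model to flag manipulative hints:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled data: LLM-generated hints annotated as manipulative (1) or not (0).
texts = ["You should definitely pick answer B, everyone agrees.",
         "The capital of France is Paris."]
labels = [1, 0]

# A simple baseline "fuse": TF-IDF features followed by logistic regression.
fuse = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                     LogisticRegression(max_iter=1000))
fuse.fit(texts, labels)

# Flag a new hint as manipulative or not.
print(fuse.predict(["Trust me, ignore the other options."]))
```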

Advances in artificial intelligence (AI) will transform many aspects of our lives and society, bringing immense opportunities but also posing significant risks and challenges. The next several decades may well be a turning point for humanity, comparable to the industrial revolution. We write to share a set of recommendations for moving forward from the perspective of the founder and leaders of the One Hundred Year Study on AI. Launched a decade ago, the project is committed to a perpetual series of studies by multidisciplinary experts to evaluate the immediate, longer-term, and far-reaching effects of AI on people and society, and to make recommendations about AI research, policy, and practice. As we witness new capabilities emerging from neural models, it is crucial that we engage in efforts to advance our scientific understanding of these models and their behaviors. We must address the impact of AI on people and society through technical, social, and sociotechnical lenses, incorporating insights from a diverse range of experts including voices from engineering, social, behavioral, and economic disciplines. By fostering dialogue, collaboration, and action among various stakeholders, we can strategically guide the development and deployment of AI in ways that maximize its potential for contributing to human flourishing. Despite the growing divide in the field between focusing on short-term versus long-term implications, we think both are of critical importance. As Alan Turing, one of the pioneers of AI, wrote in 1950, "We can only see a short distance ahead, but we can see plenty there that needs to be done." We offer ten recommendations for action that collectively address both the short- and long-term potential impacts of AI technologies.

In this paper, we develop uniform- and variable-time-step weighted and shifted BDF2 (WSBDF2) methods for the anisotropic Cahn-Hilliard (CH) model, combining the scalar auxiliary variable (SAV) approach with two types of stabilization techniques. Using the concept of $G$-stability, the uniform-time-step WSBDF2 method is theoretically proved to be energy-stable. Since the relevant $G$-stability properties are not applicable in the variable-step setting, a different technique is adopted in this work to establish the energy stability of the variable-time-step WSBDF2 method. In addition, both numerical schemes are mass-conservative. Finally, numerous numerical simulations are presented to demonstrate the stability and accuracy of these schemes.
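For context (and not the paper's weighted and shifted variant), the plain uniform-time-step BDF2 discretization of an evolution equation $\partial_t\phi = f(\phi)$ and the usual SAV substitution read

$$ \frac{3\phi^{n+1}-4\phi^{n}+\phi^{n-1}}{2\tau} = f(\phi^{n+1}), \qquad r(t) = \sqrt{\int_\Omega F(\phi)\,\mathrm{d}x + C_0}, $$

where $F$ is the bulk free-energy density and $C_0>0$ keeps the square root well defined; the WSBDF2 schemes above modify the BDF2 weights and shifts so that energy stability can be shown via $G$-stability in the uniform-step case and via an alternative argument in the variable-step case.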

We consider problems where many, somewhat redundant, hypotheses are tested and we are interested in reporting the most precise rejections, with false discovery rate (FDR) control. This is the case, for example, when researchers are interested both in individual hypotheses as well as group hypotheses corresponding to intersections of sets of the original hypotheses, at several resolution levels. A concrete application is in genome-wide association studies, where, depending on the signal strengths, it might be possible to resolve the influence of individual genetic variants on a phenotype with greater or lower precision. To adapt to the unknown signal strength, analyses are conducted at multiple resolutions and researchers are most interested in the more precise discoveries. Assuring FDR control on the reported findings with these adaptive searches is, however, often impossible. To design a multiple comparison procedure that allows for an adaptive choice of resolution with FDR control, we leverage e-values and linear programming. We adapt this approach to problems where knockoffs and group knockoffs have been successfully applied to test conditional independence hypotheses. We demonstrate its efficacy by analyzing data from the UK Biobank.
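For reference, the base e-value multiple testing procedure (the e-BH procedure of Wang and Ramdas) is simple to state and implement; the sketch below shows that baseline only, not the paper's linear-programming construction for adaptive resolution:

```python
import numpy as np

def e_bh(e_values, alpha=0.1):
    """Base e-BH procedure: reject the hypotheses with the k largest e-values,
    where k is the largest index such that the k-th largest e-value is >= m/(alpha*k).
    Returns a boolean rejection mask of length m."""
    e = np.asarray(e_values, dtype=float)
    m = e.size
    order = np.argsort(-e)                      # indices sorted by decreasing e-value
    thresholds = m / (alpha * np.arange(1, m + 1))
    passing = np.nonzero(e[order] >= thresholds)[0]
    k_hat = passing.max() + 1 if passing.size else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k_hat]] = True
    return reject

# Example: three large e-values among mostly small ones are rejected at alpha = 0.1.
print(e_bh([40.0, 1.2, 55.0, 0.3, 70.0, 0.8], alpha=0.1))
```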

In (Dzanic, J. Comp. Phys., 508:113010, 2024), a limiting approach for high-order discontinuous Galerkin schemes was introduced which allowed for imposing constraints on the solution continuously (i.e., everywhere within the element). While exact for linear constraint functionals, this approach only imposed a sufficient (but not the minimum necessary) amount of limiting for nonlinear constraint functionals. This short note shows how this limiting approach can be extended to allow exactness for general nonlinear quasiconcave constraint functionals through a nonlinear limiting procedure, reducing unnecessary numerical dissipation. Some examples are shown for nonlinear pressure and entropy constraints in the compressible gas dynamics equations, where both analytic and iterative approaches are used.
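As a rough illustration of the iterative route mentioned above, a standard convex-blending ("squeeze") limiter finds the largest blending factor toward the element mean that satisfies a quasiconcave constraint $g(u)\ge 0$; the nodal sketch below is a generic baseline, not the continuous, element-wide formulation developed in the referenced work:

```python
import numpy as np

def squeeze_limit(u_nodes, u_mean, g, tol=1e-10, max_iter=60):
    """Convex-blending limiter: find the largest theta in [0, 1] such that
    u_mean + theta*(u - u_mean) satisfies g(.) >= 0 at every node, via bisection.
    Assumes the element mean itself satisfies the constraint, g(u_mean) >= 0."""
    def ok(theta):
        blended = u_mean + theta * (u_nodes - u_mean)
        return np.all(g(blended) >= 0.0)
    if ok(1.0):
        return u_nodes  # no limiting needed
    lo, hi = 0.0, 1.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if ok(mid):
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return u_mean + lo * (u_nodes - u_mean)

# Example: enforce positivity (g(u) = u) on nodal values with one negative node.
print(squeeze_limit(np.array([0.9, 1.1, -0.2]), 0.6, lambda u: u))
```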

In this paper, we propose a new algorithm, the irrational-window-filter projection method (IWFPM), for solving arbitrary-dimensional global quasiperiodic systems. Based on the projection method (PM), IWFPM further exploits the concentrated distribution of the Fourier coefficients to select the relevant spectral points with an irrational window. Moreover, a corresponding index-shift transform is designed to make the Fast Fourier Transform applicable. A corresponding error analysis at the function-approximation level is also given. We apply IWFPM to 1D, 2D, and 3D quasiperiodic Schr\"odinger eigenproblems to demonstrate its accuracy and efficiency. IWFPM exhibits a significant computational advantage over PM for both extended and localized quantum states. Furthermore, the widespread occurrence of such concentrated spectral-point distributions endows IWFPM with significant potential for broader applications to quasiperiodic systems.
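As background on the projection method that IWFPM builds on (the standard representation of a quasiperiodic function, not the paper's window construction): a $d$-dimensional quasiperiodic function is the restriction of an $n$-dimensional periodic parent function,

$$ f(\mathbf{x}) = \sum_{\mathbf{k}\in\mathbb{Z}^n} \hat{f}_{\mathbf{k}}\, e^{\,\mathrm{i}\langle \mathbf{P}\mathbf{k},\,\mathbf{x}\rangle} = F\big(\mathbf{P}^{\mathsf{T}}\mathbf{x}\big), \qquad \mathbf{P}\in\mathbb{R}^{d\times n}, $$

so PM works with the Fourier coefficients $\hat{f}_{\mathbf{k}}$ of the parent function $F$ on the $n$-dimensional torus, and the irrational window in IWFPM retains only the index region where these coefficients concentrate.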

Relying on sheaf theory, we introduce the notions of projected barcodes and projected distances for multi-parameter persistence modules. Projected barcodes are defined as derived pushforwards of persistence modules onto $\mathbb{R}$. Projected distances come in two flavors: the integral sheaf metrics (ISM) and the sliced convolution distances (SCD). We conduct a systematic study of the stability of projected barcodes and show that the fibered barcode is a particular instance of a projected barcode. We prove that the ISM and the SCD provide lower bounds for the convolution distance. Furthermore, we show that the $\gamma$-linear ISM and the $\gamma$-linear SCD, which are projected distances tailored for $\gamma$-sheaves, can be computed using TDA software dedicated to one-parameter persistence modules. Moreover, the time and memory complexity required to compute these two metrics is advantageous, since our approach requires neither computing nor storing an entire $n$-persistence module.
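In symbols (following the abstract's own description, with standard notation assumed): a projected barcode associated with a map $u\colon \mathbb{R}^n \to \mathbb{R}$ is the barcode of the derived pushforward, $\mathcal{B}_u(M) = \mathcal{B}(\mathrm{R}u_{*} M)$, whereas the fibered barcode collects the barcodes of the restrictions of $M$ to lines $\ell \subset \mathbb{R}^n$ of positive slope, which is how it arises as a particular instance of the projected construction.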

The Hierarchy Of Time-Surfaces (HOTS) algorithm, a neuromorphic approach for feature extraction from event data, presents promising capabilities but faces challenges in accuracy and compatibility with neuromorphic hardware. In this paper, we introduce Sup3r, a Semi-Supervised algorithm aimed at addressing these challenges. Sup3r enhances sparsity, stability, and separability in HOTS networks. By leveraging semi-supervised learning, it enables end-to-end online training of HOTS networks, replacing external classifiers. Sup3r learns class-informative patterns, mitigates confounding features, and reduces the number of processed events. Moreover, Sup3r facilitates continual and incremental learning, allowing adaptation to data-distribution shifts and learning of new tasks without forgetting. Preliminary results on N-MNIST demonstrate that Sup3r achieves accuracy comparable to similarly sized Artificial Neural Networks trained with back-propagation. This work showcases the potential of Sup3r to advance the capabilities of HOTS networks, offering a promising avenue for neuromorphic algorithms in real-world applications.
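For readers new to HOTS, the basic building block is the exponentially decaying time surface attached to each incoming event (standard HOTS machinery, not Sup3r's training rule); a minimal sketch follows, with the decay constant tau and the window radius as illustrative parameters:

```python
import numpy as np

def time_surface(last_ts, x, y, t, radius=2, tau=50e-3):
    """Exponentially decaying time surface around an event at pixel (x, y) and time t.

    last_ts: 2D array holding the timestamp of the most recent event at each pixel
             (use -inf where no event has occurred yet).
    Returns a (2*radius+1, 2*radius+1) patch with values in [0, 1]."""
    patch = last_ts[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return np.exp(-(t - patch) / tau)

# Example: a 5x5 surface centred on pixel (10, 10) at time t = 0.2 s.
H, W = 32, 32
last_ts = np.full((H, W), -np.inf)
last_ts[10, 10] = 0.19   # a recent event at the centre
last_ts[9, 11] = 0.10    # an older event nearby
print(time_surface(last_ts, x=10, y=10, t=0.2))
```

In full HOTS, such patches are accumulated per polarity and clustered into prototype surfaces layer by layer; Sup3r, per the abstract, replaces the external classifier stage with semi-supervised, end-to-end online training.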
