
We report an implementation of the McMurchie-Davidson (MD) algorithm for 3-center and 4-center 2-particle integrals over Gaussian atomic orbitals (AOs) with low and high angular momenta $l$ and varying degrees of contraction for graphical processing units (GPUs). This work builds upon our recent implementation of a matrix form of the MD algorithm that is efficient for GPU evaluation of 4-center 2-particle integrals over Gaussian AOs of high angular momenta ($l\geq 4$) [$\mathit{J. Phys. Chem. A}\ \mathbf{127}$, 10889 (2023)]. The use of unconventional data layouts and three variants of the MD algorithm allows integrals to be evaluated in double precision with sustained performance between 25% and 70% of the theoretical hardware peak. The performance assessment includes integrals over AOs with $l\leq 6$ (higher $l$ is supported). A preliminary implementation of the Hartree-Fock exchange operator is presented and assessed for computations with up to quadruple-zeta basis sets and more than 20,000 AOs. The corresponding C++ code is part of the experimental open-source $\mathtt{LibintX}$ library available at $\mathbf{github.com:ValeevGroup/LibintX}$.
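As background to the MD scheme referenced above, the sketch below (not taken from LibintX; Python is used purely for illustration, and the conventions follow the standard textbook presentation of the recurrence) generates the 1D Hermite expansion coefficients $E_t^{ij}$ of a product of two Cartesian Gaussians, which is the basic quantity that MD-type integral codes evaluate in bulk on the GPU.

```python
import math
from functools import lru_cache

def hermite_coefficients(i, j, a, b, A, B):
    """Hermite expansion coefficients E_t^{ij} (t = 0..i+j) of the 1D Gaussian
    product exp(-a(x-A)^2) * exp(-b(x-B)^2), via the standard McMurchie-Davidson
    recurrence.  Illustrative only; production codes vectorize this over shell pairs."""
    p = a + b                  # total exponent
    mu = a * b / p             # reduced exponent
    XAB = A - B
    P = (a * A + b * B) / p    # Gaussian product center
    XPA, XPB = P - A, P - B

    @lru_cache(maxsize=None)
    def E(t, i, j):
        if t < 0 or t > i + j:
            return 0.0
        if i == 0 and j == 0:
            return math.exp(-mu * XAB * XAB) if t == 0 else 0.0
        if i > 0:              # reduce the first angular momentum index
            return (E(t - 1, i - 1, j) / (2 * p)
                    + XPA * E(t, i - 1, j)
                    + (t + 1) * E(t + 1, i - 1, j))
        # otherwise reduce the second angular momentum index
        return (E(t - 1, i, j - 1) / (2 * p)
                + XPB * E(t, i, j - 1)
                + (t + 1) * E(t + 1, i, j - 1))

    return [E(t, i, j) for t in range(i + j + 1)]

# Example: coefficients for a (d|p) pair of primitives along one Cartesian axis.
print(hermite_coefficients(2, 1, a=1.3, b=0.8, A=0.0, B=1.1))
```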

Related content

Integration, the VLSI Journal. Publisher: Elsevier.

The rapid development of Large Language Models (LLMs) and Generative Pre-Trained Transformers (GPTs) in the field of Generative Artificial Intelligence (AI) can significantly impact task automation in the modern economy. We anticipate that the field of Probabilistic Risk Assessment (PRA) will inevitably be affected by this technology. Thus, the main goal of this paper is to engage the risk assessment community in a discussion of the benefits and drawbacks of this technology for PRA. We present a preliminary analysis of possible applications of LLMs in the PRA modeling context, drawing on ongoing experience in the software engineering field. We explore potential application scenarios and the necessary conditions for controlled LLM usage in PRA modeling (whether static or dynamic). Additionally, we consider the potential impact of this technology on PRA modeling tools.

Emphasis in the tensor literature on random embeddings (tools for low-distortion dimension reduction) for the canonical polyadic (CP) tensor decomposition has left analogous results for the more expressive Tucker decomposition comparatively lacking. This work establishes general Johnson-Lindenstrauss (JL) type guarantees for the estimation of Tucker decompositions when an oblivious random embedding is applied along each mode. When these embeddings are drawn from a JL-optimal family, the decomposition can be estimated within $\varepsilon$ relative error under restrictions on the embedding dimension that are in line with recent CP results. We implement a higher-order orthogonal iteration (HOOI) decomposition algorithm with random embeddings to demonstrate the practical benefits of this approach and its potential to improve the accessibility of otherwise prohibitive tensor analyses. On moderately large face image and fMRI neuroimaging datasets, empirical results show that substantial dimension reduction is possible with minimal increase in reconstruction error relative to traditional HOOI ($\leq$5% larger error, 50%-60% lower computation time for large models with 50% dimension reduction along each mode). Especially for large tensors, our method outperforms traditional higher-order singular value decomposition (HOSVD) and recently proposed TensorSketch methods.
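As a rough illustration of the general flavor of combining random embeddings with Tucker estimation (a one-pass randomized HOSVD-style range finder in numpy; this is not the paper's HOOI-with-embeddings algorithm or its specific embedding families, and the function names are my own), one might sketch each mode-$n$ unfolding with a Gaussian embedding before extracting the Tucker factors:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor (rows indexed by the given mode)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def refold(M, mode, shape):
    """Inverse of unfold."""
    full = list(shape)
    full.insert(0, full.pop(mode))
    return np.moveaxis(M.reshape(full), 0, mode)

def randomized_hosvd(T, ranks, oversample=10, rng=None):
    """Tucker factors via a Gaussian sketch of each mode-n unfolding,
    followed by an orthonormal range estimate (randomized HOSVD)."""
    rng = np.random.default_rng(rng)
    factors = []
    for n, r in enumerate(ranks):
        Tn = unfold(T, n)                                    # I_n x prod(I_m)
        Omega = rng.standard_normal((Tn.shape[1], r + oversample))
        Q, _ = np.linalg.qr(Tn @ Omega)                      # range sketch
        U, _, _ = np.linalg.svd(Q.T @ Tn, full_matrices=False)
        factors.append(Q @ U[:, :r])                         # I_n x r factor
    # Core tensor: contract each mode with the transposed factor.
    G = T
    for n, U in enumerate(factors):
        G = refold(U.T @ unfold(G, n), n,
                   G.shape[:n] + (U.shape[1],) + G.shape[n + 1:])
    return G, factors
```

Relative to the HOOI-with-embeddings algorithm studied in the paper, this one-pass variant trades accuracy for simplicity; the embedding dimension `r + oversample` plays the role of the JL embedding dimension discussed above.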

Fairness is one of the most commonly identified ethical principles in existing AI guidelines, and the development of fair AI-enabled systems is required by new and emerging AI regulation. But most approaches to addressing the fairness of AI-enabled systems are limited in scope in two significant ways: their substantive content focuses on statistical measures of fairness, and they do not emphasize the need to identify and address fairness considerations across the whole AI lifecycle. Our contribution is to present an assurance framework and tool that enable a practical and transparent method for widening the scope of fairness considerations across the AI lifecycle and for moving the discussion beyond merely statistical notions of fairness towards a richer, context-dependent analysis. To illustrate this approach, we first describe and then apply the framework of Trustworthy and Ethical Assurance (TEA) to an AI-enabled clinical diagnostic support system (CDSS) whose purpose is to help clinicians predict the risk of developing hypertension in patients with Type 2 diabetes, a context in which several fairness considerations arise (e.g., discrimination against patient subgroups). This is supplemented by an open-source tool and a fairness considerations map to help facilitate reasoning about the fairness of AI-enabled systems in a participatory way. In short, by using a shared framework for identifying, documenting, and justifying fairness considerations, and then using this deliberative exercise to structure an assurance case, research on AI fairness becomes reusable and generalizable for others in the ethical AI community, supporting the sharing of best practices for achieving fairness and equity in digital health and healthcare in particular.

A Riemannian geometric framework for Markov chain Monte Carlo (MCMC) is developed in which the Fisher-Rao metric on the manifold of probability density functions (pdfs) is used to construct informed proposal densities for Metropolis-Hastings (MH) algorithms. We exploit the square-root representation of pdfs, under which the Fisher-Rao metric reduces to the standard $L^2$ metric on the positive orthant of the unit hypersphere. The square-root representation allows us to easily compute the geodesic distance between densities, resulting in a straightforward implementation of the proposed geometric MCMC methodology. Unlike the random walk MH algorithm, which blindly proposes a candidate state using no information about the target, the geometric MH algorithms effectively move an uninformed base density (e.g., a random walk proposal density) towards different global/local approximations of the target density. We compare the proposed geometric MH algorithms with other MCMC algorithms under various Markov chain orderings, namely the covariance, efficiency, Peskun, and spectral gap orderings. The superior performance of the geometric algorithms over other MH algorithms, such as the random walk Metropolis, independent MH, and variants of the Metropolis-adjusted Langevin algorithm, is demonstrated on a range of multimodal, nonlinear, and high-dimensional examples. In particular, we use extensive simulations and real data applications to compare these algorithms for analyzing mixture models, logistic regression models, and ultra-high-dimensional Bayesian variable selection models. A publicly available R package accompanies the article.
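To make the square-root geometry concrete, here is a small numpy sketch (my own toy code, not the accompanying R package) of the two ingredients the abstract mentions: the geodesic distance on the unit hypersphere of square-root densities, and moving a base density along the geodesic towards a target. Depending on convention, the Fisher-Rao distance carries an extra factor of 2 relative to the arc length shown here.

```python
import numpy as np
from scipy.stats import norm

def bhattacharyya(p, q, dx):
    """L2 inner product of the square-root densities (Bhattacharyya coefficient)."""
    return np.sum(np.sqrt(p * q)) * dx

def fisher_rao_geodesic(p, q, t, dx):
    """Point at time t on the great-circle geodesic between sqrt(p) and sqrt(q)
    on the unit sphere; squaring maps it back to a density."""
    theta = np.arccos(np.clip(bhattacharyya(p, q, dx), -1.0, 1.0))
    psi = (np.sin((1 - t) * theta) * np.sqrt(p)
           + np.sin(t * theta) * np.sqrt(q)) / np.sin(theta)
    return psi**2

# Toy example: move a broad base density partway towards a target density.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
base, target = norm.pdf(x, 0.0, 3.0), norm.pdf(x, 2.0, 1.0)
theta = np.arccos(bhattacharyya(base, target, dx))
print("arc length on the unit sphere:", theta)
proposal = fisher_rao_geodesic(base, target, 0.5, dx)   # halfway density
print("integrates to ~1:", np.sum(proposal) * dx)
```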

Estimating the sharing of genetic effects across different conditions is important to many statistical analyses of genomic data. The patterns of sharing arising from these data are often highly heterogeneous. To flexibly model these heterogeneous sharing patterns, Urbut et al. (2019) proposed the multivariate adaptive shrinkage (MASH) method to jointly analyze genetic effects across multiple conditions. However, multivariate analyses using MASH (as well as other multivariate analyses) require good estimates of the sharing patterns, and estimating these patterns efficiently and accurately remains challenging. Here we describe new empirical Bayes methods that provide improvements in speed and accuracy over existing methods. The two key ideas are: (1) adaptive regularization to improve accuracy in settings with many conditions; and (2) exploiting analytical results on covariance estimation to speed up the model fitting algorithms. In simulations, we show that the new methods provide better model fits, better out-of-sample performance, and improved power and accuracy in detecting the true underlying signals. In an analysis of eQTLs in 49 human tissues, our new analysis pipeline achieves better model fits and better out-of-sample performance than the existing MASH analysis pipeline. We have implemented the new methods, which we call ``Ultimate Deconvolution'', in an R package, udr, available on GitHub.
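For readers unfamiliar with the model class, the methods above target an empirical Bayes multivariate normal means problem; a stripped-down evaluation of its marginal log-likelihood (ignoring MASH's scaling grid and the udr fitting algorithms, so treat the details as a simplifying assumption rather than the paper's exact model) looks roughly like:

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def ebmnm_loglik(B, V, U_list, pi):
    """Marginal log-likelihood of a simplified empirical Bayes multivariate
    normal means model: b_j | theta_j ~ N(theta_j, V_j), theta_j ~ sum_k pi_k N(0, U_k),
    hence marginally b_j ~ sum_k pi_k N(0, U_k + V_j)."""
    J, R = B.shape
    zero = np.zeros(R)
    total = 0.0
    for j in range(J):
        comp = [np.log(pi[k]) + multivariate_normal.logpdf(B[j], zero, U_list[k] + V[j])
                for k in range(len(U_list))]
        total += logsumexp(comp)   # log sum_k pi_k N(b_j; 0, U_k + V_j)
    return total

# Tiny example with R = 2 conditions and K = 2 candidate sharing patterns.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 2))
V = np.array([np.eye(2)] * 5)
U_list = [np.zeros((2, 2)), np.array([[1.0, 0.9], [0.9, 1.0]])]
print(ebmnm_loglik(B, V, U_list, pi=[0.5, 0.5]))
```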

We provide a posteriori error estimates for a discontinuous Galerkin scheme for the parabolic-elliptic Keller-Segel system in 2 or 3 space dimensions. The estimates are conditional, in the sense that an a posteriori computable quantity needs to be small enough (which can be ensured by mesh refinement), and optimal, in the sense that the error estimator decays with the same order as the error under mesh refinement. A specific feature of our error estimator is that it can be used to prove existence of a weak solution up to a certain time based on numerical results.
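For reference, one common statement of the parabolic-elliptic Keller-Segel system on a bounded domain $\Omega\subset\mathbb{R}^d$, $d\in\{2,3\}$, with no-flux boundary conditions is (conventions for the elliptic equation vary, so this is indicative rather than the paper's exact formulation):
$$\partial_t u = \Delta u - \nabla\cdot(u\,\nabla v), \qquad 0 = \Delta v - v + u \quad \text{in } \Omega, \qquad \partial_n u = \partial_n v = 0 \ \text{on } \partial\Omega.$$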

The literature shows the possible existence of a collinearity problem in both the Nelson-Siegel and Nelson-Siegel-Svensson models due to the relationship between the slope and curvature components. The presence of this problem, combined with estimation of both models by Ordinary Least Squares, would lead to coefficient estimates that may be unstable, among other consequences. However, these estimates are used to make monetary policy decisions. For this reason, it is important to mitigate this collinearity problem. Consequently, some authors propose traditional procedures for the treatment of collinearity, such as non-linear optimisation, fixing the shape parameter, or ridge regression. Nevertheless, all these approaches have their disadvantages. Alternatively, this paper proposes a new method with good properties called raise regression. Finally, the methodologies are illustrated with an empirical comparison on Euribor Overnight Index Swap and Euribor Interest Rate Swap data between 2011 and 2021.
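The collinearity issue above stems from the shape of the Nelson-Siegel factor loadings; the toy numpy sketch below (Diebold-Li parameterization; the maturity grid and shape parameter are illustrative choices of mine) builds the regressor matrix and inspects how strongly the slope and curvature columns are related:

```python
import numpy as np

def nelson_siegel_loadings(tau, lam):
    """Nelson-Siegel factor loadings (Diebold-Li parameterization):
    level = 1, slope = (1 - e^{-lam*tau})/(lam*tau),
    curvature = slope - e^{-lam*tau}."""
    x = lam * tau
    slope = (1.0 - np.exp(-x)) / x
    curvature = slope - np.exp(-x)
    return np.column_stack([np.ones_like(tau), slope, curvature])

# Maturities in years and a commonly cited shape parameter (0.0609 per month).
tau = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 15, 20, 30], dtype=float)
X = nelson_siegel_loadings(tau, lam=0.0609 * 12)
print("corr(slope, curvature) =", np.corrcoef(X[:, 1], X[:, 2])[0, 1])
print("condition number of X  =", np.linalg.cond(X))
```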

We consider a convex constrained Gaussian sequence model and characterize necessary and sufficient conditions for the least squares estimator (LSE) to be optimal in a minimax sense. For a closed convex set $K\subset \mathbb{R}^n$ we observe $Y=\mu+\xi$ for $\xi\sim N(0,\sigma^2\mathbb{I}_n)$ and $\mu\in K$ and aim to estimate $\mu$. We characterize the worst case risk of the LSE in multiple ways by analyzing the behavior of the local Gaussian width on $K$. We demonstrate that optimality is equivalent to a Lipschitz property of the local Gaussian width mapping. We also provide theoretical algorithms that search for the worst case risk. We then provide examples showing optimality or suboptimality of the LSE on various sets, including $\ell_p$ balls for $p\in[1,2]$, pyramids, solids of revolution, and multivariate isotonic regression, among others.
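In the notation suggested by the abstract (the paper's precise definitions may differ), the two central objects are the constrained least squares estimator and the local Gaussian width of $K$ around $\mu$:
$$\hat\mu_K(Y) \in \arg\min_{v\in K}\,\|Y-v\|_2^2, \qquad w_\mu(t) := \mathbb{E}\Big[\sup_{v\in K,\;\|v-\mu\|_2\le t}\langle \xi,\, v-\mu\rangle\Big].$$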

In 1934, the American statistician Samuel S. Wilks derived remarkable formulas for the joint moments of embedded principal minors of sample covariance matrices in multivariate Gaussian populations, and he used them to compute the moments of sample statistics in various applications related to multivariate linear regression. These important but little-known moment results were extended in 1963 by the Australian statistician A. Graham Constantine using Bartlett's decomposition. In this note, a new proof of Wilks' results is derived using the concept of iterated Schur complements, thereby bypassing Bartlett's decomposition. Furthermore, Wilks' open problem of evaluating joint moments of disjoint principal minors of Wishart random matrices is related to the Gaussian product inequality conjecture.

We consider an unknown multivariate function representing a system (such as a complex numerical simulator) that takes both deterministic and uncertain inputs. Our objective is to estimate the set of deterministic inputs leading to outputs whose probability (with respect to the distribution of the uncertain inputs) of belonging to a given set is less than a given threshold. This problem, which we call Quantile Set Inversion (QSI), occurs, for instance, in the context of robust (reliability-based) optimization problems, when looking for the set of solutions that satisfy the constraints with sufficiently large probability. To solve the QSI problem, we propose a Bayesian strategy based on Gaussian process modeling and the Stepwise Uncertainty Reduction (SUR) principle to sequentially choose the points at which the function should be evaluated so as to efficiently approximate the set of interest. We illustrate the performance and interest of the proposed SUR strategy through several numerical experiments.
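The set targeted by QSI can be written as $\Gamma = \{x : \mathbb{P}_U\big(f(x,U)\in C\big) \le \alpha\}$; the brute-force Monte Carlo sketch below (a toy reference in Python, not the paper's GP/SUR strategy; the example function, threshold, and distributions are my own illustrative choices) simply evaluates this definition on a grid:

```python
import numpy as np

def qsi_set_montecarlo(f, x_grid, u_samples, in_C, alpha):
    """Brute-force reference for the Quantile Set Inversion target set:
    Gamma = { x : P_U( f(x, U) in C ) <= alpha }.
    Estimated by Monte Carlo over the uncertain input U for each candidate x."""
    members = []
    for x in x_grid:
        outputs = np.array([f(x, u) for u in u_samples])
        prob = np.mean(in_C(outputs))
        members.append(prob <= alpha)
    return np.asarray(members)

# Toy example: scalar deterministic input x, uncertain input U ~ N(0, 1),
# "undesirable" output region C = {y : y > 1}, threshold alpha = 0.05.
rng = np.random.default_rng(0)
f = lambda x, u: np.sin(3 * x) + 0.5 * u
x_grid = np.linspace(0.0, 2.0, 201)
u_samples = rng.standard_normal(2000)
gamma = qsi_set_montecarlo(f, x_grid, u_samples, in_C=lambda y: y > 1.0, alpha=0.05)
print("fraction of the x-grid in the QSI set:", gamma.mean())
```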
