
Clinical artificial intelligence (AI) methods have been proposed for predicting social behaviors that could otherwise be reasonably understood only from patient-reported data. This raises ethical concerns about respect, privacy, and patient awareness of, and control over, how their health data are used. Ethical concerns surrounding clinical AI systems for social behavior verification fall into three main categories: (1) the retrospective use of patient data, without informed consent, for the specific task of verification; (2) the potential for inaccuracies or biases within such systems; and (3) the impact on trust in patient-provider relationships when automated AI systems are introduced for fact-checking. Additionally, this report demonstrated the simulated misuse of a verification system and identified a potential LLM bias against patient-reported information in favor of multimodal data, published literature, and the outputs of other AI methods (i.e., AI self-trust). Finally, recommendations were presented for mitigating the risk that AI verification systems will harm patients or undermine the purpose of the healthcare system.

Related content

The journal Artificial Intelligence (AI) is widely recognized as the premier international forum for publishing the latest research results in the field. The journal welcomes papers on broad aspects of AI that constitute advances for the field as a whole, as well as papers presenting AI applications, where the emphasis should be on how new and novel AI methods improve performance in the application domain, rather than on yet another application of conventional AI methods. Application papers should describe a principled solution, emphasize its novelty, and provide an in-depth evaluation of the AI techniques being developed. Official website:

The sample compression theory provides generalization guarantees for predictors that can be fully defined using a subset of the training dataset and a (short) message string, generally defined as a binary sequence. Previous works provided generalization bounds for the zero-one loss, which is restrictive, notably when applied to deep learning approaches. In this paper, we present a general framework for deriving new sample compression bounds that hold for real-valued losses. We empirically demonstrate the tightness of the bounds and their versatility by evaluating them on different types of models, e.g., neural networks and decision forests, trained with the Pick-To-Learn (P2L) meta-algorithm, which transforms the training method of any machine-learning predictor to yield sample-compressed predictors. In contrast to existing P2L bounds, ours are valid in the non-consistent case.
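
To make the compression mechanism concrete, here is a minimal sketch of a P2L-style loop, assuming a generic scikit-learn base learner and a zero-one loss with tolerance `eps` (both illustrative choices, not the configuration used in the paper): the predictor is retrained on a growing compression set until every remaining training point is predicted within tolerance.

```python
# Sketch of a Pick-To-Learn (P2L)-style meta-algorithm: build a predictor that
# is fully defined by a small subset (the compression set) of the training data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def pick_to_learn(X, y, eps=0.0, max_rounds=200, seed=0):
    rng = np.random.default_rng(seed)
    compression = [int(rng.integers(len(X)))]        # start from one arbitrary point
    model = DecisionTreeClassifier(random_state=seed)
    for _ in range(max_rounds):
        model.fit(X[compression], y[compression])    # retrain on the compression set
        losses = (model.predict(X) != y).astype(float)  # zero-one loss for simplicity
        losses[compression] = -np.inf                # compressed points are settled
        worst = int(np.argmax(losses))
        if losses[worst] <= eps:                     # all remaining points handled
            return model, compression
        compression.append(worst)                    # pick the hardest point next
    return model, compression
```

With a real-valued loss in place of the zero-one check, the same loop yields the non-consistent setting that the new bounds cover.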

Applying differential privacy (DP) by means of the DP-SGD algorithm to protect individual data points during training is becoming increasingly popular in NLP. However, the choice of granularity at which DP is applied is often neglected. For example, neural machine translation (NMT) typically operates on the sentence-level granularity. From the perspective of DP, this setup assumes that each sentence belongs to a single person and any two sentences in the training dataset are independent. This assumption is however violated in many real-world NMT datasets, e.g., those including dialogues. For proper application of DP we thus must shift from sentences to entire documents. In this paper, we investigate NMT at both the sentence and document levels, analyzing the privacy/utility trade-off for both scenarios, and evaluating the risks of not using the appropriate privacy granularity in terms of leaking personally identifiable information (PII). Our findings indicate that the document-level NMT system is more resistant to membership inference attacks, emphasizing the significance of using the appropriate granularity when working with DP.
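
As a minimal illustration of the granularity shift, the sketch below groups sentence pairs by a hypothetical `doc_id` field and applies per-document (rather than per-sentence) gradient clipping before noising, which is the standard way to make the document the unit protected by DP-SGD; the clipping and noise step is schematic, not tied to any particular NMT toolkit.

```python
# Document-level DP: clip each document's summed gradient, then noise the batch.
from collections import defaultdict
import numpy as np

def group_by_document(records):
    """records: dicts with 'doc_id', 'src', 'tgt' keys (assumed schema)."""
    docs = defaultdict(list)
    for r in records:
        docs[r["doc_id"]].append((r["src"], r["tgt"]))
    return list(docs.values())

def dp_sgd_step(doc_grads, clip_norm=1.0, sigma=1.0, rng=None):
    """doc_grads: one summed gradient vector per document in the batch."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in doc_grads]                   # bound each document's influence
    noise = rng.normal(0.0, sigma * clip_norm, size=clipped[0].shape)
    return (np.sum(clipped, axis=0) + noise) / len(doc_grads)
```

Clipping at the sentence level instead would let a long dialogue contribute many separately clipped gradients, which is exactly the under-protection this paper warns against.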

In recent years, research involving human participants has been critical to advances in artificial intelligence (AI) and machine learning (ML), particularly in the areas of conversational, human-compatible, and cooperative AI. For example, roughly 9% of publications at recent AAAI and NeurIPS conferences indicate the collection of original human data. Yet AI and ML researchers lack guidelines for ethical research practices with human participants. Fewer than one out of every four of these AAAI and NeurIPS papers confirm independent ethical review, the collection of informed consent, or participant compensation. This paper aims to bridge this gap by examining the normative similarities and differences between AI research and related fields that involve human participants. Though psychology, human-computer interaction, and other adjacent fields offer historic lessons and helpful insights, AI research presents several distinct considerations, namely participatory design, crowdsourced dataset development, and an expansive role of corporations, that necessitate a contextual ethics framework. To address these concerns, this manuscript outlines a set of guidelines for ethical and transparent practice with human participants in AI and ML research. Overall, this paper seeks to equip technical researchers with practical knowledge for their work, and to position them for further dialogue with social scientists, behavioral researchers, and ethicists.

Recent progress in artificial intelligence (AI) has been driven by insights from neuroscience, particularly with the development of artificial neural networks (ANNs). This has significantly enhanced the replication of complex cognitive tasks such as vision and natural language processing. Despite these advances, ANNs struggle with continual learning, adaptable knowledge transfer, robustness, and resource efficiency - capabilities that biological systems handle seamlessly. Specifically, ANNs often overlook the functional and morphological diversity of the brain, hindering their computational capabilities. Furthermore, incorporating cell-type specific neuromodulatory effects into ANNs with neuronal heterogeneity could enable learning at two spatial scales: spiking behavior at the neuronal level, and synaptic plasticity at the circuit level, thereby potentially enhancing their learning abilities. In this article, we summarize recent bio-inspired models, learning rules and architectures and propose a biologically-informed framework for enhancing ANNs. Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors and dendritic compartments to simulate morphological and functional diversity of neuronal computations. Finally, we outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balances bioinspiration and complexity, and provides scalable solutions for pressing AI challenges, such as continual learning, adaptability, robustness, and resource-efficiency.
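
For readers unfamiliar with the spiking substrate the framework builds on, here is a minimal leaky integrate-and-fire (LIF) neuron, the simplest model of the neuron-level spiking behavior discussed above; all parameter values are illustrative textbook defaults, not the article's.

```python
# Minimal leaky integrate-and-fire (LIF) neuron simulation.
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Integrate a current trace (amps); return voltages (V) and spike times (s)."""
    v, spikes, voltages = v_rest, [], []
    for t, i_t in enumerate(input_current):
        v += (-(v - v_rest) + r_m * i_t) * (dt / tau)  # leaky integration
        if v >= v_thresh:                              # threshold crossing -> spike
            spikes.append(t * dt)
            v = v_reset                                # reset membrane potential
        voltages.append(v)
    return np.array(voltages), spikes

# A constant suprathreshold current produces regular spiking.
v_trace, spike_times = lif_simulate(np.full(1000, 2.0e-9))
```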

We introduce estimatable variation neural networks (EVNNs), a class of neural networks that allow a computationally cheap estimate on the $BV$ norm motivated by the space $BMV$ of functions with bounded M-variation. We prove a universal approximation theorem for EVNNs and discuss possible implementations. We construct sequences of loss functionals for ODEs and scalar hyperbolic conservation laws for which a vanishing loss leads to convergence. Moreover, we show the existence of sequences of loss minimizing neural networks if the solution is an element of $BMV$. Several numerical test cases illustrate that it is possible to use standard techniques to minimize these loss functionals for EVNNs.
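
As a point of reference for the quantity being controlled, the snippet below gives a cheap grid-based estimate of the 1-D BV seminorm of a function; this is a generic finite-difference estimate, not the EVNN-specific estimate constructed in the paper.

```python
# Grid estimate of the total variation (BV seminorm) of f on [a, b].
import numpy as np

def bv_seminorm_estimate(f, a=0.0, b=1.0, n=10_000):
    """Sum |f(x_{k+1}) - f(x_k)| over a uniform grid of n points."""
    fx = f(np.linspace(a, b, n))
    return np.sum(np.abs(np.diff(fx)))

# For a monotone ramp the estimate matches the exact variation f(b) - f(a) = 1.
print(bv_seminorm_estimate(lambda x: np.clip(3 * x - 1, 0.0, 1.0)))
```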

Faithfully summarizing the knowledge encoded by a deep neural network (DNN) into a few symbolic primitive patterns without losing much information represents a core challenge in explainable AI. To this end, Ren et al. (2024) have derived a series of theorems to prove that the inference score of a DNN can be explained as a small set of interactions between input variables. However, the lack of generalization power across models still makes it hard to consider such interactions faithful primitive patterns encoded by the DNN. Therefore, given different DNNs trained for the same task, we develop a new method to extract interactions that are shared by these DNNs. Experiments show that the extracted interactions can better reflect common knowledge shared by different DNNs.
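
To make "interactions between input variables" concrete, the toy below computes Harsanyi-style interactions by exhaustive masking, which is the flavor of interaction studied in this line of work; the tiny analytic model and the zero-masking baseline are illustrative stand-ins for a trained DNN.

```python
# Harsanyi interaction I(S) = sum_{T subseteq S} (-1)^{|S|-|T|} v(T), where v(T)
# is the model output with only the variables in T present (others masked to 0).
from itertools import chain, combinations

def subsets(S):
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

def harsanyi_interaction(v, S):
    return sum((-1) ** (len(S) - len(T)) * v(frozenset(T)) for T in subsets(S))

def v(T):  # toy "network" on 3 binary inputs: output = x0 * x1 + x2
    x = [1.0 if i in T else 0.0 for i in range(3)]
    return x[0] * x[1] + x[2]

print(harsanyi_interaction(v, (0, 1)))  # 1.0: x0 and x1 genuinely interact
print(harsanyi_interaction(v, (0, 2)))  # 0.0: x0 and x2 contribute additively
```

Intuitively, the shared-interaction method keeps an interaction like the first one only if it reappears across the different DNNs trained for the same task.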

Regression methods dominate the practice of biostatistical analysis, but biostatistical training emphasises the details of regression models and methods ahead of the purposes for which such modelling might be useful. More broadly, statistics is widely understood to provide a body of techniques for "modelling data", underpinned by what we describe as the "true model myth": that the task of the statistician/data analyst is to build a model that closely approximates the true data generating process. By way of our own historical examples and a brief review of mainstream clinical research journals, we describe how this perspective has led to a range of problems in the application of regression methods, including misguided "adjustment" for covariates, misinterpretation of regression coefficients and the widespread fitting of regression models without a clear purpose. We then outline a new approach to the teaching and application of biostatistical methods, which situates them within a framework that first requires clear definition of the substantive research question at hand within one of three categories: descriptive, predictive, or causal. Within this approach, the development and application of (multivariable) regression models, as well as other advanced biostatistical methods, should proceed differently according to the type of question. Regression methods will no doubt remain central to statistical practice as they provide a powerful tool for representing variation in a response or outcome variable as a function of "input" variables, but their conceptualisation and usage should follow from the purpose at hand.

Markov chain Monte Carlo (MCMC) is a commonly used method for approximating expectations with respect to probability distributions. Uncertainty assessment for MCMC estimators is essential in practical applications. Moreover, for multivariate functions of a Markov chain, it is important to estimate not only the auto-correlations of each component but also the cross-correlations between components, in order to better assess sample quality, improve estimates of effective sample size, and use more effective stopping rules. Berg and Song [2022] introduced the moment least squares (momentLS) estimator, a shape-constrained estimator for the autocovariance sequence from a reversible Markov chain, for univariate functions of the Markov chain. Based on this sequence estimator, they proposed an estimator of the asymptotic variance of the sample mean from MCMC samples. In this study, we propose novel autocovariance sequence and asymptotic variance estimators for Markov chain functions with multiple components, based on the univariate momentLS estimators from Berg and Song [2022]. We demonstrate strong consistency of the proposed auto(cross)-covariance sequence and asymptotic variance matrix estimators. We conduct empirical comparisons of our method with other state-of-the-art approaches on simulated and real-data examples, using popular samplers including the random-walk Metropolis sampler and the No-U-Turn sampler from STAN.
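
For orientation, the sketch below computes the unconstrained empirical auto/cross-covariances of a multivariate chain and a plain batch-means estimate of the asymptotic variance matrix; momentLS replaces the empirical autocovariance sequence with a shape-constrained fit, so treat this as the baseline it improves on, not the proposed estimator.

```python
# Empirical lagged covariances and batch-means asymptotic variance for an
# (n, d) chain; a generic baseline, not the momentLS estimator itself.
import numpy as np

def cross_covariances(chain, max_lag):
    """Return (max_lag+1, d, d) matrices: gamma[k][i, j] = Cov(x_t[i], x_{t+k}[j])."""
    x = chain - chain.mean(axis=0)
    n = len(x)
    return np.stack([x[: n - k].T @ x[k:] / n for k in range(max_lag + 1)])

def batch_means_variance(chain, n_batches=30):
    """Estimate the asymptotic variance matrix of sqrt(n) times the sample mean."""
    n, d = chain.shape
    b = n // n_batches
    means = chain[: b * n_batches].reshape(n_batches, b, d).mean(axis=1)
    return b * np.cov(means.T, ddof=1)

# Example: bivariate AR(1) chain, x_t = 0.5 * x_{t-1} + eps_t.
rng = np.random.default_rng(0)
x = np.zeros((50_000, 2))
for t in range(1, len(x)):
    x[t] = 0.5 * x[t - 1] + rng.normal(size=2)
print(cross_covariances(x, 1)[1], batch_means_variance(x), sep="\n\n")
```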

This short study presents an opportunistic approach to a (more) reliable validation method for prediction uncertainty average calibration. Considering that variance-based calibration metrics (ZMS, NLL, RCE...) are quite sensitive to the presence of heavy tails in the uncertainty and error distributions, a shift is proposed to an interval-based metric, the Prediction Interval Coverage Probability (PICP). It is shown on a large ensemble of molecular property datasets that (1) sets of z-scores are well represented by Student's $t(\nu)$ distributions, $\nu$ being the number of degrees of freedom; (2) accurate estimation of 95% prediction intervals can be obtained by the simple $2\sigma$ rule for $\nu>3$; and (3) the resulting PICPs are more quickly and reliably tested than variance-based calibration metrics. Overall, this method enables testing of 20% more datasets than ZMS testing does. Conditional calibration is also assessed using the PICP approach.
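
A minimal version of the PICP check is easy to state in code: count how often errors fall inside the $2\sigma$ intervals and test the count against the 0.95 target with a binomial test. The synthetic unit-variance $t(\nu)$ z-scores below mirror finding (1); all names are illustrative.

```python
# PICP for 2*sigma prediction intervals, with a binomial test against 95%.
import numpy as np
from scipy.stats import binomtest

def picp_2sigma(errors, sigmas):
    """errors: y_pred - y_true; sigmas: predicted uncertainties (same shape)."""
    inside = np.abs(errors) <= 2.0 * sigmas
    return inside.mean(), int(inside.sum()), inside.size

nu = 5  # degrees of freedom; scale to unit variance, as for z-scores
z = np.random.default_rng(1).standard_t(nu, size=2000) / np.sqrt(nu / (nu - 2))
picp, k, n = picp_2sigma(z, np.ones_like(z))
print(f"PICP = {picp:.3f}", binomtest(k, n, p=0.95).pvalue)
```

Consistent with finding (2), the coverage comes out near 0.95 because the unit-variance $t(5)$ places about 95% of its mass within two standard deviations.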

Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, and endeavours to extend this knowledge without targeting the original task result in catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task-incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern 1) a taxonomy and extensive overview of the state-of-the-art, 2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner, and 3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks: Tiny ImageNet, the large-scale unbalanced iNaturalist dataset, and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare methods in terms of required memory, computation time, and storage.
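
As a toy illustration of the stability-plasticity trade-off that the proposed framework tunes, the sketch below trains on tasks sequentially with an L2 pull toward the previous tasks' weights: a larger `lam` favors stability, a smaller one plasticity. Plain logistic regression stands in for a deep network, and the regularizer is a generic choice, not the survey's framework itself.

```python
# Sequential task training with a stability knob: lam pulls new weights toward
# the weights learned on earlier tasks (stability) vs. fitting the new task.
import numpy as np

def train_task(X, y, w_prev=None, lam=0.0, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1]) if w_prev is None else w_prev.copy()
    anchor = np.zeros_like(w) if w_prev is None else w_prev
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))                    # sigmoid predictions
        grad = X.T @ (p - y) / len(y) + lam * (w - anchor)  # task loss + stability
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
w = None
for task in range(3):                          # tasks arrive sequentially
    X = rng.normal(size=(500, 10))
    y = (X @ rng.normal(size=10) > 0).astype(float)
    w = train_task(X, y, w_prev=w, lam=1.0)    # larger lam = more stability
```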
