
One of the most important tools available to limit the spread and impact of infectious diseases is vaccination. It is therefore important to understand what factors determine people's vaccination decisions. To this end, previous behavioural research made use of (i) controlled but often abstract or hypothetical studies (e.g., vignettes) or (ii) realistic but typically less flexible studies that make it difficult to understand individual decision processes (e.g., clinical trials). Combining the best of these approaches, we propose integrating real-world Bluetooth contacts via smartphones into several rounds of a game scenario as a novel methodology to study vaccination decisions and disease spread. In our 12-week proof-of-concept study conducted with $N$ = 494 students, we found that participants strongly responded to some of the information provided to them during or after each decision round, particularly information related to their individual health outcomes. In contrast, information related to others' decisions and outcomes (e.g., the number of vaccinated or infected individuals) appeared to be less important. We discuss the potential of this novel method and point to fruitful areas for future research.
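
A minimal sketch of how infection spread could be computed over recorded Bluetooth contacts in one decision round; the data structures, transmission probability, and `simulate_round` helper are hypothetical illustrations, not the authors' game implementation:

```python
import random

def simulate_round(contacts, vaccinated, infected, p_transmit=0.1, seed=0):
    """Spread infection over recorded contact pairs for one game round.

    contacts   : list of (person_a, person_b) Bluetooth contact pairs
    vaccinated : set of participant ids who chose to vaccinate this round
    infected   : set of participant ids infected at the start of the round
    p_transmit : per-contact transmission probability (hypothetical value)
    """
    rng = random.Random(seed)
    newly_infected = set()
    for a, b in contacts:
        for src, dst in ((a, b), (b, a)):
            if src in infected and dst not in infected and dst not in vaccinated:
                if rng.random() < p_transmit:
                    newly_infected.add(dst)
    return infected | newly_infected

# Example: three participants, one seed infection, one vaccination decision.
contacts = [(1, 2), (2, 3)]
print(simulate_round(contacts, vaccinated={3}, infected={1}))
```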

Related content

The journal 《計算機信息》 publishes high-quality papers that broaden the scope of operations research and computing, seeking original research papers on theory, methodology, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
April 16, 2024

We extend Newton and Lagrange interpolation to arbitrary dimensions. The core contribution that enables this is a generalized notion of non-tensorial unisolvent nodes, i.e., nodes on which the multivariate polynomial interpolant of a function is unique. In numerical validation, we reach the optimal exponential Trefethen rates for a class of analytic functions, which we term Trefethen functions. The number of interpolation nodes required for computing the optimal interpolant depends sub-exponentially on the dimension, hence resisting the curse of dimensionality. Based on these results, we propose an algorithm that solves arbitrary-dimensional interpolation problems efficiently and in a numerically stable manner, with at most quadratic runtime and linear memory requirements.
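
For reference, a minimal sketch of the classical one-dimensional Newton divided-difference scheme that the paper generalizes; this is plain NumPy on Chebyshev nodes, not the authors' multivariate non-tensorial construction:

```python
import numpy as np

def newton_coefficients(x, y):
    """Divided-difference coefficients of the 1-D Newton interpolant."""
    c = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:n - j])
    return c

def newton_eval(c, x_nodes, t):
    """Evaluate the Newton form via Horner-like nested multiplication."""
    result = c[-1]
    for k in range(len(c) - 2, -1, -1):
        result = result * (t - x_nodes[k]) + c[k]
    return result

# Interpolate f(x) = 1 / (1 + x^2) on 8 Chebyshev nodes in [-1, 1].
nodes = np.cos(np.pi * (2 * np.arange(8) + 1) / 16)
coeffs = newton_coefficients(nodes, 1.0 / (1.0 + nodes**2))
print(newton_eval(coeffs, nodes, 0.3))
```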

The development of reliable and fair diagnostic systems is often constrained by the scarcity of labeled data. To address this challenge, our work explores the feasibility of unsupervised domain adaptation (UDA) to integrate large external datasets for developing reliable classifiers. The adoption of UDA with multiple sources can simultaneously enrich the training set and bridge the domain gap between different skin lesion datasets, which vary due to distinct acquisition protocols. In particular, UDA shows practical promise for improving diagnostic reliability when training with a custom skin lesion dataset, where only limited labeled data are available from the target domain. In this study, we investigate three UDA training schemes based on source data utilization: single-source, combined-source, and multi-source UDA. Our findings demonstrate the effectiveness of applying UDA on multiple sources for binary and multi-class classification. Our experiments also reveal a strong correlation between test error and label shift in multi-class tasks. Crucially, our study shows that UDA can effectively mitigate bias against minority groups and enhance fairness in diagnostic systems, while maintaining superior classification performance. This is achieved even without directly implementing fairness-focused techniques. This success is potentially attributed to the increased and well-adapted demographic information obtained from multiple sources.
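
A minimal sketch of one common UDA ingredient, a gradient-reversal domain classifier trained on features pooled from several source datasets and the unlabeled target; the network sizes, dataset tensors, and the four-domain setup are hypothetical and do not reproduce the study's actual schemes:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

feature_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # shared encoder
label_head  = nn.Linear(64, 7)                              # lesion classes
domain_head = nn.Linear(64, 4)                              # e.g. 3 sources + 1 target

x_src, y_src = torch.randn(16, 32), torch.randint(0, 7, (16,))
x_all, d_all = torch.randn(32, 32), torch.randint(0, 4, (32,))  # pooled domains

feats_src = feature_net(x_src)
feats_all = GradReverse.apply(feature_net(x_all), 1.0)

loss = nn.functional.cross_entropy(label_head(feats_src), y_src) \
     + nn.functional.cross_entropy(domain_head(feats_all), d_all)
loss.backward()  # the encoder receives reversed gradients from the domain head
```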

Though diffusion models have been successfully applied to various image restoration (IR) tasks, their performance is sensitive to the choice of training datasets. Typically, diffusion models trained on specific datasets fail to recover images that have out-of-distribution degradations. To address this problem, this work leverages a capable vision-language model and a synthetic degradation pipeline to learn image restoration in the wild (wild IR). More specifically, all low-quality images are simulated with a synthetic degradation pipeline that contains multiple common degradations such as blur, resize, noise, and JPEG compression. Then we introduce robust training for a degradation-aware CLIP model to extract enriched image content features that assist high-quality image restoration. Our base diffusion model is the image restoration SDE (IR-SDE). Built upon it, we further present a posterior sampling strategy for fast noise-free image generation. We evaluate our model on both synthetic and real-world degradation datasets. Moreover, experiments on the unified image restoration task illustrate that the proposed posterior sampling improves image generation quality for various degradations.
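
A minimal sketch of the kind of synthetic degradation pipeline described (blur, resize, noise, JPEG compression), written with OpenCV and hypothetical parameter ranges rather than the paper's exact settings:

```python
import cv2
import numpy as np

def degrade(img, rng):
    """Apply random blur, downscale/upscale, Gaussian noise, and JPEG compression."""
    h, w = img.shape[:2]

    # Gaussian blur with a random odd kernel size.
    k = int(rng.choice([3, 5, 7]))
    img = cv2.GaussianBlur(img, (k, k), 0)

    # Downscale then upscale to simulate low resolution.
    scale = rng.uniform(0.25, 0.75)
    small = cv2.resize(img, (max(1, int(w * scale)), max(1, int(h * scale))))
    img = cv2.resize(small, (w, h))

    # Additive Gaussian noise.
    noise = rng.normal(0, rng.uniform(1, 15), img.shape)
    img = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

    # JPEG compression round-trip at a random quality.
    quality = int(rng.uniform(30, 90))
    _, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, quality])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)

rng = np.random.default_rng(0)
low_quality = degrade(np.full((64, 64, 3), 200, np.uint8), rng)
```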

Artificial neural networks (ANNs) have evolved from the 1940s primitive models of brain function to become tools for artificial intelligence. They comprise many units, artificial neurons, interlinked through weighted connections. ANNs are trained to perform tasks through learning rules that modify the connection weights. With these rules being the focus of research, ANNs have become a branch of machine learning that has developed independently from neuroscience. Although likely required for the development of truly intelligent machines, the integration of neuroscience into ANNs has remained a neglected proposition. Here, we demonstrate that designing an ANN along biological principles results in drastically improved task performance. As a challenging real-world problem, we choose real-time ocean-wave prediction, which is essential for various maritime operations. Motivated by the similarity of ocean waves measured at a single location to sound waves arriving at the eardrum, we redesign an echo state network to resemble the brain's auditory system. This yields a powerful predictive tool which is computationally lean, robust with respect to network parameters, and works efficiently across a wide range of sea states. Our results demonstrate the advantages of integrating neuroscience with machine learning and offer a tool for use in the production of green energy from ocean waves.
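
A minimal echo state network sketch in NumPy (a generic reservoir with a ridge-regression readout, not the brain-inspired redesign the paper proposes), with hypothetical reservoir size, leak rate, and toy wave signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, leak, rho = 200, 0.3, 0.9

# Random input weights and a sparse reservoir rescaled to spectral radius rho.
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Collect leaky-integrator reservoir states for a 1-D input sequence u."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train a ridge readout to predict the wave signal one step ahead.
u = np.sin(0.2 * np.arange(1000)) + 0.1 * rng.standard_normal(1000)
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("one-step prediction:", X[-1] @ W_out, "target:", y[-1])
```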

Methods for measuring the impact of an environmental point-source exposure on the risk of disease, such as cancer or childhood asthma, are well developed. Methods for modeling the impact of an environmental health hazard that is extensive in space, such as a wastewater canal, are not. We propose a novel Bayesian generative semiparametric model for characterizing the cumulative spatial exposure to an environmental health hazard that is not well-represented by a single point in space. The model couples a dose-response model with a log-Gaussian Cox process integrated against a distance kernel with an unknown length-scale. We show that this model is a well-defined Bayesian inverse model, namely that the posterior exists under a Gaussian process prior for the log-intensity of exposure, and that a simple integral approximation adequately controls the computational error. We quantify the finite-sample properties and the computational tractability of the discretization scheme in a simulation study. Finally, we apply the model to survey data on household risk of childhood diarrheal illness from exposure to a system of wastewater canals in Mezquital Valley, Mexico.
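
A minimal sketch of the kind of integral approximation involved: discretizing the hazard (e.g., a canal) into points and summing an intensity against a distance kernel, then feeding the resulting exposure into a simple logistic dose-response. The kernel, intensity, and coefficients are placeholder assumptions, not the authors' full Bayesian model:

```python
import numpy as np

def cumulative_exposure(household, canal_pts, log_intensity, length_scale):
    """Riemann-sum approximation of the integral of
    exp(log_intensity(s)) * k(|household - s|) over the discretized canal."""
    d = np.linalg.norm(canal_pts - household, axis=1)
    kernel = np.exp(-0.5 * (d / length_scale) ** 2)   # squared-exponential distance kernel
    seg_len = np.linalg.norm(np.diff(canal_pts, axis=0), axis=1)
    seg_len = np.append(seg_len, seg_len[-1])         # crude per-point segment length
    return np.sum(np.exp(log_intensity) * kernel * seg_len)

# Toy canal: a straight line segment discretized into 200 points.
canal = np.column_stack([np.linspace(0, 5, 200), np.zeros(200)])
log_lambda = np.zeros(200)                            # placeholder log-intensity
expo = cumulative_exposure(np.array([1.0, 0.5]), canal, log_lambda, length_scale=0.8)

# Plug the exposure into a simple logistic dose-response model (toy coefficients).
risk = 1 / (1 + np.exp(-(-2.0 + 0.6 * expo)))
print(expo, risk)
```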

In molecular communication (MC), molecules are released from the transmitter to convey information. This paper considers a realistic molecule shift keying (MoSK) scenario with two molecule species stored in two reservoirs: the molecules are harvested from the environment and placed into separate reservoirs, which are then purified by exchanging molecules between them. This process consumes energy, and for a reasonable energy cost the reservoirs cannot be perfectly pure; thus, our MoSK transmitter is imperfect, releasing a mixture of both molecule types for every symbol and causing inter-symbol interference (ISI). To mitigate ISI, the properties of the receiver are analyzed and a detection method based on the ratio of the different molecule types is proposed. Theoretical and simulation results show that system performance improves as the energy expenditure increases, and demonstrate the good performance of the proposed detection scheme.
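
A minimal sketch of a ratio-based detector for a binary MoSK system with imperfect reservoirs; the counts, purity, and hit probability below are hypothetical toy values and do not reproduce the paper's channel model or analysis:

```python
import numpy as np

def detect_by_ratio(count_a, count_b, threshold=1.0):
    """Decide the transmitted symbol from the ratio of received type-A to type-B molecules.

    Symbol 0 is signalled by releasing mostly type-A molecules, symbol 1 by
    mostly type-B; reservoir impurity means both types are always present.
    """
    ratio = count_a / max(count_b, 1)
    return 0 if ratio > threshold else 1

rng = np.random.default_rng(0)
purity = 0.8            # fraction of the intended molecule type in each reservoir
released = 1000

# Simulate received counts for a transmitted symbol 0 (mostly type A released),
# assuming roughly 5% of released molecules reach the receiver.
count_a = rng.poisson(purity * released * 0.05)
count_b = rng.poisson((1 - purity) * released * 0.05)
print(detect_by_ratio(count_a, count_b))   # expected: 0
```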

Epilepsy is one of the most common neurological disorders, and many patients require surgical intervention when medication fails to control seizures. For effective surgical outcomes, precise localisation of the epileptogenic focus - often approximated through the Seizure Onset Zone (SOZ) - is critical yet remains a challenge. Active probing through electrical stimulation is already standard clinical practice for identifying epileptogenic areas. This paper advances the application of deep learning for SOZ localisation using Single Pulse Electrical Stimulation (SPES) responses. We achieve this by introducing Transformer models that incorporate cross-channel attention. We evaluate these models on held-out patient test sets to assess their generalisability to unseen patients and electrode placements. Our study makes three key contributions: Firstly, we implement an existing deep learning model to compare two SPES analysis paradigms - namely, divergent and convergent. These paradigms evaluate outward and inward effective connections, respectively. Our findings reveal a notable improvement in moving from a divergent (AUROC: 0.574) to a convergent approach (AUROC: 0.666), marking the first application of the latter in this context. Secondly, we demonstrate the efficacy of the Transformer models in handling heterogeneous electrode placements, increasing the AUROC to 0.730. Lastly, by incorporating inter-trial variability, we further refine the Transformer models, with an AUROC of 0.745, yielding more consistent predictions across patients. These advancements provide a deeper insight into SOZ localisation and represent a significant step in modelling patient-specific intracranial EEG electrode placements in SPES. Future work will explore integrating these models into clinical decision-making processes to bridge the gap between deep learning research and practical healthcare applications.
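
A minimal sketch of cross-channel attention over per-electrode SPES response features, using PyTorch's built-in multi-head attention with hypothetical feature dimensions; this illustrates the mechanism only, not the paper's full architecture or training setup:

```python
import torch
import torch.nn as nn

n_channels, d_model = 40, 64          # electrodes per patient, feature size

# Per-channel response features for a batch of 2 patients, e.g. from an encoder.
channel_feats = torch.randn(2, n_channels, d_model)

# Attention across channels lets every electrode attend to every other one,
# which is what allows the model to handle heterogeneous electrode placements.
cross_channel_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
attended, weights = cross_channel_attn(channel_feats, channel_feats, channel_feats)

# Per-channel SOZ probability from the attended representation.
soz_head = nn.Sequential(nn.Linear(d_model, 1), nn.Sigmoid())
soz_prob = soz_head(attended).squeeze(-1)   # shape: (batch, n_channels)
print(soz_prob.shape)
```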

As deep neural models in NLP become more complex, and as a consequence more opaque, the need to interpret them grows. A burgeoning interest has emerged in rationalizing explanations, which provide short and coherent justifications for predictions. In this position paper, we advocate for a formal framework capturing key concepts and properties of rationalizing explanations to support their systematic evaluation. We also outline one such formal framework, tailored to rationalizing explanations of increasingly complex structure, from free-form explanations to deductive explanations to argumentative explanations (with the richest structure). Focusing on the automated fact verification task, we provide illustrations of the use and usefulness of our formalization for evaluating explanations, tailored to their varying structures.

Textual entailment is a fundamental task in natural language processing. Most approaches for solving the problem use only the textual content present in training data. A few approaches have shown that information from external knowledge sources like knowledge graphs (KGs) can add value, in addition to the textual content, by providing background knowledge that may be critical for a task. However, the proposed models do not fully exploit the information in the usually large and noisy KGs, and it is not clear how it can be effectively encoded to be useful for entailment. We present an approach that complements text-based entailment models with information from KGs by (1) using Personalized PageRank to generate contextual subgraphs with reduced noise and (2) encoding these subgraphs using graph convolutional networks to capture KG structure. Our technique extends the capability of text models exploiting structural and semantic information found in KGs. We evaluate our approach on multiple textual entailment datasets and show that the use of external knowledge helps improve prediction accuracy. This is particularly evident in the challenging BreakingNLI dataset, where we see an absolute improvement of 5-20% over multiple text-based entailment models.
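
A minimal sketch of Personalized PageRank via power iteration over a toy adjacency matrix, of the kind used to extract a contextual subgraph around the entities mentioned in a premise/hypothesis pair; the graph, seed nodes, and top-k cutoff are illustrative, and the GCN encoder is omitted:

```python
import numpy as np

def personalized_pagerank(adj, seed_nodes, alpha=0.15, iters=100):
    """Power iteration for PPR: random walk with teleportation back to the seeds."""
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True)
    P = np.divide(adj, deg, out=np.zeros_like(adj, dtype=float), where=deg > 0)
    restart = np.zeros(n)
    restart[seed_nodes] = 1.0 / len(seed_nodes)
    r = restart.copy()
    for _ in range(iters):
        r = alpha * restart + (1 - alpha) * P.T @ r
    return r

# Toy KG with 5 nodes; seed on the entities mentioned in the text pair.
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
scores = personalized_pagerank(adj, seed_nodes=[0, 1])
subgraph_nodes = np.argsort(scores)[::-1][:3]   # keep top-k nodes as the contextual subgraph
print(subgraph_nodes, scores.round(3))
```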

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch will lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach based on the recent state-of-the-art Faster R-CNN model, and design two domain adaptation components, on image level and instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory, and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets including Cityscapes, KITTI, SIM10K, etc. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
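
A minimal sketch of the consistency-regularization idea: penalizing the discrepancy between an image-level domain prediction and the average of the instance-level domain predictions for that image. The tensors and the exact loss form below are illustrative assumptions, and the Faster R-CNN adaptation components themselves are not shown:

```python
import torch

def consistency_loss(img_domain_prob, inst_domain_prob):
    """L2 distance between the image-level domain probability and the mean
    instance-level domain probability over the region proposals of that image.

    img_domain_prob  : (batch,) probability that the image comes from the target domain
    inst_domain_prob : (batch, num_proposals) per-proposal domain probabilities
    """
    inst_mean = inst_domain_prob.mean(dim=1)
    return ((img_domain_prob - inst_mean) ** 2).mean()

# Toy outputs from the two (adversarially trained) domain classifiers.
img_prob = torch.tensor([0.8, 0.3])
inst_prob = torch.tensor([[0.7, 0.9, 0.75], [0.2, 0.4, 0.35]])
print(consistency_loss(img_prob, inst_prob))
```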
