
The focus of this paper is on the concurrent reconstruction of both the diffusion and potential coefficients present in an elliptic/parabolic equation, utilizing two internal measurements of the solutions. A decoupled algorithm is constructed to sequentially recover these two parameters. In the first step, a straightforward reformulation reduces the task to a standard problem of identifying the diffusion coefficient. This coefficient is then numerically recovered, with no knowledge of the potential required, by an output least-squares method coupled with a finite element discretization. In the second step, the recovered diffusion coefficient is used to reconstruct the potential coefficient with a method similar to the first step. Our approach is motivated by a constructive conditional stability, and we provide rigorous a priori error estimates in $L^2(\Omega)$ for the recovered diffusion and potential coefficients. To derive these estimates, we develop a weighted energy argument and suitable positivity conditions. These estimates offer a useful guide for choosing regularization parameters and discretization mesh sizes in accordance with the noise level. Numerical experiments are presented to demonstrate the accuracy of the numerical scheme and support the theoretical results.
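For orientation, a typical regularized output least-squares formulation for the first step might read as follows; the exact functional, norms, and regularization used in the paper may differ, so this is only an illustrative sketch. Here $q$ denotes the sought diffusion coefficient, $u_h(q)$ the finite element solution for a candidate $q$, $z^\delta$ the noisy internal measurement, and $\gamma > 0$ a regularization parameter:
\[
\min_{q \in \mathcal{A}} \; J_\gamma(q) \;=\; \tfrac{1}{2}\,\|u_h(q) - z^\delta\|_{L^2(\Omega)}^2 \;+\; \tfrac{\gamma}{2}\,\|\nabla q\|_{L^2(\Omega)}^2,
\qquad
\mathcal{A} \;=\; \{\, q \in H^1(\Omega) : 0 < c_0 \le q \le c_1 \,\},
\]
with the second step repeating the same construction for the potential coefficient once the diffusion coefficient has been fixed.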

Related Content

Chain-of-thought reasoning, a cognitive process fundamental to human intelligence, has garnered significant attention in artificial intelligence and natural language processing. However, a comprehensive survey of this area is still lacking. To this end, we take the first step and present a thorough, wide-ranging survey of this research field. We use X-of-Thought to refer to chain-of-thought in a broad sense. Specifically, we systematically organize current research according to a taxonomy of methods, covering XoT construction, XoT structure variants, and enhanced XoT. Additionally, we describe frontier applications of XoT, covering planning, tool use, and distillation. Furthermore, we address challenges and discuss future directions, including faithfulness, multi-modal reasoning, and theory. We hope this survey serves as a valuable resource for researchers seeking to innovate within the domain of chain-of-thought reasoning.
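As a concrete illustration of the basic technique the survey covers, the snippet below builds a minimal few-shot chain-of-thought prompt; the exemplar and question are included only for illustration and are not drawn from the survey itself.

```python
# Minimal illustration of few-shot chain-of-thought (CoT) prompting.
# The worked exemplar encourages the model to emit intermediate reasoning.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a step-by-step exemplar, then ask the new question."""
    return COT_EXEMPLAR + f"\nQ: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    print(build_cot_prompt("A train travels 60 km in 1.5 hours. What is its average speed?"))
```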

Community rating is a policy that mandates a uniform premium regardless of risk factors. In this paper, we focus on the single-contract interpretation and establish a theoretical framework for community rating using Stiglitz's (1977) monopoly model with a continuum of agents. We exhibit profitability conditions and show that, under mild regularity conditions, the optimal premium is unique and satisfies the inverse elasticity rule. Our numerical analysis, using realistic parameter values, reveals that under regulation, a 10% increase in indemnity is possible with minimal impact on other variables.
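For reference, the inverse elasticity rule mentioned above is the standard Lerner-type markup condition; the precise form in the paper (with indemnity and claim costs) may differ, so the display below is only the textbook version. With premium $P$, marginal cost $c$, and demand $D(P)$ with elasticity $\varepsilon(P)$:
\[
\frac{P - c}{P} \;=\; \frac{1}{\varepsilon(P)},
\qquad
\varepsilon(P) \;=\; -\,\frac{P\,D'(P)}{D(P)}.
\]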

This paper investigates the potential of Virtual Reality (VR) as a research tool for studying diversity and inclusion characteristics in the context of human-robot interaction (HRI). Some exclusive advantages of using VR in HRI are discussed, such as a controllable environment, the possibility to manipulate variables related to the robot and the human-robot interaction, flexibility in the design of the robot and the environment, and advanced measurement methods such as eye tracking and physiological data. At the same time, the challenges of researching diversity and inclusion in HRI are described, especially regarding accessibility, cybersickness, and bias when developing VR environments. Furthermore, solutions to these challenges are discussed in order to fully harness the benefits of VR for studying diversity and inclusion.

In this paper, we investigate how the initial models and the final models of polynomial functors can be uniformly specified in matching logic.
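As a standard example of the objects involved (not drawn from the paper itself), consider the polynomial functor $F(X) = 1 + X$ on sets: its initial model (initial algebra) is the natural numbers, while for $G(X) = A \times X$ the final model (final coalgebra) is the set of streams over $A$:
\[
\mu X.\,(1 + X) \;\cong\; \mathbb{N},
\qquad
\nu X.\,(A \times X) \;\cong\; A^{\mathbb{N}},
\]
so a uniform specification must capture both least and greatest fixed points of such functors.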

In this paper we consider data storage from a probabilistic point of view and obtain bounds for efficient storage in the presence of feature selection and undersampling, both of which are important from the data science perspective. First, we consider encoding of correlated sources for nonstationary data and obtain a Slepian-Wolf type result for the probability of error. We then reinterpret our result by allowing one source to be the set of features to be discarded and the other source to be the remaining data to be encoded. Next, we consider neighbourhood domination in random graphs, where we impose the condition that a fraction of the neighbourhood must be present for each vertex, and obtain optimal bounds on the minimum size of such a set. We show how such sets are useful for data undersampling in the presence of imbalanced datasets and briefly illustrate our result using~\(k\)-nearest neighbour classification rules as an example.
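For context, the classical Slepian-Wolf theorem for stationary memoryless correlated sources $(X, Y)$ states that lossless separate encoding is possible with vanishing error probability exactly when the rate pair $(R_X, R_Y)$ lies in the region below; the paper's nonstationary variant bounds the error probability rather than invoking this fixed-rate region directly:
\[
R_X \;\ge\; H(X \mid Y), \qquad R_Y \;\ge\; H(Y \mid X), \qquad R_X + R_Y \;\ge\; H(X, Y).
\]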

This paper presents a comprehensive study focusing on the influence of DEM type and spatial resolution on the accuracy of flood inundation prediction. The research employs a state-of-the-art deep learning method using a 1D convolutional neural network (CNN). The CNN-based method employs training input data in the form of synthetic hydrographs, along with target data represented by water depth obtained using a 2D hydrodynamic model, LISFLOOD-FP. The performance of the trained CNN models is then evaluated and compared against the observed flood event. This study examines the use of digital surface models (DSMs) and digital terrain models (DTMs) derived from a LIDAR-based 1m DTM, with resolutions ranging from 15 to 30 meters. The proposed methodology is implemented and evaluated at a well-established benchmark location in Carlisle, UK. The paper also discusses the applicability of the methodology to the challenges encountered in a data-scarce flood-prone region, exemplified by Pakistan. The study found that the DTM performs better than the DSM at lower resolutions. Using a 30m DTM improved flood depth prediction accuracy by about 21% during the peak stage. Increasing the resolution to 15m increased the RMSE and overlap index by at least 50% and 20%, respectively, across all flood phases. The study demonstrates that while coarser resolution may impact the accuracy of the CNN model, it remains a viable option for rapid flood prediction compared to hydrodynamic modeling approaches.
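A minimal sketch of the kind of 1D CNN regressor described above is given below, assuming TensorFlow/Keras; the layer sizes, input length, and number of output cells are placeholders and not the configuration used in the paper.

```python
# Illustrative 1D CNN mapping a synthetic inflow hydrograph (a time series)
# to water depths at a set of grid cells. All shapes are placeholders.
import numpy as np
import tensorflow as tf

N_TIMESTEPS = 96   # hypothetical hydrograph length
N_CELLS = 1024     # hypothetical number of output grid cells

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_TIMESTEPS, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(N_CELLS),   # predicted water depth per cell
])
model.compile(optimizer="adam", loss="mse")

# Dummy arrays standing in for synthetic hydrographs and LISFLOOD-FP depths.
x = np.random.rand(128, N_TIMESTEPS, 1).astype("float32")
y = np.random.rand(128, N_CELLS).astype("float32")
model.fit(x, y, epochs=2, batch_size=16, verbose=0)
```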

Statisticians are not only one of the earliest professional adopters of data visualization, but also some of its most prolific users. Understanding how these professionals utilize visual representations in their analytic process may shed light on best practices for visual sensemaking. We present results from an interview study involving 18 professional statisticians (averaging 19.7 years in the profession) on three aspects: (1) their use of visualization in their daily analytic work; (2) their mental models of inferential statistical processes; and (3) their design recommendations for how to best represent statistical inferences. Interview sessions consisted of discussing inferential statistics, eliciting participant sketches of suitable visual designs, and finally, a design intervention with our proposed visual designs. We analyzed interview transcripts using thematic analysis and open coding, deriving thematic codes on statistical mindset, analytic process, and analytic toolkit. The key findings for each aspect are as follows: (1) statisticians make extensive use of visualization during all phases of their work (and not just when reporting results); (2) their mental models of inferential methods tend to be mostly visually based; and (3) many statisticians abhor dichotomous thinking. The latter suggests that a multi-faceted visual display of inferential statistics that includes a visual indicator of analytically important effect sizes may help to balance the attributed epistemic power of traditional statistical testing with an awareness of the uncertainty of sensemaking.

Maintaining factual consistency is a critical issue in abstractive text summarisation; however, it cannot be assessed by the traditional automatic metrics used for evaluating text summarisation, such as ROUGE. Recent efforts have been devoted to developing improved metrics for measuring factual consistency using pre-trained language models, but these metrics have restrictive token limits and are therefore not suitable for evaluating long document text summarisation. Moreover, there is limited research evaluating whether existing automatic evaluation metrics are fit for purpose when applied to long document data sets. In this work, we evaluate the efficacy of automatic metrics at assessing factual consistency in long document text summarisation and propose a new evaluation framework, LongDocFACTScore, which allows metrics to be extended to documents of any length. The framework outperforms existing state-of-the-art metrics in its ability to correlate with human measures of factuality when used to evaluate long document summarisation data sets. Furthermore, we show LongDocFACTScore has performance comparable to state-of-the-art metrics when evaluated against human measures of factual consistency on short document data sets. We make our code and annotated data publicly available: https://github.com/jbshp/LongDocFACTScore.
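To illustrate the general idea of extending a short-input metric to long documents (a generic sketch, not the authors' exact algorithm), one can score each summary sentence against its most relevant source chunk and aggregate the results; `short_metric` below is a hypothetical stand-in for any pre-trained-model-based factuality scorer.

```python
# Generic sketch: extend a length-limited factuality metric to long documents
# by scoring each summary sentence against its most similar source chunk.
# `short_metric` is a hypothetical stand-in for a pre-trained scorer.
from typing import Callable, List

def chunk(text: str, max_words: int = 300) -> List[str]:
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def lexical_overlap(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa), 1)

def long_doc_score(document: str, summary_sentences: List[str],
                   short_metric: Callable[[str, str], float]) -> float:
    chunks = chunk(document)
    scores = []
    for sent in summary_sentences:
        # Retrieve the source chunk most similar to this summary sentence,
        # then score the pair with the underlying short-input metric.
        best_chunk = max(chunks, key=lambda c: lexical_overlap(sent, c))
        scores.append(short_metric(best_chunk, sent))
    return sum(scores) / max(len(scores), 1)
```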

In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing rapid spread and wide adoption in recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.

Many tasks in natural language processing can be viewed as multi-label classification problems. However, most existing models are trained with the standard cross-entropy loss function and use a fixed prediction policy (e.g., a threshold of 0.5) for all labels, which ignores the complexity of and dependencies among different labels. In this paper, we propose a meta-learning method to capture these complex label dependencies. More specifically, our method uses a meta-learner to jointly learn the training policies and prediction policies for different labels. The training policies are used to train the classifier with the cross-entropy loss function, and the prediction policies are then applied at prediction time. Experimental results on fine-grained entity typing and text classification demonstrate that our proposed method obtains more accurate multi-label classification results.
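The snippet below is a toy illustration of why a single fixed threshold can be suboptimal and how per-label prediction policies (here, simple per-label thresholds tuned on validation F1) differ from it; it is not the meta-learning algorithm proposed in the paper.

```python
# Toy contrast between a fixed 0.5 threshold and per-label thresholds
# tuned on validation data. Not the paper's meta-learning method.
import numpy as np

def f1(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return 2 * tp / max(2 * tp + fp + fn, 1)

def tune_thresholds(val_probs: np.ndarray, val_labels: np.ndarray) -> np.ndarray:
    """Pick, for each label independently, the threshold maximizing validation F1."""
    candidates = np.linspace(0.05, 0.95, 19)
    thresholds = np.empty(val_probs.shape[1])
    for j in range(val_probs.shape[1]):
        scores = [f1(val_labels[:, j], (val_probs[:, j] >= t).astype(int)) for t in candidates]
        thresholds[j] = candidates[int(np.argmax(scores))]
    return thresholds

rng = np.random.default_rng(0)
probs = rng.random((200, 5))                                       # fake predicted probabilities
labels = (probs + 0.2 * rng.random((200, 5)) > 0.7).astype(int)    # fake gold labels
fixed = (probs >= 0.5).astype(int)
tuned = (probs >= tune_thresholds(probs, labels)).astype(int)
print("fixed-threshold micro-F1:", f1(labels, fixed), "per-label micro-F1:", f1(labels, tuned))
```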
