
News articles containing data visualizations play an important role in informing the public on issues ranging from public health to politics. Recent research on the persuasive appeal of data visualizations suggests that prior attitudes can be notoriously difficult to change. Inspired by an NYT article, we designed two experiments to evaluate the impact of elicitation and contrasting narratives on attitude change, recall, and engagement. We hypothesized that eliciting prior beliefs leads to more elaborative thinking, which ultimately results in greater attitude change, better recall, and higher engagement. Our findings revealed that visual elicitation leads to higher engagement in terms of feelings of surprise. While we observed an overall attitude change across all experimental conditions, we did not find a significant effect of belief elicitation on attitude change. With regard to recall error, participants in the draw-trend elicitation condition exhibited significantly lower recall error than participants in the categorize-trend condition, but we found no significant difference in recall error when comparing elicitation conditions to no elicitation. In a follow-up study, we added contrasting narratives intended to make the main visualization (communicating data on the focal issue) appear strikingly different. Compared to the results of Study 1, contrasting narratives improved engagement in terms of surprise and interest but, interestingly, resulted in higher recall error and no significant change in attitude. We discuss the effects of elicitation and contrasting narratives in the context of topic involvement and the strength of the temporal trends encoded in the data visualizations.

Related Content

We use Markov categories to develop generalizations of the theory of Markov chains and hidden Markov models in an abstract setting. This comprises characterizations of hidden Markov models in terms of local and global conditional independences, as well as generalizations of existing algorithms for Bayesian filtering and smoothing that are applicable in all Markov categories with conditionals. We show that these algorithms specialize to existing ones, such as the Kalman filter, the forward-backward algorithm, and the Rauch-Tung-Striebel smoother, when instantiated in appropriate Markov categories. Under slightly stronger assumptions, we also prove that the sequence of outputs of the Bayes filter is itself a Markov chain, with a concrete formula for its transition maps. This categorical framework has two main features. The first is its generality: it can be used in any Markov category with conditionals, and so provides a systematic, unified account of hidden Markov models and of filtering and smoothing algorithms in discrete probability, Gaussian probability, measure-theoretic probability, possibilistic nondeterminism, and others, all at the same time. The second is the intuitive visual representation of the information flow in these algorithms in terms of string diagrams.
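The abstract notes that the categorical filtering algorithm specializes to the forward algorithm in discrete probability. Below is a minimal sketch of that discrete instantiation, with toy transition and observation matrices of our own choosing; it is an illustration of the classical recursion, not the paper's categorical construction.

```python
import numpy as np

def bayes_filter(T, O, prior, observations):
    """Forward (filtering) recursion for a discrete hidden Markov model.

    T: (n, n) transition matrix, T[i, j] = P(next state j | current state i)
    O: (n, m) observation matrix, O[i, k] = P(observation k | state i)
    prior: (n,) initial state distribution
    observations: iterable of observation indices
    """
    belief = np.asarray(prior, dtype=float)
    history = []
    for y in observations:
        belief = belief @ T        # predict: push the belief through the dynamics
        belief = belief * O[:, y]  # update: reweight by the likelihood of y
        belief /= belief.sum()     # normalize (Bayesian inversion)
        history.append(belief.copy())
    return history

# A two-state toy chain with noisy binary observations.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],
              [0.3, 0.7]])
print(bayes_filter(T, O, np.array([0.5, 0.5]), [0, 1, 1]))
```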

We develop a forcing framework based on the idea of amalgamating language fragments into a theory with a canonical Henkin model. We then demonstrate the usefulness of this framework by applying it to both the extended Namba problem and the analysis of models of certain theories with constraints in interpretation (TCIs). The foundations for a theory of TCIs and their models are laid in parallel to the development of our framework, and are of independent interest.

Judgment aggregation is a framework for aggregating individual opinions on multiple, logically connected issues into a collective outcome. It is open to manipulative attacks such as Manipulation, where judges cast their judgments strategically. Previous work has shown that most computational problems corresponding to these manipulative attacks are NP-hard. This desired computational barrier, however, often relies on formulas that are either of unbounded size or of complex structure. We revisit the computational complexity of various Manipulation and Bribery problems in judgment aggregation, now focusing on simple and realistic formulas. We restrict all formulas to be clauses that are (positive) monotone, Horn clauses, or of bounded length. For basic variants of Manipulation, we show that these restrictions make several variants, which are NP-hard in general, polynomial-time solvable. Moreover, we provide a P vs. NP dichotomy for a large class of clause restrictions (generalizing monotone and Horn clauses) by showing a close relationship between variants of Manipulation and variants of Satisfiability. For Hamming-distance-based Manipulation, we show that NP-hardness holds even for positive monotone clauses of length three, but the problem becomes polynomial-time solvable for positive monotone clauses of length two. For Bribery, we show that NP-hardness holds even for positive monotone clauses of length two, but the problem becomes polynomial-time solvable for the same clause set if there is a constant budget.
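To make the objects concrete, here is a small brute-force sketch of Hamming-distance manipulation under positive monotone clauses. The aggregation rule (a distance-based rule picking the consistent judgment closest to the profile) and the example profile are illustrative choices of ours, not necessarily the rules studied in the paper.

```python
from itertools import product

def satisfies(judgment, clauses):
    # positive monotone clause: at least one of its issues is judged True
    return all(any(judgment[i] for i in clause) for clause in clauses)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def aggregate(profile, clauses):
    # distance-based rule: the clause-consistent judgment minimizing
    # total Hamming distance to the profile
    m = len(profile[0])
    consistent = [j for j in product([False, True], repeat=m)
                  if satisfies(j, clauses)]
    return min(consistent, key=lambda c: sum(hamming(c, j) for j in profile))

def can_manipulate(profile, clauses, manipulator=0):
    """Brute force: can the manipulator misreport so the outcome moves
    strictly closer (in Hamming distance) to her sincere judgment?"""
    sincere = profile[manipulator]
    baseline = hamming(aggregate(profile, clauses), sincere)
    for lie in product([False, True], repeat=len(sincere)):
        if not satisfies(lie, clauses):
            continue  # reported judgments must respect the agenda's clauses
        reported = list(profile)
        reported[manipulator] = lie
        if hamming(aggregate(reported, clauses), sincere) < baseline:
            return True, lie
    return False, None

clauses = [(0, 1), (1, 2)]  # positive monotone clauses of length two
profile = [(True, False, True), (False, True, False), (False, True, True)]
print(can_manipulate(profile, clauses))  # whether some misreport strictly helps
```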

Sequences of linear systems arise in the predictor-corrector method when computing the Pareto front for multi-objective optimization. Rather than discarding the information generated when solving one system, it may be advantageous to recycle it for subsequent systems. To accomplish this, we seek to reduce the overall cost of computation when solving linear systems using common recycling methods. In this work, we assessed the performance of the recycling minimum residual (RMINRES) method along with a map between coefficient matrices. For these methods to be fully integrated into the software used in Enouen et al. (2022), there must be a working version of each in both Python and PyTorch. Herein, we discuss the challenges we encountered, and the solutions undertaken (some still ongoing), in developing efficient Python implementations of these recycling strategies. The goal of this project was to implement RMINRES in Python and PyTorch and add it to the established Pareto front code to reduce computational cost. Additionally, we wanted to implement the sparse approximate maps code in Python and PyTorch so that it can be parallelized in future work.
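SciPy does not ship RMINRES, and a faithful implementation would carry an augmentation subspace between solves. As a minimal stand-in for the recycling idea, the sketch below solves a sequence of slowly varying symmetric systems with SciPy's MINRES, reusing each solution as the initial guess for the next solve; the tridiagonal family of matrices is a toy example of our own choosing.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import minres

# Sequence of slowly varying symmetric systems A(t) x = b, as along a
# predictor-corrector sweep of a Pareto front. Warm starting is the
# simplest form of reusing information between nearby systems.
n = 200
b = np.ones(n)
x_prev = None
for t in np.linspace(0.0, 1.0, 5):
    A = diags([-1.0, 2.0 + t, -1.0], offsets=[-1, 0, 1], shape=(n, n))
    x, info = minres(A, b, x0=x_prev)          # info == 0 signals convergence
    print(t, info, np.linalg.norm(A @ x - b))  # residual of the returned solution
    x_prev = x                                 # recycled information for the next solve
```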

This paper presents the first systematic study of the evaluation of Deep Neural Networks (DNNs) for discrete dynamical systems under stochastic assumptions, with a focus on wildfire prediction. We develop a framework to study the impact of stochasticity on two classes of evaluation metrics: classification-based metrics, which assess fidelity to observed ground truth (GT), and proper scoring rules, which test fidelity-to-statistic. Our findings reveal that evaluating for fidelity-to-statistic is a reliable alternative in highly stochastic scenarios. We extend our analysis to real-world wildfire data, highlighting limitations in traditional wildfire prediction evaluation methods, and suggest interpretable stochasticity-compatible alternatives.
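A toy illustration of the fidelity-to-GT versus fidelity-to-statistic distinction; the setup and forecasters are minimal choices of ours, not the paper's experiments. Under stochastic ground truth, a classification metric like accuracy cannot separate a calibrated probabilistic forecaster from its thresholded 0/1 counterpart, while a proper scoring rule such as the Brier score can.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stochastic ground truth: each cell ignites with true probability p.
p = rng.uniform(0.3, 0.7, size=100_000)
gt = rng.random(p.shape) < p            # one stochastic realization

calibrated = p                          # reports the true statistic
hard = (p > 0.5).astype(float)          # thresholded 0/1 forecast

def accuracy(pred, gt):                 # fidelity to observed ground truth
    return np.mean((pred > 0.5) == gt)

def brier(pred, gt):                    # a proper scoring rule
    return np.mean((pred - gt) ** 2)

# Accuracy is identical for the two forecasters (they agree after
# thresholding); the Brier score rewards the calibrated one.
print(accuracy(calibrated, gt), accuracy(hard, gt))  # equal
print(brier(calibrated, gt), brier(hard, gt))        # calibrated is lower
```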

Recommendation algorithms play a pivotal role in shaping our media choices, which makes it crucial to comprehend their long-term impact on user behavior. These algorithms are often linked to two critical outcomes: homogenization, wherein users consume similar content despite disparate underlying preferences, and the filter bubble effect, wherein individuals with differing preferences only consume content aligned with their preferences (without much overlap with other users). Prior research assumes a trade-off between homogenization and filter bubble effects and then shows that personalized recommendations mitigate filter bubbles by fostering homogenization. However, because of this assumed trade-off, prior work cannot develop a more nuanced view of how recommendation systems may independently impact homogenization and filter bubble effects. We develop a more refined definition of homogenization and the filter bubble effect by decomposing them into two key metrics: how different the average consumption is between users (inter-user diversity) and how varied an individual's consumption is (intra-user diversity). We then use a novel agent-based simulation framework that enables a holistic view of the impact of recommendation systems on homogenization and filter bubble effects. Our simulations show that traditional recommendation algorithms (based on past behavior) mainly reduce filter bubbles by affecting inter-user diversity without significantly impacting intra-user diversity. Building on these findings, we introduce two new recommendation algorithms that take a more nuanced approach by accounting for both types of diversity.
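A minimal sketch of the two metrics under illustrative definitions of our own (not necessarily the paper's exact formulas): each user's consumption is a distribution over items, intra-user diversity is the mean entropy of those distributions, and inter-user diversity is the mean pairwise distance between them.

```python
import numpy as np

def entropy(dist):
    dist = dist[dist > 0]
    return -np.sum(dist * np.log(dist))

def intra_user_diversity(consumption):
    # mean entropy of each user's item-consumption distribution
    return np.mean([entropy(row) for row in consumption])

def inter_user_diversity(consumption):
    # mean pairwise distance between users' consumption vectors
    # (averaged over all ordered pairs, including self-pairs, for simplicity)
    diffs = consumption[:, None, :] - consumption[None, :, :]
    return np.mean(np.linalg.norm(diffs, axis=-1))

# rows: users; columns: share of consumption per item (rows sum to 1)
users = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6]])
print(intra_user_diversity(users), inter_user_diversity(users))
```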

Missing data in multiple variables is a common issue. We investigate the applicability of the framework of graphical models for handling missing data to a complex longitudinal pharmacological study of HIV-positive children treated with an efavirenz-based regimen as part of the CHAPAS-3 trial. Specifically, we examine whether the causal effects of interest, defined through static interventions on multiple continuous variables, can be recovered (estimated consistently) from the available data alone. So far, no general algorithms are available to decide on recoverability, and decisions have to be made on a case-by-case basis. We emphasize the sensitivity of recoverability to even the smallest changes in the graph structure, and present recoverability results for three plausible missingness directed acyclic graphs (m-DAGs) in the CHAPAS-3 study, informed by clinical knowledge. Furthermore, we propose the concept of "closed missingness mechanisms" and show that under these mechanisms an available-case analysis is admissible for consistent estimation of any type of statistical or causal query, even if the underlying missingness mechanism is of missing-not-at-random (MNAR) type. Both simulations and theoretical considerations demonstrate how, in the assumed MNAR setting of our study, a complete- or available-case analysis can be superior to multiple imputation, and how estimation results vary depending on the assumed missingness DAG. Our analyses are possibly the first to show the applicability of missingness DAGs (m-DAGs) to complex longitudinal real-world data, while highlighting the sensitivity of the results with respect to the assumed causal model.
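A toy simulation (ours, not the CHAPAS-3 analysis) of why recoverability depends on the query as well as the missingness mechanism: when missingness depends on the value of a covariate, complete cases still recover the regression slope, because selection on x leaves E[y | x] untouched, while the marginal mean of y is not recoverable from complete cases.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)       # true slope 2, true mean of y = 0

# MNAR-style mechanism: records with larger x are more likely missing,
# i.e., missingness depends on the value that itself goes missing.
observed = rng.random(n) < 1.0 / (1.0 + np.exp(x))

# The conditional query survives: complete cases recover the slope.
print(np.polyfit(x[observed], y[observed], 1)[0])  # close to 2.0

# The marginal query does not: the complete-case mean of y is biased.
print(y[observed].mean())                          # far from 0
```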

We propose two novel extensions of the Wyner common information optimization problem, each relaxing one fundamental constraint in Wyner's formulation. The Variational Wyner Common Information relaxes the constraint of matching the known distribution while imposing conditional independence on the feasible solution set. We derive a tight surrogate upper bound on the resulting unconstrained Lagrangian via the theory of variational inference, which can be minimized efficiently; this solver caters to problems where conditional independence holds, with significantly reduced computational complexity. The Bipartite Wyner Common Information, on the other hand, relaxes the conditional independence constraint while enforcing the matching condition on the feasible set. By leveraging the difference-of-convex structure of the formulated optimization problem, we show that this solver is resilient to conditionally dependent sources. Both solvers are provably convergent (to local stationary points), and empirically they obtain more accurate solutions to Wyner's formulation with substantially less runtime. Moreover, they can be extended to settings with unknown distributions by parameterizing the common randomness as a member of the exponential family of distributions. Our approaches apply to multi-modal clustering problems, where multiple modalities of observations come from the same cluster. Empirically, our solvers outperform state-of-the-art multi-modal clustering algorithms.
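For reference, Wyner's original formulation in standard notation, from which both relaxations start: the Variational solver relaxes the matching constraint (second line), while the Bipartite solver relaxes the conditional-independence constraint (first line).

```latex
C(X;Y) \;=\; \min_{P_{W \mid XY}} \; I(X,Y;W)
\quad \text{s.t.} \quad
X \perp\!\!\!\perp Y \mid W,
\qquad
\sum_{w} P_{XYW}(x,y,w) \;=\; P_{XY}(x,y) \quad \forall x, y.
```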

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from, or the same as, traditional causal learning? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods for learning causality and relations, along with the connections between causality and machine learning. It points out, on a case-by-case basis, how big data facilitates, complicates, or motivates each approach.

We introduce a multi-task setup for identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in the scientific literature.
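A minimal PyTorch sketch of the shared-span-representation idea: one span encoder feeds separate heads for entity typing, relation classification, and coreference scoring. The dimensions, head shapes, and endpoint-concatenation featurization are illustrative choices of ours, not the SciIE architecture in detail.

```python
import torch
import torch.nn as nn

class SharedSpanModel(nn.Module):
    def __init__(self, token_dim=128, span_dim=256, n_entity_types=6):
        super().__init__()
        # shared span representation used by all three task heads
        self.span_ffnn = nn.Sequential(
            nn.Linear(2 * token_dim, span_dim), nn.ReLU())
        self.entity_head = nn.Linear(span_dim, n_entity_types)
        self.relation_head = nn.Linear(2 * span_dim, 8)  # toy label count
        self.coref_head = nn.Linear(2 * span_dim, 1)     # antecedent score

    def span_repr(self, tokens, start, end):
        # endpoint concatenation, one common span featurization
        return self.span_ffnn(torch.cat([tokens[start], tokens[end]], dim=-1))

tokens = torch.randn(20, 128)   # stand-in encoder outputs for 20 tokens
model = SharedSpanModel()
s1 = model.span_repr(tokens, 2, 4)
s2 = model.span_repr(tokens, 7, 7)
rel_logits = model.relation_head(torch.cat([s1, s2], dim=-1))
print(rel_logits.shape)         # torch.Size([8])
```

Because all heads consume the same span representations, gradients from each task shape a single span encoder, which is the mechanism by which the multi-task setup can reduce cascading errors between pipeline stages.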
