
Disinformation research has proliferated in reaction to widespread false, problematic beliefs purported to explain major social phenomena. Yet while the effects of disinformation are well known, there is less consensus about its causes; the research spans several disciplines, each focusing on different pieces. This article contributes to this growing field by reviewing prevalent U.S. disinformation discourse (academic writing, media, and corporate and government narratives) and outlining the dominant understanding, or paradigm, of the disinformation problem through an analysis of cross-disciplinary discourse about the content, individual, group, and institutional layers of the problem. The result is an individualistic explanation that largely blames social media, malicious individuals or nations, and irrational people. Yet this understanding has shortcomings: notably, its limited, individualistic views of truth and rationality obscure the influence of oppressive ideologies and of media or domestic actors in creating flawed worldviews and spreading disinformation. The article concludes by putting forth an alternative, sociopolitical paradigm in which rationality and information processing are governed by subjective models of the world -- largely informed by social and group identity -- that are formed and catered to by institutional actors (corporations, media, political parties, and the government) seeking to maintain or gain legitimacy for their actions.

Related content

There has recently been an explosion of interest in how "higher-order" structures emerge in complex systems. This "emergent" organization has been found in a variety of natural and artificial systems, although at present the field lacks a unified understanding of what the consequences of higher-order synergies and redundancies are for systems. Typical research treats the presence (or absence) of synergistic information as a dependent variable and reports changes in the level of synergy in response to some change in the system. Here, we attempt to flip the script: rather than treating higher-order information as a dependent variable, we use evolutionary optimization to evolve Boolean networks with significant higher-order redundancies, synergies, or statistical complexity. We then analyze these evolved populations of networks using established tools for characterizing discrete dynamics: the number of attractors, the average transient length, and the Derrida coefficient. We also assess the capacity of the systems to integrate information. We find that high-synergy systems are unstable and chaotic, but with a high capacity to integrate information. In contrast, evolved redundant systems are extremely stable, but have negligible capacity to integrate information. Finally, the complex systems that balance integration and segregation (known as Tononi-Sporns-Edelman complexity) show features of both chaoticity and stability, with a greater capacity to integrate information than the redundant systems while being more stable than the random and synergistic systems. We conclude that there may be a fundamental trade-off between the robustness of a system's dynamics and its capacity to integrate information (which inherently requires flexibility and sensitivity), and that certain kinds of complexity naturally balance this trade-off.
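
The diagnostics named above (attractor counts, transient lengths, the Derrida coefficient) are standard tools for discrete dynamics. As a minimal, hypothetical sketch of one of them -- not the authors' pipeline, with all parameters (network size, in-degree, sample counts) chosen arbitrarily -- the Python snippet below builds a random NK Boolean network and estimates a common version of the Derrida measure: the average one-step spread of a single-bit perturbation, where values above 1 suggest chaotic dynamics and values below 1 ordered dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_boolean_network(n_nodes=12, k=2):
    """Random NK Boolean network: each node reads k randomly chosen inputs
    through its own random truth table."""
    inputs = np.array([rng.choice(n_nodes, size=k, replace=False)
                       for _ in range(n_nodes)])
    tables = rng.integers(0, 2, size=(n_nodes, 2 ** k))
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: encode each node's input pattern as an integer
    and look it up in that node's truth table."""
    idx = (state[inputs] * (2 ** np.arange(inputs.shape[1]))).sum(axis=1)
    return tables[np.arange(len(state)), idx]

def derrida_coefficient(inputs, tables, n_samples=2000):
    """Average Hamming distance after one update step between a random state
    and a copy of it with a single bit flipped."""
    n = tables.shape[0]
    total = 0.0
    for _ in range(n_samples):
        s = rng.integers(0, 2, size=n)
        s_flip = s.copy()
        s_flip[rng.integers(n)] ^= 1          # perturb one node
        total += np.sum(step(s, inputs, tables) != step(s_flip, inputs, tables))
    return total / n_samples

inputs, tables = random_boolean_network()
print("estimated Derrida coefficient:", derrida_coefficient(inputs, tables))
```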

Asymptotic analysis for related inference problems often involves similar steps and proofs. These intermediate results could be shared across problems if each of them were made self-contained and easily identified. However, asymptotic analysis using Taylor expansions is of limited use for result borrowing because it is a step-by-step procedural approach. This article introduces EEsy, a modular system for estimating finite- and infinite-dimensional parameters in related inference problems. It is based on the infinite-dimensional Z-estimation theorem, Donsker and Glivenko-Cantelli preservation theorems, and weight calibration techniques. This article identifies the systematic nature of these tools and consolidates them into one system containing several modules, which can be built, shared, and extended in a modular manner. This change to the structure of method development allows related methods to be developed in parallel and complex problems to be solved collaboratively, expediting the development of new analytical methods. This article considers four related inference problems -- estimating parameters with random sampling, two-phase sampling, auxiliary information incorporation, and model misspecification. We illustrate this modular approach by systematically developing 9 parameter estimators and 18 variance estimators for the four related inference problems regarding semi-parametric additive hazards models. Simulation studies show that the obtained asymptotic results for these 27 estimators are valid. Finally, we describe how this system can simplify the use of empirical process theory, a powerful but challenging tool for the broad community of methods developers to adopt, and we discuss challenges and the extension of this system to other inference problems.

We present a novel combination of dynamic embedded topic models and change-point detection to explore diachronic change of lexical semantic modality in classical and early Christian Latin. We demonstrate several methods for finding and characterizing patterns in the output, and relating them to traditional scholarship in Comparative Literature and Classics. This simple approach to unsupervised models of semantic change can be applied to any suitable corpus, and we conclude with future directions and refinements aiming to allow noisier, less-curated materials to meet that threshold.
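
The abstract does not say which change-point detector is used, so the sketch below is only schematic: a least-squares single change-point search (split the series where a two-segment fit explains the most variance), applied to a made-up drift signal of the kind a dynamic embedding model could produce, e.g. the per-time-slice cosine distance of a word's embedding from its earliest embedding. The data and the detector are illustrative assumptions, not the paper's method.

```python
import numpy as np

def single_changepoint(series):
    """Return the split index that best divides the series into two segments
    with different means, plus the variance explained by that split."""
    series = np.asarray(series, dtype=float)
    total_ss = np.sum((series - series.mean()) ** 2)
    best_t, best_gain = None, -np.inf
    for t in range(2, len(series) - 1):
        left, right = series[:t], series[t:]
        ss = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        if total_ss - ss > best_gain:
            best_t, best_gain = t, total_ss - ss
    return best_t, best_gain

# Toy stand-in for a diachronic drift signal over 60 time slices.
rng = np.random.default_rng(1)
drift = np.concatenate([rng.normal(0.05, 0.02, 30),   # earlier period
                        rng.normal(0.30, 0.02, 30)])  # later period
print(single_changepoint(drift))   # detected split should be near index 30
```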

The mean residual life function is a key functional for a survival distribution. It has a practically useful interpretation as the expected remaining lifetime given survival up to a particular time point, and it also characterizes the survival distribution. However, it has received limited attention in terms of inference methods under a probabilistic modeling framework. We seek to provide general inference methodology for mean residual life regression. We employ Dirichlet process mixture modeling for the joint stochastic mechanism of the covariates and the survival response. This density regression approach implies a flexible model structure for the mean residual life of the conditional response distribution, allowing general shapes for mean residual life as a function of covariates given a specific time point, as well as a function of time given particular values of the covariates. We further extend the mixture model to incorporate dependence across experimental groups. This extension is built from a dependent Dirichlet process prior for the group-specific mixing distributions, with common atoms and weights that vary across groups through latent bivariate Beta distributed random variables. We discuss properties of the regression models, and develop methods for posterior inference. The different components of the methodology are illustrated with simulated data examples, and the model is also applied to a data set comprising right censored survival times.
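
For reference, with T the survival response and S its survival function, the mean residual life and the standard inversion formula behind the claim that it characterizes the survival distribution (assuming a finite mean) are:

```latex
m(t) \;=\; \mathbb{E}\left[\,T - t \mid T > t\,\right] \;=\; \frac{\int_t^{\infty} S(u)\,du}{S(t)},
\qquad
S(t) \;=\; \frac{m(0)}{m(t)}\,\exp\!\left(-\int_0^{t} \frac{du}{m(u)}\right), \quad t \ge 0.
```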

Ordinal pattern dependence has been introduced in order to capture co-monotonic behavior between two time series. This concept has several features one would intuitively demand from a dependence measure. It was believed that ordinal pattern dependence satisfies the axioms that Grothe et al. [8] proposed for a multivariate measure of dependence. In the present article we show that this is not true and that there is a mistake in the article by Betken et al. [5]. Furthermore, we show that ordinal pattern dependence satisfies a slightly modified set of axioms.
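
As a rough illustration of the quantity under discussion -- a sketch of the basic idea, not necessarily the exact estimator analyzed in the articles cited above -- the snippet below extracts ordinal patterns (the permutations that sort consecutive windows) from two series and standardizes the observed rate of coinciding patterns against the rate expected under independence. The pattern order, the simulated data, and this particular standardization are illustrative choices.

```python
import numpy as np
from itertools import permutations

def ordinal_patterns(x, order=3):
    """Ordinal pattern of each window of `order` consecutive values:
    the permutation of indices that sorts the window."""
    x = np.asarray(x, dtype=float)
    return [tuple(int(j) for j in np.argsort(x[i:i + order]))
            for i in range(len(x) - order + 1)]

def ordinal_pattern_dependence(x, y, order=3):
    """Observed probability that both series show the same ordinal pattern,
    standardized against the coincidence probability under independence
    (0 = no co-monotonic association, 1 = maximal)."""
    px, py = ordinal_patterns(x, order), ordinal_patterns(y, order)
    coincide = np.mean([a == b for a, b in zip(px, py)])
    perms = list(permutations(range(order)))
    fx = np.array([np.mean([p == q for p in px]) for q in perms])
    fy = np.array([np.mean([p == q for p in py]) for q in perms])
    expected = float(fx @ fy)      # coincidence rate if the series were independent
    return (coincide - expected) / (1.0 - expected)

rng = np.random.default_rng(2)
common = rng.normal(size=500).cumsum()                 # shared trend
x = common + rng.normal(scale=0.5, size=500)
y = common + rng.normal(scale=0.5, size=500)
print(ordinal_pattern_dependence(x, y))                # noticeably above 0
```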

Recently, addressing spatial confounding has become a major topic in spatial statistics. However, the literature has provided conflicting definitions, and many proposed definitions do not address the issue of confounding as it is understood in causal inference. We define spatial confounding as the existence of an unmeasured causal confounder with a spatial structure. We present a causal inference framework for nonparametric identification of the causal effect of a continuous exposure on an outcome in the presence of spatial confounding. We propose double machine learning (DML), a procedure in which flexible models are used to regress both the exposure and outcome variables on confounders to arrive at a causal estimator with favorable robustness properties and convergence rates, and we prove that this approach is consistent and asymptotically normal under spatial dependence. As far as we are aware, this is the first approach to spatial confounding that does not rely on restrictive parametric assumptions (such as linearity, effect homogeneity, or Gaussianity) for both identification and estimation. We demonstrate the advantages of the DML approach analytically and in simulations. We apply our methods and reasoning to a study of the effect of fine particulate matter exposure during pregnancy on birthweight in California.
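
To make the DML recipe concrete, here is a generic partialling-out sketch for a partially linear model, using scikit-learn random forests with cross-fitting on simulated data. It is not the paper's implementation: the confounder features, the learners, and the simulated spatial surface are assumptions, and the naive standard error below treats observations as independent, whereas the paper's theory explicitly handles spatial dependence.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def dml_partial_linear(y, a, w, n_folds=5, seed=0):
    """DML for y = theta * a + g(w) + noise: cross-fitted residuals of the
    outcome and of the exposure, given confounder features w, are regressed
    on each other to estimate the exposure effect theta."""
    outcome_model = RandomForestRegressor(n_estimators=200, random_state=seed)
    exposure_model = RandomForestRegressor(n_estimators=200, random_state=seed)
    y_res = y - cross_val_predict(outcome_model, w, y, cv=n_folds)
    a_res = a - cross_val_predict(exposure_model, w, a, cv=n_folds)
    theta = np.sum(a_res * y_res) / np.sum(a_res ** 2)
    # Naive (independence-based) standard error; see caveat above.
    psi = (y_res - theta * a_res) * a_res
    se = np.sqrt(np.mean(psi ** 2)) / (np.mean(a_res ** 2) * np.sqrt(len(y)))
    return theta, se

rng = np.random.default_rng(3)
n = 1000
coords = rng.uniform(size=(n, 2))                     # spatial locations as confounder features
u = np.sin(4 * coords[:, 0]) + coords[:, 1]           # smooth spatial confounder
a = u + rng.normal(scale=0.5, size=n)                 # continuous exposure
y = 2.0 * a + 3.0 * u + rng.normal(size=n)            # true effect = 2
print(dml_partial_linear(y, a, coords))
```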

The mean residual life function is a key functional for a survival distribution. It has a practically useful interpretation as the expected remaining lifetime given survival up to a particular time point, and it also characterizes the survival distribution. However, it has received limited attention in terms of inference methods under a probabilistic modeling framework. In this paper, we seek to provide general inference methodology for mean residual life regression. Survival data often include a set of predictor variables for the survival response distribution, and in many cases it is natural to include the covariates as random variables in the modeling. We thus propose a Dirichlet process mixture modeling approach for the joint stochastic mechanism of the covariates and survival responses. This approach implies a flexible model structure for the mean residual life of the conditional response distribution, allowing general shapes for mean residual life as a function of covariates given a specific time point, as well as a function of time given particular values of the covariate vector. To expand the scope of the modeling framework, we extend the mixture model to incorporate dependence across experimental groups, such as treatment and control groups. This extension is built from a dependent Dirichlet process prior for the group-specific mixing distributions, with common locations and weights that vary across groups through latent bivariate beta distributed random variables. We develop properties of the proposed regression models, and discuss methods for prior specification and posterior inference. The different components of the methodology are illustrated with simulated data sets. Moreover, the modeling approach is applied to a data set comprising right censored survival times of patients with small cell lung cancer.

We hypothesize that due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate the model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In the experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
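
Concretely, the conditional utilization rate is an accuracy gap between evaluations of the same trained model. The snippet below just shows the bookkeeping with hypothetical accuracy numbers; how a modality is "removed" at evaluation time (zero-masking, shuffling within a batch, or similar) is an implementation choice the abstract does not fix, and the modality names are made up.

```python
def conditional_utilization_rate(acc_with_both, acc_without_modality):
    """Accuracy gain from giving the model modality m in addition to the
    other modality: u(m | other) = Acc({m, other}) - Acc({other})."""
    return acc_with_both - acc_without_modality

# Hypothetical evaluation results for one multi-modal model:
acc_rgb_and_depth = 0.91   # both modalities available
acc_rgb_only      = 0.88   # depth masked out at evaluation
acc_depth_only    = 0.55   # RGB masked out at evaluation

u_depth_given_rgb = conditional_utilization_rate(acc_rgb_and_depth, acc_rgb_only)    # ~0.03
u_rgb_given_depth = conditional_utilization_rate(acc_rgb_and_depth, acc_depth_only)  # ~0.36
# The large imbalance between the two rates is the signature of greedy,
# RGB-dominated learning described above.
print(u_depth_given_rgb, u_rgb_given_depth)
```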

The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
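
The implicit-regularization claim for the simplest setting discussed here (linear regression with quadratic loss) can be checked numerically: gradient descent started at zero converges to the minimum-norm interpolating solution, i.e. the pseudoinverse solution. The sketch below uses made-up Gaussian data and an arbitrary overparameterized size; it illustrates the principle, not any particular result from the survey.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 20, 200                              # overparameterized: many more features than samples
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

w_min_norm = np.linalg.pinv(X) @ y          # minimum-norm interpolator X^+ y

w = np.zeros(d)                             # gradient descent on 0.5 * ||Xw - y||^2, started at zero
lr = 1.0 / np.linalg.norm(X, 2) ** 2        # step size 1 / largest eigenvalue of X^T X
for _ in range(5000):
    w -= lr * X.T @ (X @ w - y)

print("training residual:", np.linalg.norm(X @ w - y))                   # ~0: perfect fit
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))  # ~0: implicit regularization
```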

Deep learning is usually described as an experiment-driven field under continuous criticism for lacking theoretical foundations. This problem has been partially addressed by a large volume of literature, which has so far not been well organized. This paper reviews and organizes the recent advances in deep learning theory. The literature is categorized into six groups: (1) complexity- and capacity-based approaches for analyzing the generalizability of deep learning; (2) stochastic differential equations and their dynamic systems for modelling stochastic gradient descent and its variants, which characterize the optimization and generalization of deep learning, partially inspired by Bayesian inference; (3) the geometrical structures of the loss landscape that drive the trajectories of the dynamic systems; (4) the roles of over-parameterization of deep neural networks from both positive and negative perspectives; (5) theoretical foundations of several special structures in network architectures; and (6) the increasingly intensive concerns about ethics and security and their relationships with generalizability.
