
This chapter presents an overview of a specific class of limited dependent variable models, namely discrete choice models, in which the dependent (response or outcome) variable takes values that are discrete, inherently ordered, and characterized by an underlying continuous latent variable. Within this setting, the dependent variable may take only two discrete values (such as 0 and 1), giving rise to binary models (e.g., probit and logit models), or more than two values (say $j=1,2, \ldots, J$, where $J$ is some integer, typically small), giving rise to ordinal models (e.g., ordinal probit and ordinal logit models). In these models, the primary goal is to model the probability of responses/outcomes conditional on the covariates. We connect the outcomes of a discrete choice model to the random utility framework in economics, discuss estimation techniques, present the calculation of covariate effects, and describe measures for assessing model fit. Some recent advances in discrete data modeling are also discussed. Following the theoretical review, we utilize the binary and ordinal models to analyze public opinion on marijuana legalization and the extent of legalization -- a socially relevant but controversial topic in the United States. We obtain several interesting results, including that past use of marijuana, beliefs about legalization, and political partisanship are important factors shaping public opinion.
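As a minimal illustration of the link functions involved, the binary logit probability and the ordinal logit category probabilities can be computed as below. This is a plain-Python sketch with hypothetical coefficients and cut-points, not the chapter's estimation code:

```python
import math

def logit_prob(x, beta):
    """P(y = 1 | x) under a binary logit model with linear index x'beta."""
    z = sum(xi * bi for xi, bi in zip(x, beta))
    return 1.0 / (1.0 + math.exp(-z))

def ordinal_logit_probs(x, beta, cuts):
    """Category probabilities P(y = j | x) under an ordinal logit model
    with ordered cut-points gamma_1 < ... < gamma_{J-1} on the latent scale."""
    z = sum(xi * bi for xi, bi in zip(x, beta))
    cdf = [1.0 / (1.0 + math.exp(-(g - z))) for g in cuts]  # P(y <= j | x)
    cdf = [0.0] + cdf + [1.0]
    return [hi - lo for lo, hi in zip(cdf, cdf[1:])]        # P(y = j | x)
```

A probit version would replace the logistic CDF with the standard normal CDF; the structure of the cut-point construction is otherwise identical.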


We present a deterministic algorithm for the efficient evaluation of imaginary time diagrams based on the recently introduced discrete Lehmann representation (DLR) of imaginary time Green's functions. In addition to the efficient discretization of diagrammatic integrals afforded by its approximation properties, the DLR basis is separable in imaginary time, allowing us to decompose diagrams into linear combinations of nested sequences of one-dimensional products and convolutions. Focusing on the strong coupling bold-line expansion of generalized Anderson impurity models, we show that our strategy reduces the computational complexity of evaluating an $M$th-order diagram at inverse temperature $\beta$ from $\mathcal{O}(\beta^{2M-1})$ for a direct quadrature to $\mathcal{O}(M \log^{M+1} \beta)$, with controllable high-order accuracy. We benchmark our algorithm using third-order expansions for multi-band impurity problems with off-diagonal hybridization and spin-orbit coupling, presenting comparisons with exact diagonalization and quantum Monte Carlo approaches. In particular, we perform a self-consistent dynamical mean-field theory calculation for a three-band Hubbard model with strong spin-orbit coupling representing a minimal model of Ca$_2$RuO$_4$, demonstrating the promise of the method for modeling realistic strongly correlated multi-band materials. For expansions of low and intermediate order, in which diagrams can be enumerated, our method provides an efficient, straightforward, and robust black-box evaluation procedure. In this sense, it fills a gap between diagrammatic approximations of the lowest order, which are simple and inexpensive but inaccurate, and those based on Monte Carlo sampling of high-order diagrams.
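The complexity reduction rests on the separability of the DLR basis in imaginary time. As a schematic illustration (generic notation, not the paper's): if a two-point kernel admits a finite expansion
$$K(\tau, \tau') = \sum_{l=1}^{r} \varphi_l(\tau)\, c_l\, \psi_l(\tau'), \qquad \int_0^\beta K(\tau,\tau')\, g(\tau')\, d\tau' = \sum_{l=1}^{r} \varphi_l(\tau)\, c_l \int_0^\beta \psi_l(\tau')\, g(\tau')\, d\tau',$$
then a two-dimensional quadrature collapses into $r$ one-dimensional integrals followed by a sum. Nesting this decomposition across the integration variables of an $M$th-order diagram is what replaces the $\mathcal{O}(\beta^{2M-1})$ direct quadrature with sequences of one-dimensional products and convolutions.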

In many practical scenarios, including finance, the environmental sciences, and system reliability, it is often of interest to study various notions of negative dependence among the observed variables. A new bivariate copula is proposed for modeling negative dependence between two random variables; it complies with most of the popular notions of negative dependence reported in the literature. Specifically, Spearman's rho and Kendall's tau for the proposed copula have a simple one-parameter form and take negative values over their full range. Some important ordering properties comparing the strength of negative dependence with respect to the parameter involved are considered. Simple examples of the corresponding bivariate distributions with popular marginals are presented. Application of the proposed copula is illustrated using a real data set on air quality in New York City, USA.
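Since the proposed copula's closed form is not reproduced here, a quick empirical check of negative dependence in data is the sample version of Kendall's tau mentioned above: the normalized difference between concordant and discordant pairs. A minimal sketch:

```python
def kendall_tau(xs, ys):
    """Sample Kendall's tau: (concordant - discordant) / total pairs.
    Values near -1 indicate strong negative dependence (assumes no ties)."""
    n = len(xs)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)
```

This $O(n^2)$ loop is fine for illustration; production code would use an $O(n \log n)$ merge-sort-based count.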

This paper considers the Cauchy problem for the nonlinear dynamic string equation of Kirchhoff type with time-varying coefficients. The objective of this work is to develop a temporal discretization algorithm capable of approximating a solution to this initial-boundary value problem. To this end, a symmetric three-layer semi-discrete scheme is employed with respect to the temporal variable, wherein the value of the nonlinear term is evaluated at the middle node point. This approach enables the numerical solution at each temporal step to be obtained by inverting linear operators, yielding a system of second-order linear ordinary differential equations. Local convergence of the proposed scheme is established, and it achieves quadratic convergence with respect to the time step on the local temporal interval. We conducted several numerical experiments using the proposed algorithm on various test problems to validate its performance; the numerical results obtained are in accordance with the theoretical findings.
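The linearization trick can be shown on a scalar toy analogue $u'' + q(u)\,u = f$ (a stand-in for the Kirchhoff operator; the coefficient $q$, step size, and problem are illustrative, not the paper's equation). Freezing the nonlinear coefficient at the middle layer and letting the unknown enter through the symmetric average $(u^{k+1} + u^{k-1})/2$ makes each step a linear solve:

```python
def three_layer_step(u_prev, u_curr, tau, q, f=0.0):
    """One step of a symmetric three-layer scheme for u'' + q(u) u = f:
    (u_next - 2 u_curr + u_prev)/tau^2 + q(u_curr)*(u_next + u_prev)/2 = f.
    The nonlinearity is evaluated at the middle layer, so u_next is
    obtained from a linear equation."""
    qk = q(u_curr)
    a = 1.0 / tau**2 + qk / 2.0                       # coefficient of u_next
    rhs = f + 2.0 * u_curr / tau**2 - u_prev / tau**2 - qk * u_prev / 2.0
    return rhs / a
```

With $q \equiv 1$ and $f = 0$ this reduces to a second-order symmetric scheme for the harmonic oscillator, so its iterates track $\cos t$ with $O(\tau^2)$ error, mirroring the quadratic convergence claimed above.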

Despite their superior performance, Large Language Models~(LLMs) require significant computational resources for deployment and use. To overcome this issue, quantization methods have been widely applied to reduce the memory footprint of LLMs and to increase their inference rate. However, a major challenge is that low-bit quantization methods often lead to performance degradation, so it is important to understand how quantization impacts the capacity of LLMs. Different from previous studies focused on overall performance, this work aims to investigate the impact of quantization on \emph{emergent abilities}, which are important characteristics that distinguish LLMs from small language models. Specifically, we examine the abilities of in-context learning, chain-of-thought reasoning, and instruction following in quantized LLMs. Our empirical experiments show that these emergent abilities still exist in 4-bit quantized models, while 2-bit models suffer severe performance degradation in tests of these abilities. To improve the performance of low-bit models, we conduct two special experiments: (1) a fine-grained impact analysis that studies which components (or substructures) are more sensitive to quantization, and (2) performance compensation through model fine-tuning. Our work derives a series of important findings on the impact of quantization on emergent abilities, and sheds light on the possibilities of extremely low-bit quantization for LLMs.
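To see why 2-bit quantization is so much harsher than 4-bit, consider symmetric round-to-nearest quantization with an absmax scale, a minimal sketch of the kind of low-bit scheme discussed (the methods actually evaluated in the paper may differ):

```python
def quantize(weights, bits):
    """Symmetric round-to-nearest quantization to `bits` bits using an
    absmax scale. 4-bit gives integer levels -7..7; 2-bit only -1..1."""
    qmax = 2 ** (bits - 1) - 1                     # 7 for 4-bit, 1 for 2-bit
    absmax = max(abs(w) for w in weights)
    scale = absmax / qmax if absmax else 1.0
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate real weights from integer codes."""
    return [qi * scale for qi in q]
```

At 2 bits every weight collapses onto one of three levels, so the round-trip error grows sharply, which is consistent with the degradation of emergent abilities reported above.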

There is currently no established method for evaluating human response timing across a range of naturalistic traffic conflict types. Traditional notions derived from controlled experiments, such as perception-response time, fail to account for the situation-dependency of human responses and offer no clear way to define the stimulus in many common traffic conflict scenarios. As a result, they are not well suited for application in naturalistic settings. Our main contribution is the development of a novel framework for measuring and modeling response times in naturalistic traffic conflicts, applicable to automated driving systems as well as other traffic safety domains. The framework suggests that response timing must be understood relative to the subject's current (prior) belief and is always embedded in, and dependent on, the dynamically evolving situation. The response process is modeled as a belief update process driven by perceived violations of this prior belief, that is, by surprising stimuli. The framework resolves two key limitations of traditional notions of response time when applied in naturalistic scenarios: (1) the strong situation-dependence of response timing and (2) how to unambiguously define the stimulus. Resolving these issues is a challenge that must be addressed by any response timing model intended to be applied in naturalistic traffic conflicts. We show how the framework can be implemented by means of a relatively simple heuristic model fit to naturalistic human response data from real crashes and near crashes in the SHRP2 dataset, and discuss how it is, in principle, generalizable to any traffic conflict scenario. We also discuss how the response timing framework can be implemented computationally based on evidence accumulation enhanced by machine learning-based generative models and the information-theoretic concept of surprise.
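One way to make the "belief update driven by surprise" idea concrete is a toy accumulator in which each observation contributes its surprisal, $-\log p$, under the prior belief, and a response is issued once the accumulated surprise crosses a threshold. All event labels, probabilities, and parameters below are illustrative, not fitted to SHRP2:

```python
import math

def response_time(prior, observed, threshold=4.0, dt=0.1):
    """Accumulate the surprisal -log p of each observed event under the
    prior belief; respond when the total crosses the threshold.
    Returns the response time in seconds, or None if no response occurs."""
    acc, t = 0.0, 0.0
    for event in observed:
        t += dt
        acc += -math.log(prior[event])
        if acc >= threshold:
            return t
    return None
```

Under this sketch, an expected stimulus stream (high prior probability) accumulates surprise slowly and may never trigger a response, while a belief-violating stream triggers one quickly, capturing the situation-dependence discussed above.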

High-resolution wide-angle fisheye images are becoming increasingly important for robotics applications such as autonomous driving. However, using ordinary convolutional neural networks or vision transformers on this data is problematic due to the projection and distortion losses introduced when projecting the sphere to a rectangular grid on the plane. We introduce the HEAL-SWIN transformer, which combines the highly uniform Hierarchical Equal Area iso-Latitude Pixelation (HEALPix) grid used in astrophysics and cosmology with the Hierarchical Shifted-Window (SWIN) transformer to yield an efficient and flexible model capable of training on high-resolution, distortion-free spherical data. In HEAL-SWIN, the nested structure of the HEALPix grid is used to perform the patching and windowing operations of the SWIN transformer, resulting in a one-dimensional representation of the spherical data with minimal computational overhead. We demonstrate the superior performance of our model for semantic segmentation and depth regression tasks on both synthetic and real automotive datasets. Our code is available at //github.com/JanEGerken/HEAL-SWIN.
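The patching trick relies on a standard property of HEALPix nested indexing: the four children of pixel $p$ one subdivision down are $4p, \ldots, 4p+3$, so every patch at a coarser level corresponds to a contiguous range of the one-dimensional pixel array. A minimal sketch of this index arithmetic (an illustrative helper, not code from the HEAL-SWIN repository):

```python
def nested_children(pix, levels=1):
    """Pixels covered by HEALPix pixel `pix` (nested indexing) after
    `levels` subdivisions: the contiguous range [pix*4^levels, (pix+1)*4^levels).
    This contiguity is what lets patching/windowing act on 1-D slices."""
    start = pix * 4 ** levels
    return list(range(start, start + 4 ** levels))
```

Because patches are contiguous slices, windowed attention can be applied to the flattened spherical data without gather/scatter overhead.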

Continuous-space species distribution models (SDMs) have a long-standing history as a valuable tool in ecological statistical analysis. Geostatistical and preferential models are both common in ecology: geostatistical models are employed when the process under study is independent of the sampling locations, while preferential models are employed when the sampling locations depend on the process under study. But what if both types of data are collected over the same process? Can we combine them, and if so, how? This study investigated the suitability of both geostatistical and preferential models, as well as a mixture model that accounts for the different sampling schemes. The results suggest that the preferential and mixture models generally yield satisfactory and similar results in most cases, while the geostatistical models produce systematically worse estimates under higher spatial complexity, smaller sample sizes, and a lower proportion of completely random samples.

There is abundant observational data in the software engineering domain, whereas running large-scale controlled experiments is often practically impossible. Thus, most empirical studies can only report statistical correlations -- instead of potentially more insightful and robust causal relations. To support analyzing purely observational data for causal relations, and to assess any differences between purely predictive and causal models of the same data, this paper discusses some novel techniques based on structural causal models (such as directed acyclic graphs of causal Bayesian networks). Using these techniques, one can rigorously express, and partially validate, causal hypotheses; and then use the causal information to guide the construction of a statistical model that captures genuine causal relations -- such that correlation does imply causation. We apply these ideas to analyzing public data about programmer performance in Code Jam, a large world-wide coding contest organized by Google every year. Specifically, we look at the impact of different programming languages on a participant's performance in the contest. While the overall effect associated with programming languages is weak compared to other variables -- regardless of whether we consider correlational or causal links -- we found considerable differences between a purely associational and a causal analysis of the very same data. The takeaway message is that even an imperfect causal analysis of observational data can help answer the salient research questions more precisely and more robustly than with just purely predictive techniques -- where genuine causal effects may be confounded.
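The gap between associational and causal estimates can be reproduced in a few lines with a toy structural causal model in which skill confounds both language choice and contest score, while the language itself has zero direct effect. Variable names and effect sizes are purely illustrative, not drawn from the Code Jam data:

```python
import random

def simulate(n=20000, seed=0):
    """Toy SCM: skill -> language choice, skill -> score,
    and *no* direct language -> score effect."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        skill = rng.gauss(0, 1)
        lang = 1 if skill + rng.gauss(0, 1) > 0 else 0   # "fast" language
        score = 2.0 * skill + rng.gauss(0, 1)            # lang coefficient is 0
        rows.append((skill, lang, score))
    return rows

def mean_diff(rows, keep=lambda row: True):
    """Mean score difference between language groups among kept rows."""
    by = {0: [], 1: []}
    for row in rows:
        if keep(row):
            by[row[1]].append(row[2])
    return sum(by[1]) / len(by[1]) - sum(by[0]) / len(by[0])
```

The naive group difference is large and spurious; stratifying on the confounder (e.g., keeping only rows with skill near zero) shrinks it toward the true causal effect of zero, which is the kind of backdoor adjustment that a causal DAG licenses.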

In earlier work, we introduced the framework of language-based decisions, the core idea of which was to modify Savage's classical decision-theoretic framework by taking actions to be descriptions in some language, rather than functions from states to outcomes, as they are defined classically. Actions had the form "if psi then do(phi)", where psi and phi were formulas in some underlying language, specifying what effects would be brought about under what circumstances. The earlier work allowed only one-step actions. But, in practice, plans are typically composed of a sequence of steps. Here, we extend the earlier framework to sequential actions, making it much more broadly applicable. Our technical contribution is a representation theorem in the classical spirit: agents whose preferences over actions satisfy certain constraints can be modeled as if they are expected utility maximizers. As in the earlier work, due to the language-based specification of the actions, the representation theorem requires a construction not only of the probability and utility functions representing the agent's beliefs and preferences, but also of the state and outcome spaces over which these are defined, as well as a "selection function" which intuitively captures how agents disambiguate coarse descriptions. The (unbounded) depth of action sequencing adds substantial interest (and complexity!) to the proof.

In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments of matrix decomposition, favoring the (block) LU decomposition: the factorization of a matrix into the product of lower and upper triangular matrices. Today, matrix decomposition has become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this survey is to give a self-contained introduction to concepts and mathematical tools in numerical linear algebra and matrix analysis, in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. However, we cannot cover all the useful and interesting results concerning matrix decomposition, and given the limited scope of this discussion we omit, for example, the separate analysis of Euclidean spaces, Hermitian spaces, Hilbert spaces, and results in the complex domain. We refer the reader to the linear algebra literature for a more detailed introduction to these related fields.
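The LU decomposition mentioned above can be sketched in a few lines via the Doolittle algorithm, which produces a unit lower-triangular $L$ and upper-triangular $U$ with $A = LU$. This minimal version omits pivoting, so it is a teaching sketch rather than production code (real implementations pivot for numerical stability):

```python
def lu_decompose(A):
    """Doolittle LU decomposition without pivoting: A = L U, where L is
    unit lower-triangular and U is upper-triangular. Fails (division by
    zero) if a leading principal minor vanishes; real code should pivot."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):          # fill row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # fill column i of L below the diagonal
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```

Once $A = LU$ is available, solving $Ax = b$ reduces to two triangular solves, which is the practical payoff of the factorization.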
