
The recent introduction of thermodynamic integration techniques has provided a new framework for understanding and improving variational inference (VI). In this work, we present a careful analysis of the thermodynamic variational objective (TVO), bridging the gap between existing variational objectives and offering new insights to advance the field. In particular, we elucidate how the TVO naturally connects three key variational schemes, namely importance-weighted VI, Rényi-VI, and MCMC-VI, which together subsume most VI objectives employed in practice. To explain the performance gap between theory and practice, we reveal how the pathological geometry of thermodynamic curves negatively affects the TVO. By generalizing the integration path from the geometric mean to the weighted Hölder mean, we extend the theory of the TVO and identify new opportunities for improving VI. This motivates our new VI objectives, named the Hölder bounds, which flatten the thermodynamic curves and promise to achieve a one-step approximation of the exact marginal log-likelihood. A comprehensive discussion of the choices of numerical estimators is provided. We present strong empirical evidence on both synthetic and real-world datasets to support our claims.
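
For reference, the thermodynamic-integration identity that underlies the TVO can be sketched as follows, in the notation of a standard VAE with encoder $q_\phi(z\mid x)$ and generative model $p_\theta(x,z)$; the specific power-mean form shown for the Hölder path is an illustrative assumption rather than the exact parameterization used in the paper.

\[
\pi_\beta(z \mid x) \;\propto\; q_\phi(z \mid x)^{1-\beta}\, p_\theta(x,z)^{\beta}, \qquad \beta \in [0,1],
\qquad
\log p_\theta(x) \;=\; \int_0^1 \mathbb{E}_{\pi_\beta}\!\left[\log \frac{p_\theta(x,z)}{q_\phi(z \mid x)}\right] d\beta .
\]

The TVO discretizes this integral with a Riemann sum (the left sum with a single knot at $\beta = 0$ recovers the ELBO), and a Hölder-type path replaces the geometric mean above with a weighted power mean, e.g. $\pi_{\beta,\alpha}(z \mid x) \propto \big((1-\beta)\, q_\phi(z \mid x)^{\alpha} + \beta\, p_\theta(x,z)^{\alpha}\big)^{1/\alpha}$, which flattens the resulting thermodynamic curve.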

Related Content


We develop a family of cut finite element methods (CutFEM) of different orders, based on the discontinuous Galerkin framework, for hyperbolic conservation laws with stationary interfaces in both one and two space dimensions, and for moving interfaces in one space dimension. Interface conditions are imposed weakly, in such a way that both conservation and stability are ensured. A CutFEM with discontinuous elements in space is developed and coupled to standard explicit time-stepping schemes for linear advection problems and the acoustic wave problem with stationary interfaces. In the case of moving interfaces, we propose a space-time CutFEM based on discontinuous elements in both space and time for linear advection problems. We show that the proposed CutFEM are conservative and energy stable. For the stationary interface case, an a priori error estimate is proven. Numerical computations in both one and two space dimensions support the analysis and, in addition, demonstrate that the proposed methods have the expected accuracy.
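
As a point of reference for the weak imposition of interface conditions, here is a minimal sketch of a standard discontinuous Galerkin semi-discretization for 1D linear advection, $\partial_t u + a\,\partial_x u = 0$ with $a > 0$; it illustrates the mechanism only and does not reproduce the paper's cut-element construction or stabilization. On each element $K = (x_{k-1/2}, x_{k+1/2})$ and for each test function $v_h$,

\[
\int_K \partial_t u_h\, v_h \, dx \;-\; \int_K a\, u_h\, \partial_x v_h \, dx \;+\; \hat f_{k+1/2}\, v_h(x_{k+1/2}^-) \;-\; \hat f_{k-1/2}\, v_h(x_{k-1/2}^+) \;=\; 0,
\]

with the upwind numerical flux $\hat f = a\, u_h^-$. At a material interface, the physical coupling condition (for instance, continuity of the flux) is built into the numerical flux at that point rather than into the approximation space, so it is enforced weakly; summing over elements, the flux terms telescope, which is what yields conservation.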

This paper introduces a new Bayesian changepoint approach, called the decoupled approach, that separates the process of modeling from the changepoint analysis. The approach utilizes a Bayesian dynamic linear model (DLM) for the modeling step and a weighted penalized likelihood estimator on the posterior of the Bayesian DLM to identify changepoints. A Bayesian DLM with shrinkage priors can provide smooth estimates of the underlying trend in the presence of complex noise components; however, the inability to shrink exactly to zero makes changepoint analysis difficult. Penalized likelihood estimators can be effective in estimating the location of changepoints; however, they require a relatively smooth estimate of the data. The decoupled approach combines the flexibility of the Bayesian DLM with the hard-thresholding property of the penalized likelihood estimator to extend the application of changepoint analysis. The approach provides a robust framework that allows for the identification of changepoints in highly complex Bayesian models. It can identify changes in the mean, in higher-order trends, and in regression coefficients. We illustrate the approach's flexibility and robustness by comparing it against several alternative methods in a wide range of simulations and two real-world examples.
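
Schematically, and in notation that is ours rather than the paper's, the decoupled approach can be viewed as a two-step procedure: first obtain the posterior $p(\mu_{1:T} \mid y_{1:T})$ of the latent trend from the Bayesian DLM, and then summarize it with a weighted, penalized fit such as

\[
\hat\gamma \;=\; \arg\min_{\gamma_{1:T}} \; \mathbb{E}_{\mu \mid y}\!\left[ \sum_{t=1}^{T} w_t \,(\mu_t - \gamma_t)^2 \right] \;+\; \lambda \sum_{t=2}^{T} \lvert \gamma_t - \gamma_{t-1} \rvert ,
\]

declaring a changepoint wherever the penalized solution $\hat\gamma$ actually changes; the $\ell_1$ (fused-lasso-type) penalty shown here is only one illustrative choice of thresholding penalty.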

The aim of this note is to state a couple of general results about the properties of the penalized maximum likelihood estimator (pMLE) and of the posterior distribution for parametric models in a non-asymptotic setup and for a possibly large or even infinite parameter dimension. We consider a special class of stochastically linear smooth (SLS) models satisfying two major conditions: the stochastic component of the log-likelihood is linear in the model parameter, while the expected log-likelihood is a smooth function. The main results simplify considerably if the expected log-likelihood is concave. For the pMLE, we establish a number of finite-sample bounds on its concentration and large deviations, as well as the Fisher and Wilks expansions. The latter results extend the classical asymptotic Fisher and Wilks theorems about the MLE to the non-asymptotic setup with a large parameter dimension that can depend on the sample size. For the posterior distribution, our main result states a Gaussian approximation of the posterior, which can be viewed as a finite-sample analog of the prominent Bernstein--von Mises theorem. In all bounds, the remainder is given explicitly and can be evaluated in terms of the effective sample size and the effective parameter dimension. The results are dimension- and coordinate-free. In spite of their generality, all the presented bounds are nearly sharp, and the classical asymptotic results can be obtained as simple corollaries. An interesting case of logit regression with smooth or truncation priors is used to specify the results and to explain the main notions.
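
In simplified notation, the SLS structure and the quantities appearing in the stated expansions can be sketched as follows; the quadratic penalty and the exact remainder terms are illustrative here and are spelled out precisely in the note.

\[
L(\theta) - \mathbb{E} L(\theta) \;=\; \langle \zeta, \theta \rangle
\quad\text{(stochastic component linear in } \theta\text{)},
\qquad
\tilde\theta_G \;=\; \arg\max_{\theta} \Big\{ L(\theta) - \tfrac{1}{2}\lVert G\theta \rVert^2 \Big\}.
\]

Writing $D^2$ for the (penalized) Fisher information operator, the Fisher expansion states that $D(\tilde\theta_G - \theta^*_G)$ is close to $D^{-1}\zeta$, and the Wilks expansion states that $2\big\{ L(\tilde\theta_G) - L(\theta^*_G) \big\}$ is close to $\lVert D^{-1}\zeta \rVert^2$, with explicit, non-asymptotic remainders.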

Dyadic data is often encountered when quantities of interest are associated with the edges of a network. As such it plays an important role in statistics, econometrics and many other data science disciplines. We consider the problem of uniformly estimating a dyadic Lebesgue density function, focusing on nonparametric kernel-based estimators which take the form of U-process-like dyadic empirical processes. We provide uniform point estimation and distributional results for the dyadic kernel density estimator, giving valid and feasible procedures for robust uniform inference. Our main contributions include the minimax-optimal uniform convergence rate of the dyadic kernel density estimator, along with strong approximation results for the associated standardized $t$-process. A consistent variance estimator is introduced in order to obtain analogous results for the Studentized $t$-process, enabling the construction of provably valid and feasible uniform confidence bands for the unknown density function. A crucial feature of U-process-like dyadic empirical processes is that they may be "degenerate" at some or possibly all points in the support of the data, a property making our uniform analysis somewhat delicate. Nonetheless we show formally that our proposed methods for uniform inference remain robust to the potential presence of such unknown degenerate points. For the purpose of implementation, we discuss uniform inference procedures based on positive semi-definite covariance estimators, mean squared error optimal bandwidth selectors and robust bias-correction methods. We illustrate the empirical finite-sample performance of our robust inference methods in a simulation study. Our technical results concerning strong approximations and maximal inequalities are of potential independent interest.
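
For concreteness, the dyadic kernel density estimator under study takes the standard form sketched below, where $W_{ij}$ denotes the dyadic observation attached to the pair of nodes $(i,j)$, $K$ is a kernel, and $h$ a bandwidth; the notation is ours.

\[
\hat f_W(w) \;=\; \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} \frac{1}{h}\, K\!\left( \frac{W_{ij} - w}{h} \right),
\]

a U-statistic over the $n$ nodes whose Hoeffding-type projection may vanish, i.e. be degenerate, at some or all evaluation points $w$, which is the feature that makes the uniform analysis delicate.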

Although the average treatment effect (ATE) measures the change in social welfare, even a positive ATE leaves a risk of a negative effect on, say, some 10% of the population. Assessing such risk is difficult, however, because any one individual treatment effect (ITE) is never observed, so the 10% worst-affected cannot be identified, while distributional treatment effects only compare the first deciles within each treatment group, which does not correspond to any 10%-subpopulation. In this paper we consider how to nonetheless assess this important risk measure, formalized as the conditional value at risk (CVaR) of the ITE distribution. We leverage the availability of pre-treatment covariates and characterize the tightest-possible upper and lower bounds on the ITE-CVaR given by the covariate-conditional average treatment effect (CATE) function. Some bounds can also be interpreted as summarizing a complex CATE function into a single metric and are of interest independently of being a bound. We then proceed to study how to estimate these bounds efficiently from data and construct confidence intervals. This is challenging even in randomized experiments, as it requires understanding the distribution of the unknown CATE function, which can be very complex if we use rich covariates so as to best control for heterogeneity. We develop a debiasing method that overcomes this and prove that it enjoys favorable statistical properties even when the CATE and other nuisances are estimated by black-box machine learning or even inconsistently. Studying a hypothetical change to French job-search counseling services, our bounds and inference demonstrate that a small social benefit entails a negative impact on a substantial subpopulation.
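
To fix ideas, with the lower-tail convention for CVaR at level $\alpha$ and $\tau(X) = \mathbb{E}[Y(1) - Y(0) \mid X]$ the CATE, the risk measure and one representative bound can be sketched as follows; the tightest bounds derived in the paper are sharper than this display.

\[
\mathrm{CVaR}_\alpha(Z) \;=\; \frac{1}{\alpha} \int_0^{\alpha} F_Z^{-1}(u)\, du
\;=\; \max_{\beta \in \mathbb{R}} \Big\{ \beta - \tfrac{1}{\alpha}\, \mathbb{E}\big[ (\beta - Z)_+ \big] \Big\},
\qquad
\mathrm{CVaR}_\alpha\big( Y(1) - Y(0) \big) \;\le\; \mathrm{CVaR}_\alpha\big( \tau(X) \big),
\]

the inequality following from conditional Jensen, since the ITE is a conditionally mean-zero perturbation of the CATE; this is the sense in which a CATE-based quantity serves as an upper bound on the ITE-CVaR.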

Undertaking causal inference with observational data is incredibly useful across a wide range of tasks, including the development of medical treatments, advertising and marketing, and policy making. There are two significant challenges associated with undertaking causal inference using observational data: treatment assignment heterogeneity (\textit{i.e.}, differences between the treated and untreated groups), and an absence of counterfactual data (\textit{i.e.}, not knowing what would have happened if an individual who did receive treatment had instead not been treated). We address these two challenges by combining structured inference and targeted learning. In terms of structure, we factorize the joint distribution into risk, confounding, instrumental, and miscellaneous factors; in terms of targeted learning, we apply a regularizer derived from the influence curve in order to reduce residual bias. An ablation study is undertaken, and an evaluation on benchmark datasets demonstrates that the resulting model, TVAE, achieves competitive, state-of-the-art performance.
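
As background for the influence-curve regularizer mentioned above, recall the efficient influence curve of the ATE $\psi$, written with outcome model $Q(t,x) = \mathbb{E}[Y \mid T = t, X = x]$ and propensity score $e(x) = P(T = 1 \mid X = x)$; how this term is turned into a training penalty is specific to the paper, so the display is only background.

\[
\varphi(Y, T, X) \;=\; \frac{T}{e(X)} \big( Y - Q(1, X) \big) \;-\; \frac{1 - T}{1 - e(X)} \big( Y - Q(0, X) \big) \;+\; Q(1, X) - Q(0, X) \;-\; \psi .
\]

A targeted regularizer drives the empirical mean of such a term toward zero during training, which is what reduces the residual (plug-in) bias of the treatment-effect estimate.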

When are inferences (whether Direct-Likelihood, Bayesian, or Frequentist) obtained from partial data valid? This paper answers this question by offering a new asymptotic theory about inference with missing data that is more general than existing theories. Using more powerful tools from real analysis and probability theory than those used in previous research, it proves that, as the sample size increases and the extent of missingness decreases, the mean log-likelihood function generated by partial data while ignoring the missingness mechanism will almost surely converge uniformly to the one that would have been generated by complete data; and if the data are Missing at Random, this convergence depends only on the sample size. Thus, inferences from partial data, such as posterior modes, uncertainty estimates, confidence intervals, likelihood ratios, test statistics, and indeed all quantities or features derived from the partial-data log-likelihood function, will be consistently estimated and will approximate their complete-data analogues. This adds to previous research, which has only proved the consistency and asymptotic normality of the posterior mode and developed separate theories for Direct-Likelihood, Bayesian, and Frequentist inference. Practical implications of this result are discussed, and the theory is verified using a previous study of International Human Rights Law.
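
In symbols, and suppressing regularity conditions, the central claim can be sketched as follows, with $\ell_n^{\mathrm{obs}}$ the partial-data log-likelihood that ignores the missingness mechanism and $\ell_n^{\mathrm{com}}$ its complete-data counterpart; the precise conditions on the rate of missingness are given in the paper.

\[
\ell_n^{\mathrm{obs}}(\theta) \;=\; \sum_{i=1}^{n} \log \int f\big( y_{i,\mathrm{obs}}, y_{i,\mathrm{mis}} \mid \theta \big)\, dy_{i,\mathrm{mis}},
\qquad
\sup_{\theta \in \Theta} \Big| \tfrac{1}{n}\, \ell_n^{\mathrm{obs}}(\theta) - \tfrac{1}{n}\, \ell_n^{\mathrm{com}}(\theta) \Big| \;\xrightarrow{\;\mathrm{a.s.}\;}\; 0,
\]

so that any quantity obtained as a suitably continuous functional of the mean log-likelihood, such as posterior modes, likelihood ratios, or test statistics, approximates its complete-data analogue.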

We consider the exploration-exploitation trade-off in reinforcement learning and we show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space, and it is similar to other well-known methods in the literature, including Q-learning, soft-Q-learning, and maximum entropy policy gradient, and is closely related to optimism and count based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
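
The sense in which the K-values are optimistic can be illustrated with the exponential (risk-seeking) utility that underlies them, where $\tau > 0$ is the risk-seeking/temperature parameter and the expectation is over the agent's epistemic uncertainty about the Q-value; this display illustrates the Jensen-gap argument only and is not the paper's exact Bellman recursion.

\[
K(s, a) \;=\; \tau \log \mathbb{E}\big[ \exp\big( Q(s, a) / \tau \big) \big] \;\ge\; \mathbb{E}\big[ Q(s, a) \big],
\qquad
\pi(a \mid s) \;\propto\; \exp\big( K(s, a) / \tau \big),
\]

so the same parameter $\tau$ plays the role of both the Boltzmann temperature and the degree of risk-seeking.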

Amortized inference has led to efficient approximate inference for large datasets. The quality of posterior inference is largely determined by two factors: a) the ability of the variational distribution to model the true posterior and b) the capacity of the recognition network to generalize inference over all datapoints. We analyze approximate inference in variational autoencoders in terms of these factors. We find that suboptimal inference is often due to amortizing inference rather than the limited complexity of the approximating distribution. We show that this is due partly to the generator learning to accommodate the choice of approximation. Furthermore, we show that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference rather than simply improving the complexity of the approximation.
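
The two factors above correspond to a standard decomposition of the per-datapoint inference gap, with $\mathcal{L}[q]$ the ELBO under a variational distribution $q$, $q^*$ the best distribution within the variational family, and $q_\phi$ the amortized distribution produced by the recognition network:

\[
\underbrace{\log p(x) - \mathcal{L}[q_\phi]}_{\text{inference gap}}
\;=\;
\underbrace{\log p(x) - \mathcal{L}[q^*]}_{\text{approximation gap}}
\;+\;
\underbrace{\mathcal{L}[q^*] - \mathcal{L}[q_\phi]}_{\text{amortization gap}},
\]

and the finding that suboptimal inference is largely due to amortization says that the second term often dominates the first.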
