
Several non-linear functions and machine learning methods have been developed for flexible specification of the systematic utility in discrete choice models. However, they lack interpretability, do not ensure monotonicity conditions, and restrict substitution patterns. We address the first two challenges by modelling the systematic utility using the Choquet Integral (CI) function, and the third by embedding the CI into the multinomial probit (MNP) choice probability kernel. We also extend the MNP-CI model to account for attribute cut-offs, which enable a modeller to approximately mimic semi-compensatory behaviour using traditional choice experiment data. The MNP-CI model is estimated using a constrained maximum likelihood approach, and its statistical properties are validated through a comprehensive Monte Carlo study. The CI-based choice model is empirically advantageous as it captures interaction effects while maintaining monotonicity. It also provides information on the complementarity between pairs of attributes, coupled with their importance ranking, as a by-product of the estimation. These insights could assist policymakers in designing policies to improve the preference level of an alternative. These advantages of the MNP-CI model with attribute cut-offs are illustrated in an empirical application studying New Yorkers' preferences towards mobility-on-demand services.
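
To make the utility specification concrete, here is a minimal Python sketch of the discrete Choquet integral of an attribute vector with respect to a capacity (a monotone set function). The three-attribute capacity values are purely illustrative, not estimates from the paper; the monotonicity the capacity must satisfy is the property the constrained maximum likelihood estimation enforces.

```python
import numpy as np

def choquet_integral(x, capacity):
    """Discrete Choquet integral of a non-negative attribute vector x
    with respect to a capacity: a monotone set function with
    capacity[empty set] = 0 and capacity[full set] = 1."""
    order = [int(i) for i in np.argsort(x)]          # attributes, ascending
    x_sorted = np.concatenate(([0.0], np.asarray(x, float)[order]))
    value = 0.0
    for i in range(1, len(x) + 1):
        A = frozenset(order[i - 1:])                 # attributes with value >= x_(i)
        value += (x_sorted[i] - x_sorted[i - 1]) * capacity[A]
    return value

# Hypothetical capacity over three normalized attributes (not estimates).
capacity = {frozenset(): 0.0,
            frozenset([0]): 0.3, frozenset([1]): 0.4, frozenset([2]): 0.2,
            frozenset([0, 1]): 0.8, frozenset([0, 2]): 0.5,
            frozenset([1, 2]): 0.6, frozenset([0, 1, 2]): 1.0}
print(choquet_integral([0.2, 0.7, 0.5], capacity))   # systematic utility
```

Interaction effects are read off the capacity: here capacity[{0,1}] = 0.8 exceeds capacity[{0}] + capacity[{1}] = 0.7, i.e. attributes 0 and 1 act as complements.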

Related Content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modelling, from languages and methods to tools and applications. Its attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modelling and model-driven software and systems. This year's edition will give the modelling community opportunities to further advance the foundations of modelling and to propose innovative applications of modelling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
10 February 2022

Non-stationarity is a thorny issue in cooperative multi-agent reinforcement learning (MARL). One of its causes is that agents' policies change during the learning process. Existing works have discussed various consequences of non-stationarity using several kinds of measurement indicators, which makes the objectives or goals of existing algorithms inevitably inconsistent and disparate. In this paper, we introduce a novel notion, the $\delta$-measurement, to explicitly measure the non-stationarity of a policy sequence, and we prove that it is bounded by the KL-divergence of consecutive joint policies. A straightforward but highly non-trivial approach is to control the divergence of consecutive joint policies by imposing a trust-region constraint on the joint policy, but this divergence is difficult to estimate accurately. Decomposing the joint policy and imposing trust-region constraints on the factorized policies has lower computational complexity, but a simple factorization such as the mean-field approximation leads to considerably larger policy divergence; we refer to this as the trust-region decomposition dilemma. We model the joint policy as a pairwise Markov random field and propose a trust-region decomposition network (TRD-Net) based on message passing to estimate the joint policy divergence more accurately. The Multi-Agent Mirror descent policy algorithm with Trust region decomposition, called MAMT, is established by adaptively adjusting the trust regions of the local policies in an end-to-end manner. MAMT approximately constrains the divergence of consecutive joint policies to satisfy $\delta$-stationarity, thereby alleviating the non-stationarity problem. Our method brings noticeable and stable performance improvements over baselines in cooperative tasks of varying complexity.
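
As a hedged illustration of why factorized trust regions are attractive, the following sketch (hypothetical two-agent categorical policies, not the paper's code) verifies numerically that for fully factorized (product) joint policies, the joint KL-divergence decomposes exactly into a sum of per-agent KLs.

```python
import numpy as np

def kl_categorical(p, q):
    """KL divergence between two categorical distributions (no zero entries)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

# Two agents with factorized policies at consecutive steps t and t+1.
pi_t  = [np.array([0.6, 0.4]), np.array([0.7, 0.3])]
pi_t1 = [np.array([0.5, 0.5]), np.array([0.6, 0.4])]

# For product policies, the joint KL equals the sum of per-agent KLs,
# so local trust regions control the joint divergence exactly.
local_sum = sum(kl_categorical(p, q) for p, q in zip(pi_t, pi_t1))
joint_t   = np.outer(pi_t[0], pi_t[1]).ravel()
joint_t1  = np.outer(pi_t1[0], pi_t1[1]).ravel()
print(np.isclose(local_sum, kl_categorical(joint_t, joint_t1)))  # True
```

Once the joint policy carries correlations between agents, as in the pairwise Markov random field model above, this equality fails, which is the gap that TRD-Net's message passing is designed to estimate.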

Rapid technological advances have allowed for molecular profiling across multiple omics domains from a single sample for clinical decision making in many diseases, especially cancer. As tumor development and progression are dynamic biological processes involving composite genomic aberrations, key challenges are to effectively assimilate information from these domains to identify genomic signatures and biological entities that are druggable, develop accurate risk prediction profiles for future patients, and identify novel patient subgroups for tailored therapy and monitoring. We propose integrative probabilistic frameworks for high-dimensional multiple-domain cancer data that coherently incorporate dependence within and between domains to accurately detect tumor subtypes, thus providing a catalogue of genomic aberrations associated with cancer taxonomy. We propose an innovative, flexible and scalable Bayesian nonparametric framework for simultaneous clustering of both tumor samples and genomic probes. We describe an efficient variable selection procedure to identify relevant genomic aberrations that can potentially reveal underlying drivers of a disease. Although the work is motivated by several investigations related to lung cancer, the proposed methods are broadly applicable in a variety of contexts involving high-dimensional data. The success of the methodology is demonstrated using artificial data and lung cancer omics profiles publicly available from The Cancer Genome Atlas.
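
As a small illustration of the Bayesian nonparametric ingredient, the sketch below draws cluster assignments from a Chinese restaurant process, which lets the number of tumor subtypes be inferred from the data rather than fixed in advance. The concentration parameter and sample count are arbitrary choices for illustration, not part of the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 1.0, 50                    # concentration parameter, sample count
assignments = [0]                     # first sample opens the first cluster
for i in range(1, n):
    counts = np.bincount(assignments)
    probs = np.append(counts, alpha) / (i + alpha)  # join a cluster vs. open one
    assignments.append(int(rng.choice(len(probs), p=probs)))
print(len(set(assignments)), "clusters among", n, "samples")
```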

Background. The bootComb R package allows researchers to derive confidence intervals with correct target coverage for arbitrary combinations of arbitrary numbers of independently estimated parameters. Previous versions (< 1.1.0) of bootComb used independent bootstrap sampling and required that the parameters themselves be independent - an unrealistic assumption in some real-world applications. Findings. Using Gaussian copulas to define the dependence between parameters, the bootComb package has been extended to allow for dependent parameters. Implications. The updated bootComb package can now handle cases of dependent parameters, with users specifying a correlation matrix that defines the dependence structure. While in practice it may be difficult to know the exact dependence structure between parameters, bootComb allows running sensitivity analyses to assess the impact of parameter dependence on the resulting confidence interval for the combined parameter. Availability. bootComb is available from the Comprehensive R Archive Network (https://CRAN.R-project.org/package=bootComb).
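
The following Python sketch illustrates the Gaussian-copula idea (bootComb itself is an R package, and this is not its interface): correlated standard normals are transformed to uniforms and then pushed through marginal quantile functions, after which a percentile interval for the combined parameter is read off. The beta marginals, the 0.5 correlation, and the product combination function are all hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
corr = np.array([[1.0, 0.5],
                 [0.5, 1.0]])                    # user-specified dependence
z = rng.multivariate_normal(np.zeros(2), corr, size=100_000)
u = stats.norm.cdf(z)                            # Gaussian copula: uniform margins
theta1 = stats.beta(40, 60).ppf(u[:, 0])         # hypothetical marginal, parameter 1
theta2 = stats.beta(25, 75).ppf(u[:, 1])         # hypothetical marginal, parameter 2
combined = theta1 * theta2                       # hypothetical combination function
lo, hi = np.percentile(combined, [2.5, 97.5])    # percentile 95% interval
print(round(lo, 4), round(hi, 4))
```

Re-running this with different off-diagonal entries in `corr` is exactly the kind of sensitivity analysis the abstract describes.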

In our previous work [SIAM J. Sci. Comput. 43(3) (2021) B784-B810], we developed an accurate hyper-singular boundary integral equation method for dynamic poroelasticity in two dimensions. This work is devoted to the more complex and difficult three-dimensional problems with Neumann boundary conditions, and both direct and indirect methods are adopted to construct combined boundary integral equations. The strongly-singular and hyper-singular integral operators are reformulated as compositions of weakly-singular integral operators and tangential-derivative operators, which allows us to prove the jump relations associated with the poroelastic layer potentials and boundary integral operators in a simple manner. Relying on the investigated spectral properties of the strongly-singular operators, whose eigenvalues accumulate at three points that depend only on the two Lam\'e constants, together with the spectral properties of the Calder\'on relations of poroelasticity, we propose regularized integral equations with low GMRES iteration counts. Numerical examples computed with a Chebyshev-based rectangular-polar solver demonstrate the accuracy and efficiency of the proposed methodology.

We propose a framework, named the postselected inflation framework, to obtain converging outer approximations of the sets of probability distributions that are compatible with classical multi-network scenarios. Here, a network is a bilayer directed acyclic graph with a layer of sources of classical randomness, a layer of agents, and edges specifying the connectivity between the agents and the sources. A multi-network scenario is a list of such networks, together with a specification of subsets of agents using the same strategy. An outer approximation of the set of multi-network correlations provides means to certify the infeasibility of a list of agent outcome distributions. We furthermore show that the postselected inflation framework is mathematically equivalent to the standard inflation framework: in that respect, our results allow us to gain further insight into the convergence proof of the inflation hierarchy of Navascu\'es and Wolfe [arXiv:1707.06476], and extend it to the case of multi-network scenarios.

Learning a graph topology to reveal the underlying relationship between data entities plays an important role in various machine learning and data analysis tasks. Under the assumption that structured data vary smoothly over a graph, the problem can be formulated as a regularised convex optimisation over a positive semidefinite cone and solved by iterative algorithms. Classic methods require an explicit convex function to reflect generic topological priors, e.g. the $\ell_1$ penalty for enforcing sparsity, which limits the flexibility and expressiveness in learning rich topological structures. We propose to learn a mapping from node data to the graph structure based on the idea of learning to optimise (L2O). Specifically, our model first unrolls an iterative primal-dual splitting algorithm into a neural network. The key structural proximal projection is replaced with a variational autoencoder that refines the estimated graph with enhanced topological properties. The model is trained in an end-to-end fashion with pairs of node data and graph samples. Experiments on both synthetic and real-world data demonstrate that our model is more efficient than classic iterative algorithms in learning a graph with specific topological properties.
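
The smoothness assumption underlying the data term can be made concrete with the standard Laplacian quadratic-form identity, verified numerically below on a toy path graph (the graph and signals are arbitrary; this is the prior behind the formulation, not the authors' L2O model).

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)       # weighted adjacency (path graph)
L = np.diag(W.sum(axis=1)) - W               # combinatorial graph Laplacian
X = rng.standard_normal((3, 5))              # one 5-dimensional signal per node

quad = np.trace(X.T @ L @ X)
pairwise = 0.5 * sum(W[i, j] * np.sum((X[i] - X[j]) ** 2)
                     for i in range(3) for j in range(3))
print(np.isclose(quad, pairwise))            # True: the smoothness data term
```

Signals vary smoothly over the graph exactly when this quadratic form is small, which is the quantity the learned optimiser trades off against topological priors.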

This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models embedded with causal assumptions that allow for the inference and reasoning about the behavior of stochastic systems affected by external manipulation (interventions). This thesis contributes to the research areas concerning the estimation of causal effects, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction methods. We present novel and consistent linear and non-linear causal effects estimators in instrumental variable settings that employ data-dependent mean squared prediction error regularization. Our proposed estimators show, in certain settings, mean squared error improvements compared to both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics. This connection leads us to prove that general K-class estimators possess distributional robustness properties. We, furthermore, propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable in additive noise models with directed trees as causal graphs. We prove consistency in a vanishing identifiability setup and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.
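
As one concrete point of reference for the K-class results mentioned above, here is a minimal simulation sketch of the K-class estimator in a linear instrumental-variable setting; the data-generating process and all parameter values are invented for illustration.

```python
import numpy as np

def k_class(y, X, Z, kappa):
    """K-class estimator: kappa = 0 is OLS, kappa = 1 is 2SLS."""
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)    # projection onto the instruments
    W = np.eye(len(y)) - kappa * (np.eye(len(y)) - P)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Hidden confounder H drives both X and y; Z is a valid instrument.
rng = np.random.default_rng(0)
n = 2000
Z = rng.standard_normal((n, 1))
H = rng.standard_normal(n)
X = (Z[:, 0] + H + 0.5 * rng.standard_normal(n)).reshape(-1, 1)
y = 2.0 * X[:, 0] + H + 0.5 * rng.standard_normal(n)   # true causal effect = 2
print(k_class(y, X, Z, 0.0), k_class(y, X, Z, 1.0))    # OLS biased, 2SLS ~ 2
```

Intermediate values of kappa interpolate between the two, which is the family whose distributional robustness the thesis characterizes.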

We propose a general and scalable approximate sampling strategy for probabilistic models with discrete variables. Our approach uses gradients of the likelihood function with respect to its discrete inputs to propose updates in a Metropolis-Hastings sampler. We show empirically that this approach outperforms generic samplers in a number of difficult settings including Ising models, Potts models, restricted Boltzmann machines, and factorial hidden Markov models. We also demonstrate the use of our improved sampler for training deep energy-based models on high dimensional discrete data. This approach outperforms variational auto-encoders and existing energy-based models. Finally, we give bounds showing that our approach is near-optimal in the class of samplers which propose local updates.
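
A minimal sketch in the spirit of this gradient-informed proposal, for an Ising model with a symmetric coupling matrix: the gradient of the continuous relaxation of the log-likelihood scores every single-spin flip, a flip index is sampled from a softmax over those scores, and a standard Metropolis-Hastings correction keeps the chain exact. Details such as the 1/2 tempering reflect my reading of the approach and should be treated as assumptions, not the authors' reference implementation.

```python
import numpy as np

def gwg_step(x, J, b, rng):
    """One MH step with a gradient-informed single-flip proposal for an
    Ising model f(x) = 0.5 x^T J x + b^T x, spins in {-1,+1}, J symmetric."""
    f = lambda s: 0.5 * s @ J @ s + b @ s
    grad = lambda s: J @ s + b                 # gradient of the relaxation

    # First-order estimate of f(flip_i(x)) - f(x) is -2 x_i grad_i(x).
    logits = -x * grad(x)                      # estimated change / 2 (tempering)
    q = np.exp(logits - logits.max()); q /= q.sum()
    i = rng.choice(len(x), p=q)

    y = x.copy(); y[i] = -y[i]                 # proposed flip
    logits_y = -y * grad(y)
    q_y = np.exp(logits_y - logits_y.max()); q_y /= q_y.sum()

    accept = np.exp(f(y) - f(x)) * q_y[i] / q[i]
    return y if rng.random() < min(1.0, accept) else x

# Usage on a random symmetric coupling matrix (purely illustrative).
rng = np.random.default_rng(0)
n = 10
J = rng.standard_normal((n, n)); J = 0.5 * (J + J.T); np.fill_diagonal(J, 0.0)
b = rng.standard_normal(n)
x = rng.choice([-1.0, 1.0], size=n)
for _ in range(200):
    x = gwg_step(x, J, b, rng)
print(x)
```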

Image segmentation remains an open problem, especially when the intensities of the objects of interest overlap due to intensity inhomogeneity (also known as a bias field). To segment images with intensity inhomogeneities, we propose a bias-correction-embedded level set model in which Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated by a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite this energy as the data term of the proposed model. As in popular level set methods, a regularization term and an arc length term are also included to regularize and smooth the level set function, respectively. The proposed model is then extended to multichannel and multiphase settings to segment colour images and images with multiple objects, respectively. It has been extensively tested on synthetic and real images widely used in the literature, as well as on the public BrainWeb and IBSR datasets. Experimental results and comparisons with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy.
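
To illustrate the bias model, the sketch below represents a smooth 2-D field as a linear combination of orthogonal primary functions (tensor-product Legendre polynomials up to degree 2, an illustrative basis choice) and recovers the combination coefficients by least squares. This is an additive toy version; in the paper the bias enters multiplicatively and the coefficients are updated within the level-set energy minimisation.

```python
import numpy as np

h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
u = 2.0 * xx / (w - 1) - 1.0                 # rescale coordinates to [-1, 1]
v = 2.0 * yy / (h - 1) - 1.0
# Orthogonal primary functions: tensor-product Legendre basis up to degree 2.
basis = np.stack([np.ones_like(u), u, v, u * v,
                  1.5 * u**2 - 0.5, 1.5 * v**2 - 0.5], axis=-1)

rng = np.random.default_rng(0)
true_coeffs = rng.normal(size=basis.shape[-1])
bias = basis @ true_coeffs                   # smooth synthetic bias field
observed = bias + 0.05 * rng.standard_normal((h, w))

# Least-squares estimate of the combination coefficients.
A = basis.reshape(-1, basis.shape[-1])
coeffs, *_ = np.linalg.lstsq(A, observed.ravel(), rcond=None)
print(np.round(coeffs - true_coeffs, 3))     # near-zero estimation error
```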

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss extensions of the proposed setup such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
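
For reference, here is a compact sketch of the constant-momentum variant of Nesterov's accelerated gradient descent for a smooth, strongly convex objective, applied to a toy quadratic; the paper runs such a scheme on the dual of the consensus-constrained problem, which is not reproduced here.

```python
import numpy as np

def nesterov_agd(grad, x0, L, mu, iters=200):
    """Constant-momentum Nesterov AGD for an L-smooth, mu-strongly
    convex objective (a textbook variant, not the paper's full scheme)."""
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)  # momentum weight
    x, y = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = y - grad(y) / L              # gradient step at lookahead point
        y = x_next + beta * (x_next - x)      # momentum extrapolation
        x = x_next
    return x

# Toy quadratic f(x) = 0.5 x^T A x - b^T x with grad f(x) = A x - b.
A = np.diag([1.0, 10.0]); b = np.array([1.0, 2.0])    # mu = 1, L = 10
x_hat = nesterov_agd(lambda x: A @ x - b, np.zeros(2), L=10.0, mu=1.0)
print(np.linalg.norm(x_hat - np.linalg.solve(A, b)))  # ~ 0
```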
