
A decision maker is choosing between an active action (e.g., purchasing a house, or investing in a certain stock) and a passive action. The payoff of the active action depends on the buyer's private type as well as an unknown state of nature. An information seller can design experiments to reveal information about the realized state to the decision maker, and would like to maximize the profit from selling such information. We characterize, in closed form, the revenue-optimal information selling mechanism for the seller. After eliciting the buyer's type, the optimal mechanism charges the buyer an upfront payment and then simply reveals whether the realized state exceeds a certain threshold. The optimal mechanism features both price discrimination and information discrimination. The special buyer type who is a priori indifferent between the active and passive actions benefits the most from participating in the mechanism.
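As a toy numerical illustration of such a threshold experiment (our own sketch with made-up payoffs, not the paper's construction): suppose the state $\omega$ is uniform on $[0,1]$, a buyer of type $\theta$ earns $\omega-\theta$ from the active action and $0$ from the passive one, and the seller reveals only whether $\omega$ exceeds a threshold $\tau$. The snippet below estimates the buyer's value for that signal, i.e., the largest upfront payment the buyer would accept; consistent with the abstract, the a priori indifferent type $\theta=\mathbb{E}[\omega]=0.5$ values it most.

```python
import numpy as np

# Toy model (illustrative assumptions, not the paper's setup):
# state w ~ Uniform[0, 1]; a type-theta buyer earns (w - theta) if active, 0 if passive.
# The seller reveals only the binary signal 1{w >= tau}.

def value_of_threshold_signal(theta, tau, n=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.0, 1.0, n)
    # Payoff with no information: act iff E[w] - theta >= 0.
    no_info = max(w.mean() - theta, 0.0)
    # Payoff with the signal: decide separately on each side of the threshold.
    hi, lo = w[w >= tau], w[w < tau]
    with_info = (len(hi) / n) * max(hi.mean() - theta, 0.0) \
              + (len(lo) / n) * max(lo.mean() - theta, 0.0)
    # The buyer's willingness to pay is the gain from observing the signal.
    return with_info - no_info

# The a priori indifferent type (theta = E[w] = 0.5) values information most.
for theta in (0.2, 0.5, 0.8):
    print(theta, round(value_of_threshold_signal(theta, tau=theta), 4))
```

Running this gives values of roughly 0.02, 0.125, and 0.02 for the three types, so the indifferent type's willingness to pay dominates.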

Related Content

The journal Computer Information publishes high-quality papers that broaden the scope of operations research and computing, seeking original research papers on theory, methods, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
December 29, 2021

Chen and Zhou (2021) consider an inference problem for an Ornstein-Uhlenbeck process driven by a general one-dimensional centered Gaussian process $(G_t)_{t\ge 0}$. The second-order mixed partial derivative of the covariance function $R(t,\, s)=\mathbb{E}[G_t G_s]$ can be decomposed into two parts, one of which coincides with that of fractional Brownian motion, while the other is bounded by $(ts)^{H-1}$ with $H\in (\frac12,\,1)$, up to a constant factor. In this paper, we investigate the same problem under the assumption $H\in (0,\,\frac12)$. It is well known that there is a significant difference between the Hilbert space associated with fractional Gaussian processes in the case $H\in (\frac12, 1)$ and in the case $H\in (0, \frac12)$. The starting point of this paper is a new relationship between the inner product of the space $\mathfrak{H}$ associated with the Gaussian process $(G_t)_{t\ge 0}$ and that of the Hilbert space $\mathfrak{H}_1$ associated with the fractional Brownian motion $(B^{H}_t)_{t\ge 0}$. We then prove strong consistency for $H\in (0, \frac12)$, and asymptotic normality and Berry-Ess\'{e}en bounds for $H\in (0,\frac38)$, for both the least squares estimator and the moment estimator of the drift parameter constructed from continuous observations. The proofs involve a good many inequality estimates, and we also make use of estimates of the inner product based on the results for $\mathfrak{H}_1$ in Hu, Nualart and Zhou (2019).
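For context, a standard formulation in this literature (following Hu and Nualart's work on fBm-driven Ornstein-Uhlenbeck processes; the paper's exact normalizations may differ) writes the model as $\mathrm{d}X_t=-\theta X_t\,\mathrm{d}t+\mathrm{d}G_t$ and takes the two estimators to be

$$\hat{\theta}_T^{\mathrm{LS}}=-\frac{\int_0^T X_t\,\mathrm{d}X_t}{\int_0^T X_t^2\,\mathrm{d}t},\qquad \tilde{\theta}_T=\left(\frac{1}{H\Gamma(2H)\,T}\int_0^T X_t^2\,\mathrm{d}t\right)^{-1/(2H)},$$

where the moment estimator inverts the stationary second moment $\mathbb{E}[X_\infty^2]=\theta^{-2H}H\Gamma(2H)$ of the fBm-driven case.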

In a private private information structure, agents' signals contain no information about the signals of their peers. We study how informative such structures can be, and characterize those that are on the Pareto frontier, in the sense that it is impossible to give more information to any agent without violating privacy. In our main application, we show how to optimally disclose information about an unknown state under the constraint of not revealing anything about a correlated variable that contains sensitive information.
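A minimal example of such a structure (our illustration, not from the paper): let the state and noise be $\theta, z \sim N(0,1)$, independent, and give agent 1 the signal $s_1=\theta+z$ and agent 2 the signal $s_2=\theta-z$. Then $(s_1,s_2)$ are jointly Gaussian with $\mathrm{Cov}(s_1,s_2)=\mathrm{Var}(\theta)-\mathrm{Var}(z)=0$, so each agent's signal reveals nothing about the other's; yet each signal has correlation $1/\sqrt{2}$ with $\theta$, and together the two signals reveal $\theta$ exactly.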

The multi-armed bandit (MAB) is a classic online learning framework that studies sequential decision-making in an uncertain environment. The MAB framework, however, overlooks the scenario where the decision-maker cannot take actions (e.g., pull arms) directly. This scenario is practically important in many applications such as spectrum sharing, crowdsensing, and edge computing. In these applications, the decision-maker must incentivize other selfish agents to carry out the desired actions (i.e., to pull arms on the decision-maker's behalf). This paper establishes the incentivized online learning (IOL) framework for this scenario. The key challenge in designing the IOL framework lies in the tight coupling of unknown environment learning and asymmetric information revelation. To address this, we construct a special Lagrangian function, based on which we propose a socially optimal mechanism for the IOL framework. Our mechanism satisfies various desirable properties such as agent fairness, incentive compatibility, and voluntary participation. It achieves the same asymptotic performance as the state-of-the-art benchmark that requires extra information. Our analysis also unveils the power of the crowd in the IOL framework: a larger agent crowd enables our mechanism to approach the theoretical upper bound of social performance more closely. Numerical results demonstrate the advantages of our mechanism in large-scale edge computing.
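To make the interaction pattern concrete, here is a toy sketch of an incentivized bandit loop (our own illustration with a made-up posted payment and a single agent cost; the paper's Lagrangian-based mechanism, fairness guarantees, and information structure are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 5, 5000
true_means = rng.uniform(0.2, 0.9, K)   # arm means, unknown to the principal
agent_cost = 0.1                        # agent's private cost per pull (hypothetical)

counts = np.zeros(K)
sums = np.zeros(K)

for t in range(1, T + 1):
    # Principal's UCB index over arms (pull each arm once first).
    if np.any(counts == 0):
        arm = int(np.argmin(counts))
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    # The principal cannot pull arms itself: it posts a payment, and the
    # self-interested agent complies only if the payment covers its cost.
    payment = 0.12
    if payment >= agent_cost:
        reward = rng.binomial(1, true_means[arm])
        counts[arm] += 1
        sums[arm] += reward

print("best arm:", np.argmax(true_means), "most pulled:", np.argmax(counts))
```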

This work studies an experimental design problem where the $x$'s are to be selected with the goal of estimating a function $m(x)$, which is observed with noise. A linear model is fitted to $m(x)$, but it is not assumed that the model is correctly specified. It follows that the quantity of interest is the best linear approximation of $m(x)$, denoted by $\ell(x)$. It is shown that in this framework the ordinary least squares estimator typically leads to an inconsistent estimation of $\ell(x)$, and that weighted least squares should be considered instead. An asymptotic minimax criterion is formulated for this estimator, and a design that minimizes the criterion is constructed. An important feature of this problem is that the $x$'s should be random rather than fixed; otherwise, the minimax risk is infinite. It is shown that the optimal random minimax design differs from its deterministic counterpart, which was studied previously, and a simulation study indicates that it generally performs better when $m(x)$ is a quadratic or a cubic function. Another finding is that as the variance of the noise goes to infinity, the random and deterministic minimax designs coincide. The results are illustrated for polynomial regression models, and different generalizations are presented.
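The core phenomenon can be seen in a small simulation (our own illustration, assuming a uniform target distribution for defining $\ell(x)$): when the $x$'s are drawn from a design density $\xi$ different from the distribution $\mu$ under which $\ell(x)$ is defined, OLS converges to the best linear approximation under $\xi$ rather than under $\mu$, while weighting each observation by $\mu(x)/\xi(x)$ recovers $\ell(x)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
m = lambda x: x**2                 # true (misspecified-as-linear) regression function

# Target: mu = Uniform[0, 1]; best linear approximation of x^2 under mu is x - 1/6.
# Design: xi = Beta(2, 1) on [0, 1] (density 2x), deliberately different from mu.
x = rng.beta(2.0, 1.0, n)
y = m(x) + 0.1 * rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])

# OLS converges to the best linear approximation under xi, not under mu.
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# WLS with weights mu(x)/xi(x) = 1/(2x) recovers the approximation under mu.
w = 1.0 / (2.0 * x)
Xw = X * w[:, None]
b_wls = np.linalg.solve(X.T @ Xw, Xw.T @ y)

print("OLS :", b_ols)   # close to (-0.3, 1.2), the best fit under Beta(2,1)
print("WLS :", b_wls)   # close to (-1/6, 1), the target l(x)
```

Here the best linear approximation of $x^2$ under $\mu=\mathrm{Uniform}[0,1]$ is $\ell(x)=x-\tfrac16$, while OLS under the $\mathrm{Beta}(2,1)$ design converges to $1.2x-0.3$.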

This paper presents a novel methodology for fast simulation and analysis of transient heat transfer. The proposed methodology is suitable for real-time applications owing to (i) establishing the solution method from the viewpoint of computationally efficient explicit dynamics, (ii) proposing an element-level thermal load computation that eliminates the need for assembling a global thermal stiffness matrix, leading to (iii) an explicit formulation of the nodal temperature computation that eliminates the need for iterations anywhere in the algorithm, (iv) pre-computing the constant matrices and simulation parameters to facilitate online calculation, and (v) utilising computationally efficient finite elements to obtain thermal responses in the spatial domain, all of which lead to a significant reduction in computation time for fast run-time simulation. The proposed fast explicit dynamics finite element algorithm (FED-FEM) accommodates nonlinear thermal material properties, such as temperature-dependent thermal conductivity and specific heat capacity, and nonlinear thermal boundary conditions, such as heat convection and radiation, to account for the nonlinear characteristics of transient heat transfer. Simulations and comparison analyses demonstrate that the proposed methodology not only handles isotropic, orthotropic, anisotropic, and temperature-dependent thermal properties but also satisfies the standard patch tests and achieves good agreement with commercial finite element analysis packages in numerical accuracy, for 3-D heat conduction, convection, radiation, and thermal gradient concentration problems. Furthermore, the FED-FEM algorithm is computationally efficient, requiring only a small computation time and achieving real-time computational performance, which makes the methodology suitable for real-time simulation and analysis of transient heat transfer.
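To illustrate the basic idea of explicit, iteration-free time stepping for transient heat conduction (a minimal linear 1-D sketch under our own assumptions; FED-FEM itself is element-level, 3-D, and handles nonlinear properties and boundary conditions): with a lumped (diagonal) capacity matrix, each nodal temperature update is an explicit formula in the previous time step, so no global system is assembled or solved. On a uniform 1-D mesh with linear elements and lumped mass, the update reduces to the stencil below.

```python
import numpy as np

# 1-D rod, linear conduction, lumped capacitance, explicit forward-Euler stepping.
L, n = 1.0, 51                       # rod length [m], number of nodes
k, rho, cp = 50.0, 7800.0, 500.0     # conductivity, density, specific heat (steel-like)
dx = L / (n - 1)
alpha = k / (rho * cp)               # thermal diffusivity
dt = 0.4 * dx**2 / alpha             # below the explicit stability limit 0.5*dx^2/alpha

T = np.full(n, 20.0)                 # initial temperature [C]
T[0] = 500.0                         # fixed hot boundary; right end held at 20 C

for step in range(20_000):
    # Explicit nodal update: the new temperature depends only on the previous
    # time step, so there is no global matrix assembly and no iterative solve.
    T[1:-1] += dt * alpha * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2

print(T[::10].round(1))              # temperature profile along the rod
```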

Imitation learning enables agents to reuse and adapt the hard-won expertise of others, offering a solution to several key challenges in learning behavior. Although it is easy to observe behavior in the real world, the underlying actions may not be accessible. We present a new method for imitation solely from observations that achieves performance comparable to experts on challenging continuous control tasks, while also exhibiting robustness in the presence of observations unrelated to the task. Our method, which we call FORM (for "Future Observation Reward Model"), is derived from an inverse RL objective and imitates using a model of expert behavior learned by generative modelling of the expert's observations, without needing ground-truth actions. We show that FORM performs comparably to a strong baseline IRL method (GAIL) on the DeepMind Control Suite benchmark, while outperforming GAIL in the presence of task-irrelevant features.
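Schematically, the learned reward has the following flavor (our reading and notation; consult the paper for the exact objective): with $p_E$ a generative model of expert observation transitions and $p_\pi$ a model of the imitator's own transitions, the policy is trained with RL on

$$r(o_t, o_{t+1}) = \log p_E(o_{t+1}\mid o_t) - \log p_\pi(o_{t+1}\mid o_t),$$

so behavior is rewarded for producing futures the expert model finds likely, relative to the imitator's current behavior, without any action labels.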

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
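As a small empirical companion to the criticality discussion (a toy experiment of our own, not from the book): a deep ReLU network initialized with weight variance $C_W/n$ preserves signal norms at the critical value $C_W=2$ (He initialization), while values below or above make signals vanish or explode with depth.

```python
import numpy as np

def signal_norm_per_layer(C_W, width=1024, depth=50, seed=0):
    """Propagate a random input through a deep ReLU MLP, tracking |z|^2 / width."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(width)
    norms = []
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(C_W / width)
        z = W @ np.maximum(z, 0.0)   # preactivations of the next layer
        norms.append(np.mean(z**2))
    return norms

for C_W in (1.5, 2.0, 2.5):          # C_W = 2 is critical for ReLU
    print(C_W, signal_norm_per_layer(C_W)[-1])
```

The recursion $\mathbb{E}\,|z^{(\ell+1)}|^2 = \tfrac{C_W}{2}\,\mathbb{E}\,|z^{(\ell)}|^2$ makes the vanishing/exploding behavior away from $C_W=2$ immediate.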

We address the question of characterizing and finding optimal representations for supervised learning. Traditionally, this question has been tackled using the Information Bottleneck, which compresses the inputs while retaining information about the targets, in a decoder-agnostic fashion. In machine learning, however, our goal is not compression but rather generalization, which is intimately linked to the predictive family or decoder of interest (e.g. linear classifier). We propose the Decodable Information Bottleneck (DIB) that considers information retention and compression from the perspective of the desired predictive family. As a result, DIB gives rise to representations that are optimal in terms of expected test performance and can be estimated with guarantees. Empirically, we show that the framework can be used to enforce a small generalization gap on downstream classifiers and to predict the generalization ability of neural networks.
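For reference, DIB builds on predictive $\mathcal{V}$-information (Xu et al., 2020), which measures what a restricted predictive family $\mathcal{V}$ can actually extract rather than what is extractable in the Shannon sense (our paraphrase; see the papers for precise definitions):

$$I_{\mathcal{V}}(X\to Y)=H_{\mathcal{V}}(Y)-H_{\mathcal{V}}(Y\mid X),\qquad H_{\mathcal{V}}(Y\mid X)=\inf_{f\in\mathcal{V}}\ \mathbb{E}_{x,y}\bigl[-\log f[x](y)\bigr],$$

so "information" here is relative to the decoder class, which is exactly the dependence on the predictive family that the abstract emphasizes.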

Clustering is one of the most fundamental and widespread techniques in exploratory data analysis. Yet the basic approach to clustering has not really changed: a practitioner hand-picks a task-specific clustering loss to optimize and fits the given data to reveal the underlying cluster structure. Some types of losses---such as k-means, its non-linear version kernelized k-means (centroid based), and DBSCAN (density based)---are popular choices due to their good empirical performance on a range of applications. Every so often, however, the clustering output using these standard losses fails to reveal the underlying structure, and the practitioner has to custom-design their own variation. In this work we take an intrinsically different approach to clustering: rather than fitting a dataset to a specific clustering loss, we train a recurrent model that learns how to cluster. The model uses as training pairs examples of datasets (as input) and their corresponding cluster identities (as output). By providing multiple types of training datasets as inputs, our model has the ability to generalize well to unseen datasets (new clustering tasks). Our experiments reveal that by training on simple synthetically generated datasets or on existing real datasets, we can achieve better clustering performance on unseen real-world datasets than standard benchmark clustering techniques. Our meta-clustering model works well even for small datasets, where the usual deep learning models tend to perform worse.
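A minimal sketch of the training setup (our own toy version with two clusters and a bidirectional LSTM; the paper's actual architecture and data generation are richer):

```python
import torch
import torch.nn as nn

N = 32  # points per synthetic dataset, 2 clusters

def random_dataset():
    """One training pair: a synthetic 2-cluster dataset (input) and its labels (output)."""
    centers = torch.randn(2, 2) * 3.0
    labels = torch.randint(0, 2, (N,))
    points = centers[labels] + 0.5 * torch.randn(N, 2)
    return points, labels.float()

class MetaClusterer(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(2, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                 # x: (batch, N, 2)
        h, _ = self.rnn(x)
        return self.head(h).squeeze(-1)   # per-point cluster logit: (batch, N)

model = MetaClusterer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    points, labels = random_dataset()
    logits = model(points.unsqueeze(0))
    y = labels.unsqueeze(0)
    # Cluster ids are defined only up to relabeling, so score both labelings.
    loss = torch.min(bce(logits, y), bce(logits, 1.0 - y))
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    points, labels = random_dataset()     # an unseen dataset = a new clustering task
    pred = (model(points.unsqueeze(0)) > 0).float().squeeze(0)
    acc = max((pred == labels).float().mean(), (pred != labels).float().mean())
    print("held-out accuracy (up to relabeling):", acc.item())
```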

One of the main challenges in ranking is embedding the query and document pairs into a joint feature space, which can then be fed to a learning-to-rank algorithm. To achieve this representation, conventional state-of-the-art approaches perform extensive feature engineering to encode the similarity of the query-answer pair. Recently, deep-learning solutions have shown that it is possible to achieve comparable performance, in some settings, by learning the similarity representation directly from data. Unfortunately, previous models perform poorly on longer texts, on texts with a significant portion of irrelevant information, or on texts that are grammatically incorrect. To overcome these limitations, we propose a novel ranking algorithm for question answering, QARAT, which uses an attention mechanism to learn which words and phrases to focus on when building the mutual representation. We demonstrate superior ranking performance on several real-world question-answer ranking datasets, and provide visualizations of the attention mechanism to offer more insight into how our models of attention could benefit ranking for difficult question answering challenges.
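To convey the flavor of attention-based QA ranking (a minimal sketch of our own; QARAT's actual architecture, naming, and training setup are not reproduced here):

```python
import torch
import torch.nn as nn

class AttentiveRanker(nn.Module):
    """Toy ranker: question words attend over answer words, then score relevance."""
    def __init__(self, vocab=10_000, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.scale = dim ** -0.5
        self.score = nn.Linear(dim, 1)

    def forward(self, q_ids, a_ids):
        Q = self.emb(q_ids)               # (Lq, dim)
        A = self.emb(a_ids)               # (La, dim)
        # Question-to-answer attention: each question word focuses on the
        # answer words most relevant to it, downweighting irrelevant spans.
        attn = torch.softmax(Q @ A.T * self.scale, dim=-1)   # (Lq, La)
        mixed = attn @ A                  # attended answer evidence per question word
        return self.score(mixed.mean(0)).squeeze()           # scalar relevance score

ranker = AttentiveRanker()
q = torch.randint(0, 10_000, (7,))        # toy token ids
good = torch.randint(0, 10_000, (30,))
bad = torch.randint(0, 10_000, (30,))
# Pairwise hinge loss: the correct answer should outscore the incorrect one.
loss = torch.relu(1.0 - ranker(q, good) + ranker(q, bad))
loss.backward()
```

The attention matrix `attn` is also what one would visualize to inspect which answer words the model focuses on for each question word.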
