
We consider a principal-agent problem where the agent may privately choose to acquire relevant information prior to taking a hidden action. This model generalizes two special cases: a classic moral hazard setting and a more recently studied problem of incentivizing information acquisition (IA). We show that all of these problems can be reduced to the design of a proper scoring rule. Under a limited liability condition, we consider the special cases separately and then the general problem. For the special case of IA, we give novel results, including a closed-form "pointed polyhedral cone" solution to the general multidimensional problem. We also describe a geometric, scoring-rules-based solution to the classic contracts problem. Finally, we give an efficient algorithm for the general problem of Contracts with Information Acquisition.
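
As a toy illustration of the scoring-rule reduction (not the paper's construction), the sketch below checks numerically that the quadratic (Brier) scoring rule is proper: an agent with belief p maximizes its expected score by reporting p truthfully.

```python
import numpy as np

def quadratic_score(report, outcome):
    """Quadratic (Brier) scoring rule: S(q, i) = 2*q_i - sum_j q_j^2."""
    return 2 * report[outcome] - np.dot(report, report)

def expected_score(belief, report):
    """Expected score when the outcome is drawn from `belief`."""
    return sum(p * quadratic_score(report, i) for i, p in enumerate(belief))

rng = np.random.default_rng(0)
belief = np.array([0.6, 0.3, 0.1])
truthful = expected_score(belief, belief)
for _ in range(1000):
    deviation = rng.dirichlet(np.ones(3))      # a random non-truthful report
    assert expected_score(belief, deviation) <= truthful + 1e-12
print(f"truthful report is optimal; expected score = {truthful:.4f}")
```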

Related Content

The journal 《計算機信息》 publishes high-quality papers that expand the scope of operations research and computing, seeking original research papers on theory, methods, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
July 27, 2022

Backward reachability analysis is essential to synthesizing controllers that ensure the correctness of closed-loop systems. This paper is concerned with developing scalable algorithms that under-approximate the backward reachable sets of discrete-time uncertain linear and nonlinear systems. Our algorithm sequentially linearizes the dynamics, and uses constrained zonotopes for set representation and computation. The main technical ingredient of our algorithm is an efficient way to under-approximate the Minkowski difference between a constrained zonotopic minuend and a zonotopic subtrahend, which consists of all possible values of the uncertainties and the linearization error. This Minkowski difference needs to be represented as a constrained zonotope to enable subsequent computation, but, as we show, it is impossible to find a polynomial-sized representation for it in polynomial time. Our algorithm finds a polynomial-sized under-approximation in polynomial time. We further analyze the conservatism of this under-approximation technique, and show that it is exact under some conditions. Based on the developed Minkowski difference technique, we detail two backward reachable set computation algorithms to control the linearization error and incorporate nonconvex state constraints. Several examples illustrate the effectiveness of our algorithms.
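
For intuition about the set representation involved (a sketch, not the paper's algorithm): a constrained zonotope is the set $\{c + G\xi : \|\xi\|_\infty \le 1,\ A\xi = b\}$, and basic queries such as point containment reduce to linear programming feasibility. A minimal version, assuming scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

def contains(c, G, A, b, x):
    """Point containment for the constrained zonotope
    {c + G@xi : ||xi||_inf <= 1, A@xi = b}, as an LP feasibility test."""
    n_gen = G.shape[1]
    res = linprog(np.zeros(n_gen),
                  A_eq=np.vstack([G, A]), b_eq=np.concatenate([x - c, b]),
                  bounds=[(-1.0, 1.0)] * n_gen, method="highs")
    return res.status == 0  # feasible iff some xi maps onto x

# The square [-1, 1]^2 cut down to a segment by the constraint xi_1 + xi_2 = 0.
c, G = np.zeros(2), np.eye(2)
A, b = np.array([[1.0, 1.0]]), np.array([0.0])
print(contains(c, G, A, b, np.array([0.5, -0.5])))  # True: satisfies the cut
print(contains(c, G, A, b, np.array([0.5, 0.5])))   # False: violates A@xi = b
```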

The sharing economy has become a socio-economic trend in the transportation and housing sectors, developing business models that leverage underutilized resources. Like those sectors, the power grid is also becoming smarter with many flexible resources, and researchers are investigating how sharing such resources can help reduce cost and extract value here as well. In this work, we investigate the sharing of energy storage devices among individual households in a cooperative fashion. Coalitional game theory is used to model the scenario where the utility company imposes a time-of-use (ToU) price and a net metering billing mechanism. The resulting game has a non-empty core, and we can develop a cost allocation mechanism in the core with an easy-to-compute analytical formula. A mechanism for sharing the excess energy under a peer-to-peer (P2P) network is also developed. Thus, sharing electricity generated by storage devices among consumers can be effective in this set-up. Our simulation results also validate the theoretical claim.
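
To make the core condition concrete, here is a minimal sketch with hypothetical standalone costs for three households; it checks whether a proposed cost allocation lies in the core, i.e., splits the grand-coalition cost so that no sub-coalition would prefer to defect. The numbers are illustrative, not from the paper.

```python
from itertools import combinations

# Hypothetical standalone costs v(S) for coalitions of 3 households sharing
# storage; sharing is subadditive: v(S u T) <= v(S) + v(T).
v = {frozenset([1]): 10.0, frozenset([2]): 8.0, frozenset([3]): 6.0,
     frozenset([1, 2]): 16.0, frozenset([1, 3]): 14.0, frozenset([2, 3]): 12.0,
     frozenset([1, 2, 3]): 20.0}

def in_core(alloc, players=(1, 2, 3)):
    """A cost allocation is in the core if it splits v(N) exactly and no
    coalition pays more together than it would on its own."""
    if abs(sum(alloc[i] for i in players) - v[frozenset(players)]) > 1e-9:
        return False
    for r in range(1, len(players)):
        for S in combinations(players, r):
            if sum(alloc[i] for i in S) > v[frozenset(S)] + 1e-9:
                return False
    return True

print(in_core({1: 9.0, 2: 6.5, 3: 4.5}))  # True: every coalition weakly gains
print(in_core({1: 11.0, 2: 5.0, 3: 4.0})) # False: player 1 pays more than alone
```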

The ability to generalize to previously unseen tasks with little to no supervision is a key challenge in modern machine learning research. It is also a cornerstone of a future "General AI": any artificially intelligent agent deployed in a real-world application must adapt on the fly to unknown environments. Researchers often rely on reinforcement and imitation learning to provide online adaptation to new tasks through trial-and-error learning. However, this can be challenging for complex tasks which require many timesteps or large numbers of subtasks to complete. These "long horizon" tasks suffer from sample inefficiency and can require extremely long training times before the agent learns to perform the necessary long-term planning. In this work, we introduce CASE, which addresses these issues by training an Imitation Learning agent using adaptive "near future" subgoals. These subgoals are recalculated at each step using compositional arithmetic in a learned latent representation space. In addition to improving learning efficiency for standard long-term tasks, this approach also makes it possible to perform one-shot generalization to previously unseen tasks, given only a single reference trajectory for the task in a different environment. Our experiments show that the proposed approach consistently outperforms the previous state-of-the-art compositional Imitation Learning approach by 30%.
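
One plausible reading of the "compositional arithmetic" subgoal computation, sketched with random vectors standing in for a learned latent encoder (the function and horizon parameter below are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def near_future_subgoal(z_current, ref_traj_z, t_ref, horizon=5):
    """Compositional-arithmetic subgoal in latent space (illustrative):
    transplant the reference trajectory's near-future progress, i.e. the
    latent displacement over `horizon` steps, onto the agent's current state."""
    t_next = min(t_ref + horizon, len(ref_traj_z) - 1)
    progress = ref_traj_z[t_next] - ref_traj_z[t_ref]  # what the demo achieves next
    return z_current + progress                        # same progress, new context

# Toy stand-ins: a 16-dim latent space, a 50-step reference trajectory.
rng = np.random.default_rng(0)
ref_traj_z = np.cumsum(rng.normal(size=(50, 16)), axis=0)
z_current = rng.normal(size=16)
subgoal = near_future_subgoal(z_current, ref_traj_z, t_ref=10)
print(subgoal.shape)  # (16,) -- would condition the policy as its current goal
```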

We consider a causal inference model in which individuals interact in a social network and may not comply with the assigned treatments. Estimating causal parameters is challenging in the presence of network interference of unknown form, as each individual may be influenced by both close individuals and distant ones in complex ways. Noncompliance with treatment assignment further complicates this problem, and prior methods dealing with network spillovers but disregarding the noncompliance issue may underestimate the effect of the treatment receipt on the outcome. To estimate meaningful causal parameters, we introduce a new concept of exposure mapping, which summarizes potentially complicated spillover effects into a fixed dimensional statistic of instrumental variables. We investigate identification conditions for the intention-to-treat effect and the average causal effect for compliers, while explicitly considering the possibility of misspecification of exposure mapping. Based on our identification results, we develop nonparametric estimation procedures via inverse probability weighting. Their asymptotic properties, including consistency and asymptotic normality, are investigated using an approximate neighborhood interference framework, which is convenient for dealing with unknown forms of spillovers between individuals. For an empirical illustration, we apply our method to experimental data from an anti-conflict school intervention program.
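
As a rough illustration of exposure mappings and inverse probability weighting (leaving out the noncompliance/instrumental-variable layer that is the paper's focus), the sketch below maps each unit's assignment and fraction of treated neighbors to a low-dimensional exposure, estimates exposure probabilities by Monte Carlo under the known design, and forms a weighted contrast. All quantities are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = (rng.random((n, n)) < 0.05).astype(int)     # toy network
A = np.triu(A, 1); A = A + A.T                  # symmetric, no self-loops
deg = np.maximum(A.sum(1), 1)

def exposure(z):
    """Exposure mapping: (own assignment, high/low fraction of treated
    neighbors) -- one simple fixed-dimensional summary of spillovers."""
    frac = A @ z / deg
    return np.stack([z, (frac > 0.5).astype(float)], axis=1)

z = rng.binomial(1, 0.5, n)                     # Bernoulli(0.5) assignment
y = 1.0 * z + 0.5 * (A @ z / deg) + rng.normal(0, 0.1, n)
e = exposure(z)

# Monte Carlo exposure probabilities under the known assignment design.
sims = np.stack([exposure(rng.binomial(1, 0.5, n)) for _ in range(2000)])
pi = np.maximum((sims == e).all(axis=2).mean(axis=0), 1 / 2000)

low = e[:, 1] == 0                              # compare units at low spillover
est = (np.sum((low & (z == 1)) * y / pi) - np.sum((low & (z == 0)) * y / pi)) / n
print(f"IPW contrast at low spillover: {est:.3f}")  # ~1.0 in this toy design
```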

We study streaming algorithms in the white-box adversarial model, where the stream is chosen adaptively by an adversary who observes the entire internal state of the algorithm at each time step. We show that nontrivial algorithms are still possible. We first give a randomized algorithm for the $L_1$-heavy hitters problem that outperforms the optimal deterministic Misra-Gries algorithm on long streams. If the white-box adversary is computationally bounded, we use cryptographic techniques to reduce the memory of our $L_1$-heavy hitters algorithm even further and to design a number of additional algorithms for graph, string, and linear algebra problems. The existence of such algorithms is surprising, as the streaming algorithm does not even have a secret key in this model, i.e., its state is entirely known to the adversary. One algorithm we design is for estimating the number of distinct elements in a stream with insertions and deletions achieving a multiplicative approximation and sublinear space; such an algorithm is impossible for deterministic algorithms. We also give a general technique that translates any two-player deterministic communication lower bound to a lower bound for {\it randomized} algorithms robust to a white-box adversary. In particular, our results show that for all $p\ge 0$, there exists a constant $C_p>1$ such that any $C_p$-approximation algorithm for $F_p$ moment estimation in insertion-only streams with a white-box adversary requires $\Omega(n)$ space for a universe of size $n$. Similarly, there is a constant $C>1$ such that any $C$-approximation algorithm in an insertion-only stream for matrix rank requires $\Omega(n)$ space with a white-box adversary. Our algorithmic results based on cryptography thus show a separation between computationally bounded and unbounded adversaries. (Abstract shortened to meet arXiv limits.)
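
For reference, the deterministic Misra-Gries baseline mentioned above fits in a few lines; with k-1 counters it is guaranteed to retain every item occurring more than n/k times in a stream of length n (each kept count undercounts the true frequency by at most n/k):

```python
def misra_gries(stream, k):
    """Deterministic Misra-Gries summary: retains every item occurring more
    than len(stream)/k times, using at most k-1 counters."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:  # decrement every counter; drop those that reach zero
            counters = {y: c - 1 for y, c in counters.items() if c > 1}
    return counters

stream = [1] * 50 + [2] * 30 + list(range(3, 23))   # n = 100
print(misra_gries(stream, k=4))  # {1: 40, 2: 20}: both items above n/k survive
```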

Private information retrieval (PIR) schemes (with or without colluding servers) have been proposed for realistic coded distributed data storage systems. Star product PIR schemes with colluding servers for general coded distributed storage systems were constructed over general finite fields by R. Freij-Hollanti, O. W. Gnilke, C. Hollanti and A. Karpuk in 2017. These star product PIR schemes with colluding servers are suitable for the storage of files over small fields and can be constructed for coded distributed storage systems with a large number of servers. In this paper, we consider the problem of finding good retrieval codes for an efficient storage code. If the storage code is a binary Reed-Muller code, the retrieval code need not be a binary Reed-Muller code in general. We prove that when the storage code contains some special codewords, star product PIR schemes with colluding servers and nonzero retrieval rate can only protect against a small number of colluding servers. We also give examples showing that when the storage code is a good cyclic code, the best choice of the retrieval code is in general not cyclic. Therefore, in the design of star product PIR schemes with colluding servers, choosing the storage code and the retrieval code from the same family of algebraic codes is not always efficient.
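
The star product (Schur product) of two codes is the span of all componentwise products of their codewords. A small sketch, computing the dimension of the star product from generator matrices over GF(2), reproduces the classical identity RM(1,m) * RM(1,m) = RM(2,m) for m = 3:

```python
import numpy as np
from itertools import product

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivots = np.nonzero(M[rank:, col])[0]
        if len(pivots) == 0:
            continue
        M[[rank, rank + pivots[0]]] = M[[rank + pivots[0], rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]          # clear this column in all other rows
        rank += 1
        if rank == M.shape[0]:
            break
    return rank

def star_product_dim(G1, G2):
    """dim(C1 * C2): the star product is spanned by all componentwise
    (Schur) products of pairs of generator rows."""
    rows = np.array([g1 * g2 for g1, g2 in product(G1, G2)]) % 2
    return gf2_rank(rows)

# RM(1,3): evaluations of 1, x1, x2, x3 at the 8 points of F_2^3 (the columns).
pts = np.array(list(product([0, 1], repeat=3)))
G_rm13 = np.vstack([np.ones(8, dtype=int), pts.T])
print(star_product_dim(G_rm13, G_rm13))  # 7 = dim RM(2,3)
```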

Changing conditions or environments can cause system dynamics to vary over time. To ensure optimal control performance, controllers should adapt to these changes. When the underlying cause and time of change are unknown, we need to rely on online data for this adaptation. In this paper, we use time-varying Bayesian optimization (TVBO) to tune controllers online in changing environments, using appropriate prior knowledge on the control objective and its changes. Two properties are characteristic of many online controller tuning problems: First, they exhibit incremental and lasting changes in the objective due to changes to the system dynamics, e.g., through wear and tear. Second, the optimization problem is convex in the tuning parameters. Current TVBO methods do not explicitly account for these properties, resulting in poor tuning performance and many unstable controllers through over-exploration of the parameter space. We propose a novel TVBO forgetting strategy using Uncertainty-Injection (UI), which incorporates the assumption of incremental and lasting changes. The control objective is modeled as a spatio-temporal Gaussian process (GP) with UI through a Wiener process in the temporal domain. Further, we explicitly model the convexity assumptions in the spatial dimension through GP models with linear inequality constraints. In numerical experiments, we show that our model outperforms the state-of-the-art method in TVBO, exhibiting reduced regret and fewer unstable parameter configurations.
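
A minimal sketch of the uncertainty-injection idea, under the simplifying assumption that the temporal component is an additive Wiener process (the paper's exact construction may differ): with kernel $k((x,t),(x',t')) = k_{SE}(x,x') + \sigma_w^2 \min(t,t')$, predictive variance at a fixed parameter grows with the time elapsed since the last observation, so the optimizer is gradually forced to re-explore.

```python
import numpy as np

def k_ui(X1, T1, X2, T2, ell=0.5, sig_w=0.3):
    """Spatio-temporal kernel sketch: squared-exponential in the tuning
    parameter plus a Wiener-process term in time, so posterior uncertainty
    is re-injected as time passes (assumed form, for illustration)."""
    se = np.exp(-0.5 * ((X1[:, None] - X2[None, :]) / ell) ** 2)
    wiener = sig_w ** 2 * np.minimum(T1[:, None], T2[None, :])
    return se + wiener

# Observe f at x = 0.2 at times 1..5; predict at the same x at later times.
X = np.full(5, 0.2); T = np.arange(1.0, 6.0); y = np.sin(T * 0.3)
Xs = np.full(3, 0.2); Ts = np.array([6.0, 10.0, 20.0])
K = k_ui(X, T, X, T) + 1e-4 * np.eye(5)
Ks = k_ui(Xs, Ts, X, T)
Kss = k_ui(Xs, Ts, Xs, Ts)
var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
print(np.round(var, 3))  # variance grows with elapsed time: UI-style forgetting
```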

Causal effect estimation from observational data is a challenging problem, especially with high-dimensional data and in the presence of unobserved variables. The available data-driven methods for tackling the problem either provide only an estimation of the bounds of a causal effect (i.e., non-unique estimation) or have low efficiency. The major hurdle to achieving high efficiency while obtaining a unique and unbiased causal effect estimate is finding a proper adjustment set for confounding control quickly, given the huge covariate space and the presence of unobserved variables. In this paper, we approach the problem as a local search task for finding valid adjustment sets in data. We establish theorems to support the local search for adjustment sets, and we show that a unique and unbiased estimate can be achieved from observational data even when there exist unobserved variables. We then propose a data-driven algorithm that is fast and consistent under mild assumptions. We also make use of a frequent pattern mining method to further speed up the search for minimal adjustment sets for causal effect estimation. Experiments conducted on extensive synthetic and real-world datasets demonstrate that the proposed algorithm outperforms the state-of-the-art criteria/estimators in both accuracy and time-efficiency.
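
To see why a valid adjustment set yields a unique, unbiased estimate, consider a simulated confounded example (this sketch only illustrates adjustment itself; the paper's contribution, the local search that finds such sets, is not implemented here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
z = rng.normal(size=n)                        # confounder: a valid adjustment set {Z}
t = (z + rng.normal(size=n) > 0) * 1.0        # treatment influenced by Z
y = 2.0 * t + 1.5 * z + rng.normal(size=n)    # true causal effect of T on Y is 2

naive = y[t == 1].mean() - y[t == 0].mean()

# Adjusting for the set {Z}: OLS of Y on [1, T, Z]; the coefficient on T
# recovers the causal effect once a valid adjustment set is controlled for.
Xd = np.column_stack([np.ones(n), t, z])
beta = np.linalg.lstsq(Xd, y, rcond=None)[0]

print(f"naive difference in means: {naive:.2f}")   # biased upward by Z
print(f"adjusted estimate:         {beta[1]:.2f}") # ~2.0
```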

With the explosive growth of information technology, multi-view graph data have become increasingly prevalent and valuable. Most existing multi-view clustering techniques focus either on the scenario of multiple graphs or on multi-view attributes. In this paper, we propose a generic framework to cluster multi-view attributed graph data. Specifically, inspired by the success of contrastive learning, we propose the multi-view contrastive graph clustering (MCGC) method to learn a consensus graph, since the original graph could be noisy or incomplete and is not directly applicable. Our method consists of two key steps: we first filter out undesirable high-frequency noise while preserving the graph's geometric features via graph filtering, obtaining a smooth representation of nodes; we then learn a consensus graph regularized by a graph contrastive loss. Results on several benchmark datasets show the superiority of our method with respect to state-of-the-art approaches. In particular, our simple approach outperforms existing deep learning-based methods.
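
The graph-filtering step admits a short sketch: with the symmetric normalized Laplacian L, whose eigenvalues lie in [0, 2], repeated application of I - L/2 is a low-pass filter that smooths node attributes along edges. The filter order below is an arbitrary choice for illustration, not necessarily the paper's.

```python
import numpy as np

def smooth_features(A, X, k=3):
    """Low-pass graph filtering: H = (I - L/2)^k X attenuates high-frequency
    (noisy) components, since the eigenvalues of L lie in [0, 2]."""
    d = A.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt  # sym. normalized Laplacian
    H = X.copy()
    for _ in range(k):
        H = H - 0.5 * (L @ H)                         # one filtering pass
    return H

# Toy graph: two triangles joined by one edge; noisy 2-d node attributes.
A = np.zeros((6, 6))
A[np.ix_([0, 1, 2], [0, 1, 2])] = 1; A[np.ix_([3, 4, 5], [3, 4, 5])] = 1
np.fill_diagonal(A, 0); A[2, 3] = A[3, 2] = 1
X = np.repeat([[0.0, 1.0], [1.0, 0.0]], 3, axis=0) \
    + np.random.default_rng(0).normal(0, 0.3, (6, 2))
print(np.round(smooth_features(A, X), 2))  # within-cluster rows become similar
```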

Recent advances in maximizing the mutual information (MI) between the source and target have demonstrated its effectiveness in text generation. However, previous works paid little attention to modeling the backward network of MI (i.e., the dependency from the target to the source), which is crucial to the tightness of the variational information maximization lower bound. In this paper, we propose Adversarial Mutual Information (AMI): a text generation framework formulated as a novel saddle-point (min-max) optimization that aims to identify joint interactions between the source and target. Within this framework, the forward and backward networks iteratively promote or demote each other's generated instances by comparing the real and synthetic data distributions. We also develop a latent noise sampling strategy that leverages random variations in the high-level semantic space to enhance long-term dependency in the generation process. Extensive experiments on different text generation tasks demonstrate that the proposed AMI framework significantly outperforms several strong baselines, and we show that AMI has the potential to lead to a tighter lower bound of maximum mutual information for the variational information maximization problem.
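
The role of the backward network is visible in the Barber-Agakov variational lower bound $I(X;Y) \ge H(X) + \mathbb{E}_{p(x,y)}[\log q(x|y)]$, which is tight exactly when q matches the true posterior p(x|y). A small discrete example (illustrative of the bound only, not the AMI algorithm):

```python
import numpy as np

p = np.array([[0.4, 0.1],
              [0.1, 0.4]])                 # joint p(x, y)
px, py = p.sum(1), p.sum(0)
H_x = -np.sum(px * np.log(px))

def ba_bound(q_x_given_y):
    """Barber-Agakov bound: I(X;Y) >= H(X) + E_{p(x,y)}[log q(x|y)].
    Tight iff the 'backward' model q matches the true posterior p(x|y)."""
    return H_x + np.sum(p * np.log(q_x_given_y))

true_posterior = p / py                    # p(x|y), columns indexed by y
sloppy_backward = np.full((2, 2), 0.5)     # a backward model ignoring y
true_mi = np.sum(p * np.log(p / np.outer(px, py)))
print(f"true MI:            {true_mi:.4f}")
print(f"BA bound, exact q:  {ba_bound(true_posterior):.4f}")   # equals MI
print(f"BA bound, sloppy q: {ba_bound(sloppy_backward):.4f}")  # strictly looser
```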
