We study the mechanism design problem of selling $k$ items to unit-demand buyers with private valuations for the items. A buyer either participates directly in the auction or is represented by an intermediary, who represents a subset of buyers. Our goal is to design robust mechanisms that are independent of the demand structure (i.e., how the buyers are partitioned across intermediaries) and perform well under a wide variety of possible contracts between intermediaries and buyers. We first study the case of $k$ identical items where each buyer draws its private valuation for an item i.i.d. from a known $\lambda$-regular distribution. We construct a robust mechanism that, independent of the demand structure and under certain conditions on the contracts between intermediaries and buyers, obtains a constant fraction of the revenue that the mechanism designer could obtain had she known the buyers' valuations. In other words, our mechanism's expected revenue achieves a constant fraction of the optimal welfare, regardless of the demand structure. Our mechanism is a simple posted-price mechanism that sets a take-it-or-leave-it per-item price that depends on $k$ and the total number of buyers, but does not depend on the demand structure or the downstream contracts. Next, we generalize our result to the case where the items are not identical. We assume that the item valuations are separable. For this case, we design a mechanism that obtains at least a constant fraction of the optimal welfare by using a menu of posted prices. This mechanism is also independent of the demand structure, but makes a relatively stronger assumption on the contracts between intermediaries and buyers, namely that each intermediary prefers outcomes with a higher sum of utilities of the subset of buyers it represents.
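To make the first mechanism concrete, here is a minimal sketch of a posted-price mechanism for $k$ identical items. It is our own illustrative instantiation, not the paper's exact construction: the price is passed in as a parameter rather than computed, and the acceptance rule simply serves unit-demand buyers whose value meets the posted price, irrespective of which intermediary (if any) represents them.

```python
import random

def run_posted_price(values, k, price):
    """Serve up to k unit-demand buyers at a take-it-or-leave-it per-item price.

    values: buyers' private valuations (in arrival order);
    price : posted per-item price, which in the paper depends only on k and
            the total number of buyers, not on the demand structure.
    Returns (list of winning buyer indices, revenue)."""
    winners = []
    for buyer, value in enumerate(values):
        if len(winners) == k:
            break                     # all k items are sold
        if value >= price:            # buyer accepts the posted price
            winners.append(buyer)
    return winners, price * len(winners)

# Example with 10 buyers, 3 items, and a hypothetical posted price of 0.6.
random.seed(0)
values = [random.random() for _ in range(10)]
print(run_posted_price(values, k=3, price=0.6))
```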
The \emph{local boxicity} of a graph $G$, denoted by $lbox(G)$, is the minimum positive integer $l$ such that $G$ can be obtained as the intersection of $k$ interval graphs, where $k \geq l$, such that each vertex of $G$ appears as a non-universal vertex in at most $l$ of these interval graphs. Let $G$ be a graph on $n$ vertices having $m$ edges. Let $\Delta$ denote the maximum degree of a vertex in $G$. We show the following. (i) $lbox(G) \leq 2^{13\log^{*}{\Delta}} \Delta$. There exist graphs of maximum degree $\Delta$ having a local boxicity of $\Omega(\frac{\Delta}{\log\Delta})$. (ii) $lbox(G) \in O(\frac{n}{\log{n}})$. There exist graphs on $n$ vertices having a local boxicity of $\Omega(\frac{n}{\log n})$. (iii) $lbox(G) \leq (2^{13\log^{*}{\sqrt{m}}} + 2 )\sqrt{m}$. There exist graphs with $m$ edges having a local boxicity of $\Omega(\frac{\sqrt{m}}{\log m})$. (iv) The local boxicity of $G$ is at most its \emph{product dimension}. This connection helps us show that the local boxicity of the \emph{Kneser graph} $K(n,k)$ is at most $\frac{k}{2} \log{\log{n}}$. The above results can be extended to the \emph{local dimension} of a partially ordered set, owing to the known connection between local boxicity and local dimension. Finally, we show that the \emph{cubicity} of a graph on $n$ vertices of girth greater than $g+1$ is $O(n^{\frac{1}{\lfloor g/2\rfloor}}\log n)$.
We study the following two fixed-cardinality optimization problems (a maximization and a minimization variant). For a fixed $\alpha$ between zero and one, we are given a graph and two numbers $k \in \mathbb{N}$ and $t \in \mathbb{Q}$. The task is to find a vertex subset $S$ of exactly $k$ vertices that has value at least $t$ (resp. at most $t$ for minimization). Here, the value of a vertex set $S$ is $\alpha$ times the number of edges with exactly one endpoint in $S$ plus $(1-\alpha)$ times the number of edges with both endpoints in $S$. These two problems generalize many prominent graph problems, such as Densest $k$-Subgraph, Sparsest $k$-Subgraph, Partial Vertex Cover, and Max $(k,n-k)$-Cut. In this work, we complete the picture of their parameterized complexity on several types of sparse graphs that are described by structural parameters. In particular, we provide kernelization algorithms and kernel lower bounds for these problems. A somewhat surprising consequence of our kernelizations is that Partial Vertex Cover and Max $(k,n-k)$-Cut not only behave in the same way, but the kernels for both problems can be obtained by the same algorithms.
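The objective above is simple enough to state directly; the following small helper (our own illustration, not code from the paper) computes it for a given vertex set.

```python
def vertex_set_value(edges, S, alpha):
    """Value of vertex set S: alpha * #(edges with exactly one endpoint in S)
    + (1 - alpha) * #(edges with both endpoints in S)."""
    S = set(S)
    cut = sum(1 for u, v in edges if (u in S) != (v in S))
    inside = sum(1 for u, v in edges if u in S and v in S)
    return alpha * cut + (1 - alpha) * inside

# Example: the path 0-1-2-3 with S = {1, 2}; two cut edges, one inside edge.
edges = [(0, 1), (1, 2), (2, 3)]
print(vertex_set_value(edges, {1, 2}, alpha=0.5))   # 0.5*2 + 0.5*1 = 1.5
```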
The introduction of online marketplace platforms has led to the advent of new forms of flexible, on-demand (or 'gig') work. Yet, most prior research concerning the experience of gig workers examines delivery or crowdsourcing platforms, while the experience of the large numbers of workers who undertake educational labour in the form of tutoring gigs remains understudied. To address this, we use a computational grounded theory approach to analyse tutors' discussions on Reddit. This approach consists of three phases: data exploration, modelling, and human-centred interpretation. We use both validation and human evaluation to increase the trustworthiness and reliability of the computational methods. This paper is a work in progress and reports on the first of these three phases.
We study signaling in Bayesian ad auctions, in which bidders' valuations depend on a random, unknown state of nature. The auction mechanism has complete knowledge of the actual state of nature, and it can send signals to bidders so as to disclose information about the state and increase revenue. For instance, a state may collectively encode some features of the user that are known to the mechanism only, since the latter has access to data sources inaccessible to the bidders. We study the problem of computing how the mechanism should send signals to bidders in order to maximize revenue. While this problem has already been addressed in the easier setting of second-price auctions, to the best of our knowledge, our work is the first to explore ad auctions with more than one slot. In this paper, we focus on public signaling and VCG mechanisms, under which bidders truthfully report their valuations. We start with a negative result, showing that, in general, the problem does not admit a PTAS unless P = NP, even when bidders' valuations are known to the mechanism. The rest of the paper is devoted to settings in which this negative result can be circumvented. First, we prove that, with known valuations, the problem can indeed be solved in polynomial time when either the number of states $d$ or the number of slots $m$ is fixed. Moreover, in the same setting, we provide an FPTAS for the case in which bidders are single-minded, but $d$ and $m$ can be arbitrary. Then, we switch to the random valuations setting, in which valuations are drawn according to some probability distribution. In this case, we show that the problem admits an FPTAS, a PTAS, and a QPTAS when, respectively, $d$ is fixed, $m$ is fixed, and bidders' valuations are bounded away from zero.
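For context, a public signaling scheme in this setting can be formalized, in the standard notation of the signaling literature (adopted here as an assumption rather than the paper's exact definitions), as a commitment to a map from states of nature to distributions over signals; all bidders receive the same signal and update their beliefs by Bayes' rule:
\[ \varphi : \Theta \to \Delta(\mathcal{S}), \qquad \Pr[\theta \mid s] \;=\; \frac{\mu(\theta)\,\varphi(\theta)(s)}{\sum_{\theta' \in \Theta} \mu(\theta')\,\varphi(\theta')(s)}, \]
where $\Theta$ is the set of $d$ states, $\mathcal{S}$ is a finite signal set, and $\mu$ is the common prior over states.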
We study the framework of universal dynamic regret minimization with strongly convex losses. We answer an open problem in Baby and Wang (2021) by showing that, in a proper learning setup, Strongly Adaptive algorithms can achieve the near-optimal dynamic regret of $\tilde O(d^{1/3} n^{1/3}\text{TV}[u_{1:n}]^{2/3} \vee d)$ against any comparator sequence $u_1,\ldots,u_n$ simultaneously, where $n$ is the time horizon and $\text{TV}[u_{1:n}]$ is the total variation of the comparator sequence. These results are facilitated by exploiting a number of new structures imposed by the KKT conditions that were not considered in Baby and Wang (2021), which also lead to other improvements over their results, such as (a) handling non-smooth losses and (b) improving the dimension dependence of the regret. Further, we also derive near-optimal dynamic regret rates for the special case of proper online learning with exp-concave losses and an $L_\infty$-constrained decision set.
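For reference, the two quantities in this bound are the dynamic regret against the comparator sequence and its total variation; the definitions below are the standard ones, with the choice of the $\ell_1$ norm being our assumption for illustration (the precise norm is fixed in Baby and Wang 2021):
\[ \text{Regret}(u_{1:n}) \;=\; \sum_{t=1}^{n} f_t(x_t) - \sum_{t=1}^{n} f_t(u_t), \qquad \text{TV}[u_{1:n}] \;=\; \sum_{t=2}^{n} \lVert u_t - u_{t-1} \rVert_1 . \]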
Blockchain relay schemes enable cross-chain state proofs without requiring trusted intermediaries. This is achieved by applying the source blockchain's consensus validation protocol on the target blockchain. Existing chain relays allow for the validation of blocks created using the Proof of Work (PoW) protocol. Since PoW entails high energy consumption, limited throughput, and no guaranteed finality, Proof of Stake (PoS) blockchain protocols are increasingly popular for addressing these shortcomings. We propose Verilay, the first chain relay scheme that can validate PoS protocols producing finalized blocks, for example, Ethereum 2.0, Cosmos, and Polkadot. The concept requires no changes to the source blockchain protocols or validator operations. Signatures of block proposers are validated by a dedicated relay smart contract on the target blockchain. In contrast to basic PoW chain relays, Verilay requires only a subset of block headers to be submitted in order to maintain full verifiability, which improves scalability. We provide a prototypical implementation that validates Ethereum 2.0 beacon chain headers within the Ethereum Virtual Machine (EVM). Our evaluation demonstrates applicability to Ethereum 1.0's mainnet and confirms that updates require only a fraction of the transaction costs of PoW chain relay updates.
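The core relay logic can be illustrated with the following toy sketch in Python (the actual relay is a smart contract targeting the EVM; the quorum rule and the aggregate-signature check here are placeholders and assumptions, not the paper's specification):

```python
from dataclasses import dataclass

@dataclass
class FinalizedHeader:
    slot: int                 # position of the header in the source chain
    state_root: bytes         # commitment against which state proofs are checked
    signer_indices: set       # indices of validators who signed the header
    signature: bytes          # aggregate signature (verification mocked below)

class PoSRelay:
    """Toy PoS chain relay: stores finalized headers of a source chain so that
    cross-chain state proofs can later be verified against their roots."""

    def __init__(self, validator_pubkeys):
        self.validators = validator_pubkeys
        self.headers = {}          # slot -> FinalizedHeader
        self.latest_slot = -1

    def _verify_aggregate_signature(self, header):
        # Placeholder: a real relay verifies an aggregate (e.g. BLS) signature
        # of the listed signers over the header contents.
        return True

    def submit_header(self, header):
        # Headers may skip slots: only a subset of headers needs to be relayed.
        if header.slot <= self.latest_slot:
            return False
        # Assumed quorum rule: at least two thirds of the known validator set.
        if 3 * len(header.signer_indices) < 2 * len(self.validators):
            return False
        if not self._verify_aggregate_signature(header):
            return False
        self.headers[header.slot] = header
        self.latest_slot = header.slot
        return True
```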
Message forwarding protocols are protocols in which a chain of agents handles the transmission of a message: each agent forwards the received message to the next agent in the chain. For example, TLS middleboxes act as intermediary agents in TLS, adding functionality such as filtering or compressing data. In such protocols, an attacker may attempt to bypass one or more intermediary agents. Such an agent-skipping attack can violate the security requirements of the protocol. Using the multiset rewriting model in the symbolic setting, we construct a comprehensive framework of such path protocols. In particular, we introduce a set of security goals related to path integrity: the notion that a message faithfully travels through the participants in the order intended by the initiating agent. We perform a security analysis of several such protocols, highlighting key attacks on modern protocols.
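The agent-skipping attack and the path-integrity goal can be illustrated with a small toy simulation (our own illustration, not one of the analysed protocols):

```python
def forward(message, chain, skip=None):
    """Deliver `message` along `chain`, recording which agents handled it.
    `skip` optionally names an intermediary the attacker routes around."""
    traversed = []
    for agent in chain:
        if agent == skip:
            continue                  # the attacker bypasses this agent
        traversed.append(agent)       # the agent processes and forwards
    return message, traversed

intended = ["client", "filtering-middlebox", "compressing-middlebox", "server"]
_, actual = forward("GET /", intended, skip="filtering-middlebox")

# Path integrity: the message must traverse exactly the intended participants
# in order; without such a check, the skipped middlebox goes unnoticed.
print(actual == intended)   # False -> the filtering middlebox was bypassed
```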
We consider an auction mechanism design problem where a seller sells multiple homogeneous items to a set of buyers who are connected in a network. Each buyer knows only the buyers they are directly connected to and has a diminishing marginal utility valuation for the items. The seller, too, is initially connected to only some of the buyers. The goal is to design an auction that incentivizes the buyers who are aware of the auction to invite their neighbors to join it. This is challenging because the buyers compete for the items and would not invite each other by default. Solutions have recently been proposed for the setting where each buyer requires at most one unit, and they demonstrate the difficulty of the design problem even in this simple setting. We move this line of work forward by proposing the first diffusion auction for the multi-unit demand setting. We also show that it improves both social welfare and revenue, which incentivizes the seller to apply it.
The collective attention on online items such as web pages, search terms, and videos reflects trends that are of social, cultural, and economic interest. Moreover, attention trends of different items exhibit mutual influence via mechanisms such as hyperlinks or recommendations. Many visualisation tools exist for time series, network evolution, or network influence; however, few systems connect all three. In this work, we present AttentionFlow, a new system to visualise networks of time series and the dynamic influence they have on one another. Centred around an ego node, our system simultaneously presents the time series on each node using two visual encodings: a tree ring for an overview and a line chart for details. AttentionFlow supports interactions such as overlaying time series of influence and filtering neighbours by time or flux. We demonstrate AttentionFlow using two real-world datasets, VevoMusic and WikiTraffic. We show that attention spikes in songs can be explained by external events such as major awards, or changes in the network such as the release of a new song. Separate case studies also demonstrate how an artist's influence changes over their career, and that correlated Wikipedia traffic is driven by cultural interests. More broadly, AttentionFlow can be generalised to visualise networks of time series on physical infrastructures such as road networks, or natural phenomena such as weather and geological measurements.
Amortized inference has led to efficient approximate inference for large datasets. The quality of posterior inference is largely determined by two factors: a) the ability of the variational distribution to model the true posterior and b) the capacity of the recognition network to generalize inference over all datapoints. We analyze approximate inference in variational autoencoders in terms of these factors. We find that suboptimal inference is often due to amortizing inference rather than to the limited complexity of the approximating distribution. We show that this is partly because the generator learns to accommodate the choice of approximation. Furthermore, we show that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference rather than simply improving the complexity of the approximation.
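One way to see the effect of amortization concretely is to compare the ELBO obtained with encoder-produced variational parameters against the ELBO after optimizing those parameters separately for each datapoint, holding the generator fixed. The sketch below (an assumed toy architecture in PyTorch, not the paper's models) measures this gap:

```python
import torch
from torch import nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(), nn.Linear(h, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(), nn.Linear(h, x_dim))

    def elbo(self, x, mu, logvar, n_samples=1):
        """Monte Carlo estimate of E_q[log p(x|z)] - KL(q(z|x) || p(z))."""
        std = torch.exp(0.5 * logvar)
        recon = 0.0
        for _ in range(n_samples):
            z = mu + std * torch.randn_like(std)
            logits = self.dec(z)
            recon = recon - nn.functional.binary_cross_entropy_with_logits(
                logits, x, reduction="none").sum(-1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return recon / n_samples - kl

def amortization_gap(model, x, steps=200, lr=1e-2):
    # Amortized ELBO: variational parameters produced by the recognition network.
    mu0, logvar0 = model.enc(x).chunk(2, dim=-1)
    amortized = model.elbo(x, mu0, logvar0).mean().item()

    # Instance-optimized ELBO: refine mu/logvar per datapoint, generator fixed.
    mu = mu0.detach().clone().requires_grad_(True)
    logvar = logvar0.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([mu, logvar], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-model.elbo(x, mu, logvar).mean()).backward()
        opt.step()
    optimized = model.elbo(x, mu, logvar).mean().item()
    return optimized - amortized     # positive gap = loss due to amortization
```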