
A Homomorphic Secret Sharing (HSS) scheme is a secret-sharing scheme that shares a secret $x$ among $s$ servers, and additionally allows an output client to reconstruct some function $f(x)$ using information that can be locally computed by each server. A key parameter in HSS schemes is download rate, which quantifies how much information the output client needs to download from the servers. Often, download rate is improved by amortizing over $\ell$ instances of the problem, making $\ell$ also a key parameter of interest. Recent work (Fosli, Ishai, Kolobov, and Wootters, 2022) established a limit on the download rate of linear HSS schemes for computing low-degree polynomials and constructed schemes that achieve this optimal download rate; their schemes required amortization over $\ell = \Omega(s \log(s))$ instances of the problem. Subsequent work (Blackwell and Wootters, 2023) completely characterized linear HSS schemes that achieve optimal download rate in terms of a coding-theoretic notion termed optimal labelweight codes. A consequence of this characterization was that $\ell = \Omega(s \log(s))$ is in fact necessary to achieve optimal download rate. In this paper, we characterize all linear HSS schemes, showing that schemes of any download rate are equivalent to a generalization of optimal labelweight codes. This equivalence is constructive and provides a way to obtain an explicit linear HSS scheme from any linear code. Using this characterization, we present explicit linear HSS schemes with slightly sub-optimal rate but with much improved amortization $\ell = O(s)$. Our constructions are based on algebraic geometry codes (specifically Hermitian codes and Goppa codes).
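
To make the amortized notion concrete, a common way to formalize the download rate (our notation, consistent with this line of work) is

$$ \mathrm{rate} \;=\; \frac{\ell \cdot \log_2 |\mathbb{F}|}{\sum_{j=1}^{s} d_j \cdot \log_2 |\mathbb{F}|} \;=\; \frac{\ell}{\sum_{j=1}^{s} d_j}, $$

where each of the $\ell$ instances has a one-symbol output over the field $\mathbb{F}$ and server $j$ sends $d_j$ field symbols in total to the output client; amortizing over larger $\ell$ lets the per-instance download $\sum_j d_j / \ell$ shrink toward the scheme's rate limit.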

Related Content

For a fixed graph $H$, the $H$-SUBGRAPH HITTING problem asks to delete the minimum number of vertices from an input graph so that the resulting graph contains no occurrence of $H$ as a subgraph. This problem can be seen as a generalization of VERTEX COVER, which corresponds to the case $H = K_2$. We initiate a study of $H$-SUBGRAPH HITTING from the point of view of characterizing structural parameterizations that allow for polynomial kernels, within the recently active framework of taking as the parameter the number of vertex deletions needed to obtain a graph in a "simple" class $C$. Our main contribution is to identify graph parameters such that, when $H$-SUBGRAPH HITTING is parameterized by the vertex-deletion distance to a class $C$ in which any of these parameters is bounded, and assuming standard complexity assumptions and that $H$ is biconnected, the following sharp dichotomy holds: the problem admits a polynomial kernel if and only if $H$ is a clique. These new graph parameters are inspired by the notion of $C$-elimination distance introduced by Bulian and Dawar [Algorithmica 2016] and generalize it in two directions. Our results also apply to the version of the problem where one wants to hit $H$ as an induced subgraph, and imply, in particular, that the problems of hitting minors and hitting (induced) subgraphs behave substantially differently with respect to the existence of polynomial kernels under structural parameterizations.
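
As a concrete reference point for the problem definition (not for the kernelization results), a brute-force solver is only a few lines. The sketch below uses networkx's subgraph monomorphism test; it is exponential in the solution size and purely illustrative:

```python
# Illustrative brute force for H-SUBGRAPH HITTING: find a smallest vertex set
# whose deletion leaves no copy of H as a (not necessarily induced) subgraph.
# Exponential time; meant only to pin down the problem, not to be efficient.
from itertools import combinations
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def contains_subgraph(G, H):
    # subgraph_is_monomorphic tests for H as a subgraph (G may have extra edges)
    return GraphMatcher(G, H).subgraph_is_monomorphic()

def h_subgraph_hitting(G, H):
    for k in range(G.number_of_nodes() + 1):
        for S in combinations(G.nodes, k):
            if not contains_subgraph(G.subgraph(set(G.nodes) - set(S)), H):
                return set(S)

# With H = K_2 this is exactly VERTEX COVER:
G = nx.path_graph(4)                                  # 0-1-2-3
print(h_subgraph_hitting(G, nx.complete_graph(2)))    # a minimum cover, {0, 2}
```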

In the $k$-Disjoint Shortest Paths ($k$-DSP) problem, we are given a weighted graph $G$ on $n$ nodes and $m$ edges with specified source vertices $s_1, \dots, s_k$ and target vertices $t_1, \dots, t_k$, and are tasked with determining whether $G$ contains vertex-disjoint $(s_i,t_i)$-shortest paths. For any constant $k$, it is known that $k$-DSP can be solved in polynomial time over undirected graphs and directed acyclic graphs (DAGs). However, the exact time complexity of $k$-DSP remains mysterious, with large gaps between the fastest known algorithms and the best conditional lower bounds. In this paper, we obtain faster algorithms for important cases of $k$-DSP, and present better conditional lower bounds for $k$-DSP and its variants. Previous work solved 2-DSP over weighted undirected graphs in $O(n^7)$ time, and over weighted DAGs in $O(mn)$ time. As the main result of this paper, we present linear-time algorithms for solving 2-DSP on weighted undirected graphs and DAGs. Our algorithms are algebraic, however, and so solve only the detection rather than the search version of 2-DSP. For lower bounds, prior work implied that $k$-Clique can be reduced to $2k$-DSP in DAGs and undirected graphs with $O((kn)^2)$ nodes. We improve this reduction by showing how to reduce from $k$-Clique to $k$-DSP in DAGs and undirected graphs with $O((kn)^2)$ nodes. A variant of $k$-DSP is the $k$-Disjoint Paths ($k$-DP) problem, where the solution paths no longer need to be shortest paths. Previous work reduced from $k$-Clique to $p$-DP in DAGs with $O(kn)$ nodes, for $p = k + k(k-1)/2$. We improve this by showing a reduction from $k$-Clique to $p$-DP, for $p = k + \lfloor k^2/4 \rfloor$. Under the $k$-Clique Hypothesis from fine-grained complexity, our results establish better conditional lower bounds for $k$-DSP for all $k \ge 4$, and better conditional lower bounds for $p$-DP for all $p \le 4031$.
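
To pin down the problem statement, a solution verifier is straightforward. This checks a candidate solution; it is not the paper's algebraic detection algorithm (networkx supplies shortest-path distances):

```python
# Checks whether given paths form a valid k-DSP solution: each path must be a
# shortest (s_i, t_i)-path, and the paths must be pairwise vertex-disjoint.
import networkx as nx

def is_dsp_solution(G, pairs, paths, weight="weight"):
    used = set()
    for (s, t), path in zip(pairs, paths):
        if path[0] != s or path[-1] != t:
            return False
        if any(not G.has_edge(u, v) for u, v in zip(path, path[1:])):
            return False
        length = sum(G[u][v].get(weight, 1) for u, v in zip(path, path[1:]))
        if length != nx.shortest_path_length(G, s, t, weight=weight):
            return False                      # not a shortest (s, t)-path
        if used & set(path):
            return False                      # violates vertex-disjointness
        used |= set(path)
    return True

G = nx.grid_2d_graph(3, 3)                    # unit-weight 3x3 grid
pairs = [((0, 0), (0, 2)), ((2, 0), (2, 2))]
paths = [[(0, 0), (0, 1), (0, 2)], [(2, 0), (2, 1), (2, 2)]]
print(is_dsp_solution(G, pairs, paths))       # True: shortest and disjoint
```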

Today, mobile users learn and share their traffic observations via crowdsourcing platforms (e.g., Waze). Yet such platforms simply cater to selfish users' myopic interests by recommending the shortest path, and do not encourage enough users to travel and learn other paths for the future benefit of others. Prior studies focus on one-shot congestion games without considering users' information learning, whereas our work studies how users learn and alter traffic conditions on stochastic paths in a human-in-the-loop manner. Our analysis shows that the myopic routing policy leads to severe under-exploration of stochastic paths. This results in a price of anarchy (PoA) greater than $2$, as compared to the socially optimal policy that minimizes the long-term social cost. Moreover, the myopic policy fails to ensure correct learning convergence of users' traffic hazard beliefs. To address this, we focus on informational (non-monetary) mechanisms, as they are easier to implement than pricing. We first show that the existing information-hiding mechanisms and deterministic path-recommendation mechanisms in the Bayesian persuasion literature do not work here, allowing even $\mathrm{PoA}=\infty$. Accordingly, we propose a new combined hiding and probabilistic recommendation (CHAR) mechanism, which hides all information from a selected user group and provides state-dependent probabilistic recommendations to the other user group. Our CHAR ensures $\mathrm{PoA}$ less than $\frac{5}{4}$, which cannot be further reduced by any other informational (non-monetary) mechanism. Beyond the parallel network, we further extend our analysis and CHAR to more general linear path graphs with multiple intermediate nodes, and we prove that the PoA results remain unchanged. Additionally, we carry out experiments with real-world datasets on further extended routing graphs to verify the close-to-optimal performance of our CHAR.
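
In the standard form (which the paper instantiates with its long-term social cost), the price of anarchy compares the cost of the policy that emerges from selfish behavior to the minimum achievable social cost:

$$ \mathrm{PoA} \;=\; \sup_{\text{problem instances}} \; \frac{C(\text{selfish/myopic policy})}{C(\text{socially optimal policy})} \;\ge\; 1, $$

so $\mathrm{PoA}=1$ means no efficiency loss, and the bounds above ($>2$ for myopic routing, $=\infty$ for existing informational mechanisms, $<\frac{5}{4}$ for CHAR) are all statements about this worst-case ratio.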

We develop a novel method to construct uniformly valid confidence bands for a nonparametric component $f_1$ in the sparse additive model $Y=f_1(X_1)+\ldots + f_p(X_p) + \varepsilon$ in a high-dimensional setting. Our method integrates sieve estimation into a high-dimensional Z-estimation framework, facilitating the construction of uniformly valid confidence bands for the target component $f_1$. To form these confidence bands, we employ a multiplier bootstrap procedure. Additionally, we derive rates for uniform lasso estimation in high dimensions, which may be of independent interest. Through simulation studies, we demonstrate that our proposed method delivers reliable results in terms of estimation and coverage, even in small samples.
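
The toy sketch below illustrates the overall recipe the abstract describes (sieve expansion, lasso fit, multiplier bootstrap for a sup-norm band). It deliberately omits the debiasing/Z-estimation step the paper uses to obtain uniform validity, and all tuning choices and names are ours, so treat it purely as an illustration:

```python
# Sieve basis + lasso + multiplier-bootstrap band for f_1 (simplified).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, K = 400, 20, 5                       # samples, covariates, sieve size

def sieve(x, K):
    """Polynomial sieve basis x, x^2, ..., x^K (intercept handled by the fit)."""
    return np.column_stack([x ** k for k in range(1, K + 1)])

X = rng.uniform(-1, 1, size=(n, p))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.3, n)

Phi = np.hstack([sieve(X[:, j], K) for j in range(p)])   # block design matrix
fit = Lasso(alpha=0.02).fit(Phi, y)

# Partial out the other components, then refit the f_1 block by OLS.
others = np.arange(K, p * K)
y1 = y - Phi[:, others] @ fit.coef_[others] - fit.intercept_
B = sieve(X[:, 0], K)
G = np.linalg.solve(B.T @ B, B.T)          # (B'B)^{-1} B'
theta = G @ y1
resid = y1 - B @ theta

grid = np.linspace(-1, 1, 101)
B_grid = sieve(grid, K)
f1_hat = B_grid @ theta

# Multiplier bootstrap: perturb the scores with i.i.d. N(0,1) multipliers and
# take the 95% quantile of the sup-deviation over the grid as the band width.
sups = [np.max(np.abs(B_grid @ (G @ (rng.normal(size=n) * resid))))
        for _ in range(500)]
crit = np.quantile(sups, 0.95)
band = (f1_hat - crit, f1_hat + crit)      # pointwise lower/upper band on grid
```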

Commit Message Generation (CMG) approaches aim to automatically generate commit messages based on given code diffs, which facilitates collaboration among developers and plays a critical role in Open-Source Software (OSS). Very recently, Large Language Models (LLMs) have demonstrated extensive applicability in diverse code-related tasks, but few studies have systematically explored their effectiveness for CMG. This paper conducts the first comprehensive experiment to investigate how far we have come in applying LLMs to generate high-quality commit messages. Motivated by a pilot analysis, we first clean the most widely-used CMG dataset following practitioners' criteria. Afterward, we re-evaluate diverse state-of-the-art CMG approaches and compare them with LLMs, demonstrating the superior performance of LLMs over state-of-the-art CMG approaches. Then, we propose four manual metrics following OSS practice, namely Accuracy, Integrity, Applicability, and Readability, and assess various LLMs accordingly. Results reveal that GPT-3.5 performs best overall, but different LLMs carry different advantages. To further boost LLMs' performance on the CMG task, we propose an Efficient Retrieval-based In-Context Learning (ICL) framework, namely ERICommiter, which leverages two-step filtering to accelerate retrieval and introduces semantic- and lexical-based retrieval algorithms to construct the ICL examples. Extensive experiments demonstrate the substantial performance improvement of ERICommiter on various LLMs for code diffs across different programming languages. Meanwhile, ERICommiter also significantly reduces retrieval time while keeping almost the same performance. Our research contributes to the understanding of LLMs' capabilities in the CMG field and provides valuable insights for practitioners seeking to leverage these tools in their workflows.
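
ERICommiter's exact pipeline is not reproduced here; the sketch below shows the generic two-step retrieve-then-rank ICL pattern the abstract describes (a cheap lexical filter to shrink the candidate pool, then a semantic re-ranking to pick the few-shot examples, with TF-IDF cosine standing in for a learned embedding model):

```python
# Generic two-step retrieval for in-context learning over a corpus of
# (code diff, commit message) pairs. Not ERICommiter itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_examples(query_diff, corpus, n_filter=100, n_shots=4):
    # Step 1: fast lexical filter -- keep diffs sharing the most tokens.
    q_tokens = set(query_diff.split())
    scored = sorted(corpus, key=lambda ex: -len(q_tokens & set(ex["diff"].split())))
    pool = scored[:n_filter]

    # Step 2: semantic re-rank of the filtered pool.
    vec = TfidfVectorizer().fit([ex["diff"] for ex in pool] + [query_diff])
    sims = cosine_similarity(vec.transform([query_diff]),
                             vec.transform([ex["diff"] for ex in pool]))[0]
    ranked = sorted(zip(sims, range(len(pool))), reverse=True)
    return [pool[i] for _, i in ranked[:n_shots]]

def build_prompt(query_diff, shots):
    parts = [f"Diff:\n{ex['diff']}\nCommit message: {ex['msg']}\n" for ex in shots]
    parts.append(f"Diff:\n{query_diff}\nCommit message:")
    return "\n".join(parts)

corpus = [{"diff": "fix null check in parser", "msg": "Fix NPE in parser"},
          {"diff": "add retry logic to http client", "msg": "Add retries"}]
shots = retrieve_examples("fix crash in parser on null input", corpus, 2, 1)
print(build_prompt("fix crash in parser on null input", shots))
```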

Large Language Models (LLMs) are essential tools that collaborate with users on different tasks, so evaluating how well they serve users' needs in real-world scenarios is important. While many benchmarks have been created, they mainly focus on specific predefined model abilities; few cover how LLMs are actually used by real users. To address this oversight, we propose benchmarking LLMs from a user perspective in both dataset construction and evaluation design. We first collect 1863 real-world use cases spanning 15 LLMs from a user study with 712 participants from 23 countries. These self-reported cases form the User Reported Scenarios (URS) dataset, with a categorization of 7 user intents. Secondly, on this authentic multi-cultural dataset, we benchmark 10 LLM services on their efficacy in satisfying user needs. Thirdly, we show that our benchmark scores align well with user-reported experience in LLM interactions across diverse intents, both of which highlight that subjective scenarios are often overlooked. In conclusion, our study proposes benchmarking LLMs from a user-centric perspective, aiming to facilitate evaluations that better reflect real user needs. The benchmark dataset and code are available at //github.com/Alice1998/URS.

The Multi-Agent Path Finding (MAPF) problem entails finding collision-free paths for a set of agents, guiding them from their start to goal locations. However, MAPF does not account for several practical task-related constraints. For example, agents may need to perform actions at goal locations with specific execution times, adhering to predetermined orders and timeframes. Moreover, goal assignments may not be predefined for agents, and the optimization objective may lack an explicit definition. To incorporate task assignment, path planning, and a user-defined objective into a coherent framework, this paper examines the Task Assignment and Path Finding with Precedence and Temporal Constraints (TAPF-PTC) problem. We augment Conflict-Based Search (CBS) to simultaneously generate task assignments and collision-free paths that adhere to precedence and temporal constraints, maximizing an objective quantified by the return from a user-defined reward function in reinforcement learning (RL). Experimentally, we demonstrate that our algorithm, CBS-TA-PTC, can solve highly challenging bomb-defusing tasks with precedence and temporal constraints efficiently, relative to multi-agent reinforcement learning (MARL) and adapted Target Assignment and Path Finding (TAPF) methods.
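
Since the work builds on CBS, a minimal skeleton of vanilla CBS may help place the contribution. This sketch handles only vertex conflicts and shows none of the task-assignment, precedence, or temporal machinery that CBS-TA-PTC adds:

```python
# Minimal vanilla Conflict-Based Search (CBS): a high-level search over
# constraint sets with a BFS low-level planner over (vertex, time) states.
# Vertex conflicts only (no edge conflicts); assumes distinct start vertices.
import heapq
from itertools import count

def low_level(graph, start, goal, constraints, horizon=50):
    """Shortest (vertex, time) path avoiding constrained (vertex, time) pairs."""
    frontier, seen = [(start, 0, (start,))], {(start, 0)}
    while frontier:
        v, t, path = frontier.pop(0)
        if v == goal:
            return path
        for u in list(graph[v]) + [v]:                # move or wait in place
            nxt = (u, t + 1)
            if t + 1 <= horizon and nxt not in seen and nxt not in constraints:
                seen.add(nxt)
                frontier.append((u, t + 1, path + (u,)))
    return None

def first_conflict(paths):
    for t in range(max(map(len, paths))):
        pos = {}
        for i, p in enumerate(paths):
            v = p[min(t, len(p) - 1)]                 # agents rest at their goal
            if v in pos:
                return pos[v], i, v, t                # vertex conflict
            pos[v] = i
    return None

def cbs(graph, starts, goals):
    k, tie = len(starts), count()
    paths = [low_level(graph, starts[i], goals[i], set()) for i in range(k)]
    open_list = [(sum(map(len, paths)), next(tie), [set() for _ in range(k)], paths)]
    while open_list:
        _, _, cons, paths = heapq.heappop(open_list)
        c = first_conflict(paths)
        if c is None:
            return paths                              # conflict-free solution
        i, j, v, t = c
        for agent in (i, j):                          # branch: forbid (v, t)
            nc = [set(s) for s in cons]
            nc[agent].add((v, t))
            replanned = low_level(graph, starts[agent], goals[agent], nc[agent])
            if replanned is not None:
                new_paths = list(paths)
                new_paths[agent] = replanned
                heapq.heappush(open_list,
                               (sum(map(len, new_paths)), next(tie), nc, new_paths))
    return None

# Two agents swap across a 4-cycle; CBS routes them around opposite sides.
ring = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "A"]}
print(cbs(ring, starts=["A", "C"], goals=["C", "A"]))
# -> [('A', 'D', 'C'), ('C', 'B', 'A')]
```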

In Pliable Private Information Retrieval (PPIR) with a single server, messages are partitioned into $\Gamma$ non-overlapping classes. The user wants to retrieve a message from its desired class without revealing the identity of the desired class to the server. S. A. Obead, H. Y. Lin, and E. Rosnes (Single-Server Pliable Private Information Retrieval With Side Information, arXiv:2305.06857) consider the problem of PPIR with Side Information (PPIR-SI), where the user additionally has side information. The user wants to retrieve any new message (not included in the side information) from its desired class without revealing the identity of the desired class or its side information. Obead et al. give a scheme for PPIR-SI for the case when the user's side information is unidentified, referred to as PPIR with Unidentifiable SI (PPIR-USI). In this paper, we study single-server PPIR when the side information is partially identifiable, and we term this case PPIR with Identifiable Side Information (PPIR-ISI). Here, the user knows the identity of the side information belonging to $\eta$ classes, where $1 \leq \eta \leq \Gamma$, and wants to retrieve a new message from its desired class without revealing the identity of the desired class to the server. We give a scheme for PPIR-ISI, and we prove that having identifiable side information is advantageous by comparing the rate of the proposed scheme to the rate of the PPIR-USI scheme of Obead et al. in some cases. Further, we extend PPIR-ISI to the multi-user case, where users can collaboratively generate the query sets, and we give a scheme for this problem.

Private Information Retrieval (PIR) is a mechanism for efficiently downloading messages while keeping the index secret. Here, PIRs in which servers do not communicate with each other are called standard PIRs, and PIRs in which some servers communicate with each other are called colluding PIRs. The information-theoretic upper bound on efficiency has been given in previous studies. However, the conditions for PIRs to keep privacy, to decode the desired message, and to achieve that upper bound have not been clarified. In this paper, we prove the necessary and sufficient conditions for achieving the capacity in standard PIRs and colluding PIRs.
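
For context, the information-theoretic bounds referenced above are the capacity expressions of Sun and Jafar for $N$ servers and $K$ messages (stated here from that prior work, not derived in this paper):

$$ C_{\mathrm{PIR}} = \left(1 + \frac{1}{N} + \cdots + \frac{1}{N^{K-1}}\right)^{-1}, \qquad C_{T\text{-colluding}} = \left(1 + \frac{T}{N} + \cdots + \left(\frac{T}{N}\right)^{K-1}\right)^{-1}, $$

where at most $T$ servers may collude in the second expression; the paper's contribution is pinning down exactly when a scheme is private, decodable, and achieves these values.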

Answering questions that require reading text in an image is challenging for current models. One key difficulty of this task is that rare, polysemous, and ambiguous words frequently appear in images, e.g., names of places, products, and sports teams. To overcome this difficulty, resorting only to pre-trained word embedding models is far from enough. A desired model should utilize the rich information in multiple modalities of the image to help understand the meaning of scene texts; e.g., the prominent text on a bottle is most likely to be the brand. Following this idea, we propose a novel VQA approach, Multi-Modal Graph Neural Network (MM-GNN). It first represents an image as a graph consisting of three sub-graphs, depicting the visual, semantic, and numeric modalities, respectively. Then, we introduce three aggregators which guide message passing from one sub-graph to another to exploit the contexts in various modalities, so as to refine the features of nodes. The updated nodes provide better features for the downstream question-answering module. Experimental evaluations show that our MM-GNN represents scene texts better and clearly improves performance on two VQA tasks that require reading scene texts.
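
The aggregators themselves are not specified in the abstract; the sketch below shows the generic pattern of one sub-graph attending over another to refine its node features (plain scaled dot-product attention in numpy, with illustrative dimensions and an illustrative residual update, not the paper's exact aggregator):

```python
# Generic cross-graph aggregation: nodes of one modality (e.g., the semantic
# sub-graph of OCR tokens) attend over nodes of another (e.g., the visual
# sub-graph) and fold the attended context into their own features.
import numpy as np

def cross_graph_aggregate(tgt, src, Wq, Wk, Wv):
    """tgt: (n_t, d), src: (n_s, d); returns refined tgt features (n_t, d)."""
    Q, K, V = tgt @ Wq, src @ Wk, src @ Wv
    att = Q @ K.T / np.sqrt(Q.shape[1])            # (n_t, n_s) scores
    att = np.exp(att - att.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)          # softmax over source nodes
    return tgt + att @ V                           # residual refinement

rng = np.random.default_rng(0)
d = 16
semantic = rng.normal(size=(5, d))                 # 5 OCR-token nodes
visual = rng.normal(size=(8, d))                   # 8 visual-object nodes
W = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3)]
refined = cross_graph_aggregate(semantic, visual, *W)
print(refined.shape)                               # (5, 16)
```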
