The communication complexity of functions with large outputs

L Fontes, S Laplante, M Laurière, A Nolin
International Colloquium on Structural Information and Communication Complexity, 2023, Springer
Abstract
We study the two-party communication complexity of functions with large outputs, and show that the communication complexity can vary greatly depending on which output model is considered. We study a variety of output models, ranging from the open model, in which an external observer can compute the outcome, to the XOR model, in which the outcome of the protocol should be the bitwise XOR of the players' local outputs. The latter model is inspired by XOR games, which are widely studied two-player quantum games.
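For concreteness, the two extreme models can be sketched as follows, in notation of our own choosing (the paper's exact definitions may differ in detail). Both players evaluate a function
$$f : \mathcal{X} \times \mathcal{Y} \rightarrow \{0,1\}^k.$$
In the open model, the transcript $\Pi(x,y)$ alone determines the output: there is a map $g$ with $g(\Pi(x,y)) = f(x,y)$, so an external observer who sees only the messages can compute the outcome. In the XOR model, Alice ends the protocol with a local output $a \in \{0,1\}^k$ and Bob with $b \in \{0,1\}^k$, and correctness requires
$$a \oplus b = f(x,y),$$
where $\oplus$ denotes bitwise XOR; neither player individually needs to learn $f(x,y)$.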
We focus on the question of error reduction in these new output models. For functions of output size k, applying standard error-reduction techniques in the XOR model would introduce an additional cost linear in k. We show that no dependency on k is necessary. Similarly, standard randomness-removal techniques incur a multiplicative cost of $2^{k}$ in the XOR model. We show how to reduce this factor to $O(k)$.
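To see where the linear-in-$k$ overhead comes from, consider the standard amplification-by-repetition argument (our own sketch, not the paper's proof). Running a constant-error protocol independently $t$ times and taking a majority vote drives the error down exponentially, by a Chernoff bound:
$$\Pr[\text{majority of } t \text{ runs is wrong}] \le e^{-\Omega(t)} \le \varepsilon \quad \text{for } t = O(\log(1/\varepsilon)).$$
In the XOR model, however, run $i$ leaves Alice and Bob with shares $a_i$ and $b_i$ whose XOR $a_i \oplus b_i$ is the candidate output; taking bitwise majorities of the $a_i$'s and of the $b_i$'s separately does not in general produce shares of the majority candidate. The natural fix, having the players exchange their $k$-bit shares so they can vote on the values $a_i \oplus b_i$ directly, costs an extra $\Omega(k)$ bits of communication, and this is the dependency on $k$ that the paper shows is unnecessary.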
In addition, we prove analogous error reduction and randomness removal results in the other models, separate all models from each other, and show that some natural problems – including Set Intersection and Find the First Difference – separate the models when the Hamming weights of their inputs are bounded.
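For reference, these two problems admit the following standard large-output formulations (our statements; the paper's bounded-Hamming-weight variants additionally restrict the weights of the inputs). In Set Intersection, Alice holds $x \in \{0,1\}^n$ and Bob holds $y \in \{0,1\}^n$, viewed as characteristic vectors of subsets of $[n]$, and the $n$-bit output is $x \wedge y$, the characteristic vector of the intersection. In Find the First Difference, on inputs $x, y \in \{0,1\}^n$ the output is the smallest index $i$ with $x_i \ne y_i$ (or a designated value when $x = y$).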