Randomness is an incredibly useful, yet expensive, resource. In many important applications, randomness is essential and needs to be as pure as possible. In other cases, most prominently time- and space-bounded algorithms, we can reduce, and even eliminate, its use. In this talk, I will discuss two, not unrelated, results from the theory of pseudorandomness.
The first part will survey the recent exciting developments in the construction of two-source extractors: deterministic functions that purify randomness from two independent, very weak sources. I will talk about the road to achieving optimal extractors, and about our explicit, entropy-efficient reduction from two-source extractors to non-malleable ones, a reduction that now underlies constructions supporting smaller and smaller entropies.
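To make the notion of a two-source extractor concrete, here is the classic inner-product construction of Chor and Goldreich, which extracts one nearly uniform bit from two independent n-bit sources whose min-entropy rates each exceed one half. This is a standard textbook example for illustration only, not one of the constructions from the talk.

```python
# Classic inner-product two-source extractor (Chor-Goldreich):
# IP(x, y) = <x, y> mod 2, i.e., the parity of the bitwise AND.
# One bit of output is close to uniform whenever the two sources are
# independent and each has min-entropy rate above 1/2.

def inner_product_extractor(x: int, y: int) -> int:
    """Extract one bit: parity of the bitwise AND of the two samples."""
    return bin(x & y).count("1") % 2

# Tiny empirical check with two independent "weak" 8-bit sources,
# each uniform over half of its domain (min-entropy 7 out of 8 bits).
n = 8
source_a = [v for v in range(2**n) if v % 2 == 0]    # even numbers only
source_b = [v for v in range(2**n) if v < 2**(n-1)]  # low half only

ones = sum(inner_product_extractor(a, b) for a in source_a for b in source_b)
total = len(source_a) * len(source_b)
bias = abs(ones / total - 0.5)
print(f"empirical bias of the output bit: {bias:.4f}")  # small, ~0.0078
```

Even though each source here is highly structured, the output bit is close to an unbiased coin; no single-source deterministic extractor can achieve this.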
The second result concerns derandomizing time-bounded algorithms. Although we know that BPP = P follows from circuit lower bounds, the precise tradeoff between hardness and pseudorandomness has yet to be fully explored. In particular, what is the optimal derandomization slowdown one can achieve? I will show that, assuming exponential lower bounds against nondeterministic circuits, one can convert any randomized algorithm running in time T into a deterministic one running in time T^{2+α}, for an arbitrarily small constant α. Under plausible complexity-theoretic assumptions, such a slowdown is nearly optimal.
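Where the quadratic overhead comes from can be seen in the standard seed-enumeration argument (a back-of-the-envelope sketch, not the talk's actual proof): given a pseudorandom generator with seed length $s$ that fools time-$T$ algorithms, one derandomizes by running the algorithm on all $2^s$ seeds and taking a majority vote, so

\[
T_{\mathrm{det}} \approx 2^{s} \cdot T,
\qquad
s = (1+\alpha)\log T
\;\Longrightarrow\;
T_{\mathrm{det}} \approx T^{1+\alpha} \cdot T = T^{2+\alpha}.
\]

Thus a generator with seed length close to $\log T$ (here $(1+\alpha)\log T$ is the assumed seed length in this sketch) yields the stated $T^{2+\alpha}$ deterministic running time, ignoring the cost of computing the generator itself.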
Dean Doron is a postdoctoral fellow at Stanford University, hosted by Omer Reingold. Before that, he spent a year as a postdoc in UT Austin's Algorithms and Computational Theory Group, hosted by Dana Moshkovitz and David Zuckerman. Prior to coming to UT Austin, he completed his Ph.D. at Tel Aviv University in 2018, under the supervision of Amnon Ta-Shma. His research focuses on pseudorandomness, derandomization, and related explicit combinatorial constructions.