# Corelab Seminar

*2020-2021*

### Periklis Papakonstantinou

* Circuit depth

**Abstract.**

One of the most fundamental questions in computation is how to make computation parallel. In this talk, this means asking whether a given algorithm that uses a certain amount of processing power (for instance, a number of elementary processing units) can be parallelized without blowing up that processing power by too much. Although this question is known to have a negative answer in general (i.e. the computation is not parallelizable) for almost every function, the same is not true for many important and interesting functions. I will deliver this talk at the level of an entry-level graduate student and walk you through some of the landmark developments from the 1950s until the 1990s. The 1992 results of Beigel-Tarui and Yao showed that parallel computation can be made super-parallel, but the blowup is somewhat significant. More recently (in 2016), it was shown that this blowup can be exponentially reduced. I will briefly discuss this latter result, which was shown by Shiteng Chen and myself, and then outline some of my most recent thoughts on reducing the question of proving unconditional complexity class separations to parallel speedups of somewhat parallel algorithms.
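As a toy illustration of the depth-versus-work trade-off the abstract alludes to (this example is not from the talk itself): summing n numbers one at a time takes n-1 dependent steps, but a balanced tree of additions needs only about log2(n) rounds, where all additions within a round are independent and could run in parallel. The function name below is hypothetical.

```python
def tree_sum_rounds(values):
    """Sum a list by repeatedly adding adjacent pairs.

    Each pass of the loop corresponds to one parallel round
    (one layer of an addition circuit): all the pairwise additions
    within a pass are independent of one another.
    Returns the total and the number of rounds (circuit depth).
    """
    vals = list(values)
    rounds = 0
    while len(vals) > 1:
        # Pair up adjacent elements; an odd leftover passes through.
        vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
        rounds += 1
    return vals[0], rounds

total, depth = tree_sum_rounds(range(16))
# 16 inputs: depth 4 (= log2 16) instead of 15 sequential additions
```

The same total work (15 additions) is done in both cases; only the dependency depth changes. The hard questions in the talk concern functions where reducing depth without a large blowup in total work is far less obvious than for summation.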