
Office hours. For the time being, just drop by when I am in.

Lecture notes are downloadable as a single PDF file
(or as TeX source if for any reason
you need individual lectures). Any comments, corrections and
suggestions are most welcome.

Questions for self-study (aka "homeworks").
Assignment #1.
Assignment #2.

Open problems. The expected complexity of the problems on this list varies from just hard to extremely hard. I am, however, avoiding here anything that is believed to be inaccessible with current methods.

For every $k>0$ there exists a $\Sigma_1^b(\alpha)$ formula $A$ such that $T_2^k(\alpha)\vdash A$ whereas $S_2^k(\alpha)\not\vdash A$ (see e.g. [18, Problem 6]; this source also contains a discussion of candidate $A$s).

For a fixed formula $A(x)$ (not necessarily bounded), define the hierarchy $\Sigma_k^b(A),\Pi_k^b(A)$ similarly to $\Sigma_k^b,\Pi_k^b$ but allowing one extra clause: every formula of the form $A(t)$ ($t$ any term) belongs to $\Sigma_0^b(A)\cup \Pi_0^b(A)$. The theories $S_2^k(A), T_2^k(A)$ are defined accordingly. Does there exist a specific formula $A$ such that $S_2(A)$ is not finitely axiomatizable (or at least such that $S_2^1(A)\neq T_2^1(A)$)?
This problem has developed from a question asked in the class. An affirmative answer clearly implies that $S_2(\alpha)$ is not finitely axiomatizable, and an intended approach to this problem would be to try to take as $A$ an arithmetic definition of an oracle separating levels of the polynomial hierarchy.

Does $Res(2)$ have the Feasible Interpolation Property (for a negative answer you may use
any reasonable complexity assumption)?

The Existential Interpolation Property for a proof system $f$
means that whenever $A(x)\lor B(y)$ has an $f$-proof of size $s$, then
either $A(x)$ or $B(y)$ has an $f$-proof of size polynomial in
$s$.
Does the (Extended) Frege proof system possess the EIP (again, modulo any reasonable complexity
assumption in the case of a negative answer)?

Is the resolution proof system quasi-automatizable (again, modulo assumptions)? (See e.g. [22].)

Let $\Delta$ be any fixed constant. Prove that with probability $1-o(1)$ a random CNF that has
$n$ variables and $\Delta n$ clauses does not possess an $F_2$ (depth-2 Frege) refutation
of polynomial size.
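To fix one common convention (an assumption on my part; the problem statement does not pin down the clause width or the sampling model), a random $k$-CNF with $n$ variables and $\Delta n$ clauses can be sampled as follows:

```python
import random

def random_cnf(n, delta, k=3, seed=0):
    """Sample a random k-CNF with n variables and delta*n clauses.

    Each clause picks k distinct variables uniformly at random and
    negates each one independently with probability 1/2 (one standard
    convention; others exist). Literals are encoded DIMACS-style:
    +v / -v for variable v in 1..n.
    """
    rng = random.Random(seed)
    m = int(delta * n)
    clauses = []
    for _ in range(m):
        vs = rng.sample(range(1, n + 1), k)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in vs))
    return clauses

cnf = random_cnf(n=100, delta=4.0)
print(len(cnf))  # prints 400
```

For $k=3$ and $\Delta$ above the satisfiability threshold (around 4.27), such a formula is unsatisfiable with probability $1-o(1)$, which is the regime the problem is about.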

Does onto-$FPHP^\infty_n$ have a polynomial size bounded-depth Frege proof? ([17, Problem 2];
quasipolynomial bounds are known.)

Does onto-$FPHP^\infty_n$ have a quasipolynomial size $Res(t)$ proof for any constant $t$? ([17, Problems 3,4];
this is true if $t$ is allowed to grow polylogarithmically in $n$.)

We know that the minimal size of a resolution refutation of onto-$FPHP^\infty_n$ behaves as $\exp(n^{\epsilon})$, where $\epsilon\in [1/3,1/2]$. Determine the right value of $\epsilon$. ([17, Problem 5].)
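For concreteness, here is a sketch of one standard clause-level encoding of the finite onto-$FPHP^m_n$ ($m$ pigeons, $n$ holes); the variable numbering and the grouping of axioms are my own convention, and the papers cited above consider several variants:

```python
from itertools import combinations

def onto_fphp(m, n):
    """Clauses of the onto functional pigeonhole principle onto-FPHP^m_n.

    Variable x(i, j) ("pigeon i sits in hole j") is encoded as the
    positive integer i*n + j + 1; a negative number is a negated literal.
    The formula is unsatisfiable whenever m > n.
    """
    def x(i, j):
        return i * n + j + 1
    clauses = []
    # every pigeon occupies some hole
    for i in range(m):
        clauses.append([x(i, j) for j in range(n)])
    # no hole holds two pigeons (injectivity)
    for j in range(n):
        for i1, i2 in combinations(range(m), 2):
            clauses.append([-x(i1, j), -x(i2, j)])
    # no pigeon occupies two holes (functionality: "F")
    for i in range(m):
        for j1, j2 in combinations(range(n), 2):
            clauses.append([-x(i, j1), -x(i, j2)])
    # every hole is occupied (the "onto" axioms)
    for j in range(n):
        clauses.append([x(i, j) for i in range(m)])
    return clauses

print(len(onto_fphp(3, 2)))  # prints 14
```

The infinite version onto-$FPHP^\infty_n$ replaces the finite set of pigeons by an infinite one; for lower bound purposes one works with such finite truncations.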

Prove superpolynomial lower bounds for the system $F_d[p]$ ($p$ a prime), that is, bounded-depth Frege
augmented with counting connectives $MOD_p$ [18, Problem 10].

Find a "direct" proof of superpolynomial lower bounds for the Cutting Planes proof system.

Prove superpolynomial lower bounds for the DAG version of the
Lovász-Schrijver proof system $LS$.

Prove superlogarithmic lower bounds on the refutation rank for the
system $LS_{\ast}$ that extends Lovász-Schrijver by allowing the
full power of the rule $f\geq 0,\ g\geq 0 \Longrightarrow fg\geq 0$,
where $f,g$ are arbitrary linear forms. Precise definitions can be found in
[27]; this paper also proves an $\Omega(\log n)$ bound.

Prove that there exist Nisan generators stretching $n$ bits to $2^{n^\epsilon}$ bits that are hard for Resolution. All definitions can be found in [33] (you may use any of the encodings described there), which also solves this problem for Resolution width. The best we can do at the moment in terms of Resolution size is $n^{3-\epsilon}$ bits [34].

Literature. Unfortunately, I am not aware of any single "canonical" source that would comfortably cover all the material in our course. Recommended literature for its different portions will be posted on this page as we go along.
One good starting point might be the survey paper [1] by Nate Segerlind. It is (to the best of my knowledge) the most recent writing on the subject, it contains a comprehensive list of literature, and it includes recommendations for further reading, specifically indicating the topics where other surveys offer a different perspective.
The only drawback of the
survey [2] by Paul Beame and Toni Pitassi is that it is a little outdated. Due to their natural limitations, the lecture notes [3] by Paul Beame are not intended to be comprehensive. But when a topic is covered there, this is usually also one of the best expositions.

Plans and progress. This course will consist of two parts: Bounded Arithmetic (Part I) and propositional proof complexity proper (Part II), with a significant emphasis on the latter (see also the course description). While these two topics are closely related ideologically, their intersection in terms of concepts and techniques is surprisingly small. This makes the two parts of our course practically independent.
Research on Bounded Arithmetic heavily uses standard machinery from mathematical logic (Gentzen-style calculus, cut elimination, Herbrand's theorem etc.), while much of propositional proof complexity is based upon combinatorial and algebraic techniques parallel to those employed in circuit complexity and similar disciplines. It is not totally inconceivable that a certain part of the audience has developed a much better taste for techniques of the second (combinatorial) kind. And, as a courtesy to these listeners, after a word (well, actually a few words) of introduction on January 6, and before immersing into the logical formalism for a few weeks, we will do a preview of sorts. Namely, we will prove right away one basic result from the propositional part of the course known as the width-size tradeoff for resolution (with applications). Our exposition will follow, more or less closely, the original paper [4]; this material is also covered in [1,3].
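For orientation, the tradeoff we will prove can be stated as follows (this is the form from [4], with unspecified constants; $S(F)$ and the width notation are shorthand of mine, not fixed by the course notes):

```latex
% S(F): minimal size of a resolution refutation of an unsatisfiable CNF F
% w(F \vdash 0): minimal width of a resolution refutation of F
% w(F): maximal width of a clause in F;  n: number of variables
S(F) \;\ge\; \exp\left(\Omega\left(\frac{\bigl(w(F \vdash 0)-w(F)\bigr)^2}{n}\right)\right)
```

Thus a width lower bound significantly exceeding the initial clause width of $F$ immediately yields an exponential size lower bound, which is how the applications proceed.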
Several lectures in the second part will be devoted to outreach activity, that is, important byproducts of research in propositional proof complexity. We will certainly cover applications to:

integrality gaps of various LP/SDP relaxation procedures for integer programming (see [5,6] and the extensive follow-up literature);

learning theory [7].
On January 8 I hope to finish our "preview", and the course will begin its systematic development. We will start doing Bounded Arithmetic (with many technical proofs omitted), and at this simple level we will hardly need much more than the original monograph [8]. Motivated listeners can also consult more advanced sources [9] and [10, Chapter 5].
Well, after all, on January 20 we will cover (or at least mention) some more modern material.
The original papers are [11,12] (RSUV-isomorphism), [13] (Ferreira's version of Bounded Arithmetic), [14] (conditional collapse of the BA hierarchy) and [15,16] (more witnessing theorems). For January 22: [17] (a survey on the complexity of the pigeonhole principle) and [19] (specifically covers the transition from Bounded Arithmetic to the propositional world).
On January 27 we will begin doing our first block in Propositional Proof Complexity: Interpolation, Automatizability and Applications to Learning. Some of the original references are [20-25].
On February 17 we will begin a new topic: Complexity of the Pigeon Hole Principle.
See [17] for a comprehensive survey; our proof of lower bounds for constant-depth Frege follows (more or less closely) [26].
On February 24 we will proceed to our next large block: Algebraic and Geometric Models. The best source I am aware of that nicely covers algebraic models (notably, Nullstellensatz and Polynomial Calculus) is [3]. For a catalogue of geometric systems see [27], and the original sources for Cutting Planes lower bounds are [28,29]. The lower bound on the LS rank is from [30].
And on March 10 we will begin wrapping this course up with
our last topic, Independence of Circuit Lower Bounds and Pseudorandom
Generators. See the Introduction of [31]. The reduction to the Pigeon Hole Principle is from [32, Section 4].