Introduction to Theoretical Computer Science — Boaz Barak


Is every function computable?

  • See a fundamental result in computer science and mathematics: the existence of uncomputable functions.
  • See the canonical example for an uncomputable function: the halting problem.
  • Introduction to the technique of reductions, which will be used time and again in this course to show the difficulty of computational tasks.
  • Rice’s Theorem, which is a starting point for much of the research on compilers and programming languages, and marks the difference between semantic and syntactic properties of programs.

“A function of a variable quantity is an analytic expression composed in any way whatsoever of the variable quantity and numbers or constant quantities.”, Leonhard Euler, 1748.

We saw that NAND programs can compute every finite function. A natural guess is that NAND++ programs could compute every infinite function. However, this turns out to be false, even for functions with \(0/1\) output. That is, there exists a function \(F:\{0,1\}^* \rightarrow \{0,1\}\) that is uncomputable! This is actually quite surprising, if you think about it. Our intuitive notion of a “function” (and the notion most scholars had until the 20th century) is that a function \(f\) defines some implicit or explicit way of computing the output \(f(x)\) from the input \(x\). (In the 1800’s, with the invention of the Fourier series and with the systematic study of continuity and differentiability, people started looking at more general kinds of functions, but the modern definition of a function as an arbitrary mapping was not yet universally accepted. For example, in 1899 Poincaré wrote “we have seen a mass of bizarre functions which appear to be forced to resemble as little as possible honest functions which serve some purpose. … they are invented on purpose to show that our ancestors’ reasoning was at fault, and we shall never get anything more than that out of them”.) The notion of an “uncomputable function” thus seems to be a contradiction in terms, yet the following theorem shows that such creatures do exist:

There exists a function \(F^*:\{0,1\}^* \rightarrow \{0,1\}\) that is not computable by any NAND++ program.

The proof is illustrated in Reference:diagonal-fig. We start by defining the following function \(G:\{0,1\}^* \rightarrow \{0,1\}\):

For every string \(x\in \{0,1\}^*\), if \(x\) satisfies (1) \(x\) is a valid representation of a NAND++ program \(P_x\) and (2) when the program \(P_x\) is executed on the input \(x\) it halts and produces an output, then we define \(G(x)\) as the first bit of this output. Otherwise (i.e., if \(x\) is not a valid representation of a program, or the program \(P_x\) never halts on \(x\)) we define \(G(x)=0\). We define \(F^*(x) := 1 - G(x)\).

We claim that there is no NAND++ program that computes \(F^*\). Indeed, suppose, towards the sake of contradiction, that there was some program \(P\) that computed \(F^*\), and let \(x\) be the binary string that represents the program \(P\). Then on input \(x\), the program \(P\) outputs \(F^*(x)\). But since \(P=P_x\) halts on \(x\) with first output bit \(F^*(x)\), the definition of \(G\) gives \(G(x)=F^*(x)\), and hence \(F^*(x)=1-G(x)=1-F^*(x)\), yielding a contradiction.

We construct an uncomputable function by defining for every two strings \(x,y\) the value \(1-P_y(x)\), which equals \(0\) if the program described by \(y\) outputs \(1\) on \(x\), and equals \(1\) otherwise. We then define \(F^*(x)\) to be the “diagonal” of this table, namely \(F^*(x)=1-P_x(x)\) for every \(x\). The function \(F^*\) is uncomputable, because if it were computable by some program whose string description is \(x^*\) then we would get that \(P_{x^*}(x^*)=F^*(x^*)=1-P_{x^*}(x^*)\).

The proof of Reference:uncomputable-func is short but subtle. I suggest that you pause here and go back to read it again and think about it - this is a proof that is worth reading at least twice if not three or four times. It is not often the case that a few lines of mathematical reasoning establish a deeply profound fact - that there are problems we simply cannot solve and the “firm conviction” that Hilbert alluded to above is simply false.

The type of argument used to prove Reference:uncomputable-func is known as diagonalization since it can be described as defining a function based on the diagonal entries of a table as in Reference:diagonal-fig. The proof can be thought of as an infinite version of the counting argument we used to show lower bounds for NAND programs in Reference:counting-lb. Namely, we show that it’s not possible to compute all functions from \(\{0,1\}^*\) to \(\{0,1\}\) by NAND++ programs simply because there are more such functions than there are NAND++ programs.
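
To make the diagonal definition concrete, here is \(F^*\) rendered as Python-like pseudocode. This is only a sketch of the definition, not an algorithm: the helpers parse_program and run_program are hypothetical, and the call to run_program may never return, which is exactly the obstacle that the theorem shows cannot be overcome.

def G(x):
    # hypothetical helper: returns None if x is not a valid NAND++ program
    program = parse_program(x)
    if program is None:
        return 0
    # hypothetical helper: runs the program on input x and returns its
    # output as a list of 0/1 integers. If the program does not halt on x,
    # this call hangs, so this sketch never returns the defined value
    # G(x)=0 in that case -- which is why G (and F_star) is uncomputable.
    output = run_program(program, x)
    return output[0]

def F_star(x):
    return 1 - G(x)    # flip the "diagonal" value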

The Halting problem

Reference:uncomputable-func shows that there is some function that cannot be computed. But is this function the equivalent of the “tree that falls in the forest with no one hearing it”? That is, perhaps it is a function that no one actually wants to compute.
It turns out that there are natural uncomputable functions:

Let \(HALT:\{0,1\}^* \rightarrow \{0,1\}\) be the function such that \(HALT(P,x)=1\) if the NAND++ program \(P\) halts on input \(x\), and \(HALT(P,x)=0\) if it does not. Then \(HALT\) is not computable.

Before turning to prove Reference:halt-thm, we note that \(HALT\) is a very natural function to want to compute. For example, one can think of \(HALT\) as a special case of the task of managing an “App store”. That is, given the code of some application, the gatekeeper for the store needs to decide if this code is safe enough to allow in the store or not. At a minimum, it seems that we should verify that the code would not go into an infinite loop.

We prove that \(HALT\) is uncomputable using a reduction from computing the previously shown uncomputable function \(F^*\) to computing \(HALT\). That is, we assume that we have an algorithm that computes \(HALT\), and use it to obtain an algorithm that computes \(F^*\).

The proof will use the previously established Reference:uncomputable-func, as illustrated in Reference:halt-fig. That is, we will assume, towards a contradiction, that there is a NAND++ program \(P^*\) that can compute the \(HALT\) function, and use it to derive that there is some NAND++ program \(Q^*\) that computes the function \(F^*\) defined above, contradicting Reference:uncomputable-func. (This is known as a proof by reduction, since we reduce the task of computing \(F^*\) to the task of computing \(HALT\). By the contrapositive, this means the uncomputability of \(F^*\) implies the uncomputability of \(HALT\).)

Indeed, suppose that \(P^*\) was a NAND++ program that computes \(HALT\). Then we can write a NAND++ program \(Q^*\) that does the following on input \(x\in \{0,1\}^*\). (Note that we are using here a “high level” description of NAND++ programs. We know that we can implement the steps below, for example by first writing them in NAND<< and then transforming the NAND<< program to NAND++. Step 1 involves simply running the program \(P^*\) on some input.)

  1. Compute \(z=P^*(x,x)\).
  2. If \(z=0\) then output \(1\).
  3. Otherwise, if \(z=1\) then let \(y\) be the first bit of \(EVAL(x,x)\) (i.e., evaluate the program described by \(x\) on the input \(x\)). If \(y=1\) then output \(0\). Otherwise output \(1\).
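
In Python-like pseudocode, the three steps above could be sketched as follows, where P_star is the assumed \(HALT\)-computing program and EVAL is an evaluator for NAND++ programs (both names are hypothetical helpers, and we gloss over the encoding of the pair \((x,x)\)):

def Q_star(x):
    z = P_star(x, x)          # assumed to always halt and output HALT(x,x)
    if z == 0:
        return 1              # the program described by x does not halt on x
    y = EVAL(x, x)[0]         # safe to run: we just checked that x halts on x
    return 0 if y == 1 else 1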

Claim: For every \(x\in \{0,1\}^*\), if \(P^*(x,x)=HALT(x,x)\) then \(Q^*(x)=F^*(x)\), where \(F^*\) is the function from the proof of Reference:uncomputable-func.

Note that the claim immediately implies that, under our assumption that \(P^*\) computes \(HALT\), the program \(Q^*\) computes \(F^*\), contradicting Reference:uncomputable-func, where we proved that the function \(F^*\) is uncomputable. Hence the claim is sufficient to prove the theorem.

Proof of claim: Let \(x\) be any string. If the program described by \(x\) halts on input \(x\) and its first output bit is \(1\), then \(F^*(x)=0\) and the output \(Q^*(x)\) will also equal \(0\): since \(z=HALT(x,x)=1\), in step 3 the program \(Q^*\) will run for a finite number of steps (because the program described by \(x\) halts on \(x\)), obtain the value \(y=1\), and output \(0\).

Otherwise, there are two cases. Either the program described by \(x\) does not halt on \(x\), in which case \(z=0\) and \(Q^*(x)=1=F^*(x)\). Or the program halts but its first output bit is not \(1\). In this case \(z=1\) but the value \(y\) computed by \(Q^*(x)\) is not \(1\) and so \(Q^*(x)=1=F^*(x)\).

Once again, this is a proof that’s worth reading more than once. The uncomputability of the halting problem is one of the fundamental theorems of computer science, and is the starting point for much of the investigations we will see later. An excellent way to get a better understanding of Reference:halt-thm is to do Reference:halting-alt-ex, which asks you to give an alternative proof of the same result.

Is the Halting problem really hard?

Many people’s first instinct when they see the proof of Reference:halt-thm is to not believe it. That is, most people do believe the mathematical statement, but intuitively it doesn’t seem that the Halting problem is really that hard. After all, being uncomputable only means that \(HALT\) cannot be computed by a NAND++ program. But programmers seem to solve \(HALT\) all the time by informally or formally arguing that their programs halt. While it does occasionally happen that a program unexpectedly enters an infinite loop, is there really no way to solve the halting problem? Some people argue that they can, if they think hard enough, determine whether any concrete program that they are given will halt or not. Some have even argued that humans in general have the ability to do that, and hence humans have inherently superior intelligence to computers or anything else modeled by NAND++ programs (a.k.a. Turing machines). (This argument has also been connected to the issues of consciousness and free will. I am not completely sure of its relevance, but perhaps the reasoning is that humans have the ability to solve the halting problem but exercise their free will and consciousness by choosing not to do so.)

The best answer we have so far is that there truly is no way to solve \(HALT\), whether using Macs, PCs, quantum computers, humans, or any other combination of mechanical and biological devices. Indeed this assertion is the content of the Church-Turing Thesis. This of course does not mean that for every possible program \(P\), it is hard to decide if \(P\) enters an infinite loop. Some programs don’t even have loops at all (and hence trivially halt), and there are many other far less trivial examples of programs that we can certify to never enter an infinite loop (or programs that we know for sure will enter such a loop). However, there is no general procedure that would determine for an arbitrary program \(P\) whether it halts or not. Moreover, there are some very simple programs for which it is not known whether they halt or not. For example, the following Python program will halt if and only if Goldbach’s conjecture is false:

def isprime(p):
    # p is prime iff it has no divisor between 2 and p-2 (valid for p >= 2)
    return all(p % i  for i in range(2,p-1))

def Goldbach(n):
    # can n be written as a sum of two primes?
    return any( (isprime(p) and isprime(n-p))
           for p in range(2,n-1))

# search the even numbers 4, 6, 8, ... for a counterexample to Goldbach's
# conjecture; this loop halts if and only if the conjecture is false
n = 4
while True:
    if not Goldbach(n): break
    n += 2

Given that Goldbach’s Conjecture has been open since 1742, it is unclear that humans have any magical ability to say whether this (or other similar programs) will halt or not.

XKCD’s take on solving the Halting problem, using the principle that “in the long run, we’ll all be dead”.

Reductions

The Halting problem turns out to be a linchpin of uncomputability, in the sense that Reference:halt-thm has been used to show the uncomputability of a great many interesting functions. We will see several examples of such results in this lecture and the exercises, and there are many more in the literature (see Reference:haltreductions).

The idea behind such uncomputability results is conceptually simple but can at first be quite confusing. If we know that \(HALT\) is uncomputable, and we want to show that some other function \(BLAH\) is uncomputable, then we can do so via a contrapositive argument (i.e., proof by contradiction). That is, we show that if we had a NAND++ program that computes \(BLAH\) then we could have a NAND++ program that computes \(HALT\). (Indeed, this is exactly how we showed that \(HALT\) itself is uncomputable, by showing this follows from the uncomputability of the function \(F^*\) from Reference:uncomputable-func.)

For example, to prove that \(BLAH\) is uncomputable, we could show that there is a computable function \(R:\{0,1\}^* \rightarrow \{0,1\}^*\) such that for every \(x\in \{0,1\}^*\), \(HALT(x)=BLAH(R(x))\). Such a function is known as a reduction, because we are reducing the task of computing \(HALT\) to the task of computing \(BLAH\). The confusing part about reductions is that we are assuming something we believe is false (that \(BLAH\) has an algorithm) to derive something that we know is false (that \(HALT\) has an algorithm). For this reason Michael Sipser described such results as having the form “If pigs could whistle then horses could fly”.
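
Schematically, such a reduction lets us turn any hypothetical algorithm for \(BLAH\) into one for \(HALT\) by simple composition, as in this Python-like sketch (where blah and R are hypothetical names):

def halt(x):
    # If blah computed BLAH and R were a computable reduction satisfying
    # HALT(x) = BLAH(R(x)), then this one-line composition would compute
    # HALT, contradicting Reference:halt-thm. Hence no such blah exists.
    return blah(R(x))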

At the end of the day reduction-based proofs are just like other proofs by contradiction, but the fact that they involve hypothetical algorithms that don’t really exist tends to make them quite confusing. The one silver lining is that the notion of a reduction is mathematically quite simple, and so it’s not that bad even if you have to go back to first principles every time you need to remember which direction a reduction should go in. (If this discussion itself is confusing, feel free to ignore it; it might become clearer after you see an example of a reduction such as the proof of Reference:spec-thm.)

Some of the functions that have been proven uncomputable. An arrow from problem X to problem Y means that the proof that Y is uncomputable follows by reducing computing X to computing Y. Black arrows correspond to proofs that are shown in this and the next lecture while pink arrows correspond to proofs that are known but not shown here. There are many other functions that have been shown uncomputable via a reduction from the Halting function \(HALT\).

Impossibility of general software verification

The uncomputability of the Halting problem turns out to be a special case of a much more general phenomenon. Namely, that we cannot certify semantic properties of general purpose programs. “Semantic properties” mean properties of the function that the program computes, as opposed to properties that depend on the particular syntax. For example, we can easily check whether a given C program contains comments, or whether all function names begin with an upper case letter. As we’ve seen, however, we cannot check whether a given program enters an infinite loop or not.

But we could still hope to check some other properties of the program. For example, we could hope to certify that a given program \(M\) correctly computes the multiplication operation, or that no matter what input the program is provided with, it will never reveal some confidential information. Alas it turns out that the task of checking that a given program conforms with such a specification is uncomputable. We start by proving a simple generalization of the Halting problem:

Let \(HALTONZERO:\{0,1\}^* \rightarrow\{0,1\}\) be the function that on input \(P\in \{0,1\}^*\), maps \(P\) to \(1\) if and only if the NAND++ program represented by \(P\) halts when supplied the single bit \(0\) as input. Then \(HALTONZERO\) is uncomputable.

The proof of Reference:haltonzero-thm is below, but before reading it you might want to pause for a couple of minutes and think how you would prove it yourself. In particular, try to think of what a reduction from \(HALT\) to \(HALTONZERO\) would look like. Doing so is an excellent way to get some initial comfort with the notion of proofs by reduction, which is a notion that will recur time and again in this course.

The proof is by reduction from \(HALT\). Suppose, towards the sake of contradiction, that \(HALTONZERO\) is computable: that is, that there exists an algorithm \(A\) such that \(A(P')=HALTONZERO(P')\) for every \(P'\in \{0,1\}^*\). We will use \(A\) to construct an algorithm \(B\) that solves \(HALT\).

On input a program \(P\) and an input \(x\), the algorithm \(B\) will construct a program \(P'\) such that \(P'(0)=P(x)\), feed \(P'\) to \(A\), and return \(A(P')\). We will describe this algorithm in terms of how one can use the input \(x\) to modify the source code of \(P\) to obtain the source code of the program \(P'\), but it is clearly possible to do these modifications also on the level of the string representations of the programs \(P\) and \(P'\).

Constructing the program \(P'\) is in fact rather simple. The algorithm \(B\) will obtain \(P'\) by modifying \(P\) to ignore its input and use \(x\) instead. In particular, for \(n=|x|\), the program \(P'\) will have variables myx_0,\(\ldots\),myx_\(\langle n-1 \rangle\) that are set to the constants zero or one based on the value of \(x\). That is, it will contain lines of the form myx_\(\langle i \rangle\) := \(\langle x_i \rangle\) for every \(i < n\). Similarly, \(P'\) will have variables myvalidx_0,\(\ldots\),myvalidx_\(\langle n-1 \rangle\) that are all set to one. Algorithm \(B\) will include in the program \(P'\) a copy of \(P\), modified to change any reference to x_\(\langle i \rangle\) into myx_\(\langle i \rangle\) and any reference to validx_\(\langle i \rangle\) into myvalidx_\(\langle i \rangle\). Clearly, regardless of its input, \(P'\) always emulates the behavior of \(P\) on input \(x\). In particular, \(P'\) will halt on the input \(0\) if and only if \(P\) halts on the input \(x\). Thus if the hypothetical algorithm \(A\) satisfies \(A(P')=HALTONZERO(P')\) for every \(P'\), then the algorithm \(B\) we construct satisfies \(B(P,x)=HALT(P,x)\) for every \(P,x\), contradicting the uncomputability of \(HALT\).

In the proof of Reference:haltonzero-thm we used the technique of “hardwiring” an input \(x\) to a program \(P\). That is, modifying a program \(P\) so that it uses “hardwired constants” for some or all of its input. This technique is quite common in reductions and elsewhere, and we will often use it again in this course.
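
As a rough illustration of this source-code transformation, here is how hardwiring might look in Python, treating programs as strings (the NAND++ variable syntax used here is an assumption for illustration only):

import re

def hardwire(P_source, x):
    # prepend lines setting myx_i and myvalidx_i to the bits of x
    prefix = []
    for i, bit in enumerate(x):
        prefix.append(f"myx_{i} := {bit}")
        prefix.append(f"myvalidx_{i} := 1")
    # rename input references so the program ignores its actual input;
    # we rename validx_ first, and the word boundary \b ensures the second
    # substitution does not touch the already-renamed occurrences
    body = re.sub(r"\bvalidx_", "myvalidx_", P_source)
    body = re.sub(r"\bx_", "myx_", body)
    return "\n".join(prefix) + "\n" + body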

Once we have shown the uncomputability of \(HALTONZERO\), we can extend this result to various other natural functions:

Let \(ZEROFUNC:\{0,1\}^* \rightarrow \{0,1\}\) be the function that on input \(P\in \{0,1\}^*\), maps \(P\) to \(1\) if and only if the NAND++ program represented by \(P\) outputs \(0\) on every input \(x\in \{0,1\}^*\). Then \(ZEROFUNC\) is uncomputable.

The proof is by reduction from \(HALTONZERO\). Suppose, towards the sake of contradiction, that there was an algorithm \(A\) such that \(A(P')=ZEROFUNC(P')\) for every \(P'\in \{0,1\}^*\). Then we will construct an algorithm \(B\) that solves \(HALTONZERO\). Given a program \(P\), Algorithm \(B\) will construct the following program \(P'\): on input \(x\in \{0,1\}^*\), \(P'\) will first run \(P(0)\), and then output \(0\).

Now if \(P\) halts on \(0\) then \(P'(x)=0\) for every \(x\), but if \(P\) does not halt on \(0\) then \(P'\) will not halt on any input, and in particular does not compute the all-zeroes function. Hence, \(ZEROFUNC(P')=1\) if and only if \(HALTONZERO(P)=1\). Thus if we define algorithm \(B\) as \(B(P)=A(P')\) (where a program \(P\) is mapped to \(P'\) as above), then we see that if \(A\) computes \(ZEROFUNC\) then \(B\) computes \(HALTONZERO\), contradicting Reference:haltonzero-thm.
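
In Python-like pseudocode, the wrapper that \(B\) builds looks as follows, where run is a hypothetical interpreter for NAND++ programs:

def make_P_prime(P):
    def P_prime(x):
        run(P, "0")   # never returns if P does not halt on 0
        return 0      # reached, for every x, if and only if P halts on 0
    return P_prime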

We can similarly prove the following:

The following function is uncomputable \[ COMPUTES\text{-}PARITY(P) = \begin{cases} 1 & P \text{ computes the parity function } \\ 0 & \text{otherwise} \end{cases} \]

We leave the proof of Reference:spec-thm as an exercise.

Rice’s Theorem

Reference:spec-thm can be generalized far beyond the parity function and in fact it rules out verifying any type of semantic specification on programs. We define a semantic specification on programs to be some property that does not depend on the code of the program but just on the function that the program computes.

For example, consider the following two C programs

int First(int k) {
    return 2*k;
}
int Second(int n) {
    int i = 0;
    int j = 0;
    while (j < n) {
        i = i + 2;
        j = j + 1;
    }
    }
    return i;
}

First and Second are two distinct C programs, but they compute the same function. A semantic property, such as “computing a function \(f:\N \rightarrow \N\) where \(f(m) \geq m\) for every \(m\)”, would be either true for both programs or false for both programs, since it depends on the function the programs compute and not on their code. A syntactic property, such as “containing the variable k” or “using a while operation” might be true for one of the programs and false for the other, since it can depend on properties of the programs’ code.

Often the properties of programs that we are most interested in are the semantic ones, since we want to understand the programs’ functionality. Unfortunately, the following theorem shows that such properties are uncomputable in general:

We say that two strings \(P\) and \(Q\) representing NAND++ programs have the same functionality if for every input \(x\in \{0,1\}^*\), either the programs corresponding to both \(P\) and \(Q\) don’t halt on \(x\), or they both halt with the same output.

We say that a function \(F:\{0,1\}^* \rightarrow \{0,1\}\) is semantic if for every \(P\) and \(Q\) that have the same functionality, \(F(P)=F(Q)\). Then the only semantic computable total functions \(F:\{0,1\}^* \rightarrow \{0,1\}\) are the constant zero function and the constant one function.

We will illustrate the proof idea by considering a particular semantic function \(F\). Define \(MONOTONE:\{0,1\}^* \rightarrow \{0,1\}\) as follows: \(MONOTONE(P)=1\) if there do not exist \(n\in \mathbb{N}\) and two inputs \(x,x' \in \{0,1\}^n\) such that \(x_i \leq x'_i\) for every \(i\in [n]\) but \(P(x)\) outputs \(1\) and \(P(x')=0\). That is, \(MONOTONE(P)=1\) if it’s not possible to find an input \(x\) such that flipping some bits of \(x\) from \(0\) to \(1\) will change \(P\)’s output in the other direction, from \(1\) to \(0\). We will prove that \(MONOTONE\) is uncomputable, but the proof will easily generalize to any semantic function. For starters, we note that \(MONOTONE\) is neither the all-zeroes nor the all-ones function:

  • The program \(INF\) that simply goes into an infinite loop satisfies \(MONOTONE(INF)=1\), since there are no inputs \(x,x'\) on which \(INF(x)=1\) and \(INF(x')=0\).
  • The program \(PAR\) that we’ve seen, which computes the XOR or parity of its input, is not monotone (e.g., \(PAR(1,0,0,\ldots,0)=1\) but \(PAR(1,1,0,\ldots,0)=0\)), and hence \(MONOTONE(PAR)=0\).

(It is important to note that in the above we talk about the programs \(INF\) and \(PAR\) and not the corresponding functions that they compute.)

We will now give a reduction from \(HALTONZERO\) to \(MONOTONE\). That is, we assume towards a contradiction that there exists an algorithm \(A\) that computes \(MONOTONE\) and we will build an algorithm \(B\) that computes \(HALTONZERO\). Our algorithm \(B\) will work as follows:

  1. On input a program \(P \in \{0,1\}^*\), \(B\) will construct the following program \(Q\): “on input \(z\in \{0,1\}^*\) do: a. Run \(P(0)\), b. Return \(PAR(z)\)”.
  2. \(B\) will then return the value \(1-A(Q)\).

To complete the proof we need to show that \(B\) outputs the correct answer, under our assumption that \(A\) computes \(MONOTONE\). In other words, we need to show that \(HALTONZERO(P)=1-MONOTONE(Q)\). Note that if \(P\) does not halt on zero, then the program \(Q\) enters into an infinite loop in step a. and will never reach step b. Hence in this case the program \(Q\) has the same functionality as \(INF\). (The program \(Q\) has different code than \(INF\): it is not the same program, but in this case it does have the same behavior of never halting on any input.) Thus, \(MONOTONE(Q)=MONOTONE(INF)=1\). If \(P\) does halt on zero, then step a. in \(Q\) will eventually conclude and \(Q\)’s output will be determined by step b., where it simply outputs the parity of its input. Hence in this case, \(Q\) computes the non-monotone parity function, and we get that \(MONOTONE(Q)=MONOTONE(PAR)=0\). In both cases we see that \(MONOTONE(Q)=1-HALTONZERO(P)\), which is what we wanted to prove. An examination of this proof shows that we did not use anything about \(MONOTONE\) beyond the fact that it is semantic and non-trivial (in the sense that it is neither the all-zeroes nor the all-ones function).
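
The construction can be sketched in Python-like pseudocode as follows, where run is again a hypothetical NAND++ interpreter, A is the assumed decider for \(MONOTONE\), and we gloss over the distinction between a program and its string representation:

def make_Q(P):
    def Q(z):
        run(P, "0")                         # step a: loops forever if P does not halt on 0
        return sum(int(b) for b in z) % 2   # step b: output PAR(z)
    return Q

def B(P):
    return 1 - A(make_Q(P))   # HALTONZERO(P) = 1 - MONOTONE(Q)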

Is software verification doomed?

Programs are increasingly being used for mission critical purposes, whether it’s running our banking system, flying planes, or monitoring nuclear reactors. If we can’t even give an algorithm that certifies that a program correctly computes the parity function, how can we ever be assured that a program does what it is supposed to do? The key insight is that while it is impossible to certify that a general program conforms with a specification, it is possible to write a program in the first place in a way that will make it easier to certify. As a trivial example, if you write a program without loops, then you can certify that it halts. Also, while it might not be possible to certify that an arbitrary program computes the parity function, it is quite possible to write a particular program \(P\) for which we can mathematically prove that \(P\) computes the parity. In fact, writing programs or algorithms and providing proofs for their correctness is what we do all the time in algorithms research.

The field of software verification is concerned with verifying that given programs satisfy certain conditions. These conditions can be that the program computes a certain function, that it never writes into a dangerous memory location, that it respects certain invariants, and others. While the general task of verifying such properties is uncomputable, researchers have managed to do so for many interesting cases, especially if the program is written in the first place in a formalism or programming language that makes verification easier. That said, verification, especially of large and complex programs, remains a highly challenging task in practice as well, and the number of programs that have been formally proven correct is still quite small. Moreover, even phrasing the right theorem to prove (i.e., the specification) is often a highly non-trivial endeavor.

Lecture summary

Exercises

Give an alternative, more direct, proof for the uncomputability of the Halting problem. Let us define \(H:\{0,1\}^* \rightarrow \{0,1\}\) as follows: when we interpret \(P\) as a program, \(H(P)=1\) if \(P(P)\) halts (i.e., if invoking \(P\) on its own string representation halts) and \(H(P)=0\) otherwise. Prove that there is no program \(P^*\) that computes \(H\), by building from such a supposed \(P^*\) a program \(Q\) such that, under the assumption that \(P^*\) computes \(H\), \(Q(Q)\) halts if and only if it does not halt. (Hint: See Christopher Strachey’s letter in the biographical notes.)

For each of the following two functions, say whether it is decidable (computable) or not:

  1. Given a NAND++ program \(P\), an input \(x\), and a number \(k\), when we run \(P\) on \(x\), does the index variable i ever reach \(k\)?
  2. Given a NAND++ program \(P\), an input \(x\), and a number \(k\), when we run \(P\) on \(x\), does \(P\) ever write to an array at index \(k\)?

Bibliographical notes

The diagonalization argument used to prove the uncomputability of \(F^*\) is of course derived from Cantor’s argument for the uncountability of the reals. In a twist of fate, using techniques originating from the works of Gödel and Turing, Paul Cohen showed in 1963 that Cantor’s Continuum Hypothesis is independent of the axioms of set theory, which means that neither it nor its negation is provable from these axioms and hence in some sense can be considered as “neither true nor false.” (The Continuum Hypothesis is the conjecture that for every subset \(S\) of \(\mathbb{R}\), either there is a one-to-one and onto map between \(S\) and \(\mathbb{N}\) or there is a one-to-one and onto map between \(S\) and \(\mathbb{R}\). It was conjectured by Cantor and listed by Hilbert in 1900 as one of the most important problems in mathematics.)

Further explorations

Some topics related to this lecture that might be accessible to advanced students include: (to be completed)

Acknowledgements