Preliminary

Notation

| Notation | Name | Formal Definition |
| --- | --- | --- |
| $f(n) = o(g(n))$ | Little O | $\forall k > 0\ \exists n_0\ \forall n > n_0.\ f(n) < kg(n)$ |
| $f(n) = O(g(n))$ | Big O | $\exists k > 0\ \exists n_0\ \forall n > n_0.\ f(n) \leq kg(n)$ |
| $f(n) = \Theta(g(n))$ | Big Theta | $\exists k_1 > 0\ \exists k_2 > 0\ \exists n_0\ \forall n > n_0.\ k_1 g(n) \leq f(n) \leq k_2 g(n)$ |
| $f(n) \sim g(n)$ | On the order of | $\forall \epsilon > 0\ \exists n_0\ \forall n > n_0.\ \lvert \frac{f(n)}{g(n)} - 1 \rvert < \epsilon$ |
| $f(n) = \Omega(g(n))$ | Big Omega | $\exists k > 0\ \exists n_0\ \forall n > n_0.\ f(n) \geq kg(n)$ |
| $f(n) = \omega(g(n))$ | Little Omega | $\forall k > 0\ \exists n_0\ \forall n > n_0.\ f(n) > kg(n)$ |

Probabilistic Method

Lattice

If we define $B$ as an $m \times n$ matrix whose columns are $b_1, b_2, \ldots, b_n$, then the lattice generated by $B$ is $L(B) = L(b_1, b_2, \ldots, b_n) = \{ Bx \mid x \in \mathbb{Z}^n \}$. We say that the rank of the lattice is $n$ and its dimension is $m$. If $m = n$, the lattice is called a full-rank lattice.

For any lattice basis $B$ we define the fundamental parallelepiped $P(B) = \{ Bx \mid x \in \mathbb{R}^n, \forall i: 0 \leq x_i < 1 \}$.

The span of a lattice $L(B)$ is the linear space spanned by its basis vectors, $span(L(B)) = span(B) = \{ By \mid y \in \mathbb{R}^n \}$. It follows easily from the definitions above that if we place one copy of $P(B)$ at each lattice point of $L(B)$, the copies tile $span(B)$ without gaps or overlaps.
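As a quick illustration, here is a toy numpy sketch (the basis matrix `B` below is chosen here for illustration, not taken from the text): it enumerates a few lattice points $Bx$ for small integer coefficient vectors and samples points of the fundamental parallelepiped $P(B)$.

```python
# Minimal numpy sketch of a rank-2 lattice in R^2; the basis below is a toy choice.
import itertools
import numpy as np

B = np.array([[2.0, 1.0],
              [0.0, 1.0]])      # columns b_1, b_2 form the basis

# Lattice points Bx for integer coefficient vectors x with small entries.
coeffs = itertools.product(range(-2, 3), repeat=2)
lattice_points = [B @ np.array(x) for x in coeffs]

# Points of the fundamental parallelepiped P(B): Bx with 0 <= x_i < 1.
samples = [B @ np.array([u, v])
           for u in np.linspace(0, 1, 5, endpoint=False)
           for v in np.linspace(0, 1, 5, endpoint=False)]

print(len(lattice_points), "lattice points;", len(samples), "samples of P(B)")
```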

Lemma 1

Let $\Lambda$ be a lattice of rank $n$, and let $b_1, b_2, \ldots, b_n \in \Lambda$ be $n$ linearly independent lattice vectors. Then $b_1, b_2, \ldots, b_n$ form a basis of $\Lambda$ if and only if $P(b_1, b_2, \ldots, b_n) \cap \Lambda = \{ 0 \}$.

Turing machine

equivalence between various Turing machines

All of these are sequential-storage Turing machines.

one-tape Turing machine

two-tape Turing machine

k-tape Turing machine

oblivious Turing machine

This is an important model: it is a key tool for proving complexity lower bounds, and it also reveals the relationship between circuits and Turing machines.

The main difficulty is the word “oblivious” itself (in Chinese, I think 不经意 is the best translation). An oblivious machine's head movements depend only on the length of the input, not on its contents; in particular, the algorithm does not stop once it finds the answer but keeps traversing the remaining cases. Take search as an example: a typical sequential scan is easy to implement obliviously, but binary search is hard to make oblivious, because the range of each query is determined by the result of the previous one, and once the answer is found, how should the algorithm continue?
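To make the search example concrete, here is a small hypothetical sketch (the function names and data are chosen here for illustration): the linear scan visits the same cells in the same order for every input of a given length, even after it has found the target, whereas binary search's access pattern depends on what it has read so far.

```python
# Toy illustration of the search example: the access pattern of the oblivious
# scan depends only on len(data); binary search's pattern depends on the data.
def oblivious_find(data, target):
    found, trace = -1, []
    for i in range(len(data)):          # always visits every index, even after a hit
        trace.append(i)
        if data[i] == target and found == -1:
            found = i
    return found, trace

def binary_search(data, target):
    lo, hi, trace = 0, len(data) - 1, []
    while lo <= hi:
        mid = (lo + hi) // 2
        trace.append(mid)                # which cell is read depends on earlier reads
        if data[mid] == target:
            return mid, trace
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, trace

print(oblivious_find([1, 3, 5, 7], 5))   # trace is [0, 1, 2, 3] for any input of length 4
print(binary_search([1, 3, 5, 7], 5))    # trace varies with the contents
```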

Pippenger and Fischer, “Relations Among Complexity Measures.”

The paper cited above mainly proves two results (under the one-dimensional-tape assumption):

  1. If M is a transducer with one-dimensional tapes, there is an oblivious machine M’ with two one-dimensional tapes that simulates M on-line in time $O(n\log n)$.
  2. If M is a transducer with one-dimensional tapes, there is a combinational logic network implementing n steps of M with cost $O(n\log n)$ and depth $O(n)$.

From this we can conclude: any $T$-time Turing machine can be simulated by an $O(T\log T)$-size circuit.

Nondeterministic Turing Machine

Probabilistic Turing Machine

There are several ways to give Turing machines access to a source of randomness. If one wishes to study randomness in the (small) space-bounded setting one needs to be slightly careful. Namely, once a random bit has been acquired it seems most reasonable to make sure that the Turing machine needs to actually store the value (either in the internal states or on a work tape) should it want to read the random bit again. Our choice of definition will be to give a probabilistic Turing machine an extra read-only and read-once tape which is pre-initialized with uniform and independent random bits. The tape is made read-only and read-once by not allowing the Turing machine to modify the cells of the tape and by only allowing the tape head to be stationary or to move right. The advantage of using this definition of probabilistic Turing machines is that it becomes simple to define the precise number of random bits used during computation.

This definition is nice: as noted above, it precisely pins down the number of random bits used, whereas the alternative coin-flipping formulation is less formal.
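As a toy illustration (a sketch of the idea, not the book's formalism), one can model this read-only, read-once random tape as an object whose only operations are reading the current cell and moving right, which makes the count of consumed random bits explicit.

```python
# Toy model of the read-once random tape: the only operations are "read the
# current cell" and "move right", so counting consumed random bits is trivial.
import random

class RandomTape:
    def __init__(self):
        self.bits = []          # lazily materialized uniform independent bits
        self.head = 0

    def read(self):
        while len(self.bits) <= self.head:
            self.bits.append(random.getrandbits(1))
        return self.bits[self.head]

    def move_right(self):
        self.head += 1

    def bits_used(self):
        return len(self.bits)   # random-tape cells consumed so far

tape = RandomTape()
b1 = tape.read()        # re-reading without moving returns the same bit
tape.move_right()
b2 = tape.read()
print(b1, b2, "random bits used:", tape.bits_used())
```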

reduction

LOGSPACE REDUCTION

Informally, there is an $O(\log \lvert x \rvert)$-space machine that, given input $(x, i)$, outputs the $i$-th bit of $f(x)$ provided $i \leq \lvert f(x) \rvert$. We say that $f$ is implicitly logspace computable.

circuit

CNF and DNF

CNF, $\bigwedge_{i}(\bigvee_{j} x_{ij})$, is short for Conjunctive Normal Form, and DNF, $\bigvee_{i}(\bigwedge_{j} x_{ij})$, is short for Disjunctive Normal Form, where each $x_{ij}$ is a literal.

SAT

Let SAT be the language of all satisfiable CNF formulae and 3SAT be the language of all satisfiable 3CNF formulae. Then, SAT and 3SAT are both NP-complete. (Cook-Levin Theorem)

universality of boolean operation

For every boolean function $f : \{ 0,1 \}^{l} \rightarrow \{ 0,1 \}$, there is an $l$-variable CNF formula $\phi$ of size at most $l2^l$ such that $\phi(u) = f(u)$ for every $u \in \{ 0,1\}^{l}$.

It’s easy to show: for every input on which $f$ outputs 0, we construct a clause that rejects exactly that input. For example, for the input $\langle 1,1,0,1 \rangle$ we use $\bar x_1 \vee \bar x_2 \vee x_3 \vee \bar x_4$, which evaluates to 0 only on that input. We then take the conjunction of the clauses for all inputs that have output 0. Each clause has $l$ literals and there are at most $2^l$ of them, so the size is $O(l2^l)$.
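A small sketch of this construction (the helper names and the majority example are chosen here for illustration): for every input on which $f$ is 0 we add the clause that is false exactly there, then check that the resulting CNF agrees with $f$ on every input.

```python
# Build the size-O(l * 2^l) CNF for an arbitrary truth table, as described above.
import itertools

def truth_table_to_cnf(f, l):
    """Return a list of clauses; each clause is a list of (index, negated) literals."""
    clauses = []
    for u in itertools.product([0, 1], repeat=l):
        if f(u) == 0:
            # This clause is false exactly on input u: for u_i = 1 use the negated
            # literal, for u_i = 0 use the positive literal.
            clauses.append([(i, u[i] == 1) for i in range(l)])
    return clauses

def eval_cnf(clauses, u):
    return all(any((not u[i]) if neg else u[i] for i, neg in clause) for clause in clauses)

# Example: f is the 3-bit majority function.
f = lambda u: int(sum(u) >= 2)
cnf = truth_table_to_cnf(f, 3)
assert all(eval_cnf(cnf, u) == bool(f(u)) for u in itertools.product([0, 1], repeat=3))
print(len(cnf), "clauses of width 3")
```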

circuit complexity

P/poly

P/poly = $\cup_{c} \text{SIZE}(n^c)$

uniform computation vs. nonuniform computation

A Turing machine is the canonical uniform model of computation, because a single machine handles inputs of every length, whereas a circuit only accepts inputs of one fixed length. Why do we care about nonuniformity? Because with hard-coding, even some undecidable problems turn into circuits of linear size. For example, suppose we want to decide whether a length-$n$ input belongs to some unary language, i.e., a language that is a subset of the unary strings. For each length up to $n$ we can hard-wire whether the unary string of that length is in the language; each of these hard-wired answers is independent of the input and hence of constant size, so the whole circuit is linear. This is essentially cheating, which is why we need to place restrictions on circuit families to give them more practical meaning.
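A toy sketch of the hard-coding trick (the dictionary `in_language` below is a made-up stand-in for whatever answer bits one would wire in, possibly coming from an undecidable unary language): each circuit only checks that its length-$n$ input is all 1s and then outputs a single hard-wired bit.

```python
# Sketch of a non-uniform circuit family for a unary language: the answer bit
# for each input length is hard-wired, so each circuit only needs an AND over
# its n input bits.  `in_language` is a made-up stand-in for those wired-in bits.
in_language = {0: False, 1: True, 2: False, 3: True, 4: True}   # hypothetical answers

def circuit_for_length(n):
    answer = in_language.get(n, False)        # constant folded into the circuit
    def circuit(bits):
        assert len(bits) == n
        is_unary = all(b == 1 for b in bits)  # linear-size AND of the input bits
        return is_unary and answer
    return circuit

C3 = circuit_for_length(3)
print(C3([1, 1, 1]), C3([1, 0, 1]))           # True, False
```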

LOGSPACE-UNIFORM CIRCUIT FAMILIES

Interactive proofs

The verifier checks the proof; the prover supplies it. It is worth mentioning that many theorems in this chapter were proposed and proved by Shafi Goldwasser, an outstanding theoretical computer scientist, who won the Gödel Prize twice for this line of work and the Turing Award in 2012.

Deterministic Proof System

(Completeness) $x \in L \Rightarrow \exists P: \{ 0, 1\}^* \rightarrow \{ 0, 1\}^*\ \text{out}_V\langle V, P\rangle(x) = 1$

(Soundness) $x \notin L \Rightarrow \forall P: \{ 0, 1\}^* \rightarrow \{ 0, 1\}^*\ \text{out}_V\langle V, P\rangle(x) = 0$

Here $\text{out}_V\langle V, P\rangle(x)$ denotes $V(x, a_1, \ldots, a_k)$, the verifier's final output after the interaction. Note that since the verifier is deterministic, it makes no difference whether soundness requires the output to be 0 or merely that no prover can make it 1. dIP contains all languages with a $k(n)$-round deterministic interactive proof system for $k(n)$ polynomial in $n$. dIP = NP; the proof is fairly straightforward and will be added later.

IP

$IP = \cup_{c\geq 1}IP[n^c]$

We assume the verifier is probabilistic; in fact, one can show that whether or not the prover is probabilistic makes no difference to IP. IP $\subseteq$ PSPACE.

Arora and Barak, Computational Complexity.

There are some insights about IP in P8.4(150).

GNI and GI

Graph Isomorphism (GI) is in NP, but we don’t know whether it is in P or NP-complete; it is still an open question. GNI, the complement of GI, is short for Graph Non-Isomorphism.

GNI is in IP, via a very clever construction; even a single round of interaction suffices. We pick one of the two graphs uniformly at random, generate a random permutation of its vertices, and ask the prover which graph the permuted graph came from. If the two graphs are not isomorphic, the prover's answer is always correct; if they are isomorphic, even an all-powerful prover can only flip a coin, so its answer is correct with probability at most 1/2.
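A hypothetical sketch of the verifier's side of this one-round protocol (the graph encoding and the placeholder `prover_guess` are choices made here for illustration):

```python
# One round of the private-coin GNI protocol: the verifier picks a graph at
# random, sends a random relabeling of it, and accepts iff the prover names
# the graph it came from.  `prover_guess` is a placeholder for the prover.
import random

def permute(graph, n):
    """Apply a uniformly random relabeling of the n vertices to an edge list."""
    perm = list(range(n))
    random.shuffle(perm)
    return {frozenset({perm[u], perm[v]}) for (u, v) in graph}

def verifier_round(G1, G2, n, prover_guess):
    secret = random.choice([1, 2])
    H = permute(G1 if secret == 1 else G2, n)
    return prover_guess(H) == secret          # accept iff the prover is right

# Toy graphs on 3 vertices: a path vs. a triangle (clearly non-isomorphic).
G1 = [(0, 1), (1, 2)]
G2 = [(0, 1), (1, 2), (0, 2)]
dumb_prover = lambda H: 1 if len(H) == 2 else 2   # works here: the graphs differ in edge count
print(verifier_round(G1, G2, 3, dumb_prover))
```

With non-isomorphic graphs an all-powerful prover can always answer correctly, whereas with isomorphic graphs the permuted graph carries no information about `secret`, so no prover strategy succeeds with probability above 1/2.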

Naturally one asks: the prover cannot tell because the verifier's coin tosses are hidden, so what if they were public? Would that weaken the computational power of IP? The answer is that it does, but only slightly; concretely, one can show $IP[k] \subseteq AM[k + 2]$.

GNI is in AM[k]

First, define the identity permutation to be the permutation of the form $[1, 2, \ldots, n]$, i.e., the one with $\pi(G) = G$; a non-identity permutation is any permutation not of this form. Now here is a blunt claim: if a graph on $n$ vertices has fewer than $n!$ isomorphic copies, then there exists a non-identity permutation with $\pi(G) = G$. Think of each edge as a link between two entries of an array of vertices: every permutation of the array yields an isomorphic copy, so there are at most $n!$ of them, but a non-identity permutation fixing the graph gives rise to permutations $\pi, \pi^2, \pi^3, \ldots, \pi^k$ that all produce the same graph as the original. We therefore consider the pairs $(H, \pi)$, where $H$ is isomorphic to $G_1$ or $G_2$ and $\pi(H) = H$; when $G_1 \equiv G_2$ there are exactly $n!$ such pairs, and when $G_1 \not\equiv G_2$ there are exactly $2(n!)$. Then it can be done by the set lower bound protocol.
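The counting claim can be sanity-checked by brute force on tiny graphs; the sketch below (with toy graphs chosen here) counts the pairs $(H, \pi)$ with $H$ isomorphic to $G_1$ or $G_2$ and $\pi(H) = H$.

```python
# Brute-force check of the counting argument on 3-vertex graphs: the number of
# pairs (H, pi) with H isomorphic to G1 or G2 and pi an automorphism of H is
# n! when G1 and G2 are isomorphic and 2 * n! when they are not.
from itertools import permutations
from math import factorial

n = 3

def apply(perm, graph):
    """Relabel the vertices of an (undirected) edge set according to perm."""
    return frozenset(frozenset(perm[v] for v in edge) for edge in graph)

def count_pairs(G1, G2):
    copies = {apply(p, G) for p in permutations(range(n)) for G in (G1, G2)}
    return sum(1 for H in copies
                 for p in permutations(range(n)) if apply(p, H) == H)

path     = frozenset({frozenset({0, 1}), frozenset({1, 2})})
triangle = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
print(count_pairs(path, triangle), "expected", 2 * factorial(n))   # non-isomorphic case
print(count_pairs(path, path),     "expected", factorial(n))       # isomorphic case
```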

Set Lower Bound Protocol

Arora and Barak, Computational Complexity. The proof in the book has a few gaps, some of them fairly important. To begin with, the definition of pairwise independent hash functions is itself problematic:

DEFINITION (Pairwise Independent Hash Functions): Let $H_{n,k}$ be a collection of functions from $\{ 0,1\}^{n}$ to $\{ 0,1\}^{k}$. We say that $H_{n,k}$ is pairwise independent if for every $x, x’ \in \{0, 1\}^{n}$ with $x \neq x’$ and for every $y, y’\in \{ 0,1\}^{k}$, $\Pr_{h \in_R H_{n,k}}[h(x) = y \wedge h(x’) = y’] = 2^{-2k}$.

The book writes $2^{-2n}$, which cannot be right: summing that probability over all pairs $(y, y’)$, the total does not come out to 1. So I believe the exponent should be $k$, and with this change the word “independent” also makes more sense. The proof for the affine-function example is fine.
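With the exponent fixed to $k$, the definition is easy to check numerically on a small example. The sketch below uses the affine family $h_{a,b}(x) = ax + b$ over a small prime field $\mathbb{Z}_p$ (a stand-in for the book's affine example over $GF(2^n)$, chosen so the code stays short):

```python
# Numerical check of pairwise independence for h_{a,b}(x) = ax + b (mod p):
# for any fixed x != x' and any targets (y, y'), exactly one pair (a, b) works,
# so the probability over a uniformly random (a, b) is 1 / p^2.
from itertools import product

p = 5                                   # small prime so Z_p is a field
family = [(a, b) for a in range(p) for b in range(p)]

def h(a, b, x):
    return (a * x + b) % p

x, x2 = 1, 3                            # any two distinct inputs
for y, y2 in product(range(p), repeat=2):
    hits = sum(1 for a, b in family if h(a, b, x) == y and h(a, b, x2) == y2)
    assert hits == 1                    # probability hits / p^2 = p^{-2}
print("pairwise independent: every output pair has probability 1 /", p * p)
```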

In the book's proof of the set lower bound protocol, when the inclusion-exclusion principle is applied, a coefficient of 1/2 appears. This is not a bound of any kind, just a counting trick: $n$ events have $n(n-1)/2$ pairwise intersections, and the author moves this 1/2 out of the summation range and into a coefficient, which is quite confusing. It is not hard to show by induction that the remaining terms of the expansion are positive, so dropping them gives the bound.

The most important issue: following the author's logic, the proof needs the assumption $\lvert S\rvert \leq \frac{2^k}{2}$, but the setup only assumes $\lvert S \rvert \geq K$ with $2^{k-2} \leq K \leq 2^{k-1}$. This does not actually make the proof incomplete: if $\lvert S\rvert > \frac{2^k}{2}$ we can double $K$ once more, and suitable $K$ and $k$ can always be found. The crucial point is that the set in the accepting case is at least twice as large as the set in the rejecting case.

IP = PSPACE

#SAT is the problem of counting the number of interpretations that satisfy a given Boolean formula. In plain words: given a boolean formula, count how many satisfying assignments it has.

A decision version of the counting problem #SAT: $\#\text{SAT}_D = \{ \langle \phi, K\rangle : K \text{ is the number of satisfying assignments of } \phi \}$

We can show that $\#\text{SAT}_D \in IP$ by arithmetization and the sum-check protocol. One point worth thinking about: why do we need to find a prime and work over a finite field rather than over all of the integers? Put differently, what role does the field property, i.e., the existence of multiplicative inverses, play here?
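A small sketch of the arithmetization step (the CNF and the prime below are toy choices made here): each clause becomes $1 - \prod(1 - \text{literal})$, the formula becomes the product of its clause polynomials, and the number of satisfying assignments is the sum of this polynomial over $\{0,1\}^n$, which stays intact as long as the prime exceeds $2^n$.

```python
# Arithmetize a small CNF over Z_p and recover #SAT as a sum over {0,1}^n.
from itertools import product

n = 3
p = 11                      # any prime larger than 2^n keeps the count intact
# Clauses as lists of (variable index, negated?) literals:
# (x0 or not x1) and (x1 or x2) and (not x0 or x2)
cnf = [[(0, False), (1, True)], [(1, False), (2, False)], [(0, True), (2, False)]]

def clause_poly(clause, X):
    """1 - prod(1 - literal); equals the clause's truth value on 0/1 inputs."""
    prod = 1
    for i, neg in clause:
        lit = (1 - X[i]) if neg else X[i]
        prod = prod * (1 - lit) % p
    return (1 - prod) % p

def formula_poly(X):
    result = 1
    for clause in cnf:
        result = result * clause_poly(clause, X) % p
    return result

count = sum(formula_poly(x) for x in product([0, 1], repeat=n)) % p
brute = sum(all(any((not x[i]) if neg else x[i] for i, neg in c) for c in cnf)
            for x in product([0, 1], repeat=n))
print(count, brute)         # both equal the number of satisfying assignments
```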

Sum-check Protocol

The NUS lecture notes explain this very well, although many of the proofs go by induction, which is not very elegant. They also speak of the prover “cheating”; more precisely, if $K$ really equals the sum of the polynomial the prover has no reason to cheat, and the so-called cheating only arises when $K$ does not equal the sum. So the cheating here is not malicious deception but helplessness: the prover does its best to produce something that looks like a valid answer. At its root, an interactive proof is like a biology teacher: ask her to explain a correct result and she talks fluently; ask her to justify a wrong result and she can still spin out a story. XDDDDD

Firstly, prove completeness. This is immediate.

Secondly, prove soundness. Since the claimed $K$ does not equal the true sum of the polynomial, a prover that is to avoid immediate rejection must send back a univariate polynomial different from the true one. A polynomial of degree $d$ has at most $d$ roots, so when the verifier picks a random field element the two polynomials agree there with probability at most $d/p$; in that unlucky case the new claim really does equal the corresponding sum, so by completeness the recursive sum-check call will accept. If the random point is not one of those agreement points, the new claim again differs from the true sum and we continue the protocol on it. Since the verifier can check the final claim by itself at the end, over the $n$ variables the first (unlucky) situation must occur at some round for a false claim to be accepted, and by a union bound the soundness error is at most $nd/p$.
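Here is a minimal sketch of the whole protocol with an honest prover (the polynomial $g$, the prime, and the degree bound are toy choices made here): in each round the prover sends the univariate partial-sum polynomial by its values at $0, \ldots, d$, the verifier checks $s_i(0) + s_i(1)$ against the current claim, picks a random field element, and at the end evaluates $g$ itself.

```python
# A minimal, educational sketch of the sum-check protocol over a prime field.
import random

p = 2**31 - 1          # a prime, so Z_p is a field
d = 2                  # upper bound on g's degree in each single variable
n = 3                  # number of variables

def g(x):
    """Example multivariate polynomial over Z_p (degree <= d in each variable)."""
    x1, x2, x3 = x
    return (2 * x1 * x1 * x2 + 3 * x2 * x3 + x1 + 5) % p

def lagrange_eval(ys, r):
    """Evaluate the unique degree-<=d polynomial with values ys at 0..d, at point r."""
    total = 0
    for i, yi in enumerate(ys):
        num, den = 1, 1
        for j in range(len(ys)):
            if j != i:
                num = num * (r - j) % p
                den = den * (i - j) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

def partial_sum(prefix, X):
    """Honest prover: sum g over boolean suffixes, with the current variable fixed to X."""
    rest = n - len(prefix) - 1           # number of still-free boolean variables
    total = 0
    for mask in range(2 ** rest):
        suffix = [(mask >> b) & 1 for b in range(rest)]
        total = (total + g(prefix + [X] + suffix)) % p
    return total

def sumcheck(claim):
    """Run the interaction; return True iff the verifier accepts the claim."""
    prefix = []
    for _ in range(n):
        # Prover sends the round polynomial s_i, described by its values at 0..d.
        s = [partial_sum(prefix, X) for X in range(d + 1)]
        # Verifier checks consistency with the current claim: s_i(0) + s_i(1) == claim.
        if (s[0] + s[1]) % p != claim:
            return False
        # Verifier picks a random field element and updates the claim.
        r = random.randrange(p)
        claim = lagrange_eval(s, r)
        prefix.append(r)
    # Final check: the verifier evaluates g itself at the random point.
    return claim == g(prefix)

true_sum = sum(g([(m >> b) & 1 for b in range(n)]) for m in range(2 ** n)) % p
print(sumcheck(true_sum))               # accepted (completeness)
print(sumcheck((true_sum + 1) % p))     # a false claim: the honest prover's messages expose it
```

Because the prover in this sketch is honest, a false claim is caught already at the first consistency check; a cheating prover would instead have to send wrong polynomials and survive every random challenge, which happens with probability at most $nd/p$.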

TQBF

PCP and Hardness of Approximation

The PCP theorem implies that many optimization problems are NP-hard not only to solve exactly but even to approximate. Thus the PCP theorem extends the practical importance of the theory of NP-completeness.

(r,q)-VERIFIER

Let L be a language and $q,r:\mathbb{N} \rightarrow \mathbb{N}$. We say that L has an (r(n), q(n))-verifier if there is a polynomial-time probabilistic algorithm V satisfying:

Efficiency: On input a string $x \in \{ 0, 1\}^n$ and given random access to a string $\pi \in \{ 0,1\}^*$ (which we call the proof), V uses at most $r(n)$ random coins and makes at most $q(n)$ non-adaptive queries to locations of $\pi$. Then it outputs 1 or 0. We use the notation $V^{\pi}(x)$ to denote the random variable representing V’s output on input x and with random access to $\pi$.

Completeness: If $x \in L$ then there exists a proof $\pi \in \{ 0,1\}^*$ such that $\Pr[V^{\pi}(x)=1]=1$. We call $\pi$ the correct proof for x.

Soundness: If $x \notin L$ then for every proof $\pi \in \{ 0,1\}^*$, $\Pr[V^{\pi}(x) = 1] \leq 1/2$.

| (r,q)-verifier | standard complexity class | proof idea |
| --- | --- | --- |
| $PCP(0,0)$ | P | no randomness and no access to the proof |
| $PCP(O(\log n),0)$ | P | with $O(\log n)$ random bits there are only polynomially many possible coin sequences; a P machine can simulate them all and compute the acceptance probability |
| $PCP(0, O(\log n))$ | P | by definition, a P machine can enumerate all possible answers to the $O(\log n)$ queried proof positions; if some answer string leads to acceptance, accept, otherwise reject, matching the verifier definition |
| $PCP(poly(n),0)$ | coRP | |
| $PCP(O(\log n),O(1))$ | NP | PCP theorem |
