Natural Proof: Distinguish Simple From Random

In this post, we briefly give an overview of the famous “natural proof barrier” [RR94], which explains why it is extremely hard to obtain circuit lower bounds toward the P vs. NP problem.

The content of this post is based mainly on the book [AB09] rather than on the original paper [RR94].

Introduction

The celebrated natural proof theorem, stated by Razborov and Rudich in [RR94], gives a fundamental limitation of the common proof technique of the following form:

  1. no circuit of size \(s\) computes a function satisfying a certain property \(\mathcal{P}\) (so \(\mathcal{P}\) captures being “not simple”, i.e., implies a size lower bound of \(s\)),
  2. the target function \(f\) satisfies the property \(\mathcal{P}\),
  3. therefore \(f\) has the size lower bound \(s\).

By picking a random function \(g\) and considering the complexities of \(f\), \(g\) and \(f \oplus g\), one can show that at least a \(1/4\) fraction of all functions \(g\) (with the same signature) has complexity measure \(\mu(g) \geq \mu(f)/4\). (Perhaps this phenomenon is a witness of the power of the circuit model?)
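A sketch of the argument, under the (standard, but not stated above) assumption that \(\mu\) is a formal complexity measure, i.e., \(\mu(\neg h) = \mu(h)\) and \(\mu\) is subadditive under \(\wedge\) and \(\vee\):

```latex
% f can be recovered from g and f \oplus g:
f = \bigl(g \wedge \neg(f \oplus g)\bigr) \vee \bigl(\neg g \wedge (f \oplus g)\bigr),
% so subadditivity and negation-invariance give
\mu(f) \le \mu(g) + \mu(f \oplus g) + \mu(g) + \mu(f \oplus g) = 2\mu(g) + 2\mu(f \oplus g).
```

Hence \(\max\bigl(\mu(g), \mu(f \oplus g)\bigr) \geq \mu(f)/4\); since \(g\) and \(f \oplus g\) are each uniformly distributed, a constant fraction of all functions must have measure at least \(\mu(f)/4\).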

Moreover, such a property is usually succinct enough to describe (in natural language), so it is plausible that checking the property can be done efficiently (more formalization is needed here), where we measure the running time with respect to the length of the truth table, \(2^n\), rather than \(n\). (It is said that such a property is typically a combinatorial one, which is constructive in most cases [AB09].)

Definition

We now have the intuition behind the definition of a “natural” property: a property \(\mathcal{P}\) such that

  1. constructiveness: there is a \(2^{O(n)}\)-time algorithm that, on input (the truth table of) a function \(g\), outputs \(\mathcal{P}(g)\);
  2. largeness: the probability that a random function \(g: \left\lbrace 0, 1 \right\rbrace^n \to \left\lbrace 0, 1 \right\rbrace\) satisfies \(\mathcal{P}(g) = 1\) is at least \(1/n\).

Such a property \(\mathcal{P}\) is said to be \(n^c\)-useful if \(\mathcal{P}(g) = 0\) for all \(g \in \textsf{SIZE}(n^c)\). Observe that an \(n^c\)-useful natural property can be used to prove a lower bound of \(n^c\).
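As a toy illustration of the two conditions (my own example, not from the post): consider the hypothetical property “the truth table contains at least as many ones as zeros”. It is constructive (checkable in time linear in the table length \(2^n\)) and large (a random function satisfies it with probability about \(1/2 \geq 1/n\)), although it is not useful against any interesting circuit class. A minimal sketch in Python:

```python
import random

def prop(tt):
    """Toy property: the truth table has at least as many ones as zeros.
    Constructiveness: runs in time linear in the table length 2^n."""
    return 2 * sum(tt) >= len(tt)

n = 4
N = 2 ** n  # truth-table length

# Largeness: empirically, a uniformly random function satisfies prop
# with probability about 0.6, comfortably above the 1/n threshold.
random.seed(0)
trials = 10_000
hits = sum(prop([random.getrandbits(1) for _ in range(N)]) for _ in range(trials))
print(hits / trials)
```

The estimate is roughly \(0.6\) here because “at least half ones” happens with probability slightly above \(1/2\) for a uniform table.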

The word "natural" is quite a good match, capturing the common aspects of such proof strategies in the following sense:

  1. even if you are trying to prove a circuit lower bound for just one specific function, you typically obtain the same lower bound for a large fraction of functions, and thus satisfy the largeness condition;
  2. as previously stated, almost every property considered would satisfy the constructiveness condition.

Thus the proof strategy falls into our definition, and that's kinda “natural”.

Natural Proof: Distinguish Simple From Random

But how could proofs using such natural properties be ruled out? Because such a proof could be used to attack sub-exponentially secure PRGs, which are widely believed to exist. To be more precise, a random function would satisfy \(\mathcal{P}\) (and pass the checker, by constructiveness) with probability \(\geq 1/n\), while a truth table generated by the PRG would never satisfy \(\mathcal{P}\) (and would fail the checker), since it has low circuit complexity. Therefore (the checker of) the natural property \(\mathcal{P}\) can distinguish the PRG's output from random, violating a widely believed complexity assumption.
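To make the distinguishing argument concrete, here is a toy sketch (my own illustration, not the construction in [RR94]): take “simple” to mean affine over \(\mathrm{GF}(2)\), and let \(\mathcal{P}\) be “not affine”. The helper names `affine_tables` and `natural_property` are hypothetical; brute-force checking \(\mathcal{P}\) takes \(2^{O(n)}\) time, matching the constructiveness condition.

```python
import itertools
import random

def affine_tables(n):
    """Truth tables of all affine functions x -> a.x + b over GF(2):
    these play the role of the 'simple' (small-circuit) functions."""
    tabs = set()
    for a in itertools.product((0, 1), repeat=n):
        for b in (0, 1):
            tt = tuple((b + sum(ai * xi for ai, xi in zip(a, x))) % 2
                       for x in itertools.product((0, 1), repeat=n))
            tabs.add(tt)
    return tabs

def natural_property(tt, n):
    """P(tt) = 1 iff tt is NOT affine: constructive (brute force over the
    2^{n+1} affine functions) and useful against the affine 'circuit class'."""
    return tuple(tt) not in affine_tables(n)

n, trials = 4, 1000
simple = affine_tables(n)

# Every table coming from a simple function fails P ...
assert not any(natural_property(tt, n) for tt in simple)

# ... while a uniformly random table passes P with high probability,
# so the checker for P distinguishes simple from random.
random.seed(1)
passes = sum(natural_property([random.getrandbits(1) for _ in range(2 ** n)], n)
             for _ in range(trials))
print(passes)  # close to 1000: a random function is almost never affine
```

Any generator whose outputs are truth tables of simple functions is thus detected: its samples always fail the checker, while truly random samples almost always pass.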

Supplementary Technical Details

The checker given by constructiveness requires \(2^{O(n)}\) time, which might be larger than a sub-exponential function in \(n\), such as \(2^{n^\epsilon}\). To fix this, we take \(n = m^{\epsilon/2}\), where \(m\) is the seed length of the PRG, so that the checker's running time \(2^{O(n)} = 2^{O(m^{\epsilon/2})}\) stays below the security bound \(2^{m^\epsilon}\).

So, as a conclusion, here is the slogan I prefer: natural proof = “distinguish simple from random”.

  • usefulness (against) = “not simple”,
  • largeness = “random”,
  • constructiveness = “(efficiently) distinguishable”.

And More ...

This great and succinct conclusion comes from a talk given by Prof. Valentine Kabanets at the Simons Institute. Many thanks to him for the inspiration.

To bypass this barrier, since the largeness condition essentially always holds, one should focus on constructiveness: the \(1/4\) fraction of functions that inherit the lower bound is not obtained constructively. But no more thoughts about this for now.

This concept is also related to Kolmogorov complexity in some sense, as the latter likewise "distinguishes simple from random". Hopefully more thoughts will be added here in the future.

References

  • [AB09] Sanjeev Arora and Boaz Barak. Computational Complexity: A Modern Approach.
  • [RR94] Alexander A. Razborov and Steven Rudich. Natural Proofs.