Select Connection: INPUT[inlineListSuggester(optionQuery(#permanent_note), optionQuery(#literature_note), optionQuery(#fleeting_note)):connections]

⇒ It seems like ML is just sampling from the complexity of the computational universe, and picking out behaviors that happen to overlap with what’s needed.

In the discrete example, a simplified version of ML (and biological) evolution can be seen. The system doesn’t find the simplest solution; the solutions it finds seem to just “happen to work”.

  • Computational irreducibility has two implications:
    • Richness: without it, systems wouldn’t be random enough. For example, without it ML training would probably get stuck in a local minimum (as seen in the AI course).
    • We can’t have a general explanation, the way traditional science provides one.
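The local-minimum point above can be sketched with a toy example (my own illustration, not from the source): deterministic gradient descent on a non-convex function stalls in a local minimum, while injecting randomness (here, random restarts) samples enough of the space to find the better basin.

```python
import random

# Toy non-convex objective: global minimum near x = -1.30, local minimum near x = 1.13.
def f(x):
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=500):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Deterministic descent from x = 1.0 stalls in the local minimum (f ~ -1.07).
stuck = descend(1.0)

# Random restarts almost surely find the global basin (f ~ -3.51).
random.seed(0)
best = min((descend(random.uniform(-2, 2)) for _ in range(50)), key=f)
print(f(stuck), f(best))
```

The randomness here is only in the starting points; noisy updates during training (as in SGD) play a similar role.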

What can be learned?

  • single-layer perceptron → any piecewise linear function (only straight-line pieces)
  • one intermediate layer → piecewise hyperplanar functions (functions that change direction only at linear “fault lines”)
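A quick check of the piecewise-linear claim, using a hypothetical 3-neuron ReLU net with weights chosen purely for illustration: the network is exactly linear between its “fault lines” and changes slope only there.

```python
# Hypothetical one-hidden-layer ReLU net with 3 neurons (weights chosen for illustration).
# Kinks ("fault lines") sit where a neuron's input crosses zero: x = 0, 0.5, 1.
w1, b1 = [1.0, 1.0, -1.0], [0.0, -1.0, 0.5]
w2 = [1.0, -2.0, 0.5]

def net(x):
    return sum(w * max(wi * x + bi, 0.0) for w, wi, bi in zip(w2, w1, b1))

# Between kinks (here on [2, 3]) the function is exactly linear: constant slope.
slope_a = net(2.5) - net(2.0)
slope_b = net(3.0) - net(2.5)
print(slope_a, slope_b)  # equal: this piece is a straight line

# Across a kink (x = 1) the slope changes: a new neuron switches on.
left = net(0.9) - net(0.8)
right = net(1.2) - net(1.1)
print(left, right)
```

With more inputs the kinks become hyperplanes, which is exactly the “piecewise hyperplanar” picture above.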

Principle of Computational Equivalence → almost any setup is capable of representing any function
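Wolfram’s canonical illustration of this principle is the rule 110 cellular automaton, a minimal update rule known to be computationally universal. A small sketch (my own implementation):

```python
# Elementary cellular automaton rule 110, Wolfram's classic example of a
# minimal rule that is nonetheless computationally universal.
RULE = 110  # bit i of 110 gives the new cell value for neighborhood pattern i

def step(cells):
    # Zero boundary; each cell updates from (left, self, right).
    padded = [0] + cells + [0]
    return [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

# Start from a single active cell and watch structure grow to the left.
row = [0] * 5 + [1] + [0] * 5
history = [row]
for _ in range(3):
    row = step(row)
    history.append(row)
for r in history:
    print("".join(".#"[c] for c in r))
```

Despite having an 8-entry lookup table as its entire “setup”, this rule can emulate arbitrary computation, which is the sense in which almost any setup can represent any function.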

Observation: ML can find solutions, but not structured ones. They are solutions that just happen to work, as in biology.
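The “solutions that just happen to work” idea can be mimicked with blind adaptive evolution (a toy sketch of my own, not an experiment from the source): random mutations, keeping whatever doesn’t make fitness worse, reach the target with no structured plan at all.

```python
import random

random.seed(1)

# Toy adaptive evolution: random bit flips, keeping a mutation only if fitness
# doesn't get worse. No gradient, no plan -- the result just "happens to work".
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

def fitness(genome):
    # Number of mismatches against the target (lower is better).
    return sum(g != t for g, t in zip(genome, TARGET))

genome = [random.randint(0, 1) for _ in TARGET]
for _ in range(5000):
    i = random.randrange(len(genome))
    mutant = genome.copy()
    mutant[i] ^= 1
    if fitness(mutant) <= fitness(genome):
        genome = mutant

print(fitness(genome))  # 0 mismatches remain (with overwhelming probability)
```

The evolved genome matches the target, but the trajectory that got there is an unstructured accident of which mutations happened to be tried, much like the biological analogy above.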

Conclusions

⇒ Computational irreducibility lets simple processes be successful (Principle of Computational Equivalence).

⇒ Only if a system is computationally reducible can we know in advance what it is able to do. But if it is reducible, it won’t achieve the “magic” we see.

⇒ Within any computationally irreducible system, though, there are pockets of computational reducibility. Those pockets allow us to identify things like “laws of nature”, from which we can build “human-level narratives”.