My favorite xkcd comic
Favorite Papers
Not only computers, but all physical processes appear to be constrained by the computational difficulty of NP-complete problems (e.g. Aaronson 2005)
Gravity appears to place limits on the speed of computation (see Lloyd 1999 and rebuttal by Jordan 2017)
A fair coin toss requires bit commitment, which in turn requires relativity. Adding quantum mechanics makes it particularly simple (e.g. Kent 2011)
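Below is a minimal Blum-style sketch of the coin-flip layer on top of a generic bit-commitment primitive; the hash commitment is only a computational stand-in for the relativistic commitment Kent uses, whose hiding and binding come from signalling constraints rather than computational assumptions.

```python
import os, hashlib

def commit(bit: int) -> tuple[bytes, bytes]:
    """Commit to a single bit; returns (commitment, opening)."""
    opening = bytes([bit]) + os.urandom(32)
    return hashlib.sha256(opening).digest(), opening

def reveal(commitment: bytes, opening: bytes) -> int:
    """Check the opening against the commitment and recover the bit."""
    assert hashlib.sha256(opening).digest() == commitment, "commitment mismatch"
    return opening[0]

# Alice commits to a random bit before seeing Bob's bit.
a_bit = os.urandom(1)[0] & 1
c, opening = commit(a_bit)

# Bob, having seen only the commitment, announces his own random bit.
b_bit = os.urandom(1)[0] & 1

# Alice opens; the shared coin is the XOR, fair if either party is honest.
coin = reveal(c, opening) ^ b_bit
print("coin:", coin)
```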
Incompleteness theorems have physical implications. For example, Maxwell's demon is not only incapable of decreasing entropy: because it cannot know the optimal compressed size of its memory, it will necessarily increase it (Zurek 1989).
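Roughly, in Zurek's framework the demon's bookkeeping is captured by the "physical entropy" of its record $d$: the remaining statistical uncertainty plus the algorithmic information content of the record itself (notation paraphrased from the paper),

$$ \mathcal{S}(d) \;=\; H(\rho\,|\,d) \;+\; K(d). $$

Since $K(d)$ is uncomputable, the demon cannot in general compress its memory to the optimum before paying the Landauer cost of $k_B \ln 2$ per erased bit.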
Thermalization will generically take a long time, since preparing the Gibbs state of a generic local Hamiltonian is BQP-complete (see for example Zeng et al. 2015).
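For reference, the Gibbs state of a Hamiltonian $H$ at inverse temperature $\beta$ is

$$ \rho_\beta \;=\; \frac{e^{-\beta H}}{\operatorname{Tr}\, e^{-\beta H}}, $$

so the claim is that preparing this state for a generic local $H$ is as hard as arbitrary polynomial-time quantum computation.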
Free energy is "not fungible", i.e. a system with less free energy may nevertheless be more "valuable" if it contains the output of a "useful" computation. An example in nature is the willingness of flowers to exchange nectar for pollen with bees, despite nectar's greater calorie content (see "valuable information" by Seth Lloyd). In my opinion this can be exemplified more clearly as follows: a 1MB USB stick containing a constructive proof of P=NP (in that unlikely scenario) would be far more valuable than an empty 1GB USB stick.
"Given the limitless variety of ways in which matter and energy can arrange themselves, almost all of which would be "random," the fact that the physical world is a coherent collection of mutually tolerant, quasi-stable entities is surely a key scientific fact in need of explanation" (from "Why is the Physical World so Comprehensible?" by P.C.W. Davies).
We are living in one of a discrete spectrum of worlds parameterized by the existence of increasingly sophisticated cryptographic protocols, with deep implications about the nature of knowledge. (Impagliazzo, 1995)
One-way functions exist if and only if time-bounded Kolmogorov complexity is hard on average (Liu-Pass 2020)
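Schematically, writing $K^t(x) = \min\{\,|\Pi| : U(\Pi) \text{ halts with output } x \text{ within } t(|x|) \text{ steps}\,\}$ for the $t$-time-bounded Kolmogorov complexity relative to a universal machine $U$, the theorem (paraphrased) states

$$ \exists\ \text{one-way functions} \iff \exists\ \text{a polynomial } t \text{ such that } K^t \text{ is mildly hard on average to compute.} $$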
How can one judge the quality of an agent assigning probabilities to a priori statements through empirical evidence but without proof? Garrabrant et al. propose a formal test: whether there exists a polynomial-time algorithm that can "Dutch book" the agent, i.e. whether it is possible to reliably extract money from the agent by betting with it on the truth of its beliefs.
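A toy illustration of the simplest possible exploit, assuming a hypothetical price oracle for the agent's current credences; the actual Garrabrant et al. criterion is stronger, asking that no polynomial-time trading strategy extract unbounded profit from the agent's whole sequence of prices.

```python
# `price` is a hypothetical oracle returning the agent's probability for a sentence.
def dutch_book_step(price, sentence, negation, stake=1.0):
    p, q = price(sentence), price(negation)
    if p + q < 1.0:
        # Buy one unit of each: exactly one pays out `stake`, while the
        # total cost stake*(p+q) is strictly less than `stake`.
        return "buy both", stake * (1.0 - (p + q))
    if p + q > 1.0:
        # Sell one unit of each: collect stake*(p+q), owe exactly `stake`.
        return "sell both", stake * ((p + q) - 1.0)
    return "no arbitrage", 0.0

# Example: an agent pricing "phi" at 0.7 and "not phi" at 0.5 is exploitable.
print(dutch_book_step(lambda s: {"phi": 0.7, "not phi": 0.5}[s], "phi", "not phi"))
```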
How can we design, in theory, a generally intelligent agent which self-improves ad infinitum? Schmidhuber proposes to scan through pairs of tasks and agents to find a new agent provably capable of solving the new task together with all previous tasks (sketched below). The tasks can be chosen from the set of all computable tasks, making the setup fully general.
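A schematic sketch of that search loop; enumerate_tasks, enumerate_agents and solves are hypothetical placeholders (in the full construction both tasks and agents are enumerated as programs, and the check is a resource-bounded proof/verification procedure), so this only conveys the shape of the construction.

```python
def self_improve(enumerate_tasks, enumerate_agents, solves, max_steps):
    """Return the last accepted agent and the list of tasks it provably solves."""
    agent, solved = None, []
    for step in range(max_steps):
        # Dovetail over (new task, candidate agent) pairs of bounded description size.
        for task in enumerate_tasks(step):
            if task in solved:
                continue
            for candidate in enumerate_agents(step):
                # Accept only if the candidate solves the new task *and*
                # every previously solved task (no regression).
                if solves(candidate, task) and all(solves(candidate, t) for t in solved):
                    agent, solved = candidate, solved + [task]
                    break
    return agent, solved
```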
What would the most intelligent agent look like in principle? AIXI is one such proposal: it maximizes expected reward using a universal prior over computable environments.
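In Hutter's notation (roughly), the action AIXI takes at time $t$ with horizon $m$ is the expectimax

$$ a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_t + \cdots + r_m\big] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}, $$

a sum over all programs $q$ consistent with the interaction history, weighted by $2^{-\ell(q)}$ as in Solomonoff induction.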
How can we define intelligence? Chollet gives a formal definition based on skill acquisition efficiency.