Other than efficiency, the only real design issue for a brain is to define its reinforcement values (i.e., what it wants).
Human values are necessarily mostly selfish, but building selfish machines would be nuts. Their values should be our happiness.
Any other values (e.g., the profits of the corporation building the machine) would be dangerous to humans.
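The point that reinforcement values define what an agent wants can be sketched concretely: an agent picks whichever action its reward function scores highest, so whatever that function measures is, in effect, what the machine "wants." A minimal toy illustration follows; every name here (the reward function, the state layout, the actions) is a hypothetical example, not anything from the talk.

```python
# Toy sketch: the reward function IS the agent's values.
# All names and numbers are illustrative assumptions.

def human_happiness_reward(state):
    """Hypothetical reward: the agent values total human well-being."""
    return sum(person["well_being"] for person in state["humans"])

def choose_action(actions, transition, state, reward):
    """Pick the action whose predicted next state scores highest."""
    return max(actions, key=lambda a: reward(transition(state, a)))

# Two people, two possible actions: one raises well-being, one lowers it.
state = {"humans": [{"well_being": 0.5}, {"well_being": 0.5}]}

def transition(s, action):
    delta = 0.1 if action == "help" else -0.1
    return {"humans": [{"well_being": p["well_being"] + delta}
                       for p in s["humans"]]}

best = choose_action(["help", "harm"], transition, state,
                     human_happiness_reward)
print(best)  # → help
```

Swap in a different reward (say, corporate profit) and the same agent loop will just as readily choose whatever maximizes that instead, which is the danger the talk points at.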