November 1, 2019
For today
Today:
Hashtables
Project ideation
Free will
Boids test run
For Tuesday:
Read Chapter 11 and do the reading quiz
Watch the second video on free will
Organize teams and start a project proposal (due next Thursday so I can read them before class)
On Tuesday (virtual class):
Read Lecture 19 and follow the instructions, including the things for Friday
Optional reading related to our discussion of AI: We Shouldn’t be Scared by ‘Superintelligent A.I.’
PyData experiment test run: greenteapress.com/ip
Best data structure ever? Hint: yes.
The appendix is meant to help you understand the implementation and performance.
1) Linear search through a sequence of key-value pairs
2) Use a hash function to reduce the leading constant of the linear search. What limitation does this come with?
3) Grow the hashtable to bound the length of the linear search.
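As a rough sketch of stages (1) and (2), in the spirit of the appendix (the class names and details here are illustrative, not necessarily the appendix's exact code): a LinearMap stores key-value pairs in a list and searches it linearly; a BetterMap hashes each key into one of a fixed number of LinearMaps, which shrinks the constant factor, but each bucket still grows linearly with the number of items.

class LinearMap:
    """Stage (1): key-value pairs in a list, searched linearly."""
    def __init__(self):
        self.items = []

    def add(self, k, v):
        self.items.append((k, v))

    def get(self, k):
        for key, val in self.items:
            if key == k:
                return val
        raise KeyError(k)


class BetterMap:
    """Stage (2): hash each key into one of n LinearMaps.

    With n fixed, each bucket still grows with the number of items,
    so lookups are only faster by a constant factor.
    """
    def __init__(self, n=100):
        self.maps = [LinearMap() for _ in range(n)]

    def find_map(self, k):
        return self.maps[hash(k) % len(self.maps)]

    def add(self, k, v):
        self.find_map(k).add(k, v)

    def get(self, k):
        return self.find_map(k).get(k)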
It is not at all obvious that (3) actually solves the problem.
In fact, if we grow arithmetically, it doesn't.
But if we grow geometrically, it does; this is clearest if we double the size whenever the table fills up.
This is an example of amortized analysis.
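To see why the growth rate matters: if we grow by a fixed amount, the number of resizes grows linearly with the number of items n and the total copying work grows like n squared; if we double, the total copying work for n additions is about 1 + 2 + 4 + ... + n, which is less than 2n, so the cost per addition is constant when averaged (amortized) over the whole sequence. Continuing the sketch above (again illustrative, not the appendix's exact code):

class HashMap:
    """Stage (3): grow geometrically so the buckets stay short."""
    def __init__(self):
        self.maps = BetterMap(2)
        self.num = 0

    def get(self, k):
        return self.maps.get(k)

    def add(self, k, v):
        # when the number of items reaches the number of buckets,
        # rebuild with twice as many buckets
        if self.num == len(self.maps.maps):
            self.resize()
        self.maps.add(k, v)
        self.num += 1

    def resize(self):
        new_map = BetterMap(self.num * 2)
        for m in self.maps.maps:
            for k, v in m.items:
                new_map.add(k, v)
        self.maps = new_map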
Let's review the criteria
Conversations
1) If you have an idea, pitch it and get feedback, or help your partner
2) If you are undecided, summarize the papers in your bibliography and apply the criteria
3) And think about teammates
If you did a solo project last time, I encourage you to team up this time.
If you were on a team last time, you have the option of going solo.
I suggest working with different people this time, but won't require it.
Project proposals
1) Make sure you are clear about the motivation of the paper you are replicating, and the primary result.
2) Is there a figure you plan to generate, or some other way you will know whether you have replicated the result? Are you looking for quantitative replication, or qualitative?
3) Avoid adding features for the sake of adding features. Your extension to the model should allow you to answer a question.
4) Reimplementing using Python/NumPy/SciPy is the default option, but you could use modeling packages like Mesa or even non-Python modeling tools like Repast. Or StarLogo (TNG).
Let's do a speed read of the Wikipedia page on Free Will, guided by these reading questions:
What's the definition of "free will"?
What is "the problem of free will"?
What does it mean to say that mental activities are "causally effective"?
What is the "consequence argument"?
A few quotes:
It is difficult to reconcile the intuitive evidence that conscious decisions are causally effective with the view that the physical world can be explained to operate perfectly by physical law.
...the free will evoked to make any given choice is really an illusion and the choice had been made all along, oblivious to its "decider".
Causal determinism: The idea that everything is caused by prior conditions, making it impossible for anything else to happen.
Libertarianism holds onto a concept of free will that requires that the agent be able to take more than one possible course of action under a given set of circumstances.
Non-physical theories hold that the events in the brain that lead to the performance of actions do not have an entirely physical explanation, which requires that the world is not closed under physics. This includes interactionist dualism, which claims that some non-physical mind, will, or soul overrides physical causality.
As a consequence of incompatibilism, metaphysical libertarian explanations that do not involve dispensing with physicalism require physical indeterminism, such as probabilistic subatomic particle behavior – a theory unknown to many of the early writers on free will.
Ideas from emergence can help cut through this thicket
1) Levels of description, and apparent conflicts between the properties of systems described at different levels. Example: Rule 30.
2) Box's fundamental theorem of instrumentalism ("All models are wrong"), and its implied criterion, "meaningful and useful".
3) Many arguments about free will involve causation that crosses levels of description, which I suggest is an error of the same kind as particle man:
John Flansburgh / John Linnell
Particle man, particle man
Doing the things a particle can
What's he like? It's not important
Particle man
Is he a dot, or is he a speck?
When he's underwater does he get wet?
Or does the water get him instead?
Nobody knows, Particle man
Suppose we ban causation that crosses level boundaries. Does that get us anywhere?
For more on this topic, I recommend the ongoing debate between Sam Harris and Daniel Dennett.
Harris, The Illusion of Free Will
Reflections on Free Will: A Review by Daniel C. Dennett
Big Sort
My implementation of the Big Sort uses a variation of a pattern called DSU, for "decorate, sort, undecorate".
It's less common than it used to be, because sort can take a key function.
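A minimal illustration with made-up data (not the Big Sort code itself), showing DSU next to the key-function version:

# hypothetical (name, income) pairs
agents = [('a', 3), ('b', 1), ('c', 2)]

# decorate: build (income, name) tuples, sort them, then undecorate
decorated = [(income, name) for name, income in agents]
decorated.sort()
by_income = [(name, income) for income, name in decorated]

# the modern equivalent: pass a key function to sorted
by_income2 = sorted(agents, key=lambda pair: pair[1])

assert by_income == by_income2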
The results are similar to Schelling's model, although segregation emerges more slowly (Schelling levels off in about 10 time steps).
As expected, if each agent looks at more houses, you get more segregation, more quickly.
EvoSugarscape
An example of how computational modeling and object-oriented programming go together.
In my solution, I name the parent class explicitly rather than using super; the commented-out lines below are the explicit calls, and the uncommented lines are the equivalent calls using super:
# Sugarscape.__init__(self, n, **params)
super().__init__(n, **params)

# Sugarscape.step(self)
super().step()
For a discussion of the pros and cons, see this thread on Stack Overflow.
Consensus advice is "always use super".
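A self-contained sketch of the pattern (generic class names, not the actual Sugarscape code): each override extends the parent's behavior, so it delegates to the parent, either by naming it explicitly or via super().

class Base:
    def __init__(self, n):
        self.n = n

    def step(self):
        self.n += 1


class Derived(Base):
    def __init__(self, n, scale=2):
        # explicit form: Base.__init__(self, n)
        super().__init__(n)
        self.scale = scale

    def step(self):
        # explicit form: Base.step(self)
        super().step()
        self.n *= self.scale

With single inheritance the two forms behave the same; super() pays off with multiple inheritance, where it follows the method resolution order instead of hard-coding one parent.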
Result: population pressure increases fitness and therefore increases carrying capacity.