Daniel Vallstrom
Vallst: A Boolean Constraint Solver
Vallst is a Boolean constraint solver. It won two SAT Competition golds. There is a short paper giving an overview of some of the algorithms behind the solver. (If the site hosting the solver, Berlios, goes down, try SAT; you can download the solver from here: vallst_0.9.258.tar.gz.)
Provability Logic Programs
Glelb decides various provability logics and finds fixpoints for provability and interpretability logics.
Quine C Programs
A Quine program outputs itself when run. Here are a couple of C programs:
Cooperative Evolutionary Pressure and Diminishing Returns Might Explain the Fermi Paradox: On What Super-AIs Are Like
Abstract
With an evolutionary approach, the basis of morality can be explained as adaptations to problems of cooperation. With 'evolution' taken in a broad sense, evolving AIs that satisfy the conditions for evolution to apply will be subject to the same cooperative evolutionary pressure as biological entities. Here the adaptiveness of increased cooperation as material safety and wealth increase is discussed, for humans, for other societies, and for AIs. Diminishing beneficial returns from increased access to material resources also suggest the possibility that, on the whole, there will be no incentive to, for instance, colonize entire galaxies, thus providing a possible explanation of the Fermi paradox, which asks where everybody is. It is further argued that old societies could engender, and give way to, super-AIs, since super-AIs are likely to be both feasible and fitter. The text closes with an aside on effective ways for morals and goals to affect life and society, emphasizing environments, cultures, and laws, and exemplified by how to eat.
Appended are an algorithm for quickly colonizing, for example, a galaxy; models of the evolution of cooperation and fairness under diminishing returns; and software for simulating signaling development. It is also noted that, for mathematical reasons, there can be no exponential colonization or reproduction, since each entity takes up a certain amount of space.
The Fermi paradox also tentatively suggests that certain of our approaches to AI research will be more fruitful than others, and that utilitarianism is seemingly wrong. There are also other ethical conclusions to be had.
Conclusion of the Fermi Paradox Section
Looking at it from the other direction, we have these observations and principles:
The Copernican or mediocrity principle
The equilibrium principle
The seemingly likely abundance of old societies
The observation that just a single society ought to be able to quickly explore and colonize e.g. our galaxy
The Fermi paradox
No evidence of non-cooperative old societies or super-AIs: our solar system hasn't been made into paper clips, for example
Seemingly no Kardashev type III societies
This suggests the possibility that all old societies and super-AIs behave similarly in these regards, because of things they have in common, for example cooperative evolutionary pressure. A look at these common things, an extrapolation of human progression, and diminishing beneficial returns from material resources indicate that cooperation is increasingly adaptive as wealth increases, and suggest the possibility that, on the whole, there will be no incentive to, for instance, colonize entire galaxies, which could explain the Fermi paradox.
An Implementation of a Galaxy Colonization Algorithm
GalaxyColonization implements a galaxy colonization algorithm and tests how well it performs. The algorithm is described in appendix B of the above Progression of Morality pdf.
Modeling Signaling Development
SignalSim models the development of signaling, following and generalizing "Condition-dependent trade-offs maintain honest signalling" by Szabolcs Számadó, Flóra Samu, and Károly Takács.
Modeling the Evolution of Cooperation and Fairness Under Diminishing Returns
DimRetEvoSim implements models of the evolution of cooperation and fairness under some diminishing returns. The algorithm is described in appendix C of the above Progression of Morality pdf.
How to Solve "The Hardest Logic Puzzle Ever" and Its Generalization
Raymond Smullyan came up with a puzzle that George Boolos called "The Hardest Logic Puzzle Ever".[1] The puzzle has truthful, lying, and random gods who answer yes-or-no questions with words whose meaning we don't know. The challenge is to figure out which type each god is. The puzzle has attracted some general attention: one popular presentation of it has been viewed 10 million times, for example.[2] Various "top-down" solutions to the puzzle have been developed.[1,3] Here a systematic, bottom-up approach to the puzzle and its generalization is presented. We prove that an n-god puzzle is solvable if and only if the random gods are outnumbered by the non-random gods. We develop a solution using 4.15 questions to the 5-god variant with 2 random and 3 lying gods.
There is also an aside on mathematical vs. computational thinking.
Short Notes on Philosophy
Philosophy Settled
See Philosophy Settled. This very short text is the result of a presumably more or less typical transition, over the years, to the "therapeutic approach". There is a bit of a catch-22 to this therapeutic or anti-philosophical approach: chances are that you were not drawn to philosophy out of antipathy to it, and while the approach settles the philosophical problems, it is probably not in the way you had in mind when you were first drawn to the subject.
The text is a bit old, and it still contains a perhaps questionable, if hypothetical, paragraph about a non-naturalistic ethical foundation. For a newer, more comprehensive text, see Anti-Philosophy.
An Anti-Philosophical Look at Well-Being as Foundation for Ethics
See Anti-Philosophy. The essay was for a contest. Below is the preface:
In philosophy there is a therapeutic or anti-philosophical approach which holds that traditional philosophical problems are misconceptions to be dissolved. This anti-philosophical approach, following Ludwig Wittgenstein, is anti-theoretical and critical of a priori justifications. Arguably, the idea of well-being as a foundation for ethics is exactly that: an unfounded, a priori, scientistic philosophical theory.
Here we first briefly describe the anti-philosophical approach theoretically. Then we look at anti-philosophical positions on a couple of issues, with the hope that you will get a feel for the approach in practice. We conclude by looking at ethics, and in particular Sam Harris's theory of well-being as the foundation for ethics.
Personal
Email address: daniel.vallstrom@gmail.com
Me:
Here are more photos.
Name: 'Daniel Vallstrom', 'Daniel Vallström' or 'Daniel Wallström'; sometimes 'Hugo Vallstrom' or 'Hugo Vallström'.
Miscellaneous
Calorie Restriction and Cause of Overeating and Obesity
See Calorie Restriction and Cause of Overeating and Obesity.
A Succinct Explanation of Global Warming
Here is an attempt at a succinct and hopefully suggestive explanation of global warming. It notes that we could have started to act a century ago, and should have acted half a century ago.