Other Writings
By Lee Bright
Table of Contents
To serve democracy one must be against totalitarianism. Nobody has been more enlightening about what totalitarianism is and does than George Orwell. Caught between the two antichrists of Trumpism and Neo-Marxism in the United States, I've made an Orwellian two-pager, with links to Wikipedia and primary sources, that doubles as a 'fun' way to evaluate political content.
3/23
Something I've found while doing research for Redeeming Asimov, and which is not generally recognized, is the clear genetic connection between the culture found in the Pentateuch and that of the early Sumerians. Consequently, I have used the electronic Pennsylvania Sumerian Dictionary, 2nd edition (ePSD2) quite a bit. It is the most comprehensive dictionary available and is extensively linked to cuneiform texts and scholarship. Since I have no natural affinity for languages and no formal education in Sumerian, the learning curve has been very steep. Interpreting the dictionary entries requires consulting tables spread across several pages, often in unpredictable places. I've compiled many of these tables into one PDF and edited it into an easily printable format.
Original sources:
Searching ePSD2: http://oracc.museum.upenn.edu/epsd2/searching/index.html
Periods: http://oracc.museum.upenn.edu/epsd2/about/articles/index.html
POS Tags for Proper Nouns: http://oracc.museum.upenn.edu/doc/help/languages/propernouns/index.html
Annotation: http://oracc.museum.upenn.edu/epsd2/about/annotation/index.html
Morphological Model Tabulation: http://oracc.museum.upenn.edu/epsd2/about/annotation/morphology/morphologytable/index.html
Morpheme Correspondences: http://oracc.museum.upenn.edu/epsd2/about/annotation/morphology/morphologyexplanation/index.html
The Electronic Text Corpus of Sumerian Literature (ETCSL): https://etcsl.orinst.ox.ac.uk/
Digital Corpus of Cuneiform Lexical Texts: http://oracc.museum.upenn.edu/dcclt/index.html
3/23
This deck will always be incomplete and a work in progress, but it is intended to contain all the slides used on different pages. I also find making slides a useful way of taking notes and filtering information down to what is important, so many of these slides will not be referenced on other pages.
Learning is often an exercise in sweeping out the old to bring in the new. However, to sweep away without memory is a form of blind faith. To actually learn requires a comparison of the old to the new.
How do you use generative AI for research when it is known to have biases and "hallucinations"? Beyond that, how can AI be trusted on apophatic philosophical topics when it is necessarily positivistic? How can AI be trusted with cause and effect when there is no a priori test for causes and effects (à la David Hume)?
One way I am using AI is to validate my research and ideas, particularly on topics that are more obscure or where there is no clear consensus. Validation enables a comparison between what I have found and what the flawed database of all knowledge, along with the hallucinatory additions of generative AI, claims to know. Where it does not support my position, more consideration and better research are needed. In the slide deck below, I have the output from AI validating minor and major points in the AGB and Redeeming Asimov.
ChatGPT is the first one I've tried. On historical topics, most of what I get back seems plausible, if not correct, as can be seen in the Anshar entry on the first slide (1). A task like translating to and from Sumerian cuneiform is more mixed (2). Even in basic tasks like matching a sign to a cuneiform symbol, it makes a lot of easy mistakes. When asked to redo a specific part or to reconsider, it produces something correct, or at least plausible. But if I didn't know enough to ask more specifically, I would come away with wrong or misleading information.
And then there are truly bizarre cases, such as when it was given six lines of Sumerian transliteration and told to translate them into English. It would be generous to describe the translation as lazy. It was like a kid who doesn't want to do his homework, so he fakes it, writing any old thing just to turn something in. There was maybe a 5% correspondence between the transliteration and ChatGPT's translation. When asked to make a "literal translation", it produced something credible. I shouldn't have had to ask for the translation to be "literal" before it did a decent job. The lesson I'm learning so far is that you can't trust the first answer.
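To make that "ask again, then compare" habit a little more systematic, here is a minimal Python sketch of the idea. It is only illustrative: my actual checking happens in the chat interface, and the sketch assumes the openai package, an API key in the environment, and placeholder prompts and model name.

    # Minimal sketch: ask for a translation twice -- once plainly, once
    # insisting on a literal rendering -- and print both for comparison.
    # Assumes the openai Python package (v1.x) and OPENAI_API_KEY set in
    # the environment; the model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        """Send a single prompt and return the model's text reply."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    lines = "..."  # the transliterated Sumerian lines would be pasted here

    first = ask("Translate these transliterated Sumerian lines into English:\n" + lines)
    literal = ask("Give a strictly literal, word-by-word English translation of these transliterated Sumerian lines:\n" + lines)

    print("FIRST ANSWER:\n", first)
    print("\nLITERAL RE-ASK:\n", literal)

Reading the two answers side by side makes the kind of divergence described above easy to spot.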
One way I have found ChatGPT fairly useful for validation is to ask a general but edgy factual question that forces it to choose between presenting fact or opinion. Then, as a form of argument, press on all the soft parts of the factual and non-factual synthesis it produces with follow-up questions. ChatGPT is mostly a fair arguer and, eager to please, may give way too easily to its interlocutor. If I can prompt ChatGPT to change its disagreeable opinions to more acceptable ones with just a few factual questions, that is validation. Because most AIs are programmed to please, prolonged questioning to arrive at acceptable opinions should not be seen as a success. The PDF links to the right or below contain chats of various lengths in which I was able to direct ChatGPT to my way of thinking within a couple of questions of a disagreement.
I have some concerns about security and privacy, so most of the AIs are not ones I'm logged into. Perplexity AI is the exception. It usually avoids outright wrong answers on the first try by being less ambitious and hedging. Perplexity AI is not very sensitive to chat flow, and the quality and context of the sources it cites sometimes leave something to be desired. This can lead to contradictions down the line. For instance, after establishing that the genocide-of-Gaza-by-starvation narrative has no empirical support and that the media outlet Al Jazeera is systemically pro-Palestinian and anti-Israel, Perplexity AI goes on to cite Al Jazeera as a primary source when asked for media sources with more balanced reporting! Furthermore, all of the sources it cites as "balanced" forcefully carry the genocide narrative as an unquestionable fact!
On conditions as they impact social issues, Perplexity seems to go whichever way the wind blows. I prefer Perplexity AI when searching for quick, uncontested facts; ChatGPT seems better for validation and hypothesis testing of bigger ideas.
Slides TOC
Anšar, Anshar
Enki and the World Order, ETCSL 1.1.3, Lines 86-88 vs. ChatGPT
Can you critically account for the differences between your translation and that of ETCSL?
Summary of Why Differences Exist
ChatGPT Hybrid translation of ETCSL 1.1.3, Lines 86-88
ChatGPT Independent, only line 88 from cuneiform
Urkesh
Correlation between Table of Nations (Genesis 10) and cities, nations, and people that existed between 3000 BCE and 1200 BCE.
Key Points
When have Jupiter and Venus looked like they were as close as on August 12, 3 BCE? ("Star in the East" candidate)
Venus–Jupiter conjunction of August 27, 2016
Venus–Jupiter Conjunction Brightness Comparison (Apparent Magnitude)
Before August 12, 3 BCE, the closest known Jupiter–Venus alignments were in antiquity, with two notable events
What latitudes was the 1818 CE occultation visible?
February 27, 1953 BCE planetary parade
Visibility regions and latitudes for each rare Venus–Jupiter conjunction or occultation
A review written in the summer of 2019 of a copy of the book borrowed from someone who thought I would find it agreeable. Suffice it to say, I did not. Enns's book has become my stereotype of believing liberal scholarship, which is not a good thing for liberal scholarship.
This review began at the end of 2022. This book argues against trinitarian Christianity and at times does so with erudition. What attracted me to it was the breadth of biblical material it covered. Unfortunately, the author makes some crucial errors and gets hung up on some fairly unphilosophical literalisms, inadvertently producing a small version of God.
This review and extended reflection is a mildly edited version of a graded paper for the excellent HSTS 523 Science and Religion course taught by Dr. Gary Ferngren in 2007. I came into this delightfully challenging course with quite a few amorphous opinions that solidified over the class and the writing of this review. Many elements of the extended reflection are found, in one way or another, in the Introduction of Redeeming Asimov, which came about in part to answer the three problems presented at the end of this review. Frankly, this was the best thing I had written up to that point.
Reflections on my reasoning in this political moment. 10/10/24