“Large language models, or LLMs, are simply predicting what words would come next based on all of their learning.” Prompted by my husband's suggestion, I have decided to do a series of episodes dedicated to understanding AI better. Instead of being scared of AI or simply ignoring it, I use the book Co-Intelligence: Living and Working with AI by Ethan Mollick to learn about it. Part one of this series is just a basic understanding of how large language models like ChatGPT came to be. I talk about the idea of a digital brain. I share how it did its initial learning on millions of words of information, and the analogy of an apprentice chef learning to combine ingredients to make recipes. I then describe how human feedback was added to refine what it learned. The key to all of this is giving certain weights to certain words to help AI better understand human language. It is a very complex machine, but I try to keep it simple and stick to the basics of what it is doing.
Show Notes: Hi Friends! I hope you enjoyed listening to this episode. Below are all the references.
Life updates: adjusting to having only one child at home, catching up with friends, and even finally getting a new-to-me van.
How my daughter and I are finishing Gilmore Girls together (and how AI “summaries” reminded me of Cliff Notes).
Why I decided to start a series on AI, despite avoiding it for years.
Insights from Co-Intelligence by Ethan Mollick and why he challenges students to use AI as a co-intelligent tool.
A beginner’s explanation of large language models — like a digital brain predicting the next word.
The “apprentice chef” analogy for how AI learns from trial and error.
Why human feedback is essential for fine-tuning AI and keeping it aligned.
How I’m starting to experiment with ChatGPT for podcast ideas and catching up on my website.
“I may not understand everything under the hood of AI, but like driving a car, I can still learn how to use it — and even find ways it can help me without being afraid of it.”
About the author -
Ethan Mollick is an associate professor at the Wharton School of the University of Pennsylvania, where he studies and teaches innovation and entrepreneurship. He is also author of The Unicorn's Shadow: Combating the Dangerous Myths that Hold Back Startups, Founders, and Investors. His papers have been published in top management journals and have won multiple awards. His work on crowdfunding is the most cited article in management published in the last five years.
Prior to his time in academia, Ethan cofounded a startup company, and he currently advises a number of startups and organizations. As the academic director and cofounder of Wharton Interactive, he works to transform entrepreneurship education using games and simulations. He has long had interest in using games for teaching, and he coauthored a book on the intersection between video games and business that was named one of the American Library Association’s top 10 business books of the year. He has built numerous teaching games, which are used by tens of thousands of students around the world.
Mollick received his PhD and MBA from MIT’s Sloan School of Management and his bachelor’s degree from Harvard University, magna cum laude.
I am Camille Johnson, and this is Finding the Floor.
Stories and reflections of midlife motherhood, family, and finding meaning in it all.
Join me as I share a little piece of my life and figure out what I want to be when I grow up.
Hey friends, welcome to Finding the Floor.
This is episode 231, and today we're going to start this series called Understanding AI.
Not like philosophically, or exactly how it all works, but more just understanding it in general.
I have been doing pretty good.
I'm still getting used to having my youngest, the only one home, which is kind of weird.
I know, it's little things, like grocery shopping.
I was slowly getting used to, you know, maybe having two kids home.
So there's certain things that certain kids like and certain things that they don't.
And so when someone leaves, you don't need to buy certain things anymore.
Like, my daughter loved these certain protein bars, but she's not around anymore.
No one else liked them, only her.
So I don't have to buy them anymore.
I really haven't gotten into subbing at all, because these last few weeks I wanted time to kind of catch up.
And so yeah, so that's been really good to be able to do that.
We picked up our new-to-me van.
I mean, it was only like nine months, but it's just like, oh yeah, this is what I drive.
And I'm hoping, you know, for many fun new memories to come.
You guys, can you believe September?
Like we are literally at the end of September.
This comes out September 26th.
For our kids, well, for my one kid still at home, we've got homecoming week.
I'm sure many of you have either done homecoming already or it's coming up.
Like, it just, I feel like September sort of flew by for me.
One thing my daughter and I are kind of doing together is finishing up Gilmore Girls.
And we started it last year around this time.
And my daughter kind of started without me.
So it was kind of like I would get cliff note versions.
I don't know what the term is nowadays.
On Friday, we kind of binged like four or five episodes.
We're on season seven, we're almost there.
Like, just, I feel like I'm always like way behind.
I think back, like, Gilmore Girls ran from 2000 to 2007.
Those were some busy years for me.
That was when I was having kids, like I had my second kid in 2001, then more in 2005 and 2007.
So I was more watching Dora the Explorer and Blue's Clues and all the other
shows that eventually came out.
So anyway, on to AI. Instead of being scared of it or just ignoring it, I want to understand it and then use it in a way that is helpful to my life.
So my husband's been challenging me to use it more with my podcast.
These tools, like ChatGPT, are called large language models.
Or I've watched my husband use it for work.
But then I've also learned, or am learning, that everybody's starting to use AI to help them, so that what they're working on is formatted in the right way.
And so sometimes, if you're choosing to not use it, you can feel like you're falling behind.
Whereas in this book by Ethan Mollick, he calls it a co-intelligence.
So the book is titled Co-Intelligence: Living and Working with AI.
Or maybe you're like my daughter who's an artist who's kind of like, this is an abomination.
AI shouldn't be recreating artwork.
So some people just really understand how to use it.
Now, I'm not going to explain exactly how it works per se.
I'm just going to give you a basic understanding.
So I hope that's kind of a good explanation.
And we'll get into that in just a minute.
Okay, so that was kind of a lot of explaining of what I'm going to then talk about.
But real quick, let me give a brief explanation of what AI is.
It's simply like a digital brain, if that makes sense.
We'll get into that a little bit more.
It's been told what is important, or given a weight, and that weight is really important to understand.
It gets that information, and it basically translates it into a math problem, into numbers.
And then it's given certain weights for certain numbers.
And then it goes through a couple of layers of a neural network to see how important those numbers are.
And ChatGPT has learned what the important things to say are.
And we'll get into that in just a minute.
Hopefully that doesn't make it more confusing.
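(For the show notes readers who like seeing things written out: here is a tiny made-up sketch in Python of that idea. The words and the scores are completely invented, and this is nowhere close to how ChatGPT is actually built; it is only meant to show "words get turned into numbers with weights, and the highest-scoring next word wins.")

```python
# Toy illustration only: pretend the model has already "learned" scores (weights)
# for which word should follow the phrase "peanut butter and".
learned_scores = {
    "jelly": 0.92,   # made-up weight: very likely next word
    "honey": 0.55,   # made-up weight: somewhat likely
    "gravel": 0.01,  # made-up weight: very unlikely
}

def predict_next_word(scores):
    # Pick whichever candidate word has the highest score.
    return max(scores, key=scores.get)

print(predict_next_word(learned_scores))  # prints "jelly"
```

A real model does this with billions of weights instead of three made-up numbers, but the basic move is the same: score the options and pick the most likely next word.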
Okay, so the book is called Co-Intelligence: Living and Working with AI by Ethan Mollick.
His papers have been published in top management journals and have won multiple awards.
His work on crowdfunding is the most cited article in management published in the last five years.
Okay, so that is the author of this book.
So it all falls under this idea of co-intelligence, okay?
Because nowadays sometimes we're like, my gosh, students are going to use AI.
It's going to ruin everything.
But what he's saying is, how can you help them use AI better, as this co-intelligence?
And I gave you this brief understanding.
And now I'm going to kind of tell you a little bit about how they've learned all of this.
And I'm not going to go into like, well, how do you just program a computer to just learn?
I don't really understand that.
And in 2010, there was this bigger boom of AI that was trained on a lot of data.
And this data would be able to predict things, forecast demands, for instance.
It was much more exact than just guessing.
So these were more of the predictive models that were given lots of data.
But they weren't really good at everything.
There's something that's probably said in the book about what changed, but I couldn't find where it was, so I'll get back to you on that.
And then that is basically what a large language model is based on.
That's a simple explanation, but it's like super complex.
But nobody was programming them.
They were just given this initial understanding of what things were important.
And they were learning by experimenting.
And my question was, how does that even happen?
But somehow these computers are learning.
Okay, so a really good explanation Mollick uses in his book is describing an apprentice chef who learns by trial and error which ingredients combine to make recipes, experimenting until they figure out what works the best with all the ingredients.
Initially, they're just given information and they are basically like a mirror.
They're going to repeat it out.
And they have no ethical boundaries or morality.
They just repeat back all the things that they've learned.
So that's kind of one thing to be aware of.
So this is called the pre-training phase, and it takes a long time and is expensive.
Then humans step in with ChatGPT or other large language models.
So they're getting reinforcement from human feedback.
And that's what they call the fine tuning.
So there's just been this gradual fine-tuning of the AI, if that makes sense.
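(Again for the show notes readers: here is a tiny made-up Python sketch of those two phases. The "training text," the counts, and the "human feedback" are all invented just to show the shape of pre-training followed by fine-tuning; it is not anything from the book or from real ChatGPT code.)

```python
from collections import defaultdict

# Phase 1, "pre-training": the model reads text and learns, by counting,
# which word tends to follow which. (Real models use enormous amounts of
# text and far fancier math; this is just the shape of the idea.)
model = defaultdict(lambda: defaultdict(float))
training_text = "the cat sat on the mat the cat ate the fish".split()
for current_word, next_word in zip(training_text, training_text[1:]):
    model[current_word][next_word] += 1.0

# Phase 2, "fine-tuning with human feedback": a human prefers one
# continuation, so its score gets nudged upward.
human_feedback = [("the", "cat", 0.5)]  # (word, preferred next word, nudge)
for word, preferred, nudge in human_feedback:
    model[word][preferred] += nudge

def predict_next(word):
    # Same idea as before: pick the highest-scoring next word.
    return max(model[word], key=model[word].get)

print(predict_next("the"))  # prints "cat"
```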
That has also happened with art and images.
And so you could say the same things happen.
They're trained with images and told whether what they're creating is right.
And some models now like ChatGPT can do both.
They have multimodal capabilities.
But we're going to mostly talk about large language models.
And it's cute because my daughter is just very offended by the whole
AI art thing, which I totally get.
Where are they getting all this information?
Some of the limitations, and some of the things you worry about, are like, where is it coming from?
It could be biased towards a certain country or a certain way of thinking.
So as of this book that came out in 2023, ChatGPT 4 was out.
And just as of August 2025, ChatGPT 5 has been released.
So most of this is going to be information based on ChatGPT 4, I guess, and then we can catch up with ChatGPT 5 as we go.
And there were difficulties, as you find out, with ChatGPT 3, just with its capabilities.
And slowly, as they've been learning, they've gotten better and better.
And ChatGPT 4 became very, very capable.
It got a 5 on multiple AP tests and passed standardized tests.
The book said it scored around the 90th percentile on the bar exam and other tests, and it also maxed out the creativity tests.
You might think it's basically like an open book test, but think of them more like they've got this photographic memory, I guess.
They're learning and predicting.
And maybe it's because it's really good at seeing patterns, and that is what's creating what it creates now.
And I don't really understand that part.
So I may not go into that, and maybe that will be better understood at a later date.
But we're just going to say, in a very basic way for this episode, that ChatGPT is predicting what word should come next.
And they have become really good at figuring out what a word maybe means, according to this book.
And so I do have to say I've been experimenting a little bit with ChatGPT with my podcast.
I was a little bit behind on updating all of my websites.
I really didn't do much this summer.
So I was trying to catch up and thinking that it could help me.
So one thing is it can say things that are not true, although they seem to be true, or they can just be incorrect.
So they're just things that you just need to be aware of as you direct ChatGPT on what you need.
But then also they can kind of make things easier.
So that's something we're also going to learn that is important.
And I'm going to be trying to use it while I'm
kind of experimenting with all of this.
So hopefully I made sense about like where AI came from.
And again, it's basically like a digital brain.
It turns words into numbers and then figures out which ones are the most important.
Even though it can do all these amazing things, it's really just predicting what word would come next.
So hopefully that's helpful for today.
Okay, so thank you guys for listening.
I hope you have a great week, and I will talk to you next week.
I hope you enjoyed today's episode.
Special thanks to Seth Johnson for creating and performing the theme music.
Come back next week and thanks for listening.