Earlier this week, I spoke with Yoshua Bengio. In 2018, he, Geoff Hinton, and Yann LeCun were awarded the Turing Award for advancing the field of AI, in particular for their groundbreaking conceptual and engineering research in deep learning. This earned them the moniker of the Three Musketeers of Deep Learning. I think Bengio might be Aramis: intellectual, somewhat pensive, with aspirations beyond combat, and yet skilled with the blade.

[00:00:00] Azeem Azhar: Today, I am joined by an exceptional researcher. He is widely regarded as one of the scientists whose work has built the foundations for today's progress in artificial intelligence. A Turing Award winner with more than 750,000 citations. And, naturally, a full professor at the University of Montreal. I could go on, but most of all, he's a fantastic human. And I'm delighted to be in conversation with Yoshua Bengio.


[00:00:34] Azeem Azhar: Well, thank you for all of your work over the previous decades. You and Geoff Hinton and Yann LeCun have sometimes been called the Three Musketeers of Deep Learning. And I'm curious: you've heard that description. Which Musketeer do you most closely identify with, do you think?

[00:00:55] Yoshua Bengio: This dates from a time when deep learning was not at all a popular endeavor in machine learning, or in AI in general. So we had to really stand our ground defending these ideas for a number of years at that point.

[00:01:10] Azeem Azhar: Yes. It was not very popular when you first started to work on these approaches. I will give you my suggestion as to which Musketeer you are, by the way, Yoshua. I see you as Aramis, who ChatGPT tells me is intellectual and romantic, somewhat pensive, with aspirations towards a religious life, but very, very skilled in combat. And I'll explain why I think that: essentially, in the last couple of years, you have switched the focus of your research towards the really humanistic dimensions of AI. And that felt a little bit Aramis to me.

[00:02:07] Yoshua Bengio: Well, I think the part that's really difficult to define is the intelligence; then you just stick on the word "artificial" since it's in machines. And the way I think about intelligence is taking appropriate decisions.

And in order to get both understanding, as in having a good internal model of how things work, and the ability to achieve goals, to take good decisions, you need learning. Because the nature of the world and the ways to optimize our decisions are not given to us; we have to acquire them through experience and computation, and that's what learning is about.

[00:03:23] Yoshua Bengio: Yes. In fact, it's even more tricky than that. Given any life experience, rationally speaking, we can't rule out a lot of possible explanations for our lives, a lot of possible world models. And when we acquire new information from data, for example, we can narrow down which of those explanations remain plausible.

We obviously exhibit that intelligence in differing ways from individual to individual. And there are also many expressions of what that intelligence might be; you see it in any organization. There are certain organizations where the intelligence you need is dealing with the squishy complexity of human relationships. And there are other organizations where the intelligence you need is very specific, tangible, quantitative. And those different expressions clearly form part of what it is to be intelligent.

In other words, you can have a lot of knowledge and skills in some domain, but you could be stupid in another domain. Right? And humans vary in their areas of skill, of course. And we already see this with AI systems, what we call narrow AIs, that are really good at one type of problem.

And my sense, from the things that you've said publicly, is that you think things have been moving faster towards that artificial intelligence in the last couple of years than perhaps you had expected. What was the specific turning point for you? What did you see that you didn't expect to see?

[00:05:51] Yoshua Bengio: Yeah. And in fact, going from GPT-3.5 to GPT-4, there was also a noticeable improvement in many areas. And language, well, language is important because it's the glue that connects us. It's the social fabric. And it opens the door to accessing huge quantities of knowledge about the world that humans have put down in writing, in forms that computers can digest.

[00:06:21] Azeem Azhar: In some sense, that corpus of text, the trillions of words that ChatGPT or GPT-4 is trained on, reflects many, many different models of the world that we have imperfectly expressed through our own subjectivity. Some ways of expression are much more subjective: poetry or fiction or my diary. And some have a veneer of more objectivity, for example, scientific research or economic statistics. But there is this corpus out there which, in some sense, reflects the collective human view of what it is to be in the world. Is that too grand a statement, do you think?

[00:07:00] Yoshua Bengio: No. No. It's absolutely right. Now, researchers are quick to point out that there are still a lot of pieces of knowledge that are not visible in that corpus, for example, bodily experience and things like this. But still, if you did master the knowledge in such a corpus, you could do a lot of good in the world. And a lot of damage.

So think about if you're a scientist and you're learning about a science you don't yourself experiment with, which means you're basically learning about it through language, books, reasoning, and then you can do things. Right? You don't necessarily need to have a body in order to capture that language and exploit it.

[00:07:46] Azeem Azhar: You've touched on one of my favorite uses for this, an area where I think these large language models have tremendous potential. The structure of science over the last 30 or 40 years has become more and more specialist and much less interdisciplinary, and the incentives that exist in professional academia and the funding structures force people to narrow progressively. I'm sure the PhDs that you have seen recently are more narrow than the ones 30 years ago. And the beauty of an LLM, whether it's GPT-4 or Elicit, is that I can become interdisciplinary quite quickly.

You can ask questions. You can say, well, analogously, did some other field of study have a similar structure to this problem? And where were the avenues of inquiry? That feels like it could really deliver some benefits to pushing the frontiers of knowledge.

[00:08:39] Yoshua Bengio: For sure. For sure. I use it to inquire about areas I'm not familiar with, and then I go and find actual papers in those areas based on what ChatGPT tells me, because I don't trust what it says. But it can give suggestions for what to look for.

[00:08:57] Azeem Azhar: How should we think about this idea that these systems are getting more powerful? We hear people say that quite a lot, both people who are boosting the technology and those who are concerned about it. But I feel it needs to be unpicked a little. We need a clearer definition of what we mean when we say this is a system that is getting more powerful.

[00:09:23] Yoshua Bengio: Well, it's not the system, because each system is a snapshot of our AI capabilities. But if you look at the series of systems as they're brought into the world by AI researchers and engineers, we can see the capabilities are on the rise, and people have drawn those curves that really show what even looks like exponential improvements. There's a group that's been plotting these trends; I think they call themselves Epoch.

[00:10:02] Yoshua Bengio: Yes. So you can see that. But, you know, as a researcher who's been tracking the field, it's very clear that the capabilities are on the rise. Now, of course, it doesn't mean that it's going to continue. But I think we need to ask: if we continue at the same rate, when will we hit human level, or AGI?

[00:10:33] Azeem Azhar: That question is a really interesting and challenging one. I think back to the efficiency of the internal combustion engine. There was a law of physics that said these things could not get more efficient than a certain bound, and we've been tending towards that limit.

You know, essentially, we hit it 70 or 80 years ago, and we've just been inching millimeter by millimeter closer to it. Then on the other hand, you have Moore's Law, which was, well, a social agreement about packing more transistors onto a chip. And Moore's Law was always going to come to an end because of heat and quantum effects and so on. Moore's Law has been dying for 15 years, like a really dramatic death in a cowboy film where the character just staggers around but keeps going. And yet the cost declines of compute, which is what Moore's Law was really about, have continued consistently.

So when we think about the way we're currently building AI with LLMs: is there a Carnot-cycle-style law of physics that says this approach is just going to hit a limit and we'll need something else? Or is this much more like a kind of Moore's Law, a social fabric, where engineers can keep pushing this thing for a few more cycles?
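As a point of reference for the physical limit being gestured at here: the Carnot bound, a genuine law of thermodynamics, caps the efficiency of any heat engine in terms of just the temperatures of its hot and cold reservoirs:

\[ \eta_{\max} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}} \]

No amount of engineering ingenuity can push an engine past this bound, which is exactly the contrast with Moore's Law as a social agreement.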

[00:11:54] Yoshua Bengio: So first of all, about Moore's Law: the physical constraints are really about one chip. Yeah. But a lot of the increases in computational capability in the last decade have been thanks to parallelization.

I mean, GPUs are already about that. And if you now have these server clusters with 10,000 or 50,000 GPUs, well, we get the extra power not because each chip is faster, but because we can parallelize. And it's a huge boost. But, yeah, there's no obvious upper limit to intelligence that we know of or can figure out.
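A minimal sketch of that scaling argument, with purely hypothetical numbers (the per-chip throughput, cluster sizes, and the 99% parallel fraction below are illustrative assumptions, not measurements): aggregate compute grows with chip count even when per-chip speed is flat, while Amdahl's law caps the speedup via the serial fraction of the workload.

```python
# Illustrative sketch of cluster-level compute scaling.
# All numbers are hypothetical assumptions, not real hardware figures.

def cluster_throughput(per_chip_flops: float, num_chips: int,
                       parallel_fraction: float = 0.99) -> float:
    """Effective throughput under Amdahl's law: the serial fraction of
    the workload gains nothing from adding more chips."""
    serial = 1.0 - parallel_fraction
    speedup = 1.0 / (serial + parallel_fraction / num_chips)
    return per_chip_flops * speedup

PER_CHIP = 1e15  # assume ~1 petaFLOP/s per accelerator (hypothetical)

for n in (1, 10_000, 50_000):
    print(f"{n:>6} chips -> {cluster_throughput(PER_CHIP, n):.3e} FLOP/s")
```

With these assumptions, 10,000 chips already deliver roughly a 99x boost over one chip, and going to 50,000 adds little more: the per-chip speed never changed, only the parallelism, and the residual serial work is what eventually limits the gain.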
