"Switch Up" came to controversy after fans were asking Big Sean on Twitter if the song was a diss to Kid Cudi.[5] Kid Cudi left the GOOD Music label just four days before Big Sean released "Switch Up".[6] The lyrics rapped in the chorus "Who gon' leave you there when who gon' leave which ya?/ This is for the ones thats always ridin' with ya/ Ain't switch, I ain't switch up/ Naw, naw I aint switch up/ The same me, naw naw I ain't switch up/ The same team, naw naw I ain't switch up..." made fans think that Big Sean was angry towards Kid Cudi's departure from GOOD Music. Big Sean clarified the controversy with an interview with MTV News. Big Sean said in the interview that he was still friends with Kid Cudi as he stated "Motherfucker that's the dumbest shit somebody could ever say. That's my fam. When I lost my Jesus piece Cudi gave me his Jesus piece. That's my brother."[5]

Over the last year, AI large language models (LLMs) like ChatGPT have demonstrated a remarkable ability to carry on human-like conversations in a variety of different contexts. But the way these LLMs "learn" is very different from how human beings learn, and the same can be said for how they "reason." It's reasonable to ask: do these AI programs really understand the world they are talking about? Do they possess a common-sense picture of reality, or can they just string together words in convincing ways without any underlying understanding? Computer scientist Yejin Choi is a leader in trying to understand the sense in which AIs are actually intelligent, and why in some ways they're still shockingly stupid.


0:02:15.4 SC: And one of her emphases is something that I'm very interested in, which is the idea of how do you get common sense into a large language model? For better or for worse, the way that we have been most successful at training AI to be human-like is to not presume a lot about what it means to be human-like. We just train it. We just say, okay, Mr. AI or Ms. AI, here is a whole bunch of text, all of the internet or whatever; you figure out, given a whole bunch of words, which word is most likely to come next, rather than teaching it what a table is, and what a coffee cup is, and what it means for one object to be on top of another one, et cetera.
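
To make that objective concrete, here is a minimal sketch of next-word prediction using simple bigram counts over a toy corpus. This is an illustration of the training objective only, not how any production LLM is implemented; real models learn the same kind of conditional distribution with neural networks trained on vastly more text.

```python
# Toy illustration of the "predict the next word" objective:
# count which word follows which in a corpus, then predict the
# most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cup is on the table and the book is on the shelf".split()

# Tally, for each word, how often each other word follows it.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the next word seen most often after `word` in the corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("on"))   # 'the' (it followed 'on' twice)
print(predict_next("the"))  # whichever continuation is most frequent
```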

0:03:02.9 SC: And that's surprising in some ways. How can AI become so good, even though it doesn't have a commonsensical image of the world? It doesn't truly, maybe, arguably, depending on what you mean, understand what it is saying when it is stringing these sentences together. But also, maybe that's a shortcoming. Maybe there are examples where you would like to be able to extrapolate outside of what you've already read about on the internet. And you can do that if you have some common sense, and it's hard if all of your training is just: what is the next word coming up? A completely unfamiliar context makes it very difficult for that kind of large language model to make progress. So this is what we talk about today. Is it possible for LLMs, large language models, to learn common sense? Is it possible for them to be truly creative? Is there some sense in which they do understand and can explain things? And also, will they be able to soon if they can't already right now?

0:09:46.3 YC: I wouldn't know for sure whether it truly knows whether there was a mistake or not, in the following sense. Sometimes, it's going to confirm that what it said was correct. Sometimes, it's going to super apologize and it's going to switch what it said. You need to kinda try it both ways, by the way: not only when it made a mistake, but also when it did not make a mistake, and see whether it's actually able to be truthful about what's actually true. The truth is, people do have a bit of a bias, in that they ask "are you sure" only when they know for sure that it's not right. So then, ChatGPT has learned that whenever people are asking that question, it's probably a good idea to back off. But then, if you ask the other way, when it's correct, and you ask whether it's really correct, then it gets confused. And then there was this reasonably recent news headline about a lawyer that used ChatGPT and got into big trouble.

0:19:53.1 YC: Good to have something to disagree on. So I don't think... Yeah, I guess, as for the Turing test, it may seem like it was passed for the Google guy who believed the, you know, the chatbot was sentient. But I mean, even so, not really, because he knew perfectly well that he was talking to a chatbot. And the thing is, with ChatGPT, due to the limited interaction mode that it has with you, you know that this is not a human. If you tell it random chitchat that you might have during your life, it's not gonna really remember all that in the way that humans are able to, or it's not able to forget in the way that humans are able to forget. So there's gonna be something odd about the way that it's interacting with you. Plus, if you ask very simple common sense questions, it may also fail in a way that humans wouldn't. So, for one reason or the other, I think it hasn't really passed yet.

0:30:05.9 YC: Yeah, that's exactly right. So if we train these models only using internet data, then the result is not usable, because, well, us humans have written toxic stuff and dangerous stuff out there. So, it's our fault that the resulting models are not usable as is. So then what happens is what is currently known as RLHF, Reinforcement Learning from Human Feedback. That's the jargon for it. What happens, basically, is to switch the way that these language models are trained. So the goal is now different. Once the pre-training stage is over, which is "let's predict which word comes next," the new training objective is "let's try to get good scores based on human evaluation." So human feedback can be a thumbs up or thumbs down, and the model now wants to get a lot of thumbs up. And then the model can learn that, oh, in order to get a lot of thumbs up, it shouldn't say toxic stuff and it shouldn't hallucinate facts. So by doing this RLHF at scale, it has been shown that you can enhance the level of factuality considerably, and you can reduce the level of toxicity considerably. But the key phrase here is "reduce considerably." It doesn't eliminate it completely.
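
As a toy illustration of that feedback objective, the sketch below (entirely made-up data, not OpenAI's actual pipeline) nudges a model's preference scores so that replies earning thumbs-up become more probable. Real RLHF fits a reward model to human ratings and optimizes the policy with methods like PPO; this only captures the "reinforce what humans liked" idea.

```python
# Toy RLHF-flavored sketch: after "pre-training", tune the model so
# replies earning thumbs-up become more probable, thumbs-down less so.
import math
import random

replies = ["helpful answer", "toxic insult", "made-up 'fact'"]
scores = [0.0, 0.0, 0.0]          # model's learned preferences (logits)
human_feedback = [+1, -1, -1]     # thumbs up / thumbs down per reply

def sample_reply():
    """Sample a reply index with probability softmax(scores)."""
    weights = [math.exp(s) for s in scores]
    return random.choices(range(len(replies)), weights=weights)[0]

learning_rate = 0.1
for _ in range(2000):
    i = sample_reply()
    reward = human_feedback[i]
    scores[i] += learning_rate * reward   # reinforce what humans liked

total = sum(math.exp(s) for s in scores)
for reply, s in zip(replies, scores):
    print(f"{reply!r}: {math.exp(s) / total:.2f}")  # helpful reply dominates
```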

0:55:01.9 SC: And what does this have to do with... what does this symbolic element that we might want to include have to do with the search for common sense, with sort of teaching a large language model everything that every human being in the world knows? I know that you've given some examples of very commonsensical questions that it's easy to ask ChatGPT and get crazy answers to.

0:55:22.7 YC: Yeah. So, common sense has been a research topic dear to my heart for a long time, especially since it was considered to be an impossible goal to achieve for a long time, to the point that I was almost told not to do it.

0:55:45.0 YC: Or not to even say the word, for a long time, in order to be taken seriously. But it's really curious: it's the thing that humans acquire so easily; even animals acquire it in their lifetime. And so... And common sense is what makes us robust. Basically, it's the background knowledge about how the world works that allows us to reason about previously unseen situations in a very robust manner. So, it's things like naive physics knowledge, as well as the folk psychology that we acquire. Some of that has a symbolic nature. Not all of it, by the way, because some of the naive physics knowledge that animals acquire may or may not have a symbolic nature to it. But in any case, it's something that current large language models do acquire more and more, for sure, because as you scale things up, you're gonna pick up on that as well. But it's also something that is strikingly not as robust as you may have assumed from a model that can pass the bar exam.

0:57:55.2 YC: A lot of people, actually, though... I should... I would like to mention that a lot of people ask simple things and then they get very good answers to some common sense reasoning questions, and then they're blown away: oh, look, it does have common sense. Oftentimes, though, those questions are mundane questions. And GPT-4, especially, became much better than ChatGPT. So, there are minor versioning differences between the two. And these are moving variables, in the sense that OpenAI keeps updating both of them, so this may or may not be true depending on how they update both models in the future. But GPT-4 became much better at common sense questions in many ways. That's in part because people do ask a lot of those through the interface, and those questions may or may not have been used for the subsequent "RLHF," this adjustment training where you can align your language model to be able to answer common sense questions better. So, especially, some of the famous ones that I used in my public talks or interviews before have all been fixed.

0:59:40.1 YC: By the way, humans don't need any of this fixing. These are all questions that, as a human, you will just answer correctly, first of all. Even if you were to make a mistake, you don't need to fix yourself by me giving you the exact same question spoken differently, phrased differently, 'cause you just understand the same concept and then that's it. So, there's something very unsatisfying, almost disappointing, about how this smart-looking AI is also simultaneously quite silly or even stupid, in the way that it's not able to really understand basic common sense.

1:00:30.4 SC: So how do we fix that? I mean, is there... Is it kind of like the working memory thing, where we add something on top of it? Can we give it a little common sense module that has a physics engine describing what happens in the world? Like, game designers have to make it so that if you put your coffee cup on the table, it doesn't fall through to the bottom, right? Can we teach large language models that kind of behavior?
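
For a sense of what such a module might look like, here is a hypothetical sketch of a tiny symbolic naive-physics rule base, the kind of "support" rule a game engine hard-codes, that could in principle be used to sanity-check a language model's claims. The rule set and function names are invented for illustration; nothing here describes a real system.

```python
# Hypothetical naive-physics "common sense module": a single symbolic
# rule about support relations (supported objects stay put; others fall).
SUPPORTERS = {"table", "shelf", "floor"}

def where_does_it_end_up(obj, placed_on):
    """Apply the support rule to predict where an object ends up."""
    if placed_on in SUPPORTERS:
        return f"the {obj} rests on the {placed_on}"
    return f"the {obj} falls to the floor"

print(where_does_it_end_up("coffee cup", "table"))     # rests on the table
print(where_does_it_end_up("coffee cup", "thin air"))  # falls to the floor
```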
