LLMs are "brilliantly stupid." - Gary Marcus 👀 The leading AI critic unpacked why LLMs are powerful yet unreliable at #DKGcon2024. "The problem is the current LLMs, which are the primary technology people will use, are not reliable enough." Watch the full conversation between Gary Marcus and Branimir Rakic at #DKGcon2024 here 👉 https://lnkd.in/det6Kk7f
Transcript
Branimir: Hello, Gary. Welcome, nice to see you, and great to have you at DKG Con 2024. I was just about to introduce you to the crowd. I assume pretty much everybody knows who you are, but I'm going to give a few sentences on who Gary Marcus is. Lately he's been named the leading AI critic, and for good reason: he was the one grilling Sam Altman in front of Congress, which I'm going to ask you about later. He's also a cognitive scientist, entrepreneur, and AI expert who studied under the legendary Steven Pinker, was a professor at New York University for many years, co-founded two companies, Geometric Intelligence and Robust AI, and just recently published a book called Taming Silicon Valley: How We Can Ensure That AI Works for Us. I've read most of the book, it's really good, and I recommend you get it. Welcome again, Gary. The chat will be between Gary and myself, Branimir; I'm the founder and CEO behind OriginTrail.

We just had a really great conference today. It's almost at its end, but the final parts are usually the juiciest ones. We had conversations with folks from Google, Amazon, and Microsoft, experts from BSI, standardization organizations like GS1, and builders, and they all focused on the problems of misinformation, deepfakes, and hallucinations undermining trust on the Internet, centralization, and IP protection issues. That has generally been the theme of the day. So as an introduction, I'm curious: what's your opinion on the current state of the AI space, and how would you reflect on these problems?

Gary: I've been warning about all those problems for quite a long time. That's why I wrote the book, but it's also why I've been active on Twitter and why I've been keeping a Substack and so forth. Those are all very real problems: hallucinations, misinformation, the use of these systems to meddle in political campaigns. I think AI has probably made a wrong turn. I don't think it's an irreversible wrong turn, but if you think about where we were, say, ten years ago, the field was very healthy. There were many different alternative approaches, and money had not quite corrupted the field in the same way; a greater proportion of people were working on things like AI for medicine, though of course some still are. Then chatbots got popular and everybody rushed in, including a lot of grifters from, you know, the NFT world or something like that, trying to find a new home. The valuations went up, and that has changed, I think, what the big companies do and how some of the large startups are thinking about things. Right now it's kind of a money grab, and it's a money grab that I think is ultimately going to involve large-scale surveillance like we've never seen before. I think OpenAI, and we could talk about this, is going to be pushed towards surveillance. And I think the LLMs that are so popular are both morally and technically inadequate. They're morally inadequate because it's much easier for bad actors to use these tools than for good actors, in the way that spam doesn't have to work that well: if you use robocalls to try to influence people, or run phishing expeditions or whatever, it doesn't have to work that well. So bad actors are having a field day with this stuff, and a lot of the promised good things aren't really working. And then we'll get to the technical side.
Gary: On the technical side, the current systems just don't understand factuality, and their ability to reason is limited. I think that puts an upper bound on the positive uses, but counterfeiting people doesn't require perfection in the way that a lot of good uses do. So right now I don't know whether generative AI has been a net negative, but it certainly hasn't been a huge net positive. There are a lot of downside risks to it, and I would like to see the field of AI trying to build better, safer, more reliable AI instead of just rushing out these chatbots, which are mostly there to trick people.

Branimir: That resonates very much with all the conversations today, including the previous one, introduced by Ben. I think I even saw a quote by you saying that deep learning is all the hype, but deep understanding is what we actually need.

Gary: My earlier book was all about that; I mean, we used the phrase deep understanding. That was Rebooting AI, one of my earlier books, written with Ernie Davis, and its conclusion was that deep learning is not going to get us to AGI or to trustworthy AI. That was five years ago. Some people would say there have been huge advances since, but I don't think there have been huge advances towards trustworthiness, towards interpretability, towards getting rid of bias; in fact, I think in some ways it's a step back. There has been a lot of positive advance if all you want is an LLM for brainstorming, it's way better for that, let's be honest, and people have found value in coding, although we could argue about what the long-term benefit of that is. But we are still lacking deep understanding as much as we were five years ago.

Branimir: That's a very good point. To my knowledge, you've been one of the earliest advocates of the hybrid neuro-symbolic approach that we've been talking about throughout the day and that Ben was highlighting as well; many experts seem to be converging on that idea today. We also mentioned Yann LeCun today; he talks about a world model, which is necessary for AI to actually understand what's going on. For this deep understanding, I believe you called it the cognitive model in your work.

Gary: In my 2020 arXiv article "The Next Decade in AI", which I think is still pretty relevant, a whole section of the paper was about cognitive models. World model is just another label for that. It's a very old idea; people have been using world models in AI since the 1950s, explicit world models that you could interrogate. Right now, for example, I have a model that you're in an auditorium, not that much of which I can see; you're at a lectern, you're wearing a T-shirt, and a representation of me is being projected on the screen, and so forth. I have all of those things in my head, and I can reason about them. If you all left and I heard a fire alarm, I could guess that it wasn't something I said that offended you, but rather that you were concerned about the alarm. We can make all of these inferences from world models, and I've been trying to emphasize for a long time that you can't really do AI without them. LeCun has come to this lately, and he's trying to make it sound like he invented the idea. He certainly did not. You know, it precedes, I think, even his birth, and certainly mine.

Branimir: Yeah, it's very interesting you put it that way.
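Marcus's description of an explicit, interrogable world model translates naturally into code. Below is a minimal sketch in Python of the general idea, not of any particular system: facts stored as subject-relation-object triples that can be queried and reasoned over directly. All of the entities, relations, and function names are illustrative assumptions.

```python
# A toy world model in the spirit of the auditorium example.
# Every name here is made up for illustration.

world = {
    ("speaker", "located_in", "auditorium"),
    ("speaker", "standing_at", "lectern"),
    ("speaker", "wearing", "t_shirt"),
    ("guest", "projected_on", "screen"),
    ("audience", "located_in", "auditorium"),
}

def query(subject=None, relation=None, obj=None):
    """Return every stored fact matching a (possibly partial) pattern."""
    return [
        (s, r, o) for (s, r, o) in world
        if (subject is None or s == subject)
        and (relation is None or r == relation)
        and (obj is None or o == obj)
    ]

# Interrogate the model: who is in the auditorium?
print(query(relation="located_in", obj="auditorium"))

# A hand-written inference rule, mirroring the fire-alarm example:
# if the audience leaves while an alarm sounds, blame the alarm.
def explain_exit(alarm_sounding: bool) -> str:
    return "the fire alarm" if alarm_sounding else "something the speaker said"

print(explain_exit(alarm_sounding=True))
```

The point of the sketch is that every fact is explicit, so the model can be queried, audited, and extended, which is exactly the property a purely statistical next-word predictor lacks.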
Branimir: The community here at this conference, the OriginTrail community, has actually been working on creating an open-source, transparent, decentralized system called the Decentralized Knowledge Graph, where we aim to create such a world model in real time by crowdsourcing all of this information. We've been doing quite a lot of work on it over the years, and we've just seen ten, nine builders actually, not ten, presenting. This knowledge obviously comes in many forms and often from questionable sources, so we're really looking for good principles on how to build this as a robust foundation. I remember you specifically suggesting several principles for a better foundation for AI; you were explicitly calling it a better one. Could you perhaps touch on those?

Gary: I'm not sure I can do all seven, I think it was, from memory, but I can probably get most of them. Having world models or cognitive models is absolutely fundamental. Having the ability to generalize abstract knowledge is fundamental: what we see in LLMs is always piecemeal; they don't really understand any abstract principle. This is why they have trouble with arithmetic, or, as we've seen lately, with these river-crossing problems. A man and a woman in a boat have to get across, and the systems just confuse it with other similar problems that are in the database. They never really abstractly represent the notion of a boat, or crossing a river, or, you know, the wolf or the cabbage, right? So you need a much higher level of abstraction. You also need something like human values, or Asimov's laws, or something like that. We haven't gone into that kind of AI risk here, but right now we can't even say to a system "be honest" or "don't use copyrighted materials." Even if there's a list of what is copyrighted in the training data and you put "don't use copyrighted materials" in your prompt, the systems don't actually have a deep enough understanding of those terms, so they can't follow it. We need systems that can follow those kinds of explicit rules. I don't know how I did on remembering the seven; those were four, so I've obviously forgotten three, but that gives you the general gist of the kinds of things we still absolutely need.

Branimir: The one that you didn't highlight, and that I really like because it fits our worldview, the knowledge graph worldview, was the representation of relationships between entities.

Gary: Yes, absolutely. In fact, I've been thinking about that essentially my whole career. You mentioned that I was trained by Steven Pinker; we were really looking at an abstract relationship between a verb and its past tense. It's a complicated one in English, because some verbs are regular, like walk and walked, and some are irregular, like sing and sang. What I showed in my dissertation was that even children could learn that abstract rule; it's an abstract relationship. And later, my 2001 book The Algebraic Mind was all about abstract relationships; that's where the title came from, right? Algebra is about abstract relationships. The kinds of stuff you guys are doing are compatible with that, and it turns out that most neural networks, as we currently build them, are not.
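Marcus's river-crossing example can be made concrete. The classic wolf, goat, and cabbage puzzle is trivial once the boat, the banks, and the safety constraints are represented explicitly; the sketch below, a toy illustration rather than anything shown at the talk, solves it with a plain breadth-first search over states, the kind of abstract representation he argues LLMs lack.

```python
from collections import deque

# Each state records which bank each item is on: "L" or "R".
ITEMS = ("farmer", "wolf", "goat", "cabbage")
START = ("L", "L", "L", "L")   # everyone on the left bank
GOAL = ("R", "R", "R", "R")    # everyone on the right bank

def unsafe(state):
    farmer, wolf, goat, cabbage = state
    # Wolf eats goat, or goat eats cabbage, if left together without the farmer.
    return (wolf == goat != farmer) or (goat == cabbage != farmer)

def neighbors(state):
    farmer = state[0]
    other = "R" if farmer == "L" else "L"
    # The farmer crosses alone, or with one item that is on his bank.
    for i in range(len(ITEMS)):
        if i == 0 or state[i] == farmer:
            nxt = list(state)
            nxt[0] = other
            if i != 0:
                nxt[i] = other
            nxt = tuple(nxt)
            if not unsafe(nxt):
                yield nxt

def solve(start=START, goal=GOAL):
    """Breadth-first search; returns the shortest sequence of states to the goal."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

for step in solve():
    print(dict(zip(ITEMS, step)))
```

Because the state space is explicit, a perturbed variant (different items, different constraints) only requires changing the data, whereas a pattern-matching system may fall back on the memorized classic version.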
Gary: So there may be some neuro-symbolic hybrid, much better than what we have right now, that really smoothly transitions between arbitrary data in, let's say, a corpus database and those abstract representations. I just don't see how we succeed at AGI if we can't explicitly represent those things when we need to. Take, for example, the data in the Wikipedia fact boxes, which is perfectly regular and abstract in certain ways; if a system can't work with that, I don't know what it's doing.

Branimir: That's a very, very good point. Having said that, I started at the beginning with you grilling Sam Altman and OpenAI, for valid reasons, I would say. I also read that you have been engaging heavily on the policy side of AI recently, so beyond the Congress hearing there's been a bunch of activity. We also had regulators from the EU present today. Sometimes the EU gets a lot of bashing for the reputation that, as we hear, it tends to over-regulate things and not innovate; I would say this conference suggests the opposite. But that aside, what's your opinion?

Gary: One thing before the question: that's a false dichotomy between innovation and regulation. Europe has its problems, and some of those may have more to do with things like tax incentives. People try to draw an inference that because you have a lot of regulation you don't have innovation, and that's just not true. There have been lots of regulations in the US that have led to innovation: safer airplanes, safer cars, safer food, safer medicine. So the idea that those two things are in opposition is just wrong. You might want to look at things like what your capital gains tax is, or what incentives you offer to startups. It's just a silly, I mean, logically flawed... fallacious is the word I'm looking for. It's a fallacious inference; to equate these things is just wrong.

Branimir: Yeah, I agree 100%. And coming from the ethos of Web3 and decentralized technologies, we also see that it's not just the regulation; it's also how you can embed some rules into the system itself.
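Finally, Marcus's remark about Wikipedia fact boxes ties back to the relationships-between-entities principle Branimir highlighted: each infobox row is already an explicit entity-relationship-value statement. The sketch below is a toy illustration with hand-written data, not a real Wikipedia API; it flattens two small fact boxes into triples and answers a two-hop question by following explicit relationships.

```python
# Two hand-written "fact boxes" in infobox style; field names are illustrative.
infobox = {
    "Ada Lovelace": {"born": "1815", "field": "mathematics", "parent": "Lord Byron"},
    "Lord Byron":   {"born": "1788", "field": "poetry"},
}

# Flatten the fact boxes into (subject, relationship, object) triples.
triples = [
    (subject, relation, value)
    for subject, fields in infobox.items()
    for relation, value in fields.items()
]

def related(subject, relation):
    """Follow an explicit relationship from an entity; None if absent."""
    for s, r, o in triples:
        if s == subject and r == relation:
            return o
    return None

# Two-hop reasoning over explicit relationships:
# what field did Ada Lovelace's parent work in?
parent = related("Ada Lovelace", "parent")
print(parent, "worked in", related(parent, "field"))  # -> Lord Byron worked in poetry
```

Because the relationships are explicit, the two-hop question is answered by lookup rather than by statistical association, which is the property a knowledge graph is built to preserve.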