Episode 344: Unraveling AI: From Turing to Today’s Language Models
Dive into the world of artificial intelligence with Dr. Yeshwant Muthusamy on the Making Sales Social podcast. Explore the evolution of AI from its inception to the rise of large language models, and gain insights into the ethical and responsible use of AI in business. Dr. Muthusamy, an AI principal at Improving, shares perspectives on AI's impact, the concept of AI governance, and the nuances of intellectual property. Host Brynne Tillman facilitates a rich discussion aimed at equipping business leaders and non-technical professionals with critical AI knowledge.
View Transcript
Intro
0:00:18 – (Bob Woods): Welcome to the Making Sales Social podcast featuring the top voices in sales, marketing, and business. Join Brynne Tillman, and me, Bob Woods, as we each bring you the best tips and strategies our guests teach their clients so you can leverage them for your own virtual and social selling. This episode of the Making Sales Social podcast is brought to you by Social Sales Link, the company that helps you start more trust-based conversations without being salesy through the power of LinkedIn and AI. Start your journey for free by joining our resource library. Welcome to the show.
0:01:00 – (Brynne Tillman): Welcome back to Making Sales Social. I have a really great guest today, Dr. Yeshwant Muthusamy, who is an AI principal at a company called Improving. He has extensive experience in the consumer electronics, wireless, telecommunications, and semiconductor industries, having held leadership positions at companies like Immersion Corporation, Toyota, Samsung Research, Nokia, Texas Instruments, and it goes on.
0:01:29 – (Brynne Tillman): He also teaches a continuing education course called "Generative AI for the Rest of Us," designed specifically for business leaders and non-technical professionals. As a part of his own consulting practice, Yeshvik Solutions LLC, Dr. Muthusamy has six issued patents and over 30 peer-reviewed publications. He holds a B.Tech from JNTU in India and a PhD from the Oregon Graduate Institute, both in computer science and engineering.
0:02:07 – (Brynne Tillman): Dr. Muthusamy, welcome to the show.
0:02:10 – (Yeshwant Muthusamy): Thanks Brynne. It’s great to be here.
0:02:13 – (Brynne Tillman): I’m excited to have you here because you know, when we were talking I had these aha moments of the AI from a different perspective and I’m really excited to jump into the things that you’re doing now and how your brain works around AI and large language models. But before I jump into the questions we have for you today, we ask all of our guests the same first question, which is what does Making Sales Social mean to you?
0:02:46 – (Yeshwant Muthusamy): It means that you are trying to sell to people based on their needs. I mean that’s the way I see it. If people need what you’re selling, then you are more likely to make a sale. And so to me, I mean selling is a social activity and you’re trying to make a connection with the customer. And if the customer is able to understand, is able to identify with you, then you are more likely to make the sale.
0:03:18 – (Brynne Tillman): That’s great. Thank you, I love that. So I’m going to jump into the hot topic, and we’ll start with ChatGPT. However, large language models that are accessible to the world have only been around since November 2022. Right now there are hundreds, if not thousands — a few core ones, but just an enormous amount of AI coming out every day. But after talking with you, I realized that AI has been around way longer than November 2022.
0:03:55 – (Brynne Tillman): Talk a little bit about AI from its inception and what you’ve seen over the incredible, I think, even over 50 years, if I remember what you said. Yes, yes.
0:04:08 – (Yeshwant Muthusamy): I mean, there has been work done in AI for a long time. The first paper on AI really was by the British mathematician Alan Turing, and it came out in 1950, when he posed the question, can computers think? Or, I think the actual phrasing in the paper was, can machines think? And that is where you have the origin of the so-called Turing Test, which, by the way, no large language model has been able to satisfy in the way it was originally thought of.
0:04:43 – (Yeshwant Muthusamy): And the idea of the Turing Test is: can a machine interact with you in such a way, over an extended period of time, that you are unable to tell the difference between the machine and a real human being? That is the essence of the so-called Turing Test. And since then, it has been fascinating to see the progression of AI through the years. And by the way, I was not around when Alan Turing wrote his paper.
0:05:16 – (Brynne Tillman): Pretty sure you’re not in your 70s.
0:05:19 – (Yeshwant Muthusamy): So I just thought I should mention that. But it’s been fascinating to see the progression of AI, and it’s really the progression of human thought on how you can make machines think. So we’ve had this concept of expert systems, or rule-based systems, and that gave way to what is known as artificial neural networks, which was an attempt to mimic how humans think. You have these nodes, called neurons, and the neurons have links, which are supposed to simulate how the human brain works.
0:06:06 – (Yeshwant Muthusamy): And from neural networks, we went on to what is known as deep neural networks, which are much larger neural networks. And that’s what brings us to the current state of large language models. Large language models would not have been possible without deep learning or deep neural networks, and deep neural networks wouldn’t have been possible without the original artificial neural networks, and so on.
0:06:44 – (Yeshwant Muthusamy): So there has been a progression in the technology which has brought us to the large language models. And these large language models have been trained on trillions upon trillions of words, quite literally. And so they have encoded within these networks a large amount of knowledge. But a note of caution there: when these large language models were being trained, they adopted this notion of beggars can’t be choosers, so they trained on data from all over the web, including the Dark Web, which is why you hear all these stories about ChatGPT giving all kinds of responses.
0:07:33 – (Yeshwant Muthusamy): So it is very easy to take ChatGPT down deep, dark rabbit holes by asking it all kinds of weird questions. And the reason it’s able to do that is not because it understands you, and not because it’s bad. You’re just directing it towards the portion of the training data that came from the Dark Web. That’s all it is. A large language model by itself is quite dumb. All it’s doing is giving you the most probable next word in the sequence based on the training data.
0:08:08 – (Yeshwant Muthusamy): It’s not like it likes you, it’s not like it hates you; it doesn’t know you from Adam or Eve. It is just picking the most probable next word in the sequence.
0:08:20 – (Brynne Tillman): Interesting. So, as we’re speaking, there is a new o1-preview out, which is supposed to be a more complex thinking model. Are you familiar with what’s going on behind that? Can you talk a little bit about that?
0:08:36 – (Yeshwant Muthusamy): Sure. I mean, it has been out all of a week. The code name for o1 is actually Strawberry. I don’t know what goes on and how they name these things. They make so much money, most of it goes for the cloud servers, but they need to hire a naming guy to actually name these models.
0:09:01 – (Brynne Tillman): Someone from Crayola who did a great job naming crayons, or Benjamin Moore who did a great job naming paints.
0:09:08 – (Yeshwant Muthusamy): Right, right, right. But what o1 does is it actually takes it a step beyond what GPT-4o and others are doing, in that it takes time to think about the question. So there is this notion of reasoning that has been added to it. It is still a large language model, but it uses intermediate steps where it tries to reason about the question you have asked it, which is the reason why its response time can go from a few seconds to actually several minutes, because it’s actually processing what you’re saying and trying to
0:09:58 – (Yeshwant Muthusamy): Trying to retrieve from its bank of training data all possible ways in which it could answer your question. It is, I mean, it is very new and I’ve also seen quite a bit of hype that has been associated with it. But at the same time, it is interesting. Just this morning I saw a series of benchmark tests where it outperforms GPT4,0 and the other large language models, but there are a bunch of other tests where it does worse than the existing large language models.
0:10:36 – (Yeshwant Muthusamy): So it is not a cure-all, it is not the new silver bullet, but on certain tasks which are more complex, which require some amount of thinking, it does really well.
0:10:55 – (Brynne Tillman): Thank you for that, I appreciate it, and I’m looking forward to seeing it expand. As of today, when we’re recording this, you can’t upload a Word doc or a PDF or anything to o1. So I’m hoping Strawberry does allow us to, because a lot of what we teach is uploading transcripts from videos or PDFs of LinkedIn profiles to analyze, and things like that. So I’m hoping the ability is there by the time this is published.
0:11:29 – (Brynne Tillman): So thank you so much for that. You know, let’s just talk for a moment about the responsibility and ethical AI and what that really mean? And I guess I’m throwing a couple of questions on this, but how can businesses make sure that they’re being responsible and ethical, but they’re also empowering their team to use AI, Right?
0:11:51 – (Yeshwant Muthusamy): So, I mean, we’ve seen a lot of news items about people getting into bad situations because of the way they’ve used their AI models. And the whole notion of responsible and ethical AI goes to how we as human beings have an inherent responsibility, because these AI models are being trained on such a large quantity of data. You have to be very careful in the types of data that you provide the model, because that is going to inform how the model is going to behave once it’s let loose in the wild.
0:12:41 – (Yeshwant Muthusamy): And this notion of responsibility is often more of an organizational norm, in the sense that whether you’re a big business or even a small business, you owe it to your stakeholders to make sure that the AI models that you deploy are transparent, that there is some level of accountability, and that they are also fair. Because if you incorporate biased training data, well, guess what?
0:13:18 – (Yeshwant Muthusamy): Your AI model is going to display the very same kind of negative biases which can hurt your business in the long run. And I mean training an AI model and deploying an AI model in the field, they are all intrinsically expensive tasks. And it’s once you have an AI model out in the wild, if you haven’t taken the time to make sure that it’s doing the right thing, it can be very expensive for you to Roll things back.
0:13:52 – (Yeshwant Muthusamy): And in terms of the bad PR that your business has gained in the process, that’s going to be even worse. It’s going to be a lot more expensive to undo that bad PR.
0:14:04 – (Brynne Tillman): Yeah, just the lawyers alone. So I’m curious, when a company is rolling out an AI product, do you recommend that there are very strict parameters and prohibitions inside of the product they’re rolling out? And how do you talk to your clients about making sure that they are protected?
0:14:34 – (Yeshwant Muthusamy): Right. I mean, it all comes down to having an AI governance framework. And you do not have to be a multibillion-dollar firm to have an AI governance framework, although you hear about AI governance teams and frameworks in large companies like IBM, Google, and Facebook. In my mind, an AI governance framework is more of a mindset. You could be a two-person startup, and in that case both you and your co-founder are the AI governance framework for that company.
0:15:10 – (Yeshwant Muthusamy): Because you need to make sure that when you collect data, when you train your models, and when you deploy your models, you’re incorporating all these notions of responsible and ethical AI. And as you grow, then the CEO, the CTO, and probably the CISO and the CFO are all part of the governance framework, where the whole point is that you are ensuring that your company’s reputation is not harmed, you’re ensuring that you’re not spending too much money on this (that’s where the CFO comes in), and you’re ensuring that the data you have provided is not leaking out PII, personally identifiable information, of your stakeholders or your end customers.
0:16:02 – (Yeshwant Muthusamy): All those go into this notion of a proper AI governance framework, where you’re trying to harness the incredible power of AI technologies without harming yourself and the people it is supposed to serve. That’s what it comes down to. And it’s not a one-off; it’s a constant process.
0:16:27 – (Brynne Tillman): Ah, so a lot of my clients, including banks, are considering Copilot from Microsoft as an option. Why is that a better option for the companies that have the fear of their intellectual property or customer data being breached?
0:16:46 – (Yeshwant Muthusamy): Well, Microsoft is offering certain safeguards, or they are offering certain assurances, that they are not going to misuse your data and things like that. For a lot of companies, those assurances might be more than enough. But I want to add a word of caution here. If the data that you’re dealing with is extremely sensitive, then even those assurances that Microsoft provides may not be enough, because once the data leaves the four walls of your business, it is out of your hands.
0:17:24 – (Yeshwant Muthusamy): You are basically at the mercy of companies like Microsoft and OpenAI, and whatever they put in their SLAs, that they are going to do the right thing. But if your business is of such a kind where you cannot afford to abdicate the responsibility of your data security to another firm, then you shouldn’t be using Copilot and all those other things. But for a majority of businesses, the assurances that they provide might be okay.
0:18:03 – (Brynne Tillman): I’m going to take it from the other side of it. Not necessarily that my information, my IP is getting out there, but I might be using somebody else’s IP and not recognizing it. And I know we’ve heard a lot of these content creators who are really filing lawsuits now against OpenAI and other AI companies. So number one, this is a two-part question really. The first part is how do I verify that the content is not stolen from a creator?
0:18:41 – (Brynne Tillman): Number one. And number two, I guess it’s kind of piggybacking on that last question. How do I secure my IP?
0:18:47 – (Yeshwant Muthusamy): Right. So, I mean, that is a tricky question, and I would be very hesitant to believe anybody who claims they have all the answers on that, because the IP issue is a very thorny issue. As you probably heard, the New York Times filed suit against OpenAI because OpenAI was scraping their news content too. Right? And they are actually asking $150,000 per article that OpenAI has used in training their models.
0:19:28 – (Yeshwant Muthusamy): I don’t think that suit is ever going to go to trial; I expect them to reach some kind of a settlement out of court. But what it does is highlight an important point: you have to be cognizant of the IP in the data that you use for training your models. And the unfortunate fact is that existing US copyright law was written long before anyone could have predicted the world of large language models and generative AI.
0:20:12 – (Yeshwant Muthusamy): So something has to give. You either have to restrict what is happening now to conform to a law which was done in the 1880s or something which is probably more feasible is to update existing US law to handle the case of Gen AI. And that’s what I think. Right, right. And so that’s what I think is going to eventually happen. But for a small medium business, be very cognizant of the output that you get from Generative AI if it’s a text output.
0:20:49 – (Yeshwant Muthusamy): To be clear, GPT-4o is very good at any kind of creative text writing. If you have any kind of marketing collateral or business letters, GPT-4o is your man, or woman, as the case may be. You just want to use it, and as long as it is in your voice, as long as you’re not trying to be someone that you’re not, it’s fine. But if you’re trying to do something with any kind of image or video, use multiple text-to-image models, and don’t be afraid to iterate with the text-to-image model to make the output your own, so that the chances of you being accused of infringing on somebody else’s IP are much less.
0:21:52 – (Brynne Tillman): Yeah, I’ve also heard to make sure you’re saving your chat threads.
0:22:00 – (Yeshwant Muthusamy): Yes, yes.
0:22:01 – (Brynne Tillman): So that you can back it up with.
0:22:04 – (Yeshwant Muthusamy): Right.
0:22:05 – (Brynne Tillman): What you’ve done. So that’s great news. I love hearing it.
0:22:10 – (Yeshwant Muthusamy): And if you look at the background, you see that head. I created that head using GPT-4o, and it took me four iterations to get it exactly the way I wanted it.
0:22:26 – (Brynne Tillman): And now it’s yours.
0:22:28 – (Yeshwant Muthusamy): Yes. And so that was a great image. And that graphic of the head has now become somewhat of a trademark of my course. It is in all of the marketing collateral that I send out for my course. And it’s gotten to a point where people in my LinkedIn network, the moment they see that head, they know that I’m talking about the course.
0:22:57 – (Brynne Tillman): That’s great branding. Absolutely. I love that.
0:23:00 – (Yeshwant Muthusamy): Right.
0:23:01 – (Brynne Tillman): I love that. So a quick question. We’ve been talking about ChatGPT and OpenAI. What about Gemini and Copilot and Claude and, you know, some of the others that are out there? I know we touched on Copilot, but are there any that you feel, or even imagery models like Leonardo, right, are there any that you love? Any where you just say, play with it until you find the one that matches you? What are your thoughts?
0:23:33 – (Yeshwant Muthusamy): I always play these large language model chatbots against each other. If it’s a task where I’m going to be on the hook, I always ask the exact same question of multiple large language model chatbots, and I see what each one has to say. Now, in general, Claude is really good for more factual information, whereas GPT-4o cannot be trusted for factual information. I mean, they have probably improved, and now they might say that they are good, and I don’t disbelieve them.
0:24:12 – (Yeshwant Muthusamy): But in my experience, chat GPT4 is great for creative writing. Not so much. No, no. Not so much for factual information. Claude and Gemini probably have a leg up on. On chat GPT4O if you want factual date in information. So my general group rule of thumb is don’t be afraid to ask the same question of multiple large language models and see what each one has to say and then, and then put your spin on it. Make sure it’s in your voice.
0:24:51 – (Yeshwant Muthusamy): I am seeing LinkedIn comments on posts which are obviously generated by ChatGPT, and that is bad form. If you do not have the intellectual capacity to write a LinkedIn comment on your own, I don’t know what to say.
0:25:13 – (Brynne Tillman): Well, and I’ll always say this: if you don’t have the ability to do that, go grab a quote from the actual piece itself, say, "Love this quote," and why.
0:25:27 – (Yeshwant Muthusamy): Exactly. Exactly. I mean, my last piece of advice is: do not abdicate your inherent ingenuity and creativity to a piece of code. You should be at the center of everything that you do with AI.
0:25:43 – (Brynne Tillman): I love all of that. Thank you so much. There were some amazing learning moments, and I’m very excited that we had this conversation. Is there any question I should have asked you that I didn’t?
0:25:56 – (Yeshwant Muthusamy): Oh, no. I think you hit all the main points. But you have to keep in mind, I can talk all day about AI.
0:26:02 – (Brynne Tillman): Me too, I love it. Well, you know, for other people that want to continue this with you, how can people get a hold of you?
0:26:13 – (Yeshwant Muthusamy): Well, I can be reached at my email address. That’s my first name @yashwant.com. I’m happy to chat with them at any time.
0:26:28 – (Brynne Tillman): Great.
0:26:28 – (Yeshwant Muthusamy): And if you’re local to the Dallas-Fort Worth area, you might want to come take my course.
0:26:40 – (Brynne Tillman): Well, I wish I were local there. I’m in New York, but if I’m ever down there. Thank you so much.
0:26:47 – (Yeshwant Muthusamy): Thanks, Brynne. This has been fun.
0:26:49 – (Brynne Tillman): It’s been great. And I know that the listeners learned a ton, so I truly appreciate it. And to our listeners, when you’re out and about, don’t forget to make your sales social.
Outro:
Thanks for watching, and join us again for more special guest instructors bringing you marketing, sales training, and social selling strategies that will set you apart. Hit the subscribe button below to get the latest episodes from the Making Sales Social podcast, give this video a thumbs up, and comment down below on what you want to hear from us next. You can also listen to us on Apple Podcasts, Spotify, and Google Play. Visit our website, socialsaleslink.com, for more information.