Episode 301: Navigating the Ethical Boundaries of AI in Sales
Dive into the ethical boundaries of AI in sales with Stan Robinson, Jr. and Bob Woods. Explore the implications of bias, transparency, accountability, and privacy in leveraging AI responsibly. Discover how companies are leading the way in implementing responsible AI practices. Learn how to navigate the evolving landscape of AI while maintaining trust and integrity in successful sales. Keep the humans in the loop and harness AI potential ethically. Join the conversation on making sales social.
Intro
Bob Woods: Welcome to the Making Sales Social podcast featuring the top voices in sales, marketing, and business. Join Brynne Tillman, Stan Robinson, Jr., and me, Bob Woods, as we each bring you the best tips and strategies our guests are teaching their clients so you can leverage them for your own virtual and social selling. Enjoy the show. Welcome to Making Sales Social Live, coming to you from the Social Sales Link virtual studios.
0:00:48 – (Bob Woods): Welcome, sales and marketing pros, to Making Sales Social Live, coming to you from the Social Sales Link virtual studios. I’m Bob Woods. Brynne Tillman is off today, but our co-host, Stan Robinson, Jr., is here to talk about a really exciting subject today. How are you doing, Stan?
0:01:06 – (Stan Robinson, Jr): Doing excellent, Bob. Look forward to today’s topic.
0:01:11 – (Bob Woods): Yeah, I’m really looking forward to today. We’re diving into a topic that’s become increasingly relevant in our industry, and it seems like it gets more relevant every single day, and that’s navigating the ethical boundaries of AI in sales. As AI continues to evolve, we’re seeing it daily, and it’s amazing the speed that things are evolving at. It’s crucial that we understand not just its capabilities, but also the ethical implications it brings to our sales strategies.
0:01:48 – (Bob Woods): We’re going to be exploring how to leverage AI responsibly, ensuring that our approaches remain transparent, fair, and beneficial both for businesses internally and for customers externally. And as always, you can put your questions in chat, and we will answer them along the way. So, Stan, when you think of AI and sales ethics, what comes to mind for you?
0:02:17 – (Stan Robinson, Jr): One thing that comes to mind is that artificial intelligence is being integrated into so many sales tools right now that everyone is going to have to learn how to sort of navigate it, so to speak. We’ll be talking more about issues like bias and that type of thing. And hopefully, the major platforms that are integrating these tools are taking those things into consideration because it’s something that’s going to be affecting a lot of people.
0:02:51 – (Stan Robinson, Jr): One other thought that did come to mind, since voice is going to be an increasingly important part of AI, some companies will be using artificial intelligence to do outbound outreach, and not just via email, but via phone calls.
0:03:12 – (Bob Woods): Yes.
0:03:13 – (Stan Robinson, Jr): So one big question, which the regulators will probably have to tackle, is this: if you get a call from an AI, does it need to identify itself as an AI that’s talking to you?
0:03:26 – (Bob Woods): Yeah. And then what if the company decides to start using their reps’ voices? If they’re not identifying as AI and they’ve got a crew of sales development reps, SDRs, for example, what if they decide to clone, for lack of a better word, their voices and use those voices in the outbound? And then what happens if that SDR leaves? There are just so many different aspects of ethics involved here in sales alone, not to mention all of the other areas that involve not only running a business internally, but dealing with customers and vendors and partners externally. There’s a lot of stuff to think about here, Stan.
0:04:19 – (Stan Robinson, Jr): Yep, absolutely. And when ChatGPT first came out, I remember having discussions about, okay, when I write something in collaboration with ChatGPT, should I put at the bottom, “written by Stan Robinson and ChatGPT,” or something to that effect? And one of the things that dawned on me was that this is kind of a watershed moment, because going forward, everyone is going to assume that AI is involved in all content. They’re just going to assume that.
0:04:57 – (Stan Robinson, Jr): And so ownership, provenance, whatever you want to call it, being able to correctly assign ownership to different forms of content is going to be an enormous issue. And of course, that goes way beyond sales.
0:05:14 – (Bob Woods): Yeah, without a doubt. So for me, it’s about everything that you, Stan, just said, but it’s about a lot more, too. If you’ve ever watched the older movie I, Robot with Will Smith, I’m not going to go into what it was about, but let’s just say that’s a rather dystopian example of AI running amok. I don’t think we’re ever going to get to that point.
0:05:43 – (Bob Woods): But there are ethical considerations on a broader scale going on here. People are thinking, oh, it’s going to dehumanize the world, it’s going to make humans lazy, things along those lines. And I think in some cases that’s starting to happen, but in a lot of other cases, I don’t think it is. But I do think we need to think about the perception that AI has out there, even if the reality, as we know it, is different.
0:06:14 – (Bob Woods): But you know something, perceptions are reality out there. People depend on perceptions being reality all the time. So we have to really work on that part. We have to practice what’s being called responsible AI in everything that we do. Now, Stan knows I recently took a class at Northwestern University’s Kellogg School of Management called AI Applications for Growth. A mind-blowing class, I might add. If you want details on it, just reach out to me and I’ll send you some material. There we discussed four main tenets of responsible AI.
0:06:53 – (Bob Woods): Number one is fairness. For salespeople, this comes along the lines of AI helping to identify potential clients and predict their needs, but it needs to be done in a way that avoids biases. That’s what Stan was talking about a little earlier. We cannot let bias work into anything that we do, whether it’s identifying potential clients and predicting their needs, or whether it is.
0:07:23 – (Bob Woods): Whether it is just the raw data that is getting ingested to train AI models. And that’s starting to get a little far afield, but still, we really have to be very cognizant of that, because when AI was starting out, there were cases, well documented in the media, where AI was biased toward or against certain groups. We have to make sure that doesn’t happen, Stan. You know what I mean?
0:08:00 – (Stan Robinson, Jr): Yeah, absolutely. And as you said, a large part of that is in the training of the models, which happens before they get to us. But the good news is that the level of awareness has been raised so much among the public that hopefully the providers will keep a tighter rein on it, because they’re being much more closely scrutinized than ever.
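For teams evaluating their own tools, the bias concern Bob and Stan raise can be checked in a rough, quantitative way. This is a minimal sketch, not a full fairness audit; the lead data, the group labels, and the choice of a demographic-parity metric are all illustrative assumptions:

```python
# Rough demographic-parity check on a lead-scoring tool's output.
# The records and group labels below are toy data, not real leads.

def selection_rates(leads):
    """Fraction of leads flagged 'qualified', broken out per group."""
    rates = {}
    for group in {lead["group"] for lead in leads}:
        members = [l for l in leads if l["group"] == group]
        flagged = sum(1 for l in members if l["qualified"])
        rates[group] = flagged / len(members)
    return rates

def parity_gap(leads):
    """Largest gap in selection rate between any two groups.

    A large gap is a signal to investigate the model and its
    training data, not proof of bias by itself.
    """
    rates = selection_rates(leads)
    return max(rates.values()) - min(rates.values())

# Toy output from a hypothetical AI lead scorer:
scored_leads = [
    {"group": "A", "qualified": True},
    {"group": "A", "qualified": True},
    {"group": "A", "qualified": False},
    {"group": "B", "qualified": True},
    {"group": "B", "qualified": False},
    {"group": "B", "qualified": False},
]

gap = parity_gap(scored_leads)  # group A: 2/3, group B: 1/3
```

A review process might flag any gap above an agreed threshold for human investigation before the tool's output is trusted.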
0:08:30 – (Bob Woods): Yes, they are. And AI in general should be very closely scrutinized, but bias and fairness is definitely an area that needs an extra hairy eyeball on it, basically. So I definitely agree with that. Second, let’s talk about transparency. Here we’re talking about AI-driven insights and recommendations when it comes to sales. If an AI system suggests a particular sales strategy or predicts client behavior, it also needs to provide explanations of how those conclusions were reached. And there are a couple of different reasons for this. Number one, you want to know where it’s getting this stuff from.
0:09:12 – (Bob Woods): And number two, AI systems still have a tendency to what’s called hallucinate. I don’t want to say lie; lie is not really a good way of putting it, but it tends to create things, I think, in an effort to please whoever is asking the questions. Maybe it’s not coming up with good material, so it’s like, eh, I’ll just say this and hopefully this person will be happy.
0:09:38 – (Bob Woods): But we can’t have that. We need to have transparency so that we know why AI is suggesting or recommending whatever it is recommending. Stan?
0:09:52 – (Stan Robinson, Jr): Yeah. And it’s funny, as simple as it sounds, that you have people programming these sophisticated systems, and you just assume that the people writing the algorithms know how these systems arrive at their conclusions. And unfortunately, that’s not the case.
0:10:13 – (Bob Woods): They don’t know all the time, which is kind of scary. Once upon a time with programming, you programmed exactly what you wanted out of it, and if for some reason it wasn’t giving you what you wanted, it was your dang fault, because you did something wrong in the programming. That’s not really happening anymore, which is interesting and kind of scary at the same time. But you really need to, again, keep that hairy eyeball on things to make sure that, in this case, you know exactly why AI is recommending whatever it is recommending for you.
0:10:49 – (Stan Robinson, Jr): Yeah. And it’s funny, there’s a whole field of study looking at how these machines actually make their decisions and arrive at what they’re telling us. So, yep, big area.
0:11:05 – (Bob Woods): Big area. Yeah. So another big area, number three on our top-four list, is accountability. This is what lets sales teams rely on AI tools without fear of unintended consequences. It means having systems in place to monitor AI decisions and being able to trace and understand those decisions. That actually ties back to numbers one and two, which we just discussed. But there does need to be that human accountability there. So that’s why.
0:11:42 – (Bob Woods): That’s one of the biggest reasons why I don’t think AI is going to take over. If we’re coding accountability very strongly into these systems, there’s just no way it can, because humans still need to be a big part of whatever chain it is that you’re building with AI. If we do let that go, well, we might get to I, Robot. I don’t know.
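The "systems in place to monitor AI decisions" that Bob describes can start as something as simple as a decision log. Here is a minimal sketch; the tool name, field names, and reviewer handle are all hypothetical:

```python
import time

# Hypothetical audit trail: every AI-assisted decision is recorded with
# enough context to trace it back to an accountable human later.

audit_log = []

def record_decision(tool, inputs, output, reviewer):
    """Append one traceable record of an AI-assisted decision."""
    entry = {
        "timestamp": time.time(),  # when the decision was accepted
        "tool": tool,              # which AI system produced the output
        "inputs": inputs,          # what the system was given
        "output": output,          # what it recommended
        "reviewer": reviewer,      # the human accountable for accepting it
    }
    audit_log.append(entry)
    return entry

entry = record_decision(
    tool="lead-scorer-v2",      # illustrative tool name
    inputs={"lead_id": 1042},   # illustrative input
    output="qualified",
    reviewer="bwoods",
)
```

In practice this would write to durable, access-controlled storage rather than an in-memory list, but the idea is the same: every recommendation the AI makes is traceable to the person who acted on it.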
0:12:10 – (Stan Robinson, Jr): Yeah. There’s a good book called Co-Intelligence by Ethan Mollick, a Wharton professor who writes a lot about AI. And one of the things he mentions is to always keep a human in the loop, as you alluded to earlier. We can’t become over-reliant on AI; we can’t throw common sense out the window, because, as you said, these systems will hallucinate, and they do want to please us. And the thing is, when they give us answers that are total fiction, they.
0:12:48 – (Bob Woods): Sound very plausible. Very convincing. It’s like, are you sure this is right? Oh yeah, that’s right. Well, then tell me where you got it from. Well, okay, I created that. It’s like that intern who is working with you on their very first day and just wants to make the best impression they possibly can, and they may let some things slip that aren’t necessarily accurate. Unlike interns, though, with AI you can call them out and even have them train on that so they don’t make the same mistake in the future.
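Mollick's keep-a-human-in-the-loop advice can be sketched as a simple approval gate: nothing the AI drafts goes out until a person signs off. The function and the toy approval rule below are illustrative, not any particular vendor's API:

```python
# Minimal human-in-the-loop gate: AI drafts are only sent after a human
# approves them; everything else is held back for editing.

def review_drafts(drafts, approve):
    """Split AI drafts into (sent, held) based on a human decision."""
    sent, held = [], []
    for draft in drafts:
        if approve(draft):
            sent.append(draft)
        else:
            held.append(draft)
    return sent, held

drafts = [
    "Hi Ana, following up on our call last week.",
    "Dear valued human, I am definitely a real salesperson.",
]

# Stand-in for a reviewer clicking approve/reject in a review UI:
sent, held = review_drafts(drafts, lambda d: "definitely" not in d)
```

The point is structural: the approval callback is where human judgment lives, and nothing the model produces bypasses it.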
0:13:29 – (Bob Woods): Yeah. And by the way, before I forget, if you’re joining us live right now, feel free to put questions or comments in chat, and we will definitely address those as we go along. So, fourth, we have privacy, and privacy is always rearing its head when it comes to anything having to do with tech. Obviously there’s a good reason for that, because we all do want privacy. Here we’re talking about protecting client data as well as corporate data, both of which are paramount in B2B sales.
0:14:03 – (Bob Woods): So our AI systems must comply with data protection regulations. And until there are regulations, we’ve got to really look at this ourselves and basically say, hey, is this the type of stuff that I would want out in public? Chances are the answer is probably no. So until regulations come along, we are definitely at a point where we’re regulating ourselves more than anything else, and you do that, of course, through implementing robust security measures. I think that’s really important, especially the self-regulation part, Stan, because Congress right now is just starting to get into this stuff. I think they’re actually moving faster here than in other areas, where lawmakers usually move really, really slowly.
0:14:55 – (Bob Woods): I think it’s partially because Congress, especially in elections, is also really impacted by AI. So they’re looking at this much more closely and taking it much more seriously. But privacy is obviously very important in terms of using AI in sales and anything having to do with business.
0:15:14 – (Stan Robinson, Jr): Yeah, absolutely. So as you mentioned, one thing that businesses can do right away is put together some type of guidelines for their team, because a lot of their team members are already using AI anyway. But you need to give them some type of guidelines, like, Bob, as you mentioned, please don’t put confidential data into ChatGPT, because it might be used to train future models.
0:15:47 – (Bob Woods): Right, exactly.
0:15:49 – (Stan Robinson, Jr): The legal system and government regulators are scrambling to catch up.
0:15:55 – (Bob Woods): Yeah.
0:15:56 – (Stan Robinson, Jr): And so.
0:15:58 – (Bob Woods): Yeah, absolutely. So that’s a really good segue to talk really quickly about putting rubber to the road when it comes to these four tenets, in other words, really looking at implementing everything we’ve been talking about. I could go into the type of detail here that would put most people to sleep. Back when I was taking my class, we really dove deep into it, and trying to sum everything up for a podcast is actually really difficult.
0:16:33 – (Bob Woods): But overall, if I were to boil it down, I would say it involves using diverse data sets, involving stakeholders in the AI development process, and then continuously monitoring AI systems for ethical compliance. So data in, people in, and then, as data comes out, in whatever form, making sure that it’s all ethically compliant, both going in and coming out. The human element never goes away when using AI, which, again, is a reason why I don’t think we’ll ever see a time when we’ll be bowing down to our proverbial AI overlord, Stan.
0:17:19 – (Stan Robinson, Jr): Yes, that is a hot topic, because you have some of the leading thinkers in AI on both sides of questions ranging from existential risk to how smart these systems are and whether they will ever become sentient. In fact, I think two scientists who shared the Turing Award have opposite opinions as to how big a threat AI actually is.
0:17:52 – (Bob Woods): Yeah, but I think that as long as we’re talking about this, we’re fine; it’s if everyone starts agreeing about this stuff that we’re in trouble. Because you can’t have groupthink on this. You really need to have diverse opinions, with everybody listening to other people’s diverse opinions as well, which is unfortunately a very difficult thing for people to do nowadays.
0:18:18 – (Bob Woods): But we do have to really listen to all sides here, so that we can get AI that is ultimately responsible and really treats people fairly, and everything else that we just discussed.
0:18:36 – (Stan Robinson, Jr): Exactly. Yep. Because I think one thing that both sides can agree on is that everyone needs to become better educated about AI so that we can have these types of discussions as far as what direction we should take it.
0:18:53 – (Bob Woods): Yeah. And that’s a good segue into the next point I wanted us to talk about, which I think you have some thoughts on. Let’s expand this out a little further than just how companies use it, and talk more about society at large and how AI initiatives are affecting the broader community, Stan, like jobs and things like that.
0:19:22 – (Stan Robinson, Jr): Yes. Trying to boil this down also: AI is going to have an impact on areas across society. Whether you’re talking about government, education, legal, business, you name it, AI is going to impact those areas. And jobs, yes. There are some job categories that are probably not going to be here in 18 months, as far as humans doing them.
0:20:00 – (Bob Woods): Right.
0:20:02 – (Stan Robinson, Jr): And we’ll have to be creative about what to do in those situations.
0:20:09 – (Bob Woods): Absolutely. Absolutely.
0:20:12 – (Stan Robinson, Jr): So jobs are just one area. Bob, you’ve already mentioned bias and how these models are trained. That’s why I mentioned that education is going to be so important, and education beyond what the press reports, because the press reports what will sell.
0:20:33 – (Bob Woods): Right.
0:20:33 – (Stan Robinson, Jr): Which sadly, is not always what is accurate.
0:20:39 – (Bob Woods): Yeah. And fear sells, and that’s what they’re going to keep talking about. But people don’t need to be fearful. You just need to educate yourself. You need to figure out not only how to use AI in your job, but how AI can help you in your job in ways that maybe your company doesn’t even know about. Make yourself valuable to your company, in whatever profession you’re in; it doesn’t really matter. There are all kinds of ways that you can do this in sales, but there are other areas, too, where you can figure out how AI can help you with your job in ways that can help others as well. So I think that’s really important.
0:21:23 – (Bob Woods): I think we should also be mindful of the environmental impact of AI, and here I’m talking about the energy consumption of large AI models. I’m not going to get deep into this, because I’m not a real green person or whatever, but I do think we need to keep in mind that AI does need to have that element of sustainability, so that we can minimize negative impacts and contribute positively to the environment with AI.
0:21:52 – (Bob Woods): And I think an extreme example of that not happening is what I always remember from the Terminator movies, when they flash forward to the future and it’s nothing but robots moving around. It’s an absolute hellscape, because obviously nobody cares about the environment; AI doesn’t care about the environment. I’m not saying things are going to get like that, but we do need to at least keep that hairy eyeball on environmental impact, because once upon a time people were talking about this with cloud computing, and then they were talking about it with cryptocurrency, and those are still going on, too. Now you’re putting AI on top of all this stuff.
0:22:31 – (Bob Woods): We just really need to keep an eye on it and make sure that we’re not messing things up more than we’ve already messed them up.
0:22:41 – (Stan Robinson, Jr): Yep, yep.
0:22:43 – (Bob Woods): And yes, I want to get to one really quick case study of responsible AI, because since we’re talking about it, we should mention that it’s actually being used out there. Companies are actually doing this; we’re not talking about pie-in-the-sky stuff. This is actually happening. There’s a company called AltaML. I think that’s how it’s pronounced: A-L-T-A-M-L.
0:23:13 – (Bob Woods): This is a company that builds AI tools for businesses. Hey, that’s us. They’ve partnered with something called the Responsible AI Institute, and they have conducted an organizational maturity assessment, say that three times fast. It helps them implement robust AI governance structures. It also helps them enhance their market differentiation and reinforces their commitment to responsible AI practices. So they’ve got all of this going on at the same time.
0:23:47 – (Bob Woods): But they’re being very proactive in terms of actually doing something about it, because they’re focusing on mitigating risks and enhancing trust as well. They’re actually now known, evidently, for being this type of company. So with everything that you’re doing in terms of responsible AI, hey, you can use that as well and say, we are doing it the right way; here’s how we’re doing it.
0:24:16 – (Bob Woods): You know, “do business with us,” for lack of a better phrase. To let people know that you are using AI but you’re doing it in the most responsible way possible, I think there’s nothing but good to be had there.
0:24:32 – (Stan Robinson, Jr): Yep. Yep. Great point. Because there will be others who will not be using AI responsibly. Yes.
0:24:39 – (Bob Woods): Yeah. That is an unfortunate but true statement. So I think we’re just going to wrap things up here and say that every day, and essentially in every way, we are transforming the sales landscape with AI. We’re getting incredible opportunities for growth as well as efficiency. But, as the old phrase says, with great power comes great responsibility.
0:25:07 – (Bob Woods): So we need to navigate the ethical boundaries of AI; there’s just no other way around it. When we do this, we can harness this potential while maintaining the trust and integrity that are the bedrock of successful sales. And like I said before, you can actually promote this fact as well, so that people say, hey, they’re using AI, but they’re really keeping an eye on it. And that’s good.
0:25:40 – (Bob Woods): So AI, I think, can be good, but we do really need to keep an eye on ethical considerations when it comes to AI. Stan, do you have anything else?
0:25:50 – (Stan Robinson, Jr): That’s a great summary, I think we can agree. Just keep the human in the loop. Keep humans in the loop as much.
0:25:59 – (Bob Woods): As you possibly can. So thanks again for joining us on this episode of making sales social live. If you’re with us live on LinkedIn, YouTube, Facebook or X right now, we do this every week, so keep an eye out for our live sessions. If you’re listening to us on our podcast, it means that we’re recorded and hopefully you’ve subscribed to us already. If not, go ahead and hit that subscribe or follow button.
Outro
Don’t miss an episode. Visit the Making Sales Social podcast. Leave a review down below. Tell us what you think, what you learned, and what you want to hear from us next. Register for free resources at linkedinlibrary.com. You can also listen to us on Apple podcasts, Spotify, and Google Play. Visit our website, socialsaleslink.com for more information.