Episode 460: AI Meets Philosophy: How Leaders Can Think, Talk, and Thrive with AI
Cristina DiGiacomo returns to the Making Sales Social podcast to explore a topic that’s both timely and transformative: AI through a philosophical lens. Known as an AI philosopher and the creator of the “10 + 1 Commandments of Human–AI Coexistence,” Cristina shares why leaders must consider not just the technical, but also the ethical and human dimensions of AI.
In this episode, you’ll learn:
- Why philosophical thinking is essential for AI innovation.
- How sales professionals are naturally philosophers in their practice.
- The importance of keeping humans in the AI loop.
- How to approach AI responsibly and avoid fear-driven paralysis.
- Real-world examples of upskilling and thriving in an AI-driven marketplace.

Whether you’re a leader, a salesperson, or simply curious about AI’s impact on human behavior and decision-making, this conversation offers practical wisdom and a mindset shift for navigating the AI revolution.
View Transcript
Cristina DiGiacomo – 00:00
Philosophy encourages us to ask “why” or “why not.” It pushes us to have deeper conversations.
Intro – 00:12
Welcome to the Making Sales Social Podcast, featuring the top voices in sales, marketing, and business. Join Brynne Tillman and me, Bob Woods, as we bring you the best tips and strategies our guests share with their clients—so you can leverage them for your own virtual and social selling. Enjoy the show.
Bob Woods – 00:37
Cristina DiGiacomo, happy to have you back with us today.
Cristina DiGiacomo – 00:42
Thanks for having me back. I really appreciate it.
Bob Woods – 00:44
Of course! Cristina joins the growing list of two-time interviewees here on the Making Sales Social Podcast. Last time, she spoke about philosophy and leadership. Today we’re adding a layer that is hot, hot, hot right now—AI and philosophy. Cristina is known as an AI philosopher, the creator of the “10 + 1 Commandments of Human–AI Coexistence,” and someone who doesn’t just talk about AI—she talks to it. We’re going to dig into what that means and why leaders need to think about AI not just technically, but philosophically.
Cristina, welcome back to the Social Sales Link virtual studios.
Cristina DiGiacomo – 01:36
Hi Bob, hi everyone.
Bob Woods – 01:39
Great to have you.
Last time, you talked about how philosophy helps leaders cut through complexity by exploring truth, wisdom, and deeper thinking. Now apply those same principles to AI—which is no longer just part of the mix, but becoming a major ingredient.
Cristina DiGiacomo – 02:15
Philosophy encourages us to ask deeper questions—to understand situations, people, and what’s happening around us. In the AI space, I speak with Chief AI Officers, CISOs, creators, and developers. What I’m hearing from them is that they’re desperate for philosophical conversations—and those conversations are often missing from the process of building AI tools.
Whether it’s healthcare, finance, or any other industry, philosophy is vital before anything gets built. Teams need to ask:
- What are we building?
- Why are we building it?
- How will actual human behavior interact with this technology?
- What is human behavior in the first place?
Engineers are excellent at execution, but innovation also requires wisdom. Philosophy supports that.
Bob Woods – 04:27
When engineers start talking about philosophy, you know something big is happening. And that’s not a knock on engineers—I love engineers. But it shows how fundamental this is across the organization.
Cristina DiGiacomo – 04:54
Exactly. I’ve been practicing philosophy for 14 years and teaching it to business leaders for six. What surprised me most is how many “closet philosophers” there are. It turns out many people are already asking deep questions—they just don’t see it as philosophy.
I’m giving them permission to come out of the woodwork. Especially in the AI space, people are eager for deeper conversations about their work.
Bob Woods – 06:30
Our audience is sales-focused. Do you interact much with sales teams? What are they saying about philosophy?
Cristina DiGiacomo – 06:46
Yes, I do. Salespeople underestimate how philosophical their work already is. Their role is to discover, learn, and understand customers—this is deeply philosophical. When I introduce them to things like the Socratic method, they say, “Wow, that’s what I already do—I just didn’t have a name for it.”
Salespeople are true students of human behavior. That’s the essence of philosophy.
Bob Woods – 08:14
Interesting. When it comes to AI, you emphasize that AI is neutral—it’s humans who make the choices. How should leaders handle that responsibility? And how should sales leaders talk to their teams about it?
Cristina DiGiacomo – 08:54
First, leaders must acknowledge that responsibility. Many people treat AI as something separate from themselves. It’s our human blind spot—we forget we are part of the system. We’re the olive oil in the pan—the foundation everything is built on.
Once leaders recognize that, better conversations follow:
- How are we using AI?
- What is its impact on our team?
- How does it affect our customers?
When I bring philosophy into organizations, I’m careful. People often have misconceptions about it. So I focus on drawing out what they already know and giving them mental models to refine their thinking.
The first rule of having philosophical conversations at work is—you don’t talk about philosophy.
Bob Woods – 11:31
(laughing) The first rule of Fight Club—you don’t talk about Fight Club.
Cristina DiGiacomo – 11:33
Exactly.
Bob Woods – 11:38
One thing that stood out is that humans must remain in the loop. We’re seeing examples—like at LinkedIn—where AI appears to shut down accounts without any human review. That’s dangerous. Humans need to stay involved, even at the “simple customer service” level.
Agree?
Cristina DiGiacomo – 13:07
Here’s my hypothesis: organizations often look at AI through the lens of efficiency. They take all “mundane” or “heavy-lift” tasks and decide to AI-ify them. But some of those tasks shouldn’t be automated—customer service is a great example.
Companies often overlook the impact on the end user. And with platforms like LinkedIn, market dominance allows them to take risks others can’t. But the reputational damage is real.
Just because something feels mundane internally doesn’t mean it should be handed to AI.
Bob Woods – 16:29
Let’s shift to your idea of not talking about AI, but talking to it. What does that mean, and how should leaders think about it?
Cristina DiGiacomo – 16:49
I’m advocating for a true partnership between humans and AI. My “10 + 1 Commandments” outline how humans should think about what we’re doing to and with AI.
When I say “I talk to AI,” I mean:
- I respect the technology and its power.
- I’m conscious of who I need to be to use it responsibly.
- I include myself in the equation rather than treating AI as something separate.
Bob Woods – 18:19
I tell people the same thing at the prompting level—converse with AI. It can surface things you didn’t even realize. That can be a lightning-bolt moment.
Cristina DiGiacomo – 19:38
Exactly. And I’m constantly surprised at how recklessly people talk about AI. The misinformation is rampant.
For example: “AI is taking our jobs.”
That’s not the full picture. Companies are fighting to survive in a tough economic climate, and AI helps them do that. It’s not AI—it’s the bottom line.
Take Fiverr. They laid off 30% of their workforce—not because they “want to be an AI company,” but because they need AI-forward employees to survive. Some roles simply couldn’t be upskilled.
This affects sales too. Selling has become harder. Salespeople need to show they contribute to the bottom line—and they need to upskill.
The fearmongering has paralyzed people. They’re terrified instead of taking action. And they blame AI for being left behind.
Bob Woods – 23:52
There’s that saying: “AI won’t take your job—but someone who knows AI might.”
Cristina DiGiacomo – 24:08
Exactly. This isn’t new. Throughout history, people who adapt thrive, and people who don’t get left behind. This has always been true.
I’m sympathetic—this came out of nowhere. But now isn’t the time to stand still.
Bob Woods – 25:39
People need to grow with the technology. Ironically, AI can even help them grow.
Cristina DiGiacomo – 25:53
Yes! Let me give an example:
A young woman in my AI mastermind works at a media company. No college degree—just a high school diploma. She started in sales, got curious about AI, embraced it, and now she’s the Head of AI for the entire sales organization. She even runs her own AI networking groups.
If she can do it—anyone can.
Bob Woods – 27:13
People feel overwhelmed. But if they learn how to talk to AI, they can unlock incredible insights.
Cristina DiGiacomo – 27:48
Right. It’s just like when the internet or email first arrived. People said, “This looks hard.” Look at us now.
I encourage people to start small—use ChatGPT for everyday tasks, like recipes.
Bob Woods – 28:26
Yeah, start simple. I like that, actually. Start simple.
Cristina DiGiacomo – 28:30
Yeah, start simple and start relevant.
Bob Woods – 28:35
Yep. And then you’ll learn how to do things from there. So let’s get back to AI and philosophy for a minute, because we had a pre-show conversation where we touched on a great question: if Socrates were alive today, what questions would he ask an AI before deciding to trust it? This is doubly interesting because that question actually came from the prep I do with AI—AI came up with that question for itself, which is wild. But I liked what you said, which was that his first question would be, “What is wisdom?” Can you expand on that and why it matters in our AI age?
Cristina DiGiacomo – 29:30
First of all, Socrates is like my BFF. If I know him, he would ask, “What is wisdom?” Because that’s what he did in his life—he went around asking people, “What is wisdom?” It got him in trouble, but it was his essential question. He spent his life seeking the answer to it. I think it’s poignant for him to ask this question of AI because we are not teaching AI what wisdom is. It only knows wisdom from what it has learned across the compendium of ideas and thoughts of the past 2,500 years. Socrates would want the AI to give its own opinion, which it probably can’t, because it has no moral agency or experiential understanding of wisdom. Wisdom is experiential, so it would be a fascinating interaction. He would love the dialog format, like ChatGPT, because he created the idea of dialectics. He’d probably be glued to the computer for days.
Bob Woods – 31:48
Yeah, definitely. I just typed this into ChatGPT. I was using the deep thinking model, asking, “What is wisdom?” It gave a definition and said: “I think of wisdom as a discipline and a feedback loop: clarify your aim, test your epistemic ability, seek disconfirming evidence, choose a measured move, review consequences, learn, and repeat. Apply five quick Socratic tests: What do I truly know? How could I be wrong? What value am I serving? Would I stand by this in public? Who else is affected, and how might they see it? What are the second-order effects? If advising a friend, would I give the same counsel? If a choice survives these, it’s trending wise.” Beyond that, it just throws philosophy words at me that I don’t recognize.
Cristina DiGiacomo – 33:31
What do you think about that response?
Bob Woods – 33:33
Oh my God. I think it’s spitting out an answer without really thinking. That’s probably the best way to put it—a very ChatGPT answer.
Cristina DiGiacomo – 34:00
Exactly. The operative word is “academic,” because that’s what it’s pulling from. You’re not getting a sense of what it really thinks or its experience of wisdom. That’s okay; that’s where AI is in its evolution. When I teach philosophy classes, I ask students, “What does it mean to be a wise person?” The answers are personal—open-minded, thoughtful, attentive—based on their experience. They don’t talk about epistemics. This shows that AI hasn’t developed the language or acumen to express wisdom in a relatable human form.
Bob Woods – 35:48
Quick side question—what is “epistemic” for those of us who don’t know?
Cristina DiGiacomo – 35:54
I’m not a theoretical philosopher. Epistemology is the branch of philosophy about knowledge—how do we know what we know? “Epistemic” comes from “epistemology.”
Bob Woods – 36:29
Okay, sounds good. That could be another podcast episode. Do you think it’s possible or desirable to teach AI wisdom?
Cristina DiGiacomo – 36:44
Absolutely. It’s critical to teach AI wisdom and thoughtfulness, to weigh consequences. That’s what my work is about—principles and values for teaching AI well. We shouldn’t make arbitrary decisions that seem correct but aren’t wise.
Bob Woods – 37:44
I agree. That leads into your 10 plus one commandments of human-AI coexistence, which we’ll link in the show notes at 10plusone.ai. What led you to create this?
Cristina DiGiacomo – 38:14
It started as a project but became my life’s work. My 25 years in corporate, tech, philosophy study, and AI passion all led to this. I was frustrated after a year of mass-market AI. I wanted guidance or a framework, but experts had conflicting opinions, which caused confusion and fear. I realized humanity hasn’t had a new set of “commandments” for thousands of years. We now have a powerful technology that could change everything—maybe it’s time for new principles focused on human character and behavior with AI. That’s how the 10 plus one came about. I couldn’t use the 10 Commandments, so I added one.
Bob Woods – 41:49
You could do a show on each commandment. The link is 10plusone.ai. Which commandment challenges leaders the most?
Cristina DiGiacomo – 42:25
Number one, “Own AI’s outcomes,” is the hardest to grasp because of the blind spot—you’re part of the consequences. Number two, “Do not destroy to advance,” gets the strongest reaction. People think I mean don’t disrupt anything. But I mean don’t destroy ecosystems, pictures, or knowledge that we can’t recover. Disruption is fine if the original intelligence remains. Leaders react strongly because of the Silicon Valley culture of “disrupt everything.” My question is: are we destroying something irretrievable, and are we okay with it?
Bob Woods – 45:13
I can imagine. You also said AI comes at the best of times and the worst of us. What do you mean?
Cristina DiGiacomo – 45:52
It came at the best of times because we need progress. It came at the worst because our social behaviors are fractured, and AI is learning from our messy digital history. We’ve created a degenerative digital sphere full of our worst tendencies, especially in the U.S., where much AI innovation happens.
Bob Woods – 48:51
So what do we do?
Cristina DiGiacomo – 48:55
We start with ourselves. We do what we can to act rightly. That’s why I wrote the 10 plus one—to encourage ethical behavior in the AI space. As an AI optimist, I must be a human optimist too. The future can be positive and amazing if we step up now.
Bob Woods – 50:17
Amen. One takeaway for business and sales leaders implementing AI?
Cristina DiGiacomo – 50:54
Set up a meeting. Get everyone in the room and ask, “What are we doing here?”
Bob Woods – 51:05
Simple question, but powerful. Where can people connect with you?
Cristina DiGiacomo – 51:30
Connect with me on LinkedIn—I post a lot there.
Bob Woods – 51:43
We’ll link your profile and 10plusone.ai in the show notes. AI philosopher and creator of the 10 plus one commandments of human-AI coexistence, Cristina DiGiacomo, thank you for joining us today.
Cristina DiGiacomo – 52:17
Thank you for having me. Bye, everyone.
Bob Woods – 52:20
Thanks for streaming this episode of Making Sales Social. Remember to make your sales social and don’t be afraid of AI.
Outro – 52:34
Bye. Thanks for watching. Join us again for more special guests bringing marketing, sales training, and social selling strategies. Hit subscribe for the latest episodes. Give this video a thumbs up, and comment what you want to hear next. Listen on Apple Podcasts, Spotify, YouTube Music, and Amazon Music. Visit our website, socialsaleslink.com, for more information.