The Vaughn Tan Rule: A Simple Framework for Working with AI

I have a friend who manages the customer contact centre for a large MNC. Last year, senior leadership issued him a directive: “Make sure everyone uses AI in their work.” So my friend gave his entire team access to an AI tool and told them to integrate it into their daily work.

The results were disastrous.

Customer complaints shot up. Cases that used to take hours were now dragging on for days, and the team’s overall productivity tanked overnight. When my friend dug into what was happening, he discovered that his staff were simply copying customers’ emails into the AI, generating a response, and firing it back without reading it, let alone editing it.

The technology wasn’t the problem. The issue was how the call centre workers were using it. They’d outsourced the wrong part of their job.

This story led me to discover a simple framework that’s changed how I think about AI at work: the Vaughn Tan Rule.

The Vaughn Tan Rule

I first came across the Vaughn Tan Rule in Cedric Chin’s blog post, “How to Use AI Without Becoming Stupid.” It states:

“Do NOT outsource your subjective value judgments to an AI, unless you have a good reason to, in which case make sure the reason is explicitly stated.”

The key phrase here is subjective value judgments: how you decide what is good or bad, what’s worth pursuing, or what’s better than something else. According to Tan, current AI systems are primarily useful for synthesising information and recognising patterns. They do not possess consciousness, agency, or the ability to make meaning.

This was the cardinal rule my friend’s call centre staff were breaking. They outsourced the inherently human part of their job (replying to customers with empathy) to an AI. They should have used AI to summarise case details, verify policies, or draft a first-pass email. But pressing “send” on the final response? That still required their judgment.

I see the same dynamic in my work as a media seller. There’s no single “right” way to craft a strategy, pitch for new budgets, or navigate the politics within client teams. Sure, there are best practices. But the best salespeople I know are the ones who can make judgment calls when there’s no clear right answer: Should I push for this budget now or wait? Is this client being political or genuinely concerned?

These decisions require intuition, experience, and human empathy. We can’t outsource them to AI and then point fingers when things go wrong. That would be like a car mechanic blaming his spanner when an engine fails.

What This Looks Like in Practice

Here’s where I’ve learnt this lesson the hard way.

I once used AI to prepare for a meeting with a new C-level stakeholder. I uploaded the client background and our work history, then asked AI to draft some discovery questions I could ask during the meeting. The results weren’t terrible, but they were just… odd.

For example, it suggested I ask: “I know we’re facing technical blockers around implementing feed tracking. What are your thoughts on the potential for this project, and how can we help you overcome this blocker?”

On the surface, this question sounds fine. But an experienced seller would spot two problems with it:

  1. A C-level stakeholder wouldn’t care about this granular level of detail.
  2. Asking “how we can help” with an issue like this is usually a waste of time. If the client knew how we could help, they would have already asked us. A better question would be: “Who can we speak with to unblock this, and what do they care about?”

I’d assumed that uploading enough context would give AI everything it needed. And yes, more context does improve AI’s responses. But AI struggles to differentiate which context matters versus which should be ignored. The technical blocker was mentioned in my context document, but AI didn’t grasp that it wasn’t appropriate to surface in this conversation.

I scrapped the AI-generated questions entirely. Instead, I went old-school and spoke to other account teams who’d worked with this stakeholder before. Then I used their experience combined with my intuition to craft some broad talking points. That approach worked far better.

Looking back, I’d fallen into a trap. I was hoping AI would help me quickly prepare for discovery meetings, saving me time and mental energy. But after multiple iterations of prompting and re-prompting, the output still felt off. Sometimes, going with my gut and focusing on connecting at a human level works better than overpreparing by chatting endlessly with an AI.

Contrast this with where AI actually shines. For example, NotebookLM has become a lifesaver for product-related knowledge. I can upload dozens of PDFs, docs, and decks, then query it when a customer throws me specific questions like, “Do custom segments apply for Customer Match first-party audiences when activated on App Campaigns for Engagements?” This saves me hours of digging through documentation.

What’s the difference between where AI works and where it doesn’t?

  • In the first case, I was asking AI to make a subjective judgment about which questions would land well in a specific sales context, which requires experience and intuition.
  • In the second, I was asking it to retrieve and organise information that already existed. That doesn’t require any subjective judgment, and it speeds up the grunt work of sifting through information.

The Exception: When You Can Outsource Judgment

The Vaughn Tan Rule includes an important caveat: “unless you have a good reason to (outsource your subjective decision-making), in which case make sure the reason is explicitly stated.”

There are legitimate situations where outsourcing judgment to AI makes practical sense. Take summarising my meeting notes. Technically, this involves subjective judgment to decide which details to include versus omit. But I choose to outsource this task to AI because it’s not the highest-value activity I could spend my time on.

I explicitly accept faster meeting notes as the benefit, with the trade-off that AI sometimes misreads nuances. For example, a client might say, “Let’s revisit this next quarter.” An experienced seller knows this is often a polite way of saying, “No, I’m not interested.” But AI might flag it as an action item: “Lionel to bring up Product X discussion in 3 months.”

I’ve learnt to scan AI summaries with this limitation in mind, accepting the risk in exchange for more time spent with customers or on crafting pitch positioning. The key is being deliberate about the trade-off.

What This Means for Our Jobs

My friend eventually fixed the mess at his call centre. He didn’t ban AI, but he changed how the team used it.

Now, the call centre staff use AI to summarise case details and draft initial responses. But every email goes through a human review where the staff member asks: “Does this response actually address what the customer is feeling? Does it solve their problem?” The final send button requires human judgment.

This is why I’m optimistic about AI’s role in our careers, not anxious.

When calculators were invented, many mathematicians feared that their jobs would become obsolete. Instead, calculators eliminated tedious arithmetic and freed mathematicians to tackle more complex, creative problems.

Technology automates tasks, but jobs are complex systems of interconnected tasks that require human judgment to orchestrate.

I’m already seeing this shift in my own role. I spend less time on reporting, summarising, and hunting for information. That time gets reallocated to higher-value work: understanding client motivations, navigating organisational dynamics, crafting narratives that resonate.

AI is a powerful processor, not a judge. The future of work isn’t about being replaced by AI. It’s about leveraging it to free ourselves up for the deeply human, strategic work of meaning-making.

In your job, what are you processing, and what are you judging? In the parts where you’re processing information, AI can probably help. And in the parts where you’re making subjective calls about value, quality, or meaning?

That’s the part that only you can do.
