Neil
How do you move beyond AI experimentation and actually prove it's delivering value inside a business?
Well, my guest today has a somewhat unique perspective on this question.
Her name's Angela Virtu, and she is a professor of IT and analytics at American University's Kogod School of Business, where she also serves as the AI Instruction Faculty Fellow.
Not only that, she's also helped lead one of the first large-scale AI transformations in higher education, while bringing hands-on experience from building AI solutions inside fast-growing SaaS companies.
So she's been on both sides of the fence here.
But in our conversation today, I want to learn more about why Angela believes that 2026 feels like a turning point for AI adoption, why so many organizations are stuck in what she calls pilot purgatory, and why focusing on real business problems and measurable outcomes is the only way forward.
And we'll also explore the shift towards vertical AI, the growing importance of governance, and how leaders listening can balance innovation with trust in a rapidly changing landscape.
But now on with today's show.
Let me introduce you to Angela right now.
So thank you for joining me on the podcast today. Can you tell everyone listening a little about who you are and what you do?
Angela
Hi, Neil, thanks for having me.
So my name is Angela Virtu. I'm a professor at the Kogod School of Business, where I teach artificial intelligence, and I'm also in charge of our culture-building around AI.
Neil
Well, it's a pleasure to have you join me today because I really love taking a unique angle on the whole AI conversation. There's no avoiding it, right?
You're someone who's worked both in academia and building AI solutions inside fast-growing SaaS companies. I've got to ask—how has that combination shaped the way you think about AI adoption in businesses today?
Angela
Before I joined academia, I worked in tech startups where I was building AI solutions both internally and for customers.
That experience gave me a deep technical foundation—how AI works and its limitations. What carried over into academia is a solution-oriented mindset.
In industry, everything had to solve a business problem and be measurable with ROI. We love building things, but if it doesn’t solve a problem, there’s no point.
Neil
That’s so refreshing—starting with the problem first.
As AI Instruction Faculty Fellow at Kogod, you helped guide one of the first large-scale AI transformations in higher education. What did that process look like, and what lessons apply to businesses?
Angela
It started three years ago when our Dean brought in executives like Rod Smith from Microsoft. They told us AI is as important as electricity.
Even if that’s only 1% true, it has massive implications—especially in higher ed where our goal is workforce preparedness.
We brought in faculty across disciplines, trained them on AI, and aligned on what students need to succeed.
The biggest challenge was culture change. Faculty are often resistant to change. AI exposed existing problems we could no longer ignore.
The key was leadership willing to experiment—even if things fail.
Neil
What about students—are they excited or overwhelmed?
Angela
It’s a spectrum. About 10% are early adopters, 80% will experiment with guidance, and 10% resist.
There’s anxiety about job placement, but overall it mirrors faculty behavior.
Neil
Many organizations are stuck in “pilot purgatory.” Are we at a turning point in 2026?
Angela
Yes—we’re entering the “trough of disillusionment.” Companies spent heavily on pilots, but results are mixed.
The shift is happening in software engineering—AI is making developers 10x more productive.
Next is vertical AI—industry-specific systems for law, marketing, and more. That’s where real operational value will come from.
Neil
What technical developments excite you most?
Angela
Two things:
First, orchestration—structured workflows combining prompts, tools, and data in a scalable way.
Second, physical models—AI trained on real-world actions like making coffee or folding clothes. This will push robotics and potentially lead toward AGI.
Neil
What should companies measure to prove AI ROI?
Angela
Move beyond adoption metrics—they’re weak indicators.
Start with the business problem and existing KPIs. For example, in customer support: resolution time, accuracy, success rate.
Then run A/B tests comparing human vs AI performance. That’s how you prove value.
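Angela's suggestion—start from existing KPIs, then compare human-handled and AI-assisted work side by side—can be sketched in a few lines. The ticket data, field layout, and numbers below are entirely hypothetical, just to illustrate the shape of the comparison:

```python
# Sketch: comparing human vs AI-assisted support on existing KPIs
# (resolution time and first-contact success). All data is made up.
from statistics import mean

# Each ticket: (resolution_minutes, resolved_on_first_contact)
human = [(42, True), (55, False), (38, True), (61, True), (47, False)]
ai_assisted = [(21, True), (35, True), (18, False), (26, True), (30, True)]

def kpis(tickets):
    """Reduce a list of tickets to the two KPIs under comparison."""
    times = [t for t, _ in tickets]
    return {
        "avg_resolution_min": mean(times),
        "success_rate": sum(ok for _, ok in tickets) / len(tickets),
    }

baseline, treatment = kpis(human), kpis(ai_assisted)
print("human:", baseline)
print("ai:   ", treatment)
```

In a real A/B test you would randomize ticket assignment and check statistical significance before claiming ROI; this only shows the metric plumbing.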
Neil
Let’s talk governance—what does good AI oversight look like?
Angela
A distributed model, similar to cybersecurity.
A central team sets standards and response plans, but every employee shares responsibility.
This creates culture buy-in and better outcomes.
Neil
How can companies balance innovation with risk?
Angela
First, build closed systems to protect proprietary data.
Second, prioritize transparency and trust. A bad example is AI pretending to be human—like some customer service systems. That erodes trust.
Neil
Looking ahead—what trends should leaders watch?
Angela
Trust will be everything.
Companies will need to serve both human customers and AI agents. This creates a new “AI agent economy.”
Some companies are already creating AI-readable website layers to communicate with agents.
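One emerging convention for this kind of AI-readable layer is a plain-text manifest served at the site root (the proposed `llms.txt` format). The snippet below is a hypothetical sketch of what such a file might look like—the site name and URLs are invented, not from the conversation:

```markdown
# Acme Support

> Machine-readable index so AI agents can find the right pages.

## Docs
- [API reference](https://example.com/docs/api.md): endpoints and authentication
- [Pricing](https://example.com/pricing.md): current plans and limits
```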
Neil
Let’s bust a myth—what frustrates you most?
Angela
AI isn’t the enemy—existing systems are.
AI exposes flaws in how organizations are built. Until we rethink those structures, we’ll keep seeing friction.
Neil
Where can people find you?
Angela
LinkedIn at Angela Virtu, and Kogod Business on social channels.
Neil
Thanks for joining me today.
One thing that stood out was Angela’s focus on discipline and clarity. AI moves fast, but business fundamentals still matter.
If you can’t tie AI to a real problem and measurable outcome, you’ll get lost in the noise.
And governance is becoming a shared responsibility—not just a centralized function.
Where does your organization stand? Are you still experimenting or seeing real results?
Let me know at TechTalksNetwork.com.
Thanks for listening—I’ll be back tomorrow.