How Do We Build Trust in AI as It Becomes a Central Part of Daily Life?

As the world navigates the profound shift toward AI-enabled work and learning, Kogod faculty are asking a critical question: how do we build trust in technologies that increasingly shape our daily lives?

Artificial intelligence is transforming how we learn, work, and live—and its rapid expansion has sparked urgent conversations about trust, responsibility, and the future of the American workforce. 

Why Is AI Adoption Rising While Public Trust Remains Low? 

Over the past year, the use of generative AI tools has surged. Many of us now rely on platforms like ChatGPT to refine a recipe or use Gemini to map out travel plans. At the same time, organizations across the country are integrating AI into workflows to streamline operations, analyze data, and unlock productivity gains once considered impossible. 

Yet even as adoption accelerates, skepticism has grown. An April 2025 Pew Research Center study found that only 17 percent of US adults held a positive view of AI’s long-term impact. Confidence was even lower regarding AI’s influence on elections and civic life. 

Meanwhile, businesses continue moving full speed ahead. By November, 88 percent of companies surveyed by McKinsey reported using AI in at least one business function—evidence that AI is no longer experimental but foundational. 

This divide raises a critical question: How can we build trust in AI—and in each other—as these tools become deeply embedded in work, education, and society? 

How Does Trust Shape the Future of AI? 

For Alexandra Mislin, professor at the Kogod School of Business at American University, the answer begins with understanding the role trust plays in human relationships—and how that extends to our interactions with technology. 

“Trust is fundamental to all human relationships,” Mislin said. “And now that trust needs to be extended to our relationships with machines, so to speak.” 

Mislin and several coauthors recently published a paper titled "Global Research Collaboration Suggesting Framework to Examine Trust in Artificial Intelligence," urging cross-industry research to establish norms around AI usage. They argue that as AI becomes a deeper part of daily life, society must confront “grand challenges” that require a reevaluation of how trust is built, maintained, and sometimes breached. 

AI’s risks and opportunities both hinge on trust. Misuse of AI by colleagues, students, or leaders raises ethical red flags; yet ignoring AI entirely may impose a steep opportunity cost for organizations trying to stay competitive. 

“Trust in AI really matters because it’s already in the systems having an impact on people’s everyday lives,” Mislin said. “It affects decisions across finance, hiring, healthcare, and education.” 

How Is Kogod Preparing Students to Use AI Responsibly and Effectively? 

At American University’s Kogod School of Business, AI is not a distant concept—it is a curricular and cultural transformation. Kogod has launched one of the most comprehensive AI initiatives in business education, reshaping programs across undergraduate and graduate levels. 

Kogod’s approach focuses on two parallel goals: 

  • AI proficiency – ensuring students can identify use cases, evaluate tools, and prompt effectively 
  • AI stewardship – preparing students to engage critically, ethically, and responsibly with emerging technologies 

“We want to make sure that our students graduate with AI fluency in whatever discipline they choose to go into,” said Casey Evans, Kogod's associate dean for undergraduate programs and student services. 

Evans notes that AI fluency requires more than technical skill. It also demands curiosity, skepticism, and the ability to challenge machine-generated output. 

“Not only how to use it, but helping them poke holes in what the machine puts out,” Evans said. “That’s where they’re going to add value to their employer—by using AI fluency and then enhancing it to meet organizational needs.” 

Through redesigned coursework, new faculty expertise, and partnerships with leading AI innovators, Kogod is preparing students to enter a business world where AI literacy is just as essential as data literacy or strategic thinking. 

Why Is Building Trust in AI a Societal—Not Just Technological—Challenge? 

Mislin believes Kogod’s holistic approach offers a model for how institutions across sectors should prepare for an AI-driven future. Trust cannot be built by fearmongering or blind acceptance; it emerges from thoughtful engagement, transparency, and education. 

“We know our students are using these tools,” Mislin said. “So we have to be creative in ensuring we’re teaching them to think with these tools, and to use them well—not to blindly trust them, not to fear them, but to work with them.” 

The challenge ahead is not simply integrating AI into daily life—it’s integrating it responsibly. That requires building shared norms, strengthening cross-sector collaboration, and equipping the next generation of leaders with the skills needed to navigate uncertainty with confidence. 

Kogod’s ongoing work reflects that mission. By cultivating AI fluency and fostering critical thinking, Kogod empowers students to shape a future where AI strengthens—not undermines—our social and organizational systems. 

As AI becomes as commonplace as email and search engines, trust will be the foundation on which innovation stands. Kogod is preparing students to build that foundation—thoughtfully, ethically, and with an eye toward the profound societal changes ahead. 

Learn more about AI at Kogod.