Google President Kent Walker Joins Kogod Dean to Discuss Responsible AI

After the fireside chat, a panel discussion tackled critical questions about innovation, privacy, and ethics with artificial intelligence.


Kogod School of Business Dean David Marchick and Google's President of Global Affairs Kent Walker.


In front of an audience of students, faculty, staff, and alums from across American University, top on-campus experts joined leaders from one of the world’s most powerful companies to discuss artificial intelligence's benefits—and potential ramifications.

Google President of Global Affairs Kent Walker joined Kogod School of Business Dean David Marchick for a February 24 fireside chat called “Responsible AI.” Their discussion, and a subsequent panel, sought to answer one of the questions most central to the future of tech: Can artificial intelligence (AI) be both transformative and responsible?

The conversations were timely, coming amid what Marchick called a "frenzy" over AI, particularly given the recent rise to prominence of the AI-driven chatbot ChatGPT.

"AI has been on the front page of newspapers almost every day for the last month," Washington College of Law (WCL) Dean Roger Fairfax said in his opening remarks, noting the forum's relevance across campus: at Kogod, where faculty study and teach how business leaders use technology and data to make responsible decisions; at the School of Public Affairs (SPA), which focuses on emerging policy issues related to AI; and at WCL, which identifies potential challenges and solutions at the intersection of law and technology.

“How do you balance this issue of transformation and responsibility?” Marchick asked Walker, a 16-year veteran of Google, who’s seen the tech giant grow from 5,000 employees to around 180,000 during his tenure.

“I think we need to be very thoughtful about the implications, the new laws, the new regulations, but also the new social mores,” Walker answered.

"We need to ask how we create a new era of digital literacy, so people are using the tools in good ways, but are also appropriately skeptical about potential misuse."


Kent Walker

President of Global Affairs, Google

It's a critical balancing act, one reflected in the Federal Trade Commission's February 27 memo titled "Keep your AI claims in check," which warned businesses against using AI-driven programs for biased or discriminatory purposes or exaggerating those programs' potential benefits.

An ensuing panel discussion centered on those very issues: privacy, inclusive design, and the need for public-private partnerships to accelerate advances in AI and related technologies.

“How do we make sure they’re not falling into the wrong hands, and how do we protect them?” asked panel moderator SPA Dean Vicky Wilkins.

“I think it starts with how we in the private sector build and then release them,” said panelist Karan Bhatia, Google Vice President of Government Affairs and Public Policy.

An example of such an ethical decision point came earlier in the forum, when Walker noted that Google had released open-source data supporting AI lip-reading technology to assist hard-of-hearing users, but chose not to publish data that could enable lip-reading from a football field away at an angle, a capability that has sparked concerns about potential monitoring by authoritarian regimes.

It's the sort of ethical crossroads that panelist Heng Xu, Kogod Professor of Information Technology and Analytics and director of the Kogod Cybersecurity Governance Center, said comes up repeatedly in her research: a push and pull of sorts between privacy, fairness, and data utility.

Xu believes making AI truly transformational and responsible hinges on solving that “triangle.”

"Let's have privacy, fairness, and data utility together."

Heng Xu

Professor of Information Technology and Analytics, Kogod School of Business

Accomplishing that, though, may well require federal legislation, which both Google executives said they support so long as it's thorough and well-informed, as well as cultivating the next generation of scholars in policy, law, and technology needed to invent, legislate, and regulate an increasingly complex digital world.

“What are the gaps, and what should be done?” Wilkins asked panelist Diana Burley, SPA Vice Provost for Research and Innovation, to broach the topic of public and private partnerships.

“The workforce of the future, particularly in the technology space, is everyone,” Burley explained. “And, how do we continue to foster growth in that workforce? The government is certainly putting money behind that. But they can always do more.”

At a time when, Walker said, advances in artificial intelligence are driving technological innovation at a pace even Silicon Valley's most experienced professionals have never seen, the program's closing remarks offered perhaps the clearest consensus among the experts on stage about the stakes of AI.

"It matters because AI is here with us now, and it's going to stay with us for a very, very, very long time," said Professor Gwanhoo Lee, chair of Kogod's Department of Information Technology and Analytics. "It matters because it's our responsibility to make AI not only transformative but also responsible and inclusive."