“We need to ask how we create a new era of digital literacy, so people are using the tools in good ways but are also appropriately skeptical about potential misuse.”
It’s a critical balancing act, one that prompted the Federal Trade Commission’s February 27 memo, “Keep your AI claims in check,” which warned businesses against using AI-driven programs for biased or discriminatory purposes or exaggerating those programs’ benefits.
An ensuing panel discussion centered on those very issues: privacy, inclusive design, and the need for public-private partnerships to accelerate advances in AI and related tools.
“How do we make sure they’re not falling into the wrong hands, and how do we protect them?” asked panel moderator SPA Dean Vicky Wilkins.
“I think it starts with how we in the private sector build and then release them,” said panelist Karan Bhatia, Google Vice President of Government Affairs and Public Policy.
An example of such an ethical decision point: earlier in the forum, Walker noted that Google had released open-source data related to AI lip-reading technology to assist hard-of-hearing customers, but chose not to publish data that could facilitate lip-reading from a football field away at an angle, a capability that has sparked concerns about potential monitoring by authoritarian regimes.
It's the sort of ethical crossroads that panelist Heng Xu, Kogod Professor of Information Technology and Analytics and director of the Kogod Cybersecurity Governance Center, said comes up repeatedly in her research: a push and pull of sorts between privacy, fairness, and data utility.
Xu believes making AI truly transformational and responsible hinges on solving that “triangle”: “Let’s have privacy, fairness, and data utility together.”
Accomplishing that, though, may well require federal legislation, which both Google executives said they support to the extent it’s thorough and well-informed. It will also require cultivating the next generation of scholars in policy, law, and technology to invent, legislate, and regulate an increasingly complex digital world.
“What are the gaps, and what should be done?” Wilkins asked panelist Diana Burley, SPA Vice Provost for Research and Innovation, to broach the topic of public and private partnerships.
“The workforce of the future, particularly in the technology space, is everyone,” Burley explained. “And, how do we continue to foster growth in that workforce? The government is certainly putting money behind that. But they can always do more.”
At a time when, Walker said, advances in artificial intelligence are spawning technological innovation at a pace unseen by even Silicon Valley’s most experienced professionals, the program’s closing remarks offered perhaps the clearest consensus among the experts on stage, as well as the clearest statement of what is at stake with AI.
“It matters because AI is here with us now, and it’s going to stay with us for a very, very, very long time,” said Professor Gwanhoo Lee, chair of Kogod’s Department of Information Technology and Analytics. “It matters because it’s our responsibility to make AI not only transformative but also responsible and inclusive.”