“Anybody can do anything. We’re sort of relying on ourselves to police ourselves on what we do.”
It’s a phenomenon that took center stage this past winter when Google global affairs president Kent Walker joined Kogod’s dean, David Marchick, and a panel of faculty experts on the AU campus to discuss the technology’s intersection with privacy concerns and corporate responsibility.
While Marchick at the time noted a “frenzy” of coverage related to AI, the headlines have only grown in scope since then.
In late March, a letter signed by a long list of tech industry insiders and academic thought leaders called on companies like Google, Microsoft, and OpenAI (the maker of ChatGPT) to pause the training of more powerful AI systems to allow for a more thorough reckoning with the technology’s scope.
Others have claimed that such a pause could drastically hamper AI’s growth.
Meanwhile, a more formal public policy debate on the topic recently began to take shape just miles from campus, on Capitol Hill. On May 16, lawmakers in the Senate Subcommittee on Privacy, Technology, and the Law heard testimony from OpenAI CEO Sam Altman, who himself called for regulation of AI tools.
As part of the hearing, the subcommittee chair, Sen. Richard Blumenthal (D-CT), warned of the potential for a “new industrial revolution,” one that “could displace millions of workers.”
“We could be looking at one of the most significant technological inventions in human history,” added ranking member Sen. Josh Hawley (R-MO), who pondered whether the future of AI might more closely resemble the widely beneficial invention of the printing press or, he said, the more historically complex creation of the atomic bomb.
Indeed, as panelists noted during Kogod’s February 27 discussion, Google’s top leader has declared that AI’s potential effects on humanity may be “more profound than fire or electricity.”
But that breadth brings with it potential uses both good and bad, from developing uniquely targeted cancer treatments to helping authoritarian regimes heighten their oppression.
As policymakers and tech companies grapple with AI’s substantial reach, these tools—and how to use, evaluate, and govern them—have become a point of reflection for faculty, students, and researchers at AU, in fields ranging from technology to policy and the law.
That includes discussion in Kogod’s MS in analytics program, which develops students’ skills in key areas like evidence-based data gathering, data modeling, and quantitative analysis.
“AI is rapidly changing the world, and its impact on business and our daily lives is only going to grow in the next couple of years.”
“I think the business impacts will be in automated customer service, personalized marketing, self-driving cars, smart homes, and more,” predicted Sobanaa Jayakumar, a 2022 graduate of Kogod’s MS in analytics program, who today serves as senior quantitative analytics associate for home lending decision science at JPMorgan Chase.
However, how significantly AI will ultimately affect our day-to-day lives remains a mystery. This question leads Parker to a slightly more conservative prediction than some have made in recent months.
He pointed to the Gartner Hype Cycle, which models society’s expectations of an emerging technology across five distinct phases. Parker believes we’re currently at the “peak of inflated expectations” concerning AI. If the Hype Cycle holds, that peak would likely give way to a period of disillusionment with AI and, eventually, a more realistic understanding of its role in everyday life.
“It’s going to change many things, don’t get me wrong,” Parker said of AI. “But I think we’re at the point where we think it will change more things than it is. We still need people to be able to interact with AI…and to understand its limitations.”
Those limitations have been a focal point in Parker’s Python course, an integral part of the Kogod MS in analytics curriculum.
In one problem, Parker and his students analyzed how adept a particular AI product is at predicting the most valuable player award recipients in cricket based on statistical data—a test of the program’s strengths and weaknesses.
“ChatGPT does remarkably well if you can very specifically tell it what you want it to do. It doesn’t necessarily catch what we call ‘edge cases,’” or scenarios that might be a little less straightforward.
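The kind of strengths-and-weaknesses check described above can be sketched in a few lines of Python. This is a toy illustration only: all player names, statistics, and the `tie_margin` threshold are invented, and the idea is simply to score a model’s picks against the actual award winners while flagging statistically ambiguous seasons—the “edge cases” where a purely data-driven pick is unreliable.

```python
# Toy evaluation of an AI tool's MVP predictions against actual winners.
# All names and numbers are hypothetical, invented for this sketch.
seasons = {
    2020: {"actual": "Player A", "predicted": "Player A",
           "top_scores": [812, 640]},   # clear statistical leader
    2021: {"actual": "Player B", "predicted": "Player C",
           "top_scores": [705, 701]},   # near-tie between candidates
    2022: {"actual": "Player D", "predicted": "Player D",
           "top_scores": [890, 512]},
}

def evaluate(seasons, tie_margin=10):
    """Return (accuracy, list of edge-case seasons)."""
    correct, edge_cases = 0, []
    for year, s in seasons.items():
        if s["predicted"] == s["actual"]:
            correct += 1
        # "Edge case": the top two candidates are statistically so close
        # that the data alone can't justify a confident pick.
        if abs(s["top_scores"][0] - s["top_scores"][1]) <= tie_margin:
            edge_cases.append(year)
    return correct / len(seasons), edge_cases

accuracy, edge_cases = evaluate(seasons)
print(f"accuracy: {accuracy:.0%}, edge cases: {edge_cases}")
# → accuracy: 67%, edge cases: [2021]
```

In this sketch, the model gets the unambiguous seasons right but misses the 2021 near-tie, mirroring the pattern Parker describes: strong performance on well-specified tasks, weaker performance on edge cases.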
Generative AI programs have also been criticized for their so-called “hallucinations,” in which a program confidently states false information as fact. ChatGPT, for instance, outright warns users of its limitations, noting it “may occasionally generate incorrect information” and “may occasionally produce harmful instructions or biased content.”
Grappling with these problems and other potentially harmful ramifications of AI will be a cornerstone of critical decisions, Parker said, as society must “decide where we’re going to be in the next ten years.”
“It certainly is a huge leap of innovation that would make our lives both easier and scarier,” Jayakumar added, speaking about AI more broadly. “Scary regarding how it could make humans obsolete or allow exploitation. Easier in terms of living—saving time and effort.”
It’s a poignant observation, considering it took ChatGPT just five seconds to deliver this chilling 45-word paragraph:
If AI goes unregulated, it could lead to significant consequences. Lack of oversight may result in biased algorithms, privacy breaches, and potential misuse of AI technologies. Job displacement without proper support systems and ethical concerns could exacerbate inequality and undermine societal trust in AI systems.