Sean Cudahy
Portions of this article were written with the help of the generative AI program ChatGPT for demonstrative purposes.
Artificial Intelligence (AI) is experiencing rapid growth, revolutionizing industries worldwide. Its benefits include increased productivity, informed decision-making, and personalized experiences. However, concerns arise regarding job displacement, privacy, and bias. Striking a balance between harnessing AI's potential and addressing its drawbacks is essential for responsible and inclusive development.
Congress must address the rise of AI by enacting legislation that addresses ethical concerns, privacy protection, and bias in algorithms. They should foster research and development, promote transparency and accountability, and encourage collaboration between industry, academia, and government to shape responsible AI governance and policy frameworks.
Ask yourself this: Did those two paragraphs sound in any way robotic? Was the information insightful?
A human didn’t write them!
The text is the product of ChatGPT, the generative AI program that’s been a sensation since being released to the public in late 2022.
ChatGPT produced those explanations in response to two prompts: one requesting 50 words on AI’s benefits and drawbacks, the other asking what Congress must do to address the fast-growing technology, which, as the program itself correctly noted, carries significant upsides and potential dangers.
“It’s the wild west,” said Kogod information technology and analytics professor Chris Parker, describing a rapidly evolving technological frontier that, today, has few policy guardrails.
“Anybody can do anything. We’re sort of relying on ourselves to police ourselves on what we do.”
Chris Parker
Professor of Information Technology and Analytics, Kogod School of Business
It’s a phenomenon that took center stage this past winter when Google global affairs president Kent Walker joined Kogod’s dean, David Marchick, and a panel of faculty experts on the AU campus to discuss the technology’s intersection with privacy concerns and corporate responsibility.
While Marchick at the time noted a “frenzy” of coverage related to AI, the headlines have only grown in scope since then.
In late March, a letter signed by a long list of tech industry insiders and academic thought leaders called on companies like Google, Microsoft, and OpenAI (the maker of ChatGPT) to pause the training of more powerful AI systems to allow for a more thorough reckoning with the technology’s scope.
Others have claimed that such a pause could drastically hamper AI’s growth.
Meanwhile, a more formal public policy debate on the topic recently began to take shape just miles from campus, on Capitol Hill. On May 16, lawmakers in the Senate Subcommittee on Privacy, Technology, and the Law heard testimony from OpenAI’s CEO, Sam Altman, who himself called for regulation of AI tools.
As part of the hearing, the subcommittee chair, Sen. Richard Blumenthal (D-CT), warned of the potential for a “new industrial revolution,” one that “could displace millions of workers.”
“We could be looking at one of the most significant technological inventions in human history,” added ranking member Sen. Josh Hawley (R-MO), who pondered whether the future of AI might more closely resemble the widely beneficial invention of the printing press or, he said, the more historically complex creation of the atomic bomb.
Indeed, as panelists noted during Kogod’s February 27 discussion, Google’s top leader has declared that AI’s potential effects on humanity may be “more profound than fire or electricity.”
But that profound reach comes with potential uses both good and bad, from developing uniquely targeted cancer treatments to helping authoritarian regimes heighten their oppression.
As policymakers and tech companies grapple with AI’s substantial reach, these tools, and how to use, evaluate, and govern them, have become a point of reflection for faculty, students, and researchers at AU, in fields from technology to policy and the law.
That includes discussion in Kogod’s MS in analytics program, which develops students’ skills in key areas like evidence-based data gathering, data modeling, and quantitative analysis.
“AI is rapidly changing the world, and its impact on business and our daily lives is only going to grow in the next couple of years.”
Sobanaa Jayakumar
Kogod School of Business Analytics Alumna, Class of 2022
“I think the business impacts will be in automated customer service, personalized marketing, self-driving cars, smart homes, and more,” predicted Sobanaa Jayakumar, a 2022 graduate of Kogod’s MS in analytics program, who today serves as senior quantitative analytics associate for home lending decision science at JPMorgan Chase.
However, how significantly AI will ultimately affect our day-to-day lives remains a mystery. This question leads Parker to a slightly more conservative prediction than some have made in recent months.
He pointed to the Gartner Hype Cycle, which charts society’s expectations of a new technology across five distinct phases: an innovation trigger, a peak of inflated expectations, a trough of disillusionment, a slope of enlightenment, and a plateau of productivity. Parker believes we’re currently at the ‘peak of inflated expectations’ concerning AI. If the logic of the Hype Cycle holds, this peak would likely serve as a precursor to disillusionment with AI and, eventually, a more realistic understanding of its role in everyday life.
“It’s going to change many things, don’t get me wrong,” Parker said of AI. “But I think we’re at the point where we think it will change more things than it is. We still need people to be able to interact with AI…and to understand its limitations.”
Those limitations have been a focal point in Parker’s Python course, an integral part of the Kogod MS in analytics curriculum.
In one exercise, Parker and his students analyzed how adept a particular AI product is at predicting cricket’s most valuable player award recipients from statistical data, a test of the program’s strengths and weaknesses.
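To picture what such an exercise can look like in Python, here is a minimal, hypothetical sketch, not Parker’s actual assignment: it computes a crude statistical baseline for a cricket MVP pick, then frames the same data as a prompt an AI model could answer, so the two picks can be compared. All player names, statistics, and weights below are invented for illustration.

```python
# Hypothetical sketch of an MVP-prediction exercise: compare a simple
# statistical baseline against whatever answer an AI model gives to the
# same data. Names and numbers are made up.
import pandas as pd

# Invented season statistics for a handful of players.
stats = pd.DataFrame(
    {
        "player": ["A. Sharma", "B. Khan", "C. Mendis", "D. Smith"],
        "runs": [540, 310, 620, 480],
        "wickets": [2, 24, 0, 11],
        "catches": [6, 3, 9, 5],
    }
)

# A crude baseline: weight runs, wickets, and catches into one score.
# The weights are arbitrary; a real class exercise would debate them.
stats["mvp_score"] = stats["runs"] + 20 * stats["wickets"] + 10 * stats["catches"]
baseline_pick = stats.loc[stats["mvp_score"].idxmax(), "player"]
print(f"Statistical baseline MVP: {baseline_pick}")

# Frame the identical data as a prompt for a generative AI model.
# Whether the model's pick matches the baseline, and how it handles
# edge cases such as two players tied on the score, is the comparison.
prompt = (
    "Given these season statistics, who is the most valuable player, "
    "and why?\n\n" + stats.drop(columns="mvp_score").to_string(index=False)
)
print(prompt)
```

The instructive part of an exercise like this is less the baseline itself than the disagreements: when the model’s pick diverges from the score, students have to decide whether the model caught something the formula missed or simply got it wrong.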
“ChatGPT does remarkably well if you can very specifically tell it what you want it to do. It doesn’t necessarily catch what we call ‘edge cases,’” or scenarios that might be a little less straightforward.
Chris Parker
Professor of Information Technology and Analytics, Kogod School of Business
Generative AI programs have also been criticized for their so-called “hallucinations,” in which a program confidently asserts false information. ChatGPT, for instance, outright warns users of its limitations, noting it “may occasionally generate incorrect information” and “may occasionally produce harmful instructions or biased content.”
Grappling with these problems and AI’s other potentially harmful ramifications will be central to critical decisions, Parker said, as society must “decide where we’re going to be in the next ten years.”
“It certainly is a huge leap of innovation that would make our lives both easier and scarier,” Jayakumar added, speaking about AI more broadly. “Scary regarding how it could make humans obsolete or allow exploitation. Easier in terms of living—saving time and effort.”
It’s a poignant observation, considering it took ChatGPT just five seconds to deliver this chilling 45-word paragraph:
If AI goes unregulated, it could lead to significant consequences. Lack of oversight may result in biased algorithms, privacy breaches, and potential misuse of AI technologies. Job displacement without proper support systems and ethical concerns could exacerbate inequality and undermine societal trust in AI systems.