Students, Faculty Cautiously Embrace AI as a Supplementary Learning Tool

Author: Margaret Fosmoe ’85

Illustrations by Pete Ryan

A photographic image flashes onto a classroom screen in the Hesburgh Library. It shows Pope Francis in a long, luxurious, white puffer jacket, a cross hanging around his neck.

Of course, the pontiff never posed for such a picture. Created by artificial intelligence software, the image has spread widely online.

Today Notre Dame undergraduates are examining that image and other AI-generated material with Professor John Behrens ’83 and weighing the potential promise and peril of the technology. The course is Generative AI in the Wild, a new, multidisciplinary offering in which students use ChatGPT, DALL-E and other AI software tools, share the results of their explorations, and study the economic, social, educational, legal and ethical implications of such technologies.

The students know that “deep fakes,” such as the Pope Francis image, will become more commonplace as AI tools proliferate.

Artificial intelligence is the simulation of human intelligence by computers, using algorithms that break down vast amounts of data. The almost-instantaneous results — text, photos, videos, computer code, music and more — look and sound as if they were created by humans. AI can quickly produce essays, research papers, legal documents, photographic images, videos and many other things once considered safely within the domain of human thought and creativity.
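For readers who wonder what "using" such a tool actually involves, the sketch below shows the general shape of the exchange: a program sends a written prompt to a model and receives generated prose in return. It is a minimal illustration, assuming the publicly documented openai Python library and an API key; the model name and prompt are hypothetical choices, not anything drawn from the course.

```python
# Minimal sketch: requesting generated text from a model over an API.
# Assumes the publicly documented `openai` Python library (v1+) and an
# OPENAI_API_KEY set in the environment; the model name and prompt
# below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any available chat model would do
    messages=[
        {
            "role": "user",
            "content": "Write a two-sentence job description for an "
                       "entry-level data analyst.",
        }
    ],
)

# The reply arrives as ordinary prose, indistinguishable at a glance
# from something a person wrote.
print(response.choices[0].message.content)
```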

“There are these sweet spots where ChatGPT is really good . . . and people are really bad,” Behrens tells the class. AI technology is reliable at producing job descriptions for employment postings, writing sales letters and creating text for advertising and business web pages. “Little efficiencies add up over and over. This is going to affect every industry. In some places there will be dramatic disruptions,” says Behrens, a professor of the practice of technology, digital studies, and computer science and engineering, and director of the Technology and Digital Studies Program in the College of Arts and Letters.

AI holds hope for remarkable advances in such areas as medical screenings and treatment, assisting the disabled and predicting future climate patterns. But observers also fear job losses to AI, the impact of fake videos on democracy, AI-driven drone warfare — and even that AI will exceed its creators’ control and destroy humanity.

The students know that AI is pretty good at handling routine clerical tasks. They find it unreliable at producing specific facts or generating long strings of computer code. They are amused by its tendency to create clumsy images of human beings, such as a person with distorted facial features or seven fingers on one hand.

“This class has taught me how much I didn’t know before,” says freshman Kaila Bryant, who plans to major in computer science. She says the AI tools help her to learn, rather than completing the work for her. “Among students, it’s something you can’t really ignore,” she says. “By the time I graduate, I’m sure AI is going to be unavoidable.”

Human beings have been fascinated by and fearful of new technology at least as far back as the Industrial Revolution. Think of the Luddites, those early 19th-century English textile workers who opposed certain types of machinery and often destroyed the machines. Recall, too, reactions to the introduction of pocket calculators, personal computers and the internet.

When ChatGPT — a free AI tool — burst upon the public consciousness in late 2022, many educators considered generative AI a new tool for cheating, while others thrilled at its possible uses in teaching, learning and research. Already, generative AI is widely used and here to stay. Young people are intrigued by it. Among teens who know about ChatGPT, a 2023 Pew Research Center survey found, 19 percent say they’ve used it for schoolwork.

Notre Dame leaders are taking a measured approach, aiming to include AI as a tool for supplementing education rather than banning it, and training students and faculty on its potential uses and limitations. Under University guidelines, professors are free to experiment with AI and decide whether to allow students to use AI in their courses. They may permit its use for some assignments, all assignments or none.

Elena Mangione-Lora ’98M.A., a teaching professor of Spanish, has found AI to be a useful tool in her classroom — for translation work and vocabulary exercises, for example. The students enjoy the experience, and Mangione-Lora has noticed improvements in engagement and learning. She permits AI use on some assignments but not others.

“I have an obligation to teach [students] how to use AI appropriately,” she says. “It’s a powerful tool that can do really cool things. I will set the parameters . . . and show them how to use it appropriately, and when it’s not appropriate to use it.”

Mangione-Lora doesn’t think AI will replace human professors anytime soon. “The human element is more important than ever,” she says.

Brian Collier has been enthusiastically using ChatGPT, on his own and with students in his classes, since last spring. He permits students to use AI on some assignments — for example, having them read a book chapter, ask ChatGPT for a summary, then write their own analysis of what the tool got right and wrong.

When permitted to use AI tools, his students “read much more, and much better than they had in the previous 10 years,” says Collier, an associate professor of the practice in the Education, Schooling and Society program. They read more closely, because they have to know what AI isn’t getting right, he says.

“It was kooky to me how unprepared the entire academic universe was for this,” he adds, expressing optimism about AI’s place in higher education. “I have not felt this way about any technology since the first time I [used] the internet. Now it eclipses what the internet can do.”

Collier thinks Notre Dame is duty-bound to teach students about AI’s proper use. “If students are not exposed to this, we’re just negligent as a university,” he says. Alumni who haven’t studied and used AI will be at a disadvantage in the workplace. “Their colleagues who have been exposed to it in all walks of life are going to have a significant edge on them.”

Collier’s students must cite which generative learning tool they relied on and for what parts of the assignment. For some assignments, he tells his students they must set aside AI and instead use RFHI — “real freaking human intelligence.”

Kayle Lauck is a junior political science major who took Collier’s course. She and her classmates used ChatGPT for such tasks as synthesizing essays written by several students into a single group-writing project. It readily combined their four drafts into one document.

The students analyzed the result and how it compared to their original texts. “It does a fair job,” Lauck says, although the tool repeated some facts and required human editing. She says the experience was valuable. “I have a really good tool set now for analyzing what comes out of AI. I feel better equipped,” she says.

Ardea Russo ’01M.T.S., ’09Ph.D., the University’s faculty honor code officer and director of academic standards, says ChatGPT sounded at first like a cheating tool. But as she talked with professors in technical fields, she learned how excited they were about the possibilities of generative AI in research and teaching. Across campus, faculty members are using it for research in such areas as tracking chemotherapy complications in pediatric cancer patients, helping urban designers trace a city’s decay and developing ways to reduce the spread of tropical diseases.

Last spring, an official University statement declared that the use of generative AI “to replace the rigorous demands of and personal engagement with their coursework” runs contrary to Notre Dame’s educational mission “and undermines the heart of education itself.”

The University’s generative AI policy recommends that professors become familiar with AI tools and take advantage of training opportunities offered on campus. It permits faculty members to determine appropriate student use of such technology for their courses and encourages them to present those parameters to their students clearly. It further recommends that professors remind their students that wholesale reliance upon generative AI tools does not meet the standard expected of academic work.

“We wanted to make clear that the University of Notre Dame wants to embrace every tool available to fulfill its mission. We want to make sure our students have the ability to use new technology appropriately,” Russo says.

“We really have embraced the idea as a university that this generative AI is a fantastic tool for supplementing education,” she says. But everyone on campus must be aware of the line between acceptable and dishonest use.

In August, students received notice of the policy and a reminder of the honor code. Generative AI may be used, for instance, to create study guides or flash cards or to explain complicated concepts, but misuses won’t be tolerated. “Think carefully about the difference between supplementing your education and replacing it,” the statement reads. The use of AI in violation of a professor’s course policy constitutes an honor code violation, and students who cross those lines face the same penalties as they would with any other form of academic dishonesty.

Halfway through the fall semester, Russo says, “a handful” of potential honor code violation cases had been referred to her office, with about half involving student use of AI in potential violation of a professor’s policy. She expects the percentage of such cases to increase.

The alleged AI-related cases are complicated, because Russo’s office has found that available AI-detection tools aren’t reliable — they produce many false positives and leave students no way to prove their innocence. “It’s a brave new world,” she says.

More students need to be involved in conversations regarding AI, Russo continues. “One of the things I’ve heard from a few students is that they’re worried that their employers will think their degrees are worthless because they just ‘AI-ed’ their way through college.” Alumni and students want to make sure the value of a Notre Dame degree is protected, she says.

Some faculty members are approaching AI with caution — or postponing its use entirely. Stefanos Polyzoides, the Francis and Kathleen Rooney Dean of the School of Architecture, says the growth of AI isn’t affecting Notre Dame architecture students as it may those in other disciplines. Students in the classically focused program learn in small groups with extensive one-on-one feedback from professors and until their fourth year are expected to draw by hand, he notes.

“Students using AI are doing so at their peril,” he says, because those relying on machine learning are learning nothing themselves. The essence of life is continuous learning and continuous searching for things that matter, the dean says. “It’s a dreadful idea and a waste of time if you’re coming to Notre Dame to be an architecture student and using AI.”

While some engineering professors are eagerly experimenting with AI, Peter Bui ’06, ’10M.S., ’12Ph.D. has chosen not to use it himself and doesn’t allow its use among students in his introductory coding courses. Within the department, “we have this middle ground. We have instructors who feel like it could be a good thing to allow it. For those who aren’t comfortable, we can state clearly to students: ‘This is why we don’t allow it,’” says Bui, a teaching professor of computer science and engineering.

Bui reasons that his students must learn coding basics without such assistance in order to flourish in their careers. “Eventually you have to correct things. And if you don’t understand the fundamentals, you can’t fix those things,” he says.

He acknowledges his students may eventually use AI tools to increase their productivity while working in industry, but says they need a firm foundation of knowledge. “I’m probably more of the Luddite contingent,” Bui says. “People get wrapped up in the idea that technology is the solution to everything and that all change is progress, but I don’t believe that.”

Nathaniel Myers ’15Ph.D., an assistant teaching professor in the University Writing Program, teaches writing and rhetoric courses to first-year students. He is working with colleagues to determine when AI might benefit students. For now, he occasionally demonstrates AI tools in the classroom, but discourages his students from using them in their assignments. “I’m not interested in what a computer has to tell me. I’m interested in you, the student, your life experiences, and how you’re encountering this world and what you wish to share with me,” he says.

In class, he’ll sometimes generate writing via AI and then ask students to analyze it for strengths, weaknesses and outright mistakes. Among his goals is teaching critical AI literacy. He notes most academics are just starting to study potential uses of generative AI in their teaching. “The tension for us is figuring out where does AI replace the cognitive work we want students to be doing, and where does it help cultivate that cognitive work.”

Generative AI is a hot topic of discussion among students, too.

“I think definitely the lower-level coding jobs will be replaced with AI,” says Samuel Huang, a graduate student in computer science and engineering. Huang thinks students should be given AI training, because they’ll need that knowledge in the workforce. “This is pretty much an essential skill for the next generation,” he says.

Junior Toby Bradshaw, an electrical engineering major, says some professors don’t mention AI and don’t specify whether its use is allowed on assignments. He relied on ChatGPT as a virtual tutor last spring to help him understand complicated linear algebra concepts. “That’s the only time I really used it. When you get into higher-level engineering classes, it just doesn’t really know what it’s talking about,” he says.

Classics major Mary Griffin, who took the Generative AI in the Wild course, is exploring job opportunities. “As . . . a senior going into the recruiting world, AI is very much the buzz for businesses and employers. I tend to put it as an interest on my resume, and that has led to a lot of follow-up questions and interviews for casual networking chats,” she says. AI “is a tool for good. If we don’t learn about it as it’s emerging, Notre Dame is going to be behind the curve,” Griffin says.

Students are both excited and unsure about how AI will change their world, says Ranjodh Dhaliwal, the Ruth and Paul Idzik collegiate assistant professor of digital scholarship and English, who co-teaches the Generative AI course with Behrens. Leaving the decision about use of AI in courses to the instructor is the right approach, Dhaliwal says. “I teach across different disciplines, and I do not have the same policy for all classes because the expectations for my students aren’t the same in all cases.”

Notre Dame graduates have a history of becoming leaders in their chosen professions and in society, notes Behrens, who had a long career in the technology industry. He believes an educated, tech-savvy liberal arts major will be much sought-after, as will a computer science student with a solid foundation in the liberal arts. “Our students can’t lead if they don’t understand how the world works technologically,” he says.

“You want to engage, but you want to be careful. You want to guard against hubris, but you also want to explore,” Behrens says. “If you don’t explore, the world is going to go on exploring without you.”


Margaret Fosmoe is an associate editor of this magazine.