CMU Examines How AI Tools Are Reshaping Learning for Both Teachers and Students

Generative AI is becoming an integral part of college life, whether through formal coursework or self-guided learning. As students and instructors learn the evolving technology together, they must navigate big questions, such as whether the artificial intelligence tools students use actually help them learn and whether access to those tools is equitable. At Carnegie Mellon University, an initiative to research the impacts of generative AI tools on teaching and learning is helping the university take an empirical approach to studying whether, when and how generative AI can improve student outcomes.

“Before we can productively govern AI tools in education, we need to understand their impacts,” said Marsha Lovett, CMU’s vice provost for teaching and learning innovation.

Lovett’s colleagues at the Eberly Center for Teaching Excellence & Educational Innovation started the Generative Artificial Intelligence Teaching as Research (GAITAR) initiative to lower the barriers to innovating with AI and to systematically collect data on those innovations. GAITAR forms a community of practice around the technology by providing education, consultation and support for research comparing what happens when AI is incorporated into a course with what happens when it is not.

“It is important to collect rich data on the key student behaviors and outcomes we care about during these innovations in order to promote exploration and refinement of more promising directions,” Lovett said.

As faculty members have formally incorporated AI into their courses, the results have shown that there is still much to learn about how to use AI tools effectively.

Fethiye Ozis, an associate teaching professor of civil and environmental engineering, wanted to see if AI tools like Perplexity or ChatGPT could help her students improve their data science skills. 

As part of a project in one of Ozis’ undergraduate courses, students analyzed air quality data from sensors they placed on campus. They were asked to use the data to identify patterns for the best week and time of day to host an outdoor activity, compare the air quality of newer and older buildings on campus and answer other research questions. The sensors collected information every two minutes, leaving the students with massive amounts of data to make sense of. 

“The first two times I taught the course, AI tools were not available,” Ozis said. “My first impression was that students were not necessarily prepared to deal with the amount of data the sensors collected.” 

Ozis wanted to know how using AI tools affects students’ skills in data processing, cleaning and visualization, and to better understand students’ attitudes about using AI tools for these tasks.
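To give a sense of what those tasks involve, here is a minimal sketch of the kind of cleaning and summarizing step the students faced. The file name and column names ("sensor_export.csv", "timestamp", "pm25") are hypothetical, not the actual format of the course’s sensor exports:

```python
# Hypothetical illustration: condensing two-minute air-quality readings
# into hourly averages to surface day-of-week and time-of-day patterns.
# File and column names are assumptions for the sake of the example.
import pandas as pd

# A sensor logging every two minutes produces roughly 720 rows per day.
readings = pd.read_csv("sensor_export.csv", parse_dates=["timestamp"])

# Basic cleaning: drop missing readings and physically impossible values.
readings = readings.dropna(subset=["pm25"])
readings = readings[readings["pm25"] >= 0]

# Resample to hourly means so patterns become visible at a human scale.
hourly = readings.set_index("timestamp")["pm25"].resample("1h").mean()

# Average by day of week and hour to find the best window for an outdoor event.
pattern = hourly.groupby([hourly.index.day_name(), hourly.index.hour]).mean()
print(pattern.sort_values().head())  # cleanest-air windows first
```

Even this small sketch hints at the judgment calls (what counts as a glitch, what time scale to average over) that students must make with or without an AI assistant.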

Though the number of students who participated was small, performance data showed no difference between students who chose to use AI tools and those (from the same class and a prior cohort) who did not. Interestingly, when students had the option to use AI, 44% chose not to, citing reasons related to critical evaluation of the tools’ utility and confidence in their own data skills. These results raise further questions about what factors influence students’ decisions to use AI tools and how to prepare students to make well-informed decisions.

Alan Thomas Kohler, a senior lecturer in the Writing and Communication Program in CMU’s Dietrich College of Humanities and Social Sciences, wanted to experiment with how AI tools could fit into the writing process. Kohler teaches a professional and technical communication course for computer science majors, and giving and receiving feedback is a core component of the course.

“We reinforce the idea that peer review is good for both the giver and the receiver of feedback equally,” he said. “There is value in centering your reader by getting an understanding of the reader's experience of your text, but there is also value in thinking about the choices that other people make for a given assignment, how those choices differ from your own, and what you might learn from those differences.” 

Kohler wanted to know if there was potential for AI tools like Microsoft Copilot to support peer review. Over two semesters, he incorporated a standardized prompt that students could use with Copilot to get feedback on their projects, which included cover letters, persuasive emails and communication plans.

He plans to continue to use and study AI tools. 

“I'm very interested in this field and all the different ways that generative AI can be used. We can engage with it and embrace it in ways that are beneficial to our students and don't replace their learning,” he said. 

Ozis’ and Kohler’s projects are two of many under way at CMU. Lovett said she expects the GAITAR projects to have a long-term impact at the university.

“I really hope that, with such studies becoming part of our standard practice in higher education, we can be more informed as we explore novel applications of AI and even consider changes at the systems level in terms of the degree programs we're offering, our approaches to assessment and offering greater opportunity for student access and equitable outcomes,” she said. 

This content was paid for and created by Carnegie Mellon University. The editorial staff of The Chronicle had no role in its preparation.