Exploring the Ethical Quandaries of Artificial Intelligence
How one UVA professor is bridging the gap between AI and human dignity
Dorothy Leidner is one of eight Distinguished Professors at UVA currently funded by the Jefferson Scholars Foundation.
These days, generative artificial intelligence pervades the headlines, with some possibly penned by AI itself. OpenAI’s ChatGPT, the chatbot that has dominated public discourse over the last year or so, showcases remarkable proficiency in generating text indistinguishable from human writing and promises to rapidly change all aspects of business, education, and society.
But the emergence of new technologies that transform the way we live, work, and learn is nothing new. So, what exactly is all the fuss about?
Dorothy E. Leidner, the Leslie H. Goldberg Jefferson Scholars Foundation Distinguished Professor in Business Ethics at the University of Virginia, has spent decades grappling with questions like this one. Her research examines the intersection of technology and ethics, highlighting the impact of information systems and technologies on individuals, organizations, and society at large.
Leidner explains that generative AI has triggered a maelstrom of utopian predictions and dystopian warnings, in part because there are no guardrails in place, legal or otherwise.
“Emerging technologies like generative AI invariably gain widespread use long before legal frameworks and policies are implemented,” she says. “We saw—and continue to see—this same predicament with the introduction of social media.” Many users presume companies like Meta, X, and TikTok have a legal obligation to safeguard their data and privacy, but, as Leidner notes, policymakers and lawmakers are still playing catch-up to hold these companies accountable for data breaches and misuse.
From a business standpoint, this lag means it is up to organizational leaders to regulate employee use of new tools like ChatGPT and develop policies that promote the company’s mission. It also means there is space for business ethics scholars like Leidner to contribute to the conversation and provide insight on how we might use these tools responsibly for the common good.
Leidner, who has held academic appointments at schools across the globe, including INSEAD (France), Lund University (Sweden), University of Mannheim (Germany), and ETH Zurich (Switzerland), founded the information systems Ph.D. program at Baylor University’s Hankamer School of Business prior to joining the faculty at UVA.
Her recruitment to UVA in 2023, made possible by the Jefferson Scholars Foundation, an independent organization whose mission is to attract exceptionally talented students and faculty to the University, was a major win for the school and marked a shift in her teaching and research focus.
She is among a growing cohort of UVA faculty who are using their expertise across a range of disciplines to dive into the study and practical applications of AI.
Last spring, a seven-person AI task force at UVA issued a report on the impact of AI on student learning and assessment. A series of town hall meetings and surveys revealed a strong consensus among faculty and students alike that gaining AI literacy should be an essential part of the academic experience.
Given her background in information systems and ethics, Leidner was hired to help prepare undergraduate students in the McIntire School of Commerce to use AI conscientiously and effectively, both in their personal lives and eventually in the workplace.
“At the beginning of my career, my work was all about technology and how it can help business performance. Over time, it became more about using technology responsibly in a way that protects the individual, the environment, and society. And now, my work is almost exclusively about examining AI through an ethical lens,” says Leidner.
Outreach and collaboration with scholars across the country remain an important aspect of Leidner’s approach. She is part of a research team at MIT’s Center for Information Systems Research that is examining how AI is transforming the employee experience in the workplace, and she has presented at several AI-related conferences, including most recently at the University of Hawaii’s Shidler College of Business.
This fall, Leidner designed and taught a seminar at UVA entitled “Ethical Application of Artificial Intelligence,” introducing students to a range of emerging uses of AI. Through a series of seven modules, the course prompts students to engage in discussions that weigh the benefits of AI against the impact on human dignity.
In one module, Leidner asks her students to consider questions like: How can a company ensure employee dignity when using AI to make inferences about employee skills, knowledge, and contributions? How would you feel about management accepting AI-generated recommendations on what projects you should be assigned to based upon inferences drawn from your digital trace?
Some modules delve deeper into specific forms of AI, asking questions such as: What factors do organizations need to consider when implementing facial recognition technology? Is it right for an organization to use facial recognition technology to screen candidates for potential mental or emotional issues that might endanger other employees or customers? Or does such use of technology create unfair and unintended biases against certain groups?
Artificial intelligence has already become ubiquitous in everyday life, Leidner acknowledges, so there is no escaping the fact that today’s college graduates will spend their careers directing the future of its use and application across all sectors of society.
By the end of the course, Leidner ensures students have gained practical experience: putting themselves in the shoes of government and corporate leaders, they learn to identify and propose new policies that advance the common good and maintain human dignity through innovative applications of AI.
“With every great thing AI can do for us—write code, improve the quality of healthcare, create new forms of art, assist individuals with disabilities—it can do just as much harm,” says Leidner. “Infringements on our personal privacy, the potential for discriminatory treatment, and, more broadly, the potential to turn individuals into artifacts with little, or no, free choice are only some of the threats it poses.”
“At the end of the day, my goal,” she says, “is to help make sure my students are fully prepared to rise to these challenges and make the best of this exciting new technology.”
This content was paid for and created by Jefferson Scholars Foundation. The editorial staff of The Chronicle had no role in its preparation.