Responsible AI: where student expectations meet academic integrity
How higher education can respond to rising student expectations for AI while safeguarding academic standards
The widespread adoption of AI in the classroom has arrived. Students now regularly use AI-powered tools to reinforce or even introduce new concepts into their studies. That shift has academic leadership asking, “How can we provide learners with AI tools they actually want to use, while ensuring that those tools honor academic integrity and align with faculty-selected content?”
The instinct may be to restrict AI, but the answer is to rethink it.
Unlike consumer AI tools, which are designed for breadth and convenience, the classroom demands precision, transparency, and pedagogical grounding. AI in education must be purpose-built, rooted in learning science, and intentionally constrained to support the learning process. That distinction is critical, particularly as emerging research reveals a more nuanced reality about student AI use than headlines suggest.
Students Are Still Early in Their AI Journey
Despite the visibility of generative AI, most students are not yet sophisticated users of these tools. Recent research from VitalSource and collaborators, presented at the 11th IAFOR International Conference on Education and based on student surveys across multiple courses, found that most students report using AI only “seldom” or “sometimes,” and primarily for tasks like grammar and formatting rather than concept mastery.
To quote the research, “these survey responses serve as a valuable reminder that despite the current widespread enthusiasm surrounding generative AI, not all students have expertise in, or even regularly make use of, these tools. It could be easy for faculty, administrators, and technology developers to amalgamate the sudden popularity of generative AI with ubiquitous use and expertise by students. But this is not yet the case.”
These findings mirror other recent studies from major universities, which found widespread but infrequent use, a lack of confidence in using AI effectively for academic purposes, and limited understanding of the strengths and limitations of AI tools.
At the same time, students clearly recognize AI’s potential. When asked, they cite academic support and efficiency as the most valuable benefits. However, these are tempered by concerns about academic integrity, fairness, and unclear institutional policies.
This creates a paradox for higher education. Students expect AI to be part of their learning experience, yet many are not fully equipped to use it effectively.
Why Perception Matters as Much as Performance
Decades of learning science research have demonstrated the effectiveness of formative practice, particularly the “doer effect,” which shows that actively engaging with material is significantly more impactful than passive reading.
AI now makes it possible to scale this kind of practice by generating questions and feedback directly from course materials. But effectiveness alone is not enough.
Student perception plays a critical role in whether these tools are actually used and whether they succeed.
In a multi-course study examining student perceptions of AI-generated practice within course materials, researchers found that most students believe practice while reading improves learning and that a strong majority find AI-generated questions helpful. At the same time, comfort and trust in AI in general varied widely, with most students falling in the middle of the spectrum rather than at either extreme.
Perhaps most importantly, many students do not even realize when AI is being used. In that same study, between 80 and 100 percent of students, depending on the course, were unaware that their practice questions were AI-generated. When informed, most said their perception of usefulness did not change, but a meaningful minority reported decreased confidence.
These findings highlight an important insight for institutions. Trust, transparency, and context are just as important as tool performance.
Designing AI for the Realities of the Classroom
To be effective in higher education, AI must do more than generate answers. It must integrate into the learning experience, reinforce faculty intent, build student confidence, and promote effective learning practices.
In Fall 2025, VitalSource launched Bookshelf+, the next phase of its AI learning tools, purpose-built for higher ed. Bookshelf+ is an AI-powered study partner built to deepen engagement while confining its responses to instructor-assigned content.
Rather than pulling from the open web, Bookshelf+ is grounded entirely in instructor-assigned content, ensuring that every interaction reinforces the materials faculty have intentionally selected. It is designed to keep students engaged in context, within the book and at the moment of learning, rather than diverting them to external tools.
This model reflects two core principles:
· Responsible AI design: Responses are constrained, explainable, and aligned with academic integrity standards
· Equity and access: Delivered within the existing Bookshelf platform, with no additional integrations or barriers to entry
Just as importantly, this approach responds to the reality that students are still developing AI literacy. By embedding AI directly into course materials, institutions can guide students toward more effective and responsible use.
Why Responsible, Learning Science-Driven AI Matters
Using responsible AI tools that are backed by learning science is important for various reasons:
· Faculty and students are assured that their AI tools are tethered to curated, approved materials.
· Instructors can trust that AI support is aligned with their content selection and academic integrity standards.
· Students benefit by getting in-context assistance at the point of learning and in the book, rather than jumping off to another app.
AI is part of the future of higher education, and it needs purpose-built technology, not adaptations of consumer tools. In developing Bookshelf+, VitalSource made a deliberate decision to apply learning-science insight across the full content-provider ecosystem.
A Path Forward: Aligning Innovation With Intention
Research on student use and perception shows a clear pattern: students value AI, but approach it with caution, and many are still early in their understanding of how to use it effectively. That creates a meaningful opportunity for institutions to shape not just access to AI, but how it is experienced. Selecting tools grounded in learning science, providing transparency into how they work, and embedding them directly into the learning experience can help ensure AI supports, rather than disrupts, academic goals.
With solutions like Bookshelf+, institutions can meet students where they are while maintaining alignment with faculty intent and academic standards. The result is a more cohesive learning environment where students receive meaningful support, faculty retain control over content and outcomes, and institutions move forward with confidence in an AI-enabled future.
For more information on the learning science principles that inform how VitalSource approaches responsible AI, see this research on scaling the Doer Effect and the impact of AI-generated questions on the effectiveness of studying.
Find out more about the features and capabilities of Bookshelf+.
This content was paid for and created by VitalSource. The editorial staff of The Chronicle had no role in its preparation. Find out more about paid content.


