Future ready: Elsevier’s AI in higher education newsletter
This issue introduces Elsevier’s next-generation AI platform, explores Deep Research, agentic AI and strategies for building AI literacy in academia.
November 2025
What if? The wish list for research-focused AI
If you were given three wishes for the future of research-focused AI, what would they be? At Elsevier, we’re about technology rather than magic, but we’ve been listening to our customers across higher education and beyond, and here are our thoughts.
1. What do researchers do all day?
What do researchers do all day? If you’re a researcher yourself, you’ll already know that it isn’t always research. Depending on where a researcher is in their career, they might, on average, spend only a third of their day undertaking literature reviews, formulating hypotheses, writing up results, and all the other activities that we could call “advancing knowledge.” They might typically spend the rest of their time on research-adjacent tasks like finding collaborators or applying for grants, or performing other functions like teaching, supervising postgrads, or attending departmental meetings. While most occupations include tasks not covered by the core job description, for researchers this “extra work” has sometimes come to dominate. This is especially true for early career researchers who, despite being the engine room of university research outputs, often have to manage huge heterogeneous workloads. The ensuing pressure drives many to leave academia, especially women researchers who typically pick up more family commitments than their male peers. While the ready availability of new postdocs creates an illusion of continuity, the system often feels broken and institutional research outputs are undermined as a result.
These problems ultimately need systemic solutions – and while developments like AI tools can help by streamlining some aspects of the research process, current offerings are limited in their scope.
But what if an AI tool could cover more of the researcher workflow, creating more time for research?
2. Only human after all
While AI providers seldom acknowledge that researchers have varying roles and career stages, it’s even rarer for them to recognize differences in disciplinary orientation. Most of the academic-focused AI solutions that have appeared over the last few years have been designed with science and technology (S&T) use cases in mind, partly because of the bias of bibliographic databases towards journals – less important for social sciences and humanities than S&T – and partly because of the associated focus on citations and funding in these areas. It’s worth noting, however, that disciplines oriented towards language and subjective, cultural and human experiences may need a more inclusive approach to AI. Part of the problem is that these subjects typically draw on attributes such as creativity, empathy, and moral judgment, which remain the exclusive preserve of human researchers. Understandably, critics with non-S&T backgrounds (and some with S&T backgrounds) have been wary of AI tools, raising questions about how human agency can be protected and supported, rather than crudely replicated. These concerns are allied to those often voiced by university librarians about the danger to human critical faculties posed by tools that provide simple “answers” for passive consumption by users – especially by less experienced students.
What if there were an AI tool whose design respected the role of the human researcher, stimulating creativity and encouraging critical thinking?
3. Access all areas
Academic AI applications are often about realizing latent potential to deliver new benefits – whether by enhancing researchers’ workflows or by enabling new ways for researchers to interact with research tools. In this spirit, we recently made some of our solutions available to the Cure Sanfilippo Foundation to explore how they might help accelerate ongoing research into this rare neurodegenerative disease. While we’ve already seen some promising results, Sanfilippo researchers have told us they would like to use AI to gain insights from areas such as methods, results, discussions, and hypotheses – information that is typically omitted from journal abstracts. Of course, abstract-based tools like Scopus AI have revolutionized the way researchers search across the entire scholarly landscape. Meanwhile ScienceDirect AI is almost alone in applying AI to a substantial corpus of peer-reviewed paywalled and open access full-text content. However, no current AI tool offers this breadth and depth of academic inquiry together in one place.
What if there were an AI tool that covered peer-reviewed journal abstracts and a substantial corpus of peer-reviewed content, from both paywalled and Open Access sources, across multiple publishers?
Towards the AI solution academia needs and deserves
So, here are our wishes for research-focused AI: a whole workflow tool for researchers, a human-centred AI tool, and a publisher-neutral abstracts and full-text AI tool. Quite some wish list. And, if we allow ourselves one final “what if,” what if these three tools were all the same AI solution?
They are. In November, Elsevier will be introducing a next-generation solution that enhances its existing AI capabilities with new functionality: broad research workflow coverage and a unified, multi-publisher content base spanning abstracts and full text, all informed by human-centred design and data assurance. Currently, the solution exists in beta form and is being tested by a select group of customers, with a full commercial launch expected in early 2026. While the new solution will build on Elsevier’s existing AI offerings and amplify their capabilities, making it easy to transition to the larger offering, our ambitions extend across the whole research organization – be it a university, a funding body, or a company R&D team.
With problems around government funding, falling student enrolments, and declining public trust in research, much of global higher education is currently facing a moment of crisis. Gen- and agentic AI may seem like wish-fulfilment to some, but we recognize that for others they are a part of the problem. With an apparently thin line between game-changing benefits and research integrity disaster, the stakes have rarely been higher. At Elsevier, we are intent on rising to this challenge and are working with the community to create the AI solution academia needs and deserves.
Four big questions researchers ask about Deep Research
If 2024 was the year GenAI went mainstream in Higher Education, how should we see 2025? For many universities it has been a time of consolidation, with growing AI literacy, more organized institutional adoption, and a clearer sense of what AI tools can and cannot do. While the mood feels calmer, AI technologies continue to evolve. One of the most visible innovations is the arrival of Deep Research capabilities in leading AI tools, with Elsevier’s Scopus AI launching its version during the past summer.
Beyond the "wow factor"
As you may be aware, Deep Research allows AI tools to go beyond summarization and information retrieval to perform in-depth, multi-step investigations of a topic. Utilizing Agentic AI, Deep Research responses are more sophisticated and comprehensive than standard summaries, and produced with a higher degree of autonomy, although the extent to which users can shape and revise the ensuing outputs varies from tool to tool.
What many users have agreed on is the “wow factor” of seeing Deep Research reports generated for the first time – the sense of a real leap forward having been made. For some, this is wildly exciting – one of our product managers reported being embraced by a tearful research leader after demonstrating Deep Research to his team – but for others there is an unsettling blurring of the lines between “tool” and human-like “collaborator.”
The top four questions
With this in mind, we wanted to share the four questions most frequently asked by our customers since the release of Scopus AI Deep Research, together with brief responses to them.
1. How does Deep Research search Scopus?
Last year, when we asked Scopus AI users how we could improve the tool’s existing summaries, they told us they wanted the option to see more detail and more perspective. By “perspective” they meant the whole spectrum of the work that is influential in a particular research space, with indications of what is most current and most impactful. Scopus AI Deep Research undertakes this task by taking the initial user query, breaking it down, then systematically looking at it from different perspectives. Each of these perspectives is then investigated by an individual AI agent – a system that autonomously performs tasks by designing workflows with the tools and information available to it – which asks fundamental questions, evaluates the responses, then asks more supplementary questions, and so on. In technical terms, Deep Research orchestrates both vector and keyword searching, with the agents deciding which to use at each step, then finally ranks and fuses results. We believe Scopus AI Deep Research is distinguished from other tools by this rigorous emphasis on first principles thinking.
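Scopus AI’s internals are not public, but the final step described above – fusing the results of vector and keyword searches into a single ranking – can be illustrated with reciprocal rank fusion (RRF), a widely used generic technique for merging ranked result lists. The function, document IDs, and result lists below are purely hypothetical and are not taken from any Elsevier product:

```python
# Generic illustration of merging keyword and vector search rankings.
# Reciprocal rank fusion (RRF) is one common, publicly documented way
# to fuse ranked lists; it is NOT a description of Scopus AI's algorithm.

def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked lists of document IDs into one list.

    Each document's fused score is the sum of 1 / (k + rank) over
    every list it appears in; k=60 is a conventional default that
    dampens the influence of any single list's top positions.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Sort document IDs by fused score, highest first
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from a keyword search and a vector search
keyword_hits = ["doc_a", "doc_b", "doc_c"]
vector_hits = ["doc_b", "doc_d", "doc_a"]

fused = reciprocal_rank_fusion([keyword_hits, vector_hits])
```

Here `doc_b` rises to the top of the fused list because it appears near the head of both rankings, which is the intuition behind combining complementary search strategies.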
2. What makes agentic Deep Research different from GenAI summarization?
When researchers see Deep Research for the first time, they often want to know what is behind this higher degree of sophistication. As explained above, Deep Research uses a mixture of techniques that allow our agents to retrieve information, undertake a process similar to basic human reasoning, then explore. We have developed agents to work specifically with academic content that perform tasks like meta-analysis, identifying uncommon or unconventional links between the pieces of information surfaced – the interesting anomalies that could trigger new insights or signal knowledge gaps in specific fields. In other words, besides highlighting what we know, we also buck the trend of AI tools by trying to indicate what we don’t know.
3. When should I use regular summaries and when should I use Deep Research?
Talking to researchers and other users, we have been surprised by the number of distinct use cases for both the standard Scopus AI summaries and Deep Research. Although the decision about which approach to employ when will be informed by the working habits of the individual researcher, some basic guidance can be provided:
Deep Research - for complex, multi-step tasks that need synthesis across many documents
answering open, multi-domain questions
making cross-disciplinary connections
informing ideation and planning
identifying gaps and stimulating new directions
refining scope with filters like country, time, document type
Standard summaries - for orientation and overviews
learning a new topic quickly
getting a bird’s-eye view of a field
kickstarting literature reviews
formulating questions
running efficient natural-language searches
4. Deep Research reports are too similar to research papers and encroach on the role of human researchers
A statement rather than a question, but one that is usually delivered in a challenging tone – and rightly so. The importance and complexity of the relationship between the human researcher and AI tools will only grow as users become more adept and technologies advance. To make a small confession: in the early days of Scopus AI Deep Research, we did attempt to build a very comprehensive report that looked a little like an academic paper, but this was rejected by virtually all the researchers who tested it, sometimes in quite virulent terms. As recounted above, we then went back to first principles – quite literally – and created a report that does not set out to deliberately convince users of an “answer,” but provides a series of insights into a research space and the conclusions which we believe can be deduced from them. While many AI tools work hard to synthesize all the available viewpoints into a single fluent response, we know this can be misleading and are at pains to distinguish between all the available perspectives. In short, for all their new autonomy, our reports respect the integrity of the human researcher. They are designed to challenge critical thinking around a problem or a field – hence their success in educational as well as research settings.
Challenging thinking, not replicating it
Paradoxically perhaps, engagement with the frontiers of AI can foster a heightened sense of what it means to be human. Successfully developing Deep Research capabilities that emulate aspects of human reasoning only serves to highlight the limitations of the systems themselves. In the context of Higher Education, the onus is on enabling archetypally “human” qualities like creativity, empathy, moral judgement, and emotional intelligence in the context of research. The goal remains to challenge and support, rather than replicate, the original thinking of our users.
At the end of 2024, Google announced the "agentic era", while OpenAI’s Kevin Weil said agentic AI would be a "big theme in 2025", with both tech giants following up with agentic-based Deep Research tools (you may have read about Elsevier’s academia-focused iteration of Deep Research above).
But what is Agentic AI, and what does it mean for Higher Education?
To answer these and other questions, we’ve produced a comprehensive new guide to Agentic AI in academia. Full of practical advice, the guide explains how agentic AI moves beyond GenAI’s specialty of content creation by setting and actioning objectives, and by iterating and interacting with other tools and systems to support a user’s goals. There is a comparative overview of types of AI, an “under the hood” look at the workings of the technology, and an examination of the use cases for research and education. There is also a section of “Tips for choosing and using agentic AI,” which engages with areas like risk and responsibility and the need to keep “humans in the loop.”
So, like Walt Whitman, AI contains multitudes. While this pluralistic new world might feel overwhelming, the key to successful agentic adoption is much the same as the key to successful GenAI adoption – temper your excitement at the potential of the technology with caution, ethical awareness, and a healthy dose of pragmatism. Or as Yoav Shoham, professor emeritus at Stanford University puts it, “…we need to match the ambition with thoughtful design, clear definitions, and realistic expectations. If we can do that, agents won’t just be another passing trend; they could become the backbone of how we get things done in the digital world.”
With AI spreading rapidly across universities and students demanding a higher education that equips them for a changing workplace, we all know we need more AI literacy – but what is it?
The question isn’t as simple as it seems. We can probably all agree that staff and students should understand, use, and evaluate AI tools, but how much should we stress ethics and the wider human context? Is AI literacy a branch of the information literacy taught by librarians, or has it superseded it? And are we talking about formal training or a more inclusive cultural phenomenon?
It is perhaps not surprising that the academic AI literacy landscape is still remarkably diverse, with a range of approaches and frameworks on offer. While it’s important to move towards shared standards and protocols, it’s not necessarily a bad thing to see different ideas being tried out in different organizational contexts. In the words of Leo S. Lo, Dean of the College of University Libraries and Learning Sciences at University of New Mexico, "I like that people propose a new framework - but honestly, they are all very similar, so whichever you support will be OK. The point is to have something to work from instead of just ad hoc approaches.”
This quote comes from a new article that surveys the AI literacy strategies of three top library leaders. Showcasing best practice from the US and the UK, the piece also includes advice for developers and an overview of emerging challenges in the area. Key among these, says Josh Sendall, Director of Library Services at University of Leeds, is the need to ensure “…the technology is transparent and can leave room for creativity and critical human intelligence.”
We agree – and in the spirit of pragmatism espoused by the leaders interviewed in the article have included a section on safeguarding Human Critical Engagement in our new AI Literacy Checklist, a handy reminder sheet that can be displayed in physical libraries at the point of AI tool use. It feels particularly important to get in this cautionary last word because, perhaps, in the end, there are as many AI Literacies as there are users.
Join the community and receive future editions directly in your inbox
Sign up for our AI in Higher Education newsletter to receive updates on AI-related content, featuring thought-leadership, practical insights, and Elsevier's newest offerings in research and education.