
Between Progress and Responsibility: How the Conversation Around AI in Medicine Is Becoming More Real

6 May 2026

By Ben Beier

April 2026 once again highlighted how rapidly the conversation around Artificial Intelligence (AI) in medicine is evolving. While discussions only a few years ago focused mainly on future visions, today the emphasis is on practical implementation in clinical workflows, regulatory requirements, trust, and the responsible use of AI to support clinicians and improve patient care.

This shift became especially visible at two of Germany’s leading healthcare events: the DGIM Congress 2026 from April 18–21 and DMEA 2026 from April 21–23 in Berlin. Elsevier participated in both events and actively contributed to discussions on responsible AI in healthcare. A clear message emerged across both conferences: the question is no longer whether AI will be used in medicine, but how it can be integrated into clinical practice safely, responsibly, and on the basis of evidence.

AI Has Already Arrived in Clinical Practice

Conversations with physicians, healthcare leaders, medical students, and digital health experts highlighted how deeply AI is already embedded in everyday clinical environments. Many healthcare professionals are using AI-powered tools for research, for navigating complex medical information, and for supporting clinical reasoning.

At the same time, there is considerable uncertainty around which systems can truly be trusted, how outputs should be evaluated and what risks emerge from the use of unvalidated AI applications. One topic repeatedly discussed was “shadow AI,” referring to the growing use of publicly available AI tools outside regulated clinical systems.

This was where both events revealed the same underlying trend: healthcare organizations are not simply looking for AI. They are looking for solutions built around transparency, evidence and clinical safety.

The discussions increasingly moved beyond technological performance alone and focused on how AI can be responsibly integrated into patient care. Clinicians want to understand how systems work, which evidence they rely on and where their limitations lie. Especially in medicine, it is not enough for a system to appear effective. It must also be explainable, trustworthy and safe.

Trust Requires Transparency and Evidence

These questions also became a central focus during the symposium “Trustworthy AI in Patient Care” at the DGIM Future Forum. Following her keynote presentation, Dr. Laura Velezmoro of Charité discussed with attendees how trust in AI can be established and which conditions are necessary for meaningful adoption in healthcare.

The session explored topics including Large Language Models, Retrieval-Augmented Generation, and the opportunities and limitations of generative AI in clinical settings. A major emphasis was placed on the idea that medical AI systems cannot function as “black boxes.” Clinicians need to understand where information comes from, how answers are generated, and which evidence supports them.

This theme extended throughout both events. At DGIM and DMEA alike, the conversation clearly shifted away from broad AI visions toward concrete requirements for responsible implementation.

Laura Velezmoro also highlighted this perspective in discussions surrounding the congress. She emphasized that AI systems are still frequently evaluated primarily based on diagnostic accuracy, almost like medical school exam questions. Ultimately, however, the more important question is whether patient outcomes actually improve.

As a result, topics such as clinical safety, the risks of incomplete or misleading recommendations, and the importance of evidence-based systems became far more prominent. Participants also noted that combining humans and AI does not automatically produce better results. What matters most is that clinicians understand how to work effectively and critically with these technologies.

AI as a Compass in an Increasingly Complex Information Landscape

The changing role of AI in clinical medicine was also reflected in the DGIM Congress TV interview titled: “AI as a Compass: Can It Guide Young Physicians Through the Flood of Clinical Information?”. The discussion featured Dr. Christian Becker, senior cardiologist at University Medical Center Göttingen and spokesperson for the Young DGIM working group, alongside Melissa Jasarevic, Regional Manager DACH at Elsevier.

A central topic was how clinicians can navigate increasingly complex medical information environments. Dr. Becker emphasized that medicine continues to become more specialized and nuanced, making reliable and rapid access to information essential for physicians.

He particularly stressed the importance of transparent evidence:

"The major advantage is that sources are transparently referenced and the evidence can be directly verified."

At the same time, Dr. Becker openly addressed the risks associated with publicly available AI tools in medicine. The growing prevalence of shadow AI clearly demonstrates that demand for AI solutions already exists. This makes it even more important to provide secure and compliant alternatives specifically designed for clinical use.

The role of future physicians was another key topic. AI literacy, Dr. Becker argued, must become part of medical education so that young physicians learn not only how to use AI tools, but also how to critically assess their limitations and outputs.

The full interview can be viewed here: DGIM Kongress TV Interview

From Vision to Measurable Impact

While DGIM focused strongly on clinical care perspectives, DMEA highlighted how healthcare organizations are increasingly searching for concrete, scalable and integrable AI solutions. Interestingly, the core questions remained remarkably similar: How can AI be integrated into existing workflows? How can trust be established? And how can clinicians be supported without removing human oversight and responsibility?

At DMEA, Elsevier also presented findings from its collaboration with the Italian healthcare organization ASL Bari. The study evaluated the impact of ClinicalKey AI on clinical decision making, workflow efficiency and patient care delivery.

The results demonstrate that the conversation around AI has moved well beyond theoretical potential. Following implementation of ClinicalKey AI:

  • 84% of clinicians reported medium to high confidence in clinical decision making.

  • 78% reported improvements in their ability to manage patient care.

  • 86% found diagnostic information within ten minutes or less.

  • More than 300 physicians used the platform and submitted over 10,000 clinical queries during the evaluation phase.

Importantly, ClinicalKey AI was not viewed as a replacement for clinical expertise, but rather as a tool to support evidence-based clinical reasoning. This perspective strongly shaped many of the conversations across both events.

More about the study: ASL Bari Study

Partnership with DGIM: Shaping Evidence-Based AI Together

The partnership between Elsevier and the German Society of Internal Medicine (DGIM) provides an important framework for advancing these conversations. Together, both organizations aim to actively shape the responsible use of AI in internal medicine and bring evidence-based solutions into clinical practice.

DGIM members receive complimentary access to ClinicalKey AI through the end of the year and can explore the platform directly within their daily clinical work. More information about the partnership is available here.

Non-members can also explore and test ClinicalKey AI through a free demo experience: try now for free.

The discussions over the past several weeks have shown that demand for trustworthy and transparent AI solutions continues to grow rapidly. At the same time, they also made clear that technology alone is not enough. What matters is how AI is implemented, evaluated and ultimately used in clinical care.

Because one thing became evident at both DGIM and DMEA: AI will fundamentally reshape medicine. How successful this transformation will be depends largely on whether innovation can be combined with evidence, transparency and clinical responsibility.

Contributor

Ben Beier

Communications and Marketing Specialist

Elsevier

Read more about Ben Beier