Commentary · Ethics, Regulation, and Responsible Use

Why we all belong in the AI conversation

April 01, 2026 · By Vishnu Priya Pulipati, MD, FACE, DipABCL, Deputy Editor · 3 min read

Artificial intelligence (AI) is already part of the healthcare ecosystem, whether we participate or not. The question is no longer if we will encounter it, but how we choose to engage with it.

My first encounter with artificial intelligence in healthcare was marked by hesitation rather than enthusiasm. When AI first appeared within my clinical environment, embedded in the electronic medical record, it felt unfamiliar and difficult to trust. The idea that algorithms could assist in interpreting clinical data or influencing medical decisions raised more questions than confidence. At the core of that hesitation was a principle deeply ingrained in medicine: first, do no harm. Could I trust it enough to incorporate it safely into patient care?

Beyond safety, there were other concerns. Would reliance on AI erode clinical judgment? Could it make us less essential? Would AI amplify the burden of medical misinformation? And how could I realistically review large volumes of AI-generated output for accuracy in an already busy schedule?

There were also ethical questions. How do we ensure accuracy, avoid bias, and maintain accountability when decisions are influenced by algorithms? And more broadly, what kind of healthcare system were we moving toward?

At that stage, my use of AI was limited. I treated it as a sophisticated search engine, not as something that could meaningfully intersect with clinical care.

But that began to change.

Artificial intelligence is now embedded across the systems we use daily, including electronic medical records, education, and clinical discussions. Its presence is no longer optional.

For many clinicians, the language of AI can feel disconnected from training. But medicine has always required pattern recognition, probabilistic thinking, and decision-making under uncertainty. In that sense, AI is not entirely foreign. It is another way of processing information, at a different scale. These tools can process data, but their role in clinical judgment, context, and human connection is still uncertain.

What makes this moment different is the speed of integration. These tools are entering practice faster than traditional education can keep up. As a result, clinicians are encountering AI before fully understanding it.

This creates a tension. There is hesitation, with valid concerns about accuracy, bias, and ethics. At the same time, these tools are beginning to shape how care is delivered.

It is within this tension that participation becomes necessary.

Clinicians do not need to become experts in AI. But we do need to become informed participants. Understanding the basics, questioning outputs, and recognizing limitations may be a good start. Engagement also means using these tools with appropriate caution and responsibility.

My own perspective has shifted from skepticism to curiosity. I have started exploring AI in small, practical ways, not to replace clinical thinking, but to understand where it fits.

Artificial intelligence will continue to evolve within healthcare. Its role will expand, its capabilities will improve, and its limitations will become clearer over time. The future of AI in medicine is still being written.

But choosing to remain part of the conversation, rather than outside of it, may shape not only these tools, but the future of healthcare itself. Ultimately, the goal is not technology itself, but better patient care and outcomes. If we are not part of the conversation, we may not be part of the decisions that follow.


AACE Endocrine AI is published by Conexiant under a license arrangement with the American Association of Clinical Endocrinology, Inc. (AACE®). The ideas and opinions expressed in AACE Endocrine AI do not necessarily reflect those of Conexiant or AACE. For more information, see Policies.
