AI in endocrinology: Promises, risks, and responsibilities
These are transformative times for endocrinology as artificial intelligence (AI) rapidly advances, offering new tools to enhance diagnosis, personalize treatment, and expand clinical capabilities.
However, with these opportunities comes a responsibility for physicians to remain informed and vigilant. AI systems are not without limitations, and their integration into the field raises many concerns.
“No one really knows where AI is heading next—only that it’s moving fast,” says Vishnu Priya Pulipati, MD, FACE, DipABCL, Deputy Editor of AACE Endocrine AI. “Our field has an obligation to pause deliberately and engage in broader conversations about how AI is currently used and how it should be integrated into practice moving forward.”
In this context, Dr. Pulipati highlights the most urgent issues endocrinologists need to understand, and the key questions that must be addressed as AI continues to evolve.
Understand That AI Has Limits
AI has already shown that it's far from perfect, and those imperfections can have real consequences if we're not careful. One of the most pressing concerns is "hallucinations": AI can produce incorrect information without recognizing or flagging the error, sounding confident even when it is wrong. That creates an ethical risk. In endocrinology, this could translate into insulin dosing errors or misinterpretation of guidelines, leading to real patient harm.
Another concern is bias. Medical data is already uneven across ethnicities, ages, and genders, and AI learns from that same data. If the input is not representative, the output will not be either, which can worsen disparities in care.
There is also the risk of over-reliance. If we depend too much on AI, it can erode clinical reasoning, especially among early-career physicians. Growth in medicine comes from thinking through problems, learning from mistakes, and refining judgment. If AI does that thinking for us, we risk losing that process. There may not be a perfect solution to these concerns, but there is a way to approach them, and it starts with all of us joining the conversation and participating in our own ways. Clinicians should treat AI as a tool that supports, not replaces, clinical thinking.
The responsibility does not shift to AI; it stays with us. That means using AI outputs as a starting point, applying clinical judgment before acting, and maintaining a healthy level of skepticism, especially in high-stakes decisions. It also requires staying grounded in core knowledge, being mindful of data bias, and using AI deliberately rather than by default. The takeaway is simple: AI can enhance care, but only if physicians remain actively engaged, thoughtful, and accountable in every decision.
AI Cannot (and Should Not) Replace the Human Aspects of Care
AI cannot individualize care the way physicians do because it does not truly know the person it is treating. While it can process data, it cannot understand lived experience, preferences, or the subtle context that shapes decisions.
Physicians go beyond the chart, building relationships over time, understanding what matters to patients, and recognizing what is not explicitly said. That human connection and clinical intuition often change the course of care and cannot be replicated.
AI can support standardized care and improve efficiency, but it cannot carry out the responsibility of the physician–patient relationship. Even as AI advances, it should augment, not replace, decision-making. Human oversight remains essential, as clinicians bring the judgment, context, and expertise needed to determine what truly fits the patient in front of them.
There Are Serious Privacy Concerns
Privacy has long been a concern in medicine, and AI adds a new dimension: we still do not fully understand what happens to patient data once it is entered into these platforms. There are real concerns about how data may be accessed, interpreted, or even misused, especially when patients may not want their information shared beyond the clinical encounter.
Entering sensitive information into large, global systems without clear transparency creates a legitimate risk. The takeaway is caution with intent, not avoidance. Endocrinologists should not assume that AI platforms are inherently safe or compliant, particularly with respect to identifiable patient data. Until there is clear transparency and regulation, protected health information should not be entered into general AI tools, and use should be limited to de-identified or hypothetical cases.
AI can still be valuable for learning, organizing thoughts, and summarizing guidelines, but it should not function as a repository for real patient data. At the same time, clinicians should stay informed about institutional policies, approved platforms, and evolving regulations. The goal is to engage with AI responsibly while prioritizing patient trust and confidentiality, recognizing that protecting data remains a core responsibility that cannot be outsourced.
There Are Gaps in Regulation and Oversight
AI is evolving at a pace that far outstrips our policies, safeguards, and regulations. Unlike traditional medical advances that unfold over years, AI is advancing in months, sometimes even faster. That gap creates real uncertainty around regulation, liability, and safe use. If AI contributes to patient harm, responsibility is not clearly defined, raising important questions about accountability and malpractice.
At the same time, we need to establish clear workflow boundaries. There are areas where AI can be helpful and appropriate, such as documentation or organizing clinical information. But there are also areas where caution is necessary, especially when it comes to direct patient assessment or treatment recommendations. Defining where AI should assist and where physicians must lead is not just a technical decision; it is a clinical and ethical one that we need to actively shape.
Do Not Fear the Unknown
Amara’s Law reminds us that while we may overestimate the short-term impact of technology, we often underestimate its long-term effect. The uncertainty around AI should not push us away from it. If physicians step back and wait for every concern to be resolved, AI will still evolve, just without clinical input guiding it toward patient-centered care.
Continued engagement is how we shape that future. By using AI thoughtfully, questioning its outputs, and participating when it is applied in clinical settings, physicians develop both familiarity and accountability. That involvement creates a sense of ownership, which is essential for ensuring AI is used safely, ethically, and appropriately. In other words, engagement is not just participation; it is influence.
AACE Endocrine AI is published by Conexiant under a license arrangement with the American Association of Clinical Endocrinology, Inc. (AACE®). The ideas and opinions expressed in AACE Endocrine AI do not necessarily reflect those of Conexiant or AACE. For more information, see Policies.