One recent Sunday afternoon, I watched my son take part in one of medicine's long-standing traditions: the white coat ceremony at the University of Chicago. I saw him stand tall and recite words that doctors have spoken for hundreds of years: "First, do no harm." As a physician, I have been guided by this principle through every diagnosis, surgical procedure, and difficult conversation.
Yet as I left the auditorium, one thought kept returning: In 2025, another profession will shape human health, intellect, and destiny with an impact similar to, or potentially greater than, mine or my son's.
That group is AI engineers, who are building systems that will touch every aspect of human existence, including our well-being, our security, how we care for one another, and the fundamental factors that determine how long we live, for decades to come.
The responsibility inherent in their work cannot be overstated. AI engineers have the capacity to achieve remarkable good, such as closing educational gaps, enabling medical breakthroughs, and accelerating climate solutions. But if they neglect the human beings their creations are meant to serve, they can inflict significant harm.
AI is more than a novel gadget or a digital diversion. It is the first technology engineered to imitate, and substitute for, the core language and interactions that have shaped human evolution over thousands of years.
Moreover, AI is progressing at an astonishing rate, with its capabilities roughly doubling every seven months since 2019. If that pace holds, by 2030 AI might finish in days tasks that now demand months of human intellectual effort. Many AI experts have suggested the technology could reach human-level general intelligence within a few years.
Yet unlike medical students, AI engineers finish their training without ever committing to the principle of "first, do no harm."
Nobel laureate Geoffrey Hinton, a foundational figure in AI, has warned that the systems we build today may evolve beyond our capacity to control. That is why I contend, along with many AI researchers and ethicists, that we need a "Hinton Oath": not to impede engineers, but to guide them with the same ethical clarity that has anchored medicine for centuries.
A Hippocratic Oath for AI engineers
In the Hippocratic Oath, graduating medical students vow the following:
- I will apply my knowledge and expertise to assist the ill to the utmost of my capability and discretion; I will refrain from causing injury or injustice to any individual through my actions.
- I will uphold the self-governance and inherent worth of those under my care.
- I will not hesitate to admit ignorance, nor will I neglect to seek the assistance of fellow professionals when their particular skills are required.
- I will endeavor to prevent illness whenever possible, recognizing that prevention is superior to treatment.
In the Hinton Oath, AI engineers could make parallel promises, such as:
- I will utilize my abilities to enhance human existence, rather than detract from it.
- I will prioritize human autonomy and interpersonal bonds when facing uncertainty regarding potential harm.
- I will bear in mind that each collection of data embodies an individual’s narrative, and afford it due respect.
- I will openly acknowledge the origins of my training data and maintain transparency regarding the information employed by my systems.
- I will engineer systems that reinforce, rather than undermine, human capacities and discernment.
- I will refrain from developing instruments that are beyond supervision, regulation, or reversal.
- I will not build systems designed to deceive, manipulate, or distort reality, such as deepfakes and fabricated identities.
- I will pursue collaborative efforts, openness, and the enduring well-being of humanity and the Earth over immediate profits.
- Prior to deployment, I will inquire: Who stands to gain, and who faces potential hazards? Have we evaluated the long-term repercussions extending beyond our present application?
As a surgeon, I have drafted this oath as a starting point, offering it to those who grasp the field's intricacies more deeply than I do. AI and medicine are distinct disciplines, but technologies like AI undeniably shape our health. Each algorithmic recommendation modifies synaptic pathways. Every interaction with AI alters dopamine release. Every tailored feed reconfigures attention. In medicine, substances that cross the blood–brain barrier are scrutinized precisely because they can alter brain function. Yet AI systems, with neurological effects just as significant, enter our daily routines largely unexamined.
Furthermore, AI's influence will reach well beyond our cognitive processes. It will help define how we work and who prospers economically. It will sway legislation, legal frameworks, and geopolitical stability. It will drive environmental strategy, scientific discovery, and cultural evolution. These are not side effects; they are primary outcomes. That is why a guiding ethical framework is increasingly critical.
The question is not whether AI will transform human society, but whether its creators will wield that power thoughtfully.
Many aspiring engineers enter this field for the same reason my son chose medicine: to change the world for the better. When AI is developed with a commitment to the collective good, its potential is immense: achieving good outcomes faster than any doctor could, addressing global challenges, accelerating progress across fields, and equipping people with new capabilities. If we disregard this responsibility, however, AI could just as readily become a source of harm, eroding trust, deepening inequities, or diminishing the very traits that define humanity.
For that reason, AI deserves a solemn oath, modeled on the Hippocratic Oath and centered on one core tenet: power without ethical grounding is dangerous.
At my son's white coat ceremony, physician parents were invited to recite the oath alongside their children. As my son and I spoke those enduring words together, I felt the profound pull of joining something larger than ourselves: a legacy that binds every physician throughout history in dedication to human welfare. AI engineers deserve that same sense of mission, that same bond to something greater than any single project or company.
My hope is that, before long, watching a young AI engineer take the Hinton Oath will stir the same emotion and reassurance as watching my son don his white coat.