
AI & Medical Gaslighting



As an English major, I may not be expected to have an interest in the medical field, but in reality it is one of my biggest research interests, particularly the medical humanities. The “medical humanities” refers to the intersection of humanities disciplines such as English and other “soft skill” subjects with medicine, and to how those disciplines inform discussions about the medical field. This year, I have been part of the Fresno State College of Arts & Humanities (CAH) Honors Program, completing a large project in this area of study, specifically in relation to medical gaslighting.


My project is an investigative study of medical gaslighting, including its history and analyses of different kinds of cases and experiences people have had with it. Through my research, I have concluded that while technology such as ChatGPT may be helpful in certain aspects of the medical field, it may be harmful when it comes to the issue of medical gaslighting.


“Medical gaslighting” refers to the act of dismissing or questioning the validity of a person’s medical experience; the term now usually applies to patients of marginalized status, such as women or, more specifically, women of color. It is often done by a medical professional in a medical setting such as a doctor’s office or hospital, and it can have extreme consequences: many patients have died or had their quality of life significantly altered because their symptoms were misdiagnosed or dismissed.


When I first considered the effect of ChatGPT on the medical field, one of my first thoughts was that it would negatively affect patients by applying the same standard to everyone, without considering race, gender, and other factors that affect a patient’s susceptibility to different illnesses and diseases. For example, the commonly recognized symptoms of a heart attack are typically those seen in men; women’s heart attack symptoms often go unnoticed or are ignored when brought to someone’s attention. As a result, a doctor who does not consider this fact may dismiss a woman’s heart attack symptoms because he was only taught to recognize a man’s.


This is where ChatGPT can be so harmful: while an AI system itself may be unbiased, the information it receives is not. When ChatGPT is trained on information that is racially biased or otherwise biased against marginalized communities, it will produce results that reflect those biases. If ChatGPT learned only men’s heart attack symptoms, for example, and a woman presenting heart attack symptoms were diagnosed by ChatGPT or a similar program instead of by a doctor who would recognize those symptoms for what they are, it would produce incorrect results. This could delay the woman’s recovery from the heart attack or, worse, prove fatal to her and to other patients of similarly marginalized status.
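To make this idea concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from my project or from how ChatGPT actually works; it simply illustrates the general principle that a model trained on skewed data gives skewed answers. The data is entirely synthetic, and the feature names (chest pain, arm pain, fatigue, nausea, shortness of breath) are assumptions chosen only for illustration.

```python
# A hypothetical illustration: a classifier trained only on "classic" (male-typical)
# heart attack presentations tends to miss an atypical presentation.
# All data below is synthetic and invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Feature order (hypothetical): [chest_pain, left_arm_pain, fatigue, nausea, shortness_of_breath]
X_train = np.array([
    [1, 1, 0, 0, 1],  # labeled heart attack: chest pain, arm pain, shortness of breath
    [1, 0, 0, 0, 0],  # labeled heart attack: chest pain only
    [1, 1, 0, 0, 0],  # labeled heart attack: chest pain, arm pain
    [0, 0, 1, 0, 0],  # labeled not a heart attack: fatigue only
    [0, 0, 0, 1, 0],  # labeled not a heart attack: nausea only
    [0, 0, 1, 1, 0],  # labeled not a heart attack: fatigue and nausea
])
y_train = np.array([1, 1, 1, 0, 0, 0])  # 1 = heart attack, 0 = not

model = LogisticRegression().fit(X_train, y_train)

# A presentation more common in women: fatigue, nausea, shortness of breath,
# no chest pain. The model has never seen this pattern labeled as a heart
# attack, so it will likely predict 0 (a missed diagnosis).
atypical_patient = np.array([[0, 0, 1, 1, 1]])
print(model.predict(atypical_patient))        # likely [0]
print(model.predict_proba(atypical_patient))  # likely a low probability for class 1
```

The point of the sketch is not the code itself but the pattern it shows: the model is not “prejudiced” in any human sense, yet because its training data only reflected one group’s presentation of illness, its answers systematically fail another group.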


One of the main points of my project is that doctors have become too accustomed to treating patients as a set of symptoms rather than as actual people; this keeps them from considering the factors affecting a patient and from determining a correct course of treatment in light of those factors. ChatGPT takes this to an extreme. Because it is not human, it will not see a patient beyond their set of symptoms (and perhaps a few main characteristics, if given that information); it sees a patient as a sort of equation to solve rather than as a human being whose characteristics affect the treatment they receive from doctors.


Additionally, one of the largest reasons medical gaslighting occurs in the first place is how doctors were taught in medical school, especially those of older generations, when prejudiced views were encouraged and passed on to many doctors. For example, it was not very long ago that people falsely believed African-Americans feel less pain than white people. If ChatGPT is fed misinformation that discriminates against people of marginalized status, it may perpetuate such stereotypes.


In short, ChatGPT can have many benefits, but in the area of medical gaslighting it has the power to be extremely harmful, especially because many people will not understand how an AI system can be biased. When it comes to a doctor, people can accuse him or her of being prejudiced because of personal views, but they believe that since ChatGPT is a technology, it cannot be biased or prejudiced in any way. This ignores the fact that the information ChatGPT receives can itself be biased and prejudiced, which makes the program’s output biased as well. If people falsely believe ChatGPT cannot perpetuate negative and harmful stereotypes or misinformation, they will come to idolize it as something that is always correct.


One of the main reasons medical gaslighting occurs in the first place is that people idolize doctors as experts who know everything and never get anything wrong because they went through so many years of medical school. While it is true that doctors have a great deal of knowledge and training, they are not superhuman; they are still prone to error, and to deny that a doctor can ever be wrong is to label them as perfect when they are not. Doctors are smart, but they are susceptible to mistakes, especially when it comes to stereotyping or being unaware of the prejudices faced by marginalized communities. ChatGPT only compounds this: many people believe tools like ChatGPT are infallible, that they can never make a mistake. That belief can lead to the dismissal of patients who have been gaslit in a medical setting, with disastrous consequences.
