Is artificial intelligence polluting journalism and medicine?
Thoughtful analyses in both professions
This article, in the Bulletin of the Atomic Scientists, is written by Susan D’Agostino, a journalist who has reported on artificial intelligence for years, and who earned her PhD in mathematics from Dartmouth College. (So much for journalists often being math-phobic!) She sets up her premise this way:
Headlines about AI in journalism swing between clickbait panic and sober alarm. They can feel speculative, even sci-fi—but also urgent and intimate:
“Your phone buzzes with a news alert. But what if AI wrote it—and it’s not true?” an editor at The Guardian wrote.
“It looked like a reliable news site. It was an AI chop shop,” two reporters at the New York Times wrote.
“News sites are getting crushed by Google’s new AI tools,” two reporters at the Wall Street Journal wrote.
Misinformation is hardly a modern invention, but with AI as an amplifier, it now spreads faster, adapts smarter, and arguably hits harder than before.
At roughly 2,500 words, it’s too long - and too good - for me to do it justice with excerpts, so I urge you to read it yourself. I was intrigued by her closing, in which she compared AI-enabled misinformation with climate denialism - “not merely a failure of facts.” I thought many of my readers would be interested in this article, and I’m guessing that few of you regularly read the Bulletin of the Atomic Scientists.
AI in medicine - competing claims
A recent Perspective article in the New England Journal of Medicine raised these concerns:
Excerpt:
Medical AI tools, including those that are introduced with a goal of improving patient care, also create a glide path for turning clinicians into “quantified workers” — workers whose daily tasks are monitored and controlled by AI technologies, which denies them autonomy and the benefit of discretion based on their expertise.
For example, could AI:
be used to assess how often clinicians’ recommendations deviate from institutional guidelines?
detect clinicians who spend more time conversing with patients than employers consider ideal?
evaluate clinicians’ tone, response time, and diagnostic reasoning in messages with patients on electronic portals to push alignment with what the health system views as “ideal”?
point to potential lapses in attention during a procedure that could come up during malpractice investigations?
The authors - from Harvard and Emory - point to a possible "Clinician Bill of Rights for AI" that “could encourage hospitals and health care systems to voluntarily adopt protections against problematic uses of AI.”
On the other hand, the Annenberg Public Policy Center at Penn released new survey data.
Some results:
Most (79%) U.S. adults say they’re likely to look online for the answer to a question about a health symptom or condition.
Three-quarters (75%) of people who search online say that AI-generated responses provide them “sometimes” (45%) or “often or more” (31%) with the answer they need.
Most Americans (63%) think AI-generated health information is somewhat (55%) or very (8%) reliable.
Nearly half (49%) are not comfortable with health care providers using AI tools rather than their experience alone when making decisions about their care.
“Despite the disclaimers that accompany some AI-generated summaries, there is potential for confusion and even harm among vulnerable individuals if they are not aware that these responses are not a substitute for the personalized expert health guidance that their health care provider can offer,” says Kathleen Hall Jamieson, director of the Annenberg Public Policy Center.
(See Judith Garber’s note in the Comments section below for another contrasting perspective on patients’ views of doctors who use AI.)
Headlines like these were bound to be controversial:
Time Magazine: Microsoft’s AI Is Better Than Doctors at Diagnosing Disease
The Guardian: Microsoft says AI system better than doctors at diagnosing complex health conditions
My favorite response to these headlines came from emergency physician Rick Pescatore on his LinkedIn page. He called it:
10 Things This AI Got Dead Wrong About Emergency Medicine
1. It’s solving for the wrong problem.
I don’t need an AI to ace rare case reports.
I need help getting a psych eval on a Friday night.
Finding a shelter bed. Getting insulin covered before discharge.
2. It wasn’t tested in real life—it was handed a quiz.
The AI got perfect, clean data from a textbook.
Room 3? He’s drunk, septic, schizophrenic—or all three.
Doesn’t know his meds. He’s peeing blood on the floor.
3. Diagnosis is easy. Disposition is war.
You figured out it’s Crohn’s. Cool.
She has no insurance, no ride, and no GI consult.
You didn’t solve anything. You just gave it a name.
4. When AI fails, nothing happens.
When I fail, I lose my license.
Or my sleep. Or the patient.
No model testifies. I do.
5. I work in a system designed to collapse.
One psych bed across three counties.
No social worker after 5 PM.
A kid seizing, and the peds ICU is 3 hours away.
6. There’s no CPT code for trust.
But I earned it three times before noon.
From a refugee mom.
From a woman who finally whispered what her husband did.
From a man who came in for “chest pain” and left with Narcan and a second chance.
7. Real medicine isn’t clean.
It’s duct tape, instinct, and risk.
It’s calming chaos with your hands and your eyes.
It’s keeping someone alive until the sun rises and the crisis resets.
8. Misdiagnosis isn’t the biggest failure.
Abandonment is.
By hospitals. By insurers. By systems.
Now by Silicon Valley, chasing PR in a broken house.
9. If you want to help, stop showing off. Start showing up.
Build AI that finds a bed.
Explains a plan in Spanish.
Pre-authorizes meds.
Discharges safely. Follows up.
10. The future I want?
One where I don’t have to choose between treating the pain
or charting a discharge that won’t get them killed.
Where AI doesn’t try to replace me—
but finally, finally—has my back.
AI passed a test.
I worked a shift.
They’re not the same job.
Because no AI was there at 2:43 AM
when the bleeding started again.
No AI called the mother.
No AI cleaned the gurney.
Obviously and understandably, all of the AI hype rubs people in the trenches a little raw.
The Annenberg Public Policy Center’s finding that most people think AI health advice is somewhat reliable is especially interesting when compared with a recent JAMA study finding that patients view doctors who use AI as less competent and less trustworthy. https://www.medpagetoday.com/practicemanagement/practicemanagement/116560