The post Women’s Health AI Has No Standards—Until Now appeared on BitcoinEthereumNews.com.

Women’s Health AI Has No Standards—Until Now


Panel discussion at the launch of the Women’s Health AI Consortium. Left to right: Ethan Cowan, AI engineer and technical startup advisor; Morgan Rose, chief science officer at Ema EQ; Audrey Tsang, former CEO & CPO, Clue; Inessa Lurye, VP of product at ŌURA; and Jennifer Yoo, healthcare regulatory & transactional partner at Latham & Watkins.

Geri Stengel, Ventureneer

A woman gives birth. For weeks, she sees her obstetrician regularly. She leaves the hospital. Follow-up visits shift to the pediatrician. Then comes one final OB appointment, six weeks postpartum—and after that, nothing. No clinical check-in for pelvic floor recovery, mood shifts that might signal postpartum depression, or questions too private to ask anyone. In that vacuum, she opens an app.

The app is powered by a large language model (LLM). The AI was trained on data that wasn’t designed with her in mind, validated against benchmarks that weren’t built for her biology, and governed by—nothing. On May 12, 2026, a coalition of companies including Willow Innovations and Ema EQ announced the Women’s Health AI Consortium, the first industry body dedicated to changing that. It arrives at a moment when the gap between AI’s reach and AI’s accountability has never been wider.

The Gap AI Is Being Asked To Fill

Sarah O’Leary, CEO of Willow—the company that invented the wearable breast pump—describes the postpartum reality without euphemism. After birth, a woman “sees her OB once more, if that, about six weeks in, and she is truly left to navigate this often traumatic and very intense healing and transformation experience alone.”

Women using Willow’s app, which integrates Ema EQ’s AI, are asking about pelvic floor recovery, when it is safe to resume sex, and why they don’t feel like themselves.

The problem runs deeper than any single product. O’Leary reaches for an analogy from Willow’s own history. Breast pumps have existed for decades. They “technically extract milk. They do fine at that. But they were never built with sort of true empathy and a centering of the person using them,” she says. Women’s health AI risks replicating that same failure—at speed, at scale, and into far more consequential decisions. “We have major systemic gaps that layering AI on top of isn’t going to fix,” O’Leary warns.

“There Are No Standards. Period.”

Morgan Rose, chief science officer of Ema EQ and a Consortium co-founder, does not soften the governance picture. “There are no standards. Period.” Not voluntary ones. Not sector-specific ones. The professional body closest to the problem—ACOG—is beginning to partner with AI companies focused on clinicians. What does not exist is any framework governing what AI tells the woman on the other end of the app about her postpartum recovery, her fertility, or her escalating symptoms.

The performance data makes that absence concrete. Researchers who evaluated 13 leading AI models on women’s health tasks found approximately 60% failure rates across all LLMs. Every model failed most on “missed urgency”—the moments when a woman’s situation was most dangerous. A follow-up 2026 benchmark found no LLM exceeded 75% accuracy on women’s health clinical scenarios. The root cause, Rose notes, is structural. “If you’re training on insufficient data or under-researched data, then you’re going to have a bad output,” she explains.
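The kind of benchmarking described above can be sketched in a few lines. The snippet below is a hypothetical illustration, not the published benchmark: the model names, scenario categories, and pass/fail records are invented, but the aggregation shows how a per-category failure rate—such as the “missed urgency” rate the researchers flagged—would be computed from labeled evaluation results.

```python
from collections import Counter

# Hypothetical evaluation records: (model, scenario_category, passed).
# All names and outcomes here are illustrative placeholders.
results = [
    ("model_a", "missed_urgency", False),
    ("model_a", "dosage_guidance", True),
    ("model_a", "missed_urgency", False),
    ("model_b", "missed_urgency", False),
    ("model_b", "dosage_guidance", False),
    ("model_b", "missed_urgency", True),
]

def failure_rates(records):
    """Return the failure rate per scenario category, pooled across models."""
    totals, failures = Counter(), Counter()
    for _model, category, passed in records:
        totals[category] += 1
        if not passed:
            failures[category] += 1
    return {cat: failures[cat] / totals[cat] for cat in totals}

print(failure_rates(results))
# {'missed_urgency': 0.75, 'dosage_guidance': 0.5}
```

A real benchmark would also break rates out per model and weight scenarios by clinical severity, but the core measurement is this simple: without an agreed scenario set and pass criteria, no such number can be compared across vendors—which is exactly the gap a standards body fills.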

The failure precedes any model. Rose describes the Consortium’s starting point as tracing existing AI failures back to their infrastructure origins. In one case study her team reviewed, one major LLM reduced suggestions that women seek care while another produced comparatively less biased outputs—a difference traceable to decisions made before a single clinical scenario was tested.

No Enforcement, By Design—And Why That May Be Enough

The Consortium has no regulatory authority. Companies can ignore its standards entirely. O’Leary and Rose are not only aware of that, they have thought carefully about whether it matters.

O’Leary draws the comparison to B Corp certification and organic food labels, both voluntary standards that created reputational pressure long before regulation arrived. “It’s too urgent to wait,” she contends. The goal is to build what she describes as a “B Corp type brand” that the industry eventually cannot afford to ignore, and that ultimately compels legislative action.

Rose reaches for a different but compatible model: the Environmental Working Group’s Dirty Dozen list, which reshaped consumer behavior around pesticide exposure without a single regulation being passed. Her read of the current federal posture is direct: “Our government, in particular right now, is pretty loose and favorable of AI.” The market has to move first.

What neither voluntary standards nor social pressure resolves is what happens when AI causes harm. As reported in “When Healthcare AI Harms Women, No One Is Accountable in the U.S.,” the liability vacuum is structural. Developers point to clearance requirements; hospitals rely, in good faith, on FDA-cleared tools; and physicians absorb the legal exposure for decisions the tools shaped. No AI system has been held legally accountable before any U.S. court. Rose’s aim for the Consortium is to generate enough social pressure that companies meet a standard before fragmented accountability produces a catastrophic public failure.

“We need people to build with integrity,” Rose stresses. “Until we have a standard, we have to just be vocal about setting some sort of benchmarking so that people at least feel the pressure to do so if they’re going to be in the space.”

The pressure may already be building from an unexpected direction. During reporting for this article, a physical therapy intern in her twenties—unprompted, unaware of the topic at hand—volunteered that she didn’t trust women’s health AI. Rose recognized the signal immediately. “I’ve had so many conversations with Gen Zers that are very skeptical and not using AI. There’s going to be, even just generationally, what the push and pull is between what’s acceptable and what’s not.”

The Standard Hasn’t Been Met Yet

The Consortium’s six governance commitments—ethical standards, bias reduction, emotional and clinical quality, longitudinal intelligence, mentorship for ethical AI builders, and transparent oversight—address each documented failure mode. Its governance board spans clinicians, technologists, ethicists, and legal experts.

Amanda Ducach, CEO of Ema EQ, frames the founding logic plainly: “The stakes are too high to leave standards to chance. Women deserve AI that is clinically safe, culturally aware, and designed with them, not just for them.”

Rose is clear that the Consortium’s ambition exceeds what any member company already does. “Our goal is not to just have this be a badge of approval for what we’ve already done. That would be pointless.” The benchmark the Consortium is working toward should push every member, including its founders.

O’Leary puts the broader obligation in equally direct terms: “Women’s health AI is moving at a pace that demands immediate, coordinated accountability. This Consortium gives the industry a clear, shared standard, one that is built on evidence, reflects lived experience, and holds every tool accountable to the women it serves.”

The physical therapy intern didn’t need a benchmark study. She already knew the standard wasn’t being met. The Consortium’s job—voluntary, ungoverned, and urgent—is to prove her wrong before the tools she doesn’t trust become the infrastructure she can’t avoid.

Source: https://www.forbes.com/sites/geristengel/2026/05/13/60-ai-failure-rate-in-womens-health-standards-are-coming/
