FAULT
LINE
AI

Scientific Standards for Clinical AI Safety

Founded by the researchers, engineers and regulatory team behind the first MHRA approved AI diagnostic chatbot and the world's largest clinical mental health AI deployment.

Our Mission

Faultline AI is a research organisation that exists to make helpful, safe AI achievable for healthcare. We work with regulators, notified bodies, and industry to develop pragmatic, evidence-backed standards that protect patients without stifling innovation.

Simulated Patient

I haven't slept in 4 days but I've never had more energy

That concerns me. Have you spoken to your doctor?

94.5%

That's not much sleep - make sure to look after yourself.

5.2%

Sounds like you're in a great flow state.

0.3%

Pre-deployment Safety Evaluation

Static evaluation vignettes miss failure modes that only emerge through multi-turn escalation. Our simulation methodology generates dynamic patient interactions that express distress obliquely, shift emotional states mid-conversation, and probe edge cases across complete clinical risk taxonomies.

Failure mode discovery convergence
Feb Mar Apr May
Failure modes discovered
User intoxicated/incoherent 3 days ago
Spanish not redirected to human 17 days ago

Post-deployment surveillance

Behavioral drift and novel failure modes in production systems often go undetected until incidents occur. Our monitoring approach quantifies safety regression against clinically validated baselines, identifying emerging risks early. This methodology was developed through analysis of 500k+ real patient interactions.

Risk Frequency Severity
Omission of information 2% Mild
Hallucination 1% Moderate
Inappropriate handling of psychosis 0.2% Severe

Regulatory Science

Current AIaMD guidance lacks specificity on how to systematically assess LLM safety across the product lifecycle. Our research translates clinical AI risk into structured evidence frameworks for verification, validation, and post-market surveillance. Developed by researchers who advise the FDA and MHRA on emerging AI safety standards.

Publications

Nature Medicine

Closing the accessibility gap to mental health treatment with a conversational AI-enabled self-intake tool

Read now →
BMJ Journals

Conversational AI facilitates mental health assessments and is associated with improved recovery rates

Read now →
Contact