FHIR and Vietnam's AI Law 134/2025 in healthcare
The Artificial Intelligence Law 134/2025/QH15 (passed 10/12/2025, effective 01/03/2026) classifies AI systems into three risk tiers and imposes strict obligations on AI that affects human health. FHIR provides the standardized data foundation that clinical AI needs to consume clean inputs, leave a traceable processing record, and support accountability under regulatory inspection.
This article is written for hospital CIOs, healthcare software vendors, developers integrating CDS Hooks, and regulators evaluating the role of FHIR in complying with Vietnam's AI Law 134/2025 in the healthcare sector.
Quick summary
- Law 134/2025/QH15 sorts AI systems into three risk tiers — high, medium, and low; most clinically impactful healthcare AI is likely to fall into the high-risk tier.
- Medium- and high-risk AI systems must notify the Ministry of Science and Technology of their classification result; detailed procedures will follow government guidance.
- Core requirements: transparency (model card, training data sources), accountability (decision audit), and a mechanism for human oversight and intervention in AI-driven decisions.
- FHIR primitives available in R4: Device describes the AI model, GuidanceResponse records the AI output, Provenance + AuditEvent capture the trail, and ClinicalImpression records the clinician's review.
- Connection to Law 91/2025 and Decree 356/2025: health data is sensitive personal data — Consent, a DPIA file, and DPO involvement are required when processing.
On this page
- Context — why Vietnam passed an AI Law
- The three AI risk tiers under Law 134/2025
- Where healthcare AI lands under Law 134/2025
- Five core requirements for high-risk AI
- How FHIR helps comply with AI Law 134/2025
- FHIR Resource patterns for healthcare AI
- CDS Hooks — the real-time integration pattern
- Bulk Data API for training data
- Bias mitigation — diverse data from FHIR
- Interplay with Law 91/2025 and Decree 356/2025
- Frequently asked questions
- References and further reading
1. Context — why Vietnam passed an AI Law
The 15th National Assembly enacted the Artificial Intelligence Law 134/2025/QH15 on 10/12/2025, with effect from 01/03/2026. It is one of the earliest dedicated AI statutes in Southeast Asia and places Vietnam alongside jurisdictions whose AI legal framework — like the EU AI Act — is built around a risk-tier structure.
Its scope covers the development, supply, deployment, and use of AI systems on Vietnamese territory, regardless of where the model originates. Foreign vendors whose AI products are used in Vietnam still fall within the compliance perimeter.
Law 134/2025 does not replace existing health-sector legislation. Instead, it adds an AI risk-evaluation axis on top of the Law on Examination and Treatment 2023, Circular 13/2025/TT-BYT (Ministry of Health) on electronic medical records, the Personal Data Protection Law 91/2025/QH15, and Decree 356/2025/NĐ-CP. Healthcare facilities deploying clinical AI must reconcile multiple frameworks at once.
2. The three AI risk tiers under Law 134/2025
Article 9 of Law 134/2025/QH15 sorts AI systems into three risk tiers based on their impact on people's health, safety, rights, and lawful interests. Providers self-assess their system against the statutory criteria and then notify the regulator of the resulting classification.
| Risk tier | Definition | Legal obligations |
|---|---|---|
| High | Capable of seriously affecting life, health, fundamental rights, or national security. | Classification notice, conformity assessment, quality control, post-deployment monitoring, audit log. |
| Medium | Significant but non-severe impact on user rights and interests. | Classification notice, AI labeling, transparent disclosure of information. |
| Low | Limited impact, no direct effect on user-affecting decisions. | Recommended best practice; notification not mandatory. |
The detailed catalogue of AI systems in each tier will be updated through guidance from the Government and the Ministry of Science and Technology. When classification is uncertain, the precautionary principle requires moving up one tier.
3. Where healthcare AI lands under Law 134/2025
Most healthcare AI systems with clinical impact are likely to fall into the high-risk tier. Concrete classification depends on Article 9 and other risk-criteria provisions of Law 134/2025/QH15, together with the implementing guidance that will follow the Law.
Examples typically classified as high-risk:
- Imaging diagnostic support on X-ray, CT, MRI, and ultrasound.
- Disease screening — cancer, cardiovascular, diabetic retinopathy.
- Treatment-plan recommendations, dose calculation, and drug-interaction checking.
- Emergency triage and prediction of inpatient deterioration.
- Clinical decision support (CDS) that influences laboratory orders or medication choices.
Tasks typically classified as low or medium risk:
- Chatbots delivering general health information to the public.
- AI suggesting appointment slots, room scheduling, or other non-clinical operational optimization.
- Speech recognition for chart entry — provided every output is reviewed by a clinician before signing.
The boundary between tiers is not always clear-cut. A speech-recognition AI used to auto-sign medical records without clinician readback shifts upward into a higher risk tier because its clinical impact has increased.
4. Five core requirements for high-risk AI
Law 134/2025 imposes a fairly comprehensive set of obligations on high-risk AI systems. The five groupings below summarize the most common requirements in healthcare, drawn from the structure of the Law and cross-referenced with EU AI Act implementation practice.
4.1 Transparency
Providers must publish a model card describing training data sources, methodology, performance metrics by population subgroup, and operational limits. End users — including patients — must be informed when they are interacting with an AI system.
4.2 Explainability of outputs
Every AI output must be accompanied by an explanation appropriate to the model type: feature importance, similar cases, or attended image regions. Clinicians have the right to ask why the AI suggested a given diagnosis or dosage, and the system must answer at a clinically meaningful level.
4.3 Human oversight and intervention
The Law requires a human-in-the-loop mechanism for high-risk AI. In healthcare, this means AI does not make the final decision in a medical record; a clinician must have both the right and the means to accept, modify, or reject the AI's suggestion.
4.4 Safety and security
High-risk AI systems must include adversarial-attack countermeasures, version control, and model lifecycle management. When personal data is involved, AI is simultaneously bound by Law 91/2025/QH15 on personal data protection and Decree 356/2025/NĐ-CP that implements it.
4.5 Comprehensive audit trail
Every inference must be logged in enough detail to be reproducible: model version, input parameters, output, requester, and clinical context. The audit log must be retained for the same period as medical records — at least 10 years under the Law on Examination and Treatment 2023.
5. How FHIR helps comply with AI Law 134/2025
FHIR R4 (version 4.0.1) provides a Resource set rich enough to model the full healthcare-AI lifecycle inside clinical data — from training data, to model registration, to recording each inference, to storing outputs, to capturing the clinician's review. That is why FHIR is the natural choice for the data backbone of healthcare AI in Vietnam.
End-to-end integration diagram:
[FHIR Server (EMR / VN Core)]
│
│ Bulk Data $export → NDJSON (de-identified)
↓
[Training Pipeline] ← diverse, structured, audited data
│
│ Trained model (registered as a Device)
↓
[AI Inference Service]
│
│ CDS Hook trigger (patient-view, order-sign…)
↓
[FHIR Server] ← writes GuidanceResponse + Provenance + AuditEvent
│
↓
[Clinician UI] reads → ClinicalImpression confirms or rejects
Three core benefits FHIR delivers for Law 134/2025 compliance: standardized data reduces bias, Provenance + AuditEvent satisfy the audit-trail requirement, and GuidanceResponse + ClinicalImpression operationalize the human-in-the-loop principle directly in the data structure.
6. FHIR Resource patterns for healthcare AI
This section presents Resource patterns valid under FHIR R4. VN Core is expected to profile certain elements and add extensions for Vietnam-specific legal requirements — for example AI risk classification and the corresponding registration code.
6.1 Device — registering an AI model as a logical device
An AI model is described with the Device Resource together with VN Core extensions that carry the legal information required by Law 134/2025. The canonical extension URLs below are anticipated in the VN Core IG.
{
"resourceType": "Device",
"id": "ai-cardio-v2",
"identifier": [{
"system": "http://fhir.hl7.org.vn/core/sid/ai-registry",
"value": "AI-001-CARDIO-V2"
}],
"type": {
"coding": [{
"system": "http://fhir.hl7.org.vn/core/CodeSystem/vn-ai-type-cs",
"code": "clinical-decision-support",
"display": "Clinical decision support AI"
}]
},
"manufacturer": "Omi HealthTech",
"modelNumber": "CardioRiskNet-v2.1",
"version": [{ "value": "2.1.0" }],
"extension": [
{
"url": "http://fhir.hl7.org.vn/core/StructureDefinition/vn-ai-risk-class",
"valueCode": "high"
},
{
"url": "http://fhir.hl7.org.vn/core/StructureDefinition/vn-ai-conformity-id",
"valueString": "CONF-2026-001"
},
{
"url": "http://fhir.hl7.org.vn/core/StructureDefinition/vn-ai-model-card-url",
"valueUrl": "https://omi.health/models/cardio-v2.1"
}
]
}
6.2 GuidanceResponse — the AI output for each inference
GuidanceResponse is the primary Resource for recording clinical AI output. In R4, the module[x] field is required; outputParameters is a Reference(Parameters); and result references either a CarePlan or a RequestGroup. Detailed Observations and RiskAssessments live elsewhere and are linked via Provenance.
{
"resourceType": "GuidanceResponse",
"id": "gr-001",
"status": "success",
"moduleCanonical": "http://omi.health/PlanDefinition/cardio-risk-v2",
"subject": { "reference": "Patient/vn-001" },
"encounter": { "reference": "Encounter/enc-001" },
"occurrenceDateTime": "2026-04-30T14:00:00+07:00",
"performer": { "reference": "Device/ai-cardio-v2" },
"outputParameters": { "reference": "Parameters/gr-001-out" },
"result": { "reference": "RequestGroup/rg-cardio-001" }
}
Parameters/gr-001-out holds the explanatory values (risk score, reasons, attended image regions). RequestGroup bundles concrete recommendations — for example ordering a Holter ECG and a repeat cardiac-enzyme panel.
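As an illustration, the referenced Parameters resource might look like the sketch below. The parameter names (risk-score, risk-category, explanation, attended-region) and the referenced Media resource are illustrative only; no published IG defines them.

```json
{
  "resourceType": "Parameters",
  "id": "gr-001-out",
  "parameter": [
    { "name": "risk-score", "valueDecimal": 0.87 },
    { "name": "risk-category", "valueString": "high" },
    {
      "name": "explanation",
      "valueString": "Elevated troponin and ST-segment depression on ECG were the dominant features."
    },
    {
      "name": "attended-region",
      "valueReference": { "reference": "Media/ecg-heatmap-001" }
    }
  ]
}
```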
6.3 Provenance — chain of custody from input to output
Provenance records who or what produced the GuidanceResponse, which input resources were consumed, and at what time. The recorded field is required in R4.
{
"resourceType": "Provenance",
"target": [{ "reference": "GuidanceResponse/gr-001" }],
"occurredDateTime": "2026-04-30T14:00:00+07:00",
"recorded": "2026-04-30T14:00:05+07:00",
"agent": [{
"type": {
"coding": [{
"system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
"code": "device"
}]
},
"who": { "reference": "Device/ai-cardio-v2" }
}],
"entity": [
{ "role": "source", "what": { "reference": "Observation/ecg-001" } },
{ "role": "source", "what": { "reference": "Observation/troponin-001" } }
]
}
6.4 AuditEvent — logging inference for compliance
While Provenance answers the question what produced this data, AuditEvent answers who accessed what, and when. It is the appropriate Resource for logging every inference call, satisfying the audit-trail requirements of both Law 134/2025 and Law 91/2025.
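A minimal R4 AuditEvent for the inference in section 6.2 could be shaped as follows. The type coding and the decision to log both the AI device and the requesting clinician as agents are design choices for this sketch, not mandated by the spec.

```json
{
  "resourceType": "AuditEvent",
  "type": {
    "system": "http://terminology.hl7.org/CodeSystem/audit-event-type",
    "code": "rest",
    "display": "RESTful Operation"
  },
  "action": "E",
  "recorded": "2026-04-30T14:00:05+07:00",
  "outcome": "0",
  "agent": [
    {
      "who": { "reference": "Practitioner/bs-001" },
      "requestor": true
    },
    {
      "who": { "reference": "Device/ai-cardio-v2" },
      "requestor": false
    }
  ],
  "source": {
    "observer": { "reference": "Device/ai-cardio-v2" }
  },
  "entity": [
    { "what": { "reference": "GuidanceResponse/gr-001" } },
    { "what": { "reference": "Patient/vn-001" } }
  ]
}
```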
6.5 ClinicalImpression — the clinician's review
After reading the AI result, the clinician records a ClinicalImpression to confirm or reject it. This is where the human-oversight principle becomes most visible: AI-generated data does not become a clinical conclusion without this step.
{
"resourceType": "ClinicalImpression",
"status": "completed",
"subject": { "reference": "Patient/vn-001" },
"encounter": { "reference": "Encounter/enc-001" },
"date": "2026-04-30T14:15:00+07:00",
"assessor": { "reference": "Practitioner/bs-001" },
"supportingInfo": [{ "reference": "GuidanceResponse/gr-001" }],
"summary": "AI flagged high cardiovascular risk. Clinician concurs based on ECG and elevated troponin. Ordered 24h Holter and a repeat CK-MB at 6 hours."
}
6.6 meta.tag for AI-generated data
Every Resource produced by AI should be tagged so that downstream systems and human readers can recognize it. A VN Core code system is expected to provide tags such as AI-GENERATED and PENDING-CLINICIAN-REVIEW.
"meta": {
"tag": [
{
"system": "http://fhir.hl7.org.vn/core/CodeSystem/vn-ai-tag-cs",
"code": "AI-GENERATED",
"display": "Data generated by an AI system"
},
{
"system": "http://fhir.hl7.org.vn/core/CodeSystem/vn-ai-tag-cs",
"code": "PENDING-CLINICIAN-REVIEW",
"display": "Awaiting clinician review"
}
]
}
7. CDS Hooks — the real-time integration pattern
CDS Hooks is an HL7 specification that lets the EMR call clinical decision-support services at standard events (opening a patient chart, placing a medication order, signing an order, and so on). It is the most common way to bring AI into the clinician's workflow without modifying the EMR core.
[Clinician opens patient chart in the EMR]
↓
[EMR] -- POST /cds-services/cardio-risk + FHIR context --> [AI Service]
↓
←------- CDS Card (text + suggestion + explanation link) --------
↓
[EMR UI] displays the card; clinician accepts, dismisses, or asks a question
A CDS Card aligned with Law 134/2025 should include a link to the model card and a short explanation snippet right at the point of display. Once the clinician accepts, the EMR writes the corresponding GuidanceResponse, Provenance, and AuditEvent.
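As a sketch, a CDS Hooks response carrying such a card for the running cardiology example might look like this; the summary text, URLs, and suggestion content are illustrative:

```json
{
  "cards": [
    {
      "summary": "High cardiovascular risk flagged (score 0.87)",
      "indicator": "warning",
      "detail": "CardioRiskNet-v2.1 flags elevated risk based on troponin and ECG findings. A clinician must review before any order is placed.",
      "source": {
        "label": "Omi HealthTech CardioRiskNet",
        "url": "https://omi.health/models/cardio-v2.1"
      },
      "links": [
        {
          "label": "Model card (Law 134/2025 transparency)",
          "url": "https://omi.health/models/cardio-v2.1",
          "type": "absolute"
        }
      ],
      "suggestions": [
        { "label": "Order 24h Holter ECG", "uuid": "sugg-holter-001" }
      ]
    }
  ]
}
```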
Best practice: separate the AI service and the CDS Hook logic into two independent services. The AI generates predictions; the CDS Hook decides when to surface a card and how to format its content. This separation lets you swap the model without touching the EMR UI.
8. Bulk Data API for training data
FHIR Bulk Data Access (the $export specification) supports exporting large volumes of clinical data as NDJSON, which fits AI training pipelines. Basic request shape:
GET [base]/Patient/$export?_type=Patient,Observation,Condition,DiagnosticReport
Accept: application/fhir+json
Prefer: respond-async
→ 202 Accepted
Content-Location: https://server/.../status/abc
→ Poll the status URL → 200 OK + list of NDJSON URLs
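When polling completes, the status endpoint returns a JSON manifest as defined by the Bulk Data specification. A sketch of that completion response, with placeholder file URLs, looks like:

```json
{
  "transactionTime": "2026-04-30T02:00:00+07:00",
  "request": "https://server/fhir/Patient/$export?_type=Patient,Observation,Condition,DiagnosticReport",
  "requiresAccessToken": true,
  "output": [
    { "type": "Patient", "url": "https://server/files/patient_1.ndjson" },
    { "type": "Observation", "url": "https://server/files/observation_1.ndjson" }
  ],
  "error": []
}
```

Each output entry points to one NDJSON file of a single resource type, which the training pipeline can stream and de-identify independently.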
Before going into training, the data must be de-identified or pseudonymized in line with Decree 356/2025/NĐ-CP. A Data Protection Impact Assessment (DPIA) record is mandatory whenever the processing is large in scale or sensitive in nature, which a hospital-scale training export typically is.
Data residency warning
Sending clinical data outside Vietnamese territory for training on a foreign cloud triggers the cross-border personal-data transfer rules in Law 91/2025 and Decree 356/2025. A separate filing under Form 09 of Decree 356/2025/NĐ-CP is required.
9. Bias mitigation — diverse data from FHIR
Fairness and non-discrimination are among the principles of Law 134/2025. Healthcare AI trained on a foreign population often suffers performance drops when deployed in Vietnam — for instance a diabetic-retinopathy model trained on a European population behaves differently on Vietnamese patients with darker retinal backgrounds.
FHIR VN Core supports bias mitigation through localized terminology bindings: vn-icd10-cs with the Vietnamese ICD-10 codes per Decision 4469/QĐ-BYT, vn-ethnicity-cs covering the 54 ethnic groups, the Vietnamese SNOMED CT ConceptMap per Decision 2427/QĐ-BYT, and the 34-province administrative-division codes under Resolution 202/2025/QH15. Combining these axes, training data reflects Vietnamese population characteristics rather than a coarse remapping of international standards.
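To make this concrete, a Patient carrying the demographic axes above might be represented roughly as in the sketch below. The extension URL, code system, and codes are anticipated VN Core artifacts used here as placeholders, not published values.

```json
{
  "resourceType": "Patient",
  "id": "vn-001",
  "extension": [{
    "url": "http://fhir.hl7.org.vn/core/StructureDefinition/vn-ethnicity",
    "valueCodeableConcept": {
      "coding": [{
        "system": "http://fhir.hl7.org.vn/core/CodeSystem/vn-ethnicity-cs",
        "code": "01",
        "display": "Kinh"
      }]
    }
  }],
  "gender": "female",
  "birthDate": "1975-06-15",
  "address": [{
    "state": "Thanh pho Ha Noi",
    "country": "VN"
  }]
}
```

With ethnicity, sex, birth date, and province captured as structured fields, subgroup performance reporting becomes a query over training data rather than a manual annotation exercise.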
When evaluating performance, AI vendors should report metrics by age group, sex, ethnicity, and geography — exactly along the structures FHIR has already standardized.
10. Interplay with Law 91/2025 and Decree 356/2025
Healthcare AI is simultaneously subject to two legal frameworks: AI Law 134/2025 sets risk-tier obligations, while the Personal Data Protection Law 91/2025/QH15 and Decree 356/2025/NĐ-CP impose obligations whenever personal data is processed — which virtually all clinical AI does.
- Training data contains health information — sensitive personal data under Article 3 of Decree 356/2025. A Consent that explicitly covers the AI training purpose is required, or some other valid legal basis under Law 91/2025.
- Each inference on patient data is itself a personal-data processing activity — it must rest on a legal basis and leave an AuditEvent behind.
- The DPIA file required by Form 10 of Decree 356/2025 is mandatory whenever a processing activity is large in scale or sensitive in nature — high-tier healthcare AI almost always falls into this category.
- The personal data protection unit or officer (DPO) participates in producing, retaining, and submitting the DPIA file under Decree 356/2025; this role does not displace the data controller's legal responsibility.
- Data breaches must be reported within 72 hours via Form 08; a healthcare AI system whose data leaks through an adversarial attack is a breach within this scope.
Circular 13/2025/TT-BYT on electronic medical records adds a third layer: linkage to the national personal identifier and long-term electronic retention. Healthcare AI that lives inside an EMR must therefore comply with three frameworks at once — AI, personal data, and electronic medical records.
11. Frequently asked questions
Does running a cardiovascular-screening AI on a foreign cloud violate Law 91/2025?
Not by default — but the procedure must be followed. Specifically: prepare a cross-border personal-data transfer file under Form 09 of Decree 356/2025, complete a DPIA, and establish a valid legal basis for the transfer. Some scenarios — for example training a model on raw Vietnamese patient data in a foreign data center — carry significant legal risk and are typically replaced with a domestic option or with already de-identified data.
Can AI sign a Composition (clinical report) on its own?
It should not. Law 134/2025 mandates a human oversight and intervention mechanism for high-risk AI, so a clinical report needs clinician attestation. In FHIR, Composition.attester is 0..* and the base spec allows Patient, RelatedPerson, Practitioner, PractitionerRole, or Organization to act as attester, so restricting who may attest is a profile-level decision. VN Core may profile Composition to require that at least one attester be a Practitioner or PractitionerRole when the law calls for human attestation.
Which agency must AI models be registered with?
Law 134/2025 does not use the term "model registration." Medium- and high-risk AI systems must notify the Ministry of Science and Technology of their classification result through a single national AI portal; detailed procedures and forms will follow government guidance. Until that implementing decree exists, vendors should prepare technical dossiers structured around the EU AI Act so they can adapt easily once official forms are published.
Does FHIR have a native Resource for AI?
FHIR R4 has no Resource specifically named for AI. The current pattern — endorsed by the HL7 community — is to use Device for the model, GuidanceResponse for the output, Provenance + AuditEvent for the trail, and ClinicalImpression for the clinician review, combined with meta.tag for identification. FHIR R5 expands the Clinical Reasoning module, but the core pattern is unchanged.
Do patients have the right to ask for an explanation of an AI result?
Yes, under both laws. Law 134/2025 mandates transparency and explainability; Law 91/2025 grants data subjects the right to be informed about how their personal data is processed. In practice, the patient interface should provide access to the relevant GuidanceResponse together with an explanation phrased for non-specialist understanding.
12. References and further reading
Vietnamese legal documents
- Artificial Intelligence Law 134/2025/QH15 — passed 10/12/2025, effective 01/03/2026 (VN Core code: L-134-2025).
- Personal Data Protection Law 91/2025/QH15 — effective 01/01/2026 (L-91-2025).
- Decree 356/2025/NĐ-CP implementing the Personal Data Protection Law — effective 01/01/2026 (ND-356-2025).
- Circular 13/2025/TT-BYT on electronic medical records — effective 21/07/2025 (TT-13-2025).
- Law on Examination and Treatment 2023 — currently in force.
- Decision 4469/QĐ-BYT issuing the Vietnamese version of the International Classification of Diseases ICD-10 — 28/10/2020.
The full list of referenced legal documents is maintained in the VN Core legal corpus.
HL7/FHIR specifications
- FHIR R4 (4.0.1) Specification — the standard version for VN Core.
- GuidanceResponse Resource (R4).
- Provenance Resource (R4).
- Clinical Reasoning Module.
- CDS Hooks Specification.
- FHIR Bulk Data Access (Flat FHIR).
Glossary
- Risk class — the risk tier of an AI system under Law 134/2025 (high, medium, low).
- Model card — a document disclosing data sources, performance, and operational limits of an AI model.
- CDS Hooks — an HL7 specification that lets the EMR call clinical decision-support services in real time.
- Explainability — the ability to justify an AI output at a level that is meaningful to the user.
- Human-in-the-loop — a mechanism for human oversight and intervention in AI-driven decisions.
- DPIA — Data Protection Impact Assessment, per Form 10 of Decree 356/2025.
- DPO — the personal data protection unit or officer under Decree 356/2025.