Building Artificial Intelligence Features and Functionality into Medical Devices – Part 1
06 Nov 2025
The Necessity of Embedded Design and Cyber-Aware Risk Management
The integration of artificial intelligence (AI) is driving innovation in medical device development, becoming a foundational component in the design and functionality of next-generation medical technologies. From diagnostic imaging and patient monitoring to clinical decision support, AI is now embedded across a broad spectrum of healthcare applications. However, this rapid evolution brings with it a heightened level of responsibility, particularly in ensuring that AI-driven systems, which are often data-driven and stochastic in nature, maintain rigorous standards for patient safety, data integrity, and cybersecurity.
Diagnostic Support: The Leading Edge of AI in MedTech
One of the most prevalent and impactful applications of AI in medical devices today is diagnostic support, particularly within radiology and cardiology. These algorithms, often based on convolutional neural networks (CNNs) and transformer-based architectures, are trained on large volumes of labeled medical data to detect clinically significant anomalies with high sensitivity and specificity. These models can identify subtle patterns in imaging and waveform data that are difficult to model with classical software and that may be overlooked, or interpreted inconsistently, by human clinicians.
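As a toy illustration of the filtering step that underlies such pattern detection (not any particular approved algorithm), the sketch below applies a single hand-crafted convolution kernel to a tiny synthetic "image" and flags locations where the response exceeds a threshold. A real diagnostic CNN learns thousands of such filters from labeled data; only the basic convolution mechanic is shown here.

```python
# Toy sketch: one hand-crafted filter over a tiny grayscale grid.
# Real diagnostic CNNs learn their filters from labeled training data.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def flag_anomalies(response, threshold):
    """Return coordinates where the filter response exceeds the threshold."""
    return [(i, j) for i, row in enumerate(response)
            for j, v in enumerate(row) if abs(v) > threshold]

# Flat background with one bright spot standing in for an anomaly.
image = [[0.0] * 5 for _ in range(5)]
image[2][2] = 1.0

# Laplacian-like kernel that responds to local intensity changes.
kernel = [[0, 1, 0],
          [1, -4, 1],
          [0, 1, 0]]

response = convolve2d(image, kernel)
flags = flag_anomalies(response, threshold=2.0)  # -> [(1, 1)]
```

The strong negative response at the output center corresponds to the bright spot in the input; a learned network stacks many such responses through nonlinear layers before producing a classification.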
This evolution represents more than just automation; it is a meaningful augmentation of clinical expertise. In radiology, AI enhances efficiency by accelerating image analysis and reducing variability in diagnostic accuracy across practitioners. It serves as a second set of eyes, flagging abnormalities in MRI scans, CT images, and X-rays with a level of consistency that supports faster, more informed decision-making.
In cardiology, AI is being deployed in the interpretation of electrocardiograms (ECGs) to detect arrhythmias or ischemic changes in near real time. Beyond static diagnostics, AI is now enabling dynamic, continuous monitoring in wearable medical technologies. For example, real-time analytics in smart wearables can detect physiological deviations and alert patients or clinicians to potential issues before they escalate. Similarly, adaptive insulin dosing systems use AI-driven algorithms to process continuous glucose monitoring (CGM) data and automatically adjust insulin delivery, providing a tailored response based on individual metabolic profiles.
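The closed-loop idea behind adaptive dosing can be sketched as a simple proportional adjustment from recent CGM readings. Everything below, including the function name, the target value, the gain, and the step limit, is invented for illustration; it is emphatically not a clinical dosing algorithm, and real automated insulin delivery controllers add insulin-on-board tracking, model-based prediction, and layered safety constraints.

```python
# Illustrative sketch only -- NOT a clinical dosing algorithm.
# Target, gain, and step limit are invented for demonstration.

def suggest_basal_adjustment(cgm_readings_mgdl, current_basal_u_per_hr,
                             target_mgdl=110.0, gain=0.002, max_step=0.2):
    """Proportional basal-rate adjustment from recent CGM readings.

    Real AID (automated insulin delivery) systems layer safety limits,
    insulin-on-board tracking, and predictive models on top of this.
    """
    avg = sum(cgm_readings_mgdl) / len(cgm_readings_mgdl)
    error = avg - target_mgdl          # positive -> glucose above target
    step = max(-max_step, min(max_step, gain * error))
    return round(current_basal_u_per_hr + step, 3)

# Average of 150 mg/dL against a 110 target nudges the rate up slightly.
new_rate = suggest_basal_adjustment([140, 150, 160], 1.0)  # -> 1.08
```

The point of the sketch is the feedback structure: sensed data drives a bounded actuation change, and those bounds are exactly the kind of design constraint a risk analysis would formalize.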
These applications mark a shift from reactive to proactive healthcare, where AI is enabling earlier intervention, better patient outcomes, and more personalized treatment plans.
Embedding AI from Day One
AI must be integrated early in the product lifecycle. Retrofitting AI post-design introduces complexity and increases regulatory friction. Embedding AI from the beginning allows teams to align with medical risk management frameworks (such as ISO 14971), cybersecurity requirements (like FDA’s premarket cybersecurity guidance for medical devices and IMDRF documents), and, increasingly, AI-specific quality processes.
The integration of AI introduces new classes of risk, including data drift, adversarial attacks, and unintended bias, that aren't typically covered under traditional hardware or software risk management schemes. Addressing these risks early creates natural design constraints that inform not just the model’s behavior but also the entire system architecture, usability, and validation strategy.
Common Model Types in Regulated Devices
From a compliance and auditability standpoint, supervised learning models dominate. These models are typically built on labeled datasets and use deep neural networks or decision trees, depending on the input data type. Structured clinical data might favor ensemble techniques such as random forests or XGBoost, which provide explainable, traceable outputs that can be mapped back to decision logic, a property that is critical under FDA or EU MDR scrutiny.
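The traceability property described above can be made concrete with a minimal sketch: a rule-based classifier that returns, alongside each prediction, the exact decision path it followed. The feature names and cutoffs are invented for illustration; tree-ensemble libraries expose analogous per-prediction decision paths that auditors can map back to documented logic.

```python
# Minimal sketch of traceable decision logic: every prediction carries the
# rules it fired. Feature names and cutoffs here are invented, not clinical.

def classify_risk(patient):
    """Rule-based classifier that records its own decision path."""
    path = []
    if patient["age"] >= 65:
        path.append("age >= 65")
        if patient["systolic_bp"] >= 140:
            path.append("systolic_bp >= 140")
            return "high", path
        path.append("systolic_bp < 140")
        return "medium", path
    path.append("age < 65")
    return "low", path

label, trail = classify_risk({"age": 70, "systolic_bp": 150})
# label -> "high"; trail -> ["age >= 65", "systolic_bp >= 140"]
```

Because the output is a pair of (decision, justification), each inference can be logged and cross-referenced against the device's documented decision logic during an audit.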
In contrast, deep neural network models, such as CNNs for medical imaging and LSTMs for continuous physiological measurements, along with more recent large language models (LLMs) and generative models for clinical documentation support, are gaining popularity. They remain rare in the safety-critical functions of regulatory-approved devices, however, owing to limited explainability, reproducibility, and regulatory maturity.
Risk Management Is a Living, Expanding Discipline
Medical device manufacturers are expanding their traditional risk assessments to include AI-specific failure modes and cybersecurity threats. This includes:
- Stochastic nature requiring statistical validation and robustness demonstration
- Data drift and model degradation over time
- Bias in training data that may impact care equity
- Unintended outputs due to adversarial data manipulation
- Real-world monitoring pipelines to track algorithmic behavior in production
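One common building block for the monitoring pipelines listed above is a distribution-comparison statistic such as the Population Stability Index (PSI), sketched below in plain Python. The bin edges and the 0.2 alert threshold are widely used rules of thumb, not regulatory requirements, and a production pipeline would track many features and model outputs this way.

```python
# Watching for data drift: compare a live feature distribution against its
# training-time baseline with the Population Stability Index (PSI).
import math

def psi(expected, actual, bin_edges):
    """PSI between baseline ('expected') and live ('actual') samples."""
    def proportions(values):
        counts = [0] * (len(bin_edges) + 1)
        for v in values:
            idx = sum(v > edge for edge in bin_edges)  # which bin v falls in
            counts[idx] += 1
        total = len(values)
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / total, 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]        # training-time feature values
shifted  = [0.1 * i + 5.0 for i in range(100)]  # drifted production values

drift = psi(baseline, shifted, bin_edges=[2.5, 5.0, 7.5])
alert = drift > 0.2   # common "significant drift" rule of thumb
```

A monitoring service would compute such statistics on a schedule and route alerts into the quality system, so that silent degradation becomes a documented, investigable event rather than an invisible one.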
Cybersecurity is tightly interwoven here: connected devices with onboard AI create new attack vectors. An AI model trained on manipulated data could degrade silently, leading to systemic misdiagnosis. That’s why Intertek encourages a cyber-aware AI lifecycle, aligning with IEC 81001-5-1, the FDA’s AI/ML Software as a Medical Device (SaMD) Action Plan, and the joint FDA-Health Canada-MHRA Good Machine Learning Practice for Medical Device Development guiding principles.
Integration of AI
The integration of AI into medical devices is not a single engineering decision; it is a strategic choice that affects every facet of product development, from conception through post-market surveillance. Manufacturers must view AI as part of a continuous performance cycle, not a one-time feature.
As regulators evolve to meet these challenges, so must quality systems, documentation practices, and cyber hygiene protocols. We’re only scratching the surface of AI’s potential, but with thoughtful integration, rigorous validation, and patient-first design, we can shape a safer, smarter future for medical innovation.