I recently attended an interesting session at NJDVHIMSS led by ComplyAssistant that broadened my perspective on AI in healthcare. While most AI discussions focus on potential clinical improvements, operational efficiencies, and better patient outcomes, this session tackled something equally critical: the cybersecurity and compliance risks that come with this powerful new technology.
The Growing Threat Landscape
As healthcare organizations rush to adopt AI solutions, they're simultaneously expanding their attack surface. The intersection of AI and healthcare creates unique vulnerabilities that demand attention:
Data Privacy at Greater Risk
AI systems require vast amounts of data to function effectively, which means more protected health information (PHI) flowing through more systems. Each new AI tool represents another potential point of failure for HIPAA and HITRUST compliance. The question isn't just about unauthorized access anymore—it's about AI models themselves potentially misusing or exposing sensitive patient data through training processes, outputs, or inadequate safeguards.
Sophisticated Attacks Powered by AI
Cybercriminals aren't standing still; they're using AI too. We're seeing increasingly convincing deepfake attacks, like fake executives appearing on Zoom calls to collect sensitive information. Phishing attempts now feature AI-generated landing pages that are nearly indistinguishable from legitimate sites. Even multi-factor authentication, once considered a gold standard, is being compromised through AI-enhanced social engineering techniques.
Beyond Security: Quality and Equity Concerns
A separate session by Erin Sparnon highlighted another dimension of AI risk that healthcare organizations must address. Before deploying any AI solution, organizations should ask:
- Does this tool have a clear purpose and proven efficacy?
- Is it safe for patient care?
- Is it fair and equitable, or does it perpetuate bias against certain populations?
- Can we explain how it reaches its conclusions, or is it an impenetrable black box?
These questions matter not just for compliance, but for maintaining trust and delivering quality care.
Building a Responsible AI Strategy
The good news is that healthcare organizations aren't navigating this alone. Multiple frameworks can guide responsible AI implementation, including HIPAA, HITRUST, the NIST AI Risk Management Framework, the GAO AI Accountability Framework, and the DoD AI Ethical Principles. While each has its nuances, they share common elements that form the foundation of a sound AI governance strategy:
Establish Robust Governance
Create dedicated committees to oversee AI adoption and use. This isn't just about checking boxes. It's about bringing together clinical, technical, compliance, and executive stakeholders to make informed decisions about AI deployment.
Define Clear Thresholds
Your organization needs to decide upfront how much bias is acceptable. What level of explainability is required? What are the red lines for data usage? These thresholds should be documented and applied consistently during evaluation processes.
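To make "documented and applied consistently" concrete, here is a minimal sketch of what a bias threshold can look like in practice. The metric (demographic parity difference) and the 5% red line are illustrative assumptions I've chosen for the example, not values prescribed by any of the frameworks above; your governance committee would set its own metric and threshold.

```python
# Illustrative sketch: enforcing a documented bias threshold during model
# evaluation. The metric and the 0.05 threshold are assumptions for the
# example, not values from HIPAA, HITRUST, or the NIST AI RMF.

def demographic_parity_difference(outcomes: dict[str, list[int]]) -> float:
    """Largest gap in favorable-outcome rates across patient groups.

    `outcomes` maps a group label to a list of binary model decisions
    (1 = favorable outcome, 0 = unfavorable).
    """
    rates = [sum(decisions) / len(decisions) for decisions in outcomes.values()]
    return max(rates) - min(rates)

# The red line your governance committee documented up front (illustrative).
MAX_BIAS_GAP = 0.05

def passes_bias_threshold(outcomes: dict[str, list[int]]) -> bool:
    """Apply the documented threshold the same way for every evaluation."""
    return demographic_parity_difference(outcomes) <= MAX_BIAS_GAP

# Example evaluation run against two hypothetical patient cohorts.
results = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],  # 70% favorable
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # 40% favorable
}
print(passes_bias_threshold(results))  # a 30-point gap fails the 5% red line
```

The point isn't this particular metric; it's that once the threshold lives in a documented artifact rather than in someone's head, every AI tool gets judged against the same line.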
Update Compliance Programs
Your existing HIPAA and HITRUST policies weren't written with AI in mind. They need explicit updates to address AI-specific risks like model training on PHI, algorithmic decision-making, and automated data processing. This isn't optional. It's fundamental to maintaining compliance.
Evolve Your Cybersecurity Approach
If you're using the NIST Cybersecurity Framework, consider how it applies to AI systems specifically. The NIST AI Risk Management Framework provides additional guidance for identifying, assessing, and mitigating AI-specific risks.
Invest in Training
Your workforce needs to understand both the opportunities and risks of AI. This means training clinicians on appropriate use cases, educating IT teams on AI security considerations, and ensuring compliance staff can audit AI systems effectively.
Revamp Vendor Evaluation
Your vendor due diligence process needs new questions: How was the AI model trained? What data was used? How is bias detected and mitigated? What explainability features exist? How is the system monitored post-deployment? Standard vendor questionnaires need an upgrade.
Eliminate Black Boxes
Perhaps most importantly, ensure accountability and monitoring are built into every AI implementation. If you can't explain how a system reaches its conclusions or monitor its ongoing performance, it has no place in healthcare.
Moving Forward
AI's potential to transform healthcare is real, but so are the risks. The organizations that will succeed are those that approach AI adoption with both enthusiasm and caution, embracing innovation while building robust safeguards for patients, data, and organizational integrity.
The frameworks exist. The knowledge is available. Now it's up to healthcare leaders to implement these practices to capture all the value AI can provide without allowing AI-related incidents to become the next major crisis in our industry.
