AWS Public Sector Blog

How to safeguard healthcare data privacy using Amazon Bedrock Guardrails


As more and more healthcare companies use their data to remain competitive, protecting patient data is more critical than ever. With the increasing adoption of artificial intelligence and machine learning (AI/ML) models in healthcare, making sure that these technologies comply with privacy regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) has become a top priority.

Amazon Bedrock is a fully managed service from Amazon Web Services (AWS) that provides unified access to a diverse selection of high-performance foundation models (FMs) from industry-leading AI companies, including AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon. Through a single API, Amazon Bedrock offers a comprehensive suite of tools and capabilities for developing generative AI applications, with a strong emphasis on security, privacy, and responsible AI practices.

These models offer great potential for healthcare innovation, such as accelerating drug discovery through protein folding and molecule design, enhancing clinical decision-making with automated medical image interpretation, and improving patient care through ambient digital scribes that capture and summarize clinician-patient interactions. However, they must also adhere to stringent data privacy standards. This is where Amazon Bedrock Guardrails come into play. With Amazon Bedrock Guardrails you can implement safeguards customized to your generative AI applications based on your specific use cases and responsible AI policies.

In this post, we walk you through the importance of healthcare data privacy and how to use Amazon Bedrock Guardrails to safeguard sensitive information in AI-driven healthcare solutions.

The sensitive nature of healthcare data

Healthcare data, including medical records, personal identification details, and lab results, is highly confidential and critical for delivering personalized and effective medical care. Safeguarding this sensitive information is paramount to maintaining patient trust, preventing potential misuse, and complying with regulations. Robust data protection measures not only mitigate risks such as identity theft, fraud, or discrimination but also uphold the integrity and reputation of healthcare organizations. By implementing robust security protocols, healthcare providers can foster an environment of trust, where patients feel confident that their private health information is secure and protected from unauthorized access or exploitation.

Rising risks in the AI era

The growing adoption of AI/ML in healthcare amplifies the risks to data privacy. These advanced models rely on vast troves of sensitive patient data to train and improve their predictive capabilities. However, without proper safeguards, this data could be inadvertently exposed through model outputs or misused by bad actors. Data breaches driven by cyberattacks, insider threats, and human error are on the rise in the healthcare sector, exposing patients to identity theft, fraud, and discrimination. Even if data isn’t directly compromised, AI systems that lack robust privacy measures could unintentionally reveal patterns or details about individual patients.

Strict regulation requirements

Governments worldwide have enacted strict laws to protect the privacy of healthcare data. In the US, HIPAA imposes rigorous requirements on how patient information is collected, stored, and shared, mandating that healthcare organizations implement robust security controls. Similarly, the European Union’s GDPR provides a comprehensive framework for data protection, including stringent rules for processing sensitive health data and severe penalties for noncompliance. Additionally, the recently adopted European Health Data Space (EHDS) further strengthens health data protection by establishing a framework for secure data exchange, empowering citizens’ access to their health information, and creating guidelines for responsible secondary use of medical data across EU member states.

Other regions have similar regulations focused on securing this critical information. For healthcare providers using AI and ML, enforcing adherence to these regulatory standards is paramount. The failure to meet these regulatory requirements can result in severe penalties, including hefty fines and operational restrictions. AWS provides numerous resources and services to help you comply with these regulations. For example, AWS offers HIPAA-eligible services and provides compliance documentation for GDPR. Additionally, AWS Audit Manager helps you continually audit your AWS usage to streamline risk and compliance assessment, and you can use AWS Config conformance packs to assess, audit, and evaluate the configurations of your AWS resources for compliance. AWS also supports healthcare organizations building AI-powered applications that adhere to these privacy mandates through Amazon Bedrock Guardrails.

Enter Amazon Bedrock Guardrails

Amazon Bedrock Guardrails is a feature you can use to implement safeguards for your generative AI applications based on your use cases and responsible AI policies, preventing harmful content and protecting user privacy. Amazon Bedrock Guardrails helps control the interaction between users and FMs by filtering undesirable and harmful content and by redacting personally identifiable information (PII), enhancing content safety and privacy in generative AI applications. You can create multiple guardrails with different configurations tailored to specific use cases. Additionally, you can use guardrails to continually monitor and analyze user inputs and FM responses that might violate customer-defined policies.

You can use guardrails to define a set of policies to help safeguard your generative AI applications. There are five types of policies you can configure in Amazon Bedrock Guardrails to avoid undesirable and harmful content and remove sensitive information for privacy protection: content filters, denied topics, word filters, sensitive information filters, and contextual grounding check.
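
If you prefer to manage guardrails as code rather than through the console, the same five policy types map onto the CreateGuardrail API. The following Python (boto3) snippet is a minimal sketch of that mapping, not a production configuration; the Region, guardrail name, denied topic, custom word, PII selections, and thresholds are illustrative placeholders to replace with values that fit your own use case and responsible AI policies.

# Minimal sketch: create a guardrail with all five policy types using boto3.
# All names, topics, words, PII choices, and thresholds below are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")  # adjust Region as needed

response = bedrock.create_guardrail(
    name="healthcare-privacy-guardrail",
    description="Safeguards for a healthcare AI assistant",
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that information.",
    # 1. Content filters (harmful content and prompt attacks)
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "MISCONDUCT", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            # Prompt attack filtering applies to the user input only
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    # 2. Denied topics
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "Confidential treatment details",
                "definition": "Requests to disclose another patient's diagnosis, treatment plan, or test results.",
                "examples": ["What treatment is patient John Doe receiving?"],
                "type": "DENY",
            }
        ]
    },
    # 3. Word filters (managed profanity list plus an illustrative custom word)
    wordPolicyConfig={
        "managedWordListsConfig": [{"type": "PROFANITY"}],
        "wordsConfig": [{"text": "example-restricted-term"}],
    },
    # 4. Sensitive information filters (anonymize or block PII)
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "NAME", "action": "ANONYMIZE"},
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "BLOCK"},
        ]
    },
    # 5. Contextual grounding check
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.75},
            {"type": "RELEVANCE", "threshold": 0.75},
        ]
    },
)
print(response["guardrailId"], response["version"])  # a new guardrail starts as a DRAFT version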

Solution overview

A healthcare AI assistant using Amazon Bedrock Guardrails must adapt to the unique privacy requirements of different medical practitioners.

Consider the following scenario: Dr. Smith, an oncologist dealing with sensitive cancer data, and Dr. Jones, a general practitioner handling a wide range of patient information, both use AI assistants in their practices.

While Dr. Smith requires stringent safeguards to prevent any disclosure of confidential treatment details, Dr. Jones needs more flexible protection that allows discussion of general health topics while securing specific patient information.

An AI assistant with Amazon Bedrock Guardrails would tailor its approach accordingly, ensuring both doctors receive customized data protection aligned with their respective specialties and privacy needs.

The solution follows these high-level steps:

  1. Activate models in Amazon Bedrock
  2. Create guardrails in Amazon Bedrock Guardrails
  3. Test the guardrails

Solution walkthrough: Safeguard healthcare data privacy using Amazon Bedrock Guardrails

To set up guardrails to safeguard the privacy of your healthcare data, follow these step-by-step instructions.

Step 1: Activate models in Amazon Bedrock

To request or modify access, first make sure that the AWS Identity and Access Management (IAM) role you use has sufficient permissions to manage access to FMs. Then, add or remove access to a model by following the instructions at Add or remove access to Amazon Bedrock foundation models.
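
You can optionally confirm which FMs are available in your Region with a short boto3 sketch like the following; the Region shown is an assumption, so adjust it to where you plan to run Amazon Bedrock.

# Quick check: list the text-generation foundation models available in your Region
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")  # adjust Region as needed

models = bedrock.list_foundation_models(byOutputModality="TEXT")
for summary in models["modelSummaries"]:
    print(summary["modelId"], "-", summary["providerName"])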

Step 2: Create guardrails in Amazon Bedrock Guardrails

To create the guardrails in Amazon Bedrock Guardrails, follow these steps:

  1. On the Amazon Bedrock console, choose Guardrails.
  2. Choose Create guardrail, as shown in the following screenshot.

    Figure 1. Create a guardrail.

  3. In Guardrail details, enter a Name and Description.
  4. In Messaging for blocked prompts, enter a message to display if your guardrail blocks the user prompt.
  5. Choose Next, as shown in the following screenshot.

    Figure 2. Provide guardrail details.

  6. Turn on Enable harmful content filters and Enable prompt attacks filter.
  7. Choose Next, as shown in the following screenshot.

    Figure 3. Configure content filters.

  8. To add denied topics, choose Add denied topic, as shown in the following screenshot.

    Figure 4. Add denied topics.

  9. Enter a Name and Definition and Add sample phrases. Choose Confirm.

    Figure 5. Edit denied topic.

  10. To add word filters, choose Filter profanity, as shown in the following screenshot. Choose Confirm.

    Figure 6. Add word filters.

  11. To filter PII types, under Choose PII type, select a category from the dropdown menu, as shown in the following screenshot.
  12. Choose Next.

    Figure 7. Add sensitive information filters.

  13. To add contextual grounding, turn on Enable grounding check and Enable relevance check. Choose Next, as shown in the following screenshot.

    Figure 8. Add contextual grounding check.

  14. In the Review and create screen, verify your settings.
  15. Choose Create guardrail.

    Figure 9. Review and create.
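
The console steps above produce a working draft of your guardrail. When you're ready to reference it from an application, you can publish an immutable version and read back its configuration. The following boto3 sketch uses a placeholder guardrail ID; substitute the ID shown in the console (or returned by the CreateGuardrail API sketch earlier in this post).

# Sketch: publish a guardrail version and verify its stored configuration.
# The guardrail ID below is a placeholder.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")  # adjust Region as needed

version = bedrock.create_guardrail_version(
    guardrailIdentifier="abc123xyz",  # replace with your guardrail ID
    description="First reviewed version for the healthcare assistant",
)
print("Published version:", version["version"])

details = bedrock.get_guardrail(
    guardrailIdentifier="abc123xyz",
    guardrailVersion=version["version"],
)
print(details["status"], details["name"])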

Step 3: Test the guardrails

To test the guardrail you created, follow these steps:

  1. In the Amazon Bedrock console, navigate to Guardrails and select the name of the guardrail you created.
  2. Choose Select model, as shown in the following screenshot.

    Figure 10. Select your model.

  3. Select from the list of models.
  4. In the Prompt text box, enter a prompt. For example, I’m sick, please advise on most efficient medication to take.
  5. Choose Run.
  6. If the content is blocked, the Final response should match the messaging for blocked prompts that you configured, as shown in the following screenshot.

    Figure 11. Enter a prompt.

  7. Choose View trace to see why the content was blocked.

    Figure 12. Check the guardrail trace.
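
You can run the same test programmatically. The following boto3 sketch sends the example prompt through the Converse API with the guardrail attached and prints the stop reason, the final response text, and the guardrail trace. The guardrail ID, guardrail version, and model ID are placeholders to replace with your own values.

# Sketch: invoke a model with the guardrail attached and inspect the result.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")  # adjust Region as needed

response = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    messages=[{
        "role": "user",
        "content": [{"text": "I'm sick, please advise on most efficient medication to take."}],
    }],
    guardrailConfig={
        "guardrailIdentifier": "abc123xyz",  # replace with your guardrail ID
        "guardrailVersion": "1",
        "trace": "enabled",
    },
)

print(response["stopReason"])                               # "guardrail_intervened" if blocked
print(response["output"]["message"]["content"][0]["text"])  # blocked message or model reply
print(response.get("trace", {}).get("guardrail"))           # why the guardrail intervened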

By implementing these guardrails, Dr. Smith and Dr. Jones are able to secure their patient interactions with AI assistants while maintaining specialty-appropriate levels of data protection. As a result, both specialists can use AI assistance effectively in their daily practice while maintaining the required level of confidentiality for their respective patient populations.

Conclusion

In this post, we explored a comprehensive solution for securing healthcare data privacy using Amazon Bedrock Guardrails in AI-driven healthcare applications. We demonstrated how to use Amazon Bedrock Guardrails built-in filters and customizable policies to create a robust and secure AI system that adheres to stringent healthcare data privacy standards and regulations.

The key benefits of implementing this solution include:

  • Enhanced patient data protection – By making sure that AI models operate within predefined boundaries and provide appropriate responses, patient data remains secure and protected from inadvertent exposure or misuse.
  • Regulatory compliance – The solution helps healthcare organizations align with strict regulatory requirements such as HIPAA and GDPR, mitigating risks of noncompliance and associated penalties.
  • Responsible AI alignment – The implementation aligns with ethical AI principles and responsible AI practices in healthcare, fostering trust and accountability in the deployment of AI systems.

By following the steps outlined in this post, healthcare organizations can confidently harness the power of generative AI while prioritizing patient data privacy and regulatory compliance. This approach ultimately delivers secure and trustworthy AI-driven healthcare solutions that protect sensitive information and maintain the highest standards of patient confidentiality.

Syrine Souissi

Syrine is a business development manager at Amazon Web Services (AWS). She supports EMEA public sector customers on data and artificial intelligence (AI) projects. Syrine has previous experience in public sector consulting and public policy. She holds a master’s degree in management and public affairs from Sciences Po Paris. She's based in Paris and speaks four languages.

Makram Jenayah

Makram is a senior solutions architect at Amazon Web Services (AWS) and specializes in transforming public healthcare through cloud innovation. He helps enable government and healthcare organizations to enhance patient outcomes, optimize operations, and pioneer data-driven health initiatives, ultimately elevating the quality of care. Makram is based in Paris, holds master’s degrees in artificial intelligence (AI) and business management, and speaks four languages.