AWS Public Sector Blog
How healthcare organizations use generative AI on AWS to turn data into better patient outcomes
Healthcare organizations invest heavily in technology and data. Generative artificial intelligence (AI) empowers healthcare organizations to leverage their investments in robust data foundations, improve the patient experience through innovative interactive technologies, boost productivity to help address workforce challenges, and drive new insights to accelerate research. This post highlights three examples of how healthcare organizations are using generative AI on Amazon Web Services (AWS) and discusses how to adopt this technology responsibly and safely.
More than 10,000 organizations worldwide use Amazon Bedrock, a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon via a single API. Using Amazon Bedrock, you can easily experiment with top FMs, and fine-tune and privately customize them with your own data. Let’s take a look at how three healthcare customers are using Amazon Bedrock.
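To show what that single API looks like in practice, here is a minimal sketch that calls a Claude model on Amazon Bedrock with the AWS SDK for Python (boto3). The Region, model ID, and prompt are placeholders to adapt; because the Converse API provides one request and response shape across supported FMs, experimenting with a different model is largely a matter of changing the modelId.

```python
import boto3

# Amazon Bedrock runtime client; the Region is a placeholder to adapt.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# The Converse API uses the same request/response shape across supported FMs.
response = client.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the key privacy considerations for clinical note-taking."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# Print the generated text from the model's response.
print(response["output"]["message"]["content"][0]["text"])
```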
Fujita Health University improves workflows for doctors
Fujita Health University, the largest private medical university in Japan, used Amazon Bedrock to explore possible improvements to doctor workflows. Their pilot project evaluated the feasibility of using generative AI to create discharge summaries, which are critical medical records that capture a patient’s treatment history and diagnosis during their hospital stay. With Amazon Bedrock, Fujita reduced the time required for discharge summaries by up to 90 percent, bringing it down to approximately 1 minute per patient. By automating the repetitive aspects of these essential tasks, healthcare professionals can optimize their workloads and focus more on patient communication and personalized care, which can lead to better patient outcomes.
Genomics England accelerates gene-disease research using Anthropic Claude on AWS
Genomics England, a leader in human genome research, is developing a solution using Claude 3 models on Amazon Bedrock to help researchers identify associations between genetic variants and medical conditions. Drawing on peer-reviewed articles, this research has the potential to inform future genetic tests and improve human health, with an initial focus on intellectual disability. The solution can quickly process millions of pages of literature to surface the highest-likelihood gene associations for further investigation, far faster than manual review alone, with 20 potentially clinically relevant associations already identified.
AlayaCare equips home care professionals with rapid information when engaging patients
AlayaCare empowers home-based care providers and caregivers to deliver better client care through technology. Using AWS AI technologies, they’re automating the heavy lifting of extracting crucial data from patient forms and care plans and turning it into easy-to-digest summaries, so nurses and doctors can get the insights they need and focus on patient care. Additionally, AlayaCare can identify clients at risk of readmission to acute care or hospitalization, reducing wait times, improving care intervention times, and lowering the cost of care through early identification.
Moving forward responsibly with generative AI
When it comes to navigating ethical and responsible use of AI in healthcare, accuracy, security, privacy, and fairness are paramount considerations. At AWS, we have tools to help your organization get started with AI safely and responsibly.
Improving accuracy with Retrieval-Augmented Generation
One way to think about a generative AI large language model (LLM) is as an eager new employee who doesn’t keep up with current events but confidently answers every question. This is because LLMs are trained offline with data only up to a certain point, so the model is unaware of any information created after it was trained or coming from a source it has not encountered. LLMs are also typically trained on general domain information, which can make them less effective for domain-specific tasks.
Retrieval-Augmented Generation (RAG) allows an application to retrieve data from outside the foundation model. This is achieved by combining the LLM with search technology and enriching prompts with the relevant retrieved data as context. RAG is a powerful tool in the quest for accuracy in domain-specific AI solutions. By pairing LLMs with search over data specific to your use cases and users, it is possible to incorporate both general domain knowledge and specialized healthcare context. This gives providers access to organizational and professional knowledge, seamlessly integrated into clinical decision-making processes. It also enables new use cases, such as offering different versions of an AI application in different countries to meet local requirements.
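To make this concrete, the sketch below shows one managed way to apply RAG on AWS: Knowledge Bases for Amazon Bedrock and its RetrieveAndGenerate API. It assumes a knowledge base has already been created over your documents; the knowledge base ID, model ARN, and question are placeholders.

```python
import boto3

# The Bedrock agent runtime client exposes the RetrieveAndGenerate API for Knowledge Bases.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "What does our discharge protocol require for post-operative follow-up?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KBEXAMPLE123",  # placeholder: your knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)

# The generated answer is grounded in retrieved passages, and citations point
# back to the source documents used to enrich the prompt.
print(response["output"]["text"])
for citation in response.get("citations", []):
    for reference in citation.get("retrievedReferences", []):
        print(reference.get("location"))
```

Because retrieval happens at request time, updating the underlying documents updates the answers without retraining or fine-tuning the model.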
Defending data integrity, safeguarding privacy, preserving quality
AWS provides a robust framework for responsible AI deployment, prioritizing the privacy and security of user data and continuously monitoring and mitigating potential biases. Amazon Bedrock is in scope for common compliance standards, including ISO, SOC, and CSA STAR Level 2, and is HIPAA eligible. Customers can also use Amazon Bedrock in compliance with the GDPR.
Data input into Amazon Bedrock is stored in an encrypted format within the AWS Region of the application and is never shared with third-party model providers. Take AWS HealthScribe, a HIPAA-eligible service powered by Amazon Bedrock, as an example. It uses speech recognition and generative AI to automatically generate preliminary clinical documentation. The service is built with security and privacy in mind: you control where your data is stored, and data is encrypted in transit and at rest. AWS does not use inputs or outputs generated through the service to train its models.
Guardrails for Amazon Bedrock offers industry-leading safety protection, giving customers the ability to define content policies, set application behavior boundaries, and implement safeguards against potential risks. Guardrails for Amazon Bedrock is the only solution offered by a major cloud provider that enables customers to build and customize safety and privacy protections for their generative AI applications in a single solution. It helps customers block as much as 85 percent more harmful content than the protection natively provided by FMs on Amazon Bedrock and provides robust personally identifiable information (PII) detection capabilities.
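As an illustration, the sketch below shows how a guardrail that has already been created and published can be attached to a request through the Converse API. The guardrail identifier, version, model ID, and prompt are placeholders, and exact response fields may vary by SDK version.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Attach an existing guardrail to the request; content that violates its policies
# is blocked and replaced with the guardrail's configured messaging.
response = client.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Draft a visit summary for this transcript: ..."}],  # illustrative prompt
        }
    ],
    guardrailConfig={
        "guardrailIdentifier": "gr-example-id",  # placeholder: your guardrail ID or ARN
        "guardrailVersion": "1",                 # placeholder: a published guardrail version
    },
)

# stopReason indicates when the guardrail intervened rather than the model completing normally.
print(response["stopReason"])
print(response["output"]["message"]["content"][0]["text"])
```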
AWS is committed to user safety, security, and privacy as generative AI matures and its use within healthcare becomes more widespread. We aim to ensure that providers, patients, and healthcare agencies have access to the right tools for the right use case, all while upholding our core principle that “security is job zero.” This commitment underscores our dedication to fostering responsible AI initiatives and maintaining the trust of our customers in the ever-evolving landscape of healthcare technology.
Talk to a specialist to learn about generative AI in healthcare and life sciences and explore responsible AI considerations for your applications.