Amazon SageMaker
Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. With Amazon SageMaker, the barriers and complexity that typically slow down developers who want to use machine learning are removed. The service includes modules that can be used together or independently to build, train, and deploy your machine learning models.

Medical LLM - 14B
By: John Snow Labs
Latest Version: 2.0
Free trial
Medical model offering top-tier depth and accuracy in processing complex medical cases and literature, ideal for specialized medical use
Product Overview
The 14B parameter model represents the pinnacle of our medical language modeling capabilities, offering unparalleled depth in medical knowledge processing and clinical reasoning. This powerful model excels in handling the most complex medical cases, rare conditions, and sophisticated clinical analyses. It demonstrates exceptional accuracy in interpreting complicated medical literature, producing detailed clinical summaries, and providing comprehensive responses to intricate medical queries. While requiring more computational resources, it delivers superior performance in critical medical tasks where accuracy and depth of understanding are paramount. Its advanced RAG optimization enables sophisticated integration with extensive medical databases and research repositories. Choose this model for specialized medical institutions, research facilities, and scenarios where premium performance in complex medical tasks justifies the additional computational investment.
Key Data
Version: 2.0
Type: Model Package
Highlights
Real-Time Inference
- Instance Type: ml.g5.12xlarge
- Maximum Model Length: 16,000 tokens
Tokens per second during real-time inference:
- Summarization: up to 18 tokens per second
- QA: up to 27 tokens per second
Batch Transform
- Instance Type: ml.g5.12xlarge
- Maximum Model Length: 16,000 tokens
Tokens per second during batch transform operations:
- Summarization: up to 34 tokens per second
- QA: up to 207 tokens per second
Accuracy
- Achieves an 81.42% average score on medical benchmarks, competitive with GPT-4 (82.85%)
- Outstanding clinical comprehension (92.36%), exceeding Med-PaLM-2's 88.3%
- Superior medical reasoning (90%) comparable to top-tier models
- Outperforms Meditron-70B despite being 5x smaller
- State-of-the-art performance in medical tasks while maintaining deployment efficiency
Pricing Information
Use this tool to estimate the software and infrastructure costs based on your configuration choices. Your actual usage and costs might differ from this estimate; they will be reflected on your monthly AWS billing reports.
Contact us to request contract pricing for this product.
Estimating your costs
Choose your region and launch option to see the pricing details. Then, modify the estimated price by choosing different instance types.
Software Pricing
Model Realtime Inference: $9.98/hr, running on ml.g5.12xlarge
Model Batch Transform: $9.98/hr, running on ml.g5.12xlarge
Infrastructure Pricing
With Amazon SageMaker, you pay only for what you use. Training and inference are billed by the second, with no minimum fees and no upfront commitments. Pricing within Amazon SageMaker is broken down by on-demand ML instances, ML storage, and fees for data processing in notebooks and inference instances.
Learn more about SageMaker pricing
SageMaker Realtime Inference: $7.09/host/hr, running on ml.g5.12xlarge
SageMaker Batch Transform: $7.09/host/hr, running on ml.g5.12xlarge
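As a rough illustration of the combined rate: real-time inference on the recommended ml.g5.12xlarge instance costs an estimated $9.98/hr (software) + $7.09/hr (infrastructure) = $17.07 per instance-hour, before ML storage, data-processing fees, and taxes.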
About the Free Trial
Try this product for 15 days. There will be no software charges, but AWS infrastructure charges still apply. Free Trials will automatically convert to a paid subscription upon expiration.
Model Realtime Inference
For model deployment as a real-time endpoint in Amazon SageMaker, the software is priced hourly, and rates can vary by instance type. Additional infrastructure costs, taxes, or fees may apply.

Instance Type | Realtime Inference/hr
---|---
ml.g5.12xlarge (Vendor Recommended) | $9.98
Usage Information
Model input and output details
Input
Summary
Input Format
1. Chat Completion
Example Payload
{
  "model": "/opt/ml/model",
  "messages": [
    {"role": "system", "content": "You are a helpful medical assistant."},
    {"role": "user", "content": "What should I do if I have a fever and body aches?"}
  ],
  "max_tokens": 1024,
  "temperature": 0.7
}
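Below is a minimal sketch of sending this payload to a deployed real-time endpoint with boto3. The endpoint name is a placeholder (an assumption, not part of this listing); substitute the name you chose when deploying the model package.

import json
import boto3

# SageMaker runtime client for invoking deployed endpoints
runtime = boto3.client("sagemaker-runtime")

payload = {
    "model": "/opt/ml/model",  # SageMaker's fixed model location
    "messages": [
        {"role": "system", "content": "You are a helpful medical assistant."},
        {"role": "user", "content": "What should I do if I have a fever and body aches?"}
    ],
    "max_tokens": 1024,
    "temperature": 0.7
}

response = runtime.invoke_endpoint(
    EndpointName="medical-llm-14b",  # placeholder: use your endpoint's name
    ContentType="application/json",
    Body=json.dumps(payload)
)

# Parse the non-streaming JSON response described in the Output section below
result = json.loads(response["Body"].read())
print(result["choices"][0]["message"]["content"])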
2. Text Completion
Single Prompt Example
{
  "model": "/opt/ml/model",
  "prompt": "How can I maintain good kidney health?",
  "max_tokens": 512,
  "temperature": 0.6
}
Multiple Prompts Example
{
  "model": "/opt/ml/model",
  "prompt": [
    "How can I maintain good kidney health?",
    "What are the best practices for kidney care?"
  ],
  "max_tokens": 512,
  "temperature": 0.6
}
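As a sketch (with the same placeholder endpoint name as above), a multi-prompt request returns one entry in "choices" per prompt, in request order:

import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "model": "/opt/ml/model",
    "prompt": [
        "How can I maintain good kidney health?",
        "What are the best practices for kidney care?"
    ],
    "max_tokens": 512,
    "temperature": 0.6
}

response = runtime.invoke_endpoint(
    EndpointName="medical-llm-14b",  # placeholder: use your endpoint's name
    ContentType="application/json",
    Body=json.dumps(payload)
)

result = json.loads(response["Body"].read())
for choice in result["choices"]:  # one choice per prompt, in request order
    print(choice["index"], choice["text"])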
Important Notes:
- Streaming Responses: Add "stream": true to your request payload to enable streaming (see the streaming output formats and the consumption sketch in the Output section below)
- Model Path Requirement: Always set "model": "/opt/ml/model" (SageMaker's fixed model location)
Input MIME type
application/json
Output
Summary
Output Format
The API delivers responses in two modes:
- Non-streaming: The complete response is returned as a single JSON object once the model finishes generating the output. This occurs when "stream": false (default) is set in the request payload.
- Streaming: The response is delivered incrementally as JSON Lines (JSONL) chunks, each prefixed with data: and ending with a newline. The stream concludes with data: [DONE]. This mode is activated by setting "stream": true in the request payload.
This section details the structure and fields of the output for both chat completion and text completion endpoints in each mode, reflecting the behavior of a model hosted on Amazon SageMaker with the fixed path "/opt/ml/model".
Non-Streaming Responses
In non-streaming mode, the API returns a single JSON object containing the full response.
1. Chat Completion
Description:
The chat completion response contains the model's reply to a series of input messages (e.g., from "system" and "user" roles), as shown in the example payload above.
Example:
{
  "id": "chatcmpl-1d202501a96e4580b6352ba7064e6bb8",
  "object": "chat.completion",
  "created": 1743488701,
  "model": "/opt/ml/model",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The patient presents with symptoms of a ...",
        "tool_calls": []
      },
      "logprobs": null,
      "finish_reason": "stop",
      "stop_reason": null
    }
  ],
  "usage": {
    "prompt_tokens": 206,
    "completion_tokens": 356,
    "total_tokens": 562,
    "prompt_tokens_details": null
  },
  "prompt_logprobs": null
}
2. Text Completion
Description:
The text completion response contains the model's generated text based on a single prompt or an array of prompts, as shown in the single- and multiple-prompt examples above.
Example (Single Prompt):
{
  "id": "cmpl-a6d9952b95dc4c0dbea4cf9deeb46560",
  "object": "text_completion",
  "created": 1743488720,
  "model": "/opt/ml/model",
  "choices": [
    {
      "index": 0,
      "text": "If you have a fever and body aches ...",
      "logprobs": null,
      "finish_reason": "stop",
      "stop_reason": null,
      "prompt_logprobs": null
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 368,
    "total_tokens": 382,
    "prompt_tokens_details": null
  }
}
Example (Multiple Prompts):
{
  "id": "cmpl-86c6f7fe2ead4dc79ba5942eecfb9930",
  "object": "text_completion",
  "created": 1743489812,
  "model": "/opt/ml/model",
  "choices": [
    {
      "index": 0,
      "text": "To maintain good kidney health ...",
      "logprobs": null,
      "finish_reason": "stop",
      "stop_reason": null,
      "prompt_logprobs": null
    },
    {
      "index": 1,
      "text": "Best practices for kidney care include ...",
      "logprobs": null,
      "finish_reason": "stop",
      "stop_reason": null,
      "prompt_logprobs": null
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 50,
    "total_tokens": 70,
    "prompt_tokens_details": null
  }
}
Streaming Responses
In streaming mode ("stream": true), the API delivers the response as a series of JSON Lines (JSONL) chunks, each prefixed with data: and terminated with a newline. The stream ends with data: [DONE]. On SageMaker, this stream is consumed through the InvokeEndpointWithResponseStream runtime API.
1. Chat Completion (Streaming)
Description:
Each chunk contains a portion of the assistant’s message. The full response is reconstructed by concatenating the content fields from the delta objects in the order received.
Example:
data: {"id":"chatcmpl-5a398898be0b4014b7eb9fb15798a006","object":"chat.completion.chunk","created":1743433744,"model":"/opt/ml/model","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null,"stop_reason":null}]}
data: {"id":"chatcmpl-5a398898be0b4014b7eb9fb15798a006","object":"chat.completion.chunk","created":1743433744,"model":"/opt/ml/model","choices":[{"index":0,"delta":{"content":"If"},"logprobs":null,"finish_reason":null,"stop_reason":null}]}
data: {"id":"chatcmpl-5a398898be0b4014b7eb9fb15798a006","object":"chat.completion.chunk","created":1743433744,"model":"/opt/ml/model","choices":[{"index":0,"delta":{"content":" you"},"logprobs":null,"finish_reason":null,"stop_reason":null}]}
data: {"id":"chatcmpl-5a398898be0b4014b7eb9fb15798a006","object":"chat.completion.chunk","created":1743433744,"model":"/opt/ml/model","choices":[{"index":0,"delta":{"content":" have"},"logprobs":null,"finish_reason":"length","stop_reason":null}]}
data: [DONE]
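A minimal sketch of consuming such a stream with boto3 follows; the endpoint name is a placeholder, and the loop assumes the data:-prefixed JSONL framing shown above. Because a payload part can end mid-line, the sketch buffers bytes and only parses complete lines.

import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "model": "/opt/ml/model",
    "messages": [
        {"role": "user", "content": "What should I do if I have a fever and body aches?"}
    ],
    "max_tokens": 1024,
    "temperature": 0.7,
    "stream": True  # enables the chunked JSONL output shown above
}

response = runtime.invoke_endpoint_with_response_stream(
    EndpointName="medical-llm-14b",  # placeholder: use your endpoint's name
    ContentType="application/json",
    Body=json.dumps(payload)
)

buffer = b""
parts = []
done = False
for event in response["Body"]:  # EventStream of PayloadPart chunks
    buffer += event.get("PayloadPart", {}).get("Bytes", b"")
    # A payload part can end mid-line, so only consume complete lines.
    while b"\n" in buffer and not done:
        line, buffer = buffer.split(b"\n", 1)
        text = line.decode("utf-8").strip()
        if not text.startswith("data:"):
            continue
        data = text[len("data:"):].strip()
        if data == "[DONE]":
            done = True
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content") or "")
    if done:
        break

print("".join(parts))  # the reassembled assistant message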
2. Text Completion (Streaming)
Description:
Each chunk contains a portion of the generated text. The full response is reconstructed by concatenating the text fields from each chunk in the order received.
Example:
data: {"id":"cmpl-1318a788635e47a58bafeaf18a2816c2","object":"text_completion","created":1743433786,"model":"/opt/ml/model","choices":[{"index":0,"text":"If","logprobs":null,"finish_reason":null,"stop_reason":null}],"usage":null}
data: {"id":"cmpl-1318a788635e47a58bafeaf18a2816c2","object":"text_completion","created":1743433786,"model":"/opt/ml/model","choices":[{"index":0,"text":" you","logprobs":null,"finish_reason":null,"stop_reason":null}],"usage":null}
data: {"id":"cmpl-1318a788635e47a58bafeaf18a2816c2","object":"text_completion","created":1743433786,"model":"/opt/ml/model","choices":[{"index":0,"text":" have","logprobs":null,"finish_reason":null,"stop_reason":null}],"usage":null}
data: {"id":"cmpl-1318a788635e47a58bafeaf18a2816c2","object":"text_completion","created":1743433786,"model":"/opt/ml/model","choices":[{"index":0,"text":" a","logprobs":null,"finish_reason":"stop","stop_reason":null}],"usage":null}
data: [DONE]
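The buffering loop sketched above for chat completions applies unchanged here; the only difference is that each parsed chunk carries its text directly under choices[0]["text"] rather than in a delta object.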
Output MIME type
application/json, text/event-stream
End User License Agreement
By subscribing to this product, you agree to the terms and conditions outlined in the product's End User License Agreement (EULA).
Support Information
Medical LLM - 14B
For assistance, please reach out to support@johnsnowlabs.com or join the Slack channel: https://spark-nlp.slack.com/archives/C06HG18DDDH
AWS Infrastructure
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
Refund Policy
No refunds are possible.
Customer Reviews
There are currently no reviews for this product.