    The Inference Server - Llama.cpp - CUDA - NVIDIA Container - Ubuntu 22
    Run AI inference on your own server for coding support, creative writing, summarizing, and more, without sharing data with other services. The Inference Server has everything you need to run state-of-the-art inference on GPU servers. It includes llama.cpp inference, the latest CUDA, and NVIDIA Docker container support, plus llama-cpp-python, Open Interpreter, and the Tabby coding assistant.
    Overview

    The Inference Server provides the full infrastructure to run fast inference on GPUs.

    It includes llama.cpp inference, the latest CUDA, and the NVIDIA Container Toolkit for Docker.

    Leverage the multitude of freely available models and run inference with 8-bit or lower quantization, which makes inference possible on GPUs with, e.g., 16 GB or 24 GB of memory.
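    As a rough plausibility check, here is a back-of-the-envelope estimate of the memory needed for the weights of a 13B-parameter model at about 5.5 bits per weight (an approximate figure for Q5_K_M quantization, not stated in this listing; KV cache and runtime overhead come on top):

    ```shell
    # Weight memory for a 13B model at ~5.5 bits/weight:
    # 13e9 parameters * 5.5 bits / 8 bits-per-byte ≈ 8.9e9 bytes
    awk 'BEGIN { printf "%.1f GB\n", 13e9 * 5.5 / 8 / 1e9 }'
    # prints: 8.9 GB
    ```

    The quantized weights therefore fit comfortably on a 16 GB GPU, which is what makes inference on g4dn and g5 instances practical.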

    Llama.cpp offers efficient inference of quantized models in interactive and server modes. It features:

    • Plain C/C++ implementation without dependencies
    • 2-bit, 3-bit, 4-bit, 5-bit, 6-bit and 8-bit integer quantization support
    • Running inference on GPU and CPU simultaneously, allowing larger models to run when GPU memory alone is insufficient
    • AVX, AVX2 and AVX512 support for x86 architectures
    • Supported models: LLaMA, LLaMA 2, Falcon, Alpaca, GPT4All, Chinese LLaMA / Alpaca and Chinese LLaMA-2 / Alpaca-2, Vigogne (French), Vicuna, Koala, OpenBuddy (Multilingual), Pygmalion 7B / Metharme 7B, WizardLM, Baichuan-7B and its derivations (such as baichuan-7b-sft), Aquila-7B / AquilaChat-7B, Starcoder models, Mistral AI v0.1, Refact

    Here is our guide: How to use the AI SP Inference Server

    In addition, the Inference Server supports:

    • llama-cpp-python: OpenAI API compatible Llama.cpp inference server
    • Open Interpreter: lets language models run code on your computer; an open-source, locally running implementation of OpenAI's Code Interpreter
    • Tabby coding assistant: a self-hosted AI coding assistant, offering an open-source alternative to GitHub Copilot
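    As an illustration of the OpenAI-compatible server, here is a minimal sketch (assumptions: llama-cpp-python's server module is available as stated, its default port 8000 applies, and the model file is the Xwin-LM example used elsewhere in this listing):

    ```shell
    # Start the OpenAI-API-compatible server in the background
    python3 -m llama_cpp.server --model models/xwin-lm-13b-v0.1.Q5_K_M.gguf --n_gpu_layers 52 &

    # Query it with a standard OpenAI-style completions request
    curl http://localhost:8000/v1/completions \
      -H "Content-Type: application/json" \
      -d '{"prompt": "Building a website can be done in 10 simple steps:\nStep 1:", "max_tokens": 64}'
    ```

    Because the API is OpenAI-compatible, existing OpenAI client libraries can be pointed at http://localhost:8000/v1 instead of the public service.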

    Includes remote desktop access via NICE DCV high-end remote desktop or via SSH (PuTTY, ...).

    Highlights

    • Ready-to-run inference: everything pre-installed. Download a model for coding, text generation, chat, ... and start creating output
    • Different options to run Inference servers for text generation, coding integration for IDE support, summarizing, sentiment analysis, ...
    • You own the data and inference. No data is shared with any public service for AI inference.

    Details

    Delivery method: 64-bit (x86) Amazon Machine Image (AMI)

    Operating system: Ubuntu 22

    Typical total price: $1.106/hour, based on use of the seller's recommended configuration (g5.xlarge) in the US East (N. Virginia) Region.

    Features and programs

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Pricing

    Pricing is based on actual usage, with charges varying according to how much you consume. Subscriptions have no end date and may be canceled any time.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator  to estimate your infrastructure costs.

    Usage costs (13)

    Instance type            Product cost/hour   EC2 cost/hour   Total/hour
    g4dn.xlarge              $0.06               $0.526          $0.586
    g4dn.2xlarge             $0.08               $0.752          $0.832
    g4dn.4xlarge             $0.12               $1.204          $1.324
    g4dn.8xlarge             $0.16               $2.176          $2.336
    g4dn.12xlarge            $0.32               $3.912          $4.232
    g4dn.16xlarge            $0.36               $4.352          $4.712
    g4dn.metal               $0.48               $7.824          $8.304
    g5.xlarge (recommended)  $0.10               $1.006          $1.106
    g5.2xlarge               $0.13               $1.212          $1.342
    g5.4xlarge               $0.18               $1.624          $1.804

    Additional AWS infrastructure costs

    Type                                    Cost
    EBS General Purpose SSD (gp2) volumes   $0.10 per GB/month of provisioned storage

    Vendor refund policy

    No refunds. The instance is billed per hour of actual use; terminate it at any time and product charges stop.

    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    64-bit (x86) Amazon Machine Image (AMI)

    Amazon Machine Image (AMI)

    An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.

    Version release notes

    Includes Llama.cpp as of Dec 21, 2024. Security fixes.

    Additional details

    Usage instructions

    Make sure the instance security groups allow inbound traffic on TCP port 22 and on TCP and UDP port 8443.

    To connect to your Inference Server you have different options:

    Option 1: Connect with the native NICE DCV Client for best performance

    1. Download the NICE DCV client from: https://download.nice-dcv.com/  (includes Windows portable client)
    2. In the DCV client connection field enter the instance public IP to connect.
    3. Sign in using the following credentials: User: ubuntu. Password: last 6 digits of the instance ID.

    Option 2: Connect with NICE DCV Web Client for convenience

    1. Connect with the following URL: https://IP_OR_FQDN:8443/, e.g. https://3.70.184.235:8443/ 
    2. Sign in using the following credentials: User: ubuntu. Password: last 6 digits of the instance ID.

    Option 3: Set your own password and connect

    1. Connect to your remote machine with ssh -i <your-pem-key> ubuntu@<public-dns>
    2. Set the password for the user "ubuntu" with sudo passwd ubuntu. This is the password you will use to log in to DCV
    3. Connect to your remote machine with the NICE DCV native client or web client as described above
    4. Enter your credentials and you are ready to rock

    Please do not upgrade to a new kernel or OS release, as doing so might disable the GPU driver.


    Quick start

    How to run neural network inference with llama.cpp for quantized models - example with Xwin-LM-13B:

    # depending on the instance type g4dn or g5, use one of the 'cd' commands below
    cd ~/inference/llama.cpp-g4dn
    cd ~/inference/llama.cpp-g5

    # now download the model - the example is Xwin-LM-13B with 5-bit quantization
    cd models
    wget https://huggingface.co/TheBloke/Xwin-LM-13B-V0.1-GGUF/resolve/main/xwin-lm-13b-v0.1.Q5_K_M.gguf
    cd ..

    # start inference (-ngl 52 moves 52 layers onto the GPU)
    ./main -m models/xwin-lm-13b-v0.1.Q5_K_M.gguf -p 'Building a website can be done in 10 simple steps:\nStep 1:' -n 600 -e -c 2700 --color --temp 0.1 --log-disable -ngl 52

    # or put your prompt into the file "prompt.txt" and run
    bash run.sh

    # llama.cpp also supports a chat mode - add the option '-i':
    ./main -i -m models/xwin-lm-13b-v0.1.Q5_K_M.gguf -p 'Building a website can be done in 10 simple steps:\nStep 1:' -n 600 -e -c 2700 --color --temp 0.1 --log-disable -ngl 52

    Have fun inferring!

    (At the moment the AMI supports g4dn and g5 instances - you can clone and compile for other instance types like p3).
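    Cloning and compiling for another instance type can be sketched as follows (an assumption, not vendor-verified steps: build flags have changed across llama.cpp releases, so check the repository's build documentation; `LLAMA_CUBLAS=1` is the Makefile CUDA flag used by llama.cpp builds of this period):

    ```shell
    # Clone and build llama.cpp with CUDA support for a different GPU instance type
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    make LLAMA_CUBLAS=1
    ```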

    Support

    Vendor support

    Guide to use the Inference server (https://www.ai-sp.com/how-to-use-the-ai-sp-inference-server/ ). Free support is available through forums (https://forums.thinkwithwp.com/forum.jspa?forumID=366 )

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.

    Customer reviews

    0 AWS reviews. No customer reviews yet.