Overview
Stable Diffusion lets you render stunning images from text or image input on your own AWS cloud server with great performance. NeRF neural networks create 3D scenes from videos and images. The AMI runs the Ubuntu 22 operating system.
AUTOMATIC Stable Diffusion
Stable Diffusion creates images similar to Midjourney or OpenAI DALL-E. AUTOMATIC is a feature-rich Stable Diffusion integration and GUI for creating beautiful images yourself.
Supports text-to-image (txt2img) as well as image-to-image (img2img) to create impressive images based on other images, with a guidance prompt controlling the influence on the generated result.
Leverages the Automatic Stable Diffusion bundle and GUI, including built-in upscaling (ESRGAN, LDSR, ...), face restoration (GFPGAN, CodeFormer, ...), inpainting, outpainting and many other features.
Supported versions: Stable Diffusion 1.4, 2.0 and 2.1.
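Besides the web GUI, the AUTOMATIC1111 web UI can also expose an HTTP API when it is launched with the --api flag (whether the AMI's start script enables this, and the default port 7860, are assumptions). A minimal txt2img sketch from the command line could then look like this:

    # Assumes the web UI was launched with --api and listens on 127.0.0.1:7860; requires jq.
    curl -s -X POST http://127.0.0.1:7860/sdapi/v1/txt2img \
      -H "Content-Type: application/json" \
      -d '{"prompt": "a castle on a cliff at sunset, highly detailed", "steps": 20, "width": 512, "height": 512}' \
      | jq -r '.images[0]' | base64 -d > castle.png   # the API returns generated images as base64 strings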
Deforum Stable Diffusion
Create image sequences and videos automatically with Deforum Stable Diffusion. Please find more background in our guide linked to the left. Example: https://www.youtube.com/watch?v=SEXbfni0nRc . Deforum is also available as an extension in the primary Automatic version of Stable Diffusion.
NeRF - Create 3D Scenes with Neural Networks
NeRFs use neural networks to represent and render realistic 3D scenes from an input collection of 2D images. The input images can be sampled automatically from a video or come from a collection of videos. Please find more background in our guide linked to the left. Supports Instant-NGP from NVIDIA and Nerfstudio, which integrates different NeRF technologies.
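For the Nerfstudio integration, a typical workflow (shown here only as a sketch; the video file name and output directory are placeholders, and the exact paths on this AMI may differ) is to extract and pose frames from a video and then train a NeRF model:

    # Extract frames from a video and estimate camera poses (uses COLMAP under the hood)
    ns-process-data video --data my_clip.mp4 --output-dir ~/nerf/my_scene
    # Train the nerfacto model on the processed scene and follow progress in the web viewer
    ns-train nerfacto --data ~/nerf/my_scene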
Supports T4 GPUs with 16 GB of VRAM (g4dn family) and powerful A10G GPUs with 24 GB (g5 family) for large image rendering.
Uses NICE DCV from AWS to offer a high-end remote desktop. You can upload and download the images you create via the DCV interface.
If you prefer Windows as the operating system, please check out our other Stable Diffusion Windows Marketplace offering.
This is a collaborative project of NI SP and AI SP.
More background on Stable Diffusion and license:
Stable Diffusion is a latent text-to-image diffusion model. Thanks to a generous compute donation from Stability AI and support from LAION, they were able to train a Latent Diffusion Model on 512x512 images from a subset of the LAION-5B database. Similar to Google's Imagen, this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM. See this section below and the model card. Stable Diffusion was trained on AWS GPU servers.
Stable Diffusion v1 refers to a specific configuration of the model architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and then finetuned on 512x512 images.
Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions that are present in its training data. Details on the training procedure and data, as well as the intended use of the model can be found in the corresponding model card.
The weights are available via the CompVis organization at Hugging Face under a license which contains specific use-based restrictions to prevent misuse and harm as informed by the model card, but otherwise remains permissive. While commercial use is permitted under the terms of the license, we do not recommend using the provided weights for services or products without additional safety mechanisms and considerations, since there are known limitations and biases of the weights, and research on safe and ethical deployment of general text-to-image models is an ongoing effort. The weights are research artifacts and should be treated as such.
The CreativeML OpenRAIL M license is an Open RAIL M license, adapted from the work that BigScience and the RAIL Initiative are jointly carrying out in the area of responsible AI licensing. See also the article about the BLOOM Open RAIL license on which our license is based.
Highlights
- Render stunning images leveraging Stable Diffusion neural networks with your own GPU cloud server
- Stable Diffusion image-to-image enhances existing images, even doodles, guided by a descriptive prompt. Create 3D scenes from images with NeRF neural networks.
- Supports T4 GPUs with 16 GB or A10G GPUs with 24 GB of GPU memory to render images at high resolutions
Details
Typical total price
$0.842/hour
Features and programs
Financing for AWS Marketplace purchases
Pricing
Instance type | Product cost/hour | EC2 cost/hour | Total/hour |
---|---|---|---|
g4dn.xlarge | $0.09 | $0.526 | $0.616 |
g4dn.2xlarge (recommended) | $0.09 | $0.752 | $0.842 |
g4dn.4xlarge | $0.19 | $1.204 | $1.394 |
g4dn.8xlarge | $0.29 | $2.176 | $2.466 |
g4dn.12xlarge | $0.39 | $3.912 | $4.302 |
g4dn.16xlarge | $0.49 | $4.352 | $4.842 |
g4dn.metal | $0.49 | $7.824 | $8.314 |
g5.xlarge | $0.14 | $1.006 | $1.146 |
g5.2xlarge | $0.19 | $1.212 | $1.402 |
g5.4xlarge | $0.24 | $1.624 | $1.864 |
Additional AWS infrastructure costs
Type | Cost |
---|---|
EBS General Purpose SSD (gp2) volumes | $0.10 per GB/month of provisioned storage |
Vendor refund policy
No refund
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
Version release notes
Includes Stable Diffusion with the DreamShaper_6_BakedVae checkpoint. Vlad/Automatic SD with web GUI as of Dec 31, 2023, with Torch 2.0 and CUDA 11.7. Updated to the latest Vlad/Automatic GUI including Stable Diffusion, extension manager, depth map creation, merging of checkpoints, LoRA support, DreamBooth training and much more. Requires at least 32 GB of main memory, e.g. g4dn.2xlarge.
Additional details
Usage instructions
Make sure the instance security group allows inbound traffic on TCP port 8443 and, optionally, on TCP port 22 and UDP port 8443.
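If these ports are not yet open, they can be added to the instance's security group, for example with the AWS CLI; in this sketch, sg-0123456789abcdef0 and 203.0.113.0/24 are placeholders for your own security group ID and source network:

    # DCV over TCP (required)
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8443 --cidr 203.0.113.0/24
    # Optional: DCV over UDP (QUIC) and SSH
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 8443 --cidr 203.0.113.0/24
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.0/24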
To connect, you have several options:
Connect with the NICE DCV Web Client
Open the following URL: https://IP_OR_FQDN:8443/, e.g. https://3.70.184.235:8443/. Sign in with the following credentials: User: ubuntu. Password: the last 6 digits of the instance ID.
Connect with the native NICE DCV Client
Download the NICE DCV client from https://download.nice-dcv.com/ (includes a portable Windows client). In the DCV client connection field, enter the instance's public IP to connect. Sign in with the following credentials: User: ubuntu. Password: the last 6 digits of the instance ID.
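The instance ID is shown in the EC2 console; alternatively, you can look it up with the AWS CLI from your local machine. In this sketch, 3.70.184.235 stands for your instance's public IP, and the last 6 characters of the returned ID form the initial password:

    # Look up the instance ID by its public IP (requires a configured AWS CLI)
    aws ec2 describe-instances --filters "Name=ip-address,Values=3.70.184.235" \
      --query "Reservations[].Instances[].InstanceId" --output text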
Set your own password and connect
Connect to your remote machine with ssh -i <your-pem-key> ubuntu@<public-dns> (requires TCP port 22 to be open). Set the password for the user "ubuntu" with sudo passwd ubuntu; this is the password you will use to log in to DCV. Then connect to your remote machine with the NICE DCV native client or web client as described above, enter your credentials, and you are ready to rock.
See also our guides at https://www.ni-sp.com/how-to-run-stable-diffusion-on-your-own-cloud-gpu-server/ , https://www.ni-sp.com/how-to-run-deforum-stable-diffusion-on-your-own-cloud-gpu-server/ and https://www.ni-sp.com/nerf-how-to-create-3d-scenes-with-neural-networks-yourself/ . How to use the AMI: https://www.youtube.com/watch?v=0tZgEV5C9Z0
Read more about DreamShaper at https://civitai.com/models/4384/dreamshaper .
Please check out https://www.reddit.com/r/StableDiffusion for more background on how to use Stable Diffusion.
When switching checkpoints in the upper-left selector, the machine can run out of memory. In this case, reboot the instance and adapt the start-AUTO.sh script to specify the initially desired checkpoint, e.g. with the options --autolaunch --ckpt models/Stable-diffusion/ProtoGen_X3.4.safetensors
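Assuming start-AUTO.sh wraps the regular AUTOMATIC1111 launch script (the actual contents on your instance may differ), the adapted launch line could look like the following; ProtoGen_X3.4.safetensors is just the example checkpoint from above:

    # In start-AUTO.sh (sketch): load the desired checkpoint directly at startup
    ./webui.sh --autolaunch --ckpt models/Stable-diffusion/ProtoGen_X3.4.safetensors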
Support
Vendor support
- DCV User Guide: https://docs.thinkwithwp.com/dcv/latest/userguide/getting-started.html
- General Stable Diffusion Discord: https://discord.com/invite/stablediffusion and support channel: https://discord.com/channels/1002292111942635562/1002602742667280404
- Deforum Stable Diffusion Discord:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.