AWS Startups Blog

Accelerating the next wave of generative AI startups

Since day one, AWS has helped startups bring their ideas to life by democratizing access to the technology that powers some of the largest enterprises in the world, including Amazon. Each year since 2020, we have provided startups with nearly $1 billion in AWS Promotional Credits. It’s no coincidence, then, that 80% of the world’s unicorns run on AWS. I have been lucky to have a front-row seat to the development of so many of these startups over my time at AWS, companies like Netflix, Wiz, and Airtasker. And I’m enthusiastic about the rapid pace at which startups are adopting generative artificial intelligence (AI) and how this technology is creating an entirely new generation of startups.

These generative AI startups have the potential to transform industries and shape the future, which is why today we announced a commitment of $230 million to accelerate the creation of generative AI applications by startups around the world. We are excited to collaborate with visionary startups, nurture their growth, and unlock new possibilities. In addition to this monetary investment, today we’re also announcing the second annual AWS Generative AI Accelerator in partnership with NVIDIA. This global 10-week hybrid program is designed to propel the next wave of generative AI startups. This year, we’re expanding the program 4x to serve 80 startups globally. Selected participants will each receive up to $1 million in AWS Promotional Credits to fuel their development and scaling needs. The program also provides go-to-market support as well as business and technical mentorship. Participants will tap into a network that includes domain experts from AWS as well as key AWS partners such as NVIDIA, Meta, Mistral AI, and venture capital firms investing in generative AI.

Building in the cloud with generative AI

In addition to these programs, AWS is committed to making it possible for startups of all sizes and developers of all skill levels to build and scale generative AI applications with the most comprehensive set of capabilities across the three layers of the generative AI stack. At the bottom layer of the stack, we provide infrastructure to train large language models (LLMs) and foundation models (FMs) and produce inferences or predictions. This includes the best NVIDIA GPUs and GPU-optimized software, custom machine learning (ML) chips including AWS Trainium and AWS Inferentia, and Amazon SageMaker, which greatly simplifies the ML development process. In the middle layer, Amazon Bedrock makes it easier for startups to build secure, customized, and responsible generative AI applications using LLMs and other FMs from leading AI companies. And at the top layer of the stack, we have Amazon Q, the most capable generative AI-powered assistant for accelerating software development and leveraging companies’ internal data.
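To make the middle layer concrete, here is a minimal sketch of how an application might shape a request for a Claude model on Amazon Bedrock. The model ID and request fields follow the Anthropic Messages format that Bedrock’s InvokeModel API expects; the helper function and prompt are illustrative, and the actual invocation (shown in comments) assumes boto3 and configured AWS credentials.

```python
import json

# Illustrative model ID; check the Bedrock console for the IDs
# available in your account and region.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_request(prompt: str, max_tokens: int = 512) -> str:
    """Build the JSON body for a bedrock-runtime InvokeModel call
    using the Anthropic Messages request format."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    })

# With boto3 installed and AWS credentials configured, the call would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   resp = client.invoke_model(modelId=MODEL_ID, body=build_request("Hello"))
#   print(json.loads(resp["body"].read())["content"][0]["text"])
```

Because Bedrock exposes many FMs behind one API, swapping providers is largely a matter of changing the model ID and request schema rather than re-architecting the application.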

Customers are innovating using technologies across the stack. For instance, during my time at the VivaTech conference in Paris last month, I sat down with Michael Chen, VP of Strategic Alliances at PolyAI, which offers customized voice AI solutions for enterprises. PolyAI develops natural-sounding text-to-speech models using Amazon SageMaker, builds on Amazon Bedrock to ensure responsible and ethical AI practices, and uses Amazon Connect to integrate its voice AI into customer service operations.

At the bottom layer of the stack, NinjaTech uses Trainium and Inferentia2 chips, along with Amazon SageMaker, to build, train, and scale custom AI agents. From conducting research to scheduling meetings, these AI agents save time and money for NinjaTech’s users by bringing the power of generative AI into their everyday workflows. I recently sat down with Sam Naghshineh, Co-founder and CTO, to discuss how this approach enables them to save time and resources for their users.

Leonardo.AI, a startup from the 2023 AWS Generative AI Accelerator cohort, is also harnessing the capabilities of AWS Inferentia2 to enable artists and professionals to produce high-quality visual assets with unmatched speed and consistency. By reducing their inference costs without sacrificing performance, Leonardo.AI can offer their most advanced generative AI features at a more accessible price point.

Leading generative AI startups, including Perplexity, Hugging Face, AI21 Labs, Articul8, Luma AI, Hippocratic AI, Recursal AI, and DatologyAI are building, training, and deploying their models on Amazon SageMaker. For instance, Hugging Face used Amazon SageMaker HyperPod, a feature that accelerates training by up to 40%, to create new open-source FMs. The automated job recovery feature helps minimize disruptions during the FM training process, saving them hundreds of hours of training time a year.

At the middle layer, Perplexity leverages Amazon Bedrock with Anthropic Claude 3 to build their AI-powered search engine. Bedrock ensures robust data protection, ethical alignment through content filtering, and scalable deployment of Claude 3. Meanwhile, Nexxiot, an innovator in transportation and supply chain solutions, quickly moved its Scope AI assistant to Amazon Bedrock with Anthropic Claude in order to give their customers the best real-time, conversational insights into their transport assets.

At the top layer, Amazon Q Developer helps developers at startups build, test, and deploy applications faster and more efficiently, allowing them to focus their valuable energy on driving innovation. Ancileo, an insurance SaaS provider for insurers, re-insurers, brokers, and affinity partners, uses Amazon Q Developer to reduce the time to resolve coding-related issues by 30%, and is integrating ticketing and documentation with Amazon Q to speed up onboarding and allow anyone in the company to find answers quickly. Amazon Q Business enables everyone at a startup to be more data-driven and make better, faster decisions using the organization’s collective knowledge. Brightcove, a leading provider of cloud video services, deployed Amazon Q Business to streamline their customer support workflow, allowing the team to expedite responses, provide more personalized service, and ultimately enhance the customer experience.

Resources for generative AI startups

The future of generative AI belongs to those who act now. The application window for the AWS Generative AI Accelerator program is open from June 13 to July 19, 2024, and we’ll be selecting a global cohort of the most promising generative AI startups. Don’t miss this unique chance to redefine what’s possible with generative AI, and apply now!

Other helpful resources include:

  • You can use your AWS Activate credits for Amazon Bedrock to experiment with FMs, along with a broad set of capabilities needed to build responsible generative AI applications with security and privacy.
  • Dive deeper by exploring our Generative AI Community space for technical content, insights, and connections with fellow builders. AWS also provides free training to help the current and future workforce take advantage of Amazon’s generative AI tools. For those interested in learning to build with generative AI on AWS, explore the comprehensive Generative AI Learning Plan for Developers to gain the skills you need to create cutting-edge applications.
  • NVIDIA offers NVIDIA Inception, a free program designed to help startups evolve faster through cutting-edge technology, opportunities to connect with venture capitalists, and access to the latest technical resources from NVIDIA.

Apply now, explore the resources, and join the generative AI revolution with AWS.

Additional Resources

Twitch series: Let’s Ship It – with AWS! Generative AI

AWS Generative AI Accelerator Program: Apply now

Swami Sivasubramanian

Dr. Swami Sivasubramanian is the Vice President of AI & Data at AWS. In this role, Swami oversees all AWS AI and Data Services. His team’s mission is to help organizations leverage the power of AI and data to solve their most urgent business needs. Swami’s team innovates across three layers of the AI stack. At the bottom layer sit Amazon SageMaker and optimized deep learning frameworks and engines, for developers and companies that want to build foundation models (FMs). Amazon Bedrock forms the middle layer, for customers seeking to take an existing FM, customize it with their own data, and use features like RAG and Guardrails to build a generative AI application, all as a managed service. Amazon Bedrock, the first managed service of its kind, provides customers with the easiest way to build and scale generative AI applications, with the broadest selection of first-party and third-party FMs as well as leading ease-of-use capabilities that allow builders to get higher-quality model outputs more quickly. The top layer of the stack holds generative AI applications, with Amazon Q being the primary application to call out. Amazon Q is an expert on AWS that writes, debugs, tests, and implements code; performs transformations (like moving from an old version of Java to a new one); and queries customers’ various data repositories (e.g., intranets, wikis, Salesforce, Amazon S3, ServiceNow, Slack, Atlassian) to answer questions, summarize data, carry on coherent conversations, and take action. Q is the most capable work assistant available today and continues to evolve quickly. Because most AI applications rely heavily on data, Swami also leads teams focused on helping customers with data preparation (Amazon EMR, AWS Glue), data catalog and governance (Amazon DataZone), and BI/analytics (Amazon QuickSight).
Since joining Amazon in 2005, Swami has also led the AWS Analytics and Databases portfolio and helped build AWS services including Amazon S3, Amazon CloudFront, Amazon RDS, and Amazon DynamoDB. In September 2023, Swami joined the Amazon senior leadership team, or S-team. Swami has been awarded more than 250 patents, authored 40 refereed scientific papers and journal articles, and participates in several academic circles and conferences. Swami is also a member of the National Artificial Intelligence Advisory Committee, which is tasked with advising the President of the United States and the National AI Initiative Office on topics related to the National AI Initiative.