AWS for Industries

Using generative AI for hyper-personalized telecom billing and subscription experiences on AWS

The quest for hyper-personalization in telecom billing is here to stay

Exceptional customer experiences are non-negotiable for building loyalty and driving revenue growth. However, the billing process remains a sore point that frustrates and alienates a significant portion of the customer base. Unclear or confusing bills erode trust, and as a result a significant share of contact center calls are billing-related.

Despite efforts to minimize billing inquiries with initiatives such as personalized payment options, clearer bill designs, and automated operations, communications service providers (CSPs) are still grappling with an overwhelming volume of billing calls. The root of the problem lies in the lack of true personalization tailored to each customer’s unique billing situation.

For CSPs, billing can be more than just a touchpoint for communicating charges. They are seeking ways to increase revenue through personalized upsell, cross-sell, and right-sizing plans by analyzing individual usage patterns and life events. However, developing these hyper-personalized billing journeys has traditionally necessitated complex coding changes on lengthy cycles, which is impractical for keeping pace with evolving customer needs.

Using generative AI technology presents an opportunity to finally solve this long-standing billing personalization challenge at scale. CSPs can now use generative AI solutions to dynamically generate hyper-personalized chatbots, messaging, and digital experiences for each billing journey. This generative AI-powered personalized billing makes communications relevant to each individual customer's needs, which reduces billing-related calls, builds trust, and fosters loyalty.

AWS democratizes generative AI with a comprehensive solution

AWS is rapidly innovating to provide the most comprehensive set of capabilities across the generative AI stack. It aims to take complex and expensive technology that can transform customer experiences and businesses and democratize that technology for users of all sizes and technical abilities.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies through a single API. It also provides a broad set of capabilities needed to build generative AI applications with security, privacy, and responsible AI.

Amazon Bedrock allows users to experiment with and evaluate top FMs for different use cases, privately customize FMs with their own data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that run tasks using their enterprise systems and data sources.
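As a rough illustration of the single-API model, the sketch below builds a request payload for one FM choice (the Anthropic Claude messages format on Amazon Bedrock); the model ID and parameter values are illustrative, and the actual `invoke_model` call is shown commented out since it requires AWS credentials:

```python
import json

def build_bedrock_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an Anthropic Claude messages payload for Amazon Bedrock.

    The body shape follows the Claude messages format; other FMs
    available through the same Bedrock API expect different bodies.
    """
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

body = json.dumps(build_bedrock_request("Summarize this customer's usage trend."))

# With credentials configured, the same payload goes through one API:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.invoke_model(
#       modelId="anthropic.claude-3-sonnet-20240229-v1:0", body=body)
print(body)
```

Swapping FMs then means changing the `modelId` and the request body, not the integration code.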

Since Amazon Bedrock is serverless, there's no need to manage any infrastructure, and users and partners can securely integrate and rapidly adapt generative AI capabilities in their applications using AWS services.

How CSG and AWS enable hyper-personalization

To illustrate AWS generative AI capability, in this post we show how AWS Partner CSG has used AWS generative AI solutions to enhance the hyper-personalization capability of its Ascendon portfolio. CSG Ascendon is an AWS Cloud-native platform that enables telecommunications, financial services, media, and entertainment companies to manage customer experiences, billing, and revenue streams. It offers a suite of solutions, including customer experience management, billing and revenue management, order management, product and pricing management, and analytics and reporting. Ascendon allows operators to deliver personalized experiences, manage complex billing and revenue streams, and gain insights to drive business growth and customer loyalty. By using Amazon SageMaker and Amazon Bedrock, CSG Ascendon added the capability to detect subscribers' billing behavior and communicate hyper-personalized recommendations targeted at each individual subscriber with the right context.

The following sections illustrate the logical architecture of the solution running on AWS, capturing the high-level flow, and the technical architecture, detailing the integration flow between the different modules and the AWS services used. Finally, we illustrate the benefits of the solution with a practical use case example.

Logical architecture

The solution comprises four AI engines powered by AWS:

https://thinkwithwp.com/sagemaker/

Figure 1. Logical representation of modular AI engines for different stages of task

Analysis engine: This machine learning (ML)-based analytical model is trained using unsupervised learning to detect anomalies in bill and usage data. It identifies overage charge spikes, habitual overage charges, or that no anomalies were detected in the current bill. The output from this stage is used by the other engines. The Analysis engine uses Amazon Bedrock to summarize historical usage, describe recent usage trends and anomalies, and classify the usage type. Amazon Bedrock provides easy access to the latest generative AI innovations with a choice of FMs.
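The post doesn't publish the anomaly model itself, but the classification it describes can be sketched with a simple unsupervised heuristic: a z-score over the customer's historical overage charges. The threshold and the habitual-overage rule below are assumptions for illustration only:

```python
from statistics import mean, stdev

def classify_overage(history: list[float], current: float,
                     threshold: float = 2.0) -> str:
    """Label the current bill's overage charge against the customer's history.

    Returns one of the three Analysis-engine outputs described above:
    an anomalous spike, a habitual overage pattern, or no anomaly.
    """
    if len(history) < 2:
        return "no anomalies detected"
    mu, sigma = mean(history), stdev(history)
    # A charge far above the historical mean is a one-off spike.
    if sigma > 0 and (current - mu) / sigma > threshold:
        return "anomaly overage charge"
    # Overages in at least half of past bills suggest a habitual pattern.
    if mu > 0 and sum(1 for h in history if h > 0) / len(history) >= 0.5:
        return "habitual overage charge"
    return "no anomalies detected"

# A $45 charge after months of near-zero overages:
print(classify_overage([0, 0, 2, 0, 1, 0], 45.0))  # → anomaly overage charge
```

In the actual solution this classification is produced with Amazon Bedrock prompts rather than a hand-coded rule; the sketch only shows the shape of the input and the three output labels.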

Prediction engine: An ML-based classifier model trained using supervised learning predicts customer churn scores, and another model predicts customer lifetime value (CLV). The output from this stage, along with the customer data and anomaly type from the previous stage, is used by the other engines. The Prediction engine uses Amazon SageMaker for an ML-based classifier model that establishes a churn score between 0 and 1. SageMaker brings together a broad set of tools to enable ML use cases in one integrated development environment (IDE).
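To make the 0-to-1 churn score concrete, here is a minimal logistic-classifier sketch. The feature names and weights are invented for illustration; in the solution, an equivalent model is trained in SageMaker from labeled churn data rather than hand-set:

```python
import math

# Illustrative weights only -- a production model learns these from data.
WEIGHTS = {"overage_count": 0.8, "support_calls": 0.5, "tenure_years": -0.4}
BIAS = -1.0

def churn_score(features: dict[str, float]) -> float:
    """Logistic classifier producing a churn score in [0, 1]."""
    z = BIAS + sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A customer with repeated overages and short tenure scores high:
at_risk = churn_score({"overage_count": 3, "support_calls": 2, "tenure_years": 1})
# A long-tenured customer with no overages scores low:
loyal = churn_score({"overage_count": 0, "support_calls": 0, "tenure_years": 10})
print(round(at_risk, 3), round(loyal, 3))  # → 0.881 0.007
```

The score then feeds the Recommendation engine alongside the customer data and the anomaly type from the Analysis stage.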

Recommendation engine: This stage uses a Retrieval Augmented Generation (RAG)-based architecture to generate recommendations using a large language model (LLM). It uses customer data, outputs from the other stages, and a curated catalog knowledge base to generate the best recommendations based on specific criteria, making sure that the proposed plan meets qualification criteria. The Recommendation engine uses Amazon Bedrock with RAG to retrieve from a catalog knowledge base stored in Amazon OpenSearch Service and generate the best recommendation for the customer. It also uses the Amazon Bedrock guardrails feature to make sure that the proposed plan is appropriate for the customer.
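A RAG flow of this kind has two halves: retrieve candidate plans from the catalog, then assemble them with the customer context into an LLM prompt. The sketch below stands in for the real architecture, using an in-memory catalog and keyword overlap where the solution uses Amazon OpenSearch Service, and a plain string prompt where the solution calls Amazon Bedrock (all names and plan data are hypothetical):

```python
def retrieve_plans(catalog: list[dict], query_terms: set[str],
                   top_k: int = 2) -> list[dict]:
    """Rank catalog entries by keyword overlap with the customer's context.

    Stands in for the OpenSearch retrieval step of the RAG pipeline.
    """
    scored = sorted(
        catalog,
        key=lambda plan: len(query_terms & set(plan["tags"])),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(customer_summary: str, plans: list[dict]) -> str:
    """Assemble retrieved plans and customer context into an LLM prompt."""
    context = "\n".join(f"- {p['name']}: {p['description']}" for p in plans)
    return (
        f"Customer situation: {customer_summary}\n"
        f"Eligible plans:\n{context}\n"
        "Recommend the single best plan and explain why."
    )

catalog = [
    {"name": "Unlimited Max", "tags": ["data", "unlimited"],
     "description": "Unlimited high-speed data"},
    {"name": "Family Bundle", "tags": ["family", "lines"],
     "description": "Four lines with shared data"},
]
plans = retrieve_plans(catalog, {"data", "overage"})
print(build_rag_prompt("First-time data overage charge this month.", plans))
```

Grounding the generation in retrieved catalog entries, plus a guardrails check on the output, is what keeps the recommended plan within the qualification criteria rather than letting the LLM invent offers.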

Communication engine: This stage uses LLM prompt fine-tuning to generate personalized emails and customer contexts. It generates emails sent directly to customers, customer context summaries for customer service representatives (CSRs), and personalized AI chatbot assistants for customer care and self-care UIs. The Communication engine uses Amazon Bedrock to generate the personalized email and customer context, Amazon Simple Email Service (Amazon SES) to send email, and Amazon Lex for chat integration with the customer care and self-care UIs.
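The delivery step maps onto the Amazon SES `send_email` API. The sketch below builds that payload with a fixed template body for illustration; in the solution the body text comes from the Amazon Bedrock generation step, and the sender address here is a placeholder:

```python
def build_ses_email(to_address: str, recommendation: str,
                    overage: float) -> dict:
    """Build Amazon SES send_email arguments for a personalized notice.

    The body is a fixed template here; the real engine generates it
    with Amazon Bedrock from the customer's context.
    """
    body = (
        f"We noticed a one-time overage charge of ${overage:.2f} on your bill.\n"
        f"Based on your usage, we recommend: {recommendation}\n"
    )
    return {
        "Source": "billing@example.com",  # placeholder sender address
        "Destination": {"ToAddresses": [to_address]},
        "Message": {
            "Subject": {"Data": "About your recent bill"},
            "Body": {"Text": {"Data": body}},
        },
    }

email = build_ses_email("customer@example.com", "the Unlimited Max plan", 45.0)
# With credentials configured, the payload maps directly onto SES:
#   import boto3
#   boto3.client("ses", region_name="us-east-1").send_email(**email)
print(email["Message"]["Subject"]["Data"])
```

The same generated text can be reused as the customer context summary surfaced to CSRs and seeded into the Amazon Lex chatbot.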

The power of this solution lies in its ability to provide hyper-personalized experiences for each customer’s specific billing journey. By using generative AI, CSPs can dynamically generate customized messages, recommendations, and digital experiences that address individual customer needs, reducing calls, building trust, and fostering loyalty.

Technical architecture

The following diagram and table illustrate the various native AWS services used to realize the different stages (engines) of the solution in addition to Amazon Bedrock and SageMaker. Amazon Bedrock provides access to a diverse range of FMs, enabling flexibility to select the most appropriate FM for the desired outcome in the different stages of the solution through a single interface. Classification and regression ML models available in SageMaker are used to predict the customer churn score and customer lifetime value. These engines are modular and can be reused in different combinations to build the necessary solution. AWS Lambda, Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Queue Service (Amazon SQS) are primarily used for integration between the different engines, thereby providing modularity in building the solution.


Figure 2. AWS services used on different stages of the solution

 

| Module | Tools | Input | Output |
|---|---|---|---|
| 1. Analysis engine | Amazon Bedrock with zero-shot and one-shot prompts | Past usage and billing data | Anomaly overage charge; habitual overage charge; no anomalies detected |
| 2. Prediction engine | Amazon SageMaker ML-based classifier model | Feature list | Churn score, predicted CLV |
| 3. Recommendation engine | Amazon Bedrock with RAG from OpenSearch as the knowledge base | Customer data and output from previous engines | Recommended plan with explanation text |
| 4. Communication engine | Amazon Bedrock, Amazon Lex, Amazon SES, OpenSearch | Recommendation output | Recommendation email to subscriber; recommendation text to self-care; recommendation text to customer care |

A practical example (use case)

Imagine a customer receiving an overage charge for the first time. The Analysis engine detects this anomaly, and the Prediction engine calculates the customer’s churn score based on their profile, current products, customer journey, and bill and usage data history. Then, the Recommendation engine generates a personalized recommendation, such as upgrading to a higher data plan or bundling additional services, based on the customer’s needs and the curated knowledge base.

The Communication engine then generates a personalized email explaining the overage charge, the recommendation, and its rationale. Furthermore, it creates a customer context summary for CSRs, providing them with a comprehensive understanding of the customer’s situation, enabling them to provide more personalized and effective support.

Moreover, the personalized AI chatbot assistants for customer care and self-care UIs are initialized with the customer context, allowing customers to engage with a tailored experience that addresses their specific concerns and needs.

This level of personalization and proactive communication reduces the likelihood of customers calling the call center and fosters a sense of trust and loyalty. Customers feel understood and valued as the telecom company demonstrates a deep understanding of their unique circumstances and provides tailored solutions to address their needs.

Lastly, the solution’s agility allows CSPs to adapt quickly to changing customer behavior and new product/bundle offerings, making sure that personalized messaging remains relevant and effective.

Summary

In a world where customer experience is paramount, this solution from CSG Ascendon combined with AWS generative AI capabilities represents a game-changer for CSPs. By optimizing the billing experience and providing hyper-personalized solutions, CSPs can reduce call center contacts, build trust, drive digital engagement, and foster long-lasting customer relationships, ultimately driving growth and success in an increasingly competitive market.

Rabindra Shakya

Rabindra Shakya is a Senior Solutions Architect at Amazon Web Services on the Telecommunications team. He is based out of Redmond, WA and helps AWS Telco customers find optimal solutions on AWS. He specializes in telecommunications 5G, BSS, OSS, and analytics, and is passionate about applying evolving technologies such as generative AI and machine learning to continuously improve and simplify the complexity of the telecommunications domain.

John Fendley

John Fendley is Vice President of Product Management leading the evolution of cloud-native solutions at CSG. He leads the development of Ascendon, CSG's innovative digital monetization platform running on AWS. John has been in technical leadership roles within telecommunications, cable, and media for almost 30 years and lives in Denver, Colorado.

Visu Sontam

Visu Sontam is a Sr. Partner Solutions Architect in the AWS Worldwide Global Telecom Partner Alliance team, specializing in BSS, OSS, and Network Analytics, working with various global telecom carriers and partners.