AWS Cloud Enterprise Strategy Blog

Centralizing or Decentralizing Generative AI? The Answer: Both


Introduction

For business and IT decision-makers, the question is no longer whether to adopt generative AI but how to structure its implementation for maximum impact and minimum risk. Whether to centralize or decentralize the management and deployment of generative AI capabilities is a key strategic decision with long-term implications.

As highlighted in the blog post “Centralize or Decentralize?” organizations must weigh the trade-offs between centralization and decentralization when implementing transformative technologies like generative AI. Centralization can provide enterprise-wide governance, economies of scale, and unified data management, while decentralization may enable faster innovation and closer alignment with business needs.

We recommend a nuanced approach: a hybrid model that leverages the strengths of both strategies—centralizing the foundations while decentralizing innovation. This strategy combines robust governance with agile delivery and positions your organization to maximize generative AI’s impact.

Determining the Business Need

Instead of focusing on generative AI’s technical layers, identify the high-impact areas where it can drive value and competitive advantage:

  • Customer service: Enhance support while reducing costs with AI-powered chatbots.
  • Marketing: Leverage AI for personalized content creation at scale.
  • Product development: Generate design concepts and simulations with AI.
  • Pharmaceuticals: Accelerate drug discovery by exploring molecular structures with AI.
  • Financial services: Use AI for risk assessment, fraud detection, and personalized advice.
  • Software development: Increase productivity through AI-assisted coding and bug detection.
  • Supply chain: Optimize with AI-driven predictive analytics and logistics planning.
  • HR: Streamline recruitment processes using AI for candidate screening and matching.

Key Questions

  1. What are the critical business problems or opportunities that generative AI could address?
  2. Which organizational domains or functions would benefit the most from generative AI?
  3. What unique data assets and domain expertise can the organization leverage to create differentiated AI solutions?

By clearly defining business needs and use cases upfront, organizations can determine the most appropriate organizational structure and operating model to support the deployment and governance of generative AI.

The Hybrid Approach: The Best of Both Worlds

What is the best organizational structure for generative AI? As AWS Enterprise Strategists, we are inspired by how finance and HR teams can (a) maximize the impact of their resources, (b) be responsive to business demands, and (c) establish guardrails and common ways of working.

They have centralized teams that bring best practices and knowledge to these domains for the whole business—but everyone is expected to manage people and finances.

AI will similarly permeate every aspect of business. It will require skills and knowledge at the front lines, such as the ability to assess the appropriateness of model outputs.

Centralizing the Foundation Layer

Centralizing AI infrastructure enables organizations to efficiently manage the complex, resource-intensive processes of training, fine-tuning, and developing proprietary AI models while achieving economies of scale. This consolidation streamlines data management, analytics, and model maintenance, reducing costs and complexity across the enterprise.

Centralization ensures consistent data quality, security, and compliance standards—critical factors for successfully developing and deploying reliable generative AI models. By unifying these resources, organizations can more effectively navigate the challenges of implementing AI technology while maximizing its potential benefits.

A specialized data team typically manages this centralized foundation and provides guidance, training, tools, and governance to the rest of the organization. They bring advanced AI/ML skills to the table, ensuring that the organization’s generative AI capabilities are built on a solid foundation.

Decentralizing AI Innovation across Business Domains

While the foundational aspects of generative AI benefit from centralization, innovation thrives in a decentralized environment. A distributed approach accommodates the diversity of AI use cases across business domains—from summarizing legal texts to analyzing financial data to designing in R&D and creating marketing content. These applications require not only different underlying models but also different customizations, fine-tuning, quality control measures, user interface designs, and integration with existing applications and business processes.

This diversity and individuality of use cases make a centralized model less efficient, as it struggles to meet each department’s unique needs and rapid innovation cycles. A data mesh (a model that decentralizes data and AI) aligns well with the needs of the business domains.

As in finance and HR, centralized teams provide best practices, but each part of the organization develops its own capabilities. For generative AI this means empowering teams across the organization to evaluate model results, integrate AI into workflows, and drive innovation from the ground up.

With data mesh, domain-specific teams take ownership of their AI applications. Because these teams are closest to business challenges and opportunities, they are best positioned to identify and implement high-impact AI use cases. They can rapidly prototype, test, and iterate AI solutions, which both accelerates development and deployment and ensures close alignment with each department’s specific operational context and strategic goals.

Maintaining Effective Governance in a Decentralized Model

While decentralization supports faster innovation and closer alignment with specific business needs, it is crucial to maintain effective governance and oversight to ensure consistency, quality, and compliance across the organization.

A few key strategies help achieve this:

Centralized Platform and Tooling: Provide a centralized platform that offers a standardized set of tools, models, and APIs for domain teams to leverage when building and deploying generative AI solutions. This ensures a baseline level of quality, security, and compliance.
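As a concrete illustration, a central platform team might expose a thin client that every domain team calls instead of invoking models directly, enforcing an approved-model allow-list and shared guardrail defaults in one place. This is a minimal sketch; the model IDs, the `GovernancePolicy` fields, and the stubbed backend call are illustrative assumptions, not a real AWS API.

```python
# Hypothetical sketch of a central platform client. Model IDs, policy
# fields, and the backend stub are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class GovernancePolicy:
    # Only models vetted by the central team may be invoked.
    approved_models: set = field(
        default_factory=lambda: {"text-gen-v1", "summarizer-v2"}
    )
    max_output_tokens: int = 1024  # shared guardrail applied to every call


class CentralPlatformClient:
    """Single entry point domain teams use instead of calling models directly."""

    def __init__(self, policy: GovernancePolicy):
        self.policy = policy

    def invoke(self, model_id: str, prompt: str) -> str:
        if model_id not in self.policy.approved_models:
            raise PermissionError(f"Model '{model_id}' is not on the approved list")
        # Compliance defaults are injected centrally, not per team.
        return self._call_backend(model_id, prompt, self.policy.max_output_tokens)

    def _call_backend(self, model_id: str, prompt: str, max_tokens: int) -> str:
        # Stub standing in for the real foundation-model endpoint.
        return f"[{model_id}] response to: {prompt[:40]}"


client = CentralPlatformClient(GovernancePolicy())
print(client.invoke("text-gen-v1", "Summarize this contract clause."))
```

Because the allow-list and token limits live in the shared client, the central team can tighten or relax guardrails for the whole organization without touching any domain team’s code.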

Shared Responsibility Model: Establish a shared responsibility model where the central data science and engineering team sets the standards, guidelines, and best practices while the domain teams customize and apply these within their specific contexts.

Governance Councils: Form cross-functional governance councils that bring together representatives from the central team and domain teams to review and approve the deployment of generative AI solutions. This helps maintain strategic alignment and consistent risk management.

Centralized Monitoring and Auditing: Implement centralized monitoring and auditing to track the performance, usage, and compliance of generative AI applications across the organization.
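One lightweight way to realize this is to have every domain application emit a standard audit record to a central sink, so usage and compliance can be aggregated uniformly across teams. The event schema and in-memory sink below are assumptions for illustration; in practice the sink would be a managed log or event stream.

```python
# Minimal sketch of centralized audit logging for generative AI calls.
# The event fields and in-memory AUDIT_SINK are illustrative assumptions.
import json
import time
from collections import Counter

AUDIT_SINK = []  # stand-in for a central log store or event stream


def audit_event(team: str, model_id: str, use_case: str, latency_ms: float):
    """Record one AI invocation in a standard, organization-wide format."""
    event = {
        "ts": time.time(),
        "team": team,
        "model_id": model_id,
        "use_case": use_case,
        "latency_ms": latency_ms,
    }
    AUDIT_SINK.append(json.dumps(event))


def usage_by_team() -> Counter:
    """Central view: how many calls each domain team has made."""
    return Counter(json.loads(e)["team"] for e in AUDIT_SINK)


# Domain teams emit events from their own applications...
audit_event("marketing", "text-gen-v1", "campaign-copy", 120.5)
audit_event("legal", "summarizer-v2", "contract-summary", 340.0)
audit_event("marketing", "text-gen-v1", "campaign-copy", 98.2)

# ...while the central team audits usage across the whole organization.
print(usage_by_team())
```

Because every team writes the same event shape to the same sink, compliance reviews and performance tracking need only one query path rather than one per department.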

Knowledge Sharing and Collaboration: Foster a culture of knowledge sharing and collaboration between the central team and domain teams, facilitating the exchange of insights, methodologies, and lessons learned. This helps ensure consistent quality and the adoption of best practices.

Domain Teams Thrive with Central Platform Support

Decentralization doesn’t mean isolation. Domain teams still benefit from centralized data science support that provides guidance, training, tools, and governance, ensuring access to the latest methodologies and technologies while maintaining controls and standards. This centralized expertise typically comes from the team responsible for training proprietary models, which acts as a platform team.

The blog post “Responsible AI Best Practices: Promoting Responsible and Trustworthy AI Systems” discusses how to maintain fairness, transparency, and accountability across the entire generative AI life cycle. This is crucial when deploying generative AI solutions in a distributed, domain-specific manner, as it ensures that the solutions are aligned with the organization’s ethical principles and do not perpetuate biases or cause unintended harm.

Conclusion

The future of generative AI implementation lies in strategically balancing centralization and decentralization.

A centralized foundation provides the bedrock of security, scalability, and compliance that is nonnegotiable in today’s regulatory landscape. A decentralized execution layer empowers domain experts to rapidly innovate and deploy AI solutions tailored to specific business needs. This hybrid model offers a powerful strategic advantage, enabling organizations to maintain control while fostering agility. By centralizing core infrastructure and decentralizing application development, companies can navigate the complexities of AI adoption while maximizing its transformative potential.

To thrive in the AI-driven future, organizations must act now to develop a nuanced strategy that leverages both centralized and decentralized elements, positioning themselves at the forefront of innovation while ensuring robust governance and scalability.

—Matthias Patzak and Tom Godden

Links:

Centralize or Decentralize? – by Mark Schwartz
Welcome to a New Era of Building in the Cloud with Generative AI on AWS – by Swami Sivasubramanian
Data Lakes vs. Data Mesh: Navigating the Future of Organizational Data Strategies – by Matthias Patzak
How Technology Leaders Can Prepare for Generative AI – by Phil Le-Brun
Your AI is Only as Good as Your Data – by Tom Godden
Navigating the Generative AI Landscape: A Strategic Blueprint for CEOs and CIOs – by Tom Godden
Data Lakes on AWS
What is a Data Mesh?

Matthias Patzak

Matthias joined the Enterprise Strategist team in early 2023 after a stint as a Principal Advisor in AWS Solutions Architecture. In this role, Matthias works with executive teams on how the cloud can help increase the speed of innovation, the efficiency of their IT, and the business value their technology generates from a people, process, and technology perspective. Before joining AWS, Matthias was Vice President of IT at AutoScout24 and Managing Director at Home Shopping Europe. In both companies he introduced lean-agile operating models at scale and led successful cloud transformations, resulting in shorter delivery times, increased business value, and higher company valuations.

Tom Godden

Tom Godden is an Enterprise Strategist and Evangelist at Amazon Web Services (AWS). Prior to AWS, Tom was the Chief Information Officer at Foundation Medicine, where he helped build the world’s leading FDA-regulated cancer genomics diagnostic, research, and patient outcomes platform to improve outcomes and inform next-generation precision medicine. Previously, Tom held multiple senior technology leadership roles at Wolters Kluwer in Alphen aan den Rijn, Netherlands, and has over 17 years of experience in the healthcare and life sciences industry. Tom has a bachelor’s degree from Arizona State University.