Artificial intelligence is no longer an experimental workload on AWS; it is rapidly becoming a core part of production architectures. From generative AI applications to large-scale machine learning pipelines, architects are now expected to design AI systems that are not only powerful, but also secure, reliable, cost-efficient, and responsible.
At AWS re:Invent 2025, AWS expanded its AI guidance within the Well-Architected Framework with one new lens and major updates to two existing ones, all designed specifically for AI workloads: the Responsible AI Lens (new), the Machine Learning (ML) Lens, and the Generative AI Lens. Together, these lenses offer practical, end-to-end architectural guidance for organizations at every stage of their AI journey, whether teams are just beginning to explore machine learning or operating complex, production-grade AI systems at scale.
The AWS Well-Architected Framework itself defines proven architectural best practices for building and operating workloads in the cloud that are secure, reliable, performance-efficient, cost-optimized, and sustainable. By extending the framework with AI-focused lenses, AWS enables architects to apply these core principles to the unique challenges and considerations of modern AI and machine learning workloads.
The Responsible AI Lens: Designing AI Systems with Trust, Fairness, and Transparency
The Responsible AI Lens provides a structured approach for evaluating, tracking, and continuously improving AI workloads against established best practices. It enables architects and developers to identify gaps in their AI implementations and offers actionable guidance to improve system quality while aligning with responsible AI principles. By applying the lens, organizations can make informed architectural decisions that balance business objectives with technical requirements, accelerating the transition from AI experimentation to production-ready, trusted solutions.
Key Takeaways from the Responsible AI Lens:
Every AI system carries responsible AI considerations:
All AI systems carry responsible AI implications, whether or not they were designed with them in mind. These considerations must be actively addressed throughout the system lifecycle rather than left to chance.
AI systems may be used beyond their original intent:
Applications are often adopted in ways developers did not initially anticipate. Combined with the probabilistic nature of AI, this can lead to unexpected outcomes—even within intended use cases—making early and deliberate responsible AI decisions essential.
Responsible AI enables innovation and builds trust:
Rather than limiting progress, responsible AI practices act as a catalyst for innovation by establishing stakeholder confidence, strengthening customer trust, and reducing long-term operational and reputational risks.
The Responsible AI Lens serves as the foundational guidance for AI development on AWS, providing core principles that inform and support both the Machine Learning Lens and the Generative AI Lens implementations.
The Machine Learning Lens: Building Strong ML Foundations on AWS
The Machine Learning Lens acts as a practical foundation for teams designing and running ML workloads on AWS. It brings together proven, cloud-agnostic best practices mapped to the Well-Architected Framework pillars, covering every stage of the ML lifecycle. Whether you’re experimenting with your first model or operating complex AI systems in production, the updated ML Lens provides a consistent way to think about architecture, operations, and scale.
Since the ML Lens's initial release in 2023, AWS's ML ecosystem has evolved significantly, and the updated lens reflects that progress. It incorporates modern tooling and services that help teams move faster, collaborate better, and operate ML workloads more efficiently and responsibly.
What’s new in the updated Machine Learning Lens:
- Streamlined collaboration between data and AI teams using Amazon SageMaker Unified Studio
- AI-assisted development to boost developer productivity with Amazon Q
- Scalable, distributed training for foundation models and fine-tuning using Amazon SageMaker HyperPod
- Flexible model customization, including fine-tuning and knowledge distillation, using Amazon Bedrock, Kiro, and Amazon Q Developer
- No-code ML workflows with Amazon SageMaker Canvas, now enhanced with Amazon Q
- Stronger bias detection and responsible AI practices with improved fairness metrics in Amazon SageMaker Clarify (a bias-detection sketch appears at the end of this section)
- Faster access to business insights through automated dashboards in Amazon QuickSight
- Modular inference architectures that simplify deployment and scaling using Inference Components
- Deeper observability with improved debugging and monitoring across the ML lifecycle
- Better cost control through SageMaker Training Plans, Savings Plans, and Spot Instances (see the sketch just after this list)
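To make the Spot-based cost control concrete, here is a minimal sketch of a SageMaker managed Spot training job using the SageMaker Python SDK. The container image, IAM role, and S3 paths are placeholders you would replace with your own; the lens itself does not prescribe this exact setup.

```python
from sagemaker.estimator import Estimator

# Managed Spot Training: the job runs on spare capacity at a discount,
# and checkpoints let it resume if Spot instances are reclaimed.
estimator = Estimator(
    image_uri="<your-training-image-uri>",                  # placeholder training container
    role="arn:aws:iam::123456789012:role/SageMakerRole",    # placeholder IAM role
    instance_count=1,
    instance_type="ml.g5.xlarge",
    use_spot_instances=True,            # request Spot capacity instead of On-Demand
    max_run=3600,                       # cap on actual training time (seconds)
    max_wait=7200,                      # total time incl. Spot interruptions; must be >= max_run
    checkpoint_s3_uri="s3://my-bucket/checkpoints/",        # placeholder checkpoint location
)

estimator.fit({"training": "s3://my-bucket/train/"})        # placeholder training data channel
```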
One of the strengths of the ML Lens is its flexibility. You can apply it early during architecture design or use it later to review and improve existing production workloads. Regardless of where you are in your cloud or ML journey, the ML Lens—powered by services like Amazon SageMaker Unified Studio, Amazon Q, Amazon SageMaker HyperPod, and Amazon Bedrock—helps teams build ML systems that are scalable, efficient, and ready for production.
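The fairness-metric improvements called out in the list above center on SageMaker Clarify. As a rough illustration of what a pre-training bias check looks like, the sketch below uses the SageMaker Python SDK's clarify module; the dataset path, column names, and facet are hypothetical, and the lens does not mandate these particular metrics.

```python
from sagemaker import Session
from sagemaker.clarify import BiasConfig, DataConfig, SageMakerClarifyProcessor

session = Session()

processor = SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Describe the training data and where to write the analysis report (placeholder paths/columns).
data_config = DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",
    s3_output_path="s3://my-bucket/clarify-report/",
    label="approved",
    headers=["age", "income", "approved"],
    dataset_type="text/csv",
)

# Measure bias with respect to a sensitive attribute; "age" over 40 is purely illustrative.
bias_config = BiasConfig(
    label_values_or_threshold=[1],
    facet_name="age",
    facet_values_or_threshold=[40],
)

# Run pre-training bias metrics such as class imbalance (CI) and
# difference in proportions of labels (DPL) on the raw dataset.
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],
)
```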
The Generative AI Lens: Practical Architecture Guidance for Foundation Models
The Generative AI Lens helps architects and builders take a structured, repeatable approach to designing systems that use large language models (LLMs) and other foundation models to deliver real business value. It focuses on the architectural decisions teams face most often when building generative AI applications—such as choosing the right model, designing effective prompts, customizing models, integrating workloads, and continuously improving system performance.
Unlike the broader Machine Learning Lens, which applies across the entire ML spectrum, the Generative AI Lens zooms in on the unique requirements of foundation models and generative AI workloads. It distills best practices drawn from AWS’s experience working with thousands of customers and aligns them with the Well-Architected Framework, helping teams move from experimentation to production with confidence.
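Many of those recurring decisions (model choice, prompt design, inference settings) surface even in the simplest integration. As a minimal sketch, the snippet below calls a foundation model through Amazon Bedrock's Converse API with boto3; the model ID, prompt, and inference parameters are illustrative stand-ins rather than recommendations from the lens.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",   # illustrative model choice
    system=[{"text": "You are a concise assistant for an internal support team."}],
    messages=[
        {"role": "user", "content": [{"text": "Summarize this week's open tickets in three bullets."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},  # illustrative settings
)

# The Converse API returns the assistant message as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```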
What’s new in the updated Generative AI Lens:
- Expanded guidance on orchestrating complex, long-running generative AI workflows using Amazon SageMaker HyperPod
- A stronger Responsible AI foundation, including a detailed breakdown of AWS’s eight core Responsible AI dimensions
- A new agentic AI preamble introducing architectural patterns for building AI agents and multi-step reasoning systems (a minimal tool-use sketch follows at the end of this section)
By building on the foundation provided by the ML Lens, the Generative AI Lens offers focused, practical guidance for teams tackling the distinct challenges—and opportunities—of generative AI and foundation model–based applications on AWS.
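To give a flavor of the agentic patterns the preamble introduces, here is a deliberately small sketch of a single tool-use round trip built on Bedrock's Converse API. The tool name, schema, and lookup logic are hypothetical, and real agents typically add planning loops, memory, and guardrails on top of this skeleton.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # illustrative model choice

# One hypothetical tool the model may call.
tool_config = {
    "tools": [{
        "toolSpec": {
            "name": "get_order_status",
            "description": "Look up the shipping status of an order by ID.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"],
            }},
        }
    }]
}

messages = [{"role": "user", "content": [{"text": "Where is order 42?"}]}]
response = bedrock.converse(modelId=MODEL_ID, messages=messages, toolConfig=tool_config)

# If the model asks to use the tool, run it and return the result for a final answer.
if response["stopReason"] == "tool_use":
    messages.append(response["output"]["message"])
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            tool_use = block["toolUse"]
            result = {"order_id": tool_use["input"]["order_id"], "status": "shipped"}  # stand-in lookup
            messages.append({
                "role": "user",
                "content": [{"toolResult": {
                    "toolUseId": tool_use["toolUseId"],
                    "content": [{"json": result}],
                }}],
            })
    response = bedrock.converse(modelId=MODEL_ID, messages=messages, toolConfig=tool_config)

print(response["output"]["message"]["content"][0]["text"])
```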
Implementing Well-Architected AI/ML Guidance
The three AI-focused lenses (Responsible AI, Machine Learning, and Generative AI) are designed to work together as a single, cohesive guidance model rather than as standalone frameworks. Each lens plays a specific role, but together they help teams build AI systems that are production-ready, trustworthy, and scalable.
The Responsible AI Lens sets the baseline by focusing on safe, fair, and secure AI development. It helps teams balance business goals with technical and ethical requirements, making it easier to move from proof-of-concept experiments into production. The Machine Learning Lens then provides broader guidance across both traditional ML and modern AI workloads, with recent updates that improve collaboration between data and AI teams, introduce AI-assisted development, support large-scale infrastructure provisioning, and enable more flexible model deployment. On top of this foundation, the Generative AI Lens focuses specifically on LLM-based architectures, with new guidance for Amazon SageMaker HyperPod, emerging agentic AI patterns, and updated architectural scenarios for common generative AI applications.
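In practice, teams often apply these lenses through the AWS Well-Architected Tool by attaching them to a workload and working through the resulting review questions. The sketch below shows roughly what that looks like with boto3; the workload details are placeholders, and lens aliases other than "wellarchitected" vary by catalog, so check list_lenses in your account for the exact values.

```python
import boto3

wa = boto3.client("wellarchitected")

# Register a workload and attach the base framework plus an AI-focused lens.
# "genai-lens" is a placeholder alias; discover real aliases with wa.list_lenses().
workload = wa.create_workload(
    WorkloadName="support-assistant-genai",                     # placeholder name
    Description="RAG-based support assistant built on Amazon Bedrock",
    Environment="PREPRODUCTION",
    AwsRegions=["us-east-1"],
    Lenses=["wellarchitected", "genai-lens"],
)

# Pull the improvement items surfaced by the lens review, with their risk levels.
improvements = wa.list_lens_review_improvements(
    WorkloadId=workload["WorkloadId"],
    LensAlias="genai-lens",                                     # placeholder alias
)
for item in improvements.get("ImprovementSummaries", []):
    print(f'{item["QuestionTitle"]}: {item.get("Risk", "UNANSWERED")}')
```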
What’s Next?
With the launch of these lenses at re:Invent 2025, AWS gives organizations a clear path to building AI systems that are not just powerful, but also responsible and trustworthy. By covering the full range of AI workloads—from traditional ML to generative AI—these lenses help teams accelerate innovation while maintaining strong architectural and responsible AI standards.