Red Hat applies its stack — from Red Hat Enterprise Linux to OpenShift to its AI inference, agentic, and fine-tuning capabilities — to support both predictive and generative AI (genAI) development and deployment. Red Hat AI is a portfolio intended to centralize AI monitoring, management, and tooling across the entire model lifecycle, from data ingestion through training to ongoing management. The portfolio includes an enterprise offering for organizations looking to deploy and scale efficiently anywhere, Red Hat AI Inference Server for optimized LLM inference, Red Hat OpenShift AI for distributed Kubernetes platform environments, and Red Hat Enterprise Linux AI for individual Linux server environments.