IBM Rebrands Itself as "Control Tower" for AI Workloads, Emphasizes Need for Tailored AI Modelling

Published: 02 Jul 2025
IBM Virtual Transform Event reveals a changing landscape in enterprise AI usage, pushing multiple models over proprietary ecosystems.

Over the past century, IBM has watched numerous technological trends wax and wane. As a longtime keeper of enterprise customers' trust and business, IBM has found that technologies offering choice are the ones that persist. Armand Ruiz, VP of the AI Platform at IBM, shed light on the evolving dynamics of generative AI and how IBM's enterprise customers are putting it to use.

Breaking from the norm, Ruiz emphasized that it is no longer about committing to a single large language model (LLM) provider or technology. Enterprise customers are increasingly rejecting single-vendor AI strategies and instead embracing a multi-model approach that pairs specific LLMs with specific use cases. This behavior is leading IBM to position itself not as a foundation model competitor, but as a 'control tower' for AI workloads. 'When I sit in front of a customer, they're employing everything they have access to. For coding, they adore Anthropic, and for reasoning, they prefer o3. For LLM customization, they might choose our own Granite series, Mistral with their compact models, or even Llama,' Ruiz explained, describing how IBM matches the LLM to the right use case and assists customers in making those choices.

IBM's response to this market reality is the introduction of a newly developed model gateway. This platform gives enterprises a single API through which they can switch among different LLMs while maintaining observability and governance across all deployments. The technical architecture lets customers run open-source models on their own stack for sensitive applications while simultaneously accessing public APIs such as AWS Bedrock or Google Cloud's Gemini for less critical usage. This strategy directly counters the common vendor tactic of locking customers into a proprietary ecosystem.
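The gateway idea described above can be illustrated with a minimal sketch. The class and model names below are hypothetical stand-ins, not IBM's actual gateway API: the point is the single entry point that routes each use case to a chosen backend while a shared hook records the model choice and whether data stayed on-premises.

```python
# Illustrative sketch of a multi-model gateway: one call interface that
# routes requests to different LLM backends. All names are hypothetical,
# not IBM's actual gateway API.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Route:
    handler: Callable[[str], str]  # backend invocation (stubbed here)
    on_prem: bool                  # run on the customer's own stack?


class ModelGateway:
    def __init__(self) -> None:
        self._routes: Dict[str, Route] = {}
        self.audit_log: list = []

    def register(self, use_case: str, handler: Callable[[str], str],
                 on_prem: bool = False) -> None:
        self._routes[use_case] = Route(handler, on_prem)

    def complete(self, use_case: str, prompt: str) -> str:
        route = self._routes[use_case]
        # Single entry point: governance/observability hooks live here,
        # e.g. recording which model ran and whether data left the stack.
        self.audit_log.append({"use_case": use_case, "on_prem": route.on_prem})
        return route.handler(prompt)


# Stub backends standing in for real model APIs.
gateway = ModelGateway()
gateway.register("coding", lambda p: f"claude:{p}")
gateway.register("reasoning", lambda p: f"o3:{p}")
gateway.register("sensitive", lambda p: f"granite:{p}", on_prem=True)

print(gateway.complete("sensitive", "summarize contract"))
```

Swapping the model behind a use case changes only the registration line, not the calling code, which is the lock-in-avoidance property the article describes.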

In addition to multi-model management, IBM is also addressing the emerging challenge of agent-to-agent communication. The company has developed an Agent Communication Protocol (ACP) and, to keep it open source, has contributed it to the Linux Foundation. This parallels Google's A2A protocol, which was also recently contributed to the Linux Foundation with the same aim. Both protocols are meant to streamline communication between agents and reduce the need for custom development work. Without such standardized protocols, each agent-to-agent interaction would require bespoke development, creating an unsustainable integration burden.
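The integration-burden argument can be made concrete with a toy example. The envelope schema below is purely illustrative, not the actual ACP or A2A wire format: the point is that once agents agree on common fields, any agent can dispatch on a message's intent without per-pair glue code.

```python
# Hypothetical agent-to-agent message envelope (illustrative schema only,
# not the real ACP or A2A wire format).
import json


def make_message(sender: str, recipient: str, intent: str, payload: dict) -> str:
    """Serialize a message using a shared, agreed-upon set of fields."""
    return json.dumps({
        "sender": sender,
        "recipient": recipient,
        "intent": intent,
        "payload": payload,
    })


def handle(raw: str) -> str:
    """Any compliant agent parses the same fields and dispatches on intent."""
    msg = json.loads(raw)
    return f"{msg['recipient']} handling '{msg['intent']}' from {msg['sender']}"


raw = make_message("planner", "coder", "generate_tests", {"file": "app.py"})
print(handle(raw))
```

With N agents, a shared envelope means each agent implements one parser instead of up to N-1 bespoke integrations, which is the scaling problem both protocols target.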

In summary, Ruiz views AI's current impact on enterprises as a potent force for transforming workflows and revolutionizing how work gets done. Enterprise AI strategies need to evolve accordingly, and the freedom to mix and match the best available models will be key to staying competitive in the modern AI-powered business landscape.