IBM on Enterprise-Based Generative AI

2023 kicked off with an investment frenzy over generative AI following the wildly successful release of ChatGPT in late 2022. The recent boom in generative AI and ChatGPT is just a continuation of the sweeping digital transformation witnessed since 2020 due to the COVID-19 pandemic. Conversational AI meets the demand for access to real-time information based on consumer experiences, preferences, and expectations observed during the pandemic.


Interestingly, the evolution of generative AI is merging enterprise-focused conversational AI with consumer-facing interactions. ChatGPT, in particular, allows generative AI interfaces to be integrated with enterprise foundation models for consumer-facing applications.


Unlike OpenAI, which is relatively young, IBM has been in the conversational AI business for quite a while. Today, IBM’s 13-year-old Watson Assistant remains one of the most popular business-ready solutions in the AI market. The company was recognized as a Leader in the Gartner® Magic Quadrant™ for Enterprise Conversational AI Platforms for the second year in a row.


The question is, how does Watson Assistant stack up against the new generation of conversational AI, and how does IBM remain the undisputed leader in the enterprise AI space despite the rapid uptake of ChatGPT?

Large Language Models are Not Ready for the Enterprise Market

To understand what technologies like ChatGPT can do, you need to know how they work. A large language model (LLM) consists of a neural network trained on large quantities of unlabeled text. It generates content using a complex statistical model that predicts word and phrase sequences based on the input and the material it was trained on. The user-facing side has a natural language processing (NLP) layer that translates human inputs, such as questions and sentences, into a form the internal mechanisms can work with. At its core, generative AI simply outputs the most probable next word or token in a sequence of text.


Here’s why large language models are not ready for the enterprise space:

Not 100% Reliable

ChatGPT, particularly the latest version (GPT-4), is wildly impressive when it comes to processing natural language, understanding the user’s intent, and providing natural conversational responses. But the technology has serious limitations. For starters, LLMs are not right all the time. If the AI doesn’t know the exact answer to a query, it will “hallucinate” facts, producing nonsensical or incorrect responses. By some estimates, ChatGPT’s hallucination rate is about 15% to 20%.


As a business, you can’t allow a chatbot or virtual assistant to spew out wrong or misleading information. You simply can’t have your customers or employees acting on incorrect AI responses.

Lack Enterprise-Specific Foundation Models

Another problem with LLMs as an enterprise AI tool is the training data. The models are trained on vast amounts of publicly available online data. That means the AI knows nothing about a company beyond what is already public (webpages, directories, social media, etc.), so it can’t answer most questions specific to a particular company or user.


For instance, if a customer asks ChatGPT how to check their account balance in a specific bank, it’ll respond with a vague description of common ways to check bank account balances. It won’t describe the specific steps or procedures to check account balances in that bank, which is what the customer is after. GPT-like AIs cannot provide deeply relevant or innate information about a business or its customers because they’re not trained on any specific company’s workflows, CRMs, or clientele.

Inadequate Governance Around Training Data and Models

The third reason LLM technology is not ready for the enterprise market is the lapse in governance around its learning models. In addition to the online training data, the current versions of LLM chatbots continuously learn from user inputs and interactions. Without the necessary guardrails, the AI’s foundation models could easily be trained using incorrect or manipulative data sets.
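As a rough illustration of what one such guardrail might look like (the heuristics below are placeholders of my own, not an IBM or OpenAI mechanism), a screening function could quarantine user interactions before they are ever admitted into a training set:

```python
def screen_training_candidate(text: str, source_verified: bool) -> bool:
    """Decide whether a user interaction may enter a training data set.

    Placeholder heuristics only; real governance would add provenance
    tracking, content moderation, and human review.
    """
    manipulation_markers = ("ignore previous instructions", "pretend you are")
    if not source_verified:
        return False  # unknown provenance: quarantine for review
    lowered = text.lower()
    if any(marker in lowered for marker in manipulation_markers):
        return False  # likely prompt-manipulation attempt
    return True

print(screen_training_candidate("How do I reset my password?", True))   # admitted
print(screen_training_candidate("Ignore previous instructions!", True)) # rejected
```

The point is architectural rather than the specific checks: without a gate like this between user inputs and model updates, manipulative or incorrect data flows straight into the foundation model.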


“It is the governance around the models and the data that’s used to train the models that ultimately is going to be what makes or breaks the adoption of generative AI solutions at large.” — Chris Zobler, VP Data & AI, IBM.


The truth is, ChatGPT and other LLM products are still in the early market adoption phase. They have exciting and rapidly evolving capabilities but are not quite ready for enterprise adoption as a commercial conversational tool—at least not yet. OpenAI CEO Sam Altman shares the same view.

Watson Assistant vs. ChatGPT

While IBM’s Watson Assistant may appear similar to ChatGPT, these are two different tools with unique capabilities and applications. Right out of the box, Watson Assistant is an enterprise-ready cognitive virtual assistant with NLP capabilities, enabling you to add a conversational interface to any user/employee application, channel, or device. It automates business-user interactions by creating outcome-oriented self-service experiences.


On the other hand, ChatGPT is a language processing tool capable of natural human-like conversations and other language-based tasks such as summarizing and composing text. However, ChatGPT is rather flawed as a standalone enterprise tool.

The table below summarizes the differences between the technologies powering Watson Assistant and ChatGPT:


| ChatGPT (Large Language Model) | Watson Assistant (Cognitive Virtual Assistant using NLP) |
| --- | --- |
| Responds in an expressive, human-like conversational manner | Tends to sound robotic and stiff |
| Presents answers in concise text blurbs | Provides most answers in information cards rather than conversational flows |
| Generates generic responses based on universal information found on the internet | Generates information specific to the business or user, since it is trained on company content |
| Not entirely truthful or trustworthy, since some information found online may be false or biased | Displays accurate information using tried-and-true resolution methods and verified knowledge bases |
| Responds in a generic or flat tone of voice | Responses are tuned to the business’s preferred tone and voice |
| Limited authoring control and governance around foundation models | Full control over authoring, governance, training data, and foundation models |
| Cannot automate user or business tasks on its own | Automates business processes by tapping into internal action routines based on the user’s intent |

Integrating Watson Assistant with Generative AI

Although ChatGPT has yet to break into the enterprise market, it has set a high bar for conversational AI in this space. That said, ChatGPT is not a rival to Watson Assistant. If anything, IBM sees ChatGPT as a powerful ally that can augment Watson Assistant with natural language generation capabilities. The two systems can complement each other to create a more fluid, natural, and accurate enterprise-centric conversational experience.


More specifically, Watson Assistant can tap into ChatGPT’s generative capabilities to produce human-like responses, especially for queries outside the foundation models or training data. This can be done by adding a layer of generative AI to search results produced by Watson Assistant via Watson Discovery or other data sources.
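A minimal sketch of that pattern, often called retrieval-augmented generation: retrieve verified passages first, then hand them to a generative model with instructions to answer only from them. The helper names and the in-memory knowledge base below are illustrative stand-ins, not actual Watson Discovery or OpenAI API calls.

```python
def search_company_docs(query: str) -> list[str]:
    """Stand-in for a Watson Discovery search over verified company content."""
    knowledge_base = {
        "account balance": [
            "Log in to the mobile app and open the Accounts tab.",
            "Balances are shown at the top of each account card.",
        ],
    }
    return next(
        (passages for topic, passages in knowledge_base.items()
         if topic in query.lower()),
        [],
    )

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Constrain the generative model to answer only from retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the customer's question using ONLY the passages below. "
        "If they don't contain the answer, say you don't know.\n"
        f"Passages:\n{context}\nQuestion: {query}"
    )

passages = search_company_docs("How do I check my account balance?")
prompt = build_grounded_prompt("How do I check my account balance?", passages)
print(prompt)
```

In a real deployment, the prompt would be sent to a generative model, which rewrites the verified search results into a conversational answer; the grounding instruction is what keeps the fluency gain from reintroducing hallucinations.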


IBM partners Streebo and NeuralSeek are already merging generative AI with Watson Assistant using OpenAI APIs. With the added layer of generative AI, Watson Assistant can format query results into more concise and conversational responses while retaining the transparency and accuracy users expect.


IBM’s generative AI roadmap centers around embedding the following capabilities in Watson Assistant and its underlying technologies:

  1. Conversational search – Replace “robotic” responses with natural-sounding conversational experiences.
  2. Personalized responses – Create customized experiences based on contextual data and the end user.
  3. Faster and easier authoring for conversations – Make it easier for authors to create and review conversation flows before deploying AI modules.
  4. Faster and easier authoring for user journeys – Allow authors to map user tours around the various products and services available on the host app, website, or channel.

“Watson is already a good listener. He will now become a good speaker!” — Alexandre Lanoue, VP & Leader, Business Reimagination, SIA.

The Takeaway

Exciting things are happening in the world of conversational AI. And IBM is passing down the new capabilities of next-gen AI to its customers by merging the enterprise readiness of Watson Assistant and the generative functionality of ChatGPT.


Do not pass up the opportunity to add a whole new dimension to virtual interactions in your business. With SIA Innovations by your side, you’ll never miss out on any enterprise-tech innovation. Let’s discuss upgrading your AI experiences and overall digital performance using the latest solutions.

