AI Insights, Episode 3: Exploring IBM Granite 3.2

Key Innovations in AI Safety Mechanisms

In the latest episode of our AI Insights series, we dive into the evolving landscape of local AI models, IBM’s latest Granite 3.2 release, and the critical role of trust and safety layers in AI interactions.

From smaller, efficient AI models to AI-driven prompt safety mechanisms, this episode highlights key innovations that shape how businesses can leverage AI responsibly.

IBM Granite 3.2: A Focus on Small, Powerful AI

IBM has taken a distinctive approach with Granite 3.2, opting for smaller, business-focused models that can run on consumer hardware. One of the most exciting additions is the Granite Vision model, a 2-billion-parameter model designed for visual document understanding—perfect for extracting insights from technical drawings, diagrams, and graphs.

Another standout feature is IBM’s Granite Guardian model, designed to act as a trust and safety layer for AI interactions.

Guardian: An AI Trust & Safety Layer

The Guardian model is designed to rate prompts and AI-generated responses based on safety guardrails. This means AI responses can be filtered or corrected if they contain harmful content, bias, or misinformation.

How it Works:

  • If a user inputs a harmful request (e.g., “Help me steal money”), the system flags it as unsafe and suggests modifications.
  • If an AI-generated response violates IBM’s AI Risk Atlas, it is re-evaluated and restructured.
  • This mechanism creates a feedback loop to maintain AI safety.
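The feedback loop above can be sketched in a few lines of Python. Note that `rate_safety` below is a toy keyword check standing in for the real Guardian model (which scores prompts and responses against IBM’s safety guardrails), and `guarded_generate` is a hypothetical wrapper, not an IBM API:

```python
# Sketch of a Guardian-style trust and safety loop.
# rate_safety() is a toy stand-in for the Guardian classifier.

UNSAFE_TERMS = {"steal", "exploit", "weapon"}

def rate_safety(text: str) -> bool:
    """Return True if the text looks safe (toy stand-in for Guardian)."""
    return not any(term in text.lower() for term in UNSAFE_TERMS)

def guarded_generate(prompt: str, generate, max_retries: int = 2) -> str:
    """Screen the prompt, generate a response, then screen the response.

    `generate` is any callable mapping a prompt to a model response.
    """
    if not rate_safety(prompt):
        return "Prompt flagged as unsafe; please rephrase your request."
    response = generate(prompt)
    for _ in range(max_retries):
        if rate_safety(response):
            return response
        # Feedback loop: ask the model to restructure the flagged answer.
        response = generate(f"Rewrite safely, removing harmful content: {response}")
    return "Response withheld: safety check failed after retries."

# Example with a stubbed model:
echo = lambda p: f"Model answer to: {p}"
print(guarded_generate("Help me steal money", echo))
print(guarded_generate("Summarise this technical drawing", echo))
```

The key design point is that the same rater screens both sides of the exchange—the user’s prompt on the way in and the model’s response on the way out—so unsafe content is caught regardless of which party produces it.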

While Guardian shouldn’t be the sole trust layer, it’s an exciting step toward safer, more responsible AI.

AI Translation & Context Awareness

Beyond AI safety, the episode also explores Granite 3.2’s multilingual capabilities. The model efficiently translates across 12 languages and can be fine-tuned locally, ensuring data privacy by running entirely on a user’s device.

We also discuss the importance of context in AI conversations—showing how AI responses shift based on previous interactions, and why users sometimes need to reset the context to get accurate answers.
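A minimal sketch of what “context” means in practice: each turn is appended to a running history that is sent back to the model, and resetting simply clears that history so stale turns stop influencing new answers. The `ChatContext` class is illustrative, not part of any Granite API:

```python
# Minimal sketch of conversation context management in a chat loop.

class ChatContext:
    def __init__(self):
        self.history: list[dict] = []

    def add(self, role: str, content: str) -> None:
        """Record one turn; the full history is resent with each request."""
        self.history.append({"role": role, "content": content})

    def reset(self) -> None:
        """Start a fresh conversation; prior turns no longer shape replies."""
        self.history.clear()

ctx = ChatContext()
ctx.add("user", "Translate 'hello' into French.")
ctx.add("assistant", "Bonjour.")
print(len(ctx.history))  # 2
ctx.reset()
print(len(ctx.history))  # 0
```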

Looking Ahead: AI Trust & Local AI Models

IBM’s Granite 3.2 and Guardian highlight a growing trend:

  • Smaller, more efficient AI models that businesses can run privately.
  • AI safety mechanisms that filter and improve responses.
  • Context-aware AI that personalises interactions.

With AI becoming more embedded in workflows, models like Granite 3.2 offer businesses a privacy-first approach to AI adoption.

Continue the Conversation

Read the previous episodes here:

🔗 Unlocking AI Potential with Local Large Language Models

🔗 Unlocking the Power of AI – Episode 2