top of page

AI4

FOUNDATION
MODELS & TRAINING

At The Pencil, we don’t start from scratch — we stand on the shoulders of giants.

​Our AI agents are built using commercially viable foundation models from leading open-source providers like LLaMA (Meta), Gemma and Gemma 3N (Google).

OPEN, PROVEN, AND  SMART AT ANY SIZE

We work with models ranging from 0.5B to over 90B parameters, helping you, small businesses, and large enterprises, find the right model for their scale and use case.

Model Training & Customization

Trained with Open Source Industry Best Practices

Once selected, each model is customized using proven, industry-standard open source training methods to align with your data, communication style, and operational workflows.

Instruction

Tuning

to align behaviour with your goals

QLoRA

for efficient fine-tuning with limited compute

Low-Rank Adaptation

to inject custom knowledge into base models

Tensor

Parallelism

for scalable training on larger models

These methods enable safe, efficient fine-tuning — without overfitting or sacrificing performance.

Trained Scalable Compute When Needed

Most training runs efficiently on local or dedicated infrastructure. We scale only when needed.

 

For projects requiring high-memory nodes, multi-GPU clusters, or temporary compute bursts, we use the Databricks Data Intelligence Platform to provide secure, enterprise-grade resources — all within a compliant and isolated training environment.

Securing AI Interactions

Real-time defence against injections and jailbreaks

LLM-powered applications are susceptible to prompt attacks, which are prompts intentionally designed to subvert the intended behaviour of the LLM. This is why we use Llama Prompt Guard 2 by Meta.

PROMPT GUARD

POWERED

BY

Llama Prompt Guard 2 is trained on a large corpus of attacks, and which are capable of detecting both prompts that contain injected inputs (Prompt Injections) as well explicitly malicious prompts (Jailbreaks).

Ensuring safe agent-to-agent communication

The ability for AI agents to interoperate is crucial for building complex, multi-functional applications. This is why we use the Google Agent2Agent (A2A) protocol.

Agent2Agent Protocol

POWERED

BY

​​Enables Complex Collaboration

Allow specialized agents to work together across different ecosystems on tasks that a single agent cannot handle alone.​

Preserve Opacity

Allow agents to collaborate without needing to share internal memory.

​​​​​​​​​We’re deeply grateful to the open source AI community for their innovation

WE BUILD RESPONSIBLY ON TOP OF THEIR WORK

bottom of page