Building custom LLM chatbots with Google Colab

Tolulade Ademisoye
3 min read · Nov 25, 2023

My talk @ Google Devfest Lagos 2023

The application of large language models (LLMs) to chatbots has inspired new approaches to natural-language bot development. Previously, most chatbots lacked any element of human-like interaction: Facebook Messenger bots, WhatsApp bots, website customer-service bots, etc., were designed to provide mechanical responses based on user input.

Join Semis today to network in AI & Bigtech

This is me, speaking on this topic at a Google Developers Group event, Devfest Lagos 2023

Business Impact of Secured Chatbots

Data privacy is critical for most businesses today. Many of them face strict limitations on how they may handle their customers' data, as well as data generated or derived from their internal operations.

Human-like chatbots play a crucial role in employee productivity and customer satisfaction, so an alternative route to building chatbots on large language models (LLMs) has emerged: utilising open-source LLMs.

There are several considerations when building a custom LLM chatbot; in this article, I'll examine them. Stay tuned!


Selecting the right LLM

Large language models come in various types, namely:

  1. Closed source: e.g., GPT-3.5/4, Claude
  2. Open source: e.g., Llama 2, Vicuna 7B, Falcon 7B/180B
  3. Internally built LLMs (an expensive and time-consuming route)

Since this article focuses on data privacy, let's explore open-source models.
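A practical first check when picking an open-source model for Colab is whether its weights fit in the GPU's memory. The sketch below uses the common rule of thumb (parameter count × bytes per parameter, plus an overhead multiplier); the overhead factor and the 16 GB figure for a free Colab T4 are rough assumptions, not measured values.

```python
# Rough sketch: estimate whether an open-source model's weights fit in VRAM.
# The 1.2x overhead multiplier (for activations / KV cache) is a guess.

def fits_in_vram(n_params_billion: float, vram_gb: float,
                 bytes_per_param: float = 2, overhead: float = 1.2) -> bool:
    """True if the model should fit at the given precision.

    bytes_per_param: 2 for fp16/bf16, 1 for 8-bit, 0.5 for 4-bit quantisation.
    """
    needed_gb = n_params_billion * bytes_per_param * overhead
    return needed_gb <= vram_gb

# A free Colab T4 GPU has roughly 16 GB of VRAM.
print(fits_in_vram(7, 16))                        # 7B in fp16: False (just over)
print(fits_in_vram(7, 16, bytes_per_param=0.5))   # 7B, 4-bit quantised: True
print(fits_in_vram(180, 16))                      # 180B in fp16: False
```

This is why 7B-class models are usually loaded quantised on free Colab GPUs, while a model like Falcon 180B is out of reach there entirely.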

Before you choose your LLM — picture from my slide GDG Lagos 2023

Designing the Architecture

Before commencing your development, you should design the system architecture for your LLM system. In this case, we will use a Retrieval Augmented Generation (RAG) system.

What is Retrieval Augmented Generation (RAG)?

Someone once said, "RAG is a simple but very effective way to use large language models without fine-tuning." That is correct. LLMs can be adapted to our needs either by fine-tuning or by using RAG.

RAG has three parts, namely:

  1. The instruction
  2. The response
  3. The source of the answer

The LLM is instructed to focus on the primary source data before giving its response.
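The three parts above can be sketched as a prompt template: the instruction tells the model to answer only from the supplied sources, and the response is generated from that grounded prompt. The template wording below is illustrative, not a fixed standard.

```python
# Sketch of a three-part RAG prompt: instruction, source(s), and a slot
# for the model's response.

def build_rag_prompt(question: str, sources: list[str]) -> str:
    # Number each source passage so the model (and the user) can cite them.
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (
        "Answer the question using ONLY the sources below. "
        "If the answer is not in the sources, say you don't know.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "When is the next maintenance window?",
    ["IT notice: maintenance is scheduled for Saturday 02:00 UTC."],
)
print(prompt)
```

The resulting string is what gets sent to the LLM, which keeps its answer anchored to the primary source data rather than its training data.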


RAG system components include:

  1. Loaders and parsers
  2. Document pre-processing
  3. Document storage (vector DB)
  4. Retrieval algorithm
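The storage and retrieval components above can be sketched with a toy in-memory example. Bag-of-words cosine similarity stands in for real embeddings here; a production system would use an embedding model and a vector database, but the flow (embed the query, rank the stored documents, return the top match) is the same.

```python
# Toy retrieval step: embed documents as bag-of-words vectors and rank
# them against the query by cosine similarity.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Crude stand-in for an embedding model: lowercase word counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
]
top = retrieve("refund policy returns", docs)
print(top[0])  # the refund-policy document
```

The retrieved passages would then be fed into the prompt before the LLM generates its answer.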

The advantages of RAG are less hallucination (the model's tendency to give confident but wrong answers), responses that are not limited to the model's training data, and greater control over the model's behaviour.

Hopefully, I’ll share more about RAG in a follow-up write-up.

Developing the LLM System

Now that we have decided on the system architecture, we need to implement it.

I have made a detailed guide for building a custom chatbot using Google Colab in my GitHub repo and Google Notebook.


You can access the repo here and the notebook. I hope this helps. You may also buy me a coffee to support my work.

Until next time, you may connect with me on; YouTube, Twitter, Substack & LinkedIn.


Tolulade Ademisoye

I build enterprise AI & data for the world at Reispar Technologies