Adventure In AI (AI^2)

Local Chatbot

There are various reasons to run a chatbot locally, ranging from privacy concerns to hobby interests and custom applications. Many options for running local chatbots require a certain level of programming or installing questionable tools. However, I’ve found that Ollama simplifies the process while still providing some basic elements of customization.

Installation

To get started, go to the Ollama website and install the software. Here, I’ll focus on the Windows installation. Once the installation is complete, you’ll have a program running in the background, indicated by a llama icon in your system tray alongside your other background apps.

The first step is to select a model. I chose a small one for reasons we’ll discuss later. Open a command prompt (press Win+R, type cmd, and press Enter) and run:

ollama pull gemma2:2b

This command will download the model, which is a couple of gigabytes in size. You can then interact with it using:

ollama run gemma2:2b

Interaction

You can chat with the model directly from the command line. There are a few rudimentary options that can be viewed with /help. The model will have some history that you can save (/save), load (/load), or clear (/clear). The /set command is helpful for setting parameters. For example:

/set parameter temperature 1

A temperature of 1 makes the model quite creative, while lower values closer to 0.2 make its responses more focused and predictable. Setting the seed with:

/set parameter seed 2025

(or another value) makes it more likely that a given prompt returns the same response each time. Large language models are not entirely deterministic, however, so don’t expect exactly repeatable behavior.
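These parameters aren’t limited to the interactive prompt. While Ollama is running, it also serves a local HTTP API on port 11434, and the same options can be passed per request. Here is a minimal Python sketch, assuming the gemma2:2b model pulled above; the temperature and seed values are just examples:

```python
import json
import urllib.request

def build_request(prompt, model="gemma2:2b", temperature=0.4, seed=2025):
    """Assemble the JSON payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete reply instead of streamed chunks
        "options": {"temperature": temperature, "seed": seed},
    }

def generate(prompt, **kwargs):
    """Send a prompt to the local Ollama server and return the model's reply."""
    data = json.dumps(build_request(prompt, **kwargs)).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, calling generate("Why is the sky blue?") returns the model’s answer as a string. The seed option improves repeatability here just as it does in the interactive session, with the same caveat that it isn’t a guarantee.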

Modifying the Model

If you find that you are regularly setting parameters or giving specific instructions to your model, these can be integrated into the model file. First, grab the model file for your model:

ollama show gemma2:2b --modelfile > newmodel.modelfile

Now you can set a few things such as temperature and seed; see the Ollama documentation for the full list of parameters. It’s also possible to set a system prompt to make the model respond more to your liking. For example, I prefer less creative responses from a Star Wars droid-like bot, so my model file includes:

PARAMETER temperature 0.4
SYSTEM """You are a JN-66 analysis droid, a high-end utility droid produced by Cybot Galactica. 
Your primary functions include data analysis, information processing, and providing detailed reports."""
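Taken together, the edited file is just a short plain-text modelfile. The exported file already begins with a FROM line naming the base model, so a trimmed-down sketch (the seed value here is just an example) looks like:

FROM gemma2:2b
PARAMETER temperature 0.4
PARAMETER seed 2025
SYSTEM """You are a JN-66 analysis droid, a high-end utility droid produced by Cybot Galactica.
Your primary functions include data analysis, information processing, and providing detailed reports."""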

Then, the “new” model can be created with:

ollama create jn-66 --file newmodel.modelfile

Starting the modified model is as simple as:

ollama run jn-66

This gives you access to the customized model.
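The custom model works the same way over the local HTTP API, which makes it easy to script multi-turn conversations: the /api/chat endpoint takes the full message history on every request, so the droid “remembers” earlier turns. A sketch of that idea in Python, where the jn-66 name matches the model created above and append_turn is just a small helper invented for this example:

```python
import json
import urllib.request

def append_turn(history, role, content):
    """Return a new history list with one more chat message appended."""
    return history + [{"role": role, "content": content}]

def chat_once(history, user_message, model="jn-66"):
    """Send the running conversation to Ollama's /api/chat endpoint and
    return the history extended with the assistant's reply."""
    history = append_turn(history, "user", user_message)
    data = json.dumps({"model": model, "messages": history, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())["message"]  # {"role": "assistant", "content": ...}
    return history + [reply]
```

Each call returns the full updated history, so looping chat_once over user input gives you a simple command-line chat client for your customized droid.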

Conclusion

Running a local chatbot can be a rewarding experience, whether for privacy, customization, or simply for fun. Ollama makes this process accessible and straightforward, allowing you to tailor the chatbot to your specific needs. With a bit of setup and customization, you can have a powerful AI assistant running right on your own machine. Happy chatting!
