ShellGPT and Ollama: First steps with AI and your TUXEDO - TUXEDO Computers


ShellGPT and Ollama: First steps with AI and your TUXEDO

Artificial Intelligence is no longer limited to web applications like ChatGPT – with ShellGPT, large language models (LLMs) can be used directly in the command line. While the tool accesses OpenAI models by default, it also supports locally hosted alternatives. This is where Ollama comes in: The platform enables running powerful models like Llama3.2 or Mistral directly on your own system. This not only provides more control over your data but also saves costs. In this article, you’ll learn how to use ShellGPT with Ollama and bring AI power directly to your shell.

Guide to Installing and Setting Up ShellGPT and Ollama on TUXEDO OS

Setting up ShellGPT in combination with Ollama on a Linux system is straightforward and enables the use of locally hosted language models like Mistral. The following step-by-step guide describes how to install, configure, and integrate Ollama into ShellGPT.

Installing Ollama

To install Ollama, first open a terminal. Enter the following command to download and set up all required components:

sudo apt install pipx
pipx ensurepath
curl https://ollama.ai/install.sh | sh

The installation process runs automatically and sets up all necessary components to operate Ollama on your system.

For optimal performance, Ollama requires a dedicated NVIDIA or AMD GPU with appropriate drivers. On TUXEDO OS, these are installed automatically. If you see the message "No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode." during installation, no compatible graphics card was detected, meaning the AI will work exclusively with CPU processing power.

Setting Up Ollama

After successfully completing the installation, you can set up a language model. For good results, mistral:7b-instruct is recommended. Download the model with the following command and make sure the pull concludes with "success". If the download is interrupted, simply repeat the command. You can then verify that the model was installed with the subsequent ollama list command.

ollama pull mistral:7b-instruct
(out)...
(out)verifying sha256 digest
(out)writing manifest
(out)success
ollama list
(out)NAME                   ID              SIZE      MODIFIED
(out)mistral:7b-instruct    f974a74358d6    4.1 GB    6 minutes ago

Mistral is a powerful, open-source AI model known for its high efficiency and flexibility. It requires minimal computational resources and can be used in both small and large-scale applications. Thanks to its compatibility with popular frameworks, it is easy to integrate and suitable for a wide range of use cases. Ollama enables the integration of other large language models (LLMs), such as DeepSeek R1, which is currently making headlines. We will delve into DeepSeek in a future article.

Downloading the model may take some time depending on your internet speed. In total, more than 4 GB of data will be transferred to your hard drive. Once the model is downloaded, check the status of the Ollama API server with the following command.

systemctl status ollama
(out)● ollama.service - Ollama Service
(out)     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: enabled)
(out)     Active: active (running) since Wed 2025-02-05 14:25:11 CET; 1min 31s ago
(out)   Main PID: 1174 (ollama)
(out)      Tasks: 13 (limit: 37949)
(out)     Memory: 50.9M (peak: 61.0M)
(out)        CPU: 289ms
(out)     CGroup: /system.slice/ollama.service
(out)             └─1174 /usr/local/bin/ollama serve

This server ensures that the model is continuously available on your system and can be used by various applications, particularly ShellGPT.

Configuring ShellGPT for Ollama

After Ollama has been successfully set up, you need to configure ShellGPT for communication with the local server. Begin by installing ShellGPT and the LiteLLM extension. Enter the following command:

pipx install "shell-gpt[litellm]"

To verify if the Ollama server is functioning properly, you can perform a simple test:

sgpt --model ollama/mistral:7b-instruct "Who are you?"
(out)I am ShellGPT, your Linux/TUXEDO OS programming and system
(out)administration assistant. I specialize in bash shell scripting
(out)and managing various tasks on these operating systems. Let's
(out)work together to make the most out of your system!

If you’re using ShellGPT for the first time, you’ll be asked for an OpenAI API key. In this case, enter any string to skip this step.

Next, it’s necessary to modify ShellGPT’s configuration file. Open the file .sgptrc in the directory ~/.config/shell_gpt with a text editor of your choice:

nano ~/.config/shell_gpt/.sgptrc

Change the following settings:

  • Set DEFAULT_MODEL to ollama/mistral:7b-instruct.
  • Change OPENAI_USE_FUNCTIONS to false.
  • Set USE_LITELLM to true.

Save the file with Ctrl+O and close the editor with Ctrl+X.
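After these changes, the relevant part of .sgptrc should look roughly like this (all other keys remain at their defaults):

```
DEFAULT_MODEL=ollama/mistral:7b-instruct
OPENAI_USE_FUNCTIONS=false
USE_LITELLM=true
```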

Using ShellGPT with Ollama

Once everything is set up, you can use ShellGPT with Ollama. Test this with an input like:

sgpt "Hello Ollama"
(out)Hello! How can I assist you today? If you have a question or task related
(out)to Linux/TUXEDO OS programming or system administration, feel free to ask.
(out)I'm here to help! 

ShellGPT should now communicate with the locally hosted model via Ollama and provide you with answers. Below you’ll find a series of examples.

Example 1: Deleting Specific Files

Want to recursively delete specific files? Using a graphical file manager, you would need to navigate laboriously through various folders and manually search for the corresponding files. ShellGPT builds the appropriate command that you just need to execute.

sgpt "Delete recursively all JPG files in the current folder whose names begin with two numbers."

If you’re only interested in the command itself and want to skip the explanation, add the --shell option when calling ShellGPT. This option reduces the output to the generated command and asks whether you want to execute it directly.

sgpt --shell "Delete recursively all JPG files in the current folder whose names begin with two numbers."
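For illustration, a command of the kind the model typically suggests for this request might look like the following (a hypothetical example; the actual suggestion varies between runs):

```shell
# Hypothetical example of a generated command: recursively delete
# all JPG files whose names begin with two digits.
find . -type f -name '[0-9][0-9]*.jpg' -delete
```

Always review such a command before confirming its execution, especially when it deletes files.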

Example 2: Converting with ImageMagick

Want to scale all PNG images in the current directory to a uniform width of 1024 pixels and convert them to the space-saving JPG format? You could load each image individually into GIMP and process them one by one. Or you can use ImageMagick on the command line. ShellGPT creates the appropriate command for you.

sgpt "resize all png images in the current folder to a width of 1024 pixels and convert them to jpg."
for img in *.png; do convert "$img" -resize 1024x -quality 80 "$(basename "$img" .png).jpg"; done

Example 3: Creating Regular Expressions

Suppose you have a text or list with names, addresses, and email addresses and want to extract all email addresses from it. With the grep command and a suitable regular expression, this can be done quickly – but creating such expressions is often tricky. ShellGPT handles this task for you and delivers the appropriate command instantly.

sgpt "extract all email addresses from the file example.txt"
grep -oP '(\w+([-+.\w]*\w+)*@(\w+([-.]\w*)*\w+\.){1,2}[a-zA-Z]{2,})' example.txt
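A quick way to try the generated command is with a small sample file (the file name and contents here are made up for demonstration):

```shell
# Create a small sample file with two email addresses ...
printf 'Alice <alice@example.com>\nBob bob@mail.example.org\n' > example.txt

# ... and extract them with the generated grep command.
grep -oP '(\w+([-+.\w]*\w+)*@(\w+([-.]\w*)*\w+\.){1,2}[a-zA-Z]{2,})' example.txt
```

This prints only the matched addresses, one per line.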

Example 4: Daily Backup via Cron

ShellGPT’s capabilities go far beyond creating individual commands – it can also write complete scripts for you. For example, if you want to create a daily backup of a directory, ShellGPT generates not only the appropriate backup script but also the corresponding entry in your system’s crontab.

sgpt "Write a Bash script that creates a daily backup at 2 AM of /home/user/Documents and saves it in the /home/user/Backups directory."
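The result might look roughly like the following sketch (hypothetical: the script name, exact paths, and archive naming are assumptions based on the prompt, and the model's actual output will differ):

```shell
#!/bin/bash
# backup.sh - hypothetical sketch of a backup script ShellGPT might generate.
# Archives /home/user/Documents into a dated tarball in /home/user/Backups.

SOURCE="/home/user/Documents"
TARGET="/home/user/Backups"

# Make sure the backup directory exists, then create the archive.
mkdir -p "$TARGET"
tar -czf "$TARGET/documents-$(date +%F).tar.gz" \
    -C "$(dirname "$SOURCE")" "$(basename "$SOURCE")"
```

The matching crontab entry (added via crontab -e) would run the script daily at 2 AM, for example: 0 2 * * * /home/user/backup.sh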

Please note that ShellGPT is not fully optimized for using local models. In certain use cases, this could lead to unexpected behaviors.