Local AI with Alpaca: Privacy-friendly LLMs on TUXEDO OS - TUXEDO Computers



The use of large language models (LLMs) is widespread today but often occurs via cloud services where user data is processed on external servers. The open-source app Alpaca offers a privacy-friendly alternative: it enables the local execution of numerous common LLMs on Linux, providing more independence and security.

Having recently introduced ShellGPT with Ollama as an AI server interface and Mistral as a language model, we now turn to Alpaca, which simplifies the use of AI language models even further by bundling all necessary features into a single app, with no additional services or command-line interface required.

Installation

On TUXEDO OS, Alpaca is most easily installed as a Flatpak via the Discover Software Center, which ensures you always receive the latest version of this still-young program. After installation, open Alpaca and manage the language models via Open Model Manager.
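If you prefer the terminal, the same Flatpak can also be installed from Flathub; a sketch assuming the Flathub remote is reachable (com.jeffser.Alpaca is Alpaca's application ID on Flathub):

```shell
# Add the Flathub remote in case it is not configured yet
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# Install Alpaca from Flathub
flatpak install -y flathub com.jeffser.Alpaca

# Launch it from the terminal (or use the application menu)
flatpak run com.jeffser.Alpaca
```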

Under the Available tab, you’ll find all models that you can download directly through Alpaca and integrate locally as a "Managed Instance." Keep in mind that these models are typically several gigabytes in size, and larger models demand correspondingly more RAM and computational power. For initial tests, the lighter models Llama3.2 or Deepseek R1 7B are particularly well-suited.

Alternatively, you can add external language models via Manage Instances and the Plus icon under Add Instances. Alpaca supports Ollama (such as an Ollama server on your own network) as well as services like OpenAI ChatGPT, Google Gemini, Together AI, Venice, and OpenAI-compatible instances. In this test, however, we’ll focus on Ollama (Managed) to use a model directly on your computer.
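As a sketch of the network scenario: the commands below start an Ollama server that listens on your local network and pull a model onto it. The address 192.168.1.50 and the model name are placeholders for your own setup; Ollama listens on port 11434 by default.

```shell
# On the server machine: make Ollama listen on all interfaces,
# not just localhost (default port: 11434)
OLLAMA_HOST=0.0.0.0 ollama serve &

# Pull a model onto the server, e.g. Llama3.2
ollama pull llama3.2

# On the client: quick connectivity check before adding the instance
# in Alpaca (replace 192.168.1.50 with your server's address)
curl http://192.168.1.50:11434/api/tags
```

In Alpaca, you would then enter http://192.168.1.50:11434 as the address of the new Ollama instance.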

Info: On systems with hybrid graphics, such as laptops with an Intel iGPU and an Nvidia dGPU, Alpaca does not start under Wayland; instead, the terminal displays the error message Error 71 (Protocol error) dispatching to Wayland display. As a workaround, you can either use X11 or, as described here, add the option GSK_RENDERER=ngl to the configuration file ~/.config/environment.d/gsk.conf, creating the file if it does not yet exist.
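The workaround can be applied from a terminal; a minimal sketch that appends the variable to the file mentioned above:

```shell
# Create the environment.d directory if it does not exist yet
mkdir -p ~/.config/environment.d

# Tell GTK to use the new OpenGL renderer, which avoids the
# Wayland protocol error on hybrid-graphics systems
echo "GSK_RENDERER=ngl" >> ~/.config/environment.d/gsk.conf

# The setting takes effect after logging out and back in
```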

Note: An alternative to Alpaca, based on KDE libraries, is Alpaka, but it is still in early development and currently requires an Ollama server running in the background.

Usage

After installing one or more language models, start a New Chat via the speech bubble icon at the top left. Then, select the desired language model from the title bar in the chat window. Once you input a query, the AI model will respond.

For a simple test, we installed Dolphin Mistral (7B), Llama3.2 (3B), and Deepseek R1 (7B) and posed the following mathematical problem:

Train A and Train B are 300 km apart and traveling towards each other on a straight track. Train A departs at 08:00 at a speed of 30 km/h, while Train B departs at 11:00 at a speed of 60 km/h. When and where do the trains meet?

The results were disappointing: only Deepseek arrived at the correct answer (13:20, 160 km from Station A). By 11:00, Train A has already covered 90 km, so the remaining 210 km close at a combined 90 km/h, which takes 2 hours and 20 minutes. Mistral and Llama, however, provided completely wrong answers. Simple language models are therefore unsuitable for tasks requiring high precision, such as important mathematical or scientific calculations.
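For reference, the expected answer can be reproduced with a few lines of shell arithmetic (a sanity check of the problem itself, independent of Alpaca):

```shell
# Train A departs 08:00 at 30 km/h; Train B departs 11:00 at 60 km/h;
# initial distance 300 km. Compute when and where they meet.
awk 'BEGIN {
    gap = 300 - 30 * 3        # km remaining when B departs at 11:00
    t   = gap / (30 + 60)     # hours after 11:00 until they meet
    printf "meet %.0f min after 11:00, %.0f km from A\n", t * 60, 30 * (3 + t)
}'
# -> meet 140 min after 11:00, 160 km from A  (i.e. at 13:20)
```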

System Requirements

For our test, we installed Alpaca on a TUXEDO InfinityBook Pro Gen7 with an Intel Core i7-12700H and an Nvidia GeForce RTX 3050 Ti running TUXEDO OS. The app automatically uses the computational power of the dedicated graphics card. With Llama3.2, it achieves text generation speeds comparable to GPT-4o mini in the free version (prompt: "Write me a funny story about the penguin Tux in about 1000 words").

Performance depends heavily on the selected model and hardware. With GPU acceleration via Nvidia’s CUDA or AMD’s ROCm, even larger models can run smoothly. For initial tests, a business laptop with a dedicated GPU or a desktop PC equipped with an Nvidia or AMD graphics card is usually sufficient.

Conclusion: A Step Towards More Independence

For those looking to use large AI models while valuing privacy and independence, Alpaca provides a powerful and easy-to-install solution. The local execution reduces dependence on cloud services and ensures full control over the processed data.

For Linux users, Alpaca presents an interesting alternative to proprietary AI services. A wide range of language models is available, each with its strengths and weaknesses depending on the use case.

However, AI and language models are not without controversy and rightly have many critics. Despite legitimate concerns, we must engage with this technology—it is already an integral part of our digital world and is not going away. It is therefore crucial to use it in a secure and data-efficient manner. Solutions like Alpaca make this possible: controlled, locally executed AI use without dependence on central providers.

If Alpaca does not meet your expectations, you can easily remove the program via the Discover Software Center. By also selecting the Delete Settings and User Data switch, all stored data and the often storage-heavy language models will be completely removed.
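The same cleanup can be done from the terminal (again assuming the Flathub ID com.jeffser.Alpaca):

```shell
# Remove Alpaca together with its settings, user data and
# the downloaded language models
flatpak uninstall --delete-data com.jeffser.Alpaca

# Optionally clean up runtimes that are no longer needed
flatpak uninstall --unused
```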