How to run LLMs for Free on your desktop

This article shows you how to install and run powerful LLMs, such as Llama 3, right on your desktop. It's free, and it's darn good.

That's the Phi-3 4B LLM from Microsoft, running AI agents locally on my machine, exclaims Pita.

Introduction

Llama 3 by Meta is comparable in performance to GPT-4. For coding, there are other models, such as Mistral 7B, which can also be installed locally.

The big caveat with this method is that LLMs are memory and GPU intensive. So unless you have a powerful machine with enough RAM, this method may not suit you.
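How much RAM is "enough"? A common rule of thumb is that a model's footprint is roughly its parameter count times the bytes per parameter at a given quantization level, plus some runtime overhead. Here is a minimal sketch of that arithmetic; the 20% overhead factor is an assumption for illustration, and real usage varies by runtime and context length:

```python
# Rough rule of thumb: RAM needed ≈ parameters × bytes-per-parameter × overhead.
# The 1.2× overhead factor is an assumed fudge for KV cache and runtime buffers.

BYTES_PER_PARAM = {
    "fp16": 2.0,   # full half-precision weights
    "q8":   1.0,   # 8-bit quantization
    "q4":   0.5,   # 4-bit quantization (common for local use)
}

def estimate_ram_gb(params_billion: float, quant: str = "q4",
                    overhead: float = 1.2) -> float:
    """Estimate the RAM (in GB) needed to load a model locally."""
    bytes_needed = params_billion * 1e9 * BYTES_PER_PARAM[quant] * overhead
    return bytes_needed / 1e9

# Ballpark figures for the models mentioned in this article:
for name, size in [("Phi-3 4B", 3.8), ("Mistral 7B", 7.3), ("Llama 3 8B", 8.0)]:
    print(f"{name}: ~{estimate_ram_gb(size):.1f} GB at 4-bit")
```

By this estimate, an 8B model at 4-bit fits comfortably in 8 GB of RAM, while running it at full fp16 precision would need roughly four times that.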

There is a workaround for this memory issue: use Groq online for processing via AnythingLLM installed locally. It works beautifully and is blazing fast. You will find tutorials for this on YouTube.

How to Install AnythingLLM on Your Desktop

Download AnythingLLM for Windows, Mac, or Linux and LLM away to your heart's content.
This setup gives you the power of GPT-4, for FREE.
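Once a local server is running, you can talk to it from your own scripts. Many local LLM tools expose an OpenAI-compatible chat endpoint; the URL, port, and model name below are assumptions, so adjust them to match your setup. A minimal sketch:

```python
import json
import urllib.request

# Assumed endpoint: many local LLM servers expose an OpenAI-compatible
# chat API. Change the URL, port, and model name to match your install.
LOCAL_URL = "http://localhost:3001/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "llama3") -> dict:
    """Build an OpenAI-style chat-completion payload for a local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local_llm(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize why local LLMs are useful in one sentence."))
```

Because the request never leaves your machine, this is exactly the privacy and offline benefit discussed in the next section.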

Why would you want LLMs installed locally?

Running a Large Language Model (LLM) locally has some awesome perks. Here’s why you might want to keep your AI close to home:

Privacy Fort Knox: Keep your secrets safe. No more worrying about data leaks or snooping eyes when everything stays on your own turf.

Lightning Fast: Say goodbye to lag. Local LLMs mean instant responses without the wait for data to travel to the cloud and back.

Wallet-Friendly: Save some bucks. If you’re a heavy user, running your own LLM can be cheaper than those pesky cloud fees.

Tailor-Made: Make it yours. Customize the model to fit your specific needs and tweak it as you go. You’re in control.

Always On: No internet? No problem. Local LLMs work even when your Wi-Fi doesn’t, making them super reliable.

Grow As You Go: Start small and scale up. With better hardware, you can handle bigger models right from your own setup.

Law-Abiding Citizen: Keep it legal. For places with strict data laws, local LLMs ensure you’re always on the right side of the regulations.

Steady and Predictable: No surprise downtimes. Your local LLM is always there when you need it, with no unexpected service interruption.

In short, running an LLM locally gives you privacy, speed, savings, customization, reliability, scalability, legal peace of mind, and consistent availability.

What’s not to love?

Next Steps

  1. To help you find your way around Phewture, I have put together a set of AI Recipes under Wayfinding. Do go through these and you'll navigate like a pro through this stream of consciousness. 😄
  2. The Learning Methods are exercises that I'd recommend if you wish to wrap your head around the possibilities of using AI Recipes at work, or for play.
  3. Don't forget to leave your comments below and share your joy with your friends on social media. Use the share icons below this post to gain some good karma.

Wish to train your in-house team on AI techniques?

Want to raise your team's productivity by 5X, and creativity by 10X?

Phewture offers AI-spurred training for teams. Do check out the Training Services.


🙏
Thank you for trying out Phewture.

While you enjoy your sojourns here do give me your feedback. Use the comment box below and let it rip.
