Introduction
Llama 3 by Meta is comparable in performance to GPT-4. For coding, there are other models, such as Mistral 7B, that can also be installed locally.
The big caveat with this method is that LLMs are memory- and GPU-intensive. So unless you have a powerful machine with enough RAM, this method may not suit you.
There is a workaround for this memory issue: use Groq online for processing via AnythingLLM installed locally. It works beautifully and is blazing fast. You will find tutorials for this on YouTube.
How to Install AnythingLLM on Your Desktop
- Go to AnythingLLM and download the appropriate version for your operating system (Mac, Windows, or Linux)
- Follow the installation instructions here: https://docs.useanything.com/installation/overview
- View this video for installing Ollama with AnythingLLM:
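Once Ollama is installed alongside AnythingLLM, a typical first run from the terminal looks something like this. This is a rough sketch: the `llama3` model tag is the one in use at the time of writing and may change, and download sizes vary by model variant.

```shell
# Pull the Llama 3 model from the Ollama library
# (the default 8B quantized build is roughly 4-5 GB)
ollama pull llama3

# Chat with the model directly in the terminal to verify it works
ollama run llama3 "Explain what a local LLM is in one sentence."

# Ollama also serves a local API, by default at http://localhost:11434.
# In AnythingLLM, choose Ollama as the LLM provider and point it at
# that endpoint, then select llama3 as the model.
```

If the `ollama run` step responds, AnythingLLM should be able to use the same model through the local endpoint.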
This setup gives you the power of GPT-4, for FREE.
Why would you want LLMs installed locally?
Running a Large Language Model (LLM) locally has some awesome perks. Here’s why you might want to keep your AI close to home:
- Privacy Fort Knox: Keep your secrets safe. No more worrying about data leaks or snooping eyes when everything stays on your own turf.
- Lightning Fast: Say goodbye to lag. Local LLMs mean instant responses without the wait for data to travel to the cloud and back.
- Wallet-Friendly: Save some bucks. If you’re a heavy user, running your own LLM can be cheaper than those pesky cloud fees.
- Tailor-Made: Make it yours. Customize the model to fit your specific needs and tweak it as you go. You’re in control.
- Always On: No internet? No problem. Local LLMs work even when your Wi-Fi doesn’t, making them super reliable.
- Grow As You Go: Start small and scale up. With better hardware, you can handle bigger models right from your own setup.
- Law-Abiding Citizen: Keep it legal. For places with strict data laws, local LLMs ensure you’re always on the right side of the regulations.
- Steady and Predictable: No surprise downtimes. Your local LLM is always there when you need it, with no unexpected service interruptions.
In short, running an LLM locally gives you privacy, speed, savings, customization, reliability, scalability, legal peace of mind, and consistent availability.
What’s not to love?
Next Steps
- To help you find your way around Phewture, I have put together a set of AI Recipes under Wayfinding. Do go through these and you'll navigate like a pro through this stream of consciousness. 😄
- The Learning Methods are exercises that I'd recommend if you wish to wrap your head around the possibilities of using AI Recipes at work, or for play.
- Don't forget to leave your comments below and share your joy with your friends on social media. Use the share icons below this post to gain some good karma.
Wish to train your in-house team on AI techniques?
Phewture offers AI-focused training for teams. Do check out the Training Services.
While you enjoy your sojourns here do give me your feedback. Use the comment box below and let it rip.