AI will probably kill us all in the end, but until then, we need to learn how to use it to fight those who WILL use it against us. AI is like a firearm: if you wish to defend yourself, you need to arm yourself with both the tool and the training. Don’t get caught defenseless; ignorance is not bliss.
The landscape of AI is changing, and AI itself is changing every day, too rapidly for anyone to keep up with everything. With the tools listed below, however, you can arm yourself with knowledge; just choose the right tool for the job. For sensitive personal information that you don’t want leaked to the technocrats at Google, OpenAI, Microsoft, etc., there are tools like Proton’s Lumo AI, Venice.ai, and Nano-GPT.com. For maximum privacy, you can download models to your computer and run them locally, completely offline. This page is a collection of the various AI tools that I use and think others should consider, as they are all very easy to use, even the offline, local AI models.
If there’s something you want to know about AI that you don’t know, well, you can… simply ask AI. Seriously, it’s that easy. Don’t know the differences among all of the various ChatGPT or Anthropic models? Simple: just ask AI and it will tell you in as much detail as you wish.
Screenshot of OpenWebUI Interface:

The ‘Big Tech’ models are quite good at many tasks; just be aware that they are collecting all of your exchanges, along with attributes of your machine and network:
–
Better ways to use the big tech AI models:
–
Private AI models:
Enoch (Brighteon’s AI model, also available as an offline model)
–
Specialty AI models:
Suno (Music and Lyrics)
Elevenlabs (Text to Speech, Voice Cloning)
–
Other Models:
Granite (IBM)
–
Offline AI

These are applications for downloading and running models locally on your own computer or server, most for free, with 100% offline privacy. Ollama with the Open WebUI front end is a wonderful way to run all of the models available on Ollama’s Models page, and it can also run many other models distributed in the ‘.gguf’ format. If you have no idea how to download and use these with the bazillions of local AI models available for free, then ask an online AI and it will show you exactly how to install and configure your own offline AI. Seriously, it’s easier than you think. Enoch is an excellent one to have, for example: it’s only 7.5GB and can be run with Ollama, LM Studio, and others. It’s excellent ‘non-woke’ AI with tons of uses, trained on over 3,000 resources covering firearms, health and wellness, off-grid living, and many other educational and scientific subjects. Just like with online AI, there are models for specific purposes such as text generation, chat, images, medical, coding, reasoning, etc.
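Once a model is running under Ollama, you don’t even need the web interface to talk to it; Ollama exposes a simple REST API on your own machine (port 11434 by default). As a rough sketch, here is what a request to its /api/generate endpoint looks like — the model name "enoch" is an assumption here, use whatever name the model has in your own `ollama list`:

```python
import json

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's local /api/generate endpoint."""
    return {
        "model": model,     # name shown by `ollama list` -- "enoch" is just an example
        "prompt": prompt,
        "stream": False,    # ask for one complete reply instead of streamed chunks
    }

payload = build_generate_request("enoch", "Summarize the benefits of offline AI.")
print(json.dumps(payload, indent=2))

# To actually send it, Ollama must be running locally:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Everything stays on localhost — the prompt and the reply never leave your machine, which is the whole point of running offline AI.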
While these can all be run on a regular laptop or desktop, they will be painfully slow and may even crash your system when run on your CPU alone. If you have a decent Graphics Processing Unit (GPU), then you can run these models very fast, although GPUs are not all the same. A $500 GPU will work OK, but a $3,500 one will be an order of magnitude better. Those with the resources can even run multiple GPUs for even better performance running offline AI. When downloading models for offline use, pay attention to the parameter size: 7-12B models will run OK on a Raspberry Pi or regular laptop, but for anything larger you really need a GPU. If you’re not sure how large of a model you can run on your hardware… ask AI, it knows the answer to that question.
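If you want a ballpark figure before you even ask AI, a common rule of thumb is parameters × bytes-per-parameter plus some overhead. The figures below are rough approximations, not exact for any specific runtime: a 4-bit quantized .gguf uses roughly 0.5–0.6 bytes per parameter, 8-bit about 1 byte, and fp16 about 2 bytes.

```python
def estimated_memory_gb(params_billions: float,
                        bytes_per_param: float = 0.6,   # ~4-bit quantization
                        overhead_gb: float = 1.5) -> float:
    """Rough rule-of-thumb RAM/VRAM (in GB) needed to load a model.

    bytes_per_param: ~0.6 for 4-bit quantized, ~1.0 for 8-bit, ~2.0 for fp16.
    overhead_gb covers context/KV cache and runtime overhead (a guess, not exact).
    """
    return params_billions * bytes_per_param + overhead_gb

# A 7B model at 4-bit quantization fits in about 6 GB:
print(round(estimated_memory_gb(7), 1))    # ~5.7
# A 70B model at 4-bit needs a big GPU (or several):
print(round(estimated_memory_gb(70), 1))   # ~43.5
```

So a quantized 7B model like Enoch fits comfortably on an 8GB machine, while 70B-class models are firmly in expensive-GPU territory — which matches the advice above.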
Ollama (Add OpenWebUI for a beautiful User Interface)
text-generation-webui (Oobabooga)
–
Video showing Enoch running on a Debian 13 server using a single NVIDIA RTX 5090 GPU:
NVIDIA 5090 Enoch full speed-Ask to build OpenWebUI using docker to run ollama
Instructions for deploying OpenWebUI and LiteLLM on your own hardware or VPS:
VPS AI Hostinger Project Guide – GrapheneGoat
-0-
Uncensored AI – Local, Private, Offline AI that will answer… ANYTHING:
Example of offline AI prompt (Ollama running Enoch engine pictured)


