I've Been Exploring Locally-run AI Chatbots


I tried out one of Vicuna’s smaller models with llama.cpp, and it wasn’t nearly as responsive as ChatGPT 3.5; a larger model would probably close that gap. The primary issues for me were the model repeating itself and giving wonky responses. Plus, I would have liked to use a platform with a graphical interface instead of the console.

[Image: 00004-dub_2023-07-01.png]

I had some issues with oobabooga. It’s optimized to run on Nvidia graphics cards, and my AMD card isn’t supported. After a while of trying to get it working, I said forget it and switched to a simpler alternative: running on the CPU instead, which worked pretty well with llama.cpp. It seems like I’ll have to wait for better AMD graphics support.
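For anyone curious, running a quantized model on the CPU with llama.cpp looks roughly like this. The model filename, thread count, and flag values below are just illustrative, not the exact setup I used:

```shell
# Sketch of a CPU-only llama.cpp run (filename and values are assumptions)
./main \
  -m ./models/ggml-vicuna-7b-q4_0.bin \
  -t 8 \
  -n 256 \
  --repeat_penalty 1.2 \
  -p "Write a short story about a lighthouse keeper."
```

The `-t` flag pins the number of CPU threads, and bumping `--repeat_penalty` above 1.0 helps with the repetition problem I mentioned.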

Most of the time, I would use the browser-based versions, but my primary issues with those are security and the fact that many of them are censored models. When discussing violence, story-building, or even certain aspects of chemistry, some models behave strangely or inform you that they cannot discuss the subject. It limits the creativity and imagination of the person using them. There is a new model called MPT, but it's still experimental. I'm going to be following the project closely.

Let's hope it has AMD graphics support. lol



1 comment

GPT4All is a good introduction to locally run models. Hopefully better GPU-enabled versions come along, but in the meantime there are a variety of Hugging Face models you can drop into the models folder.
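Dropping a model in is just a file copy. The paths below are assumptions — the models folder location varies by OS and GPT4All version, so check the app's settings for the actual directory:

```shell
# Hypothetical example: copy a downloaded quantized model into
# GPT4All's models folder (Linux-style path assumed)
cp ~/Downloads/ggml-vicuna-7b-q4_0.bin ~/.local/share/nomic.ai/GPT4All/
```

After restarting the app, the new model should show up in the model picker.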
