I downloaded DeepSeek recently as well. I found running it locally more convincing than SaaS: with the latter I kept imagining a bunch of real people reading and improving the responses behind the scenes. When it runs locally, it's clear the computer is doing all the talking.

I'm not sure where I should post this, but I was curious about AI on my RPi 5 with 8GB of memory, so I flashed a spare SD card with Raspberry Pi OS and installed ollama on it. I was able to run deepseek-r1:8b, llama3.2:1b, and phi3.5:3.8b without any problems. The responses were slow, especially DeepSeek. I have a simple bash script that lets me pick which model to run. I was disappointed that they don't have access to realtime info. Of course, your experience could be different.
Posted by ejolson — Sun Feb 16, 2025 3:57 am