-
That's a good question, @sskorol. Without access to the hardware for testing it's hard to say, but I can make some guesses. RAM usage of the models is quite small, so 1 GB of RAM should be more than enough. From what I can see online, CPU performance would be the biggest concern. If we assume the Quad-Core Cortex-A7 (up to 1.5 GHz) is roughly half the speed of an RPi 3, that could be an issue, since a single model currently uses around 70% of one RPi 3 core. However, one option to reduce the CPU load is simply to call the model less frequently. By default a prediction is made every 80 ms, as this gives the best detection performance, but it would be relatively simple to call it only every 160 ms, which should cut the CPU load by roughly 40-50% and might be just enough for the Cortex-A7. There may be a small drop in detection performance from this change, but I haven't yet done the testing to measure how large. If you have access to the hardware, I'd be happy to make some changes to the code to enable this kind of testing.
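To illustrate the idea, here is a minimal sketch of calling the model on a stride of input frames rather than on every frame. The `Model` class below is a dummy stand-in, not the library's actual API, and the 1280-sample frame size is an assumption based on 80 ms of 16 kHz mono audio (16000 × 0.08 = 1280 samples):

```python
class Model:
    """Dummy stand-in for the wake-word model (hypothetical API).

    It only counts how often inference runs, to show the CPU savings.
    """

    def __init__(self):
        self.calls = 0

    def predict(self, audio):
        self.calls += 1
        return 0.0  # placeholder detection score


FRAME = 1280  # assumed: 80 ms of 16 kHz mono audio


def stream(model, frames, stride=2):
    """Buffer `stride` frames and run inference once per buffer.

    stride=1 reproduces the default (predict every 80 ms);
    stride=2 predicts every 160 ms, halving the inference count.
    """
    window, scores = [], []
    for i, frame in enumerate(frames, start=1):
        window.extend(frame)
        if i % stride == 0:
            scores.append(model.predict(window))
            window = []
    return scores
```

With `stride=2` the model runs half as often for the same audio, which is where the estimated ~40-50% CPU reduction comes from; the trade-off is that detections are scored over a coarser time grid.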
-
Hey, thanks for this library! I'm wondering what the minimum CPU/RAM requirements are for running a single model in real time.
I saw the RPi 3 comparison in the docs, but I'm curious whether it can be used effectively on a Quad-Core Cortex-A7 (up to 1.5 GHz) with 1 GB of RAM.