KlikoAI
| KlikoAI | |
|---|---|
| Participants | Melan |
| Skills | |
| Status | Active |
| Niche | Smart stuff |
| Purpose | Home Automation |
| Tool | No |
| Location | |
| Cost | |
| Tool category | |
The goal of this project is to have AI-based detection of the paper and waste containers at the space, so that we can tell whether they are inside or outside. Previous methods using sensors (a distance sensor and a BLE sensor) have proven unsatisfactory.
KlikoAI uses https://huggingface.co/vikhyatk/moondream2 as its model. It is a lightweight vision LLM (2B parameters). Looking into moondream3 might be interesting, but the performance of moondream2 already seems pretty solid. We simply ask the model to detect the colour of the container lids and to output the result as JSON.
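For illustration, a query could look something like the sketch below. This follows the `encode_image`/`answer_question` API shown on the moondream2 model card (the exact API differs between model revisions); the prompt, snapshot path, and JSON schema are made up for this example and are not KlikoAI's actual ones.

```python
import json

from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load moondream2; trust_remote_code is needed because the model ships
# its own modelling code.
model = AutoModelForCausalLM.from_pretrained(
    "vikhyatk/moondream2", trust_remote_code=True
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("vikhyatk/moondream2")

image = Image.open("snapshot.jpg")  # hypothetical camera snapshot
enc_image = model.encode_image(image)
answer = model.answer_question(
    enc_image,
    'Which container lids do you see, and are they inside or outside? '
    'Answer only with JSON like {"blue": "inside", "yellow": "outside"}.',
    tokenizer,
)
state = json.loads(answer)  # may need cleanup if the model adds extra text
print(state)
```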
An idea might be to quantize the weights from BF16 to int16 to decrease memory overhead. The program itself runs on gpu.vm.nurd.space (in Docker, in ~/klikoai) and uses the NVIDIA GeForce RTX 3080.
Right now it's programmed to run an update every 3 hours, but the goal is to reduce this to twice a day, plus multiple runs on trash pickup days. It should also be able to detect when a big change happens in frame, but this is not yet tested.
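One simple way to implement such a check would be a mean pixel difference between consecutive snapshots. This is just a sketch with a guessed threshold, not KlikoAI's actual implementation:

```python
import numpy as np
from PIL import Image

CHANGE_THRESHOLD = 25.0  # assumed value, would need tuning


def frame_changed(prev_path: str, curr_path: str) -> bool:
    # Assumes both snapshots come from the same fixed camera at the
    # same resolution; convert to grayscale and compare per-pixel.
    prev = np.asarray(Image.open(prev_path).convert("L"), dtype=np.float32)
    curr = np.asarray(Image.open(curr_path).convert("L"), dtype=np.float32)
    diff = np.abs(curr - prev).mean()  # average brightness change per pixel
    return diff > CHANGE_THRESHOLD
```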
KlikoAI has Home Assistant integration and publishes its state as an MQTT sensor. This makes it possible to mute the "PUT THE TRASH OUTSIDE" notifications once the correct container has been put outside. There is also IRC integration via the commands !klikos and !klikoupdate (to trigger a scan).
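With paho-mqtt, publishing the state could look like this sketch; the broker hostname and topic names are placeholders, not the real configuration:

```python
import json

import paho.mqtt.publish as publish

BROKER = "mqtt.nurd.space"                # hypothetical broker hostname
STATE_TOPIC = "klikoai/containers/state"  # hypothetical topic

# Publish the latest detection result; retain=True so Home Assistant sees
# the current state immediately after a restart.
state = {"blue": "outside", "yellow": "inside"}
publish.single(STATE_TOPIC, payload=json.dumps(state), retain=True, hostname=BROKER)

# Announce a sensor via Home Assistant's MQTT discovery convention.
config = {
    "name": "Kliko blue lid",
    "state_topic": STATE_TOPIC,
    "value_template": "{{ value_json.blue }}",
}
publish.single(
    "homeassistant/sensor/klikoai_blue/config",
    payload=json.dumps(config),
    retain=True,
    hostname=BROKER,
)
```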
Previous Iteration
The goal at first was to have a YOLO model trained on an existing dataset containing containers. However, due to the "limited" way the camera is mounted, detection results were very poor, which made me decide to use an LLM vision model instead.
Dataset
We want to detect either of our two bins: one has a blue lid and the other a yellow lid, while the rest of each bin is green.
It seems that gathering images of containers with these lid colours is more difficult than initially hoped. But since YOLO returns a bounding box, we could fairly easily look at the top of the container and calculate which colour range it falls in, or determine which bin it is based on where it sits in the frame, as sketched below.
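A colour check on the top slice of the bounding box might look like this; the hue ranges and the "top 20%" crop are rough guesses that would need tuning:

```python
import colorsys

import numpy as np
from PIL import Image


def lid_colour(image: Image.Image, box: tuple[int, int, int, int]) -> str:
    # Crop roughly the top 20% of the bounding box, where the lid sits.
    x1, y1, x2, y2 = box
    lid = image.crop((x1, y1, x2, y1 + max(1, (y2 - y1) // 5)))
    # Average the crop's colour and convert it to a hue in degrees.
    rgb = np.asarray(lid.convert("RGB"), dtype=np.float32) / 255.0
    r, g, b = rgb.reshape(-1, 3).mean(axis=0)
    hue = colorsys.rgb_to_hsv(r, g, b)[0] * 360
    if 40 <= hue <= 70:
        return "yellow"
    if 190 <= hue <= 260:
        return "blue"
    return "unknown"
```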
The default YOLO models don't support garbage bin detection, so I trained my own model on my 5090 using this dataset: https://universe.roboflow.com/finance-insitut/garbage-container-detection-sam7i
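With the ultralytics package, such a training run roughly looks like the sketch below; the data.yaml path, base weights, and hyperparameters are placeholders, not the settings actually used for KlikoAI:

```python
from ultralytics import YOLO

# Start from pretrained weights and fine-tune on the container dataset.
model = YOLO("yolov8n.pt")
model.train(data="garbage-container-detection/data.yaml", epochs=100, imgsz=640)

# Evaluate on the dataset's validation split.
metrics = model.val()
print(metrics)
```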