Naaimachine

Owner: Space
Status: Infra
Hostname: gpu.lan.nurd.space
Tool: Yes
Tool category: Infrastructure


[[File:naaimachine.jpg]]

The (Na)AI machine is the space's system for AI workloads, located inside the rack near the door of zaal 1.

It's mainly used to run an LLM and Stable Diffusion, but it can also be used for anything else that needs GPU power. If you want to use it for something, don't hesitate to ask Melan about it!

Address: gpu.lan.nurd.space (10.208.1.20)

Suspend & waking up

The system is designed to suspend after 15 minutes without GPU load. You can also put it to sleep manually with systemctl suspend.
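The idle-suspend logic amounts to something like the following sketch. This is just an illustration, not the actual script running on the machine; the polling interval and the use of nvidia-smi are assumptions.

<syntaxhighlight lang="bash">
#!/usr/bin/env bash
# Hypothetical sketch: suspend after 15 minutes without GPU load.
IDLE_LIMIT=$((15 * 60))  # 15 minutes, in seconds
idle=0
while true; do
    # Highest utilization across all GPUs, in percent
    util=$(nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits | sort -nr | head -n1)
    if [ "$util" -eq 0 ]; then
        idle=$((idle + 60))
    else
        idle=0
    fi
    if [ "$idle" -ge "$IDLE_LIMIT" ]; then
        systemctl suspend
        idle=0
    fi
    sleep 60
done
</syntaxhighlight>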

Waking up is handled by a small script running in an LXC container on Erratic, which provides a small API endpoint. You can see the state of the AI machine and its open ports by going to http://gpu-wake.dhcp.nurd.space:8000/

Waking up is done by making a GET request to http://gpu-wake.dhcp.nurd.space:8000/wake or by sending a Wake-on-LAN (WOL) magic packet to 08:62:66:81:14:6f.
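For example, from any machine on the LAN (assuming curl and the wakeonlan utility are available):

<syntaxhighlight lang="bash">
# Check the machine's state and open ports
curl http://gpu-wake.dhcp.nurd.space:8000/

# Wake it via the API endpoint
curl http://gpu-wake.dhcp.nurd.space:8000/wake

# Or send the WOL magic packet directly
wakeonlan 08:62:66:81:14:6f
</syntaxhighlight>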

Specs

The system is equipped with an Intel Xeon E5-2643 v4 @ 3.70GHz, a six-core processor, and 32GB of DDR4 memory. Primary storage is a fast 500GB NVMe drive; secondary storage is a 512GB SSD mounted under /mnt/storage. Everything is built around an Asus SABERTOOTH X99 motherboard.

The system has the following GPUs available:

- NVIDIA GeForce RTX 3080 (10GB VRAM)
- NVIDIA P104-100 (8GB VRAM)

The system runs Arch Linux with the latest kernel and NVIDIA drivers, as well as CUDA 12.
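To verify what the system exposes, you can query the driver (assuming a standard nvidia-smi install):

<syntaxhighlight lang="bash">
# List the detected GPUs
nvidia-smi -L

# Show name, total VRAM, and driver version per GPU
nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv
</syntaxhighlight>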

AI models

LLM (Large Language Model)

One of the AI machine's primary uses is the LLM. It's integrated with Nurdbot. For this we use https://github.com/LostRuins/koboldcpp

You can talk to the LLM by going to http://gpu.lan.nurd.space:5001/ and the API endpoint is available at http://gpu.lan.nurd.space:5001/api
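koboldcpp exposes a KoboldAI-compatible HTTP API, so a minimal generation request looks something like this (the prompt and parameters are just an illustration):

<syntaxhighlight lang="bash">
# Send a generation request to the LLM
curl -s http://gpu.lan.nurd.space:5001/api/v1/generate \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "Hello Nurdbot, what is NURDspace?", "max_length": 80}'
</syntaxhighlight>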

The model we are currently running is https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GGML (Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin)

(At the time of writing, the extra parameters used are -usecublas 0 --threads 10 --gpulayers 40, but more tweaking is required. It's also running on the first GPU, but I want to test moving it to the second GPU.)
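Putting that together, the launch command looks roughly like this (the koboldcpp path and port are assumptions; the model file and parameters are the ones listed above):

<syntaxhighlight lang="bash">
# Illustrative koboldcpp launch; adjust the path to the local checkout
python koboldcpp.py Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin \
  --usecublas 0 --threads 10 --gpulayers 40 --port 5001
</syntaxhighlight>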

Stable Diffusion

Right now this is still a work in progress.

Other

Setup notes

In order for the system to come back out of suspend properly, with the NVIDIA cards remembering their state, the following settings have to be enabled:

<syntaxhighlight lang="bash">
# Enable the NVIDIA suspend/resume services so GPU state is saved and restored
sudo systemctl enable nvidia-suspend.service
sudo systemctl enable nvidia-hibernate.service
sudo systemctl enable nvidia-resume.service

# Preserve video memory allocations across suspend (tee is needed because
# a plain shell redirect would not run with sudo's privileges)
echo "options nvidia NVreg_PreserveVideoMemoryAllocations=1" | sudo tee -a /lib/modprobe.d/systemd.conf

# Rebuild the initramfs so the modprobe option takes effect
sudo dracut --force
</syntaxhighlight>