{{Inventory
|Name=gpu
|Owner=Space
|Status=Infra
|Hostname=gpu.lan.nurd.space
|Picture=naaimachine.jpg
|Tool=Yes
|Category=Infrastructure
}}
The (Na)AI machine is a system for AI use at the space, located inside the rack near the door of zaal 1.
It’s mainly used to run an LLM and Stable Diffusion, but it can also be used for other things that need GPU power. If you want to use it for something, don’t hesitate to ask Melan about it!
== Address ==
gpu.lan.nurd.space (10.208.1.20)

== Suspend & waking up ==
The system is designed to suspend after 15 minutes without GPU load. You can also put it to sleep manually with <code>systemctl suspend</code>.
Waking up is handled by a small script running in an LXC on Erratic, which provides a small API endpoint. You can see the state of the AI machine and its open ports by going to <code>http://gpu-wake.dhcp.nurd.space:8000/</code>.
To wake the machine, make a GET request to <code>http://gpu-wake.dhcp.nurd.space:8000/wake</code>, or send a magic Wake-on-LAN (WOL) packet to <code>08:62:66:81:14:6f</code>.
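For example, from any machine on the LAN (a minimal sketch using <code>curl</code> and the <code>wakeonlan</code> tool; any WOL tool will do):

<syntaxhighlight lang="bash"># Check the AI machine's current state and open ports
curl http://gpu-wake.dhcp.nurd.space:8000/

# Wake it via the API endpoint
curl http://gpu-wake.dhcp.nurd.space:8000/wake

# ...or send the magic packet directly
wakeonlan 08:62:66:81:14:6f</syntaxhighlight>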
<span id="specs"></span> | |||
= Specs = | |||
The system is equipped with an <code>Intel Xeon E5-2643 v4 @ 3.70GHz</code>, a six-core processor, and <code>32GB</code> of DDR4 memory. Primary storage is a fast <code>500GB</code> NVMe drive; secondary storage is a <code>512GB</code> SSD mounted under <code>/mnt/storage</code>. Everything runs on an <code>Asus SABERTOOTH X99</code> motherboard.
The system has the following GPUs available:

* NVIDIA GeForce RTX 3080 (10GB VRAM)
* NVIDIA P104-100 (8GB VRAM)
The system runs <code>Arch Linux</code> with the latest kernel and Nvidia drivers, as well as <code>CUDA 12</code>.
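For a quick check of the GPUs, driver, and CUDA version, run <code>nvidia-smi</code> on the machine:

<syntaxhighlight lang="bash"># Lists both GPUs with their VRAM usage, plus driver and CUDA versions
nvidia-smi</syntaxhighlight>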
<span id="ai-models"></span> | |||
= AI models = | |||
<span id="llm-large-language-model"></span> | |||
=== LLM (Large Language Model) === | |||
One of the AI machine’s primary uses is the LLM, which is integrated with Nurdbot. For this we use https://github.com/LostRuins/koboldcpp
You can talk to the LLM by going to <code>http://gpu.lan.nurd.space:5001/</code>; the API endpoint is available at <code>http://gpu.lan.nurd.space:5001/api</code>.
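For example (a minimal sketch, assuming the standard KoboldAI-compatible <code>/api/v1/generate</code> route that koboldcpp exposes; adjust the sampler fields to taste):

<syntaxhighlight lang="bash"># Request a completion from the LLM over the KoboldAI-compatible API
curl -s http://gpu.lan.nurd.space:5001/api/v1/generate \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "Hello, who are you?", "max_length": 80, "temperature": 0.7}'</syntaxhighlight>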
As of right now, the model we are running is https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GGML (<code>Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin</code>).
At the time of writing, the extra parameters used are <code>-usecublas 0 --threads 10 --gpulayers 40</code>, though more tweaking is required. It’s currently running on the first GPU, but moving it to the second GPU still needs to be tested.
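For reference, a hypothetical full invocation with those parameters (the script name, model path, and port here are illustrative, not copied from the machine):

<syntaxhighlight lang="bash"># Hypothetical koboldcpp launch; adjust paths to match the actual install
python koboldcpp.py Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin \
  -usecublas 0 --threads 10 --gpulayers 40 --port 5001</syntaxhighlight>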
<span id="stable-diffusion"></span> | |||
=== Stable Diffusion === | |||
Right now this is still a work in progress.
<span id="other"></span> | |||
= Other = | |||
<span id="setup-notes"></span> | |||
== Setup notes == | |||
In order for the system to come back out of suspend properly, with the Nvidia cards remembering their state, the following settings have to be enabled:
<syntaxhighlight lang="bash">sudo systemctl enable nvidia-suspend.service | |||
sudo systemctl enable nvidia-hibernate.service | |||
sudo systemctl enable nvidia-resume.service | |||
echo "options nvidia NVreg_PreserveVideoMemoryAllocations=1" >> /lib/modprobe.d/systemd.conf | |||
sudo dracut --force</syntaxhighlight> |
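After a reboot, you can verify that the option took effect (assuming the driver exposes its parameters under <code>/proc</code> as usual):

<syntaxhighlight lang="bash"># Should print "PreserveVideoMemoryAllocations: 1"
grep PreserveVideoMemoryAllocations /proc/driver/nvidia/params

# The suspend/resume hooks should report "enabled"
systemctl is-enabled nvidia-suspend.service nvidia-resume.service</syntaxhighlight>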