Network/ManagedVM

== Bootstrapping a new VM ==
This procedure describes the steps you must follow to roll out a new managed VM. Note that this procedure can only be executed by someone in the BOFH team. We are planning to make this functionality available to all members.
 
We currently support two flavours of machines:
 
* Debian 12 (bookworm) with backports enabled
* AlmaLinux 9 with SELinux enabled
 
All code that is used to deploy these works with both these distros, and it is fairly easy to add additional distributions if the need arises.


=== Deploying a new VM ===
==== Add DNS records ====
First up, you will need to create DNS records for the new machine. This is done by logging into [https://ipa.nurd.space/ FreeIPA]. Find a free IP address and create a record for the host you want to create. Don't forget to create a PTR record as well.
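
If you prefer the command line over the web UI, the FreeIPA client tools can create both records in one step. This is only a sketch: it assumes the <tt>ipa</tt> CLI is available and that you have a valid Kerberos ticket, and the zone name, hostname and IP address below are just examples.

 kinit yourusername
 # create the A record and the matching PTR record in one go
 ipa dnsrecord-add vm.nurd.space testerdetest --a-rec=10.208.30.20 --a-create-reverse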


TODO: Ensure that zone transfers happen sooner. This sometimes needs to be prodded


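Since the rest of this procedure depends on working DNS, it is worth checking that both records resolve before you continue. The commands below are examples; substitute your own hostname, IP address and resolver:

 dig +short @10.208.30.254 a testerdetest.vm.nurd.space
 10.208.30.20
 dig +short @10.208.30.254 ptr 20.30.208.10.in-addr.arpa
 testerdetest.vm.nurd.space.

Make sure that the A record points to the correct IP address, and that the PTR record points back to the correct FQDN.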
==== Update ansible inventory ====
Check out the [https://git.nurd.space/bofh/ansible ansible repository] and add a new host under <tt>inventories/nurdspace/inventory.yml</tt>. Add the host to the following groups:


* all
* vergersweg
* debian or almalinux (depending on the distro)


Optionally, you can create your own group. This is used to deploy a system where the service itself is also managed via ansible; this is done for all BOFH systems. Next, commit your changes and check the CI/CD pipeline to see whether the ansible archive is deployed correctly.
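
A rough sketch of that workflow on the command line (the hostname and commit message are examples, and you may need SSH access to the repository instead of HTTPS):

 git clone https://git.nurd.space/bofh/ansible
 cd ansible
 vi inventories/nurdspace/inventory.yml    # add the host to its groups
 git add inventories/nurdspace/inventory.yml
 git commit -m 'Add testerdetest.vm.nurd.space to the inventory'
 git push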


==== Create new virtual machine ====
Start by creating a clone of either the [https://git.nurd.space/bofh/opentofu/machines/almalinux9 AlmaLinux] or [https://git.nurd.space/bofh/opentofu/machines/debian12 Debian] template repository. Be sure to remove the existing <tt>.git</tt> directory. Next, edit <tt>machines.tf</tt> and customize it to your needs. See the example below:
 
 module "NAME" {
   source = "git::https://git.nurd.space/bofh/opentofu/modules/proxmox"
 
   hostname     = "NAME"
   distribution = "debian12" # or `almalinux9`
   ipv4_address = "10.208.X.Y"
 
   cluster_node = "erratic"
   datastore    = "hddvg"
 
   vault_password = var.vault_password
 }


Each machine gets 2 cores, 2 GB of RAM and 20 GB of disk space by default. You can customize this by setting the <tt>cores</tt>, <tt>memory</tt> or <tt>storage</tt> parameters. If you want to use ansible to also manage the service, set the <tt>role</tt> parameter to the name of the ansible group you want to deploy onto the system.
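
For example, a larger machine whose service is also deployed via ansible might look like the sketch below. The values and the <tt>webserver</tt> group name are purely illustrative; check the proxmox module for the exact units these parameters expect.

 module "NAME" {
   source = "git::https://git.nurd.space/bofh/opentofu/modules/proxmox"
 
   hostname     = "NAME"
   distribution = "debian12"
   ipv4_address = "10.208.X.Y"
 
   cluster_node = "erratic"
   datastore    = "hddvg"
 
   # override the defaults of 2 cores, 2 GB ram and 20 GB diskspace
   cores   = 4
   memory  = 4096
   storage = 40
 
   # ansible group whose role should be deployed onto this system
   role = "webserver"
 
   vault_password = var.vault_password
 }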


==== Deploy the system ====
 
Once you are satisfied with the machine configuration, create a new repository for this system under the [https://git.nurd.space/bofh/opentofu/machines machines group]. Initialize the copy of the repository you cloned from one of the templates, and add the newly created GitLab repository as a remote. Push your code, and a CI/CD pipeline will pick it up and deploy your system.
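
A sketch of those steps on the command line, assuming your machine is called <tt>NAME</tt>, the remote URL follows the machines group layout linked above, and the default branch is <tt>main</tt>:

 cd NAME                      # the copy of the template repository you customized
 git init
 git add .
 git commit -m 'Initial configuration for NAME'
 git remote add origin https://git.nurd.space/bofh/opentofu/machines/NAME.git
 git push -u origin main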


==== Actual deployment ====
Once the pipeline is kicked off, the following steps will be performed:


* OpenTofu will create a new virtual machine on Proxmox. The machine is configured with a cloud-config tailored to the system
* During the first boot, cloud-init downloads and installs ansible-puller, and will perform the first run of ansible-puller
* ansible-puller will download and install ansible in a venv on the system, and will pull a tarball with the latest copy of the ansible repository
* After unpacking, ansible-puller will perform the actual configuration of the system


This whole cycle takes approximately 10 to 15 minutes. Once it is completed, you can log in to the system using your LDAP credentials. If anything fails, you can log in to the system using the <tt>debug</tt> user to see what went wrong. Ask one of the BOFH members for the password for this account.
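
Assuming SSH access (the hostname below is an example), that looks like:

 ssh yourusername@testerdetest.vm.nurd.space    # normal login with your LDAP credentials
 ssh debug@testerdetest.vm.nurd.space           # fallback if provisioning failed
 # on the VM, /var/log/cloud-init-output.log is usually a good place to start debugging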