Network/Roadmap/RedundantCoreServices

= Summary =

We want our core services (DHCP, DNS, LDAP) to always work, so we need to make them redundant: not only as multiple processes providing the same functionality, but also spread across different physical systems.

= Software to use =

* NSD and Unbound for DNS
* ISC-DHCPD for DHCP
* <del>OpenLDAP</del> FreeIPA

= Architecture =

There will be two systems, each of which can be either a VM or a physical machine. Both systems should run (almost) identical configurations and should not require any further configuration or maintenance. Furthermore, care must be taken to make these machines as robust as possible. Both machines should be able to run by themselves; that is, there should be *no* external dependencies for these systems. If this means that, during an outage, the systems continue to run but their configuration cannot be modified, this is acceptable.

For redundancy, we use a combination of VRRP and software-based load balancing. For VRRP, we use keepalived. This allows us to configure each exposed service with a dedicated IP address which 'floats' between the two services systems: if the active system goes down, the secondary system detects this and takes ownership of the IP address. There is no automatic fallback configured. For load balancing, we use the [https://github.com/UlricE/pen pen] load balancer, which does both TCP and UDP load balancing with sticky sessions based on a hash of the source IP.
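
As a rough illustration, a minimal keepalived configuration for one floating service address could look like the sketch below. The interface name, router ID and the 10.0.0.53 address are made-up placeholders, not our actual values:

<pre>
# /etc/keepalived/keepalived.conf -- sketch with placeholder values
vrrp_instance DNS_VIP {
    state MASTER            # the second box uses 'state BACKUP'
    interface eth0          # placeholder interface name
    virtual_router_id 53    # must match on both boxes
    priority 150            # the second box uses a lower value, e.g. 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.53/24        # placeholder floating service address
    }
}
</pre>

Note that with this exact sketch the original master would take the address back as soon as it recovers; to get the 'no automatic fallback' behaviour described above, keepalived's nopreempt option (with both instances declared as state BACKUP) is the usual approach.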

For DNS, the authoritative zones <del>live in IPAM. On the ipam machine, a powerdns instance is running that notifies two BIND instances running on the two systems.</del> will be moved to FreeIPA. These zones will be slaved onto an NSD instance running on the services boxen, which in turn is queried by Unbound. Unbound is exposed onto the network using the aforementioned load balancing setup. There is a separate zone (dhcp.nurd.space) which is generated by scripts from the DHCP lease file; this is used to allow clients to configure their own hostname under this zone via DHCP. All DNS zones are also exposed in hosts file format at [https://dns.lan.nurd.space/hosts.txt this url].
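
To sketch how the slaving could be wired up: NSD pulls the zone from the FreeIPA DNS server via AXFR, and Unbound is pointed at the local NSD instance for that zone. The master address and the NSD port below are placeholders, not the real configuration:

<pre>
# nsd.conf fragment -- sketch, placeholder master address, NSD assumed to listen on 5353
zone:
    name: "nurd.space"
    allow-notify: 192.0.2.10 NOKEY   # FreeIPA DNS server (placeholder)
    request-xfr: 192.0.2.10 NOKEY

# unbound.conf fragment -- resolve the internal zone via the local NSD
server:
    do-not-query-localhost: no
stub-zone:
    name: "nurd.space"
    stub-addr: 127.0.0.1@5353
</pre>

A forward-zone instead of a stub-zone would work just as well; the main point is that Unbound answers queries for the internal zones from the local NSD copy instead of depending on anything external.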

For DHCP, we use an isc-dhcpd cluster. The cluster is configured with short (1 minute) lease times, which allows us to use DHCP leases as a non-intrusive form of presence detection.
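
A minimal sketch of what the failover pairing and the short leases look like in dhcpd.conf; the peer addresses, subnet, range and timers are placeholder values, not our tuned settings:

<pre>
# dhcpd.conf fragment on the primary -- sketch, placeholder addresses
failover peer "core-dhcp" {
    primary;                      # the other box declares 'secondary;'
    address 192.0.2.11;
    peer address 192.0.2.12;
    port 647;
    peer port 647;
    max-response-delay 30;
    max-unacked-updates 10;
    load balance max seconds 3;
    mclt 300;                     # primary only
    split 128;                    # primary only
}

default-lease-time 60;            # 1-minute leases for presence detection
max-lease-time 60;

subnet 10.10.0.0 netmask 255.255.0.0 {
    pool {
        failover peer "core-dhcp";
        range 10.10.1.1 10.10.1.254;
    }
}
</pre>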

For LDAP, <del>we run OpenLDAP with the syncprov overlay in a master-master fashion. Optionally, we can deploy slaves for systems that need to run standalone.</del> a FreeIPA server is being set up, and all services will be moved to it over the coming months. The services machines will run a replica of this setup to provide additional redundancy.
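
For reference, adding such a replica on a services machine comes down to enrolling it as a client of the existing FreeIPA server and then promoting it; the hostnames below are placeholders, and whether the replica also carries the CA and DNS roles is still to be decided:

<pre>
# on the new services box -- sketch, placeholder hostnames
ipa-client-install --domain nurd.space --server ipa.nurd.space
ipa-replica-install --setup-ca    # add --setup-dns if the replica should also serve DNS
</pre>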