| Dell PowerEdge 2950 confusion | |
| --- | --- |
| Owner | |
| Status | Gone |
| Hostname | confusion |
| Location | Gone |
| Tool | No |
| Tool category | |
| Picture | Confusion.jpg |
Old Proxmox box; we now use Coherence.
Specs
- 32GB of fully buffered DDR2 (the maximum is 32GB, or 64GB for the gen III model)
- iDRAC @ https://ipmi-confusion.management.nurd.space/
- CPU 2x X5450 Xeon 3GHz (4 cores each)
- HDDs SAS 3x146GB and 3x73GB (264GB logical drive in OS)
- 2 onboard network ports and 2x2 ports on cards.
The owner's manual: http://downloads.dell.com/Manuals/all-products/esuprt_ser_stor_net/esuprt_poweredge/poweredge-2950_owner%27s%20manual_en-us.pdf
OS
- Proxmox 4, management on https://confusion.nurdspace.lan:8006
Network Layout
nurds-sw-04 ports 23/12 set up to accept VLAN tagged packets.
eth2 connected to sw-04 port 23 from upper NIC
eth3 connected to sw-08 port 24 from lower NIC
eth4 connected to sw-08 port 22 from lower NIC
eth5 connected to sw-04 port 12 from upper NIC
bond0 set up but doing nothing on eth[0,1]
bond1 set active-backup on eth[4,2] (cross-card, primary upper)
bond2 set active-backup on eth[3,5] (cross-card, primary lower)
vmbr0 providing management console on VLAN1, bridging bond2 interface. This is for the NFS connections to the storage.
vmbr7 connected to VLAN7, bridging bond1.7 interface (i.e. tagging packets as VLAN7).
vmbr8 connected to VLAN8, bridging bond1.8 interface (i.e. tagging packets as VLAN8).
vmbr7 and vmbr8 available for VM/container binding, with no management interface connection.
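For reference, a minimal /etc/network/interfaces sketch of that layout (interface, bond and bridge names are taken from the list above; the management address and the exact bond/bridge options are assumptions, not copied from the live config):

# Bonds: active-backup across the two NIC cards, primaries as listed above
auto bond1
iface bond1 inet manual
    slaves eth4 eth2
    bond_mode active-backup
    bond_primary eth4
    bond_miimon 100

auto bond2
iface bond2 inet manual
    slaves eth3 eth5
    bond_mode active-backup
    bond_primary eth3
    bond_miimon 100

# Management/NFS bridge, untagged (VLAN1) over bond2
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10        # placeholder, not the real management IP
    netmask 255.255.255.0
    bridge_ports bond2
    bridge_stp off
    bridge_fd 0

# VM bridges tagging VLAN7/VLAN8 via bond1 sub-interfaces
auto vmbr7
iface vmbr7 inet manual
    bridge_ports bond1.7
    bridge_stp off
    bridge_fd 0

auto vmbr8
iface vmbr8 inet manual
    bridge_ports bond1.8
    bridge_stp off
    bridge_fd 0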
Firewall
Open, but with one modification to redirect port 443 to the web UI on port 8006:
*nat
-A PREROUTING -i vmbr0 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 8006
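If the rule ever needs to be re-added by hand, the equivalent iptables invocation is:

iptables -t nat -A PREROUTING -i vmbr0 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 8006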
Issues
- Wake on LAN on the connected NIC doesn't seem to work. I could not find the appropriate option in the BIOS.
- Console redirection in the DRAC interface won't work with any of the browsers and Java versions tried.
Todo
- DRAC email alerts
- Logins through LDAP
Migrations
As per https://pve.proxmox.com/wiki/Convert_OpenVZ_to_LXC the migration of most containers is ridiculously easy.
For containers, there's a script:
#!/bin/bash
# Migrate one OpenVZ container from precious to this host.
if [ $# -lt 1 ] ; then
    echo "$(basename $0) <NUM to migrate>"
    exit 1
fi
CT=$1
# The spaces around $CT keep grep from matching a substring of another CTID.
if ssh precious vzlist | grep -F -w -q " $CT " ; then
    ssh precious vzctl stop $CT
    ssh precious vzdump $CT -dumpdir /mnt/pve/dumpster-a-vm/dump/
    pct restore $CT /mnt/pve/dumpster-a-vm/dump/vzdump-openvz-${CT}*.tar -storage dumpster-b-vm
    pct set $CT -net0 name=eth0,bridge=vmbr7
    pct start $CT
else
    echo "This container ID doesn't exist on precious"
    exit 1
fi
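Hypothetical invocation (the script name is made up here; the argument is the OpenVZ CTID on precious):

./migrate-ct.sh 104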
On a few machines with static IPs, the IP can be entered in the main web console.
When a container runs Debian sid, Proxmox doesn't recognise the ostype, so it needs to be restored as 'unmanaged', e.g.:
pct restore NUM /mnt/pve/dumpster-a-vm/dump/vzdump-openvz-NUM*.tar -storage dumpster-b-vm -ostype unmanaged
For VMs:
- on precious:
vzdump NUM -dumpdir /mnt/pve/dumpster-a-vm/dump/
- on confusion:
qmrestore /mnt/pve/dumpster-a-vm/dump/vzdump-qemu-NUM*.vma NUM -storage dumpster-b-vm
DON'T create VMs on the same storage block without removing the original FIRST: the restore duplicates the disks, and later removing the original (and its disks) deletes the rootfs out from under the new machine.
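After the restore (and after removing the original, per the warning above), the standard qm commands can be used to check and boot the VM:

qm config NUM
qm start NUM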
Auth
To enable LDAP on this machine, enable it in PAM like on the LDAP page. This does cause auth to request information from one of its own containers, but this is only an issue if the LDAP VM doesn't come up. root is still locally authed.
Then you MUST restart pvedaemon. This may knock a few containers offline, so watch it. Afterwards, LDAP users exist in the PAM realm.
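On Proxmox 4 that's a normal systemd restart:

systemctl restart pvedaemon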
To generate a pool and a group per LDAP user, do this per user:
pveum useradd "$user"@pam
pveum groupadd "$user"
pveum usermod "$user"@pam -append -group "$user"
pvesh create /pools -poolid "$user"
pveum aclmod /pool/"$user"/ -group "$user" -role PVEAdmin
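A sketch of running that for every LDAP account in one go (the getent/UID filter is an assumption about how the LDAP users show up through NSS here; adjust before use):

#!/bin/bash
# Create a PVE user, group and pool for every NSS-visible account with UID >= 1000.
getent passwd | awk -F: '$3 >= 1000 {print $1}' | while read -r user; do
    pveum useradd "$user"@pam
    pveum groupadd "$user"
    pveum usermod "$user"@pam -append -group "$user"
    pvesh create /pools -poolid "$user"
    pveum aclmod /pool/"$user"/ -group "$user" -role PVEAdmin
done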
VM Issues
Trying to mount NFS inside a Proxmox container under LXC won't work, as the default AppArmor profile (lxc-container-default-cgns) disallows all mount operations. To allow this on specific containers, you need to do the following:
- Add mount fstype=nfs to a new AppArmor profile (e.g. lxc-default-with-mounting)
- Tell proxmox to use this profile:
- Edit /etc/pve/lxc/$CTID.conf to say:
lxc.aa_profile = lxc-container-default-with-mounting
- Restart the container.
Trying to use systemd in a container? Then you also need a mount rule with:
fstype=cgroup
in your AppArmor profile.
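For reference, a sketch of such a profile file, modelled on the stock LXC example profiles and covering both the NFS and cgroup mounts mentioned above (treat the exact rules as an assumption to adapt locally):

# /etc/apparmor.d/lxc/lxc-default-with-mounting
profile lxc-container-default-with-mounting flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # allow NFS mounts inside the container
  mount fstype=nfs,
  mount fstype=nfs4,

  # needed when the container runs systemd
  mount fstype=cgroup -> /sys/fs/cgroup/**,
}

Reload the LXC profiles afterwards (e.g. apparmor_parser -r /etc/apparmor.d/lxc-containers) before restarting the container.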
Clustering
This machine is clustered with Precious to allow HA.
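To check membership and quorum from either node, the standard cluster commands apply:

pvecm status
pvecm nodes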