We are using Kubernetes PodDisruptionBudgets (PDBs) to prevent voluntary disruptions of our services. This should cover the drain of worker nodes during a worker node upgrade. While a manual kubectl drain fails if an eviction would leave insufficient running replicas, the IBM Cloud worker node upgrade ignores the PodDisruptionBudget and proceeds to shut down the worker node.
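For reference, a minimal PDB of the kind in use here might look like the following sketch (the name and label selector are illustrative, not from the actual cluster):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-service-pdb        # illustrative name
spec:
  minAvailable: 2             # keep at least 2 replicas of the service running
  selector:
    matchLabels:
      app: my-service         # illustrative label
```

With such a PDB in place, `kubectl drain <node> --ignore-daemonsets` uses the eviction API and keeps retrying (effectively blocking) while an eviction would drop the service below `minAvailable`; the expectation is that the managed upgrade honors the same eviction semantics.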
Knowing that, before every upgrade I have to check each service for a defined PDB and determine whether the upgrade would cause a disruption in the sense of that PDB. The only alternative is to drain every node manually to verify that the evictions comply with the PDBs.
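One way to approximate that manual check is to look at each PDB's `status.disruptionsAllowed` (a policy/v1 status field): if it is already 0, any further eviction, and therefore a drain, will block or violate the budget. A small sketch of that check, operating on plain dicts shaped like the items returned by `kubectl get pdb -A -o json` (the example data below is hypothetical):

```python
# Sketch: flag PDBs that would block a node drain, based on
# status.disruptionsAllowed from the policy/v1 PDB status.

def blocking_pdbs(pdb_items):
    """Return (namespace, name) of every PDB that allows no further disruptions."""
    blocked = []
    for item in pdb_items:
        status = item.get("status", {})
        if status.get("disruptionsAllowed", 0) == 0:
            meta = item["metadata"]
            blocked.append((meta["namespace"], meta["name"]))
    return blocked

# Hypothetical example: svc-a still has eviction headroom,
# svc-b is already at its minimum and would block a drain.
items = [
    {"metadata": {"namespace": "prod", "name": "svc-a"},
     "status": {"disruptionsAllowed": 1}},
    {"metadata": {"namespace": "prod", "name": "svc-b"},
     "status": {"disruptionsAllowed": 0}},
]
print(blocking_pdbs(items))  # [('prod', 'svc-b')]
```

This only spots budgets that are already exhausted cluster-wide; a node-aware check would additionally have to map the pods on the node being drained to their covering PDBs.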
So please make the worker node upgrade automation aware of the actual workload: respect the PDBs, and not only the ConfigMap ibm-cluster-update-configuration with its unavailability rules at the worker node level.
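For context, the node-level rules live in a ConfigMap along these lines. This is only a sketch: the data key shown is illustrative of the kind of unavailability setting the ConfigMap carries, not an authoritative schema; the exact keys are defined in the IBM Cloud documentation.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ibm-cluster-update-configuration
  namespace: kube-system
data:
  # Illustrative key: caps how many worker nodes may be
  # unavailable at once during an upgrade. It says nothing
  # about the pods (and their PDBs) running on those nodes.
  maxUnavailable: "20%"
```

The point of the request is exactly this gap: a node-count rule cannot express per-service availability guarantees, which is what the PDBs encode.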
Provide a second, workload-aware upgrade option so the user can choose (and can still use the existing forced upgrade option).