Additional Steps When Replacing Worker Nodes

When you replace an existing worker node (one where peers or orderers are running) with a new worker node, you must complete the following additional steps:

  1. Ensure that the Persistent Volumes that are mounted on the existing node can be migrated to and accessed from the new node. On Oracle Kubernetes Engine, create the new node in the same Availability Domain as the existing node so that the backing block volumes remain attachable.
  2. Stop all instances that use the older node.
  3. Cordon and drain the older node. This might affect Blockchain Platform Manager services if they are running on the older node. Wait for the evicted pods to be rescheduled on the new node.
  4. Run the following commands to list all peers and orderers with their node selectors, and identify the ones that were running on the cordoned node.
    kubectl get peer -A -o=custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,NODESELECTOR:.spec.nodeSelector'
    kubectl get orderernode -A -o=custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,NODESELECTOR:.spec.nodeSelector'
  5. For each peer and orderer whose nodeSelector points to the older node, run the following commands to update the custom resource's .spec.nodeSelector to select the new node.
    kubectl patch peer <PEER> -n <NAMESPACE> -p '{"spec":{"nodeSelector":{"kubernetes.io/hostname":"<NEW_NODE_HOSTNAME>"}}}' --type='merge'
    kubectl patch orderernode <ORDERER> -n <NAMESPACE> -p '{"spec":{"nodeSelector":{"kubernetes.io/hostname":"<NEW_NODE_HOSTNAME>"}}}' --type='merge'
  6. Verify the updated nodeSelector values by running the commands from Step 4 again.
  7. Restart all instances that you stopped in Step 2.
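Steps 3 through 6 can be sketched as a single shell session. The node hostnames `oke-worker-1` (old) and `oke-worker-2` (new) are assumptions for illustration; substitute the names reported by `kubectl get nodes`, and adjust the drain flags for your workloads.

```shell
#!/bin/sh
# Hypothetical node hostnames; substitute the names from `kubectl get nodes`.
OLD_NODE="oke-worker-1"
NEW_NODE="oke-worker-2"

# Step 3: mark the older node unschedulable, then evict its pods. The extra
# flags let the drain proceed past DaemonSet pods and emptyDir volumes.
kubectl cordon "$OLD_NODE"
kubectl drain "$OLD_NODE" --ignore-daemonsets --delete-emptydir-data

# Step 4: list every peer and orderer with its node selector.
COLUMNS='NAMESPACE:.metadata.namespace,NAME:.metadata.name,NODESELECTOR:.spec.nodeSelector'
kubectl get peer -A -o=custom-columns="$COLUMNS"
kubectl get orderernode -A -o=custom-columns="$COLUMNS"

# Step 5: patch every peer and orderer that was pinned to the older node so
# that it selects the new node instead.
PATCH="{\"spec\":{\"nodeSelector\":{\"kubernetes.io/hostname\":\"$NEW_NODE\"}}}"
for KIND in peer orderernode; do
  kubectl get "$KIND" -A -o=custom-columns="$COLUMNS" --no-headers \
    | grep "$OLD_NODE" \
    | while read -r NS NAME _; do
        kubectl patch "$KIND" "$NAME" -n "$NS" -p "$PATCH" --type='merge'
      done
done

# Step 6: rerun the listing to confirm each selector now names the new node.
kubectl get peer -A -o=custom-columns="$COLUMNS"
kubectl get orderernode -A -o=custom-columns="$COLUMNS"
```

The `--type='merge'` patch replaces only the nodeSelector map, leaving the rest of the custom resource spec untouched, which is why it is preferred here over a full resource update.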