d9982a3b7e
The existing /var/run/.node_locked flag file is volatile, meaning it is lost over a host reboot, which has DOR (Dead Office Recovery) implications. Service Management (SM) sometimes selects and activates services on a locked controller following a DOR.

This update is part one of a two-part update that solves both of the above problems. Part two is a change to SM in the ha git. This update can be merged without part two.

This update maintains the existing volatile node locked file because it is referenced by other system services. To minimize the change, and therefore the patchback impact, a new non-volatile 'backup' of the existing node locked flag file is created.

This update modifies the mtcAgent and mtcClient to introduce the new backup file and to manage it in lockstep with the existing file so that the two are always present or absent together; an illustrative sketch of this synchronization follows the test plan below.

Note: A design choice was made to manage both files directly in the code rather than symlink one to the other, since that is simpler and more reliable than adding symlink management support. At some point in the future the volatile file could be deprecated, contingent upon identifying and updating all services that directly reference it.

This update also removes some dead code adjacent to this change.

Test Plan:

This test plan covers the maintenance management of both files to ensure they always align and the expected behavior exists.

PASS: Verify AIO DX install.
PASS: Verify Storage system install.
PASS: Verify swact back and forth.
PASS: Verify mtcClient and mtcAgent logging.
PASS: Verify node lock/unlock soak.

Non-volatile (Nv) node locked management test cases:

PASS: Verify the Nv node locked file is present when a node is locked. Confirmed on all node types.
PASS: Verify any system node install comes up locked with both node locked flag files present.
PASS: Verify mtcClient logs when a node is locked and unlocked.
PASS: Verify the Nv node locked file's present/absent state mirrors that of the already existing /var/run/.node_locked flag file.
PASS: Verify the node locked file is present on controller-0 during the ansible run following the initial install and is removed as part of the self-unlock.
PASS: Verify the Nv node locked file is removed over the unlock, along with the administrative state change, prior to the unlock reboot.
PASS: Verify both node locked files are always present or absent together.
PASS: Verify node locked file management while the management interface is down. The file is still managed over the cluster network.
PASS: Verify node locked file management while the cluster interface is down. The file is still managed over the management network.
PASS: Verify behavior when the new unlocked message is received by an mtcClient process that does not support it; an unknown command log is produced.
PASS: Verify a node locked state is auto-corrected while not in a locked/unlocked action change state.
      ... Manually remove either file on a locked node and verify both are recreated within 5 seconds.
      ... Manually create either node locked file on an unlocked worker or storage node and verify the created files are removed within 5 seconds.
      Note: doing this to the new backup file on the active controller will cause SM to shut down, as expected.
PASS: Verify the Nv node locked file is auto created on a node that spontaneously rebooted while it was unlocked. During the reboot the node was administratively locked. The node should come online with both node locked files present.
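For illustration only, the sketch below outlines one way the two flag files could be kept in lockstep with the node's administrative state. The non-volatile backup path and all function and constant names are assumptions made for this sketch; only /var/run/.node_locked comes from this change description.

    // Illustrative sketch only: the backup path, constants and function
    // names below are assumptions, not the actual mtce implementation.
    #include <fstream>
    #include <sys/stat.h>
    #include <unistd.h>

    #define NODE_LOCKED_FILE        "/var/run/.node_locked" /* existing volatile flag        */
    #define NODE_LOCKED_FILE_BACKUP "/path/to/.node_locked" /* hypothetical non-volatile flag */

    static bool flag_exists(const char *path)
    {
        struct stat st;
        return (stat(path, &st) == 0);
    }

    static void flag_create(const char *path)
    {
        if (!flag_exists(path))
        {
            std::ofstream flag(path); /* opening the stream creates the empty flag file */
        }
    }

    /* Called on lock/unlock handling and from a periodic audit so that both
     * flag files always track the administrative state together; a manually
     * removed or created file is corrected on the next audit pass. */
    void manage_node_locked_flags(bool node_is_locked)
    {
        if (node_is_locked)
        {
            flag_create(NODE_LOCKED_FILE);
            flag_create(NODE_LOCKED_FILE_BACKUP);
        }
        else
        {
            unlink(NODE_LOCKED_FILE); /* no-op if already absent */
            unlink(NODE_LOCKED_FILE_BACKUP);
        }
    }

Managing both files directly this way avoids adding symlink handling while still guaranteeing the files are present or absent together, consistent with the design choice noted above.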
Partial-Bug: 2051578
Change-Id: I0c279b92491e526682d43d78c66f8736934221de
Signed-off-by: Eric MacDonald <eric.macdonald@windriver.com>