Job:
periodic-ci-openshift-release-master-ci-4.13-upgrade-from-stable-4.12-e2e-azure-ovn-upgrade (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#1790373533675687936 junit 2 days ago
May 14 15:54:10.898 E ns/openshift-ovn-kubernetes pod/ovnkube-node-4skhw node/ci-op-f2t75sr7-efead-rjwj2-worker-centralus2-htqjm uid/38dc56ba-b1a8-4c84-9fea-d48958181ad1 container/ovnkube-node reason/ContainerExit code/137 cause/ContainerStatusUnknown The container could not be located when the pod was deleted.  The container used to be Running
May 14 15:54:12.240 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.ci-2024-05-14-094503: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-11161778c6536a06ca152b546ae9a22e expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-0cfc5a50c7192b9a64eb0b1081b4544f, retrying]
May 14 15:54:12.240 - 519s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.ci-2024-05-14-094503: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-11161778c6536a06ca152b546ae9a22e expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-0cfc5a50c7192b9a64eb0b1081b4544f, retrying]

... 1 line not shown

#1787836596838469632 junit 9 days ago
May 07 15:35:45.344 - 1s    E node/ci-op-lii24t84-efead-6gls5-worker-centralus3-p87dg reason/FailedToDeleteCGroupsPath May 07 15:35:45.344316 ci-op-lii24t84-efead-6gls5-worker-centralus3-p87dg kubenswrapper[2279]: I0507 15:35:45.344249    2279 pod_container_manager_linux.go:191] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod08365f76-fcd7-4801-be3f-9b259d11f99d] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod08365f76-fcd7-4801-be3f-9b259d11f99d] : Timed out while waiting for systemd to remove kubepods-burstable-pod08365f76_fcd7_4801_be3f_9b259d11f99d.slice"
May 07 15:36:14.613 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.ci-2024-05-06-192520: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-b7a314699b1c0b1af3972bdad9f788bf expected 88140227ee7d79c77092efd92b5a4363971333e2 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-8ca86d79b7c3dd808a5352dcc526dab3, retrying]
May 07 15:36:14.613 - 482s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.ci-2024-05-06-192520: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-b7a314699b1c0b1af3972bdad9f788bf expected 88140227ee7d79c77092efd92b5a4363971333e2 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-8ca86d79b7c3dd808a5352dcc526dab3, retrying]

... 1 line not shown

Found in 100.00% of runs (100.00% of failures) across 2 total runs and 1 job (100.00% failed) in 421ms