Job:
periodic-ci-openshift-release-master-ci-4.13-upgrade-from-stable-4.12-e2e-gcp-ovn-rt-upgrade (all) - 17 runs, 24% failed, 375% of failures match = 88% impact
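The summary percentages follow from three counts, which can be reconstructed from the figures above: 17 total runs, 4 failed runs (23.53%), and 15 runs matching the search (88.24% impact). A quick check of the arithmetic, including why "% of failures" can exceed 100% (the counts are inferred from the reported percentages, so treat this as a reconstruction):

```python
total_runs = 17
failed_runs = 4      # 4/17 ≈ 23.53%, reported as "24% failed"
matching_runs = 15   # 15/17 ≈ 88.24%, reported as "88% impact"

impact = matching_runs / total_runs * 100           # share of all runs matching
failure_rate = failed_runs / total_runs * 100       # share of runs that failed
match_vs_failures = matching_runs / failed_runs * 100  # matches per failure

print(round(impact, 2))             # → 88.24
print(round(failure_rate, 2))       # → 23.53
print(round(match_vs_failures, 2))  # → 375.0
```

The 375% figure exceeds 100% because the search matches events in runs that still passed overall: 15 matching runs divided by only 4 failed runs.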
#1791203424675565568 junit (4 hours ago)
May 16 22:27:25.469 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-frl4pwtg-3a480-rqkt8-worker-b-pgrbk uid/96093532-1d6d-4d2e-b831-3e31a0abe7ee container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2024-05-16T21:49:09.057783112Z caller=main.go:111 msg="Starting prometheus-config-reloader" version="(version=0.63.0, branch=rhaos-4.13-rhel-8, revision=386c3b2)"\nlevel=info ts=2024-05-16T21:49:09.057882984Z caller=main.go:112 build_context="(go=go1.19.13 X:strictfipsruntime, platform=linux/amd64, user=root, date=20240515-05:38:43)"\nlevel=info ts=2024-05-16T21:49:09.058221957Z caller=main.go:149 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2024-05-16T21:49:15.452996157Z caller=reloader.go:374 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2024-05-16T21:49:15.453121267Z caller=reloader.go:235 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2024-05-16T21:50:31.780753246Z caller=reloader.go:374 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2024-05-16T21:52:40.564418054Z caller=reloader.go:374 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2024-05-16T21:56:28.517112479Z caller=reloader.go:374 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules
May 16 22:28:10.029 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-16-191307: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-ace6423e554a90bd1a8ead8c53f29a7e expected 2d7a09cc9acd5ffd96ef777a1bd2c348cd49a0c9 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-a0c3db1b3d3d05e61db8e5ff98ce14c0, retrying]
May 16 22:28:10.029 - 608s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-16-191307: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-ace6423e554a90bd1a8ead8c53f29a7e expected 2d7a09cc9acd5ffd96ef777a1bd2c348cd49a0c9 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-a0c3db1b3d3d05e61db8e5ff98ce14c0, retrying]

... 1 line not shown

#1791055589065887744 junit (16 hours ago)
May 16 12:42:26.631 E ns/e2e-k8s-sig-apps-job-upgrade-9057 pod/foo-hqxxh node/ci-op-4gqxx3d9-3a480-nfh7b-worker-b-wb2cq uid/0288be86-e6a8-4157-abee-9d4fceba3d92 container/c reason/ContainerExit code/137 cause/Error
May 16 12:43:21.100 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.42: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-c798ef2e7afff26eb3abc2998a9d4613 expected 2d7a09cc9acd5ffd96ef777a1bd2c348cd49a0c9 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-5bc5df821f9967289028956d893c0f12, retrying]
May 16 12:43:21.100 - 604s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.42: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-c798ef2e7afff26eb3abc2998a9d4613 expected 2d7a09cc9acd5ffd96ef777a1bd2c348cd49a0c9 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-5bc5df821f9967289028956d893c0f12, retrying]

... 1 line not shown

#1790788541391835136 junit (34 hours ago)
May 15 19:03:03.217 E ns/openshift-multus pod/multus-admission-controller-767658f5bf-mmqnl node/ci-op-fq228fqy-3a480-2kbbs-master-2 uid/42d5340d-8f03-4e0b-a461-b0c52d3a6ac8 container/multus-admission-controller reason/ContainerExit code/137 cause/Error
May 15 19:04:43.002 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-15-165412: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-d7db59c72d1d55af4f14f7bf8903b58e expected 2d7a09cc9acd5ffd96ef777a1bd2c348cd49a0c9 has 33a010772d604aff2cb625d04a9469a47f53c96e: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-3ced2f9e87a338db48d1e9a9afd0808d, retrying]
May 15 19:04:43.002 - 283s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-15-165412: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-d7db59c72d1d55af4f14f7bf8903b58e expected 2d7a09cc9acd5ffd96ef777a1bd2c348cd49a0c9 has 33a010772d604aff2cb625d04a9469a47f53c96e: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-3ced2f9e87a338db48d1e9a9afd0808d, retrying]

... 1 line not shown

#1791096180516589568 junit (13 hours ago)
May 16 15:36:26.826 - 1s    E node/ci-op-0ri0ni1h-3a480-r5m6s-master-2 reason/FailedToDeleteCGroupsPath May 16 15:36:26.826601 ci-op-0ri0ni1h-3a480-r5m6s-master-2 kubenswrapper[2213]: I0516 15:36:26.826524    2213 pod_container_manager_linux.go:191] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod02ffb61a61fe5476633b8039fbbbf403] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod02ffb61a61fe5476633b8039fbbbf403] : Timed out while waiting for systemd to remove kubepods-burstable-pod02ffb61a61fe5476633b8039fbbbf403.slice"
May 16 15:37:03.494 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-16-131307: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-f45cdcf7f4ae8380d12d30001aa4cb09 expected 2d7a09cc9acd5ffd96ef777a1bd2c348cd49a0c9 has 33a010772d604aff2cb625d04a9469a47f53c96e: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-d8edac4c3d7677f15e51fd133d81091f, retrying]
May 16 15:37:03.494 - 239s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-16-131307: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-f45cdcf7f4ae8380d12d30001aa4cb09 expected 2d7a09cc9acd5ffd96ef777a1bd2c348cd49a0c9 has 33a010772d604aff2cb625d04a9469a47f53c96e: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-d8edac4c3d7677f15e51fd133d81091f, retrying]

... 1 line not shown

#1790697945339793408 junit (39 hours ago)
May 15 13:19:18.565 - 1s    E node/ci-op-2ikbfd01-3a480-5hmvd-master-2 reason/FailedToDeleteCGroupsPath May 15 13:19:18.565324 ci-op-2ikbfd01-3a480-5hmvd-master-2 kubenswrapper[2226]: I0515 13:19:18.565119    2226 pod_container_manager_linux.go:191] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod71e4f3ab285c195af68a63bb81a93fe0] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod71e4f3ab285c195af68a63bb81a93fe0] : Timed out while waiting for systemd to remove kubepods-burstable-pod71e4f3ab285c195af68a63bb81a93fe0.slice"
May 15 13:20:15.422 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-15-105412: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-747bba4e0701256a279317414e238e72 expected 2d7a09cc9acd5ffd96ef777a1bd2c348cd49a0c9 has 33a010772d604aff2cb625d04a9469a47f53c96e: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-abfa2b132d72d7343277075ff8d4d2fc, retrying]
May 15 13:20:15.422 - 253s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-15-105412: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-747bba4e0701256a279317414e238e72 expected 2d7a09cc9acd5ffd96ef777a1bd2c348cd49a0c9 has 33a010772d604aff2cb625d04a9469a47f53c96e: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-abfa2b132d72d7343277075ff8d4d2fc, retrying]

... 1 line not shown

#1790292722658054144 junit (2 days ago)
May 14 10:09:46.719 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-tmdzzlcf-3a480-vrhrz-worker-b-vzv98 uid/c009e553-2b5b-41ba-b81e-b29e1c1504e6 container/alertmanager reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 14 10:10:12.914 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-14-080341: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-ffc0bc35fe3341d53b659f220234efc7 expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-42c9bbb1730c9950daf4a24ae780402b, retrying]
May 14 10:10:12.914 - 558s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-14-080341: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-ffc0bc35fe3341d53b659f220234efc7 expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-42c9bbb1730c9950daf4a24ae780402b, retrying]

... 1 line not shown

#1790436907864297472 junit (2 days ago)
May 14 19:44:01.793 E ns/openshift-monitoring pod/openshift-state-metrics-58bd8f6545-vwtm7 node/ci-op-9nhdwsix-3a480-gvqlr-worker-b-vw57g uid/2d656170-e076-4e05-ae4c-4d17556e4f14 container/openshift-state-metrics reason/ContainerExit code/2 cause/Error
May 14 19:45:15.494 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-14-140838: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-cd0be58a19e9b09a4d1718f4528a1767 expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-7919509e0408ed55f4b80f3f16d61e12, retrying]
May 14 19:45:15.494 - 567s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-14-140838: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-cd0be58a19e9b09a4d1718f4528a1767 expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-7919509e0408ed55f4b80f3f16d61e12, retrying]

... 1 line not shown

#1790977303497412608 junit (21 hours ago)
May 16 07:32:54.539 E ns/openshift-multus pod/multus-admission-controller-767658f5bf-dr2t4 node/ci-op-f9hwh793-3a480-xppv9-master-2 uid/d8caaa8a-ebfa-4bad-ba88-d4ad4895d9e0 container/multus-admission-controller reason/ContainerExit code/137 cause/Error
May 16 07:35:47.715 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-16-052445: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-e7bf0dc5025f2c12328bd03dcb478491 expected 2d7a09cc9acd5ffd96ef777a1bd2c348cd49a0c9 has 33a010772d604aff2cb625d04a9469a47f53c96e: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-9e744fe489e9761ca53583dff7608653, retrying]
May 16 07:35:47.715 - 125s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-16-052445: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-e7bf0dc5025f2c12328bd03dcb478491 expected 2d7a09cc9acd5ffd96ef777a1bd2c348cd49a0c9 has 33a010772d604aff2cb625d04a9469a47f53c96e: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-9e744fe489e9761ca53583dff7608653, retrying]

... 1 line not shown

#1790384825580916736 junit (2 days ago)
May 14 16:17:53.300 E ns/e2e-k8s-sig-apps-job-upgrade-540 pod/foo-945hl node/ci-op-htjhgghs-3a480-4ptl4-worker-b-k2scq uid/0db4718e-f547-44ed-b745-2daea9c1f3bf container/c reason/ContainerExit code/137 cause/Error
May 14 16:18:18.567 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-14-140838: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-1a050461b1281836adae0c6d99cbad6f expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-9b286acb8d8da0720159d1aefc28255e, retrying]
May 14 16:18:18.567 - 614s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-14-140838: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-1a050461b1281836adae0c6d99cbad6f expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-9b286acb8d8da0720159d1aefc28255e, retrying]

... 1 line not shown

#1788928666093228032 junit (6 days ago)
May 10 15:48:56.146 - 1s    E node/ci-op-9hn9h6vn-3a480-m74zs-master-1 reason/FailedToDeleteCGroupsPath May 10 15:48:56.146392 ci-op-9hn9h6vn-3a480-m74zs-master-1 kubenswrapper[2217]: I0510 15:48:56.146314    2217 pod_container_manager_linux.go:191] "Failed to delete cgroup paths" cgroupName=[kubepods burstable podaa28de30444e7cd33bff8748b35e0640] err="unable to destroy cgroup paths for cgroup [kubepods burstable podaa28de30444e7cd33bff8748b35e0640] : Timed out while waiting for systemd to remove kubepods-burstable-podaa28de30444e7cd33bff8748b35e0640.slice"
May 10 15:50:46.877 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-10-134150: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-df686aebe9848ab2f71d93f29be63a3d expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-b305c9232d370cf10b5386f03f2ac1bd, retrying]
May 10 15:50:46.877 - 594s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-10-134150: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-df686aebe9848ab2f71d93f29be63a3d expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-b305c9232d370cf10b5386f03f2ac1bd, retrying]

... 1 line not shown

#1788796231594545152 junit (6 days ago)
May 10 07:08:22.392 E ns/openshift-monitoring pod/kube-state-metrics-7cdb78b8db-29lck node/ci-op-x3qccjb2-3a480-m66l6-worker-b-dt6xt uid/f48c89fd-e9c2-423a-9c65-11c46c1966b1 container/kube-state-metrics reason/ContainerExit code/2 cause/Error
May 10 07:08:42.746 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-10-045757: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-26378417daf1dea1d7c66ab31f7a1622 expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-7c385c7a60bf045d3388ab5d430e081a, retrying]
May 10 07:08:42.746 - 571s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-10-045757: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-26378417daf1dea1d7c66ab31f7a1622 expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-7c385c7a60bf045d3388ab5d430e081a, retrying]

... 1 line not shown

#1788556403221204992 junit (7 days ago)
May 09 15:15:51.605 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5p57rfpt-3a480-2bchv-worker-b-6qqgt uid/0c030d16-e463-44e1-8631-e797e431bb54 container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2024-05-09T14:37:52.971819686Z caller=main.go:111 msg="Starting prometheus-config-reloader" version="(version=0.63.0, branch=rhaos-4.13-rhel-8, revision=7aaa0d9)"\nlevel=info ts=2024-05-09T14:37:52.97194669Z caller=main.go:112 build_context="(go=go1.19.13 X:strictfipsruntime, platform=linux/amd64, user=root, date=20240507-18:09:48)"\nlevel=info ts=2024-05-09T14:37:52.972344383Z caller=main.go:149 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2024-05-09T14:37:59.164491939Z caller=reloader.go:374 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2024-05-09T14:37:59.16460983Z caller=reloader.go:235 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2024-05-09T14:40:36.611849483Z caller=reloader.go:374 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2024-05-09T14:45:51.930205922Z caller=reloader.go:374 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\n
May 09 15:16:39.716 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-09-130302: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-5bbd35fdd0a97be0719aab7002f8dc13 expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-5b3c6a112df6ad2317fe9aa4a45b715e, retrying]
May 09 15:16:39.716 - 597s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-09-130302: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-5bbd35fdd0a97be0719aab7002f8dc13 expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-5b3c6a112df6ad2317fe9aa4a45b715e, retrying]

... 1 line not shown

#1788233128502890496 junit (8 days ago)
May 08 18:01:43.000 - 1s    E ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new reason/DisruptionBegan ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new stopped responding to GET requests over new connections: Get "https://test-disruption-new-openshift-image-registry.apps.ci-op-idk6wdny-3a480.XXXXXXXXXXXXXXXXXXXXXX/healthz": read tcp 10.129.194.3:59516->34.29.254.93:443: read: connection reset by peer
May 08 18:03:39.478 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-08-153833: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-30a36c6c5a4324a0e358647548c9615d expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-86f605ab616b5d7ff3c75364197bb72e, retrying]
May 08 18:03:39.478 - 211s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-08-153833: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-30a36c6c5a4324a0e358647548c9615d expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-86f605ab616b5d7ff3c75364197bb72e, retrying]

... 1 line not shown

#1788040008859389952 junit (9 days ago)
May 08 05:01:28.884 - 1s    E node/ci-op-8k3txdvt-3a480-lrb92-master-2 reason/FailedToDeleteCGroupsPath May 08 05:01:28.884665 ci-op-8k3txdvt-3a480-lrb92-master-2 kubenswrapper[2220]: I0508 05:01:28.884584    2220 pod_container_manager_linux.go:191] "Failed to delete cgroup paths" cgroupName=[kubepods burstable podd82e7f1e-8c76-46bc-be5f-c91d6c002a9e] err="unable to destroy cgroup paths for cgroup [kubepods burstable podd82e7f1e-8c76-46bc-be5f-c91d6c002a9e] : Timed out while waiting for systemd to remove kubepods-burstable-podd82e7f1e_8c76_46bc_be5f_c91d6c002a9e.slice"
May 08 05:04:17.160 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-08-025210: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-3b81dabb40f2d6ccc258afec3c70bb63 expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-b4e5468f89fae3117848660586dbfe6f, retrying]
May 08 05:04:17.160 - 211s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-08-025210: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-3b81dabb40f2d6ccc258afec3c70bb63 expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-b4e5468f89fae3117848660586dbfe6f, retrying]

... 1 line not shown

#1788135084574904320 junit (8 days ago)
May 08 11:20:51.467 E ns/e2e-k8s-sig-apps-job-upgrade-2173 pod/foo-46kvp node/ci-op-9bi4153c-3a480-xr64q-worker-b-zl2x5 uid/e7751521-8f2a-4ef7-9377-8533411fa11f container/c reason/ContainerExit code/137 cause/Error
May 08 11:20:56.782 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.nightly-2024-05-08-090908: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-08c665de93661db26ec841d72b865009 expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-4c66abab56d72ded82e235b8b1a4df45, retrying]
May 08 11:20:56.782 - 554s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.13.0-0.nightly-2024-05-08-090908: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-08c665de93661db26ec841d72b865009 expected bb00d4f6f57ab3a3fc11644a958a6e1781389a23 has 33a010772d604aff2cb625d04a9469a47f53c96e: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-4c66abab56d72ded82e235b8b1a4df45, retrying]

... 1 line not shown

Found in 88.24% of runs (375.00% of failures) across 17 total runs and 1 job (23.53% failed).
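Every run above degrades machine-config with the same signature: "controller version mismatch for rendered-master-<hash> expected <sha> has <sha>". Notably, the "has" SHA is identical across all runs (33a0107…) while the "expected" SHA tracks the target payload, consistent with the outgoing 4.12 controller's rendered configs still being current mid-upgrade. A small helper (illustrative; the function and regex names are my own) to pull the relevant fields out of such a message for triage:

```python
import re

# Matches the MCO "controller version mismatch" fragment seen in the logs above.
MISMATCH_RE = re.compile(
    r"controller version mismatch for (?P<rendered>\S+) "
    r"expected (?P<expected>[0-9a-f]{40}) has (?P<actual>[0-9a-f]{40})"
)

def parse_mismatch(message: str):
    """Return (rendered_config, expected_sha, actual_sha), or None if absent."""
    m = MISMATCH_RE.search(message)
    if not m:
        return None
    return m.group("rendered"), m.group("expected"), m.group("actual")

# Example input taken verbatim from the first run above.
line = ("pool master has not progressed to latest configuration: "
        "controller version mismatch for rendered-master-ace6423e554a90bd1a8ead8c53f29a7e "
        "expected 2d7a09cc9acd5ffd96ef777a1bd2c348cd49a0c9 "
        "has 33a010772d604aff2cb625d04a9469a47f53c96e")
print(parse_mismatch(line))
```

Grouping parsed tuples by the "actual" SHA across runs makes the shared-controller pattern visible at a glance.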