Job:
periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-ovn-rt-upgrade (all) - 6 runs, 33% failed, 300% of failures match = 100% impact
#1790659335106334720 junit 41 hours ago
May 15 11:16:23.842 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-m1slx3ss-6bb16-8t7sq-worker-b-8q4vz uid/d0600ae6-77aa-42ff-890a-25bb0e5ba554 container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2024-05-15T10:40:36.806288177Z caller=main.go:111 msg="Starting prometheus-config-reloader" version="(version=0.60.1, branch=release-4.12, revision=d1e399d5c)"\nlevel=info ts=2024-05-15T10:40:36.806366226Z caller=main.go:112 build_context="(go=go1.19.13 X:strictfipsruntime, user=root, date=20231109-13:38:17)"\nlevel=info ts=2024-05-15T10:40:36.806696149Z caller=main.go:149 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2024-05-15T10:40:43.214143577Z caller=reloader.go:374 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2024-05-15T10:40:43.214291245Z caller=reloader.go:235 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2024-05-15T10:42:15.239823799Z caller=reloader.go:374 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2024-05-15T10:49:48.338185633Z caller=reloader.go:374 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\n
May 15 11:18:11.954 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.12.0-0.ci-2024-05-14-122945: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-0c042a6ea06109bf93bfddf0b1dc8ab7 expected 33a010772d604aff2cb625d04a9469a47f53c96e has 15d0b0288a4330b90ac89f14c781dfa7349af52c: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-cced2b94c5dbe9ec86838fc7b26c6038, retrying]
May 15 11:18:11.954 - 414s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.12.0-0.ci-2024-05-14-122945: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-0c042a6ea06109bf93bfddf0b1dc8ab7 expected 33a010772d604aff2cb625d04a9469a47f53c96e has 15d0b0288a4330b90ac89f14c781dfa7349af52c: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-cced2b94c5dbe9ec86838fc7b26c6038, retrying]

... 1 lines not shown

#1789934416210956288 junit 3 days ago
May 13 10:14:45.840 - 1s    E node/ci-op-fzrzdsth-6bb16-sxbr5-master-1 reason/FailedToDeleteCGroupsPath May 13 10:14:45.840380 ci-op-fzrzdsth-6bb16-sxbr5-master-1 hyperkube[2130]: I0513 10:14:45.840327    2130 pod_container_manager_linux.go:192] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod1e560cc4916c09f8cfca2339b98f12ed] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod1e560cc4916c09f8cfca2339b98f12ed] : Timed out while waiting for systemd to remove kubepods-burstable-pod1e560cc4916c09f8cfca2339b98f12ed.slice"
May 13 10:16:54.187 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.12.0-0.ci-2024-05-11-181333: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-332bc0f75a4cb15b6f937fd2747ce923 expected 33a010772d604aff2cb625d04a9469a47f53c96e has 15d0b0288a4330b90ac89f14c781dfa7349af52c: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-691032af9def31926c12cdce6cef50e5, retrying]
May 13 10:16:54.187 - 450s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.12.0-0.ci-2024-05-11-181333: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-332bc0f75a4cb15b6f937fd2747ce923 expected 33a010772d604aff2cb625d04a9469a47f53c96e has 15d0b0288a4330b90ac89f14c781dfa7349af52c: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-691032af9def31926c12cdce6cef50e5, retrying]

... 1 lines not shown

#1786672928654364672 junit 12 days ago
May 04 10:21:32.060 - 1s    E disruption/kube-api connection/reused reason/DisruptionBegan disruption/kube-api connection/reused stopped responding to GET requests over reused connections: error running request: 500 Internal Server Error: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"rpc error: code = Unknown desc = malformed header: missing HTTP content-type","code":500}\n
May 04 10:22:46.835 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.12.0-0.ci-2024-05-03-193550: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-0e79df50e8d7959809b11f6720440eb6 expected 33a010772d604aff2cb625d04a9469a47f53c96e has 15d0b0288a4330b90ac89f14c781dfa7349af52c: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-38f18c5997b0620456f8debc0cc3d98a, retrying]
May 04 10:22:46.835 - 97s   E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.12.0-0.ci-2024-05-03-193550: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-0e79df50e8d7959809b11f6720440eb6 expected 33a010772d604aff2cb625d04a9469a47f53c96e has 15d0b0288a4330b90ac89f14c781dfa7349af52c: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-38f18c5997b0620456f8debc0cc3d98a, retrying]

... 1 lines not shown

#1791021752025878528 junit 18 hours ago
May 16 10:22:23.904 E ns/openshift-multus pod/cni-sysctl-allowlist-ds-sb8mh node/ci-op-gqbx3w4d-6bb16-4jlvw-worker-a-vcv2j uid/5b9e8294-2935-42e6-9b85-61bad24b527b container/kube-multus-additional-cni-plugins reason/ContainerExit code/137 cause/Error
May 16 10:24:51.440 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.12.0-0.ci-2024-05-16-010831: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-858429e509635508bbe256ffc0b76f87 expected 33a010772d604aff2cb625d04a9469a47f53c96e has 15d0b0288a4330b90ac89f14c781dfa7349af52c: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-310d7dbf7aae820e2eb7fcbf51a3ace7, retrying]
May 16 10:24:51.440 - 465s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.12.0-0.ci-2024-05-16-010831: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-858429e509635508bbe256ffc0b76f87 expected 33a010772d604aff2cb625d04a9469a47f53c96e has 15d0b0288a4330b90ac89f14c781dfa7349af52c: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-310d7dbf7aae820e2eb7fcbf51a3ace7, retrying]

... 1 lines not shown

#1788847380221661184 junit 6 days ago
May 10 10:19:18.069 - 18s   E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: an error on the server ("Internal Server Error: \"/apis/packages.operators.coreos.com/v1\": Post \"https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 172.30.0.1:443: connect: connection refused") has prevented the request from succeeding
May 10 10:19:59.161 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.12.0-0.ci-2024-05-09-061655: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-b3e327440710c88d17a64c90c65cc0f8 expected 33a010772d604aff2cb625d04a9469a47f53c96e has 15d0b0288a4330b90ac89f14c781dfa7349af52c: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-dbbb561aa171233b007f31024f166fd6, retrying]
May 10 10:19:59.161 - 406s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.12.0-0.ci-2024-05-09-061655: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-b3e327440710c88d17a64c90c65cc0f8 expected 33a010772d604aff2cb625d04a9469a47f53c96e has 15d0b0288a4330b90ac89f14c781dfa7349af52c: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-dbbb561aa171233b007f31024f166fd6, retrying]

... 1 lines not shown

#1787760054506622976 junit 9 days ago
May 07 10:18:47.095 E ns/openshift-multus pod/cni-sysctl-allowlist-ds-gdcn9 node/ci-op-v1nrsizz-6bb16-8ldk9-worker-a-v9mfk uid/b091d4e8-ac0a-48da-afec-cb75ce70c44e container/kube-multus-additional-cni-plugins reason/ContainerExit code/137 cause/Error
May 07 10:21:42.007 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.12.0-0.ci-2024-05-06-183542: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-08ae61fcb995cfa944447fa4315ce873 expected 33a010772d604aff2cb625d04a9469a47f53c96e has 15d0b0288a4330b90ac89f14c781dfa7349af52c: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-04155805173d0761d39ec20f67043892, retrying]
May 07 10:21:42.007 - 464s  E clusteroperator/machine-config condition/Degraded status/True reason/Unable to apply 4.12.0-0.ci-2024-05-06-183542: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-08ae61fcb995cfa944447fa4315ce873 expected 33a010772d604aff2cb625d04a9469a47f53c96e has 15d0b0288a4330b90ac89f14c781dfa7349af52c: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-04155805173d0761d39ec20f67043892, retrying]

... 1 lines not shown

Found in 100.00% of runs (300.00% of failures) across 6 total runs and 1 job (33.33% failed) in 169ms
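
Triage note: every run above degrades the machine-config cluster operator with the same master-pool controller version mismatch (expected 33a010772d604aff2cb625d04a9469a47f53c96e, has 15d0b0288a4330b90ac89f14c781dfa7349af52c) while the masters are still rolling to a new rendered-master configuration, and the Degraded interval lasted between 97s and 465s per run. Assuming access to the live cluster or its must-gather, a minimal sketch for checking whether the pool is stuck or merely slow is to compare what the operator, the pool, and each master node report; the grep pattern below is an illustrative assumption, not taken from the job artifacts:

    # operator and pool status
    oc get clusteroperator machine-config
    oc get machineconfigpool master
    # controller-side view of the version mismatch (pattern is illustrative)
    oc -n openshift-machine-config-operator logs deployment/machine-config-controller | grep -i mismatch
    # per-node desired vs. current rendered config, from the machine-config daemon annotations
    oc get nodes -l node-role.kubernetes.io/master \
      -o custom-columns=NAME:.metadata.name,DESIRED:.metadata.annotations.machineconfiguration\.openshift\.io/desiredConfig,CURRENT:.metadata.annotations.machineconfiguration\.openshift\.io/currentConfig

If all three masters eventually report the latest rendered-master configuration, the Degraded condition is expected to clear on its own; if one master stays pinned to the old rendered config, its machine-config-daemon logs are the next place to look.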