Job:
#OCPBUGS-30267 issue 4 weeks ago [IBMCloud] MonitorTests liveness/readiness probe error events repeat MODIFIED
Mar 12 18:52:24.937 - 58s E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
#OCPBUGS-15430 issue 13 days ago KubeAPIDown alert rename and/or degraded status ASSIGNED
We have many guards that make sure there are always at least two instances of the kube-apiserver. If we ever drop to a single kube-apiserver and that causes disruption for clients, other alerts such as KubeAPIErrorBudgetBurn will fire.
KubeAPIDown is there to make sure that Prometheus, and really any client, can reach the kube-apiserver, which they can even when only one instance of the kube-apiserver is running. If they can't, or if that availability is disrupted, `KubeAPIErrorBudgetBurn` will fire.
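For context, the ALERTS series referenced throughout the runs below can be inspected directly. The following is a minimal sketch (not part of the CI tooling) that queries a Prometheus endpoint for the KubeAPIErrorBudgetBurn series; the endpoint URL is an assumption, and the query simply mirrors the labels shown in these log lines.

```go
// Hedged sketch: query the ALERTS series that the intervals below are built
// from, via the standard Prometheus HTTP API. The endpoint URL is an
// assumption; the query string mirrors the labels seen in this report.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// promVector models only the subset of the /api/v1/query response we read.
type promVector struct {
	Status string `json:"status"`
	Data   struct {
		Result []struct {
			Metric map[string]string `json:"metric"`
		} `json:"result"`
	} `json:"data"`
}

func main() {
	// Assumed port-forwarded or in-cluster Prometheus endpoint (hypothetical).
	base := "http://localhost:9090"
	query := `ALERTS{alertname="KubeAPIErrorBudgetBurn",namespace="openshift-kube-apiserver"}`

	resp, err := http.Get(base + "/api/v1/query?query=" + url.QueryEscape(query))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var v promVector
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}

	// Each result carries alertstate ("pending" or "firing") plus the
	// long/short burn-rate windows and severity, as in the log lines above.
	for _, r := range v.Data.Result {
		m := r.Metric
		fmt.Printf("state=%s severity=%s long=%s short=%s\n",
			m["alertstate"], m["severity"], m["long"], m["short"])
	}
}
```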
Comment 23058588 by Marcel Härri at 2023-09-19T06:57:07.949+0000
periodic-ci-openshift-multiarch-master-nightly-4.12-upgrade-from-stable-4.11-ocp-e2e-aws-sdn-arm64 (all) - 26 runs, 35% failed, 167% of failures match = 58% impact
#1784574814078373888 junit 6 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m30s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1m30s, firing for 0s:
Apr 28 13:57:28.932 - 90s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
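The "pending for X, firing for Y (maxAllowed=0s)" wording in these failures comes from summing interval durations per alert state and comparing the totals against an allowance. Below is a minimal sketch of that bookkeeping, with hypothetical types; it is not the openshift/origin implementation.

```go
// Hedged sketch of the bookkeeping behind messages like "pending for 1m30s,
// firing for 0s (maxAllowed=0s)": sum interval durations per alertstate and
// fail when either total exceeds the allowance. Types and wording here are
// illustrative, not the actual CI invariant code.
package main

import (
	"fmt"
	"time"
)

type alertInterval struct {
	State    string // "pending" or "firing"
	Duration time.Duration
}

// checkBudgetBurn returns an error when observed pending or firing time
// exceeds maxAllowed, mirroring the invariant's failure message.
func checkBudgetBurn(intervals []alertInterval, maxAllowed time.Duration) error {
	var pending, firing time.Duration
	for _, iv := range intervals {
		switch iv.State {
		case "pending":
			pending += iv.Duration
		case "firing":
			firing += iv.Duration
		}
	}
	if pending > maxAllowed || firing > maxAllowed {
		return fmt.Errorf("KubeAPIErrorBudgetBurn was at or above pending (maxAllowed=%s): pending for %s, firing for %s",
			maxAllowed, pending, firing)
	}
	return nil
}

func main() {
	// The 90s pending interval from the run above.
	intervals := []alertInterval{{State: "pending", Duration: 90 * time.Second}}
	if err := checkBudgetBurn(intervals, 0); err != nil {
		fmt.Println(err)
	}
}
```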
#1783825312132370432 junit 2 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 10m38s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 10m38s, firing for 0s:
Apr 26 12:26:50.386 - 638s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1783459678039052288 junit 3 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h13m30s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1h13m30s, firing for 0s:
Apr 25 12:14:52.651 - 122s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 25 12:14:52.651 - 1034s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 25 12:14:52.651 - 3254s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1782308123818594304 junit 6 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 19m18s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 19m18s, firing for 0s:
Apr 22 07:52:26.688 - 1158s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1782248742406066176 junit 6 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 9m26s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 9m26s, firing for 0s:
Apr 22 03:55:24.972 - 448s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 22 04:04:24.972 - 118s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1783117257149255680 junit 4 days ago
Apr 24 13:59:54.006 E ns/openshift-machine-api pod/cluster-autoscaler-operator-f8566d7c5-5qhzx node/ip-10-0-163-227.us-east-2.compute.internal uid/6455911e-6bb9-432b-9c50-cdff6de8c681 container/cluster-autoscaler-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Apr 24 13:59:55.058 E ns/openshift-insights pod/insights-operator-7d7d6cc9fc-8wr8w node/ip-10-0-163-227.us-east-2.compute.internal uid/9b8a2bb7-5c8a-4df6-ab6d-efb4053eac86 container/insights-operator reason/ContainerExit code/2 cause/Error se" has state "pending"\nI0424 13:59:30.115770       1 conditional_gatherer.go:278] alert "APIRemovedInNextReleaseInUse" has state "pending"\nI0424 13:59:30.115800       1 conditional_gatherer.go:278] alert "AlertmanagerReceiversNotConfigured" has state "firing"\nI0424 13:59:30.115833       1 conditional_gatherer.go:278] alert "ClusterNotUpgradeable" has state "pending"\nI0424 13:59:30.115846       1 conditional_gatherer.go:278] alert "KubeAPIErrorBudgetBurn" has state "pending"\nI0424 13:59:30.115851       1 conditional_gatherer.go:278] alert "PodSecurityViolation" has state "firing"\nI0424 13:59:30.115855       1 conditional_gatherer.go:278] alert "Watchdog" has state "firing"\nI0424 13:59:30.115925       1 conditional_gatherer.go:288] updating version cache for conditional gatherer\nI0424 13:59:30.124231       1 conditional_gatherer.go:296] cluster version is '4.12.0-0.nightly-arm64-2024-04-24-125211'\nI0424 13:59:30.124335       1 tasks_processing.go:45] number of workers: 1\nI0424 13:59:30.124369       1 tasks_processing.go:69] worker 0 listening for tasks.\nI0424 13:59:30.124397       1 tasks_processing.go:71] worker 0 working on conditional_gatherer_rules task.\nI0424 13:59:30.124478       1 recorder.go:70] Recording insights-operator/conditional-gatherer-rules with fingerprint=8dbbbde181184600277bd0c8401374b23c24c4f4b08634e52ed045ff5aa12179\nI0424 13:59:30.124512       1 gather.go:180] gatherer "conditional" function "conditional_gatherer_rules" took 1.428µs to process 1 records\nI0424 13:59:30.124548       1 tasks_processing.go:74] worker 0 stopped.\nI0424 13:59:30.124581       1 periodic.go:132] Periodic gather conditional completed in 305ms\nI0424 13:59:30.124757       1 recorder.go:70] Recording insights-operator/gathers with fingerprint=30b5b73f3004426a425dcf2035e2f367f12e8787fe20de064118b6cf51dfcd69\nI0424 13:59:30.125526       1 diskrecorder.go:70] Writing 184 records to /var/lib/insights-operator/insights-2024-04-24-135930.tar.gz\nI0424 13:59:30.139589       1 diskrecorder.go:51] Wrote 184 records to disk in 14ms\n
Apr 24 13:59:55.058 E ns/openshift-insights pod/insights-operator-7d7d6cc9fc-8wr8w node/ip-10-0-163-227.us-east-2.compute.internal uid/9b8a2bb7-5c8a-4df6-ab6d-efb4053eac86 container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1783117257149255680 junit 4 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 51m0s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 51m0s, firing for 0s:
Apr 24 13:31:32.441 - 286s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 24 13:31:32.441 - 2746s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 24 13:37:50.441 - 28s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1782763975809699840 junit 5 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 33m18s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 33m18s, firing for 0s:
Apr 23 14:07:08.038 - 54s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 23 14:07:08.038 - 1944s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1781346115858206720 junit 9 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m44s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 8m44s, firing for 0s:
Apr 19 16:08:21.810 - 524s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1781301920086888448 junit 9 days ago
Found files: [/logs/artifacts/junit/test-failures-summary_20240419-131811.json /logs/artifacts/junit/test-failures-summary_20240419-142907.json]
response Body: {"ProwJobName":"periodic-ci-openshift-multiarch-master-nightly-4.12-upgrade-from-stable-4.11-ocp-e2e-aws-sdn-arm64","ProwJobRunID":1781301920086888448,"Release":"4.12","CompareRelease":"4.12","Tests":[{"Name":"[bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info","Risk":{"Level":{"Name":"High","Level":100},"Reasons":["This test has passed 100.00% of 26 runs on jobs ['periodic-ci-openshift-multiarch-master-nightly-4.12-upgrade-from-stable-4.11-ocp-e2e-aws-sdn-arm64'] in the last 14 days."]},"OpenBugs":[]}],"OverallRisk":{"Level":{"Name":"High","Level":100},"Reasons":["Maximum failed test risk: High"]},"OpenBugs":[]}
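The response body above is the risk-analysis summary for this run. A minimal sketch of decoding it follows; the field names are taken from that JSON, while the struct shape and the stdin input are assumptions and only cover the fields printed here.

```go
// Hedged sketch: decode a risk summary like the response body above. Field
// names come from that JSON; the struct is deliberately partial.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type riskLevel struct {
	Name  string `json:"Name"`
	Level int    `json:"Level"`
}

type riskSummary struct {
	ProwJobName string `json:"ProwJobName"`
	Tests       []struct {
		Name string `json:"Name"`
		Risk struct {
			Level   riskLevel `json:"Level"`
			Reasons []string  `json:"Reasons"`
		} `json:"Risk"`
	} `json:"Tests"`
	OverallRisk struct {
		Level   riskLevel `json:"Level"`
		Reasons []string  `json:"Reasons"`
	} `json:"OverallRisk"`
}

func main() {
	// Assume the summary JSON is piped in on stdin, e.g. saved from the
	// response body shown above.
	var s riskSummary
	if err := json.NewDecoder(os.Stdin).Decode(&s); err != nil {
		panic(err)
	}
	fmt.Printf("%s overall risk: %s (%d)\n",
		s.ProwJobName, s.OverallRisk.Level.Name, s.OverallRisk.Level.Level)
	for _, t := range s.Tests {
		fmt.Printf("  %s -> %s\n", t.Name, t.Risk.Level.Name)
	}
}
```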
#1781301920086888448 junit 9 days ago
Apr 19 13:18:11.601 - 4160s E alert/Watchdog ns/openshift-monitoring ALERTS{alertname="Watchdog", alertstate="firing", namespace="openshift-monitoring", prometheus="openshift-monitoring/k8s", severity="none"}
Apr 19 13:18:41.601 - 28s   E alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 19 13:21:37.197 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update role "openshift-cluster-version/prometheus-k8s" (8 of 976)
#1781301920086888448 junit 9 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h13m52s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1h13m24s, firing for 28s:
Apr 19 13:18:11.601 - 30s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 19 13:18:11.601 - 30s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 19 13:19:09.601 - 762s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 19 13:19:09.601 - 3582s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 19 13:18:41.601 - 28s   E alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1781073482902147072 junit 9 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 57m12s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 57m12s, firing for 0s:
Apr 18 22:08:32.115 - 292s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 18 22:08:32.115 - 3022s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 18 22:14:56.115 - 118s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1781030427197181952 junit 10 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 13m18s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 13m18s, firing for 0s:
Apr 18 19:14:45.433 - 798s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1780975389141635072 junit 10 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m22s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1m22s, firing for 0s:
Apr 18 16:11:30.043 - 82s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1780890487918432256 junit 10 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 44s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 44s, firing for 0s:
Apr 18 09:59:59.111 - 44s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1780805895626690560 junit 10 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m16s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 5m16s, firing for 0s:
Apr 18 04:30:04.951 - 316s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1779970173562785792 junit 12 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m44s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1m44s, firing for 0s:
Apr 15 20:59:36.534 - 104s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}

Found in 57.69% of runs (166.67% of failures) across 26 total runs and 1 job (34.62% failed)