Job:
#OCPBUGS-30267 issue 6 weeks ago [IBMCloud] MonitorTests liveness/readiness probe error events repeat MODIFIED
Mar 12 18:52:24.937 - 58s E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
#OCPBUGS-15430 issue 4 weeks ago KubeAPIDown alert rename and/or degraded status ASSIGNED
We have many guards making sure that there are always at least two instances of the kube-apiserver. If we ever drop to a single kube-apiserver and that causes disruption for clients, other alerts such as KubeAPIErrorBudgetBurn will fire.
KubeAPIDown is here to make sure that Prometheus, and really any client, can reach the kube-apiserver, which they can even when there is only one instance of kube-apiserver running. If they can't, or if that availability is disrupted, `KubeAPIErrorBudgetBurn` will fire.
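For context, the long=/short= label pairs seen throughout the intervals below come from the standard multiwindow, multi-burn-rate SLO alerting pattern: a rule fires only when the error budget is being burned too fast over both a long window (proving the burn is sustained) and a short window (proving it is still happening). The shipped rules are PromQL; the Python sketch below only mirrors that logic, and the burn-rate factors and the 1% error budget are assumed upstream kubernetes-mixin defaults, not values read from these clusters.

```python
# Hedged sketch of the multiwindow, multi-burn-rate logic behind the
# long=/short= label pairs in the ALERTS output below. Factors and the
# 1% error budget are assumed kubernetes-mixin defaults.
from dataclasses import dataclass

ERROR_BUDGET = 0.01  # 99% availability SLO -> 1% of requests may fail

@dataclass
class Window:
    long: str      # slow window: shows the burn is sustained
    short: str     # fast window: shows the burn is still ongoing
    factor: float  # how many times faster than break-even the budget burns
    severity: str

# The four pairs visible in the interval logs below.
WINDOWS = [
    Window("1h", "5m", 14.4, "critical"),
    Window("6h", "30m", 6.0, "critical"),
    Window("1d", "2h", 3.0, "warning"),
    Window("3d", "6h", 1.0, "warning"),
]

def should_fire(burnrate: dict[str, float], w: Window) -> bool:
    """Fire only if BOTH windows exceed factor * budget."""
    threshold = w.factor * ERROR_BUDGET
    return burnrate[w.long] > threshold and burnrate[w.short] > threshold

# e.g. should_fire({"1h": 0.20, "5m": 0.18}, WINDOWS[0]) -> True,
# since both rates exceed 14.4 * 0.01 = 0.144.
```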
Comment 23058588 by Marcel Härri at 2023-09-19T06:57:07.949+0000
periodic-ci-openshift-ovn-kubernetes-release-4.14-e2e-ibmcloud-ipi-ovn-periodic (all) - 14 runs, 100% failed, 71% of failures match = 71% impact
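(Impact arithmetic: 10 of the 14 runs are excerpted below, and 10/14 ≈ 71.43%, which matches the 71.43% figures in the summary at the end of this report.)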
#1790533071146061824 junit 11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 31m24s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 31m24s, firing for 0s:
May 15 01:03:37.367 - 478s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 15 01:03:37.367 - 1198s I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 15 01:04:49.367 - 208s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
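As a quick sanity check, the "pending for 31m24s" total above is just the sum of the per-series interval durations (478s + 1198s + 208s = 1884s = 31m24s), counted even where the intervals overlap in wall-clock time. A minimal sketch of that tally, assuming the interval lines keep the format shown here:

```python
# Sum pending/firing seconds from interval lines like the ones above.
# Assumes the "- <N>s  I|E alert/KubeAPIErrorBudgetBurn ... alertstate=..."
# format shown in this report.
import re

INTERVAL_RE = re.compile(
    r'- (\d+)s\s+[IE] alert/KubeAPIErrorBudgetBurn.*alertstate="(pending|firing)"'
)

def total_seconds(lines: list[str]) -> dict[str, int]:
    totals = {"pending": 0, "firing": 0}
    for line in lines:
        m = INTERVAL_RE.search(line)
        if m:
            totals[m.group(2)] += int(m.group(1))
    return totals

# For the first run above: total_seconds(lines) == {"pending": 1884, "firing": 0},
# and 1884s == 31m24s, matching the invariant's reported total.
```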
#1790170631988318208 junit 35 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 25m24s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 25m24s, firing for 0s:
May 14 01:11:51.755 - 928s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 14 01:12:21.755 - 478s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 14 01:12:57.755 - 118s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1788721063643844608 junit 5 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 29m22s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 29m22s, firing for 0s:
May 10 01:21:43.598 - 1318s I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 10 01:22:13.598 - 298s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 10 01:22:23.598 - 28s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
May 10 01:23:23.598 - 118s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1789808280223092736 junit 2 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 23m26s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 23m26s, firing for 0s:
May 13 01:17:12.922 - 928s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 13 01:17:42.922 - 478s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1789445938494836736 junit 3 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 19m54s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 19m54s, firing for 0s:
May 12 01:31:08.351 - 718s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 12 01:31:38.351 - 328s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 12 01:32:04.351 - 148s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1789083430579867648 junit 4 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 20m24s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 20m24s, firing for 0s:
May 11 01:24:09.608 - 118s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 11 01:24:09.608 - 958s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 11 01:41:39.608 - 148s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1787633765263085568 junit 8 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 29m24s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 29m24s, firing for 0s:
May 07 01:18:23.198 - 418s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 07 01:18:23.198 - 1048s I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 07 01:18:47.198 - 298s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1788358660502589440 junit 6 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 27m54s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 27m54s, firing for 0s:
May 09 01:15:38.980 - 388s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 09 01:15:38.980 - 1228s I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 09 01:16:16.980 - 58s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1786546571811229696 junit 11 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 52m58s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 52m30s, firing for 28s:
May 04 01:08:09.561 - 162s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 04 01:08:09.561 - 162s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 04 01:08:51.561 - 120s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
May 04 01:11:19.561 - 180s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
May 04 01:11:19.561 - 318s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 04 01:11:19.561 - 2208s I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 04 01:10:51.561 - 28s   E alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
#1786909106330669056 junit 10 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 23m54s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 23m54s, firing for 0s:
May 05 01:20:02.147 - 418s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 05 01:20:02.147 - 808s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 05 01:20:36.147 - 208s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}

Found in 71.43% of runs (71.43% of failures) across 14 total runs and 1 job (100.00% failed)