Job:
#OCPBUGS-15430 issue 4 weeks ago KubeAPIDown alert rename and/or degraded status ASSIGNED
We have many guards making sure that there are always at least two instances of the kube-apiserver. If we ever drop to a single kube-apiserver and that causes disruption for clients, other alerts such as KubeAPIErrorBudgetBurn will fire.
KubeAPIDown is here to make sure that Prometheus, and really any client, can reach the kube-apiserver, which they can even when only one instance of the kube-apiserver is running. If they can't, or if that availability is otherwise disrupted, `KubeAPIErrorBudgetBurn` will fire.
Comment 23058588 by Marcel Härri at 2023-09-19T06:57:07.949+0000
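For context, the KubeAPIDown rule discussed above is, in the upstream kubernetes-mixin, essentially an absent-metric check: it fires only when Prometheus can scrape no kube-apiserver instance at all, which matches the comment's point that a single surviving instance keeps it quiet. A paraphrased sketch (the exact rule shipped in a given OpenShift release may differ):

```yaml
# Paraphrased from the upstream kubernetes-mixin; treat the details
# (job label, for: duration) as assumptions for any specific release.
- alert: KubeAPIDown
  annotations:
    summary: Target disappeared from Prometheus target discovery.
  # True only when no scrape target for the apiserver job is up at all.
  expr: absent(up{job="apiserver"} == 1)
  for: 15m
  labels:
    severity: critical
```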
#OCPBUGS-30267 issue 6 weeks ago [IBMCloud] MonitorTests liveness/readiness probe error events repeat MODIFIED
Mar 12 18:52:24.937 - 58s E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS
{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
pull-ci-openshift-installer-master-e2e-azurestack (all) - 32 runs, 88% failed, 57% of failures match = 50% impact
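(Working out the impact arithmetic: 87.50% of the 32 runs is 28 failed runs; 57.14% of those 28 failures is 16 matching runs, and 16/32 = 50% impact.)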
#1790412388864888832 junit 19 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 21m22s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 21m22s, firing for 0s:
May 14 17:48:12.629 - 778s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 14 17:48:42.629 - 178s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 14 17:49:26.629 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
May 14 18:02:42.629 - 268s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
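The long/short label pairs on these ALERTS series come from the multi-window, multi-burn-rate pattern that KubeAPIErrorBudgetBurn implements: each severity requires both a long and a short burn-rate window to exceed a threshold, and alertstate stays pending until the rule's for: duration elapses, which is why these runs accumulate pending time with 0s firing. A hedged sketch of one window pair (recording-rule names and numeric factors follow my reading of the upstream kubernetes-mixin and are assumptions for the OpenShift rule):

```yaml
# One of four window pairs; the long/short labels match the ALERTS
# series above. Upstream pairs (long, short, factor, for) are roughly:
#   (1h, 5m, 14.40, 2m)   critical
#   (6h, 30m, 6.00, 15m)  critical
#   (1d, 2h, 3.00, 1h)    warning
#   (3d, 6h, 1.00, 3h)    warning
- alert: KubeAPIErrorBudgetBurn
  # Both windows must burn error budget faster than the factor allows
  # (0.01 is the error budget of a 99% availability SLO).
  expr: |
    sum(apiserver_request:burnrate1h) > (14.40 * 0.01000)
    and
    sum(apiserver_request:burnrate5m) > (14.40 * 0.01000)
  for: 2m
  labels:
    long: 1h
    short: 5m
    severity: critical
```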
#1789016223602708480 junit 4 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 18m52s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 18m52s, firing for 0s:
May 10 21:17:40.223 - 208s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 10 21:17:40.223 - 778s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 10 21:18:48.223 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
May 10 21:32:10.223 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1788991325463384064 junit 4 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m56s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 8m56s, firing for 0s:
May 10 19:43:55.791 - 508s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 10 19:45:25.791 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1788952717499043840 junit 4 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 6m54s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 6m54s, firing for 0s:
May 10 17:18:53.794 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 10 17:18:53.794 - 268s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 10 17:24:53.794 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1788916464862892032 junit 4 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 17m24s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 17m24s, firing for 0s:
May 10 14:40:19.853 - 658s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 10 14:40:49.853 - 268s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 10 14:41:01.853 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1790081261709037568 junit 41 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 11m26s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 11m26s, firing for 0s:
May 13 20:01:48.041 - 568s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 13 20:02:18.041 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1790376565272481792 junit 21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 12m26s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 12m26s, firing for 0s:
May 14 15:25:19.740 - 598s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 14 15:25:49.740 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1788206622292578304 junit 6 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 12m54s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 12m54s, firing for 0s:
May 08 15:41:39.008 - 538s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 08 15:42:09.008 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 08 15:42:19.008 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1790145768804323328 junit 37 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 15m48s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 15m48s, firing for 0s:
May 14 00:01:58.290 - 712s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 14 00:03:22.290 - 178s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 14 00:04:10.290 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1788589013959970816 junit 5 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 16m24s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 16m24s, firing for 0s:
May 09 17:01:41.487 - 238s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 09 17:01:41.487 - 628s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 09 17:02:19.487 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1787907588784918528 junit 7 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 25m22s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 25m22s, firing for 0s:
May 07 19:38:44.900 - 268s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 07 19:38:44.900 - 1078s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 07 19:39:18.900 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
May 07 19:57:44.900 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1787866745839554560 junit 7 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 16m30s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 16m30s, firing for 0s:
May 07 17:04:48.650 - 872s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 07 17:06:52.650 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1788563523152908288 junit 5 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 27m24s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 27m24s, firing for 0s:
May 09 15:52:36.518 - 1288s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 09 15:53:06.518 - 238s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 09 15:53:08.518 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1787485574832066560 junit 8 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 13m24s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 13m24s, firing for 0s:
May 06 15:50:53.602 - 628s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 06 15:51:23.602 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 06 15:52:15.602 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1787400823068692480 junit 9 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 14m22s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 14m22s, firing for 0s:
May 06 10:08:47.606 - 598s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 06 10:09:47.606 - 178s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 06 10:10:51.606 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
May 06 10:20:17.606 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1786213033194819584 junit 12 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 22m24s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 22m24s, firing for 0s:
May 03 03:40:30.915 - 1048s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 03 03:41:00.915 - 208s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 03 03:41:16.915 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}

Found in 50.00% of runs (57.14% of failures) across 32 total runs and 1 job (87.50% failed)