Job:
#OCPBUGS-30267 issue 6 weeks ago [IBMCloud] MonitorTests liveness/readiness probe error events repeat MODIFIED
Mar 12 18:52:24.937 - 58s E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
#OCPBUGS-15430 issue 4 weeks ago KubeAPIDown alert rename and/or degraded status ASSIGNED
We have many guards making sure that there are always at least two instances of the kube-apiserver. If we ever drop to a single kube-apiserver and that causes disruption for clients, other alerts such as KubeAPIErrorBudgetBurn will fire.
KubeAPIDown is there to make sure that Prometheus, and really any client, can reach the kube-apiserver, which they can even when only one instance of the kube-apiserver is running. If they can't, or if that availability is disrupted, `KubeAPIErrorBudgetBurn` will fire as well.
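For context, KubeAPIErrorBudgetBurn follows the multi-window, multi-burn-rate SLO pattern, which is why every series above carries paired long/short window labels. The sketch below is illustrative only: the recording-rule names (apiserver_request:burnrate1h and friends) and thresholds are assumptions modeled on the upstream kubernetes-mixin, not necessarily the exact rules this cluster ships.

# Sketch of the multi-window, multi-burn-rate pattern behind an alert like
# KubeAPIErrorBudgetBurn. Rule names and thresholds are assumed from the
# kubernetes-mixin, not copied from the cluster's rule file.

# severity=critical, long=1h, short=5m: burning the error budget ~14.4x too fast.
sum(apiserver_request:burnrate1h) > (14.40 * 0.01000)
and
sum(apiserver_request:burnrate5m) > (14.40 * 0.01000)

# severity=critical, long=6h, short=30m: burning ~6x too fast.
sum(apiserver_request:burnrate6h) > (6.00 * 0.01000)
and
sum(apiserver_request:burnrate30m) > (6.00 * 0.01000)

# The warning tiers seen above (long=1d/short=2h and long=3d/short=6h) follow
# the same shape with lower multipliers (3x and 1x in the mixin).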
Comment 23058588 by Marcel Härri at 2023-09-19T06:57:07.949+0000
periodic-ci-openshift-release-master-nightly-4.12-e2e-aws-ovn-upi (all) - 16 runs, 44% failed, 57% of failures match = 25% impact
#1790497092163080192 junit 16 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 10m12s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=2s): pending for 10m12s, firing for 0s:
May 14 22:30:36.296 - 24s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
May 14 22:30:36.296 - 114s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 14 22:30:36.296 - 474s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
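The invariant reports how long the alert sat in each state during the run and compares that to maxAllowed; the state intervals come from Prometheus's built-in ALERTS series. A hedged sketch of a query that reproduces those intervals against the run's collected metrics (label values copied from the output above; this is illustrative, not the test's own code):

# Prometheus exports an ALERTS series for every evaluated alerting rule,
# with alertstate set to "pending" or "firing" while the alert is active.
ALERTS{alertname="KubeAPIErrorBudgetBurn",
       namespace="openshift-kube-apiserver",
       alertstate=~"pending|firing"}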
#1788189301956153344 junit 7 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m20s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=2s): pending for 2m20s, firing for 0s:
May 08 13:40:49.931 - 140s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1788043163273793536 junit 7 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m30s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=2s): pending for 3m30s, firing for 0s:
May 08 03:57:47.593 - 210s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1785799355219841024 junit 13 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m52s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=2s): pending for 1m52s, firing for 0s:
May 01 23:27:18.366 - 112s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}

Found in 25.00% of runs (57.14% of failures) across 16 total runs and 1 job (43.75% failed), i.e. 4 of the 16 runs matched (4 of the 7 failed runs).