#OCPBUGS-15430 issue, 2 weeks ago: KubeAPIDown alert rename and/or degraded status ASSIGNED
We have many guards ensuring that there are always at least two instances of the kube-apiserver. If we ever drop to a single kube-apiserver and it causes disruption for clients, other alerts such as KubeAPIErrorBudgetBurn will fire.
KubeAPIDown exists to make sure that Prometheus, and really any client, can reach the kube-apiserver, which they can even when only one instance of the kube-apiserver is running. If they cannot, or if that availability is disrupted, `KubeAPIErrorBudgetBurn` will fire.
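The long/short pairs on the ALERTS samples below come from the multi-window, multi-burn-rate pattern behind KubeAPIErrorBudgetBurn: each severity combines a long window with a short one, so the alert only fires while the error budget is still actively being burned. A minimal sketch of the critical 1h/5m expression, assuming the burn-rate recording rules from kubernetes-mixin (apiserver_request:burnrate1h, apiserver_request:burnrate5m); the exact thresholds shipped with OpenShift may differ:

    # Critical: both windows burning at >= 14.4x the 1% error budget,
    # i.e. the whole budget would be gone in roughly two days at this rate.
    sum(apiserver_request:burnrate1h) > (14.40 * 0.01000)
    and
    sum(apiserver_request:burnrate5m) > (14.40 * 0.01000)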
Comment 23058588 by Marcel Härri at 2023-09-19T06:57:07.949+0000
#OCPBUGS-30267 issue, 4 weeks ago: [IBMCloud] MonitorTests liveness/readiness probe error events repeat MODIFIED
Mar 12 18:52:24.937 - 58s E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
pull-ci-openshift-cluster-etcd-operator-release-4.14-e2e-aws-etcd-recovery (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#1783417456023310336 junit, 3 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 38m38s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 31m42s, firing for 6m56s:
Apr 25 10:00:55.350 - 966s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 25 10:01:25.350 - 936s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 25 10:17:01.350 - 416s  E alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info
KubeAPIErrorBudgetBurn was at or above info for at least 6m56s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 31m42s, firing for 6m56s:
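The intervals above are read from the ALERTS meta-series that Prometheus exports for every evaluated alert; alertstate is "pending" while the rule's for duration has not yet elapsed and "firing" afterwards. To inspect the same data on a live cluster, the series can be queried directly, for example:

    # Every burn-rate window currently pending or firing for the API-server SLO.
    ALERTS{alertname="KubeAPIErrorBudgetBurn", namespace="openshift-kube-apiserver"}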
#1783441565247606784 junit, 3 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h5m50s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 45m28s, firing for 20m22s:
Apr 25 11:07:29.086 - 1540s I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 25 11:26:55.086 - 374s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 25 11:27:39.086 - 330s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 25 11:38:37.086 - 242s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 25 11:38:37.086 - 242s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 25 11:33:09.086 - 328s  E alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
Apr 25 11:42:39.086 - 894s  E alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
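The warning pairs that linger in pending above (long=3d/short=6h and long=1d/short=2h) use much lower burn-rate factors than the critical pairs, so even modest disruption during an e2e run can push them over threshold. A sketch of the slowest pair, under the same kubernetes-mixin assumptions as above (the factor values here are illustrative):

    # Warning: burning at >= 1x budget, i.e. on pace to exhaust it exactly over the SLO window.
    sum(apiserver_request:burnrate3d) > (1.00 * 0.01000)
    and
    sum(apiserver_request:burnrate6h) > (1.00 * 0.01000)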

Found in 100.00% of runs (100.00% of failures) across 2 total runs and 1 job (100.00% failed).