Job:
#OCPBUGS-15430 issue 13 days ago KubeAPIDown alert rename and/or degraded status ASSIGNED
We have many guards making sure that there are always at least two instances of the kube-apiserver. If we ever reach a single kube-apiserver and it causes disruption for the clients, other alerts such as KubeAPIErrorBudgetBurn will fire.
KubeAPIDown is here to make sure that Prometheus, and really any client, can reach the kube-apiserver, which they can even when there is only one instance of kube-apiserver running. If they can't, or if that availability is disrupted, `KubeAPIErrorBudgetBurn` will fire.
Comment 23058588 by Marcel Härri at 2023-09-19T06:57:07.949+0000
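For context, the long/short labels on the alert series quoted below come from the multi-window, multi-burn-rate pattern that KubeAPIErrorBudgetBurn implements. A minimal sketch of one critical and one warning window pair, assuming the upstream kubernetes-mixin burn-rate factors and a 99% availability SLO (the authoritative recording rules and thresholds ship with the cluster's openshift-kube-apiserver rule files, not this sketch):

```yaml
# Sketch only: shape of the KubeAPIErrorBudgetBurn rules, assuming
# kubernetes-mixin defaults (99% SLO => 0.01 error budget).
groups:
- name: kube-apiserver-slos
  rules:
  - alert: KubeAPIErrorBudgetBurn
    expr: |
      sum(apiserver_request:burnrate1h) > (14.40 * 0.01000)
      and
      sum(apiserver_request:burnrate5m) > (14.40 * 0.01000)
    for: 2m
    labels:
      long: 1h
      short: 5m
      severity: critical
  - alert: KubeAPIErrorBudgetBurn
    expr: |
      sum(apiserver_request:burnrate3d) > (1.00 * 0.01000)
      and
      sum(apiserver_request:burnrate6h) > (1.00 * 0.01000)
    for: 3h
    labels:
      long: 3d
      short: 6h
      severity: warning
```

The `for` clause on each window pair is why the junit entries below can sit in alertstate="pending" for long stretches without ever reaching alertstate="firing".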
#OCPBUGS-30267 issue 4 weeks ago [IBMCloud] MonitorTests liveness/readiness probe error events repeat MODIFIED
Mar 12 18:52:24.937 - 58s E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
periodic-ci-openshift-release-master-ci-4.14-e2e-gcp-sdn-techpreview-serial (all) - 30 runs, 80% failed, 13% of failures match = 10% impact
#1783561016378920960 junit 3 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 30m46s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"gcp", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 30m46s, firing for 0s:
Apr 25 18:59:57.532 - 8s    I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 25 18:59:57.532 - 246s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 25 18:59:57.532 - 1506s I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 25 19:05:35.532 - 28s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 25 19:26:35.532 - 58s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
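As a sanity check on how the invariant arrives at its total (an interpretation of the output above, not documented behaviour): summing the per-series pending intervals listed for this run gives 8s + 246s + 1506s + 28s + 58s = 1846s = 30m46s, matching the reported "pending for 30m46s" against maxAllowed=0s. The two shorter runs below add up the same way (254s = 4m14s, and 334s + 1714s = 2048s = 34m8s).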
#1783971993880629248 junit 2 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m14s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"gcp", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 4m14s, firing for 0s:
Apr 26 22:08:01.646 - 254s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1782366019503788032 junit 6 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 34m8s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"gcp", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 34m8s, firing for 0s:
Apr 22 11:50:09.688 - 334s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 22 11:50:09.688 - 1714s I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}

Found in 10.00% of runs (12.50% of failures) across 30 total runs and 1 job (80.00% failed)
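For reference, these percentages are consistent with three matching runs (the three junit entries above): 3 of 30 total runs = 10.00% impact, and 3 of the 24 failed runs (80.00% of 30) = 12.50% of failures.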