Job:
#OCPBUGS-30267 (issue, 4 weeks ago) [IBMCloud] MonitorTests liveness/readiness probe error events repeat - MODIFIED
Mar 12 18:52:24.937 - 58s E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
#OCPBUGS-15430 (issue, 13 days ago) KubeAPIDown alert rename and/or degraded status - ASSIGNED
We have many guards making sure that there are always at least two instances of the kube-apiserver. If we ever drop to a single kube-apiserver and that causes disruption for clients, other alerts such as KubeAPIErrorBudgetBurn will fire.
KubeAPIDown is there to make sure that Prometheus, and really any client, can reach the kube-apiserver, which they can even when only one instance of the kube-apiserver is running. If they can't, or if that availability is disrupted, `KubeAPIErrorBudgetBurn` will fire.
Comment 23058588 by Marcel Härri at 2023-09-19T06:57:07.949+0000
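For context, the KubeAPIErrorBudgetBurn alert referenced above is the multi-window, multi-burn-rate SLO alert from the upstream kubernetes-mixin; the long/short labels on the ALERTS series in this report (1h/5m critical, 1d/2h and 3d/6h warning) name the two burn-rate windows of each rule variant. A minimal sketch of the critical variant, assuming the upstream recording-rule names (apiserver_request:burnrate1h and friends; the copy OpenShift ships may differ by release):

  - alert: KubeAPIErrorBudgetBurn
    annotations:
      summary: The API server is burning too much error budget.
    expr: |
      # Fire only when both the long (1h) and short (5m) windows burn
      # error budget at >= 14.4x the rate allowed by a 99.0% SLO.
      sum(apiserver_request:burnrate1h) > (14.40 * 0.01000)
      and
      sum(apiserver_request:burnrate5m) > (14.40 * 0.01000)
    for: 2m
    labels:
      long: 1h
      severity: critical
      short: 5m

The warning variants seen in the runs below use the same shape with longer windows (for example long=3d/short=6h at a 1x burn rate) and a longer for: hold, which is why they can sit in the pending state for minutes without firing.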
periodic-ci-openshift-release-master-ci-4.13-e2e-aws-sdn-techpreview-serial (all) - 22 runs, 18% failed, 150% of failures match = 27% impact
#1782569259331751936 (junit, 6 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m56s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 8m56s, firing for 0s:
Apr 23 01:20:01.500 - 536s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
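Note that the intervals in these runs are all alertstate="pending": the burn-rate expression evaluated true, but the rule's for: hold period had not yet elapsed, so the alert never reached firing. A query of this shape against the run's Prometheus data reproduces what the monitor records (a sketch; ALERTS is a built-in Prometheus metric, and the selector simply mirrors the labels above):

  ALERTS{alertname="KubeAPIErrorBudgetBurn", namespace="openshift-kube-apiserver", alertstate="pending"}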
#1781857031309758464 (junit, 8 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m34s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 8m34s, firing for 0s:
Apr 21 02:00:25.859 - 514s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1781675857870327808 (junit, 8 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m0s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1m0s, firing for 0s:
Apr 20 14:05:26.556 - 60s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1781572319894835200 (junit, 8 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m10s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 4m10s, firing for 0s:
Apr 20 07:11:39.858 - 6s    I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 20 07:11:39.858 - 216s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 20 07:15:47.858 - 28s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1781208851798822912 (junit, 9 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m32s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1m32s, firing for 0s:
Apr 19 07:01:39.525 - 92s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1780805713631645696 (junit, 10 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 12m4s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 12m4s, firing for 0s:
Apr 18 04:26:55.006 - 724s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}

Found in 27.27% of runs (150.00% of failures) across 22 total runs and 1 job (18.18% failed) in 113ms.
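For reference, the summary percentages reconcile: 18.18% of 22 runs is 4 failed runs, 27.27% of 22 runs is 6 matching runs, and 6/4 = 150%, which suggests the search expression also matched runs that were not counted as failures.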