Job:
#OCPBUGS-30267 issue 5 weeks ago [IBMCloud] MonitorTests liveness/readiness probe error events repeat MODIFIED
Mar 12 18:52:24.937 - 58s E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
#OCPBUGS-15430 issue 3 weeks ago KubeAPIDown alert rename and/or degraded status ASSIGNED
We have many guards ensuring that there are always at least two instances of the kube-apiserver. If we ever drop to a single kube-apiserver and that causes disruption for clients, other alerts such as KubeAPIErrorBudgetBurn will fire.
KubeAPIDown exists to make sure that Prometheus (and really any client) can reach the kube-apiserver, which they can even when only one instance of the kube-apiserver is running. If they can't, or if that availability is disrupted, `KubeAPIErrorBudgetBurn` will fire.
Comment 23058588 by Marcel Härri at 2023-09-19T06:57:07.949+0000
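For context, here is a minimal sketch of the two rules as they appear in the upstream kubernetes-mixin, from which OpenShift's rules derive; treat the exact thresholds, windows, and `for` durations as illustrative, since the rules shipped in any given OpenShift release may differ:

```yaml
# Sketch based on the upstream kubernetes-mixin; thresholds, windows,
# and durations are illustrative, not OpenShift's exact shipped rules.
groups:
  - name: kube-apiserver-availability
    rules:
      - alert: KubeAPIDown
        # Fires only when Prometheus can scrape no kube-apiserver at all,
        # i.e. even a single surviving instance keeps this alert silent.
        expr: absent(up{job="apiserver"} == 1)
        for: 15m
        labels:
          severity: critical
      - alert: KubeAPIErrorBudgetBurn
        # Multi-window burn-rate alert: the SLO error budget must be
        # burning faster than 14.4x in both the long (1h) and short (5m)
        # windows, which is why the ALERTS series quoted in this report
        # carry long/short labels.
        expr: |
          sum(apiserver_request:burnrate1h) > (14.40 * 0.01000)
          and
          sum(apiserver_request:burnrate5m) > (14.40 * 0.01000)
        for: 2m
        labels:
          long: 1h
          severity: critical
          short: 5m
```

The warning-severity variants of KubeAPIErrorBudgetBurn use longer windows (for example long="3d"/short="6h", as in the pending intervals below), so brief error bursts show up as pending without ever firing.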
periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-aws-sdn-upgrade (all) - 21 runs, 19% failed, 200% of failures match = 38% impact
#1785723655641108480 junit 4 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 20m18s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 20m18s, firing for 0s:
May 01 18:07:09.910 - 980s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 01 18:25:01.910 - 238s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1785644835416313856 junit 4 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m36s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1m36s, firing for 0s:
May 01 12:53:11.287 - 96s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1784852745119862784 junit 6 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 18s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 18s, firing for 0s:
Apr 29 08:32:19.110 - 18s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1783596497997139968 junit 10 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 9m56s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 9m56s, firing for 0s:
Apr 25 21:11:55.834 - 596s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1785511003597836288 junit 4 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h11m44s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1h11m44s, firing for 0s:
May 01 04:03:24.837 - 832s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 01 04:03:24.837 - 3472s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1782667923500830720 junit 12 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m56s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1m56s, firing for 0s:
Apr 23 07:41:47.021 - 116s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1782331458271055872 junit 13 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m40s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 4m40s, firing for 0s:
Apr 22 09:27:53.231 - 280s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1782234404584689664 junit 2 weeks ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 12m10s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 12m10s, firing for 0s:
Apr 22 03:00:51.038 - 552s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 22 03:11:35.038 - 178s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}

Found in 38.10% of runs (200.00% of failures) across 21 total runs and 1 job (19.05% failed) in 1.199s