Job:
#OCPBUGS-30267 issue 4 weeks ago [IBMCloud] MonitorTests liveness/readiness probe error events repeat MODIFIED
Mar 12 18:52:24.937 - 58s E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
#OCPBUGS-15430 issue 13 days ago KubeAPIDown alert rename and/or degraded status ASSIGNED
We have many guards ensuring that there are always at least two instances of the kube-apiserver. If we ever drop to a single kube-apiserver and it causes disruption for clients, other alerts such as KubeAPIErrorBudgetBurn will fire.
KubeAPIDown exists to make sure that Prometheus, and really any client, can reach the kube-apiserver, which they can even when only one instance of kube-apiserver is running. If they can't, or if that availability is disrupted, `KubeAPIErrorBudgetBurn` will fire.
Comment 23058588 by Marcel Härri at 2023-09-19T06:57:07.949+0000
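For context on the long/short labels in the interval entries below: KubeAPIErrorBudgetBurn follows the multi-window, multi-burn-rate SLO pattern from the upstream kubernetes-mixin. Each severity pairs a long and a short window, so the alert only stays pending/firing while the error budget is actively being consumed. A minimal sketch of the critical 1h/5m pair, assuming the mixin's usual apiserver_request:burnrate recording rules and a 99% availability SLO (exact factors and thresholds in a given OpenShift release may differ):

```yaml
# Sketch of a multi-window burn-rate alert in the kubernetes-mixin style.
# Assumes recording rules apiserver_request:burnrate1h and
# apiserver_request:burnrate5m exist, each expressing the error ratio
# of kube-apiserver requests over that window.
- alert: KubeAPIErrorBudgetBurn
  expr: |
    sum(apiserver_request:burnrate1h) > (14.40 * 0.01000)
    and
    sum(apiserver_request:burnrate5m) > (14.40 * 0.01000)
  for: 2m
  labels:
    severity: critical
    long: 1h    # these two labels are what appear on the ALERTS series below
    short: 5m
```

The other window pairs visible below follow the same shape with lower burn-rate factors: 6h/30m is also critical, while 1d/2h and 3d/6h are warnings. That is why a single burst of apiserver errors during a run shows up as several overlapping pending intervals.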
periodic-ci-openshift-ovn-kubernetes-master-e2e-ibmcloud-ipi-ovn-periodic (all) - 14 runs, 100% failed, 79% of failures match = 79% impact
#1784372484208857088 junit 18 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 42m20s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 42m20s, firing for 0s:
Apr 28 01:18:59.862 - 1498s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 28 01:19:29.862 - 538s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 28 01:19:39.862 - 418s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 28 01:20:09.862 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
Apr 28 01:45:29.862 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1783647469070979072 junit 2 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 44m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 44m0s, firing for 28s:
Apr 26 01:08:51.813 - 206s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 26 01:08:51.813 - 206s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 26 01:09:17.813 - 180s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 26 01:12:45.813 - 120s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 26 01:12:45.813 - 274s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 26 01:12:45.813 - 1654s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 26 01:12:17.813 - 28s   E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
#1783285089497518080 junit 3 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 45m50s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 45m50s, firing for 0s:
Apr 25 01:12:45.996 - 582s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 25 01:12:45.996 - 1782s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 25 01:14:19.996 - 298s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 25 01:42:59.996 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1784009986825785344 junit 42 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 38m58s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 38m58s, firing for 0s:
Apr 27 01:16:58.982 - 1592s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 27 01:18:02.982 - 448s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 27 01:18:36.982 - 298s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1782560271139606528 junit 5 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 35m52s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 35m52s, firing for 0s:
Apr 23 01:30:51.991 - 1824s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 23 01:32:17.991 - 328s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1781835667517476864 junit 7 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 37m40s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 37m40s, firing for 0s:
Apr 21 01:28:45.202 - 8s    I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 21 01:28:45.202 - 1208s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 21 01:29:55.202 - 748s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 21 01:30:31.202 - 238s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 21 01:49:25.202 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1781473174928494592 junit 8 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 20m26s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 20m26s, firing for 0s:
Apr 20 01:16:38.760 - 328s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 20 01:16:38.760 - 898s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1780748407288107008 junit 10 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 35m52s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 35m52s, firing for 0s:
Apr 18 01:15:36.278 - 1318s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 18 01:15:54.278 - 208s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 18 01:16:06.278 - 478s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 18 01:38:36.278 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1781110816800509952 junit 9 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 39m24s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 39m24s, firing for 0s:
Apr 19 01:29:10.040 - 568s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 19 01:29:10.040 - 1408s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 19 01:29:26.040 - 388s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1780023759248297984 junit 12 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 54m14s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 54m14s, firing for 0s:
Apr 16 01:14:26.925 - 4s    I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 16 01:14:26.925 - 742s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 16 01:14:26.925 - 1822s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 16 01:15:32.925 - 568s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 16 01:16:32.925 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
#1780386199815327744 junit 11 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h7m24s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h7m24s, firing for 0s:
Apr 17 01:27:54.035 - 2848s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 17 01:28:24.035 - 448s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 17 01:28:24.035 - 748s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}

Found in 78.57% of runs (78.57% of failures) across 14 total runs and 1 job (100.00% failed) in 111ms
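To dig into any one of these runs, note that the interval lines above are rendered from Prometheus's ALERTS meta-series, so they can be reproduced with a range query over the run's time span. A minimal sketch (label matchers taken from the entries above; the regex on alertstate is an assumption to capture both states at once):

```promql
# Pending and firing samples for the burn-rate alert; each contiguous
# stretch of samples in the range-query result corresponds to one
# interval line in the output above.
ALERTS{
  alertname="KubeAPIErrorBudgetBurn",
  namespace="openshift-kube-apiserver",
  alertstate=~"pending|firing"
}
```

Grouping the result by the long and short labels separates the overlapping window pairs that the monitor prints as distinct intervals.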