Job:
#OCPBUGS-15430 issue 4 weeks ago KubeAPIDown alert rename and/or degraded status ASSIGNED
We have many guards making sure that there are always at least two instances of the kube-apiserver. If we ever reach a single kube-apiserver and it causes disruption for the clients, other alerts such as KubeAPIErrorBudgetBurn will fire.
KubeAPIDown is here to make sure that Prometheus, and really any client, can reach the kube-apiserver, which they can even when there is only one instance of kube-apiserver running. If they can't, or if that availability is disrupted, `KubeAPIErrorBudgetBurn` will fire.
Comment 23058588 by Marcel Härri at 2023-09-19T06:57:07.949+0000
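For context: KubeAPIErrorBudgetBurn is a multiwindow, multi-burn-rate SLO alert. A minimal sketch of its critical 1h/5m window pair, assuming the upstream kubernetes-mixin recording-rule names (apiserver_request:burnrate1h, apiserver_request:burnrate5m) and the mixin's default 99% availability SLO; the rule OpenShift actually ships may differ:

    # Burn rate = error rate over the window divided by the allowed error rate
    # (1% for a 99% SLO). Requiring BOTH windows to burn fast filters out blips.
    - alert: KubeAPIErrorBudgetBurn
      expr: |
        sum(apiserver_request:burnrate1h) > (14.40 * 0.01000)
        and
        sum(apiserver_request:burnrate5m) > (14.40 * 0.01000)
      for: 2m
      labels:
        long: 1h
        short: 5m
        severity: critical

The long/short label pairs in the log lines below (1h/5m and 6h/30m at critical, 1d/2h and 3d/6h at warning) correspond to the four window pairs of this scheme.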
#OCPBUGS-30267 issue 6 weeks ago [IBMCloud] MonitorTests liveness/readiness probe error events repeat MODIFIED
Mar 12 18:52:24.937 - 58s E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS
{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
periodic-ci-openshift-multiarch-master-nightly-4.13-ocp-e2e-aws-ovn-arm64 (all) - 17 runs, 24% failed, 225% of failures match = 53% impact
#1790397514466201600 junit 10 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m20s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m20s, firing for 0s:
May 14 15:39:28.169 - 140s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
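The alertstate label above comes from Prometheus's built-in ALERTS metric: an alert sits in pending while its expression is true but its for: duration has not yet elapsed, and only transitions to firing after that, which is why most runs below show pending intervals with "firing for 0s". A query sketch to inspect these series directly, with label values taken from the log line above:

    ALERTS{alertname="KubeAPIErrorBudgetBurn", namespace="openshift-kube-apiserver"}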
#1789119927496478720 junit 3 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m2s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m2s, firing for 0s:
May 11 03:01:22.607 - 182s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1790339201472925696 junit 14 hours ago
2024-05-14T12:37:13Z: Call to sippy finished after: 1.829292469s
response Body: {"ProwJobName":"periodic-ci-openshift-multiarch-master-nightly-4.13-ocp-e2e-aws-ovn-arm64","ProwJobRunID":1790339201472925696,"Release":"4.13","CompareRelease":"4.13","Tests":[{"Name":"[bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info","Risk":{"Level":{"Name":"Medium","Level":50},"Reasons":["This test has passed 94.44% of 18 runs on jobs ['periodic-ci-openshift-multiarch-master-nightly-4.13-ocp-e2e-aws-ovn-arm64'] in the last 14 days."]},"OpenBugs":[]}],"OverallRisk":{"Level":{"Name":"Medium","Level":50},"Reasons":["Maximum failed test risk: Medium"]},"OpenBugs":[]}
#1790339201472925696 junit 14 hours ago
May 14 11:54:24.964 - 2s    E alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
May 14 11:54:24.964 - 2524s E alert/Watchdog ns/openshift-monitoring ALERTS{alertname="Watchdog", alertstate="firing", namespace="openshift-monitoring", prometheus="openshift-monitoring/k8s", severity="none"}
#1790339201472925696 junit 14 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 16m46s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 16m44s, firing for 2s:
May 14 11:54:26.964 - 142s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 14 11:54:26.964 - 862s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 14 11:54:24.964 - 2s    E alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1788898182235688960 junit 4 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m18s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m18s, firing for 0s:
May 10 12:20:22.715 - 78s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1788796339807588352 junit 4 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m22s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m22s, firing for 0s:
May 10 05:40:09.152 - 82s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1788853150237593600 junit 4 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m26s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m26s, firing for 0s:
May 10 09:43:40.416 - 86s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1788263787501981696 junit 6 days ago
2024-05-08T19:10:26Z: Call to sippy finished after: 13.572895515s
response Body: {"ProwJobName":"periodic-ci-openshift-multiarch-master-nightly-4.13-ocp-e2e-aws-ovn-arm64","ProwJobRunID":1788263787501981696,"Release":"4.13","CompareRelease":"4.13","Tests":[{"Name":"[sig-node] static pods should start after being created","Risk":{"Level":{"Name":"High","Level":100},"Reasons":["This test has passed 100.00% of 16 runs on jobs ['periodic-ci-openshift-multiarch-master-nightly-4.13-ocp-e2e-aws-ovn-arm64'] in the last 14 days."]},"OpenBugs":[]},{"Name":"[bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info","Risk":{"Level":{"Name":"High","Level":100},"Reasons":["This test has passed 100.00% of 16 runs on jobs ['periodic-ci-openshift-multiarch-master-nightly-4.13-ocp-e2e-aws-ovn-arm64'] in the last 14 days."]},"OpenBugs":[]}],"OverallRisk":{"Level":{"Name":"High","Level":100},"Reasons":["Maximum failed test risk: High"]},"OpenBugs":[]}
#1788263787501981696 junit 6 days ago
May 08 18:25:50.969 - 2620s E alert/Watchdog ns/openshift-monitoring ALERTS{alertname="Watchdog", alertstate="firing", namespace="openshift-monitoring", prometheus="openshift-monitoring/k8s", severity="none"}
May 08 18:26:02.969 - 88s   E alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
May 08 18:26:22.981 E ns/e2e-container-lifecycle-hook-4287 pod/pod-with-poststart-exec-hook node/ip-10-0-252-140.us-west-1.compute.internal uid/5de21eb4-8927-4a4f-9394-9c48c6db0ab7 container/pod-with-poststart-exec-hook reason/ContainerExit code/2 cause/Error
#1788263787501981696 junit 6 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 21m46s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 20m18s, firing for 1m28s:
May 08 18:25:50.969 - 12s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 08 18:25:50.969 - 12s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 08 18:27:30.969 - 102s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
May 08 18:27:30.969 - 1092s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
May 08 18:26:02.969 - 88s   E alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
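This run shows the window pairs interacting: the faster 6h/30m critical pair briefly reached firing while the slower 1d/2h and 3d/6h warning pairs stayed pending. A sketch of the 6h/30m expression, again assuming the kubernetes-mixin defaults (burn-rate factor 6 and for: 15m for this pair; the 1h/5m, 1d/2h, and 3d/6h pairs use factors 14.4, 3, and 1 with for: 2m, 1h, and 3h respectively):

    sum(apiserver_request:burnrate6h) > (6.00 * 0.01000)
    and
    sum(apiserver_request:burnrate30m) > (6.00 * 0.01000)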
#1786583084720721920 junit 10 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m4s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m4s, firing for 0s:
May 04 03:01:05.274 - 184s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1788040688017870848 junit 6 days ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m8s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m8s, firing for 0s:
May 08 03:39:12.940 - 68s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}

Found in 52.94% of runs (225.00% of failures) across 17 total runs and 1 job (23.53% failed) in 113ms