Job:
#OCPBUGS-30267 (issue, 4 weeks ago): [IBMCloud] MonitorTests liveness/readiness probe error events repeat (status: MODIFIED)
Mar 12 18:52:24.937 - 58s E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
#OCPBUGS-15430 (issue, 13 days ago): KubeAPIDown alert rename and/or degraded status (status: ASSIGNED)
We have many guards ensuring that at least two instances of the kube-apiserver are always running. If we ever drop to a single kube-apiserver and that disrupts clients, other alerts such as KubeAPIErrorBudgetBurn will fire.
KubeAPIDown exists to verify that Prometheus, and really any other client, can reach the kube-apiserver, which they can even when only one instance is running. If they cannot, or that availability is disrupted, `KubeAPIErrorBudgetBurn` will fire.
Comment 23058588 by Marcel Härri at 2023-09-19T06:57:07.949+0000
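For context on the long=/short= labels in the ALERTS series below: KubeAPIErrorBudgetBurn follows the multi-window, multi-burn-rate pattern from the upstream kubernetes-mixin. A sketch of the shortest-window rule follows; thresholds and hold times are illustrative and may differ in a given OpenShift release.

- alert: KubeAPIErrorBudgetBurn
  annotations:
    description: The API server is burning too much error budget.
  # Illustrative expression per the upstream kubernetes-mixin convention;
  # the rule fires only when both the long and the short burn-rate windows
  # exceed the threshold, which filters out brief blips.
  expr: |
    sum(apiserver_request:burnrate1h) > (14.40 * 0.01000)
    and
    sum(apiserver_request:burnrate5m) > (14.40 * 0.01000)
  for: 2m
  labels:
    long: 1h
    severity: critical
    short: 5m

Sibling rules cover the other window pairs seen in the entries below (6h/30m at critical, 1d/2h and 3d/6h at warning). In the ALERTS series, alertstate="pending" means the expression is currently true but the rule's for: hold time has not yet elapsed; "firing" means it has.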
periodic-ci-openshift-multiarch-master-nightly-4.13-ocp-e2e-upgrade-aws-ovn-arm64 (all) - 34 runs, 15% failed, 180% of failures match = 26% impact
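A note on how those percentages relate (my arithmetic, not part of the search output): 15% of 34 runs is 5 failed runs; 9 runs matched the symptom, which is 180% of the 5 failures because the symptom also appears in runs that ultimately passed or flaked; 9 / 34 ≈ 26.47% impact.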
#1784018820403302400 (junit, 2 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m56s on platformidentification.JobType{Release:"4.13", FromRelease:"4.13", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 7m56s, firing for 0s:
Apr 27 01:13:06.359 - 476s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1782793559791898624 (junit, 5 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m44s on platformidentification.JobType{Release:"4.13", FromRelease:"4.13", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m44s, firing for 0s:
Apr 23 16:13:14.862 - 164s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1782921776720777216 (junit, 5 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m40s on platformidentification.JobType{Release:"4.13", FromRelease:"4.13", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 8m40s, firing for 0s:
Apr 24 00:29:56.808 - 520s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1782746995090264064 (junit, 5 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 17m12s on platformidentification.JobType{Release:"4.13", FromRelease:"4.13", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 17m12s, firing for 0s:
Apr 23 12:57:17.121 - 1032s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1781842737234972672 (junit, 8 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 42m58s on platformidentification.JobType{Release:"4.13", FromRelease:"4.13", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 42m58s, firing for 0s:
Apr 21 01:06:00.233 - 76s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 21 01:06:00.233 - 2116s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 21 01:08:48.233 - 28s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 21 01:42:48.233 - 358s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1781725761632210944 (junit, 8 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 23m42s on platformidentification.JobType{Release:"4.13", FromRelease:"4.13", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 23m42s, firing for 0s:
Apr 20 17:26:17.859 - 1422s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1781003095275212800 (junit, 10 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m58s on platformidentification.JobType{Release:"4.13", FromRelease:"4.13", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 5m58s, firing for 0s:
Apr 18 17:29:55.918 - 358s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1780789586683760640 (junit, 10 days ago)
2024-04-18T04:08:50Z: Call to sippy finished after: 1.69744169s
response Body: {"ProwJobName":"periodic-ci-openshift-multiarch-master-nightly-4.13-ocp-e2e-upgrade-aws-ovn-arm64","ProwJobRunID":1780789586683760640,"Release":"4.13","CompareRelease":"4.13","Tests":[{"Name":"[bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info","Risk":{"Level":{"Name":"High","Level":100},"Reasons":["This test has passed 100.00% of 9 runs on jobs ['periodic-ci-openshift-multiarch-master-nightly-4.13-ocp-e2e-upgrade-aws-ovn-arm64'] in the last 14 days."]},"OpenBugs":[]}],"OverallRisk":{"Level":{"Name":"High","Level":100},"Reasons":["Maximum failed test risk: High"]},"OpenBugs":[]}
#1780789586683760640 (junit, 10 days ago)
Apr 18 03:27:04.744 - 2496s E alert/Watchdog ns/openshift-monitoring ALERTS{alertname="Watchdog", alertstate="firing", namespace="openshift-monitoring", prometheus="openshift-monitoring/k8s", severity="none"}
Apr 18 03:27:40.744 - 298s  E alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 18 03:30:04.683 E ns/openshift-etcd pod/etcd-guard-ip-10-0-179-58.us-west-2.compute.internal node/ip-10-0-179-58.us-west-2.compute.internal uid/5f2f802b-b107-4704-a18f-c21dabe4c2de container/guard reason/ContainerExit code/137 cause/ContainerStatusUnknown The container could not be located when the pod was deleted.  The container used to be Running
#1780789586683760640 (junit, 10 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h5m44s on platformidentification.JobType{Release:"4.13", FromRelease:"4.13", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h0m46s, firing for 4m58s:
Apr 18 03:27:04.744 - 36s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 18 03:27:04.744 - 36s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 18 03:27:04.744 - 36s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Apr 18 03:32:38.744 - 1376s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 18 03:32:38.744 - 2162s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Apr 18 03:27:40.744 - 298s  E alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1779969587253612544 (junit, 13 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 34m34s on platformidentification.JobType{Release:"4.13", FromRelease:"4.13", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 34m34s, firing for 0s:
Apr 15 21:03:05.669 - 2s    I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Apr 15 21:03:05.669 - 2072s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}

Found in 26.47% of runs (180.00% of failures) across 34 total runs and 1 job (14.71% failed).