Job:
#OCPBUGS-15430 (issue, 4 weeks ago, ASSIGNED): KubeAPIDown alert rename and/or degraded status
We have many guards ensuring that there are always at least two kube-apiserver instances. If we ever drop to a single kube-apiserver and that causes disruption for clients, other alerts such as KubeAPIErrorBudgetBurn will fire.
KubeAPIDown exists to make sure that Prometheus, and really any client, can reach the kube-apiserver, which they can even when only one kube-apiserver instance is running. If they cannot, or if that availability is disrupted, `KubeAPIErrorBudgetBurn` will fire.
Comment 23058588 by Marcel Härri at 2023-09-19T06:57:07.949+0000
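For reference, KubeAPIErrorBudgetBurn in the upstream kubernetes-mixin is a multi-window, multi-burn-rate rule. A minimal sketch of the warning-severity long="3d"/short="6h" window that appears in the traces below, assuming the mixin's burnrate recording rules and its 99% availability SLO (neither is restated in this bug):

```promql
# Minimal sketch of one KubeAPIErrorBudgetBurn window, kubernetes-mixin
# style: warn when the API error rate over both the long (3d) and the
# short (6h) window exceeds 1x the error budget of a 99% availability SLO.
# The apiserver_request:burnrate* recording rules are assumed from the mixin.
sum(apiserver_request:burnrate3d) > (1.00 * 0.01000)
and
sum(apiserver_request:burnrate6h) > (1.00 * 0.01000)
```

The critical-severity variant seen in the entries below pairs long="1h" with short="5m" and a proportionally higher burn-rate factor, so short sharp outages trip the fast window while slow, steady budget burn trips the 3d window.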
#OCPBUGS-30267 (issue, 6 weeks ago, MODIFIED): [IBMCloud] MonitorTests liveness/readiness probe error events repeat
Mar 12 18:52:24.937 - 58s E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS
{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
periodic-ci-openshift-release-master-ci-4.14-e2e-azure-sdn-techpreview-serial (all) - 25 runs, 80% failed, 40% of failures match = 32% impact
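The impact figure is the failure rate multiplied by the match rate: 25 runs × 80% failed = 20 failures, of which 40% (8 runs) match, and 8/25 = 32% impact.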
#1790684922562744320 (junit, 8 hours ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m28s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 4m28s, firing for 0s:
May 15 11:28:38.750 - 268s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
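Each interval below comes from replaying the cluster's ALERTS series after the run; a query of roughly this shape against the gathered Prometheus data (a hypothetical reproduction, not the test's literal code) surfaces the same pending samples:

```promql
# Any KubeAPIErrorBudgetBurn samples at or above pending during the run;
# the invariant allows none (maxAllowed=0s), so any returned series fails it.
ALERTS{alertname="KubeAPIErrorBudgetBurn",
       namespace="openshift-kube-apiserver",
       alertstate=~"pending|firing"}
```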
#1790610362240864256 (junit, 13 hours ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m58s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1m58s, firing for 0s:
May 15 06:07:06.172 - 118s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1789046062187548672 (junit, 4 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
May 10 22:30:54.857 - 58s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1788276683183230976 (junit, 6 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
May 08 20:08:35.103 - 58s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1788953741391564800 (junit, 5 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
May 10 16:32:59.504 - 58s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1787637577524711424 (junit, 8 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
May 07 01:09:13.147 - 58s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1787439581327527936 (junit, 9 days ago)
        <*errors.errorString | 0xc001ec47f0>{
            s: "promQL query returned unexpected results:
        ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|KubePodNotReady|etcdMembersDown|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdNoLeader|etcdHighFsyncDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeClientErrors|KubePersistentVolumeErrors|MCDDrainError|KubeMemoryOvercommit|MCDPivotError|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|TechPreviewNoUpgrade|ClusterNotUpgradeable",alertstate="firing",severity!="info"} >= 1
        unexpected series, each with value 1 at 1714997255.142:
        ALERTS{alertname="KubeAPITerminatedRequests", alertstate="firing", prometheus="openshift-monitoring/k8s", severity="warning"}
        ALERTS{alertname="KubeAggregatedAPIErrors", alertstate="firing", name="v1.apps.openshift.io", namespace="default", prometheus="openshift-monitoring/k8s", severity="warning"}
        ALERTS{alertname="KubeAggregatedAPIErrors", alertstate="firing", name="v1.authorization.openshift.io", namespace="default", prometheus="openshift-monitoring/k8s", severity="warning"}
        ALERTS{alertname="KubeAggregatedAPIErrors", alertstate="firing", name="v1.build.openshift.io", namespace="default", prometheus="openshift-monitoring/k8s", severity="warning"}
        ALERTS{alertname="KubeAggregatedAPIErrors", alertstate="firing", name="v1.image.openshift.io", namespace="default", prometheus="openshift-monitoring/k8s", severity="warning"}
        ALERTS{alertname="KubeAggregatedAPIErrors", alertstate="firing", name="v1.oauth.openshift.io", namespace="default", prometheus="openshift-monitoring/k8s", severity="warning"}
        ALERTS{alertname="KubeAggregatedAPIErrors", alertstate="firing", name="v1.packages.operators.coreos.com", namespace="default", prometheus="openshift-monitoring/k8s", severity="warning"}
        ALERTS{alertname="KubeAggregatedAPIErrors", alertstate="firing", name="v1.project.openshift.io", namespace="default", prometheus="openshift-monitoring/k8s", severity="warning"}
        ALERTS{alertname="KubeAggregatedAPIErrors", alertstate="firing", name="v1.quota.openshift.io", namespace="default", prometheus="openshift-monitoring/k8s", severity="warning"}
        ALERTS{alertname="KubeAggregatedAPIErrors", alertstate="firing", name="v1.route.openshift.io", namespace="default", prometheus="openshift-monitoring/k8s", severity="warning"}
        ALERTS{alertname="KubeAggregatedAPIErrors", alertstate="firing", name="v1.security.openshift.io", namespace="default", prometheus="openshift-monitoring/k8s", severity="warning"}
        ALERTS{alertname="KubeAggregatedAPIErrors", alertstate="firing", name="v1.template.openshift.io", namespace="default", prometheus="openshift-monitoring/k8s", severity="warning"}
        ALERTS{alertname="KubeAggregatedAPIErrors", alertstate="firing", name="v1.user.openshi...
#1786143321261871104 (junit, 12 days ago)
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m28s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 2m28s, firing for 0s:
May 02 22:13:26.976 - 148s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}

Found in 32.00% of runs (40.00% of failures) across 25 total runs and 1 job (80.00% failed)