Job:
pull-ci-openshift-installer-master-e2e-azurestack (all) - 34 runs, 82% failed, 21% of failures match = 18% impact
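The impact figure is the product of the failure rate and the match rate: roughly 82% of the 34 runs failed (28 runs), about 21% of those failures match this symptom (6 runs), and 6 of 34 runs is roughly 18%. A minimal Python sketch of that arithmetic, using the figures from the summary line at the bottom of this section:

    # Impact = share of ALL runs whose failures match this symptom.
    # Figures from the job summary: 34 runs, 82.35% failed,
    # 21.43% of those failures matching the alert pattern.
    total_runs = 34
    failure_rate = 0.8235   # 28 of 34 runs failed
    match_rate = 0.2143     # 6 of the 28 failures match

    failed_runs = round(total_runs * failure_rate)     # 28
    matching_runs = round(failed_runs * match_rate)    # 6
    impact = matching_runs / total_runs                # ~0.1765, reported as "18% impact"
    print(f"{matching_runs}/{total_runs} runs = {impact:.2%} impact")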
#1790795704436789248 (junit, 2 days ago)
V2 alert ClusterNotUpgradeable fired for 13m10s with labels: alertstate/firing severity/info ALERTS{alertname="ClusterNotUpgradeable", alertstate="firing", condition="Upgradeable", endpoint="metrics", name="version", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", severity="info"} result=reject
V2 alert ClusterOperatorDown fired for 5m28s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="authentication", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="OAuthServerRouteEndpointAccessibleController_EndpointUnavailable", severity="critical"} result=reject
V2 alert InsightsRecommendationActive fired for 1h9m52s with labels: alertstate/firing severity/info ALERTS{alertname="InsightsRecommendationActive", alertstate="firing", container="insights-operator", description="The control plane nodes' disks of OpenShift cluster on Azure don't provide enough IOPS performance for etcd", endpoint="https", info_link="https://console.redhat.com/openshift/insights/advisor/clusters/a3753462-ec33-410b-89c3-36e5d3cb3a7f?first=ccx_rules_ocp.external.rules.disk_low_throughput_in_azure|ERROR_DISK_LOW_THROUGHPUT_IN_AZURE", instance="10.130.0.21:8443", job="metrics", namespace="openshift-insights", pod="insights-operator-796c74745b-zrng9", prometheus="openshift-monitoring/k8s", service="metrics", severity="info", total_risk="Important"} result=reject
#1790797001785348096 (junit, 2 days ago)
V2 alert ClusterNotUpgradeable fired for 18m56s with labels: alertstate/firing severity/info ALERTS{alertname="ClusterNotUpgradeable", alertstate="firing", condition="Upgradeable", endpoint="metrics", name="version", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", severity="info"} result=reject
V2 alert ClusterOperatorDown fired for 5m28s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="console", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="RouteHealth_FailedGet", severity="critical"} result=reject
V2 alert InsightsRecommendationActive fired for 1h13m8s with labels: alertstate/firing severity/info ALERTS{alertname="InsightsRecommendationActive", alertstate="firing", container="insights-operator", description="The control plane nodes' disks of OpenShift cluster on Azure don't provide enough IOPS performance for etcd", endpoint="https", info_link="https://console.redhat.com/openshift/insights/advisor/clusters/1664d335-772e-4125-998e-db4aa304af83?first=ccx_rules_ocp.external.rules.disk_low_throughput_in_azure|ERROR_DISK_LOW_THROUGHPUT_IN_AZURE", instance="10.128.0.25:8443", job="metrics", namespace="openshift-insights", pod="insights-operator-76b86898-2b9tk", prometheus="openshift-monitoring/k8s", service="metrics", severity="info", total_risk="Important"} result=reject
#1790412388864888832 (junit, 3 days ago)
V2 alert ClusterNotUpgradeable fired for 22m10s with labels: alertstate/firing severity/info ALERTS{alertname="ClusterNotUpgradeable", alertstate="firing", condition="Upgradeable", endpoint="metrics", name="version", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", severity="info"} result=reject
V2 alert ClusterOperatorDown fired for 4m40s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="authentication", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="OAuthServerRouteEndpointAccessibleController_EndpointUnavailable", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 4m58s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="authentication", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="OAuthServerRouteEndpointAccessibleController_EndpointUnavailable", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 5m28s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="console", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="RouteHealth_FailedGet", severity="critical"} result=reject
V2 alert PodStartupStorageOperationsFailing fired for 3m58s with labels: alertstate/firing severity/info ALERTS{alertname="PodStartupStorageOperationsFailing", alertstate="firing", endpoint="https-metrics", instance="10.0.128.4:10250", job="kubelet", metrics_path="/metrics", migrated="false", namespace="kube-system", node="ci-op-bn7q811n-195eb-lr95g-worker-mtcazs-9g9gr", operation_name="volume_mount", prometheus="openshift-monitoring/k8s", service="kubelet", severity="info", status="fail-unknown", volume_plugin="kubernetes.io/projected"} result=reject
#1790145768804323328 (junit, 4 days ago)
V2 alert ClusterNotUpgradeable fired for 13m44s with labels: alertstate/firing severity/info ALERTS{alertname="ClusterNotUpgradeable", alertstate="firing", condition="Upgradeable", endpoint="metrics", name="version", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", severity="info"} result=reject
V2 alert ClusterOperatorDown fired for 4m44s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="console", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="RouteHealth_FailedGet", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 5m28s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="console", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="RouteHealth_FailedGet", severity="critical"} result=reject
V2 alert InsightsRecommendationActive fired for 1h11m50s with labels: alertstate/firing severity/info ALERTS{alertname="InsightsRecommendationActive", alertstate="firing", container="insights-operator", description="The control plane nodes' disks of OpenShift cluster on Azure don't provide enough IOPS performance for etcd", endpoint="https", info_link="https://console.redhat.com/openshift/insights/advisor/clusters/ff18a977-c736-4159-87d8-10f4f5554ab3?first=ccx_rules_ocp.external.rules.disk_low_throughput_in_azure|ERROR_DISK_LOW_THROUGHPUT_IN_AZURE", instance="10.129.0.24:8443", job="metrics", namespace="openshift-insights", pod="insights-operator-7dc557b558-7xvcw", prometheus="openshift-monitoring/k8s", service="metrics", severity="info", total_risk="Important"} result=reject
#1788563523152908288 (junit, 8 days ago)
V2 alert ClusterNotUpgradeable fired for 10m46s with labels: alertstate/firing severity/info ALERTS{alertname="ClusterNotUpgradeable", alertstate="firing", condition="Upgradeable", endpoint="metrics", name="version", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", severity="info"} result=reject
V2 alert ClusterOperatorDown fired for 5m28s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="console", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="RouteHealth_FailedGet", severity="critical"} result=reject
V2 alert InsightsRecommendationActive fired for 1h12m28s with labels: alertstate/firing severity/info ALERTS{alertname="InsightsRecommendationActive", alertstate="firing", container="insights-operator", description="The control plane nodes' disks of OpenShift cluster on Azure don't provide enough IOPS performance for etcd", endpoint="https", info_link="https://console.redhat.com/openshift/insights/advisor/clusters/8a3664ae-e51f-4652-a6d4-e496bcc6034f?first=ccx_rules_ocp.external.rules.disk_low_throughput_in_azure|ERROR_DISK_LOW_THROUGHPUT_IN_AZURE", instance="10.128.0.29:8443", job="metrics", namespace="openshift-insights", pod="insights-operator-994dc546-bw4nw", prometheus="openshift-monitoring/k8s", service="metrics", severity="info", total_risk="Important"} result=reject
#1788206622292578304 (junit, 9 days ago)
V2 alert ClusterNotUpgradeable fired for 20m22s with labels: alertstate/firing severity/info ALERTS{alertname="ClusterNotUpgradeable", alertstate="firing", condition="Upgradeable", endpoint="metrics", name="version", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", severity="info"} result=reject
V2 alert ClusterOperatorDown fired for 4m58s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="authentication", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="OAuthServerRouteEndpointAccessibleController_EndpointUnavailable", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 5m28s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="console", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="RouteHealth_FailedGet", severity="critical"} result=reject
V2 alert InsightsRecommendationActive fired for 1h17m2s with labels: alertstate/firing severity/info ALERTS{alertname="InsightsRecommendationActive", alertstate="firing", container="insights-operator", description="The control plane nodes' disks of OpenShift cluster on Azure don't provide enough IOPS performance for etcd", endpoint="https", info_link="https://console.redhat.com/openshift/insights/advisor/clusters/253b2a49-368e-49d2-91ee-6ee6a5be76ce?first=ccx_rules_ocp.external.rules.disk_low_throughput_in_azure|ERROR_DISK_LOW_THROUGHPUT_IN_AZURE", instance="10.130.0.30:8443", job="metrics", namespace="openshift-insights", pod="insights-operator-65655f8b9f-nxgvj", prometheus="openshift-monitoring/k8s", service="metrics", severity="info", total_risk="Important"} result=reject

Found in 17.65% of runs (21.43% of failures) across 34 total runs and 1 job (82.35% failed) in 101ms.
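The rejected intervals above are Prometheus ALERTS series, so their label sets can be replayed as a PromQL query against a live cluster's monitoring stack. A minimal sketch, assuming network access to the prometheus-k8s route in openshift-monitoring and a bearer token (the route host and token below are placeholders, not values from this report; a token can be obtained with `oc whoami -t`):

    # Sketch: list currently firing ClusterOperatorDown / ClusterNotUpgradeable
    # alerts via the Prometheus HTTP API. Host and token are placeholders.
    import requests

    PROM = "https://prometheus-k8s-openshift-monitoring.apps.example.com"  # placeholder route host
    TOKEN = "sha256~REDACTED"  # placeholder bearer token, e.g. from `oc whoami -t`

    query = 'ALERTS{alertname=~"ClusterOperatorDown|ClusterNotUpgradeable", alertstate="firing"}'
    resp = requests.get(
        f"{PROM}/api/v1/query",
        params={"query": query},
        headers={"Authorization": f"Bearer {TOKEN}"},
        verify=False,  # CI clusters often use self-signed certs; avoid in production
    )
    resp.raise_for_status()
    for series in resp.json()["data"]["result"]:
        labels = series["metric"]
        print(labels.get("alertname"), labels.get("name"), labels.get("reason"))

The same PromQL expression can be pasted directly into the console's Observe > Metrics page; the Python wrapper only makes filtering on the name and reason labels scriptable.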