Job:
periodic-ci-openshift-release-master-ci-4.7-upgrade-from-stable-4.6-e2e-aws-ovn-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1780085420428627968 junit 6 hours ago
Apr 16 05:49:06.537 E oauth-apiserver-reused-connection oauth-apiserver-reused-connection is not responding to GET requests
Apr 16 05:49:08.417 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-171-150.us-west-1.compute.internal node/ip-10-0-171-150.us-west-1.compute.internal container/kube-scheduler container exited with code 1 (Error):
      1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-etcd/etcd-quorum-guard-56446b67db-lwkq8" err="0/6 nodes are available: 2 node(s) didn't match Pod's node affinity, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable."
I0416 05:48:03.505722       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-apiserver/apiserver-5bdf97b9d9-dqx6p" err="0/6 nodes are available: 2 node(s) didn't match Pod's node affinity, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable."
I0416 05:48:04.505962       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-oauth-apiserver/apiserver-7c87fc6fcb-6g429" err="0/6 nodes are available: 2 node(s) didn't match Pod's node affinity, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable."
I0416 05:48:04.506225       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-etcd/etcd-quorum-guard-56446b67db-lwkq8" err="0/6 nodes are available: 2 node(s) didn't match Pod's node affinity, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable."
E0416 05:48:48.062414       1 leaderelection.go:325] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get "https://api-int.ci-op-vw180vwt-374d8.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0416 05:48:48.062468       1 leaderelection.go:278] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition
E0416 05:48:48.062559       1 leaderelection.go:301] Failed to release lock: resource name may not be empty
F0416 05:48:48.062599       1 server.go:217] leaderelection lost
Apr 16 05:49:08.476 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-6865d94464-6ftw7 node/ip-10-0-171-150.us-west-1.compute.internal container/snapshot-controller container exited with code 255 (Error):
#1780085420428627968 junit 6 hours ago
Apr 16 05:51:38.635 E clusteroperator/monitoring changed Degraded to True: UpdatingnodeExporterFailed: Failed to rollout the stack. Error: running task Updating node-exporter failed: reconciling node-exporter DaemonSet failed: updating DaemonSet object failed: waiting for DaemonSetRollout of openshift-monitoring/node-exporter: etcdserver: request timed out
Apr 16 05:51:40.279 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-171-150.us-west-1.compute.internal node/ip-10-0-171-150.us-west-1.compute.internal container/kube-scheduler container exited with code 1 (Error):
322] "Unable to schedule pod; no fit; waiting" pod="openshift-etcd/etcd-quorum-guard-56446b67db-lwkq8" err="0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 3 node(s) didn't match Pod's node affinity."
I0416 05:51:24.785875       1 scheduler.go:606] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-h5m8l" node="ip-10-0-194-44.us-west-1.compute.internal" evaluatedNodes=6 feasibleNodes=3
I0416 05:51:33.393965       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-apiserver/apiserver-5bdf97b9d9-dqx6p" err="0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 3 node(s) didn't match Pod's node affinity."
I0416 05:51:33.394380       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-oauth-apiserver/apiserver-7c87fc6fcb-6g429" err="0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 3 node(s) didn't match Pod's node affinity."
I0416 05:51:33.394672       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-etcd/etcd-quorum-guard-56446b67db-lwkq8" err="0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 3 node(s) didn't match Pod's node affinity."
E0416 05:51:38.478571       1 leaderelection.go:325] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: etcdserver: request timed out
I0416 05:51:39.635963       1 leaderelection.go:278] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition
E0416 05:51:39.636398       1 leaderelection.go:301] Failed to release lock: resource name may not be empty
F0416 05:51:39.636457       1 server.go:217] leaderelection lost
Apr 16 05:51:40.588 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-6865d94464-6ftw7 node/ip-10-0-171-150.us-west-1.compute.internal container/snapshot-controller container exited with code 255 (Error):
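Note on the repeated "didn't match pod anti-affinity rules ... node(s) were unschedulable" lines above: workloads such as etcd-quorum-guard and the apiserver deployments spread replicas across control-plane nodes with hard pod anti-affinity, so while the upgrade has nodes cordoned the displaced replica has nowhere legal to land and the scheduler just waits. A minimal Go sketch of that kind of rule using the k8s.io/api types (the label selector below is assumed for illustration, not copied from the real manifests):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Hard (requiredDuringScheduling) pod anti-affinity: no two pods matching
	// the selector may share the same kubernetes.io/hostname topology domain.
	affinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					// Assumed label; the real quorum-guard/apiserver selectors differ.
					MatchLabels: map[string]string{"name": "etcd-quorum-guard"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	// Print the rule as it would appear under a pod spec's affinity field.
	out, err := yaml.Marshal(affinity)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}

Those "no fit; waiting" entries are Info-level and largely expected while control-plane nodes are cordoned; the runs only turn fatal once the API or etcd stops answering and leader election fails, as sketched after the second job below.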
periodic-ci-openshift-release-master-nightly-4.9-upgrade-from-stable-4.8-e2e-aws-upgrade-ovn-single-node (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1779920330412789760 junit 16 hours ago
Apr 15 17:53:36.001 - 120s  E openshift-apiserver-reused-connection openshift-apiserver-reused-connection is not responding to GET requests
Apr 15 17:55:29.836 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-36.us-west-2.compute.internal node/ip-10-0-131-36.us-west-2.compute.internal container/kube-scheduler reason/ContainerExit code/1 cause/Error
I0415 17:53:05.007737       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="e2e-k8s-service-lb-available-5820/service-test-nsfbf" err="0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity rules, 1 node(s) didn't match pod anti-affinity rules."
I0415 17:53:08.078751       1 scheduler.go:606] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-gnq6w" node="ip-10-0-131-36.us-west-2.compute.internal" evaluatedNodes=1 feasibleNodes=1
I0415 17:53:17.209947       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="e2e-k8s-service-lb-available-5820/service-test-nsfbf" err="0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity rules, 1 node(s) didn't match pod anti-affinity rules."
I0415 17:53:28.021649       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="e2e-k8s-service-lb-available-5820/service-test-nsfbf" err="0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity rules, 1 node(s) didn't match pod anti-affinity rules."
E0415 17:53:42.104711       1 leaderelection.go:325] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get "https://api-int.ci-op-d3h6f62t-3cd04.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0415 17:53:47.104860       1 leaderelection.go:325] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get "https://api-int.ci-op-d3h6f62t-3cd04.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler?timeout=5s": context deadline exceeded
I0415 17:53:47.104907       1 leaderelection.go:278] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition
E0415 17:53:47.104959       1 leaderelection.go:301] Failed to release lock: resource name may not be empty
F0415 17:53:47.104980       1 server.go:226] leaderelection lost
Apr 15 17:55:29.957 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-36.us-west-2.compute.internal node/ip-10-0-131-36.us-west-2.compute.internal container/kube-controller-manager reason/ContainerExit code/1 cause/Error
9] "Too few replicas" replicaSet="openshift-etcd-operator/etcd-operator-7b4794dbd6" need=1 creating=1
I0415 17:53:04.495456       1 event.go:291] "Event occurred" object="openshift-etcd-operator/etcd-operator-7b4794dbd6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: etcd-operator-7b4794dbd6-7cqkw"
I0415 17:53:04.507477       1 deployment_controller.go:490] "Error syncing deployment" deployment="openshift-etcd-operator/etcd-operator" err="Operation cannot be fulfilled on deployments.apps \"etcd-operator\": the object has been modified; please apply your changes to the latest version and try again"
I0415 17:53:04.530366       1 deployment_controller.go:490] "Error syncing deployment" deployment="openshift-etcd-operator/etcd-operator" err="Operation cannot be fulfilled on deployments.apps \"etcd-operator\": the object has been modified; please apply your changes to the latest version and try again"
E0415 17:53:41.385851       1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-d3h6f62t-3cd04.aws-2.ci.openshift.org:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0415 17:53:46.386316       1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-d3h6f62t-3cd04.aws-2.ci.openshift.org:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=5s": context deadline exceeded
I0415 17:53:46.386361       1 leaderelection.go:278] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition
I0415 17:53:46.386467       1 event.go:291] "Event occurred" object="" kind="ConfigMap" apiVersion="v1" type="Normal" reason="LeaderElection" message="ip-10-0-131-36_37dbe421-d713-4302-9ea1-e216717776c7 stopped leading"
F0415 17:53:46.386497       1 controllermanager.go:319] leaderelection lost
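Every matched failure above ends the same way: the component can no longer reach api-int (or etcd times out), leaderelection.go fails to renew its lease before the renew deadline, and the process exits deliberately, which is the final "leaderelection lost" fatal. A minimal sketch of that mechanism with client-go's leaderelection package (namespace, lock name, and timing values below are illustrative, not read from these jobs):

package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	// ConfigMap+Lease lock, matching the configmaps/kube-scheduler GETs in the logs.
	lock, err := resourcelock.New(
		resourcelock.ConfigMapsLeasesResourceLock,
		"openshift-kube-scheduler", "kube-scheduler",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id},
	)
	if err != nil {
		klog.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 137 * time.Second, // illustrative timings
		RenewDeadline: 107 * time.Second,
		RetryPeriod:   26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Run the controller loops while the lock is held.
			},
			OnStoppedLeading: func() {
				// Renewals kept failing past RenewDeadline (API unreachable or
				// etcd timing out), so exit rather than risk two active leaders.
				klog.Fatal("leaderelection lost")
			},
		},
	})
}

RunOrDie does not hand control back once leadership is lost; crashing and letting the kubelet restart the static pod is the intended recovery path, which is why these show up as container exits rather than hangs.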

Found in 0.01% of runs (0.03% of failures) across 37444 total runs and 4768 jobs (20.49% failed).