Job:
periodic-ci-openshift-release-master-ci-4.7-upgrade-from-stable-4.6-e2e-aws-ovn-upgrade (all) - 1 run, 100% failed, 100% of failures match = 100% impact
#1780085420428627968 (junit, 13 days ago)
Apr 16 05:49:06.537 E oauth-apiserver-reused-connection oauth-apiserver-reused-connection is not responding to GET requests
Apr 16 05:49:08.417 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-171-150.us-west-1.compute.internal node/ip-10-0-171-150.us-west-1.compute.internal container/kube-scheduler container exited with code 1 (Error):
      1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-etcd/etcd-quorum-guard-56446b67db-lwkq8" err="0/6 nodes are available: 2 node(s) didn't match Pod's node affinity, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable."
I0416 05:48:03.505722       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-apiserver/apiserver-5bdf97b9d9-dqx6p" err="0/6 nodes are available: 2 node(s) didn't match Pod's node affinity, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable."
I0416 05:48:04.505962       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-oauth-apiserver/apiserver-7c87fc6fcb-6g429" err="0/6 nodes are available: 2 node(s) didn't match Pod's node affinity, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable."
I0416 05:48:04.506225       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-etcd/etcd-quorum-guard-56446b67db-lwkq8" err="0/6 nodes are available: 2 node(s) didn't match Pod's node affinity, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable."
E0416 05:48:48.062414       1 leaderelection.go:325] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get "https://api-int.ci-op-vw180vwt-374d8.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0416 05:48:48.062468       1 leaderelection.go:278] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition
E0416 05:48:48.062559       1 leaderelection.go:301] Failed to release lock: resource name may not be empty
F0416 05:48:48.062599       1 server.go:217] leaderelection lost
Apr 16 05:49:08.476 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-6865d94464-6ftw7 node/ip-10-0-171-150.us-west-1.compute.internal container/snapshot-controller container exited with code 255 (Error):
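
The "didn't match pod anti-affinity rules" messages above are what the scheduler reports when a workload such as etcd-quorum-guard carries a hard anti-affinity constraint (one replica per control-plane host) while the upgrade has nodes cordoned. A minimal sketch of that kind of constraint, expressed with the upstream Go API types; the label key/value and function name are illustrative assumptions, not values copied from the cluster:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildAntiAffinity returns a hard pod anti-affinity that forbids two pods
// carrying the given label from landing on the same node. With three replicas,
// three schedulable nodes matching the pod's node affinity are required; while
// masters are cordoned for the upgrade, the scheduler logs
// "0/6 nodes are available: ... didn't match pod anti-affinity rules ...
// node(s) were unschedulable", as in the excerpt above.
func buildAntiAffinity(appLabel string) *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					// assumed label key/value for illustration
					MatchLabels: map[string]string{"k8s-app": appLabel},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", buildAntiAffinity("etcd-quorum-guard"))
}
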
#1780085420428627968 (junit, 13 days ago)
Apr 16 05:51:38.635 E clusteroperator/monitoring changed Degraded to True: UpdatingnodeExporterFailed: Failed to rollout the stack. Error: running task Updating node-exporter failed: reconciling node-exporter DaemonSet failed: updating DaemonSet object failed: waiting for DaemonSetRollout of openshift-monitoring/node-exporter: etcdserver: request timed out
Apr 16 05:51:40.279 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-171-150.us-west-1.compute.internal node/ip-10-0-171-150.us-west-1.compute.internal container/kube-scheduler container exited with code 1 (Error):
322] "Unable to schedule pod; no fit; waiting" pod="openshift-etcd/etcd-quorum-guard-56446b67db-lwkq8" err="0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 3 node(s) didn't match Pod's node affinity."
I0416 05:51:24.785875       1 scheduler.go:606] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-h5m8l" node="ip-10-0-194-44.us-west-1.compute.internal" evaluatedNodes=6 feasibleNodes=3
I0416 05:51:33.393965       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-apiserver/apiserver-5bdf97b9d9-dqx6p" err="0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 3 node(s) didn't match Pod's node affinity."
I0416 05:51:33.394380       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-oauth-apiserver/apiserver-7c87fc6fcb-6g429" err="0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 3 node(s) didn't match Pod's node affinity."
I0416 05:51:33.394672       1 factory.go:322] "Unable to schedule pod; no fit; waiting" pod="openshift-etcd/etcd-quorum-guard-56446b67db-lwkq8" err="0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules, 3 node(s) didn't match Pod's node affinity."
E0416 05:51:38.478571       1 leaderelection.go:325] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: etcdserver: request timed out
I0416 05:51:39.635963       1 leaderelection.go:278] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition
E0416 05:51:39.636398       1 leaderelection.go:301] Failed to release lock: resource name may not be empty
F0416 05:51:39.636457       1 server.go:217] leaderelection lost
Apr 16 05:51:40.588 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-6865d94464-6ftw7 node/ip-10-0-171-150.us-west-1.compute.internal container/snapshot-controller container exited with code 255 (Error):
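
The fatal "leaderelection lost" lines in both excerpts are the expected behaviour of client-go leader election: once the lease cannot be renewed within the renew deadline (here because the apiserver/etcd requests timed out), OnStoppedLeading fires and the component exits so a standby instance can take over, which is why the kube-scheduler container exits non-zero. A minimal sketch under assumed names; the lease-backed lock, timings, and identity below are illustrative, not the scheduler's actual configuration (the log shows this release used a configmap-backed lock named openshift-kube-scheduler/kube-scheduler):

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Illustrative Lease-backed lock; namespace, name, and identity are assumptions.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "openshift-kube-scheduler", Name: "kube-scheduler"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // assumed timings for illustration
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Run the component's control loops while the lease is held.
				<-ctx.Done()
			},
			// When renewals time out (etcd/apiserver unreachable), this fires and the
			// process exits non-zero -- the same "leaderelection lost" fatal seen above.
			OnStoppedLeading: func() {
				klog.Fatal("leaderelection lost")
			},
		},
	})
}
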

Found in 100.00% of runs (100.00% of failures) across 1 total run and 1 job (100.00% failed) in 90ms