Job:
periodic-ci-openshift-release-master-nightly-4.9-upgrade-from-stable-4.8-e2e-aws-upgrade-ovn-single-node (all) - 1 run, 100% failed, 100% of failures match = 100% impact
#1779920330412789760 junit, 13 days ago
Apr 15 17:53:36.001 - 120s  E openshift-apiserver-reused-connection openshift-apiserver-reused-connection is not responding to GET requests
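The first entry is the disruption monitor: for about two minutes the openshift-apiserver backend stopped answering GET requests sent over an already-established (reused) connection. As an illustration only, a poller of that shape can be sketched in Go roughly as below; the endpoint URL and one-second interval are assumptions, not the actual openshift/origin monitor.

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// pollReusedConnection issues a GET once per second over a single keep-alive
// HTTP client (so the underlying TCP connection is reused) and prints whenever
// the backend flips between responding and not responding. Illustrative sketch
// only; the real monitor lives in openshift/origin.
func pollReusedConnection(ctx context.Context, url string) {
	client := &http.Client{Timeout: 5 * time.Second} // default transport keeps connections alive
	available := true
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			resp, err := client.Get(url)
			ok := err == nil && resp.StatusCode == http.StatusOK
			if resp != nil {
				resp.Body.Close()
			}
			if ok != available {
				fmt.Printf("%s available=%v\n", time.Now().Format(time.RFC3339), ok)
				available = ok
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	pollReusedConnection(ctx, "https://example.com/healthz") // hypothetical endpoint
}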
Apr 15 17:55:29.836 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-36.us-west-2.compute.internal node/ip-10-0-131-36.us-west-2.compute.internal container/kube-scheduler reason/ContainerExit code/1 cause/Error
I0415 17:53:05.007737       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="e2e-k8s-service-lb-available-5820/service-test-nsfbf" err="0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity rules, 1 node(s) didn't match pod anti-affinity rules."
I0415 17:53:08.078751       1 scheduler.go:606] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-gnq6w" node="ip-10-0-131-36.us-west-2.compute.internal" evaluatedNodes=1 feasibleNodes=1
I0415 17:53:17.209947       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="e2e-k8s-service-lb-available-5820/service-test-nsfbf" err="0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity rules, 1 node(s) didn't match pod anti-affinity rules."
I0415 17:53:28.021649       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="e2e-k8s-service-lb-available-5820/service-test-nsfbf" err="0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity rules, 1 node(s) didn't match pod anti-affinity rules."
E0415 17:53:42.104711       1 leaderelection.go:325] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get "https://api-int.ci-op-d3h6f62t-3cd04.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0415 17:53:47.104860       1 leaderelection.go:325] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get "https://api-int.ci-op-d3h6f62t-3cd04.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler?timeout=5s": context deadline exceeded
I0415 17:53:47.104907       1 leaderelection.go:278] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition
E0415 17:53:47.104959       1 leaderelection.go:301] Failed to release lock: resource name may not be empty
F0415 17:53:47.104980       1 server.go:226] leaderelection lost
Apr 15 17:55:29.957 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-36.us-west-2.compute.internal node/ip-10-0-131-36.us-west-2.compute.internal container/kube-controller-manager reason/ContainerExit code/1 cause/Error 9] "Too few replicas" replicaSet="openshift-etcd-operator/etcd-operator-7b4794dbd6" need=1 creating=1
I0415 17:53:04.495456       1 event.go:291] "Event occurred" object="openshift-etcd-operator/etcd-operator-7b4794dbd6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: etcd-operator-7b4794dbd6-7cqkw"
I0415 17:53:04.507477       1 deployment_controller.go:490] "Error syncing deployment" deployment="openshift-etcd-operator/etcd-operator" err="Operation cannot be fulfilled on deployments.apps \"etcd-operator\": the object has been modified; please apply your changes to the latest version and try again"
I0415 17:53:04.530366       1 deployment_controller.go:490] "Error syncing deployment" deployment="openshift-etcd-operator/etcd-operator" err="Operation cannot be fulfilled on deployments.apps \"etcd-operator\": the object has been modified; please apply your changes to the latest version and try again"
E0415 17:53:41.385851       1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-d3h6f62t-3cd04.aws-2.ci.openshift.org:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0415 17:53:46.386316       1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-d3h6f62t-3cd04.aws-2.ci.openshift.org:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=5s": context deadline exceeded
I0415 17:53:46.386361       1 leaderelection.go:278] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition
I0415 17:53:46.386467       1 event.go:291] "Event occurred" object="" kind="ConfigMap" apiVersion="v1" type="Normal" reason="LeaderElection" message="ip-10-0-131-36_37dbe421-d713-4302-9ea1-e216717776c7 stopped leading"
F0415 17:53:46.386497       1 controllermanager.go:319] leaderelection lost
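Both container exits show the same pattern: on this single-node cluster the kube-apiserver became unreachable during the upgrade, the kube-scheduler and kube-controller-manager could not renew their leader-election lock before the renew deadline, and client-go's leader election then terminated the process, producing the fatal "leaderelection lost" lines. A minimal sketch of that pattern with client-go is below; the lock type, namespace, name, and timings are assumptions for illustration (these components in 4.8/4.9 use ConfigMap-based locks, as the "configmaps/kube-scheduler" URLs above show), not the components' actual configuration.

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	hostname, _ := os.Hostname()

	// Hypothetical Lease-based lock used purely for illustration.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "example-controller"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long a granted lease is considered valid
		RenewDeadline: 10 * time.Second, // must renew within this window or give up leadership
		RetryPeriod:   2 * time.Second,  // how often renewal/acquisition is retried
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Controller loops run here while the lease is held.
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				// Matches the fatal "leaderelection lost" lines above: once the
				// lease cannot be renewed (e.g. the apiserver is unreachable),
				// the process exits rather than risk two active leaders.
				klog.Fatalf("leaderelection lost")
			},
		},
	})
}

Exiting on lost leadership is deliberate: with only one node there is no other apiserver to fall back to, so any interruption long enough to exceed the renew deadline shows up as these container restarts.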

Found in 100.00% of runs (100.00% of failures) across 1 total run and 1 job (100.00% failed) in 89ms