Job:
#OCPBUGS-32375 issue 10 days ago Unsuccessful cluster installation with 4.15 nightlies on s390x using ABI CLOSED
Issue 15945005: Unsuccessful cluster installation with 4.15 nightlies on s390x using ABI
Description: When using the latest s390x release builds from the 4.15 nightly stream for an Agent-Based Installation of SNO on IBM Z KVM, the installation fails at the end while watching cluster operators, even though the DNS and HAProxy configurations are correct: the same setup works with 4.15.x stable release image builds.
 
 Below is the error encountered multiple times when the "release:s390x-latest" image is used to boot the cluster. This image is passed at boot time through OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE, while the binary is fetched from the latest stable builds at [https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/latest/], which corresponds to roughly 4.15.x.
 
 *release-image:*
 {code:java}
 registry.build01.ci.openshift.org/ci-op-cdkdqnqn/release@sha256:c6eb4affa5c44d2ad220d7064e92270a30df5f26d221e35664f4d5547a835617
 {code}
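 For context, a minimal sketch of how a nightly payload is typically combined with a stable binary in this flow; the download URL is the mirror quoted above, while the archive name, install directory, and exact invocation are illustrative assumptions rather than values taken from the job:
 {code:bash}
 # Stable openshift-install binary from the s390x mirror (illustrative archive name).
 curl -LO https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/latest/openshift-install-linux.tar.gz
 tar -xzf openshift-install-linux.tar.gz openshift-install

 # Nightly payload injected via the override while generating the agent ISO for SNO.
 export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=registry.build01.ci.openshift.org/ci-op-cdkdqnqn/release@sha256:c6eb4affa5c44d2ad220d7064e92270a30df5f26d221e35664f4d5547a835617
 ./openshift-install agent create image --dir ./agent-sno --log-level debug
 {code}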
 
 *PROW CI Build :* [https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_release/47965/rehearse-47965-periodic-ci-openshift-multiarch-master-nightly-4.15-e2e-agent-ibmz-sno/1780162365824700416] 
 
 *Error:* 
 {code:java}
 '/root/agent-sno/openshift-install wait-for install-complete --dir /root/agent-sno/ --log-level debug'
 Warning: Permanently added '128.168.142.71' (ED25519) to the list of known hosts.
 level=debug msg=OpenShift Installer 4.15.8
 level=debug msg=Built from commit f4f5d0ee0f7591fd9ddf03ac337c804608102919
 level=debug msg=Loading Install Config...
 level=debug msg=  Loading SSH Key...
 level=debug msg=  Loading Base Domain...
 level=debug msg=    Loading Platform...
 level=debug msg=  Loading Cluster Name...
 level=debug msg=    Loading Base Domain...
 level=debug msg=    Loading Platform...
 level=debug msg=  Loading Pull Secret...
 level=debug msg=  Loading Platform...
 level=debug msg=Loading Agent Config...
 level=debug msg=Using Agent Config loaded from state file
 level=warning msg=An agent configuration was detected but this command is not the agent wait-for command
 level=info msg=Waiting up to 40m0s (until 10:15AM UTC) for the cluster at https://api.agent-sno.abi-ci.com:6443 to initialize...
 W0416 09:35:51.793770    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:35:51.793827    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 [... the same reflector.go warning/error pair ("failed to list *v1.ClusterVersion ... dial tcp 10.244.64.4:6443: connect: connection refused") repeats with increasing backoff for the rest of the 40-minute wait; only the final pair is kept below ...]
 W0416 10:15:17.227351    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:15:17.227424    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 level=error msg=Attempted to gather ClusterOperator status after wait failure: listing ClusterOperator objects: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 10.244.64.4:6443: connect: connection refused
 level=error msg=Cluster initialization failed because one or more operators are not functioning properly.
 level=error msg=The cluster should be accessible for troubleshooting as detailed in the documentation linked below,
 level=error msg=https://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html
 level=error msg=The 'wait-for install-complete' subcommand can then be used to continue the installation
 level=error msg=failed to initialize the cluster: timed out waiting for the condition
 {"component":"entrypoint","error":"wrapped process failed: exit status 6","file":"k8s.io/test-infra/prow/entrypoint/run.go:84","func":"k8s.io/test-infra/prow/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-04-16T10:15:51Z"}
 error: failed to execute wrapped command: exit status 6 {code}
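 When the wait times out like this, a short sketch of the checks that could be run against the quoted endpoint before retrying; the paths match the command shown at the top of the log, and the rest is standard installer/curl usage rather than anything taken from the job:
 {code:bash}
 # Is the kube-apiserver answering at all on the reported API hostname?
 curl -k https://api.agent-sno.abi-ci.com:6443/readyz

 # The agent-based flow has its own wait-for subcommands (the log even warns that the
 # plain wait-for was used despite an agent configuration being present):
 /root/agent-sno/openshift-install agent wait-for bootstrap-complete --dir /root/agent-sno/ --log-level debug
 /root/agent-sno/openshift-install agent wait-for install-complete --dir /root/agent-sno/ --log-level debug
 {code}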
Status: CLOSED
#OCPBUGS-32517 issue 40 hours ago Missing worker nodes on metal Verified
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[12603]: Unpause all baremetal hosts
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[18264]: E0422 05:33:53.630867   18264 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[18264]: E0422 05:33:53.631351   18264 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused

... 4 lines not shown

#OCPBUGS-31763 issue 10 days ago gcp install cluster creation fails after 30-40 minutes New
Issue 15921939: gcp install cluster creation fails after 30-40 minutes
Description: Component Readiness has found a potential regression in the "install should succeed: overall" test. I see this on various platforms, but I started digging into the GCP failures. No installer log bundle is created, which seriously hinders my ability to dig further.
 
 Bootstrap succeeds, and then 30 minutes after waiting for cluster creation, it dies.
 
 From [https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-gcp-sdn-serial/1775871000018161664]
 
 search.ci tells me this affects nearly 10% of jobs on GCP:
 
 [https://search.dptools.openshift.org/?search=Attempted+to+gather+ClusterOperator+status+after+installation+failure%3A+listing+ClusterOperator+objects.*connection+refused&maxAge=168h&context=1&type=bug%2Bissue%2Bjunit&name=.*4.16.*gcp.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job]
 
  
 {code:java}
 time="2024-04-04T13:27:50Z" level=info msg="Waiting up to 40m0s (until 2:07PM UTC) for the cluster at https://api.ci-op-n3pv5pn3-4e5f3.XXXXXXXXXXXXXXXXXXXXXX:6443 to initialize..."
 time="2024-04-04T14:07:50Z" level=error msg="Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get \"https://api.ci-op-n3pv5pn3-4e5f3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/config.openshift.io/v1/clusteroperators\": dial tcp 35.238.130.20:6443: connect: connection refused"
 time="2024-04-04T14:07:50Z" level=error msg="Cluster initialization failed because one or more operators are not functioning properly.\nThe cluster should be accessible for troubleshooting as detailed in the documentation linked below,\nhttps://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html\nThe 'wait-for install-complete' subcommand can then be used to continue the installation"
 time="2024-04-04T14:07:50Z" level=error msg="failed to initialize the cluster: timed out waiting for the condition" {code}
  
 
 Probability of significant regression: 99.44%
 
 Sample (being evaluated) Release: 4.16
 Start Time: 2024-03-29T00:00:00Z
 End Time: 2024-04-04T23:59:59Z
 Success Rate: 68.75%
 Successes: 11
 Failures: 5
 Flakes: 0
 
 Base (historical) Release: 4.15
 Start Time: 2024-02-01T00:00:00Z
 End Time: 2024-02-28T23:59:59Z
 Success Rate: 96.30%
 Successes: 52
 Failures: 2
 Flakes: 0
 
 View the test details report at [https://sippy.dptools.openshift.org/sippy-ng/component_readiness/test_details?arch=amd64&arch=amd64&baseEndTime=2024-02-28%2023%3A59%3A59&baseRelease=4.15&baseStartTime=2024-02-01%2000%3A00%3A00&capability=Other&component=Installer%20%2F%20openshift-installer&confidence=95&environment=sdn%20upgrade-micro%20amd64%20gcp%20standard&excludeArches=arm64%2Cheterogeneous%2Cppc64le%2Cs390x&excludeClouds=openstack%2Cibmcloud%2Clibvirt%2Covirt%2Cunknown&excludeVariants=hypershift%2Cosd%2Cmicroshift%2Ctechpreview%2Csingle-node%2Cassisted%2Ccompact&groupBy=cloud%2Carch%2Cnetwork&ignoreDisruption=true&ignoreMissing=false&minFail=3&network=sdn&network=sdn&pity=5&platform=gcp&platform=gcp&sampleEndTime=2024-04-04%2023%3A59%3A59&sampleRelease=4.16&sampleStartTime=2024-03-29%2000%3A00%3A00&testId=cluster%20install%3A0cb1bb27e418491b1ffdacab58c5c8c0&testName=install%20should%20succeed%3A%20overall&upgrade=upgrade-micro&upgrade=upgrade-micro&variant=standard&variant=standard]
Status: New
#OCPBUGS-27755 issue 9 days ago openshift-kube-apiserver down and is not being restarted New
Issue 15736514: openshift-kube-apiserver down and is not being restarted
Description: Description of problem:
 {code:none}
 SNO cluster; this is the second time the issue has happened.
 
 Errors like the following are reported:
 
 ~~~
 failed to fetch token: Post "https://api-int.<cluster>:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp <ip>:6443: connect: connection refused
 ~~~
 
 Checking the pod logs, the kube-apiserver pod is terminated and is not being restarted:
 
 ~~~
 2024-01-13T09:41:40.931716166Z I0113 09:41:40.931584       1 main.go:213] Received signal terminated. Forwarding to sub-process "hyperkube".
 ~~~{code}
 Version-Release number of selected component (if applicable):
 {code:none}
    4.13.13 {code}
 How reproducible:
 {code:none}
     Not reproducible but has happened twice{code}
 Steps to Reproduce:
 {code:none}
     1.
     2.
     3.
     {code}
 Actual results:
 {code:none}
     API is not available and kube-apiserver is not being restarted{code}
 Expected results:
 {code:none}
     We would expect to see kube-apiserver restarts{code}
 Additional info:
 {code:none}
    {code}
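 A minimal sketch of the node-level checks for a static kube-apiserver pod that is terminated and never restarted (run on the SNO node; the container name, unit, and manifest path are the standard ones, not taken from this case):
 {code:bash}
 # Is the kube-apiserver container exited, or gone entirely?
 sudo crictl ps -a --name kube-apiserver

 # Kubelet's view of the static pod over the relevant window (timestamp from the log above).
 sudo journalctl -u kubelet --since "2024-01-13 09:30" | grep -i kube-apiserver

 # The static pod manifest should still be present for kubelet to restart it.
 ls /etc/kubernetes/manifests/
 {code}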
Status: New
#OCPBUGS-33157 issue 40 hours ago IPv6 metal-ipi jobs: master-bmh-update losing access to API Verified
Issue 15978085: IPv6 metal-ipi jobs: master-bmh-update losing access to API
Description: The last 4 IPv6 jobs are failing on the same error
 
 https://prow.ci.openshift.org/job-history/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-ipv6
 master-bmh-update.log loses access to the API when trying to get/update the BMH details.
 
 https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-ipv6/1785492737169035264
 
 
 
 {noformat}
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[4663]: Waiting for 3 masters to become provisioned
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.531242   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.531808   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.533281   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.533630   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.535180   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: The connection to the server api-int.ostest.test.metalkube.org:6443 was refused - did you specify the right host or port?
 {noformat}
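 A sketch of the checks that would narrow down whether this is the API VIP or the client side; the VIP and hostname are the ones from the log, the rest is ordinary dig/curl/oc usage:
 {code:bash}
 # Does the internal API name still resolve to the expected IPv6 VIP?
 dig AAAA api-int.ostest.test.metalkube.org +short

 # -g stops curl from globbing the bracketed IPv6 literal.
 curl -gk "https://[fd2e:6f44:5dd8:c956::5]:6443/readyz"

 # Once the API answers, does the BMH query the script was attempting succeed?
 oc --request-timeout=10s get bmh -n openshift-machine-api
 {code}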
Status: Verified
{noformat}
May 01 02:49:40 localhost.localdomain master-bmh-update.sh[12448]: E0501 02:49:40.429468   12448 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
{noformat}
#OCPBUGS-17183 issue 2 days ago [BUG] Assisted installer fails to create bond with active backup for single node installation New
Issue 15401516: [BUG] Assisted installer fails to create bond with active backup for single node installation
Description: Description of problem:
 {code:none}
 The assisted installer always fails to create a bond in active-backup mode using nmstate YAML, and the errors are:
 
 ~~~ 
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Unable to reach API_URL's https endpoint at https://xx.xx.32.40:6443/version
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Checking validity of <hostname> of type API_INT_URL 
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Successfully resolved API_INT_URL <hostname> 
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Unable to reach API_INT_URL's https endpoint at https://xx.xx.32.40:6443/versionJul 26 07:12:23 <hostname> bootkube.sh[12960]: Still waiting for the Kubernetes API: 
 Get "https://localhost:6443/readyz": dial tcp [::1]:6443: connect: connection refusedJul 26 07:15:15 <hostname> bootkube.sh[15706]: The connection to the server <hostname>:6443 was refused - did you specify the right host or port? 
 Jul 26 07:15:15 <hostname> bootkube.sh[15706]: The connection to the server <hostname>:6443 was refused - did you specify the right host or port? 
  ~~~ 
 
 Here, <hostname> is the actual hostname of the node. 
 
 Adding sosreport and nmstate yaml file here : https://drive.google.com/drive/u/0/folders/19dNzKUPIMmnUls2pT_stuJxr2Dxdi5eb{code}
 Version-Release number of selected component (if applicable):
 {code:none}
 4.12 
 Dell 16g Poweredge R660{code}
 How reproducible:
 {code:none}
 Always at customer side{code}
 Steps to Reproduce:
 {code:none}
 1. Open Assisted installer UI (console.redhat.com -> assisted installer) 
 2. Add the network configs as below for host1  
 
 -----------
 interfaces:
 - name: bond99
   type: bond
   state: up
   ipv4:
     address:
     - ip: xx.xx.32.40
       prefix-length: 24
     enabled: true
   link-aggregation:
     mode: active-backup
     options:
       miimon: '140'
     port:
     - eno12399
     - eno12409
 dns-resolver:
   config:
     search:
     - xxxx
     server:
     - xx.xx.xx.xx
 routes:
   config:
     - destination: 0.0.0.0/0
       metric: 150
       next-hop-address: xx.xx.xx.xx
       next-hop-interface: bond99
       table-id: 254    
 -----------
 
 3. Enter the MAC addresses of the interfaces in the fields. 
 4. Generate the ISO and boot the node. The node cannot be reached via ping/SSH. This happens every time and is reproducible.
 5. Because SSH was not working there was no way to check what was happening on the node, so we reset the root password; the IP address was present on the bond, yet ping/SSH still did not work.
 6. After multiple reboots, the customer was able to SSH/ping and provided a sosreport, and the above-mentioned errors can be seen in the journal logs in the sosreport.  
  {code}
 Actual results:
 {code:none}
 Fails to install. There appears to be an issue with networking.{code}
 Expected results:
 {code:none}
 Able to proceed with the installation without the above-mentioned issues{code}
 Additional info:
 {code:none}
 - The installation works with round-robin bond mode in 4.12. 
 - The installation also works with active-backup in 4.10. 
 - An active-backup bond with 4.12 is failing.{code}
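For triage, a minimal verification sketch (assuming console or root-password access to the booted host, and using the bond99/eno12399/eno12409 names from the nmstate YAML above) to confirm how the bond actually came up on the node:
{code:none}
# Kernel bonding driver view: mode, MII status, currently active slave
cat /proc/net/bonding/bond99

# Bond options and addresses as applied
ip -d link show bond99
ip addr show bond99

# NetworkManager / nmstate view, to compare against the YAML that was supplied
nmcli device show bond99
nmstatectl show
{code}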
Status: New
#OCPBUGS-30631issue2 weeks agoSNO (RT kernel) sosreport crash the SNO node CLOSED
Issue 15865131: SNO (RT kernel) sosreport crash the SNO node
Description: Description of problem:
 {code:none}
 sosreport collection causes SNO XR11 node crash.
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 - RHOCP    : 4.12.30
 - kernel   : 4.18.0-372.69.1.rt7.227.el8_6.x86_64
 - platform : x86_64{code}
 How reproducible:
 {code:none}
 sh-4.4# chrt -rr 99 toolbox
 .toolboxrc file detected, overriding defaults...
 Checking if there is a newer version of ocpdalmirror.xxx.yyy:8443/rhel8/support-tools-zzz-feb available...
 Container 'toolbox-root' already exists. Trying to start...
 (To remove the container and start with a fresh toolbox, run: sudo podman rm 'toolbox-root')
 toolbox-root
 Container started successfully. To exit, type 'exit'.
 [root@node /]# which sos
 /usr/sbin/sos
 logger: socket /dev/log: No such file or directory
 [root@node /]# taskset -c 29-31,61-63 sos report --batch -n networking,kernel,processor -k crio.all=on -k crio.logs=on -k podman.all=on -kpodman.logs=on
 
 sosreport (version 4.5.6)
 
 This command will collect diagnostic and configuration information from
 this Red Hat CoreOS system.
 
 An archive containing the collected information will be generated in
 /host/var/tmp/sos.c09e4f7z and may be provided to a Red Hat support
 representative.
 
 Any information provided to Red Hat will be treated in accordance with
 the published support policies at:
 
         Distribution Website : https://www.redhat.com/
         Commercial Support   : https://access.redhat.com/
 
 The generated archive may contain data considered sensitive and its
 content should be reviewed by the originating organization before being
 passed to any third party.
 
 No changes will be made to system configuration.
 
 
  Setting up archive ...
  Setting up plugins ...
 [plugin:auditd] Could not open conf file /etc/audit/auditd.conf: [Errno 2] No such file or directory: '/etc/audit/auditd.conf'
 caught exception in plugin method "system.setup()"
 writing traceback to sos_logs/system-plugin-errors.txt
 [plugin:systemd] skipped command 'resolvectl status': required services missing: systemd-resolved.
 [plugin:systemd] skipped command 'resolvectl statistics': required services missing: systemd-resolved.
  Running plugins. Please wait ...
 
   Starting 1/91  alternatives    [Running: alternatives]
   Starting 2/91  atomichost      [Running: alternatives atomichost]
   Starting 3/91  auditd          [Running: alternatives atomichost auditd]
   Starting 4/91  block           [Running: alternatives atomichost auditd block]
   Starting 5/91  boot            [Running: alternatives auditd block boot]
   Starting 6/91  cgroups         [Running: auditd block boot cgroups]
   Starting 7/91  chrony          [Running: auditd block cgroups chrony]
   Starting 8/91  cifs            [Running: auditd block cgroups cifs]
   Starting 9/91  conntrack       [Running: auditd block cgroups conntrack]
   Starting 10/91 console         [Running: block cgroups conntrack console]
   Starting 11/91 container_log   [Running: block cgroups conntrack container_log]
   Starting 12/91 containers_common [Running: block cgroups conntrack containers_common]
   Starting 13/91 crio            [Running: block cgroups conntrack crio]
   Starting 14/91 crypto          [Running: cgroups conntrack crio crypto]
   Starting 15/91 date            [Running: cgroups conntrack crio date]
   Starting 16/91 dbus            [Running: cgroups conntrack crio dbus]
   Starting 17/91 devicemapper    [Running: cgroups conntrack crio devicemapper]
   Starting 18/91 devices         [Running: cgroups conntrack crio devices]
   Starting 19/91 dracut          [Running: cgroups conntrack crio dracut]
   Starting 20/91 ebpf            [Running: cgroups conntrack crio ebpf]
   Starting 21/91 etcd            [Running: cgroups crio ebpf etcd]
   Starting 22/91 filesys         [Running: cgroups crio ebpf filesys]
   Starting 23/91 firewall_tables [Running: cgroups crio filesys firewall_tables]
   Starting 24/91 fwupd           [Running: cgroups crio filesys fwupd]
   Starting 25/91 gluster         [Running: cgroups crio filesys gluster]
   Starting 26/91 grub2           [Running: cgroups crio filesys grub2]
   Starting 27/91 gssproxy        [Running: cgroups crio grub2 gssproxy]
   Starting 28/91 hardware        [Running: cgroups crio grub2 hardware]
   Starting 29/91 host            [Running: cgroups crio hardware host]
   Starting 30/91 hts             [Running: cgroups crio hardware hts]
   Starting 31/91 i18n            [Running: cgroups crio hardware i18n]
   Starting 32/91 iscsi           [Running: cgroups crio hardware iscsi]
   Starting 33/91 jars            [Running: cgroups crio hardware jars]
   Starting 34/91 kdump           [Running: cgroups crio hardware kdump]
   Starting 35/91 kernelrt        [Running: cgroups crio hardware kernelrt]
   Starting 36/91 keyutils        [Running: cgroups crio hardware keyutils]
   Starting 37/91 krb5            [Running: cgroups crio hardware krb5]
   Starting 38/91 kvm             [Running: cgroups crio hardware kvm]
   Starting 39/91 ldap            [Running: cgroups crio kvm ldap]
   Starting 40/91 libraries       [Running: cgroups crio kvm libraries]
   Starting 41/91 libvirt         [Running: cgroups crio kvm libvirt]
   Starting 42/91 login           [Running: cgroups crio kvm login]
   Starting 43/91 logrotate       [Running: cgroups crio kvm logrotate]
   Starting 44/91 logs            [Running: cgroups crio kvm logs]
   Starting 45/91 lvm2            [Running: cgroups crio logs lvm2]
   Starting 46/91 md              [Running: cgroups crio logs md]
   Starting 47/91 memory          [Running: cgroups crio logs memory]
   Starting 48/91 microshift_ovn  [Running: cgroups crio logs microshift_ovn]
   Starting 49/91 multipath       [Running: cgroups crio logs multipath]
   Starting 50/91 networkmanager  [Running: cgroups crio logs networkmanager]
 
 Removing debug pod ...
 error: unable to delete the debug pod "ransno1ransnomavdallabcom-debug": Delete "https://api.ransno.mavdallab.com:6443/api/v1/namespaces/openshift-debug-mt82m/pods/ransno1ransnomavdallabcom-debug": dial tcp 10.71.136.144:6443: connect: connection refused
 {code}
 Steps to Reproduce:
 {code:none}
 Launch a debug pod, run the procedure above, and it crashes the node.{code}
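A consolidated sketch of the reproduction path (the node name is a placeholder; the toolbox/taskset invocation is the same one shown in the transcript above):
{code:none}
# From a workstation with cluster-admin access; <sno-node> is a placeholder
oc debug node/<sno-node>

# Inside the debug pod
chroot /host
chrt -rr 99 toolbox

# Inside the toolbox container
taskset -c 29-31,61-63 sos report --batch -n networking,kernel,processor \
  -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on
{code}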
 Actual results:
 {code:none}
 Node crash{code}
 Expected results:
 {code:none}
 Node does not crash{code}
 Additional info:
 {code:none}
 We have two vmcores on the associated SFDC ticket.
 This system uses an RT kernel.
 It is using an out-of-tree ice driver 1.13.7 (probably from 22 Dec 2023).
 
 [  103.681608] ice: module unloaded
 [  103.830535] ice: loading out-of-tree module taints kernel.
 [  103.831106] ice: module verification failed: signature and/or required key missing - tainting kernel
 [  103.841005] ice: Intel(R) Ethernet Connection E800 Series Linux Driver - version 1.13.7
 [  103.841017] ice: Copyright (C) 2018-2023 Intel Corporation
 
 
 With the following kernel command line:
 
 Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/vmlinuz-4.18.0-372.69.1.rt7.227.el8_6.x86_64 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/0 root=UUID=3e8bda80-5cf4-4c46-b139-4c84cb006354 rw rootflags=prjquota boot=UUID=1d0512c2-3f92-42c5-b26d-709ff9350b81 intel_iommu=on iommu=pt firmware_class.path=/var/lib/firmware skew_tick=1 nohz=on rcu_nocbs=3-31,35-63 tuned.non_isolcpus=00000007,00000007 systemd.cpu_affinity=0,1,2,32,33,34 intel_iommu=on iommu=pt isolcpus=managed_irq,3-31,35-63 nohz_full=3-31,35-63 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off rcutree.kthread_prio=11 default_hugepagesz=1G rcupdate.rcu_normal_after_boot=0 efi=runtime module_blacklist=irdma intel_pstate=passive intel_idle.max_cstate=0 crashkernel=256M
 
 
 
 vmcore1 shows an issue with the ice driver:
 
 crash vmcore tmp/vmlinux
 
 
       KERNEL: tmp/vmlinux  [TAINTED]
     DUMPFILE: vmcore  [PARTIAL DUMP]
         CPUS: 64
         DATE: Thu Mar  7 17:16:57 CET 2024
       UPTIME: 02:44:28
 LOAD AVERAGE: 24.97, 25.47, 25.46
        TASKS: 5324
     NODENAME: aaa.bbb.ccc
      RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64
      VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023
      MACHINE: x86_64  (1500 Mhz)
       MEMORY: 127.3 GB
        PANIC: "Kernel panic - not syncing:"
          PID: 693
      COMMAND: "khungtaskd"
         TASK: ff4d1890260d4000  [THREAD_INFO: ff4d1890260d4000]
          CPU: 0
        STATE: TASK_RUNNING (PANIC)
 
 crash> ps|grep sos                                                                                                                                                                                                                                                                                                           
   449071  363440  31  ff4d189005f68000  IN   0.2  506428 314484  sos                                                                                                                                                                                                                                                         
   451043  363440  63  ff4d188943a9c000  IN   0.2  506428 314484  sos                                                                                                                                                                                                                                                         
   494099  363440  29  ff4d187f941f4000  UN   0.2  506428 314484  sos     
 
 ~~~
 [ 8457.517696] ------------[ cut here ]------------
 [ 8457.517698] NETDEV WATCHDOG: ens3f1 (ice): transmit queue 35 timed out
 [ 8457.517711] WARNING: CPU: 33 PID: 349 at net/sched/sch_generic.c:472 dev_watchdog+0x270/0x300
 [ 8457.517718] Modules linked in: binfmt_misc macvlan pci_pf_stub iavf vfio_pci vfio_virqfd vfio_iommu_type1 vfio vhost_net vhost vhost_iotlb tap tun xt_addrtype nf_conntrack_netlink ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_nat xt_CT tcp_diag inet_diag ip6t_MASQUERADE xt_mark ice(OE) xt_conntrack ipt_MASQUERADE nft_counter xt_comment nft_compat veth nft_chain_nat nf_tables overlay bridge 8021q garp mrp stp llc nfnetlink_cttimeout nfnetlink openvswitch nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ext4 mbcache jbd2 intel_rapl_msr iTCO_wdt iTCO_vendor_support dell_smbios wmi_bmof dell_wmi_descriptor dcdbas kvm_intel kvm irqbypass intel_rapl_common i10nm_edac nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp rapl ipmi_ssif intel_cstate intel_uncore dm_thin_pool pcspkr isst_if_mbox_pci dm_persistent_data dm_bio_prison dm_bufio isst_if_mmio isst_if_common mei_me i2c_i801 joydev mei intel_pmt wmi acpi_ipmi ipmi_si acpi_power_meter sctp ip6_udp_tunnel
 [ 8457.517770]  udp_tunnel ip_tables xfs libcrc32c i40e sd_mod t10_pi sg bnxt_re ib_uverbs ib_core crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel bnxt_en ahci libahci libata dm_multipath dm_mirror dm_region_hash dm_log dm_mod ipmi_devintf ipmi_msghandler fuse [last unloaded: ice]
 [ 8457.517784] Red Hat flags: eBPF/rawtrace
 [ 8457.517787] CPU: 33 PID: 349 Comm: ktimers/33 Kdump: loaded Tainted: G           OE    --------- -  - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1
 [ 8457.517789] Hardware name: Dell Inc. PowerEdge XR11/0P2RNT, BIOS 1.12.1 09/13/2023
 [ 8457.517790] RIP: 0010:dev_watchdog+0x270/0x300
 [ 8457.517793] Code: 17 00 e9 f0 fe ff ff 4c 89 e7 c6 05 c6 03 34 01 01 e8 14 43 fa ff 89 d9 4c 89 e6 48 c7 c7 90 37 98 9a 48 89 c2 e8 1d be 88 ff <0f> 0b eb ad 65 8b 05 05 13 fb 65 89 c0 48 0f a3 05 1b ab 36 01 73
 [ 8457.517795] RSP: 0018:ff7aeb55c73c7d78 EFLAGS: 00010286
 [ 8457.517797] RAX: 0000000000000000 RBX: 0000000000000023 RCX: 0000000000000001
 [ 8457.517798] RDX: 0000000000000000 RSI: ffffffff9a908557 RDI: 00000000ffffffff
 [ 8457.517799] RBP: 0000000000000021 R08: ffffffff9ae6b3a0 R09: 00080000000000ff
 [ 8457.517800] R10: 000000006443a462 R11: 0000000000000036 R12: ff4d187f4d1f4000
 [ 8457.517801] R13: ff4d187f4d20df00 R14: ff4d187f4d1f44a0 R15: 0000000000000080
 [ 8457.517803] FS:  0000000000000000(0000) GS:ff4d18967a040000(0000) knlGS:0000000000000000
 [ 8457.517804] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 [ 8457.517805] CR2: 00007fc47c649974 CR3: 00000019a441a005 CR4: 0000000000771ea0
 [ 8457.517806] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 [ 8457.517807] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 [ 8457.517808] PKRU: 55555554
 [ 8457.517810] Call Trace:
 [ 8457.517813]  ? test_ti_thread_flag.constprop.50+0x10/0x10
 [ 8457.517816]  ? test_ti_thread_flag.constprop.50+0x10/0x10
 [ 8457.517818]  call_timer_fn+0x32/0x1d0
 [ 8457.517822]  ? test_ti_thread_flag.constprop.50+0x10/0x10
 [ 8457.517825]  run_timer_softirq+0x1fc/0x640
 [ 8457.517828]  ? _raw_spin_unlock_irq+0x1d/0x60
 [ 8457.517833]  ? finish_task_switch+0xea/0x320
 [ 8457.517836]  ? __switch_to+0x10c/0x4d0
 [ 8457.517840]  __do_softirq+0xa5/0x33f
 [ 8457.517844]  run_timersd+0x61/0xb0
 [ 8457.517848]  smpboot_thread_fn+0x1c1/0x2b0
 [ 8457.517851]  ? smpboot_register_percpu_thread_cpumask+0x140/0x140
 [ 8457.517853]  kthread+0x151/0x170
 [ 8457.517856]  ? set_kthread_struct+0x50/0x50
 [ 8457.517858]  ret_from_fork+0x1f/0x40
 [ 8457.517861] ---[ end trace 0000000000000002 ]---
 [ 8458.520445] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0x99, HW_HEAD: 0x14, NTU: 0x15, INT: 0x0
 [ 8458.520451] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35
 [ 8506.139246] ice 0000:8a:00.1: PTP reset successful
 [ 8506.437047] ice 0000:8a:00.1: VSI rebuilt. VSI index 0, type ICE_VSI_PF
 [ 8506.445482] ice 0000:8a:00.1: VSI rebuilt. VSI index 1, type ICE_VSI_CTRL
 [ 8540.459707] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0xe3, HW_HEAD: 0xe7, NTU: 0xe8, INT: 0x0
 [ 8540.459714] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35
 [ 8563.891356] ice 0000:8a:00.1: PTP reset successful
 ~~~
 
 The second vmcore on the same node shows an issue with the SSD drive:
 
 $ crash vmcore-2 tmp/vmlinux
 
       KERNEL: tmp/vmlinux  [TAINTED]
     DUMPFILE: vmcore-2  [PARTIAL DUMP]
         CPUS: 64
         DATE: Thu Mar  7 14:29:31 CET 2024
       UPTIME: 1 days, 07:19:52
 LOAD AVERAGE: 25.55, 26.42, 28.30
        TASKS: 5409
     NODENAME: aaa.bbb.ccc
      RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64
      VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023
      MACHINE: x86_64  (1500 Mhz)
       MEMORY: 127.3 GB
        PANIC: "Kernel panic - not syncing:"
          PID: 696
      COMMAND: "khungtaskd"
         TASK: ff2b35ed48d30000  [THREAD_INFO: ff2b35ed48d30000]
          CPU: 34
        STATE: TASK_RUNNING (PANIC)
 
 crash> ps |grep sos
   719784  718369  62  ff2b35ff00830000  IN   0.4 1215636 563388  sos
   721740  718369  61  ff2b3605579f8000  IN   0.4 1215636 563388  sos
   721742  718369  63  ff2b35fa5eb9c000  IN   0.4 1215636 563388  sos
   721744  718369  30  ff2b3603367fc000  IN   0.4 1215636 563388  sos
   721746  718369  29  ff2b360557944000  IN   0.4 1215636 563388  sos
   743356  718369  62  ff2b36042c8e0000  IN   0.4 1215636 563388  sos
   743818  718369  29  ff2b35f6186d0000  IN   0.4 1215636 563388  sos
   748518  718369  61  ff2b3602cfb84000  IN   0.4 1215636 563388  sos
   748884  718369  62  ff2b360713418000  UN   0.4 1215636 563388  sos
 
 crash> dmesg
 
 [111871.309883] ata3.00: exception Emask 0x0 SAct 0x3ff8 SErr 0x0 action 0x6 frozen
 [111871.309889] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309891] ata3.00: cmd 61/40:18:28:47:4b/00:00:00:00:00/40 tag 3 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309895] ata3.00: status: { DRDY }
 [111871.309897] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309904] ata3.00: cmd 61/40:20:68:47:4b/00:00:00:00:00/40 tag 4 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309908] ata3.00: status: { DRDY }
 [111871.309909] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309910] ata3.00: cmd 61/40:28:a8:47:4b/00:00:00:00:00/40 tag 5 ncq dma 32768 out
                          res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309913] ata3.00: status: { DRDY }
 [111871.309914] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309915] ata3.00: cmd 61/40:30:e8:47:4b/00:00:00:00:00/40 tag 6 ncq dma 32768 out
                          res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309918] ata3.00: status: { DRDY }
 [111871.309919] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309919] ata3.00: cmd 61/70:38:48:37:2b/00:00:1c:00:00/40 tag 7 ncq dma 57344 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309922] ata3.00: status: { DRDY }
 [111871.309923] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309924] ata3.00: cmd 61/20:40:78:29:0c/00:00:19:00:00/40 tag 8 ncq dma 16384 out
                          res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309927] ata3.00: status: { DRDY }
 [111871.309928] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309929] ata3.00: cmd 61/08:48:08:0c:c0/00:00:1c:00:00/40 tag 9 ncq dma 4096 out
                          res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309932] ata3.00: status: { DRDY }
 [111871.309933] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309934] ata3.00: cmd 61/40:50:28:48:4b/00:00:00:00:00/40 tag 10 ncq dma 32768 out
                          res 40/00:01:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309937] ata3.00: status: { DRDY }
 [111871.309938] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309939] ata3.00: cmd 61/40:58:68:48:4b/00:00:00:00:00/40 tag 11 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309942] ata3.00: status: { DRDY }
 [111871.309943] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309944] ata3.00: cmd 61/40:60:a8:48:4b/00:00:00:00:00/40 tag 12 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309946] ata3.00: status: { DRDY }
 [111871.309947] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309948] ata3.00: cmd 61/40:68:e8:48:4b/00:00:00:00:00/40 tag 13 ncq dma 32768 out
                          res 40/00:01:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309951] ata3.00: status: { DRDY }
 [111871.309953] ata3: hard resetting link
 ...
 ...
 ...
 [112789.787310] INFO: task sos:748884 blocked for more than 600 seconds.                                                                                                                                                                                                                                                     
 [112789.787314]       Tainted: G           OE    --------- -  - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1                                                                                                                                                                                                                      
 [112789.787316] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.                                                                                                                                                                                                                                    
 [112789.787316] task:sos             state:D stack:    0 pid:748884 ppid:718369 flags:0x00084080                                                                                                                                                                                                                             
 [112789.787320] Call Trace:                                                                                                                                                                                                                                                                                                  
 [112789.787323]  __schedule+0x37b/0x8e0                                                                                                                                                                                                                                                                                      
 [112789.787330]  schedule+0x6c/0x120                                                                                                                                                                                                                                                                                         
 [112789.787333]  schedule_timeout+0x2b7/0x410                                                                                                                                                                                                                                                                                
 [112789.787336]  ? enqueue_entity+0x130/0x790                                                                                                                                                                                                                                                                                
 [112789.787340]  wait_for_completion+0x84/0xf0                                                                                                                                                                                                                                                                               
 [112789.787343]  flush_work+0x120/0x1d0                                                                                                                                                                                                                                                                                      
 [112789.787347]  ? flush_workqueue_prep_pwqs+0x130/0x130                                                                                                                                                                                                                                                                     
 [112789.787350]  schedule_on_each_cpu+0xa7/0xe0                                                                                                                                                                                                                                                                              
 [112789.787353]  vmstat_refresh+0x22/0xa0                                                                                                                                                                                                                                                                                    
 [112789.787357]  proc_sys_call_handler+0x174/0x1d0                                                                                                                                                                                                                                                                           
 [112789.787361]  vfs_read+0x91/0x150                                                                                                                                                                                                                                                                                         
 [112789.787364]  ksys_read+0x52/0xc0                                                                                                                                                                                                                                                                                         
 [112789.787366]  do_syscall_64+0x87/0x1b0                                                                                                                                                                                                                                                                                    
 [112789.787369]  entry_SYSCALL_64_after_hwframe+0x61/0xc6                                                                                                                                                                                                                                                                    
 [112789.787372] RIP: 0033:0x7f2dca8c2ab4                                                                                                                                                                                                                                                                                     
 [112789.787378] Code: Unable to access opcode bytes at RIP 0x7f2dca8c2a8a.                                                                                                                                                                                                                                                   
 [112789.787378] RSP: 002b:00007f2dbbffc5e0 EFLAGS: 00000246 ORIG_RAX: 0000000000000000                                                                                                                                                                                                                                       
 [112789.787380] RAX: ffffffffffffffda RBX: 0000000000000008 RCX: 00007f2dca8c2ab4                                                                                                                                                                                                                                            
 [112789.787382] RDX: 0000000000004000 RSI: 00007f2db402b5a0 RDI: 0000000000000008                                                                                                                                                                                                                                            
 [112789.787383] RBP: 00007f2db402b5a0 R08: 0000000000000000 R09: 00007f2dcace27bb                                                                                                                                                                                                                                            
 [112789.787383] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000004000                                                                                                                                                                                                                                            
 [112789.787384] R13: 0000000000000008 R14: 00007f2db402b5a0 R15: 00007f2da4001a90                                                                                                                                                                                                                                            
 [112789.787418] NMI backtrace for cpu 34    {code}
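Untested workaround idea (an assumption, not a confirmed fix): the command line above isolates CPUs 3-31,35-63 and reserves 0-2,32-34 for housekeeping (systemd.cpu_affinity), while the failing collection was pinned to isolated CPUs with the toolbox running at SCHED_RR priority 99. A sketch of a collection run kept on the housekeeping CPUs without RT priority:
{code:none}
# Sketch only (assumption): run the toolbox without chrt -rr 99 and pin sos to
# the housekeeping CPUs 0-2,32-34 taken from systemd.cpu_affinity above
toolbox
taskset -c 0-2,32-34 sos report --batch -n networking,kernel,processor \
  -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on
{code}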
Status: CLOSED
#OCPBUGS-32091issue4 weeks agoCAPI-Installer leaks processes during unsuccessful installs MODIFIED
ERROR Attempted to gather debug logs after installation failure: failed to create SSH client: ssh: handshake failed: ssh: disconnect, reason 2: Too many authentication failures
ERROR Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get "https://api.gpei-0515.qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.134.9.157:6443: connect: connection refused
ERROR Bootstrap failed to complete: Get "https://api.gpei-0515.qe.devcluster.openshift.com:6443/version": dial tcp 18.222.8.23:6443: connect: connection refused

... 1 lines not shown

periodic-ci-openshift-multiarch-master-nightly-4.12-upgrade-from-stable-4.11-ocp-e2e-aws-heterogeneous-upgrade (all) - 52 runs, 29% failed, 320% of failures match = 92% impact
#1791611452977582080junit24 hours ago
May 18 00:41:51.832 E ns/openshift-multus pod/multus-znxf8 node/ip-10-0-207-78.ec2.internal uid/a9e5a23b-ebd2-4a81-830f-7062830d8346 container/kube-multus reason/ContainerExit code/137 cause/ContainerStatusUnknown The container could not be located when the pod was deleted.  The container used to be Running
May 18 00:42:00.700 E ns/openshift-sdn pod/sdn-controller-czmnq node/ip-10-0-254-91.ec2.internal uid/7de1f2dc-64e9-42e4-af06-f8b934937fcf container/sdn-controller reason/ContainerExit code/2 cause/Error I0517 23:40:02.017799       1 server.go:27] Starting HTTP metrics server\nI0517 23:40:02.018078       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0517 23:40:02.033198       1 leaderelection.go:334] error initially creating leader election record: configmaps "openshift-network-controller" already exists\nE0517 23:50:53.946066       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-d087cpk3-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.213.185:6443: connect: connection refused\nE0517 23:51:36.058369       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-d087cpk3-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.131.144:6443: connect: connection refused\n
May 18 00:42:06.794 E ns/openshift-multus pod/multus-additional-cni-plugins-q8xs2 node/ip-10-0-165-142.ec2.internal uid/eca37767-6ee2-48c6-b999-dd1063fafc07 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
#1791611452977582080junit24 hours ago
May 18 00:42:09.230 E ns/openshift-network-diagnostics pod/network-check-target-sjx2q node/ip-10-0-233-72.ec2.internal uid/b79f336b-2987-440e-aa66-84793f0d35e0 container/network-check-target-container reason/ContainerExit code/2 cause/Error
May 18 00:42:10.905 E ns/openshift-sdn pod/sdn-controller-rd542 node/ip-10-0-132-216.ec2.internal uid/53d4a447-3b2f-4647-bc1d-9bac861ab719 container/sdn-controller reason/ContainerExit code/2 cause/Error I0517 23:40:02.589922       1 server.go:27] Starting HTTP metrics server\nI0517 23:40:02.590157       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0517 23:48:15.741850       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-d087cpk3-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.213.185:6443: connect: connection refused\nE0517 23:50:49.647910       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-d087cpk3-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.213.185:6443: connect: connection refused\n
May 18 00:42:12.926 E ns/openshift-multus pod/multus-admission-controller-5m9gp node/ip-10-0-132-216.ec2.internal uid/b675f007-ecaa-4f4e-89bf-e2c782afc68b container/multus-admission-controller reason/ContainerExit code/137 cause/Error
#1791566856180469760junit27 hours ago
May 17 21:57:47.046 E ns/openshift-network-diagnostics pod/network-check-target-flprx node/ip-10-0-186-81.us-west-2.compute.internal uid/41165649-7628-44a6-83ff-2a13a2bc3582 container/network-check-target-container reason/ContainerExit code/2 cause/Error
May 17 21:57:51.343 E ns/openshift-sdn pod/sdn-controller-8d6vb node/ip-10-0-172-41.us-west-2.compute.internal uid/7eb0523e-8b0f-4cf6-b1ba-3f08a6789e80 container/sdn-controller reason/ContainerExit code/2 cause/Error I0517 20:51:24.270763       1 server.go:27] Starting HTTP metrics server\nI0517 20:51:24.271099       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0517 20:59:13.860151       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0517 21:00:10.871767       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-xqwghdn5-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.201.230:6443: connect: connection refused\n
May 17 21:58:03.108 E ns/openshift-sdn pod/sdn-controller-d85gx node/ip-10-0-165-10.us-west-2.compute.internal uid/ea56024c-c742-41d6-9d38-4e54ac63b814 container/sdn-controller reason/ContainerExit code/2 cause/Error I0517 20:51:24.333764       1 server.go:27] Starting HTTP metrics server\nI0517 20:51:24.333953       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0517 20:58:59.221342       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0517 21:00:00.763541       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-xqwghdn5-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.181.238:6443: connect: connection refused\nE0517 21:07:01.930284       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-xqwghdn5-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.201.230:6443: connect: connection refused\n

... 1 lines not shown

#1790802916672540672junit3 days ago
May 15 19:30:50.902 E ns/openshift-monitoring pod/node-exporter-ckdhr node/ip-10-0-133-211.ec2.internal uid/3fd52eff-d30f-447c-9aca-b55837acf7e1 container/node-exporter reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 15 19:30:57.522 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-211.ec2.internal node/ip-10-0-133-211.ec2.internal uid/1241011f-ce7c-44e0-853a-89d6a969aa5c container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0515 19:30:55.602211       1 cmd.go:216] Using insecure, self-signed certificates\nI0515 19:30:55.610461       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715801455 cert, and key in /tmp/serving-cert-220373289/serving-signer.crt, /tmp/serving-cert-220373289/serving-signer.key\nI0515 19:30:56.047026       1 observer_polling.go:159] Starting file observer\nW0515 19:30:56.080673       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-133-211.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0515 19:30:56.080850       1 builder.go:271] check-endpoints version 4.12.0-202405141537.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0515 19:30:56.098097       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-220373289/tls.crt::/tmp/serving-cert-220373289/tls.key"\nF0515 19:30:56.565903       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 15 19:30:58.656 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator etcd is degraded

... 2 lines not shown

#1790775415715926016junit3 days ago
May 15 17:39:58.531 E ns/openshift-network-diagnostics pod/network-check-target-8phsc node/ip-10-0-141-122.us-west-1.compute.internal uid/491093b6-eb1d-43bc-873d-f1cb7b1d058b container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 15 17:39:59.540 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-122.us-west-1.compute.internal node/ip-10-0-141-122.us-west-1.compute.internal uid/7e8d6ce7-0ceb-47ff-8aa5-335b5dc05ba4 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0515 17:39:56.926794       1 cmd.go:216] Using insecure, self-signed certificates\nI0515 17:39:56.941353       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715794796 cert, and key in /tmp/serving-cert-2374044691/serving-signer.crt, /tmp/serving-cert-2374044691/serving-signer.key\nI0515 17:39:57.930725       1 observer_polling.go:159] Starting file observer\nW0515 17:39:57.942660       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-141-122.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0515 17:39:57.942771       1 builder.go:271] check-endpoints version 4.12.0-202405141537.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0515 17:39:57.949849       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2374044691/tls.crt::/tmp/serving-cert-2374044691/tls.key"\nF0515 17:39:58.406891       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 15 17:40:04.818 E ns/openshift-multus pod/network-metrics-daemon-cc9wx node/ip-10-0-141-122.us-west-1.compute.internal uid/dba2b9d1-ecfd-4051-a3df-01d71bdcf1f6 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 3 lines not shown

#1791119505099853824junit2 days ago
May 16 16:29:44.257 E ns/openshift-monitoring pod/node-exporter-h9rtx node/ip-10-0-164-124.ec2.internal uid/d720a3a3-99b2-4cbc-80e3-9ddf10cedead container/node-exporter reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 16 16:29:49.980 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-164-124.ec2.internal node/ip-10-0-164-124.ec2.internal uid/9c60b72f-34fc-4bf6-8802-3655fa6d24ba container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0516 16:29:48.839955       1 cmd.go:216] Using insecure, self-signed certificates\nI0516 16:29:48.840758       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715876988 cert, and key in /tmp/serving-cert-3939972515/serving-signer.crt, /tmp/serving-cert-3939972515/serving-signer.key\nI0516 16:29:49.475852       1 observer_polling.go:159] Starting file observer\nW0516 16:29:49.484894       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-164-124.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0516 16:29:49.485084       1 builder.go:271] check-endpoints version 4.12.0-202405141537.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0516 16:29:49.498454       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3939972515/tls.crt::/tmp/serving-cert-3939972515/tls.key"\nF0516 16:29:49.760612       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 16 16:29:53.548 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-164-124.ec2.internal node/ip-10-0-164-124.ec2.internal uid/9c60b72f-34fc-4bf6-8802-3655fa6d24ba container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0516 16:29:48.839955       1 cmd.go:216] Using insecure, self-signed certificates\nI0516 16:29:48.840758       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715876988 cert, and key in /tmp/serving-cert-3939972515/serving-signer.crt, /tmp/serving-cert-3939972515/serving-signer.key\nI0516 16:29:49.475852       1 observer_polling.go:159] Starting file observer\nW0516 16:29:49.484894       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-164-124.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0516 16:29:49.485084       1 builder.go:271] check-endpoints version 4.12.0-202405141537.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0516 16:29:49.498454       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3939972515/tls.crt::/tmp/serving-cert-3939972515/tls.key"\nF0516 16:29:49.760612       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1790672943907344384junit3 days ago
May 15 10:39:54.575 E ns/openshift-multus pod/multus-additional-cni-plugins-jf9bv node/ip-10-0-150-106.us-west-1.compute.internal uid/27040cf0-c312-46da-b11b-a44117c20423 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 15 10:40:10.760 E ns/openshift-sdn pod/sdn-controller-pjtsw node/ip-10-0-150-106.us-west-1.compute.internal uid/e00144e4-b288-4626-ac8a-434eacd37d3a container/sdn-controller reason/ContainerExit code/2 cause/Error t may still be processing the request (get configmaps openshift-network-controller)\nE0515 09:39:08.582279       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-s3ziy1kw-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.230.186:6443: connect: connection refused\nE0515 09:45:43.974701       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-s3ziy1kw-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.135.20:6443: connect: connection refused\nE0515 09:46:48.942533       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-s3ziy1kw-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.230.186:6443: connect: connection refused\nE0515 09:47:36.408727       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-s3ziy1kw-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.135.20:6443: connect: connection refused\nE0515 09:48:14.219515       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-s3ziy1kw-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.135.20:6443: connect: connection refused\nE0515 09:48:57.451647       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-s3ziy1kw-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.135.20:6443: connect: connection refused\n
May 15 10:40:16.089 E ns/openshift-multus pod/multus-admission-controller-sl8c5 node/ip-10-0-224-127.us-west-1.compute.internal uid/698ee05b-9a7c-4d47-bd2f-a19df90f3020 container/multus-admission-controller reason/ContainerExit code/137 cause/Error
#1790672943907344384junit3 days ago
May 15 10:40:32.149 E ns/openshift-console pod/console-84b88dcd6c-mqlgk node/ip-10-0-224-127.us-west-1.compute.internal uid/31575bac-c211-4113-b5f2-c23b1472d04e container/console reason/ContainerExit code/2 cause/Error W0515 09:50:32.087181       1 main.go:220] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0515 09:50:32.087203       1 main.go:364] cookies are secure!\nI0515 09:50:32.123206       1 main.go:798] Binding to [::]:8443...\nI0515 09:50:32.123234       1 main.go:800] using TLS\n
May 15 10:40:33.149 E ns/openshift-sdn pod/sdn-controller-qltzq node/ip-10-0-224-127.us-west-1.compute.internal uid/d200c54d-0759-4383-b962-ca800e9ec90e container/sdn-controller reason/ContainerExit code/2 cause/Error I0515 09:33:17.082853       1 server.go:27] Starting HTTP metrics server\nI0515 09:33:17.083158       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0515 09:38:42.278670       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0515 09:39:21.715102       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-s3ziy1kw-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.230.186:6443: connect: connection refused\nE0515 09:39:58.264657       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-s3ziy1kw-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.230.186:6443: connect: connection refused\nE0515 09:40:24.923238       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-s3ziy1kw-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.230.186:6443: connect: connection refused\n
May 15 10:40:40.222 E ns/openshift-multus pod/multus-additional-cni-plugins-d2z5j node/ip-10-0-200-147.us-west-1.compute.internal uid/070845f4-9716-45e4-b32c-0b545035ac24 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
#1790558110348218368junit3 days ago
May 15 03:02:42.249 E ns/openshift-multus pod/multus-additional-cni-plugins-l55pc node/ip-10-0-161-89.us-west-1.compute.internal uid/3056cb6b-e3fe-484b-8e10-10a9f3f9fd55 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 15 03:02:56.989 E ns/openshift-sdn pod/sdn-controller-g978g node/ip-10-0-235-31.us-west-1.compute.internal uid/18314a4b-0c8a-4b55-89e4-39143dfb114d container/sdn-controller reason/ContainerExit code/2 cause/Error I0515 01:54:27.794683       1 server.go:27] Starting HTTP metrics server\nI0515 01:54:27.794817       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0515 02:01:30.055947       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7kymxhkh-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.240.195:6443: connect: connection refused\n
May 15 03:03:04.025 E ns/openshift-multus pod/multus-admission-controller-fkkbc node/ip-10-0-235-31.us-west-1.compute.internal uid/700ead69-5923-4d2e-9c21-9064f4c01554 container/multus-admission-controller reason/ContainerExit code/137 cause/Error
#1790558110348218368junit3 days ago
May 15 03:03:08.109 E ns/openshift-network-diagnostics pod/network-check-target-qfn4x node/ip-10-0-235-31.us-west-1.compute.internal uid/70cb020a-bb9b-4a3e-a3c3-57432e33167f container/network-check-target-container reason/ContainerExit code/2 cause/Error
May 15 03:03:10.316 E ns/openshift-sdn pod/sdn-controller-dh55g node/ip-10-0-161-89.us-west-1.compute.internal uid/d1d940ec-b22a-4b17-9b32-31cd92f29acd container/sdn-controller reason/ContainerExit code/2 cause/Error I0515 01:54:27.552274       1 server.go:27] Starting HTTP metrics server\nI0515 01:54:27.552575       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0515 02:01:28.979189       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7kymxhkh-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.139.197:6443: connect: connection refused\n
May 15 03:03:18.379 E ns/openshift-multus pod/multus-additional-cni-plugins-fxh69 node/ip-10-0-187-176.us-west-1.compute.internal uid/a50dbad7-1b98-47f8-ab01-420c0fd95b43 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
#1790496953423892480junit4 days ago
May 14 23:09:17.083 E ns/openshift-multus pod/network-metrics-daemon-fk59v node/ip-10-0-158-47.ec2.internal uid/93ffabf8-39d7-458d-b23e-37a6e21dc52c container/kube-rbac-proxy reason/ContainerExit code/137 cause/ContainerStatusUnknown The container could not be located when the pod was deleted.  The container used to be Running
May 14 23:09:22.178 E ns/openshift-sdn pod/sdn-controller-kk26l node/ip-10-0-157-16.ec2.internal uid/ba876f14-9b4f-4b6a-a734-fbc01a695b66 container/sdn-controller reason/ContainerExit code/2 cause/Error I0514 22:03:11.128704       1 server.go:27] Starting HTTP metrics server\nI0514 22:03:11.128839       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0514 22:10:43.886555       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-mvdtnys3-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.180.70:6443: connect: connection refused\nE0514 22:11:17.539143       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-mvdtnys3-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.243.61:6443: connect: connection refused\nE0514 22:11:53.214324       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-mvdtnys3-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.243.61:6443: connect: connection refused\n
May 14 23:09:25.284 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-d2prq node/ip-10-0-182-250.ec2.internal uid/5d4f637c-fcac-4a71-8b49-933b4fbe94f7 container/csi-liveness-probe reason/ContainerExit code/2 cause/Error
#1790496953423892480junit4 days ago
May 14 23:09:40.496 E ns/openshift-console pod/console-d8dfb9bf4-hj254 node/ip-10-0-157-16.ec2.internal uid/ac948eec-98ff-4855-a0cf-aeaaf00c0c7d container/console reason/ContainerExit code/2 cause/Error W0514 22:22:37.916553       1 main.go:220] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0514 22:22:37.916579       1 main.go:364] cookies are secure!\nI0514 22:22:37.954820       1 main.go:798] Binding to [::]:8443...\nI0514 22:22:37.954850       1 main.go:800] using TLS\n
May 14 23:09:43.731 E ns/openshift-sdn pod/sdn-controller-jq6lw node/ip-10-0-182-250.ec2.internal uid/dea037fd-1d6d-4c9f-8e7a-06b7c56a11d7 container/sdn-controller reason/ContainerExit code/2 cause/Error I0514 22:03:55.832508       1 server.go:27] Starting HTTP metrics server\nI0514 22:03:55.832649       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0514 22:09:52.097248       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-mvdtnys3-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.243.61:6443: connect: connection refused\nE0514 22:11:11.647325       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-mvdtnys3-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.243.61:6443: connect: connection refused\nE0514 22:19:44.078067       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-mvdtnys3-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.180.70:6443: connect: connection refused\nE0514 22:20:22.258679       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-mvdtnys3-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.180.70:6443: connect: connection refused\n
May 14 23:09:45.061 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-55dc8c6654-ndvcw node/ip-10-0-232-161.ec2.internal uid/fe9fcd33-a5d5-49a7-b676-8981802b483f container/csi-attacher reason/ContainerExit code/1 cause/Error Lost connection to CSI driver, exiting
#1790528815152238592junit3 days ago
May 15 01:06:59.179 E ns/openshift-multus pod/multus-additional-cni-plugins-nlpv6 node/ip-10-0-131-142.us-west-2.compute.internal uid/08064f30-523f-4f46-9480-8e125382d4f4 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 15 01:07:13.323 E ns/openshift-sdn pod/sdn-controller-hmnxz node/ip-10-0-249-251.us-west-2.compute.internal uid/f7005f25-6b70-451c-8138-14c86da526bf container/sdn-controller reason/ContainerExit code/2 cause/Error I0514 23:57:59.891557       1 server.go:27] Starting HTTP metrics server\nI0514 23:57:59.891771       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0515 00:05:18.502071       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-9ffg8c6b-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.157.74:6443: connect: connection refused\n
May 15 01:07:20.018 E ns/openshift-multus pod/multus-admission-controller-ch8n5 node/ip-10-0-173-211.us-west-2.compute.internal uid/24859a99-9932-4885-93cb-a2a10bad0d18 container/multus-admission-controller reason/ContainerExit code/137 cause/Error
#1790528815152238592 junit (3 days ago)
May 15 01:07:22.358 E ns/openshift-network-diagnostics pod/network-check-target-s4p8f node/ip-10-0-181-112.us-west-2.compute.internal uid/8700c6ee-5c79-4209-b3e0-8d6dc60efb45 container/network-check-target-container reason/ContainerExit code/2 cause/Error
May 15 01:07:26.357 E ns/openshift-sdn pod/sdn-controller-cm6w7 node/ip-10-0-181-112.us-west-2.compute.internal uid/24c485fe-978b-48a7-ac48-1ca4131b95d8 container/sdn-controller reason/ContainerExit code/2 cause/Error I0514 23:57:59.764293       1 server.go:27] Starting HTTP metrics server\nI0514 23:57:59.764878       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0515 00:13:12.482770       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-9ffg8c6b-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.157.74:6443: connect: connection refused\n
May 15 01:07:38.077 E ns/openshift-sdn pod/sdn-controller-jf2r5 node/ip-10-0-173-211.us-west-2.compute.internal uid/e90e310f-a2e4-481c-b729-4b2f743cec20 container/sdn-controller reason/ContainerExit code/2 cause/Error \nI0515 00:26:30.325298       1 vnids.go:105] Allocated netid 10049301 for namespace "e2e-check-for-alerts-1423"\nI0515 00:26:30.526764       1 vnids.go:105] Allocated netid 16764567 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-5317"\nI0515 00:26:30.718811       1 vnids.go:105] Allocated netid 3877904 for namespace "e2e-k8s-sig-apps-deployment-upgrade-2967"\nI0515 00:26:30.914515       1 vnids.go:105] Allocated netid 1979842 for namespace "e2e-check-for-deletes-7930"\nI0515 00:26:31.117295       1 vnids.go:105] Allocated netid 10830122 for namespace "e2e-image-registry-reused-3256"\nI0515 00:26:31.316796       1 vnids.go:105] Allocated netid 11239288 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-4331"\nI0515 00:26:31.522817       1 vnids.go:105] Allocated netid 5086339 for namespace "e2e-prometheus-metrics-available-after-upgrade-1391"\nI0515 00:26:31.919768       1 vnids.go:127] Released netid 1914025 for namespace "e2e-test-schema-status-check-d7drm"\nI0515 00:26:32.120974       1 vnids.go:127] Released netid 9310086 for namespace "e2e-test-job-names-fmhnq"\nI0515 00:26:33.215856       1 vnids.go:127] Released netid 14599032 for namespace "e2e-test-scheduling-pod-check-5lh75"\nI0515 00:26:33.961938       1 vnids.go:127] Released netid 15772650 for namespace "e2e-test-prometheus-rfqmc"\nI0515 00:26:34.039037       1 vnids.go:127] Released netid 8586632 for namespace "e2e-test-scheduling-pod-check-45qzb"\nI0515 00:26:34.187405       1 vnids.go:127] Released netid 11888049 for namespace "e2e-test-scheduling-pod-check-9l8tg"\nI0515 00:26:34.445904       1 vnids.go:127] Released netid 10601845 for namespace "e2e-test-scheduling-pod-check-lf85k"\nI0515 00:26:34.456073       1 vnids.go:127] Released netid 4452581 for namespace "e2e-test-scheduling-pod-check-rnscq"\nI0515 00:26:34.526257       1 vnids.go:127] Released netid 12729822 for namespace "e2e-test-scheduling-pod-check-nnvp9"\nI0515 00:26:34.965613       1 vnids.go:127] Released netid 796698 for namespace "e2e-test-subresource-status-check-nnptg"\n
#1790373032041123840 junit (4 days ago)
May 14 15:01:58.999 E ns/openshift-network-diagnostics pod/network-check-target-9vg2f node/ip-10-0-162-108.us-west-2.compute.internal uid/16698caf-dfa7-41c0-bf9f-806e4dfeff0b container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 14 15:02:03.027 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-162-108.us-west-2.compute.internal node/ip-10-0-162-108.us-west-2.compute.internal uid/5b26bcdf-d9cc-4198-bdcd-403cc77185ef container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0514 15:02:00.952954       1 cmd.go:216] Using insecure, self-signed certificates\nI0514 15:02:00.957426       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715698920 cert, and key in /tmp/serving-cert-2651578402/serving-signer.crt, /tmp/serving-cert-2651578402/serving-signer.key\nI0514 15:02:01.506531       1 observer_polling.go:159] Starting file observer\nW0514 15:02:01.529598       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-162-108.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0514 15:02:01.529764       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0514 15:02:01.558085       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2651578402/tls.crt::/tmp/serving-cert-2651578402/tls.key"\nF0514 15:02:02.272299       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 14 15:02:04.058 E ns/openshift-multus pod/network-metrics-daemon-48hbt node/ip-10-0-162-108.us-west-2.compute.internal uid/2bd5487b-a6f2-4b10-b5bf-d7cae7ec83dc container/network-metrics-daemon reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 3 lines not shown

#1790150272744427520 junit (5 days ago)
May 13 23:55:08.560 E ns/openshift-network-diagnostics pod/network-check-target-dvphv node/ip-10-0-225-199.us-west-2.compute.internal uid/64b10633-40ae-49e6-a180-295caddd2e25 container/network-check-target-container reason/ContainerExit code/2 cause/Error
May 13 23:55:13.794 E ns/openshift-sdn pod/sdn-controller-bg7cz node/ip-10-0-141-57.us-west-2.compute.internal uid/f6fb363f-7be1-4bd9-aa69-2d0dc7c26886 container/sdn-controller reason/ContainerExit code/2 cause/Error I0513 22:54:08.684368       1 server.go:27] Starting HTTP metrics server\nI0513 22:54:08.685764       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0513 23:03:53.434040       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-cmv4zq7y-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.240.136:6443: connect: connection refused\n
May 13 23:55:24.103 E ns/openshift-multus pod/multus-additional-cni-plugins-fr64w node/ip-10-0-149-56.us-west-2.compute.internal uid/a4a64809-0c25-417c-985c-1bc595dc6f21 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
#1790150272744427520 junit (5 days ago)
May 14 00:12:31.433 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-75p4q node/ip-10-0-141-57.us-west-2.compute.internal uid/c1becd26-3422-41e9-968f-46dd59b20594 container/csi-driver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 14 00:12:38.200 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-57.us-west-2.compute.internal node/ip-10-0-141-57.us-west-2.compute.internal uid/03d4d3f8-ef85-40e2-baa6-63b375fac8fa container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0514 00:12:36.269167       1 cmd.go:216] Using insecure, self-signed certificates\nI0514 00:12:36.269647       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715645556 cert, and key in /tmp/serving-cert-713745533/serving-signer.crt, /tmp/serving-cert-713745533/serving-signer.key\nI0514 00:12:36.835802       1 observer_polling.go:159] Starting file observer\nW0514 00:12:36.844695       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-141-57.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0514 00:12:36.844799       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0514 00:12:36.846869       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-713745533/tls.crt::/tmp/serving-cert-713745533/tls.key"\nF0514 00:12:37.437862       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 14 00:12:40.391 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-57.us-west-2.compute.internal node/ip-10-0-141-57.us-west-2.compute.internal uid/03d4d3f8-ef85-40e2-baa6-63b375fac8fa container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0514 00:12:36.269167       1 cmd.go:216] Using insecure, self-signed certificates\nI0514 00:12:36.269647       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715645556 cert, and key in /tmp/serving-cert-713745533/serving-signer.crt, /tmp/serving-cert-713745533/serving-signer.key\nI0514 00:12:36.835802       1 observer_polling.go:159] Starting file observer\nW0514 00:12:36.844695       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-141-57.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0514 00:12:36.844799       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0514 00:12:36.846869       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-713745533/tls.crt::/tmp/serving-cert-713745533/tls.key"\nF0514 00:12:37.437862       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1790128582140366848 junit (5 days ago)
May 13 22:47:58.626 E ns/openshift-sdn pod/sdn-controller-jwf7s node/ip-10-0-159-159.us-west-2.compute.internal uid/26f6a20e-8bc7-4186-a2e9-b8d5c8cf2316 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 13 22:48:03.454 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-159-159.us-west-2.compute.internal node/ip-10-0-159-159.us-west-2.compute.internal uid/41770e99-0de3-4d85-beed-030f7b6311c2 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0513 22:48:01.850400       1 cmd.go:216] Using insecure, self-signed certificates\nI0513 22:48:01.865426       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715640481 cert, and key in /tmp/serving-cert-3023962622/serving-signer.crt, /tmp/serving-cert-3023962622/serving-signer.key\nI0513 22:48:02.234866       1 observer_polling.go:159] Starting file observer\nW0513 22:48:02.248407       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-159-159.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0513 22:48:02.248582       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0513 22:48:02.249268       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3023962622/tls.crt::/tmp/serving-cert-3023962622/tls.key"\nF0513 22:48:02.585989       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 13 22:48:05.444 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-159-159.us-west-2.compute.internal node/ip-10-0-159-159.us-west-2.compute.internal uid/41770e99-0de3-4d85-beed-030f7b6311c2 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0513 22:48:01.850400       1 cmd.go:216] Using insecure, self-signed certificates\nI0513 22:48:01.865426       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715640481 cert, and key in /tmp/serving-cert-3023962622/serving-signer.crt, /tmp/serving-cert-3023962622/serving-signer.key\nI0513 22:48:02.234866       1 observer_polling.go:159] Starting file observer\nW0513 22:48:02.248407       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-159-159.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0513 22:48:02.248582       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0513 22:48:02.249268       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3023962622/tls.crt::/tmp/serving-cert-3023962622/tls.key"\nF0513 22:48:02.585989       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1789130839959801856 junit (7 days ago)
May 11 04:36:19.815 E ns/openshift-multus pod/multus-additional-cni-plugins-mrpdv node/ip-10-0-153-57.us-west-2.compute.internal uid/37c57237-19ab-4ae0-9990-f6893e5da48c container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 11 04:36:36.000 E ns/openshift-sdn pod/sdn-controller-27gcc node/ip-10-0-153-57.us-west-2.compute.internal uid/717d37ef-fc14-4f3b-83f6-dfba0fa3a592 container/sdn-controller reason/ContainerExit code/2 cause/Error I0511 03:38:35.689596       1 server.go:27] Starting HTTP metrics server\nI0511 03:38:35.689736       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0511 03:38:35.700164       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-yby16f8w-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.160.76:6443: connect: connection refused\nE0511 03:39:03.628942       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-yby16f8w-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.192.39:6443: connect: connection refused\n
May 11 04:36:36.000 E ns/openshift-sdn pod/sdn-controller-27gcc node/ip-10-0-153-57.us-west-2.compute.internal uid/717d37ef-fc14-4f3b-83f6-dfba0fa3a592 container/sdn-controller reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1789130839959801856 junit (7 days ago)
May 11 04:37:02.809 E ns/openshift-multus pod/multus-additional-cni-plugins-xjtpq node/ip-10-0-253-78.us-west-2.compute.internal uid/e5d65643-4315-4b1c-8595-a5a664e692d0 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 11 04:37:04.086 E ns/openshift-sdn pod/sdn-controller-k68pv node/ip-10-0-216-85.us-west-2.compute.internal uid/29e90d3a-b10e-49dd-b3cb-1b8d6d5d12c4 container/sdn-controller reason/ContainerExit code/2 cause/Error I0511 03:29:16.763285       1 server.go:27] Starting HTTP metrics server\nI0511 03:29:16.763635       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0511 03:37:45.014725       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0511 03:45:09.746264       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-yby16f8w-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.160.76:6443: connect: connection refused\n
May 11 04:37:25.354 E ns/openshift-network-diagnostics pod/network-check-target-qdclx node/ip-10-0-153-57.us-west-2.compute.internal uid/3800bd9a-b44f-4452-92d7-310945e07509 container/network-check-target-container reason/ContainerExit code/2 cause/Error
#1789263118434570240 junit (7 days ago)
May 11 13:30:14.947 E ns/openshift-machine-config-operator pod/machine-config-daemon-zsq6r node/ip-10-0-151-221.us-east-2.compute.internal uid/393bc81a-fa39-4f93-925d-7e211ec73b1d container/machine-config-daemon reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 11 13:30:21.558 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-151-221.us-east-2.compute.internal node/ip-10-0-151-221.us-east-2.compute.internal uid/9162c6e4-e3c1-4a0c-8b7a-ad86b7a311dd container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0511 13:30:19.503014       1 cmd.go:216] Using insecure, self-signed certificates\nI0511 13:30:19.514134       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715434219 cert, and key in /tmp/serving-cert-514042822/serving-signer.crt, /tmp/serving-cert-514042822/serving-signer.key\nI0511 13:30:20.216465       1 observer_polling.go:159] Starting file observer\nW0511 13:30:20.232741       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-151-221.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0511 13:30:20.232921       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0511 13:30:20.239366       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-514042822/tls.crt::/tmp/serving-cert-514042822/tls.key"\nF0511 13:30:20.690047       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 11 13:30:22.784 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-151-221.us-east-2.compute.internal node/ip-10-0-151-221.us-east-2.compute.internal uid/9162c6e4-e3c1-4a0c-8b7a-ad86b7a311dd container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0511 13:30:19.503014       1 cmd.go:216] Using insecure, self-signed certificates\nI0511 13:30:19.514134       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715434219 cert, and key in /tmp/serving-cert-514042822/serving-signer.crt, /tmp/serving-cert-514042822/serving-signer.key\nI0511 13:30:20.216465       1 observer_polling.go:159] Starting file observer\nW0511 13:30:20.232741       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-151-221.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0511 13:30:20.232921       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0511 13:30:20.239366       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-514042822/tls.crt::/tmp/serving-cert-514042822/tls.key"\nF0511 13:30:20.690047       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 2 lines not shown

#1789114118125391872 junit (7 days ago)
May 11 03:39:28.750 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-6k27q node/ip-10-0-201-180.ec2.internal uid/056df125-de1d-4ff4-9c4c-9ea676e32294 container/csi-driver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 11 03:39:35.523 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-201-180.ec2.internal node/ip-10-0-201-180.ec2.internal uid/429756df-e15a-462a-ac50-95550f0588b2 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0511 03:39:33.335376       1 cmd.go:216] Using insecure, self-signed certificates\nI0511 03:39:33.342113       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715398773 cert, and key in /tmp/serving-cert-4178275936/serving-signer.crt, /tmp/serving-cert-4178275936/serving-signer.key\nI0511 03:39:34.189808       1 observer_polling.go:159] Starting file observer\nW0511 03:39:34.211495       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-201-180.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0511 03:39:34.211632       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0511 03:39:34.230434       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4178275936/tls.crt::/tmp/serving-cert-4178275936/tls.key"\nF0511 03:39:34.502233       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 11 03:39:37.627 E ns/openshift-network-diagnostics pod/network-check-target-w8qtm node/ip-10-0-201-180.ec2.internal uid/e7f14647-bffb-4826-83bd-4d3f6202b98d container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 2 lines not shown

#1788934821938991104 junit (8 days ago)
May 10 15:43:52.085 E ns/openshift-monitoring pod/node-exporter-ksjmj node/ip-10-0-222-114.us-east-2.compute.internal uid/1189df5e-2b9b-4d8f-aebc-5a5ce0dee015 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 10 15:43:58.552 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-222-114.us-east-2.compute.internal node/ip-10-0-222-114.us-east-2.compute.internal uid/cab6e58a-4db7-4ffd-aad7-1a34c1073cee container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 15:43:56.422584       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 15:43:56.429428       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715355836 cert, and key in /tmp/serving-cert-2903421071/serving-signer.crt, /tmp/serving-cert-2903421071/serving-signer.key\nI0510 15:43:57.139512       1 observer_polling.go:159] Starting file observer\nW0510 15:43:57.165593       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-222-114.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 15:43:57.165732       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0510 15:43:57.172918       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2903421071/tls.crt::/tmp/serving-cert-2903421071/tls.key"\nF0510 15:43:57.575140       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 10 15:44:00.431 E ns/openshift-network-diagnostics pod/network-check-target-mrb5p node/ip-10-0-222-114.us-east-2.compute.internal uid/2bed2e71-9be8-47c6-9230-edc3b7887a41 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 2 lines not shown

#1788801098723627008 junit (8 days ago)
May 10 06:44:19.988 E ns/openshift-multus pod/multus-6hjbw node/ip-10-0-244-48.ec2.internal uid/152b4d79-a424-4866-adce-a1842de8d01e container/kube-multus reason/ContainerExit code/137 cause/ContainerStatusUnknown The container could not be located when the pod was deleted.  The container used to be Running
May 10 06:44:30.915 E ns/openshift-sdn pod/sdn-controller-6rdpr node/ip-10-0-138-2.ec2.internal uid/265055ce-0505-4151-8551-b8a127606352 container/sdn-controller reason/ContainerExit code/2 cause/Error I0510 05:37:16.020053       1 server.go:27] Starting HTTP metrics server\nI0510 05:37:16.020183       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0510 05:42:12.757048       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0510 05:42:54.226150       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-iffpp7sg-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.128.188:6443: connect: connection refused\nE0510 05:43:49.329790       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-iffpp7sg-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.128.188:6443: connect: connection refused\n
May 10 06:44:32.939 E ns/openshift-multus pod/multus-additional-cni-plugins-zmk85 node/ip-10-0-138-2.ec2.internal uid/fe18f30b-2433-4903-b726-d0e15ddae46f container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
#1788801098723627008 junit (8 days ago)
May 10 06:57:45.169 E ns/openshift-sdn pod/sdn-controller-2wtxd node/ip-10-0-216-171.ec2.internal uid/0cb51221-6ffd-42d6-a18a-adb9db84051e container/sdn-controller reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 10 06:57:51.632 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-216-171.ec2.internal node/ip-10-0-216-171.ec2.internal uid/ec3c1a75-2f1e-4dca-9e1f-414dbd3d3623 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 06:57:49.334486       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 06:57:49.352040       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715324269 cert, and key in /tmp/serving-cert-3249066246/serving-signer.crt, /tmp/serving-cert-3249066246/serving-signer.key\nI0510 06:57:50.643990       1 observer_polling.go:159] Starting file observer\nW0510 06:57:50.693787       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-216-171.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 06:57:50.694059       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0510 06:57:50.723683       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3249066246/tls.crt::/tmp/serving-cert-3249066246/tls.key"\nF0510 06:57:51.166165       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 10 06:57:54.809 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-216-171.ec2.internal node/ip-10-0-216-171.ec2.internal uid/ec3c1a75-2f1e-4dca-9e1f-414dbd3d3623 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 06:57:49.334486       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 06:57:49.352040       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715324269 cert, and key in /tmp/serving-cert-3249066246/serving-signer.crt, /tmp/serving-cert-3249066246/serving-signer.key\nI0510 06:57:50.643990       1 observer_polling.go:159] Starting file observer\nW0510 06:57:50.693787       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-216-171.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 06:57:50.694059       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0510 06:57:50.723683       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3249066246/tls.crt::/tmp/serving-cert-3249066246/tls.key"\nF0510 06:57:51.166165       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1788786186580398080 junit (8 days ago)
May 10 05:54:25.451 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-lqjhc node/ip-10-0-245-9.us-east-2.compute.internal uid/7c548767-ec83-447a-a903-6e6322c35afa container/csi-liveness-probe reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 10 05:54:30.138 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-245-9.us-east-2.compute.internal node/ip-10-0-245-9.us-east-2.compute.internal uid/59acaab6-5ab0-4ec0-81d6-6bd10747b4e0 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 05:54:28.677866       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 05:54:28.684454       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715320468 cert, and key in /tmp/serving-cert-3670550405/serving-signer.crt, /tmp/serving-cert-3670550405/serving-signer.key\nI0510 05:54:29.214817       1 observer_polling.go:159] Starting file observer\nW0510 05:54:29.233937       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-245-9.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 05:54:29.234041       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0510 05:54:29.244559       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3670550405/tls.crt::/tmp/serving-cert-3670550405/tls.key"\nF0510 05:54:29.601047       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 10 05:54:32.181 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-245-9.us-east-2.compute.internal node/ip-10-0-245-9.us-east-2.compute.internal uid/59acaab6-5ab0-4ec0-81d6-6bd10747b4e0 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 05:54:28.677866       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 05:54:28.684454       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715320468 cert, and key in /tmp/serving-cert-3670550405/serving-signer.crt, /tmp/serving-cert-3670550405/serving-signer.key\nI0510 05:54:29.214817       1 observer_polling.go:159] Starting file observer\nW0510 05:54:29.233937       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-245-9.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 05:54:29.234041       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0510 05:54:29.244559       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3670550405/tls.crt::/tmp/serving-cert-3670550405/tls.key"\nF0510 05:54:29.601047       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1788769444495888384 junit (8 days ago)
May 10 04:35:15.140 E ns/openshift-console pod/console-5767c47979-5gzxv node/ip-10-0-196-20.us-west-1.compute.internal uid/71d8828d-2816-488b-9a26-0249756d6de2 container/console reason/ContainerExit code/2 cause/Error W0510 03:48:15.676973       1 main.go:220] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0510 03:48:15.676996       1 main.go:364] cookies are secure!\nI0510 03:48:15.742339       1 main.go:798] Binding to [::]:8443...\nI0510 03:48:15.742366       1 main.go:800] using TLS\n
May 10 04:35:16.412 E ns/openshift-sdn pod/sdn-controller-l9m7c node/ip-10-0-132-94.us-west-1.compute.internal uid/7dd89c21-b952-4123-8d24-db75978c47fc container/sdn-controller reason/ContainerExit code/2 cause/Error I0510 03:26:32.133877       1 server.go:27] Starting HTTP metrics server\nI0510 03:26:32.134374       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0510 03:42:54.014435       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-dh1d3xq7-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.202.35:6443: connect: connection refused\n
May 10 04:35:26.492 E ns/openshift-console pod/console-5767c47979-lsjv2 node/ip-10-0-132-94.us-west-1.compute.internal uid/8afe97b1-8e7b-436b-aa42-7dc4831dcb68 container/console reason/ContainerExit code/2 cause/Error W0510 03:47:53.914865       1 main.go:220] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0510 03:47:53.914893       1 main.go:364] cookies are secure!\nI0510 03:47:53.951056       1 main.go:798] Binding to [::]:8443...\nI0510 03:47:53.951092       1 main.go:800] using TLS\n

... 2 lines not shown

#1788756431231520768 junit (8 days ago)
May 10 03:01:17.347 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-5c4b5cf569-68mps node/ip-10-0-235-110.us-west-2.compute.internal uid/3b624f79-c5af-4814-bced-d269e72aa491 container/kube-apiserver-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 10 03:12:25.059 - 999ms E disruption/cache-kube-api connection/new reason/DisruptionBegan disruption/cache-kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-b6trvimq-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/default?resourceVersion=0": dial tcp 52.27.83.145:6443: connect: connection refused
May 10 03:14:07.019 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-5f775c4b45-wmm8v node/ip-10-0-235-110.us-west-2.compute.internal uid/4776f4a6-2d23-4922-8610-e14edf9c2d40 container/kube-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1788756431231520768 junit (8 days ago)
May 10 03:59:45.074 E ns/openshift-monitoring pod/node-exporter-j65bq node/ip-10-0-134-116.us-west-2.compute.internal uid/dddb732f-c30e-4286-964f-9bb31ea68a38 container/node-exporter reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 10 03:59:49.773 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-116.us-west-2.compute.internal node/ip-10-0-134-116.us-west-2.compute.internal uid/3b54c013-c0fa-42fe-a35d-45cabcf5d1f9 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 03:59:48.547530       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 03:59:48.555754       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715313588 cert, and key in /tmp/serving-cert-2000005685/serving-signer.crt, /tmp/serving-cert-2000005685/serving-signer.key\nI0510 03:59:49.221635       1 observer_polling.go:159] Starting file observer\nW0510 03:59:49.271973       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-134-116.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 03:59:49.272086       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0510 03:59:49.303150       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2000005685/tls.crt::/tmp/serving-cert-2000005685/tls.key"\nF0510 03:59:49.591232       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 10 03:59:51.789 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-116.us-west-2.compute.internal node/ip-10-0-134-116.us-west-2.compute.internal uid/3b54c013-c0fa-42fe-a35d-45cabcf5d1f9 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 03:59:48.547530       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 03:59:48.555754       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715313588 cert, and key in /tmp/serving-cert-2000005685/serving-signer.crt, /tmp/serving-cert-2000005685/serving-signer.key\nI0510 03:59:49.221635       1 observer_polling.go:159] Starting file observer\nW0510 03:59:49.271973       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-134-116.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 03:59:49.272086       1 builder.go:271] check-endpoints version 4.12.0-202405091536.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0510 03:59:49.303150       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2000005685/tls.crt::/tmp/serving-cert-2000005685/tls.key"\nF0510 03:59:49.591232       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1788751281003696128 junit (8 days ago)
May 10 03:16:26.832 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-w2zt6 node/ip-10-0-160-249.us-west-1.compute.internal uid/6d537d14-250d-4a42-80d0-84e50c79da51 container/csi-driver reason/ContainerExit code/2 cause/Error
May 10 03:16:35.045 E ns/openshift-sdn pod/sdn-controller-7md6z node/ip-10-0-157-4.us-west-1.compute.internal uid/82855a85-2477-471b-ad10-278f233fde75 container/sdn-controller reason/ContainerExit code/2 cause/Error I0510 02:14:52.015856       1 server.go:27] Starting HTTP metrics server\nI0510 02:14:52.015974       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0510 02:22:45.690368       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7kskppzv-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.176.29:6443: connect: connection refused\n
May 10 03:16:37.071 E ns/openshift-console pod/console-c9b8dff89-z2fjs node/ip-10-0-157-4.us-west-1.compute.internal uid/6105c356-ae82-449e-8ea2-2ffc5273010b container/console reason/ContainerExit code/2 cause/Error W0510 02:29:51.011477       1 main.go:220] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0510 02:29:51.011499       1 main.go:364] cookies are secure!\nI0510 02:29:51.051983       1 main.go:798] Binding to [::]:8443...\nI0510 02:29:51.052019       1 main.go:800] using TLS\n
#1788751281003696128 junit (8 days ago)
May 10 03:16:44.014 E ns/openshift-network-diagnostics pod/network-check-target-k2wzj node/ip-10-0-155-100.us-west-1.compute.internal uid/398660a1-a532-40df-8cf6-90e793a7387b container/network-check-target-container reason/ContainerExit code/2 cause/Error
May 10 03:16:49.507 E ns/openshift-sdn pod/sdn-controller-xdldf node/ip-10-0-146-165.us-west-1.compute.internal uid/f58844e2-9a81-450c-b7a6-e7501e319a83 container/sdn-controller reason/ContainerExit code/2 cause/Error nshift-network-controller": dial tcp 10.0.176.29:6443: connect: connection refused\nE0510 02:21:48.292038       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7kskppzv-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.176.29:6443: connect: connection refused\nE0510 02:22:29.726176       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7kskppzv-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.201.155:6443: connect: connection refused\nE0510 02:23:15.411349       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7kskppzv-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.176.29:6443: connect: connection refused\nE0510 02:23:56.312965       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7kskppzv-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.201.155:6443: connect: connection refused\nE0510 02:24:40.048223       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7kskppzv-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.201.155:6443: connect: connection refused\nE0510 02:25:20.181945       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7kskppzv-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.176.29:6443: connect: connection refused\n
May 10 03:16:51.507 E ns/openshift-multus pod/multus-admission-controller-2wzn9 node/ip-10-0-248-154.us-west-1.compute.internal uid/6dce5d74-9b70-43e4-927e-48d66ba66798 container/multus-admission-controller reason/ContainerExit code/137 cause/Error
#1788679017033895936 junit (9 days ago)
May 09 22:34:00.190 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-r6rbv node/ip-10-0-208-85.us-west-1.compute.internal uid/1d78d7c9-6d9e-4cb1-b820-da0a634e5816 container/csi-driver reason/ContainerExit code/2 cause/Error
May 09 22:34:04.333 E ns/openshift-sdn pod/sdn-controller-5kfmh node/ip-10-0-140-249.us-west-1.compute.internal uid/45903003-6925-4ee1-bf17-78dd45447315 container/sdn-controller reason/ContainerExit code/2 cause/Error I0509 21:27:58.058920       1 server.go:27] Starting HTTP metrics server\nI0509 21:27:58.059218       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0509 21:36:09.111769       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0509 21:37:47.788316       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0509 21:38:45.571805       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-k1j482cx-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.128.82:6443: connect: connection refused\nE0509 21:39:15.136911       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-k1j482cx-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.128.82:6443: connect: connection refused\n
May 09 22:34:11.998 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-lqlcv node/ip-10-0-240-146.us-west-1.compute.internal uid/a3c70bed-a5b3-4994-8427-6e7b3f430138 container/csi-liveness-probe reason/ContainerExit code/2 cause/Error
#1788679017033895936 junit (9 days ago)
May 09 22:34:14.867 E ns/openshift-multus pod/multus-additional-cni-plugins-gkhvb node/ip-10-0-176-90.us-west-1.compute.internal uid/4213b849-43ca-463e-961d-651c6a4f784c container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 09 22:34:17.494 E ns/openshift-sdn pod/sdn-controller-pwsk8 node/ip-10-0-191-169.us-west-1.compute.internal uid/a27fd06d-ade9-4317-8a24-420928257d9f container/sdn-controller reason/ContainerExit code/2 cause/Error I0509 21:37:06.200845       1 server.go:27] Starting HTTP metrics server\nI0509 21:37:06.200978       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0509 21:37:06.202716       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-k1j482cx-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.128.82:6443: connect: connection refused\nE0509 21:37:47.603263       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-k1j482cx-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.196.51:6443: connect: connection refused\nE0509 21:38:49.679497       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-k1j482cx-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.196.51:6443: connect: connection refused\nE0509 21:39:16.530704       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-k1j482cx-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.128.82:6443: connect: connection refused\n
May 09 22:34:17.494 E ns/openshift-sdn pod/sdn-controller-pwsk8 node/ip-10-0-191-169.us-west-1.compute.internal uid/a27fd06d-ade9-4317-8a24-420928257d9f container/sdn-controller reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1788588425331347456 junit (9 days ago)
May 09 16:45:47.621 E ns/openshift-multus pod/multus-additional-cni-plugins-4rvgp node/ip-10-0-225-101.us-west-2.compute.internal uid/fef695a3-f339-4f1a-be29-2206c45167c8 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 09 16:46:03.718 E ns/openshift-sdn pod/sdn-controller-2qtpt node/ip-10-0-225-101.us-west-2.compute.internal uid/79f8c207-70e3-4a28-be24-1077278c59f8 container/sdn-controller reason/ContainerExit code/2 cause/Error I0509 15:28:21.153923       1 server.go:27] Starting HTTP metrics server\nI0509 15:28:21.154079       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0509 15:34:00.261675       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0509 15:35:02.024384       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-c51if45j-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.177.18:6443: connect: connection refused\nE0509 15:43:26.951585       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-c51if45j-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.228.137:6443: connect: connection refused\n
May 09 16:46:08.268 E ns/openshift-sdn pod/sdn-jw7wq node/ip-10-0-179-152.us-west-2.compute.internal uid/5c0fe988-86e5-46cd-a8ed-4eb754f1b988 container/kube-rbac-proxy reason/ContainerExit code/137 cause/ContainerStatusUnknown The container could not be located when the pod was deleted.  The container used to be Running
#1788588425331347456 junit (9 days ago)
May 09 16:46:10.914 E ns/openshift-multus pod/multus-admission-controller-tl82h node/ip-10-0-225-101.us-west-2.compute.internal uid/2d4596bf-f3ad-47a4-bb16-674a17e09ceb container/multus-admission-controller reason/ContainerExit code/137 cause/Error
May 09 16:46:16.612 E ns/openshift-sdn pod/sdn-controller-m7vjx node/ip-10-0-171-141.us-west-2.compute.internal uid/a76e105d-db7b-49ff-b564-a5c4a71f48f5 container/sdn-controller reason/ContainerExit code/2 cause/Error shift-network-controller": dial tcp 10.0.177.18:6443: connect: connection refused\nE0509 15:35:48.968911       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-c51if45j-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.228.137:6443: connect: connection refused\nE0509 15:36:24.972180       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-c51if45j-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.228.137:6443: connect: connection refused\nE0509 15:36:51.527299       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-c51if45j-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.177.18:6443: connect: connection refused\nE0509 15:41:46.791961       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-c51if45j-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.228.137:6443: connect: connection refused\nE0509 15:42:29.420330       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-c51if45j-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.177.18:6443: connect: connection refused\nE0509 15:42:58.840312       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-c51if45j-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.228.137:6443: connect: connection refused\n
May 09 16:46:19.368 E ns/openshift-network-diagnostics pod/network-check-target-ghrvd node/ip-10-0-179-152.us-west-2.compute.internal uid/93fdfb5c-357e-4162-a33c-e3cdd3f2b8e1 container/network-check-target-container reason/ContainerExit code/2 cause/Error
#1788568471479521280 junit (9 days ago)
May 09 15:31:14.119 E ns/openshift-monitoring pod/node-exporter-pkp7x node/ip-10-0-206-152.us-west-2.compute.internal uid/607868ee-1940-4536-a16a-2fc6561632bb container/node-exporter reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 15:31:15.160 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-206-152.us-west-2.compute.internal node/ip-10-0-206-152.us-west-2.compute.internal uid/fc47449d-c5b7-4dd3-afd8-cc182ba12995 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 15:31:13.815314       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 15:31:13.815523       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715268673 cert, and key in /tmp/serving-cert-714620078/serving-signer.crt, /tmp/serving-cert-714620078/serving-signer.key\nI0509 15:31:14.064765       1 observer_polling.go:159] Starting file observer\nW0509 15:31:14.078626       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-206-152.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 15:31:14.078858       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 15:31:14.105747       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-714620078/tls.crt::/tmp/serving-cert-714620078/tls.key"\nF0509 15:31:14.658129       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 15:31:16.282 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-206-152.us-west-2.compute.internal node/ip-10-0-206-152.us-west-2.compute.internal uid/fc47449d-c5b7-4dd3-afd8-cc182ba12995 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 15:31:13.815314       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 15:31:13.815523       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715268673 cert, and key in /tmp/serving-cert-714620078/serving-signer.crt, /tmp/serving-cert-714620078/serving-signer.key\nI0509 15:31:14.064765       1 observer_polling.go:159] Starting file observer\nW0509 15:31:14.078626       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-206-152.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 15:31:14.078858       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 15:31:14.105747       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-714620078/tls.crt::/tmp/serving-cert-714620078/tls.key"\nF0509 15:31:14.658129       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 line not shown

#1788556229484744704 junit 9 days ago
May 09 14:32:39.296 E ns/openshift-sdn pod/sdn-bq2h7 node/ip-10-0-195-37.ec2.internal uid/dd646017-16b4-452d-863f-54e7d0fed0ed container/sdn reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 14:32:45.015 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-195-37.ec2.internal node/ip-10-0-195-37.ec2.internal uid/2661b91d-d2ed-4828-949d-f8e23677dd3e container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 14:32:43.308395       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 14:32:43.315611       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715265163 cert, and key in /tmp/serving-cert-1708293974/serving-signer.crt, /tmp/serving-cert-1708293974/serving-signer.key\nI0509 14:32:43.880156       1 observer_polling.go:159] Starting file observer\nW0509 14:32:43.896861       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-195-37.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 14:32:43.897014       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 14:32:43.900672       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1708293974/tls.crt::/tmp/serving-cert-1708293974/tls.key"\nF0509 14:32:44.280849       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 14:32:46.014 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-195-37.ec2.internal node/ip-10-0-195-37.ec2.internal uid/2661b91d-d2ed-4828-949d-f8e23677dd3e container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 14:32:43.308395       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 14:32:43.315611       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715265163 cert, and key in /tmp/serving-cert-1708293974/serving-signer.crt, /tmp/serving-cert-1708293974/serving-signer.key\nI0509 14:32:43.880156       1 observer_polling.go:159] Starting file observer\nW0509 14:32:43.896861       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-195-37.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 14:32:43.897014       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 14:32:43.900672       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1708293974/tls.crt::/tmp/serving-cert-1708293974/tls.key"\nF0509 14:32:44.280849       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 line not shown
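The kube-apiserver-check-endpoints exits above all end the same way: at startup the container's delegated-authentication setup reads the well-known kube-system/extension-apiserver-authentication ConfigMap from the local apiserver, and while localhost:6443 is still refusing connections that GET fails, the process logs the fatal "error initializing delegating authentication" line and exits 255 to be restarted. A minimal, hypothetical sketch of the read that is failing (not the check-endpoints code itself) would look like:

{code:go}
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// This is the object named in the fatal log line; while the apiserver
	// endpoint is down this Get returns "connection refused" just as above.
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(
		context.TODO(), "extension-apiserver-authentication", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(cm.Data["requestheader-client-ca-file"])
}
{code}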

#1788541509239312384 junit 9 days ago
May 09 13:37:28.255 E ns/openshift-multus pod/multus-additional-cni-plugins-fh9cm node/ip-10-0-148-84.us-west-1.compute.internal uid/cb689cea-b1d0-4e3a-bac0-087a5ebadbab container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 09 13:37:43.785 E ns/openshift-sdn pod/sdn-controller-llm9n node/ip-10-0-139-209.us-west-1.compute.internal uid/590f9c34-b9ad-4690-a336-fa1019430d5e container/sdn-controller reason/ContainerExit code/2 cause/Error I0509 12:23:15.877887       1 server.go:27] Starting HTTP metrics server\nI0509 12:23:15.878048       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0509 12:31:11.938923       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0509 12:32:56.617962       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0509 12:33:41.042284       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-0pjkp0xn-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.166.6:6443: connect: connection refused\n
May 09 13:37:49.856 E ns/openshift-multus pod/multus-admission-controller-rrrhn node/ip-10-0-139-209.us-west-1.compute.internal uid/72b46eb4-9e1d-4e65-83a9-46de70914e38 container/multus-admission-controller reason/ContainerExit code/137 cause/Error
#1788541509239312384 junit 9 days ago
May 09 13:54:42.905 E ns/openshift-monitoring pod/node-exporter-5ftmc node/ip-10-0-139-209.us-west-1.compute.internal uid/fedecea9-5505-459b-9fdd-50f08cabd610 container/node-exporter reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 13:54:48.766 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-209.us-west-1.compute.internal node/ip-10-0-139-209.us-west-1.compute.internal uid/4aaf0099-0c17-413f-8396-4a1b867a30a5 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 13:54:46.768067       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 13:54:46.768452       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715262886 cert, and key in /tmp/serving-cert-2808342975/serving-signer.crt, /tmp/serving-cert-2808342975/serving-signer.key\nI0509 13:54:47.533264       1 observer_polling.go:159] Starting file observer\nW0509 13:54:47.545461       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-139-209.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 13:54:47.545565       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 13:54:47.551505       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2808342975/tls.crt::/tmp/serving-cert-2808342975/tls.key"\nF0509 13:54:48.386286       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 13:54:50.957 E ns/openshift-network-diagnostics pod/network-check-target-xf6gt node/ip-10-0-139-209.us-west-1.compute.internal uid/0b58de06-05eb-4524-8a6d-05ff8f8990e4 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1788528424722108416 junit 9 days ago
May 09 12:54:11.398 E ns/openshift-multus pod/multus-additional-cni-plugins-ssj2h node/ip-10-0-151-72.ec2.internal uid/f55e6a01-33c9-4260-bd7e-02d567a260d6 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 09 12:54:11.438 E ns/openshift-sdn pod/sdn-controller-7nm7b node/ip-10-0-167-183.ec2.internal uid/447e412d-8eea-4442-bbf1-5e7987d68848 container/sdn-controller reason/ContainerExit code/2 cause/Error I0509 11:28:59.934287       1 server.go:27] Starting HTTP metrics server\nI0509 11:28:59.934394       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0509 11:36:55.445207       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-2mn09hds-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.181.144:6443: connect: connection refused\n
May 09 12:54:12.472 E ns/openshift-network-diagnostics pod/network-check-target-cqxqk node/ip-10-0-167-183.ec2.internal uid/310f167b-d1b5-4419-987c-4b35e33e5205 container/network-check-target-container reason/ContainerExit code/2 cause/Error
#1788528424722108416 junit 9 days ago
May 09 12:54:16.678 E ns/openshift-multus pod/multus-admission-controller-sjpll node/ip-10-0-158-145.ec2.internal uid/f7f0faa1-c408-4b57-91ee-d7308f0f704e container/multus-admission-controller reason/ContainerExit code/137 cause/Error
May 09 12:54:22.717 E ns/openshift-sdn pod/sdn-controller-stxcr node/ip-10-0-158-145.ec2.internal uid/3a666c7e-5fb2-48ad-86d7-92d40288aa15 container/sdn-controller reason/ContainerExit code/2 cause/Error I0509 11:28:54.089355       1 server.go:27] Starting HTTP metrics server\nI0509 11:28:54.092115       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0509 11:36:50.078069       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-2mn09hds-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.225.62:6443: connect: connection refused\nE0509 11:37:35.732762       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-2mn09hds-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.181.144:6443: connect: connection refused\nE0509 11:38:04.498819       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-2mn09hds-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.225.62:6443: connect: connection refused\n
May 09 12:54:30.048 E ns/openshift-multus pod/multus-additional-cni-plugins-sxpz7 node/ip-10-0-199-123.ec2.internal uid/ce0f1fa5-acd2-4b08-83a5-6102a9e7a985 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
#1788511071670112256 junit 9 days ago
May 09 11:22:10.406 E ns/openshift-multus pod/multus-additional-cni-plugins-jcpd5 node/ip-10-0-159-111.ec2.internal uid/963d0ccd-b2d9-46fb-aaa6-7f97af9faea9 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 09 11:22:25.602 E ns/openshift-sdn pod/sdn-controller-4nmnz node/ip-10-0-249-237.ec2.internal uid/f1aca0dc-111c-476b-8def-c699e681cfab container/sdn-controller reason/ContainerExit code/2 cause/Error I0509 10:21:40.818195       1 server.go:27] Starting HTTP metrics server\nI0509 10:21:40.818487       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0509 10:31:02.961926       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-2067vq6b-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.158.95:6443: connect: connection refused\n
May 09 11:22:26.972 E ns/openshift-multus pod/network-metrics-daemon-hg5qq node/ip-10-0-247-188.ec2.internal uid/706cd745-8c27-4fa2-82a7-e272b6d669f0 container/kube-rbac-proxy reason/ContainerExit code/137 cause/ContainerStatusUnknown The container could not be located when the pod was deleted.  The container used to be Running
#1788511071670112256 junit 9 days ago
May 09 11:22:39.415 E ns/openshift-console pod/console-7665bbd78d-tt5z4 node/ip-10-0-157-36.ec2.internal uid/3e5d79b7-bc59-4e30-b4a0-c0d3c62461fd container/console reason/ContainerExit code/2 cause/Error W0509 10:30:28.631916       1 main.go:220] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0509 10:30:28.631943       1 main.go:364] cookies are secure!\nE0509 10:30:28.660186       1 auth.go:232] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\nE0509 10:30:38.676842       1 auth.go:232] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\nE0509 10:30:48.683002       1 auth.go:232] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\nE0509 10:30:58.687954       1 auth.go:232] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 429 Too Many Requests\nE0509 10:31:10.644170       1 auth.go:232] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 403 Forbidden\nI0509 10:31:20.687950       1 main.go:798] Binding to [::]:8443...\nI0509 10:31:20.688002       1 main.go:800] using TLS\nE0509 10:51:56.698994       1 auth.go:250] failed to get latest auth source data: discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 429 Too Many Requests\nE0509 10:51:57.703004       1 auth.go:250] failed to get latest auth source data: discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 429 Too Many Requests\n
May 09 11:22:40.708 E ns/openshift-sdn pod/sdn-controller-z4lnc node/ip-10-0-157-36.ec2.internal uid/8bdd2c60-6627-418e-8949-8cd30d4309a8 container/sdn-controller reason/ContainerExit code/2 cause/Error I0509 10:20:11.707210       1 server.go:27] Starting HTTP metrics server\nI0509 10:20:11.707326       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0509 10:28:06.882659       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-2067vq6b-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.158.95:6443: connect: connection refused\n
May 09 11:22:53.451 E ns/openshift-multus pod/multus-additional-cni-plugins-5gxmg node/ip-10-0-143-50.ec2.internal uid/9edc5d06-faf5-466a-93f9-18d30848d5e4 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
#1788494663645138944 junit 9 days ago
May 09 10:23:46.841 E ns/openshift-network-diagnostics pod/network-check-target-k6grv node/ip-10-0-254-186.ec2.internal uid/ea9cdc8c-eca2-47a7-863a-227a22334fdd container/network-check-target-container reason/ContainerExit code/2 cause/Error
May 09 10:23:48.076 E ns/openshift-sdn pod/sdn-controller-n6qlm node/ip-10-0-177-153.ec2.internal uid/133cd482-c28c-4118-8be7-abf10400a031 container/sdn-controller reason/ContainerExit code/2 cause/Error I0509 09:14:43.167806       1 server.go:27] Starting HTTP metrics server\nI0509 09:14:43.167949       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0509 09:22:24.441996       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0509 09:23:56.253269       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0509 09:24:49.379885       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-8m59x7qb-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.233.20:6443: connect: connection refused\n
May 09 10:23:48.410 E ns/openshift-multus pod/multus-admission-controller-f9bbr node/ip-10-0-195-175.ec2.internal uid/122c16c8-4c3f-4a20-9aa4-bac11e5eb47c container/multus-admission-controller reason/ContainerExit code/137 cause/Error
#1788494663645138944 junit 9 days ago
May 09 10:23:55.023 E ns/openshift-console pod/console-5c98f5b9d6-tm45j node/ip-10-0-151-234.ec2.internal uid/87edb39c-1db0-4750-9daf-557e59f1fc39 container/console reason/ContainerExit code/2 cause/Error W0509 09:37:37.058981       1 main.go:220] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0509 09:37:37.059008       1 main.go:364] cookies are secure!\nI0509 09:37:37.100266       1 main.go:798] Binding to [::]:8443...\nI0509 09:37:37.100291       1 main.go:800] using TLS\n
May 09 10:23:56.034 E ns/openshift-sdn pod/sdn-controller-sgd2g node/ip-10-0-151-234.ec2.internal uid/346c550a-fb50-4b4a-bba1-75b5339554d5 container/sdn-controller reason/ContainerExit code/2 cause/Error I0509 09:14:41.509814       1 server.go:27] Starting HTTP metrics server\nI0509 09:14:41.509933       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0509 09:22:20.954888       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0509 09:23:56.315162       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0509 09:24:50.145414       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-8m59x7qb-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.168.225:6443: connect: connection refused\nE0509 09:32:11.241452       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-8m59x7qb-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.233.20:6443: connect: connection refused\n
May 09 10:24:06.302 E ns/openshift-multus pod/multus-additional-cni-plugins-trhsx node/ip-10-0-161-17.ec2.internal uid/492aafc3-3dbd-4431-ac91-b276d62b6943 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
#1788477340569833472 junit 9 days ago
May 09 09:34:34.401 E ns/openshift-machine-config-operator pod/machine-config-daemon-6qn2h node/ip-10-0-169-199.ec2.internal uid/f98b2425-7dac-4c63-b61c-06511c17d7b9 container/oauth-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 09:34:41.402 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-199.ec2.internal node/ip-10-0-169-199.ec2.internal uid/8926eb5b-ead5-41f2-aad6-6b62fa53f127 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 09:34:38.947751       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 09:34:38.948237       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715247278 cert, and key in /tmp/serving-cert-1975248576/serving-signer.crt, /tmp/serving-cert-1975248576/serving-signer.key\nI0509 09:34:39.593783       1 observer_polling.go:159] Starting file observer\nW0509 09:34:39.609618       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-169-199.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 09:34:39.609746       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 09:34:39.648100       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1975248576/tls.crt::/tmp/serving-cert-1975248576/tls.key"\nF0509 09:34:40.341987       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 09:34:43.586 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-199.ec2.internal node/ip-10-0-169-199.ec2.internal uid/8926eb5b-ead5-41f2-aad6-6b62fa53f127 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 09:34:38.947751       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 09:34:38.948237       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715247278 cert, and key in /tmp/serving-cert-1975248576/serving-signer.crt, /tmp/serving-cert-1975248576/serving-signer.key\nI0509 09:34:39.593783       1 observer_polling.go:159] Starting file observer\nW0509 09:34:39.609618       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-169-199.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 09:34:39.609746       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 09:34:39.648100       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1975248576/tls.crt::/tmp/serving-cert-1975248576/tls.key"\nF0509 09:34:40.341987       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 line not shown

#1788463413651836928 junit 9 days ago
May 09 08:33:22.630 E ns/e2e-k8s-sig-apps-daemonset-upgrade-9350 pod/ds1-gmmfl node/ip-10-0-203-109.us-west-2.compute.internal uid/c9f8cd2d-4625-4813-b913-73976e2219dc container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 08:33:27.264 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-203-109.us-west-2.compute.internal node/ip-10-0-203-109.us-west-2.compute.internal uid/68bf99d9-614e-40a5-96ad-c373e0ca3ec2 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 08:33:25.392277       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 08:33:25.398141       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715243605 cert, and key in /tmp/serving-cert-475192449/serving-signer.crt, /tmp/serving-cert-475192449/serving-signer.key\nI0509 08:33:25.645155       1 observer_polling.go:159] Starting file observer\nW0509 08:33:25.667019       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-203-109.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 08:33:25.667288       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 08:33:25.671015       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-475192449/tls.crt::/tmp/serving-cert-475192449/tls.key"\nF0509 08:33:26.015018       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 08:33:31.374 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-203-109.us-west-2.compute.internal node/ip-10-0-203-109.us-west-2.compute.internal uid/68bf99d9-614e-40a5-96ad-c373e0ca3ec2 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 08:33:25.392277       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 08:33:25.398141       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715243605 cert, and key in /tmp/serving-cert-475192449/serving-signer.crt, /tmp/serving-cert-475192449/serving-signer.key\nI0509 08:33:25.645155       1 observer_polling.go:159] Starting file observer\nW0509 08:33:25.667019       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-203-109.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 08:33:25.667288       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 08:33:25.671015       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-475192449/tls.crt::/tmp/serving-cert-475192449/tls.key"\nF0509 08:33:26.015018       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 line not shown

#1788446275297873920 junit 9 days ago
May 09 07:37:44.352 E ns/openshift-sdn pod/sdn-controller-cxrsk node/ip-10-0-211-10.us-west-2.compute.internal uid/50a4da49-4e75-4036-b36b-447c441452d3 container/sdn-controller reason/ContainerExit code/2 cause/Error -upgrade-2752"\nI0509 06:57:26.901799       1 vnids.go:105] Allocated netid 10830698 for namespace "e2e-k8s-service-load-balancer-with-pdb-new-7611"\nI0509 06:57:26.927931       1 vnids.go:105] Allocated netid 14953545 for namespace "e2e-check-for-deletes-3952"\nI0509 06:57:27.128963       1 vnids.go:105] Allocated netid 626227 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-1303"\nI0509 06:57:27.334600       1 vnids.go:105] Allocated netid 7573697 for namespace "e2e-k8s-sig-apps-deployment-upgrade-1936"\nI0509 06:57:27.525441       1 vnids.go:105] Allocated netid 9589344 for namespace "e2e-image-pulls-are-fast-5265"\nI0509 06:57:27.728466       1 vnids.go:105] Allocated netid 9084308 for namespace "e2e-prometheus-metrics-available-after-upgrade-9671"\nI0509 06:57:28.157322       1 vnids.go:127] Released netid 2917700 for namespace "e2e-test-scheduling-pod-check-r8mrs"\nI0509 06:57:28.368288       1 vnids.go:127] Released netid 13814769 for namespace "e2e-test-schema-status-check-82zdd"\nI0509 06:57:28.534287       1 vnids.go:127] Released netid 13192296 for namespace "e2e-test-scheduling-pod-check-fb5ls"\nI0509 06:57:29.313749       1 vnids.go:127] Released netid 9122139 for namespace "e2e-test-prometheus-zgkm8"\nI0509 06:57:29.580096       1 vnids.go:127] Released netid 4404968 for namespace "e2e-test-job-names-jkwrr"\nI0509 06:57:29.680152       1 vnids.go:127] Released netid 1355486 for namespace "e2e-test-scheduling-pod-check-lrd8d"\nI0509 06:57:30.550824       1 vnids.go:127] Released netid 1778361 for namespace "e2e-test-scheduling-pod-check-fhgb2"\nI0509 06:57:30.558685       1 vnids.go:127] Released netid 6808190 for namespace "e2e-test-scheduling-pod-check-jkbb4"\nI0509 06:57:31.317609       1 vnids.go:127] Released netid 16201050 for namespace "e2e-test-scheduling-pod-check-zkgpb"\nI0509 06:57:31.663652       1 vnids.go:127] Released netid 3719399 for namespace "e2e-test-scheduling-pod-check-srrzj"\nI0509 06:57:31.752926       1 vnids.go:127] Released netid 623615 for namespace "e2e-test-scheduling-pod-check-nt4t6"\n
May 09 07:37:58.286 E ns/openshift-sdn pod/sdn-controller-8sxsh node/ip-10-0-190-50.us-west-2.compute.internal uid/ec8717fc-51d2-48cb-8012-59027c9736cb container/sdn-controller reason/ContainerExit code/2 cause/Error I0509 06:19:39.118110       1 server.go:27] Starting HTTP metrics server\nI0509 06:19:39.118263       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0509 06:27:34.225379       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-dyc3z1d1-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.192.255:6443: connect: connection refused\nE0509 06:28:07.309684       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-dyc3z1d1-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.142.237:6443: connect: connection refused\nE0509 06:28:35.357574       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-dyc3z1d1-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.142.237:6443: connect: connection refused\n
May 09 07:37:59.728 E ns/openshift-console pod/console-6994ddcc65-k44hx node/ip-10-0-162-77.us-west-2.compute.internal uid/036583fc-f273-4806-9dd0-d8560c3fe72d container/console reason/ContainerExit code/2 cause/Error W0509 06:39:25.328056       1 main.go:220] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0509 06:39:25.328219       1 main.go:364] cookies are secure!\nI0509 06:39:25.383045       1 main.go:798] Binding to [::]:8443...\nI0509 06:39:25.383201       1 main.go:800] using TLS\n
#1788446275297873920 junit 9 days ago
May 09 07:54:21.437 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-brhfm node/ip-10-0-162-77.us-west-2.compute.internal uid/1d308862-873f-4920-ac07-96e0df68ad36 container/csi-driver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 07:54:28.182 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-162-77.us-west-2.compute.internal node/ip-10-0-162-77.us-west-2.compute.internal uid/593252ad-d893-49e6-8968-f8ac41713084 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 07:54:26.479608       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 07:54:26.480447       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715241266 cert, and key in /tmp/serving-cert-461278527/serving-signer.crt, /tmp/serving-cert-461278527/serving-signer.key\nI0509 07:54:26.935694       1 observer_polling.go:159] Starting file observer\nW0509 07:54:26.955718       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-162-77.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 07:54:26.955932       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 07:54:26.970673       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-461278527/tls.crt::/tmp/serving-cert-461278527/tls.key"\nF0509 07:54:27.739865       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 07:54:31.346 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-162-77.us-west-2.compute.internal node/ip-10-0-162-77.us-west-2.compute.internal uid/593252ad-d893-49e6-8968-f8ac41713084 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 07:54:26.479608       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 07:54:26.480447       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715241266 cert, and key in /tmp/serving-cert-461278527/serving-signer.crt, /tmp/serving-cert-461278527/serving-signer.key\nI0509 07:54:26.935694       1 observer_polling.go:159] Starting file observer\nW0509 07:54:26.955718       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-162-77.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 07:54:26.955932       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 07:54:26.970673       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-461278527/tls.crt::/tmp/serving-cert-461278527/tls.key"\nF0509 07:54:27.739865       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 line not shown

#1788355478883930112 junit 9 days ago
May 09 01:25:46.166 E ns/e2e-k8s-sig-apps-daemonset-upgrade-7300 pod/ds1-prmjp node/ip-10-0-188-190.us-west-2.compute.internal uid/0e451a5c-979d-45cd-8d87-3286eb407cb5 container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 01:25:46.184 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-188-190.us-west-2.compute.internal node/ip-10-0-188-190.us-west-2.compute.internal uid/84752596-92c3-493e-960d-bd101c4dc2e6 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 01:25:44.590862       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 01:25:44.599070       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715217944 cert, and key in /tmp/serving-cert-648032125/serving-signer.crt, /tmp/serving-cert-648032125/serving-signer.key\nI0509 01:25:45.662128       1 observer_polling.go:159] Starting file observer\nW0509 01:25:45.680430       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-188-190.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 01:25:45.680594       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 01:25:45.703440       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-648032125/tls.crt::/tmp/serving-cert-648032125/tls.key"\nF0509 01:25:45.869876       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 01:25:50.176 E ns/openshift-dns pod/dns-default-vjz9r node/ip-10-0-188-190.us-west-2.compute.internal uid/21e156f8-88fa-4133-87bd-b5d2f676738d container/dns reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1788355478883930112 junit 9 days ago
May 09 01:25:50.191 E ns/openshift-multus pod/network-metrics-daemon-7xjhq node/ip-10-0-188-190.us-west-2.compute.internal uid/a1f34f3e-75ec-4e88-b321-09eeed5a643a container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 01:25:50.227 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-188-190.us-west-2.compute.internal node/ip-10-0-188-190.us-west-2.compute.internal uid/84752596-92c3-493e-960d-bd101c4dc2e6 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 01:25:44.590862       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 01:25:44.599070       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715217944 cert, and key in /tmp/serving-cert-648032125/serving-signer.crt, /tmp/serving-cert-648032125/serving-signer.key\nI0509 01:25:45.662128       1 observer_polling.go:159] Starting file observer\nW0509 01:25:45.680430       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-188-190.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 01:25:45.680594       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 01:25:45.703440       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-648032125/tls.crt::/tmp/serving-cert-648032125/tls.key"\nF0509 01:25:45.869876       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 01:25:53.213 E ns/openshift-e2e-loki pod/loki-promtail-m59zd node/ip-10-0-188-190.us-west-2.compute.internal uid/35acb862-7592-4f9f-89ff-58b02a27716d container/prod-bearer-token reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1788385777525198848 junit 9 days ago
May 09 03:18:02.162 E ns/openshift-multus pod/network-metrics-daemon-9whk4 node/ip-10-0-173-56.ec2.internal uid/31378787-6d81-4e64-972e-2eb3f6eee096 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 03:18:05.183 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-173-56.ec2.internal node/ip-10-0-173-56.ec2.internal uid/d49a8a86-3f19-4719-bc38-c1a60cc93667 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 03:18:03.654875       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 03:18:03.655288       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715224683 cert, and key in /tmp/serving-cert-3484663938/serving-signer.crt, /tmp/serving-cert-3484663938/serving-signer.key\nI0509 03:18:04.052039       1 observer_polling.go:159] Starting file observer\nW0509 03:18:04.087964       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-173-56.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 03:18:04.088121       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 03:18:04.106249       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3484663938/tls.crt::/tmp/serving-cert-3484663938/tls.key"\nF0509 03:18:04.269435       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 03:18:05.307 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator etcd is degraded

... 2 lines not shown

#1788371147264364544 junit 9 days ago
May 09 02:20:10.885 E ns/openshift-dns pod/dns-default-rj4sf node/ip-10-0-132-113.us-east-2.compute.internal uid/825669a0-5aa9-4bba-8965-0a829c38cf93 container/dns reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 02:20:14.713 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-113.us-east-2.compute.internal node/ip-10-0-132-113.us-east-2.compute.internal uid/6dbebbbf-0ed9-4e19-93f1-ecef700f7c17 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 02:20:13.299719       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 02:20:13.312074       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715221213 cert, and key in /tmp/serving-cert-2451932833/serving-signer.crt, /tmp/serving-cert-2451932833/serving-signer.key\nI0509 02:20:13.842022       1 observer_polling.go:159] Starting file observer\nW0509 02:20:13.851436       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-132-113.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 02:20:13.851569       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 02:20:13.864169       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2451932833/tls.crt::/tmp/serving-cert-2451932833/tls.key"\nF0509 02:20:14.347239       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 02:20:19.750 E ns/openshift-e2e-loki pod/loki-promtail-4n7bl node/ip-10-0-132-113.us-east-2.compute.internal uid/df04ac2a-c041-43b1-9de0-01c4cab600d1 container/promtail reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1788371147264364544 junit 9 days ago
May 09 02:20:19.750 E ns/openshift-e2e-loki pod/loki-promtail-4n7bl node/ip-10-0-132-113.us-east-2.compute.internal uid/df04ac2a-c041-43b1-9de0-01c4cab600d1 container/oauth-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 02:20:19.837 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-113.us-east-2.compute.internal node/ip-10-0-132-113.us-east-2.compute.internal uid/6dbebbbf-0ed9-4e19-93f1-ecef700f7c17 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 02:20:13.299719       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 02:20:13.312074       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715221213 cert, and key in /tmp/serving-cert-2451932833/serving-signer.crt, /tmp/serving-cert-2451932833/serving-signer.key\nI0509 02:20:13.842022       1 observer_polling.go:159] Starting file observer\nW0509 02:20:13.851436       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-132-113.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 02:20:13.851569       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 02:20:13.864169       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2451932833/tls.crt::/tmp/serving-cert-2451932833/tls.key"\nF0509 02:20:14.347239       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 02:20:20.781 E ns/openshift-multus pod/multus-additional-cni-plugins-sttnx node/ip-10-0-132-113.us-east-2.compute.internal uid/ad555692-e5f0-4d22-a2fd-cbd6cc6f55e3 container/kube-multus-additional-cni-plugins reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1788400994225229824 junit 9 days ago
May 09 04:05:40.669 E ns/openshift-multus pod/multus-additional-cni-plugins-l4ppp node/ip-10-0-207-192.us-east-2.compute.internal uid/3167257d-bc43-4d18-81f9-76b91d57853b container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 09 04:05:55.745 E ns/openshift-sdn pod/sdn-controller-zjlxm node/ip-10-0-255-173.us-east-2.compute.internal uid/ae536d2d-ead3-4dbf-bdf0-bdf7ca5edf4c container/sdn-controller reason/ContainerExit code/2 cause/Error I0509 03:03:05.604615       1 server.go:27] Starting HTTP metrics server\nI0509 03:03:05.604821       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0509 03:03:05.621219       1 leaderelection.go:334] error initially creating leader election record: configmaps "openshift-network-controller" already exists\nE0509 03:10:49.354009       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-2qwbwk1c-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.223.54:6443: connect: connection refused\nE0509 03:11:44.584979       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-2qwbwk1c-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.223.54:6443: connect: connection refused\n
May 09 04:06:02.010 E ns/openshift-multus pod/multus-admission-controller-v5pf4 node/ip-10-0-163-227.us-east-2.compute.internal uid/dc0a736a-e0a7-4ff5-b63e-1b26f49b3174 container/multus-admission-controller reason/ContainerExit code/137 cause/Error
#1788400994225229824 junit 9 days ago
May 09 04:06:05.178 E ns/openshift-network-diagnostics pod/network-check-target-sj7xs node/ip-10-0-137-132.us-east-2.compute.internal uid/bedced9c-7ef7-4014-8720-5a3032a0a8cc container/network-check-target-container reason/ContainerExit code/2 cause/Error
May 09 04:06:14.208 E ns/openshift-sdn pod/sdn-controller-wnldf node/ip-10-0-137-132.us-east-2.compute.internal uid/ee0a28c0-4043-4287-83a7-0c0a8d6b2073 container/sdn-controller reason/ContainerExit code/2 cause/Error I0509 03:03:11.017144       1 server.go:27] Starting HTTP metrics server\nI0509 03:03:11.017336       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0509 03:10:42.021990       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-2qwbwk1c-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.223.54:6443: connect: connection refused\nE0509 03:11:39.218033       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-2qwbwk1c-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.143.161:6443: connect: connection refused\n
May 09 04:06:27.505 E ns/openshift-multus pod/multus-additional-cni-plugins-c75kg node/ip-10-0-143-137.us-east-2.compute.internal uid/ecb5fbf0-d9e9-4d73-9b33-e4c350f3a06a container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
#1788342622121627648 junit 10 days ago
May 09 00:45:44.176 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-qv9mz node/ip-10-0-148-93.ec2.internal uid/5d68fbf9-4347-4046-9afa-c8b1b6407a25 container/csi-driver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 00:45:48.415 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-148-93.ec2.internal node/ip-10-0-148-93.ec2.internal uid/be92cae5-b8e8-4c71-97d2-6b449bf4e6e1 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 00:45:46.902600       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 00:45:46.909568       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715215546 cert, and key in /tmp/serving-cert-3102198651/serving-signer.crt, /tmp/serving-cert-3102198651/serving-signer.key\nI0509 00:45:47.779449       1 observer_polling.go:159] Starting file observer\nW0509 00:45:47.807592       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-148-93.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 00:45:47.807782       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0509 00:45:47.818080       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3102198651/tls.crt::/tmp/serving-cert-3102198651/tls.key"\nF0509 00:45:48.251641       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 00:45:48.451 E ns/openshift-network-diagnostics pod/network-check-target-nq6jn node/ip-10-0-148-93.ec2.internal uid/c30e0ccb-052b-4dbf-96fb-8650f1a2649a container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 3 lines not shown

#1788296086457159680 junit 10 days ago
May 08 21:20:24.684 E ns/openshift-network-diagnostics pod/network-check-target-c2cfl node/ip-10-0-168-144.us-west-1.compute.internal uid/653406fe-5bfe-4800-85a8-530cfdc1ac72 container/network-check-target-container reason/ContainerExit code/2 cause/Error
May 08 21:20:27.720 E ns/openshift-sdn pod/sdn-controller-hkdbz node/ip-10-0-249-110.us-west-1.compute.internal uid/0710f0b1-9d13-4f57-a755-c6e46a226559 container/sdn-controller reason/ContainerExit code/2 cause/Error I0508 20:11:19.233271       1 server.go:27] Starting HTTP metrics server\nI0508 20:11:19.233397       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0508 20:19:11.651450       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-2y8pnhrx-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.164.217:6443: connect: connection refused\nE0508 20:27:06.489078       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-2y8pnhrx-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.164.217:6443: connect: connection refused\n
May 08 21:20:34.640 E ns/openshift-multus pod/multus-additional-cni-plugins-wvdxb node/ip-10-0-168-144.us-west-1.compute.internal uid/56ea51bd-eae5-44c5-9590-474dc1a33258 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
#1788296086457159680 junit 10 days ago
May 08 21:37:32.617 - 12s   E clusteroperator/etcd condition/Degraded status/True reason/ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:10771128909377656150 name:"ip-10-0-138-69.us-west-1.compute.internal" peerURLs:"https://10.0.138.69:2380" clientURLs:"https://10.0.138.69:2379"  Healthy:true Took:1.057026ms Error:<nil>} {Member:ID:14242864354697615227 name:"ip-10-0-164-17.us-west-1.compute.internal" peerURLs:"https://10.0.164.17:2380" clientURLs:"https://10.0.164.17:2379"  Healthy:true Took:796.379µs Error:<nil>} {Member:ID:16400966750666929002 name:"ip-10-0-249-110.us-west-1.compute.internal" peerURLs:"https://10.0.249.110:2380" clientURLs:"https://10.0.249.110:2379"  Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.249.110:2379]: context deadline exceeded}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-249-110.us-west-1.compute.internal is unhealthy
May 08 21:37:34.813 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-249-110.us-west-1.compute.internal node/ip-10-0-249-110.us-west-1.compute.internal uid/6332bbab-22bf-4ffe-b9fd-0f7e8fd02d87 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 21:37:33.470162       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 21:37:33.474023       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715204253 cert, and key in /tmp/serving-cert-61511805/serving-signer.crt, /tmp/serving-cert-61511805/serving-signer.key\nI0508 21:37:34.182730       1 observer_polling.go:159] Starting file observer\nW0508 21:37:34.195080       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-249-110.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 21:37:34.195221       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 21:37:34.213013       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-61511805/tls.crt::/tmp/serving-cert-61511805/tls.key"\nF0508 21:37:34.639050       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 08 21:37:35.822 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-249-110.us-west-1.compute.internal node/ip-10-0-249-110.us-west-1.compute.internal uid/6332bbab-22bf-4ffe-b9fd-0f7e8fd02d87 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 21:37:33.470162       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 21:37:33.474023       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715204253 cert, and key in /tmp/serving-cert-61511805/serving-signer.crt, /tmp/serving-cert-61511805/serving-signer.key\nI0508 21:37:34.182730       1 observer_polling.go:159] Starting file observer\nW0508 21:37:34.195080       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-249-110.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 21:37:34.195221       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 21:37:34.213013       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-61511805/tls.crt::/tmp/serving-cert-61511805/tls.key"\nF0508 21:37:34.639050       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 2 lines not shown
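
The EtcdEndpointsDegraded text above ("create client failure: failed to make etcd client for endpoints ... context deadline exceeded") means the probe could not even open an etcd client connection to the unhealthy member before its deadline. A rough sketch of that kind of per-endpoint probe using the etcd clientv3 API; the endpoint, timeouts, and omitted TLS handling are illustrative, not the operator's code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// probeEndpoint mimics the shape of the check behind the Degraded message:
// build a client for a single endpoint, then ask it for its status.
func probeEndpoint(endpoint string) error {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{endpoint},
		DialTimeout: 2 * time.Second, // illustrative
		// TLS config omitted for brevity; the real 2379 endpoints require client certs.
	})
	if err != nil {
		return fmt.Errorf("create client failure: failed to make etcd client for endpoints [%s]: %w", endpoint, err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	// Against an unreachable member this returns "context deadline exceeded".
	if _, err := cli.Status(ctx, endpoint); err != nil {
		return fmt.Errorf("endpoint %s unhealthy: %w", endpoint, err)
	}
	return nil
}

func main() {
	// In these runs the failing member is the node whose static pods are being
	// rolled; the other two stay healthy, so quorum (2 of 3) holds but is not
	// fault tolerant, which is what the Degraded condition reports.
	if err := probeEndpoint("https://10.0.249.110:2379"); err != nil {
		fmt.Println(err)
	}
}
```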

#1788310936252059648 junit (10 days ago)
May 08 22:27:48.519 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-236-83.us-east-2.compute.internal uid/cc1d45fe-b3f0-4fd2-a15c-78ad7fa18460 container/alertmanager reason/ContainerExit code/1 cause/Error ts=2024-05-08T22:27:44.020Z caller=main.go:231 level=info msg="Starting Alertmanager" version="(version=0.24.0, branch=rhaos-4.12-rhel-8, revision=2742bd3a55bd39fee12a11daa9025fcae839c7dc)"\nts=2024-05-08T22:27:44.020Z caller=main.go:232 level=info build_context="(go=go1.19.13 X:strictfipsruntime, user=root@2e06525f00f2, date=20240425-02:16:31)"\nts=2024-05-08T22:27:44.038Z caller=cluster.go:680 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s\nts=2024-05-08T22:27:44.072Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/config_out/alertmanager.env.yaml\nts=2024-05-08T22:27:44.072Z caller=coordinator.go:118 level=error component=configuration msg="Loading configuration file failed" file=/etc/alertmanager/config_out/alertmanager.env.yaml err="open /etc/alertmanager/config_out/alertmanager.env.yaml: no such file or directory"\nts=2024-05-08T22:27:44.072Z caller=cluster.go:689 level=info component=cluster msg="gossip not settled but continuing anyway" polls=0 elapsed=33.778685ms\n
May 08 22:27:49.704 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-69.us-east-2.compute.internal node/ip-10-0-130-69.us-east-2.compute.internal uid/138ae7e2-507e-4aa6-aa31-f33709e7e5ed container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 22:27:48.021626       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 22:27:48.030464       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715207268 cert, and key in /tmp/serving-cert-271952767/serving-signer.crt, /tmp/serving-cert-271952767/serving-signer.key\nI0508 22:27:48.345325       1 observer_polling.go:159] Starting file observer\nW0508 22:27:48.361874       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-130-69.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 22:27:48.362008       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 22:27:48.370538       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-271952767/tls.crt::/tmp/serving-cert-271952767/tls.key"\nF0508 22:27:48.783747       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 08 22:27:52.805 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-69.us-east-2.compute.internal node/ip-10-0-130-69.us-east-2.compute.internal uid/138ae7e2-507e-4aa6-aa31-f33709e7e5ed container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 22:27:48.021626       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 22:27:48.030464       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715207268 cert, and key in /tmp/serving-cert-271952767/serving-signer.crt, /tmp/serving-cert-271952767/serving-signer.key\nI0508 22:27:48.345325       1 observer_polling.go:159] Starting file observer\nW0508 22:27:48.361874       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-130-69.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 22:27:48.362008       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 22:27:48.370538       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-271952767/tls.crt::/tmp/serving-cert-271952767/tls.key"\nF0508 22:27:48.783747       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown
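
The kube-apiserver-check-endpoints containers in these runs all die the same way: during the static-pod rollout they set up delegated authentication, which requires reading the extension-apiserver-authentication ConfigMap in kube-system through the local apiserver at localhost:6443; while that apiserver is still restarting the GET is refused and the container exits 255, which is why the signature repeats on every control-plane node being rolled. A small client-go sketch of the same lookup, purely illustrative rather than the check-endpoints implementation:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Delegated authentication needs the request-header client CA published in
	// this ConfigMap; if the local apiserver is not serving yet, the GET fails
	// with "connection refused" and startup cannot proceed.
	cm, err := client.CoreV1().ConfigMaps("kube-system").
		Get(ctx, "extension-apiserver-authentication", metav1.GetOptions{})
	if err != nil {
		fmt.Println("error initializing delegating authentication:", err)
		return
	}
	fmt.Println("request-header CA keys found:", len(cm.Data))
}
```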

#1788325402087788544 junit (10 days ago)
May 08 23:25:35.786 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-j6x45 node/ip-10-0-209-174.us-west-2.compute.internal uid/988b01f7-4291-4657-b912-10ae3e8d5e7b container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 08 23:25:40.686 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-209-174.us-west-2.compute.internal node/ip-10-0-209-174.us-west-2.compute.internal uid/0535b69a-d5e3-4252-b033-1c7b5eb13eda container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 23:25:39.630004       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 23:25:39.635738       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715210739 cert, and key in /tmp/serving-cert-969514833/serving-signer.crt, /tmp/serving-cert-969514833/serving-signer.key\nI0508 23:25:40.124620       1 observer_polling.go:159] Starting file observer\nW0508 23:25:40.165324       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-209-174.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 23:25:40.165539       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 23:25:40.184665       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-969514833/tls.crt::/tmp/serving-cert-969514833/tls.key"\nF0508 23:25:40.477172       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 08 23:25:42.725 E ns/e2e-k8s-sig-apps-daemonset-upgrade-6061 pod/ds1-52ltx node/ip-10-0-209-174.us-west-2.compute.internal uid/3942287e-a125-4a5a-aed3-241634cd8f2f container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 3 lines not shown

#1788281582230966272 junit (10 days ago)
May 08 20:13:24.825 E ns/openshift-multus pod/multus-admission-controller-d4wfq node/ip-10-0-191-231.us-west-1.compute.internal uid/08b98cf9-42bc-40dc-a12e-95efaf9b5ad9 container/multus-admission-controller reason/ContainerExit code/137 cause/Error
May 08 20:13:25.645 E ns/openshift-sdn pod/sdn-controller-kpxln node/ip-10-0-212-77.us-west-1.compute.internal uid/40adab86-e7a2-4a03-a653-593e51eaf20b container/sdn-controller reason/ContainerExit code/2 cause/Error I0508 19:09:05.505376       1 server.go:27] Starting HTTP metrics server\nI0508 19:09:05.505736       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0508 19:17:55.280424       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-czdiyvhh-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.245.165:6443: connect: connection refused\nE0508 19:18:32.879538       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-czdiyvhh-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.181.100:6443: connect: connection refused\n
May 08 20:13:30.802 E ns/openshift-console pod/console-6984947f8d-g9ffq node/ip-10-0-191-231.us-west-1.compute.internal uid/874d56b5-6789-4e8b-aa5c-ea9467ea8545 container/console reason/ContainerExit code/2 cause/Error W0508 19:25:17.339379       1 main.go:220] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0508 19:25:17.339406       1 main.go:364] cookies are secure!\nI0508 19:25:17.371914       1 main.go:798] Binding to [::]:8443...\nI0508 19:25:17.371948       1 main.go:800] using TLS\n
#1788281582230966272 junit (10 days ago)
May 08 20:29:57.265 - 1s    E ns/openshift-authentication route/oauth-openshift disruption/ingress-to-oauth-server connection/new reason/DisruptionBegan ns/openshift-authentication route/oauth-openshift disruption/ingress-to-oauth-server connection/new stopped responding to GET requests over new connections: Get "https://oauth-openshift.apps.ci-op-czdiyvhh-c43d5.aws-2.ci.openshift.org/healthz": read tcp 10.130.171.36:40228->52.52.224.64:443: read: connection reset by peer
May 08 20:29:59.375 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-191-231.us-west-1.compute.internal node/ip-10-0-191-231.us-west-1.compute.internal uid/d5598ff7-22e5-4ba2-9e8a-620f60a04809 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 20:29:57.810535       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 20:29:57.822066       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715200197 cert, and key in /tmp/serving-cert-2316024872/serving-signer.crt, /tmp/serving-cert-2316024872/serving-signer.key\nI0508 20:29:58.192111       1 observer_polling.go:159] Starting file observer\nW0508 20:29:58.215882       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-191-231.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 20:29:58.216192       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 20:29:58.226064       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2316024872/tls.crt::/tmp/serving-cert-2316024872/tls.key"\nF0508 20:29:58.692396       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 08 20:30:03.598 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-191-231.us-west-1.compute.internal node/ip-10-0-191-231.us-west-1.compute.internal uid/d5598ff7-22e5-4ba2-9e8a-620f60a04809 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 20:29:57.810535       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 20:29:57.822066       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715200197 cert, and key in /tmp/serving-cert-2316024872/serving-signer.crt, /tmp/serving-cert-2316024872/serving-signer.key\nI0508 20:29:58.192111       1 observer_polling.go:159] Starting file observer\nW0508 20:29:58.215882       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-191-231.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 20:29:58.216192       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 20:29:58.226064       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2316024872/tls.crt::/tmp/serving-cert-2316024872/tls.key"\nF0508 20:29:58.692396       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1788266101478526976 junit (10 days ago)
May 08 19:21:15.023 - 13s   E clusteroperator/etcd condition/Degraded status/True reason/ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:3476794607491973181 name:"ip-10-0-236-175.us-east-2.compute.internal" peerURLs:"https://10.0.236.175:2380" clientURLs:"https://10.0.236.175:2379"  Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.236.175:2379]: context deadline exceeded} {Member:ID:6262960062041590701 name:"ip-10-0-130-152.us-east-2.compute.internal" peerURLs:"https://10.0.130.152:2380" clientURLs:"https://10.0.130.152:2379"  Healthy:true Took:799.204µs Error:<nil>} {Member:ID:13568457898363743403 name:"ip-10-0-137-2.us-east-2.compute.internal" peerURLs:"https://10.0.137.2:2380" clientURLs:"https://10.0.137.2:2379"  Healthy:true Took:891.46µs Error:<nil>}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-236-175.us-east-2.compute.internal is unhealthy
May 08 19:21:19.276 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-236-175.us-east-2.compute.internal node/ip-10-0-236-175.us-east-2.compute.internal uid/b49593fa-61d9-4bf5-a9e7-d9e01dc58e61 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 19:21:17.465128       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 19:21:17.465411       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715196077 cert, and key in /tmp/serving-cert-2253447019/serving-signer.crt, /tmp/serving-cert-2253447019/serving-signer.key\nI0508 19:21:17.887225       1 observer_polling.go:159] Starting file observer\nW0508 19:21:17.906799       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-236-175.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 19:21:17.907084       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 19:21:17.916170       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2253447019/tls.crt::/tmp/serving-cert-2253447019/tls.key"\nF0508 19:21:18.241012       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 08 19:21:22.502 E ns/e2e-k8s-sig-apps-daemonset-upgrade-1454 pod/ds1-5kwrd node/ip-10-0-236-175.us-east-2.compute.internal uid/02638080-0d2f-45ef-a0f0-28d5ed378b7e container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 3 lines not shown

#1788251404884774912 junit (10 days ago)
May 08 18:25:57.071 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-lqlm8 node/ip-10-0-171-158.us-east-2.compute.internal uid/31986ef1-ad6c-4249-ad30-3acb85a385c1 container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 08 18:26:02.816 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-171-158.us-east-2.compute.internal node/ip-10-0-171-158.us-east-2.compute.internal uid/cd26ee14-ae6e-4060-aff3-9a363ee96d86 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 18:26:01.136263       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 18:26:01.146541       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715192761 cert, and key in /tmp/serving-cert-326683240/serving-signer.crt, /tmp/serving-cert-326683240/serving-signer.key\nI0508 18:26:01.854797       1 observer_polling.go:159] Starting file observer\nW0508 18:26:01.867050       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-171-158.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 18:26:01.867174       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 18:26:01.875544       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-326683240/tls.crt::/tmp/serving-cert-326683240/tls.key"\nF0508 18:26:02.040708       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 08 18:26:04.835 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-171-158.us-east-2.compute.internal node/ip-10-0-171-158.us-east-2.compute.internal uid/cd26ee14-ae6e-4060-aff3-9a363ee96d86 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 18:26:01.136263       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 18:26:01.146541       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715192761 cert, and key in /tmp/serving-cert-326683240/serving-signer.crt, /tmp/serving-cert-326683240/serving-signer.key\nI0508 18:26:01.854797       1 observer_polling.go:159] Starting file observer\nW0508 18:26:01.867050       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-171-158.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 18:26:01.867174       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 18:26:01.875544       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-326683240/tls.crt::/tmp/serving-cert-326683240/tls.key"\nF0508 18:26:02.040708       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1788203507615010816 junit (10 days ago)
May 08 15:17:20.629 E ns/openshift-multus pod/network-metrics-daemon-g6kkl node/ip-10-0-133-58.us-west-2.compute.internal uid/16876c12-b586-41db-ae1a-b834883f34c0 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 08 15:17:21.648 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-58.us-west-2.compute.internal node/ip-10-0-133-58.us-west-2.compute.internal uid/9303c22d-3363-4880-93d5-38540150ae9a container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 15:17:19.707466       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 15:17:19.707771       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715181439 cert, and key in /tmp/serving-cert-1375568974/serving-signer.crt, /tmp/serving-cert-1375568974/serving-signer.key\nI0508 15:17:20.342244       1 observer_polling.go:159] Starting file observer\nW0508 15:17:20.381319       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-133-58.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 15:17:20.381457       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 15:17:20.436525       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1375568974/tls.crt::/tmp/serving-cert-1375568974/tls.key"\nF0508 15:17:21.133118       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 08 15:17:22.638 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-58.us-west-2.compute.internal node/ip-10-0-133-58.us-west-2.compute.internal uid/9303c22d-3363-4880-93d5-38540150ae9a container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 15:17:19.707466       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 15:17:19.707771       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715181439 cert, and key in /tmp/serving-cert-1375568974/serving-signer.crt, /tmp/serving-cert-1375568974/serving-signer.key\nI0508 15:17:20.342244       1 observer_polling.go:159] Starting file observer\nW0508 15:17:20.381319       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-133-58.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 15:17:20.381457       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 15:17:20.436525       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1375568974/tls.crt::/tmp/serving-cert-1375568974/tls.key"\nF0508 15:17:21.133118       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 2 lines not shown

#1788042867810242560 junit (10 days ago)
May 08 04:24:40.147 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-g84mw node/ip-10-0-168-202.us-west-1.compute.internal uid/b42ceded-5ec6-441a-821a-ab567b45fe91 container/csi-driver reason/ContainerExit code/2 cause/Error
May 08 04:24:46.929 E ns/openshift-sdn pod/sdn-controller-2d7bd node/ip-10-0-177-28.us-west-1.compute.internal uid/7d3bc0ea-5001-4173-88a3-d4ec5ae8c3a5 container/sdn-controller reason/ContainerExit code/2 cause/Error I0508 03:19:18.867665       1 server.go:27] Starting HTTP metrics server\nI0508 03:19:18.868065       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0508 03:25:59.099935       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-9z5m4xlp-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.148.84:6443: connect: connection refused\nE0508 03:26:34.142358       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-9z5m4xlp-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.148.84:6443: connect: connection refused\nE0508 03:35:01.534434       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-9z5m4xlp-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.148.84:6443: connect: connection refused\n
May 08 04:24:48.682 E ns/openshift-multus pod/multus-additional-cni-plugins-d6k2p node/ip-10-0-149-49.us-west-1.compute.internal uid/4656e219-e391-4620-9307-714b39b38752 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
#1788042867810242560 junit (10 days ago)
May 08 04:41:31.653 - 9s    E clusteroperator/etcd condition/Degraded status/True reason/ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:5497887020690436657 name:"ip-10-0-202-227.us-west-1.compute.internal" peerURLs:"https://10.0.202.227:2380" clientURLs:"https://10.0.202.227:2379"  Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.202.227:2379]: context deadline exceeded} {Member:ID:16219458650570458768 name:"ip-10-0-144-101.us-west-1.compute.internal" peerURLs:"https://10.0.144.101:2380" clientURLs:"https://10.0.144.101:2379"  Healthy:true Took:839.276µs Error:<nil>} {Member:ID:17186637752312173887 name:"ip-10-0-177-28.us-west-1.compute.internal" peerURLs:"https://10.0.177.28:2380" clientURLs:"https://10.0.177.28:2379"  Healthy:true Took:792.465µs Error:<nil>}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-202-227.us-west-1.compute.internal is unhealthy
May 08 04:41:31.842 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-202-227.us-west-1.compute.internal node/ip-10-0-202-227.us-west-1.compute.internal uid/be814f7c-fe6b-47e1-b9c6-cf8684c6afb0 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 04:41:29.890117       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 04:41:29.897289       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715143289 cert, and key in /tmp/serving-cert-1573107049/serving-signer.crt, /tmp/serving-cert-1573107049/serving-signer.key\nI0508 04:41:30.523151       1 observer_polling.go:159] Starting file observer\nW0508 04:41:30.546665       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-202-227.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 04:41:30.546784       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 04:41:30.561423       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1573107049/tls.crt::/tmp/serving-cert-1573107049/tls.key"\nF0508 04:41:30.956275       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 08 04:41:33.887 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-202-227.us-west-1.compute.internal node/ip-10-0-202-227.us-west-1.compute.internal uid/be814f7c-fe6b-47e1-b9c6-cf8684c6afb0 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 04:41:29.890117       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 04:41:29.897289       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715143289 cert, and key in /tmp/serving-cert-1573107049/serving-signer.crt, /tmp/serving-cert-1573107049/serving-signer.key\nI0508 04:41:30.523151       1 observer_polling.go:159] Starting file observer\nW0508 04:41:30.546665       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-202-227.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 04:41:30.546784       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 04:41:30.561423       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1573107049/tls.crt::/tmp/serving-cert-1573107049/tls.key"\nF0508 04:41:30.956275       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1788189025413107712 junit (10 days ago)
May 08 14:04:04.577 E ns/openshift-console pod/console-68dd6c7bc-s7wc8 node/ip-10-0-132-229.us-west-2.compute.internal uid/2b718c61-b96d-43ef-aa7f-6f8753fa39e2 container/console reason/ContainerExit code/2 cause/Error W0508 13:17:23.235723       1 main.go:220] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0508 13:17:23.235747       1 main.go:364] cookies are secure!\nI0508 13:17:23.277076       1 main.go:798] Binding to [::]:8443...\nI0508 13:17:23.277112       1 main.go:800] using TLS\n
May 08 14:04:07.648 E ns/openshift-sdn pod/sdn-controller-b49nd node/ip-10-0-167-5.us-west-2.compute.internal uid/a825cfe2-899f-47e1-b48a-db4388e22d3b container/sdn-controller reason/ContainerExit code/2 cause/Error I0508 13:02:10.333225       1 server.go:27] Starting HTTP metrics server\nI0508 13:02:10.333356       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0508 13:10:22.737412       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-yi7rs39c-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.251.163:6443: connect: connection refused\nE0508 13:10:59.174532       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-yi7rs39c-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.173.201:6443: connect: connection refused\nE0508 13:11:33.760630       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-yi7rs39c-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.251.163:6443: connect: connection refused\nE0508 13:13:36.866233       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-yi7rs39c-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.251.163:6443: connect: connection refused\n
May 08 14:04:13.257 E ns/openshift-console pod/console-68dd6c7bc-qg6br node/ip-10-0-229-201.us-west-2.compute.internal uid/ddcb4603-4042-42a5-985f-51bdd5456664 container/console reason/ContainerExit code/2 cause/Error W0508 13:16:57.873029       1 main.go:220] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0508 13:16:57.873057       1 main.go:364] cookies are secure!\nI0508 13:16:57.910473       1 main.go:798] Binding to [::]:8443...\nI0508 13:16:57.910518       1 main.go:800] using TLS\n
#1788189025413107712 junit (10 days ago)
May 08 14:04:15.685 E ns/openshift-network-diagnostics pod/network-check-target-pnxbs node/ip-10-0-132-229.us-west-2.compute.internal uid/952383dd-c9bc-4e53-8663-3803af985b7f container/network-check-target-container reason/ContainerExit code/2 cause/Error
May 08 14:04:19.326 E ns/openshift-sdn pod/sdn-controller-8qscd node/ip-10-0-229-201.us-west-2.compute.internal uid/00563fed-7725-4ab6-b6c8-ae4563d87733 container/sdn-controller reason/ContainerExit code/2 cause/Error I0508 13:02:10.472270       1 server.go:27] Starting HTTP metrics server\nI0508 13:02:10.472837       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0508 13:10:50.539345       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-yi7rs39c-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.251.163:6443: connect: connection refused\nE0508 13:11:24.022764       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-yi7rs39c-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.173.201:6443: connect: connection refused\n
May 08 14:04:28.833 E ns/openshift-sdn pod/sdn-nvdx5 node/ip-10-0-175-82.us-west-2.compute.internal uid/bdfc4d78-8e15-4d22-b58a-c5f60f3ead87 container/kube-rbac-proxy reason/ContainerExit code/137 cause/ContainerStatusUnknown The container could not be located when the pod was deleted.  The container used to be Running
#1788072500387647488 junit (10 days ago)
May 08 06:34:44.647 E ns/openshift-network-diagnostics pod/network-check-target-7kv98 node/ip-10-0-172-19.us-west-2.compute.internal uid/43bd624e-7a1d-45d3-b4a6-e0e71f4a0ffb container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 08 06:34:45.586 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-19.us-west-2.compute.internal node/ip-10-0-172-19.us-west-2.compute.internal uid/28bba5ed-0b9e-4076-b869-3e2fc13e2d45 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 06:34:44.559714       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 06:34:44.568667       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715150084 cert, and key in /tmp/serving-cert-1223259842/serving-signer.crt, /tmp/serving-cert-1223259842/serving-signer.key\nI0508 06:34:45.219093       1 observer_polling.go:159] Starting file observer\nW0508 06:34:45.230242       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-172-19.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 06:34:45.230360       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 06:34:45.236573       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1223259842/tls.crt::/tmp/serving-cert-1223259842/tls.key"\nF0508 06:34:45.421917       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 08 06:34:52.619 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-19.us-west-2.compute.internal node/ip-10-0-172-19.us-west-2.compute.internal uid/28bba5ed-0b9e-4076-b869-3e2fc13e2d45 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 06:34:44.559714       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 06:34:44.568667       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715150084 cert, and key in /tmp/serving-cert-1223259842/serving-signer.crt, /tmp/serving-cert-1223259842/serving-signer.key\nI0508 06:34:45.219093       1 observer_polling.go:159] Starting file observer\nW0508 06:34:45.230242       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-172-19.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 06:34:45.230360       1 builder.go:271] check-endpoints version 4.12.0-202405070915.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0508 06:34:45.236573       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1223259842/tls.crt::/tmp/serving-cert-1223259842/tls.key"\nF0508 06:34:45.421917       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1787229366690779136 junit (13 days ago)
May 05 22:29:50.327 E ns/openshift-multus pod/multus-additional-cni-plugins-vpfxz node/ip-10-0-131-145.us-east-2.compute.internal uid/892d9cf6-0623-48ed-be51-9e7c38f2805f container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 05 22:30:05.827 E ns/openshift-sdn pod/sdn-controller-qqtht node/ip-10-0-238-243.us-east-2.compute.internal uid/1d712260-abf2-43de-b778-9eac90b6fd2a container/sdn-controller reason/ContainerExit code/2 cause/Error I0505 21:26:13.647292       1 server.go:27] Starting HTTP metrics server\nI0505 21:26:13.647421       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0505 21:33:56.446751       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0505 21:35:31.355034       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0505 21:36:36.634411       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-fbirdrlx-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.252.32:6443: connect: connection refused\n
May 05 22:30:06.040 E ns/openshift-console pod/console-757565c74-hm6ms node/ip-10-0-134-196.us-east-2.compute.internal uid/0c91a4ee-2188-4f8c-873b-75bc014dcdd4 container/console reason/ContainerExit code/2 cause/Error W0505 21:41:21.993277       1 main.go:220] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0505 21:41:21.993299       1 main.go:364] cookies are secure!\nI0505 21:41:22.031965       1 main.go:798] Binding to [::]:8443...\nI0505 21:41:22.032008       1 main.go:800] using TLS\n
#1787229366690779136 junit (13 days ago)
May 05 22:30:14.828 E ns/openshift-network-diagnostics pod/network-check-target-p9cmv node/ip-10-0-203-75.us-east-2.compute.internal uid/500337a9-8683-4027-8591-fdb276bf29b0 container/network-check-target-container reason/ContainerExit code/2 cause/Error
May 05 22:30:16.121 E ns/openshift-sdn pod/sdn-controller-wpng7 node/ip-10-0-134-196.us-east-2.compute.internal uid/ce2b1957-59ac-476c-b04a-bcd99847be01 container/sdn-controller reason/ContainerExit code/2 cause/Error I0505 21:34:50.176511       1 server.go:27] Starting HTTP metrics server\nI0505 21:34:50.176681       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0505 21:34:50.179526       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-fbirdrlx-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.252.32:6443: connect: connection refused\nE0505 21:35:26.524877       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-fbirdrlx-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.158.145:6443: connect: connection refused\nE0505 21:36:19.074021       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-fbirdrlx-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.158.145:6443: connect: connection refused\nE0505 21:37:05.717415       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-fbirdrlx-c43d5.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.158.145:6443: connect: connection refused\n
May 05 22:30:16.121 E ns/openshift-sdn pod/sdn-controller-wpng7 node/ip-10-0-134-196.us-east-2.compute.internal uid/ce2b1957-59ac-476c-b04a-bcd99847be01 container/sdn-controller reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

Found in 92.31% of runs (320.00% of failures) across 52 total runs and 1 job (28.85% failed).