Job:
#OCPBUGS-32375 issue 10 days ago Unsuccessful cluster installation with 4.15 nightlies on s390x using ABI CLOSED
Issue 15945005: Unsuccessful cluster installation with 4.15 nightlies on s390x using ABI
Description: When using the latest s390x release builds from the 4.15 nightly stream for an Agent-Based Installation of SNO on IBM Z KVM, the installation fails at the end while waiting for cluster operators, even though the DNS and HAProxy configurations are correct; the same setup works with the 4.15.x stable release image builds.
 
 Below is the error encountered multiple times when the "release:s390x-latest" image is used while booting the cluster. This image is supplied at boot time through OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE, while the openshift-install binary is fetched from the latest stable builds here: [https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/latest/], for which the version is around 4.15.x.
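 For context, a minimal sketch of how this combination is typically driven on the KVM host (the asset directory and command sequence are illustrative; the override variable, release pullspec, and mirror URL are the ones quoted in this report):
 {code:bash}
 # Fetch the stable s390x openshift-install binary (~4.15.x) from the mirror above.
 curl -LO https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/latest/openshift-install-linux.tar.gz
 tar -xzf openshift-install-linux.tar.gz openshift-install
 
 # Point the stable binary at the nightly release payload instead of its pinned one.
 export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=registry.build01.ci.openshift.org/ci-op-cdkdqnqn/release@sha256:c6eb4affa5c44d2ad220d7064e92270a30df5f26d221e35664f4d5547a835617
 
 # Generate the agent ISO from install-config.yaml/agent-config.yaml, boot the SNO
 # KVM guest from it, then wait for completion (the step that fails in the log below).
 ./openshift-install agent create image --dir /root/agent-sno --log-level debug
 ./openshift-install wait-for install-complete --dir /root/agent-sno --log-level debug
 {code}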
 
 *release-image:*
 {code:java}
 registry.build01.ci.openshift.org/ci-op-cdkdqnqn/release@sha256:c6eb4affa5c44d2ad220d7064e92270a30df5f26d221e35664f4d5547a835617
 {code}
 
 *PROW CI Build :* [https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_release/47965/rehearse-47965-periodic-ci-openshift-multiarch-master-nightly-4.15-e2e-agent-ibmz-sno/1780162365824700416] 
 
 *Error:* 
 {code:java}
 '/root/agent-sno/openshift-install wait-for install-complete --dir /root/agent-sno/ --log-level debug'
 Warning: Permanently added '128.168.142.71' (ED25519) to the list of known hosts.
 level=debug msg=OpenShift Installer 4.15.8
 level=debug msg=Built from commit f4f5d0ee0f7591fd9ddf03ac337c804608102919
 level=debug msg=Loading Install Config...
 level=debug msg=  Loading SSH Key...
 level=debug msg=  Loading Base Domain...
 level=debug msg=    Loading Platform...
 level=debug msg=  Loading Cluster Name...
 level=debug msg=    Loading Base Domain...
 level=debug msg=    Loading Platform...
 level=debug msg=  Loading Pull Secret...
 level=debug msg=  Loading Platform...
 level=debug msg=Loading Agent Config...
 level=debug msg=Using Agent Config loaded from state file
 level=warning msg=An agent configuration was detected but this command is not the agent wait-for command
 level=info msg=Waiting up to 40m0s (until 10:15AM UTC) for the cluster at https://api.agent-sno.abi-ci.com:6443 to initialize...
 W0416 09:35:51.793770    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:35:51.793827    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:35:53.127917    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:35:53.127946    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 [... the same reflector list/watch failure ("connect: connection refused" to 10.244.64.4:6443) repeats with backoff until the timeout; entries between 09:35:54 and 10:14:25 UTC omitted for brevity ...]
 W0416 10:15:17.227351    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:15:17.227424    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 level=error msg=Attempted to gather ClusterOperator status after wait failure: listing ClusterOperator objects: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 10.244.64.4:6443: connect: connection refused
 level=error msg=Cluster initialization failed because one or more operators are not functioning properly.
 level=error msg=The cluster should be accessible for troubleshooting as detailed in the documentation linked below,
 level=error msg=https://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html
 level=error msg=The 'wait-for install-complete' subcommand can then be used to continue the installation
 level=error msg=failed to initialize the cluster: timed out waiting for the condition
 {"component":"entrypoint","error":"wrapped process failed: exit status 6","file":"k8s.io/test-infra/prow/entrypoint/run.go:84","func":"k8s.io/test-infra/prow/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-04-16T10:15:51Z"}
 error: failed to execute wrapped command: exit status 6 {code}
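 Since the failure above is a plain "connection refused" on 6443, a hedged first-pass triage (hostname and node IP taken from the log; these checks are not part of the CI job) would be:
 {code:bash}
 # Does the API hostname still resolve to the expected address (10.244.64.4 in the log)?
 dig +short api.agent-sno.abi-ci.com
 
 # "connection refused" usually means nothing is listening (kube-apiserver down or the
 # HAProxy backend empty), not a DNS failure; probe the endpoint directly.
 curl -k --connect-timeout 5 https://api.agent-sno.abi-ci.com:6443/readyz || echo "API not up"
 
 # On the SNO node itself, check whether the kube-apiserver container ever came up.
 ssh core@128.168.142.71 'sudo crictl ps -a --name kube-apiserver'
 {code}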
Status: CLOSED
#OCPBUGS-32517 issue 40 hours ago Missing worker nodes on metal Verified
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[12603]: Unpause all baremetal hosts
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[18264]: E0422 05:33:53.630867   18264 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[18264]: E0422 05:33:53.631351   18264 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused

... 4 lines not shown

#OCPBUGS-31763 issue 10 days ago gcp install cluster creation fails after 30-40 minutes New
Issue 15921939: gcp install cluster creation fails after 30-40 minutes
Description: Component Readiness has found a potential regression in the test "install should succeed: overall". I see this on various platforms, but I started digging into the GCP failures. No installer log bundle is created, which seriously hinders my ability to dig further.
 
 Bootstrap succeeds, and then, after 30 minutes of waiting for cluster creation, it dies.
 
 From [https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-gcp-sdn-serial/1775871000018161664]
 
 search.ci tells me this affects nearly 10% of jobs on GCP:
 
 [https://search.dptools.openshift.org/?search=Attempted+to+gather+ClusterOperator+status+after+installation+failure%3A+listing+ClusterOperator+objects.*connection+refused&maxAge=168h&context=1&type=bug%2Bissue%2Bjunit&name=.*4.16.*gcp.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job]
 
  
 {code:java}
 time="2024-04-04T13:27:50Z" level=info msg="Waiting up to 40m0s (until 2:07PM UTC) for the cluster at https://api.ci-op-n3pv5pn3-4e5f3.XXXXXXXXXXXXXXXXXXXXXX:6443 to initialize..."
 time="2024-04-04T14:07:50Z" level=error msg="Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get \"https://api.ci-op-n3pv5pn3-4e5f3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/config.openshift.io/v1/clusteroperators\": dial tcp 35.238.130.20:6443: connect: connection refused"
 time="2024-04-04T14:07:50Z" level=error msg="Cluster initialization failed because one or more operators are not functioning properly.\nThe cluster should be accessible for troubleshooting as detailed in the documentation linked below,\nhttps://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html\nThe 'wait-for install-complete' subcommand can then be used to continue the installation"
 time="2024-04-04T14:07:50Z" level=error msg="failed to initialize the cluster: timed out waiting for the condition" {code}
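 When reproducing outside CI, a hedged way to still get diagnostics after the installer gives up (the IPs are placeholders; the gather subcommand is the generic one, not something this job ran):
 {code:bash}
 # If the bootstrap host still exists (it is normally destroyed once bootstrapping
 # completes), an explicit gather writes log-bundle-<timestamp>.tar.gz into the asset dir.
 openshift-install gather bootstrap --dir ./assets \
   --bootstrap <BOOTSTRAP_IP> --master <MASTER_IP>
 
 # Otherwise pull what the control plane exposes; with 6443 refusing connections, as in
 # the log above, even this fails and node-level logs/serial consoles are the next stop.
 oc --kubeconfig ./assets/auth/kubeconfig get clusteroperators
 oc --kubeconfig ./assets/auth/kubeconfig adm must-gather
 {code}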
  
 
 Probability of significant regression: 99.44%
 
 Sample (being evaluated) Release: 4.16
 Start Time: 2024-03-29T00:00:00Z
 End Time: 2024-04-04T23:59:59Z
 Success Rate: 68.75%
 Successes: 11
 Failures: 5
 Flakes: 0
 
 Base (historical) Release: 4.15
 Start Time: 2024-02-01T00:00:00Z
 End Time: 2024-02-28T23:59:59Z
 Success Rate: 96.30%
 Successes: 52
 Failures: 2
 Flakes: 0
 
 View the test details report at [https://sippy.dptools.openshift.org/sippy-ng/component_readiness/test_details?arch=amd64&arch=amd64&baseEndTime=2024-02-28%2023%3A59%3A59&baseRelease=4.15&baseStartTime=2024-02-01%2000%3A00%3A00&capability=Other&component=Installer%20%2F%20openshift-installer&confidence=95&environment=sdn%20upgrade-micro%20amd64%20gcp%20standard&excludeArches=arm64%2Cheterogeneous%2Cppc64le%2Cs390x&excludeClouds=openstack%2Cibmcloud%2Clibvirt%2Covirt%2Cunknown&excludeVariants=hypershift%2Cosd%2Cmicroshift%2Ctechpreview%2Csingle-node%2Cassisted%2Ccompact&groupBy=cloud%2Carch%2Cnetwork&ignoreDisruption=true&ignoreMissing=false&minFail=3&network=sdn&network=sdn&pity=5&platform=gcp&platform=gcp&sampleEndTime=2024-04-04%2023%3A59%3A59&sampleRelease=4.16&sampleStartTime=2024-03-29%2000%3A00%3A00&testId=cluster%20install%3A0cb1bb27e418491b1ffdacab58c5c8c0&testName=install%20should%20succeed%3A%20overall&upgrade=upgrade-micro&upgrade=upgrade-micro&variant=standard&variant=standard]
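 For clarity, the rates above are successes divided by (successes + failures); flakes are zero in both windows, so their handling does not change the numbers:
 {code:bash}
 # Sample (4.16):  11 / (11 + 5) = 68.75%
 # Base   (4.15):  52 / (52 + 2) = 96.30% (rounded)
 awk 'BEGIN { printf "sample: %.2f%%  base: %.2f%%\n", 100*11/(11+5), 100*52/(52+2) }'
 {code}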
Status: New
#OCPBUGS-27755 issue 9 days ago openshift-kube-apiserver down and is not being restarted New
Issue 15736514: openshift-kube-apiserver down and is not being restarted
Description: Description of problem:
 {code:none}
 SNO cluster; this is the second time the issue has happened.
 
 Errors like the following are reported:
 
 ~~~
 failed to fetch token: Post "https://api-int.<cluster>:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp <ip>:6443: connect: connection refused
 ~~~
 
 Checking the pod logs, the kube-apiserver pod is terminated and is not being restarted:
 
 ~~~
 2024-01-13T09:41:40.931716166Z I0113 09:41:40.931584       1 main.go:213] Received signal terminated. Forwarding to sub-process "hyperkube".
 ~~~{code}
 Version-Release number of selected component (if applicable):
 {code:none}
    4.13.13 {code}
 How reproducible:
 {code:none}
     Not reproducible but has happened twice{code}
 Steps to Reproduce:
 {code:none}
     1.
     2.
     3.
     {code}
 Actual results:
 {code:none}
     API is not available and kube-apiserver is not being restarted{code}
 Expected results:
 {code:none}
     We would expect to see kube-apiserver restarts{code}
 Additional info:
 {code:none}
    {code}
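 A hedged first-pass check from the node for this state (generic SNO static-pod debugging; these commands are not taken from the case data):
 {code:bash}
 # Was the kube-apiserver static-pod container left in an exited state?
 sudo crictl ps -a --name kube-apiserver
 
 # Did kubelet try (and fail) to restart it after the "Received signal terminated" message?
 sudo journalctl -u kubelet --since "2 hours ago" | grep -i kube-apiserver | tail -n 50
 
 # The static-pod manifest should still exist; if it does, kubelet is expected to restart the pod.
 ls -l /etc/kubernetes/manifests/kube-apiserver-pod.yaml
 {code}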
Status: New
#OCPBUGS-33157 issue 40 hours ago IPv6 metal-ipi jobs: master-bmh-update losing access to API Verified
Issue 15978085: IPv6 metal-ipi jobs: master-bmh-update losing access to API
Description: The last 4 IPv6 jobs are failing with the same error
 
 https://prow.ci.openshift.org/job-history/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-ipv6
 master-bmh-update.log shows the script losing access to the API when trying to get/update the BMH details
 
 https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-ipv6/1785492737169035264
 
 
 
 {noformat}
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[4663]: Waiting for 3 masters to become provisioned
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.531242   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.531808   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.533281   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.533630   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.535180   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: The connection to the server api-int.ostest.test.metalkube.org:6443 was refused - did you specify the right host or port?
 {noformat}
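 A hedged way to confirm from the provisioning host whether the IPv6 API VIP is actually unreachable (the address is copied from the log; these probes are not part of the job script):
 {code:bash}
 # Probe the VIP directly; -g stops curl from treating the IPv6 brackets as a glob.
 curl -gks --connect-timeout 5 "https://[fd2e:6f44:5dd8:c956::5]:6443/readyz" \
   || echo "API VIP not answering"
 
 # If the VIP answers, inspect the BareMetalHost objects the script is trying to update.
 oc get bmh -n openshift-machine-api -o wide
 {code}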
Status: Verified
{noformat}
May 01 02:49:40 localhost.localdomain master-bmh-update.sh[12448]: E0501 02:49:40.429468   12448 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
{noformat}
#OCPBUGS-17183 issue 2 days ago [BUG] Assisted installer fails to create bond with active backup for single node installation New
Issue 15401516: [BUG] Assisted installer fails to create bond with active backup for single node installation
Description: Description of problem:
 {code:none}
 The assisted installer always fails to create a bond in active-backup mode using the nmstate YAML, and the errors are: 
 
 ~~~ 
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Unable to reach API_URL's https endpoint at https://xx.xx.32.40:6443/version
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Checking validity of <hostname> of type API_INT_URL 
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Successfully resolved API_INT_URL <hostname> 
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Unable to reach API_INT_URL's https endpoint at https://xx.xx.32.40:6443/version
 Jul 26 07:12:23 <hostname> bootkube.sh[12960]: Still waiting for the Kubernetes API: Get "https://localhost:6443/readyz": dial tcp [::1]:6443: connect: connection refused
 Jul 26 07:15:15 <hostname> bootkube.sh[15706]: The connection to the server <hostname>:6443 was refused - did you specify the right host or port? 
 Jul 26 07:15:15 <hostname> bootkube.sh[15706]: The connection to the server <hostname>:6443 was refused - did you specify the right host or port? 
  ~~~ 
 
 Where <hostname> is the actual hostname of the node. 
 
 Adding the sosreport and nmstate YAML file here: https://drive.google.com/drive/u/0/folders/19dNzKUPIMmnUls2pT_stuJxr2Dxdi5eb{code}
 Version-Release number of selected component (if applicable):
 {code:none}
 4.12 
 Dell 16g Poweredge R660{code}
 How reproducible:
 {code:none}
 Always at customer side{code}
 Steps to Reproduce:
 {code:none}
 1. Open Assisted installer UI (console.redhat.com -> assisted installer) 
 2. Add the network configs as below for host1  
 
 -----------
 interfaces:
 - name: bond99
   type: bond
   state: up
   ipv4:
     address:
     - ip: xx.xx.32.40
       prefix-length: 24
     enabled: true
   link-aggregation:
     mode: active-backup
     options:
       miimon: '140'
     port:
     - eno12399
     - eno12409
 dns-resolver:
   config:
     search:
     - xxxx
     server:
     - xx.xx.xx.xx
 routes:
   config:
     - destination: 0.0.0.0/0
       metric: 150
       next-hop-address: xx.xx.xx.xx
       next-hop-interface: bond99
       table-id: 254    
 -----------
 
 3. Enter the MAC addresses of the interfaces in the fields. 
 4. Generate the ISO and boot the node. The node cannot be reached via ping or SSH. This happens every time and is reproducible.
 5. Because SSH was not working, there was no way to check what was happening on the node; after resetting the root password we could see that the IP address was present on the bond, yet ping/SSH still did not work (see the verification sketch after this block).
 6. After multiple reboots the customer was able to ping/SSH into the node and provided a sosreport; the above-mentioned errors appear in its journal logs.  
  {code}
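 The following is a verification sketch (not from the original report), assuming the bond99, eno12399, and eno12409 names from the nmstate YAML above; it confirms the bond mode, active slave, and address once console or SSH access to the node is available.
 {code:none}
 nmcli device status              # both ports should appear as slaves of bond99
 cat /proc/net/bonding/bond99     # expect "Bonding Mode: fault-tolerance (active-backup)" and a current active slave
 ip -br addr show bond99          # the static xx.xx.32.40/24 address should be present
 nmstatectl show bond99           # compare the applied state with the requested YAML
 {code}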
 Actual results:
 {code:none}
 Installation fails; there appears to be an issue with networking.{code}
 Expected results:
 {code:none}
 Installation proceeds without the above-mentioned issues.{code}
 Additional info:
 {code:none}
 - The installation works with round-robin bond mode in 4.12. 
 - The installation also works with active-backup in 4.10. 
 - An active-backup bond with 4.12 fails.{code}
Status: New
#OCPBUGS-30631issue2 weeks agoSNO (RT kernel) sosreport crash the SNO node CLOSED
Issue 15865131: SNO (RT kernel) sosreport crash the SNO node
Description: Description of problem:
 {code:none}
 sosreport collection causes the SNO XR11 node to crash.
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 - RHOCP    : 4.12.30
 - kernel   : 4.18.0-372.69.1.rt7.227.el8_6.x86_64
 - platform : x86_64{code}
 How reproducible:
 {code:none}
 sh-4.4# chrt -rr 99 toolbox
 .toolboxrc file detected, overriding defaults...
 Checking if there is a newer version of ocpdalmirror.xxx.yyy:8443/rhel8/support-tools-zzz-feb available...
 Container 'toolbox-root' already exists. Trying to start...
 (To remove the container and start with a fresh toolbox, run: sudo podman rm 'toolbox-root')
 toolbox-root
 Container started successfully. To exit, type 'exit'.
 [root@node /]# which sos
 /usr/sbin/sos
 logger: socket /dev/log: No such file or directory
 [root@node /]# taskset -c 29-31,61-63 sos report --batch -n networking,kernel,processor -k crio.all=on -k crio.logs=on -k podman.all=on -kpodman.logs=on
 
 sosreport (version 4.5.6)
 
 This command will collect diagnostic and configuration information from
 this Red Hat CoreOS system.
 
 An archive containing the collected information will be generated in
 /host/var/tmp/sos.c09e4f7z and may be provided to a Red Hat support
 representative.
 
 Any information provided to Red Hat will be treated in accordance with
 the published support policies at:
 
         Distribution Website : https://www.redhat.com/
         Commercial Support   : https://access.redhat.com/
 
 The generated archive may contain data considered sensitive and its
 content should be reviewed by the originating organization before being
 passed to any third party.
 
 No changes will be made to system configuration.
 
 
  Setting up archive ...
  Setting up plugins ...
 [plugin:auditd] Could not open conf file /etc/audit/auditd.conf: [Errno 2] No such file or directory: '/etc/audit/auditd.conf'
 caught exception in plugin method "system.setup()"
 writing traceback to sos_logs/system-plugin-errors.txt
 [plugin:systemd] skipped command 'resolvectl status': required services missing: systemd-resolved.
 [plugin:systemd] skipped command 'resolvectl statistics': required services missing: systemd-resolved.
  Running plugins. Please wait ...
 
   Starting 1/91  alternatives    [Running: alternatives]
   Starting 2/91  atomichost      [Running: alternatives atomichost]
   Starting 3/91  auditd          [Running: alternatives atomichost auditd]
   Starting 4/91  block           [Running: alternatives atomichost auditd block]
   Starting 5/91  boot            [Running: alternatives auditd block boot]
   Starting 6/91  cgroups         [Running: auditd block boot cgroups]
   Starting 7/91  chrony          [Running: auditd block cgroups chrony]
   Starting 8/91  cifs            [Running: auditd block cgroups cifs]
   Starting 9/91  conntrack       [Running: auditd block cgroups conntrack]
   Starting 10/91 console         [Running: block cgroups conntrack console]
   Starting 11/91 container_log   [Running: block cgroups conntrack container_log]
   Starting 12/91 containers_common [Running: block cgroups conntrack containers_common]
   Starting 13/91 crio            [Running: block cgroups conntrack crio]
   Starting 14/91 crypto          [Running: cgroups conntrack crio crypto]
   Starting 15/91 date            [Running: cgroups conntrack crio date]
   Starting 16/91 dbus            [Running: cgroups conntrack crio dbus]
   Starting 17/91 devicemapper    [Running: cgroups conntrack crio devicemapper]
   Starting 18/91 devices         [Running: cgroups conntrack crio devices]
   Starting 19/91 dracut          [Running: cgroups conntrack crio dracut]
   Starting 20/91 ebpf            [Running: cgroups conntrack crio ebpf]
   Starting 21/91 etcd            [Running: cgroups crio ebpf etcd]
   Starting 22/91 filesys         [Running: cgroups crio ebpf filesys]
   Starting 23/91 firewall_tables [Running: cgroups crio filesys firewall_tables]
   Starting 24/91 fwupd           [Running: cgroups crio filesys fwupd]
   Starting 25/91 gluster         [Running: cgroups crio filesys gluster]
   Starting 26/91 grub2           [Running: cgroups crio filesys grub2]
   Starting 27/91 gssproxy        [Running: cgroups crio grub2 gssproxy]
   Starting 28/91 hardware        [Running: cgroups crio grub2 hardware]
   Starting 29/91 host            [Running: cgroups crio hardware host]
   Starting 30/91 hts             [Running: cgroups crio hardware hts]
   Starting 31/91 i18n            [Running: cgroups crio hardware i18n]
   Starting 32/91 iscsi           [Running: cgroups crio hardware iscsi]
   Starting 33/91 jars            [Running: cgroups crio hardware jars]
   Starting 34/91 kdump           [Running: cgroups crio hardware kdump]
   Starting 35/91 kernelrt        [Running: cgroups crio hardware kernelrt]
   Starting 36/91 keyutils        [Running: cgroups crio hardware keyutils]
   Starting 37/91 krb5            [Running: cgroups crio hardware krb5]
   Starting 38/91 kvm             [Running: cgroups crio hardware kvm]
   Starting 39/91 ldap            [Running: cgroups crio kvm ldap]
   Starting 40/91 libraries       [Running: cgroups crio kvm libraries]
   Starting 41/91 libvirt         [Running: cgroups crio kvm libvirt]
   Starting 42/91 login           [Running: cgroups crio kvm login]
   Starting 43/91 logrotate       [Running: cgroups crio kvm logrotate]
   Starting 44/91 logs            [Running: cgroups crio kvm logs]
   Starting 45/91 lvm2            [Running: cgroups crio logs lvm2]
   Starting 46/91 md              [Running: cgroups crio logs md]
   Starting 47/91 memory          [Running: cgroups crio logs memory]
   Starting 48/91 microshift_ovn  [Running: cgroups crio logs microshift_ovn]
   Starting 49/91 multipath       [Running: cgroups crio logs multipath]
   Starting 50/91 networkmanager  [Running: cgroups crio logs networkmanager]
 
 Removing debug pod ...
 error: unable to delete the debug pod "ransno1ransnomavdallabcom-debug": Delete "https://api.ransno.mavdallab.com:6443/api/v1/namespaces/openshift-debug-mt82m/pods/ransno1ransnomavdallabcom-debug": dial tcp 10.71.136.144:6443: connect: connection refused
 {code}
 Steps to Reproduce:
 {code:none}
 Launch a debug pod, run the procedure above, and the node crashes (a consolidated sketch follows).{code}
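 Consolidated sketch of the reproduction path (the node name is a placeholder; the toolbox/sos invocation is the one shown under "How reproducible"):
 {code:none}
 oc debug node/<sno-node>     # debug pod is removed automatically on exit
 chroot /host
 chrt -rr 99 toolbox
 taskset -c 29-31,61-63 sos report --batch -n networking,kernel,processor \
     -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on
 {code}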
 Actual results:
 {code:none}
 Node crash{code}
 Expected results:
 {code:none}
 Node does not crash{code}
 Additional info:
 {code:none}
 We have two vmcores on the associated SFDC ticket.
 This system uses an RT kernel.
 It is using an out-of-tree ice driver 1.13.7 (probably from 22 Dec 2023).
 
 [  103.681608] ice: module unloaded
 [  103.830535] ice: loading out-of-tree module taints kernel.
 [  103.831106] ice: module verification failed: signature and/or required key missing - tainting kernel
 [  103.841005] ice: Intel(R) Ethernet Connection E800 Series Linux Driver - version 1.13.7
 [  103.841017] ice: Copyright (C) 2018-2023 Intel Corporation
 
 
 With the following kernel command line 
 
 Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/vmlinuz-4.18.0-372.69.1.rt7.227.el8_6.x86_64 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/0 root=UUID=3e8bda80-5cf4-4c46-b139-4c84cb006354 rw rootflags=prjquota boot=UUID=1d0512c2-3f92-42c5-b26d-709ff9350b81 intel_iommu=on iommu=pt firmware_class.path=/var/lib/firmware skew_tick=1 nohz=on rcu_nocbs=3-31,35-63 tuned.non_isolcpus=00000007,00000007 systemd.cpu_affinity=0,1,2,32,33,34 intel_iommu=on iommu=pt isolcpus=managed_irq,3-31,35-63 nohz_full=3-31,35-63 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off rcutree.kthread_prio=11 default_hugepagesz=1G rcupdate.rcu_normal_after_boot=0 efi=runtime module_blacklist=irdma intel_pstate=passive intel_idle.max_cstate=0 crashkernel=256M
 
 
 
 vmcore1 shows an issue with the ice driver:
 
 crash vmcore tmp/vmlinux
 
 
       KERNEL: tmp/vmlinux  [TAINTED]
     DUMPFILE: vmcore  [PARTIAL DUMP]
         CPUS: 64
         DATE: Thu Mar  7 17:16:57 CET 2024
       UPTIME: 02:44:28
 LOAD AVERAGE: 24.97, 25.47, 25.46
        TASKS: 5324
     NODENAME: aaa.bbb.ccc
      RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64
      VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023
      MACHINE: x86_64  (1500 Mhz)
       MEMORY: 127.3 GB
        PANIC: "Kernel panic - not syncing:"
          PID: 693
      COMMAND: "khungtaskd"
         TASK: ff4d1890260d4000  [THREAD_INFO: ff4d1890260d4000]
          CPU: 0
        STATE: TASK_RUNNING (PANIC)
 
 crash> ps|grep sos                                                                                                                                                                                                                                                                                                           
   449071  363440  31  ff4d189005f68000  IN   0.2  506428 314484  sos                                                                                                                                                                                                                                                         
   451043  363440  63  ff4d188943a9c000  IN   0.2  506428 314484  sos                                                                                                                                                                                                                                                         
   494099  363440  29  ff4d187f941f4000  UN   0.2  506428 314484  sos     
 
 [ 8457.517696] ------------[ cut here ]------------
 [ 8457.517698] NETDEV WATCHDOG: ens3f1 (ice): transmit queue 35 timed out
 [ 8457.517711] WARNING: CPU: 33 PID: 349 at net/sched/sch_generic.c:472 dev_watchdog+0x270/0x300
 [ 8457.517718] Modules linked in: binfmt_misc macvlan pci_pf_stub iavf vfio_pci vfio_virqfd vfio_iommu_type1 vfio vhost_net vhost vhost_iotlb tap tun xt_addrtype nf_conntrack_netlink ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_nat xt_CT tcp_diag inet_diag ip6t_MASQUERADE xt_mark ice(OE) xt_conntrack ipt_MASQUERADE nft_counter xt_comment nft_compat veth nft_chain_nat nf_tables overlay bridge 8021q garp mrp stp llc nfnetlink_cttimeout nfnetlink openvswitch nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ext4 mbcache jbd2 intel_rapl_msr iTCO_wdt iTCO_vendor_support dell_smbios wmi_bmof dell_wmi_descriptor dcdbas kvm_intel kvm irqbypass intel_rapl_common i10nm_edac nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp rapl ipmi_ssif intel_cstate intel_uncore dm_thin_pool pcspkr isst_if_mbox_pci dm_persistent_data dm_bio_prison dm_bufio isst_if_mmio isst_if_common mei_me i2c_i801 joydev mei intel_pmt wmi acpi_ipmi ipmi_si acpi_power_meter sctp ip6_udp_tunnel
 [ 8457.517770]  udp_tunnel ip_tables xfs libcrc32c i40e sd_mod t10_pi sg bnxt_re ib_uverbs ib_core crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel bnxt_en ahci libahci libata dm_multipath dm_mirror dm_region_hash dm_log dm_mod ipmi_devintf ipmi_msghandler fuse [last unloaded: ice]
 [ 8457.517784] Red Hat flags: eBPF/rawtrace
 [ 8457.517787] CPU: 33 PID: 349 Comm: ktimers/33 Kdump: loaded Tainted: G           OE    --------- -  - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1
 [ 8457.517789] Hardware name: Dell Inc. PowerEdge XR11/0P2RNT, BIOS 1.12.1 09/13/2023
 [ 8457.517790] RIP: 0010:dev_watchdog+0x270/0x300
 [ 8457.517793] Code: 17 00 e9 f0 fe ff ff 4c 89 e7 c6 05 c6 03 34 01 01 e8 14 43 fa ff 89 d9 4c 89 e6 48 c7 c7 90 37 98 9a 48 89 c2 e8 1d be 88 ff <0f> 0b eb ad 65 8b 05 05 13 fb 65 89 c0 48 0f a3 05 1b ab 36 01 73
 [ 8457.517795] RSP: 0018:ff7aeb55c73c7d78 EFLAGS: 00010286
 [ 8457.517797] RAX: 0000000000000000 RBX: 0000000000000023 RCX: 0000000000000001
 [ 8457.517798] RDX: 0000000000000000 RSI: ffffffff9a908557 RDI: 00000000ffffffff
 [ 8457.517799] RBP: 0000000000000021 R08: ffffffff9ae6b3a0 R09: 00080000000000ff
 [ 8457.517800] R10: 000000006443a462 R11: 0000000000000036 R12: ff4d187f4d1f4000
 [ 8457.517801] R13: ff4d187f4d20df00 R14: ff4d187f4d1f44a0 R15: 0000000000000080
 [ 8457.517803] FS:  0000000000000000(0000) GS:ff4d18967a040000(0000) knlGS:0000000000000000
 [ 8457.517804] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 [ 8457.517805] CR2: 00007fc47c649974 CR3: 00000019a441a005 CR4: 0000000000771ea0
 [ 8457.517806] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 [ 8457.517807] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 [ 8457.517808] PKRU: 55555554
 [ 8457.517810] Call Trace:
 [ 8457.517813]  ? test_ti_thread_flag.constprop.50+0x10/0x10
 [ 8457.517816]  ? test_ti_thread_flag.constprop.50+0x10/0x10
 [ 8457.517818]  call_timer_fn+0x32/0x1d0
 [ 8457.517822]  ? test_ti_thread_flag.constprop.50+0x10/0x10
 [ 8457.517825]  run_timer_softirq+0x1fc/0x640
 [ 8457.517828]  ? _raw_spin_unlock_irq+0x1d/0x60
 [ 8457.517833]  ? finish_task_switch+0xea/0x320
 [ 8457.517836]  ? __switch_to+0x10c/0x4d0
 [ 8457.517840]  __do_softirq+0xa5/0x33f
 [ 8457.517844]  run_timersd+0x61/0xb0
 [ 8457.517848]  smpboot_thread_fn+0x1c1/0x2b0
 [ 8457.517851]  ? smpboot_register_percpu_thread_cpumask+0x140/0x140
 [ 8457.517853]  kthread+0x151/0x170
 [ 8457.517856]  ? set_kthread_struct+0x50/0x50
 [ 8457.517858]  ret_from_fork+0x1f/0x40
 [ 8457.517861] ---[ end trace 0000000000000002 ]---
 [ 8458.520445] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0x99, HW_HEAD: 0x14, NTU: 0x15, INT: 0x0
 [ 8458.520451] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35
 [ 8506.139246] ice 0000:8a:00.1: PTP reset successful
 [ 8506.437047] ice 0000:8a:00.1: VSI rebuilt. VSI index 0, type ICE_VSI_PF
 [ 8506.445482] ice 0000:8a:00.1: VSI rebuilt. VSI index 1, type ICE_VSI_CTRL
 [ 8540.459707] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0xe3, HW_HEAD: 0xe7, NTU: 0xe8, INT: 0x0
 [ 8540.459714] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35
 [ 8563.891356] ice 0000:8a:00.1: PTP reset successful
 ~~~
 
 The second vmcore on the same node shows an issue with the SSD drive:
 
 $ crash vmcore-2 tmp/vmlinux
 
       KERNEL: tmp/vmlinux  [TAINTED]
     DUMPFILE: vmcore-2  [PARTIAL DUMP]
         CPUS: 64
         DATE: Thu Mar  7 14:29:31 CET 2024
       UPTIME: 1 days, 07:19:52
 LOAD AVERAGE: 25.55, 26.42, 28.30
        TASKS: 5409
     NODENAME: aaa.bbb.ccc
      RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64
      VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023
      MACHINE: x86_64  (1500 Mhz)
       MEMORY: 127.3 GB
        PANIC: "Kernel panic - not syncing:"
          PID: 696
      COMMAND: "khungtaskd"
         TASK: ff2b35ed48d30000  [THREAD_INFO: ff2b35ed48d30000]
          CPU: 34
        STATE: TASK_RUNNING (PANIC)
 
 crash> ps |grep sos
   719784  718369  62  ff2b35ff00830000  IN   0.4 1215636 563388  sos
   721740  718369  61  ff2b3605579f8000  IN   0.4 1215636 563388  sos
   721742  718369  63  ff2b35fa5eb9c000  IN   0.4 1215636 563388  sos
   721744  718369  30  ff2b3603367fc000  IN   0.4 1215636 563388  sos
   721746  718369  29  ff2b360557944000  IN   0.4 1215636 563388  sos
   743356  718369  62  ff2b36042c8e0000  IN   0.4 1215636 563388  sos
   743818  718369  29  ff2b35f6186d0000  IN   0.4 1215636 563388  sos
   748518  718369  61  ff2b3602cfb84000  IN   0.4 1215636 563388  sos
   748884  718369  62  ff2b360713418000  UN   0.4 1215636 563388  sos
 
 crash> dmesg
 
 [111871.309883] ata3.00: exception Emask 0x0 SAct 0x3ff8 SErr 0x0 action 0x6 frozen
 [111871.309889] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309891] ata3.00: cmd 61/40:18:28:47:4b/00:00:00:00:00/40 tag 3 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309895] ata3.00: status: { DRDY }
 [111871.309897] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309904] ata3.00: cmd 61/40:20:68:47:4b/00:00:00:00:00/40 tag 4 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309908] ata3.00: status: { DRDY }
 [111871.309909] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309910] ata3.00: cmd 61/40:28:a8:47:4b/00:00:00:00:00/40 tag 5 ncq dma 32768 out
                          res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309913] ata3.00: status: { DRDY }
 [111871.309914] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309915] ata3.00: cmd 61/40:30:e8:47:4b/00:00:00:00:00/40 tag 6 ncq dma 32768 out
                          res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309918] ata3.00: status: { DRDY }
 [111871.309919] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309919] ata3.00: cmd 61/70:38:48:37:2b/00:00:1c:00:00/40 tag 7 ncq dma 57344 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309922] ata3.00: status: { DRDY }
 [111871.309923] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309924] ata3.00: cmd 61/20:40:78:29:0c/00:00:19:00:00/40 tag 8 ncq dma 16384 out
                          res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309927] ata3.00: status: { DRDY }
 [111871.309928] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309929] ata3.00: cmd 61/08:48:08:0c:c0/00:00:1c:00:00/40 tag 9 ncq dma 4096 out
                          res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309932] ata3.00: status: { DRDY }
 [111871.309933] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309934] ata3.00: cmd 61/40:50:28:48:4b/00:00:00:00:00/40 tag 10 ncq dma 32768 out
                          res 40/00:01:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309937] ata3.00: status: { DRDY }
 [111871.309938] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309939] ata3.00: cmd 61/40:58:68:48:4b/00:00:00:00:00/40 tag 11 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309942] ata3.00: status: { DRDY }
 [111871.309943] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309944] ata3.00: cmd 61/40:60:a8:48:4b/00:00:00:00:00/40 tag 12 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309946] ata3.00: status: { DRDY }
 [111871.309947] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309948] ata3.00: cmd 61/40:68:e8:48:4b/00:00:00:00:00/40 tag 13 ncq dma 32768 out
                          res 40/00:01:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309951] ata3.00: status: { DRDY }
 [111871.309953] ata3: hard resetting link
 ...
 ...
 ...
 [112789.787310] INFO: task sos:748884 blocked for more than 600 seconds.                                                                                                                                                                                                                                                     
 [112789.787314]       Tainted: G           OE    --------- -  - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1                                                                                                                                                                                                                      
 [112789.787316] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.                                                                                                                                                                                                                                    
 [112789.787316] task:sos             state:D stack:    0 pid:748884 ppid:718369 flags:0x00084080                                                                                                                                                                                                                             
 [112789.787320] Call Trace:                                                                                                                                                                                                                                                                                                  
 [112789.787323]  __schedule+0x37b/0x8e0                                                                                                                                                                                                                                                                                      
 [112789.787330]  schedule+0x6c/0x120                                                                                                                                                                                                                                                                                         
 [112789.787333]  schedule_timeout+0x2b7/0x410                                                                                                                                                                                                                                                                                
 [112789.787336]  ? enqueue_entity+0x130/0x790                                                                                                                                                                                                                                                                                
 [112789.787340]  wait_for_completion+0x84/0xf0                                                                                                                                                                                                                                                                               
 [112789.787343]  flush_work+0x120/0x1d0                                                                                                                                                                                                                                                                                      
 [112789.787347]  ? flush_workqueue_prep_pwqs+0x130/0x130                                                                                                                                                                                                                                                                     
 [112789.787350]  schedule_on_each_cpu+0xa7/0xe0                                                                                                                                                                                                                                                                              
 [112789.787353]  vmstat_refresh+0x22/0xa0                                                                                                                                                                                                                                                                                    
 [112789.787357]  proc_sys_call_handler+0x174/0x1d0                                                                                                                                                                                                                                                                           
 [112789.787361]  vfs_read+0x91/0x150                                                                                                                                                                                                                                                                                         
 [112789.787364]  ksys_read+0x52/0xc0                                                                                                                                                                                                                                                                                         
 [112789.787366]  do_syscall_64+0x87/0x1b0                                                                                                                                                                                                                                                                                    
 [112789.787369]  entry_SYSCALL_64_after_hwframe+0x61/0xc6                                                                                                                                                                                                                                                                    
 [112789.787372] RIP: 0033:0x7f2dca8c2ab4                                                                                                                                                                                                                                                                                     
 [112789.787378] Code: Unable to access opcode bytes at RIP 0x7f2dca8c2a8a.                                                                                                                                                                                                                                                   
 [112789.787378] RSP: 002b:00007f2dbbffc5e0 EFLAGS: 00000246 ORIG_RAX: 0000000000000000                                                                                                                                                                                                                                       
 [112789.787380] RAX: ffffffffffffffda RBX: 0000000000000008 RCX: 00007f2dca8c2ab4                                                                                                                                                                                                                                            
 [112789.787382] RDX: 0000000000004000 RSI: 00007f2db402b5a0 RDI: 0000000000000008                                                                                                                                                                                                                                            
 [112789.787383] RBP: 00007f2db402b5a0 R08: 0000000000000000 R09: 00007f2dcace27bb                                                                                                                                                                                                                                            
 [112789.787383] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000004000                                                                                                                                                                                                                                            
 [112789.787384] R13: 0000000000000008 R14: 00007f2db402b5a0 R15: 00007f2da4001a90                                                                                                                                                                                                                                            
 [112789.787418] NMI backtrace for cpu 34    {code}
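 The vmcore triage above can be repeated with the crash utility; a short sketch, assuming the same tmp/vmlinux debug image and vmcore paths used in this report (the vmlinux must match kernel 4.18.0-372.69.1.rt7.227.el8_6.x86_64):
 {code:none}
 crash vmcore tmp/vmlinux       # open the dump against the matching debug vmlinux
 crash> sys                     # panic string, uptime, load average
 crash> ps | grep sos           # look for sos threads in UN (uninterruptible) state
 crash> dmesg                   # kernel log leading up to the hang (ice tx_timeout, ata timeouts)
 crash> bt <pid>                # backtrace of a hung task, e.g. the UN sos thread
 {code}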
Status: CLOSED
#OCPBUGS-32091issue4 weeks agoCAPI-Installer leaks processes during unsuccessful installs MODIFIED
ERROR Attempted to gather debug logs after installation failure: failed to create SSH client: ssh: handshake failed: ssh: disconnect, reason 2: Too many authentication failures
ERROR Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get "https://api.gpei-0515.qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.134.9.157:6443: connect: connection refused
ERROR Bootstrap failed to complete: Get "https://api.gpei-0515.qe.devcluster.openshift.com:6443/version": dial tcp 18.222.8.23:6443: connect: connection refused

... 1 lines not shown

periodic-ci-openshift-release-master-ci-4.13-upgrade-from-stable-4.12-e2e-aws-ovn-upgrade (all) - 32 runs, 16% failed, 620% of failures match = 97% impact
#1791711138501627904junit16 hours ago
May 18 07:45:13.355 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-ccrrv node/ip-10-0-182-41.us-east-2.compute.internal uid/1cf72eae-9989-4ff9-9bc2-ba6f9b8edc89 container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 18 07:45:16.410 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-182-41.us-east-2.compute.internal node/ip-10-0-182-41.us-east-2.compute.internal uid/bf1e2bcf-025f-41c0-9d44-065344eb0811 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0518 07:45:14.584882       1 cmd.go:216] Using insecure, self-signed certificates\nI0518 07:45:14.596352       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716018314 cert, and key in /tmp/serving-cert-770252321/serving-signer.crt, /tmp/serving-cert-770252321/serving-signer.key\nI0518 07:45:15.054214       1 observer_polling.go:159] Starting file observer\nW0518 07:45:15.068756       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-182-41.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0518 07:45:15.068903       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0518 07:45:15.077699       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-770252321/tls.crt::/tmp/serving-cert-770252321/tls.key"\nF0518 07:45:15.301997       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 18 07:45:22.702 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-182-41.us-east-2.compute.internal node/ip-10-0-182-41.us-east-2.compute.internal uid/bf1e2bcf-025f-41c0-9d44-065344eb0811 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0518 07:45:14.584882       1 cmd.go:216] Using insecure, self-signed certificates\nI0518 07:45:14.596352       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716018314 cert, and key in /tmp/serving-cert-770252321/serving-signer.crt, /tmp/serving-cert-770252321/serving-signer.key\nI0518 07:45:15.054214       1 observer_polling.go:159] Starting file observer\nW0518 07:45:15.068756       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-182-41.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0518 07:45:15.068903       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0518 07:45:15.077699       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-770252321/tls.crt::/tmp/serving-cert-770252321/tls.key"\nF0518 07:45:15.301997       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1791604102434656256junit24 hours ago
May 18 00:39:18.278 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-g5l4s node/ip-10-0-217-218.us-west-2.compute.internal uid/dda477d7-ce2f-419c-a285-0a85110e5321 container/csi-driver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 18 00:39:21.280 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-217-218.us-west-2.compute.internal node/ip-10-0-217-218.us-west-2.compute.internal uid/3e4ef19b-7094-405c-aae7-35b57deaf2ee container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0518 00:39:20.349334       1 cmd.go:216] Using insecure, self-signed certificates\nI0518 00:39:20.349958       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715992760 cert, and key in /tmp/serving-cert-1653931237/serving-signer.crt, /tmp/serving-cert-1653931237/serving-signer.key\nI0518 00:39:20.632010       1 observer_polling.go:159] Starting file observer\nW0518 00:39:20.643356       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-217-218.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0518 00:39:20.643556       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0518 00:39:20.666121       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1653931237/tls.crt::/tmp/serving-cert-1653931237/tls.key"\nF0518 00:39:20.851318       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 18 00:39:24.385 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-217-218.us-west-2.compute.internal node/ip-10-0-217-218.us-west-2.compute.internal uid/3e4ef19b-7094-405c-aae7-35b57deaf2ee container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0518 00:39:20.349334       1 cmd.go:216] Using insecure, self-signed certificates\nI0518 00:39:20.349958       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715992760 cert, and key in /tmp/serving-cert-1653931237/serving-signer.crt, /tmp/serving-cert-1653931237/serving-signer.key\nI0518 00:39:20.632010       1 observer_polling.go:159] Starting file observer\nW0518 00:39:20.643356       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-217-218.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0518 00:39:20.643556       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0518 00:39:20.666121       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1653931237/tls.crt::/tmp/serving-cert-1653931237/tls.key"\nF0518 00:39:20.851318       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1791244571968016384junit47 hours ago
May 17 00:46:52.540 E ns/openshift-dns pod/dns-default-27kgz node/ip-10-0-208-1.ec2.internal uid/b945d7e0-28e4-44ed-9a97-42c663b09acd container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 17 00:46:52.602 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-208-1.ec2.internal node/ip-10-0-208-1.ec2.internal uid/89612859-89b7-4b95-a019-a44f719f0bb6 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0517 00:46:50.628314       1 cmd.go:216] Using insecure, self-signed certificates\nI0517 00:46:50.628546       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715906810 cert, and key in /tmp/serving-cert-3257288017/serving-signer.crt, /tmp/serving-cert-3257288017/serving-signer.key\nI0517 00:46:51.243843       1 observer_polling.go:159] Starting file observer\nW0517 00:46:51.249389       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-208-1.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0517 00:46:51.249568       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0517 00:46:51.278638       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3257288017/tls.crt::/tmp/serving-cert-3257288017/tls.key"\nF0517 00:46:51.579764       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 17 00:46:52.631 E ns/e2e-k8s-sig-apps-daemonset-upgrade-4138 pod/ds1-lfqvd node/ip-10-0-208-1.ec2.internal uid/d1f7f81a-abbd-4553-a840-5d37380658d7 container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1791244571968016384junit47 hours ago
May 17 00:46:52.659 E ns/openshift-multus pod/network-metrics-daemon-vnlrd node/ip-10-0-208-1.ec2.internal uid/4e168f7c-bf5f-4a2e-9c05-30e416cc5786 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 17 00:46:53.566 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-208-1.ec2.internal node/ip-10-0-208-1.ec2.internal uid/89612859-89b7-4b95-a019-a44f719f0bb6 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0517 00:46:50.628314       1 cmd.go:216] Using insecure, self-signed certificates\nI0517 00:46:50.628546       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715906810 cert, and key in /tmp/serving-cert-3257288017/serving-signer.crt, /tmp/serving-cert-3257288017/serving-signer.key\nI0517 00:46:51.243843       1 observer_polling.go:159] Starting file observer\nW0517 00:46:51.249389       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-208-1.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0517 00:46:51.249568       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0517 00:46:51.278638       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3257288017/tls.crt::/tmp/serving-cert-3257288017/tls.key"\nF0517 00:46:51.579764       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 17 00:47:01.614 E ns/openshift-e2e-loki pod/loki-promtail-d7cxg node/ip-10-0-208-1.ec2.internal uid/bbb52659-c8a6-403a-bdf5-871e6968fed9 container/prod-bearer-token reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1791553314895171584junit26 hours ago
May 17 21:25:06.519 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-gnfcp node/ip-10-0-186-208.ec2.internal uid/101cc05a-fccc-4e14-9c56-08bb23f4fe33 container/csi-driver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 17 21:25:07.565 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-186-208.ec2.internal node/ip-10-0-186-208.ec2.internal uid/76ef1d1d-cf9c-4bc9-89c9-b02a2533451b container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0517 21:25:06.082710       1 cmd.go:216] Using insecure, self-signed certificates\nI0517 21:25:06.083099       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715981106 cert, and key in /tmp/serving-cert-2469072517/serving-signer.crt, /tmp/serving-cert-2469072517/serving-signer.key\nI0517 21:25:06.764655       1 observer_polling.go:159] Starting file observer\nW0517 21:25:06.779888       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-186-208.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0517 21:25:06.780164       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0517 21:25:06.786005       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2469072517/tls.crt::/tmp/serving-cert-2469072517/tls.key"\nF0517 21:25:07.100434       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 17 21:25:08.809 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-186-208.ec2.internal node/ip-10-0-186-208.ec2.internal uid/76ef1d1d-cf9c-4bc9-89c9-b02a2533451b container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0517 21:25:06.082710       1 cmd.go:216] Using insecure, self-signed certificates\nI0517 21:25:06.083099       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715981106 cert, and key in /tmp/serving-cert-2469072517/serving-signer.crt, /tmp/serving-cert-2469072517/serving-signer.key\nI0517 21:25:06.764655       1 observer_polling.go:159] Starting file observer\nW0517 21:25:06.779888       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-186-208.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0517 21:25:06.780164       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0517 21:25:06.786005       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2469072517/tls.crt::/tmp/serving-cert-2469072517/tls.key"\nF0517 21:25:07.100434       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1791474166596112384junit32 hours ago
May 17 16:13:10.000 - 1s    E ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new reason/DisruptionBegan ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new stopped responding to GET requests over new connections: Get "https://test-disruption-new-openshift-image-registry.apps.ci-op-xdyxx7qh-99751.aws-2.ci.openshift.org/healthz": read tcp 10.130.216.25:35456->44.224.135.119:443: read: connection reset by peer
May 17 16:13:10.273 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-202-61.us-west-2.compute.internal node/ip-10-0-202-61.us-west-2.compute.internal uid/7a988184-22e4-4495-9164-1db42b5d1e06 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0517 16:13:08.712125       1 cmd.go:216] Using insecure, self-signed certificates\nI0517 16:13:08.716145       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715962388 cert, and key in /tmp/serving-cert-3405272214/serving-signer.crt, /tmp/serving-cert-3405272214/serving-signer.key\nI0517 16:13:09.118146       1 observer_polling.go:159] Starting file observer\nW0517 16:13:09.126493       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-202-61.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0517 16:13:09.126601       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0517 16:13:09.131673       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3405272214/tls.crt::/tmp/serving-cert-3405272214/tls.key"\nF0517 16:13:09.509012       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 17 16:13:11.279 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-202-61.us-west-2.compute.internal node/ip-10-0-202-61.us-west-2.compute.internal uid/7a988184-22e4-4495-9164-1db42b5d1e06 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0517 16:13:08.712125       1 cmd.go:216] Using insecure, self-signed certificates\nI0517 16:13:08.716145       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715962388 cert, and key in /tmp/serving-cert-3405272214/serving-signer.crt, /tmp/serving-cert-3405272214/serving-signer.key\nI0517 16:13:09.118146       1 observer_polling.go:159] Starting file observer\nW0517 16:13:09.126493       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-202-61.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0517 16:13:09.126601       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0517 16:13:09.131673       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3405272214/tls.crt::/tmp/serving-cert-3405272214/tls.key"\nF0517 16:13:09.509012       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1791156885408190464junit2 days ago
May 16 19:06:44.766 E ns/openshift-ovn-kubernetes pod/ovnkube-master-7sqvh node/ip-10-0-130-139.us-west-1.compute.internal uid/a61bfaa4-aff3-4b61-9d25-bb747c030df3 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 16 19:06:51.750 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-139.us-west-1.compute.internal node/ip-10-0-130-139.us-west-1.compute.internal uid/1880f67c-739e-4a5e-a9d2-ce4ed9075dec container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0516 19:06:50.869685       1 cmd.go:216] Using insecure, self-signed certificates\nI0516 19:06:50.870162       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715886410 cert, and key in /tmp/serving-cert-2507494848/serving-signer.crt, /tmp/serving-cert-2507494848/serving-signer.key\nI0516 19:06:51.096410       1 observer_polling.go:159] Starting file observer\nW0516 19:06:51.105557       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-130-139.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0516 19:06:51.106014       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0516 19:06:51.114354       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2507494848/tls.crt::/tmp/serving-cert-2507494848/tls.key"\nF0516 19:06:51.454654       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 16 19:06:52.794 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-139.us-west-1.compute.internal node/ip-10-0-130-139.us-west-1.compute.internal uid/1880f67c-739e-4a5e-a9d2-ce4ed9075dec container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0516 19:06:50.869685       1 cmd.go:216] Using insecure, self-signed certificates\nI0516 19:06:50.870162       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715886410 cert, and key in /tmp/serving-cert-2507494848/serving-signer.crt, /tmp/serving-cert-2507494848/serving-signer.key\nI0516 19:06:51.096410       1 observer_polling.go:159] Starting file observer\nW0516 19:06:51.105557       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-130-139.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0516 19:06:51.106014       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0516 19:06:51.114354       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2507494848/tls.crt::/tmp/serving-cert-2507494848/tls.key"\nF0516 19:06:51.454654       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1791424584424099840junit35 hours ago
May 17 12:47:19.209 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-8tg2w node/ip-10-0-226-202.ec2.internal uid/ed052ed9-14bf-4475-bc20-256453e1d598 container/csi-liveness-probe reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 17 12:47:20.240 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-226-202.ec2.internal node/ip-10-0-226-202.ec2.internal uid/6e13a2cf-e7ba-4a5c-991b-a2044959c2d2 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0517 12:47:19.284217       1 cmd.go:216] Using insecure, self-signed certificates\nI0517 12:47:19.284436       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715950039 cert, and key in /tmp/serving-cert-30958820/serving-signer.crt, /tmp/serving-cert-30958820/serving-signer.key\nI0517 12:47:19.630900       1 observer_polling.go:159] Starting file observer\nW0517 12:47:19.643611       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-226-202.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0517 12:47:19.643752       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0517 12:47:19.661981       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-30958820/tls.crt::/tmp/serving-cert-30958820/tls.key"\nF0517 12:47:20.046603       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 17 12:47:21.248 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-226-202.ec2.internal node/ip-10-0-226-202.ec2.internal uid/6e13a2cf-e7ba-4a5c-991b-a2044959c2d2 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0517 12:47:19.284217       1 cmd.go:216] Using insecure, self-signed certificates\nI0517 12:47:19.284436       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715950039 cert, and key in /tmp/serving-cert-30958820/serving-signer.crt, /tmp/serving-cert-30958820/serving-signer.key\nI0517 12:47:19.630900       1 observer_polling.go:159] Starting file observer\nW0517 12:47:19.643611       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-226-202.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0517 12:47:19.643752       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0517 12:47:19.661981       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-30958820/tls.crt::/tmp/serving-cert-30958820/tls.key"\nF0517 12:47:20.046603       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1791042149316300800junit2 days ago
May 16 11:50:20.855 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-nz27l node/ip-10-0-234-160.us-east-2.compute.internal uid/976f56ab-525a-4458-b944-560a9bced3d1 container/csi-liveness-probe reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 16 11:50:23.859 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-234-160.us-east-2.compute.internal node/ip-10-0-234-160.us-east-2.compute.internal uid/8e21b3fe-4a6e-44e5-adb2-73fe9f0b37fa container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0516 11:50:22.842580       1 cmd.go:216] Using insecure, self-signed certificates\nI0516 11:50:22.856978       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715860222 cert, and key in /tmp/serving-cert-100143736/serving-signer.crt, /tmp/serving-cert-100143736/serving-signer.key\nI0516 11:50:23.203273       1 observer_polling.go:159] Starting file observer\nW0516 11:50:23.225288       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-234-160.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0516 11:50:23.225467       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0516 11:50:23.239470       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-100143736/tls.crt::/tmp/serving-cert-100143736/tls.key"\nF0516 11:50:23.456626       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 16 11:50:30.114 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-234-160.us-east-2.compute.internal node/ip-10-0-234-160.us-east-2.compute.internal uid/8e21b3fe-4a6e-44e5-adb2-73fe9f0b37fa container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0516 11:50:22.842580       1 cmd.go:216] Using insecure, self-signed certificates\nI0516 11:50:22.856978       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715860222 cert, and key in /tmp/serving-cert-100143736/serving-signer.crt, /tmp/serving-cert-100143736/serving-signer.key\nI0516 11:50:23.203273       1 observer_polling.go:159] Starting file observer\nW0516 11:50:23.225288       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-234-160.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0516 11:50:23.225467       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0516 11:50:23.239470       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-100143736/tls.crt::/tmp/serving-cert-100143736/tls.key"\nF0516 11:50:23.456626       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown
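The fatal cmd.go:141 line repeated in these junit excerpts is the kube-apiserver-check-endpoints container failing to initialize delegated authentication: at startup it reads the extension-apiserver-authentication ConfigMap in kube-system through the node-local endpoint https://localhost:6443, and that GET is refused while the static-pod kube-apiserver on the node is not yet serving, so the container exits with code 255. The snippet below is a minimal, hypothetical client-go sketch (not the actual check-endpoints code) that performs the same ConfigMap read and surfaces the same class of error when the apiserver is unreachable; the kubeconfig path is an assumption.

{code:go}
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig pointing at the node-local endpoint
	// (e.g. https://localhost:6443), mirroring what the in-pod client uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("building clientset: %v", err)
	}

	// Delegated-authentication bootstrapping reads this well-known ConfigMap;
	// while the apiserver is down the Get fails with "connection refused",
	// matching the fatal line in the events above.
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(
		context.TODO(), "extension-apiserver-authentication", metav1.GetOptions{})
	if err != nil {
		log.Fatalf("error initializing delegating authentication (sketch): %v", err)
	}
	fmt.Printf("request-header client CA present: %v\n", cm.Data["requestheader-client-ca-file"] != "")
}
{code}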

#1790949650673438720 junit (2 days ago)
May 16 05:22:02.353 E ns/openshift-image-registry pod/node-ca-4hsqh node/ip-10-0-187-206.ec2.internal uid/acda89fa-56b2-43cc-8d44-063a99ed9ea3 container/node-ca reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 16 05:22:08.348 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-187-206.ec2.internal node/ip-10-0-187-206.ec2.internal uid/b5fefdaa-0dac-478d-9d8e-3ed65d670c6d container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0516 05:22:07.012333       1 cmd.go:216] Using insecure, self-signed certificates\nI0516 05:22:07.019989       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715836927 cert, and key in /tmp/serving-cert-3290939542/serving-signer.crt, /tmp/serving-cert-3290939542/serving-signer.key\nI0516 05:22:07.357885       1 observer_polling.go:159] Starting file observer\nW0516 05:22:07.361856       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-187-206.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0516 05:22:07.362037       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0516 05:22:07.366571       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3290939542/tls.crt::/tmp/serving-cert-3290939542/tls.key"\nF0516 05:22:07.752550       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 16 05:22:08.363 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-qs4rw node/ip-10-0-187-206.ec2.internal uid/72bfbdef-4df7-4975-9f32-129f419a8272 container/csi-driver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1790949650673438720 junit (2 days ago)
May 16 05:22:08.363 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-qs4rw node/ip-10-0-187-206.ec2.internal uid/72bfbdef-4df7-4975-9f32-129f419a8272 container/csi-liveness-probe reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 16 05:22:09.355 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-187-206.ec2.internal node/ip-10-0-187-206.ec2.internal uid/b5fefdaa-0dac-478d-9d8e-3ed65d670c6d container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0516 05:22:07.012333       1 cmd.go:216] Using insecure, self-signed certificates\nI0516 05:22:07.019989       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715836927 cert, and key in /tmp/serving-cert-3290939542/serving-signer.crt, /tmp/serving-cert-3290939542/serving-signer.key\nI0516 05:22:07.357885       1 observer_polling.go:159] Starting file observer\nW0516 05:22:07.361856       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-187-206.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0516 05:22:07.362037       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0516 05:22:07.366571       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3290939542/tls.crt::/tmp/serving-cert-3290939542/tls.key"\nF0516 05:22:07.752550       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 16 05:22:10.424 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-187-206.ec2.internal node/ip-10-0-187-206.ec2.internal uid/b5fefdaa-0dac-478d-9d8e-3ed65d670c6d container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0516 05:22:08.618061       1 cmd.go:216] Using insecure, self-signed certificates\nI0516 05:22:08.618492       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715836928 cert, and key in /tmp/serving-cert-2797202336/serving-signer.crt, /tmp/serving-cert-2797202336/serving-signer.key\nI0516 05:22:09.092103       1 observer_polling.go:159] Starting file observer\nW0516 05:22:09.096187       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-187-206.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0516 05:22:09.096414       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0516 05:22:09.096997       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2797202336/tls.crt::/tmp/serving-cert-2797202336/tls.key"\nF0516 05:22:09.546056       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown
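Several of the interleaved reason/TerminationStateCleared events flag pods whose lastState.terminated record was cleared after a restart (tracked as bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar). As an illustration only, a hedged client-go sketch of inspecting that field on one of the flagged pods might look like the following; the namespace, pod name, and kubeconfig path are placeholders, not values taken from these runs.

{code:go}
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical pod: substitute the namespace/name from the event line.
	pod, err := cs.CoreV1().Pods("openshift-cluster-csi-drivers").Get(
		context.TODO(), "aws-ebs-csi-driver-node-example", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		// The test monitor expects lastState.terminated to persist after a restart;
		// the events above fire when it comes back empty.
		if st.LastTerminationState.Terminated == nil {
			fmt.Printf("container %s: lastState.terminated is cleared\n", st.Name)
		} else {
			fmt.Printf("container %s: last exit code %d\n", st.Name, st.LastTerminationState.Terminated.ExitCode)
		}
	}
}
{code}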

#1790642045522546688 junit (3 days ago)
May 15 09:11:14.924 E ns/openshift-ovn-kubernetes pod/ovnkube-master-j754x node/ip-10-0-196-86.us-east-2.compute.internal uid/dbd8cab7-d59f-400b-9e2a-67757057b20c container/ovn-dbchecker reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 15 09:11:14.938 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-196-86.us-east-2.compute.internal node/ip-10-0-196-86.us-east-2.compute.internal uid/1c544ee3-bf13-4b75-8551-c164664b0e67 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0515 09:11:12.644167       1 cmd.go:216] Using insecure, self-signed certificates\nI0515 09:11:12.658415       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715764272 cert, and key in /tmp/serving-cert-500076158/serving-signer.crt, /tmp/serving-cert-500076158/serving-signer.key\nI0515 09:11:13.547472       1 observer_polling.go:159] Starting file observer\nW0515 09:11:13.567830       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-196-86.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0515 09:11:13.567941       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0515 09:11:13.589346       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-500076158/tls.crt::/tmp/serving-cert-500076158/tls.key"\nF0515 09:11:13.869634       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 15 09:11:15.923 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-196-86.us-east-2.compute.internal node/ip-10-0-196-86.us-east-2.compute.internal uid/1c544ee3-bf13-4b75-8551-c164664b0e67 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0515 09:11:12.644167       1 cmd.go:216] Using insecure, self-signed certificates\nI0515 09:11:12.658415       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715764272 cert, and key in /tmp/serving-cert-500076158/serving-signer.crt, /tmp/serving-cert-500076158/serving-signer.key\nI0515 09:11:13.547472       1 observer_polling.go:159] Starting file observer\nW0515 09:11:13.567830       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-196-86.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0515 09:11:13.567941       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0515 09:11:13.589346       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-500076158/tls.crt::/tmp/serving-cert-500076158/tls.key"\nF0515 09:11:13.869634       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1790738274180927488 junit (3 days ago)
May 15 15:29:23.743 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-fc9b4 node/ip-10-0-172-180.us-east-2.compute.internal uid/f096527a-eb5a-4f81-957e-8a097b8a9513 container/csi-liveness-probe reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 15 15:29:26.755 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-180.us-east-2.compute.internal node/ip-10-0-172-180.us-east-2.compute.internal uid/1dd7cdb5-4838-4d48-97bb-8b10b03f068c container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0515 15:29:24.951465       1 cmd.go:216] Using insecure, self-signed certificates\nI0515 15:29:24.961545       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715786964 cert, and key in /tmp/serving-cert-3173301534/serving-signer.crt, /tmp/serving-cert-3173301534/serving-signer.key\nI0515 15:29:25.339569       1 observer_polling.go:159] Starting file observer\nW0515 15:29:25.350863       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-172-180.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0515 15:29:25.351064       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0515 15:29:25.365648       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3173301534/tls.crt::/tmp/serving-cert-3173301534/tls.key"\nF0515 15:29:25.692057       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 15 15:29:32.087 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-180.us-east-2.compute.internal node/ip-10-0-172-180.us-east-2.compute.internal uid/1dd7cdb5-4838-4d48-97bb-8b10b03f068c container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0515 15:29:24.951465       1 cmd.go:216] Using insecure, self-signed certificates\nI0515 15:29:24.961545       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715786964 cert, and key in /tmp/serving-cert-3173301534/serving-signer.crt, /tmp/serving-cert-3173301534/serving-signer.key\nI0515 15:29:25.339569       1 observer_polling.go:159] Starting file observer\nW0515 15:29:25.350863       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-172-180.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0515 15:29:25.351064       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0515 15:29:25.365648       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3173301534/tls.crt::/tmp/serving-cert-3173301534/tls.key"\nF0515 15:29:25.692057       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1790589768115949568 junit (3 days ago)
May 15 06:01:21.974 E ns/openshift-ovn-kubernetes pod/ovnkube-master-hwt5m node/ip-10-0-213-230.ec2.internal uid/6409c7b1-16e8-4453-b96a-09624509240c container/sbdb reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 15 06:01:23.996 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-213-230.ec2.internal node/ip-10-0-213-230.ec2.internal uid/4c96160e-d2a2-4f4e-ae88-8df4c5a8a9b3 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0515 06:01:22.055868       1 cmd.go:216] Using insecure, self-signed certificates\nI0515 06:01:22.070385       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715752882 cert, and key in /tmp/serving-cert-3818421745/serving-signer.crt, /tmp/serving-cert-3818421745/serving-signer.key\nI0515 06:01:22.410919       1 observer_polling.go:159] Starting file observer\nW0515 06:01:22.438699       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-213-230.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0515 06:01:22.438837       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0515 06:01:22.454790       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3818421745/tls.crt::/tmp/serving-cert-3818421745/tls.key"\nF0515 06:01:22.972528       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 15 06:01:29.300 E ns/openshift-network-diagnostics pod/network-check-target-lztds node/ip-10-0-213-230.ec2.internal uid/39888642-48fb-49ba-8427-9877af052456 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 2 lines not shown

#1790498916517548032 junit (4 days ago)
May 14 23:28:18.582 E ns/openshift-network-diagnostics pod/network-check-target-647nd node/ip-10-0-240-149.us-east-2.compute.internal uid/b4113d0d-183d-4cd6-86e6-026ff348c250 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 14 23:28:19.755 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-240-149.us-east-2.compute.internal node/ip-10-0-240-149.us-east-2.compute.internal uid/7db1f765-4ecc-47ae-828b-366e1b2c2fa6 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0514 23:28:18.252133       1 cmd.go:216] Using insecure, self-signed certificates\nI0514 23:28:18.253218       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715729298 cert, and key in /tmp/serving-cert-3339336277/serving-signer.crt, /tmp/serving-cert-3339336277/serving-signer.key\nI0514 23:28:18.533633       1 observer_polling.go:159] Starting file observer\nW0514 23:28:18.554678       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-240-149.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0514 23:28:18.554921       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0514 23:28:18.569578       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3339336277/tls.crt::/tmp/serving-cert-3339336277/tls.key"\nF0514 23:28:18.850487       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 14 23:28:19.781 E ns/openshift-multus pod/network-metrics-daemon-2cb9k node/ip-10-0-240-149.us-east-2.compute.internal uid/c007ceee-169c-496d-a07c-ab5aa5364999 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1790498916517548032 junit (4 days ago)
May 14 23:28:19.819 E ns/openshift-dns pod/dns-default-wc48k node/ip-10-0-240-149.us-east-2.compute.internal uid/008e998f-2b46-41aa-860e-a6c7da83ba5e container/dns reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 14 23:28:20.806 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-240-149.us-east-2.compute.internal node/ip-10-0-240-149.us-east-2.compute.internal uid/7db1f765-4ecc-47ae-828b-366e1b2c2fa6 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0514 23:28:18.252133       1 cmd.go:216] Using insecure, self-signed certificates\nI0514 23:28:18.253218       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715729298 cert, and key in /tmp/serving-cert-3339336277/serving-signer.crt, /tmp/serving-cert-3339336277/serving-signer.key\nI0514 23:28:18.533633       1 observer_polling.go:159] Starting file observer\nW0514 23:28:18.554678       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-240-149.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0514 23:28:18.554921       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0514 23:28:18.569578       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3339336277/tls.crt::/tmp/serving-cert-3339336277/tls.key"\nF0514 23:28:18.850487       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 14 23:28:23.815 E ns/openshift-e2e-loki pod/loki-promtail-qgzhn node/ip-10-0-240-149.us-east-2.compute.internal uid/c97dcd68-4be4-4279-9d63-0d9515355d22 container/prod-bearer-token reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1790317749742866432 junit (4 days ago)
May 14 11:26:20.727 E ns/e2e-k8s-sig-apps-daemonset-upgrade-3448 pod/ds1-rlsg8 node/ip-10-0-196-134.ec2.internal uid/777fc8d5-f57d-4b56-afea-53090f1919a9 container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 14 11:26:20.778 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-196-134.ec2.internal node/ip-10-0-196-134.ec2.internal uid/53dc8688-1caf-4c45-92bf-8983af1f64b4 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0514 11:26:19.363954       1 cmd.go:216] Using insecure, self-signed certificates\nI0514 11:26:19.366381       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715685979 cert, and key in /tmp/serving-cert-4089334101/serving-signer.crt, /tmp/serving-cert-4089334101/serving-signer.key\nI0514 11:26:19.760764       1 observer_polling.go:159] Starting file observer\nW0514 11:26:19.764468       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-196-134.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0514 11:26:19.764677       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0514 11:26:19.774083       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4089334101/tls.crt::/tmp/serving-cert-4089334101/tls.key"\nF0514 11:26:19.951968       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 14 11:26:21.760 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-196-134.ec2.internal node/ip-10-0-196-134.ec2.internal uid/53dc8688-1caf-4c45-92bf-8983af1f64b4 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0514 11:26:19.363954       1 cmd.go:216] Using insecure, self-signed certificates\nI0514 11:26:19.366381       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715685979 cert, and key in /tmp/serving-cert-4089334101/serving-signer.crt, /tmp/serving-cert-4089334101/serving-signer.key\nI0514 11:26:19.760764       1 observer_polling.go:159] Starting file observer\nW0514 11:26:19.764468       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-196-134.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0514 11:26:19.764677       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0514 11:26:19.774083       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4089334101/tls.crt::/tmp/serving-cert-4089334101/tls.key"\nF0514 11:26:19.951968       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1790408688024948736 junit (4 days ago)
May 14 17:39:15.247 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-mbvrf node/ip-10-0-226-205.ec2.internal uid/fa8a7570-052c-44ee-a505-0daba7cc84a5 container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 14 17:39:20.305 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-226-205.ec2.internal node/ip-10-0-226-205.ec2.internal uid/cf011637-8ef7-4ced-886a-dd34290b113e container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0514 17:39:18.733712       1 cmd.go:216] Using insecure, self-signed certificates\nI0514 17:39:18.764811       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715708358 cert, and key in /tmp/serving-cert-4246819319/serving-signer.crt, /tmp/serving-cert-4246819319/serving-signer.key\nI0514 17:39:19.256258       1 observer_polling.go:159] Starting file observer\nW0514 17:39:19.270981       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-226-205.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0514 17:39:19.271104       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0514 17:39:19.295962       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4246819319/tls.crt::/tmp/serving-cert-4246819319/tls.key"\nF0514 17:39:19.577428       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 14 17:39:21.306 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-226-205.ec2.internal node/ip-10-0-226-205.ec2.internal uid/cf011637-8ef7-4ced-886a-dd34290b113e container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0514 17:39:18.733712       1 cmd.go:216] Using insecure, self-signed certificates\nI0514 17:39:18.764811       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715708358 cert, and key in /tmp/serving-cert-4246819319/serving-signer.crt, /tmp/serving-cert-4246819319/serving-signer.key\nI0514 17:39:19.256258       1 observer_polling.go:159] Starting file observer\nW0514 17:39:19.270981       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-226-205.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0514 17:39:19.271104       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0514 17:39:19.295962       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4246819319/tls.crt::/tmp/serving-cert-4246819319/tls.key"\nF0514 17:39:19.577428       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1790128785861906432 junit (5 days ago)
May 13 23:53:01.960 E ns/openshift-dns pod/node-resolver-8v9b8 node/ip-10-0-143-225.us-east-2.compute.internal uid/397cf46b-dd5e-4685-b235-37549f8285fd container/dns-node-resolver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 13 23:53:05.947 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-143-225.us-east-2.compute.internal node/ip-10-0-143-225.us-east-2.compute.internal uid/8d85027c-a757-4c89-810f-ee60409b4e0f container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0513 23:53:04.587078       1 cmd.go:216] Using insecure, self-signed certificates\nI0513 23:53:04.593628       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715644384 cert, and key in /tmp/serving-cert-3698074971/serving-signer.crt, /tmp/serving-cert-3698074971/serving-signer.key\nI0513 23:53:05.370939       1 observer_polling.go:159] Starting file observer\nW0513 23:53:05.375659       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-143-225.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0513 23:53:05.375794       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0513 23:53:05.383362       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3698074971/tls.crt::/tmp/serving-cert-3698074971/tls.key"\nF0513 23:53:05.604041       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 13 23:53:11.817 E ns/openshift-network-diagnostics pod/network-check-target-vhfxd node/ip-10-0-143-225.us-east-2.compute.internal uid/a9f07976-51c3-470b-a6cd-d503c93ed651 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 2 lines not shown

#1790039507349803008 junit (5 days ago)
May 13 17:07:25.979 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-7vwnl node/ip-10-0-194-26.ec2.internal uid/2d9d5531-0205-48b4-a99d-6fd47d15493e container/csi-liveness-probe reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 13 17:07:27.028 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-194-26.ec2.internal node/ip-10-0-194-26.ec2.internal uid/3b6d37d6-1266-4c97-bb10-27783cd6c8d5 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0513 17:07:25.899427       1 cmd.go:216] Using insecure, self-signed certificates\nI0513 17:07:25.899822       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715620045 cert, and key in /tmp/serving-cert-2287266219/serving-signer.crt, /tmp/serving-cert-2287266219/serving-signer.key\nI0513 17:07:26.330121       1 observer_polling.go:159] Starting file observer\nW0513 17:07:26.348637       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-194-26.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0513 17:07:26.348830       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0513 17:07:26.374855       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2287266219/tls.crt::/tmp/serving-cert-2287266219/tls.key"\nF0513 17:07:26.577502       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 13 17:07:28.169 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-194-26.ec2.internal node/ip-10-0-194-26.ec2.internal uid/3b6d37d6-1266-4c97-bb10-27783cd6c8d5 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0513 17:07:25.899427       1 cmd.go:216] Using insecure, self-signed certificates\nI0513 17:07:25.899822       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715620045 cert, and key in /tmp/serving-cert-2287266219/serving-signer.crt, /tmp/serving-cert-2287266219/serving-signer.key\nI0513 17:07:26.330121       1 observer_polling.go:159] Starting file observer\nW0513 17:07:26.348637       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-194-26.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0513 17:07:26.348830       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0513 17:07:26.374855       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2287266219/tls.crt::/tmp/serving-cert-2287266219/tls.key"\nF0513 17:07:26.577502       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1789948963709784064 junit (5 days ago)
May 13 11:07:11.929 E ns/openshift-ovn-kubernetes pod/ovnkube-master-knd9c node/ip-10-0-252-243.us-west-1.compute.internal uid/1b050b02-15f5-4adb-b142-3f680e687a52 container/northd reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 13 11:07:13.954 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-252-243.us-west-1.compute.internal node/ip-10-0-252-243.us-west-1.compute.internal uid/26a6d926-6f3e-4bb6-baea-903db3df8903 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0513 11:07:12.675450       1 cmd.go:216] Using insecure, self-signed certificates\nI0513 11:07:12.685558       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715598432 cert, and key in /tmp/serving-cert-656628181/serving-signer.crt, /tmp/serving-cert-656628181/serving-signer.key\nI0513 11:07:13.149857       1 observer_polling.go:159] Starting file observer\nW0513 11:07:13.175939       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-252-243.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0513 11:07:13.176073       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0513 11:07:13.199658       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-656628181/tls.crt::/tmp/serving-cert-656628181/tls.key"\nF0513 11:07:13.425129       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 13 11:07:14.941 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-252-243.us-west-1.compute.internal node/ip-10-0-252-243.us-west-1.compute.internal uid/26a6d926-6f3e-4bb6-baea-903db3df8903 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0513 11:07:12.675450       1 cmd.go:216] Using insecure, self-signed certificates\nI0513 11:07:12.685558       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715598432 cert, and key in /tmp/serving-cert-656628181/serving-signer.crt, /tmp/serving-cert-656628181/serving-signer.key\nI0513 11:07:13.149857       1 observer_polling.go:159] Starting file observer\nW0513 11:07:13.175939       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-252-243.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0513 11:07:13.176073       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0513 11:07:13.199658       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-656628181/tls.crt::/tmp/serving-cert-656628181/tls.key"\nF0513 11:07:13.425129       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 2 lines not shown

#1788899224604119040 junit (8 days ago)
May 10 13:35:26.599 E ns/openshift-ovn-kubernetes pod/ovnkube-master-t2x4z node/ip-10-0-251-118.us-east-2.compute.internal uid/9b250479-70c6-4294-8293-856b0cc6cfea container/northd reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 10 13:35:27.564 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-251-118.us-east-2.compute.internal node/ip-10-0-251-118.us-east-2.compute.internal uid/975047af-b345-45f0-a278-b2512591c0bb container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 13:35:25.976996       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 13:35:25.977230       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715348125 cert, and key in /tmp/serving-cert-3295530936/serving-signer.crt, /tmp/serving-cert-3295530936/serving-signer.key\nI0510 13:35:26.244708       1 observer_polling.go:159] Starting file observer\nW0510 13:35:26.259087       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-251-118.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 13:35:26.259300       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0510 13:35:26.274612       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3295530936/tls.crt::/tmp/serving-cert-3295530936/tls.key"\nF0510 13:35:26.725885       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 10 13:35:28.640 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-251-118.us-east-2.compute.internal node/ip-10-0-251-118.us-east-2.compute.internal uid/975047af-b345-45f0-a278-b2512591c0bb container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 13:35:25.976996       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 13:35:25.977230       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715348125 cert, and key in /tmp/serving-cert-3295530936/serving-signer.crt, /tmp/serving-cert-3295530936/serving-signer.key\nI0510 13:35:26.244708       1 observer_polling.go:159] Starting file observer\nW0510 13:35:26.259087       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-251-118.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 13:35:26.259300       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0510 13:35:26.274612       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3295530936/tls.crt::/tmp/serving-cert-3295530936/tls.key"\nF0510 13:35:26.725885       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1788946036559974400 junit (8 days ago)
May 10 16:45:27.052 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-vmm9h node/ip-10-0-234-154.us-west-1.compute.internal uid/04aca442-82ba-4936-a122-39f54d06f40e container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 10 16:45:33.094 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-234-154.us-west-1.compute.internal node/ip-10-0-234-154.us-west-1.compute.internal uid/cecea07d-cfac-49ac-a551-5f219f8034d4 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 16:45:32.125306       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 16:45:32.125725       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715359532 cert, and key in /tmp/serving-cert-3190682069/serving-signer.crt, /tmp/serving-cert-3190682069/serving-signer.key\nI0510 16:45:32.327866       1 observer_polling.go:159] Starting file observer\nW0510 16:45:32.343908       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-234-154.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 16:45:32.344055       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0510 16:45:32.358848       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3190682069/tls.crt::/tmp/serving-cert-3190682069/tls.key"\nF0510 16:45:32.773825       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 10 16:45:34.108 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-234-154.us-west-1.compute.internal node/ip-10-0-234-154.us-west-1.compute.internal uid/cecea07d-cfac-49ac-a551-5f219f8034d4 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 16:45:32.125306       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 16:45:32.125725       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715359532 cert, and key in /tmp/serving-cert-3190682069/serving-signer.crt, /tmp/serving-cert-3190682069/serving-signer.key\nI0510 16:45:32.327866       1 observer_polling.go:159] Starting file observer\nW0510 16:45:32.343908       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-234-154.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 16:45:32.344055       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0510 16:45:32.358848       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3190682069/tls.crt::/tmp/serving-cert-3190682069/tls.key"\nF0510 16:45:32.773825       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1788848856092381184 junit (8 days ago)
May 10 10:26:27.433 E ns/openshift-dns pod/node-resolver-wmfw6 node/ip-10-0-129-20.ec2.internal uid/6fe61585-ab56-4e60-826b-28ee4201b8e7 container/dns-node-resolver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 10 10:26:32.079 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-20.ec2.internal node/ip-10-0-129-20.ec2.internal uid/6826b505-bdf1-452e-9c4a-822e39f8a8c6 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 10:26:30.658449       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 10:26:30.676386       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715336790 cert, and key in /tmp/serving-cert-1129697399/serving-signer.crt, /tmp/serving-cert-1129697399/serving-signer.key\nI0510 10:26:31.075574       1 observer_polling.go:159] Starting file observer\nW0510 10:26:31.100006       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-129-20.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 10:26:31.100138       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0510 10:26:31.138292       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1129697399/tls.crt::/tmp/serving-cert-1129697399/tls.key"\nF0510 10:26:31.458472       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 10 10:26:38.500 E ns/openshift-network-diagnostics pod/network-check-target-gtp8d node/ip-10-0-129-20.ec2.internal uid/ccaffac6-279b-4099-bb55-047868b470a5 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 3 lines not shown

#1788695372575543296 junit (9 days ago)
May 10 00:04:32.951 E ns/openshift-dns pod/node-resolver-zr6kn node/ip-10-0-146-76.us-west-2.compute.internal uid/9e76e4b1-3cc7-4920-8b55-288ab837b97c container/dns-node-resolver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 10 00:04:35.965 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-76.us-west-2.compute.internal node/ip-10-0-146-76.us-west-2.compute.internal uid/4c8a4ea2-eb3a-49d5-8d00-1dd2db9f10cd container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 00:04:34.329481       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 00:04:34.336803       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715299474 cert, and key in /tmp/serving-cert-2584099023/serving-signer.crt, /tmp/serving-cert-2584099023/serving-signer.key\nI0510 00:04:34.616461       1 observer_polling.go:159] Starting file observer\nW0510 00:04:34.642474       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-146-76.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 00:04:34.642580       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0510 00:04:34.654896       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2584099023/tls.crt::/tmp/serving-cert-2584099023/tls.key"\nF0510 00:04:34.879025       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 10 00:04:38.027 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-vp2tt node/ip-10-0-146-76.us-west-2.compute.internal uid/7a6b5ee0-6d06-4c7f-9111-d101b3232806 container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1788695372575543296 junit (9 days ago)
May 10 00:04:38.027 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-vp2tt node/ip-10-0-146-76.us-west-2.compute.internal uid/7a6b5ee0-6d06-4c7f-9111-d101b3232806 container/csi-liveness-probe reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 10 00:04:38.079 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-76.us-west-2.compute.internal node/ip-10-0-146-76.us-west-2.compute.internal uid/4c8a4ea2-eb3a-49d5-8d00-1dd2db9f10cd container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 00:04:34.329481       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 00:04:34.336803       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715299474 cert, and key in /tmp/serving-cert-2584099023/serving-signer.crt, /tmp/serving-cert-2584099023/serving-signer.key\nI0510 00:04:34.616461       1 observer_polling.go:159] Starting file observer\nW0510 00:04:34.642474       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-146-76.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 00:04:34.642580       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0510 00:04:34.654896       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2584099023/tls.crt::/tmp/serving-cert-2584099023/tls.key"\nF0510 00:04:34.879025       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 10 00:04:39.048 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-76.us-west-2.compute.internal node/ip-10-0-146-76.us-west-2.compute.internal uid/4c8a4ea2-eb3a-49d5-8d00-1dd2db9f10cd container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 00:04:37.263738       1 cmd.go:216] Using insecure, self-signed certificates\nI0510 00:04:37.263974       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715299477 cert, and key in /tmp/serving-cert-1935950559/serving-signer.crt, /tmp/serving-cert-1935950559/serving-signer.key\nI0510 00:04:37.815501       1 observer_polling.go:159] Starting file observer\nW0510 00:04:37.817063       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-146-76.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 00:04:37.817208       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0510 00:04:37.817637       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1935950559/tls.crt::/tmp/serving-cert-1935950559/tls.key"\nF0510 00:04:38.173250       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1788563757937463296 junit (9 days ago)
May 09 15:23:20.665 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-b9km2 node/ip-10-0-164-47.ec2.internal uid/fcffa220-ac18-46de-9750-813194e44cc1 container/csi-liveness-probe reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 15:23:23.456 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-164-47.ec2.internal node/ip-10-0-164-47.ec2.internal uid/2638d78f-1b01-4136-9641-e6d6e6ee797e container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 15:23:21.875732       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 15:23:21.893564       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715268201 cert, and key in /tmp/serving-cert-578425212/serving-signer.crt, /tmp/serving-cert-578425212/serving-signer.key\nI0509 15:23:22.312606       1 observer_polling.go:159] Starting file observer\nW0509 15:23:22.329348       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-164-47.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 15:23:22.329502       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0509 15:23:22.349155       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-578425212/tls.crt::/tmp/serving-cert-578425212/tls.key"\nF0509 15:23:22.621109       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 15:23:28.492 E ns/e2e-k8s-sig-apps-daemonset-upgrade-8630 pod/ds1-w6mts node/ip-10-0-164-47.ec2.internal uid/488550ea-2d48-463d-94d9-1598904d61df container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 2 lines not shown

#1788371003773030400 junit (9 days ago)
May 09 02:34:56.271 E ns/openshift-dns pod/node-resolver-9ckjl node/ip-10-0-248-45.us-east-2.compute.internal uid/c5715bf2-4757-47a6-92e2-b441ab440c91 container/dns-node-resolver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 02:35:00.714 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-248-45.us-east-2.compute.internal node/ip-10-0-248-45.us-east-2.compute.internal uid/61e52dd8-40ab-4e1f-af62-5ac25d5ffcf3 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 02:34:59.185148       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 02:34:59.185414       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715222099 cert, and key in /tmp/serving-cert-3382648083/serving-signer.crt, /tmp/serving-cert-3382648083/serving-signer.key\nI0509 02:34:59.802449       1 observer_polling.go:159] Starting file observer\nW0509 02:34:59.815023       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-248-45.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 02:34:59.815168       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0509 02:34:59.824254       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3382648083/tls.crt::/tmp/serving-cert-3382648083/tls.key"\nF0509 02:34:59.997222       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 02:35:00.733 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-n82nc node/ip-10-0-248-45.us-east-2.compute.internal uid/c862670f-6208-4bef-aa60-5e3f723d1039 container/csi-driver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1788371003773030400 junit (9 days ago)
May 09 02:35:00.733 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-n82nc node/ip-10-0-248-45.us-east-2.compute.internal uid/c862670f-6208-4bef-aa60-5e3f723d1039 container/csi-liveness-probe reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 09 02:35:01.796 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-248-45.us-east-2.compute.internal node/ip-10-0-248-45.us-east-2.compute.internal uid/61e52dd8-40ab-4e1f-af62-5ac25d5ffcf3 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 02:34:59.185148       1 cmd.go:216] Using insecure, self-signed certificates\nI0509 02:34:59.185414       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715222099 cert, and key in /tmp/serving-cert-3382648083/serving-signer.crt, /tmp/serving-cert-3382648083/serving-signer.key\nI0509 02:34:59.802449       1 observer_polling.go:159] Starting file observer\nW0509 02:34:59.815023       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-248-45.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 02:34:59.815168       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0509 02:34:59.824254       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3382648083/tls.crt::/tmp/serving-cert-3382648083/tls.key"\nF0509 02:34:59.997222       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 09 02:35:07.108 E ns/openshift-ovn-kubernetes pod/ovnkube-master-zfm8z node/ip-10-0-248-45.us-east-2.compute.internal uid/000ba070-b013-43ce-9a7b-1bf277b2d36f container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1788185401119215616 junit 10 days ago
May 08 14:30:19.272 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-s47kt node/ip-10-0-199-207.us-west-2.compute.internal uid/92f9cc76-a57b-4e70-9f74-31d176a2a121 container/csi-driver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 08 14:30:22.285 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-199-207.us-west-2.compute.internal node/ip-10-0-199-207.us-west-2.compute.internal uid/476379b9-93d5-4ee4-a526-e41cdcec2ad1 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 14:30:20.295861       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 14:30:20.300844       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715178620 cert, and key in /tmp/serving-cert-1413441007/serving-signer.crt, /tmp/serving-cert-1413441007/serving-signer.key\nI0508 14:30:20.715937       1 observer_polling.go:159] Starting file observer\nW0508 14:30:20.728604       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-199-207.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 14:30:20.728726       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0508 14:30:20.742216       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1413441007/tls.crt::/tmp/serving-cert-1413441007/tls.key"\nF0508 14:30:21.247642       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 08 14:30:26.917 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-199-207.us-west-2.compute.internal node/ip-10-0-199-207.us-west-2.compute.internal uid/476379b9-93d5-4ee4-a526-e41cdcec2ad1 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 14:30:20.295861       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 14:30:20.300844       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715178620 cert, and key in /tmp/serving-cert-1413441007/serving-signer.crt, /tmp/serving-cert-1413441007/serving-signer.key\nI0508 14:30:20.715937       1 observer_polling.go:159] Starting file observer\nW0508 14:30:20.728604       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-199-207.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 14:30:20.728726       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0508 14:30:20.742216       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1413441007/tls.crt::/tmp/serving-cert-1413441007/tls.key"\nF0508 14:30:21.247642       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1787997194284961792 junit 10 days ago
May 08 01:57:17.366 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-tfhhd node/ip-10-0-187-136.us-west-2.compute.internal uid/5d4856ce-a184-4f84-baa0-a53df85ea434 container/csi-driver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 08 01:57:20.372 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-187-136.us-west-2.compute.internal node/ip-10-0-187-136.us-west-2.compute.internal uid/cd85f099-44f6-4866-a4ef-a340494cb984 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 01:57:19.310152       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 01:57:19.314152       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715133439 cert, and key in /tmp/serving-cert-2665935620/serving-signer.crt, /tmp/serving-cert-2665935620/serving-signer.key\nI0508 01:57:19.731955       1 observer_polling.go:159] Starting file observer\nW0508 01:57:19.746851       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-187-136.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 01:57:19.746956       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0508 01:57:19.767705       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2665935620/tls.crt::/tmp/serving-cert-2665935620/tls.key"\nF0508 01:57:20.035761       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 08 01:57:21.381 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-187-136.us-west-2.compute.internal node/ip-10-0-187-136.us-west-2.compute.internal uid/cd85f099-44f6-4866-a4ef-a340494cb984 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 01:57:19.310152       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 01:57:19.314152       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715133439 cert, and key in /tmp/serving-cert-2665935620/serving-signer.crt, /tmp/serving-cert-2665935620/serving-signer.key\nI0508 01:57:19.731955       1 observer_polling.go:159] Starting file observer\nW0508 01:57:19.746851       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-187-136.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 01:57:19.746956       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0508 01:57:19.767705       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2665935620/tls.crt::/tmp/serving-cert-2665935620/tls.key"\nF0508 01:57:20.035761       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 2 lines not shown

#1788094773119160320 junit 10 days ago
May 08 08:21:57.000 - 1s    E ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new reason/DisruptionBegan ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new stopped responding to GET requests over new connections: Get "https://test-disruption-new-openshift-image-registry.apps.ci-op-jpg4krw9-99751.aws-2.ci.openshift.org/healthz": read tcp 10.130.20.7:36018->34.210.170.210:443: read: connection reset by peer
May 08 08:21:57.076 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-212-55.us-west-2.compute.internal node/ip-10-0-212-55.us-west-2.compute.internal uid/62702625-6aad-4cc9-9dab-fd5875368072 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 08:21:55.654940       1 cmd.go:216] Using insecure, self-signed certificates\nI0508 08:21:55.660237       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715156515 cert, and key in /tmp/serving-cert-586698341/serving-signer.crt, /tmp/serving-cert-586698341/serving-signer.key\nI0508 08:21:56.249964       1 observer_polling.go:159] Starting file observer\nW0508 08:21:56.262263       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-212-55.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 08:21:56.262375       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0508 08:21:56.280868       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-586698341/tls.crt::/tmp/serving-cert-586698341/tls.key"\nF0508 08:21:56.545308       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 08 08:21:57.718 - 999ms E ns/openshift-console route/console disruption/ingress-to-console connection/new reason/DisruptionBegan ns/openshift-console route/console disruption/ingress-to-console connection/new stopped responding to GET requests over new connections: Get "https://console-openshift-console.apps.ci-op-jpg4krw9-99751.aws-2.ci.openshift.org/healthz": read tcp 10.130.20.7:36028->34.210.170.210:443: read: connection reset by peer

... 2 lines not shown

#1787870660219899904 junit 11 days ago
May 07 17:26:19.346 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-s8n8z node/ip-10-0-246-219.ec2.internal uid/d26747fb-a327-4b10-ba22-f602fa868faa container/csi-driver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 07 17:26:23.233 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-246-219.ec2.internal node/ip-10-0-246-219.ec2.internal uid/beefba60-4c01-4045-9ccd-aa620d4d6305 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0507 17:26:22.101439       1 cmd.go:216] Using insecure, self-signed certificates\nI0507 17:26:22.112001       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715102782 cert, and key in /tmp/serving-cert-845588008/serving-signer.crt, /tmp/serving-cert-845588008/serving-signer.key\nI0507 17:26:22.384781       1 observer_polling.go:159] Starting file observer\nW0507 17:26:22.421404       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-246-219.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0507 17:26:22.421603       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0507 17:26:22.443412       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-845588008/tls.crt::/tmp/serving-cert-845588008/tls.key"\nF0507 17:26:22.796617       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 07 17:26:23.252 E ns/openshift-ovn-kubernetes pod/ovnkube-master-66jbj node/ip-10-0-246-219.ec2.internal uid/3108f308-5e03-430c-bb3f-6eda50d769d8 container/ovn-dbchecker reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1787870660219899904 junit 11 days ago
May 07 17:26:23.252 E ns/openshift-ovn-kubernetes pod/ovnkube-master-66jbj node/ip-10-0-246-219.ec2.internal uid/3108f308-5e03-430c-bb3f-6eda50d769d8 container/ovnkube-master reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 07 17:26:29.425 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-246-219.ec2.internal node/ip-10-0-246-219.ec2.internal uid/beefba60-4c01-4045-9ccd-aa620d4d6305 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0507 17:26:22.101439       1 cmd.go:216] Using insecure, self-signed certificates\nI0507 17:26:22.112001       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715102782 cert, and key in /tmp/serving-cert-845588008/serving-signer.crt, /tmp/serving-cert-845588008/serving-signer.key\nI0507 17:26:22.384781       1 observer_polling.go:159] Starting file observer\nW0507 17:26:22.421404       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-246-219.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0507 17:26:22.421603       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0507 17:26:22.443412       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-845588008/tls.crt::/tmp/serving-cert-845588008/tls.key"\nF0507 17:26:22.796617       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 07 17:26:29.439 E ns/openshift-network-diagnostics pod/network-check-target-8wdq8 node/ip-10-0-246-219.ec2.internal uid/1590a242-e85e-466c-b187-74bc1a26eaf8 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1787564749307777024 junit 12 days ago
May 06 21:16:33.940 E ns/openshift-ovn-kubernetes pod/ovnkube-node-q65ws node/ip-10-0-251-120.us-west-1.compute.internal uid/e0a61a2a-8480-4783-9767-d79e74ec11e2 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 06 21:16:38.743 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-251-120.us-west-1.compute.internal node/ip-10-0-251-120.us-west-1.compute.internal uid/31451783-923f-4760-ae82-146135333e2d container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0506 21:16:37.432119       1 cmd.go:216] Using insecure, self-signed certificates\nI0506 21:16:37.432342       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715030197 cert, and key in /tmp/serving-cert-3758149001/serving-signer.crt, /tmp/serving-cert-3758149001/serving-signer.key\nI0506 21:16:38.102989       1 observer_polling.go:159] Starting file observer\nW0506 21:16:38.126030       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-251-120.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0506 21:16:38.126149       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0506 21:16:38.164595       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3758149001/tls.crt::/tmp/serving-cert-3758149001/tls.key"\nF0506 21:16:38.511458       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 06 21:16:40.824 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-251-120.us-west-1.compute.internal node/ip-10-0-251-120.us-west-1.compute.internal uid/31451783-923f-4760-ae82-146135333e2d container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0506 21:16:37.432119       1 cmd.go:216] Using insecure, self-signed certificates\nI0506 21:16:37.432342       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715030197 cert, and key in /tmp/serving-cert-3758149001/serving-signer.crt, /tmp/serving-cert-3758149001/serving-signer.key\nI0506 21:16:38.102989       1 observer_polling.go:159] Starting file observer\nW0506 21:16:38.126030       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-251-120.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0506 21:16:38.126149       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0506 21:16:38.164595       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3758149001/tls.crt::/tmp/serving-cert-3758149001/tls.key"\nF0506 21:16:38.511458       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1787768250126307328 junit 11 days ago
May 07 10:43:35.346 E ns/openshift-image-registry pod/node-ca-94lqg node/ip-10-0-145-8.us-west-2.compute.internal uid/25d7fe3b-8ac1-474a-9811-bda2945b843f container/node-ca reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 07 10:43:40.284 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-145-8.us-west-2.compute.internal node/ip-10-0-145-8.us-west-2.compute.internal uid/c389aa08-502f-4194-9b11-4de6ccbc767b container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0507 10:43:39.060085       1 cmd.go:216] Using insecure, self-signed certificates\nI0507 10:43:39.065041       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715078619 cert, and key in /tmp/serving-cert-3131211933/serving-signer.crt, /tmp/serving-cert-3131211933/serving-signer.key\nI0507 10:43:39.638620       1 observer_polling.go:159] Starting file observer\nW0507 10:43:39.647013       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-145-8.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0507 10:43:39.648424       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0507 10:43:39.656185       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3131211933/tls.crt::/tmp/serving-cert-3131211933/tls.key"\nF0507 10:43:39.969990       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 07 10:43:43.715 E ns/openshift-network-diagnostics pod/network-check-target-2fp2f node/ip-10-0-145-8.us-west-2.compute.internal uid/04ee6d89-c37e-4902-8fd0-28246edd8805 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 2 lines not shown

#1787466113487998976 junit 12 days ago
May 06 14:45:57.469 E ns/openshift-image-registry pod/node-ca-mk9wf node/ip-10-0-173-218.us-west-1.compute.internal uid/132df37e-2bad-4f84-a1f6-21ba972f23fa container/node-ca reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 06 14:46:01.541 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-173-218.us-west-1.compute.internal node/ip-10-0-173-218.us-west-1.compute.internal uid/93d143eb-0645-44b2-801b-135776ae6c3e container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0506 14:45:59.986505       1 cmd.go:216] Using insecure, self-signed certificates\nI0506 14:45:59.992291       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715006759 cert, and key in /tmp/serving-cert-2459467260/serving-signer.crt, /tmp/serving-cert-2459467260/serving-signer.key\nI0506 14:46:00.367432       1 observer_polling.go:159] Starting file observer\nW0506 14:46:00.408110       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-173-218.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0506 14:46:00.408421       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0506 14:46:00.428725       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2459467260/tls.crt::/tmp/serving-cert-2459467260/tls.key"\nF0506 14:46:00.849056       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 06 14:46:02.499 E ns/openshift-ovn-kubernetes pod/ovnkube-master-zdprx node/ip-10-0-173-218.us-west-1.compute.internal uid/e26ccfb4-8c66-46d7-8ac0-74326d38c1f1 container/nbdb reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1787466113487998976 junit 12 days ago
May 06 14:46:02.499 E ns/openshift-ovn-kubernetes pod/ovnkube-master-zdprx node/ip-10-0-173-218.us-west-1.compute.internal uid/e26ccfb4-8c66-46d7-8ac0-74326d38c1f1 container/ovnkube-master reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 06 14:46:02.514 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-173-218.us-west-1.compute.internal node/ip-10-0-173-218.us-west-1.compute.internal uid/93d143eb-0645-44b2-801b-135776ae6c3e container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0506 14:45:59.986505       1 cmd.go:216] Using insecure, self-signed certificates\nI0506 14:45:59.992291       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715006759 cert, and key in /tmp/serving-cert-2459467260/serving-signer.crt, /tmp/serving-cert-2459467260/serving-signer.key\nI0506 14:46:00.367432       1 observer_polling.go:159] Starting file observer\nW0506 14:46:00.408110       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-173-218.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0506 14:46:00.408421       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0506 14:46:00.428725       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2459467260/tls.crt::/tmp/serving-cert-2459467260/tls.key"\nF0506 14:46:00.849056       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 06 14:46:05.520 E ns/e2e-k8s-sig-apps-daemonset-upgrade-3049 pod/ds1-2x67g node/ip-10-0-173-218.us-west-1.compute.internal uid/9fb0107f-3d15-4532-b7cf-98c852f1344b container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

Found in 96.88% of runs (620.00% of failures) across 32 total runs and 1 job (15.62% failed).
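For triage, the common signature in each of the kube-apiserver-check-endpoints ContainerExit events above is the fatal "error initializing delegating authentication ... connection refused" line against localhost:6443. Below is a minimal sketch (Python, assuming the pasted event excerpts are saved locally as events.txt, a hypothetical file name) of one way to tally that signature per node:

{code:python}
import re
from collections import Counter

# Hypothetical local copy of the event excerpts pasted above.
LOG_PATH = "events.txt"

# Fatal signature that appears in every check-endpoints ContainerExit above.
SIGNATURE = "error initializing delegating authentication"
NODE_RE = re.compile(r"node/(\S+)")

per_node = Counter()
with open(LOG_PATH, encoding="utf-8") as fh:
    for line in fh:
        # Only count the check-endpoints container exits, not unrelated events.
        if SIGNATURE in line and "kube-apiserver-check-endpoints" in line:
            m = NODE_RE.search(line)
            per_node[m.group(1) if m else "unknown"] += 1

# Print nodes ordered by how often the signature was seen.
for node, count in per_node.most_common():
    print(f"{count:3d}  {node}")
{code}

Grouping by node makes it easier to see whether the restarts are confined to a single node at a time or spread across the cluster.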