Job:
periodic-ci-openshift-multiarch-master-nightly-4.15-upgrade-from-stable-4.14-ocp-e2e-aws-ovn-heterogeneous-upgrade (all) - 32 runs, 31% failed, 260% of failures match = 81% impact
#1786750049145851904 junit (18 minutes ago)
May 04 15:15:25.453 - 37s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-81-251.ec2.internal" not ready since 2024-05-04 15:13:25 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 15:16:02.655 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-81-251.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 15:15:52.368464       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 15:15:52.368797       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714835752 cert, and key in /tmp/serving-cert-2476966904/serving-signer.crt, /tmp/serving-cert-2476966904/serving-signer.key\nStaticPodsDegraded: I0504 15:15:53.162949       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 15:15:53.171308       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-81-251.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 15:15:53.171419       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 15:15:53.189528       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2476966904/tls.crt::/tmp/serving-cert-2476966904/tls.key"\nStaticPodsDegraded: F0504 15:15:53.468070       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 15:21:29.061 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-8-54.ec2.internal" not ready since 2024-05-04 15:21:19 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786741133733269504 junit (44 minutes ago)
May 04 14:53:14.150 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-18-43.us-west-2.compute.internal" not ready since 2024-05-04 14:53:03 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 14:53:27.762 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-18-43.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 14:53:18.333694       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 14:53:18.333981       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714834398 cert, and key in /tmp/serving-cert-3035073761/serving-signer.crt, /tmp/serving-cert-3035073761/serving-signer.key\nStaticPodsDegraded: I0504 14:53:18.937506       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 14:53:18.948639       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-18-43.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 14:53:18.948766       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 14:53:18.969186       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3035073761/tls.crt::/tmp/serving-cert-3035073761/tls.key"\nStaticPodsDegraded: F0504 14:53:19.201706       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786726306616971264 junit (2 hours ago)
May 04 13:28:58.051 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-27-76.ec2.internal" not ready since 2024-05-04 13:28:45 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 13:29:04.713 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-27-76.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 13:28:56.598838       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 13:28:56.599184       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714829336 cert, and key in /tmp/serving-cert-1763504644/serving-signer.crt, /tmp/serving-cert-1763504644/serving-signer.key\nStaticPodsDegraded: I0504 13:28:56.804580       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 13:28:56.819675       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-27-76.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 13:28:56.819776       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 13:28:56.835016       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1763504644/tls.crt::/tmp/serving-cert-1763504644/tls.key"\nStaticPodsDegraded: F0504 13:28:57.317342       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 13:34:49.571 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-43-126.ec2.internal" not ready since 2024-05-04 13:34:24 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786719705214488576 junit (2 hours ago)
May 04 13:17:33.077 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-34-11.us-west-1.compute.internal" not ready since 2024-05-04 13:17:21 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 13:17:43.298 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-34-11.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 13:17:34.294139       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 13:17:34.294474       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714828654 cert, and key in /tmp/serving-cert-2546665180/serving-signer.crt, /tmp/serving-cert-2546665180/serving-signer.key\nStaticPodsDegraded: I0504 13:17:34.717204       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 13:17:34.726919       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-34-11.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 13:17:34.727029       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 13:17:34.736152       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2546665180/tls.crt::/tmp/serving-cert-2546665180/tls.key"\nStaticPodsDegraded: F0504 13:17:35.039877       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786711881587625984 junit (3 hours ago)
May 04 12:40:46.822 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-96-13.us-west-2.compute.internal" not ready since 2024-05-04 12:38:46 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 12:41:23.675 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-96-13.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 12:41:14.659012       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 12:41:14.659237       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714826474 cert, and key in /tmp/serving-cert-2853837185/serving-signer.crt, /tmp/serving-cert-2853837185/serving-signer.key\nStaticPodsDegraded: I0504 12:41:15.045535       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 12:41:15.046959       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-96-13.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 12:41:15.047089       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 12:41:15.047675       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2853837185/tls.crt::/tmp/serving-cert-2853837185/tls.key"\nStaticPodsDegraded: F0504 12:41:15.199436       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 04 12:47:17.102 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-60-137.us-west-2.compute.internal" not ready since 2024-05-04 12:47:05 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786695941592453120 junit (4 hours ago)
May 04 11:33:44.944 - 5s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-26-176.ec2.internal" not ready since 2024-05-04 11:33:32 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 11:33:50.455 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-26-176.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 11:33:44.613628       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 11:33:44.613963       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714822424 cert, and key in /tmp/serving-cert-1231569481/serving-signer.crt, /tmp/serving-cert-1231569481/serving-signer.key\nStaticPodsDegraded: I0504 11:33:45.129001       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 11:33:45.139429       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-26-176.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 11:33:45.139533       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 11:33:45.160292       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1231569481/tls.crt::/tmp/serving-cert-1231569481/tls.key"\nStaticPodsDegraded: F0504 11:33:45.573212       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 04 11:39:49.140 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-90-83.ec2.internal" not ready since 2024-05-04 11:39:36 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786688255320657920 junit (5 hours ago)
May 04 10:56:13.060 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-78-93.ec2.internal" not ready since 2024-05-04 10:54:13 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 10:56:46.671 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-78-93.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 10:56:36.983247       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 10:56:36.983630       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714820196 cert, and key in /tmp/serving-cert-2758716339/serving-signer.crt, /tmp/serving-cert-2758716339/serving-signer.key\nStaticPodsDegraded: I0504 10:56:37.556313       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 10:56:37.565606       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-78-93.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 10:56:37.565745       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 10:56:37.578192       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2758716339/tls.crt::/tmp/serving-cert-2758716339/tls.key"\nStaticPodsDegraded: F0504 10:56:38.152626       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 11:02:30.066 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-120-130.ec2.internal" not ready since 2024-05-04 11:02:21 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786680726964408320 junit (5 hours ago)
May 04 10:37:23.341 - 16s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-230.us-west-1.compute.internal" not ready since 2024-05-04 10:37:14 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 10:37:39.389 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-100-230.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 10:37:28.654052       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 10:37:28.654349       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714819048 cert, and key in /tmp/serving-cert-2545753282/serving-signer.crt, /tmp/serving-cert-2545753282/serving-signer.key\nStaticPodsDegraded: I0504 10:37:29.254511       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 10:37:29.271101       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-230.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 10:37:29.271225       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 10:37:29.294102       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2545753282/tls.crt::/tmp/serving-cert-2545753282/tls.key"\nStaticPodsDegraded: F0504 10:37:29.426889       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 10:43:15.551 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-18-49.us-west-1.compute.internal" not ready since 2024-05-04 10:43:09 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786673610702721024 junit (5 hours ago)
May 04 10:04:12.801 - 35s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-13-202.us-west-1.compute.internal" not ready since 2024-05-04 10:02:12 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 10:04:48.327 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-13-202.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 10:04:37.985861       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 10:04:37.986156       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714817077 cert, and key in /tmp/serving-cert-3946450971/serving-signer.crt, /tmp/serving-cert-3946450971/serving-signer.key\nStaticPodsDegraded: I0504 10:04:38.700878       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 10:04:38.717039       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-13-202.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 10:04:38.717214       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 10:04:38.751513       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3946450971/tls.crt::/tmp/serving-cert-3946450971/tls.key"\nStaticPodsDegraded: F0504 10:04:38.937860       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 10:10:19.754 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-4-44.us-west-1.compute.internal" not ready since 2024-05-04 10:08:19 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786666041863049216 junit (6 hours ago)
May 04 09:34:26.504 - 17s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-54-2.us-west-2.compute.internal" not ready since 2024-05-04 09:34:18 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 09:34:44.007 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-54-2.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 09:34:32.590194       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 09:34:32.590427       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714815272 cert, and key in /tmp/serving-cert-3526726430/serving-signer.crt, /tmp/serving-cert-3526726430/serving-signer.key\nStaticPodsDegraded: I0504 09:34:33.045582       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 09:34:33.058010       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-54-2.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 09:34:33.058139       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 09:34:33.078036       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3526726430/tls.crt::/tmp/serving-cert-3526726430/tls.key"\nStaticPodsDegraded: F0504 09:34:33.392138       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 09:40:48.465 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-67-159.us-west-2.compute.internal" not ready since 2024-05-04 09:40:37 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786666041863049216 junit (6 hours ago)
May 04 09:46:55.922 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-104-82.us-west-2.compute.internal" not ready since 2024-05-04 09:44:55 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 09:47:32.188 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-104-82.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 09:47:21.886117       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 09:47:21.886556       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714816041 cert, and key in /tmp/serving-cert-4092868569/serving-signer.crt, /tmp/serving-cert-4092868569/serving-signer.key\nStaticPodsDegraded: I0504 09:47:22.554046       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 09:47:22.563675       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-104-82.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 09:47:22.563781       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 09:47:22.578923       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4092868569/tls.crt::/tmp/serving-cert-4092868569/tls.key"\nStaticPodsDegraded: F0504 09:47:23.026529       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786643489702809600 junit (7 hours ago)
May 04 08:09:20.940 - 35s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-84-212.us-west-1.compute.internal" not ready since 2024-05-04 08:07:20 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 08:09:55.941 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-84-212.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 08:09:47.995169       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 08:09:47.997290       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714810187 cert, and key in /tmp/serving-cert-2493682308/serving-signer.crt, /tmp/serving-cert-2493682308/serving-signer.key\nStaticPodsDegraded: I0504 08:09:48.571508       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 08:09:48.584585       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-84-212.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 08:09:48.584706       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 08:09:48.598747       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2493682308/tls.crt::/tmp/serving-cert-2493682308/tls.key"\nStaticPodsDegraded: F0504 08:09:48.682173       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 08:15:53.324 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-35-67.us-west-1.compute.internal" not ready since 2024-05-04 08:15:51 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786635943793397760junit8 hours ago
May 04 07:42:39.113 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-121-91.ec2.internal" not ready since 2024-05-04 07:40:39 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 07:43:11.443 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-121-91.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 07:43:01.771700       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 07:43:01.771997       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714808581 cert, and key in /tmp/serving-cert-707293551/serving-signer.crt, /tmp/serving-cert-707293551/serving-signer.key\nStaticPodsDegraded: I0504 07:43:02.322954       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 07:43:02.333235       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-91.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 07:43:02.333370       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 07:43:02.349931       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-707293551/tls.crt::/tmp/serving-cert-707293551/tls.key"\nStaticPodsDegraded: F0504 07:43:02.680314       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 07:48:57.482 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-75-15.ec2.internal" not ready since 2024-05-04 07:48:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786629174090272768junit8 hours ago
May 04 07:15:55.666 - 17s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-92-129.us-west-1.compute.internal" not ready since 2024-05-04 07:15:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 07:16:12.789 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-92-129.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 07:16:02.629772       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 07:16:02.630168       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714806962 cert, and key in /tmp/serving-cert-1985950698/serving-signer.crt, /tmp/serving-cert-1985950698/serving-signer.key\nStaticPodsDegraded: I0504 07:16:03.348703       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 07:16:03.371284       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-92-129.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 07:16:03.371402       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 07:16:03.399166       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1985950698/tls.crt::/tmp/serving-cert-1985950698/tls.key"\nStaticPodsDegraded: F0504 07:16:03.653657       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 04 07:22:10.395 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-84-80.us-west-1.compute.internal" not ready since 2024-05-04 07:22:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786613346800242688junit10 hours ago
May 04 05:56:53.529 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-88-186.us-east-2.compute.internal" not ready since 2024-05-04 05:54:53 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 05:57:22.998 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-88-186.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 05:57:13.721802       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 05:57:13.722121       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714802233 cert, and key in /tmp/serving-cert-3494793273/serving-signer.crt, /tmp/serving-cert-3494793273/serving-signer.key\nStaticPodsDegraded: I0504 05:57:14.031242       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 05:57:14.056426       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-88-186.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 05:57:14.056528       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 05:57:14.068773       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3494793273/tls.crt::/tmp/serving-cert-3494793273/tls.key"\nStaticPodsDegraded: F0504 05:57:14.354529       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 06:03:14.520 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-98-242.us-east-2.compute.internal" not ready since 2024-05-04 06:03:07 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786603749318332416junit10 hours ago
May 04 05:26:39.945 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-116-254.us-east-2.compute.internal" not ready since 2024-05-04 05:24:39 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 05:27:12.119 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-116-254.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 05:27:04.314323       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 05:27:04.314621       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714800424 cert, and key in /tmp/serving-cert-3054620096/serving-signer.crt, /tmp/serving-cert-3054620096/serving-signer.key\nStaticPodsDegraded: I0504 05:27:04.931699       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 05:27:04.959141       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-116-254.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 05:27:04.959264       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 05:27:04.977543       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3054620096/tls.crt::/tmp/serving-cert-3054620096/tls.key"\nStaticPodsDegraded: F0504 05:27:05.213875       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 05:33:05.530 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-60-92.us-east-2.compute.internal" not ready since 2024-05-04 05:32:54 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786587023507722240junit11 hours ago
May 04 04:35:55.274 - 40s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-5-102.us-west-2.compute.internal" not ready since 2024-05-04 04:33:55 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 04:36:36.176 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-5-102.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:36:25.008579       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:36:25.009205       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714797385 cert, and key in /tmp/serving-cert-1915659060/serving-signer.crt, /tmp/serving-cert-1915659060/serving-signer.key\nStaticPodsDegraded: I0504 04:36:25.631523       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:36:25.645234       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-5-102.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:36:25.645369       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 04:36:25.668299       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1915659060/tls.crt::/tmp/serving-cert-1915659060/tls.key"\nStaticPodsDegraded: F0504 04:36:25.893675       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786512934696914944junit16 hours ago
May 03 23:33:47.870 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-101.us-west-2.compute.internal" not ready since 2024-05-03 23:31:47 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 23:34:20.689 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-100-101.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 23:34:14.371281       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 23:34:14.371766       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714779254 cert, and key in /tmp/serving-cert-2369468444/serving-signer.crt, /tmp/serving-cert-2369468444/serving-signer.key\nStaticPodsDegraded: I0503 23:34:14.685680       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 23:34:14.695302       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-101.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 23:34:14.695504       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 23:34:14.707003       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2369468444/tls.crt::/tmp/serving-cert-2369468444/tls.key"\nStaticPodsDegraded: F0503 23:34:14.950495       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 23:40:26.442 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-120-238.us-west-2.compute.internal" not ready since 2024-05-03 23:40:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786488015359578112junit18 hours ago
May 03 21:55:27.887 - 35s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-127-221.us-west-1.compute.internal" not ready since 2024-05-03 21:53:27 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:56:03.799 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-127-221.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:55:53.795499       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:55:53.795808       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714773353 cert, and key in /tmp/serving-cert-1586555821/serving-signer.crt, /tmp/serving-cert-1586555821/serving-signer.key\nStaticPodsDegraded: I0503 21:55:54.356514       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:55:54.365059       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-127-221.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:55:54.365180       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 21:55:54.382596       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1586555821/tls.crt::/tmp/serving-cert-1586555821/tls.key"\nStaticPodsDegraded: F0503 21:55:54.659607       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 22:01:53.623 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-30-31.us-west-1.compute.internal" not ready since 2024-05-03 22:01:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786482260212453376junit18 hours ago
May 03 21:24:22.175 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-124-251.us-east-2.compute.internal" not ready since 2024-05-03 21:24:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:24:36.638 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-124-251.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:24:29.144049       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:24:29.144297       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714771469 cert, and key in /tmp/serving-cert-456652248/serving-signer.crt, /tmp/serving-cert-456652248/serving-signer.key\nStaticPodsDegraded: I0503 21:24:29.718626       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:24:29.730487       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-124-251.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:24:29.730607       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 21:24:29.749383       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-456652248/tls.crt::/tmp/serving-cert-456652248/tls.key"\nStaticPodsDegraded: F0503 21:24:29.837676       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 21:30:20.236 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-57-239.us-east-2.compute.internal" not ready since 2024-05-03 21:30:10 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786470701381718016junit19 hours ago
May 03 20:36:10.975 - 30s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-80-61.ec2.internal" not ready since 2024-05-03 20:34:10 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 20:36:41.819 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-80-61.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 20:36:37.444461       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 20:36:37.444862       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714768597 cert, and key in /tmp/serving-cert-1803571219/serving-signer.crt, /tmp/serving-cert-1803571219/serving-signer.key\nStaticPodsDegraded: I0503 20:36:37.880548       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 20:36:37.887974       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-80-61.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 20:36:37.888121       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 20:36:37.897958       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1803571219/tls.crt::/tmp/serving-cert-1803571219/tls.key"\nStaticPodsDegraded: F0503 20:36:38.072656       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 20:42:02.974 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-101-132.ec2.internal" not ready since 2024-05-03 20:40:02 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786462533897424896junit19 hours ago
May 03 20:13:46.201 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-49-89.us-west-2.compute.internal" not ready since 2024-05-03 20:13:37 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 20:14:01.223 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-49-89.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 20:13:51.917768       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 20:13:51.937707       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714767231 cert, and key in /tmp/serving-cert-3929112678/serving-signer.crt, /tmp/serving-cert-3929112678/serving-signer.key\nStaticPodsDegraded: I0503 20:13:52.361981       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 20:13:52.370549       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-49-89.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 20:13:52.370668       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 20:13:52.385710       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3929112678/tls.crt::/tmp/serving-cert-3929112678/tls.key"\nStaticPodsDegraded: F0503 20:13:52.690838       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 20:19:59.663 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-64-105.us-west-2.compute.internal" not ready since 2024-05-03 20:19:46 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786455010112966656junit20 hours ago
May 03 19:44:54.373 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-3-175.us-west-2.compute.internal" not ready since 2024-05-03 19:42:54 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 19:45:31.257 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-3-175.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 19:45:20.689977       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 19:45:20.690229       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714765520 cert, and key in /tmp/serving-cert-207798529/serving-signer.crt, /tmp/serving-cert-207798529/serving-signer.key\nStaticPodsDegraded: I0503 19:45:21.246385       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 19:45:21.257363       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-3-175.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 19:45:21.257496       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 19:45:21.271310       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-207798529/tls.crt::/tmp/serving-cert-207798529/tls.key"\nStaticPodsDegraded: F0503 19:45:21.540283       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 19:51:06.375 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-25-54.us-west-2.compute.internal" not ready since 2024-05-03 19:49:06 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786455010112966656junit20 hours ago
May 03 19:57:57.184 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-255.us-west-2.compute.internal" not ready since 2024-05-03 19:57:45 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 19:58:06.710 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-100-255.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 19:57:58.688441       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 19:57:58.688750       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714766278 cert, and key in /tmp/serving-cert-3981445254/serving-signer.crt, /tmp/serving-cert-3981445254/serving-signer.key\nStaticPodsDegraded: I0503 19:57:59.260926       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 19:57:59.275744       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-255.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 19:57:59.275852       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 19:57:59.288504       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3981445254/tls.crt::/tmp/serving-cert-3981445254/tls.key"\nStaticPodsDegraded: F0503 19:57:59.664478       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786448272399798272 junit, 20 hours ago
May 03 19:27:10.700 - 38s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-52-191.us-west-2.compute.internal" not ready since 2024-05-03 19:25:10 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 19:27:49.335 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-52-191.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 19:27:39.943089       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 19:27:39.943529       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714764459 cert, and key in /tmp/serving-cert-1978184681/serving-signer.crt, /tmp/serving-cert-1978184681/serving-signer.key\nStaticPodsDegraded: I0503 19:27:40.531815       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 19:27:40.547250       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-52-191.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 19:27:40.547370       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 19:27:40.561558       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1978184681/tls.crt::/tmp/serving-cert-1978184681/tls.key"\nStaticPodsDegraded: F0503 19:27:40.908215       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1786433142152761344 junit, 21 hours ago
May 03 18:01:40.840 - 30s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-21-185.us-east-2.compute.internal" not ready since 2024-05-03 17:59:40 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 18:02:11.007 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-21-185.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 18:01:59.888530       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 18:01:59.891216       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714759319 cert, and key in /tmp/serving-cert-3271592179/serving-signer.crt, /tmp/serving-cert-3271592179/serving-signer.key\nStaticPodsDegraded: I0503 18:02:00.393380       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 18:02:00.405967       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-21-185.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 18:02:00.406110       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 18:02:00.444402       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3271592179/tls.crt::/tmp/serving-cert-3271592179/tls.key"\nStaticPodsDegraded: F0503 18:02:00.754512       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 18:07:32.832 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-71-215.us-east-2.compute.internal" not ready since 2024-05-03 18:07:26 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786432092150697984 junit, 21 hours ago
May 03 18:18:16.420 - 11s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-118-193.us-west-1.compute.internal" not ready since 2024-05-03 18:18:03 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 18:18:28.158 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-118-193.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 18:18:17.397801       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 18:18:17.399609       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714760297 cert, and key in /tmp/serving-cert-1894567985/serving-signer.crt, /tmp/serving-cert-1894567985/serving-signer.key\nStaticPodsDegraded: I0503 18:18:17.957232       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 18:18:17.966540       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-193.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 18:18:17.967861       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 18:18:17.986927       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1894567985/tls.crt::/tmp/serving-cert-1894567985/tls.key"\nStaticPodsDegraded: F0503 18:18:18.410151       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786374004899057664 junit, 25 hours ago
May 03 14:28:04.850 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-117-48.ec2.internal" not ready since 2024-05-03 14:27:53 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 14:28:11.782 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-117-48.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 14:28:05.575185       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 14:28:05.575471       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714746485 cert, and key in /tmp/serving-cert-1022605185/serving-signer.crt, /tmp/serving-cert-1022605185/serving-signer.key\nStaticPodsDegraded: I0503 14:28:06.164189       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 14:28:06.172604       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-117-48.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 14:28:06.172710       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 14:28:06.197291       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1022605185/tls.crt::/tmp/serving-cert-1022605185/tls.key"\nStaticPodsDegraded: F0503 14:28:06.445226       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
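The Degraded=True blips above all share one interval layout: a timestamp, an optional duration on the opening edge, a severity letter, and `clusteroperator/<name> condition/<cond> ... status/<val>` fields. A minimal parsing sketch for pulling out blip durations (the regex, function name, and the assumption that the intervals are available as plain-text lines are mine, not part of the CI tooling):

```python
import re

# Matches interval lines like:
#   "May 03 19:57:57.184 - 9s    E clusteroperator/kube-apiserver condition/Degraded ... status/True ..."
# The duration ("9s") appears only on the opening edge of a blip.
BLIP_RE = re.compile(
    r"^(?P<ts>\w{3} \d{2} [\d:.]+) - (?P<dur>\d+)s\s+E "
    r"clusteroperator/(?P<op>\S+) condition/Degraded .*status/True"
)

def blip_durations(lines):
    """Yield (timestamp, operator, seconds) for each Degraded=True blip."""
    for line in lines:
        m = BLIP_RE.match(line)
        if m:
            yield m.group("ts"), m.group("op"), int(m.group("dur"))

sample = ('May 03 19:57:57.184 - 9s    E clusteroperator/kube-apiserver '
          'condition/Degraded reason/NodeController_MasterNodesReady status/True ...')
print(list(blip_durations([sample])))
# [('May 03 19:57:57.184', 'kube-apiserver', 9)]
```

Durations without a matching duration suffix (the bare closing-edge `W ... status/False` lines) are deliberately skipped, since only the opening edge carries the blip length.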
periodic-ci-openshift-release-master-ci-4.17-upgrade-from-stable-4.16-e2e-aws-ovn-upgrade (all) - 80 runs, 16% failed, 562% of failures match = 91% impact
#1786725278995714048 junit, 17 minutes ago
May 04 13:48:25.167 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-20-60.us-west-2.compute.internal" not ready since 2024-05-04 13:48:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 13:48:35.488 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-20-60.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 13:48:28.641765       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 13:48:28.641970       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714830508 cert, and key in /tmp/serving-cert-2021202863/serving-signer.crt, /tmp/serving-cert-2021202863/serving-signer.key\nStaticPodsDegraded: I0504 13:48:28.925783       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 13:48:28.927148       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-20-60.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 13:48:28.927266       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0504 13:48:28.927899       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2021202863/tls.crt::/tmp/serving-cert-2021202863/tls.key"\nStaticPodsDegraded: F0504 13:48:29.150986       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786725278995714048 junit, 17 minutes ago
I0504 12:21:43.138274       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1714824985\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714824985\" (2024-05-04 11:16:24 +0000 UTC to 2025-05-04 11:16:24 +0000 UTC (now=2024-05-04 12:21:43.1382556 +0000 UTC))"
E0504 12:25:54.020243       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-28ybf2h1-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.11.244:6443: connect: connection refused
I0504 12:25:57.242172       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786725276365885440 junit, 40 minutes ago
I0504 13:45:37.586661       1 observer_polling.go:159] Starting file observer
W0504 13:45:37.605111       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-113-222.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 13:45:37.605322       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786725276210696192 junit, 41 minutes ago
May 04 13:44:41.757 - 37s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-78-247.us-west-2.compute.internal" not ready since 2024-05-04 13:42:41 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 13:45:18.814 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-78-247.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 13:45:09.906105       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 13:45:09.906423       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714830309 cert, and key in /tmp/serving-cert-155923444/serving-signer.crt, /tmp/serving-cert-155923444/serving-signer.key\nStaticPodsDegraded: I0504 13:45:10.292057       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 13:45:10.293664       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-78-247.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 13:45:10.293808       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0504 13:45:10.294416       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-155923444/tls.crt::/tmp/serving-cert-155923444/tls.key"\nStaticPodsDegraded: F0504 13:45:10.470350       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786725276210696192 junit, 41 minutes ago
I0504 13:34:03.186376       1 observer_polling.go:159] Starting file observer
W0504 13:34:03.201397       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-51-226.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 13:34:03.201605       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1786725276172947456 junit, 41 minutes ago
I0504 13:45:19.486533       1 observer_polling.go:159] Starting file observer
W0504 13:45:19.499821       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-11-175.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 13:45:19.499963       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786725276135198720 junit, 42 minutes ago
May 04 13:26:04.667 - 39s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-34-110.us-west-2.compute.internal" not ready since 2024-05-04 13:24:04 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 13:26:43.844 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-34-110.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 13:26:33.922554       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 13:26:33.922749       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714829193 cert, and key in /tmp/serving-cert-3910693603/serving-signer.crt, /tmp/serving-cert-3910693603/serving-signer.key\nStaticPodsDegraded: I0504 13:26:34.287735       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 13:26:34.293386       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-34-110.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 13:26:34.293537       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0504 13:26:34.294338       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3910693603/tls.crt::/tmp/serving-cert-3910693603/tls.key"\nStaticPodsDegraded: F0504 13:26:34.501822       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 13:32:03.340 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-101-249.us-west-2.compute.internal" not ready since 2024-05-04 13:31:53 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786725276495908864 junit, 44 minutes ago
May 04 13:39:28.118 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-104-197.ec2.internal" not ready since 2024-05-04 13:39:08 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 13:39:43.078 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-104-197.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 13:39:34.781243       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 13:39:34.781547       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714829974 cert, and key in /tmp/serving-cert-1706599145/serving-signer.crt, /tmp/serving-cert-1706599145/serving-signer.key\nStaticPodsDegraded: I0504 13:39:34.926745       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 13:39:34.928232       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-104-197.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 13:39:34.928375       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0504 13:39:34.929024       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1706599145/tls.crt::/tmp/serving-cert-1706599145/tls.key"\nStaticPodsDegraded: F0504 13:39:35.248637       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 13:45:03.569 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-73-164.ec2.internal" not ready since 2024-05-04 13:44:56 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786725276462354432 junit, 51 minutes ago
I0504 13:19:53.391351       1 observer_polling.go:159] Starting file observer
W0504 13:19:53.408976       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-124-104.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 13:19:53.409100       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786725276432994304 junit, 52 minutes ago
I0504 13:32:58.178405       1 observer_polling.go:159] Starting file observer
W0504 13:32:58.187160       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-124-217.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 13:32:58.187293       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786725276265222144 junit, 54 minutes ago
I0504 13:33:18.567114       1 observer_polling.go:159] Starting file observer
W0504 13:33:18.596662       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-114-206.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 13:33:18.596784       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786725276311359488 junit, about an hour ago
I0504 13:37:21.682898       1 observer_polling.go:159] Starting file observer
W0504 13:37:21.701652       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-117-63.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 13:37:21.701801       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786634629051060224 junit, 7 hours ago
I0504 07:42:26.536214       1 observer_polling.go:159] Starting file observer
W0504 07:42:26.545589       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-110-238.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 07:42:26.545793       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786634629088808960 junit, 7 hours ago
I0504 06:28:29.210680       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0504 06:28:30.254139       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-ycf7cc33-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.90.16:6443: connect: connection refused
I0504 06:28:43.125392       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786634629088808960 junit, 7 hours ago
I0504 07:39:21.636015       1 observer_polling.go:159] Starting file observer
W0504 07:39:21.658974       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-32-160.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 07:39:21.659126       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1786634629013311488 junit, 7 hours ago
I0504 07:33:46.443369       1 observer_polling.go:159] Starting file observer
W0504 07:33:46.459824       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-17-235.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0504 07:33:46.460097       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786634628791013376 junit, 7 hours ago
I0504 07:30:41.782782       1 observer_polling.go:159] Starting file observer
W0504 07:30:41.800089       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-116-174.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 07:30:41.800394       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786634628929425408 junit, 7 hours ago
I0504 07:24:06.355208       1 observer_polling.go:159] Starting file observer
W0504 07:24:06.372084       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-237.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 07:24:06.372254       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786634628749070336 junit, 7 hours ago
I0504 06:18:33.540492       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0504 06:18:40.665642       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-lhqp5jhr-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.85.199:6443: connect: connection refused
I0504 06:18:47.362905       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206

... 3 lines not shown

#1786634628971368448 junit, 7 hours ago
I0504 07:28:59.003446       1 observer_polling.go:159] Starting file observer
W0504 07:28:59.015645       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-47-157.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0504 07:28:59.016181       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786634628845539328junit7 hours ago
May 04 07:26:07.697 - 24s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-29-183.ec2.internal" not ready since 2024-05-04 07:24:07 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 07:26:32.457 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-29-183.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 07:26:23.404626       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 07:26:23.404884       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714807583 cert, and key in /tmp/serving-cert-322737942/serving-signer.crt, /tmp/serving-cert-322737942/serving-signer.key\nStaticPodsDegraded: I0504 07:26:23.643355       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 07:26:23.644925       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-29-183.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 07:26:23.645052       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0504 07:26:23.645649       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-322737942/tls.crt::/tmp/serving-cert-322737942/tls.key"\nStaticPodsDegraded: F0504 07:26:23.956185       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1786634628845539328junit7 hours ago
I0504 07:26:22.701193       1 observer_polling.go:159] Starting file observer
W0504 07:26:22.711050       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-29-183.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0504 07:26:22.711198       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1786634628891676672junit7 hours ago
May 04 07:27:08.561 - 16s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-19-1.ec2.internal" not ready since 2024-05-04 07:26:52 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 07:27:25.428 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-19-1.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 07:27:17.020958       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 07:27:17.021341       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714807637 cert, and key in /tmp/serving-cert-3993964824/serving-signer.crt, /tmp/serving-cert-3993964824/serving-signer.key\nStaticPodsDegraded: I0504 07:27:17.274562       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 07:27:17.275888       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-19-1.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 07:27:17.276001       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0504 07:27:17.276631       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3993964824/tls.crt::/tmp/serving-cert-3993964824/tls.key"\nStaticPodsDegraded: F0504 07:27:17.460037       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1786634628891676672junit7 hours ago
I0504 07:27:15.139011       1 observer_polling.go:159] Starting file observer
W0504 07:27:15.157057       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-19-1.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0504 07:27:15.157178       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1786543096268328960junit13 hours ago
May 04 01:40:17.901 - 16s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-76-109.us-west-1.compute.internal" not ready since 2024-05-04 01:40:10 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 01:40:34.347 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-76-109.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 01:40:25.614945       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 01:40:25.615217       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714786825 cert, and key in /tmp/serving-cert-3091486771/serving-signer.crt, /tmp/serving-cert-3091486771/serving-signer.key\nStaticPodsDegraded: I0504 01:40:25.934267       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 01:40:25.935611       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-76-109.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 01:40:25.935703       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0504 01:40:25.936227       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3091486771/tls.crt::/tmp/serving-cert-3091486771/tls.key"\nStaticPodsDegraded: F0504 01:40:26.556158       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 01:45:34.792 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-83-201.us-west-1.compute.internal" not ready since 2024-05-04 01:43:34 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786543096268328960junit13 hours ago
I0504 01:35:02.897641       1 observer_polling.go:159] Starting file observer
W0504 01:35:02.906520       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-25-20.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 01:35:02.906635       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1786543096381575168junit13 hours ago
May 04 01:31:16.280 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-117-69.ec2.internal" not ready since 2024-05-04 01:29:16 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 01:31:48.835 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-117-69.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 01:31:40.399785       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 01:31:40.400007       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714786300 cert, and key in /tmp/serving-cert-2540522655/serving-signer.crt, /tmp/serving-cert-2540522655/serving-signer.key\nStaticPodsDegraded: I0504 01:31:40.889411       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 01:31:40.891188       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-117-69.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 01:31:40.891286       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0504 01:31:40.891848       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2540522655/tls.crt::/tmp/serving-cert-2540522655/tls.key"\nStaticPodsDegraded: F0504 01:31:41.235192       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 01:36:57.266 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-123-38.ec2.internal" not ready since 2024-05-04 01:36:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786543096381575168junit13 hours ago
cause/Error code/2 reason/ContainerExit jk07-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0504 00:15:11.796318       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-dfzqjk07-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.53.79:6443: connect: connection refused
I0504 00:15:41.239150       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786543096494821376junit13 hours ago
I0504 01:21:53.281389       1 observer_polling.go:159] Starting file observer
W0504 01:21:53.298365       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-17-57.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0504 01:21:53.298543       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786543096322854912junit13 hours ago
I0504 01:29:41.795815       1 observer_polling.go:159] Starting file observer
W0504 01:29:41.814885       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-0-216.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 01:29:41.815018       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786543096348020736junit13 hours ago
I0504 01:29:52.921084       1 observer_polling.go:159] Starting file observer
W0504 01:29:52.933000       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-14-60.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 01:29:52.933329       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786543096251551744junit13 hours ago
May 04 01:31:17.594 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-94-238.ec2.internal" not ready since 2024-05-04 01:30:59 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 01:31:32.785 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-94-238.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 01:31:24.028848       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 01:31:24.029136       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714786284 cert, and key in /tmp/serving-cert-3078344934/serving-signer.crt, /tmp/serving-cert-3078344934/serving-signer.key\nStaticPodsDegraded: I0504 01:31:24.348904       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 01:31:24.350357       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-94-238.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 01:31:24.350539       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0504 01:31:24.351118       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3078344934/tls.crt::/tmp/serving-cert-3078344934/tls.key"\nStaticPodsDegraded: F0504 01:31:24.580368       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 01:36:24.245 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-6-150.ec2.internal" not ready since 2024-05-04 01:36:21 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786543096461266944junit13 hours ago
I0504 01:19:44.579550       1 observer_polling.go:159] Starting file observer
W0504 01:19:44.592532       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-80.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 01:19:44.592657       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786543096536764416junit13 hours ago
I0504 01:27:57.890671       1 observer_polling.go:159] Starting file observer
W0504 01:27:57.914282       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-106-104.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 01:27:57.914388       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786543096570318848junit13 hours ago
I0504 01:24:34.229378       1 observer_polling.go:159] Starting file observer
W0504 01:24:34.244078       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-126-151.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0504 01:24:34.244186       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786543096423518208junit13 hours ago
I0504 01:17:16.373093       1 observer_polling.go:159] Starting file observer
W0504 01:17:16.398115       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-42-33.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 01:17:16.398261       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786453173137838080junit19 hours ago
cause/Error code/2 reason/ContainerExit rxp-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0503 18:23:32.827263       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-tb7ywrxp-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.126.169:6443: connect: connection refused
I0503 18:23:38.950101       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786453173137838080junit19 hours ago
I0503 18:23:39.740735       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 18:24:02.185385       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-tb7ywrxp-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.62.23:6443: connect: connection refused
I0503 18:31:10.536385       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786453173272055808junit19 hours ago
May 03 19:36:29.209 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-75-204.us-west-2.compute.internal" not ready since 2024-05-03 19:36:27 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 19:37:01.610 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-75-204.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 19:36:53.697015       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 19:36:53.697366       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714765013 cert, and key in /tmp/serving-cert-1533742372/serving-signer.crt, /tmp/serving-cert-1533742372/serving-signer.key\nStaticPodsDegraded: I0503 19:36:54.097682       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 19:36:54.099314       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-75-204.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 19:36:54.099449       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 19:36:54.100219       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1533742372/tls.crt::/tmp/serving-cert-1533742372/tls.key"\nStaticPodsDegraded: F0503 19:36:54.536415       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 19:42:43.321 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-119-237.us-west-2.compute.internal" not ready since 2024-05-03 19:42:21 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786453173083312128junit19 hours ago
I0503 18:33:52.696526       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0503 18:34:04.755743       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-bt4d7ijb-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.1.182:6443: connect: connection refused
#1786453173083312128junit19 hours ago
I0503 19:27:33.743190       1 observer_polling.go:159] Starting file observer
W0503 19:27:33.755331       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-106-164.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:27:33.755481       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786453174211579904junit19 hours ago
May 03 19:30:14.126 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-6-208.us-east-2.compute.internal" not ready since 2024-05-03 19:28:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 19:30:51.016 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-6-208.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 19:30:42.322595       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 19:30:42.333593       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714764642 cert, and key in /tmp/serving-cert-684319068/serving-signer.crt, /tmp/serving-cert-684319068/serving-signer.key\nStaticPodsDegraded: I0503 19:30:42.745512       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 19:30:42.746903       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-6-208.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 19:30:42.747034       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 19:30:42.747608       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-684319068/tls.crt::/tmp/serving-cert-684319068/tls.key"\nStaticPodsDegraded: F0503 19:30:42.883973       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 19:36:28.117 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-94-205.us-east-2.compute.internal" not ready since 2024-05-03 19:36:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786453174211579904junit19 hours ago
May 03 19:41:55.458 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-9-185.us-east-2.compute.internal" not ready since 2024-05-03 19:41:55 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 19:42:27.884 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-9-185.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 19:42:19.828517       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 19:42:19.828846       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714765339 cert, and key in /tmp/serving-cert-3599014428/serving-signer.crt, /tmp/serving-cert-3599014428/serving-signer.key\nStaticPodsDegraded: I0503 19:42:20.060848       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 19:42:20.062450       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-9-185.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 19:42:20.062601       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 19:42:20.063219       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3599014428/tls.crt::/tmp/serving-cert-3599014428/tls.key"\nStaticPodsDegraded: F0503 19:42:20.236522       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786453173351747584junit19 hours ago
I0503 19:34:11.669816       1 observer_polling.go:159] Starting file observer
W0503 19:34:11.684928       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-26-30.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:34:11.685059       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786453173179781120junit19 hours ago
cause/Error code/2 reason/ContainerExit rver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 18:15:51.629220       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-555qrcsb-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.68.114:6443: connect: connection refused
I0503 18:15:51.974475       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786453173179781120junit19 hours ago
I0503 18:25:53.241210       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 18:25:55.697155       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-555qrcsb-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.68.114:6443: connect: connection refused
I0503 18:25:59.908368       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786453172991037440 junit (19 hours ago)
I0503 19:39:06.137154       1 observer_polling.go:159] Starting file observer
W0503 19:39:06.158564       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-109.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:39:06.159148       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786453173041369088 junit (19 hours ago)
May 03 19:16:50.658 - 30s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-107-230.us-west-1.compute.internal" not ready since 2024-05-03 19:14:50 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 19:17:21.355 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-107-230.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 19:17:11.937758       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 19:17:11.938016       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714763831 cert, and key in /tmp/serving-cert-4207144753/serving-signer.crt, /tmp/serving-cert-4207144753/serving-signer.key\nStaticPodsDegraded: I0503 19:17:12.470271       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 19:17:12.472526       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-107-230.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 19:17:12.472674       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 19:17:12.473452       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4207144753/tls.crt::/tmp/serving-cert-4207144753/tls.key"\nStaticPodsDegraded: F0503 19:17:12.693512       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 19:22:41.658 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-123-142.us-west-1.compute.internal" not ready since 2024-05-03 19:20:41 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786453173041369088 junit (19 hours ago)
I0503 19:17:10.548282       1 observer_polling.go:159] Starting file observer
W0503 19:17:10.562990       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-107-230.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:17:10.563118       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786453172949094400 junit (19 hours ago)
I0503 19:22:10.475583       1 observer_polling.go:159] Starting file observer
W0503 19:22:10.485276       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-33-60.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:22:10.485584       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786453173322387456 junit (19 hours ago)
May 03 19:31:41.708 - 11s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-32-182.us-east-2.compute.internal" not ready since 2024-05-03 19:31:21 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 19:31:53.426 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-32-182.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 19:31:46.277210       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 19:31:46.277654       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714764706 cert, and key in /tmp/serving-cert-3512061148/serving-signer.crt, /tmp/serving-cert-3512061148/serving-signer.key\nStaticPodsDegraded: I0503 19:31:46.937225       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 19:31:46.939853       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-32-182.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 19:31:46.939986       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 19:31:46.940621       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3512061148/tls.crt::/tmp/serving-cert-3512061148/tls.key"\nStaticPodsDegraded: F0503 19:31:47.115293       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 19:37:21.207 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-77-197.us-east-2.compute.internal" not ready since 2024-05-03 19:37:13 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786453173322387456 junit (19 hours ago)
I0503 19:31:44.545701       1 observer_polling.go:159] Starting file observer
W0503 19:31:44.562661       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-32-182.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:31:44.562909       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786351897746083840 junit (26 hours ago)
I0503 11:38:34.347026       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 11:41:58.885867       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-1js0zpj7-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.92.139:6443: connect: connection refused
I0503 11:42:11.576433       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786351897746083840 junit (26 hours ago)
I0503 12:45:00.704542       1 observer_polling.go:159] Starting file observer
W0503 12:45:00.720438       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-41-131.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 12:45:00.720581       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786351897670586368 junit (26 hours ago)
I0503 12:52:17.285145       1 observer_polling.go:159] Starting file observer
W0503 12:52:17.307295       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-17-126.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 12:52:17.307477       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786351897792221184 junit (26 hours ago)
May 03 12:43:12.306 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-48-220.us-west-1.compute.internal" not ready since 2024-05-03 12:42:52 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 12:43:27.257 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-48-220.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 12:43:19.550491       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 12:43:19.550751       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714740199 cert, and key in /tmp/serving-cert-3014424643/serving-signer.crt, /tmp/serving-cert-3014424643/serving-signer.key\nStaticPodsDegraded: I0503 12:43:19.908749       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 12:43:19.910471       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-48-220.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 12:43:19.910573       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 12:43:19.911244       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3014424643/tls.crt::/tmp/serving-cert-3014424643/tls.key"\nStaticPodsDegraded: W0503 12:43:22.061178       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  
Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nStaticPodsDegraded: F0503 12:43:22.061245       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 12:49:05.301 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-55-2.us-west-1.compute.internal" not ready since 2024-05-03 12:47:05 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786351897708335104 junit (26 hours ago)
I0503 12:43:10.805117       1 observer_polling.go:159] Starting file observer
W0503 12:43:10.815897       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-196.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 12:43:10.816047       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786351897842552832 junit (26 hours ago)
I0503 12:54:07.506169       1 observer_polling.go:159] Starting file observer
W0503 12:54:07.524408       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-141.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 12:54:07.524568       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786351897976770560 junit (26 hours ago)
May 03 12:48:21.582 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-108-92.ec2.internal" not ready since 2024-05-03 12:46:21 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 12:48:55.254 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-108-92.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 12:48:46.106853       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 12:48:46.107050       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714740526 cert, and key in /tmp/serving-cert-4268552757/serving-signer.crt, /tmp/serving-cert-4268552757/serving-signer.key\nStaticPodsDegraded: I0503 12:48:46.360398       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 12:48:46.361800       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-92.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 12:48:46.361939       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 12:48:46.362581       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4268552757/tls.crt::/tmp/serving-cert-4268552757/tls.key"\nStaticPodsDegraded: F0503 12:48:46.659421       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 12:54:53.717 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-51-6.ec2.internal" not ready since 2024-05-03 12:54:40 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786351897976770560 junit (26 hours ago)
cause/Error code/2 reason/ContainerExit map_cafile_content.go:206
E0503 11:36:30.958104       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-5p6rx4qf-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.98.234:6443: connect: connection refused
I0503 11:36:31.399842       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786351897884495872 junit (26 hours ago)
May 03 12:39:23.506 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-8-223.us-west-2.compute.internal" not ready since 2024-05-03 12:39:02 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 12:39:35.628 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-8-223.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 12:39:28.145295       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 12:39:28.145522       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714739968 cert, and key in /tmp/serving-cert-1593357202/serving-signer.crt, /tmp/serving-cert-1593357202/serving-signer.key\nStaticPodsDegraded: I0503 12:39:28.517795       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 12:39:28.519626       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-8-223.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 12:39:28.519802       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 12:39:28.520507       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1593357202/tls.crt::/tmp/serving-cert-1593357202/tls.key"\nStaticPodsDegraded: F0503 12:39:28.818841       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 12:45:22.737 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-91-98.us-west-2.compute.internal" not ready since 2024-05-03 12:45:02 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786351897884495872 junit (26 hours ago)
May 03 12:50:49.052 - 38s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-87-186.us-west-2.compute.internal" not ready since 2024-05-03 12:48:49 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 12:51:27.726 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-87-186.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 12:51:20.049249       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 12:51:20.049490       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714740680 cert, and key in /tmp/serving-cert-3388985120/serving-signer.crt, /tmp/serving-cert-3388985120/serving-signer.key\nStaticPodsDegraded: I0503 12:51:20.415992       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 12:51:20.417843       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-87-186.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 12:51:20.418060       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 12:51:20.418939       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3388985120/tls.crt::/tmp/serving-cert-3388985120/tls.key"\nStaticPodsDegraded: F0503 12:51:20.616385       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786351897632837632 junit (26 hours ago)
E0503 11:34:16.466665       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-bn4lyjtx-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0503 11:34:45.713882       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-bn4lyjtx-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.20.216:6443: connect: connection refused
I0503 11:34:51.691127       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786351897632837632 junit (26 hours ago)
I0503 12:39:25.172778       1 observer_polling.go:159] Starting file observer
W0503 12:39:25.197783       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-50-60.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 12:39:25.197911       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786351897590894592 junit (26 hours ago)
I0503 12:40:21.489300       1 observer_polling.go:159] Starting file observer
W0503 12:40:21.502170       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-123-8.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 12:40:21.502271       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786260441782030336 junit (32 hours ago)
I0503 06:44:52.809079       1 observer_polling.go:159] Starting file observer
W0503 06:44:52.824573       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-152.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 06:44:52.824770       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786260441542955008 junit (32 hours ago)
I0503 06:44:56.860355       1 observer_polling.go:159] Starting file observer
W0503 06:44:56.877127       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-40.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 06:44:56.877221       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786260441383571456 junit (32 hours ago)
May 03 06:38:59.580 - 30s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-1-167.us-east-2.compute.internal" not ready since 2024-05-03 06:36:59 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 06:39:29.724 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-1-167.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 06:39:23.222666       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 06:39:23.222865       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714718363 cert, and key in /tmp/serving-cert-3282588068/serving-signer.crt, /tmp/serving-cert-3282588068/serving-signer.key\nStaticPodsDegraded: I0503 06:39:23.488755       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 06:39:23.490350       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-1-167.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 06:39:23.490521       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 06:39:23.491220       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3282588068/tls.crt::/tmp/serving-cert-3282588068/tls.key"\nStaticPodsDegraded: F0503 06:39:23.852030       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 06:44:27.571 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-85-3.us-east-2.compute.internal" not ready since 2024-05-03 06:42:27 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786260441383571456 junit (32 hours ago)
I0503 06:39:22.098244       1 observer_polling.go:159] Starting file observer
W0503 06:39:22.112322       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-1-167.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 06:39:22.112461       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786260441639424000 junit (32 hours ago)
May 03 06:50:01.337 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-125-90.us-west-2.compute.internal" not ready since 2024-05-03 06:49:41 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 06:50:15.627 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-125-90.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 06:50:05.262948       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 06:50:05.263589       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714719005 cert, and key in /tmp/serving-cert-3238393175/serving-signer.crt, /tmp/serving-cert-3238393175/serving-signer.key\nStaticPodsDegraded: I0503 06:50:05.990317       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 06:50:06.017572       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-125-90.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 06:50:06.017736       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 06:50:06.047993       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3238393175/tls.crt::/tmp/serving-cert-3238393175/tls.key"\nStaticPodsDegraded: F0503 06:50:06.358798       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 06:55:13.646 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-17-135.us-west-2.compute.internal" not ready since 2024-05-03 06:53:13 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786260441639424000 junit (32 hours ago)
E0503 05:42:19.064868       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-660gzqp1-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": context deadline exceeded
E0503 05:43:19.213245       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-660gzqp1-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.98.53:6443: connect: connection refused
I0503 05:43:22.281598       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786260441698144256 junit 32 hours ago
I0503 06:29:32.241686       1 observer_polling.go:159] Starting file observer
W0503 06:29:32.255583       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-20-13.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 06:29:32.255718       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786260441505206272 junit 32 hours ago
E0503 05:25:49.333782       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-2xnf5nh5-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0503 05:26:18.159871       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-2xnf5nh5-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.57.27:6443: connect: connection refused
I0503 05:26:56.366419       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786260441505206272 junit 32 hours ago
I0503 06:38:26.329651       1 observer_polling.go:159] Starting file observer
W0503 06:38:26.344268       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-14-129.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 06:38:26.344443       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786260441341628416 junit 32 hours ago
May 03 06:30:54.296 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-10-210.us-east-2.compute.internal" not ready since 2024-05-03 06:30:42 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 06:31:04.962 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-10-210.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 06:30:56.706778       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 06:30:56.706987       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714717856 cert, and key in /tmp/serving-cert-3011741443/serving-signer.crt, /tmp/serving-cert-3011741443/serving-signer.key\nStaticPodsDegraded: I0503 06:30:57.085840       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 06:30:57.087460       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-210.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 06:30:57.087618       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 06:30:57.088277       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3011741443/tls.crt::/tmp/serving-cert-3011741443/tls.key"\nStaticPodsDegraded: F0503 06:30:57.551785       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 06:36:14.279 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-124-151.us-east-2.compute.internal" not ready since 2024-05-03 06:36:05 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786260441341628416 junit 32 hours ago
I0503 06:30:55.234234       1 observer_polling.go:159] Starting file observer
W0503 06:30:55.246267       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-210.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 06:30:55.246400       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786260441593286656 junit 32 hours ago
I0503 06:26:36.172947       1 observer_polling.go:159] Starting file observer
W0503 06:26:36.192402       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-201.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 06:26:36.192718       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786170760432193536 junit 37 hours ago
cause/Error code/2 reason/ContainerExit -client@1714692378\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714692378\" (2024-05-02 22:26:18 +0000 UTC to 2025-05-02 22:26:18 +0000 UTC (now=2024-05-02 23:31:32.162540533 +0000 UTC))"
E0502 23:33:33.775231       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-kg6f94iy-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.93.29:6443: connect: connection refused
I0502 23:35:30.555550       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786170760432193536 junit 37 hours ago
I0502 23:43:13.908660       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0502 23:43:16.321882       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-kg6f94iy-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.0.141:6443: connect: connection refused
I0502 23:43:17.851253       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786170755403223040 junit 38 hours ago
I0502 23:27:46.401089       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1714692155\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714692155\" (2024-05-02 22:22:35 +0000 UTC to 2025-05-02 22:22:35 +0000 UTC (now=2024-05-02 23:27:46.401070052 +0000 UTC))"
E0502 23:32:00.093980       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-654d3z0k-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.36.93:6443: connect: connection refused
I0502 23:32:09.917261       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786170755403223040 junit 38 hours ago
I0503 00:55:00.147934       1 observer_polling.go:159] Starting file observer
W0503 00:55:00.169119       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-21-68.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 00:55:00.169300       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786170768837578752 junit 38 hours ago
I0503 00:46:39.561516       1 observer_polling.go:159] Starting file observer
W0503 00:46:39.575499       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-0-74.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 00:46:39.575697       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786170757919805440 junit 38 hours ago
I0502 23:35:56.564201       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1714692619\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714692619\" (2024-05-02 22:30:19 +0000 UTC to 2025-05-02 22:30:19 +0000 UTC (now=2024-05-02 23:35:56.564183493 +0000 UTC))"
E0502 23:39:34.594336       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-9mtz4nx8-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.50.192:6443: connect: connection refused
E0502 23:40:09.198178       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-9mtz4nx8-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.86.41:6443: connect: connection refused

... 1 lines not shown

#1786170752051974144 junit 38 hours ago
I0503 00:46:24.286144       1 observer_polling.go:159] Starting file observer
W0503 00:46:24.302342       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-26-44.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 00:46:24.302460       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786170762109915136 junit 38 hours ago
May 03 00:50:07.021 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-86-248.ec2.internal" not ready since 2024-05-03 00:50:05 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 00:50:39.562 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-86-248.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 00:50:30.738939       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 00:50:30.739205       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714697430 cert, and key in /tmp/serving-cert-2394772640/serving-signer.crt, /tmp/serving-cert-2394772640/serving-signer.key\nStaticPodsDegraded: I0503 00:50:30.957513       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 00:50:30.959368       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-86-248.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 00:50:30.959578       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 00:50:30.960337       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2394772640/tls.crt::/tmp/serving-cert-2394772640/tls.key"\nStaticPodsDegraded: F0503 00:50:31.173645       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 00:55:45.116 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-98-61.ec2.internal" not ready since 2024-05-03 00:55:39 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786170763791831040 junit 38 hours ago
I0503 00:42:39.706021       1 observer_polling.go:159] Starting file observer
W0503 00:42:39.730045       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-0-248.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 00:42:39.730158       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786170771366744064 junit 38 hours ago
I0503 00:34:35.809854       1 observer_polling.go:159] Starting file observer
W0503 00:34:35.817232       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-31-14.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 00:34:35.817335       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786170766308413440 junit 38 hours ago
I0503 00:43:30.931140       1 observer_polling.go:159] Starting file observer
W0503 00:43:30.947872       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-116-98.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 00:43:30.948071       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786052766230122496 junit 45 hours ago
I0502 17:07:20.861394       1 observer_polling.go:159] Starting file observer
W0502 17:07:20.872703       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-104-188.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 17:07:20.872935       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786052774618730496 junit 45 hours ago
I0502 15:45:15.604596       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0502 15:52:31.322948       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-tbs0gsmn-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.75.95:6443: connect: connection refused
I0502 15:53:02.927379       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786052774618730496 junit 45 hours ago
I0502 16:56:09.232048       1 observer_polling.go:159] Starting file observer
W0502 16:56:09.253651       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-65-50.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:56:09.253756       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786052760379068416 junit 45 hours ago
I0502 16:48:25.586322       1 observer_polling.go:159] Starting file observer
W0502 16:48:25.600536       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-20-190.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:48:25.600670       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786052767899455488 junit 45 hours ago
I0502 15:50:04.915314       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0502 15:54:16.554685       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-7z55rqfg-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.1.151:6443: connect: connection refused
I0502 15:54:44.687898       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786052767899455488 junit 45 hours ago
I0502 16:56:43.506168       1 observer_polling.go:159] Starting file observer
W0502 16:56:43.534546       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-107-228.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:56:43.534679       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786052763713540096 junit 45 hours ago
I0502 16:59:12.325442       1 observer_polling.go:159] Starting file observer
W0502 16:59:12.337439       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-243.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:59:12.337598       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786052758718124032 junit 46 hours ago
May 02 16:59:46.168 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-57-215.us-east-2.compute.internal" not ready since 2024-05-02 16:59:27 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:59:59.147 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-57-215.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:59:50.882889       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:59:50.883093       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714669190 cert, and key in /tmp/serving-cert-3328798566/serving-signer.crt, /tmp/serving-cert-3328798566/serving-signer.key\nStaticPodsDegraded: I0502 16:59:51.106504       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:59:51.107977       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-57-215.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:59:51.108091       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 16:59:51.108710       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3328798566/tls.crt::/tmp/serving-cert-3328798566/tls.key"\nStaticPodsDegraded: F0502 16:59:51.465245       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1786052758718124032 junit 46 hours ago
I0502 16:59:49.984727       1 observer_polling.go:159] Starting file observer
W0502 16:59:50.002296       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-57-215.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:59:50.002408       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786052771259092992 junit 46 hours ago
May 02 16:45:44.227 - 18s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-34-140.us-east-2.compute.internal" not ready since 2024-05-02 16:45:27 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:46:02.364 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-34-140.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:45:53.809127       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:45:53.809312       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714668353 cert, and key in /tmp/serving-cert-726400263/serving-signer.crt, /tmp/serving-cert-726400263/serving-signer.key\nStaticPodsDegraded: I0502 16:45:53.985811       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:45:53.988044       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-34-140.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:45:53.988212       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 16:45:53.988794       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-726400263/tls.crt::/tmp/serving-cert-726400263/tls.key"\nStaticPodsDegraded: F0502 16:45:54.256283       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 16:51:13.169 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-50-0.us-east-2.compute.internal" not ready since 2024-05-02 16:50:53 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786052771259092992 junit 46 hours ago
cause/Error code/2 reason/ContainerExit ent@1714664100\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714664100\" (2024-05-02 14:35:00 +0000 UTC to 2025-05-02 14:35:00 +0000 UTC (now=2024-05-02 15:39:35.611931706 +0000 UTC))"
E0502 15:44:01.493538       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-w3z72dqc-b0249.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.101.169:6443: connect: connection refused
I0502 15:44:12.783973       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786052769589760000 junit 46 hours ago
I0502 16:54:32.710295       1 observer_polling.go:159] Starting file observer
W0502 16:54:32.720745       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-34-106.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:54:32.720863       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

periodic-ci-openshift-release-master-nightly-4.17-e2e-aws-ovn-single-node-serial (all) - 11 runs, 55% failed, 183% of failures match = 100% impact
#1786736847347519488 junit 45 minutes ago
May 04 14:35:20.349 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: Patch "https://api-int.ci-op-ij12cyij-33479.aws-2.ci.openshift.org:6443/apis/apps/v1/namespaces/openshift-network-operator/daemonsets/iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 10.0.125.201:6443: connect: connection refused (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)
May 04 14:35:20.349 - 4s    E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: Patch "https://api-int.ci-op-ij12cyij-33479.aws-2.ci.openshift.org:6443/apis/apps/v1/namespaces/openshift-network-operator/daemonsets/iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 10.0.125.201:6443: connect: connection refused (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)

... 1 lines not shown

#1786701241405935616 junit 3 hours ago
2024-05-04T11:53:32Z node/ip-10-0-25-31.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mtltdv0c-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-31.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.85.103:6443: connect: connection refused
2024-05-04T11:53:32Z node/ip-10-0-25-31.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mtltdv0c-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-31.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.85.103:6443: connect: connection refused

... 14 lines not shown

#1786665474554073088 junit 6 hours ago
2024-05-04T09:34:44Z node/ip-10-0-91-173.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4zg8h9vc-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-91-173.ec2.internal?timeout=10s - dial tcp 10.0.104.45:6443: connect: connection refused
2024-05-04T09:34:44Z node/ip-10-0-91-173.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4zg8h9vc-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-91-173.ec2.internal?timeout=10s - dial tcp 10.0.51.66:6443: connect: connection refused

... 14 lines not shown

#1786579258110382080 junit 12 hours ago
I0504 02:38:55.582746       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0504 02:39:50.386449       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-hnd21sq0-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.46.238:6443: connect: connection refused
I0504 02:40:22.109870       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786579258110382080 junit 12 hours ago
2024-05-04T03:42:16Z node/ip-10-0-79-33.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hnd21sq0-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-33.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.99.15:6443: connect: connection refused
2024-05-04T03:42:16Z node/ip-10-0-79-33.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hnd21sq0-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-33.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.46.238:6443: connect: connection refused

... 14 lines not shown

#1786479716421603328junit18 hours ago
2024-05-03T21:40:54Z node/ip-10-0-40-213.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4j3ip8yp-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-213.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.45.8:6443: connect: connection refused
2024-05-03T21:40:54Z node/ip-10-0-40-213.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4j3ip8yp-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-213.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.70.168:6443: connect: connection refused

... 14 lines not shown

#1786371383077376000junit25 hours ago
2024-05-03T14:16:32Z node/ip-10-0-33-188.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-smifyy2g-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-33-188.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.53.238:6443: connect: connection refused
2024-05-03T14:16:32Z node/ip-10-0-33-188.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-smifyy2g-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-33-188.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.65.120:6443: connect: connection refused

... 14 lines not shown

#1786321216152276992junit29 hours ago
namespace/openshift-kube-controller-manager node/ip-10-0-97-26.us-east-2.compute.internal pod/kube-controller-manager-ip-10-0-97-26.us-east-2.compute.internal uid/d26e532b-37e4-48b2-bcae-cd9e4276d9cf container/kube-controller-manager mirror-uid/e284cddcbce173597f3688be316edfa2 restarted 1 times:
cause/Error code/1 reason/ContainerExit /cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: Get "https://api-int.ci-op-4xf793w6-33479.aws-2.ci.openshift.org:6443/apis/hello.example.com/v1alpha1/hellos?resourceVersion=59865": dial tcp 10.0.5.156:6443: connect: connection refused
E0503 10:48:26.432941       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: Get "https://api-int.ci-op-4xf793w6-33479.aws-2.ci.openshift.org:6443/apis/hello.example.com/v1alpha1/hellos?resourceVersion=59865": dial tcp 10.0.5.156:6443: connect: connection refused

... 4 lines not shown

#1786282440352862208junit31 hours ago
May 03 08:26:08.065 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: Patch "https://api-int.ci-op-zqpp3mzg-33479.aws-2.ci.openshift.org:6443/apis/apps/v1/namespaces/openshift-network-operator/daemonsets/iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 10.0.107.232:6443: connect: connection refused (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)
May 03 08:26:08.065 - 9s    E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: Patch "https://api-int.ci-op-zqpp3mzg-33479.aws-2.ci.openshift.org:6443/apis/apps/v1/namespaces/openshift-network-operator/daemonsets/iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 10.0.107.232:6443: connect: connection refused (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)

... 1 lines not shown

#1786206927533903872junit36 hours ago
namespace/openshift-cloud-controller-manager node/ip-10-0-81-159.us-east-2.compute.internal pod/aws-cloud-controller-manager-6bc987b668-2hdt6 uid/d104ae7d-4c6c-4fa0-bd41-7932c5834121 container/cloud-controller-manager restarted 1 times:
cause/Error code/2 reason/ContainerExit i-op-cty6ldwh-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.56.223:6443: connect: connection refused
E0503 02:03:48.897152       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-cty6ldwh-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.56.223:6443: connect: connection refused

... 1 lines not shown

#1786242415674265600junit33 hours ago
I0503 04:26:10.674745       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 04:27:22.726940       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-80nrjcjm-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.20.211:6443: connect: connection refused
I0503 04:27:50.854166       1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
#1786242415674265600junit33 hours ago
namespace/openshift-kube-controller-manager node/ip-10-0-97-111.us-west-2.compute.internal pod/kube-controller-manager-ip-10-0-97-111.us-west-2.compute.internal uid/f4ed4ec3-6f66-43fd-96ff-8fa4386e7e20 container/kube-controller-manager mirror-uid/5f4c7458d3411211be402c1fd28bcbd7 restarted 1 times:
cause/Error code/1 reason/ContainerExit ://api-int.ci-op-80nrjcjm-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.20.211:6443: connect: connection refused
E0503 05:33:39.143345       1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-80nrjcjm-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.20.211:6443: connect: connection refused

... 3 lines not shown

#1786173191366905856junit38 hours ago
I0502 23:49:52.605139       1 event.go:376] "Event occurred" object="openshift-ingress/router-default" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
E0502 23:53:44.823168       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-5n5qjx9r-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.13.41:6443: connect: connection refused
I0502 23:54:09.168756       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786173191366905856junit38 hours ago
2024-05-03T01:02:30Z node/ip-10-0-59-60.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5n5qjx9r-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-60.ec2.internal?timeout=10s - dial tcp 10.0.13.41:6443: connect: connection refused
2024-05-03T01:02:30Z node/ip-10-0-59-60.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5n5qjx9r-33479.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-60.ec2.internal?timeout=10s - dial tcp 10.0.109.202:6443: connect: connection refused

... 14 lines not shown

periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade (all) - 4 runs, 50% failed, 200% of failures match = 100% impact
#1786716214475624448junit2 hours ago
May 04 13:03:58.147 E ns/e2e-k8s-sig-apps-daemonset-upgrade-890 pod/ds1-kjwww node/ip-10-0-173-110.us-west-1.compute.internal uid/ff5997c8-8f7e-4d99-a5f8-ac72b8a73fb0 container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 04 13:04:00.195 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-173-110.us-west-1.compute.internal node/ip-10-0-173-110.us-west-1.compute.internal uid/d8065b21-3de4-4857-a010-bbf8469dac68 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0504 13:03:58.639303       1 cmd.go:216] Using insecure, self-signed certificates\nI0504 13:03:58.710183       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714827838 cert, and key in /tmp/serving-cert-4069761952/serving-signer.crt, /tmp/serving-cert-4069761952/serving-signer.key\nI0504 13:03:59.546651       1 observer_polling.go:159] Starting file observer\nW0504 13:03:59.588826       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-173-110.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0504 13:03:59.589037       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0504 13:03:59.589641       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4069761952/tls.crt::/tmp/serving-cert-4069761952/tls.key"\nF0504 13:03:59.933247       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 04 13:04:01.339 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-173-110.us-west-1.compute.internal node/ip-10-0-173-110.us-west-1.compute.internal uid/d8065b21-3de4-4857-a010-bbf8469dac68 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0504 13:03:58.639303       1 cmd.go:216] Using insecure, self-signed certificates\nI0504 13:03:58.710183       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714827838 cert, and key in /tmp/serving-cert-4069761952/serving-signer.crt, /tmp/serving-cert-4069761952/serving-signer.key\nI0504 13:03:59.546651       1 observer_polling.go:159] Starting file observer\nW0504 13:03:59.588826       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-173-110.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0504 13:03:59.589037       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0504 13:03:59.589641       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4069761952/tls.crt::/tmp/serving-cert-4069761952/tls.key"\nF0504 13:03:59.933247       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1786526166857814016junit14 hours ago
May 04 00:13:56.579 E ns/openshift-network-diagnostics pod/network-check-target-wlzkz node/ip-10-0-166-8.ec2.internal uid/802d0dc9-2871-4860-b3f9-dc735a59f06f container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 04 00:13:57.840 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-166-8.ec2.internal node/ip-10-0-166-8.ec2.internal uid/80594141-fb94-4f12-966c-8cf3a678cf94 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0504 00:13:51.115438       1 cmd.go:216] Using insecure, self-signed certificates\nI0504 00:13:51.132591       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714781631 cert, and key in /tmp/serving-cert-3046182475/serving-signer.crt, /tmp/serving-cert-3046182475/serving-signer.key\nI0504 00:13:51.590500       1 observer_polling.go:159] Starting file observer\nW0504 00:13:51.599452       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-166-8.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0504 00:13:51.599612       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0504 00:13:51.605586       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3046182475/tls.crt::/tmp/serving-cert-3046182475/tls.key"\nW0504 00:13:56.277741       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0504 00:13:56.277798       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
May 04 00:13:57.924 E ns/openshift-dns pod/dns-default-nt64d node/ip-10-0-166-8.ec2.internal uid/43f35906-9a20-4edb-a169-f4a83375b7b3 container/dns reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1786526166857814016junit14 hours ago
May 04 00:13:58.994 E ns/openshift-multus pod/network-metrics-daemon-dw2fd node/ip-10-0-166-8.ec2.internal uid/4ce455a6-fcf3-430b-8c46-41a9ebec1c42 container/network-metrics-daemon reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 04 00:14:01.198 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-166-8.ec2.internal node/ip-10-0-166-8.ec2.internal uid/80594141-fb94-4f12-966c-8cf3a678cf94 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0504 00:13:51.115438       1 cmd.go:216] Using insecure, self-signed certificates\nI0504 00:13:51.132591       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714781631 cert, and key in /tmp/serving-cert-3046182475/serving-signer.crt, /tmp/serving-cert-3046182475/serving-signer.key\nI0504 00:13:51.590500       1 observer_polling.go:159] Starting file observer\nW0504 00:13:51.599452       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-166-8.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0504 00:13:51.599612       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0504 00:13:51.605586       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3046182475/tls.crt::/tmp/serving-cert-3046182475/tls.key"\nW0504 00:13:56.277741       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0504 00:13:56.277798       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
May 04 00:14:01.283 E ns/e2e-k8s-sig-apps-daemonset-upgrade-2214 pod/ds1-dklgw node/ip-10-0-166-8.ec2.internal uid/fca8f194-7355-4c72-9aad-80c5940c85d2 container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1786480167808405504junit17 hours ago
May 03 21:26:22.369 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-dbhgj node/ip-10-0-185-46.us-east-2.compute.internal uid/67a51948-9681-4555-b7d6-9e41aa5ab627 container/csi-driver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 21:26:28.454 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-185-46.us-east-2.compute.internal node/ip-10-0-185-46.us-east-2.compute.internal uid/b689776e-5579-4ca5-83f3-fc801f142065 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 21:26:26.886509       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 21:26:26.919603       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714771586 cert, and key in /tmp/serving-cert-2769651795/serving-signer.crt, /tmp/serving-cert-2769651795/serving-signer.key\nI0503 21:26:27.382328       1 observer_polling.go:159] Starting file observer\nW0503 21:26:27.397833       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-185-46.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 21:26:27.398056       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0503 21:26:27.410136       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2769651795/tls.crt::/tmp/serving-cert-2769651795/tls.key"\nF0503 21:26:27.792521       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 03 21:26:31.527 E ns/openshift-network-diagnostics pod/network-check-target-gg7ng node/ip-10-0-185-46.us-east-2.compute.internal uid/98c072a1-629e-4118-8924-930faeb670eb container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 3 lines not shown

#1786341754975293440junit27 hours ago
May 03 12:14:13.045 E ns/openshift-multus pod/network-metrics-daemon-k2stx node/ip-10-0-150-221.ec2.internal uid/ada3db9d-e533-48d3-bf5a-2a7efd44f03d container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 12:14:13.079 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-150-221.ec2.internal node/ip-10-0-150-221.ec2.internal uid/c2fad699-8160-473c-828b-b19c56c97bb6 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 12:14:07.292449       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 12:14:07.304135       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714738447 cert, and key in /tmp/serving-cert-3121057453/serving-signer.crt, /tmp/serving-cert-3121057453/serving-signer.key\nI0503 12:14:07.978310       1 observer_polling.go:159] Starting file observer\nW0503 12:14:07.987643       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-150-221.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 12:14:07.987791       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0503 12:14:07.996745       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3121057453/tls.crt::/tmp/serving-cert-3121057453/tls.key"\nW0503 12:14:12.552420       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0503 12:14:12.552472       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
May 03 12:14:13.098 E ns/openshift-dns pod/dns-default-fvv59 node/ip-10-0-150-221.ec2.internal uid/fd816d79-aa51-466b-a5bd-21606dc8c38c container/dns reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 3 lines not shown

pull-ci-openshift-service-ca-operator-master-e2e-aws-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1786729620402343936junit2 hours ago
I0504 12:31:14.460334       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1714825590\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714825589\" (2024-05-04 11:26:29 +0000 UTC to 2025-05-04 11:26:29 +0000 UTC (now=2024-05-04 12:31:14.460316386 +0000 UTC))"
E0504 12:34:51.366265       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-giv22kp4-c507f.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.26.39:6443: connect: connection refused
I0504 12:35:11.497142       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786729620402343936junit2 hours ago
I0504 13:39:51.572134       1 observer_polling.go:159] Starting file observer
W0504 13:39:51.583868       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-117-220.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 13:39:51.584004       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
periodic-ci-openshift-multiarch-master-nightly-4.16-upgrade-from-nightly-4.15-ocp-e2e-aws-ovn-heterogeneous-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1786709922109460480junit3 hours ago
May 04 12:35:56.420 - 40s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-80-177.us-west-2.compute.internal" not ready since 2024-05-04 12:33:56 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 12:36:36.985 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-80-177.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 12:36:29.834387       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 12:36:29.834628       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714826189 cert, and key in /tmp/serving-cert-3383987326/serving-signer.crt, /tmp/serving-cert-3383987326/serving-signer.key\nStaticPodsDegraded: I0504 12:36:29.989852       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 12:36:29.991343       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-80-177.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 12:36:29.991494       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 12:36:29.992106       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3383987326/tls.crt::/tmp/serving-cert-3383987326/tls.key"\nStaticPodsDegraded: F0504 12:36:30.124414       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 12:43:33.126 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-8-25.us-west-2.compute.internal" not ready since 2024-05-04 12:43:18 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786709922109460480junit3 hours ago
I0504 12:29:18.820593       1 observer_polling.go:159] Starting file observer
W0504 12:29:18.832635       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-34.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 12:29:18.832797       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
periodic-ci-openshift-release-master-okd-scos-4.16-e2e-aws-ovn (all) - 12 runs, 100% failed, 100% of failures match = 100% impact
#1786700142972243968junit5 hours ago
d, skipping gathering clusterroles.rbac.authorization.k8s.io/machine-api-operator due to error: clusterroles.rbac.authorization.k8s.io "machine-api-operator" not found, skipping gathering clusterroles.rbac.authorization.k8s.io/machine-api-controllers due to error: clusterroles.rbac.authorization.k8s.io "machine-api-controllers" not found, skipping gathering machineconfigpools.machineconfiguration.openshift.io due to error: the server doesn't have a resource type "machineconfigpools", skipping gathering controllerconfigs.machineconfiguration.openshift.io due to error: the server doesn't have a resource type "controllerconfigs", skipping gathering machineconfigs.machineconfiguration.openshift.io due to error: the server doesn't have a resource type "machineconfigs", skipping gathering kubeletconfigs.machineconfiguration.openshift.io due to error: the server doesn't have a resource type "kubeletconfigs", skipping gathering configs.samples.operator.openshift.io/cluster due to error: configs.samples.operator.openshift.io "cluster" not found, skipping gathering templates.template.openshift.io due to error: the server doesn't have a resource type "templates", skipping gathering imagestreams.image.openshift.io due to error: the server doesn't have a resource type "imagestreams", skipping gathering clusterserviceversions.operators.coreos.com/packageserver due to error: clusterserviceversions.operators.coreos.com "packageserver" not found]
error: gather did not start for pod must-gather-v2f29: Get "https://api.ci-op-0lns4gm1-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-2wtjd/pods/must-gather-v2f29": dial tcp 100.27.91.116:6443: connect: connection refused
{"component":"entrypoint","error":"wrapped process failed: exit status 1","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:84","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-05-04T11:04:37Z"}
#1786682113412567040junit6 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-aws-console container test
E0504 09:54:00.144092      34 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-bz0c0xv2-5c62f.aws-2.ci.openshift.org:6443/api?timeout=5s": dial tcp 3.135.151.39:6443: connect: connection refused
The connection to the server api.ci-op-bz0c0xv2-5c62f.aws-2.ci.openshift.org:6443 was refused - did you specify the right host or port?
#1786682113412567040junit6 hours ago
[must-gather      ] OUT pod for plug-in image registry.redhat.io/openshift4/ose-must-gather:latest created
[must-gather-kt68c] OUT gather did not start: Get "https://api.ci-op-bz0c0xv2-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-6x6ln/pods/must-gather-kt68c": dial tcp 18.224.234.168:6443: connect: connection refused
Delete "https://api.ci-op-bz0c0xv2-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-6x6ln": dial tcp 18.224.234.168:6443: connect: connection refused

... 2 lines not shown

#1786662219363127296junit7 hours ago
[must-gather      ] OUT pod for plug-in image registry.redhat.io/openshift4/ose-must-gather:latest created
[must-gather-5njqw] OUT gather did not start: Get "https://api.ci-op-ksr4yypy-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-zpzfq/pods/must-gather-5njqw": dial tcp 44.223.141.253:6443: connect: connection refused
Delete "https://api.ci-op-ksr4yypy-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-zpzfq": dial tcp 44.223.141.253:6443: connect: connection refused

... 2 lines not shown

#1786518340911501312junit17 hours ago
error: creating temp namespace: Post "https://api.ci-op-kjp6kdjs-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces": dial tcp 3.12.157.2:6443: connect: connection refused
{"component":"entrypoint","error":"wrapped process failed: exit status 1","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:84","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-05-03T23:02:59Z"}
#1786518340911501312junit17 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-aws-console container test
E0503 23:01:48.545843      32 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-kjp6kdjs-5c62f.aws-2.ci.openshift.org:6443/api?timeout=5s": dial tcp 3.12.157.2:6443: connect: connection refused
aws-cli/2.15.42 Python/3.11.8 Linux/5.14.0-284.64.1.el9_2.x86_64 exe/x86_64.rhel.8 prompt/off
#1786501928088244224junit18 hours ago
Gathering artifacts ...
E0503 22:00:52.281151      33 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-x6p7c0kk-5c62f.aws-2.ci.openshift.org:6443/api?timeout=5s": dial tcp 54.241.39.0:6443: connect: connection refused
The connection to the server api.ci-op-x6p7c0kk-5c62f.aws-2.ci.openshift.org:6443 was refused - did you specify the right host or port?
#1786501928088244224junit18 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-must-gather container test
https://api.ci-op-x6p7c0kk-5c62f.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 54.241.39.0:6443: connect: connection refused
E0503 22:00:22.527534      36 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-x6p7c0kk-5c62f.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 54.241.39.0:6443: connect: connection refused

... 7 lines not shown

#1786481117705015296junit19 hours ago
[must-gather      ] OUT pod for plug-in image registry.redhat.io/openshift4/ose-must-gather:latest created
[must-gather-trz2k] OUT gather did not start: Get "https://api.ci-op-6qb01p5l-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-czcjh/pods/must-gather-trz2k": dial tcp 35.166.7.50:6443: connect: connection refused
Delete "https://api.ci-op-6qb01p5l-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-czcjh": dial tcp 35.166.7.50:6443: connect: connection refused

... 2 lines not shown

#1786335750275469312junit29 hours ago
[must-gather      ] OUT pod for plug-in image registry.redhat.io/openshift4/ose-must-gather:latest created
[must-gather-8nkkw] OUT gather did not start: Get "https://api.ci-op-0srjm0s5-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-72wcf/pods/must-gather-8nkkw": dial tcp 13.58.27.0:6443: connect: connection refused
Delete "https://api.ci-op-0srjm0s5-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-72wcf": dial tcp 13.58.27.0:6443: connect: connection refused

... 2 lines not shown

#1786319678780477440junit30 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-must-gather container test
api.ci-op-bdiyilww-5c62f.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 3.217.52.224:6443: connect: connection refused
E0503 09:55:35.856878      37 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-bdiyilww-5c62f.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 3.217.52.224:6443: connect: connection refused

... 7 lines not shown

#1786299864460562432junit31 hours ago
[must-gather      ] OUT pod for plug-in image registry.redhat.io/openshift4/ose-must-gather:latest created
[must-gather-296w5] OUT gather did not start: Get "https://api.ci-op-g3t5fb39-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-6p8zf/pods/must-gather-296w5": dial tcp 44.232.191.240:6443: connect: connection refused
Delete "https://api.ci-op-g3t5fb39-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-6p8zf": dial tcp 44.232.191.240:6443: connect: connection refused

... 2 lines not shown

#1786142700479713280junit42 hours ago
[must-gather      ] OUT pod for plug-in image registry.redhat.io/openshift4/ose-must-gather:latest created
[must-gather-vnhqm] OUT gather did not start: Get "https://api.ci-op-iqs22dzl-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-slwb2/pods/must-gather-vnhqm": dial tcp 3.19.133.63:6443: connect: connection refused
Delete "https://api.ci-op-iqs22dzl-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-slwb2": dial tcp 3.19.133.63:6443: connect: connection refused

... 2 lines not shown

#1786124542905683968junit43 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-aws-console container test
E0502 20:59:01.250410      33 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-735w3i8j-5c62f.aws-2.ci.openshift.org:6443/api?timeout=5s": dial tcp 3.231.241.140:6443: connect: connection refused
E0502 20:59:01.255236      33 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-735w3i8j-5c62f.aws-2.ci.openshift.org:6443/api?timeout=5s": dial tcp 3.231.241.140:6443: connect: connection refused

... 1 line not shown

#1786109273613275136junit44 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-aws-console container test
E0502 19:57:51.332209      34 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-ksk8696g-5c62f.aws-2.ci.openshift.org:6443/api?timeout=5s": dial tcp 54.146.229.231:6443: connect: connection refused
aws-cli/2.15.42 Python/3.11.8 Linux/5.14.0-284.62.1.el9_2.x86_64 exe/x86_64.rhel.8 prompt/off
#1786109273613275136junit44 hours ago
[must-gather      ] OUT pod for plug-in image registry.redhat.io/openshift4/ose-must-gather:latest created
[must-gather-dsh5h] OUT gather did not start: Get "https://api.ci-op-ksk8696g-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-9cmkk/pods/must-gather-dsh5h": dial tcp 54.146.229.231:6443: connect: connection refused
Delete "https://api.ci-op-ksk8696g-5c62f.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-9cmkk": dial tcp 54.146.229.231:6443: connect: connection refused

... 2 lines not shown

periodic-ci-openshift-release-master-nightly-4.17-upgrade-from-stable-4.16-e2e-aws-sdn-upgrade (all) - 6 runs, 0% failed, 100% of runs match
#1786665754406424576junit5 hours ago
May 04 09:22:17.294 - 8s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-22-44.us-west-1.compute.internal" not ready since 2024-05-04 09:21:55 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 09:22:26.259 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-22-44.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 09:22:22.296905       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 09:22:22.297288       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714814542 cert, and key in /tmp/serving-cert-4198380353/serving-signer.crt, /tmp/serving-cert-4198380353/serving-signer.key\nStaticPodsDegraded: I0504 09:22:23.023783       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 09:22:23.042271       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-22-44.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 09:22:23.042380       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 09:22:23.071401       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4198380353/tls.crt::/tmp/serving-cert-4198380353/tls.key"\nStaticPodsDegraded: F0504 09:22:23.341853       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 04 09:29:20.290 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-77-203.us-west-1.compute.internal" not ready since 2024-05-04 09:29:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786665754406424576junit5 hours ago
I0504 09:22:23.023783       1 observer_polling.go:159] Starting file observer
W0504 09:22:23.042271       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-22-44.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 09:22:23.042380       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786579359612538880junit11 hours ago
I0504 03:50:30.673065       1 observer_polling.go:159] Starting file observer
W0504 03:50:30.692722       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-106-74.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 03:50:30.692856       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786480025206263808junit17 hours ago
I0503 21:11:12.885653       1 observer_polling.go:159] Starting file observer
W0503 21:11:12.903058       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-214.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 21:11:12.903198       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786371267302002688junit24 hours ago
May 03 13:52:30.186 - 24s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-89-56.us-west-2.compute.internal" not ready since 2024-05-03 13:50:30 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 13:52:54.993 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-89-56.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 13:52:51.034547       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 13:52:51.034793       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714744371 cert, and key in /tmp/serving-cert-742803121/serving-signer.crt, /tmp/serving-cert-742803121/serving-signer.key\nStaticPodsDegraded: I0503 13:52:51.706647       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 13:52:51.721342       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-89-56.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 13:52:51.721546       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 13:52:51.745058       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-742803121/tls.crt::/tmp/serving-cert-742803121/tls.key"\nStaticPodsDegraded: F0503 13:52:52.321074       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 13:59:32.110 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-75-106.us-west-2.compute.internal" not ready since 2024-05-03 13:57:32 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786282711275540480junit30 hours ago
I0503 07:59:23.341195       1 observer_polling.go:159] Starting file observer
W0503 07:59:23.357391       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-52-82.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 07:59:23.357505       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786173271897542656junit37 hours ago
I0503 00:51:06.637415       1 observer_polling.go:159] Starting file observer
W0503 00:51:06.658175       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-20-29.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 00:51:06.658355       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

periodic-ci-openshift-release-master-ci-4.17-e2e-aws-ovn-upgrade (all) - 66 runs, 14% failed, 444% of failures match = 61% impact
#1786665524055248896junit5 hours ago
E0504 08:16:54.977132       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-25f94kgy-d0d4d.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0504 08:17:52.851399       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-25f94kgy-d0d4d.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.9.129:6443: connect: connection refused
E0504 08:18:33.383603       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-25f94kgy-d0d4d.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.83.158:6443: connect: connection refused

... 1 lines not shown

#1786665528249552896junit5 hours ago
I0504 09:06:09.069159       1 observer_polling.go:159] Starting file observer
W0504 09:06:09.089002       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-38-87.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 09:06:09.089164       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786665533282717696junit5 hours ago
I0504 09:23:37.922082       1 observer_polling.go:159] Starting file observer
W0504 09:23:37.939877       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-29.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0504 09:23:37.940019       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786665419197648896junit5 hours ago
May 04 09:09:01.619 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-20-17.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-20-17.us-east-2.compute.internal_openshift-kube-apiserver(9b1a56e3dac22a48b29503d79f2081a2) (exception: Degraded=False is the happy case)
May 04 09:13:46.529 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-203.us-east-2.compute.internal" not ready since 2024-05-04 09:13:19 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-100-203.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 09:13:41.808154       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 09:13:41.808558       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714814021 cert, and key in /tmp/serving-cert-1065547005/serving-signer.crt, /tmp/serving-cert-1065547005/serving-signer.key\nStaticPodsDegraded: I0504 09:13:42.282698       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 09:13:42.301274       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-203.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 09:13:42.301376       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 09:13:42.342823       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1065547005/tls.crt::/tmp/serving-cert-1065547005/tls.key"\nStaticPodsDegraded: F0504 09:13:42.469780       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 09:13:46.529 - 3s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-203.us-east-2.compute.internal" not ready since 2024-05-04 09:13:19 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-100-203.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 09:13:41.808154       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 09:13:41.808558       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714814021 cert, and key in /tmp/serving-cert-1065547005/serving-signer.crt, /tmp/serving-cert-1065547005/serving-signer.key\nStaticPodsDegraded: I0504 09:13:42.282698       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 09:13:42.301274       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-203.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 09:13:42.301376       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 09:13:42.342823       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1065547005/tls.crt::/tmp/serving-cert-1065547005/tls.key"\nStaticPodsDegraded: F0504 09:13:42.469780       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 2 lines not shown

#1786665516505501696junit5 hours ago
May 04 08:59:20.297 - 23s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-26-152.ec2.internal" not ready since 2024-05-04 08:59:12 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 08:59:43.363 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-26-152.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 08:59:34.691479       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 08:59:34.691820       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714813174 cert, and key in /tmp/serving-cert-807207408/serving-signer.crt, /tmp/serving-cert-807207408/serving-signer.key\nStaticPodsDegraded: I0504 08:59:35.141255       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 08:59:35.153621       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-26-152.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 08:59:35.153805       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 08:59:35.179323       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-807207408/tls.crt::/tmp/serving-cert-807207408/tls.key"\nStaticPodsDegraded: F0504 08:59:35.576753       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 09:04:07.300 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-4-146.ec2.internal" not ready since 2024-05-04 09:04:00 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786665525732970496junit5 hours ago
May 04 09:05:14.106 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-57-29.us-west-2.compute.internal" not ready since 2024-05-04 09:05:02 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 09:05:23.198 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-57-29.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 09:05:16.414718       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 09:05:16.415032       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714813516 cert, and key in /tmp/serving-cert-2303836590/serving-signer.crt, /tmp/serving-cert-2303836590/serving-signer.key\nStaticPodsDegraded: I0504 09:05:16.701841       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 09:05:16.703495       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-57-29.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 09:05:16.703645       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 09:05:16.704312       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2303836590/tls.crt::/tmp/serving-cert-2303836590/tls.key"\nStaticPodsDegraded: F0504 09:05:16.925281       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 09:10:06.114 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-0-185.us-west-2.compute.internal" not ready since 2024-05-04 09:09:53 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786665521534472192junit5 hours ago
May 04 09:03:25.290 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-101-103.us-west-1.compute.internal" not ready since 2024-05-04 09:03:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 09:03:34.438 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-101-103.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 09:03:26.409316       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 09:03:26.409503       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714813406 cert, and key in /tmp/serving-cert-3072043200/serving-signer.crt, /tmp/serving-cert-3072043200/serving-signer.key\nStaticPodsDegraded: I0504 09:03:26.631455       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 09:03:26.632910       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-103.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 09:03:26.633016       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 09:03:26.633644       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3072043200/tls.crt::/tmp/serving-cert-3072043200/tls.key"\nStaticPodsDegraded: F0504 09:03:26.907007       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 04 09:07:52.770 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-7-125.us-west-1.compute.internal" not ready since 2024-05-04 09:05:52 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786665521534472192junit5 hours ago
I0504 09:03:25.233679       1 observer_polling.go:159] Starting file observer
W0504 09:03:25.255557       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-103.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 09:03:25.255675       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786665535799300096junit5 hours ago
May 04 08:55:35.380 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-31-108.us-east-2.compute.internal" not ready since 2024-05-04 08:55:22 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 08:55:44.828 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-31-108.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 08:55:35.707568       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 08:55:35.707974       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714812935 cert, and key in /tmp/serving-cert-3248426528/serving-signer.crt, /tmp/serving-cert-3248426528/serving-signer.key\nStaticPodsDegraded: I0504 08:55:36.180429       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 08:55:36.191194       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-31-108.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 08:55:36.193665       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 08:55:36.210946       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3248426528/tls.crt::/tmp/serving-cert-3248426528/tls.key"\nStaticPodsDegraded: F0504 08:55:36.494248       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 09:00:02.633 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-85-218.us-east-2.compute.internal" not ready since 2024-05-04 08:58:02 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786665535799300096 junit 5 hours ago
May 04 09:00:33.914 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-85-218.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-85-218.us-east-2.compute.internal_openshift-kube-apiserver(066d955d7086f425a76361e1531476ed) (exception: Degraded=False is the happy case)
May 04 09:05:37.677 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-43-136.us-east-2.compute.internal" not ready since 2024-05-04 09:05:10 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-43-136.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 09:05:33.200496       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 09:05:33.200874       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714813533 cert, and key in /tmp/serving-cert-609196589/serving-signer.crt, /tmp/serving-cert-609196589/serving-signer.key\nStaticPodsDegraded: I0504 09:05:33.869591       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 09:05:33.882910       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-43-136.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 09:05:33.883036       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 09:05:33.897571       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-609196589/tls.crt::/tmp/serving-cert-609196589/tls.key"\nStaticPodsDegraded: F0504 09:05:34.365013       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 09:05:37.677 - 4s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-43-136.us-east-2.compute.internal" not ready since 2024-05-04 09:05:10 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-43-136.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 09:05:33.200496       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 09:05:33.200874       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714813533 cert, and key in /tmp/serving-cert-609196589/serving-signer.crt, /tmp/serving-cert-609196589/serving-signer.key\nStaticPodsDegraded: I0504 09:05:33.869591       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 09:05:33.882910       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-43-136.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 09:05:33.883036       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 09:05:33.897571       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-609196589/tls.crt::/tmp/serving-cert-609196589/tls.key"\nStaticPodsDegraded: F0504 09:05:34.365013       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 1 lines not shown

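The clusteroperator condition lines in these excerpts share one shape: a timestamp, an optional `- <duration>` interval marker, a severity letter (`E`/`W`/`I`), the operator locator, then `condition/…`, `reason/…`, and `status/…` fields followed by the message. A minimal sketch of a parser for that shape (the regex and field names are my own labels for illustration, not part of any OpenShift tooling):

```python
import re

# Illustrative parser for monitor interval lines such as:
#   "May 04 08:55:35.380 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/... status/True <message>"
# Pattern and group names are assumptions based on the lines shown here, not an official format spec.
EVENT_RE = re.compile(
    r'^(?P<time>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{3})'  # "May 04 08:55:35.380"
    r'(?: - (?P<duration>\S+))?'                         # optional interval length, e.g. "9s"
    r'\s+(?P<level>[EWI])'                               # severity letter
    r' clusteroperator/(?P<operator>\S+)'
    r' condition/(?P<condition>\S+)'
    r' reason/(?P<reason>\S+)'
    r' status/(?P<status>\S+)'
    r' (?P<message>.*)$'
)

def parse_event(line: str) -> dict:
    """Split a monitor event line into its named fields."""
    m = EVENT_RE.match(line)
    if m is None:
        raise ValueError(f"unrecognized event line: {line[:60]!r}")
    return m.groupdict()
```

Applied to the 9s `KubeletNotReady` blip above, this yields `operator='kube-apiserver'`, `condition='Degraded'`, `status='True'`, and `duration='9s'`; point events (no `- <duration>`) leave `duration` as `None`.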
#1786665519017889792 junit 6 hours ago
I0504 09:01:36.262718       1 observer_polling.go:159] Starting file observer
W0504 09:01:36.278322       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-110-186.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0504 09:01:36.278449       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786665530766135296 junit 5 hours ago
May 04 09:05:57.034 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-0-34.ec2.internal" not ready since 2024-05-04 09:05:32 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 09:06:06.503 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-0-34.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 09:05:59.305348       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 09:05:59.305612       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714813559 cert, and key in /tmp/serving-cert-2259521746/serving-signer.crt, /tmp/serving-cert-2259521746/serving-signer.key\nStaticPodsDegraded: I0504 09:05:59.581110       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 09:05:59.583274       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-0-34.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 09:05:59.583412       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 09:05:59.584029       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2259521746/tls.crt::/tmp/serving-cert-2259521746/tls.key"\nStaticPodsDegraded: F0504 09:06:00.096979       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 09:10:54.067 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-66-77.ec2.internal" not ready since 2024-05-04 09:08:54 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786665530766135296 junit 5 hours ago
I0504 08:17:07.806995       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1714810627\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714810627\" (2024-05-04 07:17:07 +0000 UTC to 2025-05-04 07:17:07 +0000 UTC (now=2024-05-04 08:17:07.806974853 +0000 UTC))"
E0504 08:17:38.849208       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-7hid5hwp-d0d4d.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.107.237:6443: connect: connection refused
I0504 08:17:51.987426       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786665538311688192 junit 5 hours ago
I0504 09:12:17.413241       1 observer_polling.go:159] Starting file observer
W0504 09:12:17.428989       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-112-190.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 09:12:17.429547       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786371438664486912 junit 24 hours ago
I0503 14:08:58.390273       1 observer_polling.go:159] Starting file observer
W0503 14:08:58.403685       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-107-221.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 14:08:58.403847       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786371361225052160 junit 24 hours ago
I0503 14:00:57.415402       1 observer_polling.go:159] Starting file observer
W0503 14:00:57.430286       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-1-177.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 14:00:57.430484       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786371356284162048 junit 24 hours ago
I0503 14:07:14.955383       1 observer_polling.go:159] Starting file observer
W0503 14:07:14.976055       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-125-79.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 14:07:14.976167       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786371367998853120 junit 24 hours ago
May 03 14:17:12.904 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-113-104.ec2.internal" not ready since 2024-05-03 14:15:11 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 14:17:44.682 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-113-104.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 14:17:34.990555       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 14:17:34.990944       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714745854 cert, and key in /tmp/serving-cert-1974198519/serving-signer.crt, /tmp/serving-cert-1974198519/serving-signer.key\nStaticPodsDegraded: I0503 14:17:35.434119       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 14:17:35.457966       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-113-104.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 14:17:35.458101       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 14:17:35.480841       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1974198519/tls.crt::/tmp/serving-cert-1974198519/tls.key"\nStaticPodsDegraded: F0503 14:17:35.734632       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 14:23:31.237 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-32-160.ec2.internal" not ready since 2024-05-03 14:23:29 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786371353687887872 junit 24 hours ago
E0503 12:59:31.970891       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-pr8rjd43-d0d4d.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0503 13:00:26.492594       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-pr8rjd43-d0d4d.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.37.104:6443: connect: connection refused
I0503 13:00:32.709280       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786371353687887872 junit 24 hours ago
I0503 14:12:25.328965       1 observer_polling.go:159] Starting file observer
W0503 14:12:25.343114       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-45-118.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 14:12:25.343322       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786371370490269696 junit 24 hours ago
I0503 13:59:29.914719       1 observer_polling.go:159] Starting file observer
W0503 13:59:29.926754       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-21-12.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 13:59:29.926935       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786371363750023168 junit 24 hours ago
I0503 13:57:13.272564       1 observer_polling.go:159] Starting file observer
W0503 13:57:13.290562       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-16.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 13:57:13.290670       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786371348637945856 junit 24 hours ago
May 03 14:13:13.257 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-10-178.us-east-2.compute.internal" not ready since 2024-05-03 14:11:13 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 14:13:46.396 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-10-178.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 14:13:37.941351       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 14:13:37.941755       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714745617 cert, and key in /tmp/serving-cert-2593503069/serving-signer.crt, /tmp/serving-cert-2593503069/serving-signer.key\nStaticPodsDegraded: I0503 14:13:38.350988       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 14:13:38.363028       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-178.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 14:13:38.363173       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 14:13:38.382999       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2593503069/tls.crt::/tmp/serving-cert-2593503069/tls.key"\nStaticPodsDegraded: F0503 14:13:38.706143       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786371348637945856 junit 24 hours ago
I0503 12:49:17.545915       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 12:56:23.301536       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-5qi4m7k5-d0d4d.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.109.85:6443: connect: connection refused
I0503 12:56:39.390561       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786371358762995712 junit 25 hours ago
I0503 14:05:37.115401       1 observer_polling.go:159] Starting file observer
W0503 14:05:37.125867       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-114-224.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 14:05:37.126429       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786371366254022656 junit 25 hours ago
I0503 14:02:39.623048       1 observer_polling.go:159] Starting file observer
W0503 14:02:39.636516       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-83.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 14:02:39.636779       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786371351162916864 junit 24 hours ago
I0503 14:06:00.098411       1 observer_polling.go:159] Starting file observer
W0503 14:06:00.114504       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-59-95.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 14:06:00.114636       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786282487308095488 junit 29 hours ago
I0503 07:42:43.313851       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
E0503 07:50:34.170496       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-vfmpyxf4-d0d4d.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.100.164:6443: connect: connection refused
I0503 07:50:57.526101       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172

... 2 lines not shown

#1786282483105402880 junit 30 hours ago
I0503 08:15:29.358894       1 observer_polling.go:159] Starting file observer
W0503 08:15:29.371892       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-30-254.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 08:15:29.372065       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786282492337065984 junit 30 hours ago
I0503 08:03:31.567484       1 observer_polling.go:159] Starting file observer
W0503 08:03:31.575032       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-57.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 08:03:31.575210       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786282484778930176 junit 30 hours ago
I0503 08:07:45.857059       1 observer_polling.go:159] Starting file observer
W0503 08:07:45.870544       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-229.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 08:07:45.870857       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786282478063849472 junit 30 hours ago
May 03 08:04:47.075 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-11-51.us-west-2.compute.internal" not ready since 2024-05-03 08:04:18 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-11-51.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 08:04:43.487536       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 08:04:43.487755       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714723483 cert, and key in /tmp/serving-cert-3380246620/serving-signer.crt, /tmp/serving-cert-3380246620/serving-signer.key\nStaticPodsDegraded: I0503 08:04:43.695522       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 08:04:43.697088       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-11-51.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 08:04:43.697267       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 08:04:43.697857       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3380246620/tls.crt::/tmp/serving-cert-3380246620/tls.key"\nStaticPodsDegraded: F0503 08:04:43.894933       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 08:04:47.075 - 2s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-11-51.us-west-2.compute.internal" not ready since 2024-05-03 08:04:18 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-11-51.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 08:04:43.487536       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 08:04:43.487755       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714723483 cert, and key in /tmp/serving-cert-3380246620/serving-signer.crt, /tmp/serving-cert-3380246620/serving-signer.key\nStaticPodsDegraded: I0503 08:04:43.695522       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 08:04:43.697088       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-11-51.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 08:04:43.697267       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 08:04:43.697857       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3380246620/tls.crt::/tmp/serving-cert-3380246620/tls.key"\nStaticPodsDegraded: F0503 08:04:43.894933       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 2 lines not shown

#1786282489849843712 junit 30 hours ago
I0503 08:07:27.273590       1 observer_polling.go:159] Starting file observer
W0503 08:07:27.288451       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-26-109.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 08:07:27.288618       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786282478894321664junit30 hours ago
I0503 08:02:11.626527       1 observer_polling.go:159] Starting file observer
W0503 08:02:11.638955       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-112-126.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 08:02:11.639088       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786282479749959680junit31 hours ago
May 03 07:54:26.497 - 26s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-55-97.ec2.internal" not ready since 2024-05-03 07:52:26 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 07:54:53.217 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-55-97.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 07:54:47.456388       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 07:54:47.456733       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714722887 cert, and key in /tmp/serving-cert-1682667379/serving-signer.crt, /tmp/serving-cert-1682667379/serving-signer.key\nStaticPodsDegraded: I0503 07:54:47.780475       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 07:54:47.782608       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-55-97.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 07:54:47.782785       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 07:54:47.783863       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1682667379/tls.crt::/tmp/serving-cert-1682667379/tls.key"\nStaticPodsDegraded: F0503 07:54:48.031522       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 07:59:19.489 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-88-30.ec2.internal" not ready since 2024-05-03 07:59:10 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786282479749959680junit31 hours ago
May 03 08:04:22.398 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-24-7.ec2.internal" not ready since 2024-05-03 08:04:11 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 08:04:32.118 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-24-7.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 08:04:23.460365       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 08:04:23.461312       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714723463 cert, and key in /tmp/serving-cert-4151445077/serving-signer.crt, /tmp/serving-cert-4151445077/serving-signer.key\nStaticPodsDegraded: I0503 08:04:24.227961       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 08:04:24.244026       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-24-7.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 08:04:24.244170       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 08:04:24.269761       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4151445077/tls.crt::/tmp/serving-cert-4151445077/tls.key"\nStaticPodsDegraded: F0503 08:04:24.712651       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1786282570468560896junit31 hours ago
I0503 08:05:40.359435       1 observer_polling.go:159] Starting file observer
W0503 08:05:40.368928       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-46-113.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 08:05:40.369179       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786172885174325248junit36 hours ago
I0503 01:34:45.005177       1 observer_polling.go:159] Starting file observer
W0503 01:34:45.023414       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-164.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 01:34:45.023639       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786172900298985472junit37 hours ago
I0503 00:49:59.994771       1 observer_polling.go:159] Starting file observer
W0503 00:50:00.004437       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-27-226.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 00:50:00.004592       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786172890207490048junit37 hours ago
May 03 01:03:38.084 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-20-181.ec2.internal" not ready since 2024-05-03 01:03:23 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 01:03:44.546 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-20-181.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 01:03:38.090947       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 01:03:38.091229       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714698218 cert, and key in /tmp/serving-cert-1585653477/serving-signer.crt, /tmp/serving-cert-1585653477/serving-signer.key\nStaticPodsDegraded: I0503 01:03:38.525288       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 01:03:38.526810       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-20-181.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 01:03:38.526937       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 01:03:38.527540       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1585653477/tls.crt::/tmp/serving-cert-1585653477/tls.key"\nStaticPodsDegraded: F0503 01:03:38.845485       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786172890207490048junit37 hours ago
I0503 00:53:52.058292       1 observer_polling.go:159] Starting file observer
W0503 00:53:52.085617       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-221.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 00:53:52.085712       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786172905306984448junit37 hours ago
May 03 01:02:53.097 - 27s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-108-221.us-west-1.compute.internal" not ready since 2024-05-03 01:00:53 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 01:03:20.509 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-108-221.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 01:03:13.344525       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 01:03:13.344770       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714698193 cert, and key in /tmp/serving-cert-3926701748/serving-signer.crt, /tmp/serving-cert-3926701748/serving-signer.key\nStaticPodsDegraded: I0503 01:03:13.650052       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 01:03:13.651573       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-221.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 01:03:13.651709       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 01:03:13.652278       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3926701748/tls.crt::/tmp/serving-cert-3926701748/tls.key"\nStaticPodsDegraded: F0503 01:03:13.827734       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 01:07:58.181 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-50-167.us-west-1.compute.internal" not ready since 2024-05-03 01:07:41 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786172905306984448junit37 hours ago
I0502 23:43:39.051111       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0502 23:43:52.037682       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-4dcwr85w-d0d4d.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.127.232:6443: connect: connection refused
I0502 23:47:38.398921       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786172892724072448junit38 hours ago
May 03 00:50:23.930 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-49-54.ec2.internal container "kube-apiserver-check-endpoints" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-49-54.ec2.internal_openshift-kube-apiserver(1e89f6eeba89d7a1261dfc8a47e43418)\nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 00:55:04.161 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-105-59.ec2.internal" not ready since 2024-05-03 00:54:38 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-105-59.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 00:55:00.200974       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 00:55:00.201460       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714697700 cert, and key in /tmp/serving-cert-4050331227/serving-signer.crt, /tmp/serving-cert-4050331227/serving-signer.key\nStaticPodsDegraded: I0503 00:55:00.849364       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 00:55:00.866951       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-59.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 00:55:00.867093       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 00:55:00.889625       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4050331227/tls.crt::/tmp/serving-cert-4050331227/tls.key"\nStaticPodsDegraded: F0503 00:55:01.072724       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 00:55:04.161 - 5s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-105-59.ec2.internal" not ready since 2024-05-03 00:54:38 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-105-59.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 00:55:00.200974       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 00:55:00.201460       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714697700 cert, and key in /tmp/serving-cert-4050331227/serving-signer.crt, /tmp/serving-cert-4050331227/serving-signer.key\nStaticPodsDegraded: I0503 00:55:00.849364       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 00:55:00.866951       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-59.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 00:55:00.867093       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 00:55:00.889625       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4050331227/tls.crt::/tmp/serving-cert-4050331227/tls.key"\nStaticPodsDegraded: F0503 00:55:01.072724       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 2 lines not shown

#1786172983342010368junit38 hours ago
May 03 00:48:36.575 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-26-242.us-west-1.compute.internal" not ready since 2024-05-03 00:48:24 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 00:48:47.238 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-26-242.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 00:48:38.131306       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 00:48:38.131537       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714697318 cert, and key in /tmp/serving-cert-3637075161/serving-signer.crt, /tmp/serving-cert-3637075161/serving-signer.key\nStaticPodsDegraded: I0503 00:48:38.914181       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 00:48:38.927771       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-26-242.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 00:48:38.927925       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 00:48:38.953115       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3637075161/tls.crt::/tmp/serving-cert-3637075161/tls.key"\nStaticPodsDegraded: F0503 00:48:39.195785       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 00:53:12.165 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-97-91.us-west-1.compute.internal" not ready since 2024-05-03 00:53:06 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786172983342010368junit38 hours ago
May 03 00:58:10.645 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-46-154.us-west-1.compute.internal" not ready since 2024-05-03 00:57:47 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 00:58:21.423 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-46-154.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 00:58:14.254311       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 00:58:14.254570       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714697894 cert, and key in /tmp/serving-cert-2731767454/serving-signer.crt, /tmp/serving-cert-2731767454/serving-signer.key\nStaticPodsDegraded: I0503 00:58:14.482218       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 00:58:14.483813       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-46-154.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 00:58:14.483942       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 00:58:14.484495       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2731767454/tls.crt::/tmp/serving-cert-2731767454/tls.key"\nStaticPodsDegraded: F0503 00:58:14.597190       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786172902819762176junit38 hours ago
May 03 00:50:42.200 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-107-102.us-west-1.compute.internal" not ready since 2024-05-03 00:50:30 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 00:50:53.120 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-107-102.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 00:50:46.118852       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 00:50:46.119131       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714697446 cert, and key in /tmp/serving-cert-631714997/serving-signer.crt, /tmp/serving-cert-631714997/serving-signer.key\nStaticPodsDegraded: I0503 00:50:46.279257       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 00:50:46.280750       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-107-102.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 00:50:46.280859       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 00:50:46.281428       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-631714997/tls.crt::/tmp/serving-cert-631714997/tls.key"\nStaticPodsDegraded: F0503 00:50:46.436532       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 00:55:30.845 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-5-4.us-west-1.compute.internal" not ready since 2024-05-03 00:55:05 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786172902819762176junit38 hours ago
I0502 23:43:15.938359       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0502 23:50:19.952843       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-nfi359d3-d0d4d.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.74.211:6443: connect: connection refused
I0502 23:50:22.796455       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786172907831955456junit38 hours ago
May 03 00:41:20.339 - 5s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-113-241.ec2.internal" not ready since 2024-05-03 00:41:05 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 00:41:25.372 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-113-241.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 00:41:19.913119       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 00:41:19.913306       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714696879 cert, and key in /tmp/serving-cert-3287259535/serving-signer.crt, /tmp/serving-cert-3287259535/serving-signer.key\nStaticPodsDegraded: I0503 00:41:20.127840       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 00:41:20.129424       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-113-241.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 00:41:20.129566       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 00:41:20.130120       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3287259535/tls.crt::/tmp/serving-cert-3287259535/tls.key"\nStaticPodsDegraded: F0503 00:41:20.330474       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 00:45:57.336 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-112-225.ec2.internal" not ready since 2024-05-03 00:43:57 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786172897753042944junit38 hours ago
I0503 00:51:52.976578       1 observer_polling.go:159] Starting file observer
W0503 00:51:52.989740       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-106-245.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 00:51:52.989891       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

periodic-ci-openshift-release-master-nightly-4.16-e2e-aws-ovn-single-node-serial (all) - 11 runs, 45% failed, 220% of failures match = 100% impact
#1786671437176639488 junit 6 hours ago
I0504 08:46:04.950999       1 node_controller.go:267] Update 1 nodes status took 121.565565ms.
E0504 08:46:36.413207       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-icbs7f9k-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.94.9:6443: connect: connection refused
I0504 08:47:30.502558       1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
#1786671437176639488 junit 6 hours ago
namespace/openshift-kube-controller-manager node/ip-10-0-76-28.us-east-2.compute.internal pod/kube-controller-manager-ip-10-0-76-28.us-east-2.compute.internal uid/9d98d530-60d4-4852-8de4-a5ab5b7af673 container/kube-controller-manager mirror-uid/d63445fe551d93aaed533918a32c0d4f restarted 1 times:
cause/Error code/1 reason/ContainerExit  Get "https://api-int.ci-op-icbs7f9k-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.94.9:6443: connect: connection refused
E0504 09:53:32.540818       1 resource_quota_controller.go:440] failed to discover resources: Get "https://api-int.ci-op-icbs7f9k-6d920.aws-2.ci.openshift.org:6443/api": dial tcp 10.0.60.74:6443: connect: connection refused

... 2 lines not shown

#1786633724150943744 junit 8 hours ago
namespace/openshift-kube-controller-manager node/ip-10-0-106-254.us-west-2.compute.internal pod/kube-controller-manager-ip-10-0-106-254.us-west-2.compute.internal uid/6c7d785d-b8ac-4908-8a5d-6264e878546c container/kube-controller-manager mirror-uid/8c695bfc954d6c07065162a1248f4c8b restarted 1 times:
cause/Error code/1 reason/ContainerExit os?resourceVersion=55174": dial tcp 10.0.68.210:6443: connect: connection refused
E0504 07:30:07.637795       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: Get "https://api-int.ci-op-czmjjwry-6d920.aws-2.ci.openshift.org:6443/apis/hello.example.com/v1alpha1/hellos?resourceVersion=55174": dial tcp 10.0.68.210:6443: connect: connection refused

... 5 lines not shown

#1786596206072303616 junit 10 hours ago
namespace/openshift-cloud-controller-manager node/ip-10-0-114-186.us-west-1.compute.internal pod/aws-cloud-controller-manager-5968f44bf9-wnvbq uid/618894f7-4211-4363-9da2-7e5c7434a029 container/cloud-controller-manager restarted 1 times:
cause/Error code/2 reason/ContainerExit shift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.81.113:6443: connect: connection refused
I0504 03:55:03.630981       1 node_controller.go:267] Update 1 nodes status took 149.073509ms.
#1786596206072303616 junit 10 hours ago
2024-05-04T05:05:24Z node/ip-10-0-114-186.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jly0t8m1-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-186.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.81.113:6443: connect: connection refused
2024-05-04T05:05:24Z node/ip-10-0-114-186.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jly0t8m1-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-186.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.59.160:6443: connect: connection refused

... 14 lines not shown

#1786523780147843072 junit 15 hours ago
2024-05-04T00:12:27Z node/ip-10-0-76-170.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-bywkdq6i-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-170.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.4.148:6443: connect: connection refused
2024-05-04T00:12:27Z node/ip-10-0-76-170.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-bywkdq6i-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-170.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.64.225:6443: connect: connection refused

... 14 lines not shown

#1786438877871869952 junit 21 hours ago
namespace/openshift-kube-controller-manager node/ip-10-0-44-133.us-west-2.compute.internal pod/kube-controller-manager-ip-10-0-44-133.us-west-2.compute.internal uid/353a7c8f-ecad-4127-ac0c-8bdf038e873a container/kube-controller-manager mirror-uid/38e11d8a5c600f5b308330e585af3091 restarted 1 times:
cause/Error code/1 reason/ContainerExit .bar.com/v1/foos?resourceVersion=68402": dial tcp 10.0.38.221:6443: connect: connection refused
E0503 18:46:14.038556       1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-4v23vyy3-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.92.238:6443: connect: connection refused

... 5 lines not shown

#1786404006843650048 junit 23 hours ago
I0503 15:06:04.619500       1 node_controller.go:267] Update 1 nodes status took 137.105447ms.
E0503 15:06:16.371692       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-19ypcpgg-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.54.174:6443: connect: connection refused
I0503 15:06:50.717262       1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
#1786404006843650048 junit 23 hours ago
namespace/openshift-kube-controller-manager node/ip-10-0-40-229.us-west-1.compute.internal pod/kube-controller-manager-ip-10-0-40-229.us-west-1.compute.internal uid/4cf5449a-b457-4e08-99b4-cfe42f9f46f4 container/kube-controller-manager mirror-uid/0567657b12ae8e57d23a5e88f7429645 restarted 1 times:
cause/Error code/1 reason/ContainerExit gg-6d920.aws-2.ci.openshift.org:6443/apis/custom.fancy.com/v2/pants?resourceVersion=67189": dial tcp 10.0.91.120:6443: connect: connection refused
E0503 16:28:36.381107       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: Get "https://api-int.ci-op-19ypcpgg-6d920.aws-2.ci.openshift.org:6443/apis/custom.fancy.com/v2/pants?resourceVersion=67189": dial tcp 10.0.91.120:6443: connect: connection refused

... 5 lines not shown

#1786325479838453760 junit 28 hours ago
I0503 09:53:13.447098       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 09:54:25.199615       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-s8cz09ci-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.37.249:6443: connect: connection refused
I0503 09:55:31.289585       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786325479838453760 junit 28 hours ago
namespace/openshift-kube-controller-manager node/ip-10-0-110-75.us-west-2.compute.internal pod/kube-controller-manager-ip-10-0-110-75.us-west-2.compute.internal uid/dafd081e-13ba-476d-963f-90e93187a9f3 container/kube-controller-manager mirror-uid/18bb9ee7b6e04fad275efbf5bbd95017 restarted 1 times:
cause/Error code/1 reason/ContainerExit mespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.78.48:6443: connect: connection refused
W0503 11:16:55.594864       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: Get "https://api-int.ci-op-s8cz09ci-6d920.aws-2.ci.openshift.org:6443/apis/populator.storage.k8s.io/v1beta1/volumepopulators?resourceVersion=65930": dial tcp 10.0.78.48:6443: connect: connection refused

... 4 lines not shown

#1786198500015345664 junit 37 hours ago
2024-05-03T02:47:27Z node/ip-10-0-126-95.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-80vxj6rj-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-95.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.83.170:6443: connect: connection refused
2024-05-03T02:47:27Z node/ip-10-0-126-95.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-80vxj6rj-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-95.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.39.206:6443: connect: connection refused

... 14 lines not shown

#1786162417491775488 junit 39 hours ago
namespace/openshift-kube-controller-manager node/ip-10-0-15-179.us-west-1.compute.internal pod/kube-controller-manager-ip-10-0-15-179.us-west-1.compute.internal uid/6f2a6187-6fda-4df7-802c-978199d4bdd2 container/kube-controller-manager mirror-uid/e83daa9082bcef1897337f4b67f16341 restarted 1 times:
cause/Error code/1 reason/ContainerExit -op-iwhsdf0n-6d920.aws-2.ci.openshift.org:6443/api": dial tcp 10.0.55.103:6443: connect: connection refused
E0503 00:27:23.122669       1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-iwhsdf0n-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.86.55:6443: connect: connection refused

... 5 lines not shown

#1786120814295257088 junit 41 hours ago
2024-05-02T21:55:30Z node/ip-10-0-92-124.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mpimhr51-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-92-124.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.11.127:6443: connect: connection refused
2024-05-02T21:55:30Z node/ip-10-0-92-124.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mpimhr51-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-92-124.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.11.127:6443: connect: connection refused

... 14 lines not shown

#1786049504739332096 junit 46 hours ago
E0502 16:46:45.059301       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0502 16:47:02.572044       1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-cjrki4sm-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.42.103:6443: connect: connection refused
E0502 16:47:05.576399       1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-cjrki4sm-6d920.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.42.103:6443: connect: connection refused

... 3 lines not shown

periodic-ci-openshift-release-master-ci-4.17-e2e-aws-ovn-upgrade-out-of-change (all) - 6 runs, 17% failed, 400% of failures match = 67% impact
#1786665503922589696 junit 6 hours ago
I0504 09:05:54.282568       1 observer_polling.go:159] Starting file observer
W0504 09:05:54.299039       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-25-41.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 09:05:54.299164       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786371399821037568 junit 24 hours ago
I0503 13:01:58.694859       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0503 13:02:00.390941       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-dhcqt1lq-44d33.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.65.4:6443: connect: connection refused
I0503 13:02:40.577669       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786371399821037568 junit 24 hours ago
I0503 14:09:30.443200       1 observer_polling.go:159] Starting file observer
W0503 14:09:30.456903       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-8.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 14:09:30.457100       1 builder.go:299] check-endpoints version 4.17.0-202404300240.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786282499886813184 junit 30 hours ago
I0503 08:16:03.684972       1 observer_polling.go:159] Starting file observer
W0503 08:16:03.693583       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-190.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 08:16:03.693857       1 builder.go:299] check-endpoints version 4.17.0-202404300240.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786173193036238848 junit 38 hours ago
I0503 00:53:17.281946       1 observer_polling.go:159] Starting file observer
W0503 00:53:17.299547       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-120.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 00:53:17.299744       1 builder.go:299] check-endpoints version 4.17.0-202404300240.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

pull-ci-openshift-oc-master-e2e-aws-ovn-upgrade (all) - 5 runs, 20% failed, 400% of failures match = 80% impact
#1786654172347633664 junit 6 hours ago
I0504 09:13:05.271734       1 observer_polling.go:159] Starting file observer
W0504 09:13:05.289126       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-113-33.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0504 09:13:05.289257       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1786334923372630016 junit 27 hours ago
May 03 12:09:59.294 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-49-98.us-east-2.compute.internal" not ready since 2024-05-03 12:09:39 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 12:10:12.762 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-49-98.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 12:10:04.977966       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 12:10:04.978406       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714738204 cert, and key in /tmp/serving-cert-95416483/serving-signer.crt, /tmp/serving-cert-95416483/serving-signer.key\nStaticPodsDegraded: I0503 12:10:05.177886       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 12:10:05.179330       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-49-98.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 12:10:05.179451       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 12:10:05.180076       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-95416483/tls.crt::/tmp/serving-cert-95416483/tls.key"\nStaticPodsDegraded: F0503 12:10:05.493984       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 12:15:01.847 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-97-144.us-east-2.compute.internal" not ready since 2024-05-03 12:13:01 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786334923372630016 junit 27 hours ago
May 03 12:20:52.056 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-7-190.us-east-2.compute.internal" not ready since 2024-05-03 12:20:33 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 12:21:05.976 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-7-190.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 12:20:58.140517       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 12:20:58.140723       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714738858 cert, and key in /tmp/serving-cert-2597060028/serving-signer.crt, /tmp/serving-cert-2597060028/serving-signer.key\nStaticPodsDegraded: I0503 12:20:58.363971       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 12:20:58.365588       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-7-190.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 12:20:58.365703       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 12:20:58.366310       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2597060028/tls.crt::/tmp/serving-cert-2597060028/tls.key"\nStaticPodsDegraded: F0503 12:20:58.565538       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786298862416171008 junit 30 hours ago
I0503 09:34:00.536650       1 observer_polling.go:159] Starting file observer
W0503 09:34:00.553095       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-126-55.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 09:34:00.553402       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786128492723703808 junit 41 hours ago
I0502 22:11:51.976954       1 observer_polling.go:159] Starting file observer
W0502 22:11:51.990887       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-33-77.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 22:11:51.990997       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

periodic-ci-openshift-release-master-nightly-4.15-e2e-aws-ovn-single-node-serial (all) - 9 runs, 100% failed, 100% of failures match = 100% impact
#1786659303059361792 junit 7 hours ago
2024-05-04T09:03:26Z node/ip-10-0-109-216.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-vmdstqx8-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-216.ec2.internal?timeout=10s - dial tcp 10.0.19.200:6443: connect: connection refused
2024-05-04T09:03:26Z node/ip-10-0-109-216.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-vmdstqx8-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-216.ec2.internal?timeout=10s - dial tcp 10.0.19.200:6443: connect: connection refused

... 14 lines not shown

#1786629530182488064 junit 9 hours ago
2024-05-04T06:53:19Z node/ip-10-0-71-231.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-80klzk43-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-231.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.44.227:6443: connect: connection refused
2024-05-04T06:53:19Z node/ip-10-0-71-231.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-80klzk43-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-231.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.127.119:6443: connect: connection refused

... 14 lines not shown

#1786595440972533760 junit 10 hours ago
May 04 04:50:30.256 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited: failed to get current state of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited: Get "https://api-int.ci-op-1dq5wcff-5a22e.aws-2.ci.openshift.org:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-ovn-kubernetes/rolebindings/openshift-ovn-kubernetes-nodes-identity-limited": dial tcp 10.0.83.36:6443: connect: connection refused (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)
May 04 04:50:30.256 - 47s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited: failed to get current state of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited: Get "https://api-int.ci-op-1dq5wcff-5a22e.aws-2.ci.openshift.org:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-ovn-kubernetes/rolebindings/openshift-ovn-kubernetes-nodes-identity-limited": dial tcp 10.0.83.36:6443: connect: connection refused (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)

... 1 lines not shown

#1786448161611452416 junit 20 hours ago
2024-05-03T19:19:28Z node/ip-10-0-61-228.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-cmzk8lzb-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-228.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.36.53:6443: connect: connection refused
2024-05-03T19:19:28Z node/ip-10-0-61-228.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-cmzk8lzb-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-228.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.102.95:6443: connect: connection refused

... 4 lines not shown

#1786410907018989568 junit 23 hours ago
2024-05-03T16:46:44Z node/ip-10-0-53-70.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-70ssjvwn-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-70.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.116.141:6443: connect: connection refused
2024-05-03T16:46:44Z node/ip-10-0-53-70.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-70ssjvwn-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-70.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.116.141:6443: connect: connection refused

... 14 lines not shown

#1786374303218929664 junit 25 hours ago
2024-05-03T14:34:19Z node/ip-10-0-116-44.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hv4pbmyf-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-44.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.58.36:6443: connect: connection refused
2024-05-03T14:34:19Z node/ip-10-0-116-44.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hv4pbmyf-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-44.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.58.36:6443: connect: connection refused

... 14 lines not shown

#1786109727642488832 junit 42 hours ago
2024-05-02T21:03:27Z node/ip-10-0-20-217.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-262zbnjk-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-217.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.67.163:6443: connect: connection refused
2024-05-02T21:03:27Z node/ip-10-0-20-217.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-262zbnjk-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-217.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.26.195:6443: connect: connection refused

... 14 lines not shown

#1786077491547344896 junit 45 hours ago
2024-05-02T18:31:03Z node/ip-10-0-72-66.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ts92xbri-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-66.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.104.174:6443: connect: connection refused
2024-05-02T18:31:03Z node/ip-10-0-72-66.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ts92xbri-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-66.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.2.229:6443: connect: connection refused

... 14 lines not shown

#1786044836428648448 junit 47 hours ago
2024-05-02T16:21:20Z node/ip-10-0-71-213.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-bfcym03d-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-213.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.112.227:6443: connect: connection refused
2024-05-02T16:21:20Z node/ip-10-0-71-213.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-bfcym03d-5a22e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-213.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.51.14:6443: connect: connection refused

... 4 lines not shown

periodic-ci-openshift-multiarch-master-nightly-4.15-ocp-e2e-aws-ovn-heterogeneous-upgrade (all) - 33 runs, 3% failed, 1000% of failures match = 30% impact
#1786629174681669632 junit 8 hours ago
May 04 07:06:12.929 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-63-18.us-west-2.compute.internal" not ready since 2024-05-04 07:04:12 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 07:06:42.787 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-63-18.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 07:06:33.294153       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 07:06:33.295620       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714806393 cert, and key in /tmp/serving-cert-1500073690/serving-signer.crt, /tmp/serving-cert-1500073690/serving-signer.key\nStaticPodsDegraded: I0504 07:06:33.789797       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 07:06:33.805729       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-63-18.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 07:06:33.805903       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 07:06:33.824753       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1500073690/tls.crt::/tmp/serving-cert-1500073690/tls.key"\nStaticPodsDegraded: F0504 07:06:34.199097       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 07:12:00.915 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-76-62.us-west-2.compute.internal" not ready since 2024-05-04 07:10:00 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786613346536001536junit9 hours ago
May 04 06:00:52.374 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-56-212.us-east-2.compute.internal" not ready since 2024-05-04 06:00:41 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 06:01:04.792 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-56-212.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 06:00:53.981363       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 06:00:53.981894       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714802453 cert, and key in /tmp/serving-cert-3011890069/serving-signer.crt, /tmp/serving-cert-3011890069/serving-signer.key\nStaticPodsDegraded: I0504 06:00:54.619792       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 06:00:54.642099       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-56-212.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 06:00:54.642269       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 06:00:54.668790       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3011890069/tls.crt::/tmp/serving-cert-3011890069/tls.key"\nStaticPodsDegraded: F0504 06:00:54.884075       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 06:06:04.360 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-92-86.us-east-2.compute.internal" not ready since 2024-05-04 06:04:04 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786620550471225344junit9 hours ago
May 04 06:39:26.092 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-81-216.us-east-2.compute.internal" not ready since 2024-05-04 06:37:26 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 06:40:00.603 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-81-216.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 06:39:49.162210       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 06:39:49.162642       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714804789 cert, and key in /tmp/serving-cert-2100882405/serving-signer.crt, /tmp/serving-cert-2100882405/serving-signer.key\nStaticPodsDegraded: I0504 06:39:49.668951       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 06:39:49.687765       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-81-216.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 06:39:49.687890       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 06:39:49.715132       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2100882405/tls.crt::/tmp/serving-cert-2100882405/tls.key"\nStaticPodsDegraded: F0504 06:39:50.134426       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1786603749448355840junit10 hours ago
May 04 05:34:53.105 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-23-54.us-west-2.compute.internal" not ready since 2024-05-04 05:34:27 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 05:35:03.393 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-23-54.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 05:34:52.552963       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 05:34:52.553599       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714800892 cert, and key in /tmp/serving-cert-3872654980/serving-signer.crt, /tmp/serving-cert-3872654980/serving-signer.key\nStaticPodsDegraded: I0504 05:34:53.133766       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 05:34:53.144992       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-23-54.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 05:34:53.145468       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 05:34:53.169029       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3872654980/tls.crt::/tmp/serving-cert-3872654980/tls.key"\nStaticPodsDegraded: F0504 05:34:53.594534       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 05:40:15.523 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-41-206.us-west-2.compute.internal" not ready since 2024-05-04 05:38:15 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786587023138623488junit11 hours ago
May 04 04:19:50.569 - 23s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-13-62.ec2.internal" not ready since 2024-05-04 04:19:46 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 04:20:13.983 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-13-62.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:20:08.550960       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:20:08.551504       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714796408 cert, and key in /tmp/serving-cert-1053635093/serving-signer.crt, /tmp/serving-cert-1053635093/serving-signer.key\nStaticPodsDegraded: I0504 04:20:09.241340       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:20:09.256790       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-13-62.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:20:09.256954       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 04:20:09.270356       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1053635093/tls.crt::/tmp/serving-cert-1053635093/tls.key"\nStaticPodsDegraded: F0504 04:20:09.405384       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 04:25:14.419 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-78-182.ec2.internal" not ready since 2024-05-04 04:23:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786512934608834560junit17 hours ago
May 03 23:03:13.732 - 8s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-29-136.ec2.internal" not ready since 2024-05-03 23:03:00 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 23:03:22.376 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-29-136.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 23:03:14.221680       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 23:03:14.221879       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714777394 cert, and key in /tmp/serving-cert-1748188345/serving-signer.crt, /tmp/serving-cert-1748188345/serving-signer.key\nStaticPodsDegraded: I0503 23:03:14.554953       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 23:03:14.556467       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-29-136.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 23:03:14.556588       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 23:03:14.557264       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1748188345/tls.crt::/tmp/serving-cert-1748188345/tls.key"\nStaticPodsDegraded: F0503 23:03:14.802045       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 23:08:41.180 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-86-252.ec2.internal" not ready since 2024-05-03 23:08:31 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786488015418298368junit18 hours ago
May 03 21:36:04.394 - 38s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-85-12.us-east-2.compute.internal" not ready since 2024-05-03 21:34:04 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:36:43.226 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-85-12.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:36:34.117863       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:36:34.124146       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714772194 cert, and key in /tmp/serving-cert-2490058729/serving-signer.crt, /tmp/serving-cert-2490058729/serving-signer.key\nStaticPodsDegraded: I0503 21:36:34.476096       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:36:34.477491       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-85-12.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:36:34.477601       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 21:36:34.478208       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2490058729/tls.crt::/tmp/serving-cert-2490058729/tls.key"\nStaticPodsDegraded: F0503 21:36:34.770505       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 21:42:13.428 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-86-82.us-east-2.compute.internal" not ready since 2024-05-03 21:42:03 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786482260300533760junit18 hours ago
May 03 21:06:18.547 - 91s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-96-162.ec2.internal" not ready since 2024-05-03 21:04:18 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:07:49.610 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-96-162.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:07:40.764274       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:07:40.764542       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714770460 cert, and key in /tmp/serving-cert-2950153434/serving-signer.crt, /tmp/serving-cert-2950153434/serving-signer.key\nStaticPodsDegraded: I0503 21:07:41.254467       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:07:41.265537       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-96-162.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:07:41.265649       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 21:07:41.280740       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2950153434/tls.crt::/tmp/serving-cert-2950153434/tls.key"\nStaticPodsDegraded: F0503 21:07:41.648977       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 21:13:08.467 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-59-139.ec2.internal" not ready since 2024-05-03 21:12:59 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786470701016813568junit19 hours ago
May 03 20:31:05.232 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-40-65.ec2.internal" not ready since 2024-05-03 20:31:01 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 20:31:33.541 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-40-65.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 20:31:26.309681       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 20:31:26.310104       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714768286 cert, and key in /tmp/serving-cert-632245643/serving-signer.crt, /tmp/serving-cert-632245643/serving-signer.key\nStaticPodsDegraded: I0503 20:31:26.437836       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 20:31:26.439343       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-40-65.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 20:31:26.439552       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 20:31:26.440424       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-632245643/tls.crt::/tmp/serving-cert-632245643/tls.key"\nStaticPodsDegraded: F0503 20:31:26.629358       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 20:36:54.243 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-72-71.ec2.internal" not ready since 2024-05-03 20:36:38 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786374004764839936junit25 hours ago
May 03 14:26:20.478 - 16s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-90-160.us-east-2.compute.internal" not ready since 2024-05-03 14:26:09 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 14:26:37.328 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-90-160.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 14:26:25.626070       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 14:26:25.628019       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714746385 cert, and key in /tmp/serving-cert-1181369609/serving-signer.crt, /tmp/serving-cert-1181369609/serving-signer.key\nStaticPodsDegraded: I0503 14:26:26.200553       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 14:26:26.212520       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-90-160.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 14:26:26.212704       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 14:26:26.224127       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1181369609/tls.crt::/tmp/serving-cert-1181369609/tls.key"\nStaticPodsDegraded: F0503 14:26:26.461105       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
periodic-ci-openshift-multiarch-master-nightly-4.13-upgrade-from-nightly-4.12-ocp-e2e-aws-sdn-arm64 (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786610767508803584junit9 hours ago
May 04 05:59:07.352 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-km79m node/ip-10-0-181-79.us-east-2.compute.internal uid/3931714e-1aba-4ed8-a397-50bfdf8616e1 container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 04 05:59:14.397 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-181-79.us-east-2.compute.internal node/ip-10-0-181-79.us-east-2.compute.internal uid/3598ae7e-20b3-4758-a15d-c7b09680feaa container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0504 05:59:11.451857       1 cmd.go:216] Using insecure, self-signed certificates\nI0504 05:59:11.456383       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714802351 cert, and key in /tmp/serving-cert-957209156/serving-signer.crt, /tmp/serving-cert-957209156/serving-signer.key\nI0504 05:59:12.342610       1 observer_polling.go:159] Starting file observer\nW0504 05:59:12.370460       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-181-79.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0504 05:59:12.370697       1 builder.go:271] check-endpoints version 4.13.0-202404250638.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0504 05:59:12.389954       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-957209156/tls.crt::/tmp/serving-cert-957209156/tls.key"\nF0504 05:59:13.351799       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 04 05:59:15.428 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-181-79.us-east-2.compute.internal node/ip-10-0-181-79.us-east-2.compute.internal uid/3598ae7e-20b3-4758-a15d-c7b09680feaa container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0504 05:59:11.451857       1 cmd.go:216] Using insecure, self-signed certificates\nI0504 05:59:11.456383       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714802351 cert, and key in /tmp/serving-cert-957209156/serving-signer.crt, /tmp/serving-cert-957209156/serving-signer.key\nI0504 05:59:12.342610       1 observer_polling.go:159] Starting file observer\nW0504 05:59:12.370460       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-181-79.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0504 05:59:12.370697       1 builder.go:271] check-endpoints version 4.13.0-202404250638.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0504 05:59:12.389954       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-957209156/tls.crt::/tmp/serving-cert-957209156/tls.key"\nF0504 05:59:13.351799       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

periodic-ci-openshift-knative-eventing-kafka-broker-release-v1.14-412-test-reconciler-encryption-auth-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786629897796456448junit9 hours ago
# step graph.Run multi-stage test test-reconciler-encryption-auth-aws-412-c - test-reconciler-encryption-auth-aws-412-c-knative-must-gather container test
4f.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.134.237.113:6443: connect: connection refused
ClusterOperators:
#1786629897796456448junit9 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-xny64psy-e864f.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.134.237.113:6443: connect: connection refused
periodic-ci-openshift-knative-eventing-kafka-broker-release-v1.12-412-test-reconciler-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786626120012009472junit10 hours ago
# step graph.Run multi-stage test test-reconciler-aws-412-c - test-reconciler-aws-412-c-knative-must-gather container test
rr-8fcb2.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.13.162.131:6443: connect: connection refused
ClusterOperators:
#1786626120012009472junit10 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-6bhgmfrr-8fcb2.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.13.162.131:6443: connect: connection refused
periodic-ci-openshift-knative-eventing-release-next-412-test-reconciler-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786625870979403776junit10 hours ago
# step graph.Run multi-stage test test-reconciler-aws-412-c - test-reconciler-aws-412-c-knative-must-gather container test
39.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 13.59.248.191:6443: connect: connection refused
ClusterOperators:
#1786625870979403776junit10 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-y1v3zfg7-7b839.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 13.59.248.191:6443: connect: connection refused
periodic-ci-openshift-release-master-nightly-4.16-upgrade-from-stable-4.15-e2e-aws-sdn-upgrade (all) - 7 runs, 29% failed, 350% of failures match = 100% impact
#1786596293389324288junit10 hours ago
I0504 05:00:06.046128       1 observer_polling.go:159] Starting file observer
W0504 05:00:06.067748       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-55-79.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0504 05:00:06.067937       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786523946229698560junit14 hours ago
I0504 00:14:06.482875       1 observer_polling.go:159] Starting file observer
W0504 00:14:06.495551       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-116-82.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 00:14:06.495695       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786405497088249856junit22 hours ago
I0503 16:10:01.264272       1 observer_polling.go:159] Starting file observer
W0503 16:10:01.273768       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-43-88.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 16:10:01.273911       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786325872920236032junit28 hours ago
I0503 10:53:29.779578       1 observer_polling.go:159] Starting file observer
W0503 10:53:29.801512       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-1-16.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:53:29.801658       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786198343987236864junit36 hours ago
May 03 02:27:55.347 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-13-161.us-west-1.compute.internal" not ready since 2024-05-03 02:25:55 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 02:28:25.339 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-13-161.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 02:28:21.746643       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 02:28:21.747468       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714703301 cert, and key in /tmp/serving-cert-1099488020/serving-signer.crt, /tmp/serving-cert-1099488020/serving-signer.key\nStaticPodsDegraded: I0503 02:28:22.306671       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 02:28:22.323032       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-13-161.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 02:28:22.323322       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 02:28:22.342812       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1099488020/tls.crt::/tmp/serving-cert-1099488020/tls.key"\nStaticPodsDegraded: F0503 02:28:22.551037       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 02:34:56.403 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-71-37.us-west-1.compute.internal" not ready since 2024-05-03 02:34:48 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786198343987236864junit36 hours ago
I0503 02:42:08.200980       1 observer_polling.go:159] Starting file observer
W0503 02:42:08.213569       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-10.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 02:42:08.213787       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786120940124377088junit41 hours ago
I0502 21:18:12.734835       1 observer_polling.go:159] Starting file observer
W0502 21:18:12.754264       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-102-186.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 21:18:12.754378       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786049274673369088junit46 hours ago
I0502 16:49:17.983875       1 observer_polling.go:159] Starting file observer
W0502 16:49:17.998454       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-126-126.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:49:17.998652       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

periodic-ci-openshift-release-master-nightly-4.15-upgrade-from-stable-4.14-e2e-aws-sdn-upgrade (all) - 3 runs, 0% failed, 100% of runs match
#1786595514783895552junit10 hours ago
May 04 04:40:22.883 - 23s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-55-106.us-west-1.compute.internal" not ready since 2024-05-04 04:38:22 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 04:40:46.625 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-55-106.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:40:41.436556       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:40:41.436823       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714797641 cert, and key in /tmp/serving-cert-1090989217/serving-signer.crt, /tmp/serving-cert-1090989217/serving-signer.key\nStaticPodsDegraded: I0504 04:40:42.026016       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:40:42.049618       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-55-106.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:40:42.049743       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 04:40:42.076045       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1090989217/tls.crt::/tmp/serving-cert-1090989217/tls.key"\nStaticPodsDegraded: F0504 04:40:42.372872       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 04:47:02.908 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-87-98.us-west-1.compute.internal" not ready since 2024-05-04 04:46:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786595514783895552junit10 hours ago
May 04 04:53:03.179 - 3s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-7-117.us-west-1.compute.internal" not ready since 2024-05-04 04:52:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 04:53:06.315 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-7-117.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:53:02.879125       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:53:02.879393       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714798382 cert, and key in /tmp/serving-cert-4239634792/serving-signer.crt, /tmp/serving-cert-4239634792/serving-signer.key\nStaticPodsDegraded: I0504 04:53:03.310034       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:53:03.332658       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-7-117.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:53:03.332759       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 04:53:03.356200       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4239634792/tls.crt::/tmp/serving-cert-4239634792/tls.key"\nStaticPodsDegraded: F0504 04:53:03.835367       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786374321665478656junit24 hours ago
May 03 14:07:10.491 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-113-51.us-west-2.compute.internal" not ready since 2024-05-03 14:06:58 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 14:07:17.021 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-113-51.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 14:07:12.834519       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 14:07:12.834829       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714745232 cert, and key in /tmp/serving-cert-1944982839/serving-signer.crt, /tmp/serving-cert-1944982839/serving-signer.key\nStaticPodsDegraded: I0503 14:07:13.479571       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 14:07:13.501244       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-113-51.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 14:07:13.501344       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 14:07:13.523005       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1944982839/tls.crt::/tmp/serving-cert-1944982839/tls.key"\nStaticPodsDegraded: F0503 14:07:13.755553       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 14:13:13.406 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-99-170.us-west-2.compute.internal" not ready since 2024-05-03 14:13:01 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786044836965519360junit46 hours ago
May 02 16:21:38.113 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-23-49.us-west-2.compute.internal" not ready since 2024-05-02 16:21:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:21:48.208 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-23-49.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:21:43.665904       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:21:43.666224       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666903 cert, and key in /tmp/serving-cert-3023542117/serving-signer.crt, /tmp/serving-cert-3023542117/serving-signer.key\nStaticPodsDegraded: I0502 16:21:44.187444       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:21:44.208498       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-23-49.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:21:44.208602       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0502 16:21:44.226948       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3023542117/tls.crt::/tmp/serving-cert-3023542117/tls.key"\nStaticPodsDegraded: F0502 16:21:44.548057       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 16:27:53.112 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-47-95.us-west-2.compute.internal" not ready since 2024-05-02 16:27:29 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

periodic-ci-openshift-knative-eventing-release-v1.14-412-test-e2e-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786626119982649344junit10 hours ago
# step graph.Run multi-stage test test-e2e-aws-412-c - test-e2e-aws-412-c-knative-must-gather container test
dy-c4541.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.130.108.45:6443: connect: connection refused
ClusterOperators:
#1786626119982649344junit10 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-44gqxidy-c4541.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.130.108.45:6443: connect: connection refused
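The per-job summary lines above fold three figures into one "impact" number (e.g. "7 runs, 29% failed, 350% of failures match = 100% impact", or "32 runs, 31% failed, 260% of failures match = 81% impact" for the heterogeneous-upgrade job). A minimal sketch, assuming impact is simply failed% times match%-of-failures capped at 100% (the aggregator's exact formula is not shown here, and `impact` is a hypothetical helper, not part of the CI tooling):

```python
def impact(failed_pct: float, match_pct_of_failures: float) -> int:
    # Assumed reconstruction: impact ~= failed% x (match% of failures) / 100,
    # i.e. the share of all runs containing the matched error. The match
    # percentage can exceed 100% when non-failed runs also match, so the
    # product is capped at 100%.
    return min(round(failed_pct * match_pct_of_failures / 100), 100)

# '7 runs, 29% failed, 350% of failures match'  -> 100% impact
print(impact(29, 350))
# '32 runs, 31% failed, 260% of failures match' -> 81% impact
print(impact(31, 260))
```

The cap matters for lines like "0% failed, 100% of runs match", where matched runs outnumber failures and a raw product would be meaningless.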
periodic-ci-openshift-release-master-nightly-4.16-e2e-aws-sdn-upgrade (all) - 70 runs, 20% failed, 314% of failures match = 63% impact
#1786596031547314176junit10 hours ago
May 04 04:30:34.448 - 22s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-5-184.us-west-1.compute.internal" not ready since 2024-05-04 04:28:34 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 04:30:56.954 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-5-184.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:30:52.672553       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:30:52.673005       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714797052 cert, and key in /tmp/serving-cert-2850281216/serving-signer.crt, /tmp/serving-cert-2850281216/serving-signer.key\nStaticPodsDegraded: I0504 04:30:53.124993       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:30:53.139558       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-5-184.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:30:53.139677       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 04:30:53.163497       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2850281216/tls.crt::/tmp/serving-cert-2850281216/tls.key"\nStaticPodsDegraded: F0504 04:30:53.419105       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 04:35:47.351 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-65-158.us-west-1.compute.internal" not ready since 2024-05-04 04:35:23 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786596017282486272junit10 hours ago
May 04 04:31:44.060 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-2-172.us-west-2.compute.internal" not ready since 2024-05-04 04:31:19 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-2-172.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:31:42.331908       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:31:42.332143       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714797102 cert, and key in /tmp/serving-cert-2431369545/serving-signer.crt, /tmp/serving-cert-2431369545/serving-signer.key\nStaticPodsDegraded: I0504 04:31:42.648543       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:31:42.662553       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-2-172.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:31:42.662733       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 04:31:42.683439       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2431369545/tls.crt::/tmp/serving-cert-2431369545/tls.key"\nStaticPodsDegraded: F0504 04:31:43.172505       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 04:31:44.060 - 2s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-2-172.us-west-2.compute.internal" not ready since 2024-05-04 04:31:19 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-2-172.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:31:42.331908       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:31:42.332143       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714797102 cert, and key in /tmp/serving-cert-2431369545/serving-signer.crt, /tmp/serving-cert-2431369545/serving-signer.key\nStaticPodsDegraded: I0504 04:31:42.648543       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:31:42.662553       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-2-172.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:31:42.662733       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 04:31:42.683439       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2431369545/tls.crt::/tmp/serving-cert-2431369545/tls.key"\nStaticPodsDegraded: F0504 04:31:43.172505       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 2 lines not shown

#1786596022319845376junit10 hours ago
namespace/openshift-cloud-controller-manager node/ip-10-0-37-5.us-west-2.compute.internal pod/aws-cloud-controller-manager-5968f44bf9-mwjsq uid/9b2789ce-34a1-4194-abb3-1ff6464b4ba2 container/cloud-controller-manager restarted 1 times:
cause/Error code/2 reason/ContainerExit //api-int.ci-op-rrym301z-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.118.92:6443: connect: connection refused
I0504 03:50:14.905588       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786596022319845376junit10 hours ago
I0504 03:50:16.131182       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0504 03:56:29.828328       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-rrym301z-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.118.92:6443: connect: connection refused
I0504 03:56:57.747275       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786596034068090880junit10 hours ago
May 04 04:29:20.273 - 1s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-66-173.us-east-2.compute.internal" not ready since 2024-05-04 04:28:55 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 04:29:22.146 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-66-173.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:29:18.761117       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:29:18.761632       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714796958 cert, and key in /tmp/serving-cert-1880311532/serving-signer.crt, /tmp/serving-cert-1880311532/serving-signer.key\nStaticPodsDegraded: I0504 04:29:19.158278       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:29:19.176486       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-66-173.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:29:19.176632       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 04:29:19.193731       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1880311532/tls.crt::/tmp/serving-cert-1880311532/tls.key"\nStaticPodsDegraded: F0504 04:29:19.735536       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 04:34:18.287 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-80-182.us-east-2.compute.internal" not ready since 2024-05-04 04:34:03 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786596019803262976junit10 hours ago
I0504 04:26:00.277736       1 observer_polling.go:159] Starting file observer
W0504 04:26:00.300622       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-17-95.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 04:26:00.300795       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786596029030731776 junit 10 hours ago
I0504 03:38:24.794015       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1714793628\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714793628\" (2024-05-04 02:33:48 +0000 UTC to 2025-05-04 02:33:48 +0000 UTC (now=2024-05-04 03:38:24.793994169 +0000 UTC))"
E0504 03:42:44.542756       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-nd9f0g9q-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.40.211:6443: connect: connection refused
I0504 03:43:18.242990       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786596029030731776 junit 10 hours ago
I0504 04:20:52.208845       1 observer_polling.go:159] Starting file observer
W0504 04:20:52.241421       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-4.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 04:20:52.241525       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786596026518343680 junit 10 hours ago
May 04 04:20:53.041 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-4-141.ec2.internal" not ready since 2024-05-04 04:20:40 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 04:21:07.253 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-4-141.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:21:04.078932       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:21:04.079400       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714796464 cert, and key in /tmp/serving-cert-2243873722/serving-signer.crt, /tmp/serving-cert-2243873722/serving-signer.key\nStaticPodsDegraded: I0504 04:21:04.613507       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:21:04.625802       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-141.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:21:04.625946       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 04:21:04.644714       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2243873722/tls.crt::/tmp/serving-cert-2243873722/tls.key"\nStaticPodsDegraded: F0504 04:21:04.865911       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 04 04:25:31.145 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-58-120.ec2.internal" not ready since 2024-05-04 04:25:29 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786596012253515776 junit 10 hours ago
May 04 04:17:08.897 - 20s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-18-35.ec2.internal" not ready since 2024-05-04 04:15:08 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 04:17:29.348 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-18-35.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:17:26.008736       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:17:26.009076       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714796246 cert, and key in /tmp/serving-cert-198514749/serving-signer.crt, /tmp/serving-cert-198514749/serving-signer.key\nStaticPodsDegraded: I0504 04:17:26.559879       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:17:26.578884       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-18-35.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:17:26.579020       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 04:17:26.603543       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-198514749/tls.crt::/tmp/serving-cert-198514749/tls.key"\nStaticPodsDegraded: F0504 04:17:27.327719       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 04 04:22:15.756 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-107-254.ec2.internal" not ready since 2024-05-04 04:22:00 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-107-254.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:22:13.772520       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:22:13.772925       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714796533 cert, and key in /tmp/serving-cert-1782167373/serving-signer.crt, /tmp/serving-cert-1782167373/serving-signer.key\nStaticPodsDegraded: I0504 04:22:14.358555       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:22:14.372044       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-107-254.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:22:14.372228       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 04:22:14.392365       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1782167373/tls.crt::/tmp/serving-cert-1782167373/tls.key"\nStaticPodsDegraded: F0504 04:22:14.613373       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786596024832233472 junit 10 hours ago
I0504 04:29:44.830529       1 observer_polling.go:159] Starting file observer
W0504 04:29:44.886708       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-109-58.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 04:29:44.886848       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786596014765903872 junit 11 hours ago
# step graph.Run multi-stage test e2e-aws-sdn-upgrade - e2e-aws-sdn-upgrade-gather-audit-logs container test
-6zfdt41l-29e8e.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 18.221.153.254:6443: connect: connection refused
E0504 04:24:25.589443      35 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-6zfdt41l-29e8e.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 18.221.153.254:6443: connect: connection refused

... 3 lines not shown

#1786325825088393216 junit 27 hours ago
I0503 09:55:52.483410       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 09:55:54.040241       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-39g5sx6i-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.86.196:6443: connect: connection refused
I0503 09:55:54.086806       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786325825088393216 junit 27 hours ago
I0503 09:59:19.268909       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0503 09:59:25.979640       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-39g5sx6i-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.56.194:6443: connect: connection refused
#1786325814187397120 junit 27 hours ago
I0503 10:57:34.197794       1 observer_polling.go:159] Starting file observer
W0503 10:57:34.213977       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-127-161.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:57:34.214081       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786325812505481216 junit 27 hours ago
I0503 10:58:58.940896       1 observer_polling.go:159] Starting file observer
W0503 10:58:58.950417       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-117-213.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:58:58.950614       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786325818390089728 junit 28 hours ago
I0503 10:46:07.742054       1 observer_polling.go:159] Starting file observer
W0503 10:46:07.757135       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-114-176.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:46:07.757286       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786325815017869312 junit 28 hours ago
I0503 10:46:14.656871       1 observer_polling.go:159] Starting file observer
W0503 10:46:14.673054       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-111-47.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:46:14.673195       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786325820051034112 junit 28 hours ago
I0503 09:54:01.534644       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 09:57:28.506899       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-0vgwrlf4-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.49.235:6443: connect: connection refused
I0503 09:57:51.978697       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786325820051034112 junit 28 hours ago
I0503 10:51:02.235244       1 observer_polling.go:159] Starting file observer
W0503 10:51:02.248114       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-52-102.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:51:02.248247       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786325817530257408 junit 28 hours ago
I0503 10:45:21.817046       1 observer_polling.go:159] Starting file observer
W0503 10:45:21.828981       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-48-70.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:45:21.829091       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786325822563422208 junit 28 hours ago
May 03 10:48:12.585 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-89-176.us-west-2.compute.internal" not ready since 2024-05-03 10:46:12 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 10:48:41.226 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-89-176.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 10:48:37.313427       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 10:48:37.313698       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714733317 cert, and key in /tmp/serving-cert-2571526365/serving-signer.crt, /tmp/serving-cert-2571526365/serving-signer.key\nStaticPodsDegraded: I0503 10:48:37.694247       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 10:48:37.702097       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-89-176.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 10:48:37.702256       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 10:48:37.729982       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2571526365/tls.crt::/tmp/serving-cert-2571526365/tls.key"\nStaticPodsDegraded: F0503 10:48:38.235972       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 10:54:59.571 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-76-35.us-west-2.compute.internal" not ready since 2024-05-03 10:54:51 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786198454704279552 junit 36 hours ago
I0503 02:23:42.488057       1 observer_polling.go:159] Starting file observer
W0503 02:23:42.501870       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-52-25.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 02:23:42.502037       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786198452196085760 junit 36 hours ago
May 03 02:25:55.753 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 02:30:46.196 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-58-114.us-west-2.compute.internal" not ready since 2024-05-03 02:30:27 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-58-114.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 02:30:43.572239       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 02:30:43.572462       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714703443 cert, and key in /tmp/serving-cert-1609374322/serving-signer.crt, /tmp/serving-cert-1609374322/serving-signer.key\nStaticPodsDegraded: I0503 02:30:43.939441       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 02:30:43.961325       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-58-114.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 02:30:43.961491       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 02:30:43.992071       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1609374322/tls.crt::/tmp/serving-cert-1609374322/tls.key"\nStaticPodsDegraded: F0503 02:30:44.355228       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 02:30:46.196 - 855ms E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-58-114.us-west-2.compute.internal" not ready since 2024-05-03 02:30:27 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-58-114.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 02:30:43.572239       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 02:30:43.572462       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714703443 cert, and key in /tmp/serving-cert-1609374322/serving-signer.crt, /tmp/serving-cert-1609374322/serving-signer.key\nStaticPodsDegraded: I0503 02:30:43.939441       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 02:30:43.961325       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-58-114.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 02:30:43.961491       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 02:30:43.992071       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1609374322/tls.crt::/tmp/serving-cert-1609374322/tls.key"\nStaticPodsDegraded: F0503 02:30:44.355228       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 2 lines not shown

#1786198461410971648 junit 36 hours ago
May 03 02:23:09.216 - 4s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-53-135.us-west-2.compute.internal" not ready since 2024-05-03 02:22:53 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 02:23:13.978 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-53-135.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 02:23:09.439114       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 02:23:09.439799       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714702989 cert, and key in /tmp/serving-cert-1799625724/serving-signer.crt, /tmp/serving-cert-1799625724/serving-signer.key\nStaticPodsDegraded: I0503 02:23:09.931361       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 02:23:09.949541       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-53-135.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 02:23:09.949759       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 02:23:09.978488       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1799625724/tls.crt::/tmp/serving-cert-1799625724/tls.key"\nStaticPodsDegraded: F0503 02:23:10.389310       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 02:27:40.570 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-31-93.us-west-2.compute.internal" not ready since 2024-05-03 02:25:40 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786198466444136448 junit 36 hours ago
I0503 02:27:44.801933       1 observer_polling.go:159] Starting file observer
W0503 02:27:44.802311       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-27-219.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 02:27:44.802446       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786198458055528448 junit 36 hours ago
May 03 02:21:23.639 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-67-197.us-west-2.compute.internal" not ready since 2024-05-03 02:21:09 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 02:21:38.329 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-67-197.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 02:21:35.439436       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 02:21:35.439786       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714702895 cert, and key in /tmp/serving-cert-335467671/serving-signer.crt, /tmp/serving-cert-335467671/serving-signer.key\nStaticPodsDegraded: I0503 02:21:35.889208       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 02:21:35.908490       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-67-197.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 02:21:35.908677       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 02:21:35.927282       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-335467671/tls.crt::/tmp/serving-cert-335467671/tls.key"\nStaticPodsDegraded: F0503 02:21:36.423619       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 02:26:10.642 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-118-78.us-west-2.compute.internal" not ready since 2024-05-03 02:24:10 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786198458055528448 junit 36 hours ago
May 03 02:31:16.289 - 7s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-27-77.us-west-2.compute.internal" not ready since 2024-05-03 02:30:59 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 02:31:24.013 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-27-77.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 02:31:20.587694       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 02:31:20.588154       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714703480 cert, and key in /tmp/serving-cert-1001661797/serving-signer.crt, /tmp/serving-cert-1001661797/serving-signer.key\nStaticPodsDegraded: I0503 02:31:21.219333       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 02:31:21.235608       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-27-77.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 02:31:21.235719       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 02:31:21.258110       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1001661797/tls.crt::/tmp/serving-cert-1001661797/tls.key"\nStaticPodsDegraded: F0503 02:31:21.468415       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1786198476510466048 junit 36 hours ago
E0503 01:22:11.380493       1 reflector.go:147] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)
E0503 01:22:18.373720       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-y39xhhg2-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.20.194:6443: connect: connection refused
I0503 01:22:19.602841       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
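The container log lines above use the standard Kubernetes klog header (severity letter, MMDD date, timestamp, PID, `file:line]`, message). A minimal sketch for splitting such lines into fields — the regex is an assumption inferred from the excerpts shown here, not part of any CI tooling:

```python
import re

# klog header: <severity><MMDD> <HH:MM:SS.ffffff> <pid> <file>:<line>] <message>
KLOG_RE = re.compile(
    r'^(?P<severity>[IWEF])'                    # I=info, W=warning, E=error, F=fatal
    r'(?P<month>\d{2})(?P<day>\d{2})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<pid>\d+)\s+'
    r'(?P<source>[^ \]]+)\]\s?'                 # e.g. leaderelection.go:332
    r'(?P<message>.*)$'
)

def parse_klog(line):
    """Return a dict of klog header fields, or None if the line doesn't match."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None
```

Filtering the excerpts with `parse_klog` and keeping only `severity in ('E', 'F')` isolates the leader-election and delegating-authentication errors from the informational cache/observer lines.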
#1786198476510466048 junit 36 hours ago
I0503 02:17:26.555632       1 observer_polling.go:159] Starting file observer
W0503 02:17:26.571054       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-36-158.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 02:17:26.571292       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786198468960718848 junit 36 hours ago
I0503 01:22:58.671583       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 01:29:10.305681       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-ix7p3lrs-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.88.112:6443: connect: connection refused
I0503 01:29:17.974836       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786198468960718848 junit 36 hours ago
I0503 02:23:43.688701       1 observer_polling.go:159] Starting file observer
W0503 02:23:43.716878       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-30-47.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 02:23:43.717030       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786198473998077952 junit 36 hours ago
I0503 02:24:35.352410       1 observer_polling.go:159] Starting file observer
W0503 02:24:35.381713       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-197.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 02:24:35.381828       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786120818493755392 junit 41 hours ago
I0502 21:21:25.348439       1 observer_polling.go:159] Starting file observer
W0502 21:21:25.363603       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-123-214.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 21:21:25.363878       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786120835262582784 junit 41 hours ago
I0502 21:22:24.210337       1 observer_polling.go:159] Starting file observer
W0502 21:22:24.223169       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-143.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 21:22:24.223367       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786120828576862208 junit 41 hours ago
cause/Error code/2 reason/ContainerExit lient@1714680317\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714680317\" (2024-05-02 19:05:17 +0000 UTC to 2025-05-02 19:05:17 +0000 UTC (now=2024-05-02 20:10:13.655831277 +0000 UTC))"
E0502 20:15:47.891347       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-jnv3f1fl-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.64.144:6443: connect: connection refused
I0502 20:15:50.268115       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786120828576862208 junit 41 hours ago
I0502 20:15:51.314364       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0502 20:20:09.104844       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-jnv3f1fl-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.64.144:6443: connect: connection refused
I0502 20:20:18.306750       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786120821001949184 junit 41 hours ago
I0502 21:19:38.080932       1 observer_polling.go:159] Starting file observer
W0502 21:19:38.100733       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-28-125.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 21:19:38.100875       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786120831076667392 junit 41 hours ago
I0502 21:13:55.819952       1 observer_polling.go:159] Starting file observer
W0502 21:13:55.841600       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-109-151.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 21:13:55.841868       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786120833593249792 junit 41 hours ago
I0502 20:19:47.762478       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0502 20:19:51.227173       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-ybmv35i6-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.19.36:6443: connect: connection refused
#1786120833593249792 junit 41 hours ago
I0502 21:28:59.728846       1 observer_polling.go:159] Starting file observer
W0502 21:28:59.739772       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-114-208.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 21:28:59.739937       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786120823526920192 junit 41 hours ago
I0502 21:10:29.247418       1 observer_polling.go:159] Starting file observer
W0502 21:10:29.291349       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-35-42.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 21:10:29.291636       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786120837821108224 junit 41 hours ago
I0502 21:15:22.013914       1 observer_polling.go:159] Starting file observer
W0502 21:15:22.031323       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-243.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 21:15:22.031479       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786049220076113920 junit 45 hours ago
1 tests failed during this blip (2024-05-02 16:43:59.581212227 +0000 UTC m=+2823.047064719 to 2024-05-02 16:43:59.581212227 +0000 UTC m=+2823.047064719): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:44:24.371 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-44-166.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:44:19.436496       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:44:19.436963       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714668259 cert, and key in /tmp/serving-cert-397526882/serving-signer.crt, /tmp/serving-cert-397526882/serving-signer.key\nStaticPodsDegraded: I0502 16:44:19.689039       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:44:19.713629       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-44-166.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:44:19.713770       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0502 16:44:19.737259       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-397526882/tls.crt::/tmp/serving-cert-397526882/tls.key"\nStaticPodsDegraded: F0502 16:44:19.949135       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
1 tests failed during this blip (2024-05-02 16:44:24.371782617 +0000 UTC m=+2847.837635096 to 2024-05-02 16:44:24.371782617 +0000 UTC m=+2847.837635096): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
#1786049220076113920 junit 45 hours ago
I0502 15:41:27.569010       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0502 15:45:17.147553       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-dgyswxdh-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.122.28:6443: connect: connection refused
I0502 15:45:20.953721       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786049217525977088 junit 46 hours ago
May 02 16:55:31.071 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-40-154.us-west-1.compute.internal" not ready since 2024-05-02 16:55:16 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:55:37.540 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-40-154.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:55:33.443090       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:55:33.443329       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714668933 cert, and key in /tmp/serving-cert-1564189173/serving-signer.crt, /tmp/serving-cert-1564189173/serving-signer.key\nStaticPodsDegraded: I0502 16:55:33.755453       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:55:33.756970       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-40-154.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:55:33.757067       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0502 16:55:33.757624       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1564189173/tls.crt::/tmp/serving-cert-1564189173/tls.key"\nStaticPodsDegraded: F0502 16:55:34.138731       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
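The clusteroperator condition events above follow a regular shape: timestamp, optional duration, severity, `clusteroperator/<name>`, then `condition/`, `reason/`, and `status/` fields. A minimal sketch for extracting those fields — the pattern is inferred from the excerpts in this listing, not from any documented monitor-event grammar:

```python
import re

# Monitor-event header as it appears in these excerpts:
#   <Mon DD HH:MM:SS.fff> [- <duration>] <E|W> clusteroperator/<name>
#   condition/<cond> reason/<reason> status/<True|False> <message>
EVENT_RE = re.compile(
    r'^(?P<when>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+)'
    r'(?: - (?P<duration>\S+))?\s+'        # duration only on interval events
    r'(?P<level>[EW]) clusteroperator/(?P<operator>\S+) '
    r'condition/(?P<condition>\S+) '
    r'reason/(?P<reason>\S+) '
    r'status/(?P<status>True|False)\s?'
    r'(?P<message>.*)$'
)

def parse_event(line):
    """Return a dict of event fields, or None if the line doesn't match."""
    m = EVENT_RE.match(line)
    return m.groupdict() if m else None
```

Grouping parsed events by `(operator, condition, reason)` is one way to see how often each Degraded=True blip (e.g. `NodeController_MasterNodesReady`) recurs across runs.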
#1786049217525977088 junit 46 hours ago
I0502 16:45:30.927834       1 observer_polling.go:159] Starting file observer
W0502 16:45:30.937112       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-119-66.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:45:30.937305       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786049208294313984 junit 46 hours ago
I0502 16:47:28.145173       1 observer_polling.go:159] Starting file observer
W0502 16:47:28.159151       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-120-135.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:47:28.159361       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786049224241057792 junit 46 hours ago
I0502 16:52:35.837636       1 observer_polling.go:159] Starting file observer
W0502 16:52:35.849835       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-127-47.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:52:35.850080       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786049221737058304 junit 46 hours ago
I0502 15:34:39.163364       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0502 15:34:40.128118       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-56xqq36r-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.76.162:6443: connect: connection refused
I0502 15:38:09.191370       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786049221737058304 junit 46 hours ago
I0502 16:49:50.669179       1 observer_polling.go:159] Starting file observer
W0502 16:49:50.701296       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-28.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:49:50.701437       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786049226770223104 junit 46 hours ago
I0502 16:44:25.904798       1 observer_polling.go:159] Starting file observer
W0502 16:44:25.925141       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-14.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:44:25.925313       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786049215844061184 junit 46 hours ago
E0502 15:37:00.818049       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-99nl28y1-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0502 15:37:39.656202       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-99nl28y1-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.22.105:6443: connect: connection refused
I0502 15:38:17.425132       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786049215844061184 junit 46 hours ago
I0502 15:38:20.263055       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0502 15:38:23.675362       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-99nl28y1-29e8e.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.22.105:6443: connect: connection refused
#1786049213335867392 junit 46 hours ago
I0502 16:47:06.644479       1 observer_polling.go:159] Starting file observer
W0502 16:47:06.654669       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-36.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:47:06.654816       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786049205790314496 junit 46 hours ago
I0502 16:41:01.910999       1 observer_polling.go:159] Starting file observer
W0502 16:41:01.929311       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-114-243.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:41:01.929504       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786049210819284992 junit 46 hours ago
I0502 16:39:13.032979       1 observer_polling.go:159] Starting file observer
W0502 16:39:13.041888       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-116-51.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:39:13.042112       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-e2e-upgrade-aws-ovn-arm64 (all) - 11 runs, 45% failed, 140% of failures match = 64% impact
#1786595498002485248 junit 10 hours ago
I0504 04:39:37.705096       1 observer_polling.go:159] Starting file observer
W0504 04:39:37.714448       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-14-106.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 04:39:37.714679       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786595498002485248 junit 10 hours ago
I0504 04:34:37.867480       1 observer_polling.go:159] Starting file observer
W0504 04:34:37.884909       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-43-61.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 04:34:37.885061       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786360603871285248 junit 26 hours ago
E0503 12:04:38.333999       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-yl7qnrfi-be8df.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0503 12:05:19.970927       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-yl7qnrfi-be8df.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.115.57:6443: connect: connection refused
I0503 12:05:52.309268       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786360603871285248 junit 26 hours ago
I0503 12:52:17.714002       1 observer_polling.go:159] Starting file observer
W0503 12:52:17.750195       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-115-32.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 12:52:17.750418       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786232770838663168 junit 34 hours ago
May 03 04:44:21.891 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-105-43.us-east-2.compute.internal" not ready since 2024-05-03 04:42:21 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 04:44:58.769 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-105-43.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 04:44:50.234494       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 04:44:50.234841       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714711490 cert, and key in /tmp/serving-cert-845179838/serving-signer.crt, /tmp/serving-cert-845179838/serving-signer.key\nStaticPodsDegraded: I0503 04:44:51.292645       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 04:44:51.320789       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-43.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 04:44:51.320993       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 04:44:51.340632       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-845179838/tls.crt::/tmp/serving-cert-845179838/tls.key"\nStaticPodsDegraded: F0503 04:44:52.344366       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 04:50:13.886 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-75-216.us-east-2.compute.internal" not ready since 2024-05-03 04:48:13 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786232770838663168 junit 34 hours ago
I0503 03:37:16.459962       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 03:43:07.233336       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-dr1pvmq2-be8df.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.117.21:6443: connect: connection refused
I0503 03:43:35.427453       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786184566663286784 junit 37 hours ago
I0503 00:33:41.982903       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 00:36:19.644762       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-tmpylh73-be8df.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.120.172:6443: connect: connection refused
E0503 00:36:29.767353       1 reflector.go:147] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)
#1786184566663286784 junit 37 hours ago
I0503 01:38:36.810960       1 observer_polling.go:159] Starting file observer
W0503 01:38:36.820316       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-14-81.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 01:38:36.820431       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786142355162664960 junit 40 hours ago
I0502 22:40:49.751289       1 observer_polling.go:159] Starting file observer
W0502 22:40:49.768616       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-41-215.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 22:40:49.768885       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786100172854398976 junit 43 hours ago
I0502 20:01:13.470881       1 observer_polling.go:159] Starting file observer
W0502 20:01:13.486179       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-35-10.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 20:01:13.486321       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786056605368848384 junit 46 hours ago
May 02 17:10:21.799 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-77-187.us-east-2.compute.internal" not ready since 2024-05-02 17:08:21 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 17:10:58.245 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-77-187.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 17:10:51.602397       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 17:10:51.602640       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714669851 cert, and key in /tmp/serving-cert-1404154232/serving-signer.crt, /tmp/serving-cert-1404154232/serving-signer.key\nStaticPodsDegraded: I0502 17:10:52.401304       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 17:10:52.402825       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-77-187.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 17:10:52.402957       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0502 17:10:52.403759       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1404154232/tls.crt::/tmp/serving-cert-1404154232/tls.key"\nStaticPodsDegraded: W0502 17:10:56.035114       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nStaticPodsDegraded: F0502 17:10:56.035153       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 17:16:32.369 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-35-247.us-east-2.compute.internal" not ready since 2024-05-02 17:16:15 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786056605368848384junit46 hours ago
namespace/openshift-cloud-controller-manager node/ip-10-0-101-54.us-east-2.compute.internal pod/aws-cloud-controller-manager-646f4fdfbf-cr7hz uid/82b21e48-a6c6-4cb5-a812-cce9a4ce0948 container/cloud-controller-manager restarted 1 times:
cause/Error code/2 reason/ContainerExit /api-int.ci-op-t2v57781-be8df.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.9.118:6443: connect: connection refused
I0502 16:00:04.143273       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
periodic-ci-openshift-multiarch-master-nightly-4.13-upgrade-from-stable-4.12-ocp-e2e-aws-ovn-heterogeneous-upgrade (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#1786594410616590336junit10 hours ago
May 04 05:26:25.705 E ns/openshift-machine-config-operator pod/machine-config-daemon-rcs8q node/ip-10-0-153-136.us-west-2.compute.internal uid/8f42fbfe-b00a-4d84-9538-f31bef07bd39 container/oauth-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 04 05:26:26.699 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-136.us-west-2.compute.internal node/ip-10-0-153-136.us-west-2.compute.internal uid/d3d5bc9e-2e9f-4db2-84b9-5946ca39eebc container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0504 05:26:24.881745       1 cmd.go:216] Using insecure, self-signed certificates\nI0504 05:26:24.889856       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714800384 cert, and key in /tmp/serving-cert-1477881899/serving-signer.crt, /tmp/serving-cert-1477881899/serving-signer.key\nI0504 05:26:25.589080       1 observer_polling.go:159] Starting file observer\nW0504 05:26:25.599887       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-153-136.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0504 05:26:25.600107       1 builder.go:271] check-endpoints version 4.13.0-202404250638.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0504 05:26:25.608883       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1477881899/tls.crt::/tmp/serving-cert-1477881899/tls.key"\nF0504 05:26:26.131353       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 04 05:26:27.705 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-136.us-west-2.compute.internal node/ip-10-0-153-136.us-west-2.compute.internal uid/d3d5bc9e-2e9f-4db2-84b9-5946ca39eebc container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0504 05:26:24.881745       1 cmd.go:216] Using insecure, self-signed certificates\nI0504 05:26:24.889856       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714800384 cert, and key in /tmp/serving-cert-1477881899/serving-signer.crt, /tmp/serving-cert-1477881899/serving-signer.key\nI0504 05:26:25.589080       1 observer_polling.go:159] Starting file observer\nW0504 05:26:25.599887       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-153-136.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0504 05:26:25.600107       1 builder.go:271] check-endpoints version 4.13.0-202404250638.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0504 05:26:25.608883       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1477881899/tls.crt::/tmp/serving-cert-1477881899/tls.key"\nF0504 05:26:26.131353       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1786047062341586944junit46 hours ago
May 02 16:58:00.557 E ns/openshift-multus pod/network-metrics-daemon-gv2vf node/ip-10-0-234-243.us-west-2.compute.internal uid/b3954bfe-58af-42af-9c21-af4ba53affc0 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 16:58:03.552 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-234-243.us-west-2.compute.internal node/ip-10-0-234-243.us-west-2.compute.internal uid/7b04aec6-03e2-4b10-b897-cc40fe3a2c96 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 16:58:01.810560       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 16:58:01.824846       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714669081 cert, and key in /tmp/serving-cert-2055078694/serving-signer.crt, /tmp/serving-cert-2055078694/serving-signer.key\nI0502 16:58:02.474224       1 observer_polling.go:159] Starting file observer\nW0502 16:58:02.494148       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-234-243.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 16:58:02.494350       1 builder.go:271] check-endpoints version 4.13.0-202404250638.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0502 16:58:02.504290       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2055078694/tls.crt::/tmp/serving-cert-2055078694/tls.key"\nF0502 16:58:02.990332       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 16:58:04.577 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-234-243.us-west-2.compute.internal node/ip-10-0-234-243.us-west-2.compute.internal uid/7b04aec6-03e2-4b10-b897-cc40fe3a2c96 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 16:58:01.810560       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 16:58:01.824846       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714669081 cert, and key in /tmp/serving-cert-2055078694/serving-signer.crt, /tmp/serving-cert-2055078694/serving-signer.key\nI0502 16:58:02.474224       1 observer_polling.go:159] Starting file observer\nW0502 16:58:02.494148       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-234-243.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 16:58:02.494350       1 builder.go:271] check-endpoints version 4.13.0-202404250638.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0502 16:58:02.504290       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2055078694/tls.crt::/tmp/serving-cert-2055078694/tls.key"\nF0502 16:58:02.990332       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

periodic-ci-openshift-release-master-ci-4.16-e2e-aws-ovn-upgrade (all) - 8 runs, 13% failed, 500% of failures match = 63% impact
#1786596147347853312junit10 hours ago
May 04 04:27:26.878 - 8s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-109-163.us-west-2.compute.internal" not ready since 2024-05-04 04:27:13 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 04:27:35.267 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-109-163.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:27:27.437643       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:27:27.437901       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714796847 cert, and key in /tmp/serving-cert-3006894120/serving-signer.crt, /tmp/serving-cert-3006894120/serving-signer.key\nStaticPodsDegraded: I0504 04:27:27.730531       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:27:27.731513       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-109-163.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:27:27.731629       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 04:27:27.732202       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3006894120/tls.crt::/tmp/serving-cert-3006894120/tls.key"\nStaticPodsDegraded: F0504 04:27:28.083827       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 04:32:16.906 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-80-207.us-west-2.compute.internal" not ready since 2024-05-04 04:32:05 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786325637938548736junit28 hours ago
I0503 11:09:24.496288       1 observer_polling.go:159] Starting file observer
W0503 11:09:24.518556       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-109-9.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 11:09:24.518669       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786274208255315968junit31 hours ago
May 03 07:41:40.065 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-74-105.us-west-2.compute.internal" not ready since 2024-05-03 07:39:40 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 07:42:14.286 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-74-105.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 07:42:07.655953       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 07:42:07.656168       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714722127 cert, and key in /tmp/serving-cert-740884963/serving-signer.crt, /tmp/serving-cert-740884963/serving-signer.key\nStaticPodsDegraded: I0503 07:42:07.997461       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 07:42:07.999296       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-74-105.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 07:42:07.999516       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 07:42:08.000106       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-740884963/tls.crt::/tmp/serving-cert-740884963/tls.key"\nStaticPodsDegraded: F0503 07:42:08.266567       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 07:46:56.878 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-9-213.us-west-2.compute.internal" not ready since 2024-05-03 07:46:41 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786274208255315968junit31 hours ago
I0503 07:37:11.595111       1 observer_polling.go:159] Starting file observer
W0503 07:37:11.626567       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-55-199.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 07:37:11.626694       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786120805906649088junit41 hours ago
May 02 21:11:40.567 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-27-235.us-east-2.compute.internal" not ready since 2024-05-02 21:11:24 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-27-235.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 21:11:37.247745       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 21:11:37.248035       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714684297 cert, and key in /tmp/serving-cert-264055268/serving-signer.crt, /tmp/serving-cert-264055268/serving-signer.key\nStaticPodsDegraded: I0502 21:11:37.704691       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 21:11:37.723672       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-27-235.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 21:11:37.723825       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0502 21:11:37.743974       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-264055268/tls.crt::/tmp/serving-cert-264055268/tls.key"\nStaticPodsDegraded: F0502 21:11:37.945765       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 21:11:40.567 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-27-235.us-east-2.compute.internal" not ready since 2024-05-02 21:11:24 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-27-235.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 21:11:37.247745       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 21:11:37.248035       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714684297 cert, and key in /tmp/serving-cert-264055268/serving-signer.crt, /tmp/serving-cert-264055268/serving-signer.key\nStaticPodsDegraded: I0502 21:11:37.704691       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 21:11:37.723672       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-27-235.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 21:11:37.723825       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0502 21:11:37.743974       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-264055268/tls.crt::/tmp/serving-cert-264055268/tls.key"\nStaticPodsDegraded: F0502 21:11:37.945765       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 1 lines not shown

#1786049272941121536junit46 hours ago
I0502 16:42:35.743836       1 observer_polling.go:159] Starting file observer
W0502 16:42:35.758746       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-28-246.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:42:35.758906       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

periodic-ci-openshift-release-master-ci-4.16-e2e-aws-sdn-upgrade-out-of-change (all) - 7 runs, 29% failed, 200% of failures match = 57% impact
#1786596198484807680junit10 hours ago
May 04 04:32:17.189 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-59-217.us-west-2.compute.internal" not ready since 2024-05-04 04:32:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 04:32:30.901 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-59-217.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:32:27.626428       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:32:27.626845       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714797147 cert, and key in /tmp/serving-cert-3231680558/serving-signer.crt, /tmp/serving-cert-3231680558/serving-signer.key\nStaticPodsDegraded: I0504 04:32:28.278045       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:32:28.299861       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-59-217.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:32:28.299986       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 04:32:28.322278       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3231680558/tls.crt::/tmp/serving-cert-3231680558/tls.key"\nStaticPodsDegraded: F0504 04:32:28.531533       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 04:37:21.185 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-80-42.us-west-2.compute.internal" not ready since 2024-05-04 04:36:54 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-80-42.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:37:19.444556       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:37:19.444941       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714797439 cert, and key in /tmp/serving-cert-1873173958/serving-signer.crt, /tmp/serving-cert-1873173958/serving-signer.key\nStaticPodsDegraded: I0504 04:37:19.768859       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:37:19.770460       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-80-42.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:37:19.770605       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0504 04:37:19.771366       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1873173958/tls.crt::/tmp/serving-cert-1873173958/tls.key"\nStaticPodsDegraded: F0504 04:37:20.123810       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786325640446742528junit28 hours ago
I0503 10:44:38.692678       1 observer_polling.go:159] Starting file observer
W0503 10:44:38.713544       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-13-76.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:44:38.713688       1 builder.go:299] check-endpoints version 4.16.0-202404292211.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786198437088202752junit36 hours ago
I0503 02:31:25.349330       1 observer_polling.go:159] Starting file observer
W0503 02:31:25.368360       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-52-112.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 02:31:25.368504       1 builder.go:299] check-endpoints version 4.16.0-202404292211.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786049408190648320junit46 hours ago
I0502 16:35:53.760621       1 observer_polling.go:159] Starting file observer
W0502 16:35:53.774400       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-104.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:35:53.774592       1 builder.go:299] check-endpoints version 4.16.0-202404292211.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

periodic-ci-openshift-cluster-etcd-operator-release-4.14-periodics-e2e-aws-etcd-recovery (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#1786611774179512320junit10 hours ago
May 04 05:04:36.344 - 16s   E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/34776084-06ec-4f95-a1d9-49aec38e1311 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-bf6m5ylx-3534b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
May 04 05:04:52.344 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/ee689dd7-2c3f-4f66-bbdc-2b58327b3f35 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-bf6m5ylx-3534b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 54.89.52.190:6443: connect: connection refused
May 04 05:04:53.344 - 8s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/1ede0f83-9e7e-401f-82ea-ddb3b1bac1b7 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-bf6m5ylx-3534b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers

... 2 lines not shown

#1786249496775102464junit34 hours ago
May 03 05:05:46.641 - 9s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/07db2ce6-8e0a-47fb-b5cf-4e1153949b38 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-t6pc4k81-3534b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
May 03 05:05:55.642 - 2s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/d4f881e0-d6c4-4570-b1b4-bb39ea7e98bc backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-t6pc4k81-3534b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 44.232.199.144:6443: connect: connection refused
May 03 05:05:57.642 - 2s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/51d58a5a-1378-453c-b6d3-e35b77ad0615 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-t6pc4k81-3534b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers

... 2 lines not shown

periodic-ci-openshift-multiarch-master-nightly-4.17-ocp-e2e-upgrade-aws-ovn-arm64 (all) - 10 runs, 30% failed, 167% of failures match = 50% impact
#1786596215266217984junit10 hours ago
I0504 04:29:46.527794       1 observer_polling.go:159] Starting file observer
W0504 04:29:46.556828       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-54-209.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 04:29:46.556990       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786356852951355392junit26 hours ago
I0503 12:25:02.954515       1 observer_polling.go:159] Starting file observer
W0503 12:25:02.991033       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-21-10.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 12:25:02.991220       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786172814592577536junit38 hours ago
I0503 00:59:16.456403       1 observer_polling.go:159] Starting file observer
W0503 00:59:16.464680       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-57-9.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 00:59:16.464910       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786096821811023872junit43 hours ago
I0502 19:42:35.143614       1 observer_polling.go:159] Starting file observer
W0502 19:42:35.160458       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-70-201.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 19:42:35.160680       1 builder.go:299] check-endpoints version 4.17.0-202404300240.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786052283297959936junit46 hours ago
I0502 16:52:26.680520       1 observer_polling.go:159] Starting file observer
W0502 16:52:26.692456       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-65-205.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:52:26.692565       1 builder.go:299] check-endpoints version 4.17.0-202404300240.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

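The percentages in each job header are internally consistent: with 10 runs and 30% failed there are 3 failures, and a 167% match rate means 5 runs matched (passing runs can also match the search, so the rate can exceed 100%), giving 5/10 = 50% impact. A sketch of that arithmetic (Python; rounding to whole percent is an assumption inferred from the headers above):

```python
def header_stats(runs, failed_runs, matched_runs):
    """Recompute the 'X% failed, Y% of failures match = Z% impact' header figures."""
    pct = lambda num, den: round(100 * num / den)
    return (
        pct(failed_runs, runs),          # % of runs that failed
        pct(matched_runs, failed_runs),  # matches as % of failures (may exceed 100)
        pct(matched_runs, runs),         # impact: % of all runs that matched
    )
```

Checked against the headers in this report: 32 runs with 10 failures and 26 matches reproduces "31% failed, 260% of failures match = 81% impact".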
periodic-ci-openshift-release-master-ci-4.16-e2e-aws-ovn-upgrade-out-of-change (all) - 7 runs, 0% failed, 71% of runs match
#1786596376432349184junit10 hours ago
I0504 04:29:13.776083       1 observer_polling.go:159] Starting file observer
W0504 04:29:13.797714       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-116-76.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 04:29:13.797934       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786325479804899328junit28 hours ago
May 03 10:51:04.966 - 27s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-48-135.ec2.internal" not ready since 2024-05-03 10:49:04 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 10:51:32.324 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-48-135.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 10:51:24.060285       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 10:51:24.060514       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714733484 cert, and key in /tmp/serving-cert-4018023718/serving-signer.crt, /tmp/serving-cert-4018023718/serving-signer.key\nStaticPodsDegraded: I0503 10:51:24.410722       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 10:51:24.412078       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-48-135.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 10:51:24.412485       1 builder.go:299] check-endpoints version 4.16.0-202404292211.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 10:51:24.413307       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4018023718/tls.crt::/tmp/serving-cert-4018023718/tls.key"\nStaticPodsDegraded: F0503 10:51:24.557175       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 10:57:07.028 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-64-249.ec2.internal" not ready since 2024-05-03 10:55:06 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786325479804899328junit28 hours ago
May 03 11:03:12.394 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-59-164.ec2.internal" not ready since 2024-05-03 11:03:05 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 11:03:28.046 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-59-164.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 11:03:19.340737       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 11:03:19.341199       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714734199 cert, and key in /tmp/serving-cert-3394239179/serving-signer.crt, /tmp/serving-cert-3394239179/serving-signer.key\nStaticPodsDegraded: I0503 11:03:19.561275       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 11:03:19.562615       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-59-164.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 11:03:19.562749       1 builder.go:299] check-endpoints version 4.16.0-202404292211.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 11:03:19.563379       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3394239179/tls.crt::/tmp/serving-cert-3394239179/tls.key"\nStaticPodsDegraded: F0503 11:03:19.851086       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786198211745026048junit36 hours ago
May 03 02:22:34.341 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-50-250.ec2.internal" not ready since 2024-05-03 02:22:06 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-50-250.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 02:22:29.123469       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 02:22:29.123708       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714702949 cert, and key in /tmp/serving-cert-659472466/serving-signer.crt, /tmp/serving-cert-659472466/serving-signer.key\nStaticPodsDegraded: I0503 02:22:29.643795       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 02:22:29.653440       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-50-250.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 02:22:29.653604       1 builder.go:299] check-endpoints version 4.16.0-202404292211.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 02:22:29.670464       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-659472466/tls.crt::/tmp/serving-cert-659472466/tls.key"\nStaticPodsDegraded: F0503 02:22:29.965467       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 02:22:34.341 - 3s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-50-250.ec2.internal" not ready since 2024-05-03 02:22:06 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-50-250.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 02:22:29.123469       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 02:22:29.123708       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714702949 cert, and key in /tmp/serving-cert-659472466/serving-signer.crt, /tmp/serving-cert-659472466/serving-signer.key\nStaticPodsDegraded: I0503 02:22:29.643795       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 02:22:29.653440       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-50-250.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 02:22:29.653604       1 builder.go:299] check-endpoints version 4.16.0-202404292211.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 02:22:29.670464       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-659472466/tls.crt::/tmp/serving-cert-659472466/tls.key"\nStaticPodsDegraded: F0503 02:22:29.965467       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 2 lines not shown

#1786120870515707904junit41 hours ago
May 02 21:16:35.770 - 22s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-34-136.ec2.internal" not ready since 2024-05-02 21:14:35 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 21:16:58.250 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-34-136.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 21:16:51.354519       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 21:16:51.355323       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714684611 cert, and key in /tmp/serving-cert-4070146000/serving-signer.crt, /tmp/serving-cert-4070146000/serving-signer.key\nStaticPodsDegraded: I0502 21:16:51.726975       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 21:16:51.728323       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-34-136.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 21:16:51.728571       1 builder.go:299] check-endpoints version 4.16.0-202404292211.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0502 21:16:51.729393       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4070146000/tls.crt::/tmp/serving-cert-4070146000/tls.key"\nStaticPodsDegraded: F0502 21:16:52.039811       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 21:21:46.188 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-93-192.ec2.internal" not ready since 2024-05-02 21:21:21 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 5 lines not shown

#1786049300631916544junit45 hours ago
I0502 16:58:53.789572       1 observer_polling.go:159] Starting file observer
W0502 16:58:53.816536       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-106-46.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:58:53.816740       1 builder.go:299] check-endpoints version 4.16.0-202404292211.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

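The `check-endpoints version` strings repeated in these logs encode the release, a build timestamp, and a short commit, e.g. `4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-...`. A small parser sketch (Python; the field layout is inferred from the strings above, not from a published spec, and older `v4.0.0-alpha...` strings use a different shape):

```python
import re

# Inferred layout: <release>-<YYYYMMDDHHMM>.p0.g<short-sha>.assembly.<stream>...
VERSION_RE = re.compile(r'^(\d+\.\d+\.\d+)-(\d{12})\.p0\.g([0-9a-f]+)\.')

def parse_build_version(s):
    """Split a payload version string into (release, build timestamp, short sha).

    Returns None for strings that do not match the inferred layout.
    """
    m = VERSION_RE.match(s)
    return m.groups() if m else None
```

This makes it easy to confirm, for instance, that all the 4.17 runs above were built from the same commit (`1d9a2d0`) even though their build timestamps differ.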
periodic-ci-openshift-knative-eventing-kafka-broker-release-v1.11-412-test-reconciler-keda-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786615801868980224junit10 hours ago
# step graph.Run multi-stage test test-reconciler-keda-aws-412-c - test-reconciler-keda-aws-412-c-knative-must-gather container test
2a.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.132.178.231:6443: connect: connection refused
ClusterOperators:
#1786615801868980224junit10 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-1k8jn0by-d4b2a.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.132.178.231:6443: connect: connection refused

pull-ci-openshift-cluster-etcd-operator-master-e2e-aws-etcd-recovery (all) - 5 runs, 20% failed, 100% of failures match = 20% impact
#1786602420269223936junit10 hours ago
cause/Error code/1 reason/ContainerExit I0504 04:42:39.625627       1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
E0504 04:42:39.642502       1 cluster.go:181] "Failed to get API Group-Resources" err="Get \"https://api-int.ci-op-qqmncgxf-33625.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s\": dial tcp 10.0.65.213:6443: connect: connection refused" logger="CCCMOConfigSyncControllers"
E0504 04:42:39.642620       1 main.go:130] "unable to start manager" err="Get \"https://api-int.ci-op-qqmncgxf-33625.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s\": dial tcp 10.0.65.213:6443: connect: connection refused" logger="CCCMOConfigSyncControllers.setup"

... 1 line not shown

periodic-ci-openshift-knative-eventing-release-v1.14-412-test-reconciler-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786613033523482624junit11 hours ago
# step graph.Run multi-stage test test-reconciler-aws-412-c - test-reconciler-aws-412-c-knative-must-gather container test
2c.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 18.220.53.206:6443: connect: connection refused
ClusterOperators:
#1786613033523482624junit11 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-47f89ty7-36f2c.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 18.220.53.206:6443: connect: connection refused

periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-e2e-aws-ovn-heterogeneous-upgrade (all) - 11 runs, 9% failed, 400% of failures match = 36% impact
#1786595676767916032junit11 hours ago
I0504 04:55:37.620037       1 observer_polling.go:159] Starting file observer
W0504 04:55:37.633557       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-114-88.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 04:55:37.633701       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786317173250068480junit29 hours ago
I0503 10:08:54.499270       1 observer_polling.go:159] Starting file observer
W0503 10:08:54.516561       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-22-47.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:08:54.516690       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786293276312080384junit31 hours ago
May 03 08:26:36.023 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-36-112.ec2.internal" not ready since 2024-05-03 08:26:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 08:26:49.408 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-36-112.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 08:26:42.259944       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 08:26:42.260187       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714724802 cert, and key in /tmp/serving-cert-2939290898/serving-signer.crt, /tmp/serving-cert-2939290898/serving-signer.key\nStaticPodsDegraded: I0503 08:26:42.476965       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 08:26:42.478508       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-36-112.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 08:26:42.478620       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 08:26:42.479332       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2939290898/tls.crt::/tmp/serving-cert-2939290898/tls.key"\nStaticPodsDegraded: F0503 08:26:42.723508       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 08:32:18.032 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-116-213.ec2.internal" not ready since 2024-05-03 08:30:18 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786293276312080384junit31 hours ago
I0503 08:32:42.905112       1 observer_polling.go:159] Starting file observer
W0503 08:32:42.911779       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-116-213.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 08:32:42.911910       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786048969294483456junit47 hours ago
I0502 16:48:31.105788       1 observer_polling.go:159] Starting file observer
W0502 16:48:31.122820       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-113-33.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:48:31.122969       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

periodic-ci-openshift-multiarch-master-nightly-4.17-ocp-e2e-aws-ovn-heterogeneous-upgrade (all) - 9 runs, 0% failed, 44% of runs match
#1786596703625809920junit11 hours ago
I0504 04:51:21.154836       1 observer_polling.go:159] Starting file observer
W0504 04:51:21.173858       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-0-201.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 04:51:21.173956       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786324186231214080junit29 hours ago
I0503 10:38:20.239961       1 observer_polling.go:159] Starting file observer
W0503 10:38:20.254121       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-40-222.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:38:20.254281       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786298691917713408junit30 hours ago
I0503 09:09:12.456161       1 observer_polling.go:159] Starting file observer
W0503 09:09:12.470540       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-57-8.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 09:09:12.470657       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786173131178643456junit39 hours ago
May 03 00:59:07.041 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-121-208.us-east-2.compute.internal" not ready since 2024-05-03 00:58:43 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 00:59:16.337 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-121-208.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 00:59:09.768187       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 00:59:09.768438       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714697949 cert, and key in /tmp/serving-cert-2282341496/serving-signer.crt, /tmp/serving-cert-2282341496/serving-signer.key\nStaticPodsDegraded: I0503 00:59:09.998995       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 00:59:10.000662       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-208.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 00:59:10.000782       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 00:59:10.001493       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2282341496/tls.crt::/tmp/serving-cert-2282341496/tls.key"\nStaticPodsDegraded: F0503 00:59:10.105882       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786173131178643456junit39 hours ago
I0503 00:59:08.194078       1 observer_polling.go:159] Starting file observer
W0503 00:59:08.207130       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-208.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 00:59:08.207268       1 builder.go:299] check-endpoints version 4.17.0-202405020641.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
periodic-ci-openshift-hypershift-release-4.16-periodics-e2e-aws-ovn-conformance (all) - 71 runs, 18% failed, 8% of failures match = 1% impact
#1786596125499723776junit11 hours ago
time="2024-05-04T04:17:37Z" level=warning msg="config was nil" func=DecodeProvider
  May  4 04:17:37.602: INFO: error accessing microshift-version configmap: Get "https://a949cb7d24aec4cf3b55af4b9bdad383-fcf2209a1de6e472.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kube-public/configmaps/microshift-version": dial tcp 18.210.165.94:6443: connect: connection refused
error: Get "https://a949cb7d24aec4cf3b55af4b9bdad383-fcf2209a1de6e472.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kube-public/configmaps/microshift-version": dial tcp 18.210.165.94:6443: connect: connection refused

... 1 line not shown

periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade (all) - 6 runs, 67% failed, 125% of failures match = 83% impact
#1786572866863501312junit11 hours ago
May 04 03:39:27.405 E ns/openshift-ovn-kubernetes pod/ovnkube-node-gnwrp node/ip-10-0-157-78.ec2.internal uid/60221057-5b81-433c-af44-befe2113a17b container/ovn-acl-logging reason/ContainerExit code/1 cause/Error cat: /run/ovn/ovn-controller.pid: No such file or directory\n
May 04 03:39:31.435 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-157-78.ec2.internal node/ip-10-0-157-78.ec2.internal uid/ba000b03-32ec-4840-a51c-ede315f87722 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0504 03:39:30.198329       1 cmd.go:216] Using insecure, self-signed certificates\nI0504 03:39:30.258569       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714793970 cert, and key in /tmp/serving-cert-456469645/serving-signer.crt, /tmp/serving-cert-456469645/serving-signer.key\nI0504 03:39:30.785333       1 observer_polling.go:159] Starting file observer\nW0504 03:39:30.804170       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-157-78.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0504 03:39:30.804279       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0504 03:39:30.815500       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-456469645/tls.crt::/tmp/serving-cert-456469645/tls.key"\nF0504 03:39:30.925062       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 04 03:39:32.382 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-157-78.ec2.internal node/ip-10-0-157-78.ec2.internal uid/ba000b03-32ec-4840-a51c-ede315f87722 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0504 03:39:30.198329       1 cmd.go:216] Using insecure, self-signed certificates\nI0504 03:39:30.258569       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714793970 cert, and key in /tmp/serving-cert-456469645/serving-signer.crt, /tmp/serving-cert-456469645/serving-signer.key\nI0504 03:39:30.785333       1 observer_polling.go:159] Starting file observer\nW0504 03:39:30.804170       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-157-78.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0504 03:39:30.804279       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0504 03:39:30.815500       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-456469645/tls.crt::/tmp/serving-cert-456469645/tls.key"\nF0504 03:39:30.925062       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1786526482051371008junit14 hours ago
May 04 00:20:31.127 E ns/openshift-dns pod/dns-default-ng4gq node/ip-10-0-221-98.ec2.internal uid/c6efdd6f-b472-4fbb-b290-99a528f86be0 container/dns reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 04 00:20:32.164 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-221-98.ec2.internal node/ip-10-0-221-98.ec2.internal uid/e59c5176-85ec-469b-ba75-d2c0150abcbd container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0504 00:20:25.927303       1 cmd.go:216] Using insecure, self-signed certificates\nI0504 00:20:25.936778       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714782025 cert, and key in /tmp/serving-cert-768102821/serving-signer.crt, /tmp/serving-cert-768102821/serving-signer.key\nI0504 00:20:26.595198       1 observer_polling.go:159] Starting file observer\nW0504 00:20:26.604128       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-221-98.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0504 00:20:26.604263       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0504 00:20:26.604838       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-768102821/tls.crt::/tmp/serving-cert-768102821/tls.key"\nW0504 00:20:31.529733       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0504 00:20:31.529832       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
May 04 00:20:32.190 E ns/openshift-multus pod/network-metrics-daemon-tx88w node/ip-10-0-221-98.ec2.internal uid/517cb8a6-7622-4439-bfae-505346b51e48 container/network-metrics-daemon reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 3 lines not shown

#1786480165287628800junit17 hours ago
May 03 21:14:30.633 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-204-164.us-west-1.compute.internal uid/653d3b03-8431-401a-bd8b-700c41ce6ba4 container/alertmanager reason/ContainerExit code/1 cause/Error ts=2024-05-03T21:14:26.328Z caller=main.go:231 level=info msg="Starting Alertmanager" version="(version=0.24.0, branch=release-4.12, revision=914cad827e9a177b29b23e02eb48b4065da8dca2)"\nts=2024-05-03T21:14:26.328Z caller=main.go:232 level=info build_context="(go=go1.19.13 X:strictfipsruntime, user=root@06d38c1ed95c, date=20231101-03:59:15)"\nts=2024-05-03T21:14:26.556Z caller=cluster.go:680 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s\nts=2024-05-03T21:14:26.596Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/config_out/alertmanager.env.yaml\nts=2024-05-03T21:14:26.596Z caller=coordinator.go:118 level=error component=configuration msg="Loading configuration file failed" file=/etc/alertmanager/config_out/alertmanager.env.yaml err="open /etc/alertmanager/config_out/alertmanager.env.yaml: no such file or directory"\nts=2024-05-03T21:14:26.596Z caller=cluster.go:689 level=info component=cluster msg="gossip not settled but continuing anyway" polls=0 elapsed=40.1315ms\n
May 03 21:14:33.718 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-209-70.us-west-1.compute.internal node/ip-10-0-209-70.us-west-1.compute.internal uid/c845ea63-e466-4909-be7b-b50917a95386 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 21:14:31.526607       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 21:14:31.543644       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714770871 cert, and key in /tmp/serving-cert-3322206361/serving-signer.crt, /tmp/serving-cert-3322206361/serving-signer.key\nI0503 21:14:32.094896       1 observer_polling.go:159] Starting file observer\nW0503 21:14:32.132308       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-209-70.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 21:14:32.132434       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0503 21:14:32.132893       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3322206361/tls.crt::/tmp/serving-cert-3322206361/tls.key"\nF0503 21:14:32.326509       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 03 21:14:37.086 E ns/e2e-k8s-sig-apps-daemonset-upgrade-4247 pod/ds1-kmvfw node/ip-10-0-209-70.us-west-1.compute.internal uid/97dc6024-601c-4863-b7b8-d500a04ee4df container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 3 lines not shown

#1786434241026854912junit20 hours ago
May 03 18:06:51.285 E ns/openshift-multus pod/network-metrics-daemon-wq7fb node/ip-10-0-254-7.us-east-2.compute.internal uid/7990fcee-e93b-4202-8de3-2723876d8ef1 container/network-metrics-daemon reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 18:06:52.216 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-254-7.us-east-2.compute.internal node/ip-10-0-254-7.us-east-2.compute.internal uid/592f7853-7e71-4aff-b8ce-056ba3b6d376 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 18:06:50.756878       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 18:06:50.771648       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714759610 cert, and key in /tmp/serving-cert-3868059507/serving-signer.crt, /tmp/serving-cert-3868059507/serving-signer.key\nI0503 18:06:51.076237       1 observer_polling.go:159] Starting file observer\nW0503 18:06:51.091608       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-254-7.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 18:06:51.091734       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0503 18:06:51.092875       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3868059507/tls.crt::/tmp/serving-cert-3868059507/tls.key"\nF0503 18:06:51.606382       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 03 18:06:52.984 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator etcd is degraded

... 3 lines not shown

#1786387458158497792junit23 hours ago
May 03 15:09:58.535 E ns/openshift-ovn-kubernetes pod/ovnkube-master-cq57f node/ip-10-0-167-58.us-west-2.compute.internal uid/029ba247-dbd7-4a47-8078-15f7fbbab4b9 container/sbdb reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 15:10:01.556 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-58.us-west-2.compute.internal node/ip-10-0-167-58.us-west-2.compute.internal uid/8ba0e98a-d0ae-4c68-9403-5cdcde74e76c container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 15:09:56.671326       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 15:09:56.671564       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714748996 cert, and key in /tmp/serving-cert-611309390/serving-signer.crt, /tmp/serving-cert-611309390/serving-signer.key\nI0503 15:09:57.191722       1 observer_polling.go:159] Starting file observer\nW0503 15:09:57.208736       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-167-58.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 15:09:57.208902       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0503 15:09:57.227518       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-611309390/tls.crt::/tmp/serving-cert-611309390/tls.key"\nW0503 15:10:00.904136       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0503 15:10:00.904513       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
May 03 15:10:02.550 E ns/openshift-e2e-loki pod/loki-promtail-vjzxx node/ip-10-0-167-58.us-west-2.compute.internal uid/4c221ccf-a98d-4ab8-9f8b-9768d61eb3ca container/prod-bearer-token reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1786387458158497792junit23 hours ago
May 03 15:10:02.550 E ns/openshift-e2e-loki pod/loki-promtail-vjzxx node/ip-10-0-167-58.us-west-2.compute.internal uid/4c221ccf-a98d-4ab8-9f8b-9768d61eb3ca container/promtail reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 15:10:02.601 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-58.us-west-2.compute.internal node/ip-10-0-167-58.us-west-2.compute.internal uid/8ba0e98a-d0ae-4c68-9403-5cdcde74e76c container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 15:09:56.671326       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 15:09:56.671564       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714748996 cert, and key in /tmp/serving-cert-611309390/serving-signer.crt, /tmp/serving-cert-611309390/serving-signer.key\nI0503 15:09:57.191722       1 observer_polling.go:159] Starting file observer\nW0503 15:09:57.208736       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-167-58.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 15:09:57.208902       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0503 15:09:57.227518       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-611309390/tls.crt::/tmp/serving-cert-611309390/tls.key"\nW0503 15:10:00.904136       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0503 15:10:00.904513       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
May 03 15:10:03.590 E ns/openshift-multus pod/multus-additional-cni-plugins-p8vbx node/ip-10-0-167-58.us-west-2.compute.internal uid/6cac7b16-89e8-41bb-a5f9-d3cdad827e00 container/kube-multus-additional-cni-plugins reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
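Several of the `check-endpoints` crash-loops above are RBAC denials (`configmaps "extension-apiserver-authentication" is forbidden`), and the `requestheader_controller` log itself prints the suggested remediation. As a hedged sketch only, substituting the namespace and service account taken from the error message (the rolebinding name `check-endpoints-auth-reader` is a placeholder, not from the logs):

```shell
# Sketch of the remediation the requestheader_controller log suggests.
# The rolebinding name is arbitrary; the --role and --serviceaccount values
# come from the "forbidden" error above. Verify before applying to a cluster.
kubectl create rolebinding check-endpoints-auth-reader \
  -n kube-system \
  --role=extension-apiserver-authentication-reader \
  --serviceaccount=openshift-kube-apiserver:check-endpoints
```

Note that in these upgrade runs the denial appears transient (the container exits 255 while localhost:6443 is still refusing connections during the static pod rollover), so this is diagnostic context rather than a confirmed required fix.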
pull-ci-openshift-etcd-openshift-4.16-e2e-aws-ovn-upgrade (all) - 3 runs, 0% failed, 100% of runs match
#1786583287959916544junit11 hours ago
May 04 04:13:29.675 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-119-172.us-west-2.compute.internal" not ready since 2024-05-04 04:13:21 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 04:13:45.069 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-119-172.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:13:36.821452       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:13:36.821631       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714796016 cert, and key in /tmp/serving-cert-3501923946/serving-signer.crt, /tmp/serving-cert-3501923946/serving-signer.key\nStaticPodsDegraded: I0504 04:13:37.048165       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:13:37.049588       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-119-172.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:13:37.049743       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0504 04:13:37.050731       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3501923946/tls.crt::/tmp/serving-cert-3501923946/tls.key"\nStaticPodsDegraded: F0504 04:13:37.269422       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 04 04:18:58.162 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-42-49.us-west-2.compute.internal" not ready since 2024-05-04 04:18:37 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786583287959916544junit11 hours ago
I0504 04:13:35.508767       1 observer_polling.go:159] Starting file observer
W0504 04:13:35.517759       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-119-172.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 04:13:35.517887       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1786043347534614528junit47 hours ago
May 02 16:21:24.100 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-126-11.ec2.internal" not ready since 2024-05-02 16:19:23 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:21:57.620 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-126-11.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:21:50.583559       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:21:50.583831       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666910 cert, and key in /tmp/serving-cert-4194771952/serving-signer.crt, /tmp/serving-cert-4194771952/serving-signer.key\nStaticPodsDegraded: I0502 16:21:50.834485       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:21:50.836160       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-126-11.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:21:50.836302       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 16:21:50.838263       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4194771952/tls.crt::/tmp/serving-cert-4194771952/tls.key"\nStaticPodsDegraded: F0502 16:21:51.034604       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 16:27:13.092 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-119-202.ec2.internal" not ready since 2024-05-02 16:26:57 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786036895499685888junit2 days ago
cause/Error code/2 reason/ContainerExit rving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714660777\" (2024-05-02 13:39:36 +0000 UTC to 2025-05-02 13:39:36 +0000 UTC (now=2024-05-02 14:44:14.598092982 +0000 UTC))"
E0502 14:48:50.104968       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-vrhvxqs8-68bf6.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.77.239:6443: connect: connection refused
I0502 14:49:49.584590       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786036895499685888junit2 days ago
I0502 14:50:00.727816       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0502 14:54:21.101459       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-vrhvxqs8-68bf6.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.77.239:6443: connect: connection refused
I0502 14:54:54.065454       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
periodic-ci-openshift-multiarch-master-nightly-4.15-ocp-e2e-upgrade-aws-ovn-arm64 (all) - 6 runs, 0% failed, 50% of runs match
#1786586031210893312junit11 hours ago
May 04 04:10:54.531 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-114-15.us-west-1.compute.internal" not ready since 2024-05-04 04:08:54 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 04:11:23.315 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-114-15.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 04:11:15.251850       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 04:11:15.252255       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714795875 cert, and key in /tmp/serving-cert-3779253514/serving-signer.crt, /tmp/serving-cert-3779253514/serving-signer.key\nStaticPodsDegraded: I0504 04:11:16.994318       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 04:11:17.018554       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-114-15.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 04:11:17.018806       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0504 04:11:17.032936       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3779253514/tls.crt::/tmp/serving-cert-3779253514/tls.key"\nStaticPodsDegraded: F0504 04:11:17.727855       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 04:16:53.432 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-127-139.us-west-1.compute.internal" not ready since 2024-05-04 04:16:33 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786477843987828736junit19 hours ago
May 03 20:33:07.393 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-58-156.us-east-2.compute.internal" not ready since 2024-05-03 20:32:56 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 20:33:19.887 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-58-156.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 20:33:07.888573       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 20:33:07.888897       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714768387 cert, and key in /tmp/serving-cert-1312847480/serving-signer.crt, /tmp/serving-cert-1312847480/serving-signer.key\nStaticPodsDegraded: I0503 20:33:09.234471       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 20:33:09.250750       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-58-156.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 20:33:09.250919       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 20:33:09.279674       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1312847480/tls.crt::/tmp/serving-cert-1312847480/tls.key"\nStaticPodsDegraded: F0503 20:33:10.782366       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 20:38:24.361 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-116-156.us-east-2.compute.internal" not ready since 2024-05-03 20:36:24 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786477843987828736junit19 hours ago
May 03 20:44:27.521 - 21s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-74-142.us-east-2.compute.internal" not ready since 2024-05-03 20:44:26 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 20:44:49.266 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-74-142.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 20:44:38.117307       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 20:44:38.117642       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714769078 cert, and key in /tmp/serving-cert-2122609283/serving-signer.crt, /tmp/serving-cert-2122609283/serving-signer.key\nStaticPodsDegraded: I0503 20:44:39.253026       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 20:44:39.272984       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-74-142.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 20:44:39.273165       1 builder.go:299] check-endpoints version 4.15.0-202405021635.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 20:44:39.300346       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2122609283/tls.crt::/tmp/serving-cert-2122609283/tls.key"\nStaticPodsDegraded: F0503 20:44:39.925060       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786048957516877824junit47 hours ago
May 02 16:15:15.160 - 41s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-56-179.ec2.internal" not ready since 2024-05-02 16:13:15 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:15:56.570 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-56-179.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:15:45.226033       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:15:45.226320       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666545 cert, and key in /tmp/serving-cert-1452059338/serving-signer.crt, /tmp/serving-cert-1452059338/serving-signer.key\nStaticPodsDegraded: I0502 16:15:46.207932       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:15:46.251959       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-56-179.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:15:46.252195       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0502 16:15:46.303550       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1452059338/tls.crt::/tmp/serving-cert-1452059338/tls.key"\nStaticPodsDegraded: F0502 16:15:48.822655       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 16:20:58.171 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-116-53.ec2.internal" not ready since 2024-05-02 16:18:58 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

periodic-ci-openshift-knative-eventing-release-v1.12-412-test-e2e-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786596229539434496junit11 hours ago
# step graph.Run multi-stage test test-e2e-aws-412-c - test-e2e-aws-412-c-knative-must-gather container test
87.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 18.189.215.57:6443: connect: connection refused
ClusterOperators:
#1786596229539434496junit11 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-9n2qjtys-6fd87.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 18.189.215.57:6443: connect: connection refused
periodic-ci-openshift-knative-eventing-release-v1.12-412-test-conformance-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786591894139047936junit11 hours ago
# step graph.Run multi-stage test test-conformance-aws-412-c - test-conformance-aws-412-c-knative-must-gather container test
c1.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.133.200.147:6443: connect: connection refused
ClusterOperators:
#1786591894139047936junit11 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-xp9zw1fm-510c1.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.133.200.147:6443: connect: connection refused
periodic-ci-rh-ecosystem-edge-recert-main-4.16-e2e-aws-ovn-single-node-recert-serial (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#1786572776119734272junit12 hours ago
2024-05-04T02:34:09Z node/ip-10-0-88-48.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-bkqsnlnv-303b9.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-88-48.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2024-05-04T03:36:22Z node/ip-10-0-88-48.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-bkqsnlnv-303b9.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-88-48.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.13.158:6443: connect: connection refused
2024-05-04T03:36:22Z node/ip-10-0-88-48.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-bkqsnlnv-303b9.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-88-48.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.68.210:6443: connect: connection refused

... 14 lines not shown
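The FailedToUpdateLease lines above share one shape: an RFC 3339 timestamp, a `node/` identifier, a lease URL, and the transport error. A minimal sketch for splitting such a line into its parts (the regex is an assumption inferred from the lines shown here, not a format guaranteed by the CI tooling):

```python
import re

# Assumed shape, inferred from the FailedToUpdateLease excerpts above:
#   <timestamp> node/<node> - reason/FailedToUpdateLease <lease-url> - <error>
LEASE_RE = re.compile(
    r"^(?P<ts>\S+) node/(?P<node>\S+) - reason/FailedToUpdateLease "
    r"(?P<url>\S+) - (?P<err>.+)$"
)

def parse_lease_failure(line: str):
    """Return (timestamp, node, error detail) for a matching line, else None."""
    m = LEASE_RE.match(line)
    if m is None:
        return None
    return m.group("ts"), m.group("node"), m.group("err")
```

Grouping the parsed tuples by error detail makes it easy to separate `connection refused` blips (API server rollout) from `Client.Timeout exceeded` entries (slow or unreachable endpoint).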

#1786210342200676352junit36 hours ago
2024-05-03T04:04:37Z node/ip-10-0-47-49.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-th1mcvk0-303b9.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-49.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.0.111:6443: connect: connection refused
2024-05-03T04:04:37Z node/ip-10-0-47-49.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-th1mcvk0-303b9.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-49.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.112.107:6443: connect: connection refused

... 14 lines not shown

periodic-ci-openshift-knative-eventing-kafka-broker-release-v1.14-412-test-e2e-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786581576763576320junit12 hours ago
# step graph.Run multi-stage test test-e2e-aws-412-c - test-e2e-aws-412-c-knative-must-gather container test
r9-7317f.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.133.248.42:6443: connect: connection refused
ClusterOperators:
#1786581576763576320junit12 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-5mrj72r9-7317f.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.133.248.42:6443: connect: connection refused
periodic-ci-openshift-knative-eventing-release-v1.14-412-test-conformance-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786574781051572224junit12 hours ago
---
urceVersion=0": dial tcp 52.206.94.184:6443: connect: connection refused
E0504 03:08:50.033734     222 reflector.go:140] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.ci-op-f0sx2xg9-27621.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 52.206.94.184:6443: connect: connection refused

... 2 lines not shown

periodic-ci-openshift-knative-eventing-kafka-broker-release-v1.12-412-test-e2e-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786574781089320960junit12 hours ago
# step graph.Run multi-stage test test-e2e-aws-412-c - test-e2e-aws-412-c-knative-must-gather container test
w3-73925.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.143.106.81:6443: connect: connection refused
ClusterOperators:
#1786574781089320960junit12 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-cbz8qww3-73925.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.143.106.81:6443: connect: connection refused
periodic-ci-openshift-knative-eventing-release-v1.12-412-test-encryption-auth-e2e-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786587615814750208junit12 hours ago
# step graph.Run multi-stage test test-encryption-auth-e2e-aws-412-c - test-encryption-auth-e2e-aws-412-c-knative-must-gather container test
d0.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.134.182.147:6443: connect: connection refused
ClusterOperators:
#1786587615814750208junit12 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-8czd5ll4-661d0.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.134.182.147:6443: connect: connection refused
pull-ci-openshift-cluster-etcd-operator-master-e2e-aws-ovn-single-node (all) - 5 runs, 20% failed, 100% of failures match = 20% impact
#1786595973842079744junit12 hours ago
# step graph.Run multi-stage test e2e-aws-ovn-single-node - e2e-aws-ovn-single-node-gather-audit-logs container test
api?timeout=32s": dial tcp 157.254.217.104:6443: connect: connection refused
E0504 03:50:32.560546      28 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-5rsjt48n-61c58.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 157.254.217.104:6443: connect: connection refused

... 3 lines not shown

pull-ci-openshift-cluster-etcd-operator-master-e2e-aws-ovn-serial (all) - 5 runs, 20% failed, 100% of failures match = 20% impact
#1786595973808525312junit12 hours ago
# step graph.Run multi-stage test e2e-aws-ovn-serial - e2e-aws-ovn-serial-gather-audit-logs container test
dev.rhcloud.com:6443/api?timeout=32s": dial tcp 52.52.45.201:6443: connect: connection refused
E0504 03:50:39.694863      36 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-5rsjt48n-1993e.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 52.52.45.201:6443: connect: connection refused

... 3 lines not shown

periodic-ci-openshift-multiarch-master-nightly-4.13-upgrade-from-stable-4.12-ocp-e2e-aws-sdn-arm64 (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#1786555905291063296junit13 hours ago
May 04 01:54:51.514 E ns/openshift-multus pod/multus-additional-cni-plugins-qwrsp node/ip-10-0-243-25.ec2.internal uid/b6250de7-e93d-4e43-bda8-7a57059df031 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 04 01:55:07.573 E ns/openshift-sdn pod/sdn-controller-pjc6r node/ip-10-0-243-25.ec2.internal uid/cfe1ec53-011b-4d18-8050-6034008fdaf1 container/sdn-controller reason/ContainerExit code/2 cause/Error I0504 01:03:03.741873       1 server.go:27] Starting HTTP metrics server\nI0504 01:03:03.741971       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0504 01:03:03.745858       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7jwfrsvg-e6c5d.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.219.186:6443: connect: connection refused\nE0504 01:04:32.122647       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0504 01:05:08.722721       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7jwfrsvg-e6c5d.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.152.134:6443: connect: connection refused\n
May 04 01:55:07.573 E ns/openshift-sdn pod/sdn-controller-pjc6r node/ip-10-0-243-25.ec2.internal uid/cfe1ec53-011b-4d18-8050-6034008fdaf1 container/sdn-controller reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1786555905291063296junit13 hours ago
May 04 01:55:27.091 E ns/openshift-multus pod/multus-additional-cni-plugins-8phvq node/ip-10-0-152-74.ec2.internal uid/861caed5-2315-4d10-b93d-9872a2a49f3e container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 04 01:55:28.089 E ns/openshift-sdn pod/sdn-controller-7js59 node/ip-10-0-152-74.ec2.internal uid/5397bcbd-cb41-4465-a24a-bc9c1e3394f7 container/sdn-controller reason/ContainerExit code/2 cause/Error I0504 00:53:27.366472       1 server.go:27] Starting HTTP metrics server\nI0504 00:53:27.366566       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0504 01:02:05.042980       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0504 01:04:03.127565       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0504 01:04:52.265639       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7jwfrsvg-e6c5d.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.219.186:6443: connect: connection refused\nE0504 01:05:38.656004       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7jwfrsvg-e6c5d.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.219.186:6443: connect: connection refused\n
May 04 01:55:31.893 E ns/openshift-multus pod/multus-admission-controller-64f46858bf-dws9n node/ip-10-0-243-25.ec2.internal uid/b171e46c-4806-41de-aa40-ddcc05037b82 container/multus-admission-controller reason/ContainerExit code/137 cause/Error
#1786046166568275968junit46 hours ago
May 02 16:40:22.762 E ns/openshift-sdn pod/sdn-controller-5qlw7 node/ip-10-0-128-105.us-west-2.compute.internal uid/b6024a81-f33d-4714-811f-4f8034a1d959 container/sdn-controller reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 16:40:30.831 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-105.us-west-2.compute.internal node/ip-10-0-128-105.us-west-2.compute.internal uid/ac9ae640-241e-4256-bc55-27914d6c67e7 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 16:40:26.984943       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 16:40:26.992076       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714668026 cert, and key in /tmp/serving-cert-2576235865/serving-signer.crt, /tmp/serving-cert-2576235865/serving-signer.key\nI0502 16:40:28.683733       1 observer_polling.go:159] Starting file observer\nW0502 16:40:28.698947       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-128-105.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 16:40:28.699093       1 builder.go:271] check-endpoints version 4.13.0-202404250638.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0502 16:40:28.717251       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2576235865/tls.crt::/tmp/serving-cert-2576235865/tls.key"\nF0502 16:40:30.048452       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 16:40:33.853 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-105.us-west-2.compute.internal node/ip-10-0-128-105.us-west-2.compute.internal uid/ac9ae640-241e-4256-bc55-27914d6c67e7 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 16:40:26.984943       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 16:40:26.992076       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714668026 cert, and key in /tmp/serving-cert-2576235865/serving-signer.crt, /tmp/serving-cert-2576235865/serving-signer.key\nI0502 16:40:28.683733       1 observer_polling.go:159] Starting file observer\nW0502 16:40:28.698947       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-128-105.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 16:40:28.699093       1 builder.go:271] check-endpoints version 4.13.0-202404250638.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0502 16:40:28.717251       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2576235865/tls.crt::/tmp/serving-cert-2576235865/tls.key"\nF0502 16:40:30.048452       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

periodic-ci-openshift-knative-eventing-release-next-412-test-e2e-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786572267027697664junit13 hours ago
# step graph.Run multi-stage test test-e2e-aws-412-c - test-e2e-aws-412-c-knative-must-gather container test
verless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 18.221.235.220:6443: connect: connection refused
ClusterOperators:
#1786572267027697664junit13 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-wgxf7rmx-b2754.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 18.221.235.220:6443: connect: connection refused
periodic-ci-openshift-knative-eventing-kafka-broker-release-next-412-test-e2e-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786572270433472512junit13 hours ago
# step graph.Run multi-stage test test-e2e-aws-412-c - test-e2e-aws-412-c-knative-must-gather container test
ci-op-47pirik5-da782.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.136.79.3:6443: connect: connection refused
ClusterOperators:
#1786572270433472512junit13 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-47pirik5-da782.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.136.79.3:6443: connect: connection refused
periodic-ci-openshift-knative-serving-release-next-412-test-e2e-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786567231212097536junit13 hours ago
# step graph.Run multi-stage test test-e2e-aws-412-c - test-e2e-aws-412-c-knative-must-gather container test
verless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 18.219.140.185:6443: connect: connection refused
ClusterOperators:
#1786567231212097536junit13 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-fc5zgxqt-47e83.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 18.219.140.185:6443: connect: connection refused
pull-ci-openshift-cluster-ingress-operator-master-e2e-aws-ovn-upgrade (all) - 6 runs, 17% failed, 400% of failures match = 67% impact
#1786562828652515328junit13 hours ago
May 04 02:45:51.612 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-16-194.us-west-2.compute.internal" not ready since 2024-05-04 02:43:51 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 02:46:25.225 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-16-194.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 02:46:17.522449       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 02:46:17.522689       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714790777 cert, and key in /tmp/serving-cert-817385881/serving-signer.crt, /tmp/serving-cert-817385881/serving-signer.key\nStaticPodsDegraded: I0504 02:46:17.702029       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 02:46:17.703432       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-16-194.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 02:46:17.703582       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0504 02:46:17.704174       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-817385881/tls.crt::/tmp/serving-cert-817385881/tls.key"\nStaticPodsDegraded: F0504 02:46:17.986546       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 02:51:35.086 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-99-23.us-west-2.compute.internal" not ready since 2024-05-04 02:51:16 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786562828652515328junit13 hours ago
I0504 02:46:16.435088       1 observer_polling.go:159] Starting file observer
W0504 02:46:16.445840       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-16-194.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0504 02:46:16.446014       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1786519816375373824junit16 hours ago
May 03 23:43:30.672 - 30s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-57-245.ec2.internal" not ready since 2024-05-03 23:43:25 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 23:44:00.714 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-57-245.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 23:43:52.578937       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 23:43:52.579482       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714779832 cert, and key in /tmp/serving-cert-3046370597/serving-signer.crt, /tmp/serving-cert-3046370597/serving-signer.key\nStaticPodsDegraded: I0503 23:43:52.827376       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 23:43:52.828815       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-57-245.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 23:43:52.828936       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 23:43:52.829558       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3046370597/tls.crt::/tmp/serving-cert-3046370597/tls.key"\nStaticPodsDegraded: F0503 23:43:53.070638       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 23:49:12.433 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-108-237.ec2.internal" not ready since 2024-05-03 23:48:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786519816375373824junit16 hours ago
May 03 23:54:24.625 - 24s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-13-55.ec2.internal" not ready since 2024-05-03 23:52:24 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 23:54:49.476 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-13-55.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 23:54:38.775592       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 23:54:38.776305       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714780478 cert, and key in /tmp/serving-cert-2439940732/serving-signer.crt, /tmp/serving-cert-2439940732/serving-signer.key\nStaticPodsDegraded: I0503 23:54:39.366852       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 23:54:39.382952       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-13-55.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 23:54:39.383243       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 23:54:39.413465       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2439940732/tls.crt::/tmp/serving-cert-2439940732/tls.key"\nStaticPodsDegraded: F0503 23:54:39.543971       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786281839338459136junit32 hours ago
I0503 08:07:07.199141       1 observer_polling.go:159] Starting file observer
W0503 08:07:07.233015       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-183.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 08:07:07.233518       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786105399494053888junit43 hours ago
I0502 20:23:05.983628       1 observer_polling.go:159] Starting file observer
W0502 20:23:05.993643       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-114-254.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 20:23:05.993792       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

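The clusteroperator events in these runs carry their key facts as `condition/`, `reason/`, and `status/` tokens. A hedged sketch for pulling those fields out of an event line (the pattern is an assumption based on the excerpts in this report):

```python
import re

# Assumed token layout, taken from the clusteroperator event lines above:
#   clusteroperator/<op> condition/<cond> reason/<reason> status/<status> ...
COND_RE = re.compile(
    r"clusteroperator/(?P<op>\S+) condition/(?P<cond>\S+) "
    r"reason/(?P<reason>\S+) status/(?P<status>\S+)"
)

def parse_condition(line: str):
    """Extract (operator, condition, reason, status) from an event line, else None."""
    m = COND_RE.search(line)
    return m.groups() if m else None
```

Filtering for `("kube-apiserver", "Degraded", ..., "True")` tuples, for instance, isolates the NodeController_MasterNodesReady blips that dominate the upgrade jobs above.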
periodic-ci-openshift-knative-eventing-kafka-broker-release-next-412-test-conformance-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786567231170154496junit13 hours ago
# step graph.Run multi-stage test test-conformance-aws-412-c - test-conformance-aws-412-c-knative-must-gather container test
c9-7b86e.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.133.71.119:6443: connect: connection refused
ClusterOperators:
#1786567231170154496junit13 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-fyw63dc9-7b86e.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.133.71.119:6443: connect: connection refused
periodic-ci-openshift-knative-eventing-release-next-412-test-conformance-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786567231077879808junit13 hours ago
# step graph.Run multi-stage test test-conformance-aws-412-c - test-conformance-aws-412-c-knative-must-gather container test
27c0r5pk-eeca5.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.14.182.58:6443: connect: connection refused
ClusterOperators:
#1786567231077879808junit13 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-27c0r5pk-eeca5.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.14.182.58:6443: connect: connection refused
periodic-ci-openshift-knative-eventing-istio-release-next-412-e2e-tests-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786567231124017152junit14 hours ago
# step graph.Run multi-stage test e2e-tests-aws-412-c - e2e-tests-aws-412-c-knative-must-gather container test
b9.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 18.225.22.201:6443: connect: connection refused
ClusterOperators:
#1786567231124017152junit14 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-cpd54ct6-472b9.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 18.225.22.201:6443: connect: connection refused
periodic-ci-openshift-knative-eventing-release-next-412-test-encryption-auth-e2e-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786562449470656512junit14 hours ago
# step graph.Run multi-stage test test-encryption-auth-e2e-aws-412-c - test-encryption-auth-e2e-aws-412-c-knative-must-gather container test
//api.ci-op-rl2hrjkk-5d3a9.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.19.6.37:6443: connect: connection refused
ClusterOperators:
#1786562449470656512junit14 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-rl2hrjkk-5d3a9.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.19.6.37:6443: connect: connection refused
periodic-ci-rh-ecosystem-edge-recert-main-4.14-e2e-aws-ovn-single-node-recert-serial (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#1786545087094722560junit14 hours ago
2024-05-04T01:36:47Z node/ip-10-0-43-244.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xqllmkq6-fd652.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-244.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.73.215:6443: connect: connection refused
2024-05-04T01:36:47Z node/ip-10-0-43-244.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xqllmkq6-fd652.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-244.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.73.215:6443: connect: connection refused

... 14 lines not shown

#1786182658447904768junit38 hours ago
2024-05-03T01:30:27Z node/ip-10-0-5-214.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-v4k0vkx8-fd652.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-214.ec2.internal?timeout=10s - dial tcp 10.0.8.121:6443: connect: connection refused
2024-05-03T01:30:27Z node/ip-10-0-5-214.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-v4k0vkx8-fd652.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-214.ec2.internal?timeout=10s - dial tcp 10.0.70.197:6443: connect: connection refused

... 14 lines not shown

pull-ci-openshift-origin-master-e2e-aws-ovn-single-node-upgrade (all) - 14 runs, 86% failed, 92% of failures match = 79% impact
#1786527656196444160junit14 hours ago
2024-05-03T23:39:19Z node/ip-10-0-121-56.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i3t573d9-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-56.us-east-2.compute.internal?timeout=10s - unexpected EOF
2024-05-03T23:41:58Z node/ip-10-0-121-56.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i3t573d9-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-56.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.41.252:6443: connect: connection refused
2024-05-03T23:41:58Z node/ip-10-0-121-56.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i3t573d9-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-56.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.69.14:6443: connect: connection refused

... 4 lines not shown

#1786452943478722560junit19 hours ago
2024-05-03T18:47:41Z node/ip-10-0-72-14.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-q1xfq7k6-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-14.us-west-2.compute.internal?timeout=10s - unexpected EOF
2024-05-03T18:47:41Z node/ip-10-0-72-14.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-q1xfq7k6-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-14.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.112.175:6443: connect: connection refused
2024-05-03T18:50:31Z node/ip-10-0-72-14.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-q1xfq7k6-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-14.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.112.175:6443: connect: connection refused

... 5 lines not shown

#1786428427687956480junit21 hours ago
2024-05-03T17:14:25Z node/ip-10-0-111-209.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0fxi0iv1-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-209.us-east-2.compute.internal?timeout=10s - unexpected EOF
2024-05-03T17:17:12Z node/ip-10-0-111-209.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0fxi0iv1-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-209.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.16.133:6443: connect: connection refused
2024-05-03T17:17:12Z node/ip-10-0-111-209.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0fxi0iv1-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-209.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.86.44:6443: connect: connection refused

... 4 lines not shown

#1786427723632087040junit21 hours ago
I0503 17:08:34.713177       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 17:09:47.515692       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-566mfyvb-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.48.3:6443: connect: connection refused
I0503 17:10:14.999699       1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
#1786427723632087040junit21 hours ago
I0503 17:38:46.201210       1 observer_polling.go:159] Starting file observer
W0503 17:38:46.210652       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-58-59.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 17:38:46.210788       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786379816241467392junit24 hours ago
2024-05-03T13:53:18Z node/ip-10-0-51-93.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6yfy8r1t-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-93.ec2.internal?timeout=10s - unexpected EOF
2024-05-03T13:55:55Z node/ip-10-0-51-93.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6yfy8r1t-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-93.ec2.internal?timeout=10s - dial tcp 10.0.119.3:6443: connect: connection refused
2024-05-03T13:55:55Z node/ip-10-0-51-93.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6yfy8r1t-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-93.ec2.internal?timeout=10s - dial tcp 10.0.56.213:6443: connect: connection refused

... 4 lines not shown

#1786374508450418688junit24 hours ago
2024-05-03T13:41:24Z node/ip-10-0-94-152.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ttvp1lqn-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-94-152.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.4.166:6443: connect: connection refused
2024-05-03T13:41:24Z node/ip-10-0-94-152.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ttvp1lqn-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-94-152.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.4.166:6443: connect: connection refused

... 4 lines not shown

#1786274233505026048junit31 hours ago
2024-05-03T06:41:21Z node/ip-10-0-32-47.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fmv3l4wc-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-47.us-east-2.compute.internal?timeout=10s - unexpected EOF
2024-05-03T06:43:57Z node/ip-10-0-32-47.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fmv3l4wc-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-47.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.50.78:6443: connect: connection refused
2024-05-03T06:43:57Z node/ip-10-0-32-47.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fmv3l4wc-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-47.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.70.13:6443: connect: connection refused

... 4 lines not shown

#1786210716827521024junit35 hours ago
2024-05-03T02:45:44Z node/ip-10-0-60-82.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fmv3l4wc-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-82.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2024-05-03T02:47:36Z node/ip-10-0-60-82.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fmv3l4wc-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-82.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.26.211:6443: connect: connection refused
2024-05-03T02:47:36Z node/ip-10-0-60-82.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fmv3l4wc-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-82.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.26.211:6443: connect: connection refused

... 4 lines not shown

#1786121987580497920junit41 hours ago
I0502 21:28:21.518949       1 observer_polling.go:159] Starting file observer
W0502 21:28:21.521462       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-22-105.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 21:28:21.521627       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786121987580497920junit41 hours ago
2024-05-02T20:59:55Z node/ip-10-0-22-105.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-2mxfbf0c-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-105.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.3.202:6443: connect: connection refused
2024-05-02T20:59:55Z node/ip-10-0-22-105.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-2mxfbf0c-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-105.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.72.38:6443: connect: connection refused

... 4 lines not shown

#1786080816137244672junit44 hours ago
2024-05-02T18:10:28Z node/ip-10-0-9-62.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-bcy3sl37-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-62.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2024-05-02T18:13:28Z node/ip-10-0-9-62.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-bcy3sl37-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-62.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.37.180:6443: connect: connection refused
2024-05-02T18:13:28Z node/ip-10-0-9-62.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-bcy3sl37-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-62.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.77.246:6443: connect: connection refused

... 4 lines not shown

#1786056212882657280junit44 hours ago
2024-05-02T17:16:41Z node/ip-10-0-36-68.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5jzp0ir1-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-68.us-east-2.compute.internal?timeout=10s - unexpected EOF
2024-05-02T17:19:15Z node/ip-10-0-36-68.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5jzp0ir1-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-68.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.62.187:6443: connect: connection refused
2024-05-02T17:19:15Z node/ip-10-0-36-68.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5jzp0ir1-c3e3c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-68.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.83.177:6443: connect: connection refused

... 4 lines not shown

periodic-ci-rh-ecosystem-edge-recert-main-4.15-e2e-aws-ovn-single-node-recert-serial (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#1786523697415196672junit15 hours ago
2024-05-04T00:23:46Z node/ip-10-0-103-82.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-qppcmkb4-6ea59.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-82.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.69.229:6443: connect: connection refused
2024-05-04T00:23:46Z node/ip-10-0-103-82.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-qppcmkb4-6ea59.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-82.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.48.198:6443: connect: connection refused

... 14 lines not shown

#1786161476642279424junit39 hours ago
May 03 00:09:29.427 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity: failed to get current state of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity: Get "https://api-int.ci-op-9rzz18gk-6ea59.aws-2.ci.openshift.org:6443/apis/apps/v1/namespaces/openshift-network-node-identity/daemonsets/network-node-identity": dial tcp 10.0.125.191:6443: connect: connection refused
1 tests failed during this blip (2024-05-03 00:09:29.427236112 +0000 UTC m=+2850.455829412 to 2024-05-03 00:09:29.427236112 +0000 UTC m=+2850.455829412): [sig-cli] oc adm cluster-role-reapers [Serial][apigroup:authorization.openshift.io][apigroup:user.openshift.io] [Suite:openshift/conformance/serial] (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)

... 2 lines not shown

pull-ci-openshift-cluster-kube-apiserver-operator-release-4.15-e2e-aws-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1786523061864894464junit15 hours ago
May 04 00:02:55.672 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-49-2.ec2.internal" not ready since 2024-05-04 00:00:55 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 04 00:03:32.512 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-49-2.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0504 00:03:20.753997       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0504 00:03:20.754452       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714781000 cert, and key in /tmp/serving-cert-2025387145/serving-signer.crt, /tmp/serving-cert-2025387145/serving-signer.key\nStaticPodsDegraded: I0504 00:03:21.235200       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0504 00:03:21.269952       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-49-2.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0504 00:03:21.270251       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1931-gf9223be-f9223beff\nStaticPodsDegraded: I0504 00:03:21.290904       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2025387145/tls.crt::/tmp/serving-cert-2025387145/tls.key"\nStaticPodsDegraded: F0504 00:03:21.625889       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 04 00:08:40.523 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-102-253.ec2.internal" not ready since 2024-05-04 00:08:33 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
openshift-machine-config-operator-4343-nightly-4.16-e2e-aws-sdn-upgrade (all) - 10 runs, 10% failed, 1000% of failures match = 100% impact
#1786481535390584832junit16 hours ago
I0503 20:36:37.845711       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 20:36:39.513925       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-d37qr9th-9aa84.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.33.57:6443: connect: connection refused
I0503 20:36:54.719317       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206

... 2 lines not shown

#1786481537106055168junit17 hours ago
I0503 20:32:36.320725       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 20:36:53.873937       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-d37qr9th-1aa8f.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.11.147:6443: connect: connection refused
I0503 20:38:00.115825       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786481537106055168junit17 hours ago
I0503 20:38:32.119828       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 20:40:46.121866       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-d37qr9th-1aa8f.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.107.241:6443: connect: connection refused
I0503 20:41:02.597282       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786481554785046528junit17 hours ago
I0503 21:41:13.618453       1 observer_polling.go:159] Starting file observer
W0503 21:41:13.638131       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-112-230.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 21:41:13.638269       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786481552234909696junit17 hours ago
May 03 21:35:02.437 - 21s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-51-205.us-west-2.compute.internal" not ready since 2024-05-03 21:33:02 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:35:24.132 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-51-205.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:35:19.969233       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:35:19.969615       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714772119 cert, and key in /tmp/serving-cert-530490213/serving-signer.crt, /tmp/serving-cert-530490213/serving-signer.key\nStaticPodsDegraded: I0503 21:35:20.482533       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:35:20.497970       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-51-205.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:35:20.498120       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 21:35:20.523560       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-530490213/tls.crt::/tmp/serving-cert-530490213/tls.key"\nStaticPodsDegraded: F0503 21:35:20.830836       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 21:41:26.400 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-111-4.us-west-2.compute.internal" not ready since 2024-05-03 21:41:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786481542097276928junit17 hours ago
I0503 21:32:14.149874       1 observer_polling.go:159] Starting file observer
W0503 21:32:14.165482       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-1-190.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 21:32:14.165617       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786481547134636032junit17 hours ago
May 03 21:31:13.452 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-45-227.us-west-2.compute.internal" not ready since 2024-05-03 21:29:13 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:31:42.784 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-45-227.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:31:39.222436       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:31:39.225525       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714771899 cert, and key in /tmp/serving-cert-3052162405/serving-signer.crt, /tmp/serving-cert-3052162405/serving-signer.key\nStaticPodsDegraded: I0503 21:31:40.008521       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:31:40.017617       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-45-227.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:31:40.017744       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 21:31:40.035801       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3052162405/tls.crt::/tmp/serving-cert-3052162405/tls.key"\nStaticPodsDegraded: F0503 21:31:40.395838       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 21:37:20.442 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-18-57.us-west-2.compute.internal" not ready since 2024-05-03 21:35:20 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786481533721251840junit17 hours ago
I0503 20:26:14.982037       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1714767687\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714767687\" (2024-05-03 19:21:26 +0000 UTC to 2025-05-03 19:21:26 +0000 UTC (now=2024-05-03 20:26:14.982017003 +0000 UTC))"
E0503 20:30:37.564762       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-d37qr9th-c2218.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.127.86:6443: connect: connection refused
I0503 20:31:12.349099       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786481533721251840junit17 hours ago
I0503 21:43:47.432277       1 observer_polling.go:159] Starting file observer
W0503 21:43:47.450469       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-38-54.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 21:43:47.451059       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786481539580694528junit17 hours ago
May 03 21:39:07.846 - 26s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-70-123.ec2.internal" not ready since 2024-05-03 21:37:07 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:39:34.280 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-70-123.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:39:29.762319       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:39:29.764953       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714772369 cert, and key in /tmp/serving-cert-1158156952/serving-signer.crt, /tmp/serving-cert-1158156952/serving-signer.key\nStaticPodsDegraded: I0503 21:39:30.343038       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:39:30.363638       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-70-123.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:39:30.368084       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 21:39:30.391987       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1158156952/tls.crt::/tmp/serving-cert-1158156952/tls.key"\nStaticPodsDegraded: F0503 21:39:30.603329       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 21:45:16.206 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-24-158.ec2.internal" not ready since 2024-05-03 21:44:57 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786481549663801344junit17 hours ago
May 03 21:33:28.025 - 18s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-50-93.ec2.internal" not ready since 2024-05-03 21:31:28 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:33:46.478 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-50-93.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:33:43.966436       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:33:43.975815       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714772023 cert, and key in /tmp/serving-cert-1557189787/serving-signer.crt, /tmp/serving-cert-1557189787/serving-signer.key\nStaticPodsDegraded: I0503 21:33:44.534997       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:33:44.549440       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-50-93.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:33:44.549576       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 21:33:44.570015       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1557189787/tls.crt::/tmp/serving-cert-1557189787/tls.key"\nStaticPodsDegraded: F0503 21:33:44.769940       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 21:39:05.029 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-28-119.ec2.internal" not ready since 2024-05-03 21:37:05 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786481549663801344 junit 17 hours ago
May 03 21:45:05.118 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-125-198.ec2.internal" not ready since 2024-05-03 21:44:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:45:11.451 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-125-198.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:45:08.392037       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:45:08.392352       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714772708 cert, and key in /tmp/serving-cert-2106312528/serving-signer.crt, /tmp/serving-cert-2106312528/serving-signer.key\nStaticPodsDegraded: I0503 21:45:08.950444       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:45:08.966889       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-125-198.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:45:08.967047       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 21:45:08.995363       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2106312528/tls.crt::/tmp/serving-cert-2106312528/tls.key"\nStaticPodsDegraded: F0503 21:45:09.124221       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1786481544618053632 junit 17 hours ago
May 03 21:30:11.911 - 27s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-22-91.ec2.internal" not ready since 2024-05-03 21:28:11 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:30:39.148 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-22-91.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:30:35.555232       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:30:35.565116       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714771835 cert, and key in /tmp/serving-cert-1906957434/serving-signer.crt, /tmp/serving-cert-1906957434/serving-signer.key\nStaticPodsDegraded: I0503 21:30:36.070723       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:30:36.092523       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-22-91.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:30:36.092698       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 21:30:36.113287       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1906957434/tls.crt::/tmp/serving-cert-1906957434/tls.key"\nStaticPodsDegraded: F0503 21:30:36.457222       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 21:36:23.892 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-103-118.ec2.internal" not ready since 2024-05-03 21:36:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
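The matched interval lines above share one shape: a timestamp, an optional duration, a severity letter, the clusteroperator condition, and (in the message) the not-ready node and the time it stopped reporting. A minimal triage sketch follows; the regexes are inferred from the sample lines in this section, not from any documented interval format, so treat the field layout as an assumption.

```python
import re

# Pattern inferred from the interval lines in this report; not an official format.
INTERVAL_RE = re.compile(
    r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{3})"   # e.g. "May 03 21:39:05.029"
    r"(?: - (?P<dur>\S+))?\s+"                          # optional duration, e.g. "6s"
    r"(?P<sev>[EW]) clusteroperator/(?P<operator>\S+) "
    r"condition/(?P<cond>\S+) "
)

# The node name and not-ready timestamp embedded in NodeControllerDegraded messages.
NODE_RE = re.compile(
    r'node "(?P<node>[^"]+)" not ready since (?P<since>[0-9: +-]+ UTC)'
)

def parse_interval(line: str) -> dict:
    """Turn one matched interval line into a structured record for triage."""
    rec = INTERVAL_RE.match(line).groupdict()
    n = NODE_RE.search(line)
    if n:
        rec.update(n.groupdict())
    return rec

line = ('May 03 21:39:05.029 E clusteroperator/kube-apiserver '
        'condition/Degraded reason/NodeController_MasterNodesReady status/True '
        'NodeControllerDegraded: The master nodes not ready: node '
        '"ip-10-0-28-119.ec2.internal" not ready since 2024-05-03 21:37:05 +0000 UTC '
        'because NodeStatusUnknown (Kubelet stopped posting node status.)')
print(parse_interval(line))
```

Grouping the resulting records by node and reason makes it easy to see that these blips cluster around master reboots (NodeStatusUnknown) and CNI startup (KubeletNotReady).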
#1786481544618053632 junit 17 hours ago
I0503 21:36:26.218981       1 observer_polling.go:159] Starting file observer
W0503 21:36:26.229440       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-118.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 21:36:26.229597       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
pull-ci-openshift-ovn-kubernetes-release-4.14-4.14-upgrade-from-stable-4.13-local-gateway-e2e-aws-ovn-upgrade (all) - 3 runs, 33% failed, 200% of failures match = 67% impact
#1786478301104050176 junit 17 hours ago
May 03 21:32:54.732 - 26s   E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-156-157.us-west-1.compute.internal" not ready since 2024-05-03 21:30:54 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
May 03 21:45:39.687 - 5s    E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-217-220.us-west-1.compute.internal" not ready since 2024-05-03 21:45:22 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-217-220.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:45:34.424091       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:45:34.424564       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714772734 cert, and key in /tmp/serving-cert-3938857876/serving-signer.crt, /tmp/serving-cert-3938857876/serving-signer.key\nStaticPodsDegraded: I0503 21:45:34.811973       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:45:34.828959       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-217-220.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:45:34.829166       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1890-g2eab0f9-2eab0f9e2\nStaticPodsDegraded: I0503 21:45:34.846959       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3938857876/tls.crt::/tmp/serving-cert-3938857876/tls.key"\nStaticPodsDegraded: F0503 21:45:35.039298       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
#1786047988699762688 junit 45 hours ago
May 02 17:24:57.169 - 12s   E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-148-68.us-east-2.compute.internal" not ready since 2024-05-02 17:24:52 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?])
May 02 17:31:06.221 - 3s    E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-168-17.us-east-2.compute.internal" not ready since 2024-05-02 17:30:47 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-168-17.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 17:30:57.118561       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 17:30:57.118899       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714671057 cert, and key in /tmp/serving-cert-1875219698/serving-signer.crt, /tmp/serving-cert-1875219698/serving-signer.key\nStaticPodsDegraded: I0502 17:30:57.544923       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 17:30:57.560422       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-168-17.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 17:30:57.560555       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1890-g2eab0f9-2eab0f9e2\nStaticPodsDegraded: I0502 17:30:57.588724       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1875219698/tls.crt::/tmp/serving-cert-1875219698/tls.key"\nStaticPodsDegraded: F0502 17:30:57.783915       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
pull-ci-openshift-ovn-kubernetes-release-4.14-4.14-upgrade-from-stable-4.13-e2e-aws-ovn-upgrade (all) - 3 runs, 0% failed, 67% of runs match
#1786478301015969792 junit 17 hours ago
May 03 21:33:51.646 - 18s   E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-166-59.us-west-2.compute.internal" not ready since 2024-05-03 21:33:49 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?])
May 03 21:46:13.295 - 1s    E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-146-19.us-west-2.compute.internal" not ready since 2024-05-03 21:45:51 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-146-19.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:46:08.499456       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:46:08.499645       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714772768 cert, and key in /tmp/serving-cert-2701510505/serving-signer.crt, /tmp/serving-cert-2701510505/serving-signer.key\nStaticPodsDegraded: I0503 21:46:08.756764       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:46:08.759274       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-146-19.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:46:08.759390       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1890-g2eab0f9-2eab0f9e2\nStaticPodsDegraded: I0503 21:46:08.760159       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2701510505/tls.crt::/tmp/serving-cert-2701510505/tls.key"\nStaticPodsDegraded: F0503 21:46:09.101532       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
#1786047265496895488 junit 45 hours ago
May 02 17:02:16.188 - 638ms E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-133-159.us-west-1.compute.internal" not ready since 2024-05-02 17:01:57 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-133-159.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 17:02:09.155129       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 17:02:09.155407       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714669329 cert, and key in /tmp/serving-cert-3402544327/serving-signer.crt, /tmp/serving-cert-3402544327/serving-signer.key\nStaticPodsDegraded: I0502 17:02:09.447159       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 17:02:09.473013       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-133-159.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 17:02:09.473167       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1890-g2eab0f9-2eab0f9e2\nStaticPodsDegraded: I0502 17:02:09.492742       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3402544327/tls.crt::/tmp/serving-cert-3402544327/tls.key"\nStaticPodsDegraded: F0502 17:02:10.038444       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
May 02 17:14:22.952 - 6s    E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-188-83.us-west-1.compute.internal" not ready since 2024-05-02 17:14:02 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-188-83.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 17:14:13.100254       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 17:14:13.100677       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714670053 cert, and key in /tmp/serving-cert-391300014/serving-signer.crt, /tmp/serving-cert-391300014/serving-signer.key\nStaticPodsDegraded: I0502 17:14:13.684609       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 17:14:13.723022       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-188-83.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 17:14:13.723161       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1890-g2eab0f9-2eab0f9e2\nStaticPodsDegraded: I0502 17:14:13.781074       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-391300014/tls.crt::/tmp/serving-cert-391300014/tls.key"\nStaticPodsDegraded: F0502 17:14:14.297740       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:

... 1 lines not shown
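The percentages in the job headers compose consistently: "N% of failures match" is matching runs over failed runs, and "impact" is matching runs over total runs (which is why the match percentage can exceed 100% when passing runs also contain the symptom). A small sketch that recovers the approximate counts from a header line; the helper name and the rounding-back approach are illustrative assumptions, since the report only prints percentages:

```python
import re

def parse_job_header(header: str) -> dict:
    """Recover approximate run counts from a CI-search job header such as
    '... (all) - 3 runs, 33% failed, 200% of failures match = 67% impact'.
    Counts are rounded back from the printed percentages, so they are
    exact only for small run totals."""
    m = re.search(
        r"(\d+) runs, (\d+)% failed, (\d+)% of failures match = (\d+)% impact",
        header,
    )
    runs, pct_failed, pct_match, pct_impact = map(int, m.groups())
    failed = round(runs * pct_failed / 100)      # failed runs
    matching = round(failed * pct_match / 100)   # runs containing the symptom
    return {
        "runs": runs,
        "failed": failed,
        "matching": matching,
        "impact": round(100 * matching / runs),  # should reproduce pct_impact
    }

hdr = ("pull-ci-openshift-ovn-kubernetes-release-4.14-4.14-upgrade-from-"
       "stable-4.13-local-gateway-e2e-aws-ovn-upgrade (all) - 3 runs, "
       "33% failed, 200% of failures match = 67% impact")
print(parse_job_header(hdr))
```

For example, "19 runs, 5% failed, 1000% of failures match" decodes to 1 failed run but 10 matching runs, hence the 53% impact shown below.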

pull-ci-openshift-machine-config-operator-master-e2e-aws-ovn-upgrade (all) - 19 runs, 5% failed, 1000% of failures match = 53% impact
#1786497154190151680 junit 17 hours ago
I0503 22:33:16.051036       1 observer_polling.go:159] Starting file observer
W0503 22:33:16.059890       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-112-102.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 22:33:16.060019       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786478107385925632 junit 18 hours ago
I0503 20:10:25.126150       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0503 20:13:38.582712       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-72t040y4-1d1d3.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.17.30:6443: connect: connection refused
I0503 20:13:45.666783       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786478107385925632 junit 18 hours ago
I0503 21:14:02.949694       1 observer_polling.go:159] Starting file observer
W0503 21:14:02.958579       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-114-126.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 21:14:02.958722       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786477333062881280 junit 18 hours ago
I0503 21:17:07.208541       1 observer_polling.go:159] Starting file observer
W0503 21:17:07.224198       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-120-125.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 21:17:07.224322       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786434185431355392 junit 21 hours ago
I0503 18:24:18.569884       1 observer_polling.go:159] Starting file observer
W0503 18:24:18.586535       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-106-167.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 18:24:18.586678       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786142676551208960 junit 40 hours ago
I0502 23:08:46.352428       1 observer_polling.go:159] Starting file observer
W0502 23:08:46.358841       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-113-182.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 23:08:46.358975       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786139503665090560 junit 41 hours ago
I0502 22:37:03.237833       1 observer_polling.go:159] Starting file observer
W0502 22:37:03.252711       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-6-153.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 22:37:03.252825       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786084328443219968 junit 44 hours ago
I0502 19:07:28.396121       1 observer_polling.go:159] Starting file observer
W0502 19:07:28.405661       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-109-20.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 19:07:28.405801       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786077667527757824 junit 45 hours ago
May 02 18:38:44.404 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-12-16.us-east-2.compute.internal" not ready since 2024-05-02 18:38:35 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 18:38:56.546 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-12-16.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 18:38:49.542793       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 18:38:49.543339       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714675129 cert, and key in /tmp/serving-cert-1960740718/serving-signer.crt, /tmp/serving-cert-1960740718/serving-signer.key\nStaticPodsDegraded: I0502 18:38:49.821220       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 18:38:49.822747       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-12-16.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 18:38:49.822896       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 18:38:49.823550       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1960740718/tls.crt::/tmp/serving-cert-1960740718/tls.key"\nStaticPodsDegraded: F0502 18:38:50.184979       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 18:44:30.448 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-103-120.us-east-2.compute.internal" not ready since 2024-05-02 18:44:07 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786060917608288256 junit 47 hours ago
Gathering artifacts ...
E0502 16:45:50.312503      32 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-crphr7h3-1d1d3.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=5s": dial tcp 34.230.151.27:6443: connect: connection refused
E0502 16:45:50.316659      32 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-crphr7h3-1d1d3.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=5s": dial tcp 34.230.151.27:6443: connect: connection refused

... 1 lines not shown

#1786040393272397824 junit 47 hours ago
I0502 16:09:12.366252       1 observer_polling.go:159] Starting file observer
W0502 16:09:12.384119       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-110-41.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:09:12.384264       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

pull-ci-openshift-machine-config-operator-master-e2e-aws-ovn-upgrade-out-of-change (all) - 18 runs, 17% failed, 267% of failures match = 44% impact
#1786497154454392832 junit 17 hours ago
May 03 22:34:48.896 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-72-134.us-west-1.compute.internal" not ready since 2024-05-03 22:34:35 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 22:34:59.382 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-72-134.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 22:34:51.211914       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 22:34:51.212095       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714775691 cert, and key in /tmp/serving-cert-3775593289/serving-signer.crt, /tmp/serving-cert-3775593289/serving-signer.key\nStaticPodsDegraded: I0503 22:34:51.430802       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 22:34:51.432173       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-72-134.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 22:34:51.432296       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 22:34:51.432909       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3775593289/tls.crt::/tmp/serving-cert-3775593289/tls.key"\nStaticPodsDegraded: F0503 22:34:51.580835       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786497154454392832 junit 17 hours ago
I0503 22:23:29.788043       1 observer_polling.go:159] Starting file observer
W0503 22:23:29.805030       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-18-119.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 22:23:29.805186       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786477362418814976 junit 18 hours ago
May 03 21:23:23.296 - 43s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-44-24.ec2.internal" not ready since 2024-05-03 21:21:23 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:24:06.491 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-44-24.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:23:58.472301       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:23:58.472574       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714771438 cert, and key in /tmp/serving-cert-2159534694/serving-signer.crt, /tmp/serving-cert-2159534694/serving-signer.key\nStaticPodsDegraded: I0503 21:23:58.724477       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:23:58.726080       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-44-24.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:23:58.726236       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 21:23:58.726776       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2159534694/tls.crt::/tmp/serving-cert-2159534694/tls.key"\nStaticPodsDegraded: F0503 21:23:59.118682       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786477362418814976 junit 18 hours ago
I0503 21:23:56.611812       1 observer_polling.go:159] Starting file observer
W0503 21:23:56.622425       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-44-24.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 21:23:56.622549       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786478107440451584 junit 18 hours ago
May 03 21:07:15.283 - 11s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-10-207.us-west-1.compute.internal" not ready since 2024-05-03 21:07:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:07:26.930 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-10-207.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:07:18.417646       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:07:18.417845       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714770438 cert, and key in /tmp/serving-cert-2962215374/serving-signer.crt, /tmp/serving-cert-2962215374/serving-signer.key\nStaticPodsDegraded: I0503 21:07:18.694142       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:07:18.695787       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-207.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:07:18.695902       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 21:07:18.696514       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2962215374/tls.crt::/tmp/serving-cert-2962215374/tls.key"\nStaticPodsDegraded: F0503 21:07:18.932118       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 21:12:27.283 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-54-69.us-west-1.compute.internal" not ready since 2024-05-03 21:10:27 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786478107440451584 junit 18 hours ago
I0503 21:07:17.265960       1 observer_polling.go:159] Starting file observer
W0503 21:07:17.282691       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-207.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 21:07:17.282816       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786434185435549696 junit 21 hours ago
I0503 18:09:58.552405       1 observer_polling.go:159] Starting file observer
W0503 18:09:58.573713       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-104-170.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 18:09:58.573892       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786139506169090048 junit 41 hours ago
namespace/openshift-cloud-controller-manager node/ip-10-0-24-74.ec2.internal pod/aws-cloud-controller-manager-6685455797-95lpd uid/fd91c232-20d1-488c-bd36-82e7a5d881ee container/cloud-controller-manager restarted 1 times:
cause/Error code/2 reason/ContainerExit leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.4.198:6443: connect: connection refused
I0502 21:34:47.094585       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786139506169090048 junit 41 hours ago
I0502 21:44:19.012519       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0502 21:44:28.551604       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-2nvzvcj5-12f99.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.108.149:6443: connect: connection refused
I0502 21:44:33.828927       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786084328510328832 junit 45 hours ago
May 02 19:02:34.369 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-95-204.ec2.internal" not ready since 2024-05-02 19:00:34 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 19:03:02.554 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-95-204.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 19:02:55.514900       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 19:02:55.515160       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714676575 cert, and key in /tmp/serving-cert-3061690671/serving-signer.crt, /tmp/serving-cert-3061690671/serving-signer.key\nStaticPodsDegraded: I0502 19:02:55.832752       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 19:02:55.834312       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-95-204.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 19:02:55.834421       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 19:02:55.835066       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3061690671/tls.crt::/tmp/serving-cert-3061690671/tls.key"\nStaticPodsDegraded: F0502 19:02:56.086884       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 19:08:40.486 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-107-146.ec2.internal" not ready since 2024-05-02 19:08:26 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-107-146.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 19:08:38.595590       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 19:08:38.596141       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714676918 cert, and key in /tmp/serving-cert-1803115877/serving-signer.crt, /tmp/serving-cert-1803115877/serving-signer.key\nStaticPodsDegraded: I0502 19:08:39.025777       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 19:08:39.042117       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-107-146.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 19:08:39.042272       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 19:08:39.069991       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1803115877/tls.crt::/tmp/serving-cert-1803115877/tls.key"\nStaticPodsDegraded: F0502 19:08:39.338766       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 2 lines not shown

#1786077667573895168 junit 45 hours ago
I0502 17:35:12.210090       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0502 17:42:14.380603       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-932f561y-12f99.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.127.22:6443: connect: connection refused
I0502 17:42:58.877483       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786077667573895168 junit 45 hours ago
I0502 18:44:17.914938       1 observer_polling.go:159] Starting file observer
W0502 18:44:17.930749       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-104-101.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 18:44:17.930890       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786040393360478208 junit 47 hours ago
I0502 16:11:06.518742       1 observer_polling.go:159] Starting file observer
W0502 16:11:06.536638       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-90.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:11:06.536802       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-aws-sdn-upgrade (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#1786480170312404992 junit 18 hours ago
May 03 21:03:37.886 E ns/openshift-e2e-loki pod/loki-promtail-88ns8 node/ip-10-0-163-78.us-west-1.compute.internal uid/e2953a91-abfb-4f6f-bbd1-24cbd7587dc3 container/prod-bearer-token reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 21:03:44.907 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-163-78.us-west-1.compute.internal node/ip-10-0-163-78.us-west-1.compute.internal uid/5841f748-9af9-4b83-9321-8b4b62a7cc0b container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 21:03:40.079047       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 21:03:40.079401       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714770220 cert, and key in /tmp/serving-cert-2109671784/serving-signer.crt, /tmp/serving-cert-2109671784/serving-signer.key\nI0503 21:03:40.426579       1 observer_polling.go:159] Starting file observer\nW0503 21:03:40.431013       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-163-78.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 21:03:40.431153       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0503 21:03:40.436363       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2109671784/tls.crt::/tmp/serving-cert-2109671784/tls.key"\nW0503 21:03:43.821610       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0503 21:03:43.821654       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
May 03 21:03:45.912 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-163-78.us-west-1.compute.internal node/ip-10-0-163-78.us-west-1.compute.internal uid/5841f748-9af9-4b83-9321-8b4b62a7cc0b container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 21:03:40.079047       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 21:03:40.079401       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714770220 cert, and key in /tmp/serving-cert-2109671784/serving-signer.crt, /tmp/serving-cert-2109671784/serving-signer.key\nI0503 21:03:40.426579       1 observer_polling.go:159] Starting file observer\nW0503 21:03:40.431013       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-163-78.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 21:03:40.431153       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0503 21:03:40.436363       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2109671784/tls.crt::/tmp/serving-cert-2109671784/tls.key"\nW0503 21:03:43.821610       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0503 21:03:43.821654       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n

... 1 lines not shown

#1786341754841075712 junit 27 hours ago
May 03 11:46:35.000 - 1s    E ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new reason/DisruptionBegan ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new stopped responding to GET requests over new connections: Get "https://test-disruption-new-openshift-image-registry.apps.ci-op-5w121w9l-a3bb2.aws-2.ci.openshift.org/healthz": read tcp 10.131.76.3:46180->52.73.187.152:443: read: connection reset by peer
May 03 11:46:37.970 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-235-204.ec2.internal node/ip-10-0-235-204.ec2.internal uid/051eed2f-4482-44b4-be9d-8d5e217e8869 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 11:46:36.411293       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 11:46:36.426303       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714736796 cert, and key in /tmp/serving-cert-226721582/serving-signer.crt, /tmp/serving-cert-226721582/serving-signer.key\nI0503 11:46:36.778609       1 observer_polling.go:159] Starting file observer\nW0503 11:46:36.786636       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-235-204.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 11:46:36.786744       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0503 11:46:36.802965       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-226721582/tls.crt::/tmp/serving-cert-226721582/tls.key"\nF0503 11:46:37.151415       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 03 11:46:38.962 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-235-204.ec2.internal node/ip-10-0-235-204.ec2.internal uid/051eed2f-4482-44b4-be9d-8d5e217e8869 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 11:46:36.411293       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 11:46:36.426303       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714736796 cert, and key in /tmp/serving-cert-226721582/serving-signer.crt, /tmp/serving-cert-226721582/serving-signer.key\nI0503 11:46:36.778609       1 observer_polling.go:159] Starting file observer\nW0503 11:46:36.786636       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-235-204.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 11:46:36.786744       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0503 11:46:36.802965       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-226721582/tls.crt::/tmp/serving-cert-226721582/tls.key"\nF0503 11:46:37.151415       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

pull-ci-openshift-images-master-e2e-aws-upgrade (all) - 4 runs, 50% failed, 100% of failures match = 50% impact
#1786492346217533440 junit 18 hours ago
May 03 21:51:03.511 - 22s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-22-119.ec2.internal" not ready since 2024-05-03 21:49:03 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:51:25.731 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-22-119.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:51:17.310807       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:51:17.311009       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714773077 cert, and key in /tmp/serving-cert-3097198000/serving-signer.crt, /tmp/serving-cert-3097198000/serving-signer.key\nStaticPodsDegraded: I0503 21:51:17.739977       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:51:17.741583       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-22-119.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:51:17.742013       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 21:51:17.742862       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3097198000/tls.crt::/tmp/serving-cert-3097198000/tls.key"\nStaticPodsDegraded: F0503 21:51:17.951551       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 21:56:51.174 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-86-235.ec2.internal" not ready since 2024-05-03 21:56:45 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786492346217533440 junit 18 hours ago
E0503 20:53:45.041043       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-8hhz7fvw-18df9.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0503 20:54:40.761450       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-8hhz7fvw-18df9.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.81.81:6443: connect: connection refused
I0503 20:54:42.650682       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786450511038255104 junit 20 hours ago
May 03 19:14:26.102 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-116-53.us-east-2.compute.internal" not ready since 2024-05-03 19:14:18 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 19:14:40.619 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-116-53.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 19:14:32.721044       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 19:14:32.721227       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714763672 cert, and key in /tmp/serving-cert-1625124148/serving-signer.crt, /tmp/serving-cert-1625124148/serving-signer.key\nStaticPodsDegraded: I0503 19:14:32.966193       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 19:14:32.967758       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-116-53.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 19:14:32.967891       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 19:14:32.968466       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1625124148/tls.crt::/tmp/serving-cert-1625124148/tls.key"\nStaticPodsDegraded: F0503 19:14:33.246927       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786450511038255104 junit 20 hours ago
I0503 19:14:31.230191       1 observer_polling.go:159] Starting file observer
W0503 19:14:31.253072       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-116-53.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:14:31.253192       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
openshift-oc-1752-nightly-4.16-e2e-aws-sdn-upgrade (all) - 89 runs, 21% failed, 311% of failures match = 66% impact
#1786462226085842944 junit 18 hours ago
May 03 20:38:19.135 - 8s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-5-242.ec2.internal" not ready since 2024-05-03 20:38:10 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 20:38:27.335 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-5-242.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 20:38:23.854111       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 20:38:23.854414       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714768703 cert, and key in /tmp/serving-cert-1124086950/serving-signer.crt, /tmp/serving-cert-1124086950/serving-signer.key\nStaticPodsDegraded: I0503 20:38:24.383588       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 20:38:24.398767       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-5-242.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 20:38:24.399033       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 20:38:24.427823       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1124086950/tls.crt::/tmp/serving-cert-1124086950/tls.key"\nStaticPodsDegraded: F0503 20:38:24.755570       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 20:43:46.311 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-81-62.ec2.internal" not ready since 2024-05-03 20:43:37 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786462226085842944 junit 18 hours ago
May 03 20:49:47.311 - 2s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-85-180.ec2.internal" not ready since 2024-05-03 20:49:34 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 20:49:49.962 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-85-180.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 20:49:46.879231       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 20:49:46.879551       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714769386 cert, and key in /tmp/serving-cert-2923705806/serving-signer.crt, /tmp/serving-cert-2923705806/serving-signer.key\nStaticPodsDegraded: I0503 20:49:47.286298       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 20:49:47.286350       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-85-180.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 20:49:47.286527       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 20:49:47.318240       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2923705806/tls.crt::/tmp/serving-cert-2923705806/tls.key"\nStaticPodsDegraded: F0503 20:49:47.736674       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1786462116656451584 junit 18 hours ago
I0503 20:46:22.334022       1 observer_polling.go:159] Starting file observer
W0503 20:46:22.363946       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-5-84.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 20:46:22.364068       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786462120414547968 junit 18 hours ago
I0503 20:34:15.534997       1 observer_polling.go:159] Starting file observer
W0503 20:34:15.556828       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-0-32.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 20:34:15.556961       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786462117105242112 junit 18 hours ago
I0503 19:25:24.834101       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1714764024\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714764024\" (2024-05-03 18:20:24 +0000 UTC to 2025-05-03 18:20:24 +0000 UTC (now=2024-05-03 19:25:24.834080548 +0000 UTC))"
E0503 19:29:46.345627       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-lq00ym0t-fcad4.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.36.143:6443: connect: connection refused
E0503 19:30:17.464818       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-lq00ym0t-fcad4.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.36.143:6443: connect: connection refused

... 1 line not shown

#1786462118724243456 junit 18 hours ago
I0503 20:36:14.653408       1 observer_polling.go:159] Starting file observer
W0503 20:36:14.671346       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-25-101.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 20:36:14.671476       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786462122905964544 junit 18 hours ago
I0503 19:36:42.709008       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0503 19:38:45.017496       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-lq00ym0t-30ec7.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.109.204:6443: connect: connection refused
E0503 19:38:57.380218       1 reflector.go:147] k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172: Failed to watch *v1.ConfigMap: unknown (get configmaps)
#1786462122905964544 junit 18 hours ago
I0503 20:45:52.502540       1 observer_polling.go:159] Starting file observer
W0503 20:45:52.527678       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-27-150.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 20:45:52.527956       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786462117549838336 junit 18 hours ago
I0503 20:38:33.039909       1 observer_polling.go:159] Starting file observer
W0503 20:38:33.056923       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-181.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 20:38:33.057098       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786462118069932032 junit 18 hours ago
I0503 20:40:20.498623       1 observer_polling.go:159] Starting file observer
W0503 20:40:20.515003       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-0-34.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 20:40:20.515102       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786462248781221888 junit 18 hours ago
cause/Error code/2 reason/ContainerExit ent@1714763985\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714763985\" (2024-05-03 18:19:45 +0000 UTC to 2025-05-03 18:19:45 +0000 UTC (now=2024-05-03 19:24:31.220678531 +0000 UTC))"
E0503 19:29:57.802439       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-lq00ym0t-a3398.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.102.175:6443: connect: connection refused
I0503 19:30:24.940833       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786462248781221888 junit 18 hours ago
I0503 19:31:07.916526       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 19:33:28.425841       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-lq00ym0t-a3398.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.102.175:6443: connect: connection refused
I0503 19:33:50.759773       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786462125439324160 junit 18 hours ago
May 03 20:26:14.935 - 23s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-126-97.ec2.internal" not ready since 2024-05-03 20:24:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 20:26:38.870 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-126-97.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 20:26:35.858286       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 20:26:35.858551       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714767995 cert, and key in /tmp/serving-cert-2747256743/serving-signer.crt, /tmp/serving-cert-2747256743/serving-signer.key\nStaticPodsDegraded: I0503 20:26:36.295819       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 20:26:36.313457       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-126-97.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 20:26:36.313602       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 20:26:36.338078       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2747256743/tls.crt::/tmp/serving-cert-2747256743/tls.key"\nStaticPodsDegraded: F0503 20:26:36.603252       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 20:31:50.712 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-76-111.ec2.internal" not ready since 2024-05-03 20:31:50 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786462125439324160 junit 18 hours ago
May 03 20:37:56.136 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-43-168.ec2.internal" not ready since 2024-05-03 20:37:36 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 20:38:02.935 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-43-168.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 20:37:59.453418       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 20:37:59.453705       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714768679 cert, and key in /tmp/serving-cert-1142728119/serving-signer.crt, /tmp/serving-cert-1142728119/serving-signer.key\nStaticPodsDegraded: I0503 20:38:00.119665       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 20:38:00.130210       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-43-168.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 20:38:00.130401       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 20:38:00.147592       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1142728119/tls.crt::/tmp/serving-cert-1142728119/tls.key"\nStaticPodsDegraded: F0503 20:38:00.495852       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1786408101490987008 junit 21 hours ago
May 03 16:53:29.838 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-0-114.us-west-1.compute.internal" not ready since 2024-05-03 16:51:29 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 16:53:58.040 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-0-114.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 16:53:54.450525       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 16:53:54.454029       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714755234 cert, and key in /tmp/serving-cert-3836675357/serving-signer.crt, /tmp/serving-cert-3836675357/serving-signer.key\nStaticPodsDegraded: I0503 16:53:55.238689       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 16:53:55.254890       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-0-114.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 16:53:55.255045       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 16:53:55.280030       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3836675357/tls.crt::/tmp/serving-cert-3836675357/tls.key"\nStaticPodsDegraded: F0503 16:53:55.452777       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 16:59:16.840 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-15-220.us-west-1.compute.internal" not ready since 2024-05-03 16:57:16 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786408111460847616 junit 22 hours ago
May 03 16:53:40.221 - 25s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-91-94.us-west-2.compute.internal" not ready since 2024-05-03 16:51:40 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 16:54:05.438 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-91-94.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 16:54:00.877933       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 16:54:00.878703       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714755240 cert, and key in /tmp/serving-cert-3106330223/serving-signer.crt, /tmp/serving-cert-3106330223/serving-signer.key\nStaticPodsDegraded: I0503 16:54:01.680520       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 16:54:01.712045       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-91-94.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 16:54:01.712186       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 16:54:01.742676       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3106330223/tls.crt::/tmp/serving-cert-3106330223/tls.key"\nStaticPodsDegraded: F0503 16:54:02.029034       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 16:59:42.297 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-87-75.us-west-2.compute.internal" not ready since 2024-05-03 16:59:39 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786408102375985152 junit 22 hours ago
May 03 17:01:52.151 - 3s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-107-207.us-east-2.compute.internal" not ready since 2024-05-03 17:01:29 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 17:01:55.262 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-107-207.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 17:01:52.779103       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 17:01:52.779759       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714755712 cert, and key in /tmp/serving-cert-3950045608/serving-signer.crt, /tmp/serving-cert-3950045608/serving-signer.key\nStaticPodsDegraded: I0503 17:01:53.318598       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 17:01:53.338326       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-107-207.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 17:01:53.338478       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 17:01:53.368569       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3950045608/tls.crt::/tmp/serving-cert-3950045608/tls.key"\nStaticPodsDegraded: F0503 17:01:53.688514       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 17:08:02.117 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-25-48.us-east-2.compute.internal" not ready since 2024-05-03 17:07:52 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786408102375985152 junit 22 hours ago
I0503 17:01:53.318598       1 observer_polling.go:159] Starting file observer
W0503 17:01:53.338326       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-107-207.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 17:01:53.338478       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786408104745766912 junit 22 hours ago
May 03 16:47:09.628 - 8s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-67-79.us-west-1.compute.internal" not ready since 2024-05-03 16:46:59 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 16:47:17.788 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-67-79.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 16:47:12.663080       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 16:47:12.663304       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714754832 cert, and key in /tmp/serving-cert-3534625457/serving-signer.crt, /tmp/serving-cert-3534625457/serving-signer.key\nStaticPodsDegraded: I0503 16:47:13.183001       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 16:47:13.198489       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-67-79.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 16:47:13.198629       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 16:47:13.221396       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3534625457/tls.crt::/tmp/serving-cert-3534625457/tls.key"\nStaticPodsDegraded: F0503 16:47:13.462335       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 16:52:46.626 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-93-96.us-west-1.compute.internal" not ready since 2024-05-03 16:52:44 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786408104745766912 junit 22 hours ago
May 03 16:58:53.936 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-17-44.us-west-1.compute.internal" not ready since 2024-05-03 16:58:31 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 16:59:00.341 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-17-44.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 16:58:56.360283       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 16:58:56.360580       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714755536 cert, and key in /tmp/serving-cert-4089738444/serving-signer.crt, /tmp/serving-cert-4089738444/serving-signer.key\nStaticPodsDegraded: I0503 16:58:56.827371       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 16:58:56.844053       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-17-44.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 16:58:56.844215       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 16:58:56.870166       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4089738444/tls.crt::/tmp/serving-cert-4089738444/tls.key"\nStaticPodsDegraded: F0503 16:58:57.278884       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1786408108956848128 junit 22 hours ago
May 03 16:54:49.446 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-117-56.us-west-2.compute.internal" not ready since 2024-05-03 16:52:49 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 16:55:21.634 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-117-56.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 16:55:18.386013       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 16:55:18.386333       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714755318 cert, and key in /tmp/serving-cert-3771969312/serving-signer.crt, /tmp/serving-cert-3771969312/serving-signer.key\nStaticPodsDegraded: I0503 16:55:18.825503       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 16:55:18.846954       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-117-56.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 16:55:18.847182       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 16:55:18.870876       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3771969312/tls.crt::/tmp/serving-cert-3771969312/tls.key"\nStaticPodsDegraded: F0503 16:55:19.233424       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 17:01:14.048 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-69-205.us-west-2.compute.internal" not ready since 2024-05-03 17:00:53 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786408101923000320 junit 22 hours ago
I0503 15:53:12.047380       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 15:53:19.156768       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-6p7x3fib-c757d.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.104.26:6443: connect: connection refused
I0503 15:53:31.217946       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786408101923000320 junit 22 hours ago
I0503 16:56:14.005684       1 observer_polling.go:159] Starting file observer
W0503 16:56:14.025070       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-183.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 16:56:14.025382       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786408102837358592 junit 22 hours ago
May 03 16:47:12.608 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-40-59.ec2.internal" not ready since 2024-05-03 16:47:01 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 16:47:18.678 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-40-59.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 16:47:15.024464       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 16:47:15.024856       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714754835 cert, and key in /tmp/serving-cert-1642754272/serving-signer.crt, /tmp/serving-cert-1642754272/serving-signer.key\nStaticPodsDegraded: I0503 16:47:15.486142       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 16:47:15.501647       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-40-59.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 16:47:15.501856       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 16:47:15.536303       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1642754272/tls.crt::/tmp/serving-cert-1642754272/tls.key"\nStaticPodsDegraded: F0503 16:47:15.869389       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 16:52:54.107 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-118-132.ec2.internal" not ready since 2024-05-03 16:52:34 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786408102837358592junit22 hours ago
May 03 16:58:26.744 - 18s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-72-146.ec2.internal" not ready since 2024-05-03 16:56:26 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 16:58:44.858 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-72-146.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 16:58:40.899006       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 16:58:40.899556       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714755520 cert, and key in /tmp/serving-cert-2708757533/serving-signer.crt, /tmp/serving-cert-2708757533/serving-signer.key\nStaticPodsDegraded: I0503 16:58:41.327666       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 16:58:41.342910       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-72-146.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 16:58:41.343102       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 16:58:41.362641       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2708757533/tls.crt::/tmp/serving-cert-2708757533/tls.key"\nStaticPodsDegraded: F0503 16:58:41.716585       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1786408113134374912junit22 hours ago
I0503 15:46:49.344936       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0503 15:49:47.069177       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-6p7x3fib-a3398.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.29.243:6443: connect: connection refused
I0503 15:50:06.989755       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206

... 3 lines not shown

#1786408106423488512junit22 hours ago
May 03 16:51:40.236 - 1s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-76-184.ec2.internal" not ready since 2024-05-03 16:51:15 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 16:51:41.907 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-76-184.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 16:51:38.980570       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 16:51:38.988720       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714755098 cert, and key in /tmp/serving-cert-2695289944/serving-signer.crt, /tmp/serving-cert-2695289944/serving-signer.key\nStaticPodsDegraded: I0503 16:51:39.826225       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 16:51:39.844121       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-76-184.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 16:51:39.844239       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 16:51:39.875310       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2695289944/tls.crt::/tmp/serving-cert-2695289944/tls.key"\nStaticPodsDegraded: F0503 16:51:40.081341       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 16:56:56.150 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-69-251.ec2.internal" not ready since 2024-05-03 16:56:54 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786408103294537728junit22 hours ago
May 03 16:46:45.654 - 3s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-47-210.us-east-2.compute.internal" not ready since 2024-05-03 16:46:33 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 16:46:49.299 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-47-210.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 16:46:46.413995       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 16:46:46.414395       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714754806 cert, and key in /tmp/serving-cert-1045370119/serving-signer.crt, /tmp/serving-cert-1045370119/serving-signer.key\nStaticPodsDegraded: I0503 16:46:46.765958       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 16:46:46.781517       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-47-210.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 16:46:46.781694       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 16:46:46.800500       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1045370119/tls.crt::/tmp/serving-cert-1045370119/tls.key"\nStaticPodsDegraded: F0503 16:46:47.279180       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 16:52:39.650 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-49-33.us-east-2.compute.internal" not ready since 2024-05-03 16:52:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786408103294537728junit22 hours ago
I0503 16:46:46.765958       1 observer_polling.go:159] Starting file observer
W0503 16:46:46.781517       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-47-210.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 16:46:46.781694       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786365875561959424junit24 hours ago
May 03 14:03:38.502 - 2s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-60-19.us-west-2.compute.internal" not ready since 2024-05-03 14:03:23 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 14:03:40.627 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-60-19.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 14:03:36.941442       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 14:03:36.948605       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714745016 cert, and key in /tmp/serving-cert-3333480216/serving-signer.crt, /tmp/serving-cert-3333480216/serving-signer.key\nStaticPodsDegraded: I0503 14:03:37.359448       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 14:03:37.394595       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-60-19.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 14:03:37.394931       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 14:03:37.420567       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3333480216/tls.crt::/tmp/serving-cert-3333480216/tls.key"\nStaticPodsDegraded: F0503 14:03:37.897904       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 14:09:14.165 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-124-224.us-west-2.compute.internal" not ready since 2024-05-03 14:09:13 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786365885938667520junit24 hours ago
May 03 13:59:29.257 - 7s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-39-174.us-west-2.compute.internal" not ready since 2024-05-03 13:59:20 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 13:59:36.323 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-39-174.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 13:59:34.139900       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 13:59:34.146151       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714744774 cert, and key in /tmp/serving-cert-1432408431/serving-signer.crt, /tmp/serving-cert-1432408431/serving-signer.key\nStaticPodsDegraded: I0503 13:59:34.537026       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 13:59:34.552036       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-39-174.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 13:59:34.552169       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 13:59:34.574346       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1432408431/tls.crt::/tmp/serving-cert-1432408431/tls.key"\nStaticPodsDegraded: F0503 13:59:35.007971       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 14:05:48.260 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-45-127.us-west-2.compute.internal" not ready since 2024-05-03 14:05:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786366010933121024junit24 hours ago
I0503 14:10:20.268710       1 observer_polling.go:159] Starting file observer
W0503 14:10:20.281980       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-10.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 14:10:20.282162       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786365998379569152junit24 hours ago
May 03 13:58:23.161 - 5s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-3-206.us-west-1.compute.internal" not ready since 2024-05-03 13:57:58 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 13:58:28.338 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-3-206.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 13:58:24.750695       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 13:58:24.751110       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714744704 cert, and key in /tmp/serving-cert-3733371387/serving-signer.crt, /tmp/serving-cert-3733371387/serving-signer.key\nStaticPodsDegraded: I0503 13:58:25.635114       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 13:58:25.649105       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-3-206.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 13:58:25.649232       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 13:58:25.664809       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3733371387/tls.crt::/tmp/serving-cert-3733371387/tls.key"\nStaticPodsDegraded: F0503 13:58:25.954170       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 14:04:00.167 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-102-71.us-west-1.compute.internal" not ready since 2024-05-03 14:02:00 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786365998379569152junit24 hours ago
I0503 12:53:28.662041       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 12:57:06.878260       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-0h0q2dzw-36c0c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.31.211:6443: connect: connection refused
I0503 12:57:20.889092       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786365874651795456junit25 hours ago
May 03 13:54:24.093 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-50-158.us-west-2.compute.internal" not ready since 2024-05-03 13:52:24 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 13:54:52.172 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-50-158.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 13:54:47.785294       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 13:54:47.788579       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714744487 cert, and key in /tmp/serving-cert-741984477/serving-signer.crt, /tmp/serving-cert-741984477/serving-signer.key\nStaticPodsDegraded: I0503 13:54:48.216444       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 13:54:48.246325       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-50-158.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 13:54:48.246519       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 13:54:48.271807       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-741984477/tls.crt::/tmp/serving-cert-741984477/tls.key"\nStaticPodsDegraded: F0503 13:54:48.624944       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 14:00:46.048 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-1-202.us-west-2.compute.internal" not ready since 2024-05-03 14:00:31 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786365874651795456junit25 hours ago
May 03 14:06:08.607 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-67-215.us-west-2.compute.internal" not ready since 2024-05-03 14:04:08 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 14:06:36.663 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-67-215.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 14:06:31.747706       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 14:06:31.748135       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714745191 cert, and key in /tmp/serving-cert-2936852934/serving-signer.crt, /tmp/serving-cert-2936852934/serving-signer.key\nStaticPodsDegraded: I0503 14:06:32.313557       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 14:06:32.330414       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-67-215.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 14:06:32.330553       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 14:06:32.347540       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2936852934/tls.crt::/tmp/serving-cert-2936852934/tls.key"\nStaticPodsDegraded: F0503 14:06:32.731672       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786366017644007424junit25 hours ago
I0503 14:01:20.475646       1 observer_polling.go:159] Starting file observer
W0503 14:01:20.491837       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-108.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 14:01:20.492223       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786366003391762432junit25 hours ago
I0503 13:53:55.573725       1 observer_polling.go:159] Starting file observer
W0503 13:53:55.601583       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-53-112.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 13:53:55.601760       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786365878397308928junit25 hours ago
May 03 13:54:57.433 - 11s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-209.us-west-1.compute.internal" not ready since 2024-05-03 13:54:40 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 13:55:09.339 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-100-209.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 13:55:04.364278       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 13:55:04.364666       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714744504 cert, and key in /tmp/serving-cert-1757830746/serving-signer.crt, /tmp/serving-cert-1757830746/serving-signer.key\nStaticPodsDegraded: I0503 13:55:04.634875       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 13:55:04.646969       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-209.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 13:55:04.647105       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 13:55:04.658274       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1757830746/tls.crt::/tmp/serving-cert-1757830746/tls.key"\nStaticPodsDegraded: F0503 13:55:04.907368       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 14:00:38.442 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-117-173.us-west-1.compute.internal" not ready since 2024-05-03 13:58:38 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786365876010749952junit25 hours ago
I0503 14:08:04.638753       1 observer_polling.go:159] Starting file observer
W0503 14:08:04.654684       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-110-16.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 14:08:04.654954       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786365880909697024junit25 hours ago
May 03 13:52:03.477 - 25s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-60-228.us-east-2.compute.internal" not ready since 2024-05-03 13:50:03 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 13:52:29.422 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-60-228.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 13:52:26.800024       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 13:52:26.800400       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714744346 cert, and key in /tmp/serving-cert-2308924932/serving-signer.crt, /tmp/serving-cert-2308924932/serving-signer.key\nStaticPodsDegraded: I0503 13:52:27.381634       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 13:52:27.407807       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-60-228.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 13:52:27.407988       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 13:52:27.430202       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2308924932/tls.crt::/tmp/serving-cert-2308924932/tls.key"\nStaticPodsDegraded: F0503 13:52:27.810420       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 13:58:31.472 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-119-219.us-east-2.compute.internal" not ready since 2024-05-03 13:58:24 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown
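The blip entries above share a common shape: a timestamp, an optional `- <N>s` duration for interval events, a severity letter (E/W/I), a locator (`clusteroperator/... condition/... reason/...`), and a `status/` value. A minimal sketch of a parser for that shape follows; the regex and field names are assumptions inferred from the excerpts here, not an official openshift/origin monitor-event schema.

```python
import re

# Hypothetical pattern for monitor-event lines like:
#   "May 03 13:52:03.477 - 25s   E clusteroperator/kube-apiserver condition/Degraded ... status/True ..."
# Field names are illustrative assumptions, not a documented schema.
EVENT_RE = re.compile(
    r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{3})"  # timestamp, e.g. "May 03 13:52:03.477"
    r"(?: - (?P<dur>\d+)s)?"                           # optional blip duration in seconds
    r"\s+(?P<level>[EWI])\s+"                          # severity: Error / Warning / Info
    r"(?P<locator>\S+(?: \S+)*?)\s+"                   # locator tokens (lazy, up to status/)
    r"status/(?P<status>\w+)"                          # condition status, e.g. True/False
)

def parse_event(line):
    """Return a dict of event fields, or None if the line is not a monitor event."""
    m = EVENT_RE.search(line)
    return m.groupdict() if m else None

line = ("May 03 13:52:03.477 - 25s   E clusteroperator/kube-apiserver "
        "condition/Degraded reason/NodeController_MasterNodesReady status/True ...")
ev = parse_event(line)  # ev["dur"] == "25", ev["level"] == "E", ev["status"] == "True"
```

Grouping parsed events by locator and summing `dur` would give the total Degraded=True time per operator per run, which is how one might quantify whether these blips stay within the "not worried about Degraded=True blips for update tests yet" exception.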

#1786301551598374912junit29 hours ago
May 03 09:43:46.072 - 5s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-36-203.us-west-2.compute.internal" not ready since 2024-05-03 09:43:23 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 09:43:51.792 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-36-203.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 09:43:47.228002       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 09:43:47.228332       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714729427 cert, and key in /tmp/serving-cert-4008136106/serving-signer.crt, /tmp/serving-cert-4008136106/serving-signer.key\nStaticPodsDegraded: I0503 09:43:47.902251       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 09:43:47.922546       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-36-203.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 09:43:47.922655       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 09:43:47.947959       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4008136106/tls.crt::/tmp/serving-cert-4008136106/tls.key"\nStaticPodsDegraded: F0503 09:43:48.106452       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 09:49:43.882 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-74-58.us-west-2.compute.internal" not ready since 2024-05-03 09:47:43 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786301676135649280junit29 hours ago
May 03 09:40:34.750 - 27s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-35-51.us-west-1.compute.internal" not ready since 2024-05-03 09:38:34 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 09:41:01.840 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-35-51.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 09:40:56.849696       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 09:40:56.849958       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714729256 cert, and key in /tmp/serving-cert-378689060/serving-signer.crt, /tmp/serving-cert-378689060/serving-signer.key\nStaticPodsDegraded: I0503 09:40:57.294715       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 09:40:57.311638       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-35-51.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 09:40:57.311776       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 09:40:57.339855       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-378689060/tls.crt::/tmp/serving-cert-378689060/tls.key"\nStaticPodsDegraded: F0503 09:40:57.565796       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 09:46:46.749 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-125-196.us-west-1.compute.internal" not ready since 2024-05-03 09:46:43 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786301562037997568junit29 hours ago
E0503 08:42:12.155983       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-nm4f6zcb-4205c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0503 08:43:04.194672       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-nm4f6zcb-4205c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.91.138:6443: connect: connection refused
I0503 08:43:11.968983       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786301562037997568junit29 hours ago
I0503 09:41:49.911725       1 observer_polling.go:159] Starting file observer
W0503 09:41:49.927434       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-42-130.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 09:41:49.927738       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786301662709682176junit29 hours ago
May 03 09:41:04.144 - 27s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-16-144.us-west-1.compute.internal" not ready since 2024-05-03 09:41:02 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 09:41:31.499 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-16-144.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 09:41:27.100908       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 09:41:27.101393       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714729287 cert, and key in /tmp/serving-cert-378770786/serving-signer.crt, /tmp/serving-cert-378770786/serving-signer.key\nStaticPodsDegraded: I0503 09:41:27.597055       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 09:41:27.618990       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-16-144.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 09:41:27.619199       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 09:41:27.645665       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-378770786/tls.crt::/tmp/serving-cert-378770786/tls.key"\nStaticPodsDegraded: F0503 09:41:27.998615       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 09:47:34.643 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-85-10.us-west-1.compute.internal" not ready since 2024-05-03 09:47:27 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786301671094095872junit29 hours ago
May 03 09:46:11.219 - 8s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-115-237.us-west-1.compute.internal" not ready since 2024-05-03 09:46:01 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 09:46:19.990 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-115-237.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 09:46:15.108877       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 09:46:15.109251       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714729575 cert, and key in /tmp/serving-cert-1756300599/serving-signer.crt, /tmp/serving-cert-1756300599/serving-signer.key\nStaticPodsDegraded: I0503 09:46:15.742770       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 09:46:15.754476       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-115-237.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 09:46:15.754594       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 09:46:15.778444       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1756300599/tls.crt::/tmp/serving-cert-1756300599/tls.key"\nStaticPodsDegraded: F0503 09:46:16.011579       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 09:51:38.849 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-94-243.us-west-1.compute.internal" not ready since 2024-05-03 09:49:38 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786301552051359744junit29 hours ago
May 03 09:40:07.491 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-41-161.us-west-2.compute.internal" not ready since 2024-05-03 09:38:07 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 09:40:38.942 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-41-161.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 09:40:34.213492       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 09:40:34.214273       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714729234 cert, and key in /tmp/serving-cert-108069111/serving-signer.crt, /tmp/serving-cert-108069111/serving-signer.key\nStaticPodsDegraded: I0503 09:40:34.821558       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 09:40:34.845176       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-41-161.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 09:40:34.845312       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 09:40:34.880558       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-108069111/tls.crt::/tmp/serving-cert-108069111/tls.key"\nStaticPodsDegraded: F0503 09:40:35.170336       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 09:46:14.358 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-109-144.us-west-2.compute.internal" not ready since 2024-05-03 09:46:11 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786301673623261184junit29 hours ago
I0503 09:43:51.755237       1 observer_polling.go:159] Starting file observer
W0503 09:43:51.775056       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-110-58.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 09:43:51.775212       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786301559529803776junit29 hours ago
I0503 08:32:32.053854       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1714724863\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714724863\" (2024-05-03 07:27:42 +0000 UTC to 2025-05-03 07:27:42 +0000 UTC (now=2024-05-03 08:32:32.053836232 +0000 UTC))"
E0503 08:37:04.033127       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-nm4f6zcb-30ec7.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.6.175:6443: connect: connection refused
I0503 08:37:15.380019       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786301559529803776junit29 hours ago
I0503 09:40:10.851450       1 observer_polling.go:159] Starting file observer
W0503 09:40:10.867521       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-47-128.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 09:40:10.867741       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786301564554579968junit29 hours ago
May 03 09:36:49.188 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-97-34.us-west-2.compute.internal" not ready since 2024-05-03 09:34:49 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 09:37:18.596 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-97-34.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 09:37:13.543927       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 09:37:13.568145       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714729033 cert, and key in /tmp/serving-cert-2915075387/serving-signer.crt, /tmp/serving-cert-2915075387/serving-signer.key\nStaticPodsDegraded: I0503 09:37:14.290441       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 09:37:14.307094       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-97-34.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 09:37:14.307233       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 09:37:14.334448       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2915075387/tls.crt::/tmp/serving-cert-2915075387/tls.key"\nStaticPodsDegraded: F0503 09:37:14.614531       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 09:43:03.212 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-96-168.us-west-2.compute.internal" not ready since 2024-05-03 09:42:52 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786301678648037376junit29 hours ago
I0503 09:36:19.602997       1 observer_polling.go:159] Starting file observer
W0503 09:36:19.620288       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-29-145.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 09:36:19.620718       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786106957191450624junit41 hours ago
I0502 21:03:19.254897       1 observer_polling.go:159] Starting file observer
W0502 21:03:19.280426       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-122-176.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 21:03:19.280543       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786106960098103296junit41 hours ago
May 02 20:56:06.309 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-5-82.us-west-2.compute.internal" not ready since 2024-05-02 20:55:57 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 20:56:15.902 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-5-82.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 20:56:12.078779       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 20:56:12.079250       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714683372 cert, and key in /tmp/serving-cert-4274628746/serving-signer.crt, /tmp/serving-cert-4274628746/serving-signer.key\nStaticPodsDegraded: I0502 20:56:12.837020       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 20:56:12.847923       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-5-82.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 20:56:12.848034       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 20:56:12.861848       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4274628746/tls.crt::/tmp/serving-cert-4274628746/tls.key"\nStaticPodsDegraded: F0502 20:56:13.385520       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 21:02:06.328 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-81-192.us-west-2.compute.internal" not ready since 2024-05-02 21:01:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786106958562988032junit41 hours ago
May 02 20:58:13.026 - 26s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-28-103.us-west-2.compute.internal" not ready since 2024-05-02 20:56:12 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 20:58:39.845 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-28-103.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 20:58:36.015098       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 20:58:36.015420       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714683516 cert, and key in /tmp/serving-cert-1999071775/serving-signer.crt, /tmp/serving-cert-1999071775/serving-signer.key\nStaticPodsDegraded: I0502 20:58:36.678819       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 20:58:36.688306       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-28-103.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 20:58:36.688450       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 20:58:36.706546       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1999071775/tls.crt::/tmp/serving-cert-1999071775/tls.key"\nStaticPodsDegraded: F0502 20:58:36.989425       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 21:04:10.023 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-125-35.us-west-2.compute.internal" not ready since 2024-05-02 21:04:00 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786106961784213504junit42 hours ago
May 02 20:53:39.015 - 26s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-22-245.us-west-1.compute.internal" not ready since 2024-05-02 20:51:38 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 20:54:05.122 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-22-245.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 20:54:01.336075       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 20:54:01.336470       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714683241 cert, and key in /tmp/serving-cert-3760252304/serving-signer.crt, /tmp/serving-cert-3760252304/serving-signer.key\nStaticPodsDegraded: I0502 20:54:02.018209       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 20:54:02.034870       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-22-245.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 20:54:02.035022       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 20:54:02.061263       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3760252304/tls.crt::/tmp/serving-cert-3760252304/tls.key"\nStaticPodsDegraded: F0502 20:54:02.301032       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 20:59:19.011 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-127-209.us-west-1.compute.internal" not ready since 2024-05-02 20:58:58 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786106961784213504junit42 hours ago
May 02 21:05:00.161 - 4s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-73-94.us-west-1.compute.internal" not ready since 2024-05-02 21:04:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 21:05:05.117 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-73-94.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 21:05:01.598361       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 21:05:01.598692       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714683901 cert, and key in /tmp/serving-cert-1472672239/serving-signer.crt, /tmp/serving-cert-1472672239/serving-signer.key\nStaticPodsDegraded: I0502 21:05:02.157368       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 21:05:02.186483       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-73-94.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 21:05:02.186678       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 21:05:02.220916       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1472672239/tls.crt::/tmp/serving-cert-1472672239/tls.key"\nStaticPodsDegraded: F0502 21:05:02.607206       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786106956738465792junit42 hours ago
I0502 20:53:36.984078       1 observer_polling.go:159] Starting file observer
W0502 20:53:37.001382       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-5-161.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 20:53:37.001523       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786106968516071424junit42 hours ago
I0502 20:59:36.475406       1 observer_polling.go:159] Starting file observer
W0502 20:59:36.489039       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-126-121.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 20:59:36.489200       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786106965970128896junit42 hours ago
I0502 20:00:56.341173       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0502 20:00:56.458549       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-8c5t02r4-4205c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.26.157:6443: connect: connection refused
I0502 20:00:57.840835       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786106965970128896junit42 hours ago
I0502 20:56:57.447097       1 observer_polling.go:159] Starting file observer
W0502 20:56:57.461437       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-113-38.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 20:56:57.461568       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786106963449352192junit42 hours ago
I0502 20:56:16.624790       1 observer_polling.go:159] Starting file observer
W0502 20:56:16.638500       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-113-251.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 20:56:16.638637       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786106957648629760junit42 hours ago
May 02 20:53:00.339 - 19s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-75-206.ec2.internal" not ready since 2024-05-02 20:52:53 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 20:53:19.784 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-75-206.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 20:53:17.562897       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 20:53:17.563275       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714683197 cert, and key in /tmp/serving-cert-3447408035/serving-signer.crt, /tmp/serving-cert-3447408035/serving-signer.key\nStaticPodsDegraded: I0502 20:53:18.165539       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 20:53:18.179704       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-75-206.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 20:53:18.179834       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 20:53:18.200438       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3447408035/tls.crt::/tmp/serving-cert-3447408035/tls.key"\nStaticPodsDegraded: F0502 20:53:18.375795       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 20:58:32.332 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-124-71.ec2.internal" not ready since 2024-05-02 20:56:32 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786106958105808896junit43 hours ago
# step graph.Run multi-stage test e2e-aws-sdn-upgrade-3 - e2e-aws-sdn-upgrade-3-gather-audit-logs container test
E0502 20:34:55.642650      33 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-8c5t02r4-4a71e.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 54.71.159.162:6443: connect: connection refused

... 3 lines not shown

#1786032147832770560junit46 hours ago
May 02 16:05:38.872 - 7s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-60-246.us-west-2.compute.internal" not ready since 2024-05-02 16:05:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:05:46.354 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-60-246.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:05:40.542513       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:05:40.542937       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714665940 cert, and key in /tmp/serving-cert-1353395211/serving-signer.crt, /tmp/serving-cert-1353395211/serving-signer.key\nStaticPodsDegraded: I0502 16:05:41.207326       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:05:41.233024       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-60-246.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:05:41.233166       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 16:05:41.258816       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1353395211/tls.crt::/tmp/serving-cert-1353395211/tls.key"\nStaticPodsDegraded: F0502 16:05:41.899886       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 16:11:21.480 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-104-141.us-west-2.compute.internal" not ready since 2024-05-02 16:11:03 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786032142816382976junit46 hours ago
May 02 16:02:10.514 - 24s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-30-90.us-west-1.compute.internal" not ready since 2024-05-02 16:00:10 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:02:35.042 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-30-90.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:02:32.170836       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:02:32.171097       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714665752 cert, and key in /tmp/serving-cert-2057944954/serving-signer.crt, /tmp/serving-cert-2057944954/serving-signer.key\nStaticPodsDegraded: I0502 16:02:32.445471       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:02:32.446918       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-30-90.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:02:32.447030       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 16:02:32.447621       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2057944954/tls.crt::/tmp/serving-cert-2057944954/tls.key"\nStaticPodsDegraded: F0502 16:02:32.698931       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 16:07:42.503 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-124-98.us-west-1.compute.internal" not ready since 2024-05-02 16:05:42 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786032142816382976junit46 hours ago
May 02 16:13:47.090 - 5s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-27-238.us-west-1.compute.internal" not ready since 2024-05-02 16:13:34 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:13:52.648 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-27-238.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:13:48.126904       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:13:48.127129       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666428 cert, and key in /tmp/serving-cert-4160273085/serving-signer.crt, /tmp/serving-cert-4160273085/serving-signer.key\nStaticPodsDegraded: I0502 16:13:48.914655       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:13:48.934282       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-27-238.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:13:48.934498       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 16:13:48.956626       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4160273085/tls.crt::/tmp/serving-cert-4160273085/tls.key"\nStaticPodsDegraded: F0502 16:13:49.267498       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786032145311993856junit46 hours ago
I0502 14:44:27.532271       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1714660747\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714660747\" (2024-05-02 13:39:07 +0000 UTC to 2025-05-02 13:39:07 +0000 UTC (now=2024-05-02 14:44:27.532249118 +0000 UTC))"
E0502 14:49:00.034981       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-j3dmflpz-30ec7.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.84.128:6443: connect: connection refused
I0502 14:50:03.332927       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786032145311993856junit46 hours ago
I0502 16:00:57.777643       1 observer_polling.go:159] Starting file observer
W0502 16:00:57.796693       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-173.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:00:57.796866       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786032138261368832junit47 hours ago
May 02 15:50:28.353 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-5-157.us-west-1.compute.internal" not ready since 2024-05-02 15:48:28 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 15:50:57.596 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-5-157.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 15:50:54.047301       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 15:50:54.047709       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714665054 cert, and key in /tmp/serving-cert-2220686838/serving-signer.crt, /tmp/serving-cert-2220686838/serving-signer.key\nStaticPodsDegraded: I0502 15:50:55.038293       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 15:50:55.067920       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-5-157.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 15:50:55.068145       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 15:50:55.090087       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2220686838/tls.crt::/tmp/serving-cert-2220686838/tls.key"\nStaticPodsDegraded: F0502 15:50:55.553761       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 15:56:06.335 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-75-103.us-west-1.compute.internal" not ready since 2024-05-02 15:54:06 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786032136910802944junit47 hours ago
I0502 15:57:41.321496       1 observer_polling.go:159] Starting file observer
W0502 15:57:41.336573       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-127-177.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 15:57:41.336740       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786032150336770048junit47 hours ago
May 02 15:49:37.182 - 7s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-18-93.us-west-2.compute.internal" not ready since 2024-05-02 15:49:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 15:49:44.913 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-18-93.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 15:49:41.717184       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 15:49:41.717568       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714664981 cert, and key in /tmp/serving-cert-21663045/serving-signer.crt, /tmp/serving-cert-21663045/serving-signer.key\nStaticPodsDegraded: I0502 15:49:42.193470       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 15:49:42.212623       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-18-93.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 15:49:42.212752       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 15:49:42.244357       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-21663045/tls.crt::/tmp/serving-cert-21663045/tls.key"\nStaticPodsDegraded: F0502 15:49:42.550095       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 15:55:26.598 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-8-227.us-west-2.compute.internal" not ready since 2024-05-02 15:55:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786032140266246144junit47 hours ago
May 02 15:49:11.514 - 27s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-14-132.us-east-2.compute.internal" not ready since 2024-05-02 15:47:11 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 15:49:38.964 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-14-132.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 15:49:35.505006       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 15:49:35.505424       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714664975 cert, and key in /tmp/serving-cert-829164825/serving-signer.crt, /tmp/serving-cert-829164825/serving-signer.key\nStaticPodsDegraded: I0502 15:49:35.865752       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 15:49:35.899740       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-14-132.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 15:49:35.899912       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 15:49:35.919599       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-829164825/tls.crt::/tmp/serving-cert-829164825/tls.key"\nStaticPodsDegraded: F0502 15:49:36.465594       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 15:54:58.510 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-78-102.us-east-2.compute.internal" not ready since 2024-05-02 15:52:58 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786032140266246144junit47 hours ago
I0502 15:49:35.865752       1 observer_polling.go:159] Starting file observer
W0502 15:49:35.899740       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-14-132.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 15:49:35.899912       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786032138722742272junit47 hours ago
May 02 15:46:54.416 - 25s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-62-38.us-east-2.compute.internal" not ready since 2024-05-02 15:44:54 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 15:47:19.610 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-62-38.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 15:47:17.185210       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 15:47:17.185562       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714664837 cert, and key in /tmp/serving-cert-3021605452/serving-signer.crt, /tmp/serving-cert-3021605452/serving-signer.key\nStaticPodsDegraded: I0502 15:47:17.775120       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 15:47:17.792119       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-62-38.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 15:47:17.792274       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 15:47:17.817057       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3021605452/tls.crt::/tmp/serving-cert-3021605452/tls.key"\nStaticPodsDegraded: F0502 15:47:18.333971       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 15:52:41.410 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-107-84.us-east-2.compute.internal" not ready since 2024-05-02 15:50:41 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786032137409925120junit47 hours ago
I0502 15:52:14.505378       1 observer_polling.go:159] Starting file observer
W0502 15:52:14.524421       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-124-231.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 15:52:14.524595       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

openshift-oc-1752-ci-4.16-e2e-aws-ovn-upgrade (all) - 5 runs, 0% failed, 100% of runs match
#1786462218552872960junit18 hours ago
May 03 20:40:02.419 - 38s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-39-60.us-west-2.compute.internal" not ready since 2024-05-03 20:38:02 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 20:40:40.448 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-39-60.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 20:40:33.161088       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 20:40:33.161351       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714768833 cert, and key in /tmp/serving-cert-1276740147/serving-signer.crt, /tmp/serving-cert-1276740147/serving-signer.key\nStaticPodsDegraded: I0503 20:40:33.472949       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 20:40:33.474327       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-39-60.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 20:40:33.474454       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 20:40:33.475021       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1276740147/tls.crt::/tmp/serving-cert-1276740147/tls.key"\nStaticPodsDegraded: F0503 20:40:33.715497       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 20:46:26.404 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-6-177.us-west-2.compute.internal" not ready since 2024-05-03 20:46:16 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786462218552872960junit18 hours ago
I0503 20:40:31.273792       1 observer_polling.go:159] Starting file observer
W0503 20:40:31.286232       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-39-60.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 20:40:31.286498       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786408203731341312junit22 hours ago
I0503 16:44:38.600340       1 observer_polling.go:159] Starting file observer
W0503 16:44:38.627481       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-73-99.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 16:44:38.627638       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786365979068993536junit24 hours ago
I0503 14:12:05.372196       1 observer_polling.go:159] Starting file observer
W0503 14:12:05.379356       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-26-30.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 14:12:05.379482       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786301655155740672junit28 hours ago
I0503 09:52:45.264722       1 observer_polling.go:159] Starting file observer
W0503 09:52:45.280590       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-122-39.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 09:52:45.280702       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786107064129425408junit41 hours ago
May 02 21:10:21.695 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-28-179.us-west-2.compute.internal" not ready since 2024-05-02 21:10:10 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 21:10:34.401 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-28-179.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 21:10:25.232995       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 21:10:25.233693       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714684225 cert, and key in /tmp/serving-cert-971626633/serving-signer.crt, /tmp/serving-cert-971626633/serving-signer.key\nStaticPodsDegraded: I0502 21:10:25.525796       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 21:10:25.527496       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-28-179.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 21:10:25.527603       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 21:10:25.528198       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-971626633/tls.crt::/tmp/serving-cert-971626633/tls.key"\nStaticPodsDegraded: F0502 21:10:25.854315       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786107064129425408junit41 hours ago
cause/Error code/2 reason/ContainerExit ient@1714679127\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714679127\" (2024-05-02 18:45:27 +0000 UTC to 2025-05-02 18:45:27 +0000 UTC (now=2024-05-02 19:50:30.059765861 +0000 UTC))"
E0502 19:54:55.336810       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-9fms6417-7d06c.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.101.100:6443: connect: connection refused
I0502 19:55:08.016457       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
openshift-machine-config-operator-4343-ci-4.16-e2e-aws-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1786481645285543936junit17 hours ago
May 03 21:30:34.188 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-103-49.us-east-2.compute.internal" not ready since 2024-05-03 21:30:24 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 21:30:47.262 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-103-49.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 21:30:39.243763       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 21:30:39.243969       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714771839 cert, and key in /tmp/serving-cert-4058023313/serving-signer.crt, /tmp/serving-cert-4058023313/serving-signer.key\nStaticPodsDegraded: I0503 21:30:39.445222       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 21:30:39.446784       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-49.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 21:30:39.446945       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 21:30:39.447613       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4058023313/tls.crt::/tmp/serving-cert-4058023313/tls.key"\nStaticPodsDegraded: F0503 21:30:39.591611       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 21:36:41.185 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-83-31.us-east-2.compute.internal" not ready since 2024-05-03 21:36:15 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786481645285543936junit17 hours ago
I0503 21:30:37.473398       1 observer_polling.go:159] Starting file observer
W0503 21:30:37.502974       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-49.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 21:30:37.503501       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
periodic-ci-openshift-release-master-ci-4.15-upgrade-from-stable-4.14-e2e-aws-ovn-upgrade (all) - 9 runs, 22% failed, 200% of failures match = 44% impact
#1786467038202433536junit18 hours ago
May 03 20:21:58.326 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-96-195.ec2.internal" not ready since 2024-05-03 20:21:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 20:22:05.320 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-96-195.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 20:22:01.126434       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 20:22:01.126637       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714767721 cert, and key in /tmp/serving-cert-3567943110/serving-signer.crt, /tmp/serving-cert-3567943110/serving-signer.key\nStaticPodsDegraded: I0503 20:22:01.452931       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 20:22:01.463451       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-96-195.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 20:22:01.463550       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1929-gf5c5a60-f5c5a609f\nStaticPodsDegraded: I0503 20:22:01.475079       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3567943110/tls.crt::/tmp/serving-cert-3567943110/tls.key"\nStaticPodsDegraded: F0503 20:22:01.936599       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 20:27:15.554 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-96-224.ec2.internal" not ready since 2024-05-03 20:25:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786352424085098496junit25 hours ago
May 03 12:54:23.156 - 8s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-12-183.us-west-1.compute.internal" not ready since 2024-05-03 12:54:10 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 12:54:31.554 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-12-183.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 12:54:23.501165       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 12:54:23.501519       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714740863 cert, and key in /tmp/serving-cert-2119254920/serving-signer.crt, /tmp/serving-cert-2119254920/serving-signer.key\nStaticPodsDegraded: I0503 12:54:23.873560       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 12:54:23.885526       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-12-183.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 12:54:23.885748       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1929-gf5c5a60-f5c5a609f\nStaticPodsDegraded: I0503 12:54:23.898781       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2119254920/tls.crt::/tmp/serving-cert-2119254920/tls.key"\nStaticPodsDegraded: F0503 12:54:24.280023       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 12:59:51.152 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-103-204.us-west-1.compute.internal" not ready since 2024-05-03 12:57:51 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786295895172583424junit29 hours ago
May 03 09:50:10.102 - 39s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-57-21.us-east-2.compute.internal" not ready since 2024-05-03 09:48:10 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 09:50:49.499 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-57-21.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 09:50:41.892836       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 09:50:41.893178       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714729841 cert, and key in /tmp/serving-cert-1335936735/serving-signer.crt, /tmp/serving-cert-1335936735/serving-signer.key\nStaticPodsDegraded: I0503 09:50:42.379926       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 09:50:42.387510       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-57-21.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 09:50:42.387617       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1929-gf5c5a60-f5c5a609f\nStaticPodsDegraded: I0503 09:50:42.399484       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1335936735/tls.crt::/tmp/serving-cert-1335936735/tls.key"\nStaticPodsDegraded: W0503 09:50:44.886295       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nStaticPodsDegraded: F0503 09:50:44.886353       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786138220761714688junit40 hours ago
May 02 22:34:53.687 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-37-83.us-east-2.compute.internal" not ready since 2024-05-02 22:32:53 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 22:35:25.118 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-37-83.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 22:35:20.547005       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 22:35:20.547483       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714689320 cert, and key in /tmp/serving-cert-1526817757/serving-signer.crt, /tmp/serving-cert-1526817757/serving-signer.key\nStaticPodsDegraded: I0502 22:35:21.048890       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 22:35:21.056470       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-37-83.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 22:35:21.056654       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1929-gf5c5a60-f5c5a609f\nStaticPodsDegraded: I0502 22:35:21.069578       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1526817757/tls.crt::/tmp/serving-cert-1526817757/tls.key"\nStaticPodsDegraded: F0502 22:35:21.394271       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 22:41:04.672 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-125-220.us-east-2.compute.internal" not ready since 2024-05-02 22:40:58 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786138220761714688junit40 hours ago
May 02 22:47:00.299 - 11s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-55-33.us-east-2.compute.internal" not ready since 2024-05-02 22:46:52 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 22:47:12.068 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-55-33.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 22:47:08.666817       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 22:47:08.667220       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714690028 cert, and key in /tmp/serving-cert-2922169847/serving-signer.crt, /tmp/serving-cert-2922169847/serving-signer.key\nStaticPodsDegraded: I0502 22:47:09.035921       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 22:47:09.044198       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-55-33.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 22:47:09.044307       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1929-gf5c5a60-f5c5a609f\nStaticPodsDegraded: I0502 22:47:09.058762       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2922169847/tls.crt::/tmp/serving-cert-2922169847/tls.key"\nStaticPodsDegraded: F0502 22:47:09.255803       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
periodic-ci-openshift-multiarch-master-nightly-4.16-upgrade-from-stable-4.15-ocp-e2e-aws-ovn-heterogeneous-upgrade (all) - 11 runs, 18% failed, 450% of failures match = 82% impact
#1786475626979397632junit18 hours ago
I0503 21:03:31.284324       1 observer_polling.go:159] Starting file observer
W0503 21:03:31.304957       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-21-28.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 21:03:31.305089       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786382764384194560junit24 hours ago
I0503 15:01:26.732528       1 observer_polling.go:159] Starting file observer
W0503 15:01:26.743555       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-125-35.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 15:01:26.743681       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786317173128433664junit29 hours ago
I0503 10:51:31.241689       1 observer_polling.go:159] Starting file observer
W0503 10:51:31.263694       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-110-58.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:51:31.263850       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786293276500824064junit31 hours ago
I0503 09:02:46.020036       1 observer_polling.go:159] Starting file observer
W0503 09:02:46.031214       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-68-246.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 09:02:46.031322       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786263498334932992junit33 hours ago
May 03 07:04:37.055 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-36-5.ec2.internal" not ready since 2024-05-03 07:04:30 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 07:04:51.280 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-36-5.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 07:04:41.909103       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 07:04:41.912826       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714719881 cert, and key in /tmp/serving-cert-1488537369/serving-signer.crt, /tmp/serving-cert-1488537369/serving-signer.key\nStaticPodsDegraded: I0503 07:04:42.711554       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 07:04:42.719689       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-36-5.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 07:04:42.719811       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 07:04:42.739482       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1488537369/tls.crt::/tmp/serving-cert-1488537369/tls.key"\nStaticPodsDegraded: F0503 07:04:42.926427       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786263498334932992junit33 hours ago
I0503 07:04:42.711554       1 observer_polling.go:159] Starting file observer
W0503 07:04:42.719689       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-36-5.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 07:04:42.719811       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
#1786231717929947136junit35 hours ago
I0503 04:59:50.573930       1 observer_polling.go:159] Starting file observer
W0503 04:59:50.591305       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-107.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 04:59:50.591446       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786180134085070848junit38 hours ago
I0503 01:29:52.562557       1 observer_polling.go:159] Starting file observer
W0503 01:29:52.573217       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-117-240.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 01:29:52.573336       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786112617345978368junit43 hours ago
I0502 20:53:38.460865       1 observer_polling.go:159] Starting file observer
W0502 20:53:38.477385       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-0-131.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 20:53:38.477528       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

#1786048968984104960junit47 hours ago
I0502 16:57:03.487387       1 observer_polling.go:159] Starting file observer
W0502 16:57:03.502339       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-124-189.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:57:03.502455       1 builder.go:299] check-endpoints version 4.16.0-202405020546.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970

... 3 lines not shown

release-openshift-origin-installer-e2e-aws-upgrade-4.11-to-4.12-to-4.13-to-4.14-ci (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#1786440897559269376junit19 hours ago
May 03 20:59:02.863 - 62s   E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/4e83d302-fa3b-4c57-b411-4644936fdab7 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: error running request: 429 Too Many Requests: The apiserver is shutting down, please try again later.\n
May 03 21:00:04.863 - 3s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/a7fbcfd9-5b36-4670-af50-7e0ae316ff9a backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-z2c0wrkl-31f3b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 3.139.26.115:6443: connect: connection refused
May 03 21:00:07.864 - 17s   E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/4ad031bb-5b06-47bc-ba39-abfbfd705427 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-z2c0wrkl-31f3b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 3.139.26.115:6443: i/o timeout
#1786440897559269376junit19 hours ago
May 03 20:59:02.863 - 59s   E backend-disruption-name/kube-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/aa5404da-6361-41e3-aed5-50db206dd9ac backend-disruption-name/kube-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: error running request: 429 Too Many Requests: The apiserver is shutting down, please try again later.\n
May 03 21:00:01.863 - 7s    E backend-disruption-name/kube-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/9b514240-d281-4704-966d-cc5d04311065 backend-disruption-name/kube-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://api.ci-op-z2c0wrkl-31f3b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 3.139.26.115:6443: connect: connection refused
May 03 21:00:08.864 - 16s   E backend-disruption-name/kube-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/b2a6d5da-bafb-4fc7-bbad-fa9f38e12938 backend-disruption-name/kube-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://api.ci-op-z2c0wrkl-31f3b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 3.139.26.115:6443: i/o timeout
#1786078417209266176junit42 hours ago
May 02 20:57:42.114 - 60s   E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/103ceda3-0361-4484-abb6-866bebc5f078 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: error running request: 429 Too Many Requests: The apiserver is shutting down, please try again later.\n
May 02 20:58:43.114 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/7a25d854-ee8a-4d69-aad5-a70d27cf3c11 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-tvisgd2b-31f3b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 18.218.196.82:6443: connect: connection refused
May 02 20:58:44.114 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/0fb11885-bb7c-4800-8d75-37d41841dd5f backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: error running request: 429 Too Many Requests: The apiserver is shutting down, please try again later.\n

... 2 lines not shown

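The job header lines in this report (e.g. "10 runs, 10% failed, 1000% of failures match = 100% impact") all follow the same arithmetic: the impact percentage is the matching-run count over the total run count, which equals failed-rate times match-rate. The match rate is expressed relative to *failures*, so it can exceed 100% when passing runs also contain matching log lines. A minimal sketch of that computation (hypothetical helper, not part of the actual CI search tooling):

```python
def impact_pct(total_runs: int, failed_runs: int, matching_runs: int) -> int:
    """Reproduce the "= Z% impact" figure from a job header line.

    "X% failed"            = failed_runs / total_runs
    "Y% of failures match" = matching_runs / failed_runs (may exceed 100%
                             when non-failing runs also match the query)
    "Z% impact"            = matching_runs / total_runs
    """
    return round(100 * matching_runs / total_runs)

# 32 runs, 10 failed (31%), 26 matching (260% of failures) -> 81% impact
print(impact_pct(32, 10, 26))
# 10 runs, 1 failed (10%), 10 matching (1000% of failures) -> 100% impact
print(impact_pct(10, 1, 10))
```

Headers like "0% failed, 100% of runs match" use the same final ratio, just quoted against runs instead of failures since there are no failures to divide by.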
periodic-ci-openshift-release-master-ci-4.16-upgrade-from-stable-4.15-e2e-aws-ovn-upgrade (all) - 10 runs, 10% failed, 1000% of failures match = 100% impact
#1786455463139741696junit19 hours ago
I0503 19:38:16.071860       1 observer_polling.go:159] Starting file observer
W0503 19:38:16.094924       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-230.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:38:16.095096       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786455463034884096junit19 hours ago
I0503 19:47:54.977513       1 observer_polling.go:159] Starting file observer
W0503 19:47:54.995335       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-12-148.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:47:54.995513       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786455462766448640junit19 hours ago
I0503 19:46:46.883598       1 observer_polling.go:159] Starting file observer
W0503 19:46:46.893810       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-114-21.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:46:46.893934       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786455462963580928junit19 hours ago
May 03 19:53:20.713 - 16s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-104-111.ec2.internal" not ready since 2024-05-03 19:53:16 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 19:53:37.166 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-104-111.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 19:53:29.954433       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 19:53:29.954722       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714766009 cert, and key in /tmp/serving-cert-3865838185/serving-signer.crt, /tmp/serving-cert-3865838185/serving-signer.key\nStaticPodsDegraded: I0503 19:53:30.390584       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 19:53:30.392076       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-104-111.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 19:53:30.392180       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 19:53:30.392781       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3865838185/tls.crt::/tmp/serving-cert-3865838185/tls.key"\nStaticPodsDegraded: F0503 19:53:30.798680       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786455462963580928junit19 hours ago
I0503 19:53:28.957662       1 observer_polling.go:159] Starting file observer
W0503 19:53:28.970165       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-104-111.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:53:28.970335       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786455463185879040junit19 hours ago
I0503 19:52:11.398095       1 observer_polling.go:159] Starting file observer
W0503 19:52:11.418206       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-110-83.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:52:11.418340       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786455463097798656junit19 hours ago
May 03 19:34:07.320 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-31-1.us-east-2.compute.internal" not ready since 2024-05-03 19:33:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 19:34:16.590 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-31-1.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 19:34:10.743906       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 19:34:10.744164       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714764850 cert, and key in /tmp/serving-cert-435990150/serving-signer.crt, /tmp/serving-cert-435990150/serving-signer.key\nStaticPodsDegraded: I0503 19:34:11.106612       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 19:34:11.108100       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-31-1.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 19:34:11.108344       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 19:34:11.109027       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-435990150/tls.crt::/tmp/serving-cert-435990150/tls.key"\nStaticPodsDegraded: F0503 19:34:11.287245       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 19:39:44.322 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-92-22.us-east-2.compute.internal" not ready since 2024-05-03 19:37:44 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786455465270448128junit19 hours ago
I0503 19:49:11.619100       1 observer_polling.go:159] Starting file observer
W0503 19:49:11.645085       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-47-53.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:49:11.645326       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786455462867111936junit19 hours ago
I0503 19:46:27.927483       1 observer_polling.go:159] Starting file observer
W0503 19:46:27.942584       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-126-41.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:46:27.942754       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786455462913249280junit19 hours ago
I0503 19:53:45.991593       1 observer_polling.go:159] Starting file observer
W0503 19:53:46.011890       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-38-87.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:53:46.012005       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786455463001329664junit19 hours ago
I0503 19:43:03.030709       1 observer_polling.go:159] Starting file observer
W0503 19:43:03.041353       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-120-186.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:43:03.041561       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

periodic-ci-openshift-release-master-ci-4.16-upgrade-from-stable-4.15-e2e-aws-sdn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1786455462707728384junit19 hours ago
I0503 19:39:31.043775       1 observer_polling.go:159] Starting file observer
W0503 19:39:31.059888       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-172.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:39:31.060019       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

periodic-ci-openshift-release-master-ci-4.12-e2e-aws-sdn-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786454989636374528junit19 hours ago
May 03 19:15:19.434 E ns/openshift-multus pod/multus-additional-cni-plugins-9vmw9 node/ip-10-0-133-7.us-west-2.compute.internal uid/7f83cd8e-d49e-439d-93cb-9d755a140680 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
May 03 19:15:23.472 E ns/openshift-sdn pod/sdn-controller-rlmc5 node/ip-10-0-133-7.us-west-2.compute.internal uid/a6a32a27-4657-40cc-b26b-c472f208b4ee container/sdn-controller reason/ContainerExit code/2 cause/Error I0503 18:14:41.330500       1 server.go:27] Starting HTTP metrics server\nI0503 18:14:41.330616       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0503 18:25:55.938282       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-6c9ypgpk-ee891.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.187.162:6443: connect: connection refused\nE0503 18:26:40.352833       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-6c9ypgpk-ee891.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.187.162:6443: connect: connection refused\nE0503 18:27:35.020811       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-6c9ypgpk-ee891.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.212.116:6443: connect: connection refused\n
May 03 19:15:26.567 E ns/openshift-network-diagnostics pod/network-check-target-t26rw node/ip-10-0-162-135.us-west-2.compute.internal uid/d872c85c-6c7b-4128-be12-53f8f5fdcb21 container/network-check-target-container reason/ContainerExit code/2 cause/Error

... 2 lines not shown

periodic-ci-openshift-release-master-ci-4.13-e2e-aws-ovn-upgrade (all) - 3 runs, 0% failed, 100% of runs match
#1786435446289469440junit20 hours ago
May 03 18:27:37.655 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-vxltk node/ip-10-0-162-244.us-west-2.compute.internal uid/6b1185e4-c9cc-4bd7-9d09-01c2364ea906 container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 18:27:39.633 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-162-244.us-west-2.compute.internal node/ip-10-0-162-244.us-west-2.compute.internal uid/11774c48-fc01-41e0-9831-fc74c5362e1e container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 18:27:38.789338       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 18:27:38.797794       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714760858 cert, and key in /tmp/serving-cert-3486350673/serving-signer.crt, /tmp/serving-cert-3486350673/serving-signer.key\nI0503 18:27:39.084808       1 observer_polling.go:159] Starting file observer\nW0503 18:27:39.107918       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-162-244.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 18:27:39.108156       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0503 18:27:39.125176       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3486350673/tls.crt::/tmp/serving-cert-3486350673/tls.key"\nF0503 18:27:39.430538       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 03 18:27:40.167 - 1s    E ns/openshift-authentication route/oauth-openshift disruption/ingress-to-oauth-server connection/new reason/DisruptionBegan ns/openshift-authentication route/oauth-openshift disruption/ingress-to-oauth-server connection/new stopped responding to GET requests over new connections: Get "https://oauth-openshift.apps.ci-op-70xbjyzm-852f6.aws-2.ci.openshift.org/healthz": read tcp 10.131.116.24:39042->44.236.156.10:443: read: connection reset by peer
#1786435446289469440junit20 hours ago
May 03 18:27:47.532 E ns/openshift-ovn-kubernetes pod/ovnkube-master-8gjvj node/ip-10-0-162-244.us-west-2.compute.internal uid/ccc5e65d-fac3-4760-abae-dbee5fa219dc container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 18:27:47.553 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-162-244.us-west-2.compute.internal node/ip-10-0-162-244.us-west-2.compute.internal uid/11774c48-fc01-41e0-9831-fc74c5362e1e container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 18:27:38.789338       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 18:27:38.797794       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714760858 cert, and key in /tmp/serving-cert-3486350673/serving-signer.crt, /tmp/serving-cert-3486350673/serving-signer.key\nI0503 18:27:39.084808       1 observer_polling.go:159] Starting file observer\nW0503 18:27:39.107918       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-162-244.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 18:27:39.108156       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0503 18:27:39.125176       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3486350673/tls.crt::/tmp/serving-cert-3486350673/tls.key"\nF0503 18:27:39.430538       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 03 18:27:52.640 E ns/openshift-e2e-loki pod/loki-promtail-4bdxg node/ip-10-0-162-244.us-west-2.compute.internal uid/b756bca9-fac4-4f85-b87b-5291b47eb9ee container/promtail reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1786344426067464192junit26 hours ago
May 03 12:16:04.611 E ns/openshift-e2e-loki pod/loki-promtail-n4c72 node/ip-10-0-148-181.us-east-2.compute.internal uid/65e1e1ea-44d5-4f16-a199-a6e9d12cadec container/prod-bearer-token reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 12:16:04.664 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-148-181.us-east-2.compute.internal node/ip-10-0-148-181.us-east-2.compute.internal uid/289c6b7a-7f05-438d-98cd-50c02252cf40 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 12:16:00.583720       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 12:16:00.584121       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714738560 cert, and key in /tmp/serving-cert-532218687/serving-signer.crt, /tmp/serving-cert-532218687/serving-signer.key\nI0503 12:16:01.108402       1 observer_polling.go:159] Starting file observer\nW0503 12:16:01.145541       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-148-181.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 12:16:01.145672       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0503 12:16:01.182426       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-532218687/tls.crt::/tmp/serving-cert-532218687/tls.key"\nW0503 12:16:04.430052       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0503 12:16:04.430085       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
May 03 12:16:05.646 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-148-181.us-east-2.compute.internal node/ip-10-0-148-181.us-east-2.compute.internal uid/289c6b7a-7f05-438d-98cd-50c02252cf40 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 12:16:00.583720       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 12:16:00.584121       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714738560 cert, and key in /tmp/serving-cert-532218687/serving-signer.crt, /tmp/serving-cert-532218687/serving-signer.key\nI0503 12:16:01.108402       1 observer_polling.go:159] Starting file observer\nW0503 12:16:01.145541       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-148-181.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 12:16:01.145672       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0503 12:16:01.182426       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-532218687/tls.crt::/tmp/serving-cert-532218687/tls.key"\nW0503 12:16:04.430052       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0503 12:16:04.430085       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n

... 1 lines not shown

#1786060148360351744junit45 hours ago
May 02 17:44:06.332 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-qrgn8 node/ip-10-0-240-254.us-west-2.compute.internal uid/34c45932-5d7d-4998-ac65-eeed1e477912 container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 17:44:08.343 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-240-254.us-west-2.compute.internal node/ip-10-0-240-254.us-west-2.compute.internal uid/1d022fcb-563c-4bd8-98d5-ddbe17f71eaf container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 17:44:06.432114       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 17:44:06.432713       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714671846 cert, and key in /tmp/serving-cert-614820122/serving-signer.crt, /tmp/serving-cert-614820122/serving-signer.key\nI0502 17:44:07.161230       1 observer_polling.go:159] Starting file observer\nW0502 17:44:07.173041       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-240-254.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 17:44:07.173212       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 17:44:07.190610       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-614820122/tls.crt::/tmp/serving-cert-614820122/tls.key"\nF0502 17:44:07.480310       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 17:44:12.898 E ns/openshift-network-diagnostics pod/network-check-target-kk8v4 node/ip-10-0-240-254.us-west-2.compute.internal uid/e5019da3-34eb-44f2-97f4-a695c9d81bc8 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1786060148360351744junit45 hours ago
May 02 17:44:12.918 E ns/openshift-dns pod/dns-default-bz457 node/ip-10-0-240-254.us-west-2.compute.internal uid/70401d7a-de99-48a1-a9e3-f10e75d8aff4 container/dns reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 17:44:12.961 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-240-254.us-west-2.compute.internal node/ip-10-0-240-254.us-west-2.compute.internal uid/1d022fcb-563c-4bd8-98d5-ddbe17f71eaf container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 17:44:06.432114       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 17:44:06.432713       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714671846 cert, and key in /tmp/serving-cert-614820122/serving-signer.crt, /tmp/serving-cert-614820122/serving-signer.key\nI0502 17:44:07.161230       1 observer_polling.go:159] Starting file observer\nW0502 17:44:07.173041       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-240-254.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 17:44:07.173212       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 17:44:07.190610       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-614820122/tls.crt::/tmp/serving-cert-614820122/tls.key"\nF0502 17:44:07.480310       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 17:44:12.989 E ns/openshift-multus pod/network-metrics-daemon-dmkhj node/ip-10-0-240-254.us-west-2.compute.internal uid/01230863-e971-4269-a03e-28a7f295bc80 container/network-metrics-daemon reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
pull-ci-openshift-cluster-kube-apiserver-operator-master-e2e-aws-ovn-upgrade (all) - 2 runs, 0% failed, 100% of runs match
#1786456854818197504junit20 hours ago
May 03 19:29:02.715 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-47-216.ec2.internal" not ready since 2024-05-03 19:27:02 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 19:29:30.737 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-47-216.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 19:29:24.934796       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 19:29:24.935027       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714764564 cert, and key in /tmp/serving-cert-3041744456/serving-signer.crt, /tmp/serving-cert-3041744456/serving-signer.key\nStaticPodsDegraded: I0503 19:29:25.224094       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 19:29:25.225439       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-47-216.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 19:29:25.225552       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-gcbb2870-cbb28706b\nStaticPodsDegraded: I0503 19:29:25.226143       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3041744456/tls.crt::/tmp/serving-cert-3041744456/tls.key"\nStaticPodsDegraded: F0503 19:29:25.422816       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 19:34:19.715 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-26-235.ec2.internal" not ready since 2024-05-03 19:32:19 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786456854818197504junit20 hours ago
I0503 19:40:08.903480       1 observer_polling.go:159] Starting file observer
W0503 19:40:08.921909       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-102-127.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:40:08.922096       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-gcbb2870-cbb28706b
#1786131916857020416junit41 hours ago
May 02 22:10:09.224 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-95-34.us-west-1.compute.internal" not ready since 2024-05-02 22:10:00 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 22:10:23.542 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-95-34.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 22:10:15.793019       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 22:10:15.797227       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714687815 cert, and key in /tmp/serving-cert-3061288045/serving-signer.crt, /tmp/serving-cert-3061288045/serving-signer.key\nStaticPodsDegraded: I0502 22:10:16.064704       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 22:10:16.066151       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-95-34.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 22:10:16.066299       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-gca564bb-ca564bba8\nStaticPodsDegraded: I0502 22:10:16.066926       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3061288045/tls.crt::/tmp/serving-cert-3061288045/tls.key"\nStaticPodsDegraded: F0502 22:10:16.253011       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 22:15:33.063 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-68-63.us-west-1.compute.internal" not ready since 2024-05-02 22:15:25 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786131916857020416junit41 hours ago
I0502 22:04:51.747454       1 observer_polling.go:159] Starting file observer
W0502 22:04:51.757090       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-33-209.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 22:04:51.757229       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-gca564bb-ca564bba8
pull-ci-openshift-cluster-storage-operator-master-e2e-aws-ovn-upgrade (all) - 3 runs, 33% failed, 200% of failures match = 67% impact
#1786453089109151744junit20 hours ago
I0503 19:21:08.082421       1 observer_polling.go:159] Starting file observer
W0503 19:21:08.097938       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-126-94.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 19:21:08.098177       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786365121044418560junit26 hours ago
cause/Error code/2 reason/ContainerExit er-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0503 12:30:59.607181       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-3k2m3v86-dee26.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.116.63:6443: connect: connection refused
I0503 12:31:35.221337       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786365121044418560junit26 hours ago
I0503 13:41:55.823080       1 observer_polling.go:159] Starting file observer
W0503 13:41:55.841218       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-11-182.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 13:41:55.841466       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
periodic-ci-openshift-release-master-ci-4.13-upgrade-from-stable-4.12-e2e-aws-ovn-upgrade (all) - 3 runs, 0% failed, 100% of runs match
#1786435446339801088junit20 hours ago
May 03 18:22:02.639 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-wx5lb node/ip-10-0-169-147.us-west-2.compute.internal uid/8dca689e-b6ab-4e5b-bffb-a6ce4ad1c5e6 container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 18:22:07.325 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-147.us-west-2.compute.internal node/ip-10-0-169-147.us-west-2.compute.internal uid/f34578f7-61a6-43b4-a9df-72b8c609e09b container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 18:22:05.624810       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 18:22:05.631325       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714760525 cert, and key in /tmp/serving-cert-3558651837/serving-signer.crt, /tmp/serving-cert-3558651837/serving-signer.key\nI0503 18:22:05.966738       1 observer_polling.go:159] Starting file observer\nW0503 18:22:05.979850       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-169-147.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 18:22:05.980013       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0503 18:22:05.993797       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3558651837/tls.crt::/tmp/serving-cert-3558651837/tls.key"\nF0503 18:22:06.230041       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 03 18:22:10.408 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-147.us-west-2.compute.internal node/ip-10-0-169-147.us-west-2.compute.internal uid/f34578f7-61a6-43b4-a9df-72b8c609e09b container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 18:22:05.624810       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 18:22:05.631325       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714760525 cert, and key in /tmp/serving-cert-3558651837/serving-signer.crt, /tmp/serving-cert-3558651837/serving-signer.key\nI0503 18:22:05.966738       1 observer_polling.go:159] Starting file observer\nW0503 18:22:05.979850       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-169-147.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 18:22:05.980013       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0503 18:22:05.993797       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3558651837/tls.crt::/tmp/serving-cert-3558651837/tls.key"\nF0503 18:22:06.230041       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1786344425832583168junit26 hours ago
May 03 12:20:03.307 E ns/openshift-ovn-kubernetes pod/ovnkube-master-hvdzr node/ip-10-0-225-4.ec2.internal uid/88ed809c-1fc9-4a21-945d-b5c87ebe937a container/ovn-dbchecker reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 12:20:06.305 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-225-4.ec2.internal node/ip-10-0-225-4.ec2.internal uid/9bea3351-7492-42fb-bf03-61823fe53e24 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 12:20:04.648449       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 12:20:04.660240       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714738804 cert, and key in /tmp/serving-cert-13966958/serving-signer.crt, /tmp/serving-cert-13966958/serving-signer.key\nI0503 12:20:05.200006       1 observer_polling.go:159] Starting file observer\nW0503 12:20:05.221046       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-225-4.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 12:20:05.221400       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0503 12:20:05.248195       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-13966958/tls.crt::/tmp/serving-cert-13966958/tls.key"\nF0503 12:20:05.604799       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 03 12:20:12.427 E ns/openshift-network-diagnostics pod/network-check-target-6k4nh node/ip-10-0-225-4.ec2.internal uid/8d2d8bd9-0f79-4dd3-8683-251333891bbf container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 2 lines not shown

#1786060148444237824junit45 hours ago
May 02 18:01:17.364 E ns/openshift-ovn-kubernetes pod/ovnkube-master-2zf7f node/ip-10-0-153-82.ec2.internal uid/6eedfe0c-993e-48c6-bd52-e8f973c274c9 container/sbdb reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 18:01:22.377 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-82.ec2.internal node/ip-10-0-153-82.ec2.internal uid/6575f4a2-f144-43d0-92db-34869e300e06 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 18:01:20.732377       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 18:01:20.754216       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714672880 cert, and key in /tmp/serving-cert-2889825547/serving-signer.crt, /tmp/serving-cert-2889825547/serving-signer.key\nI0502 18:01:21.136942       1 observer_polling.go:159] Starting file observer\nW0502 18:01:21.153010       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-153-82.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 18:01:21.153232       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 18:01:21.166385       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2889825547/tls.crt::/tmp/serving-cert-2889825547/tls.key"\nF0502 18:01:21.770884       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 18:01:26.880 E ns/openshift-network-diagnostics pod/network-check-target-x4zdx node/ip-10-0-153-82.ec2.internal uid/a7d4bba1-c5e4-4472-814b-43807dffc2e0 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1786060148444237824junit45 hours ago
May 02 18:01:28.164 E ns/openshift-dns pod/dns-default-52ss6 node/ip-10-0-153-82.ec2.internal uid/6f582401-12a2-4033-bfa1-6125c6387634 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 18:01:28.247 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-82.ec2.internal node/ip-10-0-153-82.ec2.internal uid/6575f4a2-f144-43d0-92db-34869e300e06 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 18:01:20.732377       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 18:01:20.754216       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714672880 cert, and key in /tmp/serving-cert-2889825547/serving-signer.crt, /tmp/serving-cert-2889825547/serving-signer.key\nI0502 18:01:21.136942       1 observer_polling.go:159] Starting file observer\nW0502 18:01:21.153010       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-153-82.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 18:01:21.153232       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 18:01:21.166385       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2889825547/tls.crt::/tmp/serving-cert-2889825547/tls.key"\nF0502 18:01:21.770884       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 18:01:33.314 E ns/openshift-e2e-loki pod/loki-promtail-wx4bh node/ip-10-0-153-82.ec2.internal uid/067c313d-70b4-4c9e-aed4-cd484489f07a container/promtail reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
periodic-ci-openshift-release-master-ci-4.17-e2e-aws-sdn-upgrade-out-of-change (all) - 1 runs, 0% failed, 100% of runs match
#1786437876930580480junit20 hours ago
I0503 18:19:16.637053       1 observer_polling.go:159] Starting file observer
W0503 18:19:16.657051       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-102-215.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 18:19:16.657215       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

periodic-ci-openshift-release-master-ci-4.13-upgrade-from-stable-4.12-e2e-aws-sdn-upgrade (all) - 3 runs, 67% failed, 150% of failures match = 100% impact
#1786435446172028928junit21 hours ago
May 03 18:09:55.585 E ns/openshift-dns pod/node-resolver-xjb6n node/ip-10-0-214-93.ec2.internal uid/4afeb46b-8bb0-4dd0-a4ae-2965cbea3537 container/dns-node-resolver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 18:10:00.235 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-214-93.ec2.internal node/ip-10-0-214-93.ec2.internal uid/bf92813d-dbe8-4d21-a7df-0937317d1d32 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 18:09:58.552082       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 18:09:58.561940       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714759798 cert, and key in /tmp/serving-cert-1010613832/serving-signer.crt, /tmp/serving-cert-1010613832/serving-signer.key\nI0503 18:09:59.403533       1 observer_polling.go:159] Starting file observer\nW0503 18:09:59.420394       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-214-93.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 18:09:59.420530       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0503 18:09:59.426467       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1010613832/tls.crt::/tmp/serving-cert-1010613832/tls.key"\nF0503 18:09:59.706530       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 03 18:10:05.352 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-214-93.ec2.internal node/ip-10-0-214-93.ec2.internal uid/bf92813d-dbe8-4d21-a7df-0937317d1d32 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 18:09:58.552082       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 18:09:58.561940       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714759798 cert, and key in /tmp/serving-cert-1010613832/serving-signer.crt, /tmp/serving-cert-1010613832/serving-signer.key\nI0503 18:09:59.403533       1 observer_polling.go:159] Starting file observer\nW0503 18:09:59.420394       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-214-93.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 18:09:59.420530       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0503 18:09:59.426467       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1010613832/tls.crt::/tmp/serving-cert-1010613832/tls.key"\nF0503 18:09:59.706530       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1786344425979383808junit27 hours ago
May 03 11:54:38.635 E ns/openshift-multus pod/cni-sysctl-allowlist-ds-x8t7l node/ip-10-0-206-215.us-east-2.compute.internal uid/1b84b409-bd36-4672-a416-ca05fc24ff4b container/kube-multus-additional-cni-plugins reason/ContainerExit code/137 cause/Error
May 03 11:54:39.502 E ns/openshift-sdn pod/sdn-controller-4q9pc node/ip-10-0-142-30.us-east-2.compute.internal uid/323c5e88-8f03-4d33-a0a0-5d250607d23e container/sdn-controller reason/ContainerExit code/2 cause/Error I0503 10:53:15.149213       1 server.go:27] Starting HTTP metrics server\nI0503 10:53:15.149315       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0503 11:00:43.153241       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-whkmbwwn-0e208.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.151.188:6443: connect: connection refused\nE0503 11:01:10.944202       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-whkmbwwn-0e208.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.254.195:6443: connect: connection refused\n
May 03 11:54:46.079 E ns/openshift-multus pod/multus-admission-controller-78d56bcdfc-bxxts node/ip-10-0-204-135.us-east-2.compute.internal uid/89b139e4-1b9b-4b9b-b34d-7d9302e60f0b container/multus-admission-controller reason/ContainerExit code/137 cause/Error
#1786344425979383808junit27 hours ago
May 03 12:07:49.773 E ns/openshift-dns pod/node-resolver-4kb5f node/ip-10-0-142-30.us-east-2.compute.internal uid/465c2b9c-bd10-450c-be93-94a1572b1f5c container/dns-node-resolver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 12:07:52.809 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-30.us-east-2.compute.internal node/ip-10-0-142-30.us-east-2.compute.internal uid/ab920db8-8019-4a44-b75e-f0ac07c31f1f container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 12:07:51.413641       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 12:07:51.414135       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714738071 cert, and key in /tmp/serving-cert-569197217/serving-signer.crt, /tmp/serving-cert-569197217/serving-signer.key\nI0503 12:07:52.202707       1 observer_polling.go:159] Starting file observer\nW0503 12:07:52.219601       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-142-30.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 12:07:52.219809       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0503 12:07:52.232382       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-569197217/tls.crt::/tmp/serving-cert-569197217/tls.key"\nF0503 12:07:52.531766       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 03 12:07:53.874 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-30.us-east-2.compute.internal node/ip-10-0-142-30.us-east-2.compute.internal uid/ab920db8-8019-4a44-b75e-f0ac07c31f1f container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 12:07:51.413641       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 12:07:51.414135       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714738071 cert, and key in /tmp/serving-cert-569197217/serving-signer.crt, /tmp/serving-cert-569197217/serving-signer.key\nI0503 12:07:52.202707       1 observer_polling.go:159] Starting file observer\nW0503 12:07:52.219601       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-142-30.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 12:07:52.219809       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0503 12:07:52.232382       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-569197217/tls.crt::/tmp/serving-cert-569197217/tls.key"\nF0503 12:07:52.531766       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1786060148045778944junit45 hours ago
May 02 17:30:16.720 E ns/openshift-multus pod/cni-sysctl-allowlist-ds-fpxsb node/ip-10-0-159-181.ec2.internal uid/31b41d82-f72f-46b5-a6ba-a87f1258fcac container/kube-multus-additional-cni-plugins reason/ContainerExit code/137 cause/Error
May 02 17:30:24.570 E ns/openshift-sdn pod/sdn-controller-4pdcf node/ip-10-0-154-49.ec2.internal uid/083e91be-26f5-463f-be9e-6ae0e6ab4e62 container/sdn-controller reason/ContainerExit code/2 cause/Error I0502 16:28:10.269524       1 server.go:27] Starting HTTP metrics server\nI0502 16:28:10.269626       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0502 16:35:43.869724       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0502 16:45:02.398119       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-svnm059s-0e208.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.154.75:6443: connect: connection refused\n
May 02 17:30:30.670 E ns/openshift-sdn pod/sdn-wt95f node/ip-10-0-163-198.ec2.internal uid/92fd84dc-c968-4a0b-b554-70fa128f566a container/sdn reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 2 lines not shown

periodic-ci-openshift-release-master-nightly-4.17-e2e-aws-sdn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1786438128689483776junit21 hours ago
May 03 17:54:24.343 - 23s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-6-232.ec2.internal" not ready since 2024-05-03 17:52:24 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 17:54:48.025 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-6-232.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 17:54:43.997817       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 17:54:43.998037       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714758883 cert, and key in /tmp/serving-cert-4054385409/serving-signer.crt, /tmp/serving-cert-4054385409/serving-signer.key\nStaticPodsDegraded: I0503 17:54:44.452601       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 17:54:44.469905       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-6-232.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 17:54:44.470019       1 builder.go:299] check-endpoints version 4.17.0-202404300240.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970\nStaticPodsDegraded: I0503 17:54:44.491589       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4054385409/tls.crt::/tmp/serving-cert-4054385409/tls.key"\nStaticPodsDegraded: F0503 17:54:44.714964       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 18:00:01.068 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-96-64.ec2.internal" not ready since 2024-05-03 17:58:01 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

pull-ci-openshift-origin-master-e2e-aws-ovn-upgrade (all) - 11 runs, 18% failed, 400% of failures match = 73% impact
#1786427726140280832junit21 hours ago
1 tests failed during this blip (2024-05-03 17:49:45.480263458 +0000 UTC m=+2825.439448445 to 2024-05-03 17:49:45.480263458 +0000 UTC m=+2825.439448445): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 17:50:18.656 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-0-127.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 17:50:08.906641       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 17:50:08.907012       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714758608 cert, and key in /tmp/serving-cert-1556593655/serving-signer.crt, /tmp/serving-cert-1556593655/serving-signer.key\nStaticPodsDegraded: I0503 17:50:09.570728       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 17:50:09.588392       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-0-127.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 17:50:09.588535       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 17:50:09.613538       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1556593655/tls.crt::/tmp/serving-cert-1556593655/tls.key"\nStaticPodsDegraded: F0503 17:50:10.025840       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
1 tests failed during this blip (2024-05-03 17:50:18.656374447 +0000 UTC m=+2858.615559424 to 2024-05-03 17:50:18.656374447 +0000 UTC m=+2858.615559424): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
#1786427726140280832junit21 hours ago
1 tests failed during this blip (2024-05-03 18:00:58.082064339 +0000 UTC m=+3498.041249326 to 2024-05-03 18:00:58.082064339 +0000 UTC m=+3498.041249326): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 18:01:11.055 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-122-222.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 18:01:03.215926       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 18:01:03.216198       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714759263 cert, and key in /tmp/serving-cert-2998099910/serving-signer.crt, /tmp/serving-cert-2998099910/serving-signer.key\nStaticPodsDegraded: I0503 18:01:03.501116       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 18:01:03.502726       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-122-222.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 18:01:03.502853       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 18:01:03.503458       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2998099910/tls.crt::/tmp/serving-cert-2998099910/tls.key"\nStaticPodsDegraded: F0503 18:01:03.633452       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
1 tests failed during this blip (2024-05-03 18:01:11.055534716 +0000 UTC m=+3511.014719703 to 2024-05-03 18:01:11.055534716 +0000 UTC m=+3511.014719703): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
#1786428427725705216junit21 hours ago
May 03 17:59:27.155 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-103-222.ec2.internal" not ready since 2024-05-03 17:59:07 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 17:59:39.222 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-103-222.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 17:59:32.267043       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 17:59:32.267311       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714759172 cert, and key in /tmp/serving-cert-2550616452/serving-signer.crt, /tmp/serving-cert-2550616452/serving-signer.key\nStaticPodsDegraded: I0503 17:59:32.653938       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 17:59:32.655392       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-222.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 17:59:32.655513       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 17:59:32.656107       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2550616452/tls.crt::/tmp/serving-cert-2550616452/tls.key"\nStaticPodsDegraded: F0503 17:59:33.129281       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 18:05:05.104 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-35-31.ec2.internal" not ready since 2024-05-03 18:04:49 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786428427725705216junit21 hours ago
May 03 18:51:43.216 - 38s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-35-31.ec2.internal" not ready since 2024-05-03 18:49:43 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 18:52:22.004 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-35-31.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 18:52:13.468699       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 18:52:13.468934       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714762333 cert, and key in /tmp/serving-cert-2984994851/serving-signer.crt, /tmp/serving-cert-2984994851/serving-signer.key\nStaticPodsDegraded: I0503 18:52:13.745284       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 18:52:13.746709       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-35-31.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 18:52:13.746827       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 18:52:13.747453       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2984994851/tls.crt::/tmp/serving-cert-2984994851/tls.key"\nStaticPodsDegraded: F0503 18:52:13.979871       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 18:57:31.211 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-103-222.ec2.internal" not ready since 2024-05-03 18:57:22 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786379816291799040junit24 hours ago
May 03 14:45:27.069 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-110-15.us-east-2.compute.internal" not ready since 2024-05-03 14:45:08 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 14:45:42.017 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-110-15.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 14:45:34.306937       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 14:45:34.307195       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714747534 cert, and key in /tmp/serving-cert-169531004/serving-signer.crt, /tmp/serving-cert-169531004/serving-signer.key\nStaticPodsDegraded: I0503 14:45:34.594509       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 14:45:34.595920       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-110-15.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 14:45:34.596058       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 14:45:34.596683       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-169531004/tls.crt::/tmp/serving-cert-169531004/tls.key"\nStaticPodsDegraded: F0503 14:45:34.799069       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 15:29:32.608 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-38-200.us-east-2.compute.internal" not ready since 2024-05-03 15:27:32 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786379816291799040junit24 hours ago
May 03 15:35:37.147 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-110-15.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-110-15.us-east-2.compute.internal_openshift-kube-apiserver(615b11f2e278450dc6313802030df102)\nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 15:40:54.275 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-12-216.us-east-2.compute.internal" not ready since 2024-05-03 15:40:40 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-12-216.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 15:40:52.274294       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 15:40:52.274617       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714750852 cert, and key in /tmp/serving-cert-1659486900/serving-signer.crt, /tmp/serving-cert-1659486900/serving-signer.key\nStaticPodsDegraded: I0503 15:40:52.990386       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 15:40:53.005892       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-12-216.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 15:40:53.006017       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 15:40:53.036863       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1659486900/tls.crt::/tmp/serving-cert-1659486900/tls.key"\nStaticPodsDegraded: F0503 15:40:53.305805       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 15:40:54.275 - 7s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-12-216.us-east-2.compute.internal" not ready since 2024-05-03 15:40:40 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-12-216.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 15:40:52.274294       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 15:40:52.274617       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714750852 cert, and key in /tmp/serving-cert-1659486900/serving-signer.crt, /tmp/serving-cert-1659486900/serving-signer.key\nStaticPodsDegraded: I0503 15:40:52.990386       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 15:40:53.005892       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-12-216.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 15:40:53.006017       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 15:40:53.036863       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1659486900/tls.crt::/tmp/serving-cert-1659486900/tls.key"\nStaticPodsDegraded: F0503 15:40:53.305805       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 1 lines not shown

#1786374508488167424junit24 hours ago
I0503 13:18:03.689754       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 13:23:35.199133       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-ttvp1lqn-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.87.219:6443: connect: connection refused
I0503 13:24:04.396009       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786374508488167424junit24 hours ago
I0503 15:16:54.413089       1 observer_polling.go:159] Starting file observer
W0503 15:16:54.428193       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-32-102.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 15:16:54.428311       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786210716940767232junit35 hours ago
I0503 04:44:17.231378       1 observer_polling.go:159] Starting file observer
W0503 04:44:17.253204       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-109-237.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 04:44:17.253390       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786121987622440960junit41 hours ago
May 02 22:36:37.179 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-4-138.ec2.internal" not ready since 2024-05-02 22:36:35 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 22:37:08.727 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-4-138.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 22:37:00.034260       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 22:37:00.034452       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714689420 cert, and key in /tmp/serving-cert-2503029473/serving-signer.crt, /tmp/serving-cert-2503029473/serving-signer.key\nStaticPodsDegraded: I0502 22:37:00.499063       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 22:37:00.501232       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-138.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 22:37:00.501378       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 22:37:00.501952       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2503029473/tls.crt::/tmp/serving-cert-2503029473/tls.key"\nStaticPodsDegraded: F0502 22:37:00.716734       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 22:41:54.175 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-87-62.ec2.internal" not ready since 2024-05-02 22:39:54 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786080816179187712junit44 hours ago
namespace/openshift-cloud-controller-manager node/ip-10-0-112-29.ec2.internal pod/aws-cloud-controller-manager-87cb87bd6-qhkpz uid/4506f4e0-286d-4719-84a7-bf6049a3d188 container/cloud-controller-manager restarted 1 times:
cause/Error code/2 reason/ContainerExit er-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.57.48:6443: connect: connection refused
I0502 17:48:40.983175       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786080816179187712junit44 hours ago
I0502 17:48:44.863350       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0502 17:48:44.895407       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-bcy3sl37-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.101.196:6443: connect: connection refused
I0502 17:52:50.832447       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1786056218775654400junit45 hours ago
May 02 17:51:11.699 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-12-127.ec2.internal" not ready since 2024-05-02 17:51:08 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 17:51:41.434 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-12-127.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 17:51:33.285379       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 17:51:33.285665       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714672293 cert, and key in /tmp/serving-cert-3241485916/serving-signer.crt, /tmp/serving-cert-3241485916/serving-signer.key\nStaticPodsDegraded: I0502 17:51:33.530202       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 17:51:33.531419       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-12-127.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 17:51:33.531542       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 17:51:33.532107       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3241485916/tls.crt::/tmp/serving-cert-3241485916/tls.key"\nStaticPodsDegraded: F0502 17:51:33.829601       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 17:56:38.689 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-93-18.ec2.internal" not ready since 2024-05-02 17:54:38 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786056218775654400junit45 hours ago
May 02 18:46:58.935 - 30s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-93-18.ec2.internal" not ready since 2024-05-02 18:44:58 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 18:47:29.930 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-93-18.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 18:47:21.828901       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 18:47:21.829173       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714675641 cert, and key in /tmp/serving-cert-715025394/serving-signer.crt, /tmp/serving-cert-715025394/serving-signer.key\nStaticPodsDegraded: I0502 18:47:22.159574       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 18:47:22.161063       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-93-18.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 18:47:22.161234       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 18:47:22.161798       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-715025394/tls.crt::/tmp/serving-cert-715025394/tls.key"\nStaticPodsDegraded: F0502 18:47:22.468488       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 18:52:53.940 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-84-247.ec2.internal" not ready since 2024-05-02 18:50:53 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
pull-ci-openshift-cluster-dns-operator-master-e2e-aws-ovn-upgrade (all) - 6 runs, 17% failed, 200% of failures match = 33% impact
#1786438931294720000junit21 hours ago
May 03 18:42:58.997 - 27s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-61-65.ec2.internal" not ready since 2024-05-03 18:40:58 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 18:43:26.925 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-61-65.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 18:43:17.149606       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 18:43:17.150241       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714761797 cert, and key in /tmp/serving-cert-909616645/serving-signer.crt, /tmp/serving-cert-909616645/serving-signer.key\nStaticPodsDegraded: I0503 18:43:17.691496       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 18:43:17.701060       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-61-65.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 18:43:17.701209       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0503 18:43:17.717645       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-909616645/tls.crt::/tmp/serving-cert-909616645/tls.key"\nStaticPodsDegraded: F0503 18:43:17.968525       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 18:48:38.258 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-107-69.ec2.internal" not ready since 2024-05-03 18:48:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786438931294720000junit21 hours ago
I0503 17:33:14.274395       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0503 17:40:15.422722       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-8w2i8jz1-25bb3.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.24.209:6443: connect: connection refused
I0503 17:40:31.972293       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786117981663662080junit42 hours ago
May 02 21:12:06.397 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-7-105.us-west-1.compute.internal" not ready since 2024-05-02 21:11:55 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 21:12:17.072 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-7-105.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 21:12:09.705746       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 21:12:09.705922       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714684329 cert, and key in /tmp/serving-cert-560906332/serving-signer.crt, /tmp/serving-cert-560906332/serving-signer.key\nStaticPodsDegraded: I0502 21:12:09.981258       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 21:12:09.982696       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-7-105.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 21:12:09.982803       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 21:12:09.983392       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-560906332/tls.crt::/tmp/serving-cert-560906332/tls.key"\nStaticPodsDegraded: F0502 21:12:10.201354       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 21:17:19.430 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-86-10.us-west-1.compute.internal" not ready since 2024-05-02 21:15:19 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786117981663662080junit42 hours ago
I0502 20:14:41.631944       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0502 20:14:52.142759       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-vv4ywlx4-25bb3.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.48.147:6443: connect: connection refused
release-openshift-origin-installer-e2e-aws-upgrade-4.12-to-4.13-to-4.14-to-4.15-ci (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1786400524522754048junit21 hours ago
1 tests failed during this blip (2024-05-03 17:24:56.159672572 +0000 UTC m=+8713.078939755 to 2024-05-03 17:24:56.159672572 +0000 UTC m=+8713.078939755): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 17:25:19.623 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-146-153.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 17:25:11.332001       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 17:25:11.332334       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714757111 cert, and key in /tmp/serving-cert-3185011631/serving-signer.crt, /tmp/serving-cert-3185011631/serving-signer.key\nStaticPodsDegraded: I0503 17:25:12.197933       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 17:25:12.207860       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-146-153.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 17:25:12.208008       1 builder.go:271] check-endpoints version 4.14.0-202404250639.p0.g2eab0f9.assembly.stream.el8-2eab0f9-2eab0f9e27db4399bf8885d62ca338c3d02fdd35\nStaticPodsDegraded: I0503 17:25:12.223061       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3185011631/tls.crt::/tmp/serving-cert-3185011631/tls.key"\nStaticPodsDegraded: F0503 17:25:12.515981       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready
1 tests failed during this blip (2024-05-03 17:25:19.623313854 +0000 UTC m=+8736.542581037 to 2024-05-03 17:25:19.623313854 +0000 UTC m=+8736.542581037): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
pull-ci-openshift-machine-config-operator-master-e2e-aws-ovn (all) - 19 runs, 0% failed, 11% of runs match
#1786454548974407680junit21 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-aws-console container test
E0503 18:32:38.742942      34 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-ycrpi25z-42626.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=5s": dial tcp 3.128.59.209:6443: connect: connection refused
E0503 18:32:38.770794      34 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-ycrpi25z-42626.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=5s": dial tcp 3.128.59.209:6443: connect: connection refused

... 4 lines not shown

#1786060862239281152junit47 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-aws-console container test
E0502 16:41:29.979303      36 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-crphr7h3-42626.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=5s": dial tcp 3.21.148.63:6443: connect: connection refused
The connection to the server api.ci-op-crphr7h3-42626.origin-ci-int-aws.dev.rhcloud.com:6443 was refused - did you specify the right host or port?
#1786060862239281152junit47 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-core-dump container test
m:6443/api?timeout=32s": dial tcp 18.223.193.121:6443: connect: connection refused
E0502 16:41:21.441760      32 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-crphr7h3-42626.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 18.223.193.121:6443: connect: connection refused

... 3 lines not shown

pull-ci-openshift-origin-master-e2e-aws-ovn-single-node-serial (all) - 12 runs, 42% failed, 180% of failures match = 75% impact
#1786428427662790656junit21 hours ago
May 03 18:04:15.510 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: Patch "https://api-int.ci-op-0fxi0iv1-a6aef.aws-2.ci.openshift.org:6443/apis/apps/v1/namespaces/openshift-network-operator/daemonsets/iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 10.0.34.253:6443: connect: connection refused (exception: We are not worried about Degraded=True blips for stable-system tests yet.)
May 03 18:04:15.510 - 2s    E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: Patch "https://api-int.ci-op-0fxi0iv1-a6aef.aws-2.ci.openshift.org:6443/apis/apps/v1/namespaces/openshift-network-operator/daemonsets/iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": dial tcp 10.0.34.253:6443: connect: connection refused (exception: We are not worried about Degraded=True blips for stable-system tests yet.)

... 1 line not shown

#1786427721107116032junit22 hours ago
namespace/openshift-cloud-controller-manager node/ip-10-0-19-127.ec2.internal pod/aws-cloud-controller-manager-c8798cc88-m2xjk uid/02f04c8f-6bc6-48ee-b861-162a06e8b850 container/cloud-controller-manager restarted 1 times:
cause/Error code/2 reason/ContainerExit g resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-566mfyvb-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.27.128:6443: connect: connection refused
I0503 16:52:43.831585       1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
#1786427721107116032junit22 hours ago
2024-05-03T17:45:42Z node/ip-10-0-19-127.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-566mfyvb-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-127.ec2.internal?timeout=10s - dial tcp 10.0.27.128:6443: connect: connection refused
2024-05-03T17:45:42Z node/ip-10-0-19-127.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-566mfyvb-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-127.ec2.internal?timeout=10s - dial tcp 10.0.118.142:6443: connect: connection refused

... 4 lines not shown

#1786379816191135744junit25 hours ago
2024-05-03T14:57:13Z node/ip-10-0-102-132.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6yfy8r1t-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-102-132.ec2.internal?timeout=10s - dial tcp 10.0.86.37:6443: connect: connection refused
2024-05-03T14:57:13Z node/ip-10-0-102-132.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6yfy8r1t-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-102-132.ec2.internal?timeout=10s - dial tcp 10.0.62.147:6443: connect: connection refused

... 13 lines not shown

#1786374508425252864junit25 hours ago
I0503 13:25:20.848788       1 node_controller.go:267] Update 1 nodes status took 133.88652ms.
E0503 13:25:38.402319       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-ttvp1lqn-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.121.168:6443: connect: connection refused
I0503 13:26:04.965066       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786374508425252864junit25 hours ago
E0503 14:35:54.809725       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0503 14:36:13.754420       1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-ttvp1lqn-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.121.168:6443: connect: connection refused
E0503 14:36:16.758624       1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-ttvp1lqn-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.121.168:6443: connect: connection refused

... 3 lines not shown

#1786274233467277312junit32 hours ago
E0503 07:37:59.269652       1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0503 07:38:07.873118       1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-fmv3l4wc-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.58.52:6443: connect: connection refused
E0503 07:38:10.877254       1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-fmv3l4wc-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.58.52:6443: connect: connection refused

... 3 lines not shown

#1786210716789772288junit35 hours ago
namespace/openshift-kube-controller-manager node/ip-10-0-115-83.us-west-1.compute.internal pod/kube-controller-manager-ip-10-0-115-83.us-west-1.compute.internal uid/a5863a4e-1519-4125-94d6-d2ceea80e88b container/kube-controller-manager mirror-uid/80795057bfc72d8d740edb73a81e9b7d restarted 1 times:
cause/Error code/1 reason/ContainerExit s/kube-controller-manager?timeout=6s": dial tcp 10.0.119.211:6443: connect: connection refused
W0503 04:05:15.540016       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: Get "https://api-int.ci-op-fmv3l4wc-a6aef.aws-2.ci.openshift.org:6443/apis/random.numbers.com/v1/integers?resourceVersion=70797": dial tcp 10.0.32.243:6443: connect: connection refused

... 5 lines not shown

#1786121987542749184junit41 hours ago
I0502 20:44:47.406987       1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
E0502 20:46:02.345975       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-2mxfbf0c-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.35.224:6443: connect: connection refused
I0502 20:46:33.308993       1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
#1786121987542749184junit41 hours ago
namespace/openshift-kube-controller-manager node/ip-10-0-116-184.us-west-2.compute.internal pod/kube-controller-manager-ip-10-0-116-184.us-west-2.compute.internal uid/206a71e6-377a-43a3-8e4e-c89705afad21 container/kube-controller-manager mirror-uid/3d24a21c595a5e07b85271967caf00d9 restarted 1 times:
cause/Error code/1 reason/ContainerExit em/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.75.227:6443: connect: connection refused
W0502 22:08:21.484524       1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: Get "https://api-int.ci-op-2mxfbf0c-a6aef.aws-2.ci.openshift.org:6443/apis/awesome.bears.com/v3/pandas?resourceVersion=66444": dial tcp 10.0.35.224:6443: connect: connection refused

... 5 lines not shown

#1786080816078524416junit44 hours ago
namespace/openshift-kube-controller-manager node/ip-10-0-62-179.us-east-2.compute.internal pod/kube-controller-manager-ip-10-0-62-179.us-east-2.compute.internal uid/da4f6a61-f781-42ff-b799-a02a1e3e01ed container/kube-controller-manager mirror-uid/9fceadcd3fb6aa75f0c1a3fc1386b91b restarted 1 times:
cause/Error code/1 reason/ContainerExit kube-controller-manager?timeout=6s": dial tcp 10.0.80.174:6443: connect: connection refused
E0502 19:04:07.816513       1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.ci-op-bcy3sl37-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 10.0.80.174:6443: connect: connection refused

... 5 lines not shown

#1786056207006437376junit45 hours ago
I0502 17:00:29.105732       1 node_controller.go:267] Update 1 nodes status took 126.444484ms.
E0502 17:02:19.766170       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-j2bx0mxb-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.85.121:6443: connect: connection refused
I0502 17:02:45.105373       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786056207006437376junit45 hours ago
2024-05-02T18:03:20Z node/ip-10-0-15-206.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-j2bx0mxb-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-206.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.35.101:6443: connect: connection refused
2024-05-02T18:03:20Z node/ip-10-0-15-206.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-j2bx0mxb-a6aef.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-206.us-west-2.compute.internal?timeout=10s - dial tcp 10.0.85.121:6443: connect: connection refused

... 14 lines not shown

release-openshift-origin-installer-e2e-aws-disruptive-4.14 (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#1786440897232113664junit21 hours ago
May 03 17:50:05.827 - 12s   E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/053bba33-284d-4c41-8d40-ed1fb435a5f6 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-6y6v6m98-a0f45.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
May 03 17:50:18.827 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/d7502fe7-1dc6-4313-96b7-e27eb9963c2f backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-6y6v6m98-a0f45.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 3.14.138.194:6443: connect: connection refused
May 03 17:50:19.827 - 18s   E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/9e574eb3-d639-465f-8348-6017d2cae87a backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-6y6v6m98-a0f45.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers

... 2 lines not shown

#1786078413878988800junit45 hours ago
May 02 18:08:12.364 - 2s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/733c4785-12ae-4fbd-838c-5bd0e7c96b4b backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-bpbhksy2-a0f45.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
May 02 18:08:15.364 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/2c82c491-a4be-4bac-8f80-fafef2114a22 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-bpbhksy2-a0f45.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 3.16.245.155:6443: connect: connection refused
May 02 18:08:16.364 - 18s   E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/ebeb05f7-c8e2-474b-a664-eb4a2c571730 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-bpbhksy2-a0f45.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers

... 2 lines not shown

release-openshift-origin-installer-e2e-aws-disruptive-4.12 (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1786440897131450368junit22 hours ago
May 03 17:44:03.249 - 28s   E alert/etcdInsufficientMembers ns/openshift-etcd ALERTS{alertname="etcdInsufficientMembers", alertstate="firing", endpoint="etcd-metrics", job="etcd", namespace="openshift-etcd", prometheus="openshift-monitoring/k8s", service="etcd", severity="critical"}
May 03 17:44:04.250 - 3s    E disruption/cache-kube-api connection/reused reason/DisruptionBegan disruption/cache-kube-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-ctytk7cs-90d06.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default?resourceVersion=0": dial tcp 35.162.2.110:6443: connect: connection refused
May 03 17:44:04.250 - 2s    E disruption/cache-oauth-api connection/new reason/DisruptionBegan disruption/cache-oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-ctytk7cs-90d06.origin-ci-int-aws.dev.rhcloud.com:6443/apis/oauth.openshift.io/v1/oauthclients?resourceVersion=0": dial tcp 35.162.2.110:6443: connect: connection refused

... 7 lines not shown

pull-ci-openshift-thanos-master-e2e-aws-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786393293588795392junit22 hours ago
I0503 14:24:57.172030       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0503 14:25:00.527479       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-2mfps085-6861b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.67.82:6443: connect: connection refused
I0503 14:28:07.346478       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786393293588795392junit22 hours ago
I0503 15:21:30.373180       1 observer_polling.go:159] Starting file observer
W0503 15:21:30.386823       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-39-198.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 15:21:30.386957       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
periodic-ci-openshift-release-master-nightly-4.16-e2e-aws-ovn-single-node (all) - 11 runs, 45% failed, 20% of failures match = 9% impact
#1786404463402029056junit23 hours ago
2024-05-03T15:22:30Z node/ip-10-0-2-65.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-bdy3b5wl-f22a7.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-2-65.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.124.139:6443: connect: connection refused
2024-05-03T15:22:30Z node/ip-10-0-2-65.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-bdy3b5wl-f22a7.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-2-65.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.124.139:6443: connect: connection refused

... 4 lines not shown

pull-ci-openshift-installer-master-e2e-aws-ovn-edge-zones (all) - 6 runs, 67% failed, 25% of failures match = 17% impact
#1786408519524683776junit23 hours ago
# step graph.Run multi-stage test e2e-aws-ovn-edge-zones - e2e-aws-ovn-edge-zones-gather-audit-logs container test
loud.com:6443/api?timeout=32s": dial tcp 52.34.218.171:6443: connect: connection refused
E0503 16:48:59.199835      35 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-t6fpjv48-4515a.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 52.34.218.171:6443: connect: connection refused

... 3 lines not shown

pull-ci-rh-ecosystem-edge-recert-main-e2e-aws-ovn-single-node-recert-serial (all) - 3 runs, 67% failed, 100% of failures match = 67% impact
#1786405151838310400junit23 hours ago
2024-05-03T15:26:01Z node/ip-10-0-77-246.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w1q6yw8x-54c4b.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-77-246.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2024-05-03T16:22:26Z node/ip-10-0-77-246.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w1q6yw8x-54c4b.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-77-246.ec2.internal?timeout=10s - dial tcp 10.0.16.198:6443: connect: connection refused
2024-05-03T16:22:26Z node/ip-10-0-77-246.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w1q6yw8x-54c4b.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-77-246.ec2.internal?timeout=10s - dial tcp 10.0.97.98:6443: connect: connection refused

... 14 lines not shown

#1786078917640065024junit45 hours ago
2024-05-02T17:55:37Z node/ip-10-0-30-97.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-p9v3f93l-54c4b.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-97.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2024-05-02T18:52:57Z node/ip-10-0-30-97.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-p9v3f93l-54c4b.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-97.ec2.internal?timeout=10s - dial tcp 10.0.127.220:6443: connect: connection refused
2024-05-02T18:52:57Z node/ip-10-0-30-97.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-p9v3f93l-54c4b.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-97.ec2.internal?timeout=10s - dial tcp 10.0.16.67:6443: connect: connection refused

... 14 lines not shown

pull-ci-openshift-openshift-controller-manager-master-e2e-aws-ovn-upgrade (all) - 2 runs, 0% failed, 50% of runs match
#1786415145392541696junit23 hours ago
I0503 17:04:25.709242       1 observer_polling.go:159] Starting file observer
W0503 17:04:25.718828       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-123.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 17:04:25.718955       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

pull-ci-openshift-installer-release-4.15-e2e-aws-ovn-upgrade (all) - 2 runs, 0% failed, 100% of runs match
#1786382689847218176junit23 hours ago
May 03 16:03:21.387 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-115-254.us-west-2.compute.internal" not ready since 2024-05-03 16:01:21 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 16:03:53.270 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-115-254.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 16:03:44.824222       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 16:03:44.824490       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714752224 cert, and key in /tmp/serving-cert-787613674/serving-signer.crt, /tmp/serving-cert-787613674/serving-signer.key\nStaticPodsDegraded: I0503 16:03:45.426254       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 16:03:45.432512       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-115-254.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 16:03:45.432618       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1929-gf5c5a60-f5c5a609f\nStaticPodsDegraded: I0503 16:03:45.444428       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-787613674/tls.crt::/tmp/serving-cert-787613674/tls.key"\nStaticPodsDegraded: F0503 16:03:45.714251       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 16:08:58.397 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-91-153.us-west-2.compute.internal" not ready since 2024-05-03 16:06:58 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786031757347262464junit47 hours ago
May 02 16:25:25.296 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-18-39.us-west-2.compute.internal" not ready since 2024-05-02 16:23:25 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:25:59.723 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-18-39.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:25:50.948809       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:25:50.949233       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714667150 cert, and key in /tmp/serving-cert-3161689037/serving-signer.crt, /tmp/serving-cert-3161689037/serving-signer.key\nStaticPodsDegraded: I0502 16:25:51.307686       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:25:51.320191       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-18-39.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:25:51.320313       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1929-gf5c5a60-f5c5a609f\nStaticPodsDegraded: I0502 16:25:51.333779       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3161689037/tls.crt::/tmp/serving-cert-3161689037/tls.key"\nStaticPodsDegraded: F0502 16:25:51.541485       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 16:31:04.306 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-19-249.us-west-2.compute.internal" not ready since 2024-05-02 16:30:57 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786031757347262464junit47 hours ago
May 02 16:36:30.302 - 11s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-69-176.us-west-2.compute.internal" not ready since 2024-05-02 16:36:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:36:41.543 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-69-176.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:36:33.091332       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:36:33.091666       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714667793 cert, and key in /tmp/serving-cert-3948275086/serving-signer.crt, /tmp/serving-cert-3948275086/serving-signer.key\nStaticPodsDegraded: I0502 16:36:33.549185       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:36:33.557580       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-69-176.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:36:33.557715       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1929-gf5c5a60-f5c5a609f\nStaticPodsDegraded: I0502 16:36:33.569400       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3948275086/tls.crt::/tmp/serving-cert-3948275086/tls.key"\nStaticPodsDegraded: W0502 16:36:35.896237       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nStaticPodsDegraded: F0502 16:36:35.898590       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
periodic-ci-openshift-release-master-ci-4.16-e2e-aws-ovn-techpreview (all) - 7 runs, 14% failed, 100% of failures match = 14% impact
#1786404209994764288junit23 hours ago
2024-05-03T15:14:01Z node/ip-10-0-106-254.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6qhgwwxz-4b5e7.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-106-254.ec2.internal?timeout=10s - dial tcp 10.0.97.156:6443: i/o timeout
2024-05-03T15:14:01Z node/ip-10-0-106-254.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6qhgwwxz-4b5e7.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-106-254.ec2.internal?timeout=10s - dial tcp 10.0.56.36:6443: connect: connection refused
2024-05-03T15:14:01Z node/ip-10-0-106-254.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6qhgwwxz-4b5e7.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-106-254.ec2.internal?timeout=10s - dial tcp 10.0.97.156:6443: connect: connection refused

... 2 lines not shown

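The FailedToUpdateLease entries above repeat the same "dial tcp <endpoint>: <error>" pattern against different apiserver IPs. When triaging runs like these, it can help to tally which endpoints and errors recur; a minimal sketch (a hypothetical helper, not part of the CI tooling — the regex is an assumption based on the excerpts shown):

```python
import re
from collections import Counter

# Matches failure suffixes like "dial tcp 10.0.97.156:6443: i/o timeout"
# or "dial tcp 10.0.56.36:6443: connect: connection refused" as they
# appear in the log excerpts above.
DIAL_RE = re.compile(r'dial tcp (\S+?:\d+): ([^"\\\n]+)')

def count_dial_failures(log_text: str) -> Counter:
    """Tally (endpoint, error) pairs found in raw log text."""
    return Counter(DIAL_RE.findall(log_text))

sample = """\
dial tcp 10.0.97.156:6443: i/o timeout
dial tcp 10.0.56.36:6443: connect: connection refused
dial tcp 10.0.97.156:6443: connect: connection refused
"""
for (endpoint, err), n in count_dial_failures(sample).most_common():
    print(f"{n}x {endpoint} {err}")
```

Grouping by endpoint quickly distinguishes a single flaky apiserver instance from a cluster-wide outage.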
pull-ci-openshift-installer-release-4.15-okd-scos-e2e-aws-upgrade (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#1786382711519186944junit24 hours ago
# step graph.Run multi-stage test e2e-aws-upgrade - e2e-aws-upgrade-gather-audit-logs container test
nt-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 3.13.19.153:6443: connect: connection refused
E0503 15:30:50.963659      32 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-5tljxibn-c958e.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 3.13.19.153:6443: connect: connection refused

... 3 lines not shown

#1786031548470923264junit47 hours ago
# step graph.Run multi-stage test e2e-aws-upgrade - e2e-aws-upgrade-gather-audit-logs container test
loud.com:6443/api?timeout=32s": dial tcp 54.188.91.207:6443: connect: connection refused
E0502 16:22:39.870718      30 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-6v6r41qp-c958e.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 54.188.91.207:6443: connect: connection refused

... 3 lines not shown

pull-ci-openshift-installer-release-4.15-okd-scos-e2e-aws-ovn (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#1786382708998410240junit24 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-audit-logs container test
nt-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 18.118.90.9:6443: connect: connection refused
E0503 15:25:41.481342      36 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-5tljxibn-bcd4e.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 18.118.90.9:6443: connect: connection refused

... 3 lines not shown

#1786031447065235456junit47 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-audit-logs container test
dev.rhcloud.com:6443/api?timeout=32s": dial tcp 3.131.235.64:6443: connect: connection refused
E0502 16:24:38.055256      31 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-6v6r41qp-bcd4e.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 3.131.235.64:6443: connect: connection refused

... 3 lines not shown

pull-ci-openshift-knative-eventing-release-v1.14-412-test-conformance-aws-412 (all) - 5 runs, 20% failed, 100% of failures match = 20% impact
#1786402164529172480junit24 hours ago
# step graph.Run multi-stage test test-conformance-aws-412 - test-conformance-aws-412-knative-must-gather container test
tx-8904b.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.131.176.63:6443: connect: connection refused
ClusterOperators:
#1786402164529172480junit24 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-3ci2dhtx-8904b.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.131.176.63:6443: connect: connection refused
pull-ci-openshift-knative-eventing-release-v1.14-412-test-e2e-aws-412 (all) - 5 runs, 40% failed, 50% of failures match = 20% impact
#1786402164575309824junit24 hours ago
# step graph.Run multi-stage test test-e2e-aws-412 - test-e2e-aws-412-knative-must-gather container test
verless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 18.221.209.211:6443: connect: connection refused
ClusterOperators:
#1786402164575309824junit24 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-3ci2dhtx-feb82.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 18.221.209.211:6443: connect: connection refused
pull-ci-openshift-openshift-controller-manager-master-openshift-e2e-aws-ovn-builds-techpreview (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1786407185790537728junit25 hours ago
	clusteroperator/monitoring is not available () because
	clusteroperator/network is degraded because Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib: failed to apply / update (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib: Patch "https://api-int.ci-op-cvk2d1s9-c8cf4.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps/ovnkube-script-lib?fieldManager=cluster-network-operator%!F(MISSING)operconfig&force=true": dial tcp 10.0.101.51:6443: connect: connection refused
	clusteroperator/openshift-apiserver is progressing: APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation
#1786407185790537728junit25 hours ago
	clusteroperator/monitoring is not available () because
	clusteroperator/network is degraded because Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib: failed to apply / update (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib: Patch "https://api-int.ci-op-cvk2d1s9-c8cf4.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps/ovnkube-script-lib?fieldManager=cluster-network-operator%!F(MISSING)operconfig&force=true": dial tcp 10.0.101.51:6443: connect: connection refused
	clusteroperator/openshift-apiserver is progressing: APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation
pull-ci-openshift-origin-master-e2e-aws-ovn-cgroupsv2 (all) - 12 runs, 42% failed, 20% of failures match = 8% impact
#1786398913050185728junit25 hours ago
# step graph.Run multi-stage test e2e-aws-ovn-cgroupsv2 - e2e-aws-ovn-cgroupsv2-gather-audit-logs container test
dev.rhcloud.com:6443/api?timeout=32s": dial tcp 3.131.15.114:6443: connect: connection refused
E0503 15:13:38.635083      32 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-qmhmkwf5-a5578.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 3.131.15.114:6443: connect: connection refused

... 3 lines not shown

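Each job section in this report opens with a fixed-format summary line ("<job> (all) - N runs, X% failed, Y% of failures match = Z% impact"). For sorting jobs by impact across a long report, those lines can be parsed with a small sketch (an illustrative helper, not part of the search tooling; the field layout is inferred from the summary lines in this report):

```python
import re

# Summary-line format used throughout this report; some lines say
# "of failures match" and others "of runs match", so accept both.
SUMMARY_RE = re.compile(
    r"^(?P<job>\S+) \(all\) - (?P<runs>\d+) runs?, (?P<failed>\d+)% failed, "
    r"(?P<match>\d+)% of (?:failures|runs) match = (?P<impact>\d+)% impact$"
)

def parse_summary(line: str):
    """Return (job, runs, impact%) for a summary line, or None otherwise."""
    m = SUMMARY_RE.match(line)
    if not m:
        return None
    return m.group("job"), int(m.group("runs")), int(m.group("impact"))

line = ("pull-ci-openshift-installer-release-4.15-okd-scos-e2e-aws-upgrade (all) "
        "- 2 runs, 100% failed, 100% of failures match = 100% impact")
print(parse_summary(line))
```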
pull-ci-openshift-ovn-kubernetes-release-4.13-4.13-upgrade-from-stable-4.12-local-gateway-e2e-aws-ovn-upgrade (all) - 6 runs, 17% failed, 200% of failures match = 33% impact
#1786350941243445248junit26 hours ago
May 03 13:04:56.892 E ns/openshift-dns pod/node-resolver-8nwxg node/ip-10-0-197-250.us-west-2.compute.internal uid/e5a19c03-98dc-4c9c-b50a-5a694bf377a3 container/dns-node-resolver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 13:04:59.911 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-197-250.us-west-2.compute.internal node/ip-10-0-197-250.us-west-2.compute.internal uid/84587c9c-9121-4ec2-a2f2-62f9f76ea1d4 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 13:04:58.629470       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 13:04:58.640050       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714741498 cert, and key in /tmp/serving-cert-644602908/serving-signer.crt, /tmp/serving-cert-644602908/serving-signer.key\nI0503 13:04:59.181616       1 observer_polling.go:159] Starting file observer\nW0503 13:04:59.194647       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-197-250.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 13:04:59.194769       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0503 13:04:59.205265       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-644602908/tls.crt::/tmp/serving-cert-644602908/tls.key"\nF0503 13:04:59.570919       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 03 13:05:04.146 E ns/openshift-ovn-kubernetes pod/ovnkube-master-t6xm8 node/ip-10-0-197-250.us-west-2.compute.internal uid/1560649c-c6df-4f24-8d0d-2eee96d3b779 container/nbdb reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1786350941243445248junit26 hours ago
May 03 13:05:04.172 E ns/openshift-network-diagnostics pod/network-check-target-n9qqp node/ip-10-0-197-250.us-west-2.compute.internal uid/9dcae250-81f6-42e3-b91a-63b2c0dcd333 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 13:05:04.294 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-197-250.us-west-2.compute.internal node/ip-10-0-197-250.us-west-2.compute.internal uid/84587c9c-9121-4ec2-a2f2-62f9f76ea1d4 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 13:04:58.629470       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 13:04:58.640050       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714741498 cert, and key in /tmp/serving-cert-644602908/serving-signer.crt, /tmp/serving-cert-644602908/serving-signer.key\nI0503 13:04:59.181616       1 observer_polling.go:159] Starting file observer\nW0503 13:04:59.194647       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-197-250.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 13:04:59.194769       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0503 13:04:59.205265       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-644602908/tls.crt::/tmp/serving-cert-644602908/tls.key"\nF0503 13:04:59.570919       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 03 13:05:05.355 E ns/openshift-dns pod/dns-default-8psp9 node/ip-10-0-197-250.us-west-2.compute.internal uid/1b502ac4-d80a-4e36-bb28-74934a60524f container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1786099704832987136junit42 hours ago
May 02 20:25:10.657 E ns/openshift-multus pod/network-metrics-daemon-d5d8k node/ip-10-0-194-110.ec2.internal uid/f68eeb30-4143-4f1b-8d37-2b818920c2bc container/network-metrics-daemon reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 20:25:11.638 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-194-110.ec2.internal node/ip-10-0-194-110.ec2.internal uid/b3ef7d52-1a57-40a9-be3b-b4b65535868d container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 20:25:09.862982       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 20:25:09.880945       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714681509 cert, and key in /tmp/serving-cert-1342229272/serving-signer.crt, /tmp/serving-cert-1342229272/serving-signer.key\nI0502 20:25:10.456856       1 observer_polling.go:159] Starting file observer\nW0502 20:25:10.481757       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-194-110.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 20:25:10.482024       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 20:25:10.491686       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1342229272/tls.crt::/tmp/serving-cert-1342229272/tls.key"\nF0502 20:25:10.797971       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 20:25:12.654 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-194-110.ec2.internal node/ip-10-0-194-110.ec2.internal uid/b3ef7d52-1a57-40a9-be3b-b4b65535868d container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 20:25:09.862982       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 20:25:09.880945       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714681509 cert, and key in /tmp/serving-cert-1342229272/serving-signer.crt, /tmp/serving-cert-1342229272/serving-signer.key\nI0502 20:25:10.456856       1 observer_polling.go:159] Starting file observer\nW0502 20:25:10.481757       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-194-110.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 20:25:10.482024       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 20:25:10.491686       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1342229272/tls.crt::/tmp/serving-cert-1342229272/tls.key"\nF0502 20:25:10.797971       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 2 lines not shown

pull-ci-openshift-origin-master-e2e-aws-ovn-single-node (all) - 13 runs, 62% failed, 13% of failures match = 8% impact
#1786379816178552832junit26 hours ago
# step graph.Run multi-stage test e2e-aws-ovn-single-node - e2e-aws-ovn-single-node-gather-audit-logs container test
rent server API group list: Get "https://api.ci-op-6yfy8r1t-9a2d0.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 3.21.20.194:6443: connect: connection refused
E0503 14:13:03.595147      32 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-6yfy8r1t-9a2d0.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 3.21.20.194:6443: connect: connection refused

... 3 lines not shown

pull-ci-openshift-cluster-network-operator-master-e2e-aws-live-migration-sdn-ovn-rollback (all) - 1 runs, 0% failed, 100% of runs match
#1786349590404927488junit26 hours ago
E0503 11:24:12.181273       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-32jklxwj-d9c58.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0503 11:25:10.535062       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-32jklxwj-d9c58.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.9.38:6443: connect: connection refused
I0503 11:25:29.708076       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786349590404927488junit26 hours ago
I0503 12:05:54.525749       1 observer_polling.go:159] Starting file observer
W0503 12:05:54.539338       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-137.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 12:05:54.539468       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
release-openshift-origin-installer-e2e-aws-upgrade-4.14-to-4.15-to-4.16-to-4.17-ci (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1786319403889987584junit27 hours ago
1 tests failed during this blip (2024-05-03 10:36:09.369238453 +0000 UTC m=+3113.557383617 to 2024-05-03 10:36:09.369238453 +0000 UTC m=+3113.557383617): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 10:36:40.030 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-49-254.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 10:36:29.412839       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 10:36:29.424216       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714732589 cert, and key in /tmp/serving-cert-3141240727/serving-signer.crt, /tmp/serving-cert-3141240727/serving-signer.key\nStaticPodsDegraded: I0503 10:36:29.802673       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 10:36:29.821592       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-49-254.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 10:36:29.821711       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 10:36:29.842660       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3141240727/tls.crt::/tmp/serving-cert-3141240727/tls.key"\nStaticPodsDegraded: F0503 10:36:30.171769       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready
1 tests failed during this blip (2024-05-03 10:36:40.030629179 +0000 UTC m=+3144.218774353 to 2024-05-03 10:36:40.030629179 +0000 UTC m=+3144.218774353): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
#1786319403889987584junit27 hours ago
1 tests failed during this blip (2024-05-03 10:42:10.377081176 +0000 UTC m=+3474.565226350 to 2024-05-03 10:42:10.377081176 +0000 UTC m=+3474.565226350): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 10:42:22.487 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-53-85.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 10:42:12.540548       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 10:42:12.541514       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714732932 cert, and key in /tmp/serving-cert-724257396/serving-signer.crt, /tmp/serving-cert-724257396/serving-signer.key\nStaticPodsDegraded: I0503 10:42:13.052167       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 10:42:13.065408       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-53-85.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 10:42:13.065516       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 10:42:13.084730       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-724257396/tls.crt::/tmp/serving-cert-724257396/tls.key"\nStaticPodsDegraded: F0503 10:42:13.561038       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready
1 tests failed during this blip (2024-05-03 10:42:22.487565591 +0000 UTC m=+3486.675710765 to 2024-05-03 10:42:22.487565591 +0000 UTC m=+3486.675710765): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-installer-e2e-aws-ovn-arm64 (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1786352611964751872junit28 hours ago
# step graph.Run multi-stage test ocp-installer-e2e-aws-ovn-arm64 - ocp-installer-e2e-aws-ovn-arm64-gather-audit-logs container test
ps://api.ci-op-mkt3fnss-6129f.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 54.215.72.23:6443: connect: connection refused
E0503 12:15:29.103316      33 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-mkt3fnss-6129f.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 54.215.72.23:6443: connect: connection refused

... 3 lines not shown

pull-ci-openshift-ovn-kubernetes-release-4.12-e2e-aws-ovn-upgrade (all) - 4 runs, 25% failed, 400% of failures match = 100% impact
#1786318085276307456junit28 hours ago
May 03 10:51:46.110 E ns/openshift-ovn-kubernetes pod/ovnkube-master-skdzw node/ip-10-0-158-142.us-west-2.compute.internal uid/feb3563c-0608-4d95-9eb5-8e6b53e1080f container/sbdb reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 03 10:51:47.121 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-158-142.us-west-2.compute.internal node/ip-10-0-158-142.us-west-2.compute.internal uid/5f7eef3e-d2b5-4d9d-91ed-44eedd2d5315 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0503 10:51:45.394537       1 cmd.go:216] Using insecure, self-signed certificates\nI0503 10:51:45.409186       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714733505 cert, and key in /tmp/serving-cert-257853563/serving-signer.crt, /tmp/serving-cert-257853563/serving-signer.key\nI0503 10:51:45.781624       1 observer_polling.go:159] Starting file observer\nW0503 10:51:45.789388       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-158-142.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0503 10:51:45.789516       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0503 10:51:45.797146       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-257853563/tls.crt::/tmp/serving-cert-257853563/tls.key"\nF0503 10:51:46.444955       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 03 10:51:50.654 E ns/openshift-network-diagnostics pod/network-check-target-q9fkg node/ip-10-0-158-142.us-west-2.compute.internal uid/647d95f3-5bfd-4668-9aa2-2fa99fe16313 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 2 lines not shown

#1786123079072616448junit41 hours ago
May 02 21:51:51.810 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-wmtkg node/ip-10-0-240-187.ec2.internal uid/24970fbb-6220-490f-8ba4-768d88349b19 container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 21:51:58.751 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-240-187.ec2.internal node/ip-10-0-240-187.ec2.internal uid/f7583d21-c762-4115-8d07-4fa5e15bb486 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 21:51:55.861265       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 21:51:55.879530       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714686715 cert, and key in /tmp/serving-cert-1317406685/serving-signer.crt, /tmp/serving-cert-1317406685/serving-signer.key\nI0502 21:51:56.403921       1 observer_polling.go:159] Starting file observer\nW0502 21:51:56.412577       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-240-187.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 21:51:56.412736       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0502 21:51:56.432324       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1317406685/tls.crt::/tmp/serving-cert-1317406685/tls.key"\nF0502 21:51:56.839770       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 21:51:59.854 E ns/e2e-k8s-sig-apps-daemonset-upgrade-557 pod/ds1-jhwtt node/ip-10-0-240-187.ec2.internal uid/7878e33f-fc08-40db-9d44-a04137c92da6 container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 3 lines not shown

#1786099847825199104junit42 hours ago
May 02 20:27:54.490 E ns/e2e-k8s-sig-apps-daemonset-upgrade-2600 pod/ds1-dw4lz node/ip-10-0-238-60.ec2.internal uid/6f8009b7-fc43-429f-bc1e-c78779d3bfd9 container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 20:27:55.572 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-238-60.ec2.internal node/ip-10-0-238-60.ec2.internal uid/6f822736-202c-4117-a940-eab5f8e3801d container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 20:27:49.985594       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 20:27:49.990646       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714681669 cert, and key in /tmp/serving-cert-3183054900/serving-signer.crt, /tmp/serving-cert-3183054900/serving-signer.key\nI0502 20:27:50.393546       1 observer_polling.go:159] Starting file observer\nW0502 20:27:50.406441       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-238-60.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 20:27:50.409344       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0502 20:27:50.409930       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3183054900/tls.crt::/tmp/serving-cert-3183054900/tls.key"\nW0502 20:27:54.778529       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0502 20:27:54.778592       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
May 02 20:27:56.955 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-238-60.ec2.internal node/ip-10-0-238-60.ec2.internal uid/6f822736-202c-4117-a940-eab5f8e3801d container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 20:27:49.985594       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 20:27:49.990646       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714681669 cert, and key in /tmp/serving-cert-3183054900/serving-signer.crt, /tmp/serving-cert-3183054900/serving-signer.key\nI0502 20:27:50.393546       1 observer_polling.go:159] Starting file observer\nW0502 20:27:50.406441       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-238-60.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 20:27:50.409344       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0502 20:27:50.409930       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3183054900/tls.crt::/tmp/serving-cert-3183054900/tls.key"\nW0502 20:27:54.778529       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0502 20:27:54.778592       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n

... 1 lines not shown

#1786056753591357440junit45 hours ago
May 02 17:53:58.673 E ns/openshift-dns pod/dns-default-7hrlz node/ip-10-0-214-11.us-east-2.compute.internal uid/07edd9b5-f341-4b08-b66e-bfda7cdbe90c container/dns reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 17:54:00.681 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-214-11.us-east-2.compute.internal node/ip-10-0-214-11.us-east-2.compute.internal uid/10f5e413-0c43-4958-8c8c-c484e88add0d container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 17:53:55.480846       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 17:53:55.514041       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714672435 cert, and key in /tmp/serving-cert-597832365/serving-signer.crt, /tmp/serving-cert-597832365/serving-signer.key\nI0502 17:53:56.020195       1 observer_polling.go:159] Starting file observer\nW0502 17:53:56.043325       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-214-11.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 17:53:56.043458       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0502 17:53:56.048656       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-597832365/tls.crt::/tmp/serving-cert-597832365/tls.key"\nW0502 17:54:00.041396       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0502 17:54:00.041445       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
May 02 17:54:01.805 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-214-11.us-east-2.compute.internal node/ip-10-0-214-11.us-east-2.compute.internal uid/10f5e413-0c43-4958-8c8c-c484e88add0d container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 17:53:55.480846       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 17:53:55.514041       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714672435 cert, and key in /tmp/serving-cert-597832365/serving-signer.crt, /tmp/serving-cert-597832365/serving-signer.key\nI0502 17:53:56.020195       1 observer_polling.go:159] Starting file observer\nW0502 17:53:56.043325       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-214-11.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 17:53:56.043458       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0502 17:53:56.048656       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-597832365/tls.crt::/tmp/serving-cert-597832365/tls.key"\nW0502 17:54:00.041396       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0502 17:54:00.041445       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n

... 1 lines not shown

periodic-ci-openshift-release-master-nightly-4.17-upgrade-from-stable-4.16-e2e-aws-upgrade-ovn-single-node (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786318229602308096junit28 hours ago
2024-05-03T09:42:54Z node/ip-10-0-60-230.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fth4tz1t-25845.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-230.us-east-2.compute.internal?timeout=10s - unexpected EOF
2024-05-03T09:45:21Z node/ip-10-0-60-230.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fth4tz1t-25845.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-230.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.106.37:6443: connect: connection refused
2024-05-03T09:45:21Z node/ip-10-0-60-230.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fth4tz1t-25845.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-230.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.16.201:6443: connect: connection refused

... 4 lines not shown

periodic-ci-openshift-multiarch-master-nightly-4.15-upgrade-from-nightly-4.14-ocp-e2e-aws-sdn-arm64 (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786325444308504576junit28 hours ago
May 03 11:02:04.903 - 37s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-27-209.ec2.internal" not ready since 2024-05-03 11:00:04 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 11:02:42.716 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-27-209.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 11:02:32.176113       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 11:02:32.176368       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714734152 cert, and key in /tmp/serving-cert-613612902/serving-signer.crt, /tmp/serving-cert-613612902/serving-signer.key\nStaticPodsDegraded: I0503 11:02:33.261743       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 11:02:33.271533       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-27-209.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 11:02:33.271658       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0503 11:02:33.282409       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-613612902/tls.crt::/tmp/serving-cert-613612902/tls.key"\nStaticPodsDegraded: W0503 11:02:38.324467       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nStaticPodsDegraded: F0503 11:02:38.324504       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
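The recurring RBAC failure above ends with the warning's own suggested remedy. A hedged sketch of that command, filled in with the service account named in these logs (`openshift-kube-apiserver:check-endpoints`); the rolebinding name is illustrative, not from the source, and running this requires a live cluster:

```shell
# Grant the check-endpoints service account read access to the
# extension-apiserver-authentication configmap in kube-system,
# following the hint printed by requestheader_controller.go.
# "check-endpoints-auth-reader" is a hypothetical binding name.
kubectl create rolebinding check-endpoints-auth-reader \
  -n kube-system \
  --role=extension-apiserver-authentication-reader \
  --serviceaccount=openshift-kube-apiserver:check-endpoints
```

In these CI runs the error is transient (the static pod restarts once the local apiserver is reachable), so this is a diagnosis aid rather than a required fix.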
periodic-ci-openshift-release-master-ci-4.17-e2e-aws-sdn-upgrade-rollback (all) - 1 runs, 0% failed, 100% of runs match
#1786317915088228352junit28 hours ago
I0503 09:21:44.714035       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1714728099\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714728099\" (2024-05-03 08:21:39 +0000 UTC to 2025-05-03 08:21:39 +0000 UTC (now=2024-05-03 09:21:44.714014253 +0000 UTC))"
E0503 09:21:49.625322       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-8i3z50s3-4ee4a.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.118.59:6443: connect: connection refused
periodic-ci-openshift-release-master-ci-4.17-e2e-aws-upgrade-ovn-single-node (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786317917600616448junit28 hours ago
2024-05-03T09:40:30Z node/ip-10-0-116-199.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-85isf3lx-f5ed7.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-199.us-east-2.compute.internal?timeout=10s - unexpected EOF
2024-05-03T09:40:30Z node/ip-10-0-116-199.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-85isf3lx-f5ed7.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-199.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.15.1:6443: connect: connection refused
2024-05-03T09:43:10Z node/ip-10-0-116-199.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-85isf3lx-f5ed7.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-199.us-east-2.compute.internal?timeout=10s - dial tcp 10.0.15.1:6443: connect: connection refused

... 5 lines not shown

periodic-ci-openshift-release-master-ci-4.17-e2e-aws-sdn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1786317911934111744junit28 hours ago
I0503 10:24:34.771919       1 observer_polling.go:159] Starting file observer
W0503 10:24:34.795230       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-31-145.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:24:34.795350       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

periodic-ci-openshift-release-master-nightly-4.17-upgrade-from-stable-4.15-e2e-aws-ovn-upgrade-paused (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786318227089920000junit29 hours ago
May 03 10:42:14.769 - 17s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-102-196.us-west-2.compute.internal" not ready since 2024-05-03 10:42:07 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 10:42:31.879 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-102-196.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 10:42:23.532486       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 10:42:23.532703       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714732943 cert, and key in /tmp/serving-cert-128184644/serving-signer.crt, /tmp/serving-cert-128184644/serving-signer.key\nStaticPodsDegraded: I0503 10:42:23.757444       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 10:42:23.758696       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-102-196.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 10:42:23.758795       1 builder.go:299] check-endpoints version 4.16.0-202404221110.p0.g65eb450.assembly.stream.el9-65eb450-65eb450da8c1c674c106d4856fb6a28474ca089d\nStaticPodsDegraded: I0503 10:42:23.759356       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-128184644/tls.crt::/tmp/serving-cert-128184644/tls.key"\nStaticPodsDegraded: F0503 10:42:24.032353       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 03 10:49:28.893 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-40-113.us-west-2.compute.internal" not ready since 2024-05-03 10:49:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786318227089920000junit29 hours ago
I0503 10:35:17.870829       1 observer_polling.go:159] Starting file observer
W0503 10:35:17.898584       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-218.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:35:17.898713       1 builder.go:299] check-endpoints version 4.16.0-202404221110.p0.g65eb450.assembly.stream.el9-65eb450-65eb450da8c1c674c106d4856fb6a28474ca089d
periodic-ci-openshift-release-master-ci-4.17-upgrade-from-stable-4.16-e2e-aws-ovn-uwm (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786317946969133056junit29 hours ago
I0503 09:14:49.417330       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1714727413\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714727413\" (2024-05-03 08:10:13 +0000 UTC to 2025-05-03 08:10:13 +0000 UTC (now=2024-05-03 09:14:49.417309052 +0000 UTC))"
E0503 09:19:13.234277       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-7x8i7608-1bb91.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.37.101:6443: connect: connection refused
I0503 09:19:32.487588       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786317946969133056junit29 hours ago
I0503 10:35:31.482989       1 observer_polling.go:159] Starting file observer
W0503 10:35:31.508370       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-1-158.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:35:31.508506       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
periodic-ci-openshift-release-master-ci-4.17-upgrade-from-stable-4.16-e2e-aws-sdn-upgrade-workload (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786317951180214272junit29 hours ago
cause/Error code/2 reason/ContainerExit uest canceled (Client.Timeout exceeded while awaiting headers)
E0503 09:23:44.637907       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-90j9mnb2-9423d.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.49.117:6443: connect: connection refused
E0503 09:23:50.132169       1 reflector.go:147] k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172: Failed to watch *v1.ConfigMap: unknown (get configmaps)
#1786317951180214272junit29 hours ago
I0503 10:37:16.440625       1 observer_polling.go:159] Starting file observer
W0503 10:37:16.452004       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-120-19.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0503 10:37:16.452225       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
periodic-ci-openshift-release-master-nightly-4.12-e2e-aws-ovn-single-node (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1786300277532397568junit30 hours ago
    }
    Get "https://api.ci-op-mnj5dr5g-b48f0.aws-2.ci.openshift.org:6443/api/v1/namespaces/e2e-test-ns-global-2tnnc/pods/test-ipv4-podxdcc8": dial tcp 54.88.79.72:6443: connect: connection refused
occurred
#1786300277532397568junit30 hours ago
    }
    Get "https://api.ci-op-mnj5dr5g-b48f0.aws-2.ci.openshift.org:6443/apis/build.openshift.io/v1/namespaces/e2e-test-build-volumes-lq9mp/builds/mys2itest-1": dial tcp 54.88.79.72:6443: connect: connection refused
occurred
pull-ci-openshift-cluster-node-tuning-operator-master-e2e-aws-ovn (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1786276293201891328junit33 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-audit-logs container test
s.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 44.226.147.8:6443: connect: connection refused
E0503 07:18:17.828101      33 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-pljkblj3-51d6e.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 44.226.147.8:6443: connect: connection refused

... 3 lines not shown

periodic-ci-openshift-release-master-ci-4.9-e2e-aws-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786213109237551104junit35 hours ago
May 03 03:30:51.852 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-5f64b4fb9f-f4rzf node/ip-10-0-149-128.us-west-2.compute.internal container/aws-ebs-csi-driver-operator reason/ContainerExit code/1 cause/Error
May 03 03:30:54.857 E ns/openshift-sdn pod/sdn-controller-dpc5d node/ip-10-0-149-128.us-west-2.compute.internal container/sdn-controller reason/ContainerExit code/2 cause/Error I0503 02:23:38.932116       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0503 02:29:38.716448       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field\nE0503 02:31:34.210052       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-qqi3rsv3-093d7.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.187.11:6443: connect: connection refused\n
May 03 03:31:00.514 E ns/openshift-multus pod/multus-additional-cni-plugins-c92z5 node/ip-10-0-156-221.us-west-2.compute.internal container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
periodic-ci-openshift-multiarch-master-nightly-4.17-ocp-e2e-aws-ovn-arm64-single-node (all) - 7 runs, 71% failed, 20% of failures match = 14% impact
#1786240652044931072junit35 hours ago
error: creating temp namespace: Post "https://api.ci-op-i1xgs00h-b677e.aws-2.ci.openshift.org:6443/api/v1/namespaces": dial tcp 44.239.65.173:6443: connect: connection refused
{"component":"entrypoint","error":"wrapped process failed: exit status 1","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:84","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-05-03T04:23:37Z"}
#1786240652044931072junit35 hours ago
level=info msg=Pulling debug logs from the bootstrap machine
level=error msg=Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get "https://api.ci-op-i1xgs00h-b677e.aws-2.ci.openshift.org:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 54.245.140.56:6443: connect: connection refused - error from a previous attempt: unexpected EOF
level=error msg=Bootstrap failed to complete: context deadline exceeded
pull-ci-openshift-origin-release-4.15-e2e-aws-ovn-upgrade (all) - 2 runs, 0% failed, 50% of runs match
#1786194771811766272junit36 hours ago
May 03 02:39:42.738 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-14-196.ec2.internal" not ready since 2024-05-03 02:37:42 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 02:40:11.778 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-14-196.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 02:40:03.546650       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 02:40:03.546938       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714704003 cert, and key in /tmp/serving-cert-4000974848/serving-signer.crt, /tmp/serving-cert-4000974848/serving-signer.key\nStaticPodsDegraded: I0503 02:40:03.847139       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 02:40:03.861405       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-14-196.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 02:40:03.861582       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1929-gf5c5a60-f5c5a609f\nStaticPodsDegraded: I0503 02:40:03.871682       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4000974848/tls.crt::/tmp/serving-cert-4000974848/tls.key"\nStaticPodsDegraded: F0503 02:40:04.222632       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 02:45:29.742 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-7-77.ec2.internal" not ready since 2024-05-03 02:45:18 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786194771811766272junit36 hours ago
May 03 02:50:47.292 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-127-135.ec2.internal" not ready since 2024-05-03 02:50:40 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 03 02:51:01.559 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-127-135.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 02:50:54.317297       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 02:50:54.317695       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714704654 cert, and key in /tmp/serving-cert-2044955003/serving-signer.crt, /tmp/serving-cert-2044955003/serving-signer.key\nStaticPodsDegraded: I0503 02:50:55.074343       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 02:50:55.091987       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-127-135.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 02:50:55.092142       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1929-gf5c5a60-f5c5a609f\nStaticPodsDegraded: I0503 02:50:55.108817       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2044955003/tls.crt::/tmp/serving-cert-2044955003/tls.key"\nStaticPodsDegraded: F0503 02:50:55.444585       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 03 03:34:37.449 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-7-77.ec2.internal" not ready since 2024-05-03 03:32:37 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
pull-ci-openshift-origin-release-4.15-e2e-aws-ovn-single-node-serial (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1786194767621656576junit37 hours ago
May 03 02:46:22.791 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity: failed to get current state of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity: Get "https://api-int.ci-op-b0z5j86s-0931d.aws-2.ci.openshift.org:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/network-node-identity": dial tcp 10.0.30.177:6443: connect: connection refused (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)
May 03 02:46:22.791 - 24s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity: failed to get current state of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity: Get "https://api-int.ci-op-b0z5j86s-0931d.aws-2.ci.openshift.org:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/network-node-identity": dial tcp 10.0.30.177:6443: connect: connection refused (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)

... 1 lines not shown

periodic-ci-openshift-release-master-ci-4.14-upgrade-from-stable-4.13-e2e-aws-ovn-upgrade (all) - 4 runs, 0% failed, 50% of runs match
#1786175746117472256junit37 hours ago
May 03 01:21:16.696 - 18s   E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-157-83.us-west-2.compute.internal" not ready since 2024-05-03 01:19:16 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
May 03 01:27:32.449 - 1s    E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-246-238.us-west-2.compute.internal" not ready since 2024-05-03 01:27:13 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-246-238.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0503 01:27:27.054892       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0503 01:27:27.055097       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714699647 cert, and key in /tmp/serving-cert-1550138709/serving-signer.crt, /tmp/serving-cert-1550138709/serving-signer.key\nStaticPodsDegraded: I0503 01:27:27.221772       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0503 01:27:27.223290       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-246-238.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0503 01:27:27.223408       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1890-g2eab0f9-2eab0f9e2\nStaticPodsDegraded: I0503 01:27:27.224270       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1550138709/tls.crt::/tmp/serving-cert-1550138709/tls.key"\nStaticPodsDegraded: F0503 01:27:27.372538       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
#1786027140211281920junit47 hours ago
May 02 15:29:44.771 - 21s   E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-156-108.us-west-1.compute.internal" not ready since 2024-05-02 15:27:44 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
May 02 15:41:42.291 - 3s    E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-228-1.us-west-1.compute.internal" not ready since 2024-05-02 15:41:24 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-228-1.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 15:41:36.421225       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 15:41:36.421493       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714664496 cert, and key in /tmp/serving-cert-2570963332/serving-signer.crt, /tmp/serving-cert-2570963332/serving-signer.key\nStaticPodsDegraded: I0502 15:41:36.695228       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 15:41:36.722968       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-228-1.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 15:41:36.723093       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1890-g2eab0f9-2eab0f9e2\nStaticPodsDegraded: I0502 15:41:36.750679       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2570963332/tls.crt::/tmp/serving-cert-2570963332/tls.key"\nStaticPodsDegraded: F0502 15:41:36.971219       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
periodic-ci-openshift-knative-serverless-operator-main-412-kitchensink-e2e-aws-412-c (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786204804859564032junit38 hours ago
# step graph.Run multi-stage test kitchensink-e2e-aws-412-c - kitchensink-e2e-aws-412-c-knative-must-gather container test
com:6443/api?timeout=32s": dial tcp 13.58.97.229:6443: connect: connection refused
E0503 02:05:17.861771      25 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-2q801c8q-b3bbf.serverless.devcluster.openshift.com:6443/api?timeout=32s": dial tcp 13.58.97.229:6443: connect: connection refused

... 3 lines not shown

pull-ci-openshift-knative-eventing-release-next-412-test-encryption-auth-e2e-aws-412 (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786198386748166144junit38 hours ago
# step graph.Run multi-stage test test-encryption-auth-e2e-aws-412 - test-encryption-auth-e2e-aws-412-knative-must-gather container test
d4-c60e2.serverless.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.139.153.19:6443: connect: connection refused
ClusterOperators:
#1786198386748166144junit38 hours ago
Error running must-gather collection:
    creating temp namespace: Post "https://api.ci-op-d45d9hd4-c60e2.serverless.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp 3.139.153.19:6443: connect: connection refused
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-e2e-serial-aws-ovn-heterogeneous (all) - 11 runs, 36% failed, 25% of failures match = 9% impact
#1786180133959241728junit39 hours ago
# step graph.Run multi-stage test ocp-e2e-serial-aws-ovn-heterogeneous - ocp-e2e-serial-aws-ovn-heterogeneous-gather-audit-logs container test
-wgfctx9m-fe5c1.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 44.227.249.203:6443: connect: connection refused
E0503 00:49:52.411936      33 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-wgfctx9m-fe5c1.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 44.227.249.203:6443: connect: connection refused

... 3 lines not shown

periodic-ci-openshift-release-master-nightly-4.14-e2e-aws-ovn-single-node-serial (all) - 4 runs, 75% failed, 133% of failures match = 100% impact
#1786161998795378688junit39 hours ago
2024-05-03T00:00:39Z node/ip-10-0-76-49.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-55b68z7q-d20be.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-49.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.40.78:6443: connect: connection refused
2024-05-03T00:00:39Z node/ip-10-0-76-49.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-55b68z7q-d20be.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-49.us-west-1.compute.internal?timeout=10s - dial tcp 10.0.88.245:6443: connect: connection refused

... 14 lines not shown

#1786143239837847552junit41 hours ago
# step graph.Run multi-stage test e2e-aws-ovn-single-node-serial - e2e-aws-ovn-single-node-serial-gather-audit-logs container test
ci-op-i8vkl4np-d20be.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 44.225.121.43:6443: connect: connection refused
E0502 22:29:46.832742      32 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-i8vkl4np-d20be.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 44.225.121.43:6443: connect: connection refused

... 3 lines not shown

#1786097827437350912junit44 hours ago
May 02 19:53:36.489 - 45s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io: failed to get current state of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io: Get "https://api-int.ci-op-sfi5spdf-d20be.aws-2.ci.openshift.org:6443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/network-node-identity.openshift.io": dial tcp 10.0.49.188:6443: connect: connection refused
May 02 20:02:15.655 - 47s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited: failed to get current state of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited: Get "https://api-int.ci-op-sfi5spdf-d20be.aws-2.ci.openshift.org:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/openshift-ovn-kubernetes-control-plane-limited": dial tcp 10.0.92.170:6443: connect: connection refused

... 1 lines not shown

#1786059856575205376junit46 hours ago
2024-05-02T17:44:55Z node/ip-10-0-99-230.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-9860p2py-d20be.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-99-230.ec2.internal?timeout=10s - dial tcp 10.0.27.25:6443: connect: connection refused
2024-05-02T17:44:55Z node/ip-10-0-99-230.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-9860p2py-d20be.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-99-230.ec2.internal?timeout=10s - dial tcp 10.0.117.133:6443: connect: connection refused

... 14 lines not shown

periodic-ci-openshift-multiarch-master-nightly-4.14-upgrade-from-stable-4.13-ocp-e2e-aws-ovn-heterogeneous-upgrade (all) - 3 runs, 67% failed, 50% of failures match = 33% impact
#1786143586216054784junit40 hours ago
May 02 23:15:43.653 - 17s   E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-141-63.us-west-2.compute.internal" not ready since 2024-05-02 23:15:38 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?])
May 02 23:22:32.266 - 1s    E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-244-132.us-west-2.compute.internal" not ready since 2024-05-02 23:22:11 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-244-132.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 23:22:26.667365       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 23:22:26.667605       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714692146 cert, and key in /tmp/serving-cert-246952080/serving-signer.crt, /tmp/serving-cert-246952080/serving-signer.key\nStaticPodsDegraded: I0502 23:22:26.977981       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 23:22:27.004195       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-244-132.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 23:22:27.004390       1 builder.go:271] check-endpoints version 4.14.0-202404250639.p0.g2eab0f9.assembly.stream.el8-2eab0f9-2eab0f9e27db4399bf8885d62ca338c3d02fdd35\nStaticPodsDegraded: I0502 23:22:27.035134       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-246952080/tls.crt::/tmp/serving-cert-246952080/tls.key"\nStaticPodsDegraded: F0502 23:22:27.600460       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
May 02 23:29:02.301 - 1s    E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-164-212.us-west-2.compute.internal" not ready since 2024-05-02 23:28:41 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-164-212.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 23:28:56.294957       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 23:28:56.295330       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714692536 cert, and key in /tmp/serving-cert-4067021953/serving-signer.crt, /tmp/serving-cert-4067021953/serving-signer.key\nStaticPodsDegraded: I0502 23:28:56.793681       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 23:28:56.811568       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-164-212.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 23:28:56.811758       1 builder.go:271] check-endpoints version 4.14.0-202404250639.p0.g2eab0f9.assembly.stream.el8-2eab0f9-2eab0f9e27db4399bf8885d62ca338c3d02fdd35\nStaticPodsDegraded: I0502 23:28:56.840912       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4067021953/tls.crt::/tmp/serving-cert-4067021953/tls.key"\nStaticPodsDegraded: F0502 23:28:57.311244       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:

... 1 lines not shown

release-openshift-origin-installer-launch-aws-modern (all) - 161 runs, 34% failed, 2% of failures match = 1% impact
#1786154191329169408junit41 hours ago
# step graph.Run multi-stage test launch - launch-gather-audit-logs container test
43/api?timeout=32s": dial tcp 3.89.240.13:6443: connect: connection refused
E0502 23:07:04.218781      32 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-ln-swpvth2-76ef8.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 3.89.240.13:6443: connect: connection refused

... 3 lines not shown

pull-ci-openshift-ovn-kubernetes-release-4.12-4.12-upgrade-from-stable-4.11-local-gateway-e2e-aws-ovn-upgrade (all) - 5 runs, 40% failed, 100% of failures match = 40% impact
#1786123078783209472junit41 hours ago
May 02 21:38:01.125 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-skd95 node/ip-10-0-140-204.us-east-2.compute.internal uid/6b86b508-2bc4-44d8-b75e-fd21dab3b339 container/csi-liveness-probe reason/ContainerExit code/2 cause/Error
May 02 21:38:01.708 E ns/openshift-cloud-network-config-controller pod/cloud-network-config-controller-789957884d-nr2tg node/ip-10-0-149-61.us-east-2.compute.internal uid/b79efbf7-9611-4222-9e80-cf258b1f5a62 container/controller reason/ContainerExit code/1 cause/Error t-2.compute.internal to node workqueue\nI0502 20:49:00.489486       1 controller.go:96] Starting node workers\nI0502 20:49:00.489519       1 controller.go:102] Started node workers\nI0502 20:49:00.489551       1 controller.go:160] Dropping key 'ip-10-0-223-161.us-east-2.compute.internal' from the node workqueue\nI0502 20:49:00.489558       1 controller.go:160] Dropping key 'ip-10-0-241-187.us-east-2.compute.internal' from the node workqueue\nI0502 20:49:00.489564       1 controller.go:160] Dropping key 'ip-10-0-140-204.us-east-2.compute.internal' from the node workqueue\nI0502 20:49:00.489569       1 controller.go:160] Dropping key 'ip-10-0-144-109.us-east-2.compute.internal' from the node workqueue\nI0502 20:49:00.489574       1 controller.go:160] Dropping key 'ip-10-0-149-61.us-east-2.compute.internal' from the node workqueue\nI0502 20:49:00.489578       1 controller.go:160] Dropping key 'ip-10-0-186-6.us-east-2.compute.internal' from the node workqueue\nI0502 20:49:00.493454       1 controller.go:96] Starting secret workers\nI0502 20:49:00.493499       1 controller.go:102] Started secret workers\nI0502 20:49:03.395640       1 controller.go:96] Starting cloud-private-ip-config workers\nI0502 20:49:03.395807       1 controller.go:102] Started cloud-private-ip-config workers\nE0502 20:55:05.178587       1 leaderelection.go:330] error retrieving resource lock openshift-cloud-network-config-controller/cloud-network-config-controller-lock: Get "https://api-int.ci-op-wf3n35qi-28185.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-cloud-network-config-controller/configmaps/cloud-network-config-controller-lock": dial tcp 10.0.159.88:6443: connect: connection refused\nI0502 21:38:00.735874       1 controller.go:104] Shutting down cloud-private-ip-config workers\nI0502 21:38:00.736178       1 controller.go:104] Shutting down secret workers\nI0502 21:38:00.736186       1 controller.go:104] Shutting down node workers\nI0502 21:38:00.781189       1 main.go:161] Stopped leading, sending SIGTERM and shutting down controller\n
May 02 21:38:07.760 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-88cqp node/ip-10-0-149-61.us-east-2.compute.internal uid/59149ebd-6c6e-43a1-80d1-4a5ab8380160 container/csi-liveness-probe reason/ContainerExit code/2 cause/Error
#1786056753012543488junit45 hours ago
May 02 17:29:29.534 E ns/openshift-ovn-kubernetes pod/ovnkube-master-dd2nm node/ip-10-0-142-21.us-west-2.compute.internal uid/f703c6d2-f21a-47b3-bf9d-357f595897c1 container/sbdb reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 17:29:31.549 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-21.us-west-2.compute.internal node/ip-10-0-142-21.us-west-2.compute.internal uid/e88043be-be48-40cd-be7e-fef1455f4df7 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 17:29:29.916475       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 17:29:29.935999       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714670969 cert, and key in /tmp/serving-cert-3917401140/serving-signer.crt, /tmp/serving-cert-3917401140/serving-signer.key\nI0502 17:29:30.537617       1 observer_polling.go:159] Starting file observer\nW0502 17:29:30.546768       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-142-21.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 17:29:30.546879       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0502 17:29:30.557584       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3917401140/tls.crt::/tmp/serving-cert-3917401140/tls.key"\nF0502 17:29:30.919132       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 17:29:35.590 E ns/openshift-network-diagnostics pod/network-check-target-qf5mt node/ip-10-0-142-21.us-west-2.compute.internal uid/e587f19e-b95e-413e-aeb8-cec01a89e923 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 3 lines not shown

pull-ci-openshift-ovn-kubernetes-release-4.12-4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade (all) - 3 runs, 0% failed, 67% of runs match
#1786123078695129088junit41 hours ago
May 02 21:20:39.247 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-w6q2j node/ip-10-0-128-45.ec2.internal uid/39ad485c-3da2-4473-8338-1b6688130df1 container/csi-liveness-probe reason/ContainerExit code/2 cause/Error
May 02 21:21:58.526 E ns/openshift-cloud-network-config-controller pod/cloud-network-config-controller-6d6cc7c78c-xt57s node/ip-10-0-128-45.ec2.internal uid/6254dd89-c17a-43d2-91d6-e419d3d9126b container/controller reason/ContainerExit code/1 cause/Error       1 leaderelection.go:330] error retrieving resource lock openshift-cloud-network-config-controller/cloud-network-config-controller-lock: Get "https://api-int.ci-op-vvxz66b5-a619b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-cloud-network-config-controller/configmaps/cloud-network-config-controller-lock": dial tcp 10.0.142.194:6443: connect: connection refused\nE0502 20:31:52.082373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)\nI0502 20:35:30.759397       1 controller.go:182] Assigning key: ip-10-0-174-21.ec2.internal to node workqueue\nI0502 20:35:31.046822       1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: [{"interface":"eni-01ebe10e5cfff50b8","ifaddr":{"ipv4":"10.0.128.0/18"},"capacity":{"ipv4":14,"ipv6":15}}]' on node: ip-10-0-174-21.ec2.internal\nI0502 20:35:31.065245       1 controller.go:160] Dropping key 'ip-10-0-174-21.ec2.internal' from the node workqueue\nI0502 20:35:31.434306       1 controller.go:182] Assigning key: ip-10-0-174-21.ec2.internal to node workqueue\nI0502 20:35:31.434383       1 controller.go:160] Dropping key 'ip-10-0-174-21.ec2.internal' from the node workqueue\nI0502 20:36:21.673350       1 controller.go:182] Assigning key: ip-10-0-174-21.ec2.internal to node workqueue\nI0502 20:36:21.673459       1 controller.go:160] Dropping key 'ip-10-0-174-21.ec2.internal' from the node workqueue\nI0502 20:36:26.429864       1 controller.go:182] Assigning key: ip-10-0-174-21.ec2.internal to node workqueue\nI0502 20:36:26.429901       1 controller.go:160] Dropping key 'ip-10-0-174-21.ec2.internal' from the node workqueue\nI0502 21:21:56.939423       1 controller.go:104] Shutting down cloud-private-ip-config workers\nI0502 21:21:56.939611       1 controller.go:104] Shutting down node workers\nI0502 21:21:56.939762       1 controller.go:104] Shutting down secret workers\nI0502 21:21:56.944869       1 main.go:161] Stopped leading, sending SIGTERM and shutting down controller\n
May 02 21:22:04.578 E ns/openshift-multus pod/multus-additional-cni-plugins-tw8df node/ip-10-0-174-21.ec2.internal uid/c5a84ef6-e2cd-49fb-8a62-b3214f92cf00 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
#1786123078695129088junit41 hours ago
May 02 21:54:27.890 E ns/openshift-dns pod/dns-default-wh2d6 node/ip-10-0-128-45.ec2.internal uid/df3d536e-6954-4a5f-b29a-18a2b0d1b7e3 container/dns reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 21:54:28.900 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-45.ec2.internal node/ip-10-0-128-45.ec2.internal uid/544e31ac-e429-450d-9a31-9e5ef406c337 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 21:54:22.987500       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 21:54:23.001177       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714686862 cert, and key in /tmp/serving-cert-347672523/serving-signer.crt, /tmp/serving-cert-347672523/serving-signer.key\nI0502 21:54:23.756219       1 observer_polling.go:159] Starting file observer\nW0502 21:54:23.767764       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-128-45.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 21:54:23.767952       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0502 21:54:23.775791       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-347672523/tls.crt::/tmp/serving-cert-347672523/tls.key"\nW0502 21:54:28.043882       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0502 21:54:28.043941       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
May 02 21:54:29.903 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-45.ec2.internal node/ip-10-0-128-45.ec2.internal uid/544e31ac-e429-450d-9a31-9e5ef406c337 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 21:54:22.987500       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 21:54:23.001177       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714686862 cert, and key in /tmp/serving-cert-347672523/serving-signer.crt, /tmp/serving-cert-347672523/serving-signer.key\nI0502 21:54:23.756219       1 observer_polling.go:159] Starting file observer\nW0502 21:54:23.767764       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-128-45.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 21:54:23.767952       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0502 21:54:23.775791       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-347672523/tls.crt::/tmp/serving-cert-347672523/tls.key"\nW0502 21:54:28.043882       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nF0502 21:54:28.043941       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n

... 1 lines not shown

#1786056752874131456junit45 hours ago
May 02 17:51:16.000 - 1s    E ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new reason/DisruptionBegan ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new stopped responding to GET requests over new connections: Get "https://test-disruption-new-openshift-image-registry.apps.ci-op-lmtzcz9z-a619b.origin-ci-int-aws.dev.rhcloud.com/healthz": read tcp 10.129.207.19:46684->184.72.57.184:443: read: connection reset by peer
May 02 17:51:16.466 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-191-35.us-west-1.compute.internal node/ip-10-0-191-35.us-west-1.compute.internal uid/6b854f9b-003c-4c71-99d4-e3735b4f75b2 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 17:51:14.749968       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 17:51:14.770056       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714672274 cert, and key in /tmp/serving-cert-2778190839/serving-signer.crt, /tmp/serving-cert-2778190839/serving-signer.key\nI0502 17:51:15.400309       1 observer_polling.go:159] Starting file observer\nW0502 17:51:15.415598       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-191-35.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 17:51:15.415794       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0502 17:51:15.416497       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2778190839/tls.crt::/tmp/serving-cert-2778190839/tls.key"\nF0502 17:51:15.961235       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 17:51:16.891 E ns/openshift-dns pod/dns-default-mp87f node/ip-10-0-179-27.us-west-1.compute.internal uid/d4867e11-933c-4ef6-bad0-2889f3b85cee container/dns reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1786056752874131456junit45 hours ago
May 02 17:51:19.619 E ns/e2e-k8s-sig-apps-daemonset-upgrade-3591 pod/ds1-hb999 node/ip-10-0-191-35.us-west-1.compute.internal uid/d87e673f-dba1-40a4-9551-cafd5c129435 container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 17:51:19.716 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-191-35.us-west-1.compute.internal node/ip-10-0-191-35.us-west-1.compute.internal uid/6b854f9b-003c-4c71-99d4-e3735b4f75b2 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 17:51:14.749968       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 17:51:14.770056       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714672274 cert, and key in /tmp/serving-cert-2778190839/serving-signer.crt, /tmp/serving-cert-2778190839/serving-signer.key\nI0502 17:51:15.400309       1 observer_polling.go:159] Starting file observer\nW0502 17:51:15.415598       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-191-35.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 17:51:15.415794       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0502 17:51:15.416497       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2778190839/tls.crt::/tmp/serving-cert-2778190839/tls.key"\nF0502 17:51:15.961235       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 17:51:19.741 E ns/openshift-network-diagnostics pod/network-check-target-dbj5f node/ip-10-0-191-35.us-west-1.compute.internal uid/f6c69dcd-3643-4920-bfed-7f622de99f65 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
periodic-ci-openshift-release-master-ci-4.13-upgrade-from-stable-4.12-e2e-aws-ovn-uwm (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786116482543915008junit42 hours ago
May 02 21:13:41.984 E ns/openshift-dns pod/node-resolver-ztd2v node/ip-10-0-146-209.ec2.internal uid/88debc3e-b3b0-4c78-b1fa-5e5825f8dffd container/dns-node-resolver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 21:13:46.004 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-209.ec2.internal node/ip-10-0-146-209.ec2.internal uid/99200b8a-f33f-45f8-810c-217c32051550 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 21:13:44.787826       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 21:13:44.795826       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714684424 cert, and key in /tmp/serving-cert-741304887/serving-signer.crt, /tmp/serving-cert-741304887/serving-signer.key\nI0502 21:13:45.278267       1 observer_polling.go:159] Starting file observer\nW0502 21:13:45.288215       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-146-209.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 21:13:45.288430       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 21:13:45.293660       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-741304887/tls.crt::/tmp/serving-cert-741304887/tls.key"\nF0502 21:13:45.606835       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 21:13:49.400 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-209.ec2.internal node/ip-10-0-146-209.ec2.internal uid/99200b8a-f33f-45f8-810c-217c32051550 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 21:13:44.787826       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 21:13:44.795826       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714684424 cert, and key in /tmp/serving-cert-741304887/serving-signer.crt, /tmp/serving-cert-741304887/serving-signer.key\nI0502 21:13:45.278267       1 observer_polling.go:159] Starting file observer\nW0502 21:13:45.288215       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-146-209.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 21:13:45.288430       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 21:13:45.293660       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-741304887/tls.crt::/tmp/serving-cert-741304887/tls.key"\nF0502 21:13:45.606835       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 line not shown

pull-ci-openshift-ovn-kubernetes-release-4.13-4.13-upgrade-from-stable-4.12-e2e-aws-ovn-upgrade (all) - 5 runs, 0% failed, 60% of runs match
#1786099704711352320 junit 42 hours ago
May 02 20:27:32.677 E ns/openshift-monitoring pod/node-exporter-gqczr node/ip-10-0-174-20.us-west-2.compute.internal uid/96c90e23-2352-4681-bc29-491d11ce8452 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 20:27:38.399 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-174-20.us-west-2.compute.internal node/ip-10-0-174-20.us-west-2.compute.internal uid/a38973a1-13e6-4268-b4dd-8b2ad9dcd13f container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 20:27:36.781237       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 20:27:36.781493       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714681656 cert, and key in /tmp/serving-cert-1661203314/serving-signer.crt, /tmp/serving-cert-1661203314/serving-signer.key\nI0502 20:27:37.406890       1 observer_polling.go:159] Starting file observer\nW0502 20:27:37.424030       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-174-20.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 20:27:37.424171       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 20:27:37.447562       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1661203314/tls.crt::/tmp/serving-cert-1661203314/tls.key"\nF0502 20:27:37.786199       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 20:27:40.459 E ns/openshift-dns pod/node-resolver-bsvvk node/ip-10-0-174-20.us-west-2.compute.internal uid/19e12497-b9ad-490e-b317-3b573565df1f container/dns-node-resolver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 2 lines not shown

#1786087495495061504 junit 43 hours ago
May 02 19:52:24.570 - 999ms E ns/openshift-console route/console disruption/ingress-to-console connection/new reason/DisruptionBegan ns/openshift-console route/console disruption/ingress-to-console connection/new stopped responding to GET requests over new connections: Get "https://console-openshift-console.apps.ci-op-swnld3gb-af9c8.origin-ci-int-aws.dev.rhcloud.com/healthz": read tcp 10.129.23.17:52720->13.57.101.120:443: read: connection reset by peer
May 02 19:52:25.061 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-185-135.us-west-1.compute.internal node/ip-10-0-185-135.us-west-1.compute.internal uid/d7518500-654b-43db-9055-cd0242c93bf9 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 19:52:23.792532       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 19:52:23.796564       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714679543 cert, and key in /tmp/serving-cert-1477053604/serving-signer.crt, /tmp/serving-cert-1477053604/serving-signer.key\nI0502 19:52:24.293725       1 observer_polling.go:159] Starting file observer\nW0502 19:52:24.305146       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-185-135.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 19:52:24.305265       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 19:52:24.317841       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1477053604/tls.crt::/tmp/serving-cert-1477053604/tls.key"\nF0502 19:52:24.675909       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 19:52:30.549 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-185-135.us-west-1.compute.internal node/ip-10-0-185-135.us-west-1.compute.internal uid/d7518500-654b-43db-9055-cd0242c93bf9 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 19:52:23.792532       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 19:52:23.796564       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714679543 cert, and key in /tmp/serving-cert-1477053604/serving-signer.crt, /tmp/serving-cert-1477053604/serving-signer.key\nI0502 19:52:24.293725       1 observer_polling.go:159] Starting file observer\nW0502 19:52:24.305146       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-185-135.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 19:52:24.305265       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 19:52:24.317841       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1477053604/tls.crt::/tmp/serving-cert-1477053604/tls.key"\nF0502 19:52:24.675909       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1786066852900769792 junit 47 hours ago
# step graph.Run multi-stage test e2e-aws-ovn-upgrade - e2e-aws-ovn-upgrade-gather-audit-logs container test
dev.rhcloud.com:6443/api?timeout=32s": dial tcp 52.21.86.108:6443: connect: connection refused
E0502 16:48:19.220318      33 memcache.go:238] couldn't get current server API group list: Get "https://api.ci-op-dq9g24mr-af9c8.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 52.21.86.108:6443: connect: connection refused

... 3 lines not shown

periodic-ci-openshift-release-master-ci-4.14-upgrade-from-stable-4.13-e2e-aws-ovn-uwm (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786116485060497408 junit 42 hours ago
May 02 21:12:32.313 - 422ms E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-202-153.ec2.internal" not ready since 2024-05-02 21:12:14 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-202-153.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 21:12:25.533161       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 21:12:25.533448       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714684345 cert, and key in /tmp/serving-cert-1847252621/serving-signer.crt, /tmp/serving-cert-1847252621/serving-signer.key\nStaticPodsDegraded: I0502 21:12:25.931321       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 21:12:25.939438       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-202-153.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 21:12:25.939545       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1890-g2eab0f9-2eab0f9e2\nStaticPodsDegraded: I0502 21:12:25.953140       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1847252621/tls.crt::/tmp/serving-cert-1847252621/tls.key"\nStaticPodsDegraded: F0502 21:12:26.131872       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
May 02 21:17:49.700 - 28s   E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-173-238.ec2.internal" not ready since 2024-05-02 21:15:49 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
pull-ci-openshift-cluster-control-plane-machine-set-operator-main-e2e-aws-ovn-upgrade (all) - 4 runs, 0% failed, 75% of runs match
#1786119948242784256 junit 42 hours ago
I0502 20:13:02.609446       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0502 20:21:08.457313       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-zyl3jfnj-0b29d.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.24.42:6443: connect: connection refused
I0502 20:21:35.553392       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786119948242784256 junit 42 hours ago
I0502 21:13:28.772496       1 observer_polling.go:159] Starting file observer
W0502 21:13:28.782120       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-60-139.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 21:13:28.782245       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786067577970102272 junit 46 hours ago
May 02 17:53:03.631 - 30s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-6-100.us-east-2.compute.internal" not ready since 2024-05-02 17:51:03 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 17:53:33.829 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-6-100.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 17:53:25.001203       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 17:53:25.001421       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714672405 cert, and key in /tmp/serving-cert-2608519013/serving-signer.crt, /tmp/serving-cert-2608519013/serving-signer.key\nStaticPodsDegraded: I0502 17:53:25.307844       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 17:53:25.309294       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-6-100.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 17:53:25.309419       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 17:53:25.310050       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2608519013/tls.crt::/tmp/serving-cert-2608519013/tls.key"\nStaticPodsDegraded: F0502 17:53:25.612383       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 17:58:46.208 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-72-165.us-east-2.compute.internal" not ready since 2024-05-02 17:58:32 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786067577970102272 junit 46 hours ago
I0502 16:56:01.409206       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0502 17:02:58.159758       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-47394byw-0b29d.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.42.89:6443: connect: connection refused
I0502 17:03:04.405977       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1786059030246985728 junit 46 hours ago
I0502 17:39:29.946263       1 observer_polling.go:159] Starting file observer
W0502 17:39:29.975682       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-115-92.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 17:39:29.975900       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

pull-ci-openshift-ovn-kubernetes-release-4.13-e2e-aws-ovn-upgrade-local-gateway (all) - 5 runs, 0% failed, 60% of runs match
#1786099707223740416 junit 42 hours ago
May 02 20:22:41.096 E ns/openshift-monitoring pod/node-exporter-pk976 node/ip-10-0-210-180.us-west-1.compute.internal uid/7868a0f8-254f-4390-a6c0-931a4f1dfd41 container/node-exporter reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 20:22:45.120 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-210-180.us-west-1.compute.internal node/ip-10-0-210-180.us-west-1.compute.internal uid/07ca48c3-01cb-43b6-a998-972bd9b0860e container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 20:22:44.226732       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 20:22:44.233903       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714681364 cert, and key in /tmp/serving-cert-1347146758/serving-signer.crt, /tmp/serving-cert-1347146758/serving-signer.key\nI0502 20:22:44.521920       1 observer_polling.go:159] Starting file observer\nW0502 20:22:44.540624       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-210-180.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 20:22:44.540747       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 20:22:44.593703       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1347146758/tls.crt::/tmp/serving-cert-1347146758/tls.key"\nF0502 20:22:44.840803       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 20:22:48.169 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-210-180.us-west-1.compute.internal node/ip-10-0-210-180.us-west-1.compute.internal uid/07ca48c3-01cb-43b6-a998-972bd9b0860e container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 20:22:44.226732       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 20:22:44.233903       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714681364 cert, and key in /tmp/serving-cert-1347146758/serving-signer.crt, /tmp/serving-cert-1347146758/serving-signer.key\nI0502 20:22:44.521920       1 observer_polling.go:159] Starting file observer\nW0502 20:22:44.540624       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-210-180.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 20:22:44.540747       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 20:22:44.593703       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1347146758/tls.crt::/tmp/serving-cert-1347146758/tls.key"\nF0502 20:22:44.840803       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1786087523240382464 junit 43 hours ago
May 02 19:37:53.682 E ns/openshift-dns pod/node-resolver-vxpdr node/ip-10-0-201-100.us-east-2.compute.internal uid/bf218e86-2a9e-4a2e-a416-0c8159dbf0ef container/dns-node-resolver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 19:37:59.761 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-201-100.us-east-2.compute.internal node/ip-10-0-201-100.us-east-2.compute.internal uid/e621c0e3-e3fc-4570-bccf-2769f1801efc container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 19:37:57.921895       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 19:37:57.926307       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714678677 cert, and key in /tmp/serving-cert-1543716725/serving-signer.crt, /tmp/serving-cert-1543716725/serving-signer.key\nI0502 19:37:58.350332       1 observer_polling.go:159] Starting file observer\nW0502 19:37:58.371577       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-201-100.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 19:37:58.371771       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 19:37:58.389340       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1543716725/tls.crt::/tmp/serving-cert-1543716725/tls.key"\nF0502 19:37:58.896180       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 19:38:00.910 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-201-100.us-east-2.compute.internal node/ip-10-0-201-100.us-east-2.compute.internal uid/e621c0e3-e3fc-4570-bccf-2769f1801efc container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 19:37:57.921895       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 19:37:57.926307       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714678677 cert, and key in /tmp/serving-cert-1543716725/serving-signer.crt, /tmp/serving-cert-1543716725/serving-signer.key\nI0502 19:37:58.350332       1 observer_polling.go:159] Starting file observer\nW0502 19:37:58.371577       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-201-100.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 19:37:58.371771       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 19:37:58.389340       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1543716725/tls.crt::/tmp/serving-cert-1543716725/tls.key"\nF0502 19:37:58.896180       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

#1786066891488366592 junit 47 hours ago
# step graph.Run multi-stage test e2e-aws-ovn-upgrade-local-gateway - e2e-aws-ovn-upgrade-local-gateway-gather-audit-logs container test
m:6443/api?timeout=32s": dial tcp 34.218.182.150:6443: connect: connection refused
E0502 16:47:55.961361      30 memcache.go:238] couldn't get current server API group list: Get "https://api.ci-op-31rf9mvs-8db46.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s": dial tcp 34.218.182.150:6443: connect: connection refused

... 3 lines not shown

pull-ci-openshift-ovn-kubernetes-release-4.12-e2e-aws-ovn-upgrade-local-gateway (all) - 3 runs, 0% failed, 33% of runs match
#1786099851193225216 junit 42 hours ago
May 02 20:20:46.538 E ns/openshift-ovn-kubernetes pod/ovnkube-master-qv5m7 node/ip-10-0-175-230.us-west-1.compute.internal uid/ab7f7a43-14b1-4633-a761-7a0872b26f71 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 20:20:48.701 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-175-230.us-west-1.compute.internal node/ip-10-0-175-230.us-west-1.compute.internal uid/2b796c7c-578c-4fb7-a0a1-1fd5bc9ed58f container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 20:20:47.008010       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 20:20:47.022695       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714681247 cert, and key in /tmp/serving-cert-3519536386/serving-signer.crt, /tmp/serving-cert-3519536386/serving-signer.key\nI0502 20:20:47.372866       1 observer_polling.go:159] Starting file observer\nW0502 20:20:47.392230       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-175-230.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 20:20:47.392382       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1741-g09d7ddb-09d7ddbab\nI0502 20:20:47.395100       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3519536386/tls.crt::/tmp/serving-cert-3519536386/tls.key"\nF0502 20:20:47.588219       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 20:20:51.456 E ns/openshift-network-diagnostics pod/network-check-target-czlhz node/ip-10-0-175-230.us-west-1.compute.internal uid/c6a9199f-4def-4ed5-9d7a-90ac6f917d50 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

... 3 lines not shown

pull-ci-openshift-ovn-kubernetes-release-4.13-e2e-aws-ovn-upgrade (all) - 5 runs, 0% failed, 40% of runs match
#1786099705134977024 junit 42 hours ago
May 02 20:17:14.368 E ns/e2e-k8s-sig-apps-daemonset-upgrade-8802 pod/ds1-bg9pp node/ip-10-0-204-136.ec2.internal uid/ccfdaf2a-01b6-440c-a2ca-d7be4f0aa4ad container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 20:17:20.394 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-204-136.ec2.internal node/ip-10-0-204-136.ec2.internal uid/6704cb4f-856b-4dcd-bf55-2644a6e28daf container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 20:17:18.325705       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 20:17:18.326005       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714681038 cert, and key in /tmp/serving-cert-1144116515/serving-signer.crt, /tmp/serving-cert-1144116515/serving-signer.key\nI0502 20:17:18.808244       1 observer_polling.go:159] Starting file observer\nW0502 20:17:18.918005       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-204-136.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 20:17:18.918155       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 20:17:18.973943       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1144116515/tls.crt::/tmp/serving-cert-1144116515/tls.key"\nF0502 20:17:19.379318       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 20:17:21.371 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-204-136.ec2.internal node/ip-10-0-204-136.ec2.internal uid/6704cb4f-856b-4dcd-bf55-2644a6e28daf container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 20:17:18.325705       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 20:17:18.326005       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714681038 cert, and key in /tmp/serving-cert-1144116515/serving-signer.crt, /tmp/serving-cert-1144116515/serving-signer.key\nI0502 20:17:18.808244       1 observer_polling.go:159] Starting file observer\nW0502 20:17:18.918005       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-204-136.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 20:17:18.918155       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 20:17:18.973943       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1144116515/tls.crt::/tmp/serving-cert-1144116515/tls.key"\nF0502 20:17:19.379318       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 2 lines not shown

#1786087520635719680 junit 43 hours ago
May 02 19:37:08.672 E ns/openshift-dns pod/dns-default-k975r node/ip-10-0-200-1.us-east-2.compute.internal uid/e077a22f-1cdb-49b5-b9fb-654c551d5b96 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 19:37:09.653 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-200-1.us-east-2.compute.internal node/ip-10-0-200-1.us-east-2.compute.internal uid/08542da2-c508-4dd5-a59b-1535b21dfef7 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 19:37:07.321509       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 19:37:07.321965       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714678627 cert, and key in /tmp/serving-cert-3489608004/serving-signer.crt, /tmp/serving-cert-3489608004/serving-signer.key\nI0502 19:37:07.852203       1 observer_polling.go:159] Starting file observer\nW0502 19:37:07.869190       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-200-1.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 19:37:07.869318       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 19:37:07.876040       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3489608004/tls.crt::/tmp/serving-cert-3489608004/tls.key"\nF0502 19:37:08.618231       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 19:37:10.639 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-200-1.us-east-2.compute.internal node/ip-10-0-200-1.us-east-2.compute.internal uid/08542da2-c508-4dd5-a59b-1535b21dfef7 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 19:37:07.321509       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 19:37:07.321965       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714678627 cert, and key in /tmp/serving-cert-3489608004/serving-signer.crt, /tmp/serving-cert-3489608004/serving-signer.key\nI0502 19:37:07.852203       1 observer_polling.go:159] Starting file observer\nW0502 19:37:07.869190       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-200-1.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 19:37:07.869318       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1810-g4d70179-4d7017904\nI0502 19:37:07.876040       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3489608004/tls.crt::/tmp/serving-cert-3489608004/tls.key"\nF0502 19:37:08.618231       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

pull-ci-openshift-ovn-kubernetes-master-e2e-aws-live-migration-sdn-ovn-rollback (all) - 4 runs, 50% failed, 200% of failures match = 100% impact
#1786089498199724032junit43 hours ago
I0502 18:56:14.108918       1 observer_polling.go:159] Starting file observer
W0502 18:56:14.130608       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-138.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 18:56:14.130702       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786035980659068928junit47 hours ago
May 02 15:25:50.320 - 7s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-125-153.us-east-2.compute.internal" not ready since 2024-05-02 15:25:35 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)
May 02 15:25:58.142 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-125-153.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 15:25:49.010038       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 15:25:49.010337       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714663549 cert, and key in /tmp/serving-cert-3543491689/serving-signer.crt, /tmp/serving-cert-3543491689/serving-signer.key\nStaticPodsDegraded: I0502 15:25:49.528190       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 15:25:49.539106       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-125-153.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 15:25:49.539222       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 15:25:49.560902       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3543491689/tls.crt::/tmp/serving-cert-3543491689/tls.key"\nStaticPodsDegraded: F0502 15:25:49.875500       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 15:30:28.009 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-65-43.us-east-2.compute.internal" not ready since 2024-05-02 15:30:22 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)
#1786035980659068928junit47 hours ago
May 02 15:30:54.629 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-65-43.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-65-43.us-east-2.compute.internal_openshift-kube-apiserver(3e57a49edaea830e34462a1915dbf93b) (exception: Degraded=False is the happy case)
May 02 15:35:39.691 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-4-31.us-east-2.compute.internal" not ready since 2024-05-02 15:35:11 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-4-31.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 15:35:34.153718       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 15:35:34.154146       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714664134 cert, and key in /tmp/serving-cert-927372766/serving-signer.crt, /tmp/serving-cert-927372766/serving-signer.key\nStaticPodsDegraded: I0502 15:35:34.608756       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 15:35:34.625993       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-31.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 15:35:34.626244       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 15:35:34.650006       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-927372766/tls.crt::/tmp/serving-cert-927372766/tls.key"\nStaticPodsDegraded: F0502 15:35:35.216024       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)
May 02 15:35:39.691 - 4s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-4-31.us-east-2.compute.internal" not ready since 2024-05-02 15:35:11 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-4-31.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 15:35:34.153718       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 15:35:34.154146       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714664134 cert, and key in /tmp/serving-cert-927372766/serving-signer.crt, /tmp/serving-cert-927372766/serving-signer.key\nStaticPodsDegraded: I0502 15:35:34.608756       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 15:35:34.625993       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-31.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 15:35:34.626244       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 15:35:34.650006       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-927372766/tls.crt::/tmp/serving-cert-927372766/tls.key"\nStaticPodsDegraded: F0502 15:35:35.216024       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)

... 1 lines not shown

#1786029707100164096junit47 hours ago
I0502 15:14:52.057569       1 observer_polling.go:159] Starting file observer
W0502 15:14:52.081589       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-123-110.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 15:14:52.081704       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786024547217051648junit2 days ago
I0502 14:35:59.887381       1 observer_polling.go:159] Starting file observer
W0502 14:35:59.905069       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-208.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 14:35:59.905220       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

periodic-ci-openshift-release-master-okd-scos-4.14-e2e-aws-ovn (all) - 3 runs, 100% failed, 100% of failures match = 100% impact
#1786097127965855744junit44 hours ago
error: gather did not start for pod must-gather-f75sg: Get "https://api.ci-op-3j95rs0n-bd13c.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-w5wvl/pods/must-gather-f75sg": dial tcp 184.72.37.172:6443: connect: connection refused
{"component":"entrypoint","error":"wrapped process failed: exit status 1","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:84","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-05-02T19:09:42Z"}
#1786097127965855744junit44 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-must-gather container test
p-3j95rs0n-bd13c.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 54.183.45.138:6443: connect: connection refused
E0502 19:29:44.386835      40 memcache.go:265] couldn't get current server API group list: Get "https://api.ci-op-3j95rs0n-bd13c.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 54.183.45.138:6443: connect: connection refused

... 7 lines not shown

#1786077727762157568junit46 hours ago
# step graph.Run multi-stage test e2e-aws-ovn - e2e-aws-ovn-gather-must-gather container test
o error: Get "https://api.ci-op-ly6n7ijl-bd13c.aws-2.ci.openshift.org:6443/apis/monitoring.coreos.com/v1/prometheusrules": dial tcp 54.177.146.96:6443: connect: connection refused, skipping gathering alertmanagers.monitoring.coreos.com due to error: Get "https://api.ci-op-ly6n7ijl-bd13c.aws-2.ci.openshift.org:6443/apis/monitoring.coreos.com/v1/alertmanagers": dial tcp 54.177.146.96:6443: connect: connection refused, skipping gathering prometheuses.monitoring.coreos.com due to error: Get "https://api.ci-op-ly6n7ijl-bd13c.aws-2.ci.openshift.org:6443/apis/monitoring.coreos.com/v1/prometheuses": dial tcp 54.177.146.96:6443: connect: connection refused, skipping gathering thanosrulers.monitoring.coreos.com due to error: Get "https://api.ci-op-ly6n7ijl-bd13c.aws-2.ci.openshift.org:6443/apis/monitoring.coreos.com/v1/thanosrulers": dial tcp 54.177.146.96:6443: connect: connection refused, skipping gathering alertmanagerconfigs.monitoring.coreos.com due to error: Get "https://api.ci-op-ly6n7ijl-bd13c.aws-2.ci.openshift.org:6443/apis/monitoring.coreos.com/v1beta1/alertmanagerconfigs": dial tcp 54.177.146.96:6443: connect: connection refused, skipping gathering configs.samples.operator.openshift.io/cluster due to error: configs.samples.operator.openshift.io "cluster" not found, skipping gathering templates.template.openshift.io due to error: the server doesn't have a resource type "templates", skipping gathering imagestreams.image.openshift.io due to error: the server doesn't have a resource type "imagestreams"]
error: creating temp namespace: Post "https://api.ci-op-ly6n7ijl-bd13c.aws-2.ci.openshift.org:6443/api/v1/namespaces": dial tcp 54.177.146.96:6443: connect: connection refused

... 1 lines not shown

#1786046925154291712junit47 hours ago
must-gather      ] OUT pod for plug-in image registry.redhat.io/openshift4/ose-must-gather:latest created
[must-gather-lwnnn] OUT gather did not start: Get "https://api.ci-op-v8664sts-bd13c.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-28n6k/pods/must-gather-lwnnn": dial tcp 54.212.225.85:6443: connect: connection refused
Delete "https://api.ci-op-v8664sts-bd13c.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-must-gather-28n6k": dial tcp 54.212.225.85:6443: connect: connection refused

... 2 lines not shown

periodic-ci-openshift-release-master-nightly-4.13-upgrade-from-stable-4.12-e2e-aws-sdn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1786077153708740608junit44 hours ago
May 02 18:29:46.588 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-dpzl2 node/ip-10-0-222-179.us-west-2.compute.internal uid/81b53cf3-2815-4272-a98a-39486b74f710 container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 18:29:54.094 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-222-179.us-west-2.compute.internal node/ip-10-0-222-179.us-west-2.compute.internal uid/784d44f9-8180-4c5a-b5ea-15559bdfff8b container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 18:29:52.740809       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 18:29:52.756221       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714674592 cert, and key in /tmp/serving-cert-1255914579/serving-signer.crt, /tmp/serving-cert-1255914579/serving-signer.key\nI0502 18:29:53.314555       1 observer_polling.go:159] Starting file observer\nW0502 18:29:53.325785       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-222-179.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 18:29:53.325982       1 builder.go:271] check-endpoints version 4.13.0-202404250638.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0502 18:29:53.339268       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1255914579/tls.crt::/tmp/serving-cert-1255914579/tls.key"\nF0502 18:29:53.824707       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 18:29:55.129 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-222-179.us-west-2.compute.internal node/ip-10-0-222-179.us-west-2.compute.internal uid/784d44f9-8180-4c5a-b5ea-15559bdfff8b container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 18:29:52.740809       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 18:29:52.756221       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714674592 cert, and key in /tmp/serving-cert-1255914579/serving-signer.crt, /tmp/serving-cert-1255914579/serving-signer.key\nI0502 18:29:53.314555       1 observer_polling.go:159] Starting file observer\nW0502 18:29:53.325785       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-222-179.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 18:29:53.325982       1 builder.go:271] check-endpoints version 4.13.0-202404250638.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0502 18:29:53.339268       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1255914579/tls.crt::/tmp/serving-cert-1255914579/tls.key"\nF0502 18:29:53.824707       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 2 lines not shown

periodic-ci-openshift-release-master-nightly-4.13-e2e-aws-sdn-serial (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1786077274508890112junit45 hours ago
May 02 18:38:00.835 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-201-24.us-west-2.compute.internal node/ip-10-0-201-24.us-west-2.compute.internal uid/c4bf2b48-58ef-4bcd-88f7-88131d4b0133 container/kube-apiserver-cert-syncer reason/ContainerExit code/2 cause/Error rue} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0502 18:34:48.067452       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0502 18:34:48.073722       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0502 18:34:48.862936       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0502 18:34:48.863200       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
May 02 18:40:15.169 - 999ms E disruption/oauth-api connection/new reason/DisruptionBegan disruption/oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-88737qsl-2ac23.aws-2.ci.openshift.org:6443/apis/oauth.openshift.io/v1/oauthclients": dial tcp 52.13.87.190:6443: connect: connection refused
May 02 18:41:48.114 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-183-61.us-west-2.compute.internal node/ip-10-0-183-61.us-west-2.compute.internal uid/d9c824f4-a80d-4a54-a114-26bbc42796e6 container/kube-apiserver-cert-syncer reason/ContainerExit code/2 cause/Error rue} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0502 18:34:48.065623       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0502 18:34:48.065905       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0502 18:34:48.862524       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0502 18:34:48.862862       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
periodic-ci-openshift-release-master-nightly-4.13-e2e-aws-ovn-single-node-serial (all) - 1 runs, 0% failed, 100% of runs match
#1786077186424311808junit45 hours ago
May 02 18:32:28.674 E ns/openshift-machine-config-operator pod/machine-config-daemon-nhtwj node/ip-10-0-180-164.ec2.internal uid/22f64e69-9f1e-48bc-b789-b7a7b233a1b2 reason/Failed ():
May 02 18:35:19.473 - 12s   E disruption/cache-kube-api connection/new reason/DisruptionBegan disruption/cache-kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-qbz3mngz-00f44.aws-2.ci.openshift.org:6443/api/v1/namespaces/default?resourceVersion=0": dial tcp 52.71.27.49:6443: connect: connection refused
May 02 18:35:19.473 - 11s   E disruption/cache-openshift-api connection/new reason/DisruptionBegan disruption/cache-openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-qbz3mngz-00f44.aws-2.ci.openshift.org:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams?resourceVersion=0": dial tcp 52.71.27.49:6443: connect: connection refused

... 14 lines not shown

periodic-ci-openshift-release-master-nightly-4.13-e2e-aws-sdn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1786077053892694016junit45 hours ago
May 02 17:46:53.759 - 11s   E clusteroperator/etcd condition/Degraded status/True reason/ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:7449393866743223626 name:"ip-10-0-227-144.us-west-1.compute.internal" peerURLs:"https://10.0.227.144:2380" clientURLs:"https://10.0.227.144:2379"  Healthy:true Took:3.643626ms Error:<nil>} {Member:ID:12113112754047333905 name:"ip-10-0-183-27.us-west-1.compute.internal" peerURLs:"https://10.0.183.27:2380" clientURLs:"https://10.0.183.27:2379"  Healthy:true Took:1.646774ms Error:<nil>} {Member:ID:17177534091591107454 name:"ip-10-0-163-123.us-west-1.compute.internal" peerURLs:"https://10.0.163.123:2380" clientURLs:"https://10.0.163.123:2379"  Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.163.123:2379]: context deadline exceeded}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-163-123.us-west-1.compute.internal is unhealthy
May 02 17:46:55.556 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-163-123.us-west-1.compute.internal node/ip-10-0-163-123.us-west-1.compute.internal uid/b9f8f598-8bf3-4ca2-b837-2549041a3f24 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 17:46:54.109261       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 17:46:54.119425       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714672014 cert, and key in /tmp/serving-cert-3101684927/serving-signer.crt, /tmp/serving-cert-3101684927/serving-signer.key\nI0502 17:46:54.356664       1 observer_polling.go:159] Starting file observer\nW0502 17:46:54.478965       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-163-123.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 17:46:54.479155       1 builder.go:271] check-endpoints version 4.13.0-202404250638.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0502 17:46:54.496620       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3101684927/tls.crt::/tmp/serving-cert-3101684927/tls.key"\nF0502 17:46:54.895830       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 17:46:56.568 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-163-123.us-west-1.compute.internal node/ip-10-0-163-123.us-west-1.compute.internal uid/b9f8f598-8bf3-4ca2-b837-2549041a3f24 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 17:46:54.109261       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 17:46:54.119425       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714672014 cert, and key in /tmp/serving-cert-3101684927/serving-signer.crt, /tmp/serving-cert-3101684927/serving-signer.key\nI0502 17:46:54.356664       1 observer_polling.go:159] Starting file observer\nW0502 17:46:54.478965       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-163-123.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 17:46:54.479155       1 builder.go:271] check-endpoints version 4.13.0-202404250638.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0502 17:46:54.496620       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3101684927/tls.crt::/tmp/serving-cert-3101684927/tls.key"\nF0502 17:46:54.895830       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

pull-ci-openshift-cluster-monitoring-operator-master-e2e-aws-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1786062477977456640junit46 hours ago
May 02 17:58:25.117 - 19s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-119-159.us-west-2.compute.internal" not ready since 2024-05-02 17:58:20 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 17:58:44.160 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-119-159.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 17:58:36.099018       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 17:58:36.099224       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714672716 cert, and key in /tmp/serving-cert-544133798/serving-signer.crt, /tmp/serving-cert-544133798/serving-signer.key\nStaticPodsDegraded: I0502 17:58:36.341597       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 17:58:36.343800       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-119-159.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 17:58:36.343972       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 17:58:36.344747       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-544133798/tls.crt::/tmp/serving-cert-544133798/tls.key"\nStaticPodsDegraded: F0502 17:58:36.924182       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 18:03:37.966 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-121-164.us-west-2.compute.internal" not ready since 2024-05-02 18:01:37 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786062477977456640junit46 hours ago
E0502 16:47:58.924287       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-bvtvmp81-a136b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0502 16:48:45.416049       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-bvtvmp81-a136b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.11.102:6443: connect: connection refused
I0502 16:49:08.929228       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
pull-ci-openshift-ovn-kubernetes-release-4.14-e2e-aws-ovn-upgrade-local-gateway (all) - 3 runs, 33% failed, 100% of failures match = 33% impact
#1786048003652456448junit46 hours ago
May 02 16:59:48.306 - 28s   E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-63-238.ec2.internal" not ready since 2024-05-02 16:57:48 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
May 02 17:05:23.337 - 8s    E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-126-228.ec2.internal" not ready since 2024-05-02 17:04:58 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-126-228.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 17:05:20.764277       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 17:05:20.764690       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714669520 cert, and key in /tmp/serving-cert-666240772/serving-signer.crt, /tmp/serving-cert-666240772/serving-signer.key\nStaticPodsDegraded: I0502 17:05:21.349580       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 17:05:21.361125       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-126-228.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 17:05:21.361236       1 builder.go:271] check-endpoints version v4.0.0-alpha.0-1890-g2eab0f9-2eab0f9e2\nStaticPodsDegraded: I0502 17:05:21.378830       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-666240772/tls.crt::/tmp/serving-cert-666240772/tls.key"\nStaticPodsDegraded: F0502 17:05:21.660515       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
periodic-ci-openshift-release-master-nightly-4.13-e2e-aws-sdn (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1786077239381594112junit46 hours ago
StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-160-124.ec2.internal container "kube-controller-manager" is terminated: Completed:
StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-160-124.ec2.internal container "kube-controller-manager-cert-syncer" is terminated: Error: t:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: W0502 17:20:34.126604       1 reflector.go:424] k8s.io/client-go@v0.26.10/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused

... 6 lines not shown

periodic-ci-openshift-release-master-ci-4.15-e2e-aws-ovn-upgrade (all) - 4 runs, 0% failed, 25% of runs match
#1786049478659149824junit46 hours ago
May 02 16:34:56.698 - 16s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-102-48.us-west-1.compute.internal" not ready since 2024-05-02 16:34:49 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:35:13.467 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-102-48.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:35:01.918589       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:35:01.918917       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714667701 cert, and key in /tmp/serving-cert-2342177876/serving-signer.crt, /tmp/serving-cert-2342177876/serving-signer.key\nStaticPodsDegraded: I0502 16:35:02.490246       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:35:02.512305       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-102-48.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:35:02.512405       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0502 16:35:02.533988       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2342177876/tls.crt::/tmp/serving-cert-2342177876/tls.key"\nStaticPodsDegraded: F0502 16:35:02.793938       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
pull-ci-openshift-ovn-kubernetes-master-4.16-upgrade-from-stable-4.15-e2e-aws-ovn-upgrade (all) - 2 runs, 0% failed, 100% of runs match
#1786035980357079040junit46 hours ago
I0502 16:19:49.857768       1 observer_polling.go:159] Starting file observer
W0502 16:19:49.875468       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-112-128.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:19:49.875584       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786024525150818304junit47 hours ago
May 02 15:17:14.373 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-75-157.us-east-2.compute.internal" not ready since 2024-05-02 15:16:53 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 15:17:25.062 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-75-157.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 15:17:18.594073       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 15:17:18.594286       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714663038 cert, and key in /tmp/serving-cert-3857010493/serving-signer.crt, /tmp/serving-cert-3857010493/serving-signer.key\nStaticPodsDegraded: I0502 15:17:19.063349       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 15:17:19.065099       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-75-157.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 15:17:19.065200       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 15:17:19.065815       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3857010493/tls.crt::/tmp/serving-cert-3857010493/tls.key"\nStaticPodsDegraded: F0502 15:17:19.170506       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 15:23:15.508 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-94-66.us-east-2.compute.internal" not ready since 2024-05-02 15:21:15 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786024525150818304junit47 hours ago
I0502 15:10:48.253632       1 observer_polling.go:159] Starting file observer
W0502 15:10:48.273883       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-45-79.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 15:10:48.274020       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
pull-ci-openshift-ovn-kubernetes-master-4.16-upgrade-from-stable-4.15-local-gateway-e2e-aws-ovn-upgrade (all) - 3 runs, 0% failed, 100% of runs match
#1786035980520656896junit46 hours ago
I0502 16:09:30.024067       1 observer_polling.go:159] Starting file observer
W0502 16:09:30.037213       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-126-200.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 16:09:30.037351       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786029706953363456junit47 hours ago
May 02 15:27:41.486 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-101-92.us-east-2.compute.internal" not ready since 2024-05-02 15:25:41 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 15:28:13.938 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-101-92.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 15:28:05.289382       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 15:28:05.289633       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714663685 cert, and key in /tmp/serving-cert-1291996611/serving-signer.crt, /tmp/serving-cert-1291996611/serving-signer.key\nStaticPodsDegraded: I0502 15:28:05.692092       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 15:28:05.702579       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-92.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 15:28:05.702709       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 15:28:05.715618       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1291996611/tls.crt::/tmp/serving-cert-1291996611/tls.key"\nStaticPodsDegraded: F0502 15:28:06.022597       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 15:34:28.485 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-80-75.us-east-2.compute.internal" not ready since 2024-05-02 15:34:23 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786029706953363456junit47 hours ago
I0502 15:28:05.692092       1 observer_polling.go:159] Starting file observer
W0502 15:28:05.702579       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-92.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 15:28:05.702709       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6
#1786024536324444160junit47 hours ago
I0502 15:25:34.099058       1 observer_polling.go:159] Starting file observer
W0502 15:25:34.112188       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-114-156.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 15:25:34.112327       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

periodic-ci-openshift-release-master-nightly-4.12-upgrade-from-stable-4.11-e2e-aws-sdn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1786041071344553984junit46 hours ago
May 02 16:05:34.517 E ns/openshift-sdn pod/sdn-khzv8 node/ip-10-0-160-111.us-west-1.compute.internal uid/d4ce8e78-1042-4de7-9662-29ea6e7dea77 container/sdn reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 16:05:39.331 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-160-111.us-west-1.compute.internal node/ip-10-0-160-111.us-west-1.compute.internal uid/585990e8-d5c6-422d-9799-cdf757c4eee1 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 16:05:38.091925       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 16:05:38.098814       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714665938 cert, and key in /tmp/serving-cert-1704287578/serving-signer.crt, /tmp/serving-cert-1704287578/serving-signer.key\nI0502 16:05:38.626757       1 observer_polling.go:159] Starting file observer\nW0502 16:05:38.640765       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-160-111.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 16:05:38.640964       1 builder.go:271] check-endpoints version 4.12.0-202404242136.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0502 16:05:38.653636       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1704287578/tls.crt::/tmp/serving-cert-1704287578/tls.key"\nF0502 16:05:38.948888       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 16:05:42.652 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-160-111.us-west-1.compute.internal node/ip-10-0-160-111.us-west-1.compute.internal uid/585990e8-d5c6-422d-9799-cdf757c4eee1 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 16:05:38.091925       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 16:05:38.098814       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714665938 cert, and key in /tmp/serving-cert-1704287578/serving-signer.crt, /tmp/serving-cert-1704287578/serving-signer.key\nI0502 16:05:38.626757       1 observer_polling.go:159] Starting file observer\nW0502 16:05:38.640765       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-160-111.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 16:05:38.640964       1 builder.go:271] check-endpoints version 4.12.0-202404242136.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0502 16:05:38.653636       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1704287578/tls.crt::/tmp/serving-cert-1704287578/tls.key"\nF0502 16:05:38.948888       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

periodic-ci-openshift-release-master-nightly-4.15-e2e-aws-sdn-upgrade (all) - 4 runs, 0% failed, 25% of runs match
#1786050153979842560junit46 hours ago
May 02 16:12:25.239 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-26-163.ec2.internal" not ready since 2024-05-02 16:12:15 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:12:31.492 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-26-163.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:12:28.490509       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:12:28.490799       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666348 cert, and key in /tmp/serving-cert-3817335541/serving-signer.crt, /tmp/serving-cert-3817335541/serving-signer.key\nStaticPodsDegraded: I0502 16:12:29.062521       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:12:29.076276       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-26-163.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:12:29.076394       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0502 16:12:29.090268       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3817335541/tls.crt::/tmp/serving-cert-3817335541/tls.key"\nStaticPodsDegraded: F0502 16:12:29.303131       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 16:17:58.243 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-27-11.ec2.internal" not ready since 2024-05-02 16:17:38 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

periodic-ci-openshift-multiarch-master-nightly-4.12-upgrade-from-stable-4.11-ocp-e2e-aws-sdn-arm64 (all) - 1 runs, 0% failed, 100% of runs match
#1786040028460224512junit47 hours ago
May 02 16:09:06.335 E ns/openshift-multus pod/network-metrics-daemon-f5vs6 node/ip-10-0-131-118.us-west-2.compute.internal uid/be095f5e-a600-4099-9538-045b7bc73876 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 16:09:08.310 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-118.us-west-2.compute.internal node/ip-10-0-131-118.us-west-2.compute.internal uid/66ed1951-da01-4fd9-bce2-52661cbacdd4 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 16:09:05.548365       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 16:09:05.548647       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666145 cert, and key in /tmp/serving-cert-3515957423/serving-signer.crt, /tmp/serving-cert-3515957423/serving-signer.key\nI0502 16:09:06.507937       1 observer_polling.go:159] Starting file observer\nW0502 16:09:06.521702       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-131-118.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 16:09:06.521814       1 builder.go:271] check-endpoints version 4.12.0-202404242136.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0502 16:09:06.522335       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3515957423/tls.crt::/tmp/serving-cert-3515957423/tls.key"\nF0502 16:09:07.157566       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 16:09:12.312 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-118.us-west-2.compute.internal node/ip-10-0-131-118.us-west-2.compute.internal uid/66ed1951-da01-4fd9-bce2-52661cbacdd4 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 16:09:05.548365       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 16:09:05.548647       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666145 cert, and key in /tmp/serving-cert-3515957423/serving-signer.crt, /tmp/serving-cert-3515957423/serving-signer.key\nI0502 16:09:06.507937       1 observer_polling.go:159] Starting file observer\nW0502 16:09:06.521702       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-131-118.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 16:09:06.521814       1 builder.go:271] check-endpoints version 4.12.0-202404242136.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0502 16:09:06.522335       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3515957423/tls.crt::/tmp/serving-cert-3515957423/tls.key"\nF0502 16:09:07.157566       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n

... 1 lines not shown

pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn-upgrade (all) - 3 runs, 0% failed, 100% of runs match
#1786036003870347264junit46 hours ago
May 02 15:57:19.834 - 39s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-62-153.ec2.internal" not ready since 2024-05-02 15:55:19 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 15:57:59.395 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-62-153.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 15:57:51.301390       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 15:57:51.301632       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714665471 cert, and key in /tmp/serving-cert-2591674833/serving-signer.crt, /tmp/serving-cert-2591674833/serving-signer.key\nStaticPodsDegraded: I0502 15:57:51.455830       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 15:57:51.457199       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-62-153.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 15:57:51.457337       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 15:57:51.458217       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2591674833/tls.crt::/tmp/serving-cert-2591674833/tls.key"\nStaticPodsDegraded: F0502 15:57:51.856747       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 16:03:12.720 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-120-67.ec2.internal" not ready since 2024-05-02 16:03:02 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786036003870347264junit46 hours ago
May 02 16:08:55.961 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-73-210.ec2.internal" not ready since 2024-05-02 16:08:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:09:10.578 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-73-210.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:09:02.714925       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:09:02.715143       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666142 cert, and key in /tmp/serving-cert-614803688/serving-signer.crt, /tmp/serving-cert-614803688/serving-signer.key\nStaticPodsDegraded: I0502 16:09:02.983633       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:09:02.985089       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-73-210.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:09:02.985199       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 16:09:02.985733       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-614803688/tls.crt::/tmp/serving-cert-614803688/tls.key"\nStaticPodsDegraded: F0502 16:09:03.264191       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786029729468387328junit47 hours ago
I0502 15:55:33.000200       1 observer_polling.go:159] Starting file observer
W0502 15:55:33.010582       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-236.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 15:55:33.010741       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786024566519238656junit47 hours ago
I0502 14:07:15.134498       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0502 14:07:24.795543       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-v1ckqpnj-e1d2a.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.94.50:6443: connect: connection refused
E0502 14:14:23.209009       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-v1ckqpnj-e1d2a.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.94.50:6443: connect: connection refused

... 1 lines not shown

pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn-upgrade-local-gateway (all) - 3 runs, 0% failed, 100% of runs match
#1786029731984969728junit47 hours ago
May 02 15:53:34.224 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-28-82.us-west-2.compute.internal" not ready since 2024-05-02 15:53:16 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 15:53:49.690 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-28-82.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 15:53:39.392731       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 15:53:39.393095       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714665219 cert, and key in /tmp/serving-cert-2425579833/serving-signer.crt, /tmp/serving-cert-2425579833/serving-signer.key\nStaticPodsDegraded: I0502 15:53:40.028473       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 15:53:40.035806       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-28-82.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 15:53:40.035941       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 15:53:40.045518       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2425579833/tls.crt::/tmp/serving-cert-2425579833/tls.key"\nStaticPodsDegraded: F0502 15:53:40.788448       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 15:59:01.215 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-61-203.us-west-2.compute.internal" not ready since 2024-05-02 15:58:39 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786036007343230976junit47 hours ago
May 02 15:58:10.203 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-14-202.ec2.internal" not ready since 2024-05-02 15:58:09 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 15:58:43.001 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-14-202.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 15:58:34.699723       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 15:58:34.699961       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714665514 cert, and key in /tmp/serving-cert-1397564085/serving-signer.crt, /tmp/serving-cert-1397564085/serving-signer.key\nStaticPodsDegraded: I0502 15:58:35.304910       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 15:58:35.306878       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-14-202.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 15:58:35.307006       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 15:58:35.307796       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1397564085/tls.crt::/tmp/serving-cert-1397564085/tls.key"\nStaticPodsDegraded: F0502 15:58:35.481501       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 16:03:37.193 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-65-185.ec2.internal" not ready since 2024-05-02 16:01:37 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786036007343230976junit47 hours ago
May 02 16:09:19.885 - 17s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-42-106.ec2.internal" not ready since 2024-05-02 16:09:14 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:09:37.113 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-42-106.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:09:27.991835       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:09:27.992087       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666167 cert, and key in /tmp/serving-cert-1108277167/serving-signer.crt, /tmp/serving-cert-1108277167/serving-signer.key\nStaticPodsDegraded: I0502 16:09:28.277425       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:09:28.278879       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-42-106.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:09:28.279027       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6\nStaticPodsDegraded: I0502 16:09:28.279872       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1108277167/tls.crt::/tmp/serving-cert-1108277167/tls.key"\nStaticPodsDegraded: F0502 16:09:28.728572       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786024569040015360junit47 hours ago
I0502 15:20:55.315212       1 observer_polling.go:159] Starting file observer
W0502 15:20:55.328223       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-45-70.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 15:20:55.328343       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

release-openshift-origin-installer-e2e-aws-upgrade (all) - 16 runs, 19% failed, 300% of failures match = 56% impact
#1786049370454495232junit47 hours ago
May 02 16:48:17.445 - 17s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-106-224.us-west-1.compute.internal" not ready since 2024-05-02 16:47:59 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:48:35.402 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-106-224.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:48:23.490491       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:48:23.490852       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714668503 cert, and key in /tmp/serving-cert-1456269884/serving-signer.crt, /tmp/serving-cert-1456269884/serving-signer.key\nStaticPodsDegraded: I0502 16:48:24.292832       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:48:24.311545       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-106-224.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:48:24.311789       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0502 16:48:24.338288       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1456269884/tls.crt::/tmp/serving-cert-1456269884/tls.key"\nStaticPodsDegraded: F0502 16:48:24.594922       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 16:54:27.068 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-57-37.us-west-1.compute.internal" not ready since 2024-05-02 16:54:18 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786046319035420672junit47 hours ago
May 02 16:35:09.030 - 1s    E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-169-80.us-east-2.compute.internal" not ready since 2024-05-02 16:34:45 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)
May 02 16:40:42.037 - 6s    E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-223-20.us-east-2.compute.internal" not ready since 2024-05-02 16:40:24 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-223-20.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:40:35.152275       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:40:35.152890       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714668035 cert, and key in /tmp/serving-cert-562377471/serving-signer.crt, /tmp/serving-cert-562377471/serving-signer.key\nStaticPodsDegraded: I0502 16:40:35.631134       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:40:35.646082       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-223-20.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:40:35.646273       1 builder.go:271] check-endpoints version 4.14.0-202404250639.p0.g2eab0f9.assembly.stream.el8-2eab0f9-2eab0f9e27db4399bf8885d62ca338c3d02fdd35\nStaticPodsDegraded: I0502 16:40:35.670526       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-562377471/tls.crt::/tmp/serving-cert-562377471/tls.key"\nStaticPodsDegraded: F0502 16:40:36.338551       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
May 02 16:46:21.575 - 1s    E clusteroperator/kube-apiserver condition/Degraded status/True reason/NodeControllerDegraded: The master nodes not ready: node "ip-10-0-154-150.us-east-2.compute.internal" not ready since 2024-05-02 16:46:03 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-154-150.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:46:14.558525       1 cmd.go:237] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:46:14.558861       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714668374 cert, and key in /tmp/serving-cert-3920045398/serving-signer.crt, /tmp/serving-cert-3920045398/serving-signer.key\nStaticPodsDegraded: I0502 16:46:14.794636       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:46:14.816289       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-154-150.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:46:14.816444       1 builder.go:271] check-endpoints version 4.14.0-202404250639.p0.g2eab0f9.assembly.stream.el8-2eab0f9-2eab0f9e27db4399bf8885d62ca338c3d02fdd35\nStaticPodsDegraded: I0502 16:46:14.825294       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3920045398/tls.crt::/tmp/serving-cert-3920045398/tls.key"\nStaticPodsDegraded: F0502 16:46:15.137585       1 cmd.go:162] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:

... 1 lines not shown

#1786049388053794816junit47 hours ago
May 02 17:00:02.099 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-124-13.us-west-1.compute.internal" not ready since 2024-05-02 16:59:53 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 17:00:17.022 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-124-13.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 17:00:06.171876       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 17:00:06.172144       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714669206 cert, and key in /tmp/serving-cert-2312687311/serving-signer.crt, /tmp/serving-cert-2312687311/serving-signer.key\nStaticPodsDegraded: I0502 17:00:06.878385       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 17:00:06.881890       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-124-13.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 17:00:06.882000       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0502 17:00:06.899913       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2312687311/tls.crt::/tmp/serving-cert-2312687311/tls.key"\nStaticPodsDegraded: F0502 17:00:07.115184       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786049394747904000junit47 hours ago
May 02 16:41:58.879 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-9-94.us-west-2.compute.internal" not ready since 2024-05-02 16:39:58 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:42:29.943 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-9-94.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:42:22.159014       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:42:22.159216       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714668142 cert, and key in /tmp/serving-cert-2581102084/serving-signer.crt, /tmp/serving-cert-2581102084/serving-signer.key\nStaticPodsDegraded: I0502 16:42:22.469523       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:42:22.470806       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-9-94.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:42:22.470951       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0502 16:42:22.471494       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2581102084/tls.crt::/tmp/serving-cert-2581102084/tls.key"\nStaticPodsDegraded: F0502 16:42:22.787303       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 16:47:54.889 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-20-107.us-west-2.compute.internal" not ready since 2024-05-02 16:45:54 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786049365383581696junit47 hours ago
May 02 16:33:27.524 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-22-236.ec2.internal" not ready since 2024-05-02 16:31:27 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:34:01.083 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-22-236.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:33:52.707565       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:33:52.707928       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714667632 cert, and key in /tmp/serving-cert-1942009137/serving-signer.crt, /tmp/serving-cert-1942009137/serving-signer.key\nStaticPodsDegraded: I0502 16:33:53.297839       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:33:53.311795       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-22-236.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:33:53.311965       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0502 16:33:53.333460       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1942009137/tls.crt::/tmp/serving-cert-1942009137/tls.key"\nStaticPodsDegraded: F0502 16:33:53.760195       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 16:39:14.410 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-56-58.ec2.internal" not ready since 2024-05-02 16:37:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786049375445716992junit47 hours ago
May 02 16:37:49.209 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-2-48.us-west-1.compute.internal" not ready since 2024-05-02 16:37:40 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:38:04.720 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-2-48.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:37:53.368857       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:37:53.383177       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714667873 cert, and key in /tmp/serving-cert-3488604597/serving-signer.crt, /tmp/serving-cert-3488604597/serving-signer.key\nStaticPodsDegraded: I0502 16:37:53.853268       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:37:53.871186       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-2-48.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:37:53.871324       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0502 16:37:53.893400       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3488604597/tls.crt::/tmp/serving-cert-3488604597/tls.key"\nStaticPodsDegraded: F0502 16:37:54.310045       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 16:43:20.804 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-35-77.us-west-1.compute.internal" not ready since 2024-05-02 16:41:20 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786049628819427328junit47 hours ago
May 02 16:38:07.824 - 27s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-29-187.us-east-2.compute.internal" not ready since 2024-05-02 16:36:07 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:38:35.074 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-29-187.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:38:22.227402       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:38:22.228731       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714667902 cert, and key in /tmp/serving-cert-813765713/serving-signer.crt, /tmp/serving-cert-813765713/serving-signer.key\nStaticPodsDegraded: I0502 16:38:22.935442       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:38:22.942024       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-29-187.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:38:22.942149       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0502 16:38:22.962914       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-813765713/tls.crt::/tmp/serving-cert-813765713/tls.key"\nStaticPodsDegraded: F0502 16:38:23.282008       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 16:43:55.462 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-56-126.us-east-2.compute.internal" not ready since 2024-05-02 16:43:33 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786049381334519808junit47 hours ago
May 02 16:11:41.288 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-28-195.us-east-2.compute.internal" not ready since 2024-05-02 16:09:41 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:12:12.460 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-28-195.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:12:05.149619       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:12:05.150054       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666325 cert, and key in /tmp/serving-cert-4106346790/serving-signer.crt, /tmp/serving-cert-4106346790/serving-signer.key\nStaticPodsDegraded: I0502 16:12:05.396182       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:12:05.399980       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-28-195.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:12:05.400313       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0502 16:12:05.401454       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4106346790/tls.crt::/tmp/serving-cert-4106346790/tls.key"\nStaticPodsDegraded: F0502 16:12:05.734748       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 16:17:08.288 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-122-76.us-east-2.compute.internal" not ready since 2024-05-02 16:15:08 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786049381334519808 junit 47 hours ago
May 02 16:22:52.678 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-12-184.us-east-2.compute.internal" not ready since 2024-05-02 16:22:30 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:23:03.068 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-12-184.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:22:54.788293       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:22:54.788440       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666974 cert, and key in /tmp/serving-cert-2146230724/serving-signer.crt, /tmp/serving-cert-2146230724/serving-signer.key\nStaticPodsDegraded: I0502 16:22:55.193114       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:22:55.194427       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-12-184.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:22:55.194544       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0502 16:22:55.195117       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2146230724/tls.crt::/tmp/serving-cert-2146230724/tls.key"\nStaticPodsDegraded: F0502 16:22:55.378425       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786049403140706304 junit 47 hours ago
May 02 16:14:28.491 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-43-154.us-east-2.compute.internal" not ready since 2024-05-02 16:14:18 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:14:40.736 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-43-154.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:14:32.050028       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:14:32.050222       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666472 cert, and key in /tmp/serving-cert-3811525434/serving-signer.crt, /tmp/serving-cert-3811525434/serving-signer.key\nStaticPodsDegraded: I0502 16:14:32.292477       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:14:32.294206       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-43-154.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:14:32.294413       1 builder.go:299] check-endpoints version 4.15.0-202404161612.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0502 16:14:32.295426       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3811525434/tls.crt::/tmp/serving-cert-3811525434/tls.key"\nStaticPodsDegraded: F0502 16:14:32.434728       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 02 16:19:40.502 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-29-166.us-east-2.compute.internal" not ready since 2024-05-02 16:17:40 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown
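The monitor-event lines in these results follow a fixed `clusteroperator/<name> condition/<cond> reason/<reason> status/<bool>` locator pattern. A minimal parser sketch for pulling those fields out of one such line (the `parse_event` helper and its regex are illustrative, not part of the CI tooling):

```python
import re

# Match lines like:
#   May 02 16:11:41.288 - 31s   E clusteroperator/kube-apiserver
#     condition/Degraded reason/NodeController_MasterNodesReady status/True ...
LINE_RE = re.compile(
    r"(?P<ts>\w+ \d+ [\d:.]+)"        # timestamp, e.g. "May 02 16:17:08.288"
    r"(?: - \S+)?"                     # optional duration, e.g. " - 31s"
    r"\s+(?P<level>[EW])\s+"          # E (error) or W (warning)
    r"clusteroperator/(?P<operator>\S+)\s+"
    r"condition/(?P<condition>\S+)\s+"
    r"reason/(?P<reason>\S+)\s+"
    r"status/(?P<status>\S+)"
)

def parse_event(line: str):
    """Return the locator fields of a monitor-event line, or None."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

sample = ("May 02 16:17:08.288 E clusteroperator/kube-apiserver "
          "condition/Degraded reason/NodeController_MasterNodesReady "
          "status/True NodeControllerDegraded: The master nodes not ready")
event = parse_event(sample)
```

Grouping parsed events by `(operator, condition, reason)` is one way to see that nearly every hit in this search is the same `NodeController_MasterNodesReady` blip during node reboots.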

pull-ci-openshift-oauth-apiserver-release-4.15-e2e-aws-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1786046464162533376 junit 47 hours ago
May 02 16:30:41.310 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-108-61.ec2.internal" not ready since 2024-05-02 16:30:34 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 02 16:30:54.967 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-108-61.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0502 16:30:47.440855       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0502 16:30:47.441391       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714667447 cert, and key in /tmp/serving-cert-3473115549/serving-signer.crt, /tmp/serving-cert-3473115549/serving-signer.key\nStaticPodsDegraded: I0502 16:30:47.769983       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0502 16:30:47.786487       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-61.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0502 16:30:47.786666       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1929-gf5c5a60-f5c5a609f\nStaticPodsDegraded: I0502 16:30:47.798856       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3473115549/tls.crt::/tmp/serving-cert-3473115549/tls.key"\nStaticPodsDegraded: F0502 16:30:48.330492       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 02 16:36:12.064 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-112-125.ec2.internal" not ready since 2024-05-02 16:36:10 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
periodic-ci-openshift-multiarch-master-nightly-4.12-upgrade-from-stable-4.11-ocp-e2e-aws-heterogeneous-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1786041125044228096 junit 47 hours ago
May 02 16:18:15.562 E ns/e2e-k8s-sig-apps-daemonset-upgrade-4970 pod/ds1-lvf9h node/ip-10-0-142-107.us-west-2.compute.internal uid/fa893933-e082-46a3-9b16-ec5aa2d49e13 container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 16:18:16.521 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-107.us-west-2.compute.internal node/ip-10-0-142-107.us-west-2.compute.internal uid/8ffc199b-ebe2-46cc-a74c-6f9b40bacecd container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 16:18:15.209292       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 16:18:15.209739       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666695 cert, and key in /tmp/serving-cert-1662299599/serving-signer.crt, /tmp/serving-cert-1662299599/serving-signer.key\nI0502 16:18:15.835611       1 observer_polling.go:159] Starting file observer\nW0502 16:18:15.851780       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-142-107.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 16:18:15.851889       1 builder.go:271] check-endpoints version 4.12.0-202404242136.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0502 16:18:15.876453       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1662299599/tls.crt::/tmp/serving-cert-1662299599/tls.key"\nF0502 16:18:16.229221       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 16:18:23.504 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator etcd is degraded
#1786041125044228096 junit 47 hours ago
May 02 16:18:23.603 E ns/openshift-multus pod/network-metrics-daemon-9wxnm node/ip-10-0-142-107.us-west-2.compute.internal uid/a78f9176-1e3b-41ad-ac01-cf83793445f1 container/network-metrics-daemon reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 02 16:18:23.633 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-107.us-west-2.compute.internal node/ip-10-0-142-107.us-west-2.compute.internal uid/8ffc199b-ebe2-46cc-a74c-6f9b40bacecd container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0502 16:18:15.209292       1 cmd.go:216] Using insecure, self-signed certificates\nI0502 16:18:15.209739       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714666695 cert, and key in /tmp/serving-cert-1662299599/serving-signer.crt, /tmp/serving-cert-1662299599/serving-signer.key\nI0502 16:18:15.835611       1 observer_polling.go:159] Starting file observer\nW0502 16:18:15.851780       1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-142-107.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0502 16:18:15.851889       1 builder.go:271] check-endpoints version 4.12.0-202404242136.p0.g09d7ddb.assembly.stream.el8-09d7ddb-09d7ddbaba9eb5715313e716476e6a33848d045c\nI0502 16:18:15.876453       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1662299599/tls.crt::/tmp/serving-cert-1662299599/tls.key"\nF0502 16:18:16.229221       1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n
May 02 16:18:25.631 E ns/openshift-e2e-loki pod/loki-promtail-4k8km node/ip-10-0-142-107.us-west-2.compute.internal uid/0b8b115e-bc0f-43fc-9d64-d560d84ed630 container/promtail reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
periodic-ci-openshift-release-master-nightly-4.12-e2e-aws-ovn-single-node-serial (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786041023537876992 junit 47 hours ago
May 02 15:51:25.058 E ns/e2e-volumelimits-244-4120 pod/csi-hostpathplugin-0 node/ip-10-0-153-55.us-west-1.compute.internal uid/2164a70f-3115-434d-ad48-c55b528ad815 container/hostpath reason/ContainerExit code/2 cause/Error
May 02 16:02:39.335 - 14s   E disruption/kube-api connection/new reason/DisruptionBegan disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-dvpbh6j0-82233.aws-2.ci.openshift.org:6443/api/v1/namespaces/default": dial tcp 54.215.120.217:6443: connect: connection refused
May 02 16:02:39.335 - 14s   E disruption/cache-kube-api connection/new reason/DisruptionBegan disruption/cache-kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-dvpbh6j0-82233.aws-2.ci.openshift.org:6443/api/v1/namespaces/default?resourceVersion=0": dial tcp 54.215.120.217:6443: connect: connection refused

... 14 lines not shown

pull-ci-openshift-ovn-kubernetes-release-4.13-e2e-aws-ovn-local-gateway (all) - 5 runs, 0% failed, 20% of runs match
#1786055883935977472 junit 2 days ago
# step graph.Run multi-stage test e2e-aws-ovn-local-gateway - e2e-aws-ovn-local-gateway-gather-audit-logs container test
rver API group list: Get "https://api.ci-op-j0fxstgp-ca1df.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 34.212.113.4:6443: connect: connection refused
E0502 16:16:36.436434      31 memcache.go:238] couldn't get current server API group list: Get "https://api.ci-op-j0fxstgp-ca1df.aws-2.ci.openshift.org:6443/api?timeout=32s": dial tcp 34.212.113.4:6443: connect: connection refused

... 3 lines not shown

pull-ci-openshift-ovn-kubernetes-master-e2e-aws-live-migration-sdn-ovn (all) - 2 runs, 0% failed, 100% of runs match
#1786035980612931584 junit 2 days ago
I0502 15:26:28.596520       1 observer_polling.go:159] Starting file observer
W0502 15:26:28.622869       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-28-50.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0502 15:26:28.622980       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

#1786029707041443840 junit 2 days ago
I0502 15:18:19.474066       1 observer_polling.go:159] Starting file observer
W0502 15:18:19.502231       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-124-8.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0502 15:18:19.502410       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1975-g1d9a2d0-1d9a2d0d6

... 3 lines not shown

pull-ci-openshift-etcd-openshift-4.16-e2e-aws-etcd-recovery (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1786036895071866880 junit 2 days ago
W0502 14:41:01.151400       1 cloud_config_sync_controller.go:100] managed cloud-config is not found, falling back to infrastructure config
E0502 14:49:37.785023       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager-operator/cluster-cloud-config-sync-leader: Get "https://api-int.ci-op-vrhvxqs8-20dd2.origin-ci-int-aws.dev.rhcloud.com:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager-operator/leases/cluster-cloud-config-sync-leader": dial tcp 10.0.32.130:6443: connect: connection refused
E0502 15:26:58.500762       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager-operator/cluster-cloud-config-sync-leader: rpc error: code = Unavailable desc = error reading from server: EOF

Found in 19.35% of runs (59.32% of failures) across 3339 total runs and 745 jobs (32.61% failed)
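When every matching run is also a failing run (which need not hold in general, since some jobs above report "0% failed, 100% of runs match"), the three percentages in a summary line are linked: share-of-runs-matching equals failure-rate times share-of-failures-matching. A quick sanity check on the figures above, under that assumption:

```python
# Values taken from the summary line above.
failed_rate = 0.3261        # 32.61% of runs failed
match_in_failures = 0.5932  # 59.32% of failures match the search
match_in_runs = 0.1935      # 19.35% of all runs match the search

# If all matches are failures:
#   matches/runs == (failures/runs) * (matches/failures)
derived = failed_rate * match_in_failures
consistent = abs(derived - match_in_runs) < 0.001
```

Here `derived` comes out to roughly 0.1934, so the reported figures are mutually consistent to rounding.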