Job:
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-aws-uwm (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1782686622811164672 build-log (2 days ago)
Apr 23 09:18:14.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/UnhealthyEtcdMember unhealthy members: ip-10-0-226-139.us-west-2.compute.internal (2 times)
Apr 23 09:18:15.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9fd240107948c0eb6cdff3955f45d26328a1cc1e630b58d43fb548830536102,registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:aaf64581465fcab92dd584a85f6bd826af868e9f60a628be91dcf53688e1793c (45 times)
Apr 23 09:18:16.883 - 59s   I alert/PodDisruptionBudgetAtLimit ns/openshift-etcd ALERTS{alertname="PodDisruptionBudgetAtLimit", alertstate="pending", namespace="openshift-etcd", poddisruptionbudget="etcd-quorum-guard", prometheus="openshift-monitoring/k8s", severity="warning"}
Apr 23 09:18:17.000 W ns/openshift-etcd pod/etcd-quorum-guard-548c5677fb-879td node/ip-10-0-226-139.us-west-2.compute.internal reason/Unhealthy Readiness probe failed:  (8 times)
Apr 23 09:18:18.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9fd240107948c0eb6cdff3955f45d26328a1cc1e630b58d43fb548830536102,registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:aaf64581465fcab92dd584a85f6bd826af868e9f60a628be91dcf53688e1793c (46 times)
Apr 23 09:18:20.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-18.us-west-2.compute.internal node/ip-10-0-172-18 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Apr 23 09:18:20.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-18.us-west-2.compute.internal node/ip-10-0-172-18 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Apr 23 09:18:20.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-18.us-west-2.compute.internal node/ip-10-0-172-18 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Apr 23 09:18:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-18.us-west-2.compute.internal node/ip-10-0-172-18 reason/TerminationGracefulTerminationFinished All pending requests processed
Apr 23 09:18:22.000 W ns/openshift-etcd pod/etcd-quorum-guard-548c5677fb-879td node/ip-10-0-226-139.us-west-2.compute.internal reason/Unhealthy Readiness probe failed:  (9 times)
Apr 23 09:18:22.883 - 59s   I alert/KubeDeploymentReplicasMismatch ns/openshift-etcd container/kube-rbac-proxy-main ALERTS{alertname="KubeDeploymentReplicasMismatch", alertstate="pending", container="kube-rbac-proxy-main", deployment="etcd-quorum-guard", endpoint="https-main", job="kube-state-metrics", namespace="openshift-etcd", prometheus="openshift-monitoring/k8s", service="kube-state-metrics", severity="warning"}
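The monitor lines above follow a fairly regular shape: a timestamp, an optional interval duration, a severity letter (`I`/`W`/`E`), a run of `key/value` tokens (`ns/…`, `pod/…`, `reason/…`), a free-text message, and an optional `(N times)` repeat count. A minimal Python sketch for splitting such a line into fields (the token list is inferred from the lines visible in this log, not an official schema):

```python
import re

# Timestamp, optional "- 59s" interval, severity letter, then the rest.
EVENT_RE = re.compile(
    r'^(?P<ts>[A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}\.\d{3})'
    r'(?: - (?P<span>\S+))?'
    r'\s+(?P<level>[IWE])\s+'
    r'(?P<rest>.*)$'
)
COUNT_RE = re.compile(r'\s*\((?P<n>\d+) times\)\s*$')

# key/value prefixes observed in this log (assumption: not exhaustive).
KEYS = {'ns', 'pod', 'node', 'deployment', 'container', 'replicaset',
        'statefulset', 'daemonset', 'namespace', 'machine', 'alert',
        'reason', 'duration', 'image', 'code', 'cause'}

def parse_event(line):
    """Split one monitor line into timestamp, level, repeat count and fields."""
    m = EVENT_RE.match(line)
    if not m:
        return None
    rest = m.group('rest')
    count = 1
    c = COUNT_RE.search(rest)
    if c:
        count = int(c.group('n'))
        rest = rest[:c.start()]
    fields = {}
    tokens = rest.split()
    msg_start = 0
    for i, tok in enumerate(tokens):
        key, sep, val = tok.partition('/')
        if sep and key in KEYS:
            # partition splits at the first '/', so image digests keep their path.
            fields[key] = val
            msg_start = i + 1
        else:
            break  # first non key/value token starts the free-text message
    fields['message'] = ' '.join(tokens[msg_start:])
    return {'ts': m.group('ts'), 'level': m.group('level'),
            'count': count, **fields}
```

This is a triage aid for grepping dumps like the one above, not a parser for any documented format.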
#1782686622811164672 build-log (2 days ago)
Apr 23 09:23:43.000 I ns/openshift-marketplace pod/certified-operators-89vdz reason/AddedInterface Add eth0 [10.131.0.35/23] from openshift-sdn
Apr 23 09:23:44.000 I ns/openshift-marketplace pod/certified-operators-89vdz node/ip-10-0-189-37.us-west-2.compute.internal container/registry-server reason/Created
Apr 23 09:23:44.000 I ns/openshift-marketplace pod/certified-operators-89vdz node/ip-10-0-189-37.us-west-2.compute.internal container/registry-server reason/Pulled duration/0.703s image/registry.redhat.io/redhat/certified-operator-index:v4.9
Apr 23 09:23:44.000 I ns/openshift-marketplace pod/certified-operators-89vdz node/ip-10-0-189-37.us-west-2.compute.internal container/registry-server reason/Started
Apr 23 09:23:44.752 I ns/openshift-marketplace pod/certified-operators-89vdz node/ip-10-0-189-37.us-west-2.compute.internal container/registry-server reason/ContainerStart duration/3.00s
Apr 23 09:23:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-158-202.us-west-2.compute.internal node/ip-10-0-158-202 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Apr 23 09:23:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-158-202.us-west-2.compute.internal node/ip-10-0-158-202 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Apr 23 09:23:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-158-202.us-west-2.compute.internal node/ip-10-0-158-202 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Apr 23 09:23:48.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-158-202.us-west-2.compute.internal node/ip-10-0-158-202 reason/TerminationGracefulTerminationFinished All pending requests processed
Apr 23 09:23:52.813 I ns/openshift-marketplace pod/certified-operators-89vdz node/ip-10-0-189-37.us-west-2.compute.internal container/registry-server reason/Ready
Apr 23 09:23:52.814 I ns/openshift-marketplace pod/certified-operators-89vdz node/ip-10-0-189-37.us-west-2.compute.internal reason/GracefulDelete duration/1s
#1782686622811164672 build-log (2 days ago)
Apr 23 09:26:37.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-226-139.us-west-2.compute.internal node/ip-10-0-226-139.us-west-2.compute.internal reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-deprecated-api-requests-filter ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (12 times)
Apr 23 09:27:01.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8 (38 times)
Apr 23 09:27:59.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8 (39 times)
Apr 23 09:28:17.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-226-139.us-west-2.compute.internal node/ip-10-0-226-139.us-west-2.compute.internal reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-deprecated-api-requests-filter ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (33 times)
Apr 23 09:28:59.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8 (40 times)
Apr 23 09:29:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-226-139.us-west-2.compute.internal node/ip-10-0-226-139 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Apr 23 09:29:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-226-139.us-west-2.compute.internal node/ip-10-0-226-139 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Apr 23 09:29:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-226-139.us-west-2.compute.internal node/ip-10-0-226-139 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Apr 23 09:29:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-226-139.us-west-2.compute.internal node/ip-10-0-226-139 reason/TerminationGracefulTerminationFinished All pending requests processed
Apr 23 09:29:28.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-226-139.us-west-2.compute.internal node/ip-10-0-226-139.us-west-2.compute.internal container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8
Apr 23 09:29:28.597 W ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-226-139.us-west-2.compute.internal node/ip-10-0-226-139.us-west-2.compute.internal invariant violation (bug): static pod should not transition Running->Pending with same UID
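Each kube-apiserver pod in these snippets emits the same graceful-shutdown sequence (`AfterShutdownDelayDuration` → `HTTPServerStoppedListening` → `InFlightRequestsDrained` → `TerminationGracefulTerminationFinished`). A small sketch for checking that the events seen for one pod respect that order; the "expected" order here is taken from the sequences visible in this log, not from OpenShift documentation:

```python
# Ordering observed repeatedly in this log (assumption: treated as canonical).
SHUTDOWN_ORDER = [
    'AfterShutdownDelayDuration',
    'HTTPServerStoppedListening',
    'InFlightRequestsDrained',
    'TerminationGracefulTerminationFinished',
]

def shutdown_in_order(reasons):
    """True if the shutdown reasons seen for one pod appear in the expected
    order; reasons outside SHUTDOWN_ORDER are ignored, gaps are allowed."""
    ranks = [SHUTDOWN_ORDER.index(r) for r in reasons if r in SHUTDOWN_ORDER]
    return ranks == sorted(ranks)
```

Feeding it the per-pod `reason` stream from a parsed dump would flag pods whose termination events arrived out of sequence.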
#1782686622811164672 build-log (2 days ago)
Apr 23 09:49:07.000 I ns/openshift-monitoring statefulset/prometheus-k8s reason/SuccessfulDelete delete Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful
Apr 23 09:49:07.584 I ns/openshift-monitoring pod/prometheus-adapter-5db8c99df8-rdndp node/ip-10-0-143-63.us-west-2.compute.internal reason/Scheduled
Apr 23 09:49:08.000 I ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-250-141.us-west-2.compute.internal container/init-config-reloader reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:daa4d8e16f1eb10a379a2ad5246e9bbdfe5455ce3c143fc64c672ca9efbfa88b
Apr 23 09:49:08.000 I ns/openshift-monitoring pod/prometheus-k8s-0 reason/AddedInterface Add eth0 [10.128.2.31/23] from openshift-sdn
Apr 23 09:49:08.000 I ns/openshift-monitoring pod/prometheus-k8s-1 reason/AddedInterface Add eth0 [10.129.2.21/23] from openshift-sdn
Apr 23 09:49:08.000 I ns/openshift-apiserver pod/apiserver-744c5dcdd6-j7drv node/apiserver-744c5dcdd6-j7drv reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Apr 23 09:49:08.000 I ns/openshift-apiserver pod/apiserver-744c5dcdd6-j7drv node/apiserver-744c5dcdd6-j7drv reason/TerminationStoppedServing Server has stopped listening
Apr 23 09:49:08.008 I ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-250-141.us-west-2.compute.internal reason/GracefulDelete duration/600s
Apr 23 09:49:09.000 I ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-63.us-west-2.compute.internal container/init-config-reloader reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:daa4d8e16f1eb10a379a2ad5246e9bbdfe5455ce3c143fc64c672ca9efbfa88b
Apr 23 09:49:09.000 W ns/openshift-user-workload-monitoring statefulset/prometheus-user-workload reason/FailedCreate create Pod prometheus-user-workload-1 in StatefulSet prometheus-user-workload failed error: Pod "prometheus-user-workload-1" is invalid: spec.containers[0].startupProbe: Required value: must specify a handler type (10 times)
Apr 23 09:49:09.000 W ns/openshift-apiserver pod/apiserver-744c5dcdd6-j7drv node/ip-10-0-226-139.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.48:8443/readyz": dial tcp 10.130.0.48:8443: connect: connection refused\nbody: \n
#1782686622811164672 build-log (2 days ago)
Apr 23 09:49:29.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-684c9c85ff to 2
Apr 23 09:49:29.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-5d8577cf75 to 1
Apr 23 09:49:29.000 I ns/openshift-oauth-apiserver replicaset/apiserver-5d8577cf75 reason/SuccessfulCreate Created pod: apiserver-5d8577cf75-sv5m7
Apr 23 09:49:29.000 I ns/openshift-cluster-node-tuning-operator daemonset/tuned reason/SuccessfulCreate Created pod: tuned-lkmhc
Apr 23 09:49:29.000 I ns/openshift-oauth-apiserver replicaset/apiserver-684c9c85ff reason/SuccessfulDelete Deleted pod: apiserver-684c9c85ff-t2kd6
Apr 23 09:49:29.000 I ns/default namespace/kube-system node/apiserver-684c9c85ff-t2kd6 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Apr 23 09:49:29.000 I ns/default namespace/kube-system node/apiserver-684c9c85ff-t2kd6 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Apr 23 09:49:29.000 I ns/default namespace/kube-system node/apiserver-684c9c85ff-t2kd6 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Apr 23 09:49:29.000 I ns/default namespace/kube-system node/apiserver-684c9c85ff-t2kd6 reason/TerminationStoppedServing Server has stopped listening
Apr 23 09:49:29.000 W ns/openshift-apiserver pod/apiserver-744c5dcdd6-j7drv node/ip-10-0-226-139.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.48:8443/readyz": dial tcp 10.130.0.48:8443: connect: connection refused (5 times)
Apr 23 09:49:29.462 I ns/openshift-cluster-node-tuning-operator pod/tuned-v7gkl node/ip-10-0-158-202.us-west-2.compute.internal container/tuned reason/ContainerExit code/0 cause/Completed
#1782686622811164672 build-log (2 days ago)
Apr 23 09:49:51.000 I ns/openshift-marketplace deployment/marketplace-operator reason/ScalingReplicaSet Scaled down replica set marketplace-operator-6445cc7d88 to 0
Apr 23 09:49:51.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-85d748b46 to 2
Apr 23 09:49:51.000 I ns/openshift-oauth-apiserver replicaset/apiserver-85d748b46 reason/SuccessfulCreate Created pod: apiserver-85d748b46-gtx59
Apr 23 09:49:51.000 I ns/openshift-oauth-apiserver replicaset/apiserver-684c9c85ff reason/SuccessfulDelete Deleted pod: apiserver-684c9c85ff-wmbp4
Apr 23 09:49:51.000 I ns/openshift-marketplace replicaset/marketplace-operator-6445cc7d88 reason/SuccessfulDelete Deleted pod: marketplace-operator-6445cc7d88-cv7lq
Apr 23 09:49:51.000 I ns/default namespace/kube-system node/apiserver-684c9c85ff-wmbp4 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Apr 23 09:49:51.000 I ns/default namespace/kube-system node/apiserver-684c9c85ff-wmbp4 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Apr 23 09:49:51.000 I ns/default namespace/kube-system node/apiserver-684c9c85ff-wmbp4 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Apr 23 09:49:51.000 I ns/default namespace/kube-system node/apiserver-684c9c85ff-wmbp4 reason/TerminationStoppedServing Server has stopped listening
Apr 23 09:49:51.248 I ns/openshift-marketplace pod/marketplace-operator-844df4dc78-4gjv4 node/ip-10-0-172-18.us-west-2.compute.internal container/marketplace-operator reason/Ready
Apr 23 09:49:51.356 I ns/openshift-oauth-apiserver pod/apiserver-85d748b46-nkjhq node/ip-10-0-226-139.us-west-2.compute.internal container/oauth-apiserver reason/Ready
release-openshift-origin-installer-e2e-aws-disruptive-4.9 (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1782590738459004928 build-log (2 days ago)
Apr 23 02:46:04.000 I ns/openshift-cluster-csi-drivers replicaset/aws-ebs-csi-driver-controller-54d956df5f reason/SuccessfulCreate Created pod: aws-ebs-csi-driver-controller-54d956df5f-xdnms
Apr 23 02:46:04.000 I ns/openshift-console replicaset/console-5875fb7ccf reason/SuccessfulCreate Created pod: console-5875fb7ccf-ztbqn
Apr 23 02:46:04.000 I ns/openshift-cluster-storage-operator replicaset/csi-snapshot-webhook-57d89449ff reason/SuccessfulCreate Created pod: csi-snapshot-webhook-57d89449ff-jzdf7
Apr 23 02:46:04.000 I ns/openshift-network-operator replicaset/network-operator-565bb8f965 reason/SuccessfulCreate Created pod: network-operator-565bb8f965-qk5ds
Apr 23 02:46:04.000 I ns/openshift-authentication replicaset/oauth-openshift-64bc4c65b5 reason/SuccessfulCreate Created pod: oauth-openshift-64bc4c65b5-xj8gs
Apr 23 02:46:04.000 I ns/default namespace/kube-system node/apiserver-76b77fd68b-c5grs reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Apr 23 02:46:04.000 I ns/default namespace/kube-system node/apiserver-76b77fd68b-c5grs reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Apr 23 02:46:04.000 I ns/default namespace/kube-system node/apiserver-76b77fd68b-c5grs reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Apr 23 02:46:04.000 I ns/default namespace/kube-system node/apiserver-76b77fd68b-c5grs reason/TerminationStoppedServing Server has stopped listening
Apr 23 02:46:04.023 I ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-54d956df5f-l88zm node/ip-10-0-129-23.us-east-2.compute.internal reason/GracefulDelete duration/30s
Apr 23 02:46:04.101 I ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-57d89449ff-w28bw node/ip-10-0-129-23.us-east-2.compute.internal reason/GracefulDelete duration/30s
#1782590738459004928 build-log (2 days ago)
Apr 23 02:46:06.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()"
Apr 23 02:46:06.397 I ns/openshift-network-operator pod/network-operator-565bb8f965-qk5ds node/ip-10-0-255-71.us-east-2.compute.internal container/network-operator reason/ContainerStart duration/1.00s
Apr 23 02:46:06.397 I ns/openshift-network-operator pod/network-operator-565bb8f965-qk5ds node/ip-10-0-255-71.us-east-2.compute.internal container/network-operator reason/Ready
Apr 23 02:46:06.542 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-57d89449ff-w28bw node/ip-10-0-129-23.us-east-2.compute.internal container/webhook reason/ContainerExit code/2 cause/Error
Apr 23 02:46:06.575 I ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-57d89449ff-w28bw node/ip-10-0-129-23.us-east-2.compute.internal reason/Deleted
Apr 23 02:46:06.596 E ns/openshift-console-operator pod/console-operator-db5dfd8d9-wqhvh node/ip-10-0-129-23.us-east-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error :03.792402       1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0423 02:46:03.790753       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0423 02:46:03.790759       1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...\nW0423 02:46:03.790803       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0423 02:46:03.790845       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0423 02:46:03.792526       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-db5dfd8d9-wqhvh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0423 02:46:03.792567       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-db5dfd8d9-wqhvh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0423 02:46:03.792599       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0423 02:46:03.792639       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-db5dfd8d9-wqhvh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0423 02:46:03.792673       1 genericapiserver.go:387] "[graceful-termination] 
shutdown event" name="InFlightRequestsDrained"\nI0423 02:46:03.792702       1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0423 02:46:03.792732       1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated\n
Apr 23 02:46:06.621 I ns/openshift-console-operator pod/console-operator-db5dfd8d9-wqhvh node/ip-10-0-129-23.us-east-2.compute.internal reason/Deleted
Apr 23 02:46:06.641 I ns/openshift-oauth-apiserver pod/apiserver-76b77fd68b-c5grs node/ip-10-0-129-23.us-east-2.compute.internal container/oauth-apiserver reason/ContainerExit code/0 cause/Completed
Apr 23 02:46:06.689 I ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-54d956df5f-l88zm node/ip-10-0-129-23.us-east-2.compute.internal container/resizer-kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
Apr 23 02:46:06.689 I ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-54d956df5f-l88zm node/ip-10-0-129-23.us-east-2.compute.internal container/driver-kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
Apr 23 02:46:06.689 I ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-54d956df5f-l88zm node/ip-10-0-129-23.us-east-2.compute.internal container/attacher-kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
#1782590738459004928 build-log (2 days ago)
Apr 23 02:46:18.000 I ns/openshift-etcd replicaset/etcd-quorum-guard-654cb75ccd reason/SuccessfulCreate (combined from similar events): Created pod: etcd-quorum-guard-654cb75ccd-9p2g2 (2 times)
Apr 23 02:46:18.000 I ns/openshift-etcd replicaset/etcd-quorum-guard-654cb75ccd reason/SuccessfulCreate (combined from similar events): Created pod: etcd-quorum-guard-654cb75ccd-rstb4 (3 times)
Apr 23 02:46:18.000 I ns/openshift-etcd replicaset/etcd-quorum-guard-654cb75ccd reason/SuccessfulDelete (combined from similar events): Deleted pod: etcd-quorum-guard-654cb75ccd-64nrn (3 times)
Apr 23 02:46:18.000 I ns/openshift-etcd replicaset/etcd-quorum-guard-654cb75ccd reason/SuccessfulDelete (combined from similar events): Deleted pod: etcd-quorum-guard-654cb75ccd-8pkl9 (2 times)
Apr 23 02:46:18.000 I ns/openshift-etcd replicaset/etcd-quorum-guard-654cb75ccd reason/SuccessfulDelete (combined from similar events): Deleted pod: etcd-quorum-guard-654cb75ccd-xl7tz
Apr 23 02:46:18.000 I ns/openshift-apiserver pod/apiserver-69c76b86df-2w67r node/apiserver-69c76b86df-2w67r reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Apr 23 02:46:18.000 I ns/openshift-apiserver pod/apiserver-69c76b86df-2w67r node/apiserver-69c76b86df-2w67r reason/TerminationStoppedServing Server has stopped listening
Apr 23 02:46:18.080 I ns/openshift-etcd pod/etcd-quorum-guard-654cb75ccd-9ndlh node/ip-10-0-255-71.us-east-2.compute.internal container/guard reason/ContainerExit code/0 cause/Completed
Apr 23 02:46:18.099 I ns/openshift-etcd pod/etcd-quorum-guard-654cb75ccd-9ndlh node/ip-10-0-255-71.us-east-2.compute.internal reason/Deleted
Apr 23 02:46:18.407 I ns/openshift-etcd pod/etcd-quorum-guard-654cb75ccd-xl7tz node/ip-10-0-175-54.us-east-2.compute.internal reason/GracefulDelete duration/3s
Apr 23 02:46:18.413 I ns/openshift-etcd pod/etcd-quorum-guard-654cb75ccd-8pkl9 node/ reason/DeletedBeforeScheduling
#1782590738459004928 build-log (2 days ago)
Apr 23 02:52:02.000 I ns/openshift-kube-controller-manager-operator deployment/kube-controller-manager-operator reason/OperatorStatusChanged Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 8" to "NodeInstallerProgressing: 2 nodes are at revision 8",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8" to "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 8"
Apr 23 02:52:02.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/PodCreated Created Pod/revision-pruner-6-ip-10-0-255-71.us-east-2.compute.internal -n openshift-etcd because it was missing (7 times)
Apr 23 02:52:02.000 I ns/openshift-kube-scheduler-operator deployment/openshift-kube-scheduler-operator reason/PodCreated Created Pod/revision-pruner-8-ip-10-0-255-71.us-east-2.compute.internal -n openshift-kube-scheduler because it was missing (3 times)
Apr 23 02:52:02.000 I ns/openshift-apiserver replicaset/apiserver-69c76b86df reason/SuccessfulCreate Created pod: apiserver-69c76b86df-s7bcz
Apr 23 02:52:02.000 I ns/openshift-oauth-apiserver replicaset/apiserver-76b77fd68b reason/SuccessfulCreate Created pod: apiserver-76b77fd68b-2rg2h
Apr 23 02:52:02.000 I ns/default namespace/kube-system node/apiserver-76b77fd68b-sl5qg reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Apr 23 02:52:02.000 I ns/default namespace/kube-system node/apiserver-76b77fd68b-sl5qg reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Apr 23 02:52:02.000 I ns/openshift-apiserver pod/apiserver-69c76b86df-fsgxg node/apiserver-69c76b86df-fsgxg reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Apr 23 02:52:02.000 I ns/openshift-apiserver pod/apiserver-69c76b86df-fsgxg node/apiserver-69c76b86df-fsgxg reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Apr 23 02:52:02.000 I ns/default namespace/kube-system node/apiserver-76b77fd68b-sl5qg reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Apr 23 02:52:02.000 I ns/default namespace/kube-system node/apiserver-76b77fd68b-sl5qg reason/TerminationStoppedServing Server has stopped listening
#1782590738459004928 build-log (2 days ago)
Apr 23 02:52:16.000 I ns/openshift-etcd pod/etcd-quorum-guard-654cb75ccd-5tbfq node/ip-10-0-175-54.us-east-2.compute.internal container/guard reason/Created
Apr 23 02:52:16.000 I ns/openshift-etcd pod/etcd-quorum-guard-654cb75ccd-5tbfq node/ip-10-0-175-54.us-east-2.compute.internal container/guard reason/Pulled image/registry.ci.openshift.org/ocp/4.9-2023-09-10-173339@sha256:be18af4f4397dda669f82d98ef1de694e3d1d3b9aa504cce26fe80405b8fce50
Apr 23 02:52:16.000 I ns/openshift-etcd pod/etcd-quorum-guard-654cb75ccd-5tbfq node/ip-10-0-175-54.us-east-2.compute.internal container/guard reason/Started
Apr 23 02:52:16.843 I ns/openshift-etcd pod/etcd-quorum-guard-654cb75ccd-5tbfq node/ip-10-0-175-54.us-east-2.compute.internal container/guard reason/ContainerStart duration/1.00s
Apr 23 02:52:17.000 I ns/openshift-machine-api machine/ci-op-w411ntqg-d5637-xftcw-master-2 reason/Delete Deleted machine ci-op-w411ntqg-d5637-xftcw-master-2 (5 times)
Apr 23 02:52:17.000 I ns/openshift-apiserver pod/apiserver-69c76b86df-fsgxg node/apiserver-69c76b86df-fsgxg reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Apr 23 02:52:17.000 I ns/openshift-apiserver pod/apiserver-69c76b86df-fsgxg node/apiserver-69c76b86df-fsgxg reason/TerminationStoppedServing Server has stopped listening
Apr 23 02:52:18.000 I ns/openshift-etcd pod/etcd-quorum-guard-654cb75ccd-5tbfq node/ip-10-0-175-54.us-east-2.compute.internal container/guard reason/Killing
Apr 23 02:52:18.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/ModifiedQuorumGuardDeployment etcd-quorum-guard was modified (77 times)
Apr 23 02:52:18.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nDefragControllerDegraded: cluster is unhealthy: 2 of 3 members are available, ip-10-0-129-23.us-east-2.compute.internal is unhealthy\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster 3 nodes are required, but only 2 are available\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-129-23.us-east-2.compute.internal is unhealthy" to "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nDefragControllerDegraded: cluster is unhealthy: 2 of 3 members are available, ip-10-0-129-23.us-east-2.compute.internal is unhealthy\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster 3 nodes are required, but only 2 are available\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-129-23.us-east-2.compute.internal is unhealthy\nQuorumGuardControllerDegraded: Operation cannot be fulfilled on deployments.apps \"etcd-quorum-guard\": the object has been modified; please apply your changes to the latest version and try again" (2 times)
Apr 23 02:52:18.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nDefragControllerDegraded: cluster is unhealthy: 2 of 3 members are available, ip-10-0-129-23.us-east-2.compute.internal is unhealthy\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster 3 nodes are required, but only 2 are available\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-129-23.us-east-2.compute.internal is unhealthy" to "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nDefragControllerDegraded: cluster is unhealthy: 2 of 3 members are available, ip-10-0-129-23.us-east-2.compute.internal is unhealthy\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster 3 nodes are required, but only 2 are available\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-129-23.us-east-2.compute.internal is unhealthy\nQuorumGuardControllerDegraded: Operation cannot be fulfilled on deployments.apps \"etcd-quorum-guard\": the object has been modified; please apply your changes to the latest version and try again" (3 times)
#1782590738459004928 build-log 2 days ago
Apr 23 03:09:41.000 W ns/openshift-oauth-apiserver pod/apiserver-8659b7bd6f-jfx99 node/ip-10-0-190-170.us-east-2.compute.internal reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[-]informer-sync failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/openshift.io-StartUserInformer ok\n[+]poststarthook/openshift.io-StartOAuthInformer ok\n[+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok\n[+]shutdown ok\nreadyz check failed\n\n
Apr 23 03:09:41.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-76b77fd68b to 1 (2 times)
Apr 23 03:09:41.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-8659b7bd6f to 2
Apr 23 03:09:41.000 I ns/openshift-oauth-apiserver replicaset/apiserver-8659b7bd6f reason/SuccessfulCreate Created pod: apiserver-8659b7bd6f-2nsb9
Apr 23 03:09:41.000 I ns/openshift-oauth-apiserver replicaset/apiserver-76b77fd68b reason/SuccessfulDelete Deleted pod: apiserver-76b77fd68b-bmm7x
Apr 23 03:09:41.000 I ns/default namespace/kube-system node/apiserver-76b77fd68b-bmm7x reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Apr 23 03:09:41.000 I ns/default namespace/kube-system node/apiserver-76b77fd68b-bmm7x reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Apr 23 03:09:41.000 I ns/default namespace/kube-system node/apiserver-76b77fd68b-bmm7x reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Apr 23 03:09:41.000 I ns/default namespace/kube-system node/apiserver-76b77fd68b-bmm7x reason/TerminationStoppedServing Server has stopped listening
Apr 23 03:09:41.000 W ns/openshift-oauth-apiserver pod/apiserver-8659b7bd6f-jfx99 node/ip-10-0-190-170.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500
Apr 23 03:09:41.727 W ns/openshift-oauth-apiserver pod/apiserver-8659b7bd6f-2nsb9 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-aws-upgrade-workload (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1781973512680902656 build-log 4 days ago
Apr 21 09:48:47.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-131-251.us-east-2.compute.internal is unhealthy"
Apr 21 09:48:47.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-131-251.us-east-2.compute.internal is unhealthy"
Apr 21 09:48:50.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/UnhealthyEtcdMember unhealthy members: ip-10-0-131-251.us-east-2.compute.internal (2 times)
Apr 21 09:48:51.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9fd240107948c0eb6cdff3955f45d26328a1cc1e630b58d43fb548830536102,registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:aaf64581465fcab92dd584a85f6bd826af868e9f60a628be91dcf53688e1793c (41 times)
Apr 21 09:48:51.000 W ns/openshift-etcd pod/etcd-quorum-guard-548c5677fb-dk2mj node/ip-10-0-131-251.us-east-2.compute.internal reason/Unhealthy Readiness probe failed:  (8 times)
Apr 21 09:48:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-171-126.us-east-2.compute.internal node/ip-10-0-171-126 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Apr 21 09:48:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-171-126.us-east-2.compute.internal node/ip-10-0-171-126 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Apr 21 09:48:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-171-126.us-east-2.compute.internal node/ip-10-0-171-126 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Apr 21 09:48:56.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-171-126.us-east-2.compute.internal node/ip-10-0-171-126.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.171.126:6443/readyz": dial tcp 10.0.171.126:6443: connect: connection refused\nbody: \n
Apr 21 09:48:56.000 W ns/openshift-etcd pod/etcd-quorum-guard-548c5677fb-dk2mj node/ip-10-0-131-251.us-east-2.compute.internal reason/Unhealthy Readiness probe failed:  (9 times)
Apr 21 09:48:58.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-171-126.us-east-2.compute.internal node/ip-10-0-171-126 reason/TerminationGracefulTerminationFinished All pending requests processed
#1781973512680902656 build-log 4 days ago
Apr 21 09:53:36.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8 (21 times)
Apr 21 09:53:38.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8 (22 times)
Apr 21 09:53:44.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-211-57.us-east-2.compute.internal node/ip-10-0-211-57.us-east-2.compute.internal reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-deprecated-api-requests-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (37 times)
Apr 21 09:54:11.000 W ns/openshift-network-diagnostics node/ip-10-0-149-45.us-east-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-171-126: failed to establish a TCP connection to 10.0.171.126:6443: dial tcp 10.0.171.126:6443: connect: connection refused
Apr 21 09:54:11.000 I ns/openshift-network-diagnostics node/ip-10-0-149-45.us-east-2.compute.internal reason/ConnectivityRestored roles/worker Connectivity restored after 1m0.000319216s: kubernetes-apiserver-endpoint-ip-10-0-171-126: tcp connection to 10.0.171.126:6443 succeeded
Apr 21 09:54:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-211-57.us-east-2.compute.internal node/ip-10-0-211-57 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Apr 21 09:54:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-211-57.us-east-2.compute.internal node/ip-10-0-211-57 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Apr 21 09:54:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-211-57.us-east-2.compute.internal node/ip-10-0-211-57 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Apr 21 09:54:31.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8 (23 times)
Apr 21 09:54:31.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-211-57.us-east-2.compute.internal node/ip-10-0-211-57 reason/TerminationGracefulTerminationFinished All pending requests processed
Apr 21 09:54:32.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-211-57.us-east-2.compute.internal node/ip-10-0-211-57.us-east-2.compute.internal container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8
#1781973512680902656 build-log 4 days ago
Apr 21 09:59:28.637 I ns/openshift-marketplace pod/redhat-operators-phqgh node/ip-10-0-149-45.us-east-2.compute.internal reason/GracefulDelete duration/1s
Apr 21 09:59:30.000 I ns/openshift-marketplace pod/redhat-operators-phqgh node/ip-10-0-149-45.us-east-2.compute.internal container/registry-server reason/Killing
Apr 21 09:59:31.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8 (40 times)
Apr 21 09:59:31.602 I ns/openshift-marketplace pod/redhat-operators-phqgh node/ip-10-0-149-45.us-east-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Apr 21 09:59:31.614 I ns/openshift-marketplace pod/redhat-operators-phqgh node/ip-10-0-149-45.us-east-2.compute.internal reason/Deleted
Apr 21 09:59:41.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-251.us-east-2.compute.internal node/ip-10-0-131-251 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Apr 21 09:59:41.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-251.us-east-2.compute.internal node/ip-10-0-131-251 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Apr 21 09:59:41.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-251.us-east-2.compute.internal node/ip-10-0-131-251 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Apr 21 09:59:43.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-251.us-east-2.compute.internal node/ip-10-0-131-251 reason/TerminationGracefulTerminationFinished All pending requests processed
Apr 21 09:59:44.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-251.us-east-2.compute.internal node/ip-10-0-131-251.us-east-2.compute.internal container/kube-apiserver reason/Killing
Apr 21 09:59:47.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-251.us-east-2.compute.internal node/ip-10-0-131-251.us-east-2.compute.internal container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2024-04-05-170549@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8
#1781973512680902656 build-log 4 days ago
Apr 21 10:15:26.000 I ns/openshift-console replicaset/downloads-79b6c98b5 reason/SuccessfulCreate Created pod: downloads-79b6c98b5-5tmnt
Apr 21 10:15:26.000 I ns/openshift-console replicaset/downloads-79b6c98b5 reason/SuccessfulCreate Created pod: downloads-79b6c98b5-xt7vj
Apr 21 10:15:26.000 I ns/openshift-oauth-apiserver replicaset/apiserver-57b6644959 reason/SuccessfulDelete Deleted pod: apiserver-57b6644959-v27ck
Apr 21 10:15:26.000 I ns/openshift-apiserver replicaset/apiserver-5f7d895f reason/SuccessfulDelete Deleted pod: apiserver-5f7d895f-rf8pp
Apr 21 10:15:26.000 I ns/openshift-console replicaset/downloads-79bffbf4bd reason/SuccessfulDelete Deleted pod: downloads-79bffbf4bd-bnwdc
Apr 21 10:15:26.000 I ns/default namespace/kube-system node/apiserver-57b6644959-v27ck reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Apr 21 10:15:26.000 I ns/default namespace/kube-system node/apiserver-57b6644959-v27ck reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Apr 21 10:15:26.000 I ns/default namespace/kube-system node/apiserver-57b6644959-v27ck reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Apr 21 10:15:26.000 I ns/default namespace/kube-system node/apiserver-57b6644959-v27ck reason/TerminationStoppedServing Server has stopped listening
Apr 21 10:15:26.061 W ns/openshift-authentication pod/oauth-openshift-5999458c56-mxdvp reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
#1781973512680902656 build-log 4 days ago
Apr 21 10:15:42.000 W ns/openshift-service-ca-operator deployment/service-ca-operator reason/FastControllerResync Controller "ServiceCAOperator" resync interval is set to 0s which might lead to client request throttling
Apr 21 10:15:42.000 I ns/openshift-service-ca-operator configmap/service-ca-operator-lock reason/LeaderElection service-ca-operator-788c77b6ff-64bcx_c56acd50-2b41-453a-980d-9aef94fd8eda became leader
Apr 21 10:15:42.000 W ns/openshift-oauth-apiserver pod/apiserver-57b6644959-v27ck node/ip-10-0-131-251.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.46:8443/readyz": dial tcp 10.129.0.46:8443: connect: connection refused\nbody: \n (2 times)
Apr 21 10:15:42.000 W ns/openshift-ingress pod/router-default-79997cdb7c-wtvgn node/ip-10-0-213-197.us-east-2.compute.internal reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]backend-proxy-http ok\n[+]has-synced ok\n[-]process-running failed: reason withheld\nhealthz check failed\n\n (2 times)
Apr 21 10:15:42.000 W ns/openshift-ingress pod/router-default-79997cdb7c-sfjdm node/ip-10-0-149-231.us-east-2.compute.internal reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [-]backend-proxy-http failed: reason withheld\n[+]has-synced ok\n[-]process-running failed: reason withheld\nhealthz check failed\n\n
Apr 21 10:15:42.000 I ns/openshift-apiserver pod/apiserver-5f7d895f-rf8pp node/apiserver-5f7d895f-rf8pp reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Apr 21 10:15:42.000 I ns/openshift-apiserver pod/apiserver-5f7d895f-rf8pp node/apiserver-5f7d895f-rf8pp reason/TerminationStoppedServing Server has stopped listening
Apr 21 10:15:42.000 W ns/openshift-oauth-apiserver pod/apiserver-57b6644959-v27ck node/ip-10-0-131-251.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.129.0.46:8443/readyz": dial tcp 10.129.0.46:8443: connect: connection refused (2 times)
Apr 21 10:15:42.000 W ns/openshift-ingress pod/router-default-79997cdb7c-wtvgn node/ip-10-0-213-197.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (2 times)
Apr 21 10:15:42.000 W ns/openshift-ingress pod/router-default-79997cdb7c-sfjdm node/ip-10-0-149-231.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (6 times)
Apr 21 10:15:42.182 W clusteroperator/operator-lifecycle-manager-catalog condition/Progressing status/True changed: Deployed 0.19.0
#1781973512680902656 build-log 4 days ago
Apr 21 10:17:13.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-5c49c8569d to 2
Apr 21 10:17:13.000 I ns/openshift-oauth-apiserver replicaset/apiserver-5c49c8569d reason/SuccessfulCreate Created pod: apiserver-5c49c8569d-ggqr7
Apr 21 10:17:13.000 I ns/openshift-image-registry daemonset/node-ca reason/SuccessfulCreate Created pod: node-ca-wmj2b
Apr 21 10:17:13.000 I ns/openshift-oauth-apiserver replicaset/apiserver-57b6644959 reason/SuccessfulDelete Deleted pod: apiserver-57b6644959-tdtbx
Apr 21 10:17:13.000 I ns/openshift-image-registry daemonset/node-ca reason/SuccessfulDelete Deleted pod: node-ca-swzgq
Apr 21 10:17:13.000 I ns/default namespace/kube-system node/apiserver-57b6644959-tdtbx reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Apr 21 10:17:13.000 I ns/default namespace/kube-system node/apiserver-57b6644959-tdtbx reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Apr 21 10:17:13.000 I ns/default namespace/kube-system node/apiserver-57b6644959-tdtbx reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Apr 21 10:17:13.000 I ns/default namespace/kube-system node/apiserver-57b6644959-tdtbx reason/TerminationStoppedServing Server has stopped listening
Apr 21 10:17:13.026 I ns/openshift-image-registry pod/node-ca-b68lq node/ip-10-0-131-251.us-east-2.compute.internal container/node-ca reason/ContainerStart duration/9.00s
Apr 21 10:17:13.026 I ns/openshift-image-registry pod/node-ca-b68lq node/ip-10-0-131-251.us-east-2.compute.internal container/node-ca reason/Ready
periodic-ci-openshift-release-master-ci-4.9-upgrade-from-stable-4.8-e2e-aws-ovn-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1780922810433015808 build-log.txt.gz 6 days ago
Apr 18 12:41:50.000 I ns/openshift-marketplace pod/redhat-operators-vxb5k node/ip-10-0-170-231.ec2.internal container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-operator-index:v4.8
Apr 18 12:41:50.000 I ns/openshift-marketplace pod/redhat-operators-vxb5k reason/AddedInterface Add eth0 [10.131.0.41/23] from ovn-kubernetes
Apr 18 12:41:51.000 I ns/openshift-marketplace pod/redhat-operators-vxb5k node/ip-10-0-170-231.ec2.internal container/registry-server reason/Created
Apr 18 12:41:51.000 I ns/openshift-marketplace pod/redhat-operators-vxb5k node/ip-10-0-170-231.ec2.internal container/registry-server reason/Started
Apr 18 12:41:51.922 I ns/openshift-marketplace pod/redhat-operators-vxb5k node/ip-10-0-170-231.ec2.internal container/registry-server reason/ContainerStart duration/4.00s
Apr 18 12:41:55.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-186-171.ec2.internal node/ip-10-0-186-171 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Apr 18 12:41:55.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-186-171.ec2.internal node/ip-10-0-186-171 reason/TerminationStoppedServing Server has stopped listening
Apr 18 12:41:57.000 I ns/openshift-marketplace pod/redhat-operators-vxb5k node/ip-10-0-170-231.ec2.internal container/registry-server reason/Killing
Apr 18 12:41:57.770 I ns/openshift-marketplace pod/redhat-operators-vxb5k node/ip-10-0-170-231.ec2.internal container/registry-server reason/Ready
Apr 18 12:41:57.784 I ns/openshift-marketplace pod/redhat-operators-vxb5k node/ip-10-0-170-231.ec2.internal reason/GracefulDelete duration/1s
Apr 18 12:41:58.937 I ns/openshift-marketplace pod/redhat-operators-vxb5k node/ip-10-0-170-231.ec2.internal container/registry-server reason/ContainerExit code/0 cause/Completed
#1780922810433015808 build-log.txt.gz 6 days ago
Apr 18 12:46:26.536 I ns/openshift-marketplace pod/certified-operators-b54bw node/ip-10-0-170-231.ec2.internal reason/GracefulDelete duration/1s
Apr 18 12:46:27.554 I ns/openshift-marketplace pod/certified-operators-b54bw node/ip-10-0-170-231.ec2.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Apr 18 12:46:39.138 I ns/openshift-marketplace pod/certified-operators-b54bw node/ip-10-0-170-231.ec2.internal reason/Deleted
Apr 18 12:47:19.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2023-09-10-173339@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (17 times)
Apr 18 12:47:21.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2023-09-10-173339@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (18 times)
Apr 18 12:47:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-176-20.ec2.internal node/ip-10-0-176-20 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Apr 18 12:47:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-176-20.ec2.internal node/ip-10-0-176-20 reason/TerminationStoppedServing Server has stopped listening
Apr 18 12:48:15.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2023-09-10-173339@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (19 times)
Apr 18 12:48:27.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-176-20.ec2.internal node/ip-10-0-176-20.ec2.internal container/kube-scheduler-recovery-controller reason/Created
Apr 18 12:48:27.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-176-20.ec2.internal node/ip-10-0-176-20.ec2.internal container/kube-scheduler-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:258478726c54a7e6ea106d65b04dbd5b25a6d0c5427dad81ad2bcf16eb4d6a82
Apr 18 12:48:27.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-176-20.ec2.internal node/ip-10-0-176-20.ec2.internal container/kube-scheduler-recovery-controller reason/Started
#1780922810433015808 build-log.txt.gz 6 days ago
Apr 18 12:53:03.642 I ns/openshift-marketplace pod/redhat-marketplace-btt9h node/ip-10-0-170-231.ec2.internal reason/GracefulDelete duration/1s
Apr 18 12:53:04.000 I ns/openshift-marketplace pod/redhat-marketplace-btt9h node/ip-10-0-170-231.ec2.internal container/registry-server reason/Killing
Apr 18 12:53:05.420 I ns/openshift-marketplace pod/redhat-marketplace-btt9h node/ip-10-0-170-231.ec2.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Apr 18 12:53:09.139 I ns/openshift-marketplace pod/redhat-marketplace-btt9h node/ip-10-0-170-231.ec2.internal reason/Deleted
Apr 18 12:53:15.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2023-09-10-173339@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (37 times)
Apr 18 12:53:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-230-231.ec2.internal node/ip-10-0-230-231 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Apr 18 12:53:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-230-231.ec2.internal node/ip-10-0-230-231 reason/TerminationStoppedServing Server has stopped listening
Apr 18 12:54:15.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-230-231.ec2.internal node/ip-10-0-230-231.ec2.internal container/kube-scheduler-recovery-controller reason/Created
Apr 18 12:54:15.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-230-231.ec2.internal node/ip-10-0-230-231.ec2.internal container/kube-scheduler-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:258478726c54a7e6ea106d65b04dbd5b25a6d0c5427dad81ad2bcf16eb4d6a82
Apr 18 12:54:15.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-230-231.ec2.internal node/ip-10-0-230-231.ec2.internal container/kube-scheduler-recovery-controller reason/Started
Apr 18 12:54:15.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2023-09-10-173339@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (38 times)
#1780922810433015808 build-log.txt.gz 6 days ago
Apr 18 13:02:32.906 W ns/openshift-apiserver pod/apiserver-66f4f6f5f-dhjsz reason/FailedScheduling skip schedule deleting pod: openshift-apiserver/apiserver-66f4f6f5f-dhjsz
Apr 18 13:02:32.927 I ns/openshift-apiserver pod/apiserver-66f4f6f5f-dhjsz node/ reason/DeletedBeforeScheduling
Apr 18 13:02:32.951 W ns/openshift-apiserver pod/apiserver-7b7996cc44-vbmqr reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Apr 18 13:02:32.952 I ns/openshift-apiserver pod/apiserver-7b7996cc44-vbmqr node/ reason/Created
Apr 18 13:02:36.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Apr 18 13:02:39.000 I ns/openshift-apiserver pod/apiserver-75cf5d87fc-lsmln node/apiserver-75cf5d87fc-lsmln reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Apr 18 13:02:39.000 I ns/openshift-apiserver pod/apiserver-75cf5d87fc-lsmln node/apiserver-75cf5d87fc-lsmln reason/TerminationStoppedServing Server has stopped listening
Apr 18 13:02:48.000 W ns/openshift-apiserver pod/apiserver-75cf5d87fc-lsmln node/ip-10-0-176-20.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.48:8443/healthz": dial tcp 10.128.0.48:8443: connect: connection refused\nbody: \n
Apr 18 13:02:48.000 W ns/openshift-apiserver pod/apiserver-75cf5d87fc-lsmln node/ip-10-0-176-20.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.48:8443/healthz": dial tcp 10.128.0.48:8443: connect: connection refused\nbody: \n
Apr 18 13:02:48.000 W ns/openshift-apiserver pod/apiserver-75cf5d87fc-lsmln node/ip-10-0-176-20.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.48:8443/healthz": dial tcp 10.128.0.48:8443: connect: connection refused
Apr 18 13:02:48.000 W ns/openshift-apiserver pod/apiserver-75cf5d87fc-lsmln node/ip-10-0-176-20.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.48:8443/healthz": dial tcp 10.128.0.48:8443: connect: connection refused
#1780922810433015808 build-log.txt.gz 6 days ago
Apr 18 13:04:09.000 I ns/openshift-apiserver pod/apiserver-75cf5d87fc-rj7zp node/apiserver-75cf5d87fc-rj7zp reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Apr 18 13:04:09.033 I ns/openshift-apiserver pod/apiserver-75cf5d87fc-rj7zp node/ip-10-0-230-231.ec2.internal reason/GracefulDelete duration/70s
Apr 18 13:04:09.104 I ns/openshift-apiserver pod/apiserver-7b7996cc44-trd5p node/ reason/Created
Apr 18 13:04:09.105 W ns/openshift-apiserver pod/apiserver-7b7996cc44-trd5p reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Apr 18 13:04:10.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Apr 18 13:04:19.000 I ns/openshift-apiserver pod/apiserver-75cf5d87fc-rj7zp node/apiserver-75cf5d87fc-rj7zp reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Apr 18 13:04:19.000 I ns/openshift-apiserver pod/apiserver-75cf5d87fc-rj7zp node/apiserver-75cf5d87fc-rj7zp reason/TerminationStoppedServing Server has stopped listening
Apr 18 13:04:20.000 W ns/openshift-apiserver pod/apiserver-75cf5d87fc-rj7zp node/ip-10-0-230-231.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.43:8443/healthz": dial tcp 10.129.0.43:8443: connect: connection refused\nbody: \n
Apr 18 13:04:20.000 W ns/openshift-apiserver pod/apiserver-75cf5d87fc-rj7zp node/ip-10-0-230-231.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.43:8443/healthz": dial tcp 10.129.0.43:8443: connect: connection refused\nbody: \n
Apr 18 13:04:20.000 W ns/openshift-apiserver pod/apiserver-75cf5d87fc-rj7zp node/ip-10-0-230-231.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.129.0.43:8443/healthz": dial tcp 10.129.0.43:8443: connect: connection refused
Apr 18 13:04:20.000 W ns/openshift-apiserver pod/apiserver-75cf5d87fc-rj7zp node/ip-10-0-230-231.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.129.0.43:8443/healthz": dial tcp 10.129.0.43:8443: connect: connection refused
#1780922810433015808 build-log.txt.gz (6 days ago)
Apr 18 13:05:41.215 I ns/openshift-apiserver pod/apiserver-75cf5d87fc-d9x44 node/ip-10-0-186-171.ec2.internal reason/GracefulDelete duration/70s
Apr 18 13:05:41.430 W ns/openshift-apiserver pod/apiserver-7b7996cc44-b9b5t reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Apr 18 13:05:41.431 I ns/openshift-apiserver pod/apiserver-7b7996cc44-b9b5t node/ reason/Created
Apr 18 13:05:42.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well")
Apr 18 13:05:42.785 W clusteroperator/openshift-apiserver condition/Progressing status/False reason/AsExpected changed: All is well
Apr 18 13:05:51.000 I ns/openshift-apiserver pod/apiserver-75cf5d87fc-d9x44 node/apiserver-75cf5d87fc-d9x44 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Apr 18 13:05:51.000 I ns/openshift-apiserver pod/apiserver-75cf5d87fc-d9x44 node/apiserver-75cf5d87fc-d9x44 reason/TerminationStoppedServing Server has stopped listening
Apr 18 13:05:52.000 W ns/openshift-apiserver pod/apiserver-75cf5d87fc-d9x44 node/ip-10-0-186-171.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused\nbody: \n
Apr 18 13:05:52.000 W ns/openshift-apiserver pod/apiserver-75cf5d87fc-d9x44 node/ip-10-0-186-171.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused\nbody: \n
Apr 18 13:05:52.000 W ns/openshift-apiserver pod/apiserver-75cf5d87fc-d9x44 node/ip-10-0-186-171.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused
Apr 18 13:05:52.000 W ns/openshift-apiserver pod/apiserver-75cf5d87fc-d9x44 node/ip-10-0-186-171.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused

Found in 4.76% of runs (5.71% of failures) across 84 total runs and 74 jobs (83.33% failed) in 197ms