Job:
#1921157 bug, 23 months ago: [sig-api-machinery] Kubernetes APIs remain available for new connections (ASSIGNED)
T2: At 06:45:58: systemd-shutdown was sending SIGTERM to remaining processes...
T3: At 06:45:58: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Received signal to terminate, becoming unready, but keeping serving (TerminationStart event)
T4: At 06:47:08: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: The minimal shutdown duration of 1m10s finished (TerminationMinimalShutdownDurationFinished event)
T5: At 06:47:08: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Server has stopped listening (TerminationStoppedServing event)
T5 is the last event reported from that API server. At T5 the server may wait up to 60s for all in-flight requests to complete, and only then fires the TerminationGracefulTerminationFinished event.
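
For reference, the two-phase pattern behind T3-T5 can be sketched with a plain Go HTTP server: on SIGTERM it keeps serving for the minimal shutdown duration (1m10s above), then stops listening and drains in-flight requests for up to 60s. This is only an illustrative sketch; the port, handler wiring, and control flow are assumptions, not the actual kube-apiserver implementation.

    package main

    import (
        "context"
        "log"
        "net/http"
        "os"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        srv := &http.Server{Addr: ":8443"} // illustrative port, not the real apiserver config

        sigCh := make(chan os.Signal, 1)
        signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGINT)

        go func() {
            <-sigCh // T3: TerminationStart: become unready but keep serving

            // T3 -> T4: keep accepting connections for the minimal shutdown
            // duration (1m10s in the timeline above) so load balancers can
            // observe unreadiness and stop sending new traffic.
            time.Sleep(70 * time.Second) // T4: TerminationMinimalShutdownDurationFinished

            // T5: stop listening (TerminationStoppedServing), then wait up
            // to 60s for in-flight requests; when Shutdown returns, the
            // equivalent of TerminationGracefulTerminationFinished fires.
            ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
            defer cancel()
            if err := srv.Shutdown(ctx); err != nil {
                log.Printf("graceful termination did not complete: %v", err)
            }
        }()

        if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            log.Fatalf("server failed: %v", err)
        }
    }
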
periodic-ci-openshift-release-master-ci-4.9-upgrade-from-stable-4.8-e2e-aws-upgrade (all) - 14 runs, 79% failed, 9% of failures match = 7% impact
#1617489183293575168 junit, 7 days ago
Jan 23 13:36:42.019 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-6846998c6c-dcd7x node/ip-10-0-180-122.us-west-1.compute.internal container/csi-attacher reason/ContainerExit code/2 cause/Error
Jan 23 13:36:42.019 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-6846998c6c-dcd7x node/ip-10-0-180-122.us-west-1.compute.internal container/csi-driver reason/ContainerExit code/2 cause/Error
Jan 23 13:36:42.019 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-6846998c6c-dcd7x node/ip-10-0-180-122.us-west-1.compute.internal container/csi-snapshotter reason/ContainerExit code/2 cause/Error
Jan 23 13:36:42.600 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-69fd597c9-5nnmr node/ip-10-0-180-122.us-west-1.compute.internal container/cluster-storage-operator reason/ContainerExit code/1 cause/Error Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-69fd597c9-5nnmr_104a19a3-6872-4fb9-bb90-f32df940e865/cluster-storage-operator/0.log": lstat /var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-69fd597c9-5nnmr_104a19a3-6872-4fb9-bb90-f32df940e865/cluster-storage-operator/0.log: no such file or directory
Jan 23 13:36:42.901 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5f74f5f497-n2rgq node/ip-10-0-180-122.us-west-1.compute.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error vent.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-cluster-storage-operator", Name:"csi-snapshot-controller-operator", UID:"aea3381a-8e43-48ef-bf43-13b26853e491", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well")\nI0123 13:36:41.290327       1 operator.go:159] Finished syncing operator at 315.362726ms\nI0123 13:36:41.290610       1 operator.go:157] Starting syncing operator at 2023-01-23 13:36:41.290605503 +0000 UTC m=+1493.877508558\nI0123 13:36:41.386915       1 operator.go:159] Finished syncing operator at 96.299831ms\nI0123 13:36:41.497361       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0123 13:36:41.497776       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0123 13:36:41.497847       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0123 13:36:41.497879       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0123 13:36:41.497934       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0123 13:36:41.498268       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0123 13:36:41.498322       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0123 13:36:41.498346       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0123 13:36:41.498372       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0123 13:36:41.498412       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0123 13:36:41.498435       1 base_controller.go:167] Shutting down StaticResourceController ...\nW0123 13:36:41.498540       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 23 13:36:43.861 E ns/openshift-console-operator pod/console-operator-b4bd97884-t7nvn node/ip-10-0-180-122.us-west-1.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error "", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0123 13:36:43.081049       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0123 13:36:43.081096       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0123 13:36:43.082282       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0123 13:36:43.082301       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0123 13:36:43.082311       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0123 13:36:43.082320       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0123 13:36:43.082328       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0123 13:36:43.082337       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0123 13:36:43.082349       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0123 13:36:43.082358       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0123 13:36:43.082409       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0123 13:36:43.082446       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0123 13:36:43.082472       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0123 13:36:43.082500       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0123 13:36:43.082525       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0123 13:36:43.082555       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0123 13:36:43.082599       1 base_controller.go:167] Shutting down LoggingSyncer ...\nW0123 13:36:43.082847       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 23 13:36:45.109 E ns/openshift-sdn pod/sdn-46n4l node/ip-10-0-212-33.us-west-1.compute.internal container/sdn reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 13:36:45.109 E ns/openshift-sdn pod/sdn-46n4l node/ip-10-0-212-33.us-west-1.compute.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 13:36:47.127 E ns/openshift-monitoring pod/node-exporter-mtqjk node/ip-10-0-212-33.us-west-1.compute.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 13:36:47.127 E ns/openshift-monitoring pod/node-exporter-mtqjk node/ip-10-0-212-33.us-west-1.compute.internal container/node-exporter reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 13:36:48.144 E ns/openshift-machine-config-operator pod/machine-config-daemon-bhtcf node/ip-10-0-212-33.us-west-1.compute.internal container/machine-config-daemon reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

Found in 7.14% of runs (9.09% of failures) across 14 total runs and 1 job (78.57% failed).
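
The three percentages are mutually consistent, assuming the integer counts implied by the summary (14 runs, 11 failed runs, 1 matching failure): impact is simply matching failures over total runs. A quick check:

    package main

    import "fmt"

    func main() {
        // Counts assumed from the summary above: 14 runs, 11 failures,
        // of which 1 failure matches this search.
        runs, failed, matched := 14.0, 11.0, 1.0

        fmt.Printf("failed: %.2f%%\n", failed/runs*100)    // 78.57% of runs failed
        fmt.Printf("match:  %.2f%%\n", matched/failed*100) // 9.09% of failures match
        fmt.Printf("impact: %.2f%%\n", matched/runs*100)   // 7.14% of runs impacted
    }
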