Job:
#1921157 bug 23 months ago [sig-api-machinery] Kubernetes APIs remain available for new connections ASSIGNED
T2: At 06:45:58: systemd-shutdown was sending SIGTERM to remaining processes...
T3: At 06:45:58: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Received signal to terminate, becoming unready, but keeping serving (TerminationStart event)
T4: At 06:47:08: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: The minimal shutdown duration of 1m10s finished (TerminationMinimalShutdownDurationFinished event)
T5: At 06:47:08: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Server has stopped listening (TerminationStoppedServing event)
T5 is the last event reported from that API server. After T5 the server may wait up to 60s for in-flight requests to complete, and only then does it fire the TerminationGracefulTerminationFinished event.
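To make the ordering above concrete, here is a minimal standalone Go sketch of the phases from T3 through TerminationGracefulTerminationFinished. This is not the k8s.io/apiserver implementation; the event names, the 1m10s minimal shutdown duration, and the 60s drain window are taken from the events above, while the handleTermination helper and the simulated in-flight request are purely illustrative assumptions.

package main

import (
	"fmt"
	"os"
	"os/signal"
	"sync"
	"syscall"
	"time"
)

// handleTermination models the ordering of the graceful-termination events
// seen above. The 1m10s minimal shutdown duration and the 60s drain timeout
// are the values reported in the log; everything else is illustrative.
func handleTermination(inFlight *sync.WaitGroup) {
	fmt.Println("TerminationStart: received signal, becoming unready, but keeping serving")

	// T3 -> T4: keep accepting requests for the minimal shutdown duration so
	// load balancers have time to notice the endpoint is going away.
	time.Sleep(70 * time.Second)
	fmt.Println("TerminationMinimalShutdownDurationFinished: the minimal shutdown duration of 1m10s finished")

	// T5: close the listener; no new connections are accepted from here on.
	fmt.Println("TerminationStoppedServing: server has stopped listening")

	// After T5: wait up to 60s for in-flight requests to drain.
	done := make(chan struct{})
	go func() { inFlight.Wait(); close(done) }()
	select {
	case <-done:
	case <-time.After(60 * time.Second):
	}
	fmt.Println("TerminationGracefulTerminationFinished: pending requests processed (or drain timed out)")
}

func main() {
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGINT)

	// Simulate one long-running request that is still in flight at shutdown.
	var inFlight sync.WaitGroup
	inFlight.Add(1)
	go func() { defer inFlight.Done(); time.Sleep(5 * time.Second) }()

	<-sigCh // block until SIGTERM arrives (T2/T3 in the timeline above)
	handleTermination(&inFlight)
}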
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-gcp-ovn-upgrade (all) - 33 runs, 48% failed, 25% of failures match = 12% impact
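As a sanity check on how the impact figure above is derived (assuming the tool rounds to whole runs): 48% of the 33 runs is 16 failed runs, 25% of those 16 failures is 4 matching runs, and 4 / 33 ≈ 12% impact, which matches the summary at the end of this section.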
#1619112719900741632 junit 3 days ago
Jan 28 00:21:30.802 E ns/openshift-ingress-operator pod/ingress-operator-665cf85bf-qg985 node/ci-op-svgy92xg-82914-dk4xf-master-0 container/ingress-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 28 00:21:33.819 E ns/openshift-insights pod/insights-operator-854449444c-7jdrc node/ci-op-svgy92xg-82914-dk4xf-master-0 container/insights-operator reason/ContainerExit code/2 cause/Error erAgent="Prometheus/2.29.2" audit-ID="1634a463-1d43-4517-ae7b-cd80ed21fbfe" srcIP="10.131.0.21:40764" resp=200\nI0128 00:20:22.817949       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="8.309749ms" userAgent="Prometheus/2.29.2" audit-ID="681fbf6d-95bf-4479-874a-737ee4c39327" srcIP="10.129.2.15:44040" resp=200\nI0128 00:20:24.888783       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="8.251549ms" userAgent="Prometheus/2.29.2" audit-ID="f0668665-758c-48f4-ab34-35097b516d0d" srcIP="10.131.0.21:40764" resp=200\nI0128 00:20:52.820844       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="11.844888ms" userAgent="Prometheus/2.29.2" audit-ID="c78bb160-2808-4e45-9d05-36e86e4d4700" srcIP="10.129.2.15:44040" resp=200\nI0128 00:20:54.885062       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.385879ms" userAgent="Prometheus/2.29.2" audit-ID="19151eb4-d664-4e1b-b14d-d5cc83b3c938" srcIP="10.131.0.21:40764" resp=200\nI0128 00:21:08.962704       1 reflector.go:535] k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172: Watch close - *v1.ConfigMap total 9 items received\nI0128 00:21:14.083501       1 configobserver.go:77] Refreshing configuration from cluster pull secret\nI0128 00:21:14.088843       1 configobserver.go:102] Found cloud.openshift.com token\nI0128 00:21:14.088888       1 configobserver.go:120] Refreshing configuration from cluster secret\nI0128 00:21:22.822345       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="12.360208ms" userAgent="Prometheus/2.29.2" audit-ID="abc120e4-b918-4713-a402-a9c0bf6bf37d" srcIP="10.129.2.15:44040" resp=200\nI0128 00:21:24.971430       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="82.791796ms" userAgent="Prometheus/2.29.2" audit-ID="43655976-d378-4690-854e-6eda2c73eba7" srcIP="10.131.0.21:40764" resp=200\nI0128 00:21:31.000992       1 reflector.go:535] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Watch close - *v1.ConfigMap total 10 items received\n
Jan 28 00:21:33.819 E ns/openshift-insights pod/insights-operator-854449444c-7jdrc node/ci-op-svgy92xg-82914-dk4xf-master-0 container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 28 00:21:50.684 E ns/openshift-image-registry pod/cluster-image-registry-operator-864d6d8695-gd42x node/ci-op-svgy92xg-82914-dk4xf-master-0 container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 28 00:21:51.528 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator authentication is updating versions\n* Cluster operator cloud-credential is updating versions\n* Cluster operator cluster-autoscaler is updating versions\n* Cluster operator console is updating versions\n* Cluster operator csi-snapshot-controller is updating versions\n* Cluster operator image-registry is updating versions\n* Cluster operator ingress is updating versions\n* Cluster operator insights is updating versions\n* Cluster operator kube-storage-version-migrator is updating versions\n* Cluster operator machine-approver is updating versions\n* Cluster operator monitoring is updating versions\n* Cluster operator node-tuning is updating versions\n* Cluster operator openshift-apiserver is updating versions\n* Cluster operator openshift-controller-manager is updating versions\n* Cluster operator openshift-samples is updating versions\n* Cluster operator storage is updating versions
Jan 28 00:21:52.096 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-bgwqk node/ci-op-svgy92xg-82914-dk4xf-master-1 container/console-operator reason/ContainerExit code/1 cause/Error 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-bgwqk", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0128 00:21:48.337030       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0128 00:21:48.337121       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0128 00:21:48.337156       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0128 00:21:48.337169       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0128 00:21:48.337185       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0128 00:21:48.337198       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0128 00:21:48.337210       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0128 00:21:48.337226       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0128 00:21:48.337232       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0128 00:21:48.337244       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0128 00:21:48.337255       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0128 00:21:48.337266       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0128 00:21:48.337279       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0128 00:21:48.337298       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0128 00:21:48.337312       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0128 00:21:48.337324       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nW0128 00:21:48.337507       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 28 00:22:02.297 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-9s77f node/ci-op-svgy92xg-82914-dk4xf-master-0 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error ssing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2023-01-27T23:36:47Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2023-01-27T23:34:29Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0128 00:21:52.435033       1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"20a63290-8a8e-43a6-ba60-d3f90a117f6d", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: daemonset/controller-manager: observed generation is 7, desired generation is 8.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4.")\nI0128 00:21:54.208972       1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.RoleBinding total 10 items received\nI0128 00:22:00.640851       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0128 00:22:00.641424       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0128 00:22:00.641646       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0128 00:22:00.641659       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0128 00:22:00.641683       1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0128 00:22:00.643041       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nW0128 00:22:00.641687       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 28 00:22:02.297 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-9s77f node/ci-op-svgy92xg-82914-dk4xf-master-0 container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 28 00:22:03.992 E ns/openshift-authentication-operator pod/authentication-operator-57868976d6-x57br node/ci-op-svgy92xg-82914-dk4xf-master-0 container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 28 00:22:04.509 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-mtz4s node/ci-op-svgy92xg-82914-dk4xf-master-0 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error ontroller ...\nI0128 00:22:01.055709       1 base_controller.go:104] All StatusSyncer_storage workers have been terminated\nI0128 00:22:01.054269       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0128 00:22:01.055820       1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0128 00:22:01.054274       1 base_controller.go:114] Shutting down worker of DefaultStorageClassController controller ...\nI0128 00:22:01.055900       1 base_controller.go:104] All DefaultStorageClassController workers have been terminated\nI0128 00:22:01.054281       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0128 00:22:01.056072       1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0128 00:22:01.054286       1 base_controller.go:114] Shutting down worker of VSphereProblemDetectorStarter controller ...\nI0128 00:22:01.056165       1 base_controller.go:104] All VSphereProblemDetectorStarter workers have been terminated\nI0128 00:22:01.054291       1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0128 00:22:01.056227       1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0128 00:22:01.056270       1 controller_manager.go:54] StaticResourceController controller terminated\nI0128 00:22:01.054296       1 base_controller.go:114] Shutting down worker of GCPPDCSIDriverOperatorDeployment controller ...\nI0128 00:22:01.056322       1 base_controller.go:104] All GCPPDCSIDriverOperatorDeployment workers have been terminated\nI0128 00:22:01.056353       1 controller_manager.go:54] GCPPDCSIDriverOperatorDeployment controller terminated\nI0128 00:22:01.054439       1 base_controller.go:114] Shutting down worker of SnapshotCRDController controller ...\nI0128 00:22:01.056399       1 base_controller.go:104] All SnapshotCRDController workers have been terminated\nW0128 00:22:01.054591       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 28 00:22:04.509 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-mtz4s node/ci-op-svgy92xg-82914-dk4xf-master-0 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1617808075513663488 junit 6 days ago
Jan 24 10:01:12.961 E ns/openshift-authentication-operator pod/authentication-operator-57868976d6-lngrw node/ci-op-023gmii6-82914-nxmgq-master-2 container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 10:01:13.865 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-dt4d7 node/ci-op-023gmii6-82914-nxmgq-master-2 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error otal 10 items received\nI0124 10:00:57.724573       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="211.761188ms" userAgent="Prometheus/2.29.2" audit-ID="0e3cbf36-cc91-4392-ace1-1c64fb7b1662" srcIP="10.128.2.19:57584" resp=200\nI0124 10:01:02.265494       1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.Role total 9 items received\nI0124 10:01:07.269051       1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.Service total 10 items received\nI0124 10:01:11.961923       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0124 10:01:11.963637       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0124 10:01:11.977869       1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0124 10:01:11.977901       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0124 10:01:11.977921       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0124 10:01:11.977955       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0124 10:01:11.977972       1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0124 10:01:11.978064       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nI0124 10:01:12.001662       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0124 10:01:12.002142       1 reflector.go:225] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 10:01:12.061088       1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0124 10:01:12.002263       1 reflector.go:225] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nW0124 10:01:12.065747       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 24 10:01:13.865 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-dt4d7 node/ci-op-023gmii6-82914-nxmgq-master-2 container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 10:01:13.919 E ns/openshift-insights pod/insights-operator-854449444c-dv67t node/ci-op-023gmii6-82914-nxmgq-master-2 container/insights-operator reason/ContainerExit code/2 cause/Error 11.722353ms" userAgent="Prometheus/2.29.2" audit-ID="6870c3ad-851b-44d3-b048-dc8a1e2995d0" srcIP="10.128.2.19:58860" resp=200\nI0124 09:59:32.606048       1 status.go:354] The operator is healthy\nI0124 09:59:32.606245       1 status.go:441] No status update necessary, objects are identical\nI0124 09:59:51.788480       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="23.828123ms" userAgent="Prometheus/2.29.2" audit-ID="5d532e94-586a-4ea3-84f5-5fdc09a468f6" srcIP="10.131.0.19:43610" resp=200\nI0124 09:59:55.768019       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.279843ms" userAgent="Prometheus/2.29.2" audit-ID="d43f3231-7cc3-4822-a5cc-d5dfe10f9efd" srcIP="10.128.2.19:58860" resp=200\nI0124 10:00:17.262001       1 configobserver.go:77] Refreshing configuration from cluster pull secret\nI0124 10:00:17.268780       1 configobserver.go:102] Found cloud.openshift.com token\nI0124 10:00:17.268843       1 configobserver.go:120] Refreshing configuration from cluster secret\nI0124 10:00:21.776554       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="12.311907ms" userAgent="Prometheus/2.29.2" audit-ID="c3a4186c-4769-4fe5-9097-964d3cafd781" srcIP="10.131.0.19:43610" resp=200\nI0124 10:00:25.773398       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="10.619848ms" userAgent="Prometheus/2.29.2" audit-ID="641a4713-a9f1-4e78-8f2e-0d2da4335858" srcIP="10.128.2.19:58860" resp=200\nI0124 10:00:51.792943       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="27.905848ms" userAgent="Prometheus/2.29.2" audit-ID="8264e7e5-09f1-4001-b452-76d1641f7716" srcIP="10.131.0.19:43610" resp=200\nI0124 10:00:55.823827       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="59.917435ms" userAgent="Prometheus/2.29.2" audit-ID="592eaf0d-433c-47c1-b13d-bd6efb70dd5e" srcIP="10.128.2.19:58860" resp=200\nI0124 10:01:00.601428       1 reflector.go:535] k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172: Watch close - *v1.ConfigMap total 9 items received\n
Jan 24 10:01:13.919 E ns/openshift-insights pod/insights-operator-854449444c-dv67t node/ci-op-023gmii6-82914-nxmgq-master-2 container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 10:01:27.615 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-4mzdb node/ci-op-023gmii6-82914-nxmgq-master-0 container/console-operator reason/ContainerExit code/1 cause/Error rminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0124 10:01:24.750144       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0124 10:01:24.750213       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-4mzdb", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0124 10:01:24.750278       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-4mzdb", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0124 10:01:24.750365       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0124 10:01:24.750664       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-4mzdb", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0124 10:01:24.750813       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0124 10:01:24.754021       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0124 10:01:24.754021       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0124 10:01:24.754374       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0124 10:01:24.754458       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0124 10:01:24.754514       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nW0124 10:01:24.754621       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 24 10:01:27.772 E ns/openshift-image-registry pod/cluster-image-registry-operator-864d6d8695-nwdkh node/ci-op-023gmii6-82914-nxmgq-master-2 container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 10:01:28.894 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-hh94w node/ci-op-023gmii6-82914-nxmgq-master-2 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error 124 09:37:54.210862       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0124 09:40:54.185917       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0124 09:44:29.748990       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0124 09:54:07.679788       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0124 09:54:29.751323       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0124 10:00:54.170607       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0124 10:01:22.994553       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0124 10:01:22.995162       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0124 10:01:22.995236       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0124 10:01:23.065578       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0124 10:01:23.065636       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0124 10:01:23.065671       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nI0124 10:01:23.065706       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0124 10:01:23.065714       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0124 10:01:23.065732       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0124 10:01:23.065748       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0124 10:01:23.065766       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0124 10:01:23.065781       1 base_controller.go:167] Shutting down GCPPDCSIDriverOperatorDeployment ...\nI0124 10:01:23.065788       1 base_controller.go:145] All GCPPDCSIDriverOperatorDeployment post start hooks have been terminated\nW0124 10:01:23.065980       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 24 10:01:28.894 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-hh94w node/ci-op-023gmii6-82914-nxmgq-master-2 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 10:01:48.637 E ns/openshift-controller-manager pod/controller-manager-9n287 node/ci-op-023gmii6-82914-nxmgq-master-1 container/controller-manager reason/ContainerExit code/137 cause/Error ontrollers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0124 09:19:42.720525       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0124 09:24:09.556638       1 webhook.go:155] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0124 09:24:09.558019       1 webhook.go:224] Failed to make webhook authorizer request: Post "https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0124 09:24:09.558191       1 errors.go:77] Post "https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0124 09:24:15.165535       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0124 09:24:17.516682       1 webhook.go:155] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0124 09:24:17.517853       1 webhook.go:224] Failed to make webhook authorizer request: Post "https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0124 09:24:17.518011       1 errors.go:77] Post "https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews": dial tcp 172.30.0.1:443: connect: connection refused\n
Jan 24 10:01:49.966 E ns/openshift-controller-manager pod/controller-manager-7ps2h node/ci-op-023gmii6-82914-nxmgq-master-0 container/controller-manager reason/ContainerExit code/137 cause/Error account:openshift-controller-manager:openshift-controller-manager-sa" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0124 09:24:24.976878       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-controller-manager:openshift-controller-manager-sa" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0124 09:24:24.999278       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.Build: unknown (get builds.build.openshift.io)\nE0124 10:01:41.302402       1 imagestream_controller.go:136] Error syncing image stream "openshift/java": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "java": the object has been modified; please apply your changes to the latest version and try again\nE0124 10:01:41.345661       1 imagestream_controller.go:136] Error syncing image stream "openshift/ubi8-openjdk-11": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "ubi8-openjdk-11": the object has been modified; please apply your changes to the latest version and try again\nE0124 10:01:41.364855       1 imagestream_controller.go:136] Error syncing image stream "openshift/rhpam-businesscentral-rhel8": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "rhpam-businesscentral-rhel8": the object has been modified; please apply your changes to the latest version and try again\nE0124 10:01:41.902655       1 imagestream_controller.go:136] Error syncing image stream "openshift/jenkins-agent-base": Operation cannot be fulfilled on imagestream.image.openshift.io "jenkins-agent-base": the image stream was updated from "46516" to "46521"\nE0124 10:01:41.964501       1 imagestream_controller.go:136] Error syncing image stream "openshift/fuse7-console": Operation cannot be fulfilled on imagestream.image.openshift.io "fuse7-console": the image stream was updated from "46524" to "46532"\n
#1617851120162443264 junit 6 days ago
Jan 24 12:51:38.547 E ns/openshift-monitoring pod/prometheus-operator-6594997947-fhrss node/ci-op-s3nlt6cx-82914-vphqr-master-2 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 12:51:38.590 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-s3nlt6cx-82914-vphqr-worker-a-x5szc container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/24 12:10:07 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/24 12:10:07 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/24 12:10:07 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/24 12:10:07 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/24 12:10:07 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/24 12:10:07 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/24 12:10:07 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2023/01/24 12:10:07 http.go:107: HTTPS: listening on [::]:9091\nI0124 12:10:07.287947       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nE0124 12:12:07.847099       1 reflector.go:127] github.com/openshift/oauth-proxy/providers/openshift/provider.go:347: Failed to watch *v1.ConfigMap: unknown (get configmaps)\n
Jan 24 12:51:38.590 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-s3nlt6cx-82914-vphqr-worker-a-x5szc container/config-reloader reason/ContainerExit code/2 cause/Error :06.054213899Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-24T12:10:06.054378619Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-24T12:10:06.856737884Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-24T12:10:06.856871989Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-24T12:10:12.499721581Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-24T12:12:53.987177003Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-24T12:17:55.985926411Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nle
Jan 24 12:51:38.939 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-s3nlt6cx-82914-vphqr-worker-a-x5szc container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/01/24 12:09:59 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/24 12:09:59 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/24 12:09:59 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/24 12:09:59 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/01/24 12:09:59 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/24 12:09:59 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/24 12:09:59 http.go:107: HTTPS: listening on [::]:9095\nI0124 12:09:59.284605       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jan 24 12:51:38.939 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-s3nlt6cx-82914-vphqr-worker-a-x5szc container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-24T12:09:58.667376567Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-24T12:09:58.671573325Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-24T12:09:58.672268548Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-01-24T12:09:58.672040492Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-24T12:10:00.409722782Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Jan 24 12:51:38.997 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-pxzgc node/ci-op-s3nlt6cx-82914-vphqr-master-2 container/console-operator reason/ContainerExit code/1 cause/Error Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-pxzgc", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0124 12:51:31.426798       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-pxzgc", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0124 12:51:31.426815       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0124 12:51:31.426834       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-pxzgc", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0124 12:51:31.426850       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0124 12:51:31.483606       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0124 12:51:31.483655       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0124 12:51:31.483673       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0124 12:51:31.483700       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0124 12:51:31.483718       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nW0124 12:51:31.485559       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 24 12:51:40.621 E ns/openshift-monitoring pod/openshift-state-metrics-7cff956b85-wb4z2 node/ci-op-s3nlt6cx-82914-vphqr-worker-a-x5szc container/openshift-state-metrics reason/ContainerExit code/2 cause/Error
Jan 24 12:51:41.382 E ns/openshift-monitoring pod/telemeter-client-84d6b6b5b6-5br4v node/ci-op-s3nlt6cx-82914-vphqr-worker-a-x5szc container/reload reason/ContainerExit code/2 cause/Error
Jan 24 12:51:41.382 E ns/openshift-monitoring pod/telemeter-client-84d6b6b5b6-5br4v node/ci-op-s3nlt6cx-82914-vphqr-worker-a-x5szc container/telemeter-client reason/ContainerExit code/2 cause/Error
Jan 24 12:51:41.694 E ns/openshift-monitoring pod/node-exporter-hlzbj node/ci-op-s3nlt6cx-82914-vphqr-master-2 container/node-exporter reason/ContainerExit code/143 cause/Error 4T12:02:38.934Z caller=node_exporter.go:113 collector=meminfo\nlevel=info ts=2023-01-24T12:02:38.934Z caller=node_exporter.go:113 collector=netclass\nlevel=info ts=2023-01-24T12:02:38.934Z caller=node_exporter.go:113 collector=netdev\nlevel=info ts=2023-01-24T12:02:38.934Z caller=node_exporter.go:113 collector=netstat\nlevel=info ts=2023-01-24T12:02:38.934Z caller=node_exporter.go:113 collector=nfs\nlevel=info ts=2023-01-24T12:02:38.934Z caller=node_exporter.go:113 collector=nfsd\nlevel=info ts=2023-01-24T12:02:38.934Z caller=node_exporter.go:113 collector=powersupplyclass\nlevel=info ts=2023-01-24T12:02:38.934Z caller=node_exporter.go:113 collector=pressure\nlevel=info ts=2023-01-24T12:02:38.934Z caller=node_exporter.go:113 collector=rapl\nlevel=info ts=2023-01-24T12:02:38.934Z caller=node_exporter.go:113 collector=schedstat\nlevel=info ts=2023-01-24T12:02:38.934Z caller=node_exporter.go:113 collector=sockstat\nlevel=info ts=2023-01-24T12:02:38.934Z caller=node_exporter.go:113 collector=softnet\nlevel=info ts=2023-01-24T12:02:38.934Z caller=node_exporter.go:113 collector=stat\nlevel=info ts=2023-01-24T12:02:38.934Z caller=node_exporter.go:113 collector=textfile\nlevel=info ts=2023-01-24T12:02:38.936Z caller=node_exporter.go:113 collector=thermal_zone\nlevel=info ts=2023-01-24T12:02:38.936Z caller=node_exporter.go:113 collector=time\nlevel=info ts=2023-01-24T12:02:38.936Z caller=node_exporter.go:113 collector=timex\nlevel=info ts=2023-01-24T12:02:38.936Z caller=node_exporter.go:113 collector=udp_queues\nlevel=info ts=2023-01-24T12:02:38.936Z caller=node_exporter.go:113 collector=uname\nlevel=info ts=2023-01-24T12:02:38.936Z caller=node_exporter.go:113 collector=vmstat\nlevel=info ts=2023-01-24T12:02:38.936Z caller=node_exporter.go:113 collector=xfs\nlevel=info ts=2023-01-24T12:02:38.936Z caller=node_exporter.go:113 collector=zfs\nlevel=info ts=2023-01-24T12:02:38.936Z caller=node_exporter.go:195 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2023-01-24T12:02:38.937Z caller=tls_config.go:191 msg="TLS is disabled." http2=false\n
Jan 24 12:51:42.354 E ns/openshift-ingress-canary pod/ingress-canary-56hzw node/ci-op-s3nlt6cx-82914-vphqr-worker-a-x5szc container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
#1615864578900496384 junit 12 days ago
Jan 19 01:12:52.261 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-c859f7bd5-xnbb5 node/ci-op-6v5s22ds-82914-4ptf4-master-0 container/kube-storage-version-migrator-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 01:12:59.411 E ns/openshift-ingress-operator pod/ingress-operator-665cf85bf-5xlnd node/ci-op-6v5s22ds-82914-4ptf4-master-0 container/ingress-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 01:13:02.575 E ns/openshift-insights pod/insights-operator-854449444c-6rc29 node/ci-op-6v5s22ds-82914-4ptf4-master-0 container/insights-operator reason/ContainerExit code/2 cause/Error  httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.78066ms" userAgent="Prometheus/2.29.2" audit-ID="c0cdd661-1e62-477c-92de-324b802f1441" srcIP="10.128.2.13:39684" resp=200\nI0119 01:11:09.428637       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.9899ms" userAgent="Prometheus/2.29.2" audit-ID="ce6d5c6d-2b63-4d12-b9b9-c0c6262880c9" srcIP="10.129.2.13:47298" resp=200\nI0119 01:11:29.263503       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="11.704919ms" userAgent="Prometheus/2.29.2" audit-ID="41ddeaa7-f8be-4d86-9f7f-541481027349" srcIP="10.128.2.13:39684" resp=200\nI0119 01:11:39.435925       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="11.03364ms" userAgent="Prometheus/2.29.2" audit-ID="9f91c478-c10d-4015-ad81-bca1622247f5" srcIP="10.129.2.13:47298" resp=200\nI0119 01:11:59.257105       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.771531ms" userAgent="Prometheus/2.29.2" audit-ID="efe6e9f6-c18d-4955-9c81-7c7e271e2ecb" srcIP="10.128.2.13:39684" resp=200\nI0119 01:12:09.429008       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.00068ms" userAgent="Prometheus/2.29.2" audit-ID="807e9ae9-9588-43f6-877f-98c6abe77413" srcIP="10.129.2.13:47298" resp=200\nI0119 01:12:09.938970       1 status.go:354] The operator is healthy\nI0119 01:12:09.939067       1 status.go:441] No status update necessary, objects are identical\nI0119 01:12:29.263984       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="11.95801ms" userAgent="Prometheus/2.29.2" audit-ID="fc43af2f-f8cb-4cae-85b6-ded6527457df" srcIP="10.128.2.13:39684" resp=200\nI0119 01:12:39.434887       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="9.60623ms" userAgent="Prometheus/2.29.2" audit-ID="28e2ccd8-1fcb-416f-a6be-dc07a141dc92" srcIP="10.129.2.13:47298" resp=200\nI0119 01:12:59.257028       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.694329ms" userAgent="Prometheus/2.29.2" audit-ID="41b32997-27b0-4939-96ce-8c6549f97120" srcIP="10.128.2.13:39684" resp=200\n
Jan 19 01:13:18.751 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-f7hs9 node/ci-op-6v5s22ds-82914-4ptf4-master-0 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error ng\nI0119 00:55:08.360610       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0119 00:57:34.540184       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0119 00:59:15.894596       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0119 01:07:32.737813       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0119 01:09:15.894907       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0119 01:13:17.318370       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0119 01:13:17.318952       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0119 01:13:17.319146       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0119 01:13:17.319251       1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0119 01:13:17.319270       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0119 01:13:17.319279       1 base_controller.go:167] Shutting down GCPPDCSIDriverOperatorDeployment ...\nI0119 01:13:17.335537       1 base_controller.go:145] All GCPPDCSIDriverOperatorDeployment post start hooks have been terminated\nI0119 01:13:17.319289       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0119 01:13:17.319298       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0119 01:13:17.319309       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0119 01:13:17.319319       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nI0119 01:13:17.319329       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0119 01:13:17.319352       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0119 01:13:17.335571       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nW0119 01:13:17.319387       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 19 01:13:18.751 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-f7hs9 node/ci-op-6v5s22ds-82914-4ptf4-master-0 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 01:13:24.667 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-hpw45 node/ci-op-6v5s22ds-82914-4ptf4-master-2 container/console-operator reason/ContainerExit code/1 cause/Error 0119 01:13:21.565866       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0119 01:13:21.566150       1 genericapiserver.go:398] [graceful-termination] RunPreShutdownHooks has completed\nI0119 01:13:21.566222       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-hpw45", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0119 01:13:21.566265       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0119 01:13:21.566339       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-hpw45", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0119 01:13:21.566408       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-hpw45", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0119 01:13:21.566429       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0119 01:13:21.566453       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-hpw45", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0119 01:13:21.566479       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nW0119 01:13:21.566606       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 19 01:13:32.942 E ns/openshift-authentication-operator pod/authentication-operator-57868976d6-nnf2t node/ci-op-6v5s22ds-82914-4ptf4-master-0 container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 01:13:33.108 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-kcrgw node/ci-op-6v5s22ds-82914-4ptf4-master-0 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error er of StatusSyncer_openshift-controller-manager controller ...\nI0119 01:13:30.824786       1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0119 01:13:30.818234       1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0119 01:13:30.824830       1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0119 01:13:30.818239       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0119 01:13:30.824875       1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0119 01:13:30.818285       1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ...\nI0119 01:13:30.824921       1 base_controller.go:104] All UserCAObservationController workers have been terminated\nI0119 01:13:30.818289       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0119 01:13:30.824962       1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0119 01:13:30.825080       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0119 01:13:30.825117       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0119 01:13:30.825164       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0119 01:13:30.825273       1 reflector.go:225] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0119 01:13:30.825316       1 reflector.go:225] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0119 01:13:30.825358       1 reflector.go:225] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nW0119 01:13:30.843559       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 19 01:13:33.108 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-kcrgw node/ci-op-6v5s22ds-82914-4ptf4-master-0 container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 01:13:40.140 E ns/openshift-monitoring pod/cluster-monitoring-operator-894d44997-b7kh7 node/ci-op-6v5s22ds-82914-4ptf4-master-0 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 01:13:40.867 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator authentication is updating versions\n* Cluster operator cloud-credential is updating versions\n* Cluster operator cluster-autoscaler is updating versions\n* Cluster operator console is updating versions\n* Cluster operator csi-snapshot-controller is updating versions\n* Cluster operator image-registry is updating versions\n* Cluster operator ingress is updating versions\n* Cluster operator kube-storage-version-migrator is updating versions\n* Cluster operator machine-approver is updating versions\n* Cluster operator monitoring is updating versions\n* Cluster operator node-tuning is updating versions\n* Cluster operator openshift-apiserver is updating versions\n* Cluster operator openshift-controller-manager is updating versions\n* Cluster operator openshift-samples is updating versions\n* Could not update deployment "openshift-marketplace/marketplace-operator" (601 of 773)\n* Could not update olmconfig "cluster" (578 of 773)

Found in 12.12% of runs (25.00% of failures) across 33 total runs and 1 job (48.48% failed)