Job:
#1921157 bug 23 months ago [sig-api-machinery] Kubernetes APIs remain available for new connections ASSIGNED
T2: At 06:45:58: systemd-shutdown was sending SIGTERM to remaining processes...
T3: At 06:45:58: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Received signal to terminate, becoming unready, but keeping serving (TerminationStart event)
T4: At 06:47:08: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: The minimal shutdown duration of 1m10s finished (TerminationMinimalShutdownDurationFinished event)
T5: At 06:47:08: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Server has stopped listening (TerminationStoppedServing event)
T5 is the last event reported from that apiserver. After T5 the server may wait up to 60s for in-flight requests to complete, and only then fires the TerminationGracefulTerminationFinished event.
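The T2-T5 sequence corresponds to a fixed ordering of lifecycle events during apiserver shutdown: stay serving for the minimal shutdown duration, stop listening, then drain in-flight requests before the final event. The sketch below is only a minimal illustration of that ordering, not the actual k8s.io/apiserver code; the shortened durations, the emit helper, and the channel wiring are invented for the example.

```go
// Illustrative ordering of the termination events quoted above (T3 -> T4/T5 -> graceful finish).
// NOT the real apiserver implementation; durations and helpers are assumptions for the sketch.
package main

import (
	"fmt"
	"time"
)

func emit(reason, msg string) {
	fmt.Printf("%s  reason=%q  %s\n", time.Now().Format("15:04:05"), reason, msg)
}

func main() {
	shutdownDelay := 2 * time.Second       // stands in for the 1m10s minimal shutdown duration
	requestDrainTimeout := 1 * time.Second // stands in for the up-to-60s wait for in-flight requests

	sigterm := make(chan struct{})
	go func() { time.Sleep(500 * time.Millisecond); close(sigterm) }() // simulated SIGTERM (T2)

	<-sigterm
	emit("TerminationStart", "Received signal to terminate, becoming unready, but keeping serving") // T3

	time.Sleep(shutdownDelay)
	emit("TerminationMinimalShutdownDurationFinished", "The minimal shutdown duration finished") // T4
	emit("TerminationStoppedServing", "Server has stopped listening")                            // T5

	// After T5 the server only keeps running to drain in-flight requests,
	// bounded by the drain timeout; then the final event fires.
	time.Sleep(requestDrainTimeout)
	emit("TerminationGracefulTerminationFinished", "All pending requests processed")
}
```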
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-aws-ovn-upgrade (all) - 70 runs, 57% failed, 33% of failures match = 19% impact
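(For reference, the impact figure appears to be the product of the failure rate and the match rate: 0.57 × 0.33 ≈ 0.19, i.e. roughly 13 of the 70 runs both failed and matched this signature.)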
#1620254489955012608 junit 5 hours ago
Jan 31 04:06:13.580 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-c859f7bd5-vfsbt node/ip-10-0-130-150.us-west-2.compute.internal container/kube-storage-version-migrator-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 31 04:06:22.508 E ns/openshift-ingress-operator pod/ingress-operator-665cf85bf-djdmq node/ip-10-0-130-150.us-west-2.compute.internal container/ingress-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 31 04:06:25.764 E ns/openshift-insights pod/insights-operator-854449444c-kt5p6 node/ip-10-0-130-150.us-west-2.compute.internal container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 31 04:06:32.866 E ns/openshift-machine-api pod/cluster-autoscaler-operator-5b4cbb5884-rsxsw node/ip-10-0-130-150.us-west-2.compute.internal container/cluster-autoscaler-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 31 04:06:36.889 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-76f948cf74-vrcfg node/ip-10-0-130-150.us-west-2.compute.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error 5.911221       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0131 04:06:35.911234       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0131 04:06:35.911307       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0131 04:06:35.911361       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0131 04:06:35.911367       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0131 04:06:35.911378       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0131 04:06:35.911386       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0131 04:06:35.911389       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0131 04:06:35.911398       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0131 04:06:35.911407       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0131 04:06:35.911479       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0131 04:06:35.911561       1 secure_serving.go:311] Stopped listening on [::]:8443\nI0131 04:06:35.911583       1 genericapiserver.go:363] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"\nI0131 04:06:35.911585       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nW0131 04:06:35.911601       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0131 04:06:35.911602       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/tmp/serving-cert-002794869/tls.crt::/tmp/serving-cert-002794869/tls.key"\n
Jan 31 04:06:46.313 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-6jqt7 node/ip-10-0-178-14.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error eping serving\nI0131 04:06:44.561226       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-6jqt7", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0131 04:06:44.561237       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0131 04:06:44.561251       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-6jqt7", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0131 04:06:44.561265       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0131 04:06:44.561270       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0131 04:06:44.561316       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0131 04:06:44.561339       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0131 04:06:44.561346       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0131 04:06:44.561353       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0131 04:06:44.561362       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0131 04:06:44.561370       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0131 04:06:44.561377       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nW0131 04:06:44.561385       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0131 04:06:44.561392       1 base_controller.go:114] Shutting down worker of DownloadsRouteController controller ...\nI0131 04:06:44.561395       1 base_controller.go:167] Shutting down ConsoleOperator ...\n
Jan 31 04:06:52.941 E ns/openshift-authentication-operator pod/authentication-operator-57868976d6-tqdcw node/ip-10-0-130-150.us-west-2.compute.internal container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 31 04:06:54.313 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-6b4cbf84ff-4wzfn node/ip-10-0-130-150.us-west-2.compute.internal container/cluster-node-tuning-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 31 04:06:55.049 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-cdrpx node/ip-10-0-130-150.us-west-2.compute.internal container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error 7\nI0131 04:06:50.220295       1 reflector.go:225] Stopping reflector *v1.Service (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 04:06:50.220342       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 04:06:50.220391       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 04:06:50.222318       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 04:06:50.222398       1 reflector.go:225] Stopping reflector *v1.Proxy (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 04:06:50.222465       1 reflector.go:225] Stopping reflector *v1.Deployment (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 04:06:50.222517       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 04:06:50.222555       1 reflector.go:225] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 04:06:50.222630       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0131 04:06:50.222663       1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0131 04:06:50.222692       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0131 04:06:50.222719       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0131 04:06:50.222752       1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0131 04:06:50.222775       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0131 04:06:50.222802       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nW0131 04:06:50.223110       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 31 04:06:55.049 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-cdrpx node/ip-10-0-130-150.us-west-2.compute.internal container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 31 04:06:55.532 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-bl8xm node/ip-10-0-130-150.us-west-2.compute.internal container/cluster-storage-operator reason/ContainerExit code/1 cause/Error ss gp2 found, reconciling\nI0131 03:41:43.159106       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0131 03:47:42.501959       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0131 03:56:00.901193       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0131 03:57:42.502615       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0131 04:01:43.159593       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0131 04:06:51.936727       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0131 04:06:51.937123       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0131 04:06:51.937151       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0131 04:06:51.961795       1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0131 04:06:51.961897       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0131 04:06:51.961927       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0131 04:06:51.961958       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0131 04:06:51.961978       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0131 04:06:51.962000       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0131 04:06:51.962023       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0131 04:06:51.962047       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0131 04:06:51.962069       1 base_controller.go:167] Shutting down AWSEBSCSIDriverOperatorDeployment ...\nI0131 04:06:51.962085       1 base_controller.go:145] All AWSEBSCSIDriverOperatorDeployment post start hooks have been terminated\nI0131 04:06:51.962106       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nW0131 04:06:51.962394       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
#1619132915797463040 junit 3 days ago
Jan 28 02:13:42.374 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-6b4cbf84ff-vtj8d node/ip-10-0-233-188.ec2.internal container/cluster-node-tuning-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 28 02:13:42.714 E ns/openshift-machine-api pod/cluster-autoscaler-operator-5b4cbb5884-rlwfk node/ip-10-0-233-188.ec2.internal container/cluster-autoscaler-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 28 02:13:44.526 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-987f7bc9c-khp4w node/ip-10-0-141-79.ec2.internal container/webhook reason/ContainerExit code/2 cause/Error
Jan 28 02:13:45.912 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-758f5b59c5-xclmb node/ip-10-0-233-188.ec2.internal container/snapshot-controller reason/ContainerExit code/2 cause/Error
Jan 28 02:13:47.871 E ns/openshift-monitoring pod/cluster-monitoring-operator-894d44997-96s4l node/ip-10-0-233-188.ec2.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 28 02:13:50.745 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-nfszl node/ip-10-0-131-184.ec2.internal container/console-operator reason/ContainerExit code/1 cause/Error       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0128 02:13:38.831207       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-nfszl", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0128 02:13:38.831256       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-nfszl", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0128 02:13:38.831284       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0128 02:13:38.831269       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0128 02:13:38.831288       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0128 02:13:38.831278       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0128 02:13:38.831289       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0128 02:13:38.831295       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0128 02:13:38.831303       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0128 02:13:38.831310       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0128 02:13:38.831318       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0128 02:13:38.831319       1 base_controller.go:114] Shutting down worker of HealthCheckController controller ...\nI0128 02:13:38.831513       1 base_controller.go:104] All HealthCheckController workers have been terminated\nW0128 02:13:38.831246       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 28 02:13:50.745 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-nfszl node/ip-10-0-131-184.ec2.internal container/console-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 28 02:13:55.945 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-253-146.ec2.internal container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/28 01:19:05 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/28 01:19:05 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/28 01:19:05 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/28 01:19:05 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/28 01:19:05 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/28 01:19:05 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/28 01:19:05 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0128 01:19:05.434277       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/28 01:19:05 http.go:107: HTTPS: listening on [::]:9091\n
Jan 28 02:13:55.945 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-253-146.ec2.internal container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-28T01:19:04.974116087Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-28T01:19:04.974160478Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-28T01:19:04.974272931Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-28T01:19:05.355773692Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-28T01:19:05.356804167Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-28T01:20:31.140621807Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-28T01:41:53.996033216Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\n
Jan 28 02:13:56.259 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-89.ec2.internal container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/28 01:19:03 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/28 01:19:03 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/28 01:19:03 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/28 01:19:03 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/28 01:19:03 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/28 01:19:03 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/28 01:19:03 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0128 01:19:03.222825       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/28 01:19:03 http.go:107: HTTPS: listening on [::]:9091\n
Jan 28 02:13:56.259 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-89.ec2.internal container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-28T01:19:02.7756813Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-28T01:19:02.775731701Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-28T01:19:02.775854353Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-28T01:19:03.162285183Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-28T01:19:03.162381164Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-28T01:20:19.372208815Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-28T01:42:25.291182549Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\n
#1618808039047958528 junit 4 days ago
Jan 27 04:20:27.458 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-xk8xj node/ip-10-0-138-106.us-west-1.compute.internal container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 27 04:20:27.672 E ns/openshift-kube-storage-version-migrator pod/migrator-5554c9565f-tw5n9 node/ip-10-0-149-12.us-west-1.compute.internal container/migrator reason/ContainerExit code/2 cause/Error I0127 03:24:38.985026       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0127 03:24:38.985100       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0127 03:24:38.985105       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0127 03:24:38.985109       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0127 03:24:38.985130       1 migrator.go:18] FLAG: --kubeconfig=""\nI0127 03:24:38.985134       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0127 03:24:38.985139       1 migrator.go:18] FLAG: --log_dir=""\nI0127 03:24:38.985142       1 migrator.go:18] FLAG: --log_file=""\nI0127 03:24:38.985145       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0127 03:24:38.985148       1 migrator.go:18] FLAG: --logtostderr="true"\nI0127 03:24:38.985151       1 migrator.go:18] FLAG: --one_output="false"\nI0127 03:24:38.985154       1 migrator.go:18] FLAG: --skip_headers="false"\nI0127 03:24:38.985158       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0127 03:24:38.985160       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0127 03:24:38.985164       1 migrator.go:18] FLAG: --v="2"\nI0127 03:24:38.985167       1 migrator.go:18] FLAG: --vmodule=""\nI0127 03:24:38.986018       1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI0127 03:24:50.100473       1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI0127 03:24:50.171054       1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI0127 03:24:51.178925       1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI0127 03:24:51.227408       1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\n
Jan 27 04:20:35.393 E ns/openshift-monitoring pod/cluster-monitoring-operator-894d44997-m2l47 node/ip-10-0-242-20.us-west-1.compute.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 27 04:20:35.794 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-76f948cf74-tz2sw node/ip-10-0-138-106.us-west-1.compute.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error 023-01-27 04:20:22.27856049 +0000 UTC m=+3355.366743068\nI0127 04:20:22.327752       1 operator.go:159] Finished syncing operator at 49.184068ms\nI0127 04:20:22.327836       1 operator.go:157] Starting syncing operator at 2023-01-27 04:20:22.32783231 +0000 UTC m=+3355.416014888\nI0127 04:20:22.361894       1 operator.go:159] Finished syncing operator at 34.053686ms\nI0127 04:20:22.521593       1 operator.go:157] Starting syncing operator at 2023-01-27 04:20:22.521583664 +0000 UTC m=+3355.609766241\nI0127 04:20:22.843314       1 operator.go:159] Finished syncing operator at 321.721261ms\nI0127 04:20:33.759918       1 operator.go:157] Starting syncing operator at 2023-01-27 04:20:33.75990698 +0000 UTC m=+3366.848089558\nI0127 04:20:33.833739       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0127 04:20:33.838145       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0127 04:20:33.838204       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0127 04:20:33.838235       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0127 04:20:33.838271       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0127 04:20:33.838565       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0127 04:20:33.838602       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0127 04:20:33.838626       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0127 04:20:33.838648       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0127 04:20:33.838672       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0127 04:20:33.838695       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nW0127 04:20:33.838808       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 27 04:20:35.956 - 3s    E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator
Jan 27 04:20:37.664 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-w67kp node/ip-10-0-149-12.us-west-1.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error minate, becoming unready, but keeping serving\nI0127 04:20:35.190402       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-w67kp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0127 04:20:35.190444       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0127 04:20:35.190508       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-w67kp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0127 04:20:35.190549       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0127 04:20:35.191657       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0127 04:20:35.191874       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0127 04:20:35.191968       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0127 04:20:35.192381       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0127 04:20:35.192481       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0127 04:20:35.192634       1 secure_serving.go:311] Stopped listening on [::]:8443\nI0127 04:20:35.192695       1 genericapiserver.go:373] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"\nW0127 04:20:35.192860       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 27 04:20:39.453 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-987f7bc9c-8h8qf node/ip-10-0-242-20.us-west-1.compute.internal container/webhook reason/ContainerExit code/2 cause/Error
Jan 27 04:20:40.642 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-758f5b59c5-784s6 node/ip-10-0-149-12.us-west-1.compute.internal container/snapshot-controller reason/ContainerExit code/2 cause/Error
Jan 27 04:20:45.390 E ns/openshift-ingress-canary pod/ingress-canary-k6p9x node/ip-10-0-150-188.us-west-1.compute.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 27 04:20:51.159 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-167-211.us-west-1.compute.internal container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/27 03:29:46 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/27 03:29:46 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/27 03:29:46 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/27 03:29:46 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/27 03:29:46 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/27 03:29:46 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/27 03:29:46 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0127 03:29:46.987300       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/27 03:29:46 http.go:107: HTTPS: listening on [::]:9091\nE0127 03:32:39.789639       1 reflector.go:127] github.com/openshift/oauth-proxy/providers/openshift/provider.go:347: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0127 03:36:31.763068       1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/27 03:36:31 oauthproxy.go:791: requestauth: 10.131.0.20:582
Jan 27 04:20:51.159 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-167-211.us-west-1.compute.internal container/config-reloader reason/ContainerExit code/2 cause/Error :46.540268608Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-27T03:29:46.540571774Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-27T03:29:46.902673987Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T03:29:46.902780389Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T03:29:49.386723102Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T03:31:07.081814393Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T03:34:03.877775079Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nle
#1618298598275944448 junit 5 days ago
Jan 25 18:43:57.960 E ns/openshift-insights pod/insights-operator-854449444c-cmbgl node/ip-10-0-223-125.us-west-2.compute.internal container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 25 18:43:59.981 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-dff5h node/ip-10-0-223-125.us-west-2.compute.internal container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error 0.22.0-rc.0/tools/cache/reflector.go:167\nI0125 18:43:58.894069       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0125 18:43:58.894079       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0125 18:43:58.894071       1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0125 18:43:58.894088       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0125 18:43:58.894097       1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0125 18:43:58.894100       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0125 18:43:58.894111       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0125 18:43:58.894117       1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0125 18:43:58.894127       1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0125 18:43:58.894130       1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0125 18:43:58.894137       1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0125 18:43:58.894139       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0125 18:43:58.894142       1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0125 18:43:58.894143       1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0125 18:43:58.894148       1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ...\nI0125 18:43:58.894151       1 base_controller.go:104] All UserCAObservationController workers have been terminated\nI0125 18:43:58.894156       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nW0125 18:43:58.894174       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 25 18:44:01.021 E ns/openshift-authentication-operator pod/authentication-operator-57868976d6-wjlgt node/ip-10-0-223-125.us-west-2.compute.internal container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 25 18:44:12.385 E ns/openshift-ingress-canary pod/ingress-canary-lf7c8 node/ip-10-0-160-128.us-west-2.compute.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 25 18:44:15.958 E ns/openshift-kube-storage-version-migrator pod/migrator-5554c9565f-svt5l node/ip-10-0-156-35.us-west-2.compute.internal container/migrator reason/ContainerExit code/2 cause/Error I0125 17:40:13.480238       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0125 17:40:13.480299       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0125 17:40:13.480302       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0125 17:40:13.480305       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0125 17:40:13.480307       1 migrator.go:18] FLAG: --kubeconfig=""\nI0125 17:40:13.480310       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0125 17:40:13.480340       1 migrator.go:18] FLAG: --log_dir=""\nI0125 17:40:13.480346       1 migrator.go:18] FLAG: --log_file=""\nI0125 17:40:13.480349       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0125 17:40:13.480350       1 migrator.go:18] FLAG: --logtostderr="true"\nI0125 17:40:13.480352       1 migrator.go:18] FLAG: --one_output="false"\nI0125 17:40:13.480354       1 migrator.go:18] FLAG: --skip_headers="false"\nI0125 17:40:13.480355       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0125 17:40:13.480357       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0125 17:40:13.480359       1 migrator.go:18] FLAG: --v="2"\nI0125 17:40:13.480360       1 migrator.go:18] FLAG: --vmodule=""\nI0125 17:40:13.481272       1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI0125 17:40:29.602078       1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI0125 17:40:29.676663       1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI0125 17:40:30.682099       1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI0125 17:40:30.721474       1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\nI0125 17:49:04.911621       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Jan 25 18:44:16.042 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-nb7xh node/ip-10-0-156-35.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error me:"console-operator-6bbd4fcc8c-nb7xh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0125 18:44:14.845339       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-nb7xh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0125 18:44:14.845353       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0125 18:44:14.845367       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-nb7xh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0125 18:44:14.845413       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0125 18:44:14.845428       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0125 18:44:14.845437       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0125 18:44:14.845449       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0125 18:44:14.845460       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0125 18:44:14.845488       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0125 18:44:14.845502       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0125 18:44:14.845517       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0125 18:44:14.845533       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nW0125 18:44:14.845537       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 25 18:44:24.940 E ns/openshift-ingress-canary pod/ingress-canary-vkrjv node/ip-10-0-223-161.us-west-2.compute.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 25 18:44:31.286 E ns/openshift-controller-manager pod/controller-manager-jbcvg node/ip-10-0-156-35.us-west-2.compute.internal container/controller-manager reason/ContainerExit code/137 cause/Error    1 controller_manager.go:158] Started Origin Controllers\nI0125 17:53:51.967202       1 buildconfig_controller.go:212] Starting buildconfig controller\nI0125 17:53:51.973512       1 factory.go:85] deploymentconfig controller caches are synced. Starting workers.\nI0125 17:53:51.983388       1 shared_informer.go:247] Caches are synced for service account \nI0125 17:53:52.001734       1 templateinstance_controller.go:297] Starting TemplateInstance controller\nI0125 17:53:52.004235       1 templateinstance_finalizer.go:194] Starting TemplateInstanceFinalizer controller\nI0125 17:53:52.006488       1 shared_informer.go:247] Caches are synced for DefaultRoleBindingController \nI0125 17:53:52.087447       1 factory.go:80] Deployer controller caches are synced. Starting workers.\nI0125 17:53:52.179497       1 deleted_token_secrets.go:70] caches synced\nI0125 17:53:52.179524       1 deleted_dockercfg_secrets.go:75] caches synced\nI0125 17:53:52.179539       1 docker_registry_service.go:156] caches synced\nI0125 17:53:52.179545       1 create_dockercfg_secrets.go:219] urls found\nI0125 17:53:52.179669       1 create_dockercfg_secrets.go:225] caches synced\nI0125 17:53:52.179805       1 docker_registry_service.go:298] Updating registry URLs from map[172.30.140.15:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.140.15:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI0125 17:53:52.210957       1 build_controller.go:475] Starting build controller\nI0125 17:53:52.210973       1 build_controller.go:477] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nE0125 17:53:52.716528       1 imagestream_controller.go:136] Error syncing image stream "openshift/oauth-proxy": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "oauth-proxy": the object has been modified; please apply your changes to the latest version and try again\n
Jan 25 18:44:37.028 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-76f948cf74-8qjnv node/ip-10-0-223-125.us-west-2.compute.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error raceful-termination] RunPreShutdownHooks has completed\nI0125 18:44:27.905392       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0125 18:44:27.905414       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0125 18:44:27.905432       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0125 18:44:27.905758       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0125 18:44:27.905774       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0125 18:44:27.905787       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0125 18:44:27.905893       1 secure_serving.go:311] Stopped listening on [::]:8443\nI0125 18:44:27.905950       1 genericapiserver.go:363] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"\nI0125 18:44:27.905981       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0125 18:44:27.906011       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/tmp/serving-cert-182734652/tls.crt::/tmp/serving-cert-182734652/tls.key"\nI0125 18:44:27.906156       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0125 18:44:27.906200       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0125 18:44:27.906226       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0125 18:44:27.906246       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0125 18:44:27.906269       1 base_controller.go:167] Shutting down StaticResourceController ...\nW0125 18:44:27.906293       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 25 18:44:38.802 E ns/openshift-ingress-canary pod/ingress-canary-r6vc6 node/ip-10-0-171-121.us-west-2.compute.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 25 18:44:42.172 E ns/openshift-controller-manager pod/controller-manager-dppz7 node/ip-10-0-223-125.us-west-2.compute.internal container/controller-manager reason/ContainerExit code/137 cause/Error I0125 17:51:30.426943       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0125 17:51:30.427981       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0125 17:51:30.427992       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0125 17:51:30.428064       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0125 17:51:30.428109       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n
#1617762501737320448 junit 7 days ago
Jan 24 07:17:39.865 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-nmk6z node/ip-10-0-148-172.us-west-2.compute.internal container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error r.go:167\nI0124 07:17:36.668526       1 reflector.go:225] Stopping reflector *v1.Image (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 07:17:36.668546       1 reflector.go:225] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 07:17:36.668566       1 reflector.go:225] Stopping reflector *v1.Proxy (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 07:17:36.668591       1 reflector.go:225] Stopping reflector *v1.Build (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 07:17:36.668610       1 reflector.go:225] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 07:17:36.668629       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 07:17:36.668655       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 07:17:36.668675       1 reflector.go:225] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 07:17:36.668733       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0124 07:17:36.668748       1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0124 07:17:36.668752       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0124 07:17:36.668760       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0124 07:17:36.668769       1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0124 07:17:36.668778       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0124 07:17:36.668785       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nW0124 07:17:36.668898       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 24 07:17:39.865 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-nmk6z node/ip-10-0-148-172.us-west-2.compute.internal container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 07:17:40.493 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-76f948cf74-ccbtf node/ip-10-0-148-172.us-west-2.compute.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error 1-24 07:17:35.425353632 +0000 UTC m=+3911.516554115\nI0124 07:17:36.064652       1 operator.go:159] Finished syncing operator at 639.291947ms\nI0124 07:17:36.064687       1 operator.go:157] Starting syncing operator at 2023-01-24 07:17:36.064684039 +0000 UTC m=+3912.155884502\nI0124 07:17:36.244994       1 operator.go:159] Finished syncing operator at 180.302357ms\nI0124 07:17:37.524437       1 operator.go:157] Starting syncing operator at 2023-01-24 07:17:37.524428383 +0000 UTC m=+3913.615628846\nI0124 07:17:37.791726       1 operator.go:159] Finished syncing operator at 267.288746ms\nI0124 07:17:37.791846       1 operator.go:157] Starting syncing operator at 2023-01-24 07:17:37.791842571 +0000 UTC m=+3913.883043063\nI0124 07:17:37.806795       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0124 07:17:37.807151       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0124 07:17:37.807210       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0124 07:17:37.808024       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0124 07:17:37.808068       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0124 07:17:37.807507       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0124 07:17:37.807517       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0124 07:17:37.807527       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0124 07:17:37.813795       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0124 07:17:37.807536       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0124 07:17:37.807543       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nW0124 07:17:37.807845       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 24 07:17:41.511 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-987f7bc9c-66dfw node/ip-10-0-210-133.us-west-2.compute.internal container/webhook reason/ContainerExit code/2 cause/Error
Jan 24 07:17:43.537 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-758f5b59c5-b7vkv node/ip-10-0-210-133.us-west-2.compute.internal container/snapshot-controller reason/ContainerExit code/2 cause/Error
Jan 24 07:17:53.638 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-b4c2d node/ip-10-0-148-222.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error t-console-operator", Name:"console-operator-6bbd4fcc8c-b4c2d", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0124 07:17:47.684557       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0124 07:17:47.684575       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0124 07:17:47.684579       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0124 07:17:47.684586       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0124 07:17:47.684594       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0124 07:17:47.684604       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0124 07:17:47.684615       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0124 07:17:47.684624       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0124 07:17:47.684632       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0124 07:17:47.684639       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0124 07:17:47.684666       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0124 07:17:47.684676       1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0124 07:17:47.684683       1 base_controller.go:114] Shutting down worker of ConsoleOperator controller ...\nI0124 07:17:47.684688       1 base_controller.go:104] All ConsoleOperator workers have been terminated\nI0124 07:17:47.684695       1 base_controller.go:114] Shutting down worker of ConsoleServiceController controller ...\nI0124 07:17:47.684698       1 base_controller.go:104] All ConsoleServiceController workers have been terminated\nW0124 07:17:47.684702       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 24 07:17:59.174 E ns/openshift-monitoring pod/cluster-monitoring-operator-894d44997-6d6bj node/ip-10-0-148-172.us-west-2.compute.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 07:18:08.603 E ns/openshift-controller-manager pod/controller-manager-sfqvt node/ip-10-0-148-222.us-west-2.compute.internal container/controller-manager reason/ContainerExit code/137 cause/Error d; please apply your changes to the latest version and try again\nE0124 07:17:44.534087       1 imagestream_controller.go:136] Error syncing image stream "openshift/apicurito-ui": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "apicurito-ui": the object has been modified; please apply your changes to the latest version and try again\nE0124 07:17:44.590534       1 imagestream_controller.go:136] Error syncing image stream "openshift/jenkins": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "jenkins": the object has been modified; please apply your changes to the latest version and try again\nE0124 07:17:44.750599       1 imagestream_controller.go:136] Error syncing image stream "openshift/openjdk-11-rhel7": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "openjdk-11-rhel7": the object has been modified; please apply your changes to the latest version and try again\nE0124 07:17:44.905578       1 imagestream_controller.go:136] Error syncing image stream "openshift/ubi8-openjdk-11": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "ubi8-openjdk-11": the object has been modified; please apply your changes to the latest version and try again\nE0124 07:17:45.044717       1 imagestream_controller.go:136] Error syncing image stream "openshift/redis": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "redis": the object has been modified; please apply your changes to the latest version and try again\nE0124 07:17:45.254455       1 imagestream_controller.go:136] Error syncing image stream "openshift/dotnet": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "dotnet": the object has been modified; please apply your changes to the latest version and try again\nE0124 07:17:45.426361       1 imagestream_controller.go:136] Error syncing image stream "openshift/apicurito-ui": Operation cannot be fulfilled on imagestream.image.openshift.io "apicurito-ui": the image stream was updated from "48714" to "48900"\n
Jan 24 07:18:09.847 E ns/openshift-controller-manager pod/controller-manager-5wlp2 node/ip-10-0-210-133.us-west-2.compute.internal container/controller-manager reason/ContainerExit code/137 cause/Error I0124 06:22:19.525185       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0124 06:22:19.527009       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0124 06:22:19.527026       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0124 06:22:19.527112       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0124 06:22:19.527720       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n
Jan 24 07:18:13.320 E ns/openshift-controller-manager pod/controller-manager-hpbcl node/ip-10-0-148-172.us-west-2.compute.internal container/controller-manager reason/ContainerExit code/137 cause/Error I0124 06:22:20.833040       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0124 06:22:20.838847       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0124 06:22:20.838954       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0124 06:22:20.838926       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0124 06:22:20.839575       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n
Jan 24 07:18:17.281 E clusteroperator/openshift-controller-manager condition/Available status/False reason/_NoPodsAvailable changed: Available: no daemon pods available on any node.
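Every operator log in the run above ends with the same builder.go:101 warning ("graceful termination failed, controllers failed with error: stopped") after the ShutdownInitiated / AfterShutdownDelayDuration / InFlightRequestsDrained / HTTPServerStoppedListening events. Below is a minimal, hypothetical Go sketch of that SIGTERM-driven shutdown ordering; it is not the openshift/library-go builder, and the names, port, and timeout are illustrative only.

// Hypothetical sketch, not the library-go controller builder: it only mirrors
// the shutdown-event ordering the operator logs above print, ending with the
// builder.go:101-style "graceful termination failed ... stopped" warning.
package main

import (
	"context"
	"errors"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8443"}
	go func() {
		if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
			log.Fatalf("serve: %v", err)
		}
	}()

	// Stand-in controller goroutine: it stops when runCtx is cancelled and
	// reports a generic "stopped" error, as the base_controller lines do.
	runCtx, stopControllers := context.WithCancel(context.Background())
	controllersDone := make(chan error, 1)
	go func() {
		<-runCtx.Done()
		controllersDone <- errors.New("stopped")
	}()

	// "Received SIGTERM or SIGINT signal, shutting down controller."
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGINT)
	<-sigCh
	log.Println(`[graceful-termination] shutdown event name="ShutdownInitiated"`)

	// The operators above log "The minimal shutdown duration of 0s finished",
	// so there is no extra delay between these two events here.
	log.Println(`[graceful-termination] shutdown event name="AfterShutdownDelayDuration"`)

	// Close the listener and drain in-flight requests, bounded by a timeout.
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	if err := srv.Shutdown(shutdownCtx); err != nil {
		log.Printf("draining requests: %v", err)
	}
	log.Println(`[graceful-termination] shutdown event name="InFlightRequestsDrained"`)
	log.Println(`[graceful-termination] shutdown event name="HTTPServerStoppedListening"`)

	// Stop the controllers last and surface their error, which is what shows
	// up as "graceful termination failed, controllers failed with error:
	// stopped" in every operator log above.
	stopControllers()
	if err := <-controllersDone; err != nil {
		log.Printf("graceful termination failed, controllers failed with error: %v", err)
	}
}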
#1616267665838444544 junit 11 days ago
Jan 20 04:05:35.337 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-zpfn9 node/ip-10-0-136-79.us-west-2.compute.internal container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 20 04:05:35.425 E ns/openshift-authentication-operator pod/authentication-operator-57868976d6-vsr6g node/ip-10-0-136-79.us-west-2.compute.internal container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 20 04:05:40.303 E ns/openshift-monitoring pod/node-exporter-vrqjb node/ip-10-0-209-230.us-west-2.compute.internal container/node-exporter reason/ContainerExit code/143 cause/Error 0T03:16:09.743Z caller=node_exporter.go:113 collector=meminfo\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=netclass\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=netdev\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=netstat\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=nfs\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=nfsd\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=powersupplyclass\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=pressure\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=rapl\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=schedstat\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=sockstat\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=softnet\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=stat\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=textfile\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=thermal_zone\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=time\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=timex\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=udp_queues\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=uname\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=vmstat\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=xfs\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:113 collector=zfs\nlevel=info ts=2023-01-20T03:16:09.743Z caller=node_exporter.go:195 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2023-01-20T03:16:09.743Z caller=tls_config.go:191 msg="TLS is disabled." http2=false\n
Jan 20 04:05:43.086 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-987f7bc9c-np8kj node/ip-10-0-221-26.us-west-2.compute.internal container/webhook reason/ContainerExit code/2 cause/Error
Jan 20 04:05:44.605 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-758f5b59c5-ghnj2 node/ip-10-0-129-37.us-west-2.compute.internal container/snapshot-controller reason/ContainerExit code/2 cause/Error
Jan 20 04:05:46.626 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-zgxzc node/ip-10-0-129-37.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error minate, becoming unready, but keeping serving\nI0120 04:05:45.854994       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-zgxzc", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0120 04:05:45.855005       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0120 04:05:45.855020       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-zgxzc", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0120 04:05:45.855030       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0120 04:05:45.855192       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0120 04:05:45.855227       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0120 04:05:45.855244       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0120 04:05:45.855258       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0120 04:05:45.855270       1 secure_serving.go:311] Stopped listening on [::]:8443\nI0120 04:05:45.855289       1 genericapiserver.go:373] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"\nI0120 04:05:45.855292       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nW0120 04:05:45.855394       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 20 04:05:49.241 E ns/openshift-service-ca-operator pod/service-ca-operator-7f56db4fb6-f6v5t node/ip-10-0-136-79.us-west-2.compute.internal container/service-ca-operator reason/ContainerExit code/1 cause/Error
Jan 20 04:05:49.241 E ns/openshift-service-ca-operator pod/service-ca-operator-7f56db4fb6-f6v5t node/ip-10-0-136-79.us-west-2.compute.internal container/service-ca-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 20 04:05:50.425 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-lfhsv node/ip-10-0-136-79.us-west-2.compute.internal container/cluster-storage-operator reason/ContainerExit code/1 cause/Error r_storage ...\nI0120 04:05:42.138677       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0120 04:05:42.138686       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0120 04:05:42.138694       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0120 04:05:42.138703       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nI0120 04:05:42.138839       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0120 04:05:42.138848       1 base_controller.go:167] Shutting down AWSEBSCSIDriverOperatorDeployment ...\nI0120 04:05:42.138852       1 base_controller.go:145] All AWSEBSCSIDriverOperatorDeployment post start hooks have been terminated\nI0120 04:05:42.139061       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0120 04:05:42.139076       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0120 04:05:42.139087       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0120 04:05:42.139164       1 secure_serving.go:311] Stopped listening on [::]:8443\nI0120 04:05:42.139186       1 genericapiserver.go:363] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"\nI0120 04:05:42.139194       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0120 04:05:42.139205       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0120 04:05:42.139588       1 base_controller.go:114] Shutting down worker of StatusSyncer_storage controller ...\nI0120 04:05:42.139596       1 base_controller.go:104] All StatusSyncer_storage workers have been terminated\nW0120 04:05:42.139637       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 20 04:05:50.425 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-lfhsv node/ip-10-0-136-79.us-west-2.compute.internal container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 20 04:05:54.073 E ns/openshift-monitoring pod/kube-state-metrics-d95579cf4-nrbxb node/ip-10-0-189-6.us-west-2.compute.internal container/kube-state-metrics reason/ContainerExit code/2 cause/Error
#1615908317438152704 junit 12 days ago
Jan 19 04:25:07.283 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-758f5b59c5-cg6zl node/ip-10-0-139-232.us-west-1.compute.internal container/snapshot-controller reason/ContainerExit code/2 cause/Error
Jan 19 04:25:12.579 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-m6thv node/ip-10-0-161-120.us-west-1.compute.internal container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error 725-4247-a577-fa1f67d2388b" srcIP="10.128.2.25:33712" resp=200\nI0119 04:24:39.030412       1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.ClusterRole total 15 items received\nI0119 04:24:43.020690       1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.ConfigMap total 165 items received\nI0119 04:24:46.753448       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="10.047595ms" userAgent="Prometheus/2.29.2" audit-ID="a932c2c3-3913-45af-a8eb-bdcf8252e054" srcIP="10.129.2.13:60504" resp=200\nI0119 04:24:49.027975       1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.Role total 9 items received\nI0119 04:24:53.027723       1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.Service total 8 items received\nI0119 04:24:57.636127       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0119 04:24:57.636188       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0119 04:24:57.636221       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0119 04:24:57.636234       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0119 04:24:57.636251       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0119 04:24:57.636386       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0119 04:24:57.636423       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nW0119 04:24:57.636435       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0119 04:24:57.636443       1 base_controller.go:167] Shutting down ResourceSyncController ...\n
Jan 19 04:25:12.579 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-m6thv node/ip-10-0-161-120.us-west-1.compute.internal container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 04:25:12.625 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-76f948cf74-vks6g node/ip-10-0-161-120.us-west-1.compute.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error 1-19 04:24:40.140157663 +0000 UTC m=+3755.307630302\nI0119 04:24:40.317089       1 operator.go:159] Finished syncing operator at 176.924441ms\nI0119 04:24:40.317126       1 operator.go:157] Starting syncing operator at 2023-01-19 04:24:40.317123445 +0000 UTC m=+3755.484596084\nI0119 04:24:40.578800       1 operator.go:159] Finished syncing operator at 261.667316ms\nI0119 04:24:43.865638       1 operator.go:157] Starting syncing operator at 2023-01-19 04:24:43.865628473 +0000 UTC m=+3759.033101102\nI0119 04:24:43.968915       1 operator.go:159] Finished syncing operator at 103.278958ms\nI0119 04:24:57.083624       1 operator.go:157] Starting syncing operator at 2023-01-19 04:24:57.083616847 +0000 UTC m=+3772.251089476\nI0119 04:24:57.118539       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0119 04:24:57.118836       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0119 04:24:57.118907       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0119 04:24:57.118940       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0119 04:24:57.118975       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0119 04:24:57.119338       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0119 04:24:57.119382       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0119 04:24:57.119408       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0119 04:24:57.119798       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0119 04:24:57.119838       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0119 04:24:57.119859       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nW0119 04:24:57.119964       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 19 04:25:12.706 E ns/openshift-authentication-operator pod/authentication-operator-57868976d6-bfb9v node/ip-10-0-161-120.us-west-1.compute.internal container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 04:25:22.262 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-rfzqn node/ip-10-0-139-232.us-west-1.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error , becoming unready, but keeping serving\nI0119 04:25:19.926281       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-rfzqn", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0119 04:25:19.926321       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0119 04:25:19.926409       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-rfzqn", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0119 04:25:19.926447       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0119 04:25:19.926812       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0119 04:25:19.926864       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0119 04:25:19.926884       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0119 04:25:19.926954       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0119 04:25:19.927027       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0119 04:25:19.927076       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0119 04:25:19.927110       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0119 04:25:19.927120       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nW0119 04:25:19.927480       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 19 04:25:24.978 E ns/openshift-ingress-canary pod/ingress-canary-4xqkc node/ip-10-0-184-22.us-west-1.compute.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 19 04:25:37.498 E ns/openshift-ingress-canary pod/ingress-canary-hsjgl node/ip-10-0-186-180.us-west-1.compute.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 19 04:25:43.329 E ns/openshift-cloud-credential-operator pod/pod-identity-webhook-6b544f8d6b-zm2lk node/ip-10-0-139-232.us-west-1.compute.internal container/pod-identity-webhook reason/ContainerExit code/137 cause/Error
Jan 19 04:25:46.854 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-758f5b59c5-sr9x8 node/ip-10-0-228-110.us-west-1.compute.internal container/snapshot-controller reason/ContainerExit code/2 cause/Error
Jan 19 04:25:46.901 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-987f7bc9c-4qkt7 node/ip-10-0-228-110.us-west-1.compute.internal container/webhook reason/ContainerExit code/2 cause/Error
#1615908317278769152 junit 12 days ago
Jan 19 04:15:27.821 E ns/openshift-insights pod/insights-operator-854449444c-z2hvx node/ip-10-0-144-81.us-west-2.compute.internal container/insights-operator reason/ContainerExit code/2 cause/Error ent="Prometheus/2.29.2" audit-ID="72b599c3-fc67-4590-8146-e5d853583ec7" srcIP="10.131.0.15:56648" resp=200\nI0119 04:13:39.756826       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.818842ms" userAgent="Prometheus/2.29.2" audit-ID="ca9b3a02-4efd-4f74-8bb1-7f11be1c72e6" srcIP="10.128.2.21:43140" resp=200\nI0119 04:14:01.242811       1 status.go:354] The operator is healthy\nI0119 04:14:01.242859       1 status.go:441] No status update necessary, objects are identical\nI0119 04:14:09.497537       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="6.440245ms" userAgent="Prometheus/2.29.2" audit-ID="db9733c5-13ca-4f94-b9ea-198eb7bfab67" srcIP="10.131.0.15:56648" resp=200\nI0119 04:14:09.755136       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="1.728808ms" userAgent="Prometheus/2.29.2" audit-ID="d1004198-afd8-4f52-9c8c-d2e89ef1c0cd" srcIP="10.128.2.21:43140" resp=200\nI0119 04:14:39.498212       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="6.664618ms" userAgent="Prometheus/2.29.2" audit-ID="ff1178d9-8c60-49fc-a35f-16f2b98bce62" srcIP="10.131.0.15:56648" resp=200\nI0119 04:14:39.757944       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.396017ms" userAgent="Prometheus/2.29.2" audit-ID="75a6de13-f9a9-4ec1-8ddb-a2b480e1343e" srcIP="10.128.2.21:43140" resp=200\nI0119 04:14:51.559822       1 configobserver.go:77] Refreshing configuration from cluster pull secret\nI0119 04:14:51.564805       1 configobserver.go:102] Found cloud.openshift.com token\nI0119 04:14:51.564906       1 configobserver.go:120] Refreshing configuration from cluster secret\nI0119 04:15:09.516438       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="24.893853ms" userAgent="Prometheus/2.29.2" audit-ID="4d1c8d7b-cf5f-4542-9bfc-4289bd7ac856" srcIP="10.131.0.15:56648" resp=200\nI0119 04:15:09.755693       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="2.559634ms" userAgent="Prometheus/2.29.2" audit-ID="6f5d6cc7-447d-47ed-bf74-de3ec27002a8" srcIP="10.128.2.21:43140" resp=200\n
Jan 19 04:15:27.821 E ns/openshift-insights pod/insights-operator-854449444c-z2hvx node/ip-10-0-144-81.us-west-2.compute.internal container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 04:15:33.718 E ns/openshift-ingress-canary pod/ingress-canary-6fdxc node/ip-10-0-201-143.us-west-2.compute.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 19 04:15:43.260 E ns/openshift-ingress-canary pod/ingress-canary-r5499 node/ip-10-0-186-111.us-west-2.compute.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 19 04:15:44.349 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-76f948cf74-fcxzb node/ip-10-0-255-140.us-west-2.compute.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error 01-19 04:15:22.434947194 +0000 UTC m=+3105.773891023\nI0119 04:15:22.541320       1 operator.go:159] Finished syncing operator at 106.366617ms\nI0119 04:15:34.797195       1 operator.go:157] Starting syncing operator at 2023-01-19 04:15:34.797167049 +0000 UTC m=+3118.136110878\nI0119 04:15:34.862868       1 operator.go:159] Finished syncing operator at 65.692441ms\nI0119 04:15:38.537808       1 operator.go:157] Starting syncing operator at 2023-01-19 04:15:38.537800246 +0000 UTC m=+3121.876744095\nI0119 04:15:38.974918       1 operator.go:159] Finished syncing operator at 437.109175ms\nI0119 04:15:38.974973       1 operator.go:157] Starting syncing operator at 2023-01-19 04:15:38.974969683 +0000 UTC m=+3122.313913511\nI0119 04:15:39.021840       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0119 04:15:39.022165       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0119 04:15:39.022200       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0119 04:15:39.022209       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0119 04:15:39.022230       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0119 04:15:39.022289       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0119 04:15:39.022294       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0119 04:15:39.022365       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0119 04:15:39.022377       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0119 04:15:39.022386       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0119 04:15:39.022395       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nW0119 04:15:39.022438       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 19 04:15:46.300 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-qwlwb node/ip-10-0-144-81.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error 6bbd4fcc8c-qwlwb", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0119 04:15:41.571240       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0119 04:15:41.571257       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-qwlwb", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0119 04:15:41.571278       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0119 04:15:41.571288       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0119 04:15:41.571319       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0119 04:15:41.571333       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0119 04:15:41.571342       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0119 04:15:41.571349       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0119 04:15:41.571356       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0119 04:15:41.571364       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0119 04:15:41.571371       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0119 04:15:41.571392       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0119 04:15:41.571403       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0119 04:15:41.571411       1 base_controller.go:167] Shutting down ManagementStateController ...\nW0119 04:15:41.571418       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0119 04:15:41.571418       1 base_controller.go:167] Shutting down LoggingSyncer ...\n
Jan 19 04:15:47.576 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-nbqtg node/ip-10-0-144-81.us-west-2.compute.internal container/cluster-storage-operator reason/ContainerExit code/1 cause/Error figObserver controller ...\nI0119 04:15:39.595562       1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0119 04:15:39.595568       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0119 04:15:39.595571       1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0119 04:15:39.595576       1 base_controller.go:114] Shutting down worker of DefaultStorageClassController controller ...\nI0119 04:15:39.595594       1 base_controller.go:104] All DefaultStorageClassController workers have been terminated\nI0119 04:15:39.595600       1 base_controller.go:114] Shutting down worker of StatusSyncer_storage controller ...\nI0119 04:15:39.595603       1 base_controller.go:104] All StatusSyncer_storage workers have been terminated\nI0119 04:15:39.595608       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0119 04:15:39.595611       1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0119 04:15:39.595616       1 base_controller.go:114] Shutting down worker of VSphereProblemDetectorStarter controller ...\nI0119 04:15:39.595619       1 base_controller.go:104] All VSphereProblemDetectorStarter workers have been terminated\nI0119 04:15:39.595628       1 base_controller.go:114] Shutting down worker of AWSEBSCSIDriverOperator controller ...\nI0119 04:15:39.595632       1 base_controller.go:104] All AWSEBSCSIDriverOperator workers have been terminated\nI0119 04:15:39.595635       1 controller_manager.go:54] AWSEBSCSIDriverOperator controller terminated\nI0119 04:15:39.595640       1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0119 04:15:39.595643       1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0119 04:15:39.595656       1 controller_manager.go:54] StaticResourceController controller terminated\nW0119 04:15:39.595850       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 19 04:15:47.576 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-nbqtg node/ip-10-0-144-81.us-west-2.compute.internal container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 04:15:47.683 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-w5cgl node/ip-10-0-144-81.us-west-2.compute.internal container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error \nI0119 04:15:39.196913       1 reflector.go:225] Stopping reflector *v1.Image (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0119 04:15:39.196949       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0119 04:15:39.196976       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0119 04:15:39.197009       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0119 04:15:39.197036       1 reflector.go:225] Stopping reflector *v1.Deployment (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0119 04:15:39.197062       1 reflector.go:225] Stopping reflector *v1.Proxy (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0119 04:15:39.197090       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0119 04:15:39.197118       1 reflector.go:225] Stopping reflector *v1.Service (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0119 04:15:39.197177       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0119 04:15:39.197188       1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0119 04:15:39.197197       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0119 04:15:39.197206       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0119 04:15:39.197216       1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0119 04:15:39.197219       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0119 04:15:39.197285       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nW0119 04:15:39.197441       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 19 04:15:47.683 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-w5cgl node/ip-10-0-144-81.us-west-2.compute.internal container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 04:15:51.544 E ns/openshift-monitoring pod/cluster-monitoring-operator-894d44997-lttqj node/ip-10-0-255-140.us-west-2.compute.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
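The repeated "reason/TerminationStateCleared lastState.terminated was cleared on a pod" entries above come from comparing a container's lastState.terminated across successive observations of the same pod (the linked bug 1933760). A hedged sketch of that kind of check using standard client-go types follows; the function name is made up and this makes no claim to match the actual openshift/origin monitor code.

// Hypothetical check behind "TerminationStateCleared": compare
// lastState.terminated across two observations of the same pod.
package podcheck

import (
	corev1 "k8s.io/api/core/v1"
)

// terminationStateCleared returns the names of containers whose
// lastState.terminated was set on the old pod object but cleared on the new
// one (the symptom tracked in
// https://bugzilla.redhat.com/show_bug.cgi?id=1933760).
func terminationStateCleared(oldPod, newPod *corev1.Pod) []string {
	oldStates := map[string]corev1.ContainerStatus{}
	for _, cs := range oldPod.Status.ContainerStatuses {
		oldStates[cs.Name] = cs
	}

	var cleared []string
	for _, cs := range newPod.Status.ContainerStatuses {
		prev, ok := oldStates[cs.Name]
		if !ok {
			continue
		}
		if prev.LastTerminationState.Terminated != nil && cs.LastTerminationState.Terminated == nil {
			cleared = append(cleared, cs.Name)
		}
	}
	return cleared
}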
#1615529298418995200 junit 13 days ago
Jan 18 03:22:39.735 E ns/openshift-controller-manager pod/controller-manager-lq2ww node/ip-10-0-130-173.us-west-2.compute.internal container/controller-manager reason/ContainerExit code/137 cause/Error I0118 02:28:58.801727       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0118 02:28:58.803545       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0118 02:28:58.803679       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0118 02:28:58.803641       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0118 02:28:58.804456       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n
Jan 18 03:22:47.501 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-987f7bc9c-m594v node/ip-10-0-155-5.us-west-2.compute.internal container/webhook reason/ContainerExit code/2 cause/Error
Jan 18 03:22:48.104 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-v4xsp node/ip-10-0-130-173.us-west-2.compute.internal container/cluster-storage-operator reason/ContainerExit code/1 cause/Error  1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0118 03:22:43.630327       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0118 03:22:43.630457       1 base_controller.go:114] Shutting down worker of AWSEBSCSIDriverOperator controller ...\nI0118 03:22:43.630348       1 base_controller.go:167] Shutting down AWSEBSCSIDriverOperator ...\nI0118 03:22:43.630696       1 base_controller.go:145] All AWSEBSCSIDriverOperator post start hooks have been terminated\nI0118 03:22:43.630714       1 base_controller.go:104] All AWSEBSCSIDriverOperator workers have been terminated\nI0118 03:22:43.630895       1 controller_manager.go:54] AWSEBSCSIDriverOperator controller terminated\nI0118 03:22:43.630359       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0118 03:22:43.630367       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0118 03:22:43.630375       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0118 03:22:43.630384       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0118 03:22:43.630393       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0118 03:22:43.631077       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0118 03:22:43.630400       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nI0118 03:22:43.630425       1 base_controller.go:114] Shutting down worker of CSIDriverStarter controller ...\nI0118 03:22:43.631165       1 base_controller.go:104] All CSIDriverStarter workers have been terminated\nI0118 03:22:43.630444       1 base_controller.go:167] Shutting down AWSEBSCSIDriverOperatorDeployment ...\nI0118 03:22:43.631209       1 base_controller.go:145] All AWSEBSCSIDriverOperatorDeployment post start hooks have been terminated\nI0118 03:22:43.630451       1 base_controller.go:167] Shutting down StaticResourceController ...\nW0118 03:22:43.630507       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 18 03:22:48.104 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-v4xsp node/ip-10-0-130-173.us-west-2.compute.internal container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 18 03:22:48.438 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-758f5b59c5-prvr2 node/ip-10-0-155-5.us-west-2.compute.internal container/snapshot-controller reason/ContainerExit code/2 cause/Error
Jan 18 03:22:50.459 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-cgc99 node/ip-10-0-155-5.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error shed' The minimal shutdown duration of 0s finished\nI0118 03:22:49.422993       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0118 03:22:49.423009       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-cgc99", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0118 03:22:49.423021       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0118 03:22:49.424101       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0118 03:22:49.424330       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0118 03:22:49.424395       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0118 03:22:49.424416       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0118 03:22:49.424421       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0118 03:22:49.424431       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0118 03:22:49.424449       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0118 03:22:49.424458       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0118 03:22:49.424465       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0118 03:22:49.424472       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0118 03:22:49.424481       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0118 03:22:49.424550       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nW0118 03:22:49.424550       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0118 03:22:49.424591       1 base_controller.go:114] Shutting down worker of StatusSyncer_console controller ...\n
Jan 18 03:22:52.728 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-168-228.us-west-2.compute.internal container/thanos-sidecar reason/ContainerExit code/1 cause/Error probe status" status=not-healthy reason="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=warn ts=2023-01-18T03:22:51.101795289Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=info ts=2023-01-18T03:22:51.101820229Z caller=grpc.go:130 service=gRPC/server component=sidecar msg="internal server is shutting down" err="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=info ts=2023-01-18T03:22:51.10185138Z caller=grpc.go:143 service=gRPC/server component=sidecar msg="gracefully stopping internal server"\nlevel=info ts=2023-01-18T03:22:51.10187999Z caller=grpc.go:156 service=gRPC/server component=sidecar msg="internal server is shutdown gracefully" err="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=warn ts=2023-01-18T03:22:51.101950821Z caller=sidecar.go:159 msg="failed to fetch prometheus version. Is Prometheus running? Retrying" err="perform GET request against http://localhost:9090/api/v1/status/buildinfo: Get \"http://localhost:9090/api/v1/status/buildinfo\": context canceled"\nlevel=error ts=2023-01-18T03:22:51.102083773Z caller=main.go:156 err="listen tcp: lookup $(POD_IP): no such host\nlisten gRPC on address [$(POD_IP)]:10901\ngithub.com/thanos-io/thanos/pkg/server/grpc.(*Server).ListenAndServe\n\t/go/src/github.com/improbable-eng/thanos/pkg/server/grpc/grpc.g
Jan 18 03:22:54.546 E ns/openshift-monitoring pod/kube-state-metrics-d95579cf4-z25nj node/ip-10-0-151-160.us-west-2.compute.internal container/kube-state-metrics reason/ContainerExit code/2 cause/Error
Jan 18 03:22:57.609 E ns/openshift-monitoring pod/node-exporter-wn627 node/ip-10-0-222-45.us-west-2.compute.internal container/node-exporter reason/ContainerExit code/143 cause/Error 8T02:18:48.979Z caller=node_exporter.go:113 collector=meminfo\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=netclass\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=netdev\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=netstat\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=nfs\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=nfsd\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=powersupplyclass\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=pressure\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=rapl\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=schedstat\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=sockstat\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=softnet\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=stat\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=textfile\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=thermal_zone\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=time\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=timex\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=udp_queues\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=uname\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=vmstat\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=xfs\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:113 collector=zfs\nlevel=info ts=2023-01-18T02:18:48.979Z caller=node_exporter.go:195 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2023-01-18T02:18:48.979Z caller=tls_config.go:191 msg="TLS is disabled." http2=false\n
Jan 18 03:22:59.837 E ns/openshift-monitoring pod/grafana-76d6f65f8-jbd9d node/ip-10-0-168-228.us-west-2.compute.internal container/grafana-proxy reason/ContainerExit code/2 cause/Error
Jan 18 03:23:03.728 E ns/openshift-monitoring pod/node-exporter-ljrgf node/ip-10-0-168-228.us-west-2.compute.internal container/node-exporter reason/ContainerExit code/143 cause/Error 8T02:28:38.007Z caller=node_exporter.go:113 collector=meminfo\nlevel=info ts=2023-01-18T02:28:38.007Z caller=node_exporter.go:113 collector=netclass\nlevel=info ts=2023-01-18T02:28:38.007Z caller=node_exporter.go:113 collector=netdev\nlevel=info ts=2023-01-18T02:28:38.007Z caller=node_exporter.go:113 collector=netstat\nlevel=info ts=2023-01-18T02:28:38.007Z caller=node_exporter.go:113 collector=nfs\nlevel=info ts=2023-01-18T02:28:38.007Z caller=node_exporter.go:113 collector=nfsd\nlevel=info ts=2023-01-18T02:28:38.007Z caller=node_exporter.go:113 collector=powersupplyclass\nlevel=info ts=2023-01-18T02:28:38.007Z caller=node_exporter.go:113 collector=pressure\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=rapl\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=schedstat\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=sockstat\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=softnet\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=stat\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=textfile\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=thermal_zone\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=time\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=timex\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=udp_queues\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=uname\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=vmstat\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=xfs\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:113 collector=zfs\nlevel=info ts=2023-01-18T02:28:38.008Z caller=node_exporter.go:195 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2023-01-18T02:28:38.008Z caller=tls_config.go:191 msg="TLS is disabled." http2=false\n
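For the ContainerExit entries above, exit codes of 128 or below (1, 2) are ordinary process exits, while 128+N means the main process was killed by signal N: 143 = 128+15 (SIGTERM) and 137 = 128+9 (SIGKILL, for example when the termination grace period runs out). A tiny illustrative helper, assuming only that standard 128+N convention:

// Purely illustrative: decode the conventional 128+N container exit status.
package exitcode

import "syscall"

// signalFromExitCode returns the signal implied by an exit status of 128+N,
// or false for ordinary (non-signal) exit codes such as 1 or 2.
func signalFromExitCode(code int) (syscall.Signal, bool) {
	if code <= 128 || code > 128+64 {
		return 0, false
	}
	return syscall.Signal(code - 128), true
}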
#1615795966210740224 junit 12 days ago
Jan 18 21:01:23.018 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-192-169.us-west-1.compute.internal container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-18T20:10:11.247338913Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-18T20:10:11.247392614Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-18T20:10:11.247510165Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-18T20:10:11.247976112Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-01-18T20:10:12.499538076Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Jan 18 21:01:23.188 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-192-169.us-west-1.compute.internal container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/18 20:10:18 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/18 20:10:18 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/18 20:10:18 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/18 20:10:18 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/18 20:10:18 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/18 20:10:18 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/18 20:10:18 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2023/01/18 20:10:18 http.go:107: HTTPS: listening on [::]:9091\nI0118 20:10:18.499568       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/18 20:26:25 server.go:3120: http: TLS handshake error from 10.131.0.18:48142: read tcp 10.129.2.13:9091->10.131.0.18:48142: read: connection reset by peer\n
Jan 18 21:01:23.188 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-192-169.us-west-1.compute.internal container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-18T20:10:18.104570196Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-18T20:10:18.104631657Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-18T20:10:18.104733049Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-18T20:10:18.444209677Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-18T20:10:18.44440383Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-18T20:10:21.830439279Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-18T20:11:43.289907937Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-18T20:29:40.29788787Z caller=rel
Jan 18 21:01:24.004 E ns/openshift-monitoring pod/thanos-querier-557f98c768-wpqqz node/ip-10-0-192-169.us-west-1.compute.internal container/oauth-proxy reason/ContainerExit code/2 cause/Error 2023/01/18 20:10:08 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2023/01/18 20:10:08 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/18 20:10:08 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/18 20:10:08 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/18 20:10:08 oauthproxy.go:224: compiled skip-auth-regex => "^/-/(healthy|ready)$"\n2023/01/18 20:10:08 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2023/01/18 20:10:08 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/18 20:10:08 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0118 20:10:08.255619       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/18 20:10:08 http.go:107: HTTPS: listening on [::]:9091\n
Jan 18 21:01:24.515 E ns/openshift-monitoring pod/node-exporter-85sxm node/ip-10-0-153-244.us-west-1.compute.internal container/node-exporter reason/ContainerExit code/143 cause/Error 8T19:59:40.142Z caller=node_exporter.go:113 collector=meminfo\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=netclass\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=netdev\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=netstat\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=nfs\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=nfsd\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=powersupplyclass\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=pressure\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=rapl\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=schedstat\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=sockstat\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=softnet\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=stat\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=textfile\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=thermal_zone\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=time\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=timex\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=udp_queues\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=uname\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=vmstat\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=xfs\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:113 collector=zfs\nlevel=info ts=2023-01-18T19:59:40.142Z caller=node_exporter.go:195 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2023-01-18T19:59:40.142Z caller=tls_config.go:191 msg="TLS is disabled." http2=false\n
Jan 18 21:01:29.646 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-rw4q6 node/ip-10-0-203-70.us-west-1.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error 6", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0118 21:01:28.110139       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0118 21:01:28.110138       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-rw4q6", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0118 21:01:28.110154       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0118 21:01:28.110143       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0118 21:01:28.110169       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-rw4q6", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0118 21:01:28.110199       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0118 21:01:28.110213       1 base_controller.go:114] Shutting down worker of ConsoleCLIDownloadsController controller ...\nI0118 21:01:28.110217       1 base_controller.go:104] All ConsoleCLIDownloadsController workers have been terminated\nI0118 21:01:28.110118       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0118 21:01:28.110225       1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0118 21:01:28.110190       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0118 21:01:28.110232       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nW0118 21:01:28.110232       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 18 21:01:32.656 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-987f7bc9c-g8rg9 node/ip-10-0-203-70.us-west-1.compute.internal container/webhook reason/ContainerExit code/2 cause/Error
Jan 18 21:01:35.425 E ns/openshift-controller-manager pod/controller-manager-prsxf node/ip-10-0-157-253.us-west-1.compute.internal container/controller-manager reason/ContainerExit code/137 cause/Error I0118 20:08:46.135306       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0118 20:08:46.136575       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0118 20:08:46.136587       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0118 20:08:46.136648       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0118 20:08:46.136680       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n
Jan 18 21:01:35.717 E ns/openshift-controller-manager pod/controller-manager-nsjn8 node/ip-10-0-203-70.us-west-1.compute.internal container/controller-manager reason/ContainerExit code/137 cause/Error hift.io/templateinstancefinalizer"\nI0118 20:11:15.945712       1 controller_manager.go:155] Started "openshift.io/templateinstancefinalizer"\nI0118 20:11:15.945730       1 controller_manager.go:158] Started Origin Controllers\nI0118 20:11:15.945734       1 templateinstance_finalizer.go:189] TemplateInstanceFinalizer controller waiting for cache sync\nI0118 20:11:16.028972       1 templateinstance_controller.go:297] Starting TemplateInstance controller\nI0118 20:11:16.032322       1 shared_informer.go:247] Caches are synced for DefaultRoleBindingController \nI0118 20:11:16.041810       1 factory.go:85] deploymentconfig controller caches are synced. Starting workers.\nI0118 20:11:16.046210       1 templateinstance_finalizer.go:194] Starting TemplateInstanceFinalizer controller\nI0118 20:11:16.065716       1 shared_informer.go:247] Caches are synced for service account \nI0118 20:11:16.079430       1 buildconfig_controller.go:212] Starting buildconfig controller\nI0118 20:11:16.094130       1 factory.go:80] Deployer controller caches are synced. Starting workers.\nI0118 20:11:16.238500       1 build_controller.go:475] Starting build controller\nI0118 20:11:16.238515       1 build_controller.go:477] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nI0118 20:11:16.270753       1 deleted_token_secrets.go:70] caches synced\nI0118 20:11:16.270762       1 deleted_dockercfg_secrets.go:75] caches synced\nI0118 20:11:16.270763       1 docker_registry_service.go:156] caches synced\nI0118 20:11:16.270772       1 create_dockercfg_secrets.go:219] urls found\nI0118 20:11:16.270989       1 create_dockercfg_secrets.go:225] caches synced\nI0118 20:11:16.271085       1 docker_registry_service.go:298] Updating registry URLs from map[172.30.89.181:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.89.181:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\n
Jan 18 21:01:35.893 E ns/openshift-controller-manager pod/controller-manager-qg6kc node/ip-10-0-153-244.us-west-1.compute.internal container/controller-manager reason/ContainerExit code/137 cause/Error I0118 20:08:45.568625       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0118 20:08:45.569804       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0118 20:08:45.569815       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0118 20:08:45.569879       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0118 20:08:45.569910       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n
Jan 18 21:01:36.402 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-758f5b59c5-b52ct node/ip-10-0-157-253.us-west-1.compute.internal container/snapshot-controller reason/ContainerExit code/2 cause/Error
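The container exit codes in the entries above follow the usual wait-status convention: code/1 and code/2 are ordinary application exits, while values above 128 mean the process was killed by a signal (code minus 128), so code/143 is SIGTERM and code/137 is SIGKILL. A throwaway Go helper, not part of any OpenShift component, that decodes the four codes seen above:

package main

import (
    "fmt"
    "syscall"
)

// describeExitCode maps a container exit code to a human-readable cause:
// values above 128 conventionally mean "killed by signal (code - 128)".
func describeExitCode(code int) string {
    if code > 128 {
        sig := syscall.Signal(code - 128)
        return fmt.Sprintf("terminated by signal %d (%v)", code-128, sig)
    }
    return fmt.Sprintf("exited on its own with status %d", code)
}

func main() {
    // The four codes that appear in the entries above.
    for _, c := range []int{1, 2, 137, 143} {
        fmt.Printf("code/%d: %s\n", c, describeExitCode(c))
    }
}

Running it prints, for example, "code/137: terminated by signal 9 (killed)" and "code/143: terminated by signal 15 (terminated)".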
#1615529295000637440junit13 days ago
Jan 18 03:19:42.002 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-76f948cf74-6prmj node/ip-10-0-254-31.us-west-1.compute.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error ting syncing operator at 2023-01-18 03:19:32.412116373 +0000 UTC m=+3820.300305167\nI0118 03:19:32.444745       1 operator.go:159] Finished syncing operator at 32.623063ms\nI0118 03:19:39.784084       1 operator.go:157] Starting syncing operator at 2023-01-18 03:19:39.784076303 +0000 UTC m=+3827.672265097\nI0118 03:19:39.895007       1 operator.go:159] Finished syncing operator at 110.921606ms\nI0118 03:19:39.895050       1 operator.go:157] Starting syncing operator at 2023-01-18 03:19:39.89504574 +0000 UTC m=+3827.783234534\nI0118 03:19:39.995556       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0118 03:19:39.995656       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0118 03:19:39.995708       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0118 03:19:39.995732       1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0118 03:19:39.995741       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0118 03:19:39.995790       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0118 03:19:39.995809       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0118 03:19:39.995819       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0118 03:19:39.995828       1 base_controller.go:114] Shutting down worker of StatusSyncer_csi-snapshot-controller controller ...\nI0118 03:19:39.995844       1 base_controller.go:104] All StatusSyncer_csi-snapshot-controller workers have been terminated\nI0118 03:19:39.995851       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0118 03:19:39.995856       1 base_controller.go:104] All ManagementStateController workers have been terminated\nW0118 03:19:39.995838       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 18 03:19:42.045 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-zk4bp node/ip-10-0-254-31.us-west-1.compute.internal container/cluster-storage-operator reason/ContainerExit code/1 cause/Error :"2023-01-18T02:15:55Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0118 02:25:51.896125       1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-cluster-storage-operator", Name:"cluster-storage-operator", UID:"f7b96134-c256-420b-a25e-a9d4f97895e5", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/storage changed: Progressing changed from True to False ("AWSEBSCSIDriverOperatorCRProgressing: All is well")\nI0118 02:27:36.697383       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0118 02:37:36.697410       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0118 02:37:36.734682       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0118 02:44:41.587510       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0118 02:46:33.682530       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0118 02:47:36.698503       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0118 02:57:36.698509       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0118 03:04:41.588126       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0118 03:06:33.682808       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0118 03:07:36.699228       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0118 03:17:36.699690       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0118 03:19:39.817049       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0118 03:19:39.817379       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0118 03:19:39.817400       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nW0118 03:19:39.818079       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 18 03:19:42.045 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-zk4bp node/ip-10-0-254-31.us-west-1.compute.internal container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 18 03:19:42.134 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-kb77g node/ip-10-0-254-31.us-west-1.compute.internal container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error cResourceController ...\nI0118 03:19:39.891967       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0118 03:19:39.891967       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0118 03:19:39.891976       1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0118 03:19:39.891997       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0118 03:19:39.892026       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0118 03:19:39.892067       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nI0118 03:19:39.892074       1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0118 03:19:39.892828       1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0118 03:19:39.892079       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0118 03:19:39.892876       1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0118 03:19:39.892082       1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0118 03:19:39.892923       1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0118 03:19:39.892086       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0118 03:19:39.892963       1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0118 03:19:39.892093       1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ...\nI0118 03:19:39.892997       1 base_controller.go:104] All UserCAObservationController workers have been terminated\nW0118 03:19:39.892156       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 18 03:19:42.134 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-kb77g node/ip-10-0-254-31.us-west-1.compute.internal container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 18 03:19:44.366 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-hmz6c node/ip-10-0-173-170.us-west-1.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error ed\nI0118 03:19:42.774781       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-hmz6c", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0118 03:19:42.774826       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0118 03:19:42.774848       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-hmz6c", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0118 03:19:42.774870       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-hmz6c", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0118 03:19:42.774896       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0118 03:19:42.774940       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-hmz6c", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0118 03:19:42.774980       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0118 03:19:42.775076       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0118 03:19:42.775117       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nW0118 03:19:42.775140       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 18 03:19:47.145 E ns/openshift-monitoring pod/cluster-monitoring-operator-894d44997-qhr4h node/ip-10-0-254-31.us-west-1.compute.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 18 03:19:58.267 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-245-214.us-west-1.compute.internal container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/18 02:26:28 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/18 02:26:28 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/18 02:26:28 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/18 02:26:28 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/18 02:26:28 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/18 02:26:28 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/18 02:26:28 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0118 02:26:28.686943       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/18 02:26:28 http.go:107: HTTPS: listening on [::]:9091\n
Jan 18 03:19:58.267 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-245-214.us-west-1.compute.internal container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-18T02:26:28.299975229Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-18T02:26:28.30004562Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-18T02:26:28.300188532Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-18T02:26:28.60595006Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-18T02:26:28.606043711Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-18T02:26:31.191981887Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-18T02:27:50.465231818Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-18T02:35:22.474911861Z caller=rel
Jan 18 03:19:58.389 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-245-214.us-west-1.compute.internal container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/01/18 02:26:17 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/18 02:26:17 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/18 02:26:17 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/18 02:26:17 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/01/18 02:26:17 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/18 02:26:17 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\nI0118 02:26:17.886206       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/18 02:26:17 http.go:107: HTTPS: listening on [::]:9095\n
Jan 18 03:19:58.389 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-245-214.us-west-1.compute.internal container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-18T02:26:17.713936735Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-18T02:26:17.714073127Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-18T02:26:17.71423832Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-18T02:26:17.714346382Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-01-18T02:26:18.86869488Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
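The console-operator and csi-snapshot-controller-operator excerpts above show the graceful-termination event order these operators log on SIGTERM: ShutdownInitiated, the minimal shutdown duration (0s for these operators), AfterShutdownDelayDuration, TerminationStoppedServing, InFlightRequestsDrained, and finally the builder.go "graceful termination failed, controllers failed with error: stopped" warning. As a rough sketch of that ordering only, using plain net/http rather than the library-go genericapiserver these operators actually use, and with an illustrative :8443 port, 10s delay, and 60s drain timeout:

package main

import (
    "context"
    "errors"
    "log"
    "net/http"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("ok\n"))
    })
    srv := &http.Server{Addr: ":8443", Handler: mux}

    go func() {
        // Shutdown below makes this return http.ErrServerClosed.
        if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
            log.Fatalf("server failed: %v", err)
        }
    }()

    // "ShutdownInitiated" / "TerminationStart": wait for SIGTERM or SIGINT.
    stop := make(chan os.Signal, 1)
    signal.Notify(stop, syscall.SIGTERM, syscall.SIGINT)
    <-stop
    log.Println("received signal to terminate, becoming unready, but keeping serving")

    // Stand-in for the minimal shutdown / shutdown-delay duration: keep
    // serving briefly so clients and load balancers notice the endpoint
    // going away before it stops listening.
    time.Sleep(10 * time.Second)

    // "TerminationStoppedServing" then "InFlightRequestsDrained": close the
    // listener and give in-flight requests up to 60s to finish.
    ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
    defer cancel()
    if err := srv.Shutdown(ctx); err != nil {
        log.Printf("graceful termination did not complete cleanly: %v", err)
    }
}

In the runs above the operators still exit with code/1 after that final builder.go warning, which appears to be why the monitor records them as ContainerExit errors even though the shutdown sequence itself completed in order.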
#1615383252393005056junit13 days ago
Jan 17 17:55:22.339 E ns/openshift-ingress-canary pod/ingress-canary-nzmr9 node/ip-10-0-222-106.us-west-1.compute.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 17 17:55:24.280 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-6b4cbf84ff-xwnjx node/ip-10-0-139-52.us-west-1.compute.internal container/cluster-node-tuning-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 17 17:55:25.290 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-r92kx node/ip-10-0-139-52.us-west-1.compute.internal container/cluster-storage-operator reason/ContainerExit code/1 cause/Error ss gp2 found, reconciling\nI0117 17:25:54.915396       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0117 17:35:54.915590       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0117 17:37:24.742043       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0117 17:42:01.746956       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0117 17:45:54.916302       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0117 17:55:23.172333       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0117 17:55:23.176047       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0117 17:55:23.176132       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0117 17:55:23.176969       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0117 17:55:23.177018       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0117 17:55:23.177040       1 base_controller.go:167] Shutting down AWSEBSCSIDriverOperatorDeployment ...\nI0117 17:55:23.177057       1 base_controller.go:145] All AWSEBSCSIDriverOperatorDeployment post start hooks have been terminated\nI0117 17:55:23.177084       1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0117 17:55:23.177113       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0117 17:55:23.177131       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0117 17:55:23.177153       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nI0117 17:55:23.177181       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0117 17:55:23.177207       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0117 17:55:23.177231       1 base_controller.go:167] Shutting down LoggingSyncer ...\nW0117 17:55:23.177550       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 17 17:55:25.290 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-r92kx node/ip-10-0-139-52.us-west-1.compute.internal container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 17 17:55:25.734 E ns/openshift-image-registry pod/cluster-image-registry-operator-864d6d8695-g64h8 node/ip-10-0-139-52.us-west-1.compute.internal container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 17 17:55:27.002 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-tlnfx node/ip-10-0-225-52.us-west-1.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error pace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-tlnfx", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0117 17:55:25.050249       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-tlnfx", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0117 17:55:25.050263       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0117 17:55:25.050281       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-tlnfx", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0117 17:55:25.050295       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0117 17:55:25.050961       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0117 17:55:25.051014       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0117 17:55:25.051037       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0117 17:55:25.051062       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0117 17:55:25.051088       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0117 17:55:25.051112       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0117 17:55:25.051135       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0117 17:55:25.051159       1 base_controller.go:167] Shutting down ConsoleOperator ...\nW0117 17:55:25.051160       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 17 17:55:27.002 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-tlnfx node/ip-10-0-225-52.us-west-1.compute.internal container/console-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 17 17:55:27.274 - 3s    E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator
Jan 17 17:55:29.050 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-987f7bc9c-v42bl node/ip-10-0-225-52.us-west-1.compute.internal container/webhook reason/ContainerExit code/2 cause/Error
Jan 17 17:55:29.051 E ns/openshift-monitoring pod/prometheus-operator-6594997947-rvqh2 node/ip-10-0-225-52.us-west-1.compute.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 17 17:55:29.604 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-181-132.us-west-1.compute.internal container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/17 16:56:33 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/17 16:56:33 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/17 16:56:33 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/17 16:56:33 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/17 16:56:33 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/17 16:56:33 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/17 16:56:33 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2023/01/17 16:56:33 http.go:107: HTTPS: listening on [::]:9091\nI0117 16:56:33.694048       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
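Several entries above are reason/TerminationStateCleared, i.e. the monitor noticed that lastState.terminated went back to nil on a container status (bug 1933760 or similar). A hedged client-go sketch for inspecting that field directly; the kubeconfig path, namespace, and pod name are placeholders lifted from one of the entries above:

package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load the default kubeconfig; in-cluster config would work the same way.
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        log.Fatal(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }

    // Placeholder namespace/pod taken from the entries above.
    pod, err := client.CoreV1().Pods("openshift-console-operator").
        Get(context.TODO(), "console-operator-6bbd4fcc8c-tlnfx", metav1.GetOptions{})
    if err != nil {
        log.Fatal(err)
    }
    for _, cs := range pod.Status.ContainerStatuses {
        // lastState.terminated normally survives a container restart; the
        // monitor flags TerminationStateCleared when it disappears instead.
        fmt.Printf("container=%s restarts=%d lastState.terminated=%+v\n",
            cs.Name, cs.RestartCount, cs.LastTerminationState.Terminated)
    }
}

The same field can be checked with a kubectl jsonpath query on .status.containerStatuses[*].lastState; the sketch just shows which field the monitor is comparing between intervals.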
#1615619883666509824junit13 days ago
Jan 18 09:12:24.374 E ns/openshift-insights pod/insights-operator-854449444c-l8ttz node/ip-10-0-200-133.us-west-2.compute.internal container/insights-operator reason/ContainerExit code/2 cause/Error 0511       1 periodic.go:147] Periodic gather conditional completed in 3ms\nI0118 09:10:26.970596       1 recorder.go:55] Recording insights-operator/gathers with fingerprint=\nI0118 09:10:26.970775       1 diskrecorder.go:69] Writing 153 records to /var/lib/insights-operator/insights-2023-01-18-091026.tar.gz\nI0118 09:10:26.979789       1 diskrecorder.go:50] Wrote 153 records to disk in 9ms\nI0118 09:10:44.480279       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.739386ms" userAgent="Prometheus/2.29.2" audit-ID="3c0ef80a-38d9-476d-a309-7aa6c1234a72" srcIP="10.131.0.19:40514" resp=200\nI0118 09:10:55.050553       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.70775ms" userAgent="Prometheus/2.29.2" audit-ID="2086b8e6-8332-4eb9-b232-86559f54291c" srcIP="10.129.2.13:52020" resp=200\nI0118 09:10:58.502740       1 status.go:354] The operator is healthy\nI0118 09:10:58.502906       1 status.go:441] No status update necessary, objects are identical\nI0118 09:11:14.483174       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="10.015935ms" userAgent="Prometheus/2.29.2" audit-ID="7605e54f-5d49-4b91-8242-e3067c608e5e" srcIP="10.131.0.19:40514" resp=200\nI0118 09:11:25.054912       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="9.055518ms" userAgent="Prometheus/2.29.2" audit-ID="710a163e-4e76-4403-81c9-bb393d82d2f8" srcIP="10.129.2.13:52020" resp=200\nI0118 09:11:44.481609       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="6.596346ms" userAgent="Prometheus/2.29.2" audit-ID="4f543b99-9943-4e21-a069-b5d2ae9ddcc5" srcIP="10.131.0.19:40514" resp=200\nI0118 09:11:55.051427       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.514088ms" userAgent="Prometheus/2.29.2" audit-ID="0ac8dccc-decc-48e4-90db-7882942fdf04" srcIP="10.129.2.13:52020" resp=200\nI0118 09:12:14.543014       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="70.331748ms" userAgent="Prometheus/2.29.2" audit-ID="94b1a79d-3af8-4d56-ae35-19bbb36a101e" srcIP="10.131.0.19:40514" resp=200\n
Jan 18 09:12:24.374 E ns/openshift-insights pod/insights-operator-854449444c-l8ttz node/ip-10-0-200-133.us-west-2.compute.internal container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 18 09:12:27.518 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-76f948cf74-pksh4 node/ip-10-0-200-133.us-west-2.compute.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error down ManagementStateController ...\nI0118 09:12:26.720919       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0118 09:12:26.720966       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0118 09:12:26.721010       1 base_controller.go:114] Shutting down worker of StatusSyncer_csi-snapshot-controller controller ...\nI0118 09:12:26.721024       1 base_controller.go:104] All StatusSyncer_csi-snapshot-controller workers have been terminated\nI0118 09:12:26.721032       1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0118 09:12:26.721036       1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0118 09:12:26.721045       1 base_controller.go:114] Shutting down worker of CSISnapshotWebhookController controller ...\nI0118 09:12:26.721048       1 base_controller.go:104] All CSISnapshotWebhookController workers have been terminated\nI0118 09:12:26.721054       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0118 09:12:26.721057       1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0118 09:12:26.721064       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0118 09:12:26.721068       1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0118 09:12:26.721122       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/tmp/serving-cert-936255139/tls.crt::/tmp/serving-cert-936255139/tls.key"\nI0118 09:12:26.721131       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0118 09:12:26.721144       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nW0118 09:12:26.720993       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0118 09:12:26.721179       1 secure_serving.go:311] Stopped listening on [::]:8443\n
Jan 18 09:12:47.025 E ns/openshift-ingress-canary pod/ingress-canary-h7442 node/ip-10-0-165-114.us-west-2.compute.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 18 09:12:52.969 - 3s    E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator
Jan 18 09:12:57.698 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-dft2v node/ip-10-0-176-236.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0118 09:12:56.354080       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0118 09:12:56.354087       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0118 09:12:56.354097       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0118 09:12:56.354103       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-dft2v", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0118 09:12:56.354115       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0118 09:12:56.354107       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0118 09:12:56.354114       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0118 09:12:56.354125       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0118 09:12:56.354137       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0118 09:12:56.354146       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0118 09:12:56.354155       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0118 09:12:56.354158       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0118 09:12:56.354167       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0118 09:12:56.354176       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0118 09:12:56.354184       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nW0118 09:12:56.354305       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 18 09:12:57.698 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-dft2v node/ip-10-0-176-236.us-west-2.compute.internal container/console-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 18 09:12:58.689 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-987f7bc9c-dz8nv node/ip-10-0-128-32.us-west-2.compute.internal container/webhook reason/ContainerExit code/2 cause/Error
Jan 18 09:12:59.788 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-758f5b59c5-df9qn node/ip-10-0-128-32.us-west-2.compute.internal container/snapshot-controller reason/ContainerExit code/2 cause/Error
Jan 18 09:13:02.041 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-ls2sw node/ip-10-0-200-133.us-west-2.compute.internal container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error 7\nI0118 09:12:53.507465       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0118 09:12:53.507473       1 reflector.go:225] Stopping reflector *v1.ClusterRole (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0118 09:12:53.507478       1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0118 09:12:53.509021       1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0118 09:12:53.507484       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0118 09:12:53.509064       1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0118 09:12:53.507489       1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ...\nI0118 09:12:53.509098       1 base_controller.go:104] All UserCAObservationController workers have been terminated\nI0118 09:12:53.507494       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0118 09:12:53.509132       1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0118 09:12:53.507499       1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0118 09:12:53.509165       1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0118 09:12:53.507510       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0118 09:12:53.507529       1 reflector.go:225] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0118 09:12:53.507610       1 reflector.go:225] Stopping reflector *v1.Namespace (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nW0118 09:12:53.507669       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 18 09:13:02.041 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-ls2sw node/ip-10-0-200-133.us-west-2.compute.internal container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

Found in 18.57% of runs (32.50% of failures) across 70 total runs and 1 job (57.14% failed)