Job:
#1921157 (bug, 23 months ago): [sig-api-machinery] Kubernetes APIs remain available for new connections - ASSIGNED
T2: At 06:45:58: systemd-shutdown was sending SIGTERM to remaining processes...
T3: At 06:45:58: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Received signal to terminate, becoming unready, but keeping serving (TerminationStart event)
T4: At 06:47:08: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: The minimal shutdown duration of 1m10s finished (TerminationMinimalShutdownDurationFinished event)
T5: At 06:47:08: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Server has stopped listening (TerminationStoppedServing event)
T5 is the last event reported from that API server. At T5 the server may wait up to 60s for all in-flight requests to complete, and then it fires the TerminationGracefulTerminationFinished event.
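For orientation, the T3-T5 ordering (plus the final drain) can be sketched with a plain Go HTTP server. This is only an illustrative approximation of the behaviour described above, not the k8s.io/apiserver implementation; the durations and log strings are placeholders.

package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080", Handler: http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("ok\n")) })}

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("listen: %v", err)
		}
	}()

	// T3: signal received; a real apiserver flips its readiness probe here but
	// keeps serving new and existing requests.
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGINT)
	<-sigCh
	log.Println("TerminationStart: signal received, still serving")

	// T4: sit out the minimal shutdown duration (1m10s in the timeline above,
	// shortened here) so load balancers can drop the endpoint first.
	time.Sleep(2 * time.Second)
	log.Println("TerminationMinimalShutdownDurationFinished")

	// T5: close the listeners, then drain in-flight requests for at most 60s
	// before reporting graceful termination as finished.
	log.Println("TerminationStoppedServing: closing listeners, draining requests")
	drainCtx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	if err := srv.Shutdown(drainCtx); err != nil {
		log.Printf("drain incomplete: %v", err)
	}
	log.Println("TerminationGracefulTerminationFinished")
}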
periodic-ci-openshift-release-master-nightly-4.9-e2e-aws-upgrade (all) - 14 runs, 36% failed, 100% of failures match = 36% impact
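The impact figure on the aggregate line above is consistent with simply multiplying the failure rate by the fraction of failures matching the query; a tiny sketch of that arithmetic (our own helper, not CI-search code):

package main

import "fmt"

func main() {
	failureRate := 0.36 // 36% of the 14 runs failed
	matchRate := 1.00   // 100% of those failures matched the search
	fmt.Printf("impact = %.0f%%\n", failureRate*matchRate*100) // prints 36%
}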
#1620306040258039808 (junit, about an hour ago)
Jan 31 07:32:08.815 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-f95f6779-xd442 node/ip-10-0-133-118.ec2.internal container/kube-storage-version-migrator-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 31 07:32:17.894 E ns/openshift-kube-storage-version-migrator pod/migrator-5554c9565f-jbqvq node/ip-10-0-255-182.ec2.internal container/migrator reason/ContainerExit code/2 cause/Error I0131 06:35:29.323372       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0131 06:35:29.323540       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0131 06:35:29.323543       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0131 06:35:29.323547       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0131 06:35:29.323549       1 migrator.go:18] FLAG: --kubeconfig=""\nI0131 06:35:29.323551       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0131 06:35:29.323554       1 migrator.go:18] FLAG: --log_dir=""\nI0131 06:35:29.323556       1 migrator.go:18] FLAG: --log_file=""\nI0131 06:35:29.323558       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0131 06:35:29.323560       1 migrator.go:18] FLAG: --logtostderr="true"\nI0131 06:35:29.323562       1 migrator.go:18] FLAG: --one_output="false"\nI0131 06:35:29.323564       1 migrator.go:18] FLAG: --skip_headers="false"\nI0131 06:35:29.323565       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0131 06:35:29.323567       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0131 06:35:29.323569       1 migrator.go:18] FLAG: --v="2"\nI0131 06:35:29.323570       1 migrator.go:18] FLAG: --vmodule=""\nI0131 06:35:29.324281       1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI0131 06:35:45.444055       1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI0131 06:35:45.512195       1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI0131 06:35:46.518599       1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI0131 06:35:46.558584       1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\nI0131 06:42:19.891197       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Jan 31 07:35:37.568 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-77c7c44dcd-wbbhs node/ip-10-0-133-118.ec2.internal container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error :36.575252       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 07:35:36.575259       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0131 07:35:36.575270       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0131 07:35:36.575278       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0131 07:35:36.575286       1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0131 07:35:36.575298       1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0131 07:35:36.575611       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 07:35:36.575617       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0131 07:35:36.575444       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0131 07:35:36.575708       1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0131 07:35:36.575725       1 reflector.go:225] Stopping reflector *v1.ClusterRoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 07:35:36.575798       1 reflector.go:225] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 07:35:36.575837       1 reflector.go:225] Stopping reflector *v1.ServiceAccount (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 07:35:36.575871       1 reflector.go:225] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0131 07:35:36.575873       1 reflector.go:225] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nW0131 07:35:36.575424       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 31 07:35:39.544 E ns/openshift-insights pod/insights-operator-7958685c99-sfbhb node/ip-10-0-133-118.ec2.internal container/insights-operator reason/ContainerExit code/2 cause/Error 3:38.264960       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.787472ms" userAgent="Prometheus/2.29.2" audit-ID="e72c2d29-1c1f-4ffd-bc4a-78bbccde3100" srcIP="10.129.2.12:42746" resp=200\nI0131 07:33:52.160536       1 reflector.go:535] k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172: Watch close - *v1.ConfigMap total 10 items received\nI0131 07:34:04.497995       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="2.733939ms" userAgent="Prometheus/2.29.2" audit-ID="670b929c-5752-4c87-854f-8f2081bec0e6" srcIP="10.128.2.12:47738" resp=200\nI0131 07:34:08.261907       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.126615ms" userAgent="Prometheus/2.29.2" audit-ID="9bc238d0-a765-4eec-bc2c-48e473c4d2b8" srcIP="10.129.2.12:42746" resp=200\nI0131 07:34:21.301909       1 status.go:354] The operator is healthy\nI0131 07:34:21.301965       1 status.go:441] No status update necessary, objects are identical\nI0131 07:34:34.504461       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.411732ms" userAgent="Prometheus/2.29.2" audit-ID="b592c585-7d13-4f01-8b1d-71decc88cc57" srcIP="10.128.2.12:47738" resp=200\nI0131 07:34:38.264092       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.197322ms" userAgent="Prometheus/2.29.2" audit-ID="8ac0ffce-a269-452d-a850-bc2867c8b6f8" srcIP="10.129.2.12:42746" resp=200\nI0131 07:35:04.500284       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.237933ms" userAgent="Prometheus/2.29.2" audit-ID="1a0dbfbc-2827-4fd2-8973-293905a5d93d" srcIP="10.128.2.12:47738" resp=200\nI0131 07:35:08.262643       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.35023ms" userAgent="Prometheus/2.29.2" audit-ID="2901b09c-90a2-46c6-9702-169aadab95a7" srcIP="10.129.2.12:42746" resp=200\nI0131 07:35:34.498359       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.088365ms" userAgent="Prometheus/2.29.2" audit-ID="1a3d5e2e-22a5-4bac-9452-bae1a7812f4b" srcIP="10.128.2.12:47738" resp=200\n
Jan 31 07:35:39.544 E ns/openshift-insights pod/insights-operator-7958685c99-sfbhb node/ip-10-0-133-118.ec2.internal container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 31 07:35:50.824 E ns/openshift-console-operator pod/console-operator-7cd47d569f-x5qc7 node/ip-10-0-255-182.ec2.internal container/console-operator reason/ContainerExit code/1 cause/Error eason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0131 07:35:47.362865       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-7cd47d569f-x5qc7", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0131 07:35:47.362891       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0131 07:35:47.362458       1 base_controller.go:114] Shutting down worker of ConsoleDownloadsDeploymentSyncController controller ...\nI0131 07:35:47.362905       1 base_controller.go:104] All ConsoleDownloadsDeploymentSyncController workers have been terminated\nI0131 07:35:47.362462       1 base_controller.go:114] Shutting down worker of ConsoleOperator controller ...\nI0131 07:35:47.362912       1 base_controller.go:104] All ConsoleOperator workers have been terminated\nI0131 07:35:47.362910       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0131 07:35:47.362466       1 base_controller.go:114] Shutting down worker of HealthCheckController controller ...\nI0131 07:35:47.362997       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0131 07:35:47.363016       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0131 07:35:47.362481       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0131 07:35:47.362489       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0131 07:35:47.362504       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nW0131 07:35:47.362506       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 31 07:35:52.893 E ns/openshift-ingress-canary pod/ingress-canary-j78jx node/ip-10-0-130-172.ec2.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 31 07:35:54.425 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-240-11.ec2.internal container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/31 06:46:29 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/31 06:46:29 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/31 06:46:29 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/31 06:46:29 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/31 06:46:29 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/31 06:46:29 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/31 06:46:29 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0131 06:46:29.597260       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/31 06:46:29 http.go:107: HTTPS: listening on [::]:9091\n
Jan 31 07:35:54.425 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-240-11.ec2.internal container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-31T06:46:29.156228759Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-31T06:46:29.156273489Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-31T06:46:29.156392791Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-31T06:46:29.563215731Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-31T06:46:29.563631087Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-31T06:47:36.25982013Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\n
Jan 31 07:35:57.213 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5dc5c8f588-dzn9z node/ip-10-0-133-118.ec2.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error :159] Finished syncing operator at 90.91574ms\nI0131 07:35:36.237046       1 operator.go:157] Starting syncing operator at 2023-01-31 07:35:36.237042329 +0000 UTC m=+3613.359844929\nI0131 07:35:36.883164       1 operator.go:159] Finished syncing operator at 646.094872ms\nI0131 07:35:47.612541       1 operator.go:157] Starting syncing operator at 2023-01-31 07:35:47.61252732 +0000 UTC m=+3624.735329920\nI0131 07:35:47.815881       1 operator.go:159] Finished syncing operator at 203.34641ms\nI0131 07:35:47.815918       1 operator.go:157] Starting syncing operator at 2023-01-31 07:35:47.815914861 +0000 UTC m=+3624.938717471\nI0131 07:35:48.035606       1 operator.go:159] Finished syncing operator at 219.682829ms\nI0131 07:35:53.848921       1 operator.go:157] Starting syncing operator at 2023-01-31 07:35:53.848860303 +0000 UTC m=+3630.971662914\nI0131 07:35:53.945911       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0131 07:35:53.946309       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0131 07:35:53.946345       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0131 07:35:53.946356       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0131 07:35:53.946824       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0131 07:35:53.947295       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0131 07:35:53.947322       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0131 07:35:53.947332       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0131 07:35:53.947337       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0131 07:35:53.947352       1 base_controller.go:167] Shutting down LoggingSyncer ...\nW0131 07:35:53.947474       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 31 07:35:57.498 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-165-52.ec2.internal container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/01/31 06:46:15 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/31 06:46:15 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/31 06:46:15 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/31 06:46:15 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/01/31 06:46:15 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/31 06:46:15 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\nI0131 06:46:15.611807       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/31 06:46:15 http.go:107: HTTPS: listening on [::]:9095\n
#1617919515658555392 (junit, 6 days ago)
Jan 24 17:11:24.441 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-167-18.us-east-2.compute.internal container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-24T16:41:34.515039416Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-24T16:41:34.515098407Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-24T16:41:34.51525005Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-24T16:41:34.879622435Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-24T16:41:34.879875519Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-24T16:41:36.954037092Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-24T16:43:08.928891664Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-24T16:48:55.017456628Z caller=re
Jan 24 17:11:24.658 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-167-18.us-east-2.compute.internal container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/01/24 16:41:18 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/24 16:41:18 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/24 16:41:18 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/24 16:41:18 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/01/24 16:41:18 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/24 16:41:18 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/24 16:41:18 http.go:107: HTTPS: listening on [::]:9095\nI0124 16:41:18.498739       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jan 24 17:11:24.658 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-167-18.us-east-2.compute.internal container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-24T16:41:18.241443387Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-24T16:41:18.241499617Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-24T16:41:18.24161881Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-24T16:41:18.242080768Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-01-24T16:41:19.381706334Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Jan 24 17:11:25.073 E ns/openshift-multus pod/multus-additional-cni-plugins-pcj85 node/ip-10-0-151-35.us-east-2.compute.internal container/kube-multus-additional-cni-plugins reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 17:11:44.024 E ns/openshift-machine-config-operator pod/machine-config-controller-64c44dcd64-7lh2c node/ip-10-0-186-81.us-east-2.compute.internal container/machine-config-controller reason/ContainerExit code/2 cause/Error  17:11:29.552056       1 node_controller.go:424] Pool master: node ip-10-0-186-81.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-a5da020e6c682d8778a60d74c7a207f0\nI0124 17:11:29.556184       1 event.go:282] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"e58e21cb-1e50-497f-b33d-11e9f6711bed", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"35401", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-186-81.us-east-2.compute.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-a5da020e6c682d8778a60d74c7a207f0\nI0124 17:11:30.566693       1 node_controller.go:424] Pool master: node ip-10-0-186-81.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working\nI0124 17:11:30.567118       1 event.go:282] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"e58e21cb-1e50-497f-b33d-11e9f6711bed", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"41963", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-186-81.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Working\nI0124 17:11:30.693116       1 node_controller.go:424] Pool master: node ip-10-0-186-81.us-east-2.compute.internal: Reporting unready: node ip-10-0-186-81.us-east-2.compute.internal is reporting Unschedulable\nE0124 17:11:34.619956       1 render_controller.go:460] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again\nI0124 17:11:34.619975       1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again\n
Jan 24 17:11:45.316 E ns/openshift-console-operator pod/console-operator-7c69bf9d46-4lqjz node/ip-10-0-186-81.us-east-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error 5       1 base_controller.go:73] Caches are synced for ConsoleDownloadsDeploymentSyncController \nI0124 17:08:13.317129       1 base_controller.go:110] Starting #1 worker of ConsoleDownloadsDeploymentSyncController controller ...\nI0124 17:08:13.317080       1 base_controller.go:73] Caches are synced for HealthCheckController \nI0124 17:08:13.317141       1 base_controller.go:110] Starting #1 worker of HealthCheckController controller ...\nI0124 17:11:42.575877       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0124 17:11:42.576307       1 genericapiserver.go:398] [graceful-termination] RunPreShutdownHooks has completed\nI0124 17:11:42.577138       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-7c69bf9d46-4lqjz", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0124 17:11:42.576463       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0124 17:11:42.577233       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-7c69bf9d46-4lqjz", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0124 17:11:42.577266       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-7c69bf9d46-4lqjz", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0124 17:11:42.577306       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nW0124 17:11:42.576832       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 24 17:11:45.420 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-8b5c4fc4c-b7n4k node/ip-10-0-186-81.us-east-2.compute.internal container/aws-ebs-csi-driver-operator reason/ContainerExit code/1 cause/Error
Jan 24 17:11:47.715 E ns/openshift-service-ca-operator pod/service-ca-operator-f7bcc5757-8z2tt node/ip-10-0-186-81.us-east-2.compute.internal container/service-ca-operator reason/ContainerExit code/1 cause/Error
Jan 24 17:11:48.802 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-f74fcfcf8-6bl82 node/ip-10-0-186-81.us-east-2.compute.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error g for Deployment to deploy csi-snapshot-controller pods")\nI0124 17:11:45.559478       1 operator.go:159] Finished syncing operator at 69.706312ms\nI0124 17:11:45.559519       1 operator.go:157] Starting syncing operator at 2023-01-24 17:11:45.559516662 +0000 UTC m=+737.089935868\nI0124 17:11:45.655106       1 operator.go:159] Finished syncing operator at 95.579456ms\nI0124 17:11:45.904431       1 operator.go:157] Starting syncing operator at 2023-01-24 17:11:45.904422621 +0000 UTC m=+737.434841873\nI0124 17:11:45.982022       1 operator.go:159] Finished syncing operator at 77.593363ms\nI0124 17:11:45.982063       1 operator.go:157] Starting syncing operator at 2023-01-24 17:11:45.982060165 +0000 UTC m=+737.512479367\nI0124 17:11:46.055206       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0124 17:11:46.055317       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0124 17:11:46.055344       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0124 17:11:46.055355       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0124 17:11:46.055378       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0124 17:11:46.055868       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0124 17:11:46.055880       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0124 17:11:46.055891       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0124 17:11:46.055900       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0124 17:11:46.055910       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0124 17:11:46.055914       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nW0124 17:11:46.056010       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 24 17:11:48.851 E ns/openshift-machine-config-operator pod/machine-config-operator-589bf6bf4f-5rkl6 node/ip-10-0-186-81.us-east-2.compute.internal container/machine-config-operator reason/ContainerExit code/2 cause/Error I0124 17:03:04.050259       1 start.go:43] Version: 4.9.0-0.nightly-2023-01-24-161323 (Raw: v4.9.0-202212051626.p0.gb2055c0.assembly.stream-dirty, Hash: b2055c07f694f100de0d45cde8e8ca72b661826d)\nI0124 17:03:04.052266       1 leaderelection.go:248] attempting to acquire leader lease openshift-machine-config-operator/machine-config...\nI0124 17:04:59.703796       1 leaderelection.go:258] successfully acquired lease openshift-machine-config-operator/machine-config\nI0124 17:05:00.049417       1 operator.go:262] Starting MachineConfigOperator\nI0124 17:05:00.053208       1 event.go:282] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"8db42085-2b81-4da2-b3af-675d019ddc49", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 4.9.0-0.nightly-2023-01-18-114336}] to [{operator 4.9.0-0.nightly-2023-01-24-161323}]\n
Jan 24 17:11:48.907 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-758f5b59c5-cn4v5 node/ip-10-0-186-81.us-east-2.compute.internal container/snapshot-controller reason/ContainerExit code/2 cause/Error
#1618408834177437696 (junit, 5 days ago)
Jan 26 01:49:04.570 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-75bf6d7c8f-vj6ng node/ip-10-0-169-65.us-east-2.compute.internal container/csi-liveness-probe reason/ContainerExit code/2 cause/Error
Jan 26 01:49:04.570 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-75bf6d7c8f-vj6ng node/ip-10-0-169-65.us-east-2.compute.internal container/csi-driver reason/ContainerExit code/2 cause/Error
Jan 26 01:49:04.570 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-75bf6d7c8f-vj6ng node/ip-10-0-169-65.us-east-2.compute.internal container/csi-attacher reason/ContainerExit code/2 cause/Error
Jan 26 01:49:04.570 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-75bf6d7c8f-vj6ng node/ip-10-0-169-65.us-east-2.compute.internal container/csi-snapshotter reason/ContainerExit code/2 cause/Error
Jan 26 01:49:04.570 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-75bf6d7c8f-vj6ng node/ip-10-0-169-65.us-east-2.compute.internal container/csi-resizer reason/ContainerExit code/2 cause/Error
Jan 26 01:49:05.607 E ns/openshift-console-operator pod/console-operator-7cd47d569f-vldkq node/ip-10-0-169-65.us-east-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error 709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-7cd47d569f-vldkq", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0126 01:49:01.475075       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0126 01:49:01.475153       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0126 01:49:01.475157       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0126 01:49:01.475165       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0126 01:49:01.475174       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0126 01:49:01.475182       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0126 01:49:01.475190       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0126 01:49:01.475198       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0126 01:49:01.475206       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0126 01:49:01.475221       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0126 01:49:01.475229       1 base_controller.go:167] Shutting down ConsoleOperator ...\nE0126 01:49:01.475251       1 operator.go:194] infrastructure config error: context canceled\nE0126 01:49:01.475260       1 base_controller.go:272] ConsoleOperator reconciliation failed: context canceled\nI0126 01:49:01.475275       1 base_controller.go:114] Shutting down worker of ConsoleOperator controller ...\nI0126 01:49:01.475287       1 base_controller.go:104] All ConsoleOperator workers have been terminated\nI0126 01:49:01.475295       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nW0126 01:49:01.475576       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 26 01:49:06.408 E clusteroperator/kube-storage-version-migrator condition/Available status/False reason/KubeStorageVersionMigrator_Deploying changed: KubeStorageVersionMigratorAvailable: Waiting for Deployment
Jan 26 01:49:06.408 - 4s    E clusteroperator/kube-storage-version-migrator condition/Available status/False reason/KubeStorageVersionMigratorAvailable: Waiting for Deployment
Jan 26 01:49:07.550 E ns/openshift-kube-storage-version-migrator pod/migrator-5554c9565f-64rwx node/ip-10-0-169-65.us-east-2.compute.internal container/migrator reason/ContainerExit code/2 cause/Error I0126 00:54:44.976637       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0126 00:54:44.976713       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0126 00:54:44.976716       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0126 00:54:44.976719       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0126 00:54:44.976722       1 migrator.go:18] FLAG: --kubeconfig=""\nI0126 00:54:44.976724       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0126 00:54:44.976727       1 migrator.go:18] FLAG: --log_dir=""\nI0126 00:54:44.976731       1 migrator.go:18] FLAG: --log_file=""\nI0126 00:54:44.976732       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0126 00:54:44.976734       1 migrator.go:18] FLAG: --logtostderr="true"\nI0126 00:54:44.976736       1 migrator.go:18] FLAG: --one_output="false"\nI0126 00:54:44.976737       1 migrator.go:18] FLAG: --skip_headers="false"\nI0126 00:54:44.976739       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0126 00:54:44.976741       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0126 00:54:44.976742       1 migrator.go:18] FLAG: --v="2"\nI0126 00:54:44.976745       1 migrator.go:18] FLAG: --vmodule=""\nI0126 00:54:44.978178       1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI0126 00:54:58.095708       1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI0126 00:54:58.193075       1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI0126 00:54:59.199729       1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI0126 00:54:59.239818       1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\nI0126 00:59:59.936614       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Jan 26 01:49:08.711 E ns/openshift-machine-config-operator pod/machine-config-controller-64c44dcd64-cz5nd node/ip-10-0-169-65.us-east-2.compute.internal container/machine-config-controller reason/ContainerExit code/2 cause/Error  01:48:47.578085       1 node_controller.go:424] Pool master: node ip-10-0-169-65.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-ebaccc325c720bc956f83da2f561c8d7\nI0126 01:48:47.578323       1 event.go:282] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"9d196714-ea4d-43c4-b533-0c14b6c65b41", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"45899", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-169-65.us-east-2.compute.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-ebaccc325c720bc956f83da2f561c8d7\nI0126 01:48:48.587123       1 node_controller.go:424] Pool master: node ip-10-0-169-65.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working\nI0126 01:48:48.587372       1 event.go:282] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"9d196714-ea4d-43c4-b533-0c14b6c65b41", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"45899", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-169-65.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Working\nI0126 01:48:48.667880       1 node_controller.go:424] Pool master: node ip-10-0-169-65.us-east-2.compute.internal: Reporting unready: node ip-10-0-169-65.us-east-2.compute.internal is reporting Unschedulable\nE0126 01:48:52.726022       1 render_controller.go:460] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again\nI0126 01:48:52.726042       1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again\n
Jan 26 01:49:08.711 E ns/openshift-machine-config-operator pod/machine-config-controller-64c44dcd64-cz5nd node/ip-10-0-169-65.us-east-2.compute.internal container/machine-config-controller reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1617515716976578560 (junit, 7 days ago)
Jan 23 14:24:37.629 E ns/openshift-insights pod/insights-operator-79c7d54757-ndpbb node/ip-10-0-134-56.ec2.internal container/insights-operator reason/ContainerExit code/2 cause/Error 06:40Z\nI0123 14:23:07.073806       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="8.269472ms" userAgent="Prometheus/2.29.2" audit-ID="178d64ef-ff33-48d2-92c9-f91657822156" srcIP="10.131.0.21:50722" resp=200\nI0123 14:23:16.987005       1 insightsuploader.go:120] Nothing to report since 2023-01-23T14:06:40Z\nI0123 14:23:19.931635       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.63138ms" userAgent="Prometheus/2.29.2" audit-ID="88a4309e-8976-45ba-b606-fc726aa7e02f" srcIP="10.128.2.16:46810" resp=200\nI0123 14:23:31.987572       1 insightsuploader.go:120] Nothing to report since 2023-01-23T14:06:40Z\nI0123 14:23:37.070829       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.020718ms" userAgent="Prometheus/2.29.2" audit-ID="61eaedd7-163e-4010-9fd1-503d01baa435" srcIP="10.131.0.21:50722" resp=200\nI0123 14:23:46.988585       1 insightsuploader.go:120] Nothing to report since 2023-01-23T14:06:40Z\nI0123 14:23:49.927737       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="2.358826ms" userAgent="Prometheus/2.29.2" audit-ID="e0dac331-2cf7-4bcc-b0a7-4be19dee4611" srcIP="10.128.2.16:46810" resp=200\nI0123 14:24:01.991144       1 insightsuploader.go:120] Nothing to report since 2023-01-23T14:06:40Z\nI0123 14:24:07.073674       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.937304ms" userAgent="Prometheus/2.29.2" audit-ID="d818deaf-3869-4d20-ac17-45ca6dfdb989" srcIP="10.131.0.21:50722" resp=200\nI0123 14:24:16.991718       1 insightsuploader.go:120] Nothing to report since 2023-01-23T14:06:40Z\nI0123 14:24:19.933772       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.599537ms" userAgent="Prometheus/2.29.2" audit-ID="7bf00546-a4ad-4d63-be1a-dced8d6a1011" srcIP="10.128.2.16:46810" resp=200\nI0123 14:24:24.221870       1 status.go:354] The operator is healthy\nI0123 14:24:24.221940       1 status.go:441] No status update necessary, objects are identical\nI0123 14:24:31.992234       1 insightsuploader.go:120] Nothing to report since 2023-01-23T14:06:40Z\n
Jan 23 14:24:37.629 E ns/openshift-insights pod/insights-operator-79c7d54757-ndpbb node/ip-10-0-134-56.ec2.internal container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 14:24:38.744 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-649ccd8b7b-r8c6r node/ip-10-0-134-56.ec2.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error 1-23 14:24:33.815055792 +0000 UTC m=+2324.812819538\nI0123 14:24:33.994732       1 operator.go:159] Finished syncing operator at 179.669145ms\nI0123 14:24:33.996010       1 operator.go:157] Starting syncing operator at 2023-01-23 14:24:33.996005172 +0000 UTC m=+2324.993768878\nI0123 14:24:34.102034       1 operator.go:159] Finished syncing operator at 106.021223ms\nI0123 14:24:34.102088       1 operator.go:157] Starting syncing operator at 2023-01-23 14:24:34.102085266 +0000 UTC m=+2325.099848982\nI0123 14:24:34.652541       1 operator.go:159] Finished syncing operator at 550.444879ms\nI0123 14:24:36.720090       1 operator.go:157] Starting syncing operator at 2023-01-23 14:24:36.720079268 +0000 UTC m=+2327.717842994\nI0123 14:24:36.797734       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0123 14:24:36.798122       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0123 14:24:36.798153       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0123 14:24:36.798163       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0123 14:24:36.798179       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0123 14:24:36.798472       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0123 14:24:36.798525       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0123 14:24:36.798548       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0123 14:24:36.798572       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0123 14:24:36.798596       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0123 14:24:36.798621       1 base_controller.go:167] Shutting down LoggingSyncer ...\nW0123 14:24:36.798725       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 23 14:24:42.998 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7fbc644d58-4txpp node/ip-10-0-134-56.ec2.internal container/cluster-storage-operator reason/ContainerExit code/1 cause/Error        1 base_controller.go:104] All SnapshotCRDController workers have been terminated\nI0123 14:24:40.577289       1 base_controller.go:114] Shutting down worker of StatusSyncer_storage controller ...\nI0123 14:24:40.577292       1 base_controller.go:104] All StatusSyncer_storage workers have been terminated\nI0123 14:24:40.577298       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0123 14:24:40.577301       1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0123 14:24:40.577307       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0123 14:24:40.577311       1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0123 14:24:40.577319       1 base_controller.go:114] Shutting down worker of CSIDriverStarter controller ...\nI0123 14:24:40.577323       1 base_controller.go:104] All CSIDriverStarter workers have been terminated\nI0123 14:24:40.577468       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0123 14:24:40.577484       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0123 14:24:40.577495       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0123 14:24:40.577689       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0123 14:24:40.577706       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0123 14:24:40.577746       1 secure_serving.go:311] Stopped listening on [::]:8443\nI0123 14:24:40.577764       1 genericapiserver.go:363] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"\nW0123 14:24:40.577957       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 23 14:24:42.998 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7fbc644d58-4txpp node/ip-10-0-134-56.ec2.internal container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 14:24:49.454 E ns/openshift-console-operator pod/console-operator-7bddd84bf-ddx7m node/ip-10-0-197-169.ec2.internal container/console-operator reason/ContainerExit code/1 cause/Error bf-ddx7m", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0123 14:24:46.611563       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0123 14:24:46.611601       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-7bddd84bf-ddx7m", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0123 14:24:46.611629       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0123 14:24:46.611988       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0123 14:24:46.612016       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0123 14:24:46.612037       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0123 14:24:46.612046       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0123 14:24:46.612057       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0123 14:24:46.612102       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0123 14:24:46.612120       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0123 14:24:46.612129       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0123 14:24:46.612141       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0123 14:24:46.612151       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0123 14:24:46.612163       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0123 14:24:46.612195       1 base_controller.go:167] Shutting down LoggingSyncer ...\nW0123 14:24:46.612410       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 23 14:24:50.052 E ns/openshift-service-ca-operator pod/service-ca-operator-b54f5d694-c88mp node/ip-10-0-134-56.ec2.internal container/service-ca-operator reason/ContainerExit code/1 cause/Error
Jan 23 14:25:44.617 E ns/openshift-controller-manager pod/controller-manager-s586f node/ip-10-0-197-169.ec2.internal container/controller-manager reason/ContainerExit code/137 cause/Error      1 imagestream_controller.go:66] Starting image stream controller\nI0123 14:00:04.970696       1 controller_manager.go:155] Started "openshift.io/image-import"\nI0123 14:00:04.970709       1 controller_manager.go:158] Started Origin Controllers\nI0123 14:00:04.971579       1 scheduled_image_controller.go:68] Starting scheduled import controller\nI0123 14:00:05.022990       1 templateinstance_controller.go:297] Starting TemplateInstance controller\nI0123 14:00:05.037558       1 shared_informer.go:247] Caches are synced for DefaultRoleBindingController \nI0123 14:00:05.059814       1 buildconfig_controller.go:212] Starting buildconfig controller\nI0123 14:00:05.078708       1 factory.go:85] deploymentconfig controller caches are synced. Starting workers.\nI0123 14:00:05.087331       1 templateinstance_finalizer.go:194] Starting TemplateInstanceFinalizer controller\nI0123 14:00:05.134374       1 shared_informer.go:247] Caches are synced for service account \nI0123 14:00:05.167627       1 factory.go:80] Deployer controller caches are synced. Starting workers.\nI0123 14:00:05.245458       1 deleted_token_secrets.go:70] caches synced\nI0123 14:00:05.245476       1 docker_registry_service.go:156] caches synced\nI0123 14:00:05.245485       1 create_dockercfg_secrets.go:219] urls found\nI0123 14:00:05.245680       1 create_dockercfg_secrets.go:225] caches synced\nI0123 14:00:05.245495       1 deleted_dockercfg_secrets.go:75] caches synced\nI0123 14:00:05.245651       1 docker_registry_service.go:298] Updating registry URLs from map[172.30.108.198:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.108.198:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI0123 14:00:05.275244       1 build_controller.go:475] Starting build controller\nI0123 14:00:05.275264       1 build_controller.go:477] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\n
Jan 23 14:25:45.221 E ns/openshift-controller-manager pod/controller-manager-ntxsj node/ip-10-0-134-56.ec2.internal container/controller-manager reason/ContainerExit code/137 cause/Error I0123 13:57:21.315441       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0123 13:57:21.316991       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0123 13:57:21.317008       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0123 13:57:21.317088       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0123 13:57:21.317396       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n
Jan 23 14:25:45.453 E ns/openshift-controller-manager pod/controller-manager-vnrrt node/ip-10-0-166-131.ec2.internal container/controller-manager reason/ContainerExit code/137 cause/Error I0123 13:57:22.353534       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0123 13:57:22.355696       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0123 13:57:22.355742       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0123 13:57:22.356088       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0123 13:57:22.356353       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n
Jan 23 14:29:28.882 E ns/openshift-machine-config-operator pod/machine-config-operator-66769c8cc4-k6ff6 node/ip-10-0-134-56.ec2.internal container/machine-config-operator reason/ContainerExit code/2 cause/Error  1 status.go:322] Error checking version skew: kube-apiserver does not yet have a version, kubelet skew status: KubeletSkewUnchecked, status reason: KubeletSkewUnchecked, status message: An error occurred when checking kubelet version skew: kube-apiserver does not yet have a version\nE0123 13:48:17.100318       1 status.go:322] Error checking version skew: kube-apiserver does not yet have a version, kubelet skew status: KubeletSkewUnchecked, status reason: KubeletSkewUnchecked, status message: An error occurred when checking kubelet version skew: kube-apiserver does not yet have a version\nE0123 13:48:24.160502       1 status.go:322] Error checking version skew: kube-apiserver does not yet have a version, kubelet skew status: KubeletSkewUnchecked, status reason: KubeletSkewUnchecked, status message: An error occurred when checking kubelet version skew: kube-apiserver does not yet have a version\nE0123 13:48:40.149144       1 status.go:322] Error checking version skew: kube-apiserver does not yet have a version, kubelet skew status: KubeletSkewUnchecked, status reason: KubeletSkewUnchecked, status message: An error occurred when checking kubelet version skew: kube-apiserver does not yet have a version\nI0123 13:49:48.467973       1 event.go:282] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"206eb701-5f4d-4f23-add1-915be4189b96", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'OperatorDegraded: MachineConfigDaemonFailed' Failed to resync 4.9.0-0.nightly-2023-01-18-114336 because: Operation cannot be fulfilled on daemonsets.apps "machine-config-daemon": the object has been modified; please apply your changes to the latest version and try again\nI0123 13:49:48.497554       1 event.go:282] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"206eb701-5f4d-4f23-add1-915be4189b96", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'OperatorNotAvailable' Cluster has deployed [{operator 4.9.0-0.nightly-2023-01-18-114336}]\n
#1617558347253288960 (junit, 7 days ago)
Jan 23 17:11:40.061 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-649ccd8b7b-bttf2 node/ip-10-0-236-16.us-east-2.compute.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error -01-23 17:11:34.370678835 +0000 UTC m=+1683.692456206\nI0123 17:11:34.443075       1 operator.go:159] Finished syncing operator at 72.389621ms\nI0123 17:11:34.443105       1 operator.go:157] Starting syncing operator at 2023-01-23 17:11:34.443102594 +0000 UTC m=+1683.764879953\nI0123 17:11:34.880293       1 operator.go:159] Finished syncing operator at 437.18362ms\nI0123 17:11:34.880330       1 operator.go:157] Starting syncing operator at 2023-01-23 17:11:34.880326756 +0000 UTC m=+1684.202104175\nI0123 17:11:35.478480       1 operator.go:159] Finished syncing operator at 598.146515ms\nI0123 17:11:38.352764       1 operator.go:157] Starting syncing operator at 2023-01-23 17:11:38.352748847 +0000 UTC m=+1687.674526440\nI0123 17:11:38.387850       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0123 17:11:38.387922       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0123 17:11:38.387942       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0123 17:11:38.387953       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0123 17:11:38.387974       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0123 17:11:38.388301       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0123 17:11:38.388321       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0123 17:11:38.388325       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0123 17:11:38.388335       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0123 17:11:38.388839       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0123 17:11:38.388853       1 base_controller.go:167] Shutting down ManagementStateController ...\nW0123 17:11:38.388953       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 23 17:11:40.061 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-649ccd8b7b-bttf2 node/ip-10-0-236-16.us-east-2.compute.internal container/csi-snapshot-controller-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
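
The csi-snapshot-controller-operator entry above shows the expected drain choreography on SIGTERM: RunPreShutdownHooks completes, the ShutdownInitiated / AfterShutdownDelayDuration / InFlightRequestsDrained events fire, each controller is shut down, and library-go finally prints "graceful termination failed, controllers failed with error: stopped", which seems to appear even on an otherwise clean shutdown because the controllers report a generic "stopped" error once their context is cancelled. A minimal sketch of the underlying signal-to-context pattern (illustrative names, not the library-go implementation):

    // Minimal sketch (illustrative only) of the SIGTERM-to-context pattern
    // behind the shutdown sequence in the log above.
    package main

    import (
        "context"
        "log"
        "os"
        "os/signal"
        "sync"
        "syscall"
        "time"
    )

    // runController is a stand-in for a library-go style controller sync loop.
    func runController(ctx context.Context, name string, wg *sync.WaitGroup) {
        defer wg.Done()
        for {
            select {
            case <-ctx.Done():
                log.Printf("Shutting down %s ...", name)
                return
            case <-time.After(10 * time.Second):
                log.Printf("%s: periodic resync", name)
            }
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())

        // Turn SIGTERM/SIGINT into context cancellation, mirroring
        // "Received SIGTERM or SIGINT signal, shutting down controller."
        sigCh := make(chan os.Signal, 1)
        signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGINT)
        go func() {
            <-sigCh
            log.Println("Received SIGTERM or SIGINT signal, shutting down controller.")
            cancel()
        }()

        var wg sync.WaitGroup
        for _, name := range []string{"StatusSyncer", "CSISnapshotWebhookController"} {
            wg.Add(1)
            go runController(ctx, name, &wg)
        }
        wg.Wait()
        log.Println("all controllers terminated")
    }

If this drain does not finish before the pod's terminationGracePeriodSeconds expires, the kubelet escalates to SIGKILL, which is the likely explanation for the code/137 controller-manager exits further down.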
Jan 23 17:11:41.968 E ns/openshift-monitoring pod/cluster-monitoring-operator-55748f77fd-4gj5x node/ip-10-0-236-16.us-east-2.compute.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 17:11:45.045 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7fbc644d58-5jgbq node/ip-10-0-236-16.us-east-2.compute.internal container/cluster-storage-operator reason/ContainerExit code/1 cause/Error verOperatorDeployment controller terminated\nI0123 17:11:44.183551       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0123 17:11:44.184212       1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0123 17:11:44.183557       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0123 17:11:44.184250       1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0123 17:11:44.183562       1 base_controller.go:114] Shutting down worker of DefaultStorageClassController controller ...\nI0123 17:11:44.184299       1 base_controller.go:104] All DefaultStorageClassController workers have been terminated\nI0123 17:11:44.183567       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0123 17:11:44.184338       1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0123 17:11:44.183572       1 base_controller.go:114] Shutting down worker of VSphereProblemDetectorStarter controller ...\nI0123 17:11:44.184392       1 base_controller.go:104] All VSphereProblemDetectorStarter workers have been terminated\nI0123 17:11:44.183581       1 base_controller.go:114] Shutting down worker of CSIDriverStarter controller ...\nI0123 17:11:44.184435       1 base_controller.go:104] All CSIDriverStarter workers have been terminated\nI0123 17:11:44.183592       1 base_controller.go:167] Shutting down AWSEBSCSIDriverOperator ...\nI0123 17:11:44.184558       1 base_controller.go:145] All AWSEBSCSIDriverOperator post start hooks have been terminated\nI0123 17:11:44.183616       1 base_controller.go:114] Shutting down worker of AWSEBSCSIDriverOperator controller ...\nI0123 17:11:44.184572       1 base_controller.go:104] All AWSEBSCSIDriverOperator workers have been terminated\nI0123 17:11:44.184577       1 controller_manager.go:54] AWSEBSCSIDriverOperator controller terminated\nW0123 17:11:44.183882       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 23 17:11:46.046 E ns/openshift-marketplace pod/marketplace-operator-57bd58b96d-sc568 node/ip-10-0-236-16.us-east-2.compute.internal container/marketplace-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 17:11:46.179 E ns/openshift-console-operator pod/console-operator-7bddd84bf-g6t52 node/ip-10-0-166-214.us-east-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error erminate, becoming unready, but keeping serving\nI0123 17:11:44.604142       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-7bddd84bf-g6t52", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0123 17:11:44.604169       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0123 17:11:44.604184       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-7bddd84bf-g6t52", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0123 17:11:44.604196       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0123 17:11:44.604351       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0123 17:11:44.604373       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0123 17:11:44.604385       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0123 17:11:44.604394       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0123 17:11:44.604587       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0123 17:11:44.604679       1 secure_serving.go:311] Stopped listening on [::]:8443\nI0123 17:11:44.604762       1 genericapiserver.go:373] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"\nW0123 17:11:44.606890       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 23 17:11:46.179 E ns/openshift-console-operator pod/console-operator-7bddd84bf-g6t52 node/ip-10-0-166-214.us-east-2.compute.internal container/console-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
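
The console-operator above emits the genericapiserver graceful-termination events with a minimal shutdown duration of 0s: it announces that it has stopped serving (TerminationStoppedServing), reports InFlightRequestsDrained, and closes its secure listener on 8443 (HTTPServerStoppedListening). net/http's Server.Shutdown gives a rough standalone analogue of that stop-listening-then-drain step; the sketch below is illustrative, not the genericapiserver code, and the 60s drain bound is only an example value.

    // Loose analogue (not the genericapiserver implementation) of the
    // "stopped listening" / "in-flight requests drained" events above:
    // http.Server.Shutdown closes the listener, then waits for active
    // requests to finish, bounded here by a drain timeout.
    package main

    import (
        "context"
        "log"
        "net/http"
        "os"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        // Plain HTTP for brevity; the real operator serves TLS on 8443.
        srv := &http.Server{Addr: ":8443", Handler: http.DefaultServeMux}

        go func() {
            // ListenAndServe returns http.ErrServerClosed after Shutdown is called.
            if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
                log.Fatal(err)
            }
        }()

        sigCh := make(chan os.Signal, 1)
        signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGINT)
        <-sigCh

        // Stop accepting new connections, then drain in-flight requests
        // for up to 60s (illustrative bound).
        ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
        defer cancel()
        if err := srv.Shutdown(ctx); err != nil {
            log.Printf("drain did not complete: %v", err)
        }
        log.Println("server stopped listening and in-flight requests drained")
    }

Shutdown returns the context's error if connections are still active when the bound expires, roughly the point at which a server gives up waiting on stragglers.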
Jan 23 17:11:48.067 E ns/openshift-service-ca-operator pod/service-ca-operator-b54f5d694-tjcws node/ip-10-0-236-16.us-east-2.compute.internal container/service-ca-operator reason/ContainerExit code/1 cause/Error
Jan 23 17:12:11.149 E ns/openshift-controller-manager pod/controller-manager-jccdp node/ip-10-0-236-16.us-east-2.compute.internal container/controller-manager reason/ContainerExit code/137 cause/Error I0123 16:46:03.250462       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0123 16:46:03.251525       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0123 16:46:03.251540       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0123 16:46:03.251591       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0123 16:46:03.251612       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n
Jan 23 17:12:11.447 E ns/openshift-controller-manager pod/controller-manager-lm99x node/ip-10-0-166-214.us-east-2.compute.internal container/controller-manager reason/ContainerExit code/137 cause/Error Controller: unknown (get replicationcontrollers)\nE0123 16:50:16.932833       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.Route: unknown (get routes.route.openshift.io)\nE0123 16:54:56.357944       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.Build: the server could not find the requested resource (get builds.build.openshift.io)\nE0123 16:54:56.357971       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.BuildConfig: the server could not find the requested resource (get buildconfigs.build.openshift.io)\nE0123 16:54:56.423809       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.Image: the server could not find the requested resource (get images.image.openshift.io)\nE0123 16:54:56.423817       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.DeploymentConfig: the server could not find the requested resource (get deploymentconfigs.apps.openshift.io)\nE0123 16:54:56.423824       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.TemplateInstance: the server could not find the requested resource (get templateinstances.template.openshift.io)\nE0123 16:54:56.423851       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)\nE0123 16:54:56.834540       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.Route: the server could not find the requested resource (get routes.route.openshift.io)\nE0123 16:54:58.737443       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.Route: failed to list *v1.Route: the server could not find the requested resource (get routes.route.openshift.io)\n
Jan 23 17:12:13.193 E ns/openshift-controller-manager pod/controller-manager-jw2x6 node/ip-10-0-145-243.us-east-2.compute.internal container/controller-manager reason/ContainerExit code/137 cause/Error I0123 16:46:03.004108       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0123 16:46:03.006443       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0123 16:46:03.006460       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0123 16:46:03.006537       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0123 16:46:03.006713       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n
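
Unlike the operators above, the three openshift-controller-manager pods exit with code/137 and their last log lines are ordinary startup and leader-election messages, with no shutdown sequence at all. Container exit codes above 128 encode 128 plus the fatal signal number, so 137 means SIGKILL (9), typically the runtime escalating after the pod's termination grace period expired. A tiny illustrative decoder:

    // Tiny illustrative decoder for container exit codes: values above 128 are
    // 128 + signal number, so code/137 in the entries above means SIGKILL (9).
    package main

    import (
        "fmt"
        "syscall"
    )

    func decode(code int) string {
        if code > 128 {
            sig := syscall.Signal(code - 128)
            return fmt.Sprintf("terminated by signal %d (%v)", int(sig), sig)
        }
        return fmt.Sprintf("exited with status %d (no signal)", code)
    }

    func main() {
        for _, code := range []int{1, 2, 137} { // exit codes seen in this report
            fmt.Printf("code/%d => %s\n", code, decode(code))
        }
    }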

Found in 35.71% of runs (100.00% of failures) across 14 total runs and 1 job (35.71% failed).