Job:
#1842002 (bug, 2 years ago) KubePodCrashLooping kube-controller-manager cluster-policy-controller: 6443: connect: connection refused RELEASE_PENDING
$ curl -s https://storage.googleapis.com/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-4.5/2428/artifacts/e2e-gcp/events.json | jq -r '.items[] | select(.metadata.namespace == "openshift-kube-apiserver") | .firstTimestamp + " " + .lastTimestamp + " " + .message' | sort
...
2020-05-30T01:10:53Z 2020-05-30T01:10:53Z All pending requests processed
2020-05-30T01:10:53Z 2020-05-30T01:10:53Z Server has stopped listening
2020-05-30T01:10:53Z 2020-05-30T01:10:53Z The minimal shutdown duration of 1m10s finished
...
2020-05-30T01:11:58Z 2020-05-30T01:11:58Z Created container kube-apiserver-cert-regeneration-controller
2020-05-30T01:11:58Z 2020-05-30T01:11:58Z Created container kube-apiserver-cert-syncer
2020-05-30T01:11:58Z 2020-05-30T01:11:58Z Started container kube-apiserver
...
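The same events.json can be narrowed to just the graceful-termination sequence by filtering on the event reason. The jq filter below is only a sketch, and it assumes the Termination* reason names that show up elsewhere on this page (older releases may use different names):
$ curl -s https://storage.googleapis.com/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-4.5/2428/artifacts/e2e-gcp/events.json | jq -r '.items[] | select(.metadata.namespace == "openshift-kube-apiserver" and ((.reason // "") | startswith("Termination"))) | (.lastTimestamp // .metadata.creationTimestamp) + " " + .reason + " " + .message' | sort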
#1934628 (bug, 17 months ago) API server stopped reporting healthy during upgrade to 4.7.0 ASSIGNED
during that time the API server was restarted by kubelet due to a failed liveness probe
14:18:00	openshift-kube-apiserver	kubelet	kube-apiserver-ip-10-0-159-123.ec2.internal	Killing	Container kube-apiserver failed liveness probe, will be restarted
14:19:17	openshift-kube-apiserver	apiserver	kube-apiserver-ip-10-0-159-123.ec2.internal	TerminationMinimalShutdownDurationFinished	The minimal shutdown duration of 1m10s finished
moving to etcd team to investigate why etcd was unavailable during that time
#1943804 (bug, 18 months ago) API server on AWS takes disruption between 70s and 110s after pod begins termination via external LB RELEASE_PENDING
    "name": "kube-apiserver-ip-10-0-131-183.ec2.internal",
    "namespace": "openshift-kube-apiserver"
  },
  "kind": "Event",
  "lastTimestamp": null,
  "message": "The minimal shutdown duration of 1m10s finished",
  "metadata": {
    "creationTimestamp": "2021-03-29T12:18:04Z",
    "name": "kube-apiserver-ip-10-0-131-183.ec2.internal.1670cf61b0f72d2d",
    "namespace": "openshift-kube-apiserver",
    "resourceVersion": "89139",
#1921157 (bug, 23 months ago) [sig-api-machinery] Kubernetes APIs remain available for new connections ASSIGNED
T2: At 06:45:58: systemd-shutdown was sending SIGTERM to remaining processes...
T3: At 06:45:58: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Received signal to terminate, becoming unready, but keeping serving (TerminationStart event)
T4: At 06:47:08 kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: The minimal shutdown duration of 1m10s finished (TerminationMinimalShutdownDurationFinished event)
T5: At 06:47:08 kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Server has stopped listening (TerminationStoppedServing event)
T5 is the last event reported from that API server. At T5 the server may wait up to 60s for in-flight requests to complete, and only then does it fire the TerminationGracefulTerminationFinished event.
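One way to check whether a given apiserver ever got past T5 is to list which Termination* reasons were recorded per pod; if TerminationGracefulTerminationFinished is missing from a pod's list, the drain never completed. A sketch against a locally saved events.json (hypothetical filename), using the reason names above:
$ jq -r '[.items[] | select((.reason // "") | startswith("Termination"))] | group_by(.involvedObject.name)[] | .[0].involvedObject.name + ": " + ([.[].reason] | join(", "))' events.json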
#1932097 (bug, 18 months ago) Apiserver liveness probe is marking it as unhealthy during normal shutdown RELEASE_PENDING
Feb 23 20:18:04.212 - 1s    E kube-apiserver-new-connection kube-apiserver-new-connection is not responding to GET requests
Feb 23 20:18:05.318 I kube-apiserver-new-connection kube-apiserver-new-connection started responding to GET requests
Deeper detail from the node log shows that one of the instances stops serving connections right when this error occurs.
Feb 23 20:18:02.505 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-203-7.us-east-2.compute.internal node/ip-10-0-203-7 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Feb 23 20:18:02.509 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-203-7.us-east-2.compute.internal node/ip-10-0-203-7 reason/TerminationStoppedServing Server has stopped listening
Feb 23 20:18:03.148 I ns/openshift-console-operator deployment/console-operator reason/OperatorStatusChanged Status for clusteroperator/console changed: Degraded message changed from "CustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nSyncLoopRefreshDegraded: the server is currently unable to handle the request (get routes.route.openshift.io console)" to "SyncLoopRefreshDegraded: the server is currently unable to handle the request (get routes.route.openshift.io console)" (2 times)
Feb 23 20:18:03.880 E kube-apiserver-reused-connection kube-apiserver-reused-connection started failing: Get "https://api.ci-op-ivyvzgrr-0b477.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 3.21.250.132:6443: connect: connection refused
This looks like the load balancer didn't remove the kube-apiserver from rotation and kept sending traffic, and the connection didn't shut down cleanly - did something regress in how the apiserver handles connections during termination?
#1995804 (bug, 15 months ago) Rewrite carry "UPSTREAM: <carry>: create termination events" to lifecycleEvents RELEASE_PENDING
Use the new lifecycle event names for the events that we generate when an apiserver is gracefully terminating.
Comment 15454963 by kewang@redhat.com at 2021-09-03T09:36:37Z
$ w3m -dump -cols 200 'https://search.ci.openshift.org/?search=The+minimal+shutdown+duration&maxAge=168h&context=5&type=build-log&name=4%5C.9&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job' | grep -E 'kube-system node\/apiserver|openshift-kube-apiserver|openshift-apiserver' > test.log
$ grep 'The minimal shutdown duration of' test.log | head -2
Sep 03 05:22:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-163-71.us-west-1.compute.internal node/ip-10-0-163-71 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Sep 03 05:22:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-163-71.us-west-1.compute.internal node/ip-10-0-163-71 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
$ grep 'Received signal to terminate' test.log | head -2
Sep 03 08:49:11.000 I ns/default namespace/kube-system node/apiserver-75cf4778cb-9zk42 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Sep 03 08:53:40.000 I ns/default namespace/kube-system node/apiserver-75cf4778cb-c8429 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
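A quick way to see which shutdown delays actually appear in those 4.9 hits (1m10s shows up in the older bugs above, 3m30s in the 4.9 search output, 0s for the operators) is to tally the durations in the test.log captured above; this grep/uniq pipeline is just a sketch:
$ grep -o 'The minimal shutdown duration of [^ ]* finished' test.log | sort | uniq -c | sort -rn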
#1955333 (bug, 11 months ago) "Kubernetes APIs remain available for new connections" and similar failing on 4.8 Azure updates NEW
  2021-05-01T03:59:42Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Killing: Stopping container kube-apiserver-check-endpoints
  2021-05-01T03:59:42Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Killing: Stopping container kube-apiserver-insecure-readyz
  2021-05-01T03:59:43Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationPreShutdownHooksFinished: All pre-shutdown hooks have been finished
  2021-05-01T03:59:43Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationStart: Received signal to terminate, becoming unready, but keeping serving
  2021-05-01T03:59:49Z 1 cert-regeneration-controller-lock LeaderElection: ip-10-0-239-74_02f2b687-97f4-44c4-9516-e3fb364deb85 became leader
  2021-05-01T04:00:53Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationMinimalShutdownDurationFinished: The minimal shutdown duration of 1m10s finished
  2021-05-01T04:00:53Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationStoppedServing: Server has stopped listening
  2021-05-01T04:01:53Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationGracefulTerminationFinished: All pending requests processed
  2021-05-01T04:01:55Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Pulling: Pulling image "registry.ci.openshift.org/ocp/4.8-2021-04-30-212732@sha256:e4c7be2f0e8b1e9ef1ad9161061449ec1bdc6953a58f6d456971ee945a8d3197"
  2021-05-01T04:02:05Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Created: Created container setup
  2021-05-01T04:02:05Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Pulled: Container image "registry.ci.openshift.org/ocp/4.8-2021-04-30-212732@sha256:e4c7be2f0e8b1e9ef1ad9161061449ec1bdc6953a58f6d456971ee945a8d3197" already present on machine
That really looks like kube-apiserver rolling out a new version, without the graceful LB handoff we need to avoid connection issues. Unifying the two timelines (a merge sketch follows the list):
* 03:59:43Z TerminationPreShutdownHooksFinished
* 03:59:43Z TerminationStart: Received signal to terminate, becoming unready, but keeping serving
* 04:00:53Z TerminationMinimalShutdownDurationFinished: The minimal shutdown duration of 1m10s finished
* 04:00:53Z TerminationStoppedServing: Server has stopped listening
* 04:00:58.307Z kube-apiserver-new-connection started failing... connection refused
* 04:00:59.314Z kube-apiserver-new-connection started responding to GET requests
* 04:01:03.307Z kube-apiserver-new-connection started failing... connection refused
* 04:01:04.313Z kube-apiserver-new-connection started responding to GET requests
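Since both the event dump and the disruption samples begin with a timestamp, the two timelines above can be merged mechanically once they share the same timestamp format. A rough sketch with hypothetical, pre-normalized files events.txt and disruption.txt (each line prefixed with an ISO-8601 timestamp):
$ cat events.txt disruption.txt | sort -k1,1 | grep -E 'Termination|kube-apiserver-new-connection'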
#1979916 (bug, 18 months ago) kube-apiserver constantly receiving signals to terminate after a fresh install, but still keeps serving ASSIGNED
kube-apiserver-master-0-2
Server has stopped listening
kube-apiserver-master-0-2
The minimal shutdown duration of 1m10s finished
redhat-operators-7p4nb
Stopping container registry-server
Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.8" in 3.09180991s
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-azure-upgrade (all) - 67 runs, 85% failed, 23% of failures match = 19% impact
#1618915395844968448 (junit, 3 days ago)
Jan 27 11:26:30.447 E ns/openshift-monitoring pod/prometheus-operator-6594997947-7sf5v node/ci-op-p2lym4fh-253f3-q8mbw-master-1 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 27 11:26:30.580 E ns/openshift-ingress-canary pod/ingress-canary-4h7w7 node/ci-op-p2lym4fh-253f3-q8mbw-worker-eastus22-w7rxs container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 27 11:26:30.771 E ns/openshift-kube-storage-version-migrator pod/migrator-5554c9565f-7sb6k node/ci-op-p2lym4fh-253f3-q8mbw-master-2 container/migrator reason/ContainerExit code/2 cause/Error I0127 10:40:15.248412       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0127 10:40:15.248484       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0127 10:40:15.248489       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0127 10:40:15.248493       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0127 10:40:15.248498       1 migrator.go:18] FLAG: --kubeconfig=""\nI0127 10:40:15.248502       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0127 10:40:15.248507       1 migrator.go:18] FLAG: --log_dir=""\nI0127 10:40:15.248511       1 migrator.go:18] FLAG: --log_file=""\nI0127 10:40:15.248514       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0127 10:40:15.248517       1 migrator.go:18] FLAG: --logtostderr="true"\nI0127 10:40:15.248520       1 migrator.go:18] FLAG: --one_output="false"\nI0127 10:40:15.248523       1 migrator.go:18] FLAG: --skip_headers="false"\nI0127 10:40:15.248526       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0127 10:40:15.248529       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0127 10:40:15.248532       1 migrator.go:18] FLAG: --v="2"\nI0127 10:40:15.248535       1 migrator.go:18] FLAG: --vmodule=""\nI0127 10:40:15.250049       1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI0127 10:40:32.387962       1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI0127 10:40:32.583102       1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI0127 10:40:33.597692       1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI0127 10:40:33.657091       1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\n
Jan 27 11:26:31.250 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-p2lym4fh-253f3-q8mbw-worker-eastus21-zwwgn container/prometheus-proxy reason/ContainerExit code/2 cause/Error 8: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/27 10:53:17 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/27 10:53:17 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/27 10:53:17 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/27 10:53:17 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/27 10:53:17 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/27 10:53:18 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0127 10:53:18.001440       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/27 10:53:18 http.go:107: HTTPS: listening on [::]:9091\nE0127 10:54:46.130184       1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/27 10:54:46 oauthproxy.go:791: requestauth: 10.129.2.10:36972 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0127 10:55:16.123396       1 webhook.go:111] Failed to make webhook authentic
Jan 27 11:26:31.250 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-p2lym4fh-253f3-q8mbw-worker-eastus21-zwwgn container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-27T10:53:17.190929159Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-27T10:53:17.191018258Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-27T10:53:17.191222256Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-27T10:53:17.888118987Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T10:53:17.888469283Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T10:53:22.33999255Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T10:56:06.568358928Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T10:58:35.515992528Z caller=re
Jan 27 11:26:31.365 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-dmrw5 node/ci-op-p2lym4fh-253f3-q8mbw-master-1 container/console-operator reason/ContainerExit code/1 cause/Error     1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-dmrw5", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0127 11:26:29.912954       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-dmrw5", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0127 11:26:29.913001       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0127 11:26:29.913047       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-dmrw5", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0127 11:26:29.913098       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0127 11:26:29.914070       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0127 11:26:29.914212       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0127 11:26:29.914347       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0127 11:26:29.914402       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0127 11:26:29.914488       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0127 11:26:29.915146       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0127 11:26:29.914599       1 base_controller.go:167] Shutting down ResourceSyncController ...\nW0127 11:26:29.914609       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 27 11:26:31.834 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-p2lym4fh-253f3-q8mbw-worker-eastus23-zxp2w container/config-reloader reason/ContainerExit code/2 cause/Error 3:12.276175409Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-27T10:53:12.27635801Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-27T10:53:13.113011748Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T10:53:13.115926555Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T10:53:15.226025747Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T10:54:21.941452311Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T10:56:30.154135851Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nle
Jan 27 11:26:31.834 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-p2lym4fh-253f3-q8mbw-worker-eastus23-zxp2w container/prometheus-proxy reason/ContainerExit code/2 cause/Error pping path "/" => upstream "http://localhost:9090/"\n2023/01/27 10:53:13 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/27 10:53:13 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/27 10:53:13 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0127 10:53:13.213854       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/27 10:53:13 http.go:107: HTTPS: listening on [::]:9091\nE0127 10:54:29.344032       1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/27 10:54:29 oauthproxy.go:791: requestauth: 10.129.2.10:41210 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0127 10:54:59.339507       1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/27 10:54:59 oauthproxy.go:791: requestauth: 10.129.2.10:41210 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0127 10:55:29.340641       1 webhook.go:111] Failed to make webhook authentic
Jan 27 11:26:31.945 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-p2lym4fh-253f3-q8mbw-worker-eastus23-zxp2w container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/01/27 10:52:58 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/27 10:52:58 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/27 10:52:58 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/27 10:52:58 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/01/27 10:52:58 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/27 10:52:58 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/27 10:52:58 http.go:107: HTTPS: listening on [::]:9095\nI0127 10:52:58.208295       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nE0127 10:55:34.033254       1 reflector.go:127] github.com/openshift/oauth-proxy/providers/openshift/provider.go:347: Failed to watch *v1.ConfigMap: unknown (get configmaps)\n
Jan 27 11:26:31.945 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-p2lym4fh-253f3-q8mbw-worker-eastus23-zxp2w container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-27T10:52:57.649023998Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-27T10:52:57.649339499Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-27T10:52:57.6497693Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-27T10:52:57.650190601Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-01-27T10:52:58.875162339Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\nlevel=info ts=2023-01-27T10:53:04.800379041Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Jan 27 11:26:32.227 E ns/openshift-monitoring pod/node-exporter-jhg4h node/ci-op-p2lym4fh-253f3-q8mbw-worker-eastus21-zwwgn container/node-exporter reason/ContainerExit code/143 cause/Error 7T10:46:52.791Z caller=node_exporter.go:113 collector=meminfo\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=netclass\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=netdev\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=netstat\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=nfs\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=nfsd\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=powersupplyclass\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=pressure\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=rapl\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=schedstat\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=sockstat\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=softnet\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=stat\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=textfile\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=thermal_zone\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=time\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=timex\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=udp_queues\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=uname\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=vmstat\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=xfs\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:113 collector=zfs\nlevel=info ts=2023-01-27T10:46:52.791Z caller=node_exporter.go:195 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2023-01-27T10:46:52.792Z caller=tls_config.go:191 msg="TLS is disabled." http2=false\n
#1618808039375114240 (junit, 4 days ago)
Jan 27 04:31:43.000 - 2s    E disruption/kube-api connection/reused disruption/kube-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-ns1hz8z5-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Jan 27 04:31:43.844 - 1s    E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-ns1hz8z5-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Jan 27 04:31:43.844 - 1s    E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-ns1hz8z5-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers
Jan 27 04:31:43.844 - 1s    E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-ns1hz8z5-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers
Jan 27 04:31:44.000 - 1s    E disruption/openshift-api connection/reused disruption/openshift-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-ns1hz8z5-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers
Jan 27 04:31:50.660 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-rsrqv node/ci-op-ns1hz8z5-253f3-z64q6-master-2 container/console-operator reason/ContainerExit code/1 cause/Error hift-console-operator", Name:"console-operator-6bbd4fcc8c-rsrqv", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0127 04:31:36.631106       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0127 04:31:36.631109       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-rsrqv", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0127 04:31:36.631130       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0127 04:31:36.631133       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0127 04:31:36.631144       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0127 04:31:36.631159       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0127 04:31:36.631157       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-rsrqv", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0127 04:31:36.631171       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0127 04:31:36.631179       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0127 04:31:36.631184       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0127 04:31:36.631196       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0127 04:31:36.631208       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nW0127 04:31:36.631390       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 27 04:32:06.079 E ns/openshift-monitoring pod/cluster-monitoring-operator-894d44997-tzbf7 node/ci-op-ns1hz8z5-253f3-z64q6-master-0 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 27 04:32:06.208 - 4s    E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator
Jan 27 04:32:06.249 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-mfvwc node/ci-op-ns1hz8z5-253f3-z64q6-master-0 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error controller-manager ...\nI0127 04:32:01.134342       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0127 04:32:01.134380       1 reflector.go:225] Stopping reflector *v1.Proxy (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0127 04:32:01.134529       1 reflector.go:225] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0127 04:32:01.134607       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0127 04:32:01.134633       1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0127 04:32:01.134643       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0127 04:32:01.134650       1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0127 04:32:01.134659       1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ...\nI0127 04:32:01.134666       1 base_controller.go:104] All UserCAObservationController workers have been terminated\nI0127 04:32:01.134678       1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0127 04:32:01.134684       1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0127 04:32:01.134708       1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0127 04:32:01.134725       1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0127 04:32:01.134758       1 reflector.go:225] Stopping reflector *v1.Network (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0127 04:32:01.134855       1 reflector.go:225] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nW0127 04:32:01.134563       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 27 04:32:06.249 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-mfvwc node/ci-op-ns1hz8z5-253f3-z64q6-master-0 container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 27 04:32:06.336 E ns/openshift-authentication-operator pod/authentication-operator-57868976d6-rrwcl node/ci-op-ns1hz8z5-253f3-z64q6-master-0 container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1618529901302779904 (junit, 4 days ago)
Jan 26 09:54:16.000 - 1s    E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-mx161pb8-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers
Jan 26 09:54:16.727 - 999ms E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-mx161pb8-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Jan 26 09:54:16.727 - 999ms E disruption/oauth-api connection/reused disruption/oauth-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-mx161pb8-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers
Jan 26 09:54:23.318 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-mx161pb8-253f3-gbtq6-worker-centralus2-tlcc2 container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/26 09:21:15 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/26 09:21:15 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/26 09:21:15 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/26 09:21:15 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/26 09:21:15 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/26 09:21:15 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/26 09:21:15 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0126 09:21:15.226573       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/26 09:21:15 http.go:107: HTTPS: listening on [::]:9091\n
Jan 26 09:54:23.318 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-mx161pb8-253f3-gbtq6-worker-centralus2-tlcc2 container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-26T09:21:14.599935799Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-26T09:21:14.600227301Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-26T09:21:14.600621705Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-26T09:21:15.117728259Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-26T09:21:15.11784656Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-26T09:22:47.867679119Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-26T09:25:10.942088993Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-26T09:31:25.633675248Z caller=re
Jan 26 09:54:23.432 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-67w5q node/ci-op-mx161pb8-253f3-gbtq6-master-2 container/console-operator reason/ContainerExit code/1 cause/Error :"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-67w5q", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0126 09:54:13.937236       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0126 09:54:13.937255       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-67w5q", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0126 09:54:13.937300       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0126 09:54:13.937359       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0126 09:54:13.937395       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0126 09:54:13.937414       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0126 09:54:13.937426       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0126 09:54:13.937436       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0126 09:54:13.937447       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0126 09:54:13.937457       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0126 09:54:13.937461       1 base_controller.go:114] Shutting down worker of StatusSyncer_console controller ...\nI0126 09:54:13.937472       1 base_controller.go:104] All StatusSyncer_console workers have been terminated\nI0126 09:54:13.937472       1 base_controller.go:114] Shutting down worker of ConsoleOperator controller ...\nW0126 09:54:13.937280       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0126 09:54:13.937494       1 base_controller.go:167] Shutting down ConsoleRouteController ...\n
Jan 26 09:54:23.507 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-mx161pb8-253f3-gbtq6-worker-centralus3-fkwzq container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/01/26 09:20:31 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/26 09:20:31 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/26 09:20:31 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/26 09:20:31 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/01/26 09:20:31 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/26 09:20:31 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\nI0126 09:20:31.928071       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/26 09:20:31 http.go:107: HTTPS: listening on [::]:9095\n
Jan 26 09:54:23.507 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-mx161pb8-253f3-gbtq6-worker-centralus3-fkwzq container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-26T09:20:31.644493306Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-26T09:20:31.644566108Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-26T09:20:31.644762013Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-26T09:20:31.645187623Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-01-26T09:20:32.853792722Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Jan 26 09:54:23.760 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-mx161pb8-253f3-gbtq6-worker-centralus1-kp9hl container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/26 09:21:21 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/26 09:21:21 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/26 09:21:21 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/26 09:21:21 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/26 09:21:21 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/26 09:21:21 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/26 09:21:21 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0126 09:21:21.143197       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/26 09:21:21 http.go:107: HTTPS: listening on [::]:9091\n
Jan 26 09:54:23.760 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-mx161pb8-253f3-gbtq6-worker-centralus1-kp9hl container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-26T09:21:20.559244829Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-26T09:21:20.55931863Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-26T09:21:20.559482933Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-26T09:21:20.975962671Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-26T09:21:20.976079872Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-26T09:22:38.820109161Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-26T09:24:49.627916309Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-26T09:31:26.557310515Z caller=re
Jan 26 09:54:24.289 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-mx161pb8-253f3-gbtq6-worker-centralus2-tlcc2 container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/01/26 09:20:34 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/26 09:20:34 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/26 09:20:34 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/26 09:20:34 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/01/26 09:20:34 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/26 09:20:34 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/26 09:20:34 http.go:107: HTTPS: listening on [::]:9095\nI0126 09:20:34.403846       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
#1618759321695293440 (junit, 4 days ago)
Jan 27 01:22:26.483 - 999ms E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-8p8zz22x-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Jan 27 01:22:31.910 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-76f948cf74-kmz4x node/ci-op-8p8zz22x-253f3-z7dvx-master-2 container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error ontroller ...\nI0127 01:22:22.719559       1 base_controller.go:114] Shutting down worker of CSISnapshotWebhookController controller ...\nI0127 01:22:22.720440       1 base_controller.go:104] All CSISnapshotWebhookController workers have been terminated\nI0127 01:22:22.719565       1 base_controller.go:114] Shutting down worker of StatusSyncer_csi-snapshot-controller controller ...\nI0127 01:22:22.720535       1 base_controller.go:104] All StatusSyncer_csi-snapshot-controller workers have been terminated\nI0127 01:22:22.720779       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0127 01:22:22.720897       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0127 01:22:22.720968       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0127 01:22:22.721067       1 secure_serving.go:311] Stopped listening on [::]:8443\nI0127 01:22:22.721225       1 genericapiserver.go:363] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"\nI0127 01:22:22.721135       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0127 01:22:22.719575       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0127 01:22:22.721337       1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0127 01:22:22.719580       1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0127 01:22:22.721407       1 base_controller.go:104] All StaticResourceController workers have been terminated\nW0127 01:22:22.719995       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0127 01:22:22.721154       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/tmp/serving-cert-179899664/tls.crt::/tmp/serving-cert-179899664/tls.key"\n
Jan 27 01:22:32.359 E ns/openshift-monitoring pod/cluster-monitoring-operator-894d44997-v9sdb node/ci-op-8p8zz22x-253f3-z7dvx-master-2 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 27 01:22:33.039 E ns/openshift-ingress-canary pod/ingress-canary-ths9z node/ci-op-8p8zz22x-253f3-z7dvx-worker-centralus2-r6rhc container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 27 01:22:39.257 E ns/openshift-image-registry pod/cluster-image-registry-operator-864d6d8695-4p6wp node/ci-op-8p8zz22x-253f3-z7dvx-master-2 container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 27 01:22:39.342 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-rc7p8 node/ci-op-8p8zz22x-253f3-z7dvx-master-0 container/console-operator reason/ContainerExit code/1 cause/Error ionStart' Received signal to terminate, becoming unready, but keeping serving\nI0127 01:22:38.347964       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0127 01:22:38.347816       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0127 01:22:38.347953       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-rc7p8", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0127 01:22:38.347978       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0127 01:22:38.347985       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0127 01:22:38.347990       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0127 01:22:38.348000       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0127 01:22:38.348002       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-rc7p8", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0127 01:22:38.348013       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0127 01:22:38.348017       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nW0127 01:22:38.348020       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0127 01:22:38.348028       1 base_controller.go:114] Shutting down worker of ConsoleDownloadsDeploymentSyncController controller ...\nI0127 01:22:38.348036       1 base_controller.go:104] All ConsoleDownloadsDeploymentSyncController workers have been terminated\nI0127 01:22:38.348039       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\n
Jan 27 01:22:39.912 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-8p8zz22x-253f3-z7dvx-worker-centralus1-b2slr container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/27 00:53:24 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/27 00:53:24 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/27 00:53:24 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/27 00:53:24 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/27 00:53:24 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/27 00:53:24 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/27 00:53:24 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2023/01/27 00:53:24 http.go:107: HTTPS: listening on [::]:9091\nI0127 00:53:24.127166       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/27 01:04:02 server.go:3120: http: TLS handshake error from 10.131.0.4:40112: read tcp 10.129.2.11:9091->10.131.0.4:40112: read: connection reset by peer\n
Jan 27 01:22:39.912 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-8p8zz22x-253f3-z7dvx-worker-centralus1-b2slr container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-27T00:53:20.583177981Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-27T00:53:20.58327638Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-27T00:53:20.583417178Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-27T00:53:21.016080902Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T00:53:21.016180601Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T00:56:26.437538367Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-27T00:59:47.1336971Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\n
Jan 27 01:22:42.198 E ns/openshift-monitoring pod/node-exporter-vpk5r node/ci-op-8p8zz22x-253f3-z7dvx-master-1 container/node-exporter reason/ContainerExit code/143 cause/Error 7T00:20:44.691Z caller=node_exporter.go:113 collector=meminfo\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=netclass\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=netdev\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=netstat\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=nfs\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=nfsd\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=powersupplyclass\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=pressure\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=rapl\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=schedstat\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=sockstat\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=softnet\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=stat\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=textfile\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=thermal_zone\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=time\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=timex\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=udp_queues\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=uname\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=vmstat\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=xfs\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:113 collector=zfs\nlevel=info ts=2023-01-27T00:20:44.691Z caller=node_exporter.go:195 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2023-01-27T00:20:44.691Z caller=tls_config.go:191 msg="TLS is disabled." http2=false\n
Jan 27 01:22:42.528 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-8p8zz22x-253f3-z7dvx-worker-centralus3-vwgjz container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-27T00:33:08.295946643Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-27T00:33:08.296044843Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-27T00:33:08.296263645Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-27T00:33:08.296519046Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-01-27T00:33:09.996882866Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Jan 27 01:22:42.528 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-8p8zz22x-253f3-z7dvx-worker-centralus3-vwgjz container/alertmanager-proxy reason/ContainerExit code/2 cause/Error ttps://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/27 00:38:50 oauthproxy.go:791: requestauth: 10.131.0.29:58888 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0127 00:38:55.580610       1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/27 00:38:55 oauthproxy.go:791: requestauth: 10.131.0.21:50442 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0127 00:39:16.538578       1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/27 00:39:16 oauthproxy.go:791: requestauth: 10.131.0.29:40652 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0127 00:39:16.555826       1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/27 00:39:16 oauthproxy.go:791: requestauth: 10.131.0.21:58618 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/27 00:44:19 server.go:3120: http: TLS handshake error from 10.128.2.5:52162: read tcp 10.131.0.26:9095->10.128.2.5:52162: read: connection reset by peer\n2023/01/27 00:47:14 server.go:3120: http: TLS handshake error from 10.128.2.5:56558: read tcp 10.131.0.26:9095->10.128.2.5:56558: read: connection reset by peer\n2023/01/27 00:56:31 server.go:3120: http: TLS handshake error from 10.128.2.5:51548: read tcp 10.131.0.26:9095->10.128.2.5:51548: read: connection reset by peer\n
#1618484993225396224 junit 5 days ago
Jan 26 07:06:57.190 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-565qs node/ci-op-xrnryb33-253f3-84qdh-master-1 container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 26 07:06:57.270 E ns/openshift-authentication-operator pod/authentication-operator-57868976d6-db7d4 node/ci-op-xrnryb33-253f3-84qdh-master-1 container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 26 07:06:59.802 E ns/openshift-ingress-canary pod/ingress-canary-q2sgc node/ci-op-xrnryb33-253f3-84qdh-worker-centralus3-657x2 container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 26 07:07:00.198 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-6bfwc node/ci-op-xrnryb33-253f3-84qdh-master-1 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error  base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0126 07:06:58.239276       1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0126 07:06:58.239282       1 base_controller.go:114] Shutting down worker of CSIDriverStarter controller ...\nI0126 07:06:58.239287       1 base_controller.go:104] All CSIDriverStarter workers have been terminated\nI0126 07:06:58.239295       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0126 07:06:58.239303       1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0126 07:06:58.239310       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0126 07:06:58.239311       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0126 07:06:58.239327       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0126 07:06:58.239345       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0126 07:06:58.239346       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nI0126 07:06:58.239362       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0126 07:06:58.239368       1 base_controller.go:114] Shutting down worker of VSphereProblemDetectorStarter controller ...\nI0126 07:06:58.239377       1 base_controller.go:104] All VSphereProblemDetectorStarter workers have been terminated\nI0126 07:06:58.239315       1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0126 07:06:58.239293       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nW0126 07:06:58.239715       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 26 07:07:00.198 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-6bfwc node/ci-op-xrnryb33-253f3-84qdh-master-1 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 26 07:07:07.303 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-qjhzm node/ci-op-xrnryb33-253f3-84qdh-master-2 container/console-operator reason/ContainerExit code/1 cause/Error 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-qjhzm", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0126 07:07:05.302777       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0126 07:07:05.303748       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0126 07:07:05.303999       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0126 07:07:05.304054       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0126 07:07:05.304108       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0126 07:07:05.304144       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0126 07:07:05.304177       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0126 07:07:05.304211       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0126 07:07:05.304245       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0126 07:07:05.304285       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0126 07:07:05.304310       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0126 07:07:05.304323       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0126 07:07:05.304329       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0126 07:07:05.304341       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0126 07:07:05.304354       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0126 07:07:05.304367       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nW0126 07:07:05.304590       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 26 07:07:07.376 - 999ms E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-xrnryb33-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers
Jan 26 07:07:07.376 - 2s    E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-xrnryb33-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Jan 26 07:07:08.000 - 1s    E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-xrnryb33-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Jan 26 07:07:08.376 - 1s    E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-xrnryb33-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers
Jan 26 07:07:09.000 - 1s    E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-xrnryb33-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers
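To line disruption windows like the ones above up against apiserver termination events from the same run, the job's gathered events.json can be queried for TerminationStoppedServing the same way (a sketch; $EVENTS_JSON_URL stands in for the run's artifact URL, which is not reproduced here):

$ curl -s "$EVENTS_JSON_URL" | jq -r '.items[] | select(.reason == "TerminationStoppedServing") | .lastTimestamp + " " + .metadata.namespace + "/" + .metadata.name' | sort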
#1620001766827364352 junit 20 hours ago
Jan 30 11:35:42.000 - 2s    E disruption/oauth-api connection/reused disruption/oauth-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-si98zlil-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers
Jan 30 11:35:42.554 - 999ms E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-si98zlil-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers
Jan 30 11:35:42.554 - 999ms E disruption/kube-api connection/reused disruption/kube-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-si98zlil-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Jan 30 11:35:52.000 - 1s    E ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new stopped responding to GET requests over new connections: Get "https://test-disruption-new-openshift-image-registry.apps.ci-op-si98zlil-253f3.ci.azure.devcluster.openshift.com/healthz": dial tcp 20.237.149.40:443: i/o timeout
Jan 30 11:35:52.726 E ns/openshift-ingress-canary pod/ingress-canary-qz5lk node/ci-op-si98zlil-253f3-x6rsw-worker-westus-t9plx container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 30 11:35:52.939 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-l25nz node/ci-op-si98zlil-253f3-x6rsw-master-0 container/console-operator reason/ContainerExit code/1 cause/Error ersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0130 11:35:50.942219       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0130 11:35:50.942235       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-l25nz", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0130 11:35:50.942255       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-l25nz", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0130 11:35:50.942276       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0130 11:35:50.942299       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-l25nz", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0130 11:35:50.942318       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0130 11:35:50.943173       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0130 11:35:50.943314       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nW0130 11:35:50.943368       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0130 11:35:50.943439       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0130 11:35:50.943473       1 base_controller.go:167] Shutting down ManagementStateController ...\n
Jan 30 11:35:54.512 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-76f948cf74-47d7p node/ci-op-si98zlil-253f3-x6rsw-master-1 container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error +2700.475801464\nI0130 11:35:26.434510       1 operator.go:159] Finished syncing operator at 68.245416ms\nI0130 11:35:26.434588       1 operator.go:157] Starting syncing operator at 2023-01-30 11:35:26.434581317 +0000 UTC m=+2700.544132180\nI0130 11:35:26.489529       1 operator.go:159] Finished syncing operator at 54.938634ms\nI0130 11:35:26.489583       1 operator.go:157] Starting syncing operator at 2023-01-30 11:35:26.489579152 +0000 UTC m=+2700.599129915\nI0130 11:35:26.528352       1 operator.go:159] Finished syncing operator at 38.761935ms\nI0130 11:35:40.921592       1 operator.go:157] Starting syncing operator at 2023-01-30 11:35:40.921578327 +0000 UTC m=+2715.031129190\nI0130 11:35:41.054280       1 operator.go:159] Finished syncing operator at 132.684804ms\nI0130 11:35:41.097265       1 operator.go:157] Starting syncing operator at 2023-01-30 11:35:41.097249392 +0000 UTC m=+2715.206800255\nI0130 11:35:41.107206       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0130 11:35:41.108140       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0130 11:35:41.109696       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0130 11:35:41.109897       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0130 11:35:41.109993       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0130 11:35:41.110608       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0130 11:35:41.118197       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0130 11:35:41.118812       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0130 11:35:41.118216       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nW0130 11:35:41.118880       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 30 11:35:54.512 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-76f948cf74-47d7p node/ci-op-si98zlil-253f3-x6rsw-master-1 container/csi-snapshot-controller-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 30 11:35:55.459 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-bvmpj node/ci-op-si98zlil-253f3-x6rsw-master-1 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error 7\nI0130 11:35:40.743892       1 reflector.go:225] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0130 11:35:40.744281       1 reflector.go:225] Stopping reflector *v1.ClusterRole (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0130 11:35:40.744376       1 reflector.go:225] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0130 11:35:40.744456       1 reflector.go:225] Stopping reflector *v1.OpenShiftControllerManager (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0130 11:35:40.744545       1 reflector.go:225] Stopping reflector *v1.ClusterRoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0130 11:35:40.744746       1 reflector.go:225] Stopping reflector *v1.Namespace (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0130 11:35:40.744806       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0130 11:35:40.744828       1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0130 11:35:40.746016       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0130 11:35:40.744841       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0130 11:35:40.744853       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0130 11:35:40.744863       1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0130 11:35:40.744868       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nW0130 11:35:40.744930       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0130 11:35:40.744972       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0130 11:35:40.744980       1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\n
Jan 30 11:35:55.459 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-bvmpj node/ci-op-si98zlil-253f3-x6rsw-master-1 container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 30 11:35:55.522 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-6b4cbf84ff-w5s87 node/ci-op-si98zlil-253f3-x6rsw-master-1 container/cluster-node-tuning-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1618390046711222272 junit 5 days ago
Jan 26 00:52:52.681 E ns/openshift-monitoring pod/openshift-state-metrics-7cff956b85-lrwl9 node/ci-op-kifn63xg-253f3-dqr22-worker-westus-9wcvn container/openshift-state-metrics reason/ContainerExit code/2 cause/Error
Jan 26 00:52:53.437 E ns/openshift-monitoring pod/prometheus-operator-6594997947-sfpcb node/ci-op-kifn63xg-253f3-dqr22-master-2 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 26 00:52:54.790 E ns/openshift-image-registry pod/cluster-image-registry-operator-864d6d8695-s9wdj node/ci-op-kifn63xg-253f3-dqr22-master-0 container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 26 00:52:54.873 E ns/openshift-monitoring pod/thanos-querier-65f59cd747-qfg69 node/ci-op-kifn63xg-253f3-dqr22-worker-westus-wm7nf container/oauth-proxy reason/ContainerExit code/2 cause/Error uild-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found]\n2023/01/26 00:16:01 oauthproxy.go:791: requestauth: 10.129.0.15:33076 tokenreviews.authentication.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:thanos-querier" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "thanos-querier" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found]\n
Jan 26 00:52:55.596 E ns/openshift-monitoring pod/thanos-querier-65f59cd747-skczb node/ci-op-kifn63xg-253f3-dqr22-worker-westus-9wcvn container/oauth-proxy reason/ContainerExit code/2 cause/Error bhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/26 00:15:52 oauthproxy.go:791: requestauth: 10.129.0.15:33008 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:15:52.570607       1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/26 00:15:52 oauthproxy.go:791: requestauth: 10.129.0.15:33016 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:15:52.728245       1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/26 00:15:52 oauthproxy.go:791: requestauth: 10.129.0.15:33024 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:15:52.904676       1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/26 00:15:52 oauthproxy.go:791: requestauth: 10.129.0.15:33036 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:15:53.241242       1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/01/26 00:15:53 oauthproxy.go:791: requestauth: 10.129.0.15:33052 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n
Jan 26 00:52:58.697 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-ldwqd node/ci-op-kifn63xg-253f3-dqr22-master-2 container/console-operator reason/ContainerExit code/1 cause/Error 6bbd4fcc8c-ldwqd", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0126 00:52:54.012853       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0126 00:52:54.012878       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-ldwqd", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0126 00:52:54.012898       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0126 00:52:54.012912       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0126 00:52:54.012927       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0126 00:52:54.012939       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0126 00:52:54.012953       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0126 00:52:54.012968       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0126 00:52:54.012999       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0126 00:52:54.013014       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0126 00:52:54.013028       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0126 00:52:54.013040       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0126 00:52:54.013061       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0126 00:52:54.013074       1 base_controller.go:167] Shutting down ConsoleOperator ...\nW0126 00:52:54.013085       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0126 00:52:54.013091       1 base_controller.go:167] Shutting down DownloadsRouteController ...\n
Jan 26 00:53:03.528 - 3s    E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator
Jan 26 00:53:03.530 E ns/openshift-controller-manager pod/controller-manager-8kgv6 node/ci-op-kifn63xg-253f3-dqr22-master-1 container/controller-manager reason/ContainerExit code/137 cause/Error Get "https://172.30.0.1:443/apis/route.openshift.io/v1/routes?resourceVersion=24204": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:15:24.569422       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: Get "https://172.30.0.1:443/apis/build.openshift.io/v1/buildconfigs?resourceVersion=24380": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:15:24.906834       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:15:29.641987       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: Get "https://172.30.0.1:443/apis/image.openshift.io/v1/imagestreams?resourceVersion=24286": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:15:50.907000       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:15:51.333088       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://172.30.0.1:443/apis/route.openshift.io/v1/routes?resourceVersion=24204": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:16:00.819291       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.Build: unknown (get builds.config.openshift.io)\nE0126 00:16:00.819306       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.Deployment: unknown (get deployments.apps)\n
Jan 26 00:53:03.671 E ns/openshift-controller-manager pod/controller-manager-k2vfq node/ci-op-kifn63xg-253f3-dqr22-master-2 container/controller-manager reason/ContainerExit code/137 cause/Error oller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0126 00:07:29.391242       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0126 00:07:29.391261       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0126 00:07:29.391376       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0126 00:07:29.391461       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\nE0126 00:09:29.615258       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:11:39.374670       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:12:25.171042       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:15:08.299487       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\n
Jan 26 00:53:04.990 E ns/openshift-controller-manager pod/controller-manager-x82fc node/ci-op-kifn63xg-253f3-dqr22-master-0 container/controller-manager reason/ContainerExit code/137 cause/Error ecks at 0.0.0.0:8443\nI0126 00:07:32.517962       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\nE0126 00:09:18.992547       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:09:47.705806       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:11:33.779159       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:12:27.592245       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:15:13.920651       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0126 00:15:45.140187       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\n
Jan 26 00:53:05.050 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-fcg5q node/ci-op-kifn63xg-253f3-dqr22-master-0 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error down event" name="ShutdownInitiated"\nI0126 00:52:58.778732       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0126 00:52:58.778759       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0126 00:52:58.778873       1 reflector.go:225] Stopping reflector *v1.Build (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0126 00:52:58.778902       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0126 00:52:58.778929       1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0126 00:52:58.778936       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0126 00:52:58.778951       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0126 00:52:58.778965       1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0126 00:52:58.778978       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0126 00:52:58.778978       1 reflector.go:225] Stopping reflector *v1.Namespace (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0126 00:52:58.779058       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0126 00:52:58.779133       1 reflector.go:225] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0126 00:52:58.779216       1 reflector.go:225] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0126 00:52:58.779234       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nI0126 00:52:58.779306       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nW0126 00:52:58.779323       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
#1617671998660415488 junit 7 days ago
Jan 24 01:04:44.682 E ns/openshift-authentication-operator pod/authentication-operator-57868976d6-2gb4q node/ci-op-hmx5vnk0-253f3-xgz92-master-1 container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 01:04:44.774 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-skxrp node/ci-op-hmx5vnk0-253f3-xgz92-master-1 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error  1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0124 01:04:40.500947       1 reflector.go:225] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 01:04:40.500953       1 reflector.go:225] Stopping reflector *v1.ServiceAccount (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 01:04:40.500848       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0124 01:04:40.500836       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 01:04:40.500684       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0124 01:04:40.501013       1 reflector.go:225] Stopping reflector *v1.Build (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 01:04:40.501044       1 reflector.go:225] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 01:04:40.501058       1 reflector.go:225] Stopping reflector *v1.Image (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 01:04:40.501064       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nW0124 01:04:40.501085       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0124 01:04:40.501090       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0124 01:04:40.501093       1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0124 01:04:40.501101       1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0124 01:04:40.501123       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\n
Jan 24 01:04:44.774 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-skxrp node/ci-op-hmx5vnk0-253f3-xgz92-master-1 container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 01:04:48.658 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-q9tr6 node/ci-op-hmx5vnk0-253f3-xgz92-master-1 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error 5.746520       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0124 01:04:45.746538       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0124 01:04:45.746543       1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0124 01:04:45.746547       1 base_controller.go:114] Shutting down worker of VSphereProblemDetectorStarter controller ...\nI0124 01:04:45.746553       1 base_controller.go:104] All VSphereProblemDetectorStarter workers have been terminated\nI0124 01:04:45.746567       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0124 01:04:45.746571       1 base_controller.go:114] Shutting down worker of CSIDriverStarter controller ...\nI0124 01:04:45.746571       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0124 01:04:45.746578       1 base_controller.go:104] All CSIDriverStarter workers have been terminated\nI0124 01:04:45.746579       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0124 01:04:45.746572       1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0124 01:04:45.746587       1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0124 01:04:45.746589       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0124 01:04:45.746599       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0124 01:04:45.746610       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0124 01:04:45.746621       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nW0124 01:04:45.746640       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 24 01:04:48.658 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-q9tr6 node/ci-op-hmx5vnk0-253f3-xgz92-master-1 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 01:04:50.849 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-2ckzl node/ci-op-hmx5vnk0-253f3-xgz92-master-0 container/console-operator reason/ContainerExit code/1 cause/Error ] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-2ckzl", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0124 01:04:48.287368       1 base_controller.go:114] Shutting down worker of ConsoleRouteController controller ...\nI0124 01:04:48.287379       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0124 01:04:48.287381       1 base_controller.go:104] All ConsoleRouteController workers have been terminated\nI0124 01:04:48.287388       1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0124 01:04:48.287395       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0124 01:04:48.287397       1 base_controller.go:114] Shutting down worker of StatusSyncer_console controller ...\nI0124 01:04:48.287403       1 base_controller.go:114] Shutting down worker of HealthCheckController controller ...\nI0124 01:04:48.287405       1 base_controller.go:104] All StatusSyncer_console workers have been terminated\nI0124 01:04:48.287408       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nW0124 01:04:48.287406       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0124 01:04:48.287422       1 base_controller.go:114] Shutting down worker of ConsoleOperator controller ...\nI0124 01:04:48.287424       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0124 01:04:48.287429       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0124 01:04:48.287438       1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0124 01:04:48.287439       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0124 01:04:48.287415       1 base_controller.go:114] Shutting down worker of ConsoleServiceController controller ...\n
Jan 24 01:04:53.000 - 1s    E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-hmx5vnk0-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers
Jan 24 01:04:53.000 - 1s    E disruption/openshift-api connection/reused disruption/openshift-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-hmx5vnk0-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers
Jan 24 01:04:54.000 - 1s    E disruption/kube-api connection/reused disruption/kube-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-hmx5vnk0-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Jan 24 01:04:54.000 - 1s    E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-hmx5vnk0-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers
Jan 24 01:04:54.063 - 1s    E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-hmx5vnk0-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers
#1617894426783256576 junit 6 days ago
Jan 24 16:02:03.953 - 999ms E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-c9gsxnni-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Jan 24 16:02:03.953 - 1s    E disruption/openshift-api connection/reused disruption/openshift-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-c9gsxnni-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers
Jan 24 16:02:15.878 E ns/openshift-ingress-canary pod/ingress-canary-gqn8s node/ci-op-c9gsxnni-253f3-2q2c4-worker-eastus2-crq9p container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 24 16:02:20.045 E ns/openshift-monitoring pod/cluster-monitoring-operator-894d44997-nhhv7 node/ci-op-c9gsxnni-253f3-2q2c4-master-0 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 16:02:21.354 E ns/openshift-kube-storage-version-migrator pod/migrator-5554c9565f-qvrw2 node/ci-op-c9gsxnni-253f3-2q2c4-master-1 container/migrator reason/ContainerExit code/2 cause/Error I0124 15:05:42.731341       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0124 15:05:42.731427       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0124 15:05:42.731432       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0124 15:05:42.731436       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0124 15:05:42.731440       1 migrator.go:18] FLAG: --kubeconfig=""\nI0124 15:05:42.731444       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0124 15:05:42.731449       1 migrator.go:18] FLAG: --log_dir=""\nI0124 15:05:42.731452       1 migrator.go:18] FLAG: --log_file=""\nI0124 15:05:42.731455       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0124 15:05:42.731458       1 migrator.go:18] FLAG: --logtostderr="true"\nI0124 15:05:42.731461       1 migrator.go:18] FLAG: --one_output="false"\nI0124 15:05:42.731464       1 migrator.go:18] FLAG: --skip_headers="false"\nI0124 15:05:42.731466       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0124 15:05:42.731469       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0124 15:05:42.731472       1 migrator.go:18] FLAG: --v="2"\nI0124 15:05:42.731475       1 migrator.go:18] FLAG: --vmodule=""\nI0124 15:05:42.734041       1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI0124 15:05:56.867306       1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI0124 15:05:57.004676       1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI0124 15:05:58.077793       1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI0124 15:05:58.779798       1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\nI0124 15:11:58.008335       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Jan 24 16:02:21.451 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-2cb45 node/ci-op-c9gsxnni-253f3-2q2c4-master-1 container/console-operator reason/ContainerExit code/1 cause/Error start hooks have been terminated\nI0124 16:02:16.854218       1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated\nI0124 16:02:16.854226       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0124 16:02:16.854230       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0124 16:02:16.854232       1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0124 16:02:16.854237       1 base_controller.go:114] Shutting down worker of StatusSyncer_console controller ...\nI0124 16:02:16.854242       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0124 16:02:16.854243       1 base_controller.go:104] All StatusSyncer_console workers have been terminated\nI0124 16:02:16.854248       1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0124 16:02:16.854258       1 base_controller.go:114] Shutting down worker of ConsoleOperator controller ...\nI0124 16:02:16.854265       1 base_controller.go:104] All ConsoleOperator workers have been terminated\nI0124 16:02:16.854295       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-2cb45", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0124 16:02:16.854351       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-2cb45", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0124 16:02:16.854393       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nW0124 16:02:16.854004       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 24 16:02:27.395 E ns/openshift-controller-manager pod/controller-manager-grv2w node/ci-op-c9gsxnni-253f3-2q2c4-master-2 container/controller-manager reason/ContainerExit code/137 cause/Error I0124 15:17:37.735554       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0124 15:17:37.738517       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0124 15:17:37.738556       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0124 15:17:37.738626       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0124 15:17:37.738852       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\nE0124 15:19:27.560355       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0124 15:20:32.699864       1 leaderelection.go:367] Failed to update lock: Operation cannot be fulfilled on configmaps "openshift-master-controllers": the object has been modified; please apply your changes to the latest version and try again\nE0124 15:22:20.595511       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0124 15:22:58.474664       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\n
Jan 24 16:02:28.394 E ns/openshift-controller-manager pod/controller-manager-f8lsb node/ci-op-c9gsxnni-253f3-2q2c4-master-1 container/controller-manager reason/ContainerExit code/137 cause/Error : dial tcp 172.30.0.1:443: connect: connection refused\nE0124 15:23:08.814596       1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0124 15:23:22.199122       1 reflector.go:138] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Failed to watch *v1.DeploymentConfig: the server could not find the requested resource (get deploymentconfigs.apps.openshift.io)\nE0124 16:02:03.423526       1 imagestream_controller.go:136] Error syncing image stream "openshift/openjdk-11-rhel7": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "openjdk-11-rhel7": the object has been modified; please apply your changes to the latest version and try again\nE0124 16:02:03.905515       1 imagestream_controller.go:136] Error syncing image stream "openshift/jenkins-agent-maven": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "jenkins-agent-maven": the object has been modified; please apply your changes to the latest version and try again\nE0124 16:02:10.383507       1 imagestream_controller.go:136] Error syncing image stream "openshift/apicurito-ui": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "apicurito-ui": the object has been modified; please apply your changes to the latest version and try again\nE0124 16:02:10.397298       1 imagestream_controller.go:136] Error syncing image stream "openshift/nodejs": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "nodejs": the object has been modified; please apply your changes to the latest version and try again\nE0124 16:02:10.740495       1 imagestream_controller.go:136] Error syncing image stream "openshift/jenkins-agent-maven": Operation cannot be fulfilled on imagestream.image.openshift.io "jenkins-agent-maven": the image stream was updated from "42195" to "42448"\n
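Both controller-manager exits above (code 137) log repeated "error retrieving resource lock ... connection refused" against the service-network apiserver VIP; with client-go leader election such failures are retried and only cost leadership if the lease cannot be renewed within the renew deadline. Below is a sketch of that pattern, assuming a Lease-backed lock and arbitrary timings (the component in the log uses the openshift-master-controllers ConfigMap in the openshift-controller-manager namespace).

// leaderelection_sketch.go: the retry behaviour behind the
// "error retrieving resource lock ... connection refused" lines above.
// Sketch only: lock type and durations are assumptions, not the component's.
package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func runWithLeaderElection(ctx context.Context, cfg *rest.Config, startControllers func(context.Context)) error {
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	hostname, _ := os.Hostname()
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"openshift-controller-manager", // namespace, as in the logs
		"openshift-master-controllers", // lock name, as in the logs
		client.CoreV1(),
		client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: hostname},
	)
	if err != nil {
		return err
	}
	// Transient apiserver outages surface only as retry errors; leadership is
	// lost only if renewal keeps failing past RenewDeadline.
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   60 * time.Second, // arbitrary values for the sketch
		RenewDeadline:   35 * time.Second,
		RetryPeriod:     10 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: startControllers,
			OnStoppedLeading: func() { os.Exit(0) },
		},
	})
	return nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	if err := runWithLeaderElection(context.Background(), cfg, func(ctx context.Context) {
		<-ctx.Done() // start controllers here; block until cancelled
	}); err != nil {
		panic(err)
	}
}

While the apiserver is briefly unreachable this loop only emits retry errors like the ones above; the 137 exit code itself indicates the container was ultimately killed with SIGKILL rather than exiting on its own.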
Jan 24 16:02:34.999 E ns/openshift-ingress-canary pod/ingress-canary-nqnkr node/ci-op-c9gsxnni-253f3-2q2c4-worker-eastus3-7gbqt container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 24 16:02:35.086 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-c9gsxnni-253f3-2q2c4-worker-eastus2-crq9p container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/01/24 15:20:45 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/24 15:20:45 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/24 15:20:45 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/24 15:20:45 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/01/24 15:20:45 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/24 15:20:45 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/24 15:20:45 http.go:107: HTTPS: listening on [::]:9095\nI0124 15:20:45.742756       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jan 24 16:02:35.086 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-c9gsxnni-253f3-2q2c4-worker-eastus2-crq9p container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-24T15:20:45.338774797Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-24T15:20:45.338823598Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-24T15:20:45.33938181Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-24T15:20:45.339504113Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-01-24T15:20:46.671265023Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
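The config-reloader container above is only watching the mounted config and secret directories and triggering a reload when any of them changes. A rough sketch of that watch-and-reload loop using fsnotify, with the watched paths copied from the log and a placeholder reload endpoint; the real prometheus-config-reloader is more involved.

// config_reload_sketch.go: the "started watching config file and directories
// for changes" / "Reload triggered" behaviour, reduced to a directory watch
// plus an HTTP reload call. The reload URL is a placeholder.
package main

import (
	"log"
	"net/http"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	for _, dir := range []string{
		"/etc/alertmanager/config",
		"/etc/alertmanager/secrets/alertmanager-main-tls",
	} {
		if err := watcher.Add(dir); err != nil {
			log.Fatalf("watch %s: %v", dir, err)
		}
	}
	log.Println("started watching config file and directories for changes")

	for {
		select {
		case ev := <-watcher.Events:
			// Any change in a watched directory triggers a reload.
			log.Printf("Reload triggered by %s", ev.Name)
			if _, err := http.Post("http://localhost:9093/-/reload", "", nil); err != nil {
				log.Printf("reload failed: %v", err)
			}
		case err := <-watcher.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}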
#1616627232170577920junit10 days ago
Jan 21 03:54:48.298 E ns/openshift-controller-manager pod/controller-manager-8zs48 node/ci-op-yqmmihcn-253f3-fjprt-master-0 container/controller-manager reason/ContainerExit code/137 cause/Error I0121 03:20:46.786340       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0121 03:20:46.788590       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0121 03:20:46.788588       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0121 03:20:46.788703       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0121 03:20:46.788885       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n
Jan 21 03:54:48.498 E ns/openshift-controller-manager pod/controller-manager-lh7nr node/ci-op-yqmmihcn-253f3-fjprt-master-2 container/controller-manager reason/ContainerExit code/137 cause/Error I0121 03:20:46.542128       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202212051626.p0.g79857a3.assembly.stream-79857a3)\nI0121 03:20:46.543938       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9f4cc933f9bced10b1e8b7ebd0695e02f09eba30ac0a43c9cca51c04adc9589"\nI0121 03:20:46.543958       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6601b9ef96b38632311dfced9f4588402fed41a0112586f7dad45ef62474beb1"\nI0121 03:20:46.544070       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0121 03:20:46.544175       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n
Jan 21 03:54:48.606 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-987f7bc9c-f4w7x node/ci-op-yqmmihcn-253f3-fjprt-master-2 container/webhook reason/ContainerExit code/2 cause/Error
Jan 21 03:54:48.715 E ns/openshift-controller-manager pod/controller-manager-m624b node/ci-op-yqmmihcn-253f3-fjprt-master-1 container/controller-manager reason/ContainerExit code/137 cause/Error 35031       1 imagestream_controller.go:136] Error syncing image stream "openshift/rhdm-kieserver-rhel8": Operation cannot be fulfilled on imagestream.image.openshift.io "rhdm-kieserver-rhel8": the image stream was updated from "39442" to "39530"\nE0121 03:54:34.547777       1 imagestream_controller.go:136] Error syncing image stream "openshift/rhdm-kieserver-rhel8": Operation cannot be fulfilled on imagestream.image.openshift.io "rhdm-kieserver-rhel8": the image stream was updated from "39442" to "39530"\nE0121 03:54:34.561794       1 imagestream_controller.go:136] Error syncing image stream "openshift/rhdm-kieserver-rhel8": Operation cannot be fulfilled on imagestream.image.openshift.io "rhdm-kieserver-rhel8": the image stream was updated from "39442" to "39530"\nE0121 03:54:34.584874       1 imagestream_controller.go:136] Error syncing image stream "openshift/rhdm-kieserver-rhel8": Operation cannot be fulfilled on imagestream.image.openshift.io "rhdm-kieserver-rhel8": the image stream was updated from "39442" to "39530"\nE0121 03:54:34.617354       1 imagestream_controller.go:136] Error syncing image stream "openshift/rhdm-kieserver-rhel8": Operation cannot be fulfilled on imagestream.image.openshift.io "rhdm-kieserver-rhel8": the image stream was updated from "39442" to "39530"\nE0121 03:54:34.794919       1 imagestream_controller.go:136] Error syncing image stream "openshift/rhdm-kieserver-rhel8": Operation cannot be fulfilled on imagestream.image.openshift.io "rhdm-kieserver-rhel8": the image stream was updated from "39442" to "39530"\nE0121 03:54:34.868492       1 imagestream_controller.go:136] Error syncing image stream "openshift/apicast-gateway": Operation cannot be fulfilled on imagestream.image.openshift.io "apicast-gateway": the image stream was updated from "39336" to "39583"\nE0121 03:54:35.022165       1 imagestream_controller.go:136] Error syncing image stream "openshift/nginx": Operation cannot be fulfilled on imagestream.image.openshift.io "nginx": the image stream was updated from "39335" to "39595"\n
Jan 21 03:54:48.796 E ns/openshift-ingress-canary pod/ingress-canary-cp94w node/ci-op-yqmmihcn-253f3-fjprt-worker-centralus1-h57bl container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 21 03:54:49.505 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-b8rf7 node/ci-op-yqmmihcn-253f3-fjprt-master-2 container/console-operator reason/ContainerExit code/1 cause/Error ase_controller.go:167] Shutting down ConsoleRouteController ...\nI0121 03:54:48.232737       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0121 03:54:48.232745       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0121 03:54:48.232753       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0121 03:54:48.232762       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0121 03:54:48.232770       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0121 03:54:48.232778       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0121 03:54:48.232682       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-b8rf7", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0121 03:54:48.232887       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-b8rf7", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0121 03:54:48.232951       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0121 03:54:48.233006       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-b8rf7", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0121 03:54:48.233073       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nW0121 03:54:48.233136       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 21 03:54:49.643 E ns/openshift-image-registry pod/cluster-image-registry-operator-864d6d8695-5lhr4 node/ci-op-yqmmihcn-253f3-fjprt-master-1 container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 21 03:54:52.677 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-758f5b59c5-rfmdp node/ci-op-yqmmihcn-253f3-fjprt-master-1 container/snapshot-controller reason/ContainerExit code/2 cause/Error
Jan 21 03:54:54.119 E clusteroperator/storage condition/Available status/False reason/AzureDiskCSIDriverOperatorCR_WaitForOperator changed: AzureDiskCSIDriverOperatorCRAvailable: Waiting for AzureDisk operator to report status
Jan 21 03:54:54.119 - 67s   E clusteroperator/storage condition/Available status/False reason/AzureDiskCSIDriverOperatorCRAvailable: Waiting for AzureDisk operator to report status
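The clusteroperator/storage lines are the monitor rendering the operator's Available condition going False while the AzureDisk CSI driver operator is rolling. For reference, the condition shape behind that rendering can be inspected with a few local structs (simplified stand-ins, not the openshift/api types), feeding in the output of `oc get clusteroperator storage -o json`:

// co_conditions_sketch.go: prints the status conditions of a ClusterOperator
// read from stdin, in roughly the "condition/<type> status/<status> reason/..."
// form used above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type condition struct {
	Type    string `json:"type"`
	Status  string `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

type clusterOperator struct {
	Metadata struct {
		Name string `json:"name"`
	} `json:"metadata"`
	Status struct {
		Conditions []condition `json:"conditions"`
	} `json:"status"`
}

func main() {
	var co clusterOperator
	if err := json.NewDecoder(os.Stdin).Decode(&co); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, c := range co.Status.Conditions {
		fmt.Printf("clusteroperator/%s condition/%s status/%s reason/%s: %s\n",
			co.Metadata.Name, c.Type, c.Status, c.Reason, c.Message)
	}
}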
Jan 21 03:54:58.572 E ns/openshift-ingress-canary pod/ingress-canary-kbqx2 node/ci-op-yqmmihcn-253f3-fjprt-worker-centralus2-jvb4q container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
#1617572352420220928junit7 days ago
Jan 23 18:35:35.743 E clusteroperator/storage condition/Available status/False reason/AzureDiskCSIDriverOperatorCR_WaitForOperator changed: AzureDiskCSIDriverOperatorCRAvailable: Waiting for AzureDisk operator to report status
Jan 23 18:35:35.743 - 39s   E clusteroperator/storage condition/Available status/False reason/AzureDiskCSIDriverOperatorCRAvailable: Waiting for AzureDisk operator to report status
Jan 23 18:35:38.342 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-76f948cf74-65wct node/ci-op-0558kv1s-253f3-p2vm2-master-1 container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-76f948cf74-65wct_8e58f917-4c55-49ae-9777-cb90cbd107ea/csi-snapshot-controller-operator/0.log": lstat /var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-76f948cf74-65wct_8e58f917-4c55-49ae-9777-cb90cbd107ea: no such file or directory
Jan 23 18:35:38.610 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-0558kv1s-253f3-p2vm2-worker-centralus2-q8rxc container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/23 18:03:56 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/23 18:03:56 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/23 18:03:56 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/23 18:03:56 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/23 18:03:56 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/23 18:03:56 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/23 18:03:56 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2023/01/23 18:03:56 http.go:107: HTTPS: listening on [::]:9091\nI0123 18:03:56.914828       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jan 23 18:35:38.610 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-0558kv1s-253f3-p2vm2-worker-centralus2-q8rxc container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-23T18:03:56.288377436Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-23T18:03:56.288516538Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-23T18:03:56.288703341Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-23T18:03:56.865195104Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-23T18:03:56.865320806Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-23T18:05:25.260667521Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-23T18:08:01.197909634Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-23T18:13:22.257931484Z caller=r
Jan 23 18:35:40.436 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-tbzzn node/ci-op-0558kv1s-253f3-p2vm2-master-0 container/console-operator reason/ContainerExit code/1 cause/Error 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-tbzzn", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0123 18:35:35.886723       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0123 18:35:35.889634       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0123 18:35:35.889722       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0123 18:35:35.889768       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0123 18:35:35.889807       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0123 18:35:35.889881       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0123 18:35:35.889923       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0123 18:35:35.889950       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0123 18:35:35.889984       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0123 18:35:35.890016       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0123 18:35:35.890062       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0123 18:35:35.890100       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0123 18:35:35.890123       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0123 18:35:35.890161       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0123 18:35:35.890175       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0123 18:35:35.890187       1 base_controller.go:167] Shutting down HealthCheckController ...\nW0123 18:35:35.890595       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 23 18:35:42.367 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-0558kv1s-253f3-p2vm2-worker-centralus1-295tv container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/01/23 18:03:22 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/23 18:03:22 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/23 18:03:22 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/23 18:03:22 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/01/23 18:03:22 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/23 18:03:22 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\nI0123 18:03:22.229444       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/23 18:03:22 http.go:107: HTTPS: listening on [::]:9095\n
Jan 23 18:35:42.367 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-0558kv1s-253f3-p2vm2-worker-centralus1-295tv container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-23T18:03:22.002355822Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-23T18:03:22.002409221Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-23T18:03:22.002861814Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-23T18:03:22.00305111Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-01-23T18:03:23.204073795Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Jan 23 18:35:43.017 E ns/openshift-monitoring pod/thanos-querier-cf7db74bb-ngghv node/ci-op-0558kv1s-253f3-p2vm2-worker-centralus3-lrfjd container/oauth-proxy reason/ContainerExit code/2 cause/Error 2023/01/23 18:03:23 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2023/01/23 18:03:23 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/23 18:03:23 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/23 18:03:23 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/23 18:03:23 oauthproxy.go:224: compiled skip-auth-regex => "^/-/(healthy|ready)$"\n2023/01/23 18:03:23 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2023/01/23 18:03:23 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/23 18:03:23 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2023/01/23 18:03:23 http.go:107: HTTPS: listening on [::]:9091\nI0123 18:03:23.287988       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/23 18:19:12 server.go:3120: http: TLS handshake error from 10.128.2.12:38032: read tcp 10.129.2.12:9091->10.128.2.12:38032: read: connection reset by peer\n
Jan 23 18:35:44.167 E ns/openshift-monitoring pod/telemeter-client-68bd8fc88d-r62qf node/ci-op-0558kv1s-253f3-p2vm2-worker-centralus2-q8rxc container/telemeter-client reason/ContainerExit code/2 cause/Error
Jan 23 18:35:44.167 E ns/openshift-monitoring pod/telemeter-client-68bd8fc88d-r62qf node/ci-op-0558kv1s-253f3-p2vm2-worker-centralus2-q8rxc container/reload reason/ContainerExit code/2 cause/Error
#1615665531946274816junit12 days ago
Jan 18 12:13:57.884 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-c859f7bd5-4m8fj node/ci-op-bt2ljb4p-253f3-p7pqf-master-0 container/kube-storage-version-migrator-operator reason/ContainerExit code/1 cause/Error response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: " to "All is well"\nI0118 12:13:57.180546       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0118 12:13:57.180696       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0118 12:13:57.180723       1 base_controller.go:167] Shutting down StaticConditionsController ...\nI0118 12:13:57.180721       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0118 12:13:57.180742       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0118 12:13:57.180747       1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0118 12:13:57.180755       1 base_controller.go:114] Shutting down worker of StaticConditionsController controller ...\nI0118 12:13:57.180760       1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0118 12:13:57.180762       1 base_controller.go:104] All StaticConditionsController workers have been terminated\nI0118 12:13:57.180768       1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0118 12:13:57.180783       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0118 12:13:57.180796       1 base_controller.go:167] Shutting down KubeStorageVersionMigrator ...\nI0118 12:13:57.180807       1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...\nI0118 12:13:57.180813       1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated\nW0118 12:13:57.180816       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0118 12:13:57.180820       1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigrator controller ...\nI0118 12:13:57.180827       1 base_controller.go:104] All KubeStorageVersionMigrator workers have been terminated\n
Jan 18 12:14:05.947 E ns/openshift-authentication-operator pod/authentication-operator-57868976d6-sfqqk node/ci-op-bt2ljb4p-253f3-p7pqf-master-0 container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 18 12:14:08.965 E ns/openshift-insights pod/insights-operator-854449444c-ghs9l node/ci-op-bt2ljb4p-253f3-p7pqf-master-0 container/insights-operator reason/ContainerExit code/2 cause/Error .2.13:42526" resp=200\nI0118 12:12:33.332395       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="6.83692ms" userAgent="Prometheus/2.29.2" audit-ID="847c2ab3-05f1-4197-a883-4ba17adc7f73" srcIP="10.129.2.12:49160" resp=200\nI0118 12:12:48.418452       1 reflector.go:535] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Watch close - *v1.ConfigMap total 8 items received\nI0118 12:12:48.465957       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.128795ms" userAgent="Prometheus/2.29.2" audit-ID="de95ec3d-4023-4781-9b7f-27f74ef3c22d" srcIP="10.128.2.13:42526" resp=200\nI0118 12:13:03.340958       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="14.784658ms" userAgent="Prometheus/2.29.2" audit-ID="832e8a5e-84e9-4f32-8c18-269816d0a47b" srcIP="10.129.2.12:49160" resp=200\nI0118 12:13:18.470973       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.761121ms" userAgent="Prometheus/2.29.2" audit-ID="d03c31ff-f13d-4054-80d1-0fc1f10d0bc3" srcIP="10.128.2.13:42526" resp=200\nI0118 12:13:33.334214       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.191567ms" userAgent="Prometheus/2.29.2" audit-ID="2a0fbeda-3694-4df0-8cd3-1b57fd0df94d" srcIP="10.129.2.12:49160" resp=200\nI0118 12:13:45.424248       1 reflector.go:535] k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172: Watch close - *v1.ConfigMap total 9 items received\nI0118 12:13:48.466524       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.282261ms" userAgent="Prometheus/2.29.2" audit-ID="a5ecd9a5-40bd-42e2-bd88-61ba726cd741" srcIP="10.128.2.13:42526" resp=200\nI0118 12:13:53.137911       1 status.go:354] The operator is healthy\nI0118 12:13:53.137992       1 status.go:441] No status update necessary, objects are identical\nI0118 12:14:03.342586       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="15.138871ms" userAgent="Prometheus/2.29.2" audit-ID="bdbd76b4-2b7d-40e5-9d36-13d7a38e71a0" srcIP="10.129.2.12:49160" resp=200\n
Jan 18 12:14:13.241 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65ddc5dd7b-bpjsw node/ci-op-bt2ljb4p-253f3-p7pqf-master-0 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error 167\nI0118 12:14:12.100568       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0118 12:14:12.101420       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0118 12:14:12.100757       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0118 12:14:12.100767       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0118 12:14:12.100787       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0118 12:14:12.100802       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0118 12:14:12.100815       1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0118 12:14:12.101492       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0118 12:14:12.100815       1 reflector.go:225] Stopping reflector *v1.OpenShiftControllerManager (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0118 12:14:12.100828       1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0118 12:14:12.100858       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0118 12:14:12.101523       1 secure_serving.go:301] Stopped listening on [::]:8443\nI0118 12:14:12.100906       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0118 12:14:12.100908       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nI0118 12:14:12.100948       1 reflector.go:225] Stopping reflector *v1.Namespace (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nW0118 12:14:12.100971       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 18 12:14:20.789 E ns/openshift-image-registry pod/cluster-image-registry-operator-864d6d8695-dm26t node/ci-op-bt2ljb4p-253f3-p7pqf-master-0 container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 18 12:14:21.560 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-2rt7z node/ci-op-bt2ljb4p-253f3-p7pqf-master-2 container/console-operator reason/ContainerExit code/1 cause/Error nationStart' Received signal to terminate, becoming unready, but keeping serving\nI0118 12:14:17.816789       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-2rt7z", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0118 12:14:17.816879       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0118 12:14:17.816889       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0118 12:14:17.816868       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0118 12:14:17.816852       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0118 12:14:17.817305       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0118 12:14:17.816918       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0118 12:14:17.816927       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0118 12:14:17.816936       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0118 12:14:17.816945       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0118 12:14:17.816957       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0118 12:14:17.816959       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0118 12:14:17.816966       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0118 12:14:17.816970       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0118 12:14:17.816979       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0118 12:14:17.816980       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nW0118 12:14:17.816983       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 18 12:14:24.759 - 9s    E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator
Jan 18 12:14:25.095 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-4nq62 node/ci-op-bt2ljb4p-253f3-p7pqf-master-0 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error 6:58.972203       1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0118 11:48:19.279895       1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0118 11:51:16.227296       1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0118 11:58:19.280499       1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0118 12:00:53.556965       1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0118 12:08:19.283115       1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0118 12:11:16.228361       1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0118 12:14:24.163778       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0118 12:14:24.163894       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0118 12:14:24.163990       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0118 12:14:24.164301       1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0118 12:14:24.164327       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0118 12:14:24.164345       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0118 12:14:24.164351       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0118 12:14:24.164365       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0118 12:14:24.164378       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0118 12:14:24.164392       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0118 12:14:24.164404       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0118 12:14:24.164418       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nW0118 12:14:24.164606       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
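The cluster-storage-operator exit above, like the storage-version-migrator and openshift-controller-manager operators earlier, follows the same teardown shape: the SIGTERM handler cancels a context, every controller shuts down its workers, and the binary exits once all workers have returned. A reduced Go sketch of that shape, with controller names borrowed from the log and a stand-in reconcile loop; it is not the library-go base_controller code.

// controller_shutdown_sketch.go: the "Shutting down worker of <name> controller"
// / "All <name> workers have been terminated" pattern, reduced to context
// cancellation plus WaitGroups.
package main

import (
	"context"
	"log"
	"os/signal"
	"sync"
	"syscall"
	"time"
)

// runWorkers starts n workers for one controller and blocks until the context
// is cancelled and every worker has returned.
func runWorkers(ctx context.Context, name string, n int, reconcile func(context.Context)) {
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case <-ctx.Done():
					log.Printf("Shutting down worker of %s controller ...", name)
					return
				default:
					reconcile(ctx)
				}
			}
		}()
	}
	wg.Wait()
	log.Printf("All %s workers have been terminated", name)
}

func main() {
	// "Received SIGTERM or SIGINT signal, shutting down controller."
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	var wg sync.WaitGroup
	for _, name := range []string{"SnapshotCRDController", "DefaultStorageClassController", "LoggingSyncer"} {
		name := name
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Stand-in reconcile that just paces the loop.
			runWorkers(ctx, name, 1, func(context.Context) { time.Sleep(time.Second) })
		}()
	}
	wg.Wait()
	log.Println("all controllers stopped")
}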
Jan 18 12:14:26.000 - 1s    E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-bt2ljb4p-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Jan 18 12:14:26.000 - 1s    E disruption/kube-api connection/reused disruption/kube-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-bt2ljb4p-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Jan 18 12:14:26.000 - 1s    E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-bt2ljb4p-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers
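The disruption/kube-api and disruption/openshift-api lines distinguish GETs made over brand-new connections from GETs that reuse an established connection, because an outage can affect one and not the other (for example, new TLS handshakes failing while already-established connections keep draining). A standard-library sketch of the two probe styles follows; the target URL, interval, timeout, skipped certificate verification, and missing authentication are all placeholders rather than what the CI monitor uses.

// disruption_probe_sketch.go: "connection/new" forces a fresh TCP+TLS dial per
// request (DisableKeepAlives), "connection/reused" keeps one connection alive
// across polls.
package main

import (
	"crypto/tls"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	const target = "https://api.example-cluster:6443/api/v1/namespaces/default" // placeholder

	newConnClient := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			DisableKeepAlives: true,                                  // every GET dials a new connection
			TLSClientConfig:   &tls.Config{InsecureSkipVerify: true}, // sketch only; a real probe verifies the cluster CA
		},
	}
	reusedConnClient := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{}, // keep-alives on: connections are reused across polls
	}

	for range time.Tick(time.Second) {
		for kind, c := range map[string]*http.Client{"new": newConnClient, "reused": reusedConnClient} {
			resp, err := c.Get(target)
			if err != nil {
				log.Printf("disruption/kube-api connection/%s stopped responding to GET requests: %v", kind, err)
				continue
			}
			// Drain the body so the keep-alive connection can actually be reused.
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
		}
	}
}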
#1615529286649778176junit13 days ago
Jan 18 03:19:53.654 - 1s    E disruption/openshift-api connection/reused disruption/openshift-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-zckm7vfk-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers
Jan 18 03:19:53.654 - 999ms E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-zckm7vfk-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Jan 18 03:19:53.654 - 999ms E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-zckm7vfk-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers
Jan 18 03:19:53.654 - 1s    E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-zckm7vfk-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers
Jan 18 03:19:54.000 - 1s    E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-zckm7vfk-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers
Jan 18 03:19:56.958 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-fn84r node/ci-op-zckm7vfk-253f3-gd2hs-master-0 container/console-operator reason/ContainerExit code/1 cause/Error TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0118 03:19:48.268032       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0118 03:19:48.268062       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0118 03:19:48.268088       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0118 03:19:48.268100       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0118 03:19:48.268116       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0118 03:19:48.268138       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0118 03:19:48.268149       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0118 03:19:48.268161       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0118 03:19:48.268173       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0118 03:19:48.268183       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0118 03:19:48.268196       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0118 03:19:48.268208       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0118 03:19:48.268219       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0118 03:19:48.268228       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0118 03:19:48.268125       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-fn84r", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0118 03:19:48.268303       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nW0118 03:19:48.268375       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 18 03:20:02.782 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-987f7bc9c-ct64h node/ci-op-zckm7vfk-253f3-gd2hs-master-1 container/webhook reason/ContainerExit code/2 cause/Error
Jan 18 03:20:04.080 E ns/openshift-ingress-canary pod/ingress-canary-npbkw node/ci-op-zckm7vfk-253f3-gd2hs-worker-eastus2-64khk container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 18 03:20:04.821 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-zckm7vfk-253f3-gd2hs-worker-eastus1-qkfjh container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/01/18 02:38:39 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/18 02:38:39 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/18 02:38:39 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/18 02:38:39 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/01/18 02:38:39 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/18 02:38:39 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/18 02:38:39 http.go:107: HTTPS: listening on [::]:9095\nI0118 02:38:39.141870       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/18 03:16:54 server.go:3120: http: TLS handshake error from 10.128.2.7:40244: read tcp 10.131.0.19:9095->10.128.2.7:40244: read: connection reset by peer\n
Jan 18 03:20:04.821 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-zckm7vfk-253f3-gd2hs-worker-eastus1-qkfjh container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-18T02:38:38.689834456Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-18T02:38:38.689892657Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-18T02:38:38.690045559Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-18T02:38:38.690543566Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-01-18T02:38:40.044442932Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Jan 18 03:20:05.103 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-zckm7vfk-253f3-gd2hs-worker-eastus2-64khk container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/18 02:39:17 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/18 02:39:17 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/18 02:39:17 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/18 02:39:17 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/18 02:39:17 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/18 02:39:17 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/18 02:39:17 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0118 02:39:17.490398       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/18 02:39:17 http.go:107: HTTPS: listening on [::]:9091\n

Found in 19.40% of runs (22.81% of failures) across 67 total runs and 1 job (85.07% failed).