#1842002 | bug | 2 years ago | KubePodCrashLooping kube-controller-manager cluster-policy-controller: 6443: connect: connection refused RELEASE_PENDING |
$ curl -s https://storage.googleapis.com/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-4.5/2428/artifacts/e2e-gcp/events.json | jq -r '.items[] | select(.metadata.namespace == "openshift-kube-apiserver") | .firstTimestamp + " " + .lastTimestamp + " " + .message' | sort
...
2020-05-30T01:10:53Z 2020-05-30T01:10:53Z All pending requests processed
2020-05-30T01:10:53Z 2020-05-30T01:10:53Z Server has stopped listening
2020-05-30T01:10:53Z 2020-05-30T01:10:53Z The minimal shutdown duration of 1m10s finished
...
2020-05-30T01:11:58Z 2020-05-30T01:11:58Z Created container kube-apiserver-cert-regeneration-controller
2020-05-30T01:11:58Z 2020-05-30T01:11:58Z Created container kube-apiserver-cert-syncer
2020-05-30T01:11:58Z 2020-05-30T01:11:58Z Started container kube-apiserver
...
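A variant of the same query, a sketch only: it assumes the same events.json layout as above, narrows to the shutdown-related messages, and tolerates the null lastTimestamp seen in the later excerpts (the filter terms and the creationTimestamp fallback are my additions):

$ curl -s https://storage.googleapis.com/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-4.5/2428/artifacts/e2e-gcp/events.json \
    | jq -r '.items[]
        | select(.metadata.namespace == "openshift-kube-apiserver")
        | select((.message // "") | test("shutdown|stopped listening|pending requests"))
        | (.lastTimestamp // .metadata.creationTimestamp) + " " + .message' \
    | sort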
#1934628 | bug | 19 months ago | API server stopped reporting healthy during upgrade to 4.7.0 ASSIGNED |
during that time the API server was restarted by kubelet due to a failed liveness probe
14:18:00 openshift-kube-apiserver kubelet kube-apiserver-ip-10-0-159-123.ec2.internal Killing Container kube-apiserver failed liveness probe, will be restarted
14:19:17 openshift-kube-apiserver apiserver kube-apiserver-ip-10-0-159-123.ec2.internal TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
moving to etcd team to investigate why etcd was unavailable during that time
Comment 15200626 by mfojtik@redhat.com at 2021-06-17T18:29:50Z
The LifecycleStale keyword was removed because the bug got commented on recently.
#1943804 | bug | 20 months ago | API server on AWS takes disruption between 70s and 110s after pod begins termination via external LB RELEASE_PENDING |
"name": "kube-apiserver-ip-10-0-131-183.ec2.internal", "namespace": "openshift-kube-apiserver" }, "kind": "Event", "lastTimestamp": null, "message": "The minimal shutdown duration of 1m10s finished", "metadata": { "creationTimestamp": "2021-03-29T12:18:04Z", "name": "kube-apiserver-ip-10-0-131-183.ec2.internal.1670cf61b0f72d2d", "namespace": "openshift-kube-apiserver", "resourceVersion": "89139", | |||
#1921157 | bug | 2 years ago | [sig-api-machinery] Kubernetes APIs remain available for new connections ASSIGNED |
T2: At 06:45:58: systemd-shutdown was sending SIGTERM to remaining processes...
T3: At 06:45:58: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Received signal to terminate, becoming unready, but keeping serving (TerminationStart event)
T4: At 06:47:08 kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: The minimal shutdown duration of 1m10s finished (TerminationMinimalShutdownDurationFinished event)
T5: At 06:47:08 kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Server has stopped listening (TerminationStoppedServing event)
T5 is the last event reported from that API server. At T5 the server might wait up to 60s for all requests to complete and then it fires the TerminationGracefulTerminationFinished event.
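A quick way to pull this same T3-T5 sequence out of a CI run is to filter the job log on the reason/ tokens these events are logged under (a sketch; build-log.txt stands in for whichever job log you downloaded):

$ grep -hE 'reason/(TerminationStart|AfterShutdownDelayDuration|TerminationMinimalShutdownDurationFinished|TerminationStoppedServing|TerminationGracefulTerminationFinished)' build-log.txt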
#1932097 | bug | 20 months ago | Apiserver liveness probe is marking it as unhealthy during normal shutdown RELEASE_PENDING |
Feb 23 20:18:04.212 - 1s E kube-apiserver-new-connection kube-apiserver-new-connection is not responding to GET requests
Feb 23 20:18:05.318 I kube-apiserver-new-connection kube-apiserver-new-connection started responding to GET requests
Deeper detail from the node log shows that right as we get this error, one of the instances finishes its connection, which is right when the error happens.
Feb 23 20:18:02.505 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-203-7.us-east-2.compute.internal node/ip-10-0-203-7 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Feb 23 20:18:02.509 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-203-7.us-east-2.compute.internal node/ip-10-0-203-7 reason/TerminationStoppedServing Server has stopped listening
Feb 23 20:18:03.148 I ns/openshift-console-operator deployment/console-operator reason/OperatorStatusChanged Status for clusteroperator/console changed: Degraded message changed from "CustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nSyncLoopRefreshDegraded: the server is currently unable to handle the request (get routes.route.openshift.io console)" to "SyncLoopRefreshDegraded: the server is currently unable to handle the request (get routes.route.openshift.io console)" (2 times)
Feb 23 20:18:03.880 E kube-apiserver-reused-connection kube-apiserver-reused-connection started failing: Get "https://api.ci-op-ivyvzgrr-0b477.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 3.21.250.132:6443: connect: connection refused
This kind of looks like the load balancer didn't remove the kube-apiserver and kept sending traffic, and the connection didn't cleanly shut down - did something regress in the apiserver traffic connection?
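To watch this from outside the cluster, roughly what the kube-apiserver-new-connection monitor is doing, a plain poll loop over new connections is enough. This is only a sketch: <cluster-domain> stands in for the real API DNS name, and it only flags connection-level failures (curl reports HTTP code 000 when the connect is refused or times out):

$ while sleep 1; do
    ts=$(date -u +%FT%TZ)
    code=$(curl -sk -o /dev/null -w '%{http_code}' --connect-timeout 2 --max-time 5 \
      https://api.<cluster-domain>:6443/readyz)
    [ "$code" != "000" ] || echo "$ts new connection failed (refused or timed out)"
  done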
#1995804 | bug | 17 months ago | Rewrite carry "UPSTREAM: <carry>: create termination events" to lifecycleEvents RELEASE_PENDING |
Use the new lifecycle event names for the events that we generate when an apiserver is gracefully terminating.
Comment 15454963 by kewang@redhat.com at 2021-09-03T09:36:37Z
$ w3m -dump -cols 200 'https://search.ci.openshift.org/?search=The+minimal+shutdown+duration&maxAge=168h&context=5&type=build-log&name=4%5C.9&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job' | grep -E 'kube-system node\/apiserver|openshift-kube-apiserver|openshift-apiserver' > test.log
$ grep 'The minimal shutdown duration of' test.log | head -2
Sep 03 05:22:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-163-71.us-west-1.compute.internal node/ip-10-0-163-71 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Sep 03 05:22:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-163-71.us-west-1.compute.internal node/ip-10-0-163-71 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
$ grep 'Received signal to terminate' test.log | head -2
Sep 03 08:49:11.000 I ns/default namespace/kube-system node/apiserver-75cf4778cb-9zk42 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Sep 03 08:53:40.000 I ns/default namespace/kube-system node/apiserver-75cf4778cb-c8429 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
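On the same test.log, a rough tally of which lifecycle reasons actually show up (useful for checking whether the renamed events are what 4.9 jobs emit); this is just a suggested follow-up, not part of the original comment:

$ grep -oE 'reason/[A-Za-z]+' test.log | sort | uniq -c | sort -rn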
#1955333 | bug | 13 months ago | "Kubernetes APIs remain available for new connections" and similar failing on 4.8 Azure updates NEW |
2021-05-01T03:59:42Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Killing: Stopping container kube-apiserver-check-endpoints
2021-05-01T03:59:42Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Killing: Stopping container kube-apiserver-insecure-readyz
2021-05-01T03:59:43Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationPreShutdownHooksFinished: All pre-shutdown hooks have been finished
2021-05-01T03:59:43Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationStart: Received signal to terminate, becoming unready, but keeping serving
2021-05-01T03:59:49Z 1 cert-regeneration-controller-lock LeaderElection: ip-10-0-239-74_02f2b687-97f4-44c4-9516-e3fb364deb85 became leader
2021-05-01T04:00:53Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationMinimalShutdownDurationFinished: The minimal shutdown duration of 1m10s finished
2021-05-01T04:00:53Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationStoppedServing: Server has stopped listening
2021-05-01T04:01:53Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationGracefulTerminationFinished: All pending requests processed
2021-05-01T04:01:55Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Pulling: Pulling image "registry.ci.openshift.org/ocp/4.8-2021-04-30-212732@sha256:e4c7be2f0e8b1e9ef1ad9161061449ec1bdc6953a58f6d456971ee945a8d3197"
2021-05-01T04:02:05Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Created: Created container setup
2021-05-01T04:02:05Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Pulled: Container image "registry.ci.openshift.org/ocp/4.8-2021-04-30-212732@sha256:e4c7be2f0e8b1e9ef1ad9161061449ec1bdc6953a58f6d456971ee945a8d3197" already present on machine
That really looks like kube-apiserver is rolling out a new version, and for some reason there is not the graceful LB handoff we need to avoid connection issues. Unifying the two timelines:
* 03:59:43Z TerminationPreShutdownHooksFinished
* 03:59:43Z TerminationStart: Received signal to terminate, becoming unready, but keeping serving
* 04:00:53Z TerminationMinimalShutdownDurationFinished: The minimal shutdown duration of 1m10s finished
* 04:00:53Z TerminationStoppedServing: Server has stopped listening
* 04:00:58.307Z kube-apiserver-new-connection started failing... connection refused
* 04:00:59.314Z kube-apiserver-new-connection started responding to GET requests
* 04:01:03.307Z kube-apiserver-new-connection started failing... connection refused
* 04:01:04.313Z kube-apiserver-new-connection started responding to GET requests
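The spacing in that unified timeline can be sanity-checked with a bit of GNU date arithmetic (a sketch using the timestamps quoted above): TerminationStart to TerminationMinimalShutdownDurationFinished is exactly the 1m10s shutdown-delay-duration, and the first new-connection failure lands about 5s after the server stopped listening, consistent with the load balancer still sending new connections to an instance that is no longer listening.

$ echo $(( $(date -ud 2021-05-01T04:00:53Z +%s) - $(date -ud 2021-05-01T03:59:43Z +%s) ))s
70s
$ echo $(( $(date -ud 2021-05-01T04:00:58Z +%s) - $(date -ud 2021-05-01T04:00:53Z +%s) ))s
5s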
#1979916 | bug | 20 months ago | kube-apiserver constantly receiving signals to terminate after a fresh install, but still keeps serving ASSIGNED |
kube-apiserver-master-0-2 Server has stopped listening
kube-apiserver-master-0-2 The minimal shutdown duration of 1m10s finished
redhat-operators-7p4nb Stopping container registry-server
Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.8" in 3.09180991s
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-azure-upgrade (all) - 66 runs, 80% failed, 38% of failures match = 30% impact | |||
#1639280309952319488 | junit | 2 days ago | |
Mar 24 16:28:27.587 - 1s E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-n398vln3-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers Mar 24 16:28:28.000 - 1s E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-n398vln3-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers Mar 24 16:28:28.587 - 1s E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-n398vln3-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers Mar 24 16:28:29.000 - 1s E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-n398vln3-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers Mar 24 16:28:29.000 - 1s E disruption/openshift-api connection/reused disruption/openshift-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-n398vln3-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers Mar 24 16:28:30.137 E ns/openshift-console-operator pod/console-operator-8d4486798-d9c4n node/ci-op-n398vln3-253f3-7xfdj-master-0 container/console-operator reason/ContainerExit code/1 cause/Error rsion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0324 16:28:24.264036 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-d9c4n", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0324 16:28:24.264053 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0324 16:28:24.264071 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-d9c4n", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0324 16:28:24.264088 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0324 16:28:24.264155 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0324 16:28:24.264185 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0324 16:28:24.264204 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0324 16:28:24.264313 1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0324 16:28:24.264313 1 base_controller.go:167] Shutting down HealthCheckController ...\nI0324 
16:28:24.264350 1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0324 16:28:24.264363 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nW0324 16:28:24.264560 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 24 16:28:34.774 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-95fdb55dc-drhwt node/ci-op-n398vln3-253f3-7xfdj-master-1 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error 7:58.756198 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0324 16:08:08.523010 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0324 16:09:57.503551 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0324 16:13:07.445167 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0324 16:18:08.524082 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0324 16:23:29.366014 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0324 16:28:08.530167 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0324 16:28:24.406804 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0324 16:28:24.407468 1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0324 16:28:24.407535 1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0324 16:28:24.407680 1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0324 16:28:24.407739 1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0324 16:28:24.407789 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0324 16:28:24.407828 1 base_controller.go:167] Shutting down ConfigObserver ...\nI0324 16:28:24.407881 1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0324 16:28:24.407926 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0324 16:28:24.407967 1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0324 16:28:24.408008 1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0324 16:28:24.408047 1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nW0324 16:28:24.408304 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 24 16:28:34.774 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-95fdb55dc-drhwt node/ci-op-n398vln3-253f3-7xfdj-master-1 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 24 16:28:34.953 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-cb5bf8fc7-jw2lr node/ci-op-n398vln3-253f3-7xfdj-master-1 container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error tor.go:157] Starting syncing operator at 2023-03-24 16:28:06.563976469 +0000 UTC m=+3734.680929157\nI0324 16:28:06.829873 1 operator.go:159] Finished syncing operator at 265.88533ms\nI0324 16:28:06.830022 1 operator.go:157] Starting syncing operator at 2023-03-24 16:28:06.83001571 +0000 UTC m=+3734.946968498\nI0324 16:28:07.058190 1 operator.go:159] Finished syncing operator at 228.16406ms\nI0324 16:28:07.058392 1 operator.go:157] Starting syncing operator at 2023-03-24 16:28:07.058386286 +0000 UTC m=+3735.175339074\nI0324 
16:28:07.137236 1 operator.go:159] Finished syncing operator at 78.839498ms\nI0324 16:28:09.570840 1 operator.go:157] Starting syncing operator at 2023-03-24 16:28:09.570827903 +0000 UTC m=+3737.687780591\nI0324 16:28:09.620118 1 operator.go:159] Finished syncing operator at 49.280051ms\nI0324 16:28:21.740728 1 operator.go:157] Starting syncing operator at 2023-03-24 16:28:21.740698634 +0000 UTC m=+3749.857651422\nI0324 16:28:21.831804 1 operator.go:159] Finished syncing operator at 91.095306ms\nI0324 16:28:21.831867 1 operator.go:157] Starting syncing operator at 2023-03-24 16:28:21.831862645 +0000 UTC m=+3749.948815433\nI0324 16:28:21.836746 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0324 16:28:21.837802 1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0324 16:28:21.837898 1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0324 16:28:21.837944 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0324 16:28:21.837985 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0324 16:28:21.838024 1 base_controller.go:167] Shutting down StaticResourceController ...\nI0324 16:28:21.838062 1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nW0324 16:28:21.838198 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 24 16:28:39.329 E ns/openshift-monitoring pod/cluster-monitoring-operator-5bcb464c46-wbmvd node/ci-op-n398vln3-253f3-7xfdj-master-1 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 24 16:28:40.694 E ns/openshift-ingress-canary pod/ingress-canary-p84rd node/ci-op-n398vln3-253f3-7xfdj-worker-westus-cgl84 container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8080\nserving on 8888\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n | |||
#1638868752781021184 | junit | 3 days ago | |
Mar 23 12:49:50.370 E ns/openshift-monitoring pod/telemeter-client-8688c8847-2h8n2 node/ci-op-hp9hics2-253f3-7jdn9-worker-eastus1-m7rh4 container/telemeter-client reason/ContainerExit code/2 cause/Error Mar 23 12:49:50.370 E ns/openshift-monitoring pod/telemeter-client-8688c8847-2h8n2 node/ci-op-hp9hics2-253f3-7jdn9-worker-eastus1-m7rh4 container/reload reason/ContainerExit code/2 cause/Error Mar 23 12:49:51.533 E ns/openshift-monitoring pod/thanos-querier-78dbdf95d-cbtm8 node/ci-op-hp9hics2-253f3-7jdn9-worker-eastus1-m7rh4 container/oauth-proxy reason/ContainerExit code/2 cause/Error 2023/03/23 12:20:02 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2023/03/23 12:20:02 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/03/23 12:20:02 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/03/23 12:20:02 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/03/23 12:20:02 oauthproxy.go:224: compiled skip-auth-regex => "^/-/(healthy|ready)$"\n2023/03/23 12:20:02 oauthproxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2023/03/23 12:20:02 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/03/23 12:20:02 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0323 12:20:02.362303 1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/03/23 12:20:02 http.go:107: HTTPS: listening on [::]:9091\n Mar 23 12:49:54.171 E ns/openshift-kube-storage-version-migrator pod/migrator-97d6f6595-bk5vw node/ci-op-hp9hics2-253f3-7jdn9-master-1 container/migrator reason/ContainerExit code/2 cause/Error I0323 12:08:28.052707 1 migrator.go:18] FLAG: --add_dir_header="false"\nI0323 12:08:28.052872 1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0323 12:08:28.052877 1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0323 12:08:28.052883 1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0323 12:08:28.052891 1 migrator.go:18] FLAG: --kubeconfig=""\nI0323 12:08:28.052899 1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0323 12:08:28.052910 1 migrator.go:18] FLAG: --log_dir=""\nI0323 12:08:28.052918 1 migrator.go:18] FLAG: --log_file=""\nI0323 12:08:28.052922 1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0323 12:08:28.052926 1 migrator.go:18] FLAG: --logtostderr="true"\nI0323 12:08:28.052929 1 migrator.go:18] FLAG: --one_output="false"\nI0323 12:08:28.052933 1 migrator.go:18] FLAG: --skip_headers="false"\nI0323 12:08:28.052937 1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0323 12:08:28.052940 1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0323 12:08:28.052944 1 migrator.go:18] FLAG: --v="2"\nI0323 12:08:28.052948 1 migrator.go:18] FLAG: --vmodule=""\nI0323 12:08:28.056059 1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI0323 12:08:52.175109 1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI0323 12:08:52.318728 1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI0323 12:08:53.334675 1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI0323 
12:08:53.417025 1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\n Mar 23 12:49:54.457 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-hp9hics2-253f3-7jdn9-worker-eastus1-m7rh4 container/thanos-sidecar reason/ContainerExit code/1 cause/Error probe status" status=not-healthy reason="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=warn ts=2023-03-23T12:49:52.075019366Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=info ts=2023-03-23T12:49:52.075051966Z caller=grpc.go:130 service=gRPC/server component=sidecar msg="internal server is shutting down" err="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=info ts=2023-03-23T12:49:52.075102467Z caller=grpc.go:143 service=gRPC/server component=sidecar msg="gracefully stopping internal server"\nlevel=info ts=2023-03-23T12:49:52.075144067Z caller=grpc.go:156 service=gRPC/server component=sidecar msg="internal server is shutdown gracefully" err="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=warn ts=2023-03-23T12:49:52.07539787Z caller=sidecar.go:159 msg="failed to fetch prometheus version. Is Prometheus running? Retrying" err="perform GET request against http://localhost:9090/api/v1/status/buildinfo: Get \"http://localhost:9090/api/v1/status/buildinfo\": context canceled"\nlevel=error ts=2023-03-23T12:49:52.07709689Z caller=main.go:156 err="listen tcp: lookup $(POD_IP): no such host\nlisten gRPC on address [$(POD_IP)]:10901\ngithub.com/thanos-io/thanos/pkg/server/grpc.(*Server).ListenAndServe\n\t/go/src/github.com/improbable-eng/thanos/pkg/server/grpc/grpc.g Mar 23 12:49:54.934 E ns/openshift-console-operator pod/console-operator-8d4486798-b46vq node/ci-op-hp9hics2-253f3-7jdn9-master-2 container/console-operator reason/ContainerExit code/1 cause/Error rminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0323 12:49:51.921085 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0323 12:49:51.921120 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-b46vq", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0323 12:49:51.921139 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0323 12:49:51.920093 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0323 12:49:51.920111 1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0323 12:49:51.921156 1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0323 12:49:51.920121 1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0323 12:49:51.920132 1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0323 12:49:51.920140 1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0323 12:49:51.920148 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0323 12:49:51.920160 1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0323 12:49:51.920169 1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0323 12:49:51.920178 
1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0323 12:49:51.920186 1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0323 12:49:51.920194 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0323 12:49:51.920200 1 base_controller.go:167] Shutting down ConsoleServiceController ...\nW0323 12:49:51.920314 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 23 12:49:54.934 E ns/openshift-console-operator pod/console-operator-8d4486798-b46vq node/ci-op-hp9hics2-253f3-7jdn9-master-2 container/console-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 23 12:50:03.000 E ns/openshift-monitoring pod/node-exporter-hd8t2 node/ci-op-hp9hics2-253f3-7jdn9-worker-eastus3-vtzfs container/node-exporter reason/ContainerExit code/143 cause/Error 3T12:16:11.614Z caller=node_exporter.go:113 collector=meminfo\nlevel=info ts=2023-03-23T12:16:11.614Z caller=node_exporter.go:113 collector=netclass\nlevel=info ts=2023-03-23T12:16:11.614Z caller=node_exporter.go:113 collector=netdev\nlevel=info ts=2023-03-23T12:16:11.614Z caller=node_exporter.go:113 collector=netstat\nlevel=info ts=2023-03-23T12:16:11.614Z caller=node_exporter.go:113 collector=nfs\nlevel=info ts=2023-03-23T12:16:11.614Z caller=node_exporter.go:113 collector=nfsd\nlevel=info ts=2023-03-23T12:16:11.614Z caller=node_exporter.go:113 collector=powersupplyclass\nlevel=info ts=2023-03-23T12:16:11.614Z caller=node_exporter.go:113 collector=pressure\nlevel=info ts=2023-03-23T12:16:11.614Z caller=node_exporter.go:113 collector=rapl\nlevel=info ts=2023-03-23T12:16:11.614Z caller=node_exporter.go:113 collector=schedstat\nlevel=info ts=2023-03-23T12:16:11.614Z caller=node_exporter.go:113 collector=sockstat\nlevel=info ts=2023-03-23T12:16:11.615Z caller=node_exporter.go:113 collector=softnet\nlevel=info ts=2023-03-23T12:16:11.615Z caller=node_exporter.go:113 collector=stat\nlevel=info ts=2023-03-23T12:16:11.615Z caller=node_exporter.go:113 collector=textfile\nlevel=info ts=2023-03-23T12:16:11.615Z caller=node_exporter.go:113 collector=thermal_zone\nlevel=info ts=2023-03-23T12:16:11.615Z caller=node_exporter.go:113 collector=time\nlevel=info ts=2023-03-23T12:16:11.615Z caller=node_exporter.go:113 collector=timex\nlevel=info ts=2023-03-23T12:16:11.615Z caller=node_exporter.go:113 collector=udp_queues\nlevel=info ts=2023-03-23T12:16:11.615Z caller=node_exporter.go:113 collector=uname\nlevel=info ts=2023-03-23T12:16:11.615Z caller=node_exporter.go:113 collector=vmstat\nlevel=info ts=2023-03-23T12:16:11.615Z caller=node_exporter.go:113 collector=xfs\nlevel=info ts=2023-03-23T12:16:11.615Z caller=node_exporter.go:113 collector=zfs\nlevel=info ts=2023-03-23T12:16:11.615Z caller=node_exporter.go:195 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2023-03-23T12:16:11.615Z caller=tls_config.go:191 msg="TLS is disabled." 
http2=false\n Mar 23 12:50:03.051 E ns/openshift-ingress-canary pod/ingress-canary-4bpn4 node/ci-op-hp9hics2-253f3-7jdn9-worker-eastus3-vtzfs container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n Mar 23 12:50:10.819 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-hp9hics2-253f3-7jdn9-worker-eastus1-m7rh4 container/thanos-sidecar reason/ContainerExit code/1 cause/Error 8Z caller=grpc.go:130 service=gRPC/server component=sidecar msg="internal server is shutting down" err="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=info ts=2023-03-23T12:50:10.014220278Z caller=sidecar.go:166 msg="successfully loaded prometheus version"\nlevel=warn ts=2023-03-23T12:50:10.014264479Z caller=sidecar.go:179 msg="failed to fetch initial external labels. Is Prometheus running? Retrying" err="perform GET request against http://localhost:9090/api/v1/status/config: Get \"http://localhost:9090/api/v1/status/config\": context canceled"\nlevel=info ts=2023-03-23T12:50:10.014215678Z caller=grpc.go:143 service=gRPC/server component=sidecar msg="gracefully stopping internal server"\nlevel=warn ts=2023-03-23T12:50:10.014301179Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason="perform GET request against http://localhost:9090/api/v1/status/config: Get \"http://localhost:9090/api/v1/status/config\": context canceled"\nlevel=info ts=2023-03-23T12:50:10.014314879Z caller=grpc.go:156 service=gRPC/server component=sidecar msg="internal server is shutdown gracefully" err="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=error ts=2023-03-23T12:50:10.014433881Z caller=main.go:156 err="listen tcp: lookup $(POD_IP): no such host\nlisten gRPC on address [$(POD_IP)]:10901\ngithub.com/thanos-io/thanos/pkg/server/grpc.(*Server).ListenAndServe\n\t/go/src/github.com/improbable-eng/thanos/pkg/server/grpc/grpc.g Mar 23 12:50:15.000 - 1s E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: error running request: 500 Internal Server Error: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"etcdserver: leader changed","code":500}\n | |||
#1638910519693807616 | junit | 3 days ago | |
Mar 23 15:49:43.514 E ns/openshift-machine-api pod/machine-api-operator-8ccd4d544-ndqfs node/ci-op-8jrfwd63-253f3-s2n2n-master-1 container/machine-api-operator reason/ContainerExit code/2 cause/Error Mar 23 15:52:19.907 E ns/openshift-machine-api pod/machine-api-controllers-64bf988c6-l5c8w node/ci-op-8jrfwd63-253f3-s2n2n-master-2 container/machineset-controller reason/ContainerExit code/1 cause/Error Mar 23 15:53:11.484 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-5dbb5fb469-rwxlx node/ci-op-8jrfwd63-253f3-s2n2n-master-1 container/kube-storage-version-migrator-operator reason/ContainerExit code/1 cause/Error 15:53:10.822832 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0323 15:53:10.822936 1 base_controller.go:167] Shutting down StaticConditionsController ...\nI0323 15:53:10.822996 1 base_controller.go:167] Shutting down KubeStorageVersionMigrator ...\nI0323 15:53:10.823013 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0323 15:53:10.823031 1 base_controller.go:114] Shutting down worker of StaticConditionsController controller ...\nI0323 15:53:10.823041 1 base_controller.go:104] All StaticConditionsController workers have been terminated\nI0323 15:53:10.823028 1 reflector.go:225] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0323 15:53:10.823050 1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigrator controller ...\nI0323 15:53:10.823056 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...\nI0323 15:53:10.823063 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated\nI0323 15:53:10.823064 1 base_controller.go:104] All KubeStorageVersionMigrator workers have been terminated\nI0323 15:53:10.823084 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0323 15:53:10.823114 1 reflector.go:225] Stopping reflector *unstructured.Unstructured (12h0m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0323 15:53:10.823120 1 base_controller.go:167] Shutting down StaticResourceController ...\nI0323 15:53:10.823133 1 base_controller.go:167] Shutting down StatusSyncer_kube-storage-version-migrator ...\nW0323 15:53:10.823135 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0323 15:53:10.823147 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0323 15:53:10.823154 1 base_controller.go:104] All LoggingSyncer workers have been terminated\n Mar 23 15:53:19.731 E ns/openshift-ingress-operator pod/ingress-operator-f64487774-tvsrk node/ci-op-8jrfwd63-253f3-s2n2n-master-1 container/ingress-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 23 15:53:22.599 E ns/openshift-insights pod/insights-operator-774865fbf-2lcml node/ci-op-8jrfwd63-253f3-s2n2n-master-1 container/insights-operator reason/ContainerExit code/2 cause/Error theus/2.29.2" audit-ID="680baf7e-00bd-41fe-8e71-6e3e81780e62" srcIP="10.131.0.20:46550" resp=200\nI0323 15:51:46.673742 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.579253ms" userAgent="Prometheus/2.29.2" audit-ID="813e1aa6-dbd2-47f8-b669-f3f0cac372ab" srcIP="10.129.2.11:58430" resp=200\nI0323 15:51:48.749462 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.199811ms" userAgent="Prometheus/2.29.2" 
audit-ID="265a37ad-d0e4-476a-8d6f-96c81673247c" srcIP="10.131.0.20:46550" resp=200\nI0323 15:52:15.778781 1 status.go:178] Failed to download Insights report\nI0323 15:52:15.778830 1 status.go:354] The operator is healthy\nI0323 15:52:15.778899 1 status.go:441] No status update necessary, objects are identical\nI0323 15:52:16.681803 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="11.635596ms" userAgent="Prometheus/2.29.2" audit-ID="47e0c302-f07b-4146-867a-e98109ebda99" srcIP="10.129.2.11:58430" resp=200\nI0323 15:52:18.753096 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="8.819428ms" userAgent="Prometheus/2.29.2" audit-ID="e3c41311-3589-4366-b207-89706afbb07c" srcIP="10.131.0.20:46550" resp=200\nI0323 15:52:46.673639 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.952975ms" userAgent="Prometheus/2.29.2" audit-ID="5688c7c0-ecd7-47e3-b87e-b518d215d50f" srcIP="10.129.2.11:58430" resp=200\nI0323 15:52:48.750881 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.326514ms" userAgent="Prometheus/2.29.2" audit-ID="680dac49-3a44-426e-abe6-7cea0c24b5fa" srcIP="10.131.0.20:46550" resp=200\nI0323 15:53:16.680168 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="14.326357ms" userAgent="Prometheus/2.29.2" audit-ID="db488bc3-f69a-4eec-be50-34e23c6f338d" srcIP="10.129.2.11:58430" resp=200\nI0323 15:53:18.767165 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="21.828006ms" userAgent="Prometheus/2.29.2" audit-ID="661b4f44-d386-4375-8e17-3f1f6370c6e2" srcIP="10.131.0.20:46550" resp=200\n Mar 23 15:53:37.410 E ns/openshift-console-operator pod/console-operator-8d4486798-2f426 node/ci-op-8jrfwd63-253f3-s2n2n-master-0 container/console-operator reason/ContainerExit code/1 cause/Error o:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0323 15:53:35.450632 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-2f426", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0323 15:53:35.450664 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-2f426", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0323 15:53:35.450916 1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0323 15:53:35.450929 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0323 15:53:35.450949 1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0323 15:53:35.450960 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0323 15:53:35.450965 1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0323 15:53:35.450726 1 base_controller.go:167] Shutting down HealthCheckController ...\nI0323 15:53:35.450738 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0323 15:53:35.450986 1 base_controller.go:114] Shutting down worker of StatusSyncer_console controller ...\nI0323 15:53:35.450989 1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0323 15:53:35.450994 1 base_controller.go:114] Shutting down worker of ConsoleCLIDownloadsController controller ...\nI0323 15:53:35.451002 1 
base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nW0323 15:53:35.450704 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 23 15:53:44.803 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7c6cd6f67f-xzqqn node/ci-op-8jrfwd63-253f3-s2n2n-master-1 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error e_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0323 15:53:43.777506 1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ...\nI0323 15:53:43.777510 1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0323 15:53:43.777514 1 base_controller.go:104] All UserCAObservationController workers have been terminated\nI0323 15:53:43.777507 1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0323 15:53:43.777498 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0323 15:53:43.777528 1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0323 15:53:43.777528 1 reflector.go:225] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0323 15:53:43.777520 1 reflector.go:225] Stopping reflector *v1.Build (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0323 15:53:43.777552 1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nI0323 15:53:43.777556 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0323 15:53:43.777566 1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0323 15:53:43.777548 1 reflector.go:225] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0323 15:53:43.777589 1 reflector.go:225] Stopping reflector *v1.Network (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nW0323 15:53:43.777603 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0323 15:53:43.777950 1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\n Mar 23 15:53:47.646 - 3s E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator Mar 23 15:53:49.522 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-75969f964b-pjnb5 node/ci-op-8jrfwd63-253f3-s2n2n-master-0 container/webhook reason/ContainerExit code/2 cause/Error Mar 23 15:53:52.557 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5c87c94f97-2ktxd node/ci-op-8jrfwd63-253f3-s2n2n-master-0 container/snapshot-controller reason/ContainerExit code/2 cause/Error Mar 23 15:53:57.902 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator authentication is updating versions\n* Cluster operator cloud-credential is updating versions\n* Cluster operator cluster-autoscaler is updating versions\n* Cluster operator console is updating versions\n* Cluster operator csi-snapshot-controller is updating versions\n* Cluster operator image-registry is updating versions\n* Cluster operator ingress is updating versions\n* 
Cluster operator kube-storage-version-migrator is updating versions\n* Cluster operator machine-approver is updating versions\n* Cluster operator monitoring is updating versions\n* Cluster operator node-tuning is updating versions\n* Cluster operator openshift-apiserver is updating versions\n* Cluster operator openshift-controller-manager is updating versions\n* Cluster operator openshift-samples is updating versions\n* Cluster operator operator-lifecycle-manager is updating versions\n* Cluster operator storage is updating versions | |||
#1638816570585124864 | junit | 3 days ago | |
Mar 23 09:39:46.838 E ns/openshift-image-registry pod/cluster-image-registry-operator-695b9d5fd5-5xqkm node/ci-op-1gchgsh5-253f3-np25k-master-1 container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 23 09:39:47.006 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-95fdb55dc-mtzjr node/ci-op-1gchgsh5-253f3-np25k-master-1 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error 3:46.308094 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0323 09:18:44.724835 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0323 09:19:25.456672 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0323 09:23:21.091045 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0323 09:28:44.724855 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0323 09:33:46.308804 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0323 09:38:44.725915 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0323 09:39:37.052637 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0323 09:39:37.053294 1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0323 09:39:37.053345 1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0323 09:39:37.054350 1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0323 09:39:37.054397 1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0323 09:39:37.054423 1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0323 09:39:37.054440 1 base_controller.go:167] Shutting down ConfigObserver ...\nI0323 09:39:37.054454 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0323 09:39:37.054472 1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0323 09:39:37.054478 1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0323 09:39:37.054494 1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nW0323 09:39:37.054496 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0323 09:39:37.054509 1 base_controller.go:167] Shutting down LoggingSyncer ...\n Mar 23 09:39:47.006 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-95fdb55dc-mtzjr node/ci-op-1gchgsh5-253f3-np25k-master-1 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 23 09:39:48.881 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-1gchgsh5-253f3-np25k-worker-westus-gdftw container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/03/23 08:55:57 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/03/23 08:55:57 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/03/23 08:55:57 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/03/23 08:55:57 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/03/23 08:55:57 
oauthproxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/03/23 08:55:57 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/03/23 08:55:57 http.go:107: HTTPS: listening on [::]:9095\nI0323 08:55:57.783124 1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/03/23 09:07:43 server.go:3120: http: TLS handshake error from 10.128.2.7:35714: read tcp 10.129.2.8:9095->10.128.2.7:35714: read: connection reset by peer\n Mar 23 09:39:48.881 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-1gchgsh5-253f3-np25k-worker-westus-gdftw container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-03-23T08:55:57.429742135Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=719dbaf)"\nlevel=info ts=2023-03-23T08:55:57.429819834Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20230304-05:54:47)"\nlevel=info ts=2023-03-23T08:55:57.430025733Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-03-23T08:55:57.430907026Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-03-23T08:55:58.727193428Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n Mar 23 09:39:49.309 E ns/openshift-console-operator pod/console-operator-8d4486798-fmbjx node/ci-op-1gchgsh5-253f3-np25k-master-2 container/console-operator reason/ContainerExit code/1 cause/Error ecoming unready, but keeping serving\nI0323 09:39:37.239806 1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0323 09:39:37.239811 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-fmbjx", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0323 09:39:37.239821 1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0323 09:39:37.239830 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0323 09:39:37.239836 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0323 09:39:37.239849 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0323 09:39:37.239853 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-fmbjx", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0323 09:39:37.239862 1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0323 09:39:37.239871 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0323 09:39:37.239875 1 base_controller.go:167] Shutting down 
UnsupportedConfigOverridesController ...\nI0323 09:39:37.239888 1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0323 09:39:37.239902 1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0323 09:39:37.239909 1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0323 09:39:37.239921 1 base_controller.go:167] Shutting down HealthCheckController ...\nW0323 09:39:37.240088 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 23 09:39:50.465 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-1gchgsh5-253f3-np25k-worker-westus-xbmzt container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/03/23 08:55:51 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/03/23 08:55:51 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/03/23 08:55:51 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/03/23 08:55:52 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/03/23 08:55:52 oauthproxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/03/23 08:55:52 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/03/23 08:55:52 http.go:107: HTTPS: listening on [::]:9095\nI0323 08:55:52.019488 1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/03/23 09:38:30 server.go:3120: http: TLS handshake error from 10.128.2.7:36446: read tcp 10.131.0.19:9095->10.128.2.7:36446: read: connection reset by peer\n Mar 23 09:39:50.465 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-1gchgsh5-253f3-np25k-worker-westus-xbmzt container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-03-23T08:55:51.746784717Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=719dbaf)"\nlevel=info ts=2023-03-23T08:55:51.746884924Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20230304-05:54:47)"\nlevel=info ts=2023-03-23T08:55:51.747145041Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-03-23T08:55:51.747516066Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=error ts=2023-03-23T08:55:52.985184251Z caller=runutil.go:101 msg="function failed. 
Retrying in next tick" err="trigger reload: reload request failed: Post \"http://localhost:9093/-/reload\": dial tcp [::1]:9093: connect: connection refused"\nlevel=info ts=2023-03-23T08:55:57.989517446Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n Mar 23 09:39:50.668 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-1gchgsh5-253f3-np25k-worker-westus-gdftw container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/03/23 08:56:12 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/03/23 08:56:12 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/03/23 08:56:12 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/03/23 08:56:12 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/03/23 08:56:12 oauthproxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/03/23 08:56:12 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/03/23 08:56:12 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0323 08:56:12.986355 1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/03/23 08:56:12 http.go:107: HTTPS: listening on [::]:9091\n Mar 23 09:39:50.668 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-1gchgsh5-253f3-np25k-worker-westus-gdftw container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-03-23T08:56:12.065153273Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=719dbaf)"\nlevel=info ts=2023-03-23T08:56:12.065244773Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20230304-05:54:47)"\nlevel=info ts=2023-03-23T08:56:12.06555677Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-03-23T08:56:12.980929406Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-23T08:56:12.981094304Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-23T08:57:25.585366408Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-23T09:01:16.625499763Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-23T09:14:30.21801577Z caller=rel Mar 23 09:39:51.173 E ns/openshift-monitoring pod/node-exporter-44f52 
node/ci-op-1gchgsh5-253f3-np25k-master-0 container/node-exporter reason/ContainerExit code/143 cause/Error 3T08:44:58.571Z caller=node_exporter.go:113 collector=meminfo\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=netclass\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=netdev\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=netstat\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=nfs\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=nfsd\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=powersupplyclass\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=pressure\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=rapl\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=schedstat\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=sockstat\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=softnet\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=stat\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=textfile\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=thermal_zone\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=time\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=timex\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=udp_queues\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=uname\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=vmstat\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=xfs\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:113 collector=zfs\nlevel=info ts=2023-03-23T08:44:58.571Z caller=node_exporter.go:195 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2023-03-23T08:44:58.571Z caller=tls_config.go:191 msg="TLS is disabled." http2=false\n | |||
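Triage note: the container exit codes in these excerpts follow the usual 128+signal convention — code/143 (the node-exporter above) is SIGTERM, code/137 (the controller-manager pods later in this section) is SIGKILL, for example when the termination grace period expires, while code/1 and code/2 are the processes' own exit statuses. A quick Go check of the arithmetic:

// Exit codes for signal-terminated containers are 128 + signal number.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	fmt.Println("SIGTERM exit code:", 128+int(syscall.SIGTERM)) // 143
	fmt.Println("SIGKILL exit code:", 128+int(syscall.SIGKILL)) // 137
}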
#1638374686964322304 | junit | 4 days ago | |
Mar 22 04:31:44.000 - 1s E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-db3sxrhy-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers Mar 22 04:31:44.504 - 2s E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-db3sxrhy-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers Mar 22 04:31:45.505 - 999ms E disruption/oauth-api connection/reused disruption/oauth-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-db3sxrhy-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers Mar 22 04:31:45.506 - 999ms E disruption/openshift-api connection/reused disruption/openshift-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-db3sxrhy-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers Mar 22 04:31:46.000 - 1s E disruption/oauth-api connection/reused disruption/oauth-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-db3sxrhy-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers Mar 22 04:31:52.711 E ns/openshift-console-operator pod/console-operator-8d4486798-vrzs6 node/ci-op-db3sxrhy-253f3-w565d-master-0 container/console-operator reason/ContainerExit code/1 cause/Error "v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0322 04:31:40.537426 1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0322 04:31:40.537498 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-vrzs6", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0322 04:31:40.537566 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-vrzs6", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0322 04:31:40.537624 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0322 04:31:40.537716 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-vrzs6", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0322 04:31:40.537776 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0322 04:31:40.538286 1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0322 04:31:40.538432 1 base_controller.go:167] Shutting down ConsoleRouteController 
...\nI0322 04:31:40.538525 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0322 04:31:40.538542 1 base_controller.go:167] Shutting down ConsoleServiceController ...\nW0322 04:31:40.538544 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 22 04:32:13.907 E ns/openshift-controller-manager pod/controller-manager-mdrkc node/ci-op-db3sxrhy-253f3-w565d-master-1 container/controller-manager reason/ContainerExit code/137 cause/Error 172.30.39.234:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.39.234:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI0322 03:44:58.625011 1 build_controller.go:475] Starting build controller\nI0322 03:44:58.625151 1 build_controller.go:477] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nW0322 03:46:40.945193 1 reflector.go:441] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 143; INTERNAL_ERROR") has prevented the request from succeeding\nW0322 03:46:40.946311 1 reflector.go:441] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 153; INTERNAL_ERROR") has prevented the request from succeeding\nW0322 03:46:40.947769 1 reflector.go:441] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 147; INTERNAL_ERROR") has prevented the request from succeeding\nW0322 03:46:40.950470 1 reflector.go:441] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 149; INTERNAL_ERROR") has prevented the request from succeeding\nW0322 03:46:40.958731 1 reflector.go:441] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 151; INTERNAL_ERROR") has prevented the request from succeeding\n Mar 22 04:32:15.249 E ns/openshift-controller-manager pod/controller-manager-zkm5p node/ci-op-db3sxrhy-253f3-w565d-master-2 container/controller-manager reason/ContainerExit code/137 cause/Error I0322 03:42:30.449756 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202303040029.p0.g79857a3.assembly.stream-79857a3)\nI0322 03:42:30.451599 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07be3dae76690d861f51f4335fa1367f22e7bbbd0aa5cd0f03d404a8504c5a4b"\nI0322 03:42:30.451628 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41f17928cbcccb8eb903d5c8bddb59aca2402752cdf8596133fca642510a38ee"\nI0322 03:42:30.451710 1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0322 03:42:30.451773 1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\nE0322 03:43:47.106442 1 
leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0322 03:44:35.827575 1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\n Mar 22 04:32:18.995 E ns/openshift-controller-manager pod/controller-manager-b7c8t node/ci-op-db3sxrhy-253f3-w565d-master-0 container/controller-manager reason/ContainerExit code/137 cause/Error I0322 03:42:30.893144 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202303040029.p0.g79857a3.assembly.stream-79857a3)\nI0322 03:42:30.894958 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07be3dae76690d861f51f4335fa1367f22e7bbbd0aa5cd0f03d404a8504c5a4b"\nI0322 03:42:30.894977 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41f17928cbcccb8eb903d5c8bddb59aca2402752cdf8596133fca642510a38ee"\nI0322 03:42:30.895072 1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0322 03:42:30.895122 1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\nE0322 03:44:12.880823 1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0322 03:44:43.487023 1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\n Mar 22 04:32:24.374 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7c6cd6f67f-2759x node/ci-op-db3sxrhy-253f3-w565d-master-2 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.Proxy total 7 items received\nI0322 04:32:06.869862 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="12.516373ms" userAgent="Prometheus/2.29.2" audit-ID="0da7f983-18c3-4f17-9fbe-6bbc5b1baf19" srcIP="10.128.2.11:37086" resp=200\nI0322 04:32:12.120542 1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.Build total 7 items received\nI0322 04:32:12.890911 1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.Deployment total 8 items received\nI0322 04:32:19.109178 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0322 04:32:19.109400 1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0322 04:32:19.109449 1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0322 04:32:19.109562 1 base_controller.go:167] Shutting down ConfigObserver ...\nI0322 04:32:19.109566 1 genericapiserver.go:352] 
"[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0322 04:32:19.109606 1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0322 04:32:19.109620 1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0322 04:32:19.109620 1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nI0322 04:32:19.109635 1 base_controller.go:167] Shutting down StaticResourceController ...\nW0322 04:32:19.109627 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0322 04:32:19.109649 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0322 04:32:19.109654 1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ...\nI0322 04:32:19.109659 1 base_controller.go:104] All ConfigObserver workers have been terminated\n Mar 22 04:32:24.374 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7c6cd6f67f-2759x node/ci-op-db3sxrhy-253f3-w565d-master-2 container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) | |||
#1638200679870763008 | junit | 5 days ago | |
Mar 21 16:43:21.767 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-5dbb5fb469-5wzgq node/ci-op-31x9zgcg-253f3-lsgvf-master-2 container/kube-storage-version-migrator-operator reason/ContainerExit code/1 cause/Error ionsController \nI0321 16:06:10.543427 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...\nI0321 16:06:10.543400 1 base_controller.go:110] Starting #1 worker of StaticConditionsController controller ...\nI0321 16:06:10.543403 1 base_controller.go:110] Starting #1 worker of StaticResourceController controller ...\nI0321 16:06:10.543493 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-storage-version-migrator \nI0321 16:06:10.543504 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-storage-version-migrator controller ...\nI0321 16:06:10.543414 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigrator controller ...\nI0321 16:06:10.543372 1 base_controller.go:73] Caches are synced for LoggingSyncer \nI0321 16:06:10.543604 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...\nI0321 16:43:20.387224 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0321 16:43:20.387306 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0321 16:43:20.387325 1 base_controller.go:167] Shutting down KubeStorageVersionMigrator ...\nI0321 16:43:20.387338 1 base_controller.go:167] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0321 16:43:20.387344 1 base_controller.go:145] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated\nI0321 16:43:20.387356 1 base_controller.go:167] Shutting down StaticResourceController ...\nI0321 16:43:20.387371 1 base_controller.go:167] Shutting down StaticConditionsController ...\nI0321 16:43:20.387382 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0321 16:43:20.387399 1 reflector.go:225] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nW0321 16:43:20.387420 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 21 16:43:21.767 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-5dbb5fb469-5wzgq node/ci-op-31x9zgcg-253f3-lsgvf-master-2 container/kube-storage-version-migrator-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 21 16:43:30.804 E ns/openshift-insights pod/insights-operator-774865fbf-ddgsv node/ci-op-31x9zgcg-253f3-lsgvf-master-2 container/insights-operator reason/ContainerExit code/2 cause/Error 1 reflector.go:535] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Watch close - *v1.ConfigMap total 8 items received\nI0321 16:42:05.837005 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.462809ms" userAgent="Prometheus/2.29.2" audit-ID="ba3e6a99-2175-420b-9728-43ee034fd7b6" srcIP="10.129.2.12:53208" resp=200\nI0321 16:42:09.650925 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.423946ms" userAgent="Prometheus/2.29.2" audit-ID="9d1eeb82-d5a6-4efd-8665-9119b8054714" srcIP="10.131.0.11:53780" resp=200\nI0321 16:42:23.468935 1 status.go:178] Failed to download Insights report\nI0321 16:42:23.468962 1 status.go:354] The operator is healthy\nI0321 16:42:23.469002 1 status.go:441] No status update 
necessary, objects are identical\nI0321 16:42:35.832710 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.670455ms" userAgent="Prometheus/2.29.2" audit-ID="0bdaf445-29bd-44b0-ad09-49a11172035e" srcIP="10.129.2.12:53208" resp=200\nI0321 16:42:39.655822 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="9.094789ms" userAgent="Prometheus/2.29.2" audit-ID="6b1bbb44-b513-4e99-b310-e2920fb13045" srcIP="10.131.0.11:53780" resp=200\nI0321 16:43:05.837068 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.708503ms" userAgent="Prometheus/2.29.2" audit-ID="8e96e64c-a2d4-4594-8839-1c12561b7b42" srcIP="10.129.2.12:53208" resp=200\nI0321 16:43:09.651012 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.584544ms" userAgent="Prometheus/2.29.2" audit-ID="47a1c562-e6ad-4854-a4dc-3797462169d6" srcIP="10.131.0.11:53780" resp=200\nI0321 16:43:10.295305 1 configobserver.go:77] Refreshing configuration from cluster pull secret\nI0321 16:43:10.301253 1 configobserver.go:102] Found cloud.openshift.com token\nI0321 16:43:10.301291 1 configobserver.go:120] Refreshing configuration from cluster secret\nI0321 16:43:10.304616 1 configobserver.go:124] Support secret does not exist\n Mar 21 16:43:40.884 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7c6cd6f67f-qg6ls node/ci-op-31x9zgcg-253f3-lsgvf-master-2 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error topping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0321 16:43:35.888219 1 reflector.go:225] Stopping reflector *v1.Image (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0321 16:43:35.888240 1 reflector.go:225] Stopping reflector *v1.Build (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0321 16:43:35.888262 1 reflector.go:225] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0321 16:43:35.888308 1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0321 16:43:35.888331 1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0321 16:43:35.888351 1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0321 16:43:35.888374 1 reflector.go:225] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0321 16:43:35.888397 1 reflector.go:225] Stopping reflector *v1.Service (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0321 16:43:35.888419 1 reflector.go:225] Stopping reflector *v1.ServiceAccount (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0321 16:43:35.888441 1 reflector.go:225] Stopping reflector *v1.Proxy (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0321 16:43:35.888462 1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0321 16:43:35.888484 1 reflector.go:225] Stopping reflector *v1.OpenShiftControllerManager (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nW0321 16:43:35.888614 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 21 16:43:40.884 E ns/openshift-controller-manager-operator 
pod/openshift-controller-manager-operator-7c6cd6f67f-qg6ls node/ci-op-31x9zgcg-253f3-lsgvf-master-2 container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 21 16:43:41.101 E ns/openshift-console-operator pod/console-operator-8d4486798-v4hsn node/ci-op-31x9zgcg-253f3-lsgvf-master-0 container/console-operator reason/ContainerExit code/1 cause/Error reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0321 16:43:39.752114 1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0321 16:43:39.752132 1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0321 16:43:39.752147 1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0321 16:43:39.752147 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-v4hsn", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0321 16:43:39.752165 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-v4hsn", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0321 16:43:39.752177 1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0321 16:43:39.752183 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0321 16:43:39.752193 1 base_controller.go:167] Shutting down HealthCheckController ...\nW0321 16:43:39.752195 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0321 16:43:39.752203 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-v4hsn", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0321 16:43:39.752220 1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0321 16:43:39.752235 1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\n Mar 21 16:43:42.927 E ns/openshift-image-registry pod/cluster-image-registry-operator-695b9d5fd5-4bhzw node/ci-op-31x9zgcg-253f3-lsgvf-master-2 container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 21 16:43:46.970 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-95fdb55dc-b5ppd node/ci-op-31x9zgcg-253f3-lsgvf-master-2 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error s managed-premium found, reconciling\nI0321 16:06:03.647574 1 base_controller.go:73] Caches are synced for SnapshotCRDController \nI0321 16:06:03.647595 1 base_controller.go:110] Starting #1 worker of SnapshotCRDController controller ...\nI0321 16:13:51.337139 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0321 16:16:03.449337 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0321 16:17:51.707392 1 controller.go:174] Existing StorageClass managed-premium found, 
reconciling\nI0321 16:26:03.449850 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0321 16:27:25.998116 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0321 16:36:03.450111 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0321 16:37:51.707587 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0321 16:43:46.045708 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0321 16:43:46.045803 1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0321 16:43:46.045803 1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0321 16:43:46.045838 1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0321 16:43:46.045908 1 base_controller.go:114] Shutting down worker of SnapshotCRDController controller ...\nI0321 16:43:46.045924 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0321 16:43:46.045847 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0321 16:43:46.045861 1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0321 16:43:46.045862 1 base_controller.go:167] Shutting down SnapshotCRDController ...\nW0321 16:43:46.046057 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 21 16:43:51.337 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-cb5bf8fc7-wf7jh node/ci-op-31x9zgcg-253f3-lsgvf-master-2 container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error tusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0321 16:43:50.206938 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0321 16:43:50.206949 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0321 16:43:50.206970 1 base_controller.go:114] Shutting down worker of CSISnapshotWebhookController controller ...\nI0321 16:43:50.206978 1 base_controller.go:104] All CSISnapshotWebhookController workers have been terminated\nI0321 16:43:50.206987 1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0321 16:43:50.206993 1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0321 16:43:50.207001 1 base_controller.go:114] Shutting down worker of StatusSyncer_csi-snapshot-controller controller ...\nI0321 16:43:50.207008 1 base_controller.go:104] All StatusSyncer_csi-snapshot-controller workers have been terminated\nI0321 16:43:50.207017 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0321 16:43:50.207024 1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0321 16:43:50.207033 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0321 16:43:50.207039 1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0321 16:43:50.207052 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0321 16:43:50.207074 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0321 16:43:50.207079 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nW0321 16:43:50.207144 1 
builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 21 16:43:51.991 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-696df46455-m5f2g node/ci-op-31x9zgcg-253f3-lsgvf-master-2 container/cluster-node-tuning-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 21 16:43:56.449 E ns/openshift-ingress-canary pod/ingress-canary-7sck4 node/ci-op-31x9zgcg-253f3-lsgvf-worker-eastus23-z9kmj container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n | |||
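Triage note: every operator excerpt above ends the same way — "Received SIGTERM or SIGINT signal, shutting down controller." followed by the builder.go:101 warning "graceful termination failed, controllers failed with error: stopped". A rough sketch of the pattern that produces that pairing (assumed names, not openshift/library-go's actual builder): controllers run until a signal-bound context is cancelled, and the runner reports the cancellation as a "stopped" error even on a clean shutdown.

package main

import (
	"context"
	"errors"
	"log"
	"os/signal"
	"syscall"
	"time"
)

var errStopped = errors.New("stopped")

// runController loops until the shared context is cancelled, then returns errStopped.
func runController(ctx context.Context, name string) error {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			log.Printf("Shutting down %s ...", name)
			return errStopped
		case <-ticker.C:
			// sync work would go here
		}
	}
}

func main() {
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	if err := runController(ctx, "ExampleController"); err != nil {
		// Mimics the builder.go:101 message; in the excerpts it is logged at warning level.
		log.Printf("graceful termination failed, controllers failed with error: %v", err)
	}
}

As the excerpts show, the warning accompanies ordinary SIGTERM shutdowns as well, so the more interesting signals in these entries are the disruption and connection-refused errors around it.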
#1638602194338975744 | junit | 3 days ago | |
Mar 22 19:19:20.617 - 2s E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-hb3xg0gt-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers Mar 22 19:19:20.618 - 1s E disruption/openshift-api connection/reused disruption/openshift-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-hb3xg0gt-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers Mar 22 19:19:21.000 - 1s E disruption/kube-api connection/reused disruption/kube-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-hb3xg0gt-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers Mar 22 19:19:21.000 - 1s E disruption/oauth-api connection/reused disruption/oauth-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-hb3xg0gt-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers Mar 22 19:19:21.618 - 999ms E disruption/oauth-api connection/reused disruption/oauth-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-hb3xg0gt-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers Mar 22 19:19:28.369 E ns/openshift-console-operator pod/console-operator-8d4486798-87g2b node/ci-op-hb3xg0gt-253f3-hk9mg-master-0 container/console-operator reason/ContainerExit code/1 cause/Error iserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0322 19:19:16.477987 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-87g2b", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0322 19:19:16.478007 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-87g2b", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0322 19:19:16.478027 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0322 19:19:16.478059 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-87g2b", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0322 19:19:16.478096 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0322 19:19:16.479210 1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0322 19:19:16.479227 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0322 19:19:16.479246 1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0322 19:19:16.479524 1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController 
...\nI0322 19:19:16.479653 1 base_controller.go:167] Shutting down DownloadsRouteController ...\nW0322 19:19:16.479718 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0322 19:19:16.479826 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\n Mar 22 19:19:31.342 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator authentication is updating versions\n* Cluster operator cloud-credential is updating versions\n* Cluster operator cluster-autoscaler is updating versions\n* Cluster operator console is updating versions\n* Cluster operator csi-snapshot-controller is updating versions\n* Cluster operator image-registry is updating versions\n* Cluster operator ingress is updating versions\n* Cluster operator kube-storage-version-migrator is updating versions\n* Cluster operator machine-approver is updating versions\n* Cluster operator monitoring is updating versions\n* Cluster operator node-tuning is updating versions\n* Cluster operator openshift-apiserver is updating versions\n* Cluster operator openshift-controller-manager is updating versions\n* Cluster operator openshift-samples is updating versions\n* Cluster operator storage is updating versions\n* Could not update cronjob "openshift-operator-lifecycle-manager/collect-profiles" (584 of 776) Mar 22 19:19:32.130 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-cb5bf8fc7-s2mn4 node/ci-op-hb3xg0gt-253f3-hk9mg-master-1 container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error -03-22 19:19:16.525601927 +0000 UTC m=+2964.523189156\nI0322 19:19:16.594830 1 operator.go:159] Finished syncing operator at 69.213009ms\nI0322 19:19:17.574242 1 operator.go:157] Starting syncing operator at 2023-03-22 19:19:17.57421801 +0000 UTC m=+2965.571805239\nI0322 19:19:17.695026 1 operator.go:159] Finished syncing operator at 120.795719ms\nI0322 19:19:22.980177 1 operator.go:157] Starting syncing operator at 2023-03-22 19:19:22.980165903 +0000 UTC m=+2970.977753132\nI0322 19:19:27.228315 1 operator.go:159] Finished syncing operator at 4.248136118s\nI0322 19:19:30.596179 1 operator.go:157] Starting syncing operator at 2023-03-22 19:19:30.596165814 +0000 UTC m=+2978.593752943\nI0322 19:19:30.680682 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0322 19:19:30.680894 1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0322 19:19:30.680938 1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0322 19:19:30.680955 1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0322 19:19:30.680977 1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0322 19:19:30.681830 1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0322 19:19:30.681883 1 base_controller.go:167] Shutting down StaticResourceController ...\nI0322 19:19:30.681902 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0322 19:19:30.681919 1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0322 19:19:30.681927 1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0322 19:19:30.681942 1 base_controller.go:167] Shutting down LoggingSyncer ...\nW0322 19:19:30.682104 1 builder.go:101] graceful termination failed, 
controllers failed with error: stopped\n Mar 22 19:19:36.904 E ns/openshift-ingress-canary pod/ingress-canary-dzpcr node/ci-op-hb3xg0gt-253f3-hk9mg-worker-westus-wm5bn container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n Mar 22 19:19:39.206 E ns/openshift-controller-manager pod/controller-manager-22ttl node/ci-op-hb3xg0gt-253f3-hk9mg-master-1 container/controller-manager reason/ContainerExit code/137 cause/Error I0322 18:40:23.201877 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202303040029.p0.g79857a3.assembly.stream-79857a3)\nI0322 18:40:23.204267 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07be3dae76690d861f51f4335fa1367f22e7bbbd0aa5cd0f03d404a8504c5a4b"\nI0322 18:40:23.204288 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41f17928cbcccb8eb903d5c8bddb59aca2402752cdf8596133fca642510a38ee"\nI0322 18:40:23.204408 1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0322 18:40:23.204585 1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\nE0322 18:41:47.032568 1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0322 18:42:31.410841 1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\n Mar 22 19:19:39.572 E ns/openshift-controller-manager pod/controller-manager-x76gp node/ci-op-hb3xg0gt-253f3-hk9mg-master-0 container/controller-manager reason/ContainerExit code/137 cause/Error I0322 18:40:24.164551 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202303040029.p0.g79857a3.assembly.stream-79857a3)\nI0322 18:40:24.166428 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07be3dae76690d861f51f4335fa1367f22e7bbbd0aa5cd0f03d404a8504c5a4b"\nI0322 18:40:24.166448 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41f17928cbcccb8eb903d5c8bddb59aca2402752cdf8596133fca642510a38ee"\nI0322 18:40:24.166484 1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0322 18:40:24.166630 1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\nE0322 18:42:03.272155 1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: 
connection refused\nE0322 18:42:39.731341 1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\n | |||
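Triage note: the disruption lines at the top of this entry split results by connection/new versus connection/reused — the same GET is polled both over a transport that dials a fresh connection every time and over a shared keep-alive transport. Illustrative Go sketch of that split (placeholder URL and once-per-second polling; not the openshift/origin disruption monitor):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://api.example.test:6443/api/v1/namespaces/default" // placeholder endpoint

	// Reuses keep-alive connections from the default transport.
	reused := &http.Client{Timeout: 15 * time.Second}

	// Forces a brand-new connection (TCP+TLS handshake) for every request.
	newConn := &http.Client{
		Timeout:   15 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true},
	}

	for {
		for name, c := range map[string]*http.Client{"reused": reused, "new": newConn} {
			if resp, err := c.Get(url); err != nil {
				fmt.Printf("%s disruption/kube-api connection/%s stopped responding: %v\n",
					time.Now().Format(time.RFC3339), name, err)
			} else {
				resp.Body.Close()
			}
		}
		time.Sleep(time.Second)
	}
}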
#1638054534116806656 | junit | 5 days ago | |
Mar 21 07:12:56.223 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i7d1585j-253f3-mfzkg-worker-westus-h695r container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/03/21 06:29:55 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/03/21 06:29:55 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/03/21 06:29:55 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/03/21 06:29:55 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/03/21 06:29:55 oauthproxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/03/21 06:29:55 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/03/21 06:29:55 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0321 06:29:55.255431 1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/03/21 06:29:55 http.go:107: HTTPS: listening on [::]:9091\n2023/03/21 06:38:08 server.go:3120: http: TLS handshake error from 10.131.0.5:52154: read tcp 10.129.2.15:9091->10.131.0.5:52154: read: connection reset by peer\n2023/03/21 06:42:49 server.go:3120: http: TLS handshake error from 10.131.0.5:54350: read tcp 10.129.2.15:9091->10.131.0.5:54350: read: connection reset by peer\n Mar 21 07:12:56.223 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i7d1585j-253f3-mfzkg-worker-westus-h695r container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-03-21T06:29:54.035403479Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=719dbaf)"\nlevel=info ts=2023-03-21T06:29:54.03549128Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20230304-05:54:47)"\nlevel=info ts=2023-03-21T06:29:54.047992247Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-03-21T06:29:55.290633496Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-21T06:29:55.290845598Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-21T06:29:58.599953732Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-21T06:31:01.849720481Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-21T06:48:28.951078298Z caller=re Mar 21 07:12:56.732 - 286ms E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator Mar 21 07:12:57.314 E ns/openshift-monitoring 
pod/thanos-querier-7bf86594c8-cmx8z node/ci-op-i7d1585j-253f3-mfzkg-worker-westus-h695r container/oauth-proxy reason/ContainerExit code/2 cause/Error : connect: connection refused\n2023/03/21 06:32:56 oauthproxy.go:791: requestauth: 10.128.0.9:55326 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0321 06:32:56.372738 1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:32:56 oauthproxy.go:791: requestauth: 10.128.0.9:55330 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0321 06:32:56.747912 1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:32:56 oauthproxy.go:791: requestauth: 10.128.0.9:55370 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0321 06:32:59.038798 1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:32:59 oauthproxy.go:791: requestauth: 10.128.0.9:55404 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0321 06:33:02.855193 1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:33:02 oauthproxy.go:791: requestauth: 10.129.2.15:46522 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:33:17 server.go:3120: http: TLS handshake error from 10.131.0.5:59280: read tcp 10.129.2.14:9091->10.131.0.5:59280: read: connection reset by peer\n Mar 21 07:12:58.099 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5c87c94f97-xdnxb node/ci-op-i7d1585j-253f3-mfzkg-master-1 container/snapshot-controller reason/ContainerExit code/2 cause/Error Mar 21 07:12:58.171 E ns/openshift-console-operator pod/console-operator-8d4486798-nxhbq node/ci-op-i7d1585j-253f3-mfzkg-master-1 container/console-operator reason/ContainerExit code/1 cause/Error , FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0321 07:12:56.543715 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-nxhbq", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0321 07:12:56.543736 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0321 07:12:56.543232 1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0321 07:12:56.543265 1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0321 07:12:56.543278 1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0321 07:12:56.543290 1 base_controller.go:167] Shutting down 
ConsoleRouteController ...\nI0321 07:12:56.543301 1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0321 07:12:56.543312 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0321 07:12:56.543329 1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0321 07:12:56.543760 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-nxhbq", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0321 07:12:56.543775 1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0321 07:12:56.543342 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0321 07:12:56.543783 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nW0321 07:12:56.543420 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 21 07:13:02.469 E ns/openshift-monitoring pod/thanos-querier-7bf86594c8-744tz node/ci-op-i7d1585j-253f3-mfzkg-worker-westus-qpd8w container/oauth-proxy reason/ContainerExit code/2 cause/Error connect: connection refused\n2023/03/21 06:32:56 oauthproxy.go:791: requestauth: 10.128.0.9:55358 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0321 06:32:57.080661 1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:32:57 oauthproxy.go:791: requestauth: 10.128.0.9:55378 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0321 06:32:57.737875 1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:32:57 oauthproxy.go:791: requestauth: 10.128.0.9:55394 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0321 06:33:01.612365 1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:33:01 oauthproxy.go:791: requestauth: 10.128.0.9:55412 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0321 06:33:02.273131 1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:33:02 oauthproxy.go:791: requestauth: 10.129.2.15:39772 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:47:29 server.go:3120: http: TLS handshake error from 10.129.2.12:55918: read tcp 10.131.0.11:9091->10.129.2.12:55918: read: connection reset by peer\n Mar 21 07:13:07.144 E ns/openshift-service-ca pod/service-ca-54888b9dcb-ml8jk node/ci-op-i7d1585j-253f3-mfzkg-master-1 container/service-ca-controller reason/ContainerExit code/1 cause/Error Mar 21 07:13:08.570 E ns/openshift-monitoring pod/alertmanager-main-2 
node/ci-op-i7d1585j-253f3-mfzkg-worker-westus-qpd8w container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-03-21T06:27:03.725942359Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=719dbaf)"\nlevel=info ts=2023-03-21T06:27:03.726031263Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20230304-05:54:47)"\nlevel=info ts=2023-03-21T06:27:03.726329175Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-03-21T06:27:03.726765993Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-03-21T06:27:04.971814454Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n Mar 21 07:13:08.570 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-i7d1585j-253f3-mfzkg-worker-westus-qpd8w container/alertmanager-proxy reason/ContainerExit code/2 cause/Error proxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/03/21 06:27:04 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\nI0321 06:27:04.045104 1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/03/21 06:27:04 http.go:107: HTTPS: listening on [::]:9095\nE0321 06:32:25.521989 1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:32:25 oauthproxy.go:791: requestauth: 10.131.0.13:55996 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0321 06:32:37.449247 1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:32:37 oauthproxy.go:791: requestauth: 10.129.2.15:43978 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0321 06:32:55.521065 1 webhook.go:111] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:32:55 oauthproxy.go:791: requestauth: 10.131.0.13:55996 Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\n2023/03/21 06:45:44 server.go:3120: http: TLS handshake error from 10.129.2.12:37220: read tcp 10.131.0.10:9095->10.129.2.12:37220: read: connection reset by peer\n2023/03/21 07:04:55 server.go:3120: http: TLS handshake error from 10.129.2.12:46438: read tcp 10.131.0.10:9095->10.129.2.12:46438: read: connection reset by peer\n Mar 21 07:13:14.935 E ns/openshift-service-ca-operator pod/service-ca-operator-6df98d789d-nkrzl node/ci-op-i7d1585j-253f3-mfzkg-master-0 
container/service-ca-operator reason/ContainerExit code/1 cause/Error | |||
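Triage note: the repeated oauth-proxy failures above (Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews" ... connection refused) are TokenReview requests against the in-cluster API service IP, so every proxy sidecar surfaces the same apiserver outage at once. A hedged client-go sketch of such a call (illustrative usage with a placeholder token, not the oauth-proxy source):

package main

import (
	"context"
	"log"

	authnv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config points at the kubernetes.default service IP (172.30.0.1 in these logs).
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	review := &authnv1.TokenReview{
		Spec: authnv1.TokenReviewSpec{Token: "<bearer token>"}, // placeholder
	}
	result, err := client.AuthenticationV1().TokenReviews().Create(context.TODO(), review, metav1.CreateOptions{})
	if err != nil {
		// While the apiserver endpoint is down, every such POST fails with
		// "connection refused", which is what fills the oauth-proxy logs above.
		log.Fatalf("Failed to make webhook authenticator request: %v", err)
	}
	log.Printf("authenticated=%v user=%s", result.Status.Authenticated, result.Status.User.Username)
}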
#1638009134332776448 | junit | 5 days ago | |
Mar 21 04:14:18.000 - 1s E disruption/openshift-api connection/reused disruption/openshift-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-n9ngnqjc-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers Mar 21 04:14:18.285 - 1s E disruption/openshift-api connection/reused disruption/openshift-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-n9ngnqjc-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers Mar 21 04:14:18.285 - 1s E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-n9ngnqjc-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers Mar 21 04:14:23.740 E ns/openshift-controller-manager pod/controller-manager-fdv4d node/ci-op-n9ngnqjc-253f3-5fpmv-master-2 container/controller-manager reason/ContainerExit code/137 cause/Error I0321 03:35:14.720167 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202303040029.p0.g79857a3.assembly.stream-79857a3)\nI0321 03:35:14.723454 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07be3dae76690d861f51f4335fa1367f22e7bbbd0aa5cd0f03d404a8504c5a4b"\nI0321 03:35:14.723480 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41f17928cbcccb8eb903d5c8bddb59aca2402752cdf8596133fca642510a38ee"\nI0321 03:35:14.723615 1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0321 03:35:14.724795 1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n Mar 21 04:14:23.806 E ns/openshift-kube-storage-version-migrator pod/migrator-97d6f6595-lbp22 node/ci-op-n9ngnqjc-253f3-5fpmv-master-1 container/migrator reason/ContainerExit code/2 cause/Error I0321 03:23:07.786482 1 migrator.go:18] FLAG: --add_dir_header="false"\nI0321 03:23:07.786563 1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0321 03:23:07.786567 1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0321 03:23:07.786572 1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0321 03:23:07.786576 1 migrator.go:18] FLAG: --kubeconfig=""\nI0321 03:23:07.786580 1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0321 03:23:07.786585 1 migrator.go:18] FLAG: --log_dir=""\nI0321 03:23:07.786588 1 migrator.go:18] FLAG: --log_file=""\nI0321 03:23:07.786591 1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0321 03:23:07.786594 1 migrator.go:18] FLAG: --logtostderr="true"\nI0321 03:23:07.786597 1 migrator.go:18] FLAG: --one_output="false"\nI0321 03:23:07.786600 1 migrator.go:18] FLAG: --skip_headers="false"\nI0321 03:23:07.786603 1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0321 03:23:07.786606 1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0321 03:23:07.786608 1 migrator.go:18] FLAG: --v="2"\nI0321 03:23:07.786611 1 migrator.go:18] FLAG: --vmodule=""\nI0321 03:23:07.788928 1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI0321 03:23:21.920386 1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: 
migration running\nI0321 03:23:22.033473 1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI0321 03:23:23.044132 1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI0321 03:23:23.109861 1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\nI0321 03:29:26.405898 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF\n Mar 21 04:14:24.049 E ns/openshift-console-operator pod/console-operator-8d4486798-kxb5g node/ci-op-n9ngnqjc-253f3-5fpmv-master-1 container/console-operator reason/ContainerExit code/1 cause/Error .go:114] Shutting down worker of ResourceSyncController controller ...\nI0321 04:14:09.784544 1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0321 04:14:09.784332 1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0321 04:14:09.784566 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-kxb5g", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0321 04:14:09.784601 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-kxb5g", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0321 04:14:09.784618 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0321 04:14:09.784642 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-kxb5g", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0321 04:14:09.784662 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0321 04:14:09.786276 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0321 04:14:09.786871 1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nW0321 04:14:09.786905 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 21 04:14:24.161 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5c87c94f97-xpbw6 node/ci-op-n9ngnqjc-253f3-5fpmv-master-1 container/snapshot-controller reason/ContainerExit code/2 cause/Error Mar 21 04:14:24.443 E ns/openshift-ingress-canary pod/ingress-canary-nlrgp node/ci-op-n9ngnqjc-253f3-5fpmv-worker-centralus3-5p2pd container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary 
healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n Mar 21 04:14:25.717 E ns/openshift-controller-manager pod/controller-manager-24bkk node/ci-op-n9ngnqjc-253f3-5fpmv-master-1 container/controller-manager reason/ContainerExit code/137 cause/Error mer.go:247] Caches are synced for DefaultRoleBindingController \nI0321 03:37:32.558530 1 factory.go:85] deploymentconfig controller caches are synced. Starting workers.\nI0321 03:37:32.558612 1 templateinstance_controller.go:297] Starting TemplateInstance controller\nI0321 03:37:32.579420 1 shared_informer.go:247] Caches are synced for service account \nI0321 03:37:32.581175 1 factory.go:80] Deployer controller caches are synced. Starting workers.\nI0321 03:37:32.742828 1 docker_registry_service.go:156] caches synced\nI0321 03:37:32.742880 1 create_dockercfg_secrets.go:219] urls found\nI0321 03:37:32.742898 1 create_dockercfg_secrets.go:225] caches synced\nI0321 03:37:32.742910 1 deleted_token_secrets.go:70] caches synced\nI0321 03:37:32.742912 1 docker_registry_service.go:298] Updating registry URLs from map[172.30.24.1:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.24.1:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI0321 03:37:32.751828 1 deleted_dockercfg_secrets.go:75] caches synced\nI0321 03:37:32.777027 1 build_controller.go:475] Starting build controller\nI0321 03:37:32.777050 1 build_controller.go:477] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nE0321 04:14:12.891951 1 imagestream_controller.go:136] Error syncing image stream "openshift/rhpam-businesscentral-rhel8": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "rhpam-businesscentral-rhel8": the object has been modified; please apply your changes to the latest version and try again\nE0321 04:14:13.361789 1 imagestream_controller.go:136] Error syncing image stream "openshift/rhpam-businesscentral-rhel8": Operation cannot be fulfilled on imagestream.image.openshift.io "rhpam-businesscentral-rhel8": the image stream was updated from "41447" to "41626"\n Mar 21 04:14:27.070 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator authentication is updating versions\n* Cluster operator cloud-credential is updating versions\n* Cluster operator cluster-autoscaler is updating versions\n* Cluster operator console is updating versions\n* Cluster operator csi-snapshot-controller is updating versions\n* Cluster operator image-registry is updating versions\n* Cluster operator ingress is updating versions\n* Cluster operator insights is updating versions\n* Cluster operator marketplace is updating versions\n* Cluster operator monitoring is updating versions\n* Cluster operator node-tuning is updating versions\n* Cluster operator openshift-apiserver is updating versions\n* Cluster operator openshift-controller-manager is updating versions\n* Cluster operator openshift-samples is updating versions\n* Cluster operator operator-lifecycle-manager is updating versions\n* Cluster operator storage is updating versions Mar 21 04:14:27.548 E ns/openshift-controller-manager pod/controller-manager-kh8cl node/ci-op-n9ngnqjc-253f3-5fpmv-master-0 container/controller-manager reason/ContainerExit code/137 cause/Error I0321 
03:35:14.660109 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202303040029.p0.g79857a3.assembly.stream-79857a3)\nI0321 03:35:14.662030 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07be3dae76690d861f51f4335fa1367f22e7bbbd0aa5cd0f03d404a8504c5a4b"\nI0321 03:35:14.662053 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41f17928cbcccb8eb903d5c8bddb59aca2402752cdf8596133fca642510a38ee"\nI0321 03:35:14.662091 1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0321 03:35:14.662190 1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n | |||
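The console-operator excerpt above contains the whole graceful-termination sequence (ShutdownInitiated, TerminationStart, TerminationMinimalShutdownDurationFinished with a shutdown delay of 0s, AfterShutdownDelayDuration, TerminationStoppedServing, InFlightRequestsDrained, then "graceful termination failed, controllers failed with error: stopped") interleaved with the API disruption samples. When triaging runs like this it helps to pull just those event reasons back out of the job output; a minimal sketch, assuming the interval lines above were saved locally as build-log.txt (the filename is an assumption, any artifact file containing these lines works the same way):
# count each termination-related event reason observed anywhere in the run
$ grep -oE "Termination[A-Za-z]+" build-log.txt | sort | uniq -c | sort -rn
# ordered shutdown timeline for one pod (pod name taken from the excerpt above)
$ grep "console-operator-8d4486798-kxb5g" build-log.txt | grep -oE "'Termination[A-Za-z]+'"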
#1637792542047080448 | junit | 6 days ago | |
Mar 20 13:48:26.541 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-5dbb5fb469-trt64 node/ci-op-5hjxnrr2-253f3-bnt96-master-2 container/kube-storage-version-migrator-operator reason/ContainerExit code/1 cause/Error eaccounts/kube-storage-version-migrator-sa": dial tcp 172.30.0.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 172.30.0.1:443: connect: connection refused, Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused]\nI0320 13:48:24.866890 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0320 13:48:24.867309 1 reflector.go:225] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0320 13:48:24.867476 1 reflector.go:225] Stopping reflector *unstructured.Unstructured (12h0m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0320 13:48:24.867542 1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0320 13:48:24.867586 1 reflector.go:225] Stopping reflector *v1.Deployment (10m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0320 13:48:24.867629 1 base_controller.go:167] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0320 13:48:24.868269 1 base_controller.go:145] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated\nI0320 13:48:24.867641 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0320 13:48:24.867651 1 base_controller.go:167] Shutting down StaticConditionsController ...\nI0320 13:48:24.867661 1 base_controller.go:167] Shutting down KubeStorageVersionMigrator ...\nI0320 13:48:24.867670 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0320 13:48:24.867678 1 base_controller.go:167] Shutting down StaticResourceController ...\nW0320 13:48:24.867700 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 20 13:48:26.541 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-5dbb5fb469-trt64 node/ci-op-5hjxnrr2-253f3-bnt96-master-2 container/kube-storage-version-migrator-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 20 13:48:33.575 E ns/openshift-ingress-operator pod/ingress-operator-f64487774-6bmqb node/ci-op-5hjxnrr2-253f3-bnt96-master-2 container/ingress-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 20 13:48:36.579 E ns/openshift-insights pod/insights-operator-774865fbf-tld6n node/ci-op-5hjxnrr2-253f3-bnt96-master-2 container/insights-operator reason/ContainerExit code/2 cause/Error \nI0320 13:46:42.115406 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="2.742159ms" userAgent="Prometheus/2.29.2" audit-ID="3f617326-9451-4101-9609-83834a971579" srcIP="10.128.2.22:50974" resp=200\nI0320 13:46:45.840980 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.598965ms" userAgent="Prometheus/2.29.2" audit-ID="c1e55255-0e76-4f8e-a904-e62760960458" srcIP="10.131.0.17:36974" resp=200\nI0320 13:47:12.121011 1 
httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.33606ms" userAgent="Prometheus/2.29.2" audit-ID="a1f6ca99-b898-4a82-ad74-0db23a7fe619" srcIP="10.128.2.22:50974" resp=200\nI0320 13:47:15.838041 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.447997ms" userAgent="Prometheus/2.29.2" audit-ID="b30d7ddb-e3ac-4a29-b320-009fb3c174ee" srcIP="10.131.0.17:36974" resp=200\nI0320 13:47:22.106835 1 status.go:354] The operator is healthy\nI0320 13:47:22.106912 1 status.go:441] No status update necessary, objects are identical\nI0320 13:47:42.117754 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.018466ms" userAgent="Prometheus/2.29.2" audit-ID="1cace863-5d9a-4122-a7aa-52410a2e1328" srcIP="10.128.2.22:50974" resp=200\nI0320 13:47:45.841362 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.914272ms" userAgent="Prometheus/2.29.2" audit-ID="5a40eabc-8767-43a2-8576-59216411bfa5" srcIP="10.131.0.17:36974" resp=200\nI0320 13:48:12.120256 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.077855ms" userAgent="Prometheus/2.29.2" audit-ID="3690d68a-4244-4471-bf88-461400a91d0c" srcIP="10.128.2.22:50974" resp=200\nI0320 13:48:15.837792 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.397696ms" userAgent="Prometheus/2.29.2" audit-ID="44848f7b-c3ce-446c-ae27-8d078e117583" srcIP="10.131.0.17:36974" resp=200\nI0320 13:48:16.551653 1 reflector.go:535] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Watch close - *v1.ConfigMap total 9 items received\n Mar 20 13:48:43.438 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator authentication is updating versions\n* Cluster operator cloud-credential is updating versions\n* Cluster operator cluster-autoscaler is updating versions\n* Cluster operator console is updating versions\n* Cluster operator csi-snapshot-controller is updating versions\n* Cluster operator image-registry is updating versions\n* Cluster operator ingress is updating versions\n* Cluster operator insights is updating versions\n* Cluster operator kube-storage-version-migrator is updating versions\n* Cluster operator machine-approver is updating versions\n* Cluster operator monitoring is updating versions\n* Cluster operator node-tuning is updating versions\n* Cluster operator openshift-apiserver is updating versions\n* Cluster operator openshift-controller-manager is updating versions\n* Cluster operator openshift-samples is updating versions\n* Cluster operator storage is updating versions Mar 20 13:48:49.251 E ns/openshift-console-operator pod/console-operator-8d4486798-7dt7k node/ci-op-5hjxnrr2-253f3-bnt96-master-0 container/console-operator reason/ContainerExit code/1 cause/Error rs have been terminated\nI0320 13:48:46.422816 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-7dt7k", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0320 13:48:46.422839 1 base_controller.go:114] Shutting down worker of StatusSyncer_console controller ...\nI0320 13:48:46.422849 1 base_controller.go:104] All StatusSyncer_console workers have been terminated\nI0320 13:48:46.422849 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-7dt7k", UID:"", 
APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0320 13:48:46.422729 1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0320 13:48:46.422735 1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0320 13:48:46.422869 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0320 13:48:46.422880 1 base_controller.go:114] Shutting down worker of DownloadsRouteController controller ...\nI0320 13:48:46.422892 1 base_controller.go:104] All DownloadsRouteController workers have been terminated\nI0320 13:48:46.422890 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-8d4486798-7dt7k", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nW0320 13:48:46.422902 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0320 13:48:46.422910 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\n Mar 20 13:48:57.737 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7c6cd6f67f-jhjj6 node/ci-op-5hjxnrr2-253f3-bnt96-master-2 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error h close - *v1.Role total 7 items received\nI0320 13:48:11.550467 1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.ConfigMap total 7 items received\nI0320 13:48:22.655671 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="43.860058ms" userAgent="Prometheus/2.29.2" audit-ID="d5ee8752-afbc-4ea9-9bc0-980f471d5619" srcIP="10.128.2.22:54200" resp=200\nI0320 13:48:31.559709 1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.Secret total 7 items received\nI0320 13:48:32.768005 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.151871ms" userAgent="Prometheus/2.29.2" audit-ID="596798cf-9b50-4d46-af87-9108f3840f86" srcIP="10.131.0.17:58916" resp=200\nI0320 13:48:52.615079 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.281672ms" userAgent="Prometheus/2.29.2" audit-ID="8e546b23-f7a9-41ad-becf-afec2ac74be8" srcIP="10.128.2.22:54200" resp=200\nI0320 13:48:56.799448 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0320 13:48:56.799538 1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0320 13:48:56.799629 1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0320 13:48:56.799642 1 base_controller.go:167] Shutting down StaticResourceController ...\nI0320 13:48:56.799644 1 reflector.go:225] Stopping reflector *v1.Network (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0320 13:48:56.799651 1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0320 13:48:56.799661 1 base_controller.go:167] Shutting down UserCAObservationController ...\nW0320 13:48:56.799662 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0320 13:48:56.799688 1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\n Mar 20 13:49:03.722 E ns/openshift-cluster-storage-operator 
pod/csi-snapshot-controller-operator-cb5bf8fc7-5tbqt node/ci-op-5hjxnrr2-253f3-bnt96-master-2 container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error operator.go:157] Starting syncing operator at 2023-03-20 13:48:33.674790644 +0000 UTC m=+3239.549970084\nI0320 13:48:33.728825 1 operator.go:159] Finished syncing operator at 54.026404ms\nI0320 13:48:57.012903 1 operator.go:157] Starting syncing operator at 2023-03-20 13:48:57.012890972 +0000 UTC m=+3262.888070412\nI0320 13:48:57.057211 1 operator.go:159] Finished syncing operator at 44.312967ms\nI0320 13:49:02.904683 1 operator.go:157] Starting syncing operator at 2023-03-20 13:49:02.904672599 +0000 UTC m=+3268.779852139\nI0320 13:49:02.948921 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0320 13:49:02.948996 1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0320 13:49:02.949015 1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0320 13:49:02.949038 1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0320 13:49:02.949054 1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0320 13:49:02.949087 1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0320 13:49:02.949099 1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0320 13:49:02.949114 1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0320 13:49:02.949127 1 base_controller.go:167] Shutting down StaticResourceController ...\nI0320 13:49:02.949140 1 base_controller.go:167] Shutting down ManagementStateController ...\nW0320 13:49:02.949148 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0320 13:49:02.949153 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0320 13:49:02.949163 1 base_controller.go:167] Shutting down LoggingSyncer ...\n Mar 20 13:49:04.744 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-95fdb55dc-tfjg8 node/ci-op-5hjxnrr2-253f3-bnt96-master-2 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error 3:38.631871 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0320 13:26:16.578235 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0320 13:30:42.805836 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0320 13:33:37.694555 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0320 13:36:00.043872 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0320 13:43:37.695023 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0320 13:46:16.578690 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0320 13:49:03.915677 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0320 13:49:03.915872 1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0320 13:49:03.915891 1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0320 13:49:03.915912 1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0320 13:49:03.915923 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0320 13:49:03.915936 1 
genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0320 13:49:03.915939 1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0320 13:49:03.915958 1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0320 13:49:03.915969 1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0320 13:49:03.915980 1 base_controller.go:167] Shutting down ConfigObserver ...\nI0320 13:49:03.915990 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0320 13:49:03.916000 1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nW0320 13:49:03.916422 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 20 13:49:05.000 - 1s E disruption/oauth-api connection/reused disruption/oauth-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-5hjxnrr2-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers Mar 20 13:49:05.792 - 4s E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-5hjxnrr2-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers | |||
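This run shows the same pattern from the operator side: operators receive SIGTERM during the rollout, the kube-storage-version-migrator-operator was getting "dial tcp 172.30.0.1:443: connect: connection refused" right before its shutdown, and both new and reused connections to the external API endpoint time out for a few seconds. To see which disruption backends and connection types were affected, the interval lines can be tallied directly; a rough sketch under the same build-log.txt assumption, and the awk field positions assume the "Mon DD HH:MM:SS.mmm - <duration> E disruption/<backend> connection/<type> ..." layout shown above:
# tally disruption intervals by backend and connection type
$ grep " E disruption/" build-log.txt | awk '{print $7, $8}' | sort | uniq -c | sort -rn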
#1637161962968190976 | junit | 7 days ago | |
Mar 18 20:12:34.924 - 999ms E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: error running request: 500 Internal Server Error: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"etcdserver: leader changed","code":500}\n Mar 18 20:12:34.924 - 999ms E disruption/kube-api connection/reused disruption/kube-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-v364xx4v-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers Mar 18 20:12:40.049 E ns/openshift-ingress-canary pod/ingress-canary-9bjwl node/ci-op-v364xx4v-253f3-49xfr-worker-centralus1-95bsn container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n Mar 18 20:12:40.130 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-v364xx4v-253f3-49xfr-worker-centralus3-mksrk container/thanos-sidecar reason/ContainerExit code/1 cause/Error :32.534567139Z caller=http.go:93 service=http/server component=sidecar msg="internal server is shutdown gracefully" err="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=info ts=2023-03-18T20:12:32.53464544Z caller=intrumentation.go:66 msg="changing probe status" status=not-healthy reason="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=warn ts=2023-03-18T20:12:32.53469184Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=info ts=2023-03-18T20:12:32.534843841Z caller=grpc.go:130 service=gRPC/server component=sidecar msg="internal server is shutting down" err="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=info ts=2023-03-18T20:12:32.534874542Z caller=grpc.go:143 service=gRPC/server component=sidecar msg="gracefully stopping internal server"\nlevel=info ts=2023-03-18T20:12:32.534908942Z caller=grpc.go:156 service=gRPC/server component=sidecar msg="internal server is shutdown gracefully" err="listen gRPC on address [$(POD_IP)]:10901: listen tcp: lookup $(POD_IP): no such host"\nlevel=error ts=2023-03-18T20:12:32.535013943Z caller=main.go:156 err="listen tcp: lookup $(POD_IP): no such host\nlisten gRPC on address [$(POD_IP)]:10901\ngithub.com/thanos-io/thanos/pkg/server/grpc.(*Server).ListenAndServe\n\t/go/src/github.com/improbable-eng/thanos/pkg/server/grpc/grpc.g Mar 18 20:12:40.164 E ns/openshift-image-registry pod/cluster-image-registry-operator-5978fb6844-tvcjs node/ci-op-v364xx4v-253f3-49xfr-master-2 container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 18 20:12:40.204 E ns/openshift-console-operator pod/console-operator-76778d847f-hrtfp node/ci-op-v364xx4v-253f3-49xfr-master-0 container/console-operator reason/ContainerExit code/1 cause/Error \nI0318 20:12:32.860381 1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0318 
20:12:32.860402 1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0318 20:12:32.860400 1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0318 20:12:32.860392 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-hrtfp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0318 20:12:32.860417 1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0318 20:12:32.860437 1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0318 20:12:32.860452 1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0318 20:12:32.860452 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-hrtfp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0318 20:12:32.860485 1 base_controller.go:167] Shutting down StatusSyncer_console ...\nW0318 20:12:32.860490 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0318 20:12:32.860493 1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0318 20:12:32.860486 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-hrtfp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0318 20:12:32.860508 1 base_controller.go:167] Shutting down LoggingSyncer ...\n Mar 18 20:12:40.310 E ns/openshift-controller-manager pod/controller-manager-thzhq node/ci-op-v364xx4v-253f3-49xfr-master-2 container/controller-manager reason/ContainerExit code/137 cause/Error request: POST:https://172.30.0.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports?timeout=1h0m0s\nE0318 20:12:07.564015 1 imagestream_controller.go:136] Error syncing image stream "openshift/rhpam-kieserver-rhel8": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "rhpam-kieserver-rhel8": the object has been modified; please apply your changes to the latest version and try again\nE0318 20:12:07.625306 1 imagestream_controller.go:136] Error syncing image stream "openshift/fuse7-karaf-openshift": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "fuse7-karaf-openshift": the object has been modified; please apply your changes to the latest version and try again\nE0318 20:12:08.434794 1 imagestream_controller.go:136] Error syncing image stream "openshift/dotnet": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "dotnet": the object has been modified; please apply your changes to the latest version and try again\nE0318 20:12:08.942558 1 imagestream_controller.go:136] Error syncing image stream "openshift/golang": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "golang": the object has been modified; please apply your changes to the latest version and try again\nE0318 20:12:09.839192 1 imagestream_controller.go:136] Error syncing image stream "openshift/redis": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "redis": the object has been 
modified; please apply your changes to the latest version and try again\nE0318 20:12:11.074349 1 imagestream_controller.go:136] Error syncing image stream "openshift/redis": Operation cannot be fulfilled on imagestream.image.openshift.io "redis": the image stream was updated from "45667" to "46032"\nE0318 20:12:11.095856 1 imagestream_controller.go:136] Error syncing image stream "openshift/redis": Operation cannot be fulfilled on imagestream.image.openshift.io "redis": the image stream was updated from "45667" to "46032"\n Mar 18 20:12:40.471 E ns/openshift-service-ca pod/service-ca-5d846dc897-z9289 node/ci-op-v364xx4v-253f3-49xfr-master-1 container/service-ca-controller reason/ContainerExit code/1 cause/Error Mar 18 20:12:40.890 E ns/openshift-controller-manager pod/controller-manager-kvw6n node/ci-op-v364xx4v-253f3-49xfr-master-0 container/controller-manager reason/ContainerExit code/137 cause/Error I0318 19:22:08.458155 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202302111027.p0.g79857a3.assembly.stream-79857a3)\nI0318 19:22:08.460041 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e46120f088f4ccfe93c4b78783998f0a2dbb6bb676b6887054928f296d2c7756"\nI0318 19:22:08.460064 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01beff02e16695cd6d5577dc8c2601545cac06893b6d7feb257ec51a97ba9c1e"\nI0318 19:22:08.460157 1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0318 19:22:08.460203 1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\nE0318 19:23:33.135503 1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0318 19:24:23.339660 1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\n Mar 18 20:12:41.451 E ns/openshift-controller-manager pod/controller-manager-mz7m2 node/ci-op-v364xx4v-253f3-49xfr-master-1 container/controller-manager reason/ContainerExit code/137 cause/Error I0318 19:22:08.477275 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202302111027.p0.g79857a3.assembly.stream-79857a3)\nI0318 19:22:08.479168 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e46120f088f4ccfe93c4b78783998f0a2dbb6bb676b6887054928f296d2c7756"\nI0318 19:22:08.479186 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01beff02e16695cd6d5577dc8c2601545cac06893b6d7feb257ec51a97ba9c1e"\nI0318 19:22:08.479325 1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0318 19:22:08.479754 1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\nE0318 19:23:47.448352 1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get 
"https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\nE0318 19:24:25.214971 1 leaderelection.go:330] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers": dial tcp 172.30.0.1:443: connect: connection refused\n Mar 18 20:12:41.855 E ns/openshift-monitoring pod/kube-state-metrics-9dc84b9f6-88php node/ci-op-v364xx4v-253f3-49xfr-worker-centralus3-mksrk container/kube-state-metrics reason/ContainerExit code/2 cause/Error | |||
#1637025354965061632 | junit | 8 days ago | |
Mar 18 10:54:26.459 - 1s E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-n9zmxnb9-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers Mar 18 10:54:26.459 - 1s E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-n9zmxnb9-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers Mar 18 10:54:26.459 - 2s E disruption/kube-api connection/reused disruption/kube-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-n9zmxnb9-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers Mar 18 10:54:26.459 - 2s E disruption/openshift-api connection/reused disruption/openshift-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-n9zmxnb9-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers Mar 18 10:54:26.460 - 2s E disruption/oauth-api connection/reused disruption/oauth-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-n9zmxnb9-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers Mar 18 10:54:34.772 E ns/openshift-console-operator pod/console-operator-76778d847f-rqvhm node/ci-op-n9zmxnb9-253f3-qtbf7-master-0 container/console-operator reason/ContainerExit code/1 cause/Error 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-rqvhm", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0318 10:54:21.390741 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0318 10:54:21.391310 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0318 10:54:21.391326 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0318 10:54:21.391355 1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0318 10:54:21.391367 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0318 10:54:21.391379 1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0318 10:54:21.391394 1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0318 10:54:21.391400 1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0318 10:54:21.391414 1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0318 10:54:21.391425 1 base_controller.go:167] Shutting down HealthCheckController ...\nI0318 10:54:21.391436 1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0318 10:54:21.391447 1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0318 10:54:21.391458 1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0318 10:54:21.391468 1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0318 10:54:21.391480 1 
base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0318 10:54:21.391491 1 base_controller.go:167] Shutting down DownloadsRouteController ...\nW0318 10:54:21.391761 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 18 10:54:42.940 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7bf9f4bd6c-gkm9q node/ci-op-n9zmxnb9-253f3-qtbf7-master-1 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error ker of UserCAObservationController controller ...\nI0318 10:54:41.986584 1 base_controller.go:104] All UserCAObservationController workers have been terminated\nI0318 10:54:41.986588 1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0318 10:54:41.986600 1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0318 10:54:41.986600 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0318 10:54:41.986608 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0318 10:54:41.986616 1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0318 10:54:41.986619 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0318 10:54:41.986611 1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0318 10:54:41.986640 1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0318 10:54:41.986646 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0318 10:54:41.986664 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0318 10:54:41.986667 1 reflector.go:225] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0318 10:54:41.986680 1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nW0318 10:54:41.986556 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0318 10:54:41.986693 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\n Mar 18 10:54:46.020 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5dbf449b47-7qxnx node/ci-op-n9zmxnb9-253f3-qtbf7-master-1 container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error 03-18 10:54:35.060737694 +0000 UTC m=+2935.302125369\nI0318 10:54:35.659691 1 operator.go:159] Finished syncing operator at 598.944826ms\nI0318 10:54:35.659874 1 operator.go:157] Starting syncing operator at 2023-03-18 10:54:35.659868624 +0000 UTC m=+2935.901256299\nI0318 10:54:36.252651 1 operator.go:159] Finished syncing operator at 592.773216ms\nI0318 10:54:44.677603 1 operator.go:157] Starting syncing operator at 2023-03-18 10:54:44.677590912 +0000 UTC m=+2944.918978487\nI0318 10:54:44.740989 1 operator.go:159] Finished syncing operator at 63.391337ms\nI0318 10:54:44.963938 1 operator.go:157] Starting syncing operator at 2023-03-18 10:54:44.963927948 +0000 UTC m=+2945.205315523\nI0318 10:54:45.047197 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0318 
10:54:45.047603 1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0318 10:54:45.047685 1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0318 10:54:45.047724 1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0318 10:54:45.047766 1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0318 10:54:45.048145 1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0318 10:54:45.048212 1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0318 10:54:45.048260 1 base_controller.go:167] Shutting down StaticResourceController ...\nI0318 10:54:45.048303 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0318 10:54:45.048339 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0318 10:54:45.048375 1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nW0318 10:54:45.048751 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 18 10:54:47.524 - 2s E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator Mar 18 10:54:49.588 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-5cd764c577-rqxx8 node/ci-op-n9zmxnb9-253f3-qtbf7-master-2 container/webhook reason/ContainerExit code/2 cause/Error Mar 18 10:54:54.096 E ns/openshift-monitoring pod/cluster-monitoring-operator-6649dbcbd-5mnpd node/ci-op-n9zmxnb9-253f3-qtbf7-master-1 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) | |||
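Besides the operator restarts, this entry also records clusteroperator/csi-snapshot-controller going Available=Unknown for about 2s while its operator resynced. The availability transitions captured in the interval log can be listed in one pass; same build-log.txt assumption as above:
# list every clusteroperator Available-condition transition captured in the run
$ grep -oE "clusteroperator/[a-z-]+ condition/Available status/[A-Za-z]+" build-log.txt | sort -u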
#1636373508185395200 | junit | 10 days ago | |
Mar 16 15:46:31.047 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-fi1ghn7l-253f3-j5kcx-worker-centralus2-wnw4k container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/03/16 15:04:35 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/03/16 15:04:35 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/03/16 15:04:35 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/03/16 15:04:35 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/03/16 15:04:35 oauthproxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/03/16 15:04:35 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/03/16 15:04:35 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0316 15:04:35.376384 1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/03/16 15:04:35 http.go:107: HTTPS: listening on [::]:9091\n Mar 16 15:46:31.047 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-fi1ghn7l-253f3-j5kcx-worker-centralus2-wnw4k container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-03-16T15:04:34.763188578Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=1291cd0)"\nlevel=info ts=2023-03-16T15:04:34.763254378Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20230211-14:32:03)"\nlevel=info ts=2023-03-16T15:04:34.76343038Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-03-16T15:04:35.339936272Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-16T15:04:35.340074773Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-16T15:05:55.3844884Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-16T15:24:28.36904175Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\n Mar 16 15:46:31.105 E ns/openshift-monitoring pod/prometheus-operator-6c4845dd74-9fsh4 node/ci-op-fi1ghn7l-253f3-j5kcx-master-0 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 16 15:46:31.281 E ns/openshift-kube-storage-version-migrator pod/migrator-b8757b48c-nr5f7 node/ci-op-fi1ghn7l-253f3-j5kcx-master-0 container/migrator reason/ContainerExit code/2 cause/Error I0316 14:54:20.831959 1 migrator.go:18] FLAG: --add_dir_header="false"\nI0316 14:54:20.832065 1 migrator.go:18] FLAG: 
--alsologtostderr="true"\nI0316 14:54:20.832071 1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0316 14:54:20.832077 1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0316 14:54:20.832083 1 migrator.go:18] FLAG: --kubeconfig=""\nI0316 14:54:20.832088 1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0316 14:54:20.832095 1 migrator.go:18] FLAG: --log_dir=""\nI0316 14:54:20.832099 1 migrator.go:18] FLAG: --log_file=""\nI0316 14:54:20.832104 1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0316 14:54:20.832108 1 migrator.go:18] FLAG: --logtostderr="true"\nI0316 14:54:20.832113 1 migrator.go:18] FLAG: --one_output="false"\nI0316 14:54:20.832117 1 migrator.go:18] FLAG: --skip_headers="false"\nI0316 14:54:20.832121 1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0316 14:54:20.832125 1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0316 14:54:20.832131 1 migrator.go:18] FLAG: --v="2"\nI0316 14:54:20.832135 1 migrator.go:18] FLAG: --vmodule=""\nI0316 14:54:20.834784 1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI0316 14:54:33.951832 1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI0316 14:54:34.056760 1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI0316 14:54:35.066358 1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI0316 14:54:35.146022 1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\nI0316 15:01:54.944359 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF\n Mar 16 15:46:31.767 E ns/openshift-monitoring pod/openshift-state-metrics-77556d5667-v8rvr node/ci-op-fi1ghn7l-253f3-j5kcx-worker-centralus3-tdgfq container/openshift-state-metrics reason/ContainerExit code/2 cause/Error Mar 16 15:46:32.119 E ns/openshift-console-operator pod/console-operator-76778d847f-v4l5g node/ci-op-fi1ghn7l-253f3-j5kcx-master-0 container/console-operator reason/ContainerExit code/1 cause/Error rsion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0316 15:46:30.162482 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-v4l5g", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0316 15:46:30.162497 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0316 15:46:30.162518 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-v4l5g", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0316 15:46:30.164945 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0316 15:46:30.165049 1 genericapiserver.go:406] [graceful-termination] apiserver is exiting\nI0316 15:46:30.165092 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-v4l5g", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationGracefulTerminationFinished' All pending 
requests processed\nI0316 15:46:30.165138 1 builder.go:283] server exited\nI0316 15:46:30.162645 1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0316 15:46:30.162659 1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0316 15:46:30.162719 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0316 15:46:30.162740 1 base_controller.go:167] Shutting down DownloadsRouteController ...\nW0316 15:46:30.162738 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 16 15:46:33.460 E ns/openshift-monitoring pod/telemeter-client-77c8c4574f-x6z86 node/ci-op-fi1ghn7l-253f3-j5kcx-worker-centralus3-tdgfq container/reload reason/ContainerExit code/2 cause/Error Mar 16 15:46:33.460 E ns/openshift-monitoring pod/telemeter-client-77c8c4574f-x6z86 node/ci-op-fi1ghn7l-253f3-j5kcx-worker-centralus3-tdgfq container/telemeter-client reason/ContainerExit code/2 cause/Error Mar 16 15:46:33.507 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-fi1ghn7l-253f3-j5kcx-worker-centralus2-wnw4k container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/03/16 15:04:17 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/03/16 15:04:17 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/03/16 15:04:17 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/03/16 15:04:17 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/03/16 15:04:17 oauthproxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/03/16 15:04:17 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\nI0316 15:04:17.258194 1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/03/16 15:04:17 http.go:107: HTTPS: listening on [::]:9095\n2023/03/16 15:15:47 server.go:3120: http: TLS handshake error from 10.131.0.4:47890: read tcp 10.128.2.10:9095->10.131.0.4:47890: read: connection reset by peer\n Mar 16 15:46:33.507 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-fi1ghn7l-253f3-j5kcx-worker-centralus2-wnw4k container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-03-16T15:04:16.933009343Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=1291cd0)"\nlevel=info ts=2023-03-16T15:04:16.933066544Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20230211-14:32:03)"\nlevel=info ts=2023-03-16T15:04:16.933244047Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-03-16T15:04:16.933854958Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-03-16T15:04:18.256205318Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, 
/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n Mar 16 15:46:33.544 E ns/openshift-monitoring pod/node-exporter-f4jc2 node/ci-op-fi1ghn7l-253f3-j5kcx-master-0 container/node-exporter reason/ContainerExit code/143 cause/Error 6T14:55:05.225Z caller=node_exporter.go:113 collector=meminfo\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=netclass\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=netdev\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=netstat\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=nfs\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=nfsd\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=powersupplyclass\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=pressure\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=rapl\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=schedstat\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=sockstat\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=softnet\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=stat\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=textfile\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=thermal_zone\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=time\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=timex\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=udp_queues\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=uname\nlevel=info ts=2023-03-16T14:55:05.225Z caller=node_exporter.go:113 collector=vmstat\nlevel=info ts=2023-03-16T14:55:05.226Z caller=node_exporter.go:113 collector=xfs\nlevel=info ts=2023-03-16T14:55:05.226Z caller=node_exporter.go:113 collector=zfs\nlevel=info ts=2023-03-16T14:55:05.226Z caller=node_exporter.go:195 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2023-03-16T14:55:05.226Z caller=tls_config.go:191 msg="TLS is disabled." http2=false\n | |||
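Most of the container exits in this entry are monitoring components going down during the rollout: node-exporter exits with 143 (128+SIGTERM, a plain termination), while the prometheus sidecars and console-operator exit with codes 1 and 2, which is what the test flags as cause/Error. Grouping the exits by code gives a quick picture of how much of an entry is routine restarts versus something worth a closer look; a sketch under the same build-log.txt assumption:
# group container exits by exit code; 143 (SIGTERM) and 137 (SIGKILL) are expected during a rolling update
$ grep -oE "reason/ContainerExit code/[0-9]+" build-log.txt | sort | uniq -c | sort -rn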
#1636282652812120064 | junit | 10 days ago | |
Mar 16 09:43:11.176 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-wr1yskiv-253f3-5qght-worker-centralus3-rznps container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-03-16T09:07:04.778870873Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=1291cd0)"\nlevel=info ts=2023-03-16T09:07:04.779126477Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20230211-14:32:03)"\nlevel=info ts=2023-03-16T09:07:04.77941148Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-03-16T09:07:05.460984205Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-16T09:07:05.461088507Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-16T09:08:30.418639899Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-03-16T09:19:52.143817489Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\n Mar 16 09:43:11.417 E ns/openshift-monitoring pod/node-exporter-j8hsj node/ci-op-wr1yskiv-253f3-5qght-master-0 container/node-exporter reason/ContainerExit code/143 cause/Error 6T08:55:51.253Z caller=node_exporter.go:113 collector=meminfo\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=netclass\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=netdev\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=netstat\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=nfs\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=nfsd\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=powersupplyclass\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=pressure\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=rapl\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=schedstat\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=sockstat\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=softnet\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=stat\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=textfile\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=thermal_zone\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=time\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=timex\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=udp_queues\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=uname\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 
collector=vmstat\nlevel=info ts=2023-03-16T08:55:51.253Z caller=node_exporter.go:113 collector=xfs\nlevel=info ts=2023-03-16T08:55:51.254Z caller=node_exporter.go:113 collector=zfs\nlevel=info ts=2023-03-16T08:55:51.254Z caller=node_exporter.go:195 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2023-03-16T08:55:51.254Z caller=tls_config.go:191 msg="TLS is disabled." http2=false\n Mar 16 09:43:11.710 E ns/openshift-monitoring pod/telemeter-client-8bbdb8985-vv7tm node/ci-op-wr1yskiv-253f3-5qght-worker-centralus2-r89jt container/reload reason/ContainerExit code/2 cause/Error Mar 16 09:43:11.710 E ns/openshift-monitoring pod/telemeter-client-8bbdb8985-vv7tm node/ci-op-wr1yskiv-253f3-5qght-worker-centralus2-r89jt container/telemeter-client reason/ContainerExit code/2 cause/Error Mar 16 09:43:11.763 E ns/openshift-monitoring pod/openshift-state-metrics-77556d5667-8bfhd node/ci-op-wr1yskiv-253f3-5qght-worker-centralus2-r89jt container/openshift-state-metrics reason/ContainerExit code/2 cause/Error Mar 16 09:43:11.961 E ns/openshift-console-operator pod/console-operator-76778d847f-595wr node/ci-op-wr1yskiv-253f3-5qght-master-1 container/console-operator reason/ContainerExit code/1 cause/Error pace:"openshift-console-operator", Name:"console-operator-76778d847f-595wr", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0316 09:43:10.742573 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-595wr", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0316 09:43:10.742591 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0316 09:43:10.742609 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-595wr", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0316 09:43:10.742686 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0316 09:43:10.742756 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...\nI0316 09:43:10.742772 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated\nI0316 09:43:10.742756 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0316 09:43:10.742783 1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0316 09:43:10.742688 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nW0316 09:43:10.742687 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0316 09:43:10.742794 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\n Mar 16 09:43:12.077 E ns/openshift-service-ca-operator pod/service-ca-operator-756f7ff99-5wtqw node/ci-op-wr1yskiv-253f3-5qght-master-2 container/service-ca-operator reason/ContainerExit code/1 cause/Error Mar 16 09:43:12.213 E ns/openshift-monitoring pod/thanos-querier-7f949f8dd5-h6g65 node/ci-op-wr1yskiv-253f3-5qght-worker-centralus3-rznps 
container/oauth-proxy reason/ContainerExit code/2 cause/Error 2023/03/16 09:06:46 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2023/03/16 09:06:46 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/03/16 09:06:46 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/03/16 09:06:46 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/03/16 09:06:46 oauthproxy.go:224: compiled skip-auth-regex => "^/-/(healthy|ready)$"\n2023/03/16 09:06:46 oauthproxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2023/03/16 09:06:46 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/03/16 09:06:46 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0316 09:06:46.773075 1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/03/16 09:06:46 http.go:107: HTTPS: listening on [::]:9091\n Mar 16 09:43:12.657 E ns/openshift-ingress-canary pod/ingress-canary-m9884 node/ci-op-wr1yskiv-253f3-5qght-worker-centralus1-pm7h2 container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n Mar 16 09:43:13.825 E ns/openshift-monitoring pod/thanos-querier-7f949f8dd5-hhc7n node/ci-op-wr1yskiv-253f3-5qght-worker-centralus2-r89jt container/oauth-proxy reason/ContainerExit code/2 cause/Error 2023/03/16 09:06:42 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2023/03/16 09:06:42 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/03/16 09:06:42 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/03/16 09:06:42 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/03/16 09:06:42 oauthproxy.go:224: compiled skip-auth-regex => "^/-/(healthy|ready)$"\n2023/03/16 09:06:42 oauthproxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2023/03/16 09:06:42 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/03/16 09:06:42 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0316 09:06:42.662679 1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/03/16 09:06:42 http.go:107: HTTPS: listening on [::]:9091\n2023/03/16 09:20:09 server.go:3120: http: TLS handshake error from 10.129.2.7:45320: read tcp 10.128.2.20:9091->10.129.2.7:45320: read: connection reset by peer\n Mar 16 09:43:13.971 E ns/openshift-service-ca pod/service-ca-5d846dc897-vvzxb node/ci-op-wr1yskiv-253f3-5qght-master-1 container/service-ca-controller reason/ContainerExit code/1 
cause/Error | |||
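The console-operator excerpt in this entry emits its termination events back to back: TerminationStart, then "The minimal shutdown duration of 0s finished", then TerminationStoppedServing and InFlightRequestsDrained in the same second. To pull just that ordering out of a single operator log, a sketch like the following works (console-operator.log is only a placeholder for wherever the container log was saved; grep -o prints matches in log order, so the sequence is visible at a glance):

$ grep -o "reason: 'Termination[A-Za-z]*'" console-operator.log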
#1636235210343321600 | junit | 10 days ago | |
Mar 16 06:50:52.737 - 3s E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator Mar 16 06:50:53.800 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-5cd764c577-6tzpq node/ci-op-x9fcl2hs-253f3-fwj72-master-0 container/webhook reason/ContainerExit code/2 cause/Error Mar 16 06:50:57.825 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5485bbd886-plx7c node/ci-op-x9fcl2hs-253f3-fwj72-master-0 container/snapshot-controller reason/ContainerExit code/2 cause/Error Mar 16 06:50:58.116 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7fcf9b5fd5-p6hd4 node/ci-op-x9fcl2hs-253f3-fwj72-master-1 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error 6.630102 1 base_controller.go:104] All SnapshotCRDController workers have been terminated\nI0316 06:50:56.630135 1 base_controller.go:114] Shutting down worker of CSIDriverStarter controller ...\nI0316 06:50:56.630145 1 base_controller.go:104] All CSIDriverStarter workers have been terminated\nI0316 06:50:56.630154 1 base_controller.go:114] Shutting down worker of DefaultStorageClassController controller ...\nI0316 06:50:56.630160 1 base_controller.go:104] All DefaultStorageClassController workers have been terminated\nI0316 06:50:56.630170 1 base_controller.go:114] Shutting down worker of StatusSyncer_storage controller ...\nI0316 06:50:56.630175 1 base_controller.go:104] All StatusSyncer_storage workers have been terminated\nI0316 06:50:56.630184 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0316 06:50:56.630190 1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0316 06:50:56.630198 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0316 06:50:56.630204 1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0316 06:50:56.630214 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0316 06:50:56.630221 1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0316 06:50:56.630229 1 base_controller.go:114] Shutting down worker of VSphereProblemDetectorStarter controller ...\nI0316 06:50:56.630234 1 base_controller.go:104] All VSphereProblemDetectorStarter workers have been terminated\nW0316 06:50:56.630374 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0316 06:50:56.630413 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0316 06:50:56.630438 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\n Mar 16 06:50:58.116 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7fcf9b5fd5-p6hd4 node/ci-op-x9fcl2hs-253f3-fwj72-master-1 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 16 06:51:03.865 E ns/openshift-console-operator pod/console-operator-76778d847f-tdfkp node/ci-op-x9fcl2hs-253f3-fwj72-master-0 container/console-operator reason/ContainerExit code/1 cause/Error nshift-console-operator", Name:"console-operator-76778d847f-tdfkp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0316 
06:51:02.644665 1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0316 06:51:02.644671 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-tdfkp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0316 06:51:02.644726 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0316 06:51:02.644746 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-tdfkp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0316 06:51:02.644761 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0316 06:51:02.644761 1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0316 06:51:02.644789 1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0316 06:51:02.644804 1 base_controller.go:167] Shutting down ConsoleOperator ...\nW0316 06:51:02.644813 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0316 06:51:02.644819 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0316 06:51:02.644829 1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0316 06:51:02.644843 1 base_controller.go:167] Shutting down DownloadsRouteController ...\n Mar 16 06:51:09.066 E ns/openshift-monitoring pod/cluster-monitoring-operator-6649dbcbd-qndpc node/ci-op-x9fcl2hs-253f3-fwj72-master-1 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 16 06:51:10.000 - 1s E disruption/kube-api connection/reused disruption/kube-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-x9fcl2hs-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers Mar 16 06:51:10.000 - 1s E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-x9fcl2hs-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers Mar 16 06:51:10.000 - 1s E disruption/openshift-api connection/reused disruption/openshift-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-x9fcl2hs-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers Mar 16 06:51:10.000 - 2s E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-x9fcl2hs-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers | |||
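This entry also records API disruption intervals in the form "disruption/<backend> connection/<new|reused> stopped responding to GET requests ...". A quick tally of which backends and connection types were affected can be made with something like this (e2e-intervals.txt is an assumed local filename for the interval dump):

$ grep -oE 'disruption/[a-z-]+ connection/(new|reused)' e2e-intervals.txt | sort | uniq -c | sort -rn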
#1635638887609012224 | junit | 12 days ago | |
Mar 14 15:13:59.873 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7bf9f4bd6c-wcwfw node/ci-op-7kd9q75t-253f3-swkcs-master-0 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error o/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0314 15:13:55.444842 1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0314 15:13:55.444929 1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0314 15:13:55.444941 1 base_controller.go:167] Shutting down StaticResourceController ...\nI0314 15:13:55.444951 1 base_controller.go:167] Shutting down ConfigObserver ...\nI0314 15:13:55.444960 1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0314 15:13:55.445015 1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0314 15:13:55.446657 1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0314 15:13:55.445047 1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nI0314 15:13:55.445084 1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ...\nI0314 15:13:55.446741 1 base_controller.go:104] All UserCAObservationController workers have been terminated\nI0314 15:13:55.445091 1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0314 15:13:55.446794 1 genericapiserver.go:393] [graceful-termination] apiserver is exiting\nI0314 15:13:55.446853 1 builder.go:283] server exited\nI0314 15:13:55.445101 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0314 15:13:55.446917 1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0314 15:13:55.445108 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0314 15:13:55.446995 1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated\nW0314 15:13:55.445184 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 14 15:13:59.873 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7bf9f4bd6c-wcwfw node/ci-op-7kd9q75t-253f3-swkcs-master-0 container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 14 15:14:00.407 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7fcf9b5fd5-hdhcj node/ci-op-7kd9q75t-253f3-swkcs-master-0 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error theader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0314 15:13:57.648544 1 secure_serving.go:311] Stopped listening on [::]:8443\nI0314 15:13:57.648582 1 genericapiserver.go:363] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"\nI0314 15:13:57.648613 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0314 15:13:57.648645 1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0314 15:13:57.629780 1 base_controller.go:114] Shutting down worker of StatusSyncer_storage controller ...\nI0314 15:13:57.648756 1 base_controller.go:104] All StatusSyncer_storage workers have been 
terminated\nI0314 15:13:57.629785 1 base_controller.go:114] Shutting down worker of VSphereProblemDetectorStarter controller ...\nI0314 15:13:57.648802 1 base_controller.go:104] All VSphereProblemDetectorStarter workers have been terminated\nI0314 15:13:57.629799 1 base_controller.go:114] Shutting down worker of CSIDriverStarter controller ...\nI0314 15:13:57.648814 1 base_controller.go:104] All CSIDriverStarter workers have been terminated\nI0314 15:13:57.629805 1 base_controller.go:114] Shutting down worker of DefaultStorageClassController controller ...\nI0314 15:13:57.648824 1 base_controller.go:104] All DefaultStorageClassController workers have been terminated\nI0314 15:13:57.629811 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0314 15:13:57.648838 1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0314 15:13:57.629822 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0314 15:13:57.648848 1 base_controller.go:104] All ManagementStateController workers have been terminated\nW0314 15:13:57.630054 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 14 15:14:00.407 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7fcf9b5fd5-hdhcj node/ci-op-7kd9q75t-253f3-swkcs-master-0 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 14 15:14:00.601 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-5cd764c577-wp7jp node/ci-op-7kd9q75t-253f3-swkcs-master-1 container/webhook reason/ContainerExit code/2 cause/Error Mar 14 15:14:00.722 E ns/openshift-console-operator pod/console-operator-76778d847f-tg9fb node/ci-op-7kd9q75t-253f3-swkcs-master-1 container/console-operator reason/ContainerExit code/1 cause/Error :57.622761 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-tg9fb", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0314 15:13:57.622813 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-tg9fb", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0314 15:13:57.622854 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0314 15:13:57.622589 1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0314 15:13:57.623123 1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0314 15:13:57.623161 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0314 15:13:57.623194 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0314 15:13:57.623229 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0314 15:13:57.623464 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-tg9fb", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0314 15:13:57.623496 1 
genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nW0314 15:13:57.623597 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0314 15:13:57.623626 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\n Mar 14 15:14:00.953 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-7kd9q75t-253f3-swkcs-worker-centralus2-sgg8p container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/03/14 14:27:21 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/03/14 14:27:21 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/03/14 14:27:21 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/03/14 14:27:21 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/03/14 14:27:21 oauthproxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/03/14 14:27:21 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\nI0314 14:27:21.120310 1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/03/14 14:27:21 http.go:107: HTTPS: listening on [::]:9095\n2023/03/14 14:41:46 server.go:3120: http: TLS handshake error from 10.128.2.19:53520: read tcp 10.129.2.7:9095->10.128.2.19:53520: read: connection reset by peer\n Mar 14 15:14:00.953 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-7kd9q75t-253f3-swkcs-worker-centralus2-sgg8p container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-03-14T14:27:20.790063283Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=1291cd0)"\nlevel=info ts=2023-03-14T14:27:20.790326186Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20230211-14:32:03)"\nlevel=info ts=2023-03-14T14:27:20.79062849Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-03-14T14:27:20.790859993Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-03-14T14:27:22.650995181Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n Mar 14 15:14:01.404 E ns/openshift-image-registry pod/cluster-image-registry-operator-5978fb6844-cqgsh node/ci-op-7kd9q75t-253f3-swkcs-master-0 container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 14 15:14:08.471 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5485bbd886-9qhhx node/ci-op-7kd9q75t-253f3-swkcs-master-2 container/snapshot-controller reason/ContainerExit code/2 cause/Error Mar 14 
15:14:11.533 E ns/openshift-marketplace pod/marketplace-operator-5674dd9cc-22sfs node/ci-op-7kd9q75t-253f3-swkcs-master-0 container/marketplace-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) | |||
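Many of the events in this entry are reason/TerminationStateCleared, which the monitor already annotates with a pointer to bug 1933760. To count how often each pod hits it in a run, a rough pipeline could be (build-log.txt again being a hypothetical local copy of the excerpted output):

$ grep 'reason/TerminationStateCleared' build-log.txt | grep -oE 'ns/[a-z-]+ pod/[a-zA-Z0-9.-]+' | sort | uniq -c | sort -rn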
#1635730087980568576 | junit | 11 days ago | |
Mar 14 21:09:30.762 E ns/openshift-insights pod/insights-operator-59bc5f8cd8-pjspn node/ci-op-g2s9l0qv-253f3-7qr2m-master-2 container/insights-operator reason/ContainerExit code/2 cause/Error 1ace54aa9" srcIP="10.129.2.12:58636" resp=200\nI0314 21:08:10.529862 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="6.096581ms" userAgent="Prometheus/2.29.2" audit-ID="324cfaca-6d19-423c-be4c-1968d97cae13" srcIP="10.131.0.12:52646" resp=200\nI0314 21:08:22.738741 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.901506ms" userAgent="Prometheus/2.29.2" audit-ID="ec582215-d874-4567-9b51-1f7c3fee7956" srcIP="10.129.2.12:58636" resp=200\nI0314 21:08:31.920031 1 configobserver.go:77] Refreshing configuration from cluster pull secret\nI0314 21:08:31.925385 1 configobserver.go:102] Found cloud.openshift.com token\nI0314 21:08:31.925429 1 configobserver.go:120] Refreshing configuration from cluster secret\nI0314 21:08:31.930619 1 configobserver.go:124] Support secret does not exist\nI0314 21:08:39.449895 1 status.go:354] The operator is healthy\nI0314 21:08:39.449958 1 status.go:441] No status update necessary, objects are identical\nI0314 21:08:40.534992 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="11.771458ms" userAgent="Prometheus/2.29.2" audit-ID="e25a0f5e-9ace-4f7a-8a12-856c1940b057" srcIP="10.131.0.12:52646" resp=200\nI0314 21:08:41.549978 1 reflector.go:535] k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172: Watch close - *v1.ConfigMap total 8 items received\nI0314 21:08:52.756843 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="12.284464ms" userAgent="Prometheus/2.29.2" audit-ID="5e173483-cb38-4cbc-8ce2-52f411f92a6e" srcIP="10.129.2.12:58636" resp=200\nI0314 21:09:10.529565 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="6.159182ms" userAgent="Prometheus/2.29.2" audit-ID="29d8617f-91d0-47be-a173-13a4c66c8538" srcIP="10.131.0.12:52646" resp=200\nI0314 21:09:22.738251 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.923906ms" userAgent="Prometheus/2.29.2" audit-ID="03aa4991-7c70-4a9b-b403-0ff15aaa298c" srcIP="10.129.2.12:58636" resp=200\n Mar 14 21:09:30.762 E ns/openshift-insights pod/insights-operator-59bc5f8cd8-pjspn node/ci-op-g2s9l0qv-253f3-7qr2m-master-2 container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 14 21:09:46.904 E ns/openshift-image-registry pod/cluster-image-registry-operator-5978fb6844-6gz5z node/ci-op-g2s9l0qv-253f3-7qr2m-master-2 container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 14 21:09:50.946 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7fcf9b5fd5-dc7fx node/ci-op-g2s9l0qv-253f3-7qr2m-master-2 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0314 20:33:44.750664 1 base_controller.go:73] Caches are synced for SnapshotCRDController \nI0314 20:33:44.750700 1 base_controller.go:110] Starting #1 worker of SnapshotCRDController controller ...\nI0314 20:34:48.875144 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0314 20:44:48.875758 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0314 20:45:01.588290 1 controller.go:174] Existing 
StorageClass managed-premium found, reconciling\nI0314 20:50:06.243467 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0314 20:51:30.772591 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0314 20:54:48.875862 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0314 21:04:48.876828 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0314 21:09:49.179494 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0314 21:09:49.179979 1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0314 21:09:49.180068 1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0314 21:09:49.180458 1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0314 21:09:49.180474 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0314 21:09:49.180490 1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0314 21:09:49.180502 1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0314 21:09:49.180516 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0314 21:09:49.180528 1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nW0314 21:09:49.180550 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 14 21:09:50.946 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7fcf9b5fd5-dc7fx node/ci-op-g2s9l0qv-253f3-7qr2m-master-2 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 14 21:09:51.677 E ns/openshift-console-operator pod/console-operator-76778d847f-c6fbj node/ci-op-g2s9l0qv-253f3-7qr2m-master-0 container/console-operator reason/ContainerExit code/1 cause/Error 316 1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0314 21:09:49.779326 1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0314 21:09:49.779357 1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0314 21:09:49.779367 1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0314 21:09:49.779379 1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0314 21:09:49.779385 1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0314 21:09:49.779401 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0314 21:09:49.779414 1 base_controller.go:167] Shutting down HealthCheckController ...\nI0314 21:09:49.779417 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-c6fbj", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0314 21:09:49.779438 1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0314 21:09:49.779453 1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0314 21:09:49.779451 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-c6fbj", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal 
shutdown duration of 0s finished\nI0314 21:09:49.779470 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0314 21:09:49.779470 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nW0314 21:09:49.779486 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 14 21:09:53.989 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7bf9f4bd6c-92kht node/ci-op-g2s9l0qv-253f3-7qr2m-master-2 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error .io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0314 21:09:53.035283 1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0314 21:09:53.035349 1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0314 21:09:53.035367 1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0314 21:09:53.035380 1 base_controller.go:167] Shutting down ConfigObserver ...\nI0314 21:09:53.035389 1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0314 21:09:53.035398 1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0314 21:09:53.035445 1 base_controller.go:167] Shutting down StaticResourceController ...\nI0314 21:09:53.035460 1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nI0314 21:09:53.035524 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0314 21:09:53.035713 1 reflector.go:225] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0314 21:09:53.035726 1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0314 21:09:53.035590 1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ...\nI0314 21:09:53.035597 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0314 21:09:53.035791 1 base_controller.go:104] All UserCAObservationController workers have been terminated\nI0314 21:09:53.035604 1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0314 21:09:53.035808 1 base_controller.go:104] All StaticResourceController workers have been terminated\nW0314 21:09:53.035619 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 14 21:09:56.012 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5dbf449b47-dtq24 node/ci-op-g2s9l0qv-253f3-7qr2m-master-2 container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error ap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0314 21:09:54.311344 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0314 21:09:54.311375 1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0314 21:09:54.311396 1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0314 21:09:54.311427 1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0314 21:09:54.311441 1 base_controller.go:167] Shutting down StaticResourceController 
...\nI0314 21:09:54.311453 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0314 21:09:54.311462 1 secure_serving.go:311] Stopped listening on [::]:8443\nI0314 21:09:54.311477 1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0314 21:09:54.311485 1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0314 21:09:54.311489 1 genericapiserver.go:363] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"\nI0314 21:09:54.311495 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0314 21:09:54.311501 1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0314 21:09:54.311504 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0314 21:09:54.311524 1 base_controller.go:114] Shutting down worker of CSISnapshotWebhookController controller ...\nI0314 21:09:54.311556 1 base_controller.go:104] All CSISnapshotWebhookController workers have been terminated\nI0314 21:09:54.311562 1 base_controller.go:167] Shutting down ManagementStateController ...\nW0314 21:09:54.311564 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 14 21:09:56.012 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5dbf449b47-dtq24 node/ci-op-g2s9l0qv-253f3-7qr2m-master-2 container/csi-snapshot-controller-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 14 21:09:56.115 E ns/openshift-machine-api pod/cluster-autoscaler-operator-6945657b6f-qqlj7 node/ci-op-g2s9l0qv-253f3-7qr2m-master-2 container/cluster-autoscaler-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 14 21:09:56.373 - 9s E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator | |||
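Every operator in these excerpts reports "The minimal shutdown duration of 0s finished" immediately after TerminationStart, so they stop listening essentially as soon as they are signalled. To see at a glance which shutdown durations show up across a run, a small sketch like this could help (again assuming a local build-log.txt):

$ grep -o 'The minimal shutdown duration of [0-9a-z]* finished' build-log.txt | sort | uniq -c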
#1635269109413318656 | junit | 13 days ago | |
Mar 13 14:29:57.948 E ns/openshift-machine-api pod/machine-api-controllers-5d9c5cc965-7nsbx node/ci-op-0mm1qj3z-253f3-2nxsd-master-1 container/machineset-controller reason/ContainerExit code/1 cause/Error Mar 13 14:30:39.934 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-766cb444d9-bdchq node/ci-op-0mm1qj3z-253f3-2nxsd-master-0 container/kube-storage-version-migrator-operator reason/ContainerExit code/1 cause/Error me:"kube-storage-version-migrator-operator", UID:"6467adc6-63e6-4ba6-8a28-f860a7ebb949", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "KubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)" to "All is well"\nI0313 14:30:39.217236 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0313 14:30:39.217690 1 reflector.go:225] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0313 14:30:39.218023 1 reflector.go:225] Stopping reflector *v1.Deployment (10m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0313 14:30:39.218170 1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0313 14:30:39.218255 1 reflector.go:225] Stopping reflector *unstructured.Unstructured (12h0m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0313 14:30:39.218318 1 base_controller.go:167] Shutting down StaticConditionsController ...\nI0313 14:30:39.218361 1 base_controller.go:167] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0313 14:30:39.218389 1 base_controller.go:145] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated\nI0313 14:30:39.218413 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0313 14:30:39.218426 1 base_controller.go:167] Shutting down KubeStorageVersionMigrator ...\nI0313 14:30:39.218437 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0313 14:30:39.218450 1 base_controller.go:167] Shutting down StaticResourceController ...\nW0313 14:30:39.218516 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 13 14:30:52.027 E ns/openshift-insights pod/insights-operator-59bc5f8cd8-5vtx6 node/ci-op-0mm1qj3z-253f3-2nxsd-master-0 container/insights-operator reason/ContainerExit code/2 cause/Error 128.2.11:42156" resp=200\nI0313 14:29:10.266369 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.078916ms" userAgent="Prometheus/2.29.2" audit-ID="7f99b843-aa1c-4036-8a5f-e1d395b560a6" srcIP="10.129.2.13:52698" resp=200\nI0313 14:29:23.158075 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.900424ms" userAgent="Prometheus/2.29.2" audit-ID="88fc0d3d-da5a-4e7c-9238-9c86273bfdfe" srcIP="10.128.2.11:42156" resp=200\nI0313 14:29:40.269658 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="6.968533ms" userAgent="Prometheus/2.29.2" audit-ID="f6eac726-0e1b-4ce8-8b28-750ae188efa3" srcIP="10.129.2.13:52698" resp=200\nI0313 14:29:53.162312 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="10.042043ms" userAgent="Prometheus/2.29.2" audit-ID="a89a5f8f-1d34-4364-aeed-23c191bc8512" srcIP="10.128.2.11:42156" resp=200\nI0313 14:29:56.090450 1 configobserver.go:77] 
Refreshing configuration from cluster pull secret\nI0313 14:29:56.095033 1 configobserver.go:102] Found cloud.openshift.com token\nI0313 14:29:56.095074 1 configobserver.go:120] Refreshing configuration from cluster secret\nI0313 14:29:56.100060 1 configobserver.go:124] Support secret does not exist\nI0313 14:30:04.917826 1 status.go:354] The operator is healthy\nI0313 14:30:04.917902 1 status.go:441] No status update necessary, objects are identical\nI0313 14:30:10.265860 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.253817ms" userAgent="Prometheus/2.29.2" audit-ID="9471b0f5-2489-4089-ae2c-60816bffdf6b" srcIP="10.129.2.13:52698" resp=200\nI0313 14:30:23.159025 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="6.907631ms" userAgent="Prometheus/2.29.2" audit-ID="bdd0bbee-1baa-4a6b-bc9b-9c5539c6b51b" srcIP="10.128.2.11:42156" resp=200\nI0313 14:30:40.269768 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.57933ms" userAgent="Prometheus/2.29.2" audit-ID="df4132b4-3faf-4f21-b5c2-8036c01912a5" srcIP="10.129.2.13:52698" resp=200\n Mar 13 14:30:52.027 E ns/openshift-insights pod/insights-operator-59bc5f8cd8-5vtx6 node/ci-op-0mm1qj3z-253f3-2nxsd-master-0 container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 13 14:31:07.226 - 3s E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator Mar 13 14:31:07.771 E ns/openshift-console-operator pod/console-operator-76778d847f-8pxwd node/ci-op-0mm1qj3z-253f3-2nxsd-master-2 container/console-operator reason/ContainerExit code/1 cause/Error e_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0313 14:31:00.333412 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...\nI0313 14:31:00.333414 1 base_controller.go:114] Shutting down worker of ConsoleServiceController controller ...\nI0313 14:31:00.333424 1 base_controller.go:114] Shutting down worker of ConsoleDownloadsDeploymentSyncController controller ...\nW0313 14:31:00.333491 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0313 14:31:00.334944 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-8pxwd", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0313 14:31:00.335576 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-8pxwd", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0313 14:31:00.335637 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0313 14:31:00.335694 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-8pxwd", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0313 14:31:00.335738 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0313 
14:31:00.335777 1 base_controller.go:104] All ConsoleOperator workers have been terminated\nI0313 14:31:00.335818 1 base_controller.go:104] All ConsoleCLIDownloadsController workers have been terminated\n Mar 13 14:31:08.126 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5dbf449b47-tqsl5 node/ci-op-0mm1qj3z-253f3-2nxsd-master-0 container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error erver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0313 14:31:05.950476 1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0313 14:31:05.951109 1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0313 14:31:05.951138 1 base_controller.go:167] Shutting down StaticResourceController ...\nI0313 14:31:05.951151 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0313 14:31:05.951168 1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0313 14:31:05.951174 1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0313 14:31:05.951187 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0313 14:31:05.951230 1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0313 14:31:05.951242 1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0313 14:31:05.951254 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0313 14:31:05.951260 1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0313 14:31:05.951271 1 base_controller.go:114] Shutting down worker of StatusSyncer_csi-snapshot-controller controller ...\nI0313 14:31:05.951277 1 base_controller.go:104] All StatusSyncer_csi-snapshot-controller workers have been terminated\nI0313 14:31:05.951286 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0313 14:31:05.951294 1 base_controller.go:104] All ManagementStateController workers have been terminated\nW0313 14:31:05.951306 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0313 14:31:05.951316 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\n Mar 13 14:31:08.271 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7bf9f4bd6c-gp6pr node/ci-op-0mm1qj3z-253f3-2nxsd-master-0 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error 0/tools/cache/reflector.go:167\nI0313 14:31:06.035370 1 reflector.go:225] Stopping reflector *v1.Namespace (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0313 14:31:06.035414 1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0313 14:31:06.035450 1 reflector.go:225] Stopping reflector *v1.Build (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0313 14:31:06.035487 1 reflector.go:225] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0313 14:31:06.035526 1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0313 14:31:06.035560 1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from 
k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0313 14:31:06.035596 1 reflector.go:225] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0313 14:31:06.035628 1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0313 14:31:06.035658 1 reflector.go:225] Stopping reflector *v1.Image (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0313 14:31:06.035712 1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nI0313 14:31:06.035751 1 reflector.go:225] Stopping reflector *v1.Deployment (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0313 14:31:06.035815 1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0313 14:31:06.035841 1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nW0313 14:31:06.035851 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 13 14:31:08.271 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7bf9f4bd6c-gp6pr node/ci-op-0mm1qj3z-253f3-2nxsd-master-0 container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 13 14:31:08.361 E ns/openshift-authentication-operator pod/authentication-operator-cd5967fb9-ffbln node/ci-op-0mm1qj3z-253f3-2nxsd-master-0 container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 13 14:31:09.511 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-5cd764c577-f24z7 node/ci-op-0mm1qj3z-253f3-2nxsd-master-0 container/webhook reason/ContainerExit code/2 cause/Error | |||
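Much of the noise in these junit entries is simply containers exiting with non-zero codes during the rollout (code/1, code/2, code/143 above). To group the exits by container and exit code, a rough tally like the following could be used (build-log.txt is still a placeholder name for the local copy of this output):

$ grep -oE 'container/[a-z0-9-]+ reason/ContainerExit code/[0-9]+' build-log.txt | sort | uniq -c | sort -rn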
#1635172685925322752 | junit | 13 days ago | |
Mar 13 08:22:00.176 E ns/openshift-image-registry pod/cluster-image-registry-operator-5978fb6844-tqxk9 node/ci-op-m2qtct0z-253f3-kcfld-master-0 container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 13 08:22:02.000 - 1s E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-m2qtct0z-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers Mar 13 08:22:06.670 E ns/openshift-kube-storage-version-migrator pod/migrator-b8757b48c-wd7fv node/ci-op-m2qtct0z-253f3-kcfld-master-1 container/migrator reason/ContainerExit code/2 cause/Error I0313 07:23:25.412648 1 migrator.go:18] FLAG: --add_dir_header="false"\nI0313 07:23:25.412752 1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0313 07:23:25.412759 1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0313 07:23:25.412765 1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0313 07:23:25.412772 1 migrator.go:18] FLAG: --kubeconfig=""\nI0313 07:23:25.412779 1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0313 07:23:25.412786 1 migrator.go:18] FLAG: --log_dir=""\nI0313 07:23:25.412791 1 migrator.go:18] FLAG: --log_file=""\nI0313 07:23:25.412795 1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0313 07:23:25.412800 1 migrator.go:18] FLAG: --logtostderr="true"\nI0313 07:23:25.412804 1 migrator.go:18] FLAG: --one_output="false"\nI0313 07:23:25.412809 1 migrator.go:18] FLAG: --skip_headers="false"\nI0313 07:23:25.412814 1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0313 07:23:25.412819 1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0313 07:23:25.412823 1 migrator.go:18] FLAG: --v="2"\nI0313 07:23:25.412827 1 migrator.go:18] FLAG: --vmodule=""\nI0313 07:23:25.414504 1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI0313 07:23:41.531732 1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI0313 07:23:41.746199 1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI0313 07:23:42.771523 1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI0313 07:23:42.892321 1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\nI0313 07:29:40.369749 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF\n Mar 13 08:22:08.315 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7bf9f4bd6c-d2tdj node/ci-op-m2qtct0z-253f3-kcfld-master-0 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error 2:02.796063 1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0313 08:22:02.796065 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0313 08:22:02.796112 1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0313 08:22:02.796114 1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nI0313 08:22:02.796031 1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0313 08:22:02.796069 1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0313 
08:22:02.796165 1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0313 08:22:02.796165 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0313 08:22:02.796072 1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ...\nI0313 08:22:02.796177 1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0313 08:22:02.796080 1 reflector.go:225] Stopping reflector *v1.Proxy (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0313 08:22:02.796191 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0313 08:22:02.796179 1 base_controller.go:104] All UserCAObservationController workers have been terminated\nI0313 08:22:02.796157 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nW0313 08:22:02.796138 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0313 08:22:02.796222 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\n Mar 13 08:22:08.315 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7bf9f4bd6c-d2tdj node/ci-op-m2qtct0z-253f3-kcfld-master-0 container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 13 08:22:08.684 E ns/openshift-console-operator pod/console-operator-76778d847f-97jql node/ci-op-m2qtct0z-253f3-kcfld-master-2 container/console-operator reason/ContainerExit code/1 cause/Error ler controller ...\nI0313 08:22:02.444931 1 base_controller.go:114] Shutting down worker of ConsoleServiceController controller ...\nI0313 08:22:02.444937 1 base_controller.go:114] Shutting down worker of ConsoleRouteController controller ...\nI0313 08:22:02.444942 1 base_controller.go:114] Shutting down worker of ConsoleCLIDownloadsController controller ...\nI0313 08:22:02.444948 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0313 08:22:02.444953 1 base_controller.go:114] Shutting down worker of DownloadsRouteController controller ...\nI0313 08:22:02.445036 1 base_controller.go:114] Shutting down worker of ConsoleDownloadsDeploymentSyncController controller ...\nI0313 08:22:02.445041 1 base_controller.go:114] Shutting down worker of ConsoleServiceController controller ...\nI0313 08:22:02.445047 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0313 08:22:02.445055 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0313 08:22:02.445061 1 base_controller.go:114] Shutting down worker of HealthCheckController controller ...\nI0313 08:22:02.445785 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-97jql", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0313 08:22:02.446466 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-97jql", 
UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0313 08:22:02.446511 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\n Mar 13 08:22:09.366 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5dbf449b47-ncfwj node/ci-op-m2qtct0z-253f3-kcfld-master-0 container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error 159] Finished syncing operator at 108.280707ms\nI0313 08:22:02.186599 1 operator.go:157] Starting syncing operator at 2023-03-13 08:22:02.186593633 +0000 UTC m=+3523.187117041\nI0313 08:22:02.287523 1 operator.go:159] Finished syncing operator at 100.917676ms\nI0313 08:22:02.287580 1 operator.go:157] Starting syncing operator at 2023-03-13 08:22:02.28757501 +0000 UTC m=+3523.288098418\nI0313 08:22:02.410229 1 operator.go:159] Finished syncing operator at 122.642259ms\nI0313 08:22:02.410311 1 operator.go:157] Starting syncing operator at 2023-03-13 08:22:02.410306371 +0000 UTC m=+3523.410829779\nI0313 08:22:02.667532 1 operator.go:159] Finished syncing operator at 257.212928ms\nI0313 08:22:02.667618 1 operator.go:157] Starting syncing operator at 2023-03-13 08:22:02.6676129 +0000 UTC m=+3523.668136308\nI0313 08:22:07.906322 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0313 08:22:07.906384 1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0313 08:22:07.906406 1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0313 08:22:07.906418 1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0313 08:22:07.906729 1 base_controller.go:167] Shutting down ManagementStateController ...\nI0313 08:22:07.906748 1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0313 08:22:07.906752 1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0313 08:22:07.906761 1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0313 08:22:07.906773 1 base_controller.go:167] Shutting down StaticResourceController ...\nI0313 08:22:07.906784 1 base_controller.go:167] Shutting down LoggingSyncer ...\nW0313 08:22:07.906796 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 13 08:22:10.272 E ns/openshift-authentication-operator pod/authentication-operator-cd5967fb9-d947p node/ci-op-m2qtct0z-253f3-kcfld-master-0 container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 13 08:22:10.332 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7fcf9b5fd5-nbkxq node/ci-op-m2qtct0z-253f3-kcfld-master-0 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error 1 webhook.go:155] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nE0313 07:42:14.571715 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, Post \"https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews\": dial tcp 172.30.0.1:443: connect: connection refused]"\nI0313 07:42:20.937402 1 controller.go:174] Existing StorageClass 
managed-premium found, reconciling\nI0313 07:42:23.297537 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 07:42:24.763965 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 07:52:23.297792 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 07:56:38.002152 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 08:00:20.018720 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 08:02:23.298006 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 08:06:33.248013 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 08:12:23.298260 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 08:16:38.002971 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 08:22:08.361033 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0313 08:22:08.361656 1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0313 08:22:08.361772 1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0313 08:22:08.363004 1 base_controller.go:167] Shutting down SnapshotCRDController ...\nW0313 08:22:08.363172 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 13 08:22:10.332 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7fcf9b5fd5-nbkxq node/ci-op-m2qtct0z-253f3-kcfld-master-0 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 13 08:22:10.385 E ns/openshift-monitoring pod/cluster-monitoring-operator-6649dbcbd-mkrp7 node/ci-op-m2qtct0z-253f3-kcfld-master-0 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) | |||
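The operator logs in this excerpt all walk through the same graceful-termination sequence: TerminationStart (received signal to terminate, becoming unready, but keeping serving), TerminationMinimalShutdownDurationFinished (0s for these operators, versus 1m10s for kube-apiserver), then stopping the listener and draining in-flight requests before the controllers exit. As a rough illustration only, here is a minimal Go sketch of that pattern using just the standard library; the 5s delay, the :8443 address, and the /healthz handler are placeholder assumptions, and this is not the kube-apiserver or library-go implementation:

package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	srv := &http.Server{Addr: ":8443"}

	// Serve until Shutdown is called below.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server failed: %v", err)
		}
	}()

	// Wait for the SIGTERM the kubelet sends when the pod is deleted.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGTERM, os.Interrupt)
	<-sig
	log.Println("TerminationStart: received signal to terminate, becoming unready, but keeping serving")

	// Keep listening for a minimal shutdown duration so clients and load
	// balancers can notice the endpoint going away (1m10s for kube-apiserver,
	// 0s for the operators above; 5s here is an arbitrary placeholder).
	time.Sleep(5 * time.Second)
	log.Println("TerminationMinimalShutdownDurationFinished")

	// Stop accepting new connections and drain in-flight requests, bounded
	// by a timeout; this covers the stop-listening and drained steps that
	// show up as separate events in the logs above.
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("graceful termination failed: %v", err)
		return
	}
	log.Println("InFlightRequestsDrained")
}

In this sketch srv.Shutdown handles both the "stopped listening" and "in-flight requests drained" phases that the real components report as distinct shutdown events.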
#1635130630951931904 | junit | 13 days ago | |
Mar 13 05:17:50.812 E ns/openshift-machine-api pod/machine-api-controllers-5d9c5cc965-vlh7v node/ci-op-jkdy2st0-253f3-v8mrg-master-2 container/machineset-controller reason/ContainerExit code/1 cause/Error Mar 13 05:18:44.248 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-766cb444d9-7r2qk node/ci-op-jkdy2st0-253f3-v8mrg-master-1 container/kube-storage-version-migrator-operator reason/ContainerExit code/1 cause/Error eaccounts/kube-storage-version-migrator-sa": dial tcp 172.30.0.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 172.30.0.1:443: connect: connection refused, Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused]\nI0313 05:18:42.693996 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0313 05:18:42.694711 1 reflector.go:225] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0313 05:18:42.694934 1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0313 05:18:42.695285 1 reflector.go:225] Stopping reflector *v1.Deployment (10m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0313 05:18:42.695342 1 reflector.go:225] Stopping reflector *unstructured.Unstructured (12h0m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0313 05:18:42.695394 1 base_controller.go:167] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0313 05:18:42.709075 1 base_controller.go:145] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated\nI0313 05:18:42.695407 1 base_controller.go:167] Shutting down StaticResourceController ...\nI0313 05:18:42.695419 1 base_controller.go:167] Shutting down StaticConditionsController ...\nI0313 05:18:42.695430 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0313 05:18:42.695439 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0313 05:18:42.695449 1 base_controller.go:167] Shutting down KubeStorageVersionMigrator ...\nW0313 05:18:42.695492 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 13 05:18:44.248 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-766cb444d9-7r2qk node/ci-op-jkdy2st0-253f3-v8mrg-master-1 container/kube-storage-version-migrator-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 13 05:18:54.322 E ns/openshift-insights pod/insights-operator-59bc5f8cd8-xkk6n node/ci-op-jkdy2st0-253f3-v8mrg-master-1 container/insights-operator reason/ContainerExit code/2 cause/Error g.go:104] "HTTP" verb="GET" URI="/metrics" latency="10.635619ms" userAgent="Prometheus/2.29.2" audit-ID="a3248b8e-95c5-4377-8007-795c221afdb2" srcIP="10.131.0.13:57230" resp=200\nI0313 05:16:59.560675 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.479984ms" userAgent="Prometheus/2.29.2" audit-ID="f165817d-2e8a-46ee-badd-edfc32be2ba5" srcIP="10.128.2.13:59378" resp=200\nI0313 05:17:04.737715 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.805954ms" userAgent="Prometheus/2.29.2" 
audit-ID="5f9e8464-671c-410e-8d03-b4e9e20643d0" srcIP="10.131.0.13:57230" resp=200\nI0313 05:17:29.565713 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="12.433839ms" userAgent="Prometheus/2.29.2" audit-ID="600a1d5b-7119-4476-adde-7e0210cdf15c" srcIP="10.128.2.13:59378" resp=200\nI0313 05:17:34.741219 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="8.623696ms" userAgent="Prometheus/2.29.2" audit-ID="ccc02a34-791a-4949-b6f5-5de47a42b10c" srcIP="10.131.0.13:57230" resp=200\nI0313 05:17:48.392243 1 status.go:354] The operator is healthy\nI0313 05:17:48.392316 1 status.go:441] No status update necessary, objects are identical\nI0313 05:17:59.561328 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="8.063189ms" userAgent="Prometheus/2.29.2" audit-ID="f7e99073-51bf-4935-b061-e070020f5e59" srcIP="10.128.2.13:59378" resp=200\nI0313 05:18:04.737269 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.890155ms" userAgent="Prometheus/2.29.2" audit-ID="4821b3ef-104e-476b-8a2b-11b7a068f05e" srcIP="10.131.0.13:57230" resp=200\nI0313 05:18:29.564279 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="11.170824ms" userAgent="Prometheus/2.29.2" audit-ID="b13b00ba-0f65-41a2-a1f9-35989a374720" srcIP="10.128.2.13:59378" resp=200\nI0313 05:18:34.740764 1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="8.309493ms" userAgent="Prometheus/2.29.2" audit-ID="c5a64aac-2a89-48cc-9286-48519427e07e" srcIP="10.131.0.13:57230" resp=200\n Mar 13 05:19:04.386 E ns/openshift-authentication-operator pod/authentication-operator-cd5967fb9-wt7xk node/ci-op-jkdy2st0-253f3-v8mrg-master-1 container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 13 05:19:06.270 E ns/openshift-console-operator pod/console-operator-76778d847f-q22td node/ci-op-jkdy2st0-253f3-v8mrg-master-0 container/console-operator reason/ContainerExit code/1 cause/Error nsole-operator", Name:"console-operator-76778d847f-q22td", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0313 05:19:03.431863 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-q22td", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0313 05:19:03.431879 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0313 05:19:03.431901 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-76778d847f-q22td", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0313 05:19:03.431922 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0313 05:19:03.431940 1 base_controller.go:167] Shutting down HealthCheckController ...\nI0313 05:19:03.431958 1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0313 05:19:03.431966 1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0313 05:19:03.431980 1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0313 05:19:03.431995 1 
base_controller.go:167] Shutting down ResourceSyncController ...\nI0313 05:19:03.432009 1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0313 05:19:03.432021 1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0313 05:19:03.432036 1 base_controller.go:167] Shutting down LoggingSyncer ...\nW0313 05:19:03.432294 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 13 05:19:06.420 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7fcf9b5fd5-4b87b node/ci-op-jkdy2st0-253f3-v8mrg-master-1 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error 2:12.544279 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 04:54:25.507632 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 05:00:43.170993 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 05:02:12.544838 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 05:03:50.820016 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 05:12:12.544955 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 05:14:25.508589 1 controller.go:174] Existing StorageClass managed-premium found, reconciling\nI0313 05:19:03.309737 1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0313 05:19:03.309958 1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0313 05:19:03.309980 1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0313 05:19:03.310020 1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0313 05:19:03.310191 1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0313 05:19:03.310219 1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0313 05:19:03.310232 1 base_controller.go:167] Shutting down ConfigObserver ...\nI0313 05:19:03.310247 1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0313 05:19:03.310254 1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0313 05:19:03.310269 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0313 05:19:03.310283 1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nI0313 05:19:03.310294 1 base_controller.go:167] Shutting down ManagementStateController ...\nW0313 05:19:03.310401 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 13 05:19:06.420 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7fcf9b5fd5-4b87b node/ci-op-jkdy2st0-253f3-v8mrg-master-1 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 13 05:19:06.515 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7bf9f4bd6c-8kdxn node/ci-op-jkdy2st0-253f3-v8mrg-master-1 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error 1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0313 05:19:03.524626 1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0313 05:19:03.524669 1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from 
k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0313 05:19:03.524951 1 base_controller.go:167] Shutting down StaticResourceController ...\nI0313 05:19:03.524943 1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0313 05:19:03.525006 1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0313 05:19:03.525028 1 base_controller.go:167] Shutting down ConfigObserver ...\nI0313 05:19:03.525045 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0313 05:19:03.525101 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0313 05:19:03.525103 1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nI0313 05:19:03.525062 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0313 05:19:03.525127 1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0313 05:19:03.525081 1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0313 05:19:03.525142 1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0313 05:19:03.525148 1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0313 05:19:03.525162 1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0313 05:19:03.525173 1 base_controller.go:104] All StaticResourceController workers have been terminated\nW0313 05:19:03.525173 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n Mar 13 05:19:06.515 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7bf9f4bd6c-8kdxn node/ci-op-jkdy2st0-253f3-v8mrg-master-1 container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 13 05:19:16.000 - 1s E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-jkdy2st0-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers |
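The disruption/kube-api and disruption/oauth-api intervals in these excerpts come from pollers that repeatedly GET an API endpoint over both brand-new and reused connections and record when either stops answering. The following hypothetical Go sketch shows that style of probe; the URL, the one-second poll interval, and the five-second header timeout are assumptions for illustration, not the actual origin disruption monitor:

package main

import (
	"log"
	"net/http"
	"time"
)

// probe GETs url once per second with the given client and logs transitions
// between responding and not responding, roughly how the
// disruption/... connection/new and connection/reused intervals read.
func probe(name, url string, client *http.Client) {
	healthy := true
	for range time.Tick(time.Second) { // runs for the life of the process
		resp, err := client.Get(url)
		if err != nil {
			if healthy {
				log.Printf("%s stopped responding to GET requests: %v", name, err)
				healthy = false
			}
			continue
		}
		resp.Body.Close()
		if !healthy {
			log.Printf("%s started responding to GET requests", name)
			healthy = true
		}
	}
}

func main() {
	// Placeholder endpoint; a real probe would also need the cluster's TLS config.
	url := "https://api.example.invalid:6443/api/v1/namespaces/default"

	// connection/new: keep-alives disabled, so every GET opens a fresh TCP+TLS connection.
	newConn := &http.Client{Transport: &http.Transport{
		DisableKeepAlives:     true,
		ResponseHeaderTimeout: 5 * time.Second,
	}}
	// connection/reused: default keep-alive behaviour reuses an established connection.
	reused := &http.Client{Transport: &http.Transport{
		ResponseHeaderTimeout: 5 * time.Second,
	}}

	go probe("kube-api connection/new", url, newConn)
	probe("kube-api connection/reused", url, reused)
}

Disabling keep-alives is what makes the first client exercise the new-connection path on every request, while the second keeps a pooled connection open, which is why the two signals can fail independently during an API server rollout.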
Found in 30.30% of runs (37.74% of failures) across 66 total runs and 1 job (80.30% failed)