Job:
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-azure-upgrade (all) - 21 runs, 57% failed, 92% of failures match = 52% impact
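
The impact figure in the header is simply the failure rate times the match rate. A quick Python check of that arithmetic (assuming the search tool rounds to the nearest whole percent):

    runs = 21
    failure_rate = 0.57   # 57% of the 21 runs failed
    match_rate = 0.92     # 92% of those failures match this symptom
    print(f"{failure_rate * match_rate:.0%} impact")  # -> 52% impact
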
#1598363344396357632 build-log.txt.gz (3 hours ago)
Dec 01 18:05:05.000 I ns/openshift-etcd pod/installer-7-ci-op-5933c5fk-253f3-vqhrm-master-1 reason/StaticPodInstallerCompleted Successfully installed revision 7
Dec 01 18:05:06.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5cc9dc7d9ee0e0ec9e5c2ba896dcdebfd864c1b079415a6af507c85cc9719ad6,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (17 times)
Dec 01 18:05:06.628 I ns/openshift-etcd pod/installer-7-ci-op-5933c5fk-253f3-vqhrm-master-1 node/ci-op-5933c5fk-253f3-vqhrm-master-1 container/installer reason/ContainerExit code/0 cause/Completed
Dec 01 18:05:08.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5cc9dc7d9ee0e0ec9e5c2ba896dcdebfd864c1b079415a6af507c85cc9719ad6,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (18 times)
Dec 01 18:05:08.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b5d8dd9bf-5rdpb node/ci-op-5933c5fk-253f3-vqhrm-master-1 reason/Unhealthy Readiness probe failed:
Dec 01 18:05:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-5933c5fk-253f3-vqhrm-master-0 node/ci-op-5933c5fk-253f3-vqhrm-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 18:05:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-5933c5fk-253f3-vqhrm-master-0 node/ci-op-5933c5fk-253f3-vqhrm-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 18:05:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-5933c5fk-253f3-vqhrm-master-0 node/ci-op-5933c5fk-253f3-vqhrm-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 18:05:13.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-5933c5fk-253f3-vqhrm-master-0 node/ci-op-5933c5fk-253f3-vqhrm-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Dec 01 18:05:13.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b5d8dd9bf-5rdpb node/ci-op-5933c5fk-253f3-vqhrm-master-1 reason/Unhealthy Readiness probe failed:  (2 times)
Dec 01 18:05:14.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-5933c5fk-253f3-vqhrm-master-0 node/ci-op-5933c5fk-253f3-vqhrm-master-0 reason/ProbeError Readiness probe error: Get "https://10.0.0.6:6443/healthz": dial tcp 10.0.0.6:6443: connect: connection refused\nbody: \n
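
The kube-apiserver lines above trace its fixed graceful-shutdown sequence, and the guard pod's "connection refused" probe error at 18:05:14 is expected once the listener stops. A minimal sketch that checks the reason/ events appear in that order (event names taken from the log; this is illustrative, not the CI monitor's actual code):

    # Expected shutdown order, per the reason/ fields logged above.
    EXPECTED = [
        "AfterShutdownDelayDuration",              # 1m10s shutdown delay elapsed
        "HTTPServerStoppedListening",              # listener closed; probes now fail
        "InFlightRequestsDrained",                 # non-long-running requests done
        "TerminationGracefulTerminationFinished",  # all pending requests processed
    ]

    def in_order(reasons):
        """True if the shutdown reasons appear in the expected relative order."""
        idx = [EXPECTED.index(r) for r in reasons if r in EXPECTED]
        return idx == sorted(idx)

    assert in_order(["AfterShutdownDelayDuration", "HTTPServerStoppedListening",
                     "InFlightRequestsDrained",
                     "TerminationGracefulTerminationFinished"])
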
#1598363344396357632 build-log.txt.gz (3 hours ago)
Dec 01 18:08:17.427 - 59s   I alert/KubeContainerWaiting ns/openshift-etcd pod/etcd-ci-op-5933c5fk-253f3-vqhrm-master-0 container/etcd-health-monitor ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="etcd-health-monitor", namespace="openshift-etcd", pod="etcd-ci-op-5933c5fk-253f3-vqhrm-master-0", prometheus="openshift-monitoring/k8s", severity="warning"}
Dec 01 18:08:17.427 - 59s   I alert/KubeContainerWaiting ns/openshift-etcd pod/etcd-ci-op-5933c5fk-253f3-vqhrm-master-0 container/etcd-metrics ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="etcd-metrics", namespace="openshift-etcd", pod="etcd-ci-op-5933c5fk-253f3-vqhrm-master-0", prometheus="openshift-monitoring/k8s", severity="warning"}
Dec 01 18:08:17.427 - 59s   I alert/KubeContainerWaiting ns/openshift-etcd pod/etcd-ci-op-5933c5fk-253f3-vqhrm-master-0 container/etcd-readyz ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="etcd-readyz", namespace="openshift-etcd", pod="etcd-ci-op-5933c5fk-253f3-vqhrm-master-0", prometheus="openshift-monitoring/k8s", severity="warning"}
Dec 01 18:08:17.427 - 59s   I alert/KubeContainerWaiting ns/openshift-etcd pod/etcd-ci-op-5933c5fk-253f3-vqhrm-master-0 container/etcdctl ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="etcdctl", namespace="openshift-etcd", pod="etcd-ci-op-5933c5fk-253f3-vqhrm-master-0", prometheus="openshift-monitoring/k8s", severity="warning"}
Dec 01 18:08:17.427 - 59s   I alert/KubePodNotReady ns/openshift-etcd pod/etcd-ci-op-5933c5fk-253f3-vqhrm-master-0 ALERTS{alertname="KubePodNotReady", alertstate="pending", namespace="openshift-etcd", pod="etcd-ci-op-5933c5fk-253f3-vqhrm-master-0", prometheus="openshift-monitoring/k8s", severity="warning"}
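
The pending alerts above can be pulled straight from the cluster's Prometheus HTTP API. A sketch, where the route URL and bearer token are placeholders, not values from this job:

    import requests

    PROM = "https://<prometheus-k8s-route>"   # placeholder route
    QUERY = ('ALERTS{alertname=~"KubeContainerWaiting|KubePodNotReady",'
             'namespace="openshift-etcd",alertstate="pending"}')
    resp = requests.get(f"{PROM}/api/v1/query", params={"query": QUERY},
                        headers={"Authorization": "Bearer <token>"},
                        verify=False)  # skip CA setup for brevity
    resp.raise_for_status()
    for result in resp.json()["data"]["result"]:
        m = result["metric"]
        print(m["alertname"], m["pod"], m.get("container", "-"))
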
Dec 01 18:08:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-5933c5fk-253f3-vqhrm-master-2 node/ci-op-5933c5fk-253f3-vqhrm-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 18:08:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-5933c5fk-253f3-vqhrm-master-2 node/ci-op-5933c5fk-253f3-vqhrm-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 18:08:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-5933c5fk-253f3-vqhrm-master-2 node/ci-op-5933c5fk-253f3-vqhrm-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 18:08:26.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-5933c5fk-253f3-vqhrm-master-2 node/ci-op-5933c5fk-253f3-vqhrm-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
#1598363344396357632 build-log.txt.gz (3 hours ago)
Dec 01 18:10:27.623 I ns/openshift-kube-apiserver pod/installer-8-ci-op-5933c5fk-253f3-vqhrm-master-1 node/ci-op-5933c5fk-253f3-vqhrm-master-1 container/installer reason/ContainerExit code/0 cause/Completed
Dec 01 18:10:30.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (37 times)
Dec 01 18:11:06.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (38 times)
Dec 01 18:11:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-5933c5fk-253f3-vqhrm-master-1 node/ci-op-5933c5fk-253f3-vqhrm-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 18:11:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-5933c5fk-253f3-vqhrm-master-1 node/ci-op-5933c5fk-253f3-vqhrm-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 18:11:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-5933c5fk-253f3-vqhrm-master-1 node/ci-op-5933c5fk-253f3-vqhrm-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 18:11:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-5933c5fk-253f3-vqhrm-master-1 node/ci-op-5933c5fk-253f3-vqhrm-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
#1598363344396357632 build-log.txt.gz (3 hours ago)
Dec 01 18:27:22.913 I ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5933c5fk-253f3-vqhrm-worker-centralus2-mvjrc container/prometheus reason/ContainerExit code/0 cause/Completed
Dec 01 18:27:22.913 I ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5933c5fk-253f3-vqhrm-worker-centralus2-mvjrc container/thanos-sidecar reason/ContainerExit code/0 cause/Completed
Dec 01 18:27:22.913 I ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5933c5fk-253f3-vqhrm-worker-centralus2-mvjrc container/kube-rbac-proxy-thanos reason/ContainerExit code/0 cause/Completed
Dec 01 18:27:22.913 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5933c5fk-253f3-vqhrm-worker-centralus2-mvjrc container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2022/12/01 17:49:35 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2022/12/01 17:49:35 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2022/12/01 17:49:35 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2022/12/01 17:49:35 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2022/12/01 17:49:35 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2022/12/01 17:49:35 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2022/12/01 17:49:35 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2022/12/01 17:49:35 http.go:107: HTTPS: listening on [::]:9091\nI1201 17:49:35.330883       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2022/12/01 18:07:09 server.go:3120: http: TLS handshake error from 10.128.2.3:53106: read tcp 10.131.0.15:9091->10.128.2.3:53106: read: connection reset by peer\n
Dec 01 18:27:22.913 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5933c5fk-253f3-vqhrm-worker-centralus2-mvjrc container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2022-12-01T17:49:34.505366041Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=ce79c89)"\nlevel=info ts=2022-12-01T17:49:34.505709644Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221128-15:39:13)"\nlevel=info ts=2022-12-01T17:49:34.506065447Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2022-12-01T17:49:35.299005975Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2022-12-01T17:49:35.299102176Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2022-12-01T17:51:01.452979362Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2022-12-01T18:03:18.358373927Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\n
Dec 01 18:27:23.000 I ns/openshift-apiserver pod/apiserver-869b887fcf-cxxtr node/apiserver-869b887fcf-cxxtr reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Dec 01 18:27:24.000 W ns/openshift-apiserver pod/apiserver-869b887fcf-cxxtr node/ci-op-5933c5fk-253f3-vqhrm-master-2 reason/ProbeError Readiness probe error: Get "https://10.128.0.66:8443/readyz": dial tcp 10.128.0.66:8443: connect: connection refused\nbody: \n
Dec 01 18:27:24.000 I ns/openshift-apiserver pod/apiserver-869b887fcf-cxxtr node/apiserver-869b887fcf-cxxtr reason/TerminationStoppedServing Server has stopped listening
Dec 01 18:27:24.000 W ns/openshift-apiserver pod/apiserver-869b887fcf-cxxtr node/ci-op-5933c5fk-253f3-vqhrm-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.128.0.66:8443/readyz": dial tcp 10.128.0.66:8443: connect: connection refused
Dec 01 18:27:24.897 I ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5933c5fk-253f3-vqhrm-worker-centralus2-mvjrc reason/Deleted
Dec 01 18:27:24.935 I ns/openshift-marketplace pod/certified-operators-m26fg node/ci-op-5933c5fk-253f3-vqhrm-worker-centralus1-xqsgc container/registry-server reason/Ready
#1598318248619675648 build-log.txt.gz (6 hours ago)
Dec 01 15:05:52.288 I ns/openshift-etcd pod/installer-8-ci-op-v2ws5rx0-253f3-xst9p-master-1 node/ci-op-v2ws5rx0-253f3-xst9p-master-1 container/installer reason/ContainerExit code/0 cause/Completed
Dec 01 15:05:54.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5cc9dc7d9ee0e0ec9e5c2ba896dcdebfd864c1b079415a6af507c85cc9719ad6,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (17 times)
Dec 01 15:05:54.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b5d8dd9bf-7jpv9 node/ci-op-v2ws5rx0-253f3-xst9p-master-1 reason/Unhealthy Readiness probe failed:
Dec 01 15:05:56.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5cc9dc7d9ee0e0ec9e5c2ba896dcdebfd864c1b079415a6af507c85cc9719ad6,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (18 times)
Dec 01 15:05:59.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b5d8dd9bf-7jpv9 node/ci-op-v2ws5rx0-253f3-xst9p-master-1 reason/Unhealthy Readiness probe failed:  (2 times)
Dec 01 15:06:02.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-v2ws5rx0-253f3-xst9p-master-0 node/ci-op-v2ws5rx0-253f3-xst9p-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 15:06:02.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-v2ws5rx0-253f3-xst9p-master-0 node/ci-op-v2ws5rx0-253f3-xst9p-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 15:06:02.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-v2ws5rx0-253f3-xst9p-master-0 node/ci-op-v2ws5rx0-253f3-xst9p-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 15:06:04.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5cc9dc7d9ee0e0ec9e5c2ba896dcdebfd864c1b079415a6af507c85cc9719ad6,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (19 times)
Dec 01 15:06:04.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-v2ws5rx0-253f3-xst9p-master-0 node/ci-op-v2ws5rx0-253f3-xst9p-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Dec 01 15:06:04.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b5d8dd9bf-7jpv9 node/ci-op-v2ws5rx0-253f3-xst9p-master-1 reason/Unhealthy Readiness probe failed:  (3 times)
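
The empty-bodied quorum-guard probe failures track the etcd static-pod rollout on master-1. A sketch that reports guard readiness with the kubernetes Python client; the app=etcd-quorum-guard label selector is an assumption, not something confirmed by this log:

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod("openshift-etcd",
                                  label_selector="app=etcd-quorum-guard")  # assumed label
    for p in pods.items:
        ready = all(cs.ready for cs in (p.status.container_statuses or []))
        print(p.metadata.name, p.spec.node_name, "Ready" if ready else "NotReady")
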
#1598318248619675648 build-log.txt.gz (6 hours ago)
Dec 01 15:08:55.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (17 times)
Dec 01 15:08:58.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/NodeCurrentRevisionChanged Updated node "ci-op-v2ws5rx0-253f3-xst9p-master-0" from revision 7 to 8 because static pod is ready
Dec 01 15:08:58.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 7; 2 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available"
Dec 01 15:08:58.082 W clusteroperator/etcd condition/Progressing status/False reason/AsExpected changed: NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found
Dec 01 15:09:03.346 I ns/openshift-etcd pod/installer-3-ci-op-v2ws5rx0-253f3-xst9p-master-2 node/ci-op-v2ws5rx0-253f3-xst9p-master-2 reason/DeletedAfterCompletion
Dec 01 15:09:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-v2ws5rx0-253f3-xst9p-master-2 node/ci-op-v2ws5rx0-253f3-xst9p-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 15:09:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-v2ws5rx0-253f3-xst9p-master-2 node/ci-op-v2ws5rx0-253f3-xst9p-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 15:09:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-v2ws5rx0-253f3-xst9p-master-2 node/ci-op-v2ws5rx0-253f3-xst9p-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 15:09:10.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-v2ws5rx0-253f3-xst9p-master-2 node/ci-op-v2ws5rx0-253f3-xst9p-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
Dec 01 15:09:12.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-v2ws5rx0-253f3-xst9p-master-2 node/ci-op-v2ws5rx0-253f3-xst9p-master-2 reason/ProbeError Readiness probe error: Get "https://10.0.0.7:6443/healthz": dial tcp 10.0.0.7:6443: connect: connection refused\nbody: \n
Dec 01 15:09:12.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-v2ws5rx0-253f3-xst9p-master-2 node/ci-op-v2ws5rx0-253f3-xst9p-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.7:6443/healthz": dial tcp 10.0.0.7:6443: connect: connection refused
#1598318248619675648 build-log.txt.gz (6 hours ago)
Dec 01 15:11:08.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (32 times)
Dec 01 15:11:11.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (33 times)
Dec 01 15:11:33.000 I ns/openshift-kube-apiserver lease/cert-regeneration-controller-lock reason/LeaderElection ci-op-v2ws5rx0-253f3-xst9p-master-2_67c92344-982c-429c-963a-0669a8d9a62a became leader
Dec 01 15:11:33.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-v2ws5rx0-253f3-xst9p-master-2_67c92344-982c-429c-963a-0669a8d9a62a became leader
Dec 01 15:11:55.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (34 times)
Dec 01 15:12:14.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-v2ws5rx0-253f3-xst9p-master-1 node/ci-op-v2ws5rx0-253f3-xst9p-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 15:12:14.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-v2ws5rx0-253f3-xst9p-master-1 node/ci-op-v2ws5rx0-253f3-xst9p-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 15:12:14.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-v2ws5rx0-253f3-xst9p-master-1 node/ci-op-v2ws5rx0-253f3-xst9p-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 15:12:16.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-v2ws5rx0-253f3-xst9p-master-1 node/ci-op-v2ws5rx0-253f3-xst9p-master-1 reason/ProbeError Readiness probe error: Get "https://10.0.0.6:6443/healthz": dial tcp 10.0.0.6:6443: connect: connection refused\nbody: \n
Dec 01 15:12:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-v2ws5rx0-253f3-xst9p-master-1 node/ci-op-v2ws5rx0-253f3-xst9p-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
Dec 01 15:12:16.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-v2ws5rx0-253f3-xst9p-master-1 node/ci-op-v2ws5rx0-253f3-xst9p-master-1 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.6:6443/healthz": dial tcp 10.0.0.6:6443: connect: connection refused
#1598318248619675648 build-log.txt.gz (6 hours ago)
Dec 01 15:27:29.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-79b69ff4b8 to 1
Dec 01 15:27:29.000 I ns/openshift-oauth-apiserver replicaset/apiserver-79b69ff4b8 reason/SuccessfulCreate Created pod: apiserver-79b69ff4b8-tkw5j
Dec 01 15:27:29.000 I ns/openshift-authentication replicaset/oauth-openshift-7fd9b6757 reason/SuccessfulCreate Created pod: oauth-openshift-7fd9b6757-44czv
Dec 01 15:27:29.000 I ns/openshift-apiserver replicaset/apiserver-56f9f7bc54 reason/SuccessfulDelete Deleted pod: apiserver-56f9f7bc54-wkb65
Dec 01 15:27:29.000 I ns/openshift-oauth-apiserver replicaset/apiserver-d485bb676 reason/SuccessfulDelete Deleted pod: apiserver-d485bb676-5vpgg
Dec 01 15:27:29.000 I ns/default namespace/kube-system node/apiserver-d485bb676-5vpgg reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Dec 01 15:27:29.000 I ns/default namespace/kube-system node/apiserver-d485bb676-5vpgg reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 15:27:29.000 I ns/default namespace/kube-system node/apiserver-d485bb676-5vpgg reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Dec 01 15:27:29.000 I ns/default namespace/kube-system node/apiserver-d485bb676-5vpgg reason/TerminationStoppedServing Server has stopped listening
Dec 01 15:27:29.000 W ns/openshift-apiserver pod/apiserver-57b9dbb67d-652jg node/ci-op-v2ws5rx0-253f3-xst9p-master-2 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (2 times)
Dec 01 15:27:29.219 W ns/kube-system openshifttest/oauth-api reason/DisruptionBegan disruption/oauth-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-v2ws5rx0-253f3.ci.azure.devcluster.openshift.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: timeout awaiting response headers
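
The DisruptionBegan line comes from a monitor that opens a fresh connection for every request and records when GETs stop succeeding. A rough equivalent of that new-connection probe (URL copied from the log; the bearer token is a placeholder):

    import time
    import requests

    URL = ("https://api.ci-op-v2ws5rx0-253f3.ci.azure.devcluster.openshift.com:6443"
           "/apis/oauth.openshift.io/v1/oauthclients")
    down_since = None
    while True:
        try:
            # No Session: each call uses a new TCP connection, matching "connection/new".
            requests.get(URL, timeout=5, verify=False,
                         headers={"Authorization": "Bearer <token>"}).raise_for_status()
            if down_since is not None:
                print(f"disruption ended after {time.time() - down_since:.0f}s")
                down_since = None
        except requests.RequestException:
            if down_since is None:
                down_since = time.time()
                print("disruption began: new connections failing")
        time.sleep(1)
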
#1598318248619675648 build-log.txt.gz (6 hours ago)
Dec 01 15:27:30.000 I ns/openshift-cluster-machine-approver lease/cluster-machine-approver-leader reason/LeaderElection ci-op-v2ws5rx0-253f3-xst9p-master-1_45d99bab-e1f5-4a85-9ead-7c22154e8e83 became leader
Dec 01 15:27:30.000 I ns/openshift-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-7d9cd4dc59 to 1
Dec 01 15:27:30.000 I ns/openshift-apiserver replicaset/apiserver-7d9cd4dc59 reason/SuccessfulCreate Created pod: apiserver-7d9cd4dc59-pb7t2
Dec 01 15:27:30.053 W ns/openshift-apiserver pod/apiserver-7d9cd4dc59-pb7t2 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 15:27:30.060 I ns/openshift-apiserver pod/apiserver-7d9cd4dc59-pb7t2 node/ reason/Created
Dec 01 15:27:30.374 E ns/openshift-console-operator pod/console-operator-7d487f9dd5-cb4cr node/ci-op-v2ws5rx0-253f3-xst9p-master-1 container/console-operator reason/ContainerExit code/1 cause/Error ady, but keeping serving\nI1201 15:27:20.589013       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI1201 15:27:20.589021       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-7d487f9dd5-cb4cr", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI1201 15:27:20.589044       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI1201 15:27:20.589047       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI1201 15:27:20.589060       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI1201 15:27:20.589066       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI1201 15:27:20.589073       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-7d487f9dd5-cb4cr", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI1201 15:27:20.589100       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI1201 15:27:20.589109       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI1201 15:27:20.589122       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI1201 15:27:20.589134       1 base_controller.go:167] Shutting down ManagementStateController ...\nI1201 15:27:20.589148       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI1201 15:27:20.589159       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI1201 15:27:20.589170       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nW1201 15:27:20.589244       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Dec 01 15:27:30.400 E ns/openshift-kube-storage-version-migrator pod/migrator-849469df5-wzkr8 node/ci-op-v2ws5rx0-253f3-xst9p-master-1 container/migrator reason/ContainerExit code/2 cause/Error I1201 14:36:58.058743       1 migrator.go:18] FLAG: --add_dir_header="false"\nI1201 14:36:58.059014       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI1201 14:36:58.059019       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI1201 14:36:58.059023       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI1201 14:36:58.059028       1 migrator.go:18] FLAG: --kubeconfig=""\nI1201 14:36:58.059032       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI1201 14:36:58.059037       1 migrator.go:18] FLAG: --log_dir=""\nI1201 14:36:58.059041       1 migrator.go:18] FLAG: --log_file=""\nI1201 14:36:58.059044       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI1201 14:36:58.059047       1 migrator.go:18] FLAG: --logtostderr="true"\nI1201 14:36:58.059049       1 migrator.go:18] FLAG: --one_output="false"\nI1201 14:36:58.059052       1 migrator.go:18] FLAG: --skip_headers="false"\nI1201 14:36:58.059055       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI1201 14:36:58.059058       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI1201 14:36:58.059061       1 migrator.go:18] FLAG: --v="2"\nI1201 14:36:58.059064       1 migrator.go:18] FLAG: --vmodule=""\nI1201 14:36:58.061185       1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI1201 14:37:13.181830       1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI1201 14:37:13.379430       1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI1201 14:37:14.390822       1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI1201 14:37:14.463731       1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\n
Dec 01 15:27:30.766 I ns/openshift-kube-storage-version-migrator pod/migrator-849469df5-wzkr8 node/ci-op-v2ws5rx0-253f3-xst9p-master-1 reason/Deleted
Dec 01 15:27:30.839 I ns/openshift-console-operator pod/console-operator-7d487f9dd5-cb4cr node/ci-op-v2ws5rx0-253f3-xst9p-master-1 reason/Deleted
Dec 01 15:27:30.983 I clusteroperator/machine-approver versions: operator 4.9.53 -> 4.10.0-0.ci-2022-12-01-140751
Dec 01 15:27:31.000 I ns/openshift-machine-api pod/cluster-autoscaler-operator-55f9d7df55-4vshd node/ci-op-v2ws5rx0-253f3-xst9p-master-1 container/cluster-autoscaler-operator reason/Created
#1598318248619675648 build-log.txt.gz (6 hours ago)
Dec 01 15:27:35.000 I ns/openshift-monitoring pod/prometheus-operator-ff5c5966c-jznzk node/ci-op-v2ws5rx0-253f3-xst9p-master-2 container/prometheus-operator reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:8de7d03a4e6e7f9bc69708aeb504f4c225db842de02927567de067f49cd947e0
Dec 01 15:27:35.000 I ns/openshift-monitoring pod/prometheus-operator-ff5c5966c-jznzk reason/AddedInterface Add eth0 [10.128.0.73/23] from openshift-sdn
Dec 01 15:27:35.000 I ns/openshift-operator-lifecycle-manager pod/catalog-operator-8d9f8f9d9-d9hll reason/AddedInterface Add eth0 [10.128.0.74/23] from openshift-sdn
Dec 01 15:27:35.000 I ns/openshift-operator-lifecycle-manager pod/package-server-manager-5f68fc5b89-b5fx5 reason/AddedInterface Add eth0 [10.130.0.93/23] from openshift-sdn
Dec 01 15:27:35.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation"
Dec 01 15:27:35.000 I ns/openshift-apiserver pod/apiserver-57b9dbb67d-652jg node/apiserver-57b9dbb67d-652jg reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Dec 01 15:27:35.000 I ns/openshift-apiserver pod/apiserver-57b9dbb67d-652jg node/apiserver-57b9dbb67d-652jg reason/TerminationStoppedServing Server has stopped listening
Dec 01 15:27:35.624 - 299s  I alert/TargetDown ns/openshift-cluster-storage-operator ALERTS{alertname="TargetDown", alertstate="pending", job="cluster-storage-operator-metrics", namespace="openshift-cluster-storage-operator", prometheus="openshift-monitoring/k8s", service="cluster-storage-operator-metrics", severity="warning"}
Dec 01 15:27:35.624 - 299s  I alert/TargetDown ns/openshift-cluster-machine-approver ALERTS{alertname="TargetDown", alertstate="pending", job="machine-approver", namespace="openshift-cluster-machine-approver", prometheus="openshift-monitoring/k8s", service="machine-approver", severity="warning"}
Dec 01 15:27:35.624 - 299s  I alert/TargetDown ns/openshift-console-operator ALERTS{alertname="TargetDown", alertstate="pending", job="metrics", namespace="openshift-console-operator", prometheus="openshift-monitoring/k8s", service="metrics", severity="warning"}
Dec 01 15:27:35.624 - 299s  I alert/TargetDown ns/openshift-controller-manager-operator ALERTS{alertname="TargetDown", alertstate="pending", job="metrics", namespace="openshift-controller-manager-operator", prometheus="openshift-monitoring/k8s", service="metrics", severity="warning"}
#1598200073911537664 build-log.txt.gz (14 hours ago)
Dec 01 07:07:51.175 I ns/openshift-etcd pod/installer-7-ci-op-vycrwr8p-253f3-d8rb8-master-1 node/ci-op-vycrwr8p-253f3-d8rb8-master-1 container/installer reason/ContainerExit code/0 cause/Completed
Dec 01 07:07:53.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (16 times)
Dec 01 07:07:54.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (17 times)
Dec 01 07:07:55.000 W ns/openshift-etcd pod/etcd-quorum-guard-65bdd9d758-g2jnl node/ci-op-vycrwr8p-253f3-d8rb8-master-1 reason/Unhealthy Readiness probe failed:
Dec 01 07:08:00.000 W ns/openshift-etcd pod/etcd-quorum-guard-65bdd9d758-g2jnl node/ci-op-vycrwr8p-253f3-d8rb8-master-1 reason/Unhealthy Readiness probe failed:  (2 times)
Dec 01 07:08:05.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-vycrwr8p-253f3-d8rb8-master-0 node/ci-op-vycrwr8p-253f3-d8rb8-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 07:08:05.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-vycrwr8p-253f3-d8rb8-master-0 node/ci-op-vycrwr8p-253f3-d8rb8-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 07:08:05.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-vycrwr8p-253f3-d8rb8-master-0 node/ci-op-vycrwr8p-253f3-d8rb8-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 07:08:05.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (18 times)
Dec 01 07:08:05.000 W ns/openshift-etcd pod/etcd-quorum-guard-65bdd9d758-g2jnl node/ci-op-vycrwr8p-253f3-d8rb8-master-1 reason/Unhealthy Readiness probe failed:  (3 times)
Dec 01 07:08:05.000 W ns/openshift-etcd pod/etcd-quorum-guard-65bdd9d758-g2jnl node/ci-op-vycrwr8p-253f3-d8rb8-master-1 reason/Unhealthy Readiness probe failed:  (4 times)
#1598200073911537664 build-log.txt.gz (14 hours ago)
Dec 01 07:10:57.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/NodeCurrentRevisionChanged Updated node "ci-op-vycrwr8p-253f3-d8rb8-master-2" from revision 6 to 7 because static pod is ready
Dec 01 07:10:57.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available"
Dec 01 07:10:57.106 W clusteroperator/etcd condition/Progressing status/False reason/AsExpected changed: NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found
Dec 01 07:11:03.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (18 times)
Dec 01 07:11:04.544 I ns/openshift-etcd pod/installer-2-ci-op-vycrwr8p-253f3-d8rb8-master-0 node/ci-op-vycrwr8p-253f3-d8rb8-master-0 reason/DeletedAfterCompletion
Dec 01 07:11:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-vycrwr8p-253f3-d8rb8-master-1 node/ci-op-vycrwr8p-253f3-d8rb8-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 07:11:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-vycrwr8p-253f3-d8rb8-master-1 node/ci-op-vycrwr8p-253f3-d8rb8-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 07:11:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-vycrwr8p-253f3-d8rb8-master-1 node/ci-op-vycrwr8p-253f3-d8rb8-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 07:11:10.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-vycrwr8p-253f3-d8rb8-master-1 node/ci-op-vycrwr8p-253f3-d8rb8-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
Dec 01 07:11:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-vycrwr8p-253f3-d8rb8-master-1 node/ci-op-vycrwr8p-253f3-d8rb8-master-1 container/kube-apiserver reason/Killing
#1598200073911537664 build-log.txt.gz (14 hours ago)
Dec 01 07:12:57.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (33 times)
Dec 01 07:13:06.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (34 times)
Dec 01 07:13:12.044 - 29s   I alert/etcdGRPCRequestsSlow node/10.0.0.8:9979 ns/openshift-etcd pod/etcd-ci-op-vycrwr8p-253f3-d8rb8-master-1 ALERTS{alertname="etcdGRPCRequestsSlow", alertstate="pending", endpoint="etcd-metrics", grpc_method="MemberList", grpc_service="etcdserverpb.Cluster", instance="10.0.0.8:9979", job="etcd", namespace="openshift-etcd", pod="etcd-ci-op-vycrwr8p-253f3-d8rb8-master-1", prometheus="openshift-monitoring/k8s", service="etcd", severity="critical"}
Dec 01 07:13:49.044 - 59s   I alert/KubeClientErrors node/10.0.0.8:6443 ns/default ALERTS{alertname="KubeClientErrors", alertstate="pending", instance="10.0.0.8:6443", job="apiserver", namespace="default", prometheus="openshift-monitoring/k8s", severity="warning"}
Dec 01 07:14:03.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-vycrwr8p-253f3-d8rb8-master-2 node/ci-op-vycrwr8p-253f3-d8rb8-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 07:14:03.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-vycrwr8p-253f3-d8rb8-master-2 node/ci-op-vycrwr8p-253f3-d8rb8-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 07:14:03.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-vycrwr8p-253f3-d8rb8-master-2 node/ci-op-vycrwr8p-253f3-d8rb8-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 07:14:03.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (35 times)
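
The MultipleVersions event keeps repeating while images from both payloads coexist. The operator's real check compares operand version status, but a rough approximation is simply to list the distinct images running in the namespace:

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()
    images = {c.image
              for p in v1.list_namespaced_pod("openshift-kube-apiserver").items
              for c in p.spec.containers}
    if len(images) > 1:
        # Mirrors the event text: more than one image means the rollout
        # has not yet converged on the new payload.
        print("multiple versions found, probably in transition:",
              ",".join(sorted(images)))
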
#1598200073911537664 build-log.txt.gz (14 hours ago)
Dec 01 07:27:45.000 I ns/openshift-monitoring statefulset/alertmanager-main reason/SuccessfulCreate create Pod alertmanager-main-2 in StatefulSet alertmanager-main successful
Dec 01 07:27:45.000 I ns/openshift-monitoring statefulset/prometheus-k8s reason/SuccessfulCreate create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful
Dec 01 07:27:45.000 I ns/openshift-monitoring statefulset/prometheus-k8s reason/SuccessfulCreate create Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful
Dec 01 07:27:45.000 I ns/openshift-image-registry replicaset/cluster-image-registry-operator-659d6df649 reason/SuccessfulDelete Deleted pod: cluster-image-registry-operator-659d6df649-w7r28
Dec 01 07:27:45.000 I ns/openshift-cluster-storage-operator replicaset/cluster-storage-operator-68b47dd47b reason/SuccessfulDelete Deleted pod: cluster-storage-operator-68b47dd47b-8k92g
Dec 01 07:27:45.000 I ns/openshift-apiserver pod/apiserver-795d6f5d4b-jpddr node/apiserver-795d6f5d4b-jpddr reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Dec 01 07:27:45.000 I ns/openshift-apiserver pod/apiserver-795d6f5d4b-jpddr node/apiserver-795d6f5d4b-jpddr reason/TerminationStoppedServing Server has stopped listening
Dec 01 07:27:45.004 I clusteroperator/operator-lifecycle-manager-packageserver versions: operator 4.9.52 -> 4.10.0-0.ci-2022-12-01-001518
Dec 01 07:27:45.072 I clusteroperator/operator-lifecycle-manager versions: operator 4.9.52 -> 4.10.0-0.ci-2022-12-01-001518, operator-lifecycle-manager 0.18.3 -> 0.19.0
Dec 01 07:27:45.123 I ns/openshift-monitoring pod/kube-state-metrics-7d47dcbc58-xqqd5 node/ reason/Created
Dec 01 07:27:45.152 I ns/openshift-monitoring pod/kube-state-metrics-7d47dcbc58-xqqd5 node/ci-op-vycrwr8p-253f3-d8rb8-worker-eastus2-dlbrc reason/Scheduled
#1598200073911537664 build-log.txt.gz (14 hours ago)
Dec 01 07:28:09.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3."
Dec 01 07:28:09.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-ccdb7dc95 to 2
Dec 01 07:28:09.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-578c77fcb9 to 1
Dec 01 07:28:09.000 I ns/openshift-oauth-apiserver replicaset/apiserver-578c77fcb9 reason/SuccessfulCreate Created pod: apiserver-578c77fcb9-65h8q
Dec 01 07:28:09.000 I ns/openshift-oauth-apiserver replicaset/apiserver-ccdb7dc95 reason/SuccessfulDelete Deleted pod: apiserver-ccdb7dc95-5qnww
Dec 01 07:28:09.000 I ns/default namespace/kube-system node/apiserver-ccdb7dc95-5qnww reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Dec 01 07:28:09.000 I ns/default namespace/kube-system node/apiserver-ccdb7dc95-5qnww reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 07:28:09.000 I ns/default namespace/kube-system node/apiserver-ccdb7dc95-5qnww reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Dec 01 07:28:09.000 I ns/default namespace/kube-system node/apiserver-ccdb7dc95-5qnww reason/TerminationStoppedServing Server has stopped listening
Dec 01 07:28:09.416 I ns/openshift-oauth-apiserver pod/apiserver-ccdb7dc95-5qnww node/ci-op-vycrwr8p-253f3-d8rb8-master-2 reason/GracefulDelete duration/70s
Dec 01 07:28:09.471 W ns/openshift-oauth-apiserver pod/apiserver-578c77fcb9-65h8q reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
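
The FailedScheduling message is the intersection of two constraints: the replacement apiserver needs a control-plane node (node selector) but may not share a node with a surviving apiserver replica (anti-affinity), so all six nodes are excluded until an old pod finishes terminating. A toy model of that arithmetic (Python 3.9+):

    nodes = {f"master-{i}": {"master": True,  "has_replica": True}  for i in range(3)}
    nodes |= {f"worker-{i}": {"master": False, "has_replica": False} for i in range(3)}
    feasible = [n for n, v in nodes.items()
                if v["master"] and not v["has_replica"]]
    print(f"{len(feasible)}/{len(nodes)} nodes are available")  # -> 0/6

Once the GracefulDelete above completes, one master frees up and the pod schedules.
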
#1598110647457943552 build-log.txt.gz (20 hours ago)
Dec 01 01:24:15.000 W ns/openshift-etcd pod/etcd-quorum-guard-65bdd9d758-h8vww node/ci-op-ylpfb4sq-253f3-dlw7t-master-1 reason/Unhealthy Readiness probe failed:  (3 times)
Dec 01 01:24:15.986 W ns/openshift-etcd pod/etcd-quorum-guard-65bdd9d758-h8vww node/ci-op-ylpfb4sq-253f3-dlw7t-master-1 container/guard reason/NotReady
Dec 01 01:24:16.000 I ns/openshift-marketplace pod/community-operators-pdf6c node/ci-op-ylpfb4sq-253f3-dlw7t-worker-eastus1-gnd7w container/registry-server reason/Killing
Dec 01 01:24:16.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (15 times)
Dec 01 01:24:16.000 W ns/openshift-etcd pod/etcd-quorum-guard-65bdd9d758-h8vww node/ci-op-ylpfb4sq-253f3-dlw7t-master-1 reason/Unhealthy Readiness probe failed:  (4 times)
Dec 01 01:24:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ylpfb4sq-253f3-dlw7t-master-0 node/ci-op-ylpfb4sq-253f3-dlw7t-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 01:24:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ylpfb4sq-253f3-dlw7t-master-0 node/ci-op-ylpfb4sq-253f3-dlw7t-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 01:24:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ylpfb4sq-253f3-dlw7t-master-0 node/ci-op-ylpfb4sq-253f3-dlw7t-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 01:24:17.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-ylpfb4sq-253f3-dlw7t-master-0 node/ci-op-ylpfb4sq-253f3-dlw7t-master-0 reason/ProbeError Readiness probe error: Get "https://10.0.0.7:6443/healthz": dial tcp 10.0.0.7:6443: connect: connection refused\nbody: \n
Dec 01 01:24:17.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-ylpfb4sq-253f3-dlw7t-master-0 node/ci-op-ylpfb4sq-253f3-dlw7t-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.7:6443/healthz": dial tcp 10.0.0.7:6443: connect: connection refused
Dec 01 01:24:17.462 I ns/openshift-marketplace pod/community-operators-pdf6c node/ci-op-ylpfb4sq-253f3-dlw7t-worker-eastus1-gnd7w container/registry-server reason/ContainerExit code/0 cause/Completed
#1598110647457943552 build-log.txt.gz (20 hours ago)
Dec 01 01:27:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (19 times)
Dec 01 01:27:11.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/NodeCurrentRevisionChanged Updated node "ci-op-ylpfb4sq-253f3-dlw7t-master-2" from revision 6 to 7 because static pod is ready
Dec 01 01:27:11.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available"
Dec 01 01:27:11.201 W clusteroperator/etcd condition/Progressing status/False reason/AsExpected changed: NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found
Dec 01 01:27:16.437 I ns/openshift-etcd pod/installer-2-ci-op-ylpfb4sq-253f3-dlw7t-master-0 node/ci-op-ylpfb4sq-253f3-dlw7t-master-0 reason/DeletedAfterCompletion
Dec 01 01:27:28.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ylpfb4sq-253f3-dlw7t-master-1 node/ci-op-ylpfb4sq-253f3-dlw7t-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 01:27:28.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ylpfb4sq-253f3-dlw7t-master-1 node/ci-op-ylpfb4sq-253f3-dlw7t-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 01:27:28.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ylpfb4sq-253f3-dlw7t-master-1 node/ci-op-ylpfb4sq-253f3-dlw7t-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 01:27:29.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-ylpfb4sq-253f3-dlw7t-master-1 node/ci-op-ylpfb4sq-253f3-dlw7t-master-1 reason/ProbeError Readiness probe error: Get "https://10.0.0.8:6443/healthz": dial tcp 10.0.0.8:6443: connect: connection refused\nbody: \n
Dec 01 01:27:29.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-ylpfb4sq-253f3-dlw7t-master-1 node/ci-op-ylpfb4sq-253f3-dlw7t-master-1 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.8:6443/healthz": dial tcp 10.0.0.8:6443: connect: connection refused
Dec 01 01:27:30.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ylpfb4sq-253f3-dlw7t-master-1 node/ci-op-ylpfb4sq-253f3-dlw7t-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
#1598110647457943552 build-log.txt.gz (20 hours ago)
Dec 01 01:30:03.533 I ns/openshift-operator-lifecycle-manager pod/collect-profiles-27830970--1-vjvlz node/ci-op-ylpfb4sq-253f3-dlw7t-worker-eastus1-gnd7w container/collect-profiles reason/Ready
Dec 01 01:30:09.562 I ns/openshift-operator-lifecycle-manager pod/collect-profiles-27830970--1-vjvlz node/ci-op-ylpfb4sq-253f3-dlw7t-worker-eastus1-gnd7w container/collect-profiles reason/ContainerExit code/0 cause/Completed
Dec 01 01:30:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (37 times)
Dec 01 01:30:11.000 I ns/openshift-operator-lifecycle-manager job/collect-profiles-27830970 reason/Completed Job completed
Dec 01 01:30:11.000 I ns/openshift-operator-lifecycle-manager cronjob/collect-profiles reason/SawCompletedJob Saw completed job: collect-profiles-27830970, status: Complete
Dec 01 01:30:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ylpfb4sq-253f3-dlw7t-master-2 node/ci-op-ylpfb4sq-253f3-dlw7t-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 01:30:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ylpfb4sq-253f3-dlw7t-master-2 node/ci-op-ylpfb4sq-253f3-dlw7t-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 01:30:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ylpfb4sq-253f3-dlw7t-master-2 node/ci-op-ylpfb4sq-253f3-dlw7t-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 01:30:30.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-ylpfb4sq-253f3-dlw7t-master-2 node/ci-op-ylpfb4sq-253f3-dlw7t-master-2 reason/ProbeError Readiness probe error: Get "https://10.0.0.6:6443/healthz": dial tcp 10.0.0.6:6443: connect: connection refused\nbody: \n
Dec 01 01:30:30.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-ylpfb4sq-253f3-dlw7t-master-2 node/ci-op-ylpfb4sq-253f3-dlw7t-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.6:6443/healthz": dial tcp 10.0.0.6:6443: connect: connection refused
Dec 01 01:30:31.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ylpfb4sq-253f3-dlw7t-master-2 node/ci-op-ylpfb4sq-253f3-dlw7t-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
#1598110647457943552 build-log.txt.gz (20 hours ago)
Dec 01 01:46:41.542 - 2559s I disruption/kube-api connection/reused disruption/kube-api connection/reused started responding to GET requests over reused connections
Dec 01 01:46:41.829 W ns/kube-system openshifttest/kube-api reason/DisruptionBegan disruption/kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-ylpfb4sq-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Dec 01 01:46:41.889 W ns/kube-system openshifttest/openshift-api reason/DisruptionBegan disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-ylpfb4sq-253f3.ci.azure.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": net/http: timeout awaiting response headers
Dec 01 01:46:42.000 I ns/openshift-oauth-apiserver pod/apiserver-5bc555c5b4-x9cbr node/ci-op-ylpfb4sq-253f3-dlw7t-master-0 container/oauth-apiserver reason/Killing
Dec 01 01:46:42.000 I ns/openshift-oauth-apiserver replicaset/apiserver-5bc555c5b4 reason/SuccessfulDelete Deleted pod: apiserver-5bc555c5b4-x9cbr
Dec 01 01:46:42.000 I ns/default namespace/kube-system node/apiserver-5bc555c5b4-x9cbr reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Dec 01 01:46:42.000 I ns/default namespace/kube-system node/apiserver-5bc555c5b4-x9cbr reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 01:46:42.000 I ns/default namespace/kube-system node/apiserver-5bc555c5b4-x9cbr reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Dec 01 01:46:42.000 - 425s  I disruption/oauth-api connection/reused disruption/oauth-api connection/reused started responding to GET requests over reused connections
Dec 01 01:46:42.039 I ns/openshift-oauth-apiserver pod/apiserver-5bc555c5b4-x9cbr node/ci-op-ylpfb4sq-253f3-dlw7t-master-0 reason/GracefulDelete duration/70s
Dec 01 01:46:42.537 - 299s  I alert/etcdGRPCRequestsSlow node/10.0.0.8:9979 ns/openshift-etcd pod/etcd-ci-op-ylpfb4sq-253f3-dlw7t-master-1 ALERTS{alertname="etcdGRPCRequestsSlow", alertstate="pending", endpoint="etcd-metrics", grpc_method="Txn", grpc_service="etcdserverpb.KV", instance="10.0.0.8:9979", job="etcd", namespace="openshift-etcd", pod="etcd-ci-op-ylpfb4sq-253f3-dlw7t-master-1", prometheus="openshift-monitoring/k8s", service="etcd", severity="critical"}
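The etcdGRPCRequestsSlow line above ends in a raw Prometheus label set (ALERTS{...}). Turning it into a dict makes it easy to bucket pending alerts by alertname and severity across runs. A sketch, assuming labels are comma-separated key="value" pairs with no embedded quotes, which holds for every excerpt here:

```python
# Sketch: parse the ALERTS{...} label set at the end of an alert line into a
# dict. Assumes comma-separated key="value" pairs without embedded quotes.
import re

LABEL = re.compile(r'(\w+)="([^"]*)"')

def alert_labels(line):
    start = line.find("ALERTS{")
    return dict(LABEL.findall(line[start:])) if start != -1 else None

line = ('Dec 01 01:46:42.537 - 299s  I alert/etcdGRPCRequestsSlow '
        'ALERTS{alertname="etcdGRPCRequestsSlow", alertstate="pending", '
        'severity="critical"}')
labels = alert_labels(line)
print(labels["alertname"], labels["severity"])  # etcdGRPCRequestsSlow critical
```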
#1598110647457943552 build-log.txt.gz (20 hours ago)
Dec 01 01:46:47.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/DeploymentUpdated Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed (2 times)
Dec 01 01:46:47.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation"
Dec 01 01:46:47.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-6cdb467574 to 0
Dec 01 01:46:47.000 I ns/openshift-cluster-storage-operator deployment/cluster-storage-operator reason/ScalingReplicaSet Scaled down replica set cluster-storage-operator-68b47dd47b to 0
Dec 01 01:46:47.000 I ns/openshift-cluster-storage-operator replicaset/cluster-storage-operator-68b47dd47b reason/SuccessfulDelete Deleted pod: cluster-storage-operator-68b47dd47b-n6dhk
Dec 01 01:46:47.000 I ns/openshift-apiserver pod/apiserver-74f84c8f56-sln89 node/apiserver-74f84c8f56-sln89 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Dec 01 01:46:47.000 I ns/openshift-apiserver pod/apiserver-74f84c8f56-sln89 node/apiserver-74f84c8f56-sln89 reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:46:47.206 E ns/openshift-kube-storage-version-migrator pod/migrator-5cb4d6d6dd-6lkzq node/ci-op-ylpfb4sq-253f3-dlw7t-master-0 container/migrator reason/ContainerExit code/2 cause/Error I1201 00:51:25.961178       1 migrator.go:18] FLAG: --add_dir_header="false"\nI1201 00:51:25.961319       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI1201 00:51:25.961326       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI1201 00:51:25.961333       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI1201 00:51:25.961342       1 migrator.go:18] FLAG: --kubeconfig=""\nI1201 00:51:25.961349       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI1201 00:51:25.961357       1 migrator.go:18] FLAG: --log_dir=""\nI1201 00:51:25.961364       1 migrator.go:18] FLAG: --log_file=""\nI1201 00:51:25.961369       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI1201 00:51:25.961375       1 migrator.go:18] FLAG: --logtostderr="true"\nI1201 00:51:25.961381       1 migrator.go:18] FLAG: --one_output="false"\nI1201 00:51:25.961387       1 migrator.go:18] FLAG: --skip_headers="false"\nI1201 00:51:25.961392       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI1201 00:51:25.961397       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI1201 00:51:25.961403       1 migrator.go:18] FLAG: --v="2"\nI1201 00:51:25.961409       1 migrator.go:18] FLAG: --vmodule=""\nI1201 00:51:25.962564       1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI1201 00:51:42.112992       1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI1201 00:51:42.282856       1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI1201 00:51:43.303174       1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI1201 00:51:43.407656       1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\n
Dec 01 01:46:47.591 W ns/openshift-oauth-apiserver pod/apiserver-6cdb467574-zg8zp reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 01:46:47.817 I ns/openshift-marketplace pod/marketplace-operator-84d5c45dbc-v6qxf node/ci-op-ylpfb4sq-253f3-dlw7t-master-0 reason/Scheduled
Dec 01 01:46:47.827 I ns/openshift-kube-storage-version-migrator pod/migrator-5cb4d6d6dd-6lkzq node/ci-op-ylpfb4sq-253f3-dlw7t-master-0 reason/Deleted
#1598110647457943552 build-log.txt.gz (20 hours ago)
Dec 01 01:48:01.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation"
Dec 01 01:48:01.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-5bc555c5b4 to 1
Dec 01 01:48:01.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-66fc54f465 to 2
Dec 01 01:48:01.000 I ns/openshift-oauth-apiserver replicaset/apiserver-66fc54f465 reason/SuccessfulCreate Created pod: apiserver-66fc54f465-89frd
Dec 01 01:48:01.000 I ns/openshift-oauth-apiserver replicaset/apiserver-5bc555c5b4 reason/SuccessfulDelete Deleted pod: apiserver-5bc555c5b4-9d7f7
Dec 01 01:48:01.000 I ns/default namespace/kube-system node/apiserver-5bc555c5b4-9d7f7 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Dec 01 01:48:01.000 I ns/default namespace/kube-system node/apiserver-5bc555c5b4-9d7f7 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 01:48:01.000 I ns/default namespace/kube-system node/apiserver-5bc555c5b4-9d7f7 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Dec 01 01:48:01.000 I ns/default namespace/kube-system node/apiserver-5bc555c5b4-9d7f7 reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:48:01.052 I ns/openshift-oauth-apiserver pod/apiserver-5bc555c5b4-9d7f7 node/ci-op-ylpfb4sq-253f3-dlw7t-master-2 reason/GracefulDelete duration/70s
Dec 01 01:48:01.117 I ns/openshift-oauth-apiserver pod/apiserver-66fc54f465-89frd node/ reason/Created
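This block is one step of the oauth-apiserver rolling update: scale the new replica set up, scale the old one down, and gracefully delete the displaced pod with a 70s budget. To reconstruct such steps from a long dump, the ScalingReplicaSet events alone are enough. A sketch; the message shape is inferred from these excerpts, not from a documented format:

```python
# Sketch: track desired replicas per replica set from ScalingReplicaSet
# events, enough to reconstruct one rolling-update step like the one above.
import re

SCALE = re.compile(r"Scaled (?:up|down) replica set (\S+) to (\d+)")

def replicaset_targets(lines):
    targets = {}
    for line in lines:
        m = SCALE.search(line)
        if m:
            targets[m.group(1)] = int(m.group(2))  # last write wins
    return targets

lines = [
    "... reason/ScalingReplicaSet Scaled down replica set apiserver-5bc555c5b4 to 1",
    "... reason/ScalingReplicaSet Scaled up replica set apiserver-66fc54f465 to 2",
]
print(replicaset_targets(lines))
# {'apiserver-5bc555c5b4': 1, 'apiserver-66fc54f465': 2}
```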
#1598018085535617024 build-log.txt.gz (25 hours ago)
Nov 30 19:21:03.892 I ns/openshift-marketplace pod/certified-operators-l5qgb node/ci-op-n5lyp11w-253f3-rntdm-worker-westus-hpmvx container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 30 19:21:04.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-30-181518@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (24 times)
Nov 30 19:21:04.025 I ns/openshift-marketplace pod/certified-operators-l5qgb node/ci-op-n5lyp11w-253f3-rntdm-worker-westus-hpmvx reason/Deleted
Nov 30 19:21:05.736 - 59s   I alert/TargetDown ns/openshift-etcd ALERTS{alertname="TargetDown", alertstate="pending", job="etcd", namespace="openshift-etcd", prometheus="openshift-monitoring/k8s", service="etcd", severity="warning"}
Nov 30 19:21:08.000 W ns/openshift-etcd pod/etcd-quorum-guard-584c687554-th2v5 node/ci-op-n5lyp11w-253f3-rntdm-master-0 reason/Unhealthy Readiness probe failed:  (6 times)
Nov 30 19:21:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n5lyp11w-253f3-rntdm-master-1 node/ci-op-n5lyp11w-253f3-rntdm-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 19:21:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n5lyp11w-253f3-rntdm-master-1 node/ci-op-n5lyp11w-253f3-rntdm-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 19:21:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n5lyp11w-253f3-rntdm-master-1 node/ci-op-n5lyp11w-253f3-rntdm-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 19:21:11.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-n5lyp11w-253f3-rntdm-master-1 node/ci-op-n5lyp11w-253f3-rntdm-master-1 reason/ProbeError Readiness probe error: Get "https://10.0.0.8:6443/healthz": dial tcp 10.0.0.8:6443: connect: connection refused\nbody: \n
Nov 30 19:21:11.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-n5lyp11w-253f3-rntdm-master-1 node/ci-op-n5lyp11w-253f3-rntdm-master-1 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.8:6443/healthz": dial tcp 10.0.0.8:6443: connect: connection refused
Nov 30 19:21:12.736 - 59s   I alert/etcdMembersDown ns/openshift-etcd pod/etcd-ci-op-n5lyp11w-253f3-rntdm-master-0 ALERTS{alertname="etcdMembersDown", alertstate="pending", job="etcd", namespace="openshift-etcd", pod="etcd-ci-op-n5lyp11w-253f3-rntdm-master-0", prometheus="openshift-monitoring/k8s", service="etcd", severity="critical"}
#1598018085535617024 build-log.txt.gz (25 hours ago)
Nov 30 19:23:43.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/NodeCurrentRevisionChanged Updated node "ci-op-n5lyp11w-253f3-rntdm-master-2" from revision 6 to 7 because static pod is ready
Nov 30 19:23:43.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available"
Nov 30 19:23:43.848 W clusteroperator/etcd condition/Progressing status/False reason/AsExpected changed: NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found
Nov 30 19:23:48.143 I ns/openshift-etcd pod/installer-2-ci-op-n5lyp11w-253f3-rntdm-master-1 node/ci-op-n5lyp11w-253f3-rntdm-master-1 reason/DeletedAfterCompletion
Nov 30 19:23:48.407 W clusterversion/version changed Failing to False
Nov 30 19:24:06.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n5lyp11w-253f3-rntdm-master-0 node/ci-op-n5lyp11w-253f3-rntdm-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 19:24:06.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n5lyp11w-253f3-rntdm-master-0 node/ci-op-n5lyp11w-253f3-rntdm-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 19:24:06.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n5lyp11w-253f3-rntdm-master-0 node/ci-op-n5lyp11w-253f3-rntdm-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 19:24:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n5lyp11w-253f3-rntdm-master-0 node/ci-op-n5lyp11w-253f3-rntdm-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 19:24:10.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n5lyp11w-253f3-rntdm-master-0 node/ci-op-n5lyp11w-253f3-rntdm-master-0 container/kube-apiserver reason/Killing
#1598018085535617024 build-log.txt.gz (25 hours ago)
Nov 30 19:26:37.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-n5lyp11w-253f3-rntdm-master-0_507e7152-7913-4d8a-b7fc-1e096b62f2e3 became leader
Nov 30 19:26:37.000 I ns/openshift-kube-apiserver lease/cert-regeneration-controller-lock reason/LeaderElection ci-op-n5lyp11w-253f3-rntdm-master-0_507e7152-7913-4d8a-b7fc-1e096b62f2e3 became leader
Nov 30 19:26:39.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-30-181518@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (39 times)
Nov 30 19:27:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n5lyp11w-253f3-rntdm-master-2 node/ci-op-n5lyp11w-253f3-rntdm-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 19:27:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n5lyp11w-253f3-rntdm-master-2 node/ci-op-n5lyp11w-253f3-rntdm-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 19:27:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n5lyp11w-253f3-rntdm-master-2 node/ci-op-n5lyp11w-253f3-rntdm-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 19:27:12.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-n5lyp11w-253f3-rntdm-master-2 node/ci-op-n5lyp11w-253f3-rntdm-master-2 reason/ProbeError Readiness probe error: Get "https://10.0.0.7:6443/healthz": dial tcp 10.0.0.7:6443: connect: connection refused\nbody: \n
#1598018085535617024 build-log.txt.gz (25 hours ago)
Nov 30 19:42:50.563 I ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-n5lyp11w-253f3-rntdm-worker-westus-hpmvx container/kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
Nov 30 19:42:50.563 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-n5lyp11w-253f3-rntdm-worker-westus-hpmvx container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2022/11/30 18:58:33 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2022/11/30 18:58:33 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2022/11/30 18:58:33 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2022/11/30 18:58:34 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2022/11/30 18:58:34 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2022/11/30 18:58:34 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2022/11/30 18:58:34 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI1130 18:58:34.004862       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2022/11/30 18:58:34 http.go:107: HTTPS: listening on [::]:9091\n2022/11/30 19:10:18 server.go:3120: http: TLS handshake error from 10.129.2.7:42894: read tcp 10.131.0.26:9091->10.129.2.7:42894: read: connection reset by peer\n2022/11/30 19:28:33 server.go:3120: http: TLS handshake error from 10.129.2.7:55328: read tcp 10.131.0.26:9091->10.129.2.7:55328: read: connection reset by peer\n
Nov 30 19:42:50.563 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-n5lyp11w-253f3-rntdm-worker-westus-hpmvx container/config-reloader reason/ContainerExit code/2 cause/Error 58:32.433536522Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221103-15:11:33)"\nlevel=info ts=2022-11-30T18:58:32.433801424Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2022-11-30T18:58:33.052249648Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2022-11-30T18:58:33.052446349Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2022-11-30T18:58:46.858982877Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2022-11-30T19:01:15.349395712Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2022-11-30T19:02:44.35948815Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nl
Nov 30 19:42:51.000 I ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator reason/OperatorStatusChanged Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to update csi-snapshot-controller pods" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy csi-snapshot-controller pods"
Nov 30 19:42:51.000 W ns/openshift-marketplace pod/marketplace-operator-7b5ccb9455-fdk8l node/ci-op-n5lyp11w-253f3-rntdm-master-2 reason/ProbeError Readiness probe error: Get "http://10.130.0.86:8080/healthz": dial tcp 10.130.0.86:8080: connect: connection refused\nbody: \n (2 times)
Nov 30 19:42:51.000 I ns/openshift-apiserver pod/apiserver-cf9bc7887-5jh8h node/apiserver-cf9bc7887-5jh8h reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 19:42:51.000 I ns/openshift-apiserver pod/apiserver-cf9bc7887-5jh8h node/apiserver-cf9bc7887-5jh8h reason/TerminationStoppedServing Server has stopped listening
Nov 30 19:42:51.000 W ns/openshift-marketplace pod/marketplace-operator-7b5ccb9455-fdk8l node/ci-op-n5lyp11w-253f3-rntdm-master-2 reason/Unhealthy Readiness probe failed: Get "http://10.130.0.86:8080/healthz": dial tcp 10.130.0.86:8080: connect: connection refused (2 times)
Nov 30 19:42:51.651 I ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-n5lyp11w-253f3-rntdm-worker-westus-hpmvx reason/Deleted
Nov 30 19:42:51.721 I ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-n5lyp11w-253f3-rntdm-worker-westus-hpmvx container/prom-label-proxy reason/ContainerExit code/0 cause/Completed
Nov 30 19:42:51.721 I ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-n5lyp11w-253f3-rntdm-worker-westus-hpmvx container/kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
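The ContainerExit excerpts above (prometheus-proxy, config-reloader, and the migrator earlier) embed each container's final log tail with real newlines flattened to a literal \n. Re-expanding them makes those tails readable again. A sketch that assumes \n is the only escape worth handling in these dumps:

```python
# Sketch: re-expand the literal "\n" escapes in a captured container log tail
# (everything after "cause/Error ") so it reads as the original multi-line log.
def expand_container_log(line):
    marker = "cause/Error "
    i = line.find(marker)
    tail = line[i + len(marker):] if i != -1 else line
    return tail.replace("\\n", "\n")

line = (r"... container/prometheus-proxy reason/ContainerExit code/2 "
        r"cause/Error 2022/11/30 18:58:34 oauthproxy.go:203: mapping path\n"
        r"2022/11/30 18:58:34 http.go:107: HTTPS: listening on [::]:9091\n")
print(expand_container_log(line))
```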
#1598018085535617024 build-log.txt.gz (25 hours ago)
Nov 30 19:43:21.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3."
Nov 30 19:43:21.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-7595b5ddf5 to 2
Nov 30 19:43:21.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-595cc58f65 to 1
Nov 30 19:43:21.000 I ns/openshift-oauth-apiserver replicaset/apiserver-595cc58f65 reason/SuccessfulCreate Created pod: apiserver-595cc58f65-j9nxx
Nov 30 19:43:21.000 I ns/openshift-oauth-apiserver replicaset/apiserver-7595b5ddf5 reason/SuccessfulDelete Deleted pod: apiserver-7595b5ddf5-bwvfr
Nov 30 19:43:21.000 I ns/default namespace/kube-system node/apiserver-7595b5ddf5-bwvfr reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 30 19:43:21.000 I ns/default namespace/kube-system node/apiserver-7595b5ddf5-bwvfr reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 30 19:43:21.000 I ns/default namespace/kube-system node/apiserver-7595b5ddf5-bwvfr reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 30 19:43:21.000 I ns/default namespace/kube-system node/apiserver-7595b5ddf5-bwvfr reason/TerminationStoppedServing Server has stopped listening
Nov 30 19:43:21.260 W ns/openshift-authentication pod/oauth-openshift-cc4bf7dbb-r586h reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
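The FailedScheduling message packs its explanation into "N/M nodes are available: reason, reason." form. A sketch that splits out the per-reason node counts; the shape is inferred from these excerpts, and messages that embed taint details (like the image-registry one further down) contain extra commas this simple split will not handle:

```python
# Sketch: split a FailedScheduling message into per-reason node counts.
import re

AVAIL = re.compile(r"(\d+)/(\d+) nodes are available: (.+?)\.?$")

def scheduling_reasons(msg):
    m = AVAIL.search(msg)
    if not m:
        return None
    available, total, rest = int(m.group(1)), int(m.group(2)), m.group(3)
    reasons = {}
    for part in rest.split(", "):
        pm = re.match(r"(\d+) (.+)", part)
        if pm:
            reasons[pm.group(2)] = int(pm.group(1))
    return available, total, reasons

msg = ("0/6 nodes are available: 3 node(s) didn't match Pod's node "
       "affinity/selector, 3 node(s) didn't match pod anti-affinity rules.")
print(scheduling_reasons(msg))
# (0, 6, {"node(s) didn't match Pod's node affinity/selector": 3,
#         "node(s) didn't match pod anti-affinity rules": 3})
```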
#1597932918154465280 build-log.txt.gz (31 hours ago)
Nov 30 13:39:15.810 I ns/openshift-etcd pod/etcd-ci-op-hfq58zlk-253f3-n6hwg-master-0 node/ci-op-hfq58zlk-253f3-n6hwg-master-0 reason/Created
Nov 30 13:39:18.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (23 times)
Nov 30 13:39:18.000 W ns/openshift-etcd pod/etcd-quorum-guard-8446b7545-scsqw node/ci-op-hfq58zlk-253f3-n6hwg-master-0 reason/Unhealthy Readiness probe failed:  (10 times)
Nov 30 13:39:18.211 - 29s   I alert/NodeProxyApplyStale ns/openshift-sdn pod/sdn-2bzhc container/kube-rbac-proxy-main ALERTS{alertname="NodeProxyApplyStale", alertstate="pending", container="kube-rbac-proxy-main", created_by_kind="DaemonSet", created_by_name="sdn", endpoint="https-main", host_ip="10.0.0.8", host_network="true", job="kube-state-metrics", namespace="openshift-sdn", node="ci-op-hfq58zlk-253f3-n6hwg-master-1", pod="sdn-2bzhc", pod_ip="10.0.0.8", priority_class="system-node-critical", prometheus="openshift-monitoring/k8s", service="kube-state-metrics", severity="warning", uid="da18b5ea-5f77-4410-bcda-117672e96938"}
Nov 30 13:39:19.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (24 times)
Nov 30 13:39:20.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-hfq58zlk-253f3-n6hwg-master-2 node/ci-op-hfq58zlk-253f3-n6hwg-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 13:39:20.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-hfq58zlk-253f3-n6hwg-master-2 node/ci-op-hfq58zlk-253f3-n6hwg-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 13:39:20.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-hfq58zlk-253f3-n6hwg-master-2 node/ci-op-hfq58zlk-253f3-n6hwg-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 13:39:21.000 I ns/openshift-etcd pod/etcd-ci-op-hfq58zlk-253f3-n6hwg-master-0 node/ci-op-hfq58zlk-253f3-n6hwg-master-0 container/setup reason/Created
Nov 30 13:39:21.000 I ns/openshift-etcd pod/etcd-ci-op-hfq58zlk-253f3-n6hwg-master-0 node/ci-op-hfq58zlk-253f3-n6hwg-master-0 container/setup reason/Pulled duration/5.386s image/registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2
Nov 30 13:39:21.000 I ns/openshift-etcd pod/etcd-ci-op-hfq58zlk-253f3-n6hwg-master-0 node/ci-op-hfq58zlk-253f3-n6hwg-master-0 container/setup reason/Started
#1597932918154465280 build-log.txt.gz (31 hours ago)
Nov 30 13:41:46.464 W clusteroperator/etcd condition/Progressing status/False reason/AsExpected changed: NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found
Nov 30 13:41:50.971 I ns/openshift-etcd pod/installer-2-ci-op-hfq58zlk-253f3-n6hwg-master-2 node/ci-op-hfq58zlk-253f3-n6hwg-master-2 reason/DeletedAfterCompletion
Nov 30 13:42:31.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.3353209509426915 over 5 minutes on "Azure"; disk metrics are: etcd-ci-op-hfq58zlk-253f3-n6hwg-master-0=0.024890,etcd-ci-op-hfq58zlk-253f3-n6hwg-master-2=0.015673,etcd-ci-op-hfq58zlk-253f3-n6hwg-master-1=0.012259. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 30 13:42:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-hfq58zlk-253f3-n6hwg-master-0 node/ci-op-hfq58zlk-253f3-n6hwg-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 13:42:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-hfq58zlk-253f3-n6hwg-master-0 node/ci-op-hfq58zlk-253f3-n6hwg-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 13:42:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-hfq58zlk-253f3-n6hwg-master-0 node/ci-op-hfq58zlk-253f3-n6hwg-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
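The EtcdLeaderChangeMetrics warning above packs per-member disk metrics into a key=value list. Pulling them out as floats makes it easy to spot the slowest member. A sketch; the member names are shortened, and the 0.02 cutoff is an arbitrary illustration, not an etcd-operator constant:

```python
# Sketch: extract the per-member disk metrics from an EtcdLeaderChangeMetrics
# warning and flag slow members. Cutoff and shortened names are ours.
import re

line = ('... reason/EtcdLeaderChangeMetrics Detected leader change increase of '
        '2.3353209509426915 over 5 minutes on "Azure"; disk metrics are: '
        'etcd-master-0=0.024890,etcd-master-2=0.015673,etcd-master-1=0.012259. '
        'Most often this is as a result of inadequate storage ...')

metrics = {k: float(v) for k, v in re.findall(r"([\w-]+)=(\d+\.\d+)", line)}
print(max(metrics, key=metrics.get))                    # etcd-master-0
print({k: v for k, v in metrics.items() if v > 0.02})   # {'etcd-master-0': 0.02489}
```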
#1597932918154465280 build-log.txt.gz (31 hours ago)
Nov 30 13:45:04.646 I ns/openshift-operator-lifecycle-manager pod/collect-profiles-27830265--1-b6779 node/ci-op-hfq58zlk-253f3-n6hwg-worker-centralus2-jlh4d container/collect-profiles reason/ContainerExit code/0 cause/Completed
Nov 30 13:45:06.000 I ns/openshift-operator-lifecycle-manager job/collect-profiles-27830265 reason/Completed Job completed
Nov 30 13:45:06.000 I ns/openshift-operator-lifecycle-manager cronjob/collect-profiles reason/SawCompletedJob Saw completed job: collect-profiles-27830265, status: Complete
Nov 30 13:45:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-hfq58zlk-253f3-n6hwg-master-1 node/ci-op-hfq58zlk-253f3-n6hwg-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 13:45:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-hfq58zlk-253f3-n6hwg-master-1 node/ci-op-hfq58zlk-253f3-n6hwg-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 13:45:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-hfq58zlk-253f3-n6hwg-master-1 node/ci-op-hfq58zlk-253f3-n6hwg-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 13:45:31.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-hfq58zlk-253f3-n6hwg-master-1 node/ci-op-hfq58zlk-253f3-n6hwg-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
#1597889238383202304 build-log.txt.gz (34 hours ago)
Nov 30 10:40:38.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/UnhealthyEtcdMember unhealthy members: ci-op-9ww4psx3-253f3-x8zlv-master-1
Nov 30 10:40:39.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (19 times)
Nov 30 10:40:39.000 W ns/openshift-etcd pod/etcd-quorum-guard-8446b7545-8k5sz node/ci-op-9ww4psx3-253f3-x8zlv-master-1 reason/Unhealthy Readiness probe failed:  (5 times)
Nov 30 10:40:42.364 - 59s   I alert/etcdMembersDown ns/openshift-etcd pod/etcd-ci-op-9ww4psx3-253f3-x8zlv-master-1 ALERTS{alertname="etcdMembersDown", alertstate="pending", job="etcd", namespace="openshift-etcd", pod="etcd-ci-op-9ww4psx3-253f3-x8zlv-master-1", prometheus="openshift-monitoring/k8s", service="etcd", severity="critical"}
Nov 30 10:40:44.000 W ns/openshift-etcd pod/etcd-quorum-guard-8446b7545-8k5sz node/ci-op-9ww4psx3-253f3-x8zlv-master-1 reason/Unhealthy Readiness probe failed:  (6 times)
Nov 30 10:40:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9ww4psx3-253f3-x8zlv-master-1 node/ci-op-9ww4psx3-253f3-x8zlv-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 10:40:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9ww4psx3-253f3-x8zlv-master-1 node/ci-op-9ww4psx3-253f3-x8zlv-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 10:40:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9ww4psx3-253f3-x8zlv-master-1 node/ci-op-9ww4psx3-253f3-x8zlv-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 10:40:49.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 6; 1 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 6; 1 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-9ww4psx3-253f3-x8zlv-master-1 is unhealthy"
Nov 30 10:40:49.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-9ww4psx3-253f3-x8zlv-master-1 is unhealthy"
Nov 30 10:40:50.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (20 times)
#1597889238383202304 build-log.txt.gz (34 hours ago)
Nov 30 10:43:53.000 I ns/openshift-marketplace pod/certified-operators-2nlqx reason/AddedInterface Add eth0 [10.131.0.33/23] from openshift-sdn
Nov 30 10:43:53.364 - 59s   I alert/KubeDeploymentReplicasMismatch ns/openshift-etcd container/kube-rbac-proxy-main ALERTS{alertname="KubeDeploymentReplicasMismatch", alertstate="pending", container="kube-rbac-proxy-main", deployment="etcd-quorum-guard", endpoint="https-main", job="kube-state-metrics", namespace="openshift-etcd", prometheus="openshift-monitoring/k8s", service="kube-state-metrics", severity="warning"}
Nov 30 10:43:54.000 I ns/openshift-marketplace pod/certified-operators-2nlqx node/ci-op-9ww4psx3-253f3-x8zlv-worker-centralus2-wh7fl container/registry-server reason/Created
Nov 30 10:43:54.000 I ns/openshift-marketplace pod/certified-operators-2nlqx node/ci-op-9ww4psx3-253f3-x8zlv-worker-centralus2-wh7fl container/registry-server reason/Pulled duration/0.880s image/registry.redhat.io/redhat/certified-operator-index:v4.9
Nov 30 10:43:54.000 I ns/openshift-marketplace pod/certified-operators-2nlqx node/ci-op-9ww4psx3-253f3-x8zlv-worker-centralus2-wh7fl container/registry-server reason/Started
Nov 30 10:43:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9ww4psx3-253f3-x8zlv-master-2 node/ci-op-9ww4psx3-253f3-x8zlv-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 10:43:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9ww4psx3-253f3-x8zlv-master-2 node/ci-op-9ww4psx3-253f3-x8zlv-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 10:43:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9ww4psx3-253f3-x8zlv-master-2 node/ci-op-9ww4psx3-253f3-x8zlv-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 10:43:54.406 I ns/openshift-marketplace pod/certified-operators-2nlqx node/ci-op-9ww4psx3-253f3-x8zlv-worker-centralus2-wh7fl container/registry-server reason/ContainerStart duration/4.00s
Nov 30 10:43:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9ww4psx3-253f3-x8zlv-master-2 node/ci-op-9ww4psx3-253f3-x8zlv-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 10:43:57.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9ww4psx3-253f3-x8zlv-master-2 node/ci-op-9ww4psx3-253f3-x8zlv-master-2 container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea
#1597889238383202304 build-log.txt.gz (34 hours ago)
Nov 30 10:45:46.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (33 times)
Nov 30 10:45:48.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-9ww4psx3-253f3-x8zlv-master-2_37e74ab4-2933-4530-bcf0-4bf8bcdfefe4 became leader
Nov 30 10:45:48.000 I ns/openshift-kube-apiserver lease/cert-regeneration-controller-lock reason/LeaderElection ci-op-9ww4psx3-253f3-x8zlv-master-2_37e74ab4-2933-4530-bcf0-4bf8bcdfefe4 became leader
Nov 30 10:45:49.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (34 times)
Nov 30 10:46:44.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (35 times)
Nov 30 10:46:51.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9ww4psx3-253f3-x8zlv-master-0 node/ci-op-9ww4psx3-253f3-x8zlv-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 10:46:51.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9ww4psx3-253f3-x8zlv-master-0 node/ci-op-9ww4psx3-253f3-x8zlv-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 10:46:51.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9ww4psx3-253f3-x8zlv-master-0 node/ci-op-9ww4psx3-253f3-x8zlv-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 10:46:53.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9ww4psx3-253f3-x8zlv-master-0 node/ci-op-9ww4psx3-253f3-x8zlv-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 10:46:55.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9ww4psx3-253f3-x8zlv-master-0 node/ci-op-9ww4psx3-253f3-x8zlv-master-0 container/kube-apiserver reason/Killing
Nov 30 10:46:56.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-9ww4psx3-253f3-x8zlv-master-0 node/ci-op-9ww4psx3-253f3-x8zlv-master-0 reason/ProbeError Readiness probe error: Get "https://10.0.0.6:6443/healthz": dial tcp 10.0.0.6:6443: connect: connection refused\nbody: \n
#1597889238383202304 build-log.txt.gz (34 hours ago)
Nov 30 11:04:26.000 I ns/openshift-monitoring deployment/prometheus-operator reason/ScalingReplicaSet Scaled up replica set prometheus-operator-9c77bcd8c to 1
Nov 30 11:04:26.000 I ns/openshift-ingress deployment/router-default reason/ScalingReplicaSet Scaled up replica set router-default-766fbd8fc to 1
Nov 30 11:04:26.000 I ns/openshift-oauth-apiserver replicaset/apiserver-6cb7fd8fb reason/SuccessfulCreate Created pod: apiserver-6cb7fd8fb-vrzbm
Nov 30 11:04:26.000 I ns/openshift-monitoring replicaset/prometheus-operator-9c77bcd8c reason/SuccessfulCreate Created pod: prometheus-operator-9c77bcd8c-v48rv
Nov 30 11:04:26.000 I ns/openshift-ingress replicaset/router-default-5dbd5b4bc8 reason/SuccessfulDelete Deleted pod: router-default-5dbd5b4bc8-qc2ls
Nov 30 11:04:26.000 I ns/default namespace/kube-system node/apiserver-5cfdcbd9dd-8q8qt reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 30 11:04:26.000 I ns/default namespace/kube-system node/apiserver-5cfdcbd9dd-8q8qt reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 30 11:04:26.000 I ns/default namespace/kube-system node/apiserver-5cfdcbd9dd-8q8qt reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 30 11:04:26.000 I ns/default namespace/kube-system node/apiserver-5cfdcbd9dd-8q8qt reason/TerminationStoppedServing Server has stopped listening
Nov 30 11:04:26.029 I ns/openshift-cluster-csi-drivers pod/azure-disk-csi-driver-node-frnzj node/ci-op-9ww4psx3-253f3-x8zlv-worker-centralus3-f8sn8 container/azure-inject-credentials reason/ContainerExit code/0 cause/Completed
Nov 30 11:04:26.181 W ns/openshift-oauth-apiserver pod/apiserver-6cb7fd8fb-vrzbm reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
#1597889238383202304 build-log.txt.gz (34 hours ago)
Nov 30 11:04:42.000 I ns/openshift-ingress pod/router-default-766fbd8fc-q28qr node/ci-op-9ww4psx3-253f3-x8zlv-worker-centralus1-8x48n container/router reason/Started
Nov 30 11:04:42.000 I ns/openshift-cluster-csi-drivers lease/external-snapshotter-leader-disk-csi-azure-com reason/LeaderElection ci-op-9ww4psx3-253f3-x8zlv-master-0 became leader
Nov 30 11:04:42.000 I ns/openshift-monitoring deployment/grafana reason/ScalingReplicaSet Scaled up replica set grafana-5d884f4cd5 to 1
Nov 30 11:04:42.000 I ns/openshift-monitoring replicaset/grafana-5d884f4cd5 reason/SuccessfulCreate Created pod: grafana-5d884f4cd5-dsh8g
Nov 30 11:04:42.000 I ns/openshift-image-registry daemonset/node-ca reason/SuccessfulCreate Created pod: node-ca-wtwjq
Nov 30 11:04:42.000 I ns/openshift-apiserver pod/apiserver-58ccdb74c9-zrbrm node/apiserver-58ccdb74c9-zrbrm reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 11:04:42.000 I ns/openshift-apiserver pod/apiserver-58ccdb74c9-zrbrm node/apiserver-58ccdb74c9-zrbrm reason/TerminationStoppedServing Server has stopped listening
Nov 30 11:04:42.407 W clusteroperator/image-registry condition/Progressing status/True reason/DeploymentNotCompleted changed: Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed
Nov 30 11:04:42.407 - 42s   W clusteroperator/image-registry condition/Progressing status/True reason/Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed
Nov 30 11:04:42.425 I ns/openshift-image-registry pod/node-ca-7sj58 node/ci-op-9ww4psx3-253f3-x8zlv-master-2 container/node-ca reason/ContainerExit code/0 cause/Completed
Nov 30 11:04:42.432 W ns/openshift-image-registry pod/image-registry-658d76f88c-ts94v reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
#1597889238383202304 build-log.txt.gz (34 hours ago)
Nov 30 11:05:46.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation"
Nov 30 11:05:46.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-5cfdcbd9dd to 1
Nov 30 11:05:46.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-5f77b6c79 to 2
Nov 30 11:05:46.000 I ns/openshift-oauth-apiserver replicaset/apiserver-5f77b6c79 reason/SuccessfulCreate Created pod: apiserver-5f77b6c79-ffm6m
Nov 30 11:05:46.000 I ns/openshift-oauth-apiserver replicaset/apiserver-5cfdcbd9dd reason/SuccessfulDelete Deleted pod: apiserver-5cfdcbd9dd-ps8hr
Nov 30 11:05:46.000 I ns/default namespace/kube-system node/apiserver-5cfdcbd9dd-ps8hr reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 30 11:05:46.000 I ns/default namespace/kube-system node/apiserver-5cfdcbd9dd-ps8hr reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 30 11:05:46.000 I ns/default namespace/kube-system node/apiserver-5cfdcbd9dd-ps8hr reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 30 11:05:46.000 I ns/default namespace/kube-system node/apiserver-5cfdcbd9dd-ps8hr reason/TerminationStoppedServing Server has stopped listening
Nov 30 11:05:46.281 I ns/openshift-oauth-apiserver pod/apiserver-5f77b6c79-htd7j node/ci-op-9ww4psx3-253f3-x8zlv-master-1 container/oauth-apiserver reason/Ready
Nov 30 11:05:46.334 I ns/openshift-oauth-apiserver pod/apiserver-5cfdcbd9dd-ps8hr node/ci-op-9ww4psx3-253f3-x8zlv-master-2 reason/GracefulDelete duration/70s
#1597521812294471680 build-log.txt.gz (2 days ago)
Nov 29 10:27:34.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (12 times)
Nov 29 10:27:36.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (13 times)
Nov 29 10:27:59.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 6; 1 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-6p9n05l8-253f3-z47sw-master-1 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 6; 1 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available"
Nov 29 10:27:59.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-6p9n05l8-253f3-z47sw-master-1 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
Nov 29 10:28:00.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (14 times)
Nov 29 10:28:01.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-6p9n05l8-253f3-z47sw-master-0 node/ci-op-6p9n05l8-253f3-z47sw-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 10:28:01.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-6p9n05l8-253f3-z47sw-master-0 node/ci-op-6p9n05l8-253f3-z47sw-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 10:28:01.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-6p9n05l8-253f3-z47sw-master-0 node/ci-op-6p9n05l8-253f3-z47sw-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 10:28:03.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-6p9n05l8-253f3-z47sw-master-0 node/ci-op-6p9n05l8-253f3-z47sw-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 10:28:04.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (15 times)
Nov 29 10:28:05.000 I ns/openshift-etcd pod/etcd-ci-op-6p9n05l8-253f3-z47sw-master-2 node/ci-op-6p9n05l8-253f3-z47sw-master-2 container/etcd reason/Killing
#1597521812294471680 build-log.txt.gz (2 days ago)
Nov 29 10:30:59.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 3.3333456790580698 over 5 minutes on "Azure"; disk metrics are: etcd-ci-op-6p9n05l8-253f3-z47sw-master-2=0.007961,etcd-ci-op-6p9n05l8-253f3-z47sw-master-0=0.012009,etcd-ci-op-6p9n05l8-253f3-z47sw-master-1=0.027909. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 29 10:31:00.000 W ns/openshift-etcd pod/etcd-quorum-guard-56659848c-ldsgt node/ci-op-6p9n05l8-253f3-z47sw-master-0 reason/Unhealthy Readiness probe failed:  (13 times)
Nov 29 10:31:01.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/UnhealthyEtcdMember unhealthy members: ci-op-6p9n05l8-253f3-z47sw-master-0 (4 times)
Nov 29 10:31:03.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" (3 times)
Nov 29 10:31:05.220 I ns/openshift-etcd pod/etcd-quorum-guard-56659848c-ldsgt node/ci-op-6p9n05l8-253f3-z47sw-master-0 container/guard reason/Ready
Nov 29 10:31:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-6p9n05l8-253f3-z47sw-master-1 node/ci-op-6p9n05l8-253f3-z47sw-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 10:31:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-6p9n05l8-253f3-z47sw-master-1 node/ci-op-6p9n05l8-253f3-z47sw-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 10:31:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-6p9n05l8-253f3-z47sw-master-1 node/ci-op-6p9n05l8-253f3-z47sw-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 10:31:08.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (19 times)
Nov 29 10:31:10.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-6p9n05l8-253f3-z47sw-master-1 node/ci-op-6p9n05l8-253f3-z47sw-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 10:31:10.578 I ns/openshift-etcd pod/etcd-ci-op-6p9n05l8-253f3-z47sw-master-0 node/ci-op-6p9n05l8-253f3-z47sw-master-0 container/etcd reason/Ready
#1597521812294471680 build-log.txt.gz (2 days ago)
Nov 29 10:33:15.000 - 1s    E disruption/kube-api connection/reused disruption/kube-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-6p9n05l8-253f3.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/default": net/http: timeout awaiting response headers
Nov 29 10:33:15.000 - 1048s I disruption/oauth-api connection/reused disruption/oauth-api connection/reused started responding to GET requests over reused connections
Nov 29 10:33:15.262 - 874s  I disruption/openshift-api connection/reused disruption/openshift-api connection/reused started responding to GET requests over reused connections
Nov 29 10:33:16.000 - 1049s I disruption/kube-api connection/reused disruption/kube-api connection/reused started responding to GET requests over reused connections
Nov 29 10:34:01.171 I kube-apiserver received an error while watching events: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Nov 29 10:34:02.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-6p9n05l8-253f3-z47sw-master-2 node/ci-op-6p9n05l8-253f3-z47sw-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 10:34:02.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-6p9n05l8-253f3-z47sw-master-2 node/ci-op-6p9n05l8-253f3-z47sw-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 10:34:02.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-6p9n05l8-253f3-z47sw-master-2 node/ci-op-6p9n05l8-253f3-z47sw-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 10:34:03.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-6p9n05l8-253f3-z47sw-master-2 node/ci-op-6p9n05l8-253f3-z47sw-master-2 reason/ProbeError Readiness probe error: Get "https://10.0.0.6:6443/healthz": dial tcp 10.0.0.6:6443: connect: connection refused\nbody: \n
Nov 29 10:34:03.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-6p9n05l8-253f3-z47sw-master-2 node/ci-op-6p9n05l8-253f3-z47sw-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.6:6443/healthz": dial tcp 10.0.0.6:6443: connect: connection refused
Nov 29 10:34:04.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-6p9n05l8-253f3-z47sw-master-2 node/ci-op-6p9n05l8-253f3-z47sw-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
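The disruption/kube-api lines above come from a monitor that continuously issues GET requests against the API endpoint and records the intervals where requests stop and start succeeding. A rough Go sketch of such a poller, with a hypothetical URL and without the reused-vs-new-connection split the real monitor performs:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Poll a URL on a fixed interval and log transitions between responding
// and not responding, in the spirit of the disruption intervals above.
func main() {
	const url = "https://api.example.test:6443/api/v1/namespaces/default" // hypothetical endpoint
	client := &http.Client{Timeout: 5 * time.Second}
	responding := true
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
		}
		switch {
		case err != nil && responding:
			fmt.Printf("%s stopped responding to GET requests: %v\n", time.Now().Format(time.RFC3339), err)
			responding = false
		case err == nil && !responding:
			fmt.Printf("%s started responding to GET requests\n", time.Now().Format(time.RFC3339))
			responding = true
		}
		time.Sleep(time.Second)
	}
}
```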
#1597521812294471680 build-log.txt.gz (2 days ago)
Nov 29 10:47:47.000 I ns/openshift-apiserver replicaset/apiserver-547bd894b7 reason/SuccessfulDelete Deleted pod: apiserver-547bd894b7-7ntc4
Nov 29 10:47:47.000 I ns/openshift-oauth-apiserver replicaset/apiserver-d9f5455dc reason/SuccessfulDelete Deleted pod: apiserver-d9f5455dc-5q7j8
Nov 29 10:47:47.000 I ns/openshift-cluster-node-tuning-operator replicaset/cluster-node-tuning-operator-b9ccd7545 reason/SuccessfulDelete Deleted pod: cluster-node-tuning-operator-b9ccd7545-f6jgn
Nov 29 10:47:47.000 I ns/openshift-ingress-canary daemonset/ingress-canary reason/SuccessfulDelete Deleted pod: ingress-canary-cm24m
Nov 29 10:47:47.000 I ns/openshift-authentication replicaset/oauth-openshift-848846cbd9 reason/SuccessfulDelete Deleted pod: oauth-openshift-848846cbd9-wxkkt
Nov 29 10:47:47.000 I ns/default namespace/kube-system node/apiserver-d9f5455dc-5q7j8 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 29 10:47:47.000 I ns/openshift-apiserver pod/apiserver-547bd894b7-7ntc4 node/apiserver-547bd894b7-7ntc4 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 10:47:47.000 I ns/default namespace/kube-system node/apiserver-d9f5455dc-5q7j8 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 10:47:47.000 I ns/openshift-apiserver pod/apiserver-547bd894b7-7ntc4 node/apiserver-547bd894b7-7ntc4 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 29 10:47:47.000 I ns/default namespace/kube-system node/apiserver-d9f5455dc-5q7j8 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 29 10:47:47.000 I ns/default namespace/kube-system node/apiserver-d9f5455dc-5q7j8 reason/TerminationStoppedServing Server has stopped listening
#1597521812294471680 build-log.txt.gz (2 days ago)
Nov 29 10:48:02.000 I ns/openshift-operator-lifecycle-manager deployment/package-server-manager reason/ScalingReplicaSet Scaled up replica set package-server-manager-5bff7578f5 to 1
Nov 29 10:48:02.000 I ns/openshift-cluster-storage-operator replicaset/csi-snapshot-controller-67dd68cb6f reason/SuccessfulCreate Created pod: csi-snapshot-controller-67dd68cb6f-v25p4
Nov 29 10:48:02.000 I ns/openshift-operator-lifecycle-manager replicaset/package-server-manager-5bff7578f5 reason/SuccessfulCreate Created pod: package-server-manager-5bff7578f5-wrsw7
Nov 29 10:48:02.000 I ns/openshift-cluster-storage-operator replicaset/csi-snapshot-controller-7b89dcf465 reason/SuccessfulDelete Deleted pod: csi-snapshot-controller-7b89dcf465-p78zz
Nov 29 10:48:02.000 I ns/openshift-monitoring replicaset/prometheus-operator-7bb6c544df reason/SuccessfulDelete Deleted pod: prometheus-operator-7bb6c544df-4ttjk
Nov 29 10:48:02.000 I ns/openshift-apiserver pod/apiserver-547bd894b7-7ntc4 node/apiserver-547bd894b7-7ntc4 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 10:48:02.000 I ns/openshift-apiserver pod/apiserver-547bd894b7-7ntc4 node/apiserver-547bd894b7-7ntc4 reason/TerminationStoppedServing Server has stopped listening
Nov 29 10:48:02.027 I ns/openshift-monitoring pod/prometheus-operator-7bb6c544df-4ttjk node/ci-op-6p9n05l8-253f3-z47sw-master-2 reason/GracefulDelete duration/30s
Nov 29 10:48:02.340 I ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-67dd68cb6f-vswq4 node/ci-op-6p9n05l8-253f3-z47sw-master-1 container/snapshot-controller reason/ContainerStart duration/7.00s
Nov 29 10:48:02.340 I ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-67dd68cb6f-vswq4 node/ci-op-6p9n05l8-253f3-z47sw-master-1 container/snapshot-controller reason/Ready
Nov 29 10:48:02.407 I ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-7b89dcf465-p78zz node/ci-op-6p9n05l8-253f3-z47sw-master-0 reason/GracefulDelete duration/30s
#1597521812294471680 build-log.txt.gz (2 days ago)
Nov 29 10:49:08.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-9cc5b4bb6 to 2
Nov 29 10:49:08.000 I ns/openshift-oauth-apiserver replicaset/apiserver-9cc5b4bb6 reason/SuccessfulCreate Created pod: apiserver-9cc5b4bb6-gxt7p
Nov 29 10:49:08.000 I ns/openshift-oauth-apiserver replicaset/apiserver-d9f5455dc reason/SuccessfulDelete Deleted pod: apiserver-d9f5455dc-vn2zn
Nov 29 10:49:08.000 I ns/openshift-monitoring daemonset/node-exporter reason/SuccessfulDelete Deleted pod: node-exporter-96xfr
Nov 29 10:49:08.000 I ns/openshift-cluster-node-tuning-operator daemonset/tuned reason/SuccessfulDelete Deleted pod: tuned-cpphr
Nov 29 10:49:08.000 I ns/default namespace/kube-system node/apiserver-d9f5455dc-vn2zn reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 29 10:49:08.000 I ns/default namespace/kube-system node/apiserver-d9f5455dc-vn2zn reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 10:49:08.000 I ns/default namespace/kube-system node/apiserver-d9f5455dc-vn2zn reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 29 10:49:08.000 I ns/default namespace/kube-system node/apiserver-d9f5455dc-vn2zn reason/TerminationStoppedServing Server has stopped listening
Nov 29 10:49:08.073 I ns/openshift-marketplace pod/certified-operators-w6mck node/ci-op-6p9n05l8-253f3-z47sw-master-1 container/registry-server reason/Ready
Nov 29 10:49:08.122 I ns/openshift-oauth-apiserver pod/apiserver-9cc5b4bb6-mshp9 node/ci-op-6p9n05l8-253f3-z47sw-master-1 container/oauth-apiserver reason/Ready
#1596167986522099712 build-log.txt.gz (6 days ago)
Nov 25 16:50:07.016 I ns/openshift-etcd pod/installer-7-ci-op-rlgm51fb-253f3-5mh49-master-0 node/ci-op-rlgm51fb-253f3-5mh49-master-0 container/installer reason/ContainerExit code/0 cause/Completed
Nov 25 16:50:09.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (17 times)
Nov 25 16:50:10.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (18 times)
Nov 25 16:50:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-rlgm51fb-253f3-5mh49-master-1 node/ci-op-rlgm51fb-253f3-5mh49-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 16:50:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-rlgm51fb-253f3-5mh49-master-1 node/ci-op-rlgm51fb-253f3-5mh49-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 16:50:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-rlgm51fb-253f3-5mh49-master-1 node/ci-op-rlgm51fb-253f3-5mh49-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 16:50:11.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-9kxpd node/ci-op-rlgm51fb-253f3-5mh49-master-0 reason/Unhealthy Readiness probe failed:  (2 times)
#1596167986522099712 build-log.txt.gz (6 days ago)
Nov 25 16:53:02.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-rlgm51fb-253f3-5mh49-master-1 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
Nov 25 16:53:02.665 I ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-m6276 node/ci-op-rlgm51fb-253f3-5mh49-master-1 container/guard reason/Ready
Nov 25 16:53:03.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (17 times)
Nov 25 16:53:05.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-rlgm51fb-253f3-5mh49-master-2 node/ci-op-rlgm51fb-253f3-5mh49-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 16:53:05.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-rlgm51fb-253f3-5mh49-master-2 node/ci-op-rlgm51fb-253f3-5mh49-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 16:53:05.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-rlgm51fb-253f3-5mh49-master-2 node/ci-op-rlgm51fb-253f3-5mh49-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 16:53:07.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-rlgm51fb-253f3-5mh49-master-2 node/ci-op-rlgm51fb-253f3-5mh49-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
#1596167986522099712 build-log.txt.gz (6 days ago)
Nov 25 16:55:19.545 W ns/kube-system openshifttest/ingress-to-console reason/DisruptionBegan ns/openshift-console route/console disruption/ingress-to-console connection/new stopped responding to GET requests over new connections: Get "https://console-openshift-console.apps.ci-op-rlgm51fb-253f3.ci.azure.devcluster.openshift.com/healthz": dial tcp 20.221.120.44:443: i/o timeout
Nov 25 16:55:32.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-rlgm51fb-253f3-5mh49-master-1_c08efd69-4afe-440e-8a73-79fbf4cdee5d became leader
Nov 25 16:55:32.000 I ns/openshift-kube-apiserver lease/cert-regeneration-controller-lock reason/LeaderElection ci-op-rlgm51fb-253f3-5mh49-master-1_c08efd69-4afe-440e-8a73-79fbf4cdee5d became leader
Nov 25 16:56:03.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (35 times)
Nov 25 16:56:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-rlgm51fb-253f3-5mh49-master-0 node/ci-op-rlgm51fb-253f3-5mh49-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 16:56:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-rlgm51fb-253f3-5mh49-master-0 node/ci-op-rlgm51fb-253f3-5mh49-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 16:56:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-rlgm51fb-253f3-5mh49-master-0 node/ci-op-rlgm51fb-253f3-5mh49-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 16:56:10.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-rlgm51fb-253f3-5mh49-master-0 node/ci-op-rlgm51fb-253f3-5mh49-master-0 reason/ProbeError Readiness probe error: Get "https://10.0.0.6:6443/healthz": dial tcp 10.0.0.6:6443: connect: connection refused\nbody: \n
Nov 25 16:56:10.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-rlgm51fb-253f3-5mh49-master-0 node/ci-op-rlgm51fb-253f3-5mh49-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.6:6443/healthz": dial tcp 10.0.0.6:6443: connect: connection refused
Nov 25 16:56:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-rlgm51fb-253f3-5mh49-master-0 node/ci-op-rlgm51fb-253f3-5mh49-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
#1596167986522099712 build-log.txt.gz (6 days ago)
Nov 25 17:12:17.155 I ns/openshift-controller-manager pod/controller-manager-qh8cb node/ci-op-rlgm51fb-253f3-5mh49-master-1 reason/GracefulDelete duration/30s
Nov 25 17:12:17.168 I ns/openshift-controller-manager pod/controller-manager-xf8xv node/ci-op-rlgm51fb-253f3-5mh49-master-0 reason/GracefulDelete duration/30s
Nov 25 17:12:17.172 I ns/openshift-controller-manager pod/controller-manager-bwtp8 node/ci-op-rlgm51fb-253f3-5mh49-master-2 reason/GracefulDelete duration/30s
Nov 25 17:12:17.178 W clusteroperator/openshift-controller-manager condition/Progressing status/True reason/_DesiredStateNotYetAchieved changed: Progressing: daemonset/controller-manager: observed generation is 7, desired generation is 8.
Nov 25 17:12:17.178 - 39s   W clusteroperator/openshift-controller-manager condition/Progressing status/True reason/Progressing: daemonset/controller-manager: observed generation is 7, desired generation is 8.
Nov 25 17:12:17.630 E ns/openshift-console-operator pod/console-operator-85d674f9d9-nxjnl node/ci-op-rlgm51fb-253f3-5mh49-master-1 container/console-operator reason/ContainerExit code/1 cause/Error hift-console-operator", Name:"console-operator-85d674f9d9-nxjnl", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI1125 17:12:14.993176       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI1125 17:12:14.993179       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-85d674f9d9-nxjnl", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI1125 17:12:14.993186       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI1125 17:12:14.993197       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI1125 17:12:14.993199       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI1125 17:12:14.993211       1 base_controller.go:167] Shutting down HealthCheckController ...\nI1125 17:12:14.993221       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI1125 17:12:14.993218       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-85d674f9d9-nxjnl", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI1125 17:12:14.993232       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI1125 17:12:14.993233       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI1125 17:12:14.993294       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI1125 17:12:14.993314       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nW1125 17:12:14.993477       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Nov 25 17:12:17.664 I ns/openshift-console-operator pod/console-operator-85d674f9d9-nxjnl node/ci-op-rlgm51fb-253f3-5mh49-master-1 reason/Deleted
Nov 25 17:12:17.690 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-8576896f76-2cf7h node/ci-op-rlgm51fb-253f3-5mh49-master-1 container/webhook reason/ContainerExit code/2 cause/Error
Nov 25 17:12:17.716 I ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-8576896f76-2cf7h node/ci-op-rlgm51fb-253f3-5mh49-master-1 reason/Deleted
Nov 25 17:12:18.000 I ns/openshift-monitoring pod/prometheus-operator-7fb6bfbd68-x8pj8 node/ci-op-rlgm51fb-253f3-5mh49-master-1 container/kube-rbac-proxy reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:baedb268ac66456018fb30af395bb3d69af5fff3252ff5d549f0231b1ebb6901
Nov 25 17:12:18.000 I ns/openshift-monitoring pod/prometheus-operator-7fb6bfbd68-x8pj8 node/ci-op-rlgm51fb-253f3-5mh49-master-1 container/prometheus-operator reason/Created
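The console-operator exit dump above records the same graceful-termination sequence the apiservers emit throughout these logs: TerminationStart (become unready but keep serving), a minimal shutdown duration, TerminationStoppedServing, then draining in-flight requests. A compact Go sketch of that pattern, with illustrative durations and port rather than the operators' real values:

```go
package main

import (
	"context"
	"net/http"
	"os/signal"
	"sync/atomic"
	"syscall"
	"time"
)

func main() {
	var ready atomic.Bool
	ready.Store(true)

	mux := http.NewServeMux()
	mux.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if !ready.Load() {
			http.Error(w, "shutting down", http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	srv := &http.Server{Addr: ":8080", Handler: mux}
	go srv.ListenAndServe()

	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
	defer stop()
	<-ctx.Done()                 // TerminationStart: signal received
	ready.Store(false)           // becoming unready, but keeping serving
	time.Sleep(15 * time.Second) // TerminationMinimalShutdownDurationFinished

	shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	srv.Shutdown(shutdownCtx) // stop listening, drain in-flight requests
}
```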
#1596211792730656768 build-log.txt.gz (6 days ago)
Nov 25 19:29:44.857 I ns/openshift-marketplace pod/community-operators-tn6fq node/ci-op-0qbp7d6t-253f3-p6lt2-worker-westus-lrwzj reason/Deleted
Nov 25 19:29:46.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-tt2p8 node/ci-op-0qbp7d6t-253f3-p6lt2-master-0 reason/Unhealthy Readiness probe failed:  (2 times)
Nov 25 19:29:51.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (19 times)
Nov 25 19:29:51.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-tt2p8 node/ci-op-0qbp7d6t-253f3-p6lt2-master-0 reason/Unhealthy Readiness probe failed:  (3 times)
Nov 25 19:29:51.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-tt2p8 node/ci-op-0qbp7d6t-253f3-p6lt2-master-0 reason/Unhealthy Readiness probe failed:  (4 times)
Nov 25 19:29:52.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0qbp7d6t-253f3-p6lt2-master-0 node/ci-op-0qbp7d6t-253f3-p6lt2-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 19:29:52.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0qbp7d6t-253f3-p6lt2-master-0 node/ci-op-0qbp7d6t-253f3-p6lt2-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 19:29:52.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0qbp7d6t-253f3-p6lt2-master-0 node/ci-op-0qbp7d6t-253f3-p6lt2-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 19:29:52.047 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-tt2p8 node/ci-op-0qbp7d6t-253f3-p6lt2-master-0 container/guard reason/NotReady
Nov 25 19:29:52.576 - 4920s W alert/AlertmanagerReceiversNotConfigured ns/openshift-monitoring ALERTS{alertname="AlertmanagerReceiversNotConfigured", alertstate="firing", namespace="openshift-monitoring", prometheus="openshift-monitoring/k8s", severity="warning"}
Nov 25 19:29:53.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-0qbp7d6t-253f3-p6lt2-master-0 node/ci-op-0qbp7d6t-253f3-p6lt2-master-0 reason/ProbeError Readiness probe error: Get "https://10.0.0.6:6443/healthz": dial tcp 10.0.0.6:6443: connect: connection refused\nbody: \n
#1596211792730656768 build-log.txt.gz (6 days ago)
Nov 25 19:32:45.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/NodeCurrentRevisionChanged Updated node "ci-op-0qbp7d6t-253f3-p6lt2-master-2" from revision 6 to 7 because static pod is ready
Nov 25 19:32:45.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available"
Nov 25 19:32:45.613 W clusteroperator/etcd condition/Progressing status/False reason/AsExpected changed: NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found
Nov 25 19:32:50.935 I ns/openshift-etcd pod/installer-2-ci-op-0qbp7d6t-253f3-p6lt2-master-1 node/ci-op-0qbp7d6t-253f3-p6lt2-master-1 reason/DeletedAfterCompletion
Nov 25 19:32:55.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (18 times)
Nov 25 19:32:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0qbp7d6t-253f3-p6lt2-master-1 node/ci-op-0qbp7d6t-253f3-p6lt2-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 19:32:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0qbp7d6t-253f3-p6lt2-master-1 node/ci-op-0qbp7d6t-253f3-p6lt2-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 19:32:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0qbp7d6t-253f3-p6lt2-master-1 node/ci-op-0qbp7d6t-253f3-p6lt2-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 19:32:58.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0qbp7d6t-253f3-p6lt2-master-1 node/ci-op-0qbp7d6t-253f3-p6lt2-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 25 19:33:00.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-0qbp7d6t-253f3-p6lt2-master-1 node/ci-op-0qbp7d6t-253f3-p6lt2-master-1 reason/ProbeError Readiness probe error: Get "https://10.0.0.7:6443/healthz": dial tcp 10.0.0.7:6443: connect: connection refused\nbody: \n
Nov 25 19:33:00.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-0qbp7d6t-253f3-p6lt2-master-1 node/ci-op-0qbp7d6t-253f3-p6lt2-master-1 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.7:6443/healthz": dial tcp 10.0.0.7:6443: connect: connection refused
#1596211792730656768 build-log.txt.gz (6 days ago)
Nov 25 19:34:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0qbp7d6t-253f3-p6lt2-master-2 node/ci-op-0qbp7d6t-253f3-p6lt2-master-2 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 25 19:35:00.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (34 times)
Nov 25 19:35:00.396 I ns/openshift-kube-apiserver pod/installer-10-ci-op-0qbp7d6t-253f3-p6lt2-master-2 node/ci-op-0qbp7d6t-253f3-p6lt2-master-2 container/installer reason/ContainerExit code/0 cause/Completed
Nov 25 19:35:03.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (35 times)
Nov 25 19:35:53.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (36 times)
Nov 25 19:36:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0qbp7d6t-253f3-p6lt2-master-2 node/ci-op-0qbp7d6t-253f3-p6lt2-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 19:36:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0qbp7d6t-253f3-p6lt2-master-2 node/ci-op-0qbp7d6t-253f3-p6lt2-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 19:36:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0qbp7d6t-253f3-p6lt2-master-2 node/ci-op-0qbp7d6t-253f3-p6lt2-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 19:36:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0qbp7d6t-253f3-p6lt2-master-2 node/ci-op-0qbp7d6t-253f3-p6lt2-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 25 19:36:13.000 W ns/openshift-network-diagnostics node/ci-op-0qbp7d6t-253f3-p6lt2-worker-westus-lrwzj reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ci-op-0qbp7d6t-253f3-p6lt2-master-0: failed to establish a TCP connection to 10.0.0.6:6443: dial tcp 10.0.0.6:6443: connect: connection refused
Nov 25 19:36:13.000 W ns/openshift-network-diagnostics node/ci-op-0qbp7d6t-253f3-p6lt2-worker-westus-lrwzj reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ci-op-0qbp7d6t-253f3-p6lt2-master-1: failed to establish a TCP connection to 10.0.0.7:6443: dial tcp 10.0.0.7:6443: connect: connection refused
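The ConnectivityOutageDetected events above are raw TCP reachability failures against the apiserver endpoints. A small Go sketch of an equivalent check, reusing the 10.0.0.6:6443 address from the log line and assuming a 2-second timeout:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// checkEndpoint attempts a plain TCP dial, the same kind of test behind
// the network-diagnostics outage events above.
func checkEndpoint(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return fmt.Errorf("failed to establish a TCP connection to %s: %w", addr, err)
	}
	return conn.Close()
}

func main() {
	if err := checkEndpoint("10.0.0.6:6443"); err != nil {
		fmt.Println("connectivity outage detected:", err)
	} else {
		fmt.Println("endpoint reachable")
	}
}
```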
#1596211792730656768 build-log.txt.gz (6 days ago)
Nov 25 19:52:17.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3."
Nov 25 19:52:17.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-8649b44c86 to 2
Nov 25 19:52:17.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-7ddb757b84 to 1
Nov 25 19:52:17.000 I ns/openshift-oauth-apiserver replicaset/apiserver-7ddb757b84 reason/SuccessfulCreate Created pod: apiserver-7ddb757b84-6vns5
Nov 25 19:52:17.000 I ns/openshift-oauth-apiserver replicaset/apiserver-8649b44c86 reason/SuccessfulDelete Deleted pod: apiserver-8649b44c86-fh8pw
Nov 25 19:52:17.000 I ns/default namespace/kube-system node/apiserver-8649b44c86-fh8pw reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 25 19:52:17.000 I ns/default namespace/kube-system node/apiserver-8649b44c86-fh8pw reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 25 19:52:17.000 I ns/default namespace/kube-system node/apiserver-8649b44c86-fh8pw reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 25 19:52:17.000 I ns/default namespace/kube-system node/apiserver-8649b44c86-fh8pw reason/TerminationStoppedServing Server has stopped listening
Nov 25 19:52:17.426 I ns/openshift-marketplace pod/community-operators-jq8hn node/ci-op-0qbp7d6t-253f3-p6lt2-worker-westus-lrwzj container/registry-server reason/ContainerStart duration/3.00s
Nov 25 19:52:17.576 I ns/openshift-oauth-apiserver pod/apiserver-8649b44c86-fh8pw node/ci-op-0qbp7d6t-253f3-p6lt2-master-2 reason/GracefulDelete duration/70s
#1596211792730656768 build-log.txt.gz (6 days ago)
Nov 25 19:52:29.000 I ns/openshift-monitoring deployment/kube-state-metrics reason/ScalingReplicaSet Scaled up replica set kube-state-metrics-6f65d898ff to 1
Nov 25 19:52:29.000 I ns/openshift-monitoring replicaset/kube-state-metrics-6f65d898ff reason/SuccessfulCreate Created pod: kube-state-metrics-6f65d898ff-mvtjc
Nov 25 19:52:29.000 I ns/openshift-operator-lifecycle-manager replicaset/catalog-operator-885c99fdf reason/SuccessfulDelete Deleted pod: catalog-operator-885c99fdf-f5zvh
Nov 25 19:52:29.000 I ns/openshift-monitoring daemonset/node-exporter reason/SuccessfulDelete Deleted pod: node-exporter-khjt6
Nov 25 19:52:29.000 I ns/openshift-operator-lifecycle-manager replicaset/olm-operator-76d58bd996 reason/SuccessfulDelete Deleted pod: olm-operator-76d58bd996-87mcn
Nov 25 19:52:29.000 I ns/openshift-apiserver pod/apiserver-75b9bb6447-rdqzz node/apiserver-75b9bb6447-rdqzz reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 25 19:52:29.000 I ns/openshift-apiserver pod/apiserver-75b9bb6447-rdqzz node/apiserver-75b9bb6447-rdqzz reason/TerminationStoppedServing Server has stopped listening
Nov 25 19:52:29.102 I ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-0qbp7d6t-253f3-p6lt2-worker-westus-2b2xl reason/Deleted
Nov 25 19:52:29.141 I ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-0qbp7d6t-253f3-p6lt2-worker-westus-2b2xl container/alertmanager reason/ContainerExit code/0 cause/Completed
Nov 25 19:52:29.141 I ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-0qbp7d6t-253f3-p6lt2-worker-westus-2b2xl container/prom-label-proxy reason/ContainerExit code/0 cause/Completed
Nov 25 19:52:29.141 I ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-0qbp7d6t-253f3-p6lt2-worker-westus-2b2xl container/kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
#1596211792730656768 build-log.txt.gz (6 days ago)
Nov 25 19:53:31.000 W ns/openshift-oauth-apiserver pod/apiserver-596db6696f-jqnbr node/ci-op-0qbp7d6t-253f3-p6lt2-master-2 reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]etcd ok\n[-]informer-sync failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/openshift.io-StartUserInformer ok\n[+]poststarthook/openshift.io-StartOAuthInformer ok\n[+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok\n[+]shutdown ok\nreadyz check failed\n\n
Nov 25 19:53:31.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-8649b44c86 to 1
Nov 25 19:53:31.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-596db6696f to 2
Nov 25 19:53:31.000 I ns/openshift-oauth-apiserver replicaset/apiserver-596db6696f reason/SuccessfulCreate Created pod: apiserver-596db6696f-gxltl
Nov 25 19:53:31.000 I ns/openshift-oauth-apiserver replicaset/apiserver-8649b44c86 reason/SuccessfulDelete Deleted pod: apiserver-8649b44c86-w5kz6
Nov 25 19:53:31.000 I ns/default namespace/kube-system node/apiserver-8649b44c86-w5kz6 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 25 19:53:31.000 I ns/default namespace/kube-system node/apiserver-8649b44c86-w5kz6 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 25 19:53:31.000 I ns/default namespace/kube-system node/apiserver-8649b44c86-w5kz6 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 25 19:53:31.000 I ns/default namespace/kube-system node/apiserver-8649b44c86-w5kz6 reason/TerminationStoppedServing Server has stopped listening
Nov 25 19:53:31.000 W ns/openshift-oauth-apiserver pod/apiserver-596db6696f-jqnbr node/ci-op-0qbp7d6t-253f3-p6lt2-master-2 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500
Nov 25 19:53:31.053 I ns/openshift-cluster-csi-drivers pod/azure-disk-csi-driver-node-t2x5d node/ci-op-0qbp7d6t-253f3-p6lt2-worker-westus-2b2xl container/azure-inject-credentials reason/ContainerExit code/0 cause/Completed
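The ProbeError body above is the readiness endpoint's per-check report: the probe GETs /readyz, and any non-2xx response (here a 500 with [-]informer-sync failed) marks the pod unready. A hedged Go sketch of such a probe client, with an illustrative endpoint and TLS verification skipped for brevity (the real kubelet probe trusts the serving CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://10.0.0.6:6443/readyz") // illustrative endpoint
	if err != nil {
		fmt.Println("probe error:", err) // e.g. connect: connection refused
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		fmt.Printf("HTTP probe failed with statuscode: %d\nbody:\n%s", resp.StatusCode, body)
		return
	}
	fmt.Println("ready")
}
```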
#1596124454641995776 build-log.txt.gz (6 days ago)
Nov 25 13:47:07.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" (2 times)
Nov 25 13:47:07.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/UnhealthyEtcdMember unhealthy members: ci-op-r0280fzx-253f3-wqp84-master-2
Nov 25 13:47:08.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (21 times)
Nov 25 13:47:11.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-4qgc2 node/ci-op-r0280fzx-253f3-wqp84-master-2 reason/Unhealthy Readiness probe failed:  (6 times)
Nov 25 13:47:16.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-4qgc2 node/ci-op-r0280fzx-253f3-wqp84-master-2 reason/Unhealthy Readiness probe failed:  (7 times)
Nov 25 13:47:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-r0280fzx-253f3-wqp84-master-1 node/ci-op-r0280fzx-253f3-wqp84-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 13:47:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-r0280fzx-253f3-wqp84-master-1 node/ci-op-r0280fzx-253f3-wqp84-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 13:47:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-r0280fzx-253f3-wqp84-master-1 node/ci-op-r0280fzx-253f3-wqp84-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 13:47:17.662 I ns/openshift-marketplace pod/certified-operators-4fvzc node/ci-op-r0280fzx-253f3-wqp84-worker-centralus2-9ng44 reason/Scheduled
Nov 25 13:47:17.668 I ns/openshift-marketplace pod/certified-operators-4fvzc node/ reason/Created
Nov 25 13:47:19.000 I ns/openshift-marketplace pod/certified-operators-4fvzc node/ci-op-r0280fzx-253f3-wqp84-worker-centralus2-9ng44 container/registry-server reason/Pulled duration/0.590s image/registry.redhat.io/redhat/certified-operator-index:v4.9
#1596124454641995776 build-log.txt.gz (6 days ago)
Nov 25 13:50:02.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/NodeCurrentRevisionChanged Updated node "ci-op-r0280fzx-253f3-wqp84-master-0" from revision 6 to 7 because static pod is ready
Nov 25 13:50:02.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-r0280fzx-253f3-wqp84-master-0 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-r0280fzx-253f3-wqp84-master-0 is unhealthy"
Nov 25 13:50:02.326 W clusteroperator/etcd condition/Progressing status/False reason/AsExpected changed: NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found
Nov 25 13:50:07.590 I ns/openshift-etcd pod/installer-2-ci-op-r0280fzx-253f3-wqp84-master-1 node/ci-op-r0280fzx-253f3-wqp84-master-1 reason/DeletedAfterCompletion
Nov 25 13:50:25.333 W clusterversion/version changed Failing to False
Nov 25 13:50:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-r0280fzx-253f3-wqp84-master-2 node/ci-op-r0280fzx-253f3-wqp84-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 13:50:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-r0280fzx-253f3-wqp84-master-2 node/ci-op-r0280fzx-253f3-wqp84-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 13:50:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-r0280fzx-253f3-wqp84-master-2 node/ci-op-r0280fzx-253f3-wqp84-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 13:50:27.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-r0280fzx-253f3-wqp84-master-2 node/ci-op-r0280fzx-253f3-wqp84-master-2 reason/ProbeError Readiness probe error: Get "https://10.0.0.7:6443/healthz": dial tcp 10.0.0.7:6443: connect: connection refused\nbody: \n
Nov 25 13:50:27.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-r0280fzx-253f3-wqp84-master-2 node/ci-op-r0280fzx-253f3-wqp84-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.7:6443/healthz": dial tcp 10.0.0.7:6443: connect: connection refused
Nov 25 13:50:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-r0280fzx-253f3-wqp84-master-2 node/ci-op-r0280fzx-253f3-wqp84-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
#1596124454641995776 build-log.txt.gz (6 days ago)
Nov 25 13:53:04.000 I ns/openshift-marketplace pod/redhat-operators-xkd9q node/ci-op-r0280fzx-253f3-wqp84-worker-centralus3-tfpb2 container/registry-server reason/Killing
Nov 25 13:53:05.648 I ns/openshift-marketplace pod/community-operators-4qxlg node/ci-op-r0280fzx-253f3-wqp84-worker-centralus1-78hcc container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 25 13:53:05.687 I ns/openshift-marketplace pod/redhat-operators-xkd9q node/ci-op-r0280fzx-253f3-wqp84-worker-centralus3-tfpb2 container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 25 13:53:05.726 I ns/openshift-marketplace pod/community-operators-4qxlg node/ci-op-r0280fzx-253f3-wqp84-worker-centralus1-78hcc reason/Deleted
Nov 25 13:53:05.727 I ns/openshift-marketplace pod/redhat-operators-xkd9q node/ci-op-r0280fzx-253f3-wqp84-worker-centralus3-tfpb2 reason/Deleted
Nov 25 13:53:35.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-r0280fzx-253f3-wqp84-master-0 node/ci-op-r0280fzx-253f3-wqp84-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 13:53:35.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-r0280fzx-253f3-wqp84-master-0 node/ci-op-r0280fzx-253f3-wqp84-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 13:53:35.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-r0280fzx-253f3-wqp84-master-0 node/ci-op-r0280fzx-253f3-wqp84-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 13:53:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-r0280fzx-253f3-wqp84-master-0 node/ci-op-r0280fzx-253f3-wqp84-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 25 13:53:38.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-r0280fzx-253f3-wqp84-master-0 node/ci-op-r0280fzx-253f3-wqp84-master-0 reason/ProbeError Readiness probe error: Get "https://10.0.0.8:6443/healthz": dial tcp 10.0.0.8:6443: connect: connection refused\nbody: \n
Nov 25 13:53:38.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-r0280fzx-253f3-wqp84-master-0 node/ci-op-r0280fzx-253f3-wqp84-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.8:6443/healthz": dial tcp 10.0.0.8:6443: connect: connection refused
#1596124454641995776 build-log.txt.gz (6 days ago)
Nov 25 14:07:24.000 I ns/openshift-apiserver replicaset/apiserver-7c446cc59 reason/SuccessfulCreate Created pod: apiserver-7c446cc59-gvfv9
Nov 25 14:07:24.000 I ns/openshift-cluster-storage-operator replicaset/csi-snapshot-controller-57dd576b58 reason/SuccessfulCreate Created pod: csi-snapshot-controller-57dd576b58-nkq4h
Nov 25 14:07:24.000 I ns/openshift-apiserver replicaset/apiserver-6c9d655f9f reason/SuccessfulDelete Deleted pod: apiserver-6c9d655f9f-t8kn2
Nov 25 14:07:24.000 I ns/openshift-oauth-apiserver replicaset/apiserver-795967cdcc reason/SuccessfulDelete Deleted pod: apiserver-795967cdcc-2b7hs
Nov 25 14:07:24.000 I ns/openshift-cluster-storage-operator replicaset/csi-snapshot-controller-7b89dcf465 reason/SuccessfulDelete Deleted pod: csi-snapshot-controller-7b89dcf465-xr4c7
Nov 25 14:07:24.000 I ns/default namespace/kube-system node/apiserver-795967cdcc-2b7hs reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 25 14:07:24.000 I ns/default namespace/kube-system node/apiserver-795967cdcc-2b7hs reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 25 14:07:24.000 I ns/openshift-apiserver pod/apiserver-6c9d655f9f-t8kn2 node/apiserver-6c9d655f9f-t8kn2 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 25 14:07:24.000 I ns/default namespace/kube-system node/apiserver-795967cdcc-2b7hs reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 25 14:07:24.000 I ns/openshift-apiserver pod/apiserver-6c9d655f9f-t8kn2 node/apiserver-6c9d655f9f-t8kn2 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 25 14:07:24.000 I ns/default namespace/kube-system node/apiserver-795967cdcc-2b7hs reason/TerminationStoppedServing Server has stopped listening
#1596124454641995776 build-log.txt.gz (6 days ago)
Nov 25 14:07:39.000 I ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator reason/OperatorStatusChanged Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy csi-snapshot-controller pods" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy csi-snapshot-controller pods\nCSISnapshotWebhookControllerProgressing: 1 out of 2 pods running" (12 times)
Nov 25 14:07:39.000 I ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator reason/OperatorStatusChanged Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy csi-snapshot-controller pods\nCSISnapshotWebhookControllerProgressing: 1 out of 2 pods running" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy csi-snapshot-controller pods" (12 times)
Nov 25 14:07:39.000 I ns/openshift-machine-api deployment/cluster-autoscaler-operator reason/ScalingReplicaSet Scaled down replica set cluster-autoscaler-operator-dbb95b99f to 0
Nov 25 14:07:39.000 I ns/openshift-controller-manager daemonset/controller-manager reason/SuccessfulCreate Created pod: controller-manager-fhm9s
Nov 25 14:07:39.000 I ns/openshift-machine-api replicaset/cluster-autoscaler-operator-dbb95b99f reason/SuccessfulDelete Deleted pod: cluster-autoscaler-operator-dbb95b99f-x27d2
Nov 25 14:07:39.000 I ns/openshift-apiserver pod/apiserver-6c9d655f9f-t8kn2 node/apiserver-6c9d655f9f-t8kn2 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 25 14:07:39.000 I ns/openshift-apiserver pod/apiserver-6c9d655f9f-t8kn2 node/apiserver-6c9d655f9f-t8kn2 reason/TerminationStoppedServing Server has stopped listening
Nov 25 14:07:39.002 E ns/openshift-controller-manager pod/controller-manager-f6l5x node/ci-op-r0280fzx-253f3-wqp84-master-1 container/controller-manager reason/ContainerExit code/137 cause/Error dconfig_controller.go:212] Starting buildconfig controller\nI1125 13:32:13.854966       1 shared_informer.go:247] Caches are synced for service account \nI1125 13:32:13.871444       1 factory.go:85] deploymentconfig controller caches are synced. Starting workers.\nI1125 13:32:13.880414       1 shared_informer.go:247] Caches are synced for DefaultRoleBindingController \nI1125 13:32:13.961322       1 factory.go:80] Deployer controller caches are synced. Starting workers.\nI1125 13:32:14.061791       1 build_controller.go:475] Starting build controller\nI1125 13:32:14.061816       1 build_controller.go:477] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nI1125 13:32:14.077768       1 docker_registry_service.go:156] caches synced\nI1125 13:32:14.077856       1 deleted_dockercfg_secrets.go:75] caches synced\nI1125 13:32:14.077871       1 create_dockercfg_secrets.go:219] urls found\nI1125 13:32:14.077877       1 create_dockercfg_secrets.go:225] caches synced\nI1125 13:32:14.078144       1 docker_registry_service.go:298] Updating registry URLs from map[172.30.83.166:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.83.166:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI1125 13:32:14.079319       1 deleted_token_secrets.go:70] caches synced\nE1125 14:07:19.172641       1 imagestream_controller.go:136] Error syncing image stream "openshift/dotnet-runtime": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "dotnet-runtime": the object has been modified; please apply your changes to the latest version and try again\nE1125 14:07:19.486387       1 imagestream_controller.go:136] Error syncing image stream "openshift/openjdk-11-rhel7": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "openjdk-11-rhel7": the object has been modified; please apply your changes to the latest version and try again\n
Nov 25 14:07:39.252 I ns/openshift-controller-manager pod/controller-manager-f6l5x node/ci-op-r0280fzx-253f3-wqp84-master-1 reason/Deleted
Nov 25 14:07:39.276 I ns/openshift-controller-manager pod/controller-manager-fhm9s node/ci-op-r0280fzx-253f3-wqp84-master-1 reason/Scheduled
Nov 25 14:07:39.282 I ns/openshift-controller-manager pod/controller-manager-fhm9s node/ reason/Created
#1596124454641995776 build-log.txt.gz (6 days ago)
Nov 25 14:08:56.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-795967cdcc to 1
Nov 25 14:08:56.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-855756b79b to 2
Nov 25 14:08:56.000 I ns/openshift-oauth-apiserver replicaset/apiserver-855756b79b reason/SuccessfulCreate Created pod: apiserver-855756b79b-vtxnv
Nov 25 14:08:56.000 I ns/openshift-cluster-node-tuning-operator daemonset/tuned reason/SuccessfulCreate Created pod: tuned-qh74m
Nov 25 14:08:56.000 I ns/openshift-oauth-apiserver replicaset/apiserver-795967cdcc reason/SuccessfulDelete Deleted pod: apiserver-795967cdcc-wlqmv
Nov 25 14:08:56.000 I ns/default namespace/kube-system node/apiserver-795967cdcc-wlqmv reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 25 14:08:56.000 I ns/default namespace/kube-system node/apiserver-795967cdcc-wlqmv reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 25 14:08:56.000 I ns/default namespace/kube-system node/apiserver-795967cdcc-wlqmv reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 25 14:08:56.000 I ns/default namespace/kube-system node/apiserver-795967cdcc-wlqmv reason/TerminationStoppedServing Server has stopped listening
Nov 25 14:08:56.228 I ns/openshift-image-registry pod/image-registry-5999b48458-fmvqm node/ci-op-r0280fzx-253f3-wqp84-worker-centralus1-78hcc container/registry reason/ContainerExit code/0 cause/Completed
Nov 25 14:08:56.271 I ns/openshift-oauth-apiserver pod/apiserver-855756b79b-5tg59 node/ci-op-r0280fzx-253f3-wqp84-master-2 container/oauth-apiserver reason/Ready
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-aws-ovn-upgrade (all) - 23 runs, 35% failed, 75% of failures match = 26% impact
#1598318248581926912 build-log.txt.gz (6 hours ago)
Dec 01 14:59:12.060 - 59s   I alert/etcdMembersDown ns/openshift-etcd pod/etcd-ip-10-0-202-110.us-east-2.compute.internal ALERTS{alertname="etcdMembersDown", alertstate="pending", job="etcd", namespace="openshift-etcd", pod="etcd-ip-10-0-202-110.us-east-2.compute.internal", prometheus="openshift-monitoring/k8s", service="etcd", severity="critical"}
Dec 01 14:59:13.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5cc9dc7d9ee0e0ec9e5c2ba896dcdebfd864c1b079415a6af507c85cc9719ad6,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (45 times)
Dec 01 14:59:14.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b5d8dd9bf-84fbb node/ip-10-0-202-110.us-east-2.compute.internal reason/Unhealthy Readiness probe failed:  (6 times)
Dec 01 14:59:19.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b5d8dd9bf-84fbb node/ip-10-0-202-110.us-east-2.compute.internal reason/Unhealthy Readiness probe failed:  (7 times)
Dec 01 14:59:19.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/UnhealthyEtcdMember unhealthy members: ip-10-0-202-110.us-east-2.compute.internal (2 times)
Dec 01 14:59:20.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-202-110.us-east-2.compute.internal node/ip-10-0-202-110 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Dec 01 14:59:20.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-202-110.us-east-2.compute.internal node/ip-10-0-202-110 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 14:59:20.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-202-110.us-east-2.compute.internal node/ip-10-0-202-110 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 14:59:22.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-202-110.us-east-2.compute.internal node/ip-10-0-202-110.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.202.110:6443/healthz": dial tcp 10.0.202.110:6443: connect: connection refused\nbody: \n
Dec 01 14:59:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-202-110.us-east-2.compute.internal node/ip-10-0-202-110 reason/TerminationGracefulTerminationFinished All pending requests processed
Dec 01 14:59:22.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-202-110.us-east-2.compute.internal node/ip-10-0-202-110.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.0.202.110:6443/healthz": dial tcp 10.0.202.110:6443: connect: connection refused
#1598318248581926912 build-log.txt.gz (6 hours ago)
Dec 01 15:02:58.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (17 times)
Dec 01 15:04:01.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (18 times)
Dec 01 15:04:04.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (19 times)
Dec 01 15:04:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-171.us-east-2.compute.internal node/ip-10-0-146-171 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Dec 01 15:04:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-171.us-east-2.compute.internal node/ip-10-0-146-171 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 15:04:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-171.us-east-2.compute.internal node/ip-10-0-146-171 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 15:04:33.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-146-171.us-east-2.compute.internal node/ip-10-0-146-171.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.146.171:6443/healthz": dial tcp 10.0.146.171:6443: connect: connection refused\nbody: \n
#1598318248581926912build-log.txt.gz6 hours ago
Dec 01 15:07:56.410 I ns/openshift-marketplace pod/redhat-marketplace-6b6qm node/ip-10-0-140-195.us-east-2.compute.internal reason/Deleted
Dec 01 15:07:56.410 I ns/openshift-marketplace pod/redhat-operators-zgffk node/ip-10-0-140-195.us-east-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Dec 01 15:07:56.412 I ns/openshift-marketplace pod/redhat-operators-zgffk node/ip-10-0-140-195.us-east-2.compute.internal reason/Deleted
Dec 01 15:08:56.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,registry.ci.openshift.org/ocp/4.10-2022-12-01-140751@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (37 times)
Dec 01 15:09:25.060 - 3449s I alert/APIRemovedInNextEUSReleaseInUse ns/openshift-kube-apiserver ALERTS{alertname="APIRemovedInNextEUSReleaseInUse", alertstate="pending", group="discovery.k8s.io", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", resource="endpointslices", severity="info", version="v1beta1"}
Dec 01 15:09:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-153.us-east-2.compute.internal node/ip-10-0-155-153 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Dec 01 15:09:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-153.us-east-2.compute.internal node/ip-10-0-155-153 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 15:09:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-153.us-east-2.compute.internal node/ip-10-0-155-153 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 15:09:52.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-153.us-east-2.compute.internal node/ip-10-0-155-153 reason/TerminationGracefulTerminationFinished All pending requests processed
Dec 01 15:09:53.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-153.us-east-2.compute.internal node/ip-10-0-155-153.us-east-2.compute.internal container/kube-apiserver reason/Killing
Dec 01 15:09:54.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-155-153.us-east-2.compute.internal node/ip-10-0-155-153.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.155.153:6443/healthz": dial tcp 10.0.155.153:6443: connect: connection refused\nbody: \n
#1598318248581926912build-log.txt.gz6 hours ago
Dec 01 15:25:58.000 I ns/openshift-oauth-apiserver replicaset/apiserver-645d844497 reason/SuccessfulCreate Created pod: apiserver-645d844497-sdddg
Dec 01 15:25:58.000 I ns/openshift-ingress replicaset/router-default-85b4d8766c reason/SuccessfulCreate Created pod: router-default-85b4d8766c-855zt
Dec 01 15:25:58.000 I ns/openshift-ingress replicaset/router-default-85b4d8766c reason/SuccessfulCreate Created pod: router-default-85b4d8766c-hkf5c
Dec 01 15:25:58.000 I ns/openshift-oauth-apiserver replicaset/apiserver-5fcbcd9545 reason/SuccessfulDelete Deleted pod: apiserver-5fcbcd9545-4q94s
Dec 01 15:25:58.000 I ns/openshift-ingress replicaset/router-default-755f4bc5cc reason/SuccessfulDelete Deleted pod: router-default-755f4bc5cc-2lqpl
Dec 01 15:25:58.000 I ns/default namespace/kube-system node/apiserver-5fcbcd9545-4q94s reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Dec 01 15:25:58.000 I ns/default namespace/kube-system node/apiserver-5fcbcd9545-4q94s reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 15:25:58.000 I ns/default namespace/kube-system node/apiserver-5fcbcd9545-4q94s reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Dec 01 15:25:58.000 I ns/default namespace/kube-system node/apiserver-5fcbcd9545-4q94s reason/TerminationStoppedServing Server has stopped listening
Dec 01 15:25:58.233 I ns/openshift-oauth-apiserver pod/apiserver-5fcbcd9545-4q94s node/ip-10-0-155-153.us-east-2.compute.internal reason/GracefulDelete duration/70s
Dec 01 15:25:58.284 W ns/openshift-oauth-apiserver pod/apiserver-645d844497-sdddg reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
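The FailedScheduling message above is the expected transient during a one-replica-at-a-time rollout: the three workers fail the master node selector, the three masters each still host an oauth-apiserver replica, and the hard pod anti-affinity keeps a second replica off any one host until the old pod's GracefulDelete (70s above) completes. A sketch of the kind of spec that produces exactly this "0/6 nodes are available" split, using the upstream Kubernetes API types (k8s.io/api and k8s.io/apimachinery modules); the label selector here is illustrative, not necessarily the operator's exact labels:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Excludes the 3 workers: only master nodes carry this label.
	nodeSelector := map[string]string{"node-role.kubernetes.io/master": ""}

	// Excludes the 3 masters while an old replica is still terminating on
	// each: a hard rule that no two replicas may share a hostname.
	affinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "openshift-oauth-apiserver"}, // illustrative label
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Println(len(nodeSelector) == 1, affinity.PodAntiAffinity != nil)
}

Once the 70s graceful delete finishes, one master frees up, the pending pod schedules, and the repeated FailedScheduling warnings stop on their own.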
#1598318248581926912build-log.txt.gz6 hours ago
Dec 01 15:26:11.000 I ns/openshift-monitoring pod/openshift-state-metrics-bc5c488f7-dkf4p node/ip-10-0-140-195.us-east-2.compute.internal container/openshift-state-metrics reason/Killing
Dec 01 15:26:11.000 W ns/openshift-apiserver pod/apiserver-5b4dbf8677-kzpm9 node/ip-10-0-202-110.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.58:8443/readyz": dial tcp 10.128.0.58:8443: connect: connection refused\nbody: \n
Dec 01 15:26:11.000 I ns/openshift-monitoring deployment/kube-state-metrics reason/ScalingReplicaSet Scaled up replica set kube-state-metrics-c7c7fb867 to 1
Dec 01 15:26:11.000 I ns/openshift-monitoring replicaset/kube-state-metrics-c7c7fb867 reason/SuccessfulCreate Created pod: kube-state-metrics-c7c7fb867-jp6ll
Dec 01 15:26:11.000 I ns/openshift-monitoring daemonset/node-exporter reason/SuccessfulDelete Deleted pod: node-exporter-6vh9t
Dec 01 15:26:11.000 I ns/openshift-apiserver pod/apiserver-5b4dbf8677-kzpm9 node/apiserver-5b4dbf8677-kzpm9 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Dec 01 15:26:11.000 I ns/openshift-apiserver pod/apiserver-5b4dbf8677-kzpm9 node/apiserver-5b4dbf8677-kzpm9 reason/TerminationStoppedServing Server has stopped listening
Dec 01 15:26:11.000 W ns/openshift-apiserver pod/apiserver-5b4dbf8677-kzpm9 node/ip-10-0-202-110.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.58:8443/readyz": dial tcp 10.128.0.58:8443: connect: connection refused
Dec 01 15:26:11.031 I ns/openshift-operator-lifecycle-manager pod/olm-operator-67554b46f6-88qk7 node/ip-10-0-202-110.us-east-2.compute.internal container/olm-operator reason/ContainerExit code/0 cause/Completed
Dec 01 15:26:11.082 I ns/openshift-operator-lifecycle-manager pod/olm-operator-67554b46f6-88qk7 node/ip-10-0-202-110.us-east-2.compute.internal reason/Deleted
Dec 01 15:26:11.083 W ns/openshift-oauth-apiserver pod/apiserver-689cd5c86b-jxr5s reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
#1598110645746667520build-log.txt.gz19 hours ago
Dec 01 01:22:35.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" (3 times)
Dec 01 01:22:35.000 W ns/openshift-etcd pod/etcd-quorum-guard-65bdd9d758-6d8d7 node/ip-10-0-155-65.us-west-1.compute.internal reason/Unhealthy Readiness probe failed:  (5 times)
Dec 01 01:22:35.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/UnhealthyEtcdMember unhealthy members: ip-10-0-155-65.us-west-1.compute.internal
Dec 01 01:22:36.126 - 59s   I alert/TargetDown ns/openshift-etcd ALERTS{alertname="TargetDown", alertstate="pending", job="etcd", namespace="openshift-etcd", prometheus="openshift-monitoring/k8s", service="etcd", severity="warning"}
Dec 01 01:22:40.000 W ns/openshift-etcd pod/etcd-quorum-guard-65bdd9d758-6d8d7 node/ip-10-0-155-65.us-west-1.compute.internal reason/Unhealthy Readiness probe failed:  (6 times)
Dec 01 01:22:42.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-65.us-west-1.compute.internal node/ip-10-0-155-65 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Dec 01 01:22:42.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-65.us-west-1.compute.internal node/ip-10-0-155-65 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 01:22:42.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-65.us-west-1.compute.internal node/ip-10-0-155-65 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 01:22:42.126 - 59s   I alert/etcdMembersDown ns/openshift-etcd pod/etcd-ip-10-0-155-65.us-west-1.compute.internal ALERTS{alertname="etcdMembersDown", alertstate="pending", job="etcd", namespace="openshift-etcd", pod="etcd-ip-10-0-155-65.us-west-1.compute.internal", prometheus="openshift-monitoring/k8s", service="etcd", severity="critical"}
Dec 01 01:22:43.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 7; 2 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 7; 2 nodes are at revision 8\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-155-65.us-west-1.compute.internal is unhealthy"
Dec 01 01:22:43.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-155-65.us-west-1.compute.internal is unhealthy"
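The operator's wording above is the quorum math for a three-member cluster: with one member (ip-10-0-155-65) down, the remaining 2 of 3 still form a majority, so etcd keeps serving and etcdMembersDown stays pending rather than firing. The tolerance follows the usual majority formula, e.g.:

package main

import "fmt"

// quorum returns the majority size and how many member failures an etcd
// cluster of n voting members can tolerate while still serving.
func quorum(n int) (majority, tolerable int) {
	majority = n/2 + 1
	return majority, n - majority
}

func main() {
	m, t := quorum(3)
	fmt.Printf("3 members: quorum %d, tolerates %d failure\n", m, t) // quorum 2, tolerates 1
}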
#1598110645746667520build-log.txt.gz19 hours ago
Dec 01 01:26:32.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (21 times)
Dec 01 01:27:37.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (22 times)
Dec 01 01:27:40.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (23 times)
Dec 01 01:28:06.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-244-151.us-west-1.compute.internal node/ip-10-0-244-151 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Dec 01 01:28:06.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-244-151.us-west-1.compute.internal node/ip-10-0-244-151 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 01:28:06.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-244-151.us-west-1.compute.internal node/ip-10-0-244-151 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 01:28:07.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-244-151.us-west-1.compute.internal node/ip-10-0-244-151.us-west-1.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.244.151:6443/healthz": dial tcp 10.0.244.151:6443: connect: connection refused\nbody: \n
#1598110645746667520build-log.txt.gz19 hours ago
Dec 01 01:33:01.095 I ns/openshift-marketplace pod/redhat-operators-m6pbj node/ip-10-0-191-138.us-west-1.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Dec 01 01:33:01.156 I ns/openshift-marketplace pod/redhat-operators-m6pbj node/ip-10-0-191-138.us-west-1.compute.internal reason/Deleted
Dec 01 01:33:01.157 I ns/openshift-marketplace pod/redhat-marketplace-j5wk4 node/ip-10-0-191-138.us-west-1.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Dec 01 01:33:01.158 I ns/openshift-marketplace pod/redhat-marketplace-j5wk4 node/ip-10-0-191-138.us-west-1.compute.internal reason/Deleted
Dec 01 01:33:32.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (44 times)
Dec 01 01:33:38.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-188-250.us-west-1.compute.internal node/ip-10-0-188-250 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Dec 01 01:33:38.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-188-250.us-west-1.compute.internal node/ip-10-0-188-250 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 01:33:38.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-188-250.us-west-1.compute.internal node/ip-10-0-188-250 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 01:33:39.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-188-250.us-west-1.compute.internal node/ip-10-0-188-250.us-west-1.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.188.250:6443/healthz": dial tcp 10.0.188.250:6443: connect: connection refused\nbody: \n
Dec 01 01:33:39.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-188-250.us-west-1.compute.internal node/ip-10-0-188-250.us-west-1.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.0.188.250:6443/healthz": dial tcp 10.0.188.250:6443: connect: connection refused
Dec 01 01:33:40.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-188-250.us-west-1.compute.internal node/ip-10-0-188-250 reason/TerminationGracefulTerminationFinished All pending requests processed
#1598110645746667520build-log.txt.gz19 hours ago
Dec 01 01:48:09.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/ServiceUpdated Updated Service/api -n openshift-apiserver because it changed
Dec 01 01:48:09.000 I ns/openshift-cluster-storage-operator replicaset/csi-snapshot-controller-7676b5cc78 reason/SuccessfulCreate Created pod: csi-snapshot-controller-7676b5cc78-ww6s8
Dec 01 01:48:09.000 I ns/openshift-machine-api replicaset/cluster-autoscaler-operator-dbb95b99f reason/SuccessfulDelete Deleted pod: cluster-autoscaler-operator-dbb95b99f-6z4mr
Dec 01 01:48:09.000 I ns/openshift-monitoring replicaset/cluster-monitoring-operator-6fd6bb47b reason/SuccessfulDelete Deleted pod: cluster-monitoring-operator-6fd6bb47b-s6v2q
Dec 01 01:48:09.000 I ns/openshift-cluster-storage-operator replicaset/csi-snapshot-controller-7b89dcf465 reason/SuccessfulDelete Deleted pod: csi-snapshot-controller-7b89dcf465-wj4v5
Dec 01 01:48:09.000 I ns/default namespace/kube-system node/apiserver-54cfd66644-jc26v reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Dec 01 01:48:09.000 I ns/default namespace/kube-system node/apiserver-54cfd66644-jc26v reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 01:48:09.000 I ns/default namespace/kube-system node/apiserver-54cfd66644-jc26v reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Dec 01 01:48:09.000 I ns/default namespace/kube-system node/apiserver-54cfd66644-jc26v reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:48:09.355 W clusteroperator/csi-snapshot-controller condition/Available status/True reason/AsExpected changed: All is well
Dec 01 01:48:09.491 E ns/openshift-kube-storage-version-migrator pod/migrator-5cb4d6d6dd-fr55t node/ip-10-0-244-151.us-west-1.compute.internal container/migrator reason/ContainerExit code/2 cause/Error I1201 00:43:40.596601       1 migrator.go:18] FLAG: --add_dir_header="false"\nI1201 00:43:40.596666       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI1201 00:43:40.596670       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI1201 00:43:40.596674       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI1201 00:43:40.596678       1 migrator.go:18] FLAG: --kubeconfig=""\nI1201 00:43:40.596681       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI1201 00:43:40.596686       1 migrator.go:18] FLAG: --log_dir=""\nI1201 00:43:40.596689       1 migrator.go:18] FLAG: --log_file=""\nI1201 00:43:40.596692       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI1201 00:43:40.596695       1 migrator.go:18] FLAG: --logtostderr="true"\nI1201 00:43:40.596698       1 migrator.go:18] FLAG: --one_output="false"\nI1201 00:43:40.596701       1 migrator.go:18] FLAG: --skip_headers="false"\nI1201 00:43:40.596703       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI1201 00:43:40.596706       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI1201 00:43:40.596709       1 migrator.go:18] FLAG: --v="2"\nI1201 00:43:40.596712       1 migrator.go:18] FLAG: --vmodule=""\nI1201 00:43:40.597890       1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI1201 00:43:51.713712       1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI1201 00:43:51.787411       1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI1201 00:43:52.792774       1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI1201 00:43:52.833832       1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\nI1201 00:48:40.161573       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF\n
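The migrator excerpt above is one long monitor line with the container's final log embedded as literal \n escapes; read that way, it shows a healthy run (both storage-version migrations succeeded) that ended with "Unexpected EOF during watch stream event decoding" when the apiserver it was watching rolled, hence the exit code 2. A small sketch for unescaping such excerpts into one log line per row; the sample string is shortened from the line above:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Shortened sample from the ContainerExit line above; the monitor embeds
	// the container's trailing log with literal "\n" sequences.
	raw := `I1201 00:43:51.713712       1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI1201 00:43:51.787411       1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\n`
	for _, line := range strings.Split(strings.TrimSuffix(raw, `\n`), `\n`) {
		fmt.Println(line)
	}
}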
#1598110645746667520build-log.txt.gz19 hours ago
Dec 01 01:48:27.000 I ns/openshift-monitoring pod/grafana-7c656577f8-wgr8q node/ip-10-0-191-138.us-west-1.compute.internal container/grafana reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:b1d25d472ace097cd0385c4180687e368376e184bc6bdcdf149648a0af52a758
Dec 01 01:48:27.000 I ns/openshift-monitoring pod/node-exporter-5kqmb node/ip-10-0-244-151.us-west-1.compute.internal container/init-textfile reason/Created
Dec 01 01:48:27.000 I ns/openshift-monitoring pod/node-exporter-5kqmb node/ip-10-0-244-151.us-west-1.compute.internal container/init-textfile reason/Pulled duration/4.281s image/registry.ci.openshift.org/ocp/4.10-2022-12-01-001518@sha256:9e69356edf4d1e36bb53156906c69762e987a454aa4ec5c83f7ac55483bede9a
Dec 01 01:48:27.000 I ns/openshift-monitoring pod/grafana-7c656577f8-wgr8q reason/AddedInterface Add eth0 [10.129.2.43/23] from ovn-kubernetes
Dec 01 01:48:27.000 I ns/openshift-monitoring statefulset/prometheus-k8s reason/SuccessfulDelete delete Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful
Dec 01 01:48:27.000 I ns/openshift-apiserver pod/apiserver-6c5bc564fb-vl6b9 node/apiserver-6c5bc564fb-vl6b9 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Dec 01 01:48:27.000 I ns/openshift-apiserver pod/apiserver-6c5bc564fb-vl6b9 node/apiserver-6c5bc564fb-vl6b9 reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:48:27.355 I ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-212-68.us-west-1.compute.internal reason/GracefulDelete duration/600s
Dec 01 01:48:27.463 W ns/openshift-monitoring pod/prometheus-adapter-5dcf5c649f-v5brk reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
Dec 01 01:48:27.472 I ns/openshift-monitoring pod/telemeter-client-677f6b7585-6mkdt node/ip-10-0-212-68.us-west-1.compute.internal container/kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
Dec 01 01:48:27.472 E ns/openshift-monitoring pod/telemeter-client-677f6b7585-6mkdt node/ip-10-0-212-68.us-west-1.compute.internal container/telemeter-client reason/ContainerExit code/2 cause/Error
#1597889246771810304build-log.txt.gz34 hours ago
Nov 30 10:34:26.000 I ns/openshift-etcd pod/etcd-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29.us-east-2.compute.internal container/etcd-ensure-env-vars reason/Pulled image/registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2
Nov 30 10:34:26.000 I ns/openshift-etcd pod/etcd-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29.us-east-2.compute.internal container/etcd-ensure-env-vars reason/Started
Nov 30 10:34:27.000 I ns/openshift-etcd pod/etcd-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29.us-east-2.compute.internal container/etcd-resources-copy reason/Created
Nov 30 10:34:27.000 I ns/openshift-etcd pod/etcd-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29.us-east-2.compute.internal container/etcd-resources-copy reason/Pulled image/registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2
Nov 30 10:34:27.000 I ns/openshift-etcd pod/etcd-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29.us-east-2.compute.internal container/etcd-resources-copy reason/Started
Nov 30 10:34:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-172.us-east-2.compute.internal node/ip-10-0-133-172 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 30 10:34:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-172.us-east-2.compute.internal node/ip-10-0-133-172 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 10:34:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-172.us-east-2.compute.internal node/ip-10-0-133-172 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 10:34:27.393 I ns/openshift-etcd pod/etcd-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29.us-east-2.compute.internal container/etcd-ensure-env-vars reason/ContainerExit code/0 cause/Completed
Nov 30 10:34:28.000 I ns/openshift-etcd pod/etcd-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29.us-east-2.compute.internal container/etcd reason/Created
Nov 30 10:34:28.000 I ns/openshift-etcd pod/etcd-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29.us-east-2.compute.internal container/etcd reason/Pulled image/registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2
#1597889246771810304build-log.txt.gz34 hours ago
Nov 30 10:37:01.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (17 times)
Nov 30 10:38:01.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (18 times)
Nov 30 10:39:06.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (19 times)
Nov 30 10:39:09.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-30-094246@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (20 times)
Nov 30 10:39:25.692 - 3749s I alert/APIRemovedInNextEUSReleaseInUse ns/openshift-kube-apiserver ALERTS{alertname="APIRemovedInNextEUSReleaseInUse", alertstate="pending", group="policy", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", resource="podsecuritypolicies", severity="info", version="v1beta1"}
Nov 30 10:39:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-155.us-east-2.compute.internal node/ip-10-0-167-155 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 30 10:39:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-155.us-east-2.compute.internal node/ip-10-0-167-155 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 10:39:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-155.us-east-2.compute.internal node/ip-10-0-167-155 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 10:39:51.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-155.us-east-2.compute.internal node/ip-10-0-167-155 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 10:39:52.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-155.us-east-2.compute.internal node/ip-10-0-167-155.us-east-2.compute.internal container/kube-apiserver reason/Killing
Nov 30 10:39:52.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-167-155.us-east-2.compute.internal node/ip-10-0-167-155.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.167.155:6443/healthz": dial tcp 10.0.167.155:6443: connect: connection refused\nbody: \n
#1597889246771810304build-log.txt.gz34 hours ago
Nov 30 10:45:05.000 I ns/openshift-operator-lifecycle-manager job/collect-profiles-27830085 reason/Completed Job completed
Nov 30 10:45:05.000 I ns/openshift-operator-lifecycle-manager cronjob/collect-profiles reason/SawCompletedJob Saw completed job: collect-profiles-27830085, status: Complete
Nov 30 10:45:05.000 I ns/openshift-operator-lifecycle-manager cronjob/collect-profiles reason/SuccessfulDelete Deleted job collect-profiles-27830040
Nov 30 10:45:05.455 I ns/openshift-operator-lifecycle-manager pod/collect-profiles-27830040--1-nzw4r node/ip-10-0-243-125.us-east-2.compute.internal reason/DeletedAfterCompletion
Nov 30 10:45:05.456 I ns/openshift-operator-lifecycle-manager pod/collect-profiles-27830040--1-p98bn node/ip-10-0-243-125.us-east-2.compute.internal reason/DeletedAfterCompletion
Nov 30 10:45:14.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 30 10:45:14.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 10:45:14.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 10:45:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 10:45:18.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.237.29:6443/healthz": dial tcp 10.0.237.29:6443: connect: connection refused\nbody: \n
Nov 30 10:45:18.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-237-29.us-east-2.compute.internal node/ip-10-0-237-29.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.0.237.29:6443/healthz": dial tcp 10.0.237.29:6443: connect: connection refused
#1597889246771810304build-log.txt.gz34 hours ago
Nov 30 11:02:53.073 I ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-7d76dc767-qvnh4 node/ reason/Created
Nov 30 11:02:53.081 I ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-7d76dc767-qvnh4 node/ip-10-0-133-172.us-east-2.compute.internal reason/Scheduled
Nov 30 11:02:54.000 I ns/openshift-operator-lifecycle-manager pod/package-server-manager-5759556757-nzb6t reason/AddedInterface Add eth0 [10.130.0.88/23] from ovn-kubernetes
Nov 30 11:02:54.000 I ns/openshift-operator-lifecycle-manager pod/catalog-operator-6d68fc84f-fs8gh reason/AddedInterface Add eth0 [10.130.0.90/23] from ovn-kubernetes
Nov 30 11:02:54.000 I ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator reason/CustomResourceDefinitionUpdated Updated CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it changed
Nov 30 11:02:54.307 E ns/openshift-console-operator pod/console-operator-85d674f9d9-wnbwh node/ip-10-0-237-29.us-east-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error od", Namespace:"openshift-console-operator", Name:"console-operator-85d674f9d9-wnbwh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI1130 11:02:51.791411       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI1130 11:02:51.791426       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-85d674f9d9-wnbwh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI1130 11:02:51.791437       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI1130 11:02:51.791616       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI1130 11:02:51.791626       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI1130 11:02:51.791642       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI1130 11:02:51.792226       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI1130 11:02:51.791658       1 base_controller.go:167] Shutting down ManagementStateController ...\nI1130 11:02:51.791678       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI1130 11:02:51.791686       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI1130 11:02:51.791693       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI1130 11:02:51.791691       1 base_controller.go:114] Shutting down worker of StatusSyncer_console controller ...\nI1130 11:02:51.792340       1 base_controller.go:104] All StatusSyncer_console workers have been terminated\nI1130 11:02:51.791700       1 base_controller.go:167] Shutting down HealthCheckController ...\nW1130 11:02:51.791702       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Nov 30 11:02:54.307 E ns/openshift-console-operator pod/console-operator-85d674f9d9-wnbwh node/ip-10-0-237-29.us-east-2.compute.internal container/console-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Nov 30 11:02:54.313 W ns/openshift-apiserver pod/apiserver-7787999585-9sqgv reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 11:02:54.321 I ns/openshift-console-operator pod/console-operator-85d674f9d9-wnbwh node/ip-10-0-237-29.us-east-2.compute.internal reason/Deleted
#1597889246771810304build-log.txt.gz34 hours ago
Nov 30 11:03:00.000 W ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-166-170.us-east-2.compute.internal reason/FailedMount MountVolume.SetUp failed for volume "tls-assets" : secret "prometheus-k8s-tls-assets" not found
Nov 30 11:03:00.000 I ns/openshift-cluster-node-tuning-operator deployment/cluster-node-tuning-operator reason/ScalingReplicaSet Scaled down replica set cluster-node-tuning-operator-b9ccd7545 to 0
Nov 30 11:03:00.000 I ns/openshift-monitoring statefulset/prometheus-k8s reason/SuccessfulCreate create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful
Nov 30 11:03:00.000 I ns/openshift-monitoring statefulset/prometheus-k8s reason/SuccessfulCreate create Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful
Nov 30 11:03:00.000 I ns/openshift-cluster-node-tuning-operator replicaset/cluster-node-tuning-operator-b9ccd7545 reason/SuccessfulDelete Deleted pod: cluster-node-tuning-operator-b9ccd7545-4snbs
Nov 30 11:03:00.000 I ns/openshift-apiserver pod/apiserver-798cbc7657-vf6wn node/apiserver-798cbc7657-vf6wn reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 11:03:00.000 I ns/openshift-apiserver pod/apiserver-798cbc7657-vf6wn node/apiserver-798cbc7657-vf6wn reason/TerminationStoppedServing Server has stopped listening
Nov 30 11:03:00.000 W ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-243-125.us-east-2.compute.internal reason/Unhealthy Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of 64fa52bfbb9049dda7baf535416c73ecdf02fcc688a3993b7644a8fc5e9c2c9c is running failed: container process not found
Nov 30 11:03:00.000 W ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-166-170.us-east-2.compute.internal reason/Unhealthy Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of d0c975ad3aec054b95ba02388aba6e8b191e586608060c7bf7ead2ad042e577e is running failed: container process not found
Nov 30 11:03:00.020 I ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-8bbd458b5-zzjx4 node/ip-10-0-133-172.us-east-2.compute.internal container/cluster-node-tuning-operator reason/ContainerStart duration/33.00s
Nov 30 11:03:00.020 I ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-8bbd458b5-zzjx4 node/ip-10-0-133-172.us-east-2.compute.internal container/cluster-node-tuning-operator reason/Ready
#1597889246771810304build-log.txt.gz34 hours ago
Nov 30 11:03:20.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/DeploymentUpdated Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed
Nov 30 11:03:20.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4."
Nov 30 11:03:20.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-74db6bd5cc to 2
Nov 30 11:03:20.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-88fd6f7cc to 1
Nov 30 11:03:20.000 I ns/openshift-oauth-apiserver replicaset/apiserver-74db6bd5cc reason/SuccessfulDelete Deleted pod: apiserver-74db6bd5cc-v74kz
Nov 30 11:03:20.000 I ns/default namespace/kube-system node/apiserver-74db6bd5cc-v74kz reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 30 11:03:20.000 I ns/default namespace/kube-system node/apiserver-74db6bd5cc-v74kz reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 30 11:03:20.000 I ns/default namespace/kube-system node/apiserver-74db6bd5cc-v74kz reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 30 11:03:20.000 I ns/default namespace/kube-system node/apiserver-74db6bd5cc-v74kz reason/TerminationStoppedServing Server has stopped listening
Nov 30 11:03:20.805 I ns/openshift-oauth-apiserver pod/apiserver-74db6bd5cc-v74kz node/ip-10-0-237-29.us-east-2.compute.internal reason/GracefulDelete duration/70s
Nov 30 11:03:20.993 W ns/openshift-oauth-apiserver pod/apiserver-88fd6f7cc-m8fsp reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
#1597431550566207488build-log.txt.gz2 days ago
Nov 29 04:20:46.871 - 59s   I alert/PodDisruptionBudgetAtLimit ns/openshift-etcd ALERTS{alertname="PodDisruptionBudgetAtLimit", alertstate="pending", namespace="openshift-etcd", poddisruptionbudget="etcd-quorum-guard", prometheus="openshift-monitoring/k8s", severity="warning"}
Nov 29 04:20:47.190 I ns/openshift-marketplace pod/certified-operators-6kcnf node/ip-10-0-137-13.us-west-1.compute.internal container/registry-server reason/ContainerStart duration/3.00s
Nov 29 04:20:48.000 W ns/openshift-etcd pod/etcd-quorum-guard-6655857bd5-dpfqw node/ip-10-0-244-61.us-west-1.compute.internal reason/Unhealthy Readiness probe failed:  (9 times)
Nov 29 04:20:50.211 I ns/openshift-marketplace pod/redhat-operators-wmcmf node/ip-10-0-137-13.us-west-1.compute.internal reason/Scheduled
Nov 29 04:20:50.227 I ns/openshift-marketplace pod/redhat-operators-wmcmf node/ reason/Created
Nov 29 04:20:51.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-244-61.us-west-1.compute.internal node/ip-10-0-244-61 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 29 04:20:51.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-244-61.us-west-1.compute.internal node/ip-10-0-244-61 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 04:20:51.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-244-61.us-west-1.compute.internal node/ip-10-0-244-61 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 04:20:52.000 I ns/openshift-marketplace pod/redhat-operators-wmcmf node/ip-10-0-137-13.us-west-1.compute.internal container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-operator-index:v4.9
Nov 29 04:20:52.000 I ns/openshift-marketplace pod/redhat-operators-wmcmf reason/AddedInterface Add eth0 [10.129.2.32/23] from ovn-kubernetes
Nov 29 04:20:52.871 - 59s   I alert/KubeDeploymentReplicasMismatch ns/openshift-etcd container/kube-rbac-proxy-main ALERTS{alertname="KubeDeploymentReplicasMismatch", alertstate="pending", container="kube-rbac-proxy-main", deployment="etcd-quorum-guard", endpoint="https-main", job="kube-state-metrics", namespace="openshift-etcd", prometheus="openshift-monitoring/k8s", service="kube-state-metrics", severity="warning"}
#1597431550566207488build-log.txt.gz2 days ago
Nov 29 04:26:09.000 I ns/openshift-marketplace pod/redhat-marketplace-nxmjr node/ip-10-0-137-13.us-west-1.compute.internal container/registry-server reason/Pulled duration/0.904s image/registry.redhat.io/redhat/redhat-marketplace-index:v4.9
Nov 29 04:26:09.000 I ns/openshift-marketplace pod/redhat-marketplace-nxmjr node/ip-10-0-137-13.us-west-1.compute.internal container/registry-server reason/Started
Nov 29 04:26:09.781 I ns/openshift-marketplace pod/redhat-marketplace-nxmjr node/ip-10-0-137-13.us-west-1.compute.internal container/registry-server reason/ContainerStart duration/3.00s
Nov 29 04:26:13.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-189-226.us-west-1.compute.internal node/ip-10-0-189-226 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 29 04:26:13.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-189-226.us-west-1.compute.internal node/ip-10-0-189-226 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 04:26:13.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-189-226.us-west-1.compute.internal node/ip-10-0-189-226 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 04:26:14.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-189-226.us-west-1.compute.internal node/ip-10-0-189-226.us-west-1.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.189.226:6443/healthz": dial tcp 10.0.189.226:6443: connect: connection refused\nbody: \n
#1597431550566207488build-log.txt.gz2 days ago
Nov 29 04:31:14.503 I ns/openshift-marketplace pod/community-operators-mhx9x node/ip-10-0-137-13.us-west-1.compute.internal reason/GracefulDelete duration/1s
Nov 29 04:31:16.000 I ns/openshift-marketplace pod/community-operators-mhx9x node/ip-10-0-137-13.us-west-1.compute.internal container/registry-server reason/Killing
Nov 29 04:31:17.000 I ns/openshift-marketplace pod/community-operators-mhx9x node/ip-10-0-137-13.us-west-1.compute.internal container/registry-server reason/Killing
Nov 29 04:31:17.471 I ns/openshift-marketplace pod/community-operators-mhx9x node/ip-10-0-137-13.us-west-1.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 29 04:31:17.542 I ns/openshift-marketplace pod/community-operators-mhx9x node/ip-10-0-137-13.us-west-1.compute.internal reason/Deleted
Nov 29 04:31:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-191-46.us-west-1.compute.internal node/ip-10-0-191-46 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 29 04:31:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-191-46.us-west-1.compute.internal node/ip-10-0-191-46 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 04:31:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-191-46.us-west-1.compute.internal node/ip-10-0-191-46 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 04:31:29.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-29-032248@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (41 times)
Nov 29 04:31:30.862 I ns/openshift-marketplace pod/redhat-operators-cwj9x node/ip-10-0-137-13.us-west-1.compute.internal reason/Scheduled
Nov 29 04:31:30.890 I ns/openshift-marketplace pod/redhat-operators-cwj9x node/ reason/Created
#1597431550566207488build-log.txt.gz2 days ago
Nov 29 04:49:14.000 I ns/openshift-image-registry pod/image-registry-6df69c6ddd-qtx89 node/ip-10-0-150-88.us-west-1.compute.internal container/registry reason/Killing
Nov 29 04:49:14.000 I ns/openshift-image-registry deployment/image-registry reason/ScalingReplicaSet Scaled down replica set image-registry-6df69c6ddd to 0
Nov 29 04:49:14.000 I ns/openshift-image-registry daemonset/node-ca reason/SuccessfulCreate Created pod: node-ca-ft9f4
Nov 29 04:49:14.000 I ns/openshift-image-registry replicaset/image-registry-6df69c6ddd reason/SuccessfulDelete Deleted pod: image-registry-6df69c6ddd-qtx89
Nov 29 04:49:14.275 I ns/openshift-image-registry pod/node-ca-w54dw node/ip-10-0-191-46.us-west-1.compute.internal container/node-ca reason/ContainerExit code/0 cause/Completed
Nov 29 04:49:14.347 E ns/openshift-console-operator pod/console-operator-85d674f9d9-mmvhn node/ip-10-0-191-46.us-west-1.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error ShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI1129 04:49:04.338590       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI1129 04:49:04.338617       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-85d674f9d9-mmvhn", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI1129 04:49:04.338657       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI1129 04:49:04.338715       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI1129 04:49:04.338726       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI1129 04:49:04.338738       1 base_controller.go:167] Shutting down HealthCheckController ...\nI1129 04:49:04.338748       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI1129 04:49:04.338756       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI1129 04:49:04.338770       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI1129 04:49:04.338784       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI1129 04:49:04.338797       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI1129 04:49:04.338811       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI1129 04:49:04.338820       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI1129 04:49:04.338834       1 base_controller.go:167] Shutting down ManagementStateController ...\nI1129 04:49:04.338848       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI1129 04:49:04.338861       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nW1129 04:49:04.339023       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Nov 29 04:49:14.347 E ns/openshift-console-operator pod/console-operator-85d674f9d9-mmvhn node/ip-10-0-191-46.us-west-1.compute.internal container/console-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Nov 29 04:49:14.598 I ns/openshift-image-registry pod/node-ca-w54dw node/ip-10-0-191-46.us-west-1.compute.internal reason/Deleted
Nov 29 04:49:14.606 I ns/openshift-image-registry pod/node-ca-ft9f4 node/ip-10-0-191-46.us-west-1.compute.internal reason/Scheduled
Nov 29 04:49:14.656 I ns/openshift-image-registry pod/image-registry-59cb86cfbd-g6nqs node/ip-10-0-202-99.us-west-1.compute.internal container/registry reason/ContainerStart duration/9.00s
Nov 29 04:49:14.670 I ns/openshift-image-registry pod/node-ca-ft9f4 node/ reason/Created
#1597431550566207488build-log.txt.gz2 days ago
Nov 29 04:49:31.000 I ns/openshift-monitoring deployment/prometheus-operator reason/ScalingReplicaSet Scaled up replica set prometheus-operator-74948fb495 to 1
Nov 29 04:49:31.000 I ns/openshift-oauth-apiserver replicaset/apiserver-6f95f67779 reason/SuccessfulCreate Created pod: apiserver-6f95f67779-qmsxr
Nov 29 04:49:31.000 I ns/openshift-image-registry daemonset/node-ca reason/SuccessfulCreate Created pod: node-ca-j5vbm
Nov 29 04:49:31.000 I ns/openshift-monitoring replicaset/prometheus-operator-74948fb495 reason/SuccessfulCreate Created pod: prometheus-operator-74948fb495-6x4sn
Nov 29 04:49:31.000 I ns/openshift-oauth-apiserver replicaset/apiserver-cc754ff59 reason/SuccessfulDelete Deleted pod: apiserver-cc754ff59-7sngr
Nov 29 04:49:31.000 I ns/default namespace/kube-system node/apiserver-cc754ff59-7sngr reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 29 04:49:31.000 I ns/default namespace/kube-system node/apiserver-cc754ff59-7sngr reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 04:49:31.000 I ns/default namespace/kube-system node/apiserver-cc754ff59-7sngr reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 29 04:49:31.000 I ns/default namespace/kube-system node/apiserver-cc754ff59-7sngr reason/TerminationStoppedServing Server has stopped listening
Nov 29 04:49:31.293 I ns/openshift-oauth-apiserver pod/apiserver-cc754ff59-7sngr node/ip-10-0-244-61.us-west-1.compute.internal reason/GracefulDelete duration/70s
Nov 29 04:49:31.347 I ns/openshift-monitoring pod/prometheus-operator-74948fb495-6x4sn node/ip-10-0-244-61.us-west-1.compute.internal reason/Scheduled
#1596155332688613376build-log.txt.gz6 days ago
Nov 25 15:39:44.000 I ns/openshift-etcd pod/etcd-ip-10-0-149-237.us-west-1.compute.internal node/ip-10-0-149-237.us-west-1.compute.internal container/setup reason/Started
Nov 25 15:39:44.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-dpptk node/ip-10-0-149-237.us-west-1.compute.internal reason/Unhealthy Readiness probe failed:  (11 times)
Nov 25 15:39:45.000 I ns/openshift-etcd pod/etcd-ip-10-0-149-237.us-west-1.compute.internal node/ip-10-0-149-237.us-west-1.compute.internal container/etcd-ensure-env-vars reason/Created
Nov 25 15:39:45.000 I ns/openshift-etcd pod/etcd-ip-10-0-149-237.us-west-1.compute.internal node/ip-10-0-149-237.us-west-1.compute.internal container/etcd-ensure-env-vars reason/Pulled image/registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2
Nov 25 15:39:45.000 I ns/openshift-etcd pod/etcd-ip-10-0-149-237.us-west-1.compute.internal node/ip-10-0-149-237.us-west-1.compute.internal container/etcd-ensure-env-vars reason/Started
Nov 25 15:39:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-149-237.us-west-1.compute.internal node/ip-10-0-149-237 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 25 15:39:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-149-237.us-west-1.compute.internal node/ip-10-0-149-237 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 15:39:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-149-237.us-west-1.compute.internal node/ip-10-0-149-237 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 15:39:45.742 I ns/openshift-etcd pod/etcd-ip-10-0-149-237.us-west-1.compute.internal node/ip-10-0-149-237.us-west-1.compute.internal container/setup reason/ContainerExit code/0 cause/Completed
Nov 25 15:39:46.000 I ns/openshift-etcd pod/etcd-ip-10-0-149-237.us-west-1.compute.internal node/ip-10-0-149-237.us-west-1.compute.internal container/etcd-resources-copy reason/Created
Nov 25 15:39:46.000 I ns/openshift-etcd pod/etcd-ip-10-0-149-237.us-west-1.compute.internal node/ip-10-0-149-237.us-west-1.compute.internal container/etcd-resources-copy reason/Pulled image/registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2
#1596155332688613376 build-log.txt.gz 6 days ago
Nov 25 15:45:02.817 I ns/openshift-operator-lifecycle-manager pod/collect-profiles-27823185--1-cr7bl node/ip-10-0-137-201.us-west-1.compute.internal container/collect-profiles reason/Ready
Nov 25 15:45:03.000 I ns/openshift-multus job/ip-reconciler-27823185 reason/Completed Job completed
Nov 25 15:45:03.000 I ns/openshift-multus cronjob/ip-reconciler reason/SawCompletedJob Saw completed job: ip-reconciler-27823185, status: Complete
Nov 25 15:45:03.000 I ns/openshift-multus cronjob/ip-reconciler reason/SuccessfulDelete Deleted job ip-reconciler-27823185
Nov 25 15:45:03.968 I ns/openshift-multus pod/ip-reconciler-27823185--1-tztjq node/ip-10-0-137-201.us-west-1.compute.internal reason/DeletedAfterCompletion
Nov 25 15:45:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-229-37.us-west-1.compute.internal node/ip-10-0-229-37 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 25 15:45:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-229-37.us-west-1.compute.internal node/ip-10-0-229-37 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 15:45:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-229-37.us-west-1.compute.internal node/ip-10-0-229-37 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 15:45:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (22 times)
Nov 25 15:45:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-229-37.us-west-1.compute.internal node/ip-10-0-229-37 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 25 15:45:13.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-229-37.us-west-1.compute.internal node/ip-10-0-229-37.us-west-1.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.229.37:6443/healthz": dial tcp 10.0.229.37:6443: connect: connection refused\nbody: \n
#1596155332688613376 build-log.txt.gz 6 days ago
Nov 25 15:50:19.549 I ns/openshift-marketplace pod/redhat-operators-ph4sg node/ip-10-0-137-201.us-west-1.compute.internal container/registry-server reason/Ready
Nov 25 15:50:19.620 I ns/openshift-marketplace pod/redhat-operators-ph4sg node/ip-10-0-137-201.us-west-1.compute.internal reason/GracefulDelete duration/1s
Nov 25 15:50:21.000 I ns/openshift-marketplace pod/redhat-operators-ph4sg node/ip-10-0-137-201.us-west-1.compute.internal container/registry-server reason/Killing
Nov 25 15:50:22.597 I ns/openshift-marketplace pod/redhat-operators-ph4sg node/ip-10-0-137-201.us-west-1.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 25 15:50:22.668 I ns/openshift-marketplace pod/redhat-operators-ph4sg node/ip-10-0-137-201.us-west-1.compute.internal reason/Deleted
Nov 25 15:50:23.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-86.us-west-1.compute.internal node/ip-10-0-167-86 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 25 15:50:23.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-86.us-west-1.compute.internal node/ip-10-0-167-86 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 15:50:23.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-86.us-west-1.compute.internal node/ip-10-0-167-86 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 15:50:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-86.us-west-1.compute.internal node/ip-10-0-167-86 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 25 15:50:27.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-167-86.us-west-1.compute.internal node/ip-10-0-167-86.us-west-1.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.167.86:6443/healthz": dial tcp 10.0.167.86:6443: connect: connection refused\nbody: \n
Nov 25 15:50:27.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-167-86.us-west-1.compute.internal node/ip-10-0-167-86.us-west-1.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.0.167.86:6443/healthz": dial tcp 10.0.167.86:6443: connect: connection refused
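The ProbeError/Unhealthy pair above is the kubelet's view of the window between the apiserver closing its listener and the replacement static pod coming up: the readiness probe is an HTTPS GET that fails with "connection refused" while nothing is bound on 6443. Functionally the probe reduces to something like this sketch (TLS verification skipped here as an assumption; the kubelet's real probe machinery is more involved):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe mimics an HTTPS GET readiness probe: a dial failure such as
// "connect: connection refused" marks the probe as errored.
func probe(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: no probe CA configured
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("Readiness probe error: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Endpoint copied from the log lines above.
	if err := probe("https://10.0.167.86:6443/healthz"); err != nil {
		fmt.Println(err)
	}
}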
#1596155332688613376 build-log.txt.gz 6 days ago
Nov 25 16:08:12.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-774dfc6bc5 to 1
Nov 25 16:08:12.000 I ns/openshift-oauth-apiserver replicaset/apiserver-774dfc6bc5 reason/SuccessfulCreate Created pod: apiserver-774dfc6bc5-95dtl
Nov 25 16:08:12.000 I ns/openshift-ingress-canary daemonset/ingress-canary reason/SuccessfulCreate Created pod: ingress-canary-r4df9
Nov 25 16:08:12.000 I ns/openshift-oauth-apiserver replicaset/apiserver-7565788dcc reason/SuccessfulDelete Deleted pod: apiserver-7565788dcc-kwm78
Nov 25 16:08:12.000 I ns/openshift-operator-lifecycle-manager replicaset/catalog-operator-885c99fdf reason/SuccessfulDelete Deleted pod: catalog-operator-885c99fdf-2xlk9
Nov 25 16:08:12.000 I ns/default namespace/kube-system node/apiserver-7565788dcc-kwm78 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 25 16:08:12.000 I ns/default namespace/kube-system node/apiserver-7565788dcc-kwm78 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 25 16:08:12.000 I ns/default namespace/kube-system node/apiserver-7565788dcc-kwm78 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 25 16:08:12.000 I ns/default namespace/kube-system node/apiserver-7565788dcc-kwm78 reason/TerminationStoppedServing Server has stopped listening
Nov 25 16:08:12.116 I ns/openshift-ingress pod/router-default-5d5cdd858-txbch node/ip-10-0-196-120.us-west-1.compute.internal container/router reason/Ready
Nov 25 16:08:12.307 I ns/openshift-operator-lifecycle-manager pod/catalog-operator-6f4d5dfdbd-bwr7c node/ip-10-0-167-86.us-west-1.compute.internal container/catalog-operator reason/ContainerStart duration/16.00s
#1596155332688613376 build-log.txt.gz 6 days ago
Nov 25 16:08:26.000 I ns/openshift-controller-manager pod/controller-manager-mm45k node/ip-10-0-149-237.us-west-1.compute.internal container/controller-manager reason/Created
Nov 25 16:08:26.000 I ns/openshift-controller-manager pod/controller-manager-mm45k node/ip-10-0-149-237.us-west-1.compute.internal container/controller-manager reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:edfa8a6a6735178ebca4c9678e680c79cb9401128863ac0306d2b58a8f691b06
Nov 25 16:08:26.000 I ns/openshift-controller-manager pod/controller-manager-mm45k node/ip-10-0-149-237.us-west-1.compute.internal container/controller-manager reason/Started
Nov 25 16:08:26.000 I ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-196-120.us-west-1.compute.internal container/prometheus reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:5f4a9544cd9d5131b98b00d301dadfc864f14c3eb77cbf6a9a611e22c838f5eb
Nov 25 16:08:26.000 I ns/openshift-controller-manager pod/controller-manager-mm45k reason/AddedInterface Add eth0 [10.129.0.81/23] from ovn-kubernetes
Nov 25 16:08:26.000 I ns/openshift-apiserver pod/apiserver-77f68bcd95-bhkgc node/apiserver-77f68bcd95-bhkgc reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 25 16:08:26.000 I ns/openshift-apiserver pod/apiserver-77f68bcd95-bhkgc node/apiserver-77f68bcd95-bhkgc reason/TerminationStoppedServing Server has stopped listening
Nov 25 16:08:26.151 I ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-196-120.us-west-1.compute.internal container/init-config-reloader reason/ContainerExit code/0 cause/Completed
Nov 25 16:08:26.277 I ns/openshift-controller-manager pod/controller-manager-xt4z7 node/ip-10-0-229-37.us-west-1.compute.internal container/controller-manager reason/ContainerStart duration/2.00s
Nov 25 16:08:26.277 I ns/openshift-controller-manager pod/controller-manager-xt4z7 node/ip-10-0-229-37.us-west-1.compute.internal container/controller-manager reason/Ready
Nov 25 16:08:26.420 I ns/openshift-controller-manager pod/controller-manager-7wccf node/ip-10-0-167-86.us-west-1.compute.internal container/controller-manager reason/ContainerStart duration/2.00s
#1596155332688613376 build-log.txt.gz 6 days ago
Nov 25 16:09:31.000 I ns/openshift-marketplace pod/certified-operators-rl77j node/ip-10-0-167-86.us-west-1.compute.internal container/registry-server reason/Created
Nov 25 16:09:31.000 I ns/openshift-marketplace pod/certified-operators-rl77j node/ip-10-0-167-86.us-west-1.compute.internal container/registry-server reason/Pulled duration/16.162s image/registry.redhat.io/redhat/certified-operator-index:v4.10
Nov 25 16:09:31.000 I ns/openshift-marketplace pod/certified-operators-rl77j node/ip-10-0-167-86.us-west-1.compute.internal container/registry-server reason/Started
Nov 25 16:09:31.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-687ddb84b to 2
Nov 25 16:09:31.000 I ns/openshift-oauth-apiserver replicaset/apiserver-687ddb84b reason/SuccessfulCreate Created pod: apiserver-687ddb84b-lmg5w
Nov 25 16:09:31.000 I ns/default namespace/kube-system node/apiserver-7565788dcc-x4nvl reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 25 16:09:31.000 I ns/default namespace/kube-system node/apiserver-7565788dcc-x4nvl reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 25 16:09:31.000 I ns/default namespace/kube-system node/apiserver-7565788dcc-x4nvl reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 25 16:09:31.000 I ns/default namespace/kube-system node/apiserver-7565788dcc-x4nvl reason/TerminationStoppedServing Server has stopped listening
Nov 25 16:09:31.031 I ns/openshift-oauth-apiserver pod/apiserver-7565788dcc-x4nvl node/ip-10-0-149-237.us-west-1.compute.internal reason/GracefulDelete duration/70s
Nov 25 16:09:31.487 I ns/openshift-apiserver pod/apiserver-77f68bcd95-bhkgc node/ip-10-0-167-86.us-west-1.compute.internal container/openshift-apiserver reason/ContainerExit code/0 cause/Completed
#1596124454772019200 build-log.txt.gz 6 days ago
Nov 25 13:56:34.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-tj9mf node/ip-10-0-163-224.us-east-2.compute.internal reason/Unhealthy Readiness probe failed:  (12 times)
Nov 25 13:56:39.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-tj9mf node/ip-10-0-163-224.us-east-2.compute.internal reason/Unhealthy Readiness probe failed:  (13 times)
Nov 25 13:56:41.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/UnhealthyEtcdMember unhealthy members: ip-10-0-163-224.us-east-2.compute.internal (4 times)
Nov 25 13:56:44.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-163-224.us-east-2.compute.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-163-224.us-east-2.compute.internal is unhealthy"
Nov 25 13:56:44.788 I ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-tj9mf node/ip-10-0-163-224.us-east-2.compute.internal container/guard reason/Ready
Nov 25 13:56:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-163-224.us-east-2.compute.internal node/ip-10-0-163-224 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 25 13:56:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-163-224.us-east-2.compute.internal node/ip-10-0-163-224 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 13:56:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-163-224.us-east-2.compute.internal node/ip-10-0-163-224 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 13:56:50.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.2222222222222223 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-163-224.us-east-2.compute.internal=0.012104,etcd-ip-10-0-180-83.us-east-2.compute.internal=0.012216,etcd-ip-10-0-228-147.us-east-2.compute.internal=0.006868. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 25 13:56:50.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/NodeCurrentRevisionChanged Updated node "ip-10-0-163-224.us-east-2.compute.internal" from revision 6 to 7 because static pod is ready
Nov 25 13:56:50.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-163-224.us-east-2.compute.internal is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available"
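The EtcdMembersDegraded/EtcdMembersAvailable transitions above come from the operator's per-member health accounting. A hedged sketch of that kind of check using the etcd v3 client, in the spirit of `etcdctl endpoint health` (endpoint, key, and timeouts are illustrative, and real clusters also need client TLS certs, omitted here; this is not the operator's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://10.0.163.224:2379"}, // illustrative endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	members, err := cli.MemberList(ctx)
	if err != nil {
		panic(err)
	}
	healthy := 0
	for _, m := range members.Members {
		// Dial each member's client URL individually and issue a read, the
		// same probe `etcdctl endpoint health` performs against the "health" key.
		mc, merr := clientv3.New(clientv3.Config{Endpoints: m.ClientURLs, DialTimeout: 2 * time.Second})
		if merr == nil {
			start := time.Now()
			_, merr = mc.Get(ctx, "health")
			fmt.Printf("member %q took %s err=%v\n", m.Name, time.Since(start), merr)
			mc.Close()
		}
		if merr == nil {
			healthy++
		}
	}
	fmt.Printf("%d of %d members are available\n", healthy, len(members.Members))
}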
#1596124454772019200 build-log.txt.gz 6 days ago
Nov 25 14:00:18.489 I ns/openshift-marketplace pod/certified-operators-wvhqb node/ip-10-0-166-167.us-east-2.compute.internal reason/Deleted
Nov 25 14:00:18.527 I ns/openshift-operator-lifecycle-manager pod/collect-profiles-27823035--1-8l25w node/ip-10-0-140-133.us-east-2.compute.internal reason/DeletedAfterCompletion
Nov 25 14:01:01.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (17 times)
Nov 25 14:01:04.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (18 times)
Nov 25 14:01:58.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (19 times)
Nov 25 14:02:21.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-180-83.us-east-2.compute.internal node/ip-10-0-180-83 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 25 14:02:21.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-180-83.us-east-2.compute.internal node/ip-10-0-180-83 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 14:02:21.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-180-83.us-east-2.compute.internal node/ip-10-0-180-83 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 14:02:23.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-180-83.us-east-2.compute.internal node/ip-10-0-180-83.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.180.83:6443/healthz": dial tcp 10.0.180.83:6443: connect: connection refused\nbody: \n
Nov 25 14:02:23.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-180-83.us-east-2.compute.internal node/ip-10-0-180-83 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 25 14:02:23.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-180-83.us-east-2.compute.internal node/ip-10-0-180-83.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.0.180.83:6443/healthz": dial tcp 10.0.180.83:6443: connect: connection refused
#1596124454772019200 build-log.txt.gz 6 days ago
Nov 25 14:06:48.738 I ns/openshift-marketplace pod/redhat-marketplace-qbkp6 node/ip-10-0-140-133.us-east-2.compute.internal reason/GracefulDelete duration/1s
Nov 25 14:06:50.000 I ns/openshift-marketplace pod/redhat-marketplace-qbkp6 node/ip-10-0-140-133.us-east-2.compute.internal container/registry-server reason/Killing
Nov 25 14:06:51.701 I ns/openshift-marketplace pod/redhat-marketplace-qbkp6 node/ip-10-0-140-133.us-east-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 25 14:06:51.729 I ns/openshift-marketplace pod/redhat-marketplace-qbkp6 node/ip-10-0-140-133.us-east-2.compute.internal reason/Deleted
Nov 25 14:06:56.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (40 times)
Nov 25 14:07:41.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-228-147.us-east-2.compute.internal node/ip-10-0-228-147 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 25 14:07:41.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-228-147.us-east-2.compute.internal node/ip-10-0-228-147 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 14:07:41.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-228-147.us-east-2.compute.internal node/ip-10-0-228-147 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 14:07:43.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-228-147.us-east-2.compute.internal node/ip-10-0-228-147 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 25 14:07:44.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-228-147.us-east-2.compute.internal node/ip-10-0-228-147.us-east-2.compute.internal container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea
Nov 25 14:07:44.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.10.0-0.ci-2022-11-25-123237"} {"kube-apiserver" "1.22.8"} {"operator" "4.9.52"}] to [{"raw-internal" "4.10.0-0.ci-2022-11-25-123237"} {"kube-apiserver" "1.23.12"} {"operator" "4.10.0-0.ci-2022-11-25-123237"}]
#1596124454772019200 build-log.txt.gz 6 days ago
Nov 25 14:24:30.000 W ns/openshift-oauth-apiserver pod/apiserver-5798c55764-bfvts node/ip-10-0-163-224.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.48:8443/readyz": dial tcp 10.130.0.48:8443: connect: connection refused\nbody: \n (2 times)
Nov 25 14:24:30.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-5798c55764 to 2
Nov 25 14:24:30.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-7778d758b6 to 1
Nov 25 14:24:30.000 I ns/openshift-oauth-apiserver replicaset/apiserver-7778d758b6 reason/SuccessfulCreate Created pod: apiserver-7778d758b6-jdsbb
Nov 25 14:24:30.000 I ns/openshift-oauth-apiserver replicaset/apiserver-5798c55764 reason/SuccessfulDelete Deleted pod: apiserver-5798c55764-bfvts
Nov 25 14:24:30.000 I ns/default namespace/kube-system node/apiserver-5798c55764-bfvts reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 25 14:24:30.000 I ns/default namespace/kube-system node/apiserver-5798c55764-bfvts reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 25 14:24:30.000 I ns/default namespace/kube-system node/apiserver-5798c55764-bfvts reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 25 14:24:30.000 I ns/default namespace/kube-system node/apiserver-5798c55764-bfvts reason/TerminationStoppedServing Server has stopped listening
Nov 25 14:24:30.000 W ns/openshift-oauth-apiserver pod/apiserver-5798c55764-bfvts node/ip-10-0-163-224.us-east-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.48:8443/healthz": dial tcp 10.130.0.48:8443: connect: connection refused
Nov 25 14:24:30.000 W ns/openshift-oauth-apiserver pod/apiserver-5798c55764-bfvts node/ip-10-0-163-224.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.48:8443/readyz": dial tcp 10.130.0.48:8443: connect: connection refused (2 times)
#1596124454772019200 build-log.txt.gz 6 days ago
Nov 25 14:24:42.507 I ns/openshift-monitoring pod/prometheus-adapter-6fd6b595f7-c6tdb node/ip-10-0-140-133.us-east-2.compute.internal reason/Scheduled
Nov 25 14:24:43.000 I ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-7dc95f46f6-mnmlm node/ip-10-0-180-83.us-east-2.compute.internal container/aws-ebs-csi-driver-operator reason/Killing
Nov 25 14:24:43.000 I ns/openshift-cluster-storage-operator deployment/cluster-storage-operator reason/OperatorStatusChanged Status for clusteroperator/storage changed: Progressing changed from True to False ("AWSEBSCSIDriverOperatorCRProgressing: All is well")
Nov 25 14:24:43.000 I ns/openshift-cluster-csi-drivers deployment/aws-ebs-csi-driver-operator reason/ScalingReplicaSet Scaled down replica set aws-ebs-csi-driver-operator-7dc95f46f6 to 0
Nov 25 14:24:43.000 I ns/openshift-cluster-csi-drivers replicaset/aws-ebs-csi-driver-operator-7dc95f46f6 reason/SuccessfulDelete Deleted pod: aws-ebs-csi-driver-operator-7dc95f46f6-mnmlm
Nov 25 14:24:43.000 I ns/openshift-apiserver pod/apiserver-6fd4949456-926ph node/apiserver-6fd4949456-926ph reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 25 14:24:43.000 I ns/openshift-apiserver pod/apiserver-6fd4949456-926ph node/apiserver-6fd4949456-926ph reason/TerminationStoppedServing Server has stopped listening
Nov 25 14:24:43.192 I ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-976cb786c-lxrcm node/ip-10-0-163-224.us-east-2.compute.internal container/aws-ebs-csi-driver-operator reason/ContainerStart duration/5.00s
Nov 25 14:24:43.192 I ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-976cb786c-lxrcm node/ip-10-0-163-224.us-east-2.compute.internal container/aws-ebs-csi-driver-operator reason/Ready
Nov 25 14:24:43.222 I ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-7dc95f46f6-mnmlm node/ip-10-0-180-83.us-east-2.compute.internal reason/GracefulDelete duration/30s
Nov 25 14:24:43.249 W clusteroperator/storage condition/Progressing status/False reason/AsExpected changed: AWSEBSCSIDriverOperatorCRProgressing: All is well
#1596124454772019200 build-log.txt.gz 6 days ago
Nov 25 14:25:44.000 I ns/openshift-authentication deployment/oauth-openshift reason/ScalingReplicaSet Scaled up replica set oauth-openshift-6c6bdf96fd to 3
Nov 25 14:25:44.000 I ns/openshift-oauth-apiserver replicaset/apiserver-56955fb57f reason/SuccessfulCreate Created pod: apiserver-56955fb57f-zdz7g
Nov 25 14:25:44.000 I ns/openshift-authentication replicaset/oauth-openshift-6c6bdf96fd reason/SuccessfulCreate Created pod: oauth-openshift-6c6bdf96fd-5c8k9
Nov 25 14:25:44.000 I ns/openshift-oauth-apiserver replicaset/apiserver-5798c55764 reason/SuccessfulDelete Deleted pod: apiserver-5798c55764-tlt4s
Nov 25 14:25:44.000 I ns/openshift-authentication replicaset/oauth-openshift-7ddc68bf74 reason/SuccessfulDelete Deleted pod: oauth-openshift-7ddc68bf74-cpqkg
Nov 25 14:25:44.000 I ns/default namespace/kube-system node/apiserver-5798c55764-tlt4s reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 25 14:25:44.000 I ns/default namespace/kube-system node/apiserver-5798c55764-tlt4s reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 25 14:25:44.000 I ns/default namespace/kube-system node/apiserver-5798c55764-tlt4s reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 25 14:25:44.000 I ns/default namespace/kube-system node/apiserver-5798c55764-tlt4s reason/TerminationStoppedServing Server has stopped listening
Nov 25 14:25:44.076 I ns/openshift-authentication pod/oauth-openshift-6c6bdf96fd-pf72n node/ip-10-0-180-83.us-east-2.compute.internal container/oauth-openshift reason/Ready
Nov 25 14:25:44.109 I ns/openshift-authentication pod/oauth-openshift-7ddc68bf74-cpqkg node/ip-10-0-228-147.us-east-2.compute.internal reason/GracefulDelete duration/40s
release-openshift-origin-installer-e2e-aws-disruptive-4.9 (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#1598317449109835776 build-log.txt.gz 6 hours ago
Dec 01 14:41:59.000 I ns/openshift-cluster-storage-operator deployment/cluster-storage-operator reason/OperatorStatusChanged Status for clusteroperator/storage changed: Progressing changed from False to True ("AWSEBSProgressing: Waiting for Deployment to deploy pods")
Dec 01 14:41:59.000 I ns/openshift-oauth-apiserver replicaset/apiserver-bbc7bd49f reason/SuccessfulCreate Created pod: apiserver-bbc7bd49f-zmbzw
Dec 01 14:41:59.000 I ns/openshift-cluster-csi-drivers replicaset/aws-ebs-csi-driver-operator-5bbc5cd97c reason/SuccessfulCreate Created pod: aws-ebs-csi-driver-operator-5bbc5cd97c-jbrvs
Dec 01 14:41:59.000 I ns/openshift-cluster-samples-operator replicaset/cluster-samples-operator-7c5499f9db reason/SuccessfulCreate Created pod: cluster-samples-operator-7c5499f9db-4lsxc
Dec 01 14:41:59.000 I ns/openshift-cloud-credential-operator replicaset/pod-identity-webhook-7d6b4f447 reason/SuccessfulCreate Created pod: pod-identity-webhook-7d6b4f447-tp8l4
Dec 01 14:41:59.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-gc55n reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Dec 01 14:41:59.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-gc55n reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 14:41:59.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-gc55n reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Dec 01 14:41:59.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-gc55n reason/TerminationStoppedServing Server has stopped listening
Dec 01 14:41:59.019 I ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-79fc94f8bd-jkm2g node/ reason/Created
Dec 01 14:41:59.020 I ns/openshift-kube-controller-manager pod/installer-6-ip-10-0-159-237.us-west-1.compute.internal node/ip-10-0-159-237.us-west-1.compute.internal reason/DeletedAfterCompletion
#1598317449109835776 build-log.txt.gz 6 hours ago
Dec 01 14:42:14.000 I ns/openshift-etcd replicaset/etcd-quorum-guard-59854fb649 reason/SuccessfulCreate (combined from similar events): Created pod: etcd-quorum-guard-59854fb649-nc9l9 (3 times)
Dec 01 14:42:14.000 I ns/openshift-etcd replicaset/etcd-quorum-guard-59854fb649 reason/SuccessfulCreate (combined from similar events): Created pod: etcd-quorum-guard-59854fb649-x975t
Dec 01 14:42:14.000 I ns/openshift-etcd replicaset/etcd-quorum-guard-59854fb649 reason/SuccessfulDelete (combined from similar events): Deleted pod: etcd-quorum-guard-59854fb649-bv8l5
Dec 01 14:42:14.000 I ns/openshift-etcd replicaset/etcd-quorum-guard-59854fb649 reason/SuccessfulDelete (combined from similar events): Deleted pod: etcd-quorum-guard-59854fb649-g8gcv (3 times)
Dec 01 14:42:14.000 I ns/openshift-etcd replicaset/etcd-quorum-guard-59854fb649 reason/SuccessfulDelete (combined from similar events): Deleted pod: etcd-quorum-guard-59854fb649-x9664 (2 times)
Dec 01 14:42:14.000 I ns/openshift-apiserver pod/apiserver-66b49dcdff-5lf58 node/apiserver-66b49dcdff-5lf58 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Dec 01 14:42:14.000 I ns/openshift-apiserver pod/apiserver-66b49dcdff-5lf58 node/apiserver-66b49dcdff-5lf58 reason/TerminationStoppedServing Server has stopped listening
Dec 01 14:42:14.056 W ns/openshift-etcd pod/etcd-quorum-guard-59854fb649-x9664 reason/FailedScheduling skip schedule deleting pod: openshift-etcd/etcd-quorum-guard-59854fb649-x9664
Dec 01 14:42:14.106 I ns/openshift-etcd pod/etcd-quorum-guard-59854fb649-cb6mn node/ip-10-0-161-39.us-west-1.compute.internal container/guard reason/ContainerExit code/0 cause/Completed
Dec 01 14:42:14.130 W ns/openshift-etcd pod/etcd-quorum-guard-59854fb649-x975t reason/FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules, 3 node(s) didn't match Pod's node affinity/selector.
Dec 01 14:42:14.153 W ns/openshift-etcd pod/etcd-quorum-guard-59854fb649-nc9l9 reason/FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules, 3 node(s) didn't match Pod's node affinity/selector.
#1598317449109835776 build-log.txt.gz 6 hours ago
Dec 01 14:47:11.000 I ns/openshift-kube-controller-manager-operator deployment/kube-controller-manager-operator reason/OperatorStatusChanged Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 8" to "NodeInstallerProgressing: 2 nodes are at revision 8",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8" to "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 8"
Dec 01 14:47:11.000 I ns/openshift-kube-scheduler-operator deployment/openshift-kube-scheduler-operator reason/OperatorStatusChanged Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 7" to "NodeInstallerProgressing: 2 nodes are at revision 7",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7" to "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 7"
Dec 01 14:47:11.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/PodCreated Created Pod/revision-pruner-9-ip-10-0-161-39.us-west-1.compute.internal -n openshift-kube-apiserver because it was missing (3 times)
Dec 01 14:47:11.000 I ns/openshift-apiserver replicaset/apiserver-66b49dcdff reason/SuccessfulCreate Created pod: apiserver-66b49dcdff-5s7qq
Dec 01 14:47:11.000 I ns/openshift-oauth-apiserver replicaset/apiserver-bbc7bd49f reason/SuccessfulCreate Created pod: apiserver-bbc7bd49f-mtj2r
Dec 01 14:47:11.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-rggw7 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Dec 01 14:47:11.000 I ns/openshift-apiserver pod/apiserver-66b49dcdff-6pd64 node/apiserver-66b49dcdff-6pd64 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 14:47:11.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-rggw7 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 14:47:11.000 I ns/openshift-apiserver pod/apiserver-66b49dcdff-6pd64 node/apiserver-66b49dcdff-6pd64 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Dec 01 14:47:11.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-rggw7 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Dec 01 14:47:11.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-rggw7 reason/TerminationStoppedServing Server has stopped listening
#1598317449109835776 build-log.txt.gz 6 hours ago
Dec 01 14:47:24.102 I ns/openshift-etcd pod/etcd-quorum-guard-59854fb649-ktglg node/ reason/Created
Dec 01 14:47:24.102 I ns/openshift-etcd pod/etcd-quorum-guard-59854fb649-grpff node/ reason/Created
Dec 01 14:47:24.103 I ns/openshift-etcd pod/etcd-quorum-guard-59854fb649-9mfk4 node/ reason/Created
Dec 01 14:47:24.759 I ns/openshift-etcd pod/etcd-quorum-guard-59854fb649-sngqn node/ip-10-0-212-70.us-west-1.compute.internal container/guard reason/Ready
Dec 01 14:47:25.000 I ns/openshift-etcd pod/etcd-quorum-guard-59854fb649-sngqn node/ip-10-0-212-70.us-west-1.compute.internal container/guard reason/Killing
Dec 01 14:47:26.000 I ns/openshift-apiserver pod/apiserver-66b49dcdff-6pd64 node/apiserver-66b49dcdff-6pd64 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Dec 01 14:47:26.000 I ns/openshift-apiserver pod/apiserver-66b49dcdff-6pd64 node/apiserver-66b49dcdff-6pd64 reason/TerminationStoppedServing Server has stopped listening
Dec 01 14:47:26.780 W ns/openshift-etcd pod/etcd-quorum-guard-59854fb649-grpff reason/FailedScheduling 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector.
Dec 01 14:47:26.781 W ns/openshift-etcd pod/etcd-quorum-guard-59854fb649-9mfk4 reason/FailedScheduling 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector.
Dec 01 14:47:26.791 I ns/openshift-etcd pod/etcd-quorum-guard-59854fb649-ktglg node/ip-10-0-212-70.us-west-1.compute.internal reason/Scheduled
Dec 01 14:47:26.832 I ns/openshift-etcd pod/etcd-quorum-guard-59854fb649-sngqn node/ip-10-0-212-70.us-west-1.compute.internal container/guard reason/ContainerExit code/0 cause/Completed
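The FailedScheduling messages above are expected during a control-plane drain: etcd-quorum-guard pins one replica per master through a control-plane node selector plus required pod anti-affinity on the hostname topology key, so with one master unschedulable the replacement pod has nowhere to go. A sketch of that shape using the Kubernetes API types in Go (the labels here are assumptions for illustration, not the shipped manifest):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One replica per master: the node selector restricts the pod to control-plane
	// nodes, and required anti-affinity forbids two replicas on the same hostname.
	spec := corev1.PodSpec{
		NodeSelector: map[string]string{"node-role.kubernetes.io/master": ""},
		Affinity: &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"k8s-app": "etcd-quorum-guard"}, // assumed label
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		},
	}
	fmt.Printf("%+v\n", spec.Affinity.PodAntiAffinity)
}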
#1598317449109835776 build-log.txt.gz 6 hours ago
Dec 01 15:20:10.000 I ns/openshift-oauth-apiserver pod/apiserver-bbc7bd49f-hmbts node/ip-10-0-208-233.us-west-1.compute.internal container/oauth-apiserver reason/Killing
Dec 01 15:20:10.000 I ns/openshift-kube-controller-manager pod/revision-pruner-9-ip-10-0-208-233.us-west-1.compute.internal reason/AddedInterface Add eth0 [10.130.0.21/23] from openshift-sdn
Dec 01 15:20:10.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-bbc7bd49f to 1 (2 times)
Dec 01 15:20:10.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-6c7b4746d6 to 2
Dec 01 15:20:10.000 I ns/openshift-oauth-apiserver replicaset/apiserver-bbc7bd49f reason/SuccessfulDelete Deleted pod: apiserver-bbc7bd49f-hmbts
Dec 01 15:20:10.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-hmbts reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Dec 01 15:20:10.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-hmbts reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 15:20:10.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-hmbts reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Dec 01 15:20:10.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-hmbts reason/TerminationStoppedServing Server has stopped listening
Dec 01 15:20:10.512 I ns/openshift-oauth-apiserver pod/apiserver-6c7b4746d6-7mmxd node/ip-10-0-201-80.us-west-1.compute.internal container/oauth-apiserver reason/Ready
Dec 01 15:20:10.640 I ns/openshift-oauth-apiserver pod/apiserver-bbc7bd49f-hmbts node/ip-10-0-208-233.us-west-1.compute.internal reason/GracefulDelete duration/70s
#1598317449109835776 build-log.txt.gz 6 hours ago
Dec 01 15:20:42.000 W ns/openshift-oauth-apiserver pod/apiserver-6c7b4746d6-lfqqc node/ip-10-0-208-233.us-west-1.compute.internal reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]etcd ok\n[-]informer-sync failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/openshift.io-StartUserInformer ok\n[+]poststarthook/openshift.io-StartOAuthInformer ok\n[+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok\n[+]shutdown ok\nreadyz check failed\n\n
Dec 01 15:20:42.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-bbc7bd49f to 0
Dec 01 15:20:42.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-6c7b4746d6 to 3
Dec 01 15:20:42.000 I ns/openshift-oauth-apiserver replicaset/apiserver-6c7b4746d6 reason/SuccessfulCreate Created pod: apiserver-6c7b4746d6-b2gwg
Dec 01 15:20:42.000 I ns/openshift-oauth-apiserver replicaset/apiserver-bbc7bd49f reason/SuccessfulDelete Deleted pod: apiserver-bbc7bd49f-ng75x
Dec 01 15:20:42.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-ng75x reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Dec 01 15:20:42.000 I ns/openshift-apiserver pod/apiserver-66b49dcdff-d5hq6 node/apiserver-66b49dcdff-d5hq6 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Dec 01 15:20:42.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-ng75x reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 15:20:42.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-ng75x reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Dec 01 15:20:42.000 I ns/default namespace/kube-system node/apiserver-bbc7bd49f-ng75x reason/TerminationStoppedServing Server has stopped listening
Dec 01 15:20:42.000 I ns/openshift-apiserver pod/apiserver-66b49dcdff-d5hq6 node/apiserver-66b49dcdff-d5hq6 reason/TerminationStoppedServing Server has stopped listening
Dec 01 15:20:42.000 W ns/openshift-oauth-apiserver pod/apiserver-6c7b4746d6-lfqqc node/ip-10-0-208-233.us-west-1.compute.internal reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500
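The 500 readyz body quoted above ("[-]informer-sync failed: reason withheld") is the aggregated health-check style: each named check prints [+] or [-], and a single failure flips the whole endpoint to 500 so the kubelet marks the pod unready while it keeps serving. A small Go sketch of that aggregation (check names and the failing condition are illustrative):

package main

import (
	"fmt"
	"log"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

// readyz renders a [+]/[-] per-check report like the probe body above; any
// failing check turns the response into a 500.
func readyz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			fmt.Fprint(w, body, "readyz check failed\n")
			return
		}
		fmt.Fprint(w, body, "readyz check passed\n")
	}
}

func main() {
	informerSynced := false // hypothetical: informer caches still syncing after startup
	http.HandleFunc("/readyz", readyz([]check{
		{"ping", func() error { return nil }},
		{"informer-sync", func() error {
			if !informerSynced {
				return fmt.Errorf("caches not synced")
			}
			return nil
		}},
	}))
	log.Fatal(http.ListenAndServe(":8443", nil))
}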
#1597592553622867968 build-log.txt.gz 2 days ago
Nov 29 14:57:49.000 I ns/openshift-etcd-operator namespace/openshift-etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:485.829µs Error:<nil>}]" to "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:824.281µs Error:<nil>}]"
Nov 29 14:57:50.000 I ns/openshift-etcd-operator namespace/openshift-etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:476.674µs Error:<nil>}]" to "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:647.755µs Error:<nil>}]"
Nov 29 14:57:51.000 I ns/openshift-etcd-operator namespace/openshift-etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:1.833122ms Error:<nil>}]" to "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:1.444629ms Error:<nil>}]"
Nov 29 14:57:52.000 I ns/openshift-etcd-operator namespace/openshift-etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:597.792µs Error:<nil>}]" to "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:3.666481ms Error:<nil>}]"
Nov 29 14:57:53.000 I ns/openshift-etcd-operator namespace/openshift-etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:587.874µs Error:<nil>}]" to "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:1.896331ms Error:<nil>}]"
Nov 29 14:57:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-166-231.us-east-2.compute.internal node/ip-10-0-166-231 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 29 14:57:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-166-231.us-east-2.compute.internal node/ip-10-0-166-231 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 14:57:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-166-231.us-east-2.compute.internal node/ip-10-0-166-231 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 14:57:54.000 I ns/openshift-etcd-operator namespace/openshift-etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:469.342µs Error:<nil>}]" to "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:372.517µs Error:<nil>}]"
Nov 29 14:57:55.000 I ns/openshift-etcd-operator namespace/openshift-etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:2.033428ms Error:<nil>}]" to "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:873.219µs Error:<nil>}]"
Nov 29 14:57:56.000 I ns/openshift-etcd-operator namespace/openshift-etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:481.391µs Error:<nil>}]" to "EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 1 which is not fault tolerant: [{Member:ID:2042872530404251960 name:\"ip-10-0-166-231.us-east-2.compute.internal\" peerURLs:\"https://10.0.166.231:2380\" clientURLs:\"https://10.0.166.231:2379\"  Healthy:true Took:716.607µs Error:<nil>}]"
periodic-ci-openshift-release-master-ci-4.9-upgrade-from-stable-4.8-e2e-aws-ovn-upgrade (all) - 10 runs, 40% failed, 75% of failures match = 30% impact
#1598130998384529408 build-log.txt.gz 18 hours ago
Dec 01 02:45:02.000 I ns/openshift-multus job/ip-reconciler-27831045 reason/Completed Job completed
Dec 01 02:45:02.000 I ns/openshift-multus cronjob/ip-reconciler reason/SawCompletedJob Saw completed job: ip-reconciler-27831045, status: Complete
Dec 01 02:45:02.000 I ns/openshift-multus cronjob/ip-reconciler reason/SuccessfulDelete Deleted job ip-reconciler-27831045
Dec 01 02:45:02.246 I ns/openshift-multus pod/ip-reconciler-27831045-lqf99 node/ip-10-0-153-95.us-west-2.compute.internal container/whereabouts reason/ContainerExit code/0 cause/Completed
Dec 01 02:45:02.364 I ns/openshift-multus pod/ip-reconciler-27831045-lqf99 node/ip-10-0-153-95.us-west-2.compute.internal reason/DeletedAfterCompletion
Dec 01 02:48:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-236-30.us-west-2.compute.internal node/ip-10-0-236-30 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Dec 01 02:48:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-236-30.us-west-2.compute.internal node/ip-10-0-236-30 reason/TerminationStoppedServing Server has stopped listening
Dec 01 02:48:12.600 I ns/openshift-marketplace pod/community-operators-fkrtz node/ip-10-0-153-95.us-west-2.compute.internal reason/Scheduled
Dec 01 02:48:12.621 I ns/openshift-marketplace pod/community-operators-fkrtz node/ reason/Created
Dec 01 02:48:15.000 I ns/openshift-marketplace pod/community-operators-fkrtz node/ip-10-0-153-95.us-west-2.compute.internal container/registry-server reason/Created
Dec 01 02:48:15.000 I ns/openshift-marketplace pod/community-operators-fkrtz node/ip-10-0-153-95.us-west-2.compute.internal container/registry-server reason/Pulled duration/0.565s image/registry.redhat.io/redhat/community-operator-index:v4.8
#1598130998384529408 build-log.txt.gz 18 hours ago
Dec 01 02:53:34.670 I ns/openshift-marketplace pod/redhat-marketplace-4vgmp node/ip-10-0-153-95.us-west-2.compute.internal container/registry-server reason/Ready
Dec 01 02:53:34.670 I ns/openshift-marketplace pod/redhat-marketplace-4vgmp node/ip-10-0-153-95.us-west-2.compute.internal reason/GracefulDelete duration/1s
Dec 01 02:53:35.000 I ns/openshift-marketplace pod/redhat-marketplace-4vgmp node/ip-10-0-153-95.us-west-2.compute.internal container/registry-server reason/Killing
Dec 01 02:53:36.565 I ns/openshift-marketplace pod/redhat-marketplace-4vgmp node/ip-10-0-153-95.us-west-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Dec 01 02:53:39.900 I ns/openshift-marketplace pod/redhat-marketplace-4vgmp node/ip-10-0-153-95.us-west-2.compute.internal reason/Deleted
Dec 01 02:54:07.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-186-61.us-west-2.compute.internal node/ip-10-0-186-61 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Dec 01 02:54:07.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-186-61.us-west-2.compute.internal node/ip-10-0-186-61 reason/TerminationStoppedServing Server has stopped listening
Dec 01 02:54:08.000 W ns/openshift-network-diagnostics node/ip-10-0-153-95.us-west-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-186-61: failed to establish a TCP connection to 10.0.186.61:6443: dial tcp 10.0.186.61:6443: connect: connection refused
Dec 01 02:54:08.000 W ns/openshift-network-diagnostics node/ip-10-0-153-95.us-west-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-236-30: failed to establish a TCP connection to 10.0.236.30:6443: dial tcp 10.0.236.30:6443: connect: connection refused
Dec 01 02:54:08.000 I ns/openshift-network-diagnostics node/ip-10-0-153-95.us-west-2.compute.internal reason/ConnectivityRestored roles/worker Connectivity restored after 1m0.000496325s: kubernetes-apiserver-endpoint-ip-10-0-236-30: tcp connection to 10.0.236.30:6443 succeeded
Dec 01 02:54:15.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: could not get member list rpc error: code = Canceled desc = grpc: the client connection is closing\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
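The ConnectivityOutageDetected/ConnectivityRestored pair above comes from the network-diagnostics checker periodically dialing each apiserver endpoint; the signal is essentially a timed TCP dial, roughly as sketched here (endpoints copied from the log; not the checker's real code):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialCheck reports whether a plain TCP connection to the endpoint succeeds,
// the same signal behind ConnectivityOutageDetected / ConnectivityRestored.
func dialCheck(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return fmt.Errorf("failed to establish a TCP connection to %s: %w", addr, err)
	}
	return conn.Close()
}

func main() {
	for _, ep := range []string{"10.0.186.61:6443", "10.0.236.30:6443"} {
		if err := dialCheck(ep); err != nil {
			fmt.Println("Connectivity outage detected:", err)
			continue
		}
		fmt.Printf("tcp connection to %s succeeded\n", ep)
	}
}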
#1598130998384529408 build-log.txt.gz 18 hours ago
Dec 01 02:58:37.723 I ns/openshift-marketplace pod/redhat-operators-79snr node/ip-10-0-153-95.us-west-2.compute.internal reason/GracefulDelete duration/1s
Dec 01 02:58:38.000 I ns/openshift-marketplace pod/redhat-operators-79snr node/ip-10-0-153-95.us-west-2.compute.internal container/registry-server reason/Killing
Dec 01 02:58:39.315 I ns/openshift-marketplace pod/redhat-operators-79snr node/ip-10-0-153-95.us-west-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Dec 01 02:58:40.383 I ns/openshift-marketplace pod/redhat-operators-79snr node/ip-10-0-153-95.us-west-2.compute.internal reason/Deleted
Dec 01 02:59:17.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-12-01-014409@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (35 times)
Dec 01 02:59:52.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-143-45.us-west-2.compute.internal node/ip-10-0-143-45 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Dec 01 02:59:52.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-143-45.us-west-2.compute.internal node/ip-10-0-143-45 reason/TerminationStoppedServing Server has stopped listening
Dec 01 03:00:00.000 I ns/openshift-multus pod/ip-reconciler-27831060-gntdj node/ip-10-0-153-95.us-west-2.compute.internal container/whereabouts reason/Created
Dec 01 03:00:00.000 I ns/openshift-multus pod/ip-reconciler-27831060-gntdj node/ip-10-0-153-95.us-west-2.compute.internal container/whereabouts reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:052d7ace6ea70f688bc2eda54544707251462fad6edf3c2b0b2289be35ceccd6
Dec 01 03:00:00.000 I ns/openshift-multus pod/ip-reconciler-27831060-gntdj node/ip-10-0-153-95.us-west-2.compute.internal container/whereabouts reason/Started
Dec 01 03:00:00.000 I ns/openshift-multus cronjob/ip-reconciler reason/SuccessfulCreate Created job ip-reconciler-27831060
#1598130998384529408 build-log.txt.gz 18 hours ago
Dec 01 03:08:12.953 W ns/openshift-apiserver pod/apiserver-5d995ffbf6-mtxt7 reason/FailedScheduling skip schedule deleting pod: openshift-apiserver/apiserver-5d995ffbf6-mtxt7
Dec 01 03:08:12.997 W ns/openshift-apiserver pod/apiserver-7748fc6475-87xvt reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 03:08:13.079 I ns/openshift-apiserver pod/apiserver-5d995ffbf6-mtxt7 node/ reason/DeletedBeforeScheduling
Dec 01 03:08:13.141 I ns/openshift-apiserver pod/apiserver-7748fc6475-87xvt node/ reason/Created
Dec 01 03:08:16.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Dec 01 03:08:19.000 I ns/openshift-apiserver pod/apiserver-54fdf5dbfb-jbdl6 node/apiserver-54fdf5dbfb-jbdl6 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Dec 01 03:08:19.000 I ns/openshift-apiserver pod/apiserver-54fdf5dbfb-jbdl6 node/apiserver-54fdf5dbfb-jbdl6 reason/TerminationStoppedServing Server has stopped listening
Dec 01 03:08:28.000 W ns/openshift-apiserver pod/apiserver-54fdf5dbfb-jbdl6 node/ip-10-0-143-45.us-west-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.29:8443/healthz": dial tcp 10.128.0.29:8443: connect: connection refused\nbody: \n
Dec 01 03:08:28.000 W ns/openshift-apiserver pod/apiserver-54fdf5dbfb-jbdl6 node/ip-10-0-143-45.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.29:8443/healthz": dial tcp 10.128.0.29:8443: connect: connection refused\nbody: \n
Dec 01 03:08:28.000 W ns/openshift-apiserver pod/apiserver-54fdf5dbfb-jbdl6 node/ip-10-0-143-45.us-west-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.29:8443/healthz": dial tcp 10.128.0.29:8443: connect: connection refused
Dec 01 03:08:28.000 W ns/openshift-apiserver pod/apiserver-54fdf5dbfb-jbdl6 node/ip-10-0-143-45.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.29:8443/healthz": dial tcp 10.128.0.29:8443: connect: connection refused
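The ProbeError/Unhealthy warnings are the expected tail of such a restart: once the old process stops listening on 8443, kubelet-style probes to /healthz fail with connection refused until the replacement container serves. A hedged Go sketch that distinguishes that case from other probe failures (the function name and the skip-verify client are assumptions for illustration, not kubelet code):

```go
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net/http"
	"syscall"
	"time"
)

// probeHealthz is an illustrative stand-in for a kubelet-style HTTPS probe.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// These endpoints use self-signed serving certs; skipping
		// verification keeps the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		if errors.Is(err, syscall.ECONNREFUSED) {
			// The "connect: connection refused" case above: the server
			// has stopped listening, typically a restart in progress.
			return fmt.Errorf("endpoint down (rollout?): %w", err)
		}
		return fmt.Errorf("probe transport error: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unhealthy: %s", resp.Status)
	}
	return nil
}

func main() {
	fmt.Println(probeHealthz("https://10.128.0.29:8443/healthz"))
}
```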
#1598130998384529408 build-log.txt.gz (18 hours ago)
Dec 01 03:09:42.118 W ns/openshift-apiserver pod/apiserver-7748fc6475-r8p9b reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 03:09:42.156 I ns/openshift-apiserver pod/apiserver-7748fc6475-r8p9b node/ reason/Created
Dec 01 03:09:43.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Dec 01 03:09:50.541 I ns/openshift-marketplace pod/redhat-operators-wm9tt node/ip-10-0-153-95.us-west-2.compute.internal reason/Scheduled
Dec 01 03:09:50.561 I ns/openshift-marketplace pod/redhat-operators-wm9tt node/ reason/Created
Dec 01 03:09:52.000 I ns/openshift-apiserver pod/apiserver-54fdf5dbfb-6pfgs node/apiserver-54fdf5dbfb-6pfgs reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Dec 01 03:09:52.000 I ns/openshift-apiserver pod/apiserver-54fdf5dbfb-6pfgs node/apiserver-54fdf5dbfb-6pfgs reason/TerminationStoppedServing Server has stopped listening
Dec 01 03:09:53.000 I ns/openshift-marketplace pod/redhat-operators-wm9tt node/ip-10-0-153-95.us-west-2.compute.internal container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-operator-index:v4.8
Dec 01 03:09:53.000 I ns/openshift-marketplace pod/redhat-operators-wm9tt reason/AddedInterface Add eth0 [10.131.0.48/23] from ovn-kubernetes
Dec 01 03:09:54.000 I ns/openshift-marketplace pod/redhat-operators-wm9tt node/ip-10-0-153-95.us-west-2.compute.internal container/registry-server reason/Created
Dec 01 03:09:54.000 I ns/openshift-marketplace pod/redhat-operators-wm9tt node/ip-10-0-153-95.us-west-2.compute.internal container/registry-server reason/Pulled duration/0.605s image/registry.redhat.io/redhat/redhat-operator-index:v4.8
#1598130998384529408 build-log.txt.gz (18 hours ago)
Dec 01 03:11:22.652 I ns/openshift-apiserver pod/apiserver-54fdf5dbfb-nm995 node/ip-10-0-236-30.us-west-2.compute.internal reason/GracefulDelete duration/70s
Dec 01 03:11:22.678 W ns/openshift-apiserver pod/apiserver-7748fc6475-m9rh4 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 03:11:22.712 I ns/openshift-apiserver pod/apiserver-7748fc6475-m9rh4 node/ reason/Created
Dec 01 03:11:23.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well")
Dec 01 03:11:23.961 W clusteroperator/openshift-apiserver condition/Progressing status/False reason/AsExpected changed: All is well
Dec 01 03:11:32.000 I ns/openshift-apiserver pod/apiserver-54fdf5dbfb-nm995 node/apiserver-54fdf5dbfb-nm995 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Dec 01 03:11:32.000 I ns/openshift-apiserver pod/apiserver-54fdf5dbfb-nm995 node/apiserver-54fdf5dbfb-nm995 reason/TerminationStoppedServing Server has stopped listening
Dec 01 03:11:39.000 W ns/openshift-apiserver pod/apiserver-54fdf5dbfb-nm995 node/ip-10-0-236-30.us-west-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.44:8443/healthz": dial tcp 10.129.0.44:8443: connect: connection refused\nbody: \n
Dec 01 03:11:39.000 W ns/openshift-apiserver pod/apiserver-54fdf5dbfb-nm995 node/ip-10-0-236-30.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.44:8443/healthz": dial tcp 10.129.0.44:8443: connect: connection refused\nbody: \n
Dec 01 03:11:39.000 W ns/openshift-apiserver pod/apiserver-54fdf5dbfb-nm995 node/ip-10-0-236-30.us-west-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.129.0.44:8443/healthz": dial tcp 10.129.0.44:8443: connect: connection refused
Dec 01 03:11:39.000 W ns/openshift-apiserver pod/apiserver-54fdf5dbfb-nm995 node/ip-10-0-236-30.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.129.0.44:8443/healthz": dial tcp 10.129.0.44:8443: connect: connection refused
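The recurring "0/6 nodes are available" message decomposes cleanly: the three workers fail the deployment's master node selector, and the three masters each still host an apiserver pod that the required pod anti-affinity counts against, so the new replica stays pending until an old pod is deleted (hence the adjacent GracefulDelete and DeletedBeforeScheduling events). To pull just these events out of a live cluster, a client-go sketch along these lines should work (kubeconfig handling and the output format are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Server-side filter on the event reason seen in the log lines above.
	evs, err := cs.CoreV1().Events("openshift-apiserver").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "reason=FailedScheduling"})
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range evs.Items {
		fmt.Printf("%s pod/%s: %s\n", e.LastTimestamp, e.InvolvedObject.Name, e.Message)
	}
}
```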
#1598100569573036032 build-log.txt.gz (20 hours ago)
Dec 01 00:46:35.000 I ns/openshift-kube-apiserver pod/installer-11-ip-10-0-147-106.ec2.internal reason/StaticPodInstallerCompleted Successfully installed revision 11
Dec 01 00:46:35.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-106.ec2.internal node/ip-10-0-147-106 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 00:46:35.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-106.ec2.internal node/ip-10-0-147-106 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Dec 01 00:46:36.605 I ns/openshift-kube-apiserver pod/installer-11-ip-10-0-147-106.ec2.internal node/ip-10-0-147-106.ec2.internal container/installer reason/ContainerExit code/0 cause/Completed
Dec 01 00:46:42.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ip-10-0-243-124_9290b5d1-9186-48dd-a76c-0cdc8a6b99e5 became leader
Dec 01 00:50:05.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-106.ec2.internal node/ip-10-0-147-106 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Dec 01 00:50:05.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-106.ec2.internal node/ip-10-0-147-106 reason/TerminationStoppedServing Server has stopped listening
Dec 01 00:50:37.000 W ns/openshift-machine-api machineset/ci-op-d3h09wsc-978ed-gn5zp-worker-us-east-1f reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (9 times)
Dec 01 00:50:37.000 W ns/openshift-machine-api machineset/ci-op-d3h09wsc-978ed-gn5zp-worker-us-east-1d reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (9 times)
Dec 01 00:50:50.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-106.ec2.internal node/ip-10-0-147-106.ec2.internal container/kube-controller-manager-recovery-controller reason/Created
Dec 01 00:50:50.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-106.ec2.internal node/ip-10-0-147-106.ec2.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5d8a040f711995c1d951872df4267fe592f395af664c5fe638ec023407f0f65
#1598100569573036032 build-log.txt.gz (20 hours ago)
Dec 01 00:52:33.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (14 times)
Dec 01 00:53:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (15 times)
Dec 01 00:54:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (16 times)
Dec 01 00:55:13.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (17 times)
Dec 01 00:55:16.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (18 times)
Dec 01 00:56:01.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-121.ec2.internal node/ip-10-0-154-121 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Dec 01 00:56:01.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-121.ec2.internal node/ip-10-0-154-121 reason/TerminationStoppedServing Server has stopped listening
Dec 01 00:56:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (19 times)
Dec 01 00:56:39.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-154-121.ec2.internal node/ip-10-0-154-121.ec2.internal container/kube-scheduler-recovery-controller reason/Created
Dec 01 00:56:39.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-154-121.ec2.internal node/ip-10-0-154-121.ec2.internal container/kube-scheduler-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcb22579324a93faddf49caa971153fea33cec03738e76490d6de7e39a01db59
Dec 01 00:56:39.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-154-121.ec2.internal node/ip-10-0-154-121.ec2.internal container/kube-scheduler-recovery-controller reason/Started
#1598100569573036032 build-log.txt.gz (20 hours ago)
Dec 01 01:00:01.246 I ns/openshift-multus pod/ip-reconciler-27830940-gj2fq node/ip-10-0-138-102.ec2.internal reason/DeletedAfterCompletion
Dec 01 01:00:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (35 times)
Dec 01 01:00:56.000 W ns/openshift-machine-api machineset/ci-op-d3h09wsc-978ed-gn5zp-worker-us-east-1d reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (10 times)
Dec 01 01:00:56.000 W ns/openshift-machine-api machineset/ci-op-d3h09wsc-978ed-gn5zp-worker-us-east-1f reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (10 times)
Dec 01 01:01:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (36 times)
Dec 01 01:01:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-243-124.ec2.internal node/ip-10-0-243-124 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Dec 01 01:01:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-243-124.ec2.internal node/ip-10-0-243-124 reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:02:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (37 times)
Dec 01 01:02:27.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-243-124.ec2.internal node/ip-10-0-243-124.ec2.internal container/kube-controller-manager-recovery-controller reason/Created
Dec 01 01:02:27.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-243-124.ec2.internal node/ip-10-0-243-124.ec2.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5d8a040f711995c1d951872df4267fe592f395af664c5fe638ec023407f0f65
Dec 01 01:02:27.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-243-124.ec2.internal node/ip-10-0-243-124.ec2.internal container/kube-controller-manager-recovery-controller reason/Started
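The "(35 times)", "(36 times)", ... suffixes show the monitor annotating repeats of a byte-identical event with a cumulative count instead of reprinting it unmarked. A small Go sketch of that annotation idea (the data structures are assumptions, not the monitor's actual code):

```go
package main

import "fmt"

// annotate appends a cumulative "(n times)" suffix to repeated messages,
// matching how the excerpts above show "(17 times)", "(18 times)", ...
// on successive occurrences of the same event.
func annotate(lines []string) []string {
	seen := map[string]int{}
	out := make([]string, 0, len(lines))
	for _, l := range lines {
		seen[l]++
		if n := seen[l]; n > 1 {
			out = append(out, fmt.Sprintf("%s (%d times)", l, n))
		} else {
			out = append(out, l)
		}
	}
	return out
}

func main() {
	in := []string{"MultipleVersions ...", "MultipleVersions ...", "MultipleVersions ..."}
	for _, l := range annotate(in) {
		fmt.Println(l) // second and third prints gain "(2 times)", "(3 times)"
	}
}
```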
#1598100569573036032 build-log.txt.gz (20 hours ago)
Dec 01 01:09:51.966 W ns/openshift-apiserver pod/apiserver-5f4b9ff455-5h8x2 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 01:09:51.972 I ns/openshift-apiserver pod/apiserver-5f4b9ff455-5h8x2 node/ reason/Created
Dec 01 01:09:55.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Dec 01 01:09:58.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-7k87b node/ip-10-0-154-121.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.38:8443/healthz": dial tcp 10.130.0.38:8443: connect: connection refused\nbody: \n
Dec 01 01:09:58.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-7k87b node/ip-10-0-154-121.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.38:8443/healthz": dial tcp 10.130.0.38:8443: connect: connection refused\nbody: \n
Dec 01 01:09:58.000 I ns/openshift-apiserver pod/apiserver-6f4674bcdd-7k87b node/apiserver-6f4674bcdd-7k87b reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Dec 01 01:09:58.000 I ns/openshift-apiserver pod/apiserver-6f4674bcdd-7k87b node/apiserver-6f4674bcdd-7k87b reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:09:58.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-7k87b node/ip-10-0-154-121.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.38:8443/healthz": dial tcp 10.130.0.38:8443: connect: connection refused
Dec 01 01:09:58.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-7k87b node/ip-10-0-154-121.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.38:8443/healthz": dial tcp 10.130.0.38:8443: connect: connection refused
Dec 01 01:10:08.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-7k87b node/ip-10-0-154-121.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.38:8443/healthz": dial tcp 10.130.0.38:8443: connect: connection refused\nbody: \n (2 times)
Dec 01 01:10:08.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-7k87b node/ip-10-0-154-121.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.38:8443/healthz": dial tcp 10.130.0.38:8443: connect: connection refused\nbody: \n (2 times)
#1598100569573036032 build-log.txt.gz (20 hours ago)
Dec 01 01:11:30.549 I ns/openshift-apiserver pod/apiserver-5f4b9ff455-5h8x2 node/ip-10-0-154-121.ec2.internal container/openshift-apiserver reason/Ready
Dec 01 01:11:30.596 I ns/openshift-apiserver pod/apiserver-6f4674bcdd-s59hr node/ip-10-0-243-124.ec2.internal reason/GracefulDelete duration/70s
Dec 01 01:11:30.692 W ns/openshift-apiserver pod/apiserver-5f4b9ff455-jlfrd reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 01:11:30.699 I ns/openshift-apiserver pod/apiserver-5f4b9ff455-jlfrd node/ reason/Created
Dec 01 01:11:32.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Dec 01 01:11:40.000 I ns/openshift-apiserver pod/apiserver-6f4674bcdd-s59hr node/apiserver-6f4674bcdd-s59hr reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Dec 01 01:11:40.000 I ns/openshift-apiserver pod/apiserver-6f4674bcdd-s59hr node/apiserver-6f4674bcdd-s59hr reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:11:46.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-s59hr node/ip-10-0-243-124.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.40:8443/healthz": dial tcp 10.128.0.40:8443: connect: connection refused\nbody: \n
Dec 01 01:11:46.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-s59hr node/ip-10-0-243-124.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.40:8443/healthz": dial tcp 10.128.0.40:8443: connect: connection refused\nbody: \n
Dec 01 01:11:46.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-s59hr node/ip-10-0-243-124.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.40:8443/healthz": dial tcp 10.128.0.40:8443: connect: connection refused
Dec 01 01:11:46.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-s59hr node/ip-10-0-243-124.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.40:8443/healthz": dial tcp 10.128.0.40:8443: connect: connection refused
#1598100569573036032 build-log.txt.gz (20 hours ago)
Dec 01 01:13:03.242 I ns/openshift-apiserver pod/apiserver-6f4674bcdd-9c2p9 node/ip-10-0-147-106.ec2.internal reason/GracefulDelete duration/70s
Dec 01 01:13:03.332 W ns/openshift-apiserver pod/apiserver-5f4b9ff455-47599 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 01:13:03.338 I ns/openshift-apiserver pod/apiserver-5f4b9ff455-47599 node/ reason/Created
Dec 01 01:13:04.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well")
Dec 01 01:13:04.616 W clusteroperator/openshift-apiserver condition/Progressing status/False reason/AsExpected changed: All is well
Dec 01 01:13:13.000 I ns/openshift-apiserver pod/apiserver-6f4674bcdd-9c2p9 node/apiserver-6f4674bcdd-9c2p9 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Dec 01 01:13:13.000 I ns/openshift-apiserver pod/apiserver-6f4674bcdd-9c2p9 node/apiserver-6f4674bcdd-9c2p9 reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:13:20.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-9c2p9 node/ip-10-0-147-106.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.40:8443/healthz": dial tcp 10.129.0.40:8443: connect: connection refused\nbody: \n
Dec 01 01:13:20.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-9c2p9 node/ip-10-0-147-106.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.40:8443/healthz": dial tcp 10.129.0.40:8443: connect: connection refused\nbody: \n
Dec 01 01:13:20.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-9c2p9 node/ip-10-0-147-106.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.129.0.40:8443/healthz": dial tcp 10.129.0.40:8443: connect: connection refused
Dec 01 01:13:20.000 W ns/openshift-apiserver pod/apiserver-6f4674bcdd-9c2p9 node/ip-10-0-147-106.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.129.0.40:8443/healthz": dial tcp 10.129.0.40:8443: connect: connection refused
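Each openshift-apiserver rollout above ends the same way: Progressing flips back to False while Degraded briefly reports the one replica that is still restarting, then clears. A sketch of reading those clusteroperator conditions with the dynamic client (the group/version/resource is the standard config.openshift.io one; the rest is illustrative):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	gvr := schema.GroupVersionResource{
		Group: "config.openshift.io", Version: "v1", Resource: "clusteroperators",
	}
	co, err := dyn.Resource(gvr).Get(context.TODO(), "openshift-apiserver", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Conditions are what the OperatorStatusChanged events above paraphrase.
	conds, _, _ := unstructured.NestedSlice(co.Object, "status", "conditions")
	for _, c := range conds {
		m := c.(map[string]interface{})
		fmt.Printf("%v=%v: %v\n", m["type"], m["status"], m["message"])
	}
}
```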
#1598082838018658304 build-log.txt.gz (21 hours ago)
Nov 30 23:45:01.000 I ns/openshift-multus cronjob/ip-reconciler reason/SawCompletedJob Saw completed job: ip-reconciler-27830865, status: Complete
Nov 30 23:45:01.000 I ns/openshift-multus cronjob/ip-reconciler reason/SuccessfulDelete Deleted job ip-reconciler-27830865
Nov 30 23:45:01.667 I ns/openshift-multus pod/ip-reconciler-27830865-nt62h node/ip-10-0-155-145.ec2.internal container/whereabouts reason/ContainerExit code/0 cause/Completed
Nov 30 23:45:01.737 I ns/openshift-multus pod/ip-reconciler-27830865-nt62h node/ip-10-0-155-145.ec2.internal reason/DeletedAfterCompletion
Nov 30 23:45:21.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-250-137.ec2.internal node/ip-10-0-250-137 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 30 23:45:21.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-250-137.ec2.internal node/ip-10-0-250-137 reason/TerminationStoppedServing Server has stopped listening
Nov 30 23:45:57.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-250-137.ec2.internal node/ip-10-0-250-137.ec2.internal container/kube-scheduler-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcb22579324a93faddf49caa971153fea33cec03738e76490d6de7e39a01db59
Nov 30 23:45:57.902 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-250-137.ec2.internal node/ip-10-0-250-137.ec2.internal container/kube-scheduler-recovery-controller reason/ContainerExit code/0 cause/Completed
#1598082838018658304 build-log.txt.gz (21 hours ago)
Nov 30 23:50:47.195 I ns/openshift-marketplace pod/community-operators-g87tq node/ip-10-0-155-145.ec2.internal reason/GracefulDelete duration/1s
Nov 30 23:50:48.361 I ns/openshift-marketplace pod/community-operators-g87tq node/ip-10-0-155-145.ec2.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 30 23:50:49.367 I ns/openshift-marketplace pod/community-operators-g87tq node/ip-10-0-155-145.ec2.internal reason/Deleted
Nov 30 23:50:53.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (18 times)
Nov 30 23:50:56.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (19 times)
Nov 30 23:51:00.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-157-104.ec2.internal node/ip-10-0-157-104 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 30 23:51:00.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-157-104.ec2.internal node/ip-10-0-157-104 reason/TerminationStoppedServing Server has stopped listening
Nov 30 23:51:28.843 I ns/openshift-marketplace pod/certified-operators-rcz5f node/ reason/Created
Nov 30 23:51:28.849 I ns/openshift-marketplace pod/certified-operators-rcz5f node/ip-10-0-155-145.ec2.internal reason/Scheduled
Nov 30 23:51:30.000 I ns/openshift-marketplace pod/certified-operators-rcz5f node/ip-10-0-155-145.ec2.internal container/registry-server reason/Created
Nov 30 23:51:30.000 I ns/openshift-marketplace pod/certified-operators-rcz5f node/ip-10-0-155-145.ec2.internal container/registry-server reason/Pulled duration/0.613s image/registry.redhat.io/redhat/certified-operator-index:v4.8
#1598082838018658304 build-log.txt.gz (21 hours ago)
Nov 30 23:54:49.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (35 times)
Nov 30 23:55:11.000 W ns/openshift-machine-api machineset/ci-op-ybl9nmhq-978ed-j2m7d-worker-us-east-1d reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (7 times)
Nov 30 23:55:11.000 W ns/openshift-machine-api machineset/ci-op-ybl9nmhq-978ed-j2m7d-worker-us-east-1c reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (7 times)
Nov 30 23:55:51.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (36 times)
Nov 30 23:56:49.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (37 times)
Nov 30 23:57:01.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-143-143.ec2.internal node/ip-10-0-143-143 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 30 23:57:01.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-143-143.ec2.internal node/ip-10-0-143-143 reason/TerminationStoppedServing Server has stopped listening
Nov 30 23:57:25.000 I ns/openshift-machine-api machine/ci-op-ybl9nmhq-978ed-j2m7d-worker-us-east-1d-hbvxw reason/Update Updated Machine ci-op-ybl9nmhq-978ed-j2m7d-worker-us-east-1d-hbvxw (2 times)
Nov 30 23:57:25.000 I ns/openshift-machine-api machine/ci-op-ybl9nmhq-978ed-j2m7d-worker-us-east-1d-vcskw reason/Update Updated Machine ci-op-ybl9nmhq-978ed-j2m7d-worker-us-east-1d-vcskw (2 times)
Nov 30 23:57:26.000 I ns/openshift-machine-api machine/ci-op-ybl9nmhq-978ed-j2m7d-worker-us-east-1c-zjzzp reason/Update Updated Machine ci-op-ybl9nmhq-978ed-j2m7d-worker-us-east-1c-zjzzp (2 times)
Nov 30 23:57:41.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-143-143.ec2.internal node/ip-10-0-143-143.ec2.internal container/kube-scheduler-recovery-controller reason/Created
#1598082838018658304 build-log.txt.gz (21 hours ago)
Dec 01 00:05:02.827 I ns/openshift-apiserver pod/apiserver-658bd5f7b8-vm2wg node/ reason/DeletedBeforeScheduling
Dec 01 00:05:02.853 I ns/openshift-apiserver pod/apiserver-6dfb55ccdb-ddgt7 node/ reason/Created
Dec 01 00:05:02.854 W ns/openshift-apiserver pod/apiserver-6dfb55ccdb-ddgt7 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 00:05:06.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Dec 01 00:05:06.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7."
Dec 01 00:05:08.000 I ns/openshift-apiserver pod/apiserver-dfd985c97-gpdtw node/apiserver-dfd985c97-gpdtw reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Dec 01 00:05:08.000 I ns/openshift-apiserver pod/apiserver-dfd985c97-gpdtw node/apiserver-dfd985c97-gpdtw reason/TerminationStoppedServing Server has stopped listening
Dec 01 00:05:14.000 W ns/openshift-apiserver pod/apiserver-dfd985c97-gpdtw node/ip-10-0-143-143.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.22:8443/healthz": dial tcp 10.129.0.22:8443: connect: connection refused\nbody: \n
Dec 01 00:05:14.000 W ns/openshift-apiserver pod/apiserver-dfd985c97-gpdtw node/ip-10-0-143-143.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.22:8443/healthz": dial tcp 10.129.0.22:8443: connect: connection refused\nbody: \n
Dec 01 00:05:14.000 W ns/openshift-apiserver pod/apiserver-dfd985c97-gpdtw node/ip-10-0-143-143.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.129.0.22:8443/healthz": dial tcp 10.129.0.22:8443: connect: connection refused
Dec 01 00:05:14.000 W ns/openshift-apiserver pod/apiserver-dfd985c97-gpdtw node/ip-10-0-143-143.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.129.0.22:8443/healthz": dial tcp 10.129.0.22:8443: connect: connection refused
#1598082838018658304 build-log.txt.gz (21 hours ago)
Dec 01 00:06:25.561 I ns/openshift-apiserver pod/apiserver-6dfb55ccdb-ddgt7 node/ip-10-0-143-143.ec2.internal container/openshift-apiserver reason/Ready
Dec 01 00:06:25.606 I ns/openshift-apiserver pod/apiserver-dfd985c97-xvbb8 node/ip-10-0-157-104.ec2.internal reason/GracefulDelete duration/70s
Dec 01 00:06:25.664 I ns/openshift-apiserver pod/apiserver-6dfb55ccdb-j4nt4 node/ reason/Created
Dec 01 00:06:25.665 W ns/openshift-apiserver pod/apiserver-6dfb55ccdb-j4nt4 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 00:06:26.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Dec 01 00:06:35.000 I ns/openshift-apiserver pod/apiserver-dfd985c97-xvbb8 node/apiserver-dfd985c97-xvbb8 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Dec 01 00:06:35.000 I ns/openshift-apiserver pod/apiserver-dfd985c97-xvbb8 node/apiserver-dfd985c97-xvbb8 reason/TerminationStoppedServing Server has stopped listening
Dec 01 00:06:41.000 W ns/openshift-apiserver pod/apiserver-dfd985c97-xvbb8 node/ip-10-0-157-104.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.22:8443/healthz": dial tcp 10.130.0.22:8443: connect: connection refused\nbody: \n
Dec 01 00:06:41.000 W ns/openshift-apiserver pod/apiserver-dfd985c97-xvbb8 node/ip-10-0-157-104.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.22:8443/healthz": dial tcp 10.130.0.22:8443: connect: connection refused\nbody: \n
Dec 01 00:06:41.000 W ns/openshift-apiserver pod/apiserver-dfd985c97-xvbb8 node/ip-10-0-157-104.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.22:8443/healthz": dial tcp 10.130.0.22:8443: connect: connection refused
Dec 01 00:06:41.000 W ns/openshift-apiserver pod/apiserver-dfd985c97-xvbb8 node/ip-10-0-157-104.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.22:8443/healthz": dial tcp 10.130.0.22:8443: connect: connection refused
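Note the budget arithmetic implied by these events: the openshift-apiserver pods are deleted with a 70s grace period (reason/GracefulDelete duration/70s), of which the first 10s is the minimal unready-but-serving window, leaving roughly a minute to drain in-flight requests before the kill. A trivial sketch of that split (how the remainder is divided between draining and process exit is an assumption):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	gracePeriod := 70 * time.Second // reason/GracefulDelete duration/70s
	minShutdown := 10 * time.Second // "minimal shutdown duration of 10s finished"
	fmt.Printf("unready-but-serving: %v, left to drain and exit: %v\n",
		minShutdown, gracePeriod-minShutdown)
}
```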
periodic-ci-openshift-release-master-ci-4.9-e2e-gcp-upgrade (all) - 10 runs, 40% failed, 75% of failures match = 30% impact
#1598130992516698112 build-log.txt.gz (19 hours ago)
Dec 01 02:23:02.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0hj976p7-875d2-fpsvs-master-1 node/ci-op-0hj976p7-875d2-fpsvs-master-1 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 02:23:02.850 I ns/openshift-kube-apiserver pod/installer-10-ci-op-0hj976p7-875d2-fpsvs-master-1 node/ci-op-0hj976p7-875d2-fpsvs-master-1 container/installer reason/ContainerExit code/0 cause/Completed
Dec 01 02:24:00.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.353737418689197 over 5 minutes on "GCP"; disk metrics are: etcd-ci-op-0hj976p7-875d2-fpsvs-master-2=0.03143111111111123,etcd-ci-op-0hj976p7-875d2-fpsvs-master-0=0.022320000000000024,etcd-ci-op-0hj976p7-875d2-fpsvs-master-1=0.014984615384615395. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Dec 01 02:24:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0hj976p7-875d2-fpsvs-master-1 node/ci-op-0hj976p7-875d2-fpsvs-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 02:24:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0hj976p7-875d2-fpsvs-master-1 node/ci-op-0hj976p7-875d2-fpsvs-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 02:24:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0hj976p7-875d2-fpsvs-master-1 node/ci-op-0hj976p7-875d2-fpsvs-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 02:24:13.294 I ns/openshift-marketplace pod/redhat-marketplace-vql6z node/ reason/Created
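The EtcdLeaderChangeMetrics warning above reports an extrapolated increase in etcd leader changes over a 5-minute window, alongside per-member disk metrics. A Go sketch of that kind of counter extrapolation, in the spirit of PromQL's increase() (the operator's exact query is not shown in the log, so treat the details as assumptions):

```go
package main

import (
	"fmt"
	"time"
)

type sample struct {
	t time.Time
	v float64 // cumulative leader-change counter
}

// increase extrapolates counter growth over the window, which is how a
// fractional figure like "2.3537... over 5 minutes" can arise from an
// integer-valued counter.
func increase(first, last sample, window time.Duration) float64 {
	elapsed := last.t.Sub(first.t).Seconds()
	if elapsed <= 0 {
		return 0
	}
	return (last.v - first.v) * window.Seconds() / elapsed
}

func main() {
	now := time.Now()
	a := sample{now.Add(-4 * time.Minute), 10}
	b := sample{now, 12}
	// 2 observed changes over 4m extrapolate to 2.5 over 5m.
	fmt.Printf("leader changes over 5m: %.2f\n", increase(a, b, 5*time.Minute))
}
```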
#1598130992516698112 build-log.txt.gz (19 hours ago)
Dec 01 02:25:43.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0hj976p7-875d2-fpsvs-master-2 node/ci-op-0hj976p7-875d2-fpsvs-master-2 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Dec 01 02:25:44.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-12-01-014409@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (14 times)
Dec 01 02:25:44.248 I ns/openshift-kube-apiserver pod/installer-10-ci-op-0hj976p7-875d2-fpsvs-master-2 node/ci-op-0hj976p7-875d2-fpsvs-master-2 container/installer reason/ContainerExit code/0 cause/Completed
Dec 01 02:25:46.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-12-01-014409@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (15 times)
Dec 01 02:26:06.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-12-01-014409@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (16 times)
Dec 01 02:26:53.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0hj976p7-875d2-fpsvs-master-2 node/ci-op-0hj976p7-875d2-fpsvs-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 02:26:53.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0hj976p7-875d2-fpsvs-master-2 node/ci-op-0hj976p7-875d2-fpsvs-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 02:26:53.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0hj976p7-875d2-fpsvs-master-2 node/ci-op-0hj976p7-875d2-fpsvs-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 02:26:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0hj976p7-875d2-fpsvs-master-2 node/ci-op-0hj976p7-875d2-fpsvs-master-2 container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.9-2022-12-01-014409@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72
Dec 01 02:26:59.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-12-01-014409@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (17 times)
#1598130992516698112 build-log.txt.gz (19 hours ago)
Dec 01 02:29:07.577 I ns/openshift-marketplace pod/community-operators-9hgff node/ci-op-0hj976p7-875d2-fpsvs-worker-b-gcfhb reason/GracefulDelete duration/1s
Dec 01 02:29:09.000 I ns/openshift-marketplace pod/community-operators-9hgff node/ci-op-0hj976p7-875d2-fpsvs-worker-b-gcfhb container/registry-server reason/Killing
Dec 01 02:29:10.000 I ns/openshift-marketplace pod/community-operators-9hgff node/ci-op-0hj976p7-875d2-fpsvs-worker-b-gcfhb container/registry-server reason/Killing
Dec 01 02:29:10.530 I ns/openshift-marketplace pod/community-operators-9hgff node/ci-op-0hj976p7-875d2-fpsvs-worker-b-gcfhb container/registry-server reason/ContainerExit code/0 cause/Completed
Dec 01 02:29:10.578 I ns/openshift-marketplace pod/community-operators-9hgff node/ci-op-0hj976p7-875d2-fpsvs-worker-b-gcfhb reason/Deleted
Dec 01 02:29:36.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0hj976p7-875d2-fpsvs-master-0 node/ci-op-0hj976p7-875d2-fpsvs-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Dec 01 02:29:36.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0hj976p7-875d2-fpsvs-master-0 node/ci-op-0hj976p7-875d2-fpsvs-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Dec 01 02:29:36.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0hj976p7-875d2-fpsvs-master-0 node/ci-op-0hj976p7-875d2-fpsvs-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Dec 01 02:29:38.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-0hj976p7-875d2-fpsvs-master-0 node/ci-op-0hj976p7-875d2-fpsvs-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Dec 01 02:29:41.655 I ns/openshift-marketplace pod/certified-operators-5l96h node/ reason/Created
Dec 01 02:29:41.686 I ns/openshift-marketplace pod/redhat-operators-d5525 node/ reason/Created
#1598130992516698112 build-log.txt.gz (19 hours ago)
Dec 01 02:38:21.000 I ns/openshift-machine-api pod/machine-api-operator-7fffdc9787-2dc5j node/ci-op-0hj976p7-875d2-fpsvs-master-1 container/machine-api-operator reason/Killing
Dec 01 02:38:21.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Dec 01 02:38:21.001 W ns/openshift-apiserver pod/apiserver-fcb4b6656-qtm6z reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 02:38:21.036 I ns/openshift-machine-api pod/machine-api-operator-7fffdc9787-2dc5j node/ci-op-0hj976p7-875d2-fpsvs-master-1 reason/Deleted
Dec 01 02:38:32.000 I ns/openshift-apiserver pod/apiserver-5d567bbd7-m6wd9 node/apiserver-5d567bbd7-m6wd9 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Dec 01 02:38:32.000 I ns/openshift-apiserver pod/apiserver-5d567bbd7-m6wd9 node/apiserver-5d567bbd7-m6wd9 reason/TerminationStoppedServing Server has stopped listening
Dec 01 02:39:14.979 I ns/openshift-marketplace pod/community-operators-grx9q node/ reason/Created
Dec 01 02:39:14.979 I ns/openshift-marketplace pod/community-operators-grx9q node/ci-op-0hj976p7-875d2-fpsvs-worker-b-gcfhb reason/Scheduled
Dec 01 02:39:16.000 I ns/openshift-marketplace pod/community-operators-grx9q node/ci-op-0hj976p7-875d2-fpsvs-worker-b-gcfhb container/registry-server reason/Pulling image/registry.redhat.io/redhat/community-operator-index:v4.9
Dec 01 02:39:16.000 I ns/openshift-marketplace pod/community-operators-grx9q reason/AddedInterface Add eth0 [10.131.0.36/23] from openshift-sdn
#1598130992516698112 build-log.txt.gz (19 hours ago)
Dec 01 02:39:53.000 I ns/openshift-marketplace pod/redhat-operators-cfq92 reason/AddedInterface Add eth0 [10.131.0.37/23] from openshift-sdn
Dec 01 02:39:54.631 I ns/openshift-marketplace pod/redhat-operators-cfq92 node/ci-op-0hj976p7-875d2-fpsvs-worker-b-gcfhb container/registry-server reason/ContainerStart duration/3.00s
Dec 01 02:40:01.754 I ns/openshift-marketplace pod/redhat-operators-cfq92 node/ci-op-0hj976p7-875d2-fpsvs-worker-b-gcfhb container/registry-server reason/Ready
Dec 01 02:40:01.756 I ns/openshift-marketplace pod/redhat-operators-cfq92 node/ci-op-0hj976p7-875d2-fpsvs-worker-b-gcfhb reason/GracefulDelete duration/1s
Dec 01 02:40:03.000 I ns/openshift-marketplace pod/redhat-operators-cfq92 node/ci-op-0hj976p7-875d2-fpsvs-worker-b-gcfhb container/registry-server reason/Killing
Dec 01 02:40:04.000 I ns/openshift-apiserver pod/apiserver-5d567bbd7-r62xg node/apiserver-5d567bbd7-r62xg reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Dec 01 02:40:04.000 I ns/openshift-apiserver pod/apiserver-5d567bbd7-r62xg node/apiserver-5d567bbd7-r62xg reason/TerminationStoppedServing Server has stopped listening
Dec 01 02:40:04.673 I ns/openshift-marketplace pod/redhat-operators-cfq92 node/ci-op-0hj976p7-875d2-fpsvs-worker-b-gcfhb container/registry-server reason/ContainerExit code/0 cause/Completed
Dec 01 02:40:04.679 W ns/openshift-apiserver pod/apiserver-fcb4b6656-2xkfw reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 02:40:04.735 I ns/openshift-marketplace pod/redhat-operators-cfq92 node/ci-op-0hj976p7-875d2-fpsvs-worker-b-gcfhb reason/Deleted
Dec 01 02:40:49.086 - 16s   W ns/openshift-apiserver pod/apiserver-fcb4b6656-2xkfw node/ pod has been pending longer than a minute
#1598082838131904512 build-log.txt.gz (22 hours ago)
Nov 30 23:13:29.000 I ns/openshift-kube-apiserver pod/installer-9-ci-op-i0t7vqil-875d2-sw6fb-master-1 reason/StaticPodInstallerCompleted Successfully installed revision 9
Nov 30 23:13:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-1 node/ci-op-i0t7vqil-875d2-sw6fb-master-1 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 30 23:13:29.835 I ns/openshift-kube-apiserver pod/installer-9-ci-op-i0t7vqil-875d2-sw6fb-master-1 node/ci-op-i0t7vqil-875d2-sw6fb-master-1 container/installer reason/ContainerExit code/0 cause/Completed
Nov 30 23:14:01.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-i0t7vqil-875d2-sw6fb-master-0_65308268-42a0-4075-a6bf-8229e45487a3 became leader
Nov 30 23:14:25.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.3728747280381413 over 5 minutes on "GCP"; disk metrics are: etcd-ci-op-i0t7vqil-875d2-sw6fb-master-1=0.02586666666666665,etcd-ci-op-i0t7vqil-875d2-sw6fb-master-2=0.006719999999999998,etcd-ci-op-i0t7vqil-875d2-sw6fb-master-0=0.011880000000000043. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 30 23:14:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-1 node/ci-op-i0t7vqil-875d2-sw6fb-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 23:14:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-1 node/ci-op-i0t7vqil-875d2-sw6fb-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 23:14:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-1 node/ci-op-i0t7vqil-875d2-sw6fb-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 23:14:41.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-1 node/ci-op-i0t7vqil-875d2-sw6fb-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 23:14:44.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-1 node/ci-op-i0t7vqil-875d2-sw6fb-master-1 container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72
Nov 30 23:14:44.738 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-1 node/ci-op-i0t7vqil-875d2-sw6fb-master-1 reason/ForceDelete mirrored/true
#1598082838131904512 build-log.txt.gz (22 hours ago)
Nov 30 23:15:57.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-i0t7vqil-875d2-sw6fb-master-1_fb17f5a0-ae8b-4f53-8454-e528f4bbfc70 became leader
Nov 30 23:15:57.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (12 times)
Nov 30 23:16:00.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (13 times)
Nov 30 23:16:31.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (14 times)
Nov 30 23:17:03.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-0 node/ci-op-i0t7vqil-875d2-sw6fb-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 23:17:03.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-0 node/ci-op-i0t7vqil-875d2-sw6fb-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 23:17:03.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-0 node/ci-op-i0t7vqil-875d2-sw6fb-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 23:17:05.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-0 node/ci-op-i0t7vqil-875d2-sw6fb-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
#1598082838131904512 build-log.txt.gz (22 hours ago)
Nov 30 23:19:31.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (32 times)
Nov 30 23:19:41.031 I ns/openshift-marketplace pod/certified-operators-5rg7v node/ci-op-i0t7vqil-875d2-sw6fb-worker-c-wk7pb reason/Scheduled
Nov 30 23:19:41.035 I ns/openshift-marketplace pod/certified-operators-5rg7v node/ reason/Created
Nov 30 23:19:43.000 I ns/openshift-marketplace pod/certified-operators-5rg7v node/ci-op-i0t7vqil-875d2-sw6fb-worker-c-wk7pb container/registry-server reason/Pulling image/registry.redhat.io/redhat/certified-operator-index:v4.9
Nov 30 23:19:43.000 I ns/openshift-marketplace pod/certified-operators-5rg7v reason/AddedInterface Add eth0 [10.128.2.29/23] from openshift-sdn
Nov 30 23:19:43.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-2 node/ci-op-i0t7vqil-875d2-sw6fb-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 23:19:43.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-2 node/ci-op-i0t7vqil-875d2-sw6fb-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 23:19:43.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-i0t7vqil-875d2-sw6fb-master-2 node/ci-op-i0t7vqil-875d2-sw6fb-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 23:19:44.000 I ns/openshift-marketplace pod/certified-operators-5rg7v node/ci-op-i0t7vqil-875d2-sw6fb-worker-c-wk7pb container/registry-server reason/Created
Nov 30 23:19:44.000 I ns/openshift-marketplace pod/certified-operators-5rg7v node/ci-op-i0t7vqil-875d2-sw6fb-worker-c-wk7pb container/registry-server reason/Pulled duration/0.947s image/registry.redhat.io/redhat/certified-operator-index:v4.9
Nov 30 23:19:44.000 I ns/openshift-marketplace pod/certified-operators-5rg7v node/ci-op-i0t7vqil-875d2-sw6fb-worker-c-wk7pb container/registry-server reason/Started
#1598082838131904512 build-log.txt.gz (22 hours ago)
Nov 30 23:28:31.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Nov 30 23:28:31.939 W ns/openshift-apiserver pod/apiserver-dd876fcdf-wzjmf reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 23:28:31.949 I ns/openshift-machine-api pod/machine-api-operator-7fffdc9787-vssvm node/ci-op-i0t7vqil-875d2-sw6fb-master-1 container/kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
Nov 30 23:28:31.949 E ns/openshift-machine-api pod/machine-api-operator-7fffdc9787-vssvm node/ci-op-i0t7vqil-875d2-sw6fb-master-1 container/machine-api-operator reason/ContainerExit code/2 cause/Error
Nov 30 23:28:31.977 I ns/openshift-machine-api pod/machine-api-operator-7fffdc9787-vssvm node/ci-op-i0t7vqil-875d2-sw6fb-master-1 reason/Deleted
Nov 30 23:28:42.000 I ns/openshift-apiserver pod/apiserver-55969b5987-sp2wv node/apiserver-55969b5987-sp2wv reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 23:28:42.000 I ns/openshift-apiserver pod/apiserver-55969b5987-sp2wv node/apiserver-55969b5987-sp2wv reason/TerminationStoppedServing Server has stopped listening
Nov 30 23:29:27.586 - 15s   W ns/openshift-apiserver pod/apiserver-dd876fcdf-wzjmf node/ pod has been pending longer than a minute
Nov 30 23:29:42.000 I ns/openshift-apiserver pod/apiserver-55969b5987-sp2wv node/apiserver-55969b5987-sp2wv reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 23:29:43.917 I ns/openshift-apiserver pod/apiserver-55969b5987-sp2wv node/ci-op-i0t7vqil-875d2-sw6fb-master-0 container/openshift-apiserver reason/ContainerExit code/0 cause/Completed
Nov 30 23:29:43.917 I ns/openshift-apiserver pod/apiserver-55969b5987-sp2wv node/ci-op-i0t7vqil-875d2-sw6fb-master-0 container/openshift-apiserver-check-endpoints reason/ContainerExit code/0 cause/Completed
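
The "0/6 nodes are available" warnings are the expected transient state of this rollout on a 3-master/3-worker cluster: the replacement openshift-apiserver pod needs a master node (node affinity) and will not share a node with another apiserver replica (pod anti-affinity), so it stays Pending until an old replica vacates a master. A dependency-free Go sketch of that feasibility check, with hypothetical node names and booleans standing in for the real label selectors:

    package main

    import "fmt"

    type node struct {
        name     string
        isMaster bool // stands in for the node-affinity selector
        hasPeer  bool // an apiserver replica already runs here (anti-affinity)
    }

    func feasible(n node) bool { return n.isMaster && !n.hasPeer }

    func main() {
        nodes := []node{
            {"master-0", true, true}, {"master-1", true, true}, {"master-2", true, true},
            {"worker-a", false, false}, {"worker-b", false, false}, {"worker-c", false, false},
        }
        ok, affinity, anti := 0, 0, 0
        for _, n := range nodes {
            switch {
            case feasible(n):
                ok++
            case !n.isMaster:
                affinity++ // "didn't match Pod's node affinity/selector"
            default:
                anti++ // "didn't match pod anti-affinity rules"
            }
        }
        fmt.Printf("%d/%d nodes are available: %d affinity, %d anti-affinity\n",
            ok, len(nodes), affinity, anti)
    }

Once one old replica finishes its graceful termination (the TerminationGracefulTerminationFinished events), its master becomes feasible again and the Pending pod schedules.
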
#1598082838131904512 build-log.txt.gz (22 hours ago)
Nov 30 23:30:06.966 W ns/openshift-apiserver pod/apiserver-dd876fcdf-9k8gs reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 23:30:07.000 I ns/openshift-operator-lifecycle-manager cronjob/collect-profiles reason/SuccessfulDelete Deleted job collect-profiles-27830805
Nov 30 23:30:07.114 I ns/openshift-operator-lifecycle-manager pod/collect-profiles-27830805--1-dm7fl node/ci-op-i0t7vqil-875d2-sw6fb-worker-b-8l858 reason/DeletedAfterCompletion
Nov 30 23:30:09.000 I ns/openshift-apiserver pod/apiserver-55969b5987-9fb6k node/apiserver-55969b5987-9fb6k reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 23:30:09.000 I ns/openshift-apiserver pod/apiserver-55969b5987-9fb6k node/apiserver-55969b5987-9fb6k reason/TerminationStoppedServing Server has stopped listening
Nov 30 23:30:22.456 I ns/openshift-marketplace pod/redhat-operators-b84px node/ reason/Created
Nov 30 23:30:22.463 I ns/openshift-marketplace pod/certified-operators-qd4b4 node/ci-op-i0t7vqil-875d2-sw6fb-worker-c-wk7pb reason/Scheduled
Nov 30 23:30:22.471 I ns/openshift-marketplace pod/redhat-operators-b84px node/ci-op-i0t7vqil-875d2-sw6fb-worker-b-8l858 reason/Scheduled
Nov 30 23:30:22.483 I ns/openshift-marketplace pod/certified-operators-qd4b4 node/ reason/Created
#1597639927313469440 build-log.txt.gz (2 days ago)
Nov 29 18:00:09.000 I ns/openshift-kube-apiserver pod/installer-11-ci-op-q97jfpbl-875d2-wtljd-master-1 reason/StaticPodInstallerCompleted Successfully installed revision 11
Nov 29 18:00:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-1 node/ci-op-q97jfpbl-875d2-wtljd-master-1 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 18:00:10.913 I ns/openshift-kube-apiserver pod/installer-11-ci-op-q97jfpbl-875d2-wtljd-master-1 node/ci-op-q97jfpbl-875d2-wtljd-master-1 container/installer reason/ContainerExit code/0 cause/Completed
Nov 29 18:00:13.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-q97jfpbl-875d2-wtljd-master-0_a7a28dd2-cde3-4779-991c-d38719426716 became leader
Nov 29 18:01:04.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.1122018640787714 over 5 minutes on "GCP"; disk metrics are: etcd-ci-op-q97jfpbl-875d2-wtljd-master-1=0.014144000000000004,etcd-ci-op-q97jfpbl-875d2-wtljd-master-2=0.007858181818181804,etcd-ci-op-q97jfpbl-875d2-wtljd-master-0=0.007856470588235296. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 29 18:01:19.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-1 node/ci-op-q97jfpbl-875d2-wtljd-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 18:01:19.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-1 node/ci-op-q97jfpbl-875d2-wtljd-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 18:01:19.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-1 node/ci-op-q97jfpbl-875d2-wtljd-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 18:01:21.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-1 node/ci-op-q97jfpbl-875d2-wtljd-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 18:01:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-1 node/ci-op-q97jfpbl-875d2-wtljd-master-1 container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.9-2022-11-29-170426@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa
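
The EtcdLeaderChangeMetrics warning above reports how much etcd's leader-change counter grew over a 5-minute window; during a rolling restart of the masters, a value near 2 usually reflects the restarts themselves rather than the inadequate storage the message suggests. A Go sketch of the counter-delta arithmetic, assuming two samples and a PromQL-increase()-style extrapolation to the full window (the sample values are hypothetical):

    package main

    import (
        "fmt"
        "time"
    )

    // increase scales a counter's observed delta up to the full query window,
    // roughly what a PromQL increase() does with its first and last samples.
    func increase(first, last float64, firstT, lastT time.Time, window time.Duration) float64 {
        elapsed := lastT.Sub(firstT).Seconds()
        if elapsed <= 0 {
            return 0
        }
        return (last - first) * window.Seconds() / elapsed
    }

    func main() {
        t0 := time.Now()
        // Hypothetical: the leader-changes counter moved from 3 to 5 in 4m44s.
        got := increase(3, 5, t0, t0.Add(284*time.Second), 5*time.Minute)
        fmt.Printf("leader change increase over 5m: %.2f\n", got) // ~2.11, like the value above
    }
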
#1597639927313469440 build-log.txt.gz (2 days ago)
Nov 29 18:02:50.439 I ns/openshift-kube-apiserver pod/installer-11-ci-op-q97jfpbl-875d2-wtljd-master-0 node/ci-op-q97jfpbl-875d2-wtljd-master-0 container/installer reason/ContainerExit code/0 cause/Completed
Nov 29 18:02:53.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-11-29-170426@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (13 times)
Nov 29 18:02:55.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-11-29-170426@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (14 times)
Nov 29 18:03:05.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-q97jfpbl-875d2-wtljd-master-1_ebb821e0-bb05-4b07-8c50-f0ee6ee4d17e became leader
Nov 29 18:03:13.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-11-29-170426@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (15 times)
Nov 29 18:03:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-0 node/ci-op-q97jfpbl-875d2-wtljd-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 18:03:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-0 node/ci-op-q97jfpbl-875d2-wtljd-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 18:03:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-0 node/ci-op-q97jfpbl-875d2-wtljd-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 18:04:01.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-0 node/ci-op-q97jfpbl-875d2-wtljd-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 18:04:05.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-0 node/ci-op-q97jfpbl-875d2-wtljd-master-0 container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.9-2022-11-29-170426@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa
Nov 29 18:04:05.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-11-29-170426@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (16 times)
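
The recurring MultipleVersions events are the operator observing its operand pods split across two image digests while the revision rollout is in flight; they are informational and stop once every master runs the new image. The check amounts to collecting the distinct images in play, as in this dependency-free Go sketch with hypothetical digests:

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    func main() {
        // Images currently reported across the operand pods (hypothetical digests).
        podImages := []string{
            "registry.example/ocp@sha256:old",
            "registry.example/ocp@sha256:new",
            "registry.example/ocp@sha256:new",
        }
        seen := map[string]bool{}
        for _, img := range podImages {
            seen[img] = true
        }
        if len(seen) > 1 {
            imgs := make([]string, 0, len(seen))
            for img := range seen {
                imgs = append(imgs, img)
            }
            sort.Strings(imgs)
            fmt.Println("multiple versions found, probably in transition:", strings.Join(imgs, ","))
        }
    }
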
#1597639927313469440 build-log.txt.gz (2 days ago)
Nov 29 18:05:15.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-11-29-170426@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (30 times)
Nov 29 18:05:19.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-11-29-170426@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (31 times)
Nov 29 18:05:20.000 W ns/openshift-network-diagnostics node/ci-op-q97jfpbl-875d2-wtljd-worker-b-l9qf6 reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ci-op-q97jfpbl-875d2-wtljd-master-1: failed to establish a TCP connection to 10.0.0.3:6443: dial tcp 10.0.0.3:6443: connect: connection refused
Nov 29 18:05:20.000 I ns/openshift-network-diagnostics node/ci-op-q97jfpbl-875d2-wtljd-worker-b-l9qf6 reason/ConnectivityRestored roles/worker Connectivity restored after 59.999575222s: kubernetes-apiserver-endpoint-ci-op-q97jfpbl-875d2-wtljd-master-1: tcp connection to 10.0.0.3:6443 succeeded
Nov 29 18:06:13.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.ci.openshift.org/ocp/4.9-2022-11-29-170426@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (32 times)
Nov 29 18:06:23.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-2 node/ci-op-q97jfpbl-875d2-wtljd-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 18:06:23.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-2 node/ci-op-q97jfpbl-875d2-wtljd-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 18:06:23.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-2 node/ci-op-q97jfpbl-875d2-wtljd-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 18:06:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-q97jfpbl-875d2-wtljd-master-2 node/ci-op-q97jfpbl-875d2-wtljd-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 18:06:28.877 I ns/openshift-marketplace pod/community-operators-hwgtb node/ci-op-q97jfpbl-875d2-wtljd-worker-b-l9qf6 reason/Scheduled
Nov 29 18:06:28.882 I ns/openshift-marketplace pod/community-operators-hwgtb node/ reason/Created
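
The ConnectivityOutageDetected / ConnectivityRestored pair above comes from the network-diagnostics checker dialing each apiserver endpoint on what appears to be a one-minute interval (hence "restored after 59.999575222s"), so a single refused dial during a kube-apiserver restart registers as a minute-long outage. A minimal Go sketch of such a TCP prober; the target comes from the log and the interval is an assumption:

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        target := "10.0.0.3:6443" // the kubernetes-apiserver endpoint from the log above
        down, downSince := false, time.Time{}
        for range time.Tick(time.Minute) { // one probe per interval
            conn, err := net.DialTimeout("tcp", target, 5*time.Second)
            if err == nil {
                conn.Close()
            }
            switch {
            case err != nil && !down:
                down, downSince = true, time.Now()
                log.Printf("Connectivity outage detected: %v", err)
            case err == nil && down:
                down = false
                log.Printf("Connectivity restored after %s", time.Since(downSince))
            }
        }
    }
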
#1597639927313469440 build-log.txt.gz (2 days ago)
Nov 29 18:15:10.666 I ns/openshift-machine-api pod/machine-api-operator-59b65cfb45-7lh9z node/ci-op-q97jfpbl-875d2-wtljd-master-0 container/machine-api-operator reason/Ready
Nov 29 18:15:10.734 I ns/openshift-machine-api pod/machine-api-operator-7fffdc9787-jgmtc node/ci-op-q97jfpbl-875d2-wtljd-master-1 reason/GracefulDelete duration/30s
Nov 29 18:15:11.625 I ns/openshift-machine-api pod/machine-api-operator-7fffdc9787-jgmtc node/ci-op-q97jfpbl-875d2-wtljd-master-1 container/kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
Nov 29 18:15:11.625 E ns/openshift-machine-api pod/machine-api-operator-7fffdc9787-jgmtc node/ci-op-q97jfpbl-875d2-wtljd-master-1 container/machine-api-operator reason/ContainerExit code/2 cause/Error
Nov 29 18:15:11.653 I ns/openshift-machine-api pod/machine-api-operator-7fffdc9787-jgmtc node/ci-op-q97jfpbl-875d2-wtljd-master-1 reason/Deleted
Nov 29 18:15:20.000 I ns/openshift-apiserver pod/apiserver-74764654fd-bxxzs node/apiserver-74764654fd-bxxzs reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 18:15:20.000 I ns/openshift-apiserver pod/apiserver-74764654fd-bxxzs node/apiserver-74764654fd-bxxzs reason/TerminationStoppedServing Server has stopped listening
Nov 29 18:16:05.895 - 15s   W ns/openshift-apiserver pod/apiserver-6f56877fc4-wbgqj node/ pod has been pending longer than a minute
Nov 29 18:16:20.000 I ns/openshift-apiserver pod/apiserver-74764654fd-bxxzs node/apiserver-74764654fd-bxxzs reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 18:16:21.132 I ns/openshift-apiserver pod/apiserver-74764654fd-bxxzs node/ci-op-q97jfpbl-875d2-wtljd-master-0 container/openshift-apiserver reason/ContainerExit code/0 cause/Completed
Nov 29 18:16:21.132 I ns/openshift-apiserver pod/apiserver-74764654fd-bxxzs node/ci-op-q97jfpbl-875d2-wtljd-master-0 container/openshift-apiserver-check-endpoints reason/ContainerExit code/0 cause/Completed
#1597639927313469440 build-log.txt.gz (2 days ago)
Nov 29 18:16:32.453 W ns/openshift-apiserver pod/apiserver-6f56877fc4-4v6mm reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 29 18:16:32.462 I ns/openshift-apiserver pod/apiserver-6f56877fc4-4v6mm node/ reason/Created
Nov 29 18:16:35.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 29 18:16:45.801 I ns/openshift-marketplace pod/community-operators-ptfdn node/ reason/Created
Nov 29 18:16:45.814 I ns/openshift-marketplace pod/community-operators-ptfdn node/ci-op-q97jfpbl-875d2-wtljd-worker-b-l9qf6 reason/Scheduled
Nov 29 18:16:47.000 I ns/openshift-apiserver pod/apiserver-74764654fd-kw4t9 node/apiserver-74764654fd-kw4t9 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 18:16:47.000 I ns/openshift-apiserver pod/apiserver-74764654fd-kw4t9 node/apiserver-74764654fd-kw4t9 reason/TerminationStoppedServing Server has stopped listening
Nov 29 18:16:48.000 I ns/openshift-marketplace pod/community-operators-ptfdn node/ci-op-q97jfpbl-875d2-wtljd-worker-b-l9qf6 container/registry-server reason/Pulling image/registry.redhat.io/redhat/community-operator-index:v4.9
Nov 29 18:16:48.000 I ns/openshift-marketplace pod/community-operators-ptfdn reason/AddedInterface Add eth0 [10.131.0.46/23] from openshift-sdn
Nov 29 18:16:49.000 I ns/openshift-marketplace pod/community-operators-ptfdn node/ci-op-q97jfpbl-875d2-wtljd-worker-b-l9qf6 container/registry-server reason/Created
Nov 29 18:16:49.000 I ns/openshift-marketplace pod/community-operators-ptfdn node/ci-op-q97jfpbl-875d2-wtljd-worker-b-l9qf6 container/registry-server reason/Pulled duration/0.892s image/registry.redhat.io/redhat/community-operator-index:v4.9
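
The Progressing message tracks the openshift-apiserver Deployment rollout in two stages: first the controller has to observe the new spec (status.observedGeneration catching up to metadata.generation), then updatedReplicas climbs 1/3, 2/3, 3/3 as each replica is replaced. A small Go sketch of how such a status line can be derived, using a stripped-down stand-in for the Deployment status fields:

    package main

    import "fmt"

    // Stand-in for the handful of Deployment fields the message is built from.
    type deploymentStatus struct {
        desiredGeneration  int64 // metadata.generation
        observedGeneration int64 // status.observedGeneration
        updatedReplicas    int32 // status.updatedReplicas
        replicas           int32 // spec.replicas
    }

    func progressing(s deploymentStatus) string {
        if s.observedGeneration < s.desiredGeneration {
            return fmt.Sprintf("observed generation is %d, desired generation is %d.",
                s.observedGeneration, s.desiredGeneration)
        }
        return fmt.Sprintf("%d/%d pods have been updated to the latest generation",
            s.updatedReplicas, s.replicas)
    }

    func main() {
        fmt.Println(progressing(deploymentStatus{6, 5, 0, 3})) // controller lagging the new spec
        fmt.Println(progressing(deploymentStatus{6, 6, 2, 3})) // rollout in flight
    }
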
#1597639927313469440 build-log.txt.gz (2 days ago)
Nov 29 18:18:14.000 I ns/openshift-machine-api pod/machine-api-controllers-65f9bddf65-hftg5 node/ci-op-q97jfpbl-875d2-wtljd-master-0 container/machine-controller reason/Pulling image/registry.ci.openshift.org/ocp/4.9-2022-11-29-170426@sha256:90333cb583804df3cc2751d19ccccfc9fa5756b8e09736d80ecf500fec258913
Nov 29 18:18:14.000 I ns/openshift-machine-api pod/machine-api-controllers-65f9bddf65-hftg5 node/ci-op-q97jfpbl-875d2-wtljd-master-0 container/machineset-controller reason/Created
Nov 29 18:18:14.000 I ns/openshift-machine-api pod/machine-api-controllers-65f9bddf65-hftg5 node/ci-op-q97jfpbl-875d2-wtljd-master-0 container/machineset-controller reason/Pulled image/registry.ci.openshift.org/ocp/4.9-2022-11-29-170426@sha256:7c6e7e34573ead03c1d0108c9e0aa65ac5b614460982d2dd1d596163a3914c3d
Nov 29 18:18:14.000 I ns/openshift-machine-api pod/machine-api-controllers-65f9bddf65-hftg5 node/ci-op-q97jfpbl-875d2-wtljd-master-0 container/machineset-controller reason/Started
Nov 29 18:18:14.000 I ns/openshift-machine-api pod/machine-api-controllers-65f9bddf65-hftg5 reason/AddedInterface Add eth0 [10.129.0.71/23] from openshift-sdn
Nov 29 18:18:14.000 I ns/openshift-apiserver pod/apiserver-74764654fd-pmkw2 node/apiserver-74764654fd-pmkw2 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 18:18:14.000 I ns/openshift-apiserver pod/apiserver-74764654fd-pmkw2 node/apiserver-74764654fd-pmkw2 reason/TerminationStoppedServing Server has stopped listening
Nov 29 18:18:19.000 I ns/openshift-machine-api pod/machine-api-controllers-65f9bddf65-hftg5 node/ci-op-q97jfpbl-875d2-wtljd-master-0 container/machine-controller reason/Created
Nov 29 18:18:19.000 I ns/openshift-machine-api pod/machine-api-controllers-65f9bddf65-hftg5 node/ci-op-q97jfpbl-875d2-wtljd-master-0 container/machine-controller reason/Pulled duration/5.034s image/registry.ci.openshift.org/ocp/4.9-2022-11-29-170426@sha256:90333cb583804df3cc2751d19ccccfc9fa5756b8e09736d80ecf500fec258913
Nov 29 18:18:19.000 I ns/openshift-machine-api pod/machine-api-controllers-65f9bddf65-hftg5 node/ci-op-q97jfpbl-875d2-wtljd-master-0 container/machine-controller reason/Started
Nov 29 18:18:19.000 I ns/openshift-machine-api pod/machine-api-controllers-65f9bddf65-hftg5 node/ci-op-q97jfpbl-875d2-wtljd-master-0 container/nodelink-controller reason/Pulled image/registry.ci.openshift.org/ocp/4.9-2022-11-29-170426@sha256:7c6e7e34573ead03c1d0108c9e0aa65ac5b614460982d2dd1d596163a3914c3d
release-openshift-origin-installer-e2e-aws-upgrade-4.8-to-4.9-to-4.10-to-4.11-ci (all) - 7 runs, 100% failed, 29% of failures match = 29% impact
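
The impact figure in these job headers is simply the two percentages multiplied: the share of runs that failed times the share of those failures whose logs match the search. Here 100% × 29% = 29%; for the 4.9-upgrade-from-stable-4.8 job further down, 60% × 100% = 60%. A one-line check in Go:

    package main

    import "fmt"

    func main() {
        // impact = fraction of runs failed × fraction of failures matching the search
        fmt.Printf("%.0f%% impact\n", 1.00*0.29*100) // 7 runs, 100% failed, 29% match
        fmt.Printf("%.0f%% impact\n", 0.60*1.00*100) // 10 runs, 60% failed, 100% match
    }
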
#1598109634206371840 build-log.txt.gz (19 hours ago)
Dec 01 01:04:30.757 I ns/openshift-marketplace pod/redhat-operators-m7pzl node/ip-10-0-147-4.us-west-2.compute.internal uid/c721cf48-cb1c-4a05-a303-718b07414777 reason/Deleted
Dec 01 01:04:30.984 I ns/e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-2387 pod/pod-configmap-7dae7ab9-9d33-43c6-aefe-c52c4b5058fa node/ip-10-0-147-4.us-west-2.compute.internal uid/70055e70-6a96-4c0a-9aea-63dd57d5c8da reason/Deleted
Dec 01 01:04:30.984 I ns/e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-2387 pod/pod-configmap-7dae7ab9-9d33-43c6-aefe-c52c4b5058fa node/ip-10-0-147-4.us-west-2.compute.internal uid/70055e70-6a96-4c0a-9aea-63dd57d5c8da reason/DeletedAfterCompletion
Dec 01 01:06:52.000 - 6969s I disruption/service-load-balancer-with-pdb connection/reused disruption/service-load-balancer-with-pdb connection/reused started responding to GET requests over reused connections
Dec 01 01:06:53.853 I ns/kube-system openshifttest/service-load-balancer-with-pdb reason/DisruptionEnded disruption/service-load-balancer-with-pdb connection/reused started responding to GET requests over reused connections
Dec 01 01:07:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-185-219.us-west-2.compute.internal node/ip-10-0-185-219 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Dec 01 01:07:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-185-219.us-west-2.compute.internal node/ip-10-0-185-219 reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:07:19.000 - 3661s I disruption/service-load-balancer-with-pdb connection/new disruption/service-load-balancer-with-pdb connection/new started responding to GET requests over new connections
Dec 01 01:07:19.915 I ns/openshift-cluster-version clusterversion/cluster reason/UpgradeStarted version/ image/quay.io/openshift-release-dev/ocp-release:4.9.52-x86_64
Dec 01 01:07:20.000 I ns/openshift-cluster-version clusterversion/version reason/RetrievePayload retrieving payload version="" image="quay.io/openshift-release-dev/ocp-release:4.9.52-x86_64"
Dec 01 01:07:20.000 I ns/openshift-cluster-version job/version--w82bx reason/SuccessfulCreate Created pod: version--w82bx-lfm8m
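
The disruption/service-load-balancer-with-pdb entries above come from the e2e disruption monitor, which polls the same backend through two HTTP clients, one reusing keep-alive connections and one forcing a fresh TCP connection per request; that is why reused- and new-connection availability can recover at different times (01:06:52 vs 01:07:19). A Go sketch of the two client configurations, with a hypothetical URL:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        clients := map[string]*http.Client{
            "reused": {}, // the default transport reuses keep-alive connections
            "new": {
                Transport: &http.Transport{DisableKeepAlives: true}, // fresh TCP conn per GET
            },
        }
        for name, c := range clients {
            resp, err := c.Get("https://service-load-balancer.example/health") // hypothetical URL
            if err != nil {
                fmt.Printf("disruption/service-load-balancer-with-pdb connection/%s: %v\n", name, err)
                continue
            }
            resp.Body.Close()
            fmt.Printf("connection/%s started responding to GET requests\n", name)
        }
    }
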
#1598109634206371840 build-log.txt.gz (19 hours ago)
Dec 01 01:15:11.302 - 9s    I ns/openshift-marketplace pod/community-operators-vhlvv uid/e41ea5ee-9fa9-4dac-808a-79bbbe3cd3f1 constructed/true reason/GracefulDelete duration/1s
Dec 01 01:15:12.852 I ns/openshift-marketplace pod/community-operators-vhlvv node/ip-10-0-147-4.us-west-2.compute.internal uid/e41ea5ee-9fa9-4dac-808a-79bbbe3cd3f1 container/registry-server reason/ContainerExit code/0 cause/Completed
Dec 01 01:15:12.852 W ns/openshift-marketplace pod/community-operators-vhlvv node/ip-10-0-147-4.us-west-2.compute.internal uid/e41ea5ee-9fa9-4dac-808a-79bbbe3cd3f1 container/registry-server reason/NotReady
Dec 01 01:15:20.742 I ns/openshift-marketplace pod/community-operators-vhlvv node/ip-10-0-147-4.us-west-2.compute.internal uid/e41ea5ee-9fa9-4dac-808a-79bbbe3cd3f1 reason/Deleted
Dec 01 01:15:27.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.0739501491515817 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-182-102.us-west-2.compute.internal=0.0037106666666666815,etcd-ip-10-0-185-219.us-west-2.compute.internal=NaN,etcd-ip-10-0-242-90.us-west-2.compute.internal=0.007560000000000016. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Dec 01 01:18:04.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-185-219.us-west-2.compute.internal node/ip-10-0-185-219 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Dec 01 01:18:04.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-185-219.us-west-2.compute.internal node/ip-10-0-185-219 reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:19:00.000 - 1s    I ns/openshift-marketplace pod/redhat-marketplace-cmr5d uid/2d73fa4b-d425-4395-b61b-c25dee4c0a75 constructed/true reason/Created
Dec 01 01:19:00.985 I ns/openshift-marketplace pod/redhat-marketplace-cmr5d reason/Scheduled Successfully assigned openshift-marketplace/redhat-marketplace-cmr5d to ip-10-0-147-4.us-west-2.compute.internal
Dec 01 01:19:01.071 I ns/openshift-marketplace pod/redhat-marketplace-cmr5d node/ uid/2d73fa4b-d425-4395-b61b-c25dee4c0a75 reason/Created
Dec 01 01:19:01.071 I ns/openshift-marketplace pod/redhat-marketplace-cmr5d node/ip-10-0-147-4.us-west-2.compute.internal uid/2d73fa4b-d425-4395-b61b-c25dee4c0a75 reason/Scheduled node/ip-10-0-147-4.us-west-2.compute.internal
#1598109634206371840 build-log.txt.gz (19 hours ago)
Dec 01 01:22:12.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (17 times)
Dec 01 01:23:16.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (18 times)
Dec 01 01:23:19.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (19 times)
Dec 01 01:23:50.000 W ns/openshift-marketplace pod/community-operators-57ghn node/ip-10-0-147-4.us-west-2.compute.internal reason/Unhealthy Liveness probe failed: timeout: health rpc did not complete within 1s\n
Dec 01 01:23:50.000 W ns/openshift-marketplace pod/community-operators-57ghn node/ip-10-0-147-4.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: timeout: health rpc did not complete within 1s\n
Dec 01 01:24:01.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-182-102.us-west-2.compute.internal node/ip-10-0-182-102 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Dec 01 01:24:01.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-182-102.us-west-2.compute.internal node/ip-10-0-182-102 reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:24:12.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (20 times)
Dec 01 01:24:36.000 W ns/openshift-network-diagnostics node/ip-10-0-209-194.us-west-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.188.8:443: dial tcp 172.30.188.8:443: connect: connection refused
Dec 01 01:24:36.000 I ns/openshift-network-diagnostics node/ip-10-0-209-194.us-west-2.compute.internal reason/ConnectivityRestored roles/worker Connectivity restored after 1m0.000360354s: kubernetes-apiserver-service-cluster: tcp connection to 172.30.188.8:443 succeeded
Dec 01 01:24:37.000 W ns/openshift-network-diagnostics node/ip-10-0-209-194.us-west-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-182-102: failed to establish a TCP connection to 10.0.182.102:6443: dial tcp 10.0.182.102:6443: connect: connection refused
#1598109634206371840 build-log.txt.gz (19 hours ago)
Dec 01 01:26:31.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (35 times)
Dec 01 01:26:36.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ip-10-0-182-102_da537081-de33-42bb-af86-efdc7f83e37b became leader
Dec 01 01:27:12.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (36 times)
Dec 01 01:28:12.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (37 times)
Dec 01 01:29:14.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (38 times)
Dec 01 01:29:57.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-242-90.us-west-2.compute.internal node/ip-10-0-242-90 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Dec 01 01:29:57.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-242-90.us-west-2.compute.internal node/ip-10-0-242-90 reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:30:00.000 I ns/openshift-multus pod/ip-reconciler-27830970-wq4mp node/ip-10-0-147-4.us-west-2.compute.internal container/whereabouts reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:052d7ace6ea70f688bc2eda54544707251462fad6edf3c2b0b2289be35ceccd6
Dec 01 01:30:00.000 I ns/openshift-multus pod/ip-reconciler-27830970-wq4mp node/ip-10-0-147-4.us-west-2.compute.internal reason/Created Created container whereabouts
Dec 01 01:30:00.000 I ns/openshift-multus pod/ip-reconciler-27830970-wq4mp node/ip-10-0-147-4.us-west-2.compute.internal reason/Started Started container whereabouts
Dec 01 01:30:00.000 I ns/openshift-multus cronjob/ip-reconciler reason/SuccessfulCreate Created job ip-reconciler-27830970
#1598109634206371840 build-log.txt.gz (19 hours ago)
Dec 01 01:37:28.518 W ns/openshift-apiserver pod/apiserver-5445b99b4d-7rffj reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 01:37:28.641 I ns/openshift-apiserver pod/apiserver-6cf6cff77-h9d2g node/ uid/3671a7c8-aa27-457a-90fb-def19975f869 reason/Deleted
Dec 01 01:37:28.641 I ns/openshift-apiserver pod/apiserver-6cf6cff77-h9d2g node/ uid/3671a7c8-aa27-457a-90fb-def19975f869 reason/DeletedBeforeScheduling
Dec 01 01:37:28.669 I ns/openshift-apiserver pod/apiserver-5445b99b4d-7rffj node/ uid/d41ec886-ba59-4798-a86d-6f37cbc64f91 reason/Created
Dec 01 01:37:32.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Dec 01 01:37:34.000 I ns/openshift-apiserver pod/apiserver-5c6b4ccc5-zs9nk node/apiserver-5c6b4ccc5-zs9nk reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Dec 01 01:37:34.000 I ns/openshift-apiserver pod/apiserver-5c6b4ccc5-zs9nk node/apiserver-5c6b4ccc5-zs9nk reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:37:38.621 W ns/openshift-apiserver pod/apiserver-5445b99b4d-7rffj reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 01:37:38.773 I ns/openshift-machine-api pod/machine-api-operator-b645b759d-snrkq node/ip-10-0-242-90.us-west-2.compute.internal uid/28f17b96-0827-494a-afa2-b14a180b3cb8 reason/Deleted
Dec 01 01:37:40.000 W ns/openshift-apiserver pod/apiserver-5c6b4ccc5-zs9nk node/ip-10-0-182-102.us-west-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.42:8443/healthz": dial tcp 10.129.0.42:8443: connect: connection refused\nbody: \n
Dec 01 01:37:40.000 W ns/openshift-apiserver pod/apiserver-5c6b4ccc5-zs9nk node/ip-10-0-182-102.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.42:8443/healthz": dial tcp 10.129.0.42:8443: connect: connection refused\nbody: \n
#1598109634206371840 build-log.txt.gz (19 hours ago)
Dec 01 01:38:57.035 W ns/openshift-apiserver pod/apiserver-5445b99b4d-t9hqq reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 01:38:57.115 I ns/openshift-apiserver pod/apiserver-5c6b4ccc5-8cqtb node/ip-10-0-185-219.us-west-2.compute.internal uid/a1ce5fc3-a65c-4434-90ec-e2339578eb7a reason/GracefulDelete duration/70s
Dec 01 01:38:57.115 - 68s   I ns/openshift-apiserver pod/apiserver-5c6b4ccc5-8cqtb uid/a1ce5fc3-a65c-4434-90ec-e2339578eb7a constructed/true reason/GracefulDelete duration/70s
Dec 01 01:38:57.115 I ns/openshift-apiserver pod/apiserver-5445b99b4d-t9hqq node/ uid/d2c75a2c-9e80-4d5c-9cd0-ba90ea961214 reason/Created
Dec 01 01:38:58.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Dec 01 01:39:07.000 I ns/openshift-apiserver pod/apiserver-5c6b4ccc5-8cqtb node/apiserver-5c6b4ccc5-8cqtb reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Dec 01 01:39:07.000 I ns/openshift-apiserver pod/apiserver-5c6b4ccc5-8cqtb node/apiserver-5c6b4ccc5-8cqtb reason/TerminationStoppedServing Server has stopped listening
Dec 01 01:39:09.000 W ns/openshift-apiserver pod/apiserver-5c6b4ccc5-8cqtb node/ip-10-0-185-219.us-west-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused\nbody: \n
Dec 01 01:39:09.000 W ns/openshift-apiserver pod/apiserver-5c6b4ccc5-8cqtb node/ip-10-0-185-219.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused\nbody: \n
Dec 01 01:39:09.000 W ns/openshift-apiserver pod/apiserver-5c6b4ccc5-8cqtb node/ip-10-0-185-219.us-west-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused
Dec 01 01:39:09.000 W ns/openshift-apiserver pod/apiserver-5c6b4ccc5-8cqtb node/ip-10-0-185-219.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused
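
The ProbeError/Unhealthy bursts right after TerminationStoppedServing are the kubelet still probing a pod whose listener has already closed: every /healthz GET is refused until the 70s GracefulDelete above completes and the pod is removed. A minimal Go reproduction of what the probe sees, pointed at a local port with nothing listening:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        // No listener here, like a pod that has stopped serving but is not yet deleted.
        _, err := client.Get("https://127.0.0.1:8443/healthz")
        if err != nil {
            fmt.Println("Readiness probe error:", err) // "... connect: connection refused"
        }
    }
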
#1597747015343673344 build-log.txt.gz (45 hours ago)
Nov 30 01:05:41.000 I ns/e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-8411 pod/pod-secrets-9a9afff1-348a-49b3-9a99-27cc6e96fcb3 node/ip-10-0-183-143.us-west-1.compute.internal reason/Started Started container secret-volume-test
Nov 30 01:05:41.722 I ns/e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-8411 pod/pod-secrets-9a9afff1-348a-49b3-9a99-27cc6e96fcb3 node/ip-10-0-183-143.us-west-1.compute.internal uid/1368b113-e864-4932-b1c7-ea2b64ef77d4 container/secret-env-test reason/ContainerExit code/0 cause/Completed
Nov 30 01:05:41.722 I ns/e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-8411 pod/pod-secrets-9a9afff1-348a-49b3-9a99-27cc6e96fcb3 node/ip-10-0-183-143.us-west-1.compute.internal uid/1368b113-e864-4932-b1c7-ea2b64ef77d4 container/secret-volume-test reason/ContainerExit code/0 cause/Completed
Nov 30 01:05:42.659 I ns/e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-8411 pod/pod-secrets-9a9afff1-348a-49b3-9a99-27cc6e96fcb3 node/ip-10-0-183-143.us-west-1.compute.internal uid/1368b113-e864-4932-b1c7-ea2b64ef77d4 reason/Deleted
Nov 30 01:05:42.659 I ns/e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-8411 pod/pod-secrets-9a9afff1-348a-49b3-9a99-27cc6e96fcb3 node/ip-10-0-183-143.us-west-1.compute.internal uid/1368b113-e864-4932-b1c7-ea2b64ef77d4 reason/DeletedAfterCompletion
Nov 30 01:06:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-199-172.us-west-1.compute.internal node/ip-10-0-199-172 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 30 01:06:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-199-172.us-west-1.compute.internal node/ip-10-0-199-172 reason/TerminationStoppedServing Server has stopped listening
Nov 30 01:06:58.000 I ns/openshift-cluster-version pod/cluster-version-operator-ffdd5b744-zr7cp node/ip-10-0-199-172.us-west-1.compute.internal container/cluster-version-operator reason/Pulled image/quay.io/openshift-release-dev/ocp-release@sha256:5d3587e7daf108a697e1a320247a729337403d500f1ec51e49501e9817053d7f
Nov 30 01:06:58.000 I ns/openshift-cluster-version pod/cluster-version-operator-ffdd5b744-zr7cp node/ip-10-0-199-172.us-west-1.compute.internal reason/Created Created container cluster-version-operator (3 times)
Nov 30 01:06:58.000 I ns/openshift-cluster-version pod/cluster-version-operator-ffdd5b744-zr7cp node/ip-10-0-199-172.us-west-1.compute.internal reason/Started Started container cluster-version-operator (3 times)
Nov 30 01:06:58.479 I ns/openshift-cluster-version pod/cluster-version-operator-ffdd5b744-zr7cp node/ip-10-0-199-172.us-west-1.compute.internal uid/0f4476b9-a2be-4723-9aa5-447e6552adc3 container/cluster-version-operator reason/ContainerExit code/0 cause/Completed
periodic-ci-openshift-release-master-ci-4.9-upgrade-from-stable-4.8-e2e-aws-upgrade (all) - 10 runs, 60% failed, 100% of failures match = 60% impact
#1598100566188232704 build-log.txt.gz (20 hours ago)
Dec 01 00:36:46.000 I ns/openshift-marketplace pod/community-operators-k58tz node/ip-10-0-131-54.ec2.internal container/registry-server reason/Killing
Dec 01 00:36:47.526 I ns/openshift-marketplace pod/community-operators-k58tz node/ip-10-0-131-54.ec2.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Dec 01 00:36:47.537 I ns/openshift-marketplace pod/certified-operators-cg5nk node/ip-10-0-131-54.ec2.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Dec 01 00:36:53.382 I ns/openshift-marketplace pod/certified-operators-cg5nk node/ip-10-0-131-54.ec2.internal reason/Deleted
Dec 01 00:36:53.402 I ns/openshift-marketplace pod/community-operators-k58tz node/ip-10-0-131-54.ec2.internal reason/Deleted
Dec 01 00:37:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-232-155.ec2.internal node/ip-10-0-232-155 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Dec 01 00:37:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-232-155.ec2.internal node/ip-10-0-232-155 reason/TerminationStoppedServing Server has stopped listening
Dec 01 00:37:58.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-232-155.ec2.internal node/ip-10-0-232-155.ec2.internal container/kube-scheduler-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcb22579324a93faddf49caa971153fea33cec03738e76490d6de7e39a01db59
Dec 01 00:37:58.955 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-232-155.ec2.internal node/ip-10-0-232-155.ec2.internal container/kube-scheduler-recovery-controller reason/ContainerExit code/0 cause/Completed
Dec 01 00:37:59.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-232-155.ec2.internal node/ip-10-0-232-155.ec2.internal container/kube-scheduler-recovery-controller reason/Created
Dec 01 00:37:59.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-232-155.ec2.internal node/ip-10-0-232-155.ec2.internal container/kube-scheduler-recovery-controller reason/Started
#1598100566188232704 build-log.txt.gz (20 hours ago)
Dec 01 00:41:39.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (17 times)
Dec 01 00:42:43.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (18 times)
Dec 01 00:42:46.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (19 times)
Dec 01 00:43:01.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-184-63.ec2.internal node/ip-10-0-184-63 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Dec 01 00:43:01.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-184-63.ec2.internal node/ip-10-0-184-63 reason/TerminationStoppedServing Server has stopped listening
Dec 01 00:43:10.000 W ns/openshift-network-diagnostics node/ip-10-0-131-54.ec2.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.95.232:443: dial tcp 172.30.95.232:443: connect: connection refused
Dec 01 00:43:11.000 W ns/openshift-network-diagnostics node/ip-10-0-131-54.ec2.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-184-63: failed to establish a TCP connection to 10.0.184.63:6443: dial tcp 10.0.184.63:6443: connect: connection refused
#1598100566188232704 build-log.txt.gz (20 hours ago)
Dec 01 00:47:24.000 I ns/openshift-marketplace pod/community-operators-wldv9 node/ip-10-0-131-54.ec2.internal container/registry-server reason/Killing
Dec 01 00:47:25.028 I ns/openshift-marketplace pod/community-operators-wldv9 node/ip-10-0-131-54.ec2.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Dec 01 00:47:26.050 I ns/openshift-marketplace pod/community-operators-wldv9 node/ip-10-0-131-54.ec2.internal reason/Deleted
Dec 01 00:47:39.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (37 times)
Dec 01 00:48:39.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-233511@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (38 times)
Dec 01 00:48:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-224.ec2.internal node/ip-10-0-142-224 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Dec 01 00:48:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-224.ec2.internal node/ip-10-0-142-224 reason/TerminationStoppedServing Server has stopped listening
Dec 01 00:49:24.000 W ns/openshift-machine-api machineset/ci-op-nwxlgstt-1d119-54tsp-worker-us-east-1d reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (9 times)
Dec 01 00:49:24.000 W ns/openshift-machine-api machineset/ci-op-nwxlgstt-1d119-54tsp-worker-us-east-1b reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (9 times)
Dec 01 00:49:29.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-142-224.ec2.internal node/ip-10-0-142-224.ec2.internal container/kube-scheduler-recovery-controller reason/Created
Dec 01 00:49:29.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-142-224.ec2.internal node/ip-10-0-142-224.ec2.internal container/kube-scheduler-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcb22579324a93faddf49caa971153fea33cec03738e76490d6de7e39a01db59
#1598100566188232704 build-log.txt.gz (20 hours ago)
Dec 01 00:56:48.307 I ns/openshift-apiserver pod/apiserver-5887f8f678-kzq98 node/ reason/Created
Dec 01 00:56:51.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Dec 01 00:56:51.193 W ns/openshift-apiserver pod/apiserver-5887f8f678-kzq98 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 00:56:51.215 I ns/openshift-machine-api pod/machine-api-operator-b645b759d-vjr45 node/ip-10-0-232-155.ec2.internal reason/Deleted
Dec 01 00:56:54.000 I ns/openshift-apiserver pod/apiserver-85bdb9cfc9-hxznz node/apiserver-85bdb9cfc9-hxznz reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Dec 01 00:56:54.000 I ns/openshift-apiserver pod/apiserver-85bdb9cfc9-hxznz node/apiserver-85bdb9cfc9-hxznz reason/TerminationStoppedServing Server has stopped listening
Dec 01 00:57:00.000 W ns/openshift-apiserver pod/apiserver-85bdb9cfc9-hxznz node/ip-10-0-232-155.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.66:8443/healthz": dial tcp 10.128.0.66:8443: connect: connection refused\nbody: \n
Dec 01 00:57:00.000 W ns/openshift-apiserver pod/apiserver-85bdb9cfc9-hxznz node/ip-10-0-232-155.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.66:8443/healthz": dial tcp 10.128.0.66:8443: connect: connection refused\nbody: \n
Dec 01 00:57:00.000 W ns/openshift-apiserver pod/apiserver-85bdb9cfc9-hxznz node/ip-10-0-232-155.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.66:8443/healthz": dial tcp 10.128.0.66:8443: connect: connection refused
Dec 01 00:57:00.000 W ns/openshift-apiserver pod/apiserver-85bdb9cfc9-hxznz node/ip-10-0-232-155.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.66:8443/healthz": dial tcp 10.128.0.66:8443: connect: connection refused
#1598100566188232704 build-log.txt.gz (20 hours ago)
Dec 01 00:58:21.272 I ns/openshift-apiserver pod/apiserver-5887f8f678-kzq98 node/ip-10-0-232-155.ec2.internal container/openshift-apiserver reason/Ready
Dec 01 00:58:21.330 I ns/openshift-apiserver pod/apiserver-85bdb9cfc9-wq668 node/ip-10-0-142-224.ec2.internal reason/GracefulDelete duration/70s
Dec 01 00:58:21.447 W ns/openshift-apiserver pod/apiserver-5887f8f678-85lq4 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Dec 01 00:58:21.453 I ns/openshift-apiserver pod/apiserver-5887f8f678-85lq4 node/ reason/Created
Dec 01 00:58:22.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Dec 01 00:58:31.000 I ns/openshift-apiserver pod/apiserver-85bdb9cfc9-wq668 node/apiserver-85bdb9cfc9-wq668 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Dec 01 00:58:31.000 I ns/openshift-apiserver pod/apiserver-85bdb9cfc9-wq668 node/apiserver-85bdb9cfc9-wq668 reason/TerminationStoppedServing Server has stopped listening
Dec 01 00:58:32.000 W ns/openshift-apiserver pod/apiserver-85bdb9cfc9-wq668 node/ip-10-0-142-224.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.35:8443/healthz": dial tcp 10.130.0.35:8443: connect: connection refused\nbody: \n
Dec 01 00:58:32.000 W ns/openshift-apiserver pod/apiserver-85bdb9cfc9-wq668 node/ip-10-0-142-224.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.35:8443/healthz": dial tcp 10.130.0.35:8443: connect: connection refused\nbody: \n
Dec 01 00:58:32.000 W ns/openshift-apiserver pod/apiserver-85bdb9cfc9-wq668 node/ip-10-0-142-224.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.35:8443/healthz": dial tcp 10.130.0.35:8443: connect: connection refused
Dec 01 00:58:32.000 W ns/openshift-apiserver pod/apiserver-85bdb9cfc9-wq668 node/ip-10-0-142-224.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.35:8443/healthz": dial tcp 10.130.0.35:8443: connect: connection refused
#1598082838182236160 build-log.txt.gz (21 hours ago)
Nov 30 23:35:31.000 I ns/openshift-marketplace pod/community-operators-fqk4h node/ip-10-0-185-218.us-west-1.compute.internal container/registry-server reason/Killing
Nov 30 23:35:31.092 I ns/openshift-marketplace pod/certified-operators-cchns node/ip-10-0-185-218.us-west-1.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 30 23:35:32.098 I ns/openshift-marketplace pod/community-operators-fqk4h node/ip-10-0-185-218.us-west-1.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 30 23:35:39.150 I ns/openshift-marketplace pod/certified-operators-cchns node/ip-10-0-185-218.us-west-1.compute.internal reason/Deleted
Nov 30 23:35:39.211 I ns/openshift-marketplace pod/community-operators-fqk4h node/ip-10-0-185-218.us-west-1.compute.internal reason/Deleted
Nov 30 23:36:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-206-50.us-west-1.compute.internal node/ip-10-0-206-50 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 30 23:36:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-206-50.us-west-1.compute.internal node/ip-10-0-206-50 reason/TerminationStoppedServing Server has stopped listening
Nov 30 23:37:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-206-50.us-west-1.compute.internal node/ip-10-0-206-50 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 23:37:32.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-206-50.us-west-1.compute.internal node/ip-10-0-206-50.us-west-1.compute.internal container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72
#1598082838182236160 build-log.txt.gz (21 hours ago)
Nov 30 23:39:39.000 I ns/openshift-network-diagnostics node/ip-10-0-185-218.us-west-1.compute.internal reason/ConnectivityRestored roles/worker Connectivity restored after 2m0.000240436s: kubernetes-apiserver-endpoint-ip-10-0-206-50: tcp connection to 10.0.206.50:6443 succeeded
Nov 30 23:39:49.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (14 times)
Nov 30 23:40:49.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (15 times)
Nov 30 23:41:53.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (16 times)
Nov 30 23:41:55.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (17 times)
Nov 30 23:42:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-164-253.us-west-1.compute.internal node/ip-10-0-164-253 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 30 23:42:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-164-253.us-west-1.compute.internal node/ip-10-0-164-253 reason/TerminationStoppedServing Server has stopped listening
Nov 30 23:42:28.797 I ns/openshift-marketplace pod/redhat-marketplace-tn5lz node/ip-10-0-185-218.us-west-1.compute.internal reason/Scheduled
Nov 30 23:42:28.820 I ns/openshift-marketplace pod/redhat-marketplace-tn5lz node/ reason/Created
Nov 30 23:42:30.000 I ns/openshift-marketplace pod/redhat-marketplace-tn5lz node/ip-10-0-185-218.us-west-1.compute.internal container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-marketplace-index:v4.8
Nov 30 23:42:30.000 I ns/openshift-marketplace pod/redhat-marketplace-tn5lz reason/AddedInterface Add eth0 [10.131.0.45/23] from openshift-sdn
#1598082838182236160 build-log.txt.gz (21 hours ago)
Nov 30 23:45:53.000 W ns/openshift-machine-api machineset/ci-op-p459qmt5-1d119-chkkd-worker-us-west-1c reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (9 times)
Nov 30 23:46:39.000 W ns/openshift-network-diagnostics node/ip-10-0-185-218.us-west-1.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-164-253: failed to establish a TCP connection to 10.0.164.253:6443: dial tcp 10.0.164.253:6443: connect: connection refused
Nov 30 23:46:39.000 I ns/openshift-network-diagnostics node/ip-10-0-185-218.us-west-1.compute.internal reason/ConnectivityRestored roles/worker Connectivity restored after 1m0.000873504s: kubernetes-apiserver-endpoint-ip-10-0-164-253: tcp connection to 10.0.164.253:6443 succeeded
Nov 30 23:46:49.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (34 times)
Nov 30 23:47:49.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-223009@sha256:0a91d7c079673df8fe572695408886f4637e7056c1f45c3668e13682483e6c72 (35 times)
Nov 30 23:47:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-84.us-west-1.compute.internal node/ip-10-0-141-84 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 30 23:47:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-84.us-west-1.compute.internal node/ip-10-0-141-84 reason/TerminationStoppedServing Server has stopped listening
Nov 30 23:48:40.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-84.us-west-1.compute.internal node/ip-10-0-141-84.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/Created
Nov 30 23:48:40.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-84.us-west-1.compute.internal node/ip-10-0-141-84.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5d8a040f711995c1d951872df4267fe592f395af664c5fe638ec023407f0f65
Nov 30 23:48:40.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-84.us-west-1.compute.internal node/ip-10-0-141-84.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/Started
Nov 30 23:48:40.571 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-84.us-west-1.compute.internal node/ip-10-0-141-84.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/ContainerExit code/0 cause/Completed
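The ConnectivityOutageDetected/ConnectivityRestored pair at 23:46:39 is a simple dial-loop state machine: a periodic TCP dial to the endpoint, with state transitions reported along with the outage duration. A rough sketch under assumed intervals and names, not the openshift-network-diagnostics source:

```go
// Toy connectivity checker mirroring the outage/restore events above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	target := "10.0.164.253:6443" // kubernetes-apiserver-endpoint-ip-10-0-164-253
	var outageStart time.Time
	for {
		conn, err := net.DialTimeout("tcp", target, 5*time.Second)
		if err == nil {
			conn.Close()
		}
		switch {
		case err != nil && outageStart.IsZero():
			outageStart = time.Now() // healthy -> outage
			fmt.Printf("ConnectivityOutageDetected: failed to establish a TCP connection to %s: %v\n", target, err)
		case err == nil && !outageStart.IsZero():
			// outage -> healthy, report how long it lasted
			fmt.Printf("ConnectivityRestored after %s: tcp connection to %s succeeded\n", time.Since(outageStart), target)
			outageStart = time.Time{}
		}
		time.Sleep(10 * time.Second) // assumed probe interval
	}
}
```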
#1598082838182236160 build-log.txt.gz (21 hours ago)
Nov 30 23:56:47.496 W ns/openshift-apiserver pod/apiserver-5778855698-5sb8g reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 23:56:47.523 W ns/openshift-apiserver pod/apiserver-77bcd59ffc-bp7dp reason/FailedScheduling skip schedule deleting pod: openshift-apiserver/apiserver-77bcd59ffc-bp7dp
Nov 30 23:56:47.595 I ns/openshift-apiserver pod/apiserver-5778855698-5sb8g node/ reason/Created
Nov 30 23:56:47.716 I ns/openshift-apiserver pod/apiserver-77bcd59ffc-bp7dp node/ reason/DeletedBeforeScheduling
Nov 30 23:56:51.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Nov 30 23:56:53.000 I ns/openshift-apiserver pod/apiserver-85bc6d7769-vqncp node/apiserver-85bc6d7769-vqncp reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 30 23:56:53.000 I ns/openshift-apiserver pod/apiserver-85bc6d7769-vqncp node/apiserver-85bc6d7769-vqncp reason/TerminationStoppedServing Server has stopped listening
Nov 30 23:56:54.000 W ns/openshift-apiserver pod/apiserver-85bc6d7769-vqncp node/ip-10-0-141-84.us-west-1.compute.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.47:8443/healthz": dial tcp 10.128.0.47:8443: connect: connection refused\nbody: \n
Nov 30 23:56:54.000 W ns/openshift-apiserver pod/apiserver-85bc6d7769-vqncp node/ip-10-0-141-84.us-west-1.compute.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.47:8443/healthz": dial tcp 10.128.0.47:8443: connect: connection refused\nbody: \n
Nov 30 23:56:54.000 W ns/openshift-apiserver pod/apiserver-85bc6d7769-vqncp node/ip-10-0-141-84.us-west-1.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.47:8443/healthz": dial tcp 10.128.0.47:8443: connect: connection refused
Nov 30 23:56:54.000 W ns/openshift-apiserver pod/apiserver-85bc6d7769-vqncp node/ip-10-0-141-84.us-west-1.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.47:8443/healthz": dial tcp 10.128.0.47:8443: connect: connection refused
#1598082838182236160 build-log.txt.gz (21 hours ago)
Nov 30 23:58:26.746 I ns/openshift-apiserver pod/apiserver-5778855698-5sb8g node/ip-10-0-141-84.us-west-1.compute.internal container/openshift-apiserver reason/Ready
Nov 30 23:58:26.781 W ns/openshift-apiserver pod/apiserver-5778855698-w2t5w reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 23:58:26.807 I ns/openshift-apiserver pod/apiserver-85bc6d7769-nwlnd node/ip-10-0-164-253.us-west-1.compute.internal reason/GracefulDelete duration/70s
Nov 30 23:58:26.813 I ns/openshift-apiserver pod/apiserver-5778855698-w2t5w node/ reason/Created
Nov 30 23:58:28.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 30 23:58:36.000 I ns/openshift-apiserver pod/apiserver-85bc6d7769-nwlnd node/apiserver-85bc6d7769-nwlnd reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 30 23:58:36.000 I ns/openshift-apiserver pod/apiserver-85bc6d7769-nwlnd node/apiserver-85bc6d7769-nwlnd reason/TerminationStoppedServing Server has stopped listening
Nov 30 23:58:40.000 W ns/openshift-apiserver pod/apiserver-85bc6d7769-nwlnd node/ip-10-0-164-253.us-west-1.compute.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.41:8443/healthz": dial tcp 10.130.0.41:8443: connect: connection refused\nbody: \n
Nov 30 23:58:40.000 W ns/openshift-apiserver pod/apiserver-85bc6d7769-nwlnd node/ip-10-0-164-253.us-west-1.compute.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.41:8443/healthz": dial tcp 10.130.0.41:8443: connect: connection refused\nbody: \n
Nov 30 23:58:40.000 W ns/openshift-apiserver pod/apiserver-85bc6d7769-nwlnd node/ip-10-0-164-253.us-west-1.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.41:8443/healthz": dial tcp 10.130.0.41:8443: connect: connection refused
Nov 30 23:58:40.000 W ns/openshift-apiserver pod/apiserver-85bc6d7769-nwlnd node/ip-10-0-164-253.us-west-1.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.41:8443/healthz": dial tcp 10.130.0.41:8443: connect: connection refused
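The ProbeError/Unhealthy bursts during each rollout are expected: the kubelet's HTTPS GET to /healthz gets connection refused while nothing is listening on 8443 between the old pod stopping and the new one becoming ready. A minimal sketch of such a probe; the client setup (including skipping certificate verification) is illustrative, not kubelet code:

```go
// Minimal healthz probe: connection refused, as in the events above,
// simply means nothing is listening on the port yet.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Skipping verification for brevity in this sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 10.130.0.41:8443: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probe("https://10.130.0.41:8443/healthz"); err != nil {
		fmt.Println("Readiness probe failed:", err)
	}
}
```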
#1597999972987441152 build-log.txt.gz (27 hours ago)
Nov 30 18:00:01.000 I ns/openshift-multus job/ip-reconciler-27830520 reason/Completed Job completed
Nov 30 18:00:01.000 I ns/openshift-multus cronjob/ip-reconciler reason/SawCompletedJob Saw completed job: ip-reconciler-27830520, status: Complete
Nov 30 18:00:01.000 I ns/openshift-multus cronjob/ip-reconciler reason/SuccessfulDelete Deleted job ip-reconciler-27830520
Nov 30 18:00:01.040 I ns/openshift-multus pod/ip-reconciler-27830520-9wjt8 node/ip-10-0-148-186.us-east-2.compute.internal container/whereabouts reason/ContainerExit code/0 cause/Completed
Nov 30 18:00:01.112 I ns/openshift-multus pod/ip-reconciler-27830520-9wjt8 node/ip-10-0-148-186.us-east-2.compute.internal reason/DeletedAfterCompletion
Nov 30 18:02:47.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-152-131.us-east-2.compute.internal node/ip-10-0-152-131 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 30 18:02:47.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-152-131.us-east-2.compute.internal node/ip-10-0-152-131 reason/TerminationStoppedServing Server has stopped listening
Nov 30 18:03:10.000 W ns/openshift-network-diagnostics node/ip-10-0-133-217.us-east-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-152-131: failed to establish a TCP connection to 10.0.152.131:6443: dial tcp 10.0.152.131:6443: connect: connection refused
Nov 30 18:03:28.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-131.us-east-2.compute.internal node/ip-10-0-152-131.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Created
Nov 30 18:03:28.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-131.us-east-2.compute.internal node/ip-10-0-152-131.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5d8a040f711995c1d951872df4267fe592f395af664c5fe638ec023407f0f65
Nov 30 18:03:28.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-131.us-east-2.compute.internal node/ip-10-0-152-131.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Started
#1597999972987441152 build-log.txt.gz (27 hours ago)
Nov 30 18:07:05.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-170328@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (15 times)
Nov 30 18:08:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-170328@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (16 times)
Nov 30 18:08:12.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-170328@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (17 times)
Nov 30 18:08:42.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-202-106.us-east-2.compute.internal node/ip-10-0-202-106 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 30 18:08:42.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-202-106.us-east-2.compute.internal node/ip-10-0-202-106 reason/TerminationStoppedServing Server has stopped listening
Nov 30 18:09:05.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-30-170328@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (18 times)
Nov 30 18:09:23.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-202-106.us-east-2.compute.internal node/ip-10-0-202-106.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Created
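The recurring MultipleVersions events are a dedup-and-report pattern: the operator gathers the image pullspec each operand is running and warns while more than one distinct pullspec is present, which is expected mid-upgrade. A toy version, assuming a flat list of pullspecs (not the kube-apiserver-operator's code):

```go
// Sketch of the MultipleVersions check: warn while >1 distinct image.
package main

import (
	"fmt"
	"sort"
	"strings"
)

func multipleVersions(images []string) (string, bool) {
	seen := map[string]struct{}{}
	for _, img := range images {
		seen[img] = struct{}{}
	}
	if len(seen) <= 1 {
		return "", false // fully converged on one version
	}
	uniq := make([]string, 0, len(seen))
	for img := range seen {
		uniq = append(uniq, img)
	}
	sort.Strings(uniq)
	return "multiple versions found, probably in transition: " + strings.Join(uniq, ","), true
}

func main() {
	// Pullspecs truncated for brevity in this example.
	imgs := []string{
		"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8772...",
		"registry.ci.openshift.org/ocp/4.9-2022-11-30-170328@sha256:7388...",
	}
	if msg, ok := multipleVersions(imgs); ok {
		fmt.Println(msg)
	}
}
```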
#1597999972987441152 build-log.txt.gz (27 hours ago)
Nov 30 18:14:31.000 I ns/openshift-marketplace pod/redhat-operators-lptpn reason/AddedInterface Add eth0 [10.131.0.32/23] from openshift-sdn
Nov 30 18:14:32.000 I ns/openshift-marketplace pod/redhat-operators-lptpn node/ip-10-0-133-217.us-east-2.compute.internal container/registry-server reason/Created
Nov 30 18:14:32.000 I ns/openshift-marketplace pod/redhat-operators-lptpn node/ip-10-0-133-217.us-east-2.compute.internal container/registry-server reason/Pulled duration/0.633s image/registry.redhat.io/redhat/redhat-operator-index:v4.8
Nov 30 18:14:32.000 I ns/openshift-marketplace pod/redhat-operators-lptpn node/ip-10-0-133-217.us-east-2.compute.internal container/registry-server reason/Started
Nov 30 18:14:33.059 I ns/openshift-marketplace pod/redhat-operators-lptpn node/ip-10-0-133-217.us-east-2.compute.internal container/registry-server reason/ContainerStart duration/3.00s
Nov 30 18:14:34.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-177-46.us-east-2.compute.internal node/ip-10-0-177-46 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 30 18:14:34.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-177-46.us-east-2.compute.internal node/ip-10-0-177-46 reason/TerminationStoppedServing Server has stopped listening
Nov 30 18:14:35.668 I ns/openshift-marketplace pod/redhat-marketplace-wwvmz node/ip-10-0-133-217.us-east-2.compute.internal container/registry-server reason/Ready
Nov 30 18:14:35.668 I ns/openshift-marketplace pod/redhat-marketplace-wwvmz node/ip-10-0-133-217.us-east-2.compute.internal reason/GracefulDelete duration/1s
Nov 30 18:14:36.000 I ns/openshift-marketplace pod/redhat-marketplace-wwvmz node/ip-10-0-133-217.us-east-2.compute.internal container/registry-server reason/Killing
Nov 30 18:14:38.172 W ns/openshift-marketplace pod/redhat-marketplace-wwvmz node/ip-10-0-133-217.us-east-2.compute.internal container/registry-server reason/NotReady
#1597999972987441152 build-log.txt.gz (27 hours ago)
Nov 30 18:22:35.599 I ns/openshift-apiserver pod/apiserver-59dfd877d5-ggsqd node/ reason/DeletedBeforeScheduling
Nov 30 18:22:35.599 W ns/openshift-apiserver pod/apiserver-848b6d9d6f-jl9f8 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 18:22:35.624 I ns/openshift-apiserver pod/apiserver-848b6d9d6f-jl9f8 node/ reason/Created
Nov 30 18:22:39.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Nov 30 18:22:39.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7."
Nov 30 18:22:41.000 I ns/openshift-apiserver pod/apiserver-58bf8c6cc9-kmnfs node/apiserver-58bf8c6cc9-kmnfs reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 30 18:22:41.000 I ns/openshift-apiserver pod/apiserver-58bf8c6cc9-kmnfs node/apiserver-58bf8c6cc9-kmnfs reason/TerminationStoppedServing Server has stopped listening
Nov 30 18:22:46.000 W ns/openshift-apiserver pod/apiserver-58bf8c6cc9-kmnfs node/ip-10-0-202-106.us-east-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.44:8443/healthz": dial tcp 10.128.0.44:8443: connect: connection refused\nbody: \n
Nov 30 18:22:46.000 W ns/openshift-apiserver pod/apiserver-58bf8c6cc9-kmnfs node/ip-10-0-202-106.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.44:8443/healthz": dial tcp 10.128.0.44:8443: connect: connection refused\nbody: \n
Nov 30 18:22:46.000 W ns/openshift-apiserver pod/apiserver-58bf8c6cc9-kmnfs node/ip-10-0-202-106.us-east-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.44:8443/healthz": dial tcp 10.128.0.44:8443: connect: connection refused
Nov 30 18:22:46.000 W ns/openshift-apiserver pod/apiserver-58bf8c6cc9-kmnfs node/ip-10-0-202-106.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.44:8443/healthz": dial tcp 10.128.0.44:8443: connect: connection refused
#1597999972987441152 build-log.txt.gz (27 hours ago)
Nov 30 18:24:00.808 I ns/openshift-apiserver pod/apiserver-848b6d9d6f-jl9f8 node/ip-10-0-202-106.us-east-2.compute.internal container/openshift-apiserver reason/Ready
Nov 30 18:24:00.838 I ns/openshift-apiserver pod/apiserver-58bf8c6cc9-ggfxs node/ip-10-0-152-131.us-east-2.compute.internal reason/GracefulDelete duration/70s
Nov 30 18:24:00.898 W ns/openshift-apiserver pod/apiserver-848b6d9d6f-bwtzg reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 18:24:00.908 I ns/openshift-apiserver pod/apiserver-848b6d9d6f-bwtzg node/ reason/Created
Nov 30 18:24:02.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 30 18:24:10.000 I ns/openshift-apiserver pod/apiserver-58bf8c6cc9-ggfxs node/apiserver-58bf8c6cc9-ggfxs reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 30 18:24:10.000 I ns/openshift-apiserver pod/apiserver-58bf8c6cc9-ggfxs node/apiserver-58bf8c6cc9-ggfxs reason/TerminationStoppedServing Server has stopped listening
Nov 30 18:24:11.000 W ns/openshift-network-diagnostics node/ip-10-0-133-217.us-east-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: openshift-apiserver-endpoint-ip-10-0-152-131: failed to establish a TCP connection to 10.130.0.37:8443: dial tcp 10.130.0.37:8443: connect: connection refused
Nov 30 18:24:18.000 W ns/openshift-apiserver pod/apiserver-58bf8c6cc9-ggfxs node/ip-10-0-152-131.us-east-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.37:8443/healthz": dial tcp 10.130.0.37:8443: connect: connection refused\nbody: \n
Nov 30 18:24:18.000 W ns/openshift-apiserver pod/apiserver-58bf8c6cc9-ggfxs node/ip-10-0-152-131.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.37:8443/healthz": dial tcp 10.130.0.37:8443: connect: connection refused\nbody: \n
Nov 30 18:24:18.000 W ns/openshift-apiserver pod/apiserver-58bf8c6cc9-ggfxs node/ip-10-0-152-131.us-east-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.37:8443/healthz": dial tcp 10.130.0.37:8443: connect: connection refused
#1597549373615509504 build-log.txt.gz (2 days ago)
Nov 29 12:19:33.431 I ns/openshift-marketplace pod/redhat-marketplace-c4sbj node/ip-10-0-210-115.us-west-1.compute.internal container/registry-server reason/Ready
Nov 29 12:19:33.500 I ns/openshift-marketplace pod/redhat-marketplace-c4sbj node/ip-10-0-210-115.us-west-1.compute.internal reason/GracefulDelete duration/1s
Nov 29 12:19:35.081 I ns/openshift-marketplace pod/redhat-marketplace-c4sbj node/ip-10-0-210-115.us-west-1.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 29 12:19:47.095 I ns/openshift-marketplace pod/redhat-marketplace-c4sbj node/ip-10-0-210-115.us-west-1.compute.internal reason/Deleted
Nov 29 12:20:04.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.137208805300278 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-139-20.us-west-1.compute.internal=0.0044533333333333005,etcd-ip-10-0-176-13.us-west-1.compute.internal=0.00723999999999995,etcd-ip-10-0-228-74.us-west-1.compute.internal=0.006386666666666649. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 29 12:22:47.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-20.us-west-1.compute.internal node/ip-10-0-139-20 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 29 12:22:47.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-20.us-west-1.compute.internal node/ip-10-0-139-20 reason/TerminationStoppedServing Server has stopped listening
Nov 29 12:23:23.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-139-20.us-west-1.compute.internal node/ip-10-0-139-20.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/Created
Nov 29 12:23:23.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-139-20.us-west-1.compute.internal node/ip-10-0-139-20.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5d8a040f711995c1d951872df4267fe592f395af664c5fe638ec023407f0f65
Nov 29 12:23:23.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-139-20.us-west-1.compute.internal node/ip-10-0-139-20.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/Started
Nov 29 12:23:23.897 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-139-20.us-west-1.compute.internal node/ip-10-0-139-20.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/ContainerExit code/0 cause/Completed
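The EtcdLeaderChangeMetrics warning at 12:20:04 is driven by a counter rate, roughly increase(etcd_server_leader_changes_seen_total[5m]) per member; the fractional value (2.137…) comes from linearly extrapolating the counter growth across the window. A toy extrapolated-increase calculation over assumed samples (not the etcd-operator's code):

```go
// Toy version of Prometheus-style increase() over a 5m window,
// which yields fractional "leader change increase" values.
package main

import "fmt"

type sample struct {
	t float64 // seconds into the window
	v float64 // counter value
}

// increase extrapolates the observed counter growth to the full window,
// assuming a monotonically increasing counter.
func increase(samples []sample, window float64) float64 {
	first, last := samples[0], samples[len(samples)-1]
	covered := last.t - first.t
	if covered <= 0 {
		return 0
	}
	return (last.v - first.v) * window / covered
}

func main() {
	window := 300.0 // 5 minutes
	samples := []sample{{t: 10, v: 3}, {t: 150, v: 4}, {t: 290, v: 5}}
	fmt.Printf("leader change increase of %v over 5 minutes\n", increase(samples, window))
}
```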
#1597549373615509504 build-log.txt.gz (2 days ago)
Nov 29 12:28:28.635 I ns/openshift-marketplace pod/community-operators-ljfbs node/ip-10-0-210-115.us-west-1.compute.internal reason/GracefulDelete duration/1s
Nov 29 12:28:29.000 I ns/openshift-marketplace pod/community-operators-ljfbs node/ip-10-0-210-115.us-west-1.compute.internal container/registry-server reason/Killing
Nov 29 12:28:30.100 I ns/openshift-marketplace pod/community-operators-ljfbs node/ip-10-0-210-115.us-west-1.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 29 12:28:31.172 I ns/openshift-marketplace pod/community-operators-ljfbs node/ip-10-0-210-115.us-west-1.compute.internal reason/Deleted
Nov 29 12:28:43.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-176-13.us-west-1.compute.internal node/ip-10-0-176-13 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 29 12:28:43.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-176-13.us-west-1.compute.internal node/ip-10-0-176-13 reason/TerminationStoppedServing Server has stopped listening
Nov 29 12:28:56.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-29-111249@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (19 times)
Nov 29 12:29:20.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-176-13.us-west-1.compute.internal node/ip-10-0-176-13.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/Created
#1597549373615509504 build-log.txt.gz (2 days ago)
Nov 29 12:34:46.000 I ns/openshift-marketplace pod/certified-operators-6nwfb reason/AddedInterface Add eth0 [10.129.2.16/23] from openshift-sdn
Nov 29 12:34:47.000 I ns/openshift-marketplace pod/certified-operators-6nwfb node/ip-10-0-170-209.us-west-1.compute.internal container/registry-server reason/Created
Nov 29 12:34:47.000 I ns/openshift-marketplace pod/certified-operators-6nwfb node/ip-10-0-170-209.us-west-1.compute.internal container/registry-server reason/Pulled duration/0.670s image/registry.redhat.io/redhat/certified-operator-index:v4.8
Nov 29 12:34:47.000 I ns/openshift-marketplace pod/certified-operators-6nwfb node/ip-10-0-170-209.us-west-1.compute.internal container/registry-server reason/Started
Nov 29 12:34:48.001 I ns/openshift-marketplace pod/certified-operators-6nwfb node/ip-10-0-170-209.us-west-1.compute.internal container/registry-server reason/ContainerStart duration/3.00s
Nov 29 12:34:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-228-74.us-west-1.compute.internal node/ip-10-0-228-74 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 29 12:34:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-228-74.us-west-1.compute.internal node/ip-10-0-228-74 reason/TerminationStoppedServing Server has stopped listening
Nov 29 12:34:54.000 I ns/openshift-marketplace pod/certified-operators-6nwfb node/ip-10-0-170-209.us-west-1.compute.internal container/registry-server reason/Killing
Nov 29 12:34:54.331 I ns/openshift-marketplace pod/certified-operators-6nwfb node/ip-10-0-170-209.us-west-1.compute.internal container/registry-server reason/Ready
Nov 29 12:34:54.331 I ns/openshift-marketplace pod/certified-operators-6nwfb node/ip-10-0-170-209.us-west-1.compute.internal reason/GracefulDelete duration/1s
Nov 29 12:34:55.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-29-111249@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (37 times)
#1597549373615509504 build-log.txt.gz (2 days ago)
Nov 29 12:42:12.399 W ns/openshift-apiserver pod/apiserver-575f7f9ccf-b69bb reason/FailedScheduling skip schedule deleting pod: openshift-apiserver/apiserver-575f7f9ccf-b69bb
Nov 29 12:42:12.462 W ns/openshift-apiserver pod/apiserver-99bc8f756-r6ddw reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 29 12:42:12.570 I ns/openshift-apiserver pod/apiserver-575f7f9ccf-b69bb node/ reason/DeletedBeforeScheduling
Nov 29 12:42:12.634 I ns/openshift-apiserver pod/apiserver-99bc8f756-r6ddw node/ reason/Created
Nov 29 12:42:15.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Nov 29 12:42:18.000 I ns/openshift-apiserver pod/apiserver-5687f5b56-27nwh node/apiserver-5687f5b56-27nwh reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 29 12:42:18.000 I ns/openshift-apiserver pod/apiserver-5687f5b56-27nwh node/apiserver-5687f5b56-27nwh reason/TerminationStoppedServing Server has stopped listening
Nov 29 12:42:25.000 W ns/openshift-apiserver pod/apiserver-5687f5b56-27nwh node/ip-10-0-228-74.us-west-1.compute.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused\nbody: \n
Nov 29 12:42:25.000 W ns/openshift-apiserver pod/apiserver-5687f5b56-27nwh node/ip-10-0-228-74.us-west-1.compute.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused\nbody: \n
Nov 29 12:42:25.000 W ns/openshift-apiserver pod/apiserver-5687f5b56-27nwh node/ip-10-0-228-74.us-west-1.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused
Nov 29 12:42:25.000 W ns/openshift-apiserver pod/apiserver-5687f5b56-27nwh node/ip-10-0-228-74.us-west-1.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused
#1597549373615509504 build-log.txt.gz (2 days ago)
Nov 29 12:43:45.622 I ns/openshift-apiserver pod/apiserver-99bc8f756-r6ddw node/ip-10-0-228-74.us-west-1.compute.internal container/openshift-apiserver reason/Ready
Nov 29 12:43:45.694 I ns/openshift-apiserver pod/apiserver-5687f5b56-nqcf2 node/ip-10-0-139-20.us-west-1.compute.internal reason/GracefulDelete duration/70s
Nov 29 12:43:45.718 W ns/openshift-apiserver pod/apiserver-99bc8f756-5f5vn reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 29 12:43:45.766 I ns/openshift-apiserver pod/apiserver-99bc8f756-5f5vn node/ reason/Created
Nov 29 12:43:47.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 29 12:43:55.000 I ns/openshift-apiserver pod/apiserver-5687f5b56-nqcf2 node/apiserver-5687f5b56-nqcf2 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 29 12:43:55.000 I ns/openshift-apiserver pod/apiserver-5687f5b56-nqcf2 node/apiserver-5687f5b56-nqcf2 reason/TerminationStoppedServing Server has stopped listening
Nov 29 12:43:56.000 W ns/openshift-apiserver pod/apiserver-5687f5b56-nqcf2 node/ip-10-0-139-20.us-west-1.compute.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.27:8443/healthz": dial tcp 10.129.0.27:8443: connect: connection refused\nbody: \n
Nov 29 12:43:56.000 W ns/openshift-apiserver pod/apiserver-5687f5b56-nqcf2 node/ip-10-0-139-20.us-west-1.compute.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.27:8443/healthz": dial tcp 10.129.0.27:8443: connect: connection refused\nbody: \n
Nov 29 12:43:56.000 W ns/openshift-apiserver pod/apiserver-5687f5b56-nqcf2 node/ip-10-0-139-20.us-west-1.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.129.0.27:8443/healthz": dial tcp 10.129.0.27:8443: connect: connection refused
Nov 29 12:43:56.000 W ns/openshift-apiserver pod/apiserver-5687f5b56-nqcf2 node/ip-10-0-139-20.us-west-1.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.129.0.27:8443/healthz": dial tcp 10.129.0.27:8443: connect: connection refused
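The repeated "0/6 nodes are available" messages in these rollouts have a mechanical explanation: the 3 workers fail the apiserver's control-plane node selector, and each of the 3 masters already hosts an old-generation apiserver pod that the new pod's anti-affinity rules out, so nothing schedules until an old pod is deleted (hence the DeletedBeforeScheduling and GracefulDelete duration/70s churn). A toy classification that reproduces the message, with assumed node data:

```go
// Reproduce the scheduler's 3+3 breakdown on a 6-node cluster.
package main

import "fmt"

type node struct {
	master          bool // matches the pod's node selector only if true
	hasApiserverPod bool // an existing apiserver pod triggers anti-affinity
}

func main() {
	nodes := []node{
		{true, true}, {true, true}, {true, true}, // masters, each running an old apiserver
		{false, false}, {false, false}, {false, false}, // workers
	}
	selectorMiss, antiAffinityMiss := 0, 0
	for _, n := range nodes {
		switch {
		case !n.master:
			selectorMiss++
		case n.hasApiserverPod:
			antiAffinityMiss++
		}
	}
	fmt.Printf("0/%d nodes are available: %d node(s) didn't match Pod's node affinity/selector, %d node(s) didn't match pod anti-affinity rules.\n",
		len(nodes), selectorMiss, antiAffinityMiss)
}
```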
#1597304162175946752 build-log.txt.gz (3 days ago)
Nov 28 19:54:03.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-186-121.us-west-1.compute.internal node/ip-10-0-186-121 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 28 19:54:04.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ip-10-0-196-84_38976521-03f0-4873-9e01-f17ae4e63869 became leader
Nov 28 19:54:04.004 I ns/openshift-kube-apiserver pod/installer-10-ip-10-0-186-121.us-west-1.compute.internal node/ip-10-0-186-121.us-west-1.compute.internal container/installer reason/ContainerExit code/0 cause/Completed
Nov 28 19:57:08.000 W ns/openshift-machine-api machineset/ci-op-mwlqwrpr-1d119-5k565-worker-us-west-1c reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (3 times)
Nov 28 19:57:08.000 W ns/openshift-machine-api machineset/ci-op-mwlqwrpr-1d119-5k565-worker-us-west-1b reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (3 times)
Nov 28 19:57:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-186-121.us-west-1.compute.internal node/ip-10-0-186-121 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 28 19:57:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-186-121.us-west-1.compute.internal node/ip-10-0-186-121 reason/TerminationStoppedServing Server has stopped listening
Nov 28 19:57:49.000 W ns/openshift-network-diagnostics node/ip-10-0-212-160.us-west-1.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-186-121: failed to establish a TCP connection to 10.0.186.121:6443: dial tcp 10.0.186.121:6443: connect: connection refused
Nov 28 19:58:11.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-186-121.us-west-1.compute.internal node/ip-10-0-186-121.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/Created
Nov 28 19:58:11.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-186-121.us-west-1.compute.internal node/ip-10-0-186-121.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5d8a040f711995c1d951872df4267fe592f395af664c5fe638ec023407f0f65
Nov 28 19:58:11.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-186-121.us-west-1.compute.internal node/ip-10-0-186-121.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/Started
#1597304162175946752 build-log.txt.gz (3 days ago)
Nov 28 20:03:22.000 I ns/openshift-marketplace pod/redhat-operators-7kk8n node/ip-10-0-212-160.us-west-1.compute.internal container/registry-server reason/Killing
Nov 28 20:03:22.456 I ns/openshift-marketplace pod/redhat-operators-7kk8n node/ip-10-0-212-160.us-west-1.compute.internal container/registry-server reason/Ready
Nov 28 20:03:22.528 I ns/openshift-marketplace pod/redhat-operators-7kk8n node/ip-10-0-212-160.us-west-1.compute.internal reason/GracefulDelete duration/1s
Nov 28 20:03:23.722 I ns/openshift-marketplace pod/redhat-operators-7kk8n node/ip-10-0-212-160.us-west-1.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 28 20:03:24.798 I ns/openshift-marketplace pod/redhat-operators-7kk8n node/ip-10-0-212-160.us-west-1.compute.internal reason/Deleted
Nov 28 20:03:28.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-112.us-west-1.compute.internal node/ip-10-0-169-112 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 28 20:03:28.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-112.us-west-1.compute.internal node/ip-10-0-169-112 reason/TerminationStoppedServing Server has stopped listening
Nov 28 20:03:39.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-28-185827@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (18 times)
Nov 28 20:04:05.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-169-112.us-west-1.compute.internal node/ip-10-0-169-112.us-west-1.compute.internal container/kube-scheduler-recovery-controller reason/Created
Nov 28 20:04:05.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-169-112.us-west-1.compute.internal node/ip-10-0-169-112.us-west-1.compute.internal container/kube-scheduler-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcb22579324a93faddf49caa971153fea33cec03738e76490d6de7e39a01db59
Nov 28 20:04:05.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-169-112.us-west-1.compute.internal node/ip-10-0-169-112.us-west-1.compute.internal container/kube-scheduler-recovery-controller reason/Started
#1597304162175946752 build-log.txt.gz (3 days ago)
Nov 28 20:06:16.000 W ns/openshift-machine-api machineset/ci-op-mwlqwrpr-1d119-5k565-worker-us-west-1b reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (4 times)
Nov 28 20:06:16.000 W ns/openshift-machine-api machineset/ci-op-mwlqwrpr-1d119-5k565-worker-us-west-1c reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (4 times)
Nov 28 20:06:39.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-28-185827@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (32 times)
Nov 28 20:07:39.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-28-185827@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (33 times)
Nov 28 20:08:42.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-28-185827@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (34 times)
Nov 28 20:09:20.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-196-84.us-west-1.compute.internal node/ip-10-0-196-84 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 28 20:09:20.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-196-84.us-west-1.compute.internal node/ip-10-0-196-84 reason/TerminationStoppedServing Server has stopped listening
Nov 28 20:09:39.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-28-185827@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (35 times)
Nov 28 20:10:03.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-196-84.us-west-1.compute.internal node/ip-10-0-196-84.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/Created
Nov 28 20:10:03.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-196-84.us-west-1.compute.internal node/ip-10-0-196-84.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5d8a040f711995c1d951872df4267fe592f395af664c5fe638ec023407f0f65
Nov 28 20:10:03.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-196-84.us-west-1.compute.internal node/ip-10-0-196-84.us-west-1.compute.internal container/kube-controller-manager-recovery-controller reason/Started
#1597304162175946752 build-log.txt.gz (3 days ago)
Nov 28 20:17:08.000 I ns/openshift-apiserver replicaset/apiserver-8679579496 reason/SuccessfulDelete Deleted pod: apiserver-8679579496-nwc58
Nov 28 20:17:08.735 W ns/openshift-apiserver pod/apiserver-ccd4b66b5-9zvw2 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 28 20:17:08.824 I ns/openshift-apiserver pod/apiserver-8679579496-nwc58 node/ reason/DeletedBeforeScheduling
Nov 28 20:17:08.866 I ns/openshift-apiserver pod/apiserver-ccd4b66b5-9zvw2 node/ reason/Created
Nov 28 20:17:12.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Nov 28 20:17:14.000 I ns/openshift-apiserver pod/apiserver-6b98564dfc-9vc26 node/apiserver-6b98564dfc-9vc26 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 28 20:17:14.000 I ns/openshift-apiserver pod/apiserver-6b98564dfc-9vc26 node/apiserver-6b98564dfc-9vc26 reason/TerminationStoppedServing Server has stopped listening
Nov 28 20:17:23.000 W ns/openshift-apiserver pod/apiserver-6b98564dfc-9vc26 node/ip-10-0-169-112.us-west-1.compute.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.45:8443/healthz": dial tcp 10.128.0.45:8443: connect: connection refused\nbody: \n
Nov 28 20:17:23.000 W ns/openshift-apiserver pod/apiserver-6b98564dfc-9vc26 node/ip-10-0-169-112.us-west-1.compute.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.45:8443/healthz": dial tcp 10.128.0.45:8443: connect: connection refused\nbody: \n
Nov 28 20:17:23.000 W ns/openshift-apiserver pod/apiserver-6b98564dfc-9vc26 node/ip-10-0-169-112.us-west-1.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.45:8443/healthz": dial tcp 10.128.0.45:8443: connect: connection refused
Nov 28 20:17:23.000 W ns/openshift-apiserver pod/apiserver-6b98564dfc-9vc26 node/ip-10-0-169-112.us-west-1.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.45:8443/healthz": dial tcp 10.128.0.45:8443: connect: connection refused
#1597304162175946752 build-log.txt.gz (3 days ago)
Nov 28 20:18:47.377 I ns/openshift-apiserver pod/apiserver-6b98564dfc-6mfb5 node/ip-10-0-196-84.us-west-1.compute.internal reason/GracefulDelete duration/70s
Nov 28 20:18:47.448 I ns/openshift-apiserver pod/apiserver-ccd4b66b5-6czxk node/ reason/Created
Nov 28 20:18:48.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 28 20:18:57.000 W ns/openshift-apiserver pod/apiserver-6b98564dfc-6mfb5 node/ip-10-0-196-84.us-west-1.compute.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.56:8443/healthz": dial tcp 10.129.0.56:8443: connect: connection refused\nbody: \n
Nov 28 20:18:57.000 W ns/openshift-apiserver pod/apiserver-6b98564dfc-6mfb5 node/ip-10-0-196-84.us-west-1.compute.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.56:8443/healthz": dial tcp 10.129.0.56:8443: connect: connection refused\nbody: \n
Nov 28 20:18:57.000 I ns/openshift-apiserver pod/apiserver-6b98564dfc-6mfb5 node/apiserver-6b98564dfc-6mfb5 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 28 20:18:57.000 I ns/openshift-apiserver pod/apiserver-6b98564dfc-6mfb5 node/apiserver-6b98564dfc-6mfb5 reason/TerminationStoppedServing Server has stopped listening
Nov 28 20:18:57.000 W ns/openshift-apiserver pod/apiserver-6b98564dfc-6mfb5 node/ip-10-0-196-84.us-west-1.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.129.0.56:8443/healthz": dial tcp 10.129.0.56:8443: connect: connection refused
Nov 28 20:18:57.000 W ns/openshift-apiserver pod/apiserver-6b98564dfc-6mfb5 node/ip-10-0-196-84.us-west-1.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.129.0.56:8443/healthz": dial tcp 10.129.0.56:8443: connect: connection refused
Nov 28 20:19:00.000 W ns/openshift-network-diagnostics node/ip-10-0-212-160.us-west-1.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: openshift-apiserver-endpoint-ip-10-0-169-112: failed to establish a TCP connection to 10.128.0.45:8443: dial tcp 10.128.0.45:8443: connect: connection refused
Nov 28 20:19:07.000 W ns/openshift-apiserver pod/apiserver-6b98564dfc-6mfb5 node/ip-10-0-196-84.us-west-1.compute.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.56:8443/healthz": dial tcp 10.129.0.56:8443: connect: connection refused\nbody: \n (2 times)
#1597304162175946752 build-log.txt.gz (3 days ago)
Nov 28 20:20:20.729 I ns/openshift-apiserver pod/apiserver-6b98564dfc-nc5wb node/ip-10-0-186-121.us-west-1.compute.internal reason/GracefulDelete duration/70s
Nov 28 20:20:20.731 W ns/openshift-apiserver pod/apiserver-ccd4b66b5-x986m reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 28 20:20:20.766 I ns/openshift-apiserver pod/apiserver-ccd4b66b5-x986m node/ reason/Created
Nov 28 20:20:22.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well")
Nov 28 20:20:22.230 W clusteroperator/openshift-apiserver condition/Progressing status/False reason/AsExpected changed: All is well
Nov 28 20:20:30.000 I ns/openshift-apiserver pod/apiserver-6b98564dfc-nc5wb node/apiserver-6b98564dfc-nc5wb reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 28 20:20:30.000 I ns/openshift-apiserver pod/apiserver-6b98564dfc-nc5wb node/apiserver-6b98564dfc-nc5wb reason/TerminationStoppedServing Server has stopped listening
Nov 28 20:20:33.000 W ns/openshift-apiserver pod/apiserver-6b98564dfc-nc5wb node/ip-10-0-186-121.us-west-1.compute.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.44:8443/healthz": dial tcp 10.130.0.44:8443: connect: connection refused\nbody: \n
Nov 28 20:20:33.000 W ns/openshift-apiserver pod/apiserver-6b98564dfc-nc5wb node/ip-10-0-186-121.us-west-1.compute.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.44:8443/healthz": dial tcp 10.130.0.44:8443: connect: connection refused\nbody: \n
Nov 28 20:20:33.000 W ns/openshift-apiserver pod/apiserver-6b98564dfc-nc5wb node/ip-10-0-186-121.us-west-1.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.44:8443/healthz": dial tcp 10.130.0.44:8443: connect: connection refused
Nov 28 20:20:33.000 W ns/openshift-apiserver pod/apiserver-6b98564dfc-nc5wb node/ip-10-0-186-121.us-west-1.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.44:8443/healthz": dial tcp 10.130.0.44:8443: connect: connection refused
#1597201585732063232 build-log.txt.gz (3 days ago)
Nov 28 13:11:55.430 I ns/openshift-marketplace pod/certified-operators-9vmb8 node/ip-10-0-173-104.us-east-2.compute.internal container/registry-server reason/ContainerStart duration/3.00s
Nov 28 13:12:01.548 I ns/openshift-marketplace pod/certified-operators-9vmb8 node/ip-10-0-173-104.us-east-2.compute.internal container/registry-server reason/Ready
Nov 28 13:12:01.563 I ns/openshift-marketplace pod/certified-operators-9vmb8 node/ip-10-0-173-104.us-east-2.compute.internal reason/GracefulDelete duration/1s
Nov 28 13:12:02.000 I ns/openshift-marketplace pod/certified-operators-9vmb8 node/ip-10-0-173-104.us-east-2.compute.internal container/registry-server reason/Killing
Nov 28 13:12:03.454 I ns/openshift-marketplace pod/certified-operators-9vmb8 node/ip-10-0-173-104.us-east-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 28 13:12:04.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-150-132.us-east-2.compute.internal node/ip-10-0-150-132 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 28 13:12:04.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-150-132.us-east-2.compute.internal node/ip-10-0-150-132 reason/TerminationStoppedServing Server has stopped listening
Nov 28 13:12:07.396 I ns/openshift-marketplace pod/certified-operators-9vmb8 node/ip-10-0-173-104.us-east-2.compute.internal reason/Deleted
Nov 28 13:12:49.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-150-132.us-east-2.compute.internal node/ip-10-0-150-132.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Created
Nov 28 13:12:49.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-150-132.us-east-2.compute.internal node/ip-10-0-150-132.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5d8a040f711995c1d951872df4267fe592f395af664c5fe638ec023407f0f65
Nov 28 13:12:49.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-150-132.us-east-2.compute.internal node/ip-10-0-150-132.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Started
#1597201585732063232 build-log.txt.gz (3 days ago)
Nov 28 13:17:39.779 I ns/openshift-marketplace pod/community-operators-jjk72 node/ip-10-0-173-104.us-east-2.compute.internal container/registry-server reason/Ready
Nov 28 13:17:39.792 I ns/openshift-marketplace pod/community-operators-jjk72 node/ip-10-0-173-104.us-east-2.compute.internal reason/GracefulDelete duration/1s
Nov 28 13:17:40.000 I ns/openshift-marketplace pod/community-operators-jjk72 node/ip-10-0-173-104.us-east-2.compute.internal container/registry-server reason/Killing
Nov 28 13:17:41.330 I ns/openshift-marketplace pod/community-operators-jjk72 node/ip-10-0-173-104.us-east-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 28 13:17:47.397 I ns/openshift-marketplace pod/community-operators-jjk72 node/ip-10-0-173-104.us-east-2.compute.internal reason/Deleted
Nov 28 13:17:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-253-11.us-east-2.compute.internal node/ip-10-0-253-11 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 28 13:17:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-253-11.us-east-2.compute.internal node/ip-10-0-253-11 reason/TerminationStoppedServing Server has stopped listening
Nov 28 13:18:23.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-28-121046@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (19 times)
Nov 28 13:18:31.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-253-11.us-east-2.compute.internal node/ip-10-0-253-11.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Created
Nov 28 13:18:31.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-253-11.us-east-2.compute.internal node/ip-10-0-253-11.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5d8a040f711995c1d951872df4267fe592f395af664c5fe638ec023407f0f65
Nov 28 13:18:31.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-253-11.us-east-2.compute.internal node/ip-10-0-253-11.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Started
#1597201585732063232 build-log.txt.gz (3 days ago)
Nov 28 13:23:23.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-28-121046@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (38 times)
Nov 28 13:23:23.116 I ns/openshift-marketplace pod/certified-operators-z9ljf node/ip-10-0-173-104.us-east-2.compute.internal container/registry-server reason/Ready
Nov 28 13:23:23.130 I ns/openshift-marketplace pod/certified-operators-z9ljf node/ip-10-0-173-104.us-east-2.compute.internal reason/GracefulDelete duration/1s
Nov 28 13:23:24.276 I ns/openshift-marketplace pod/certified-operators-z9ljf node/ip-10-0-173-104.us-east-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 28 13:23:27.399 I ns/openshift-marketplace pod/certified-operators-z9ljf node/ip-10-0-173-104.us-east-2.compute.internal reason/Deleted
Nov 28 13:23:32.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-129.us-east-2.compute.internal node/ip-10-0-132-129 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 28 13:23:32.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-129.us-east-2.compute.internal node/ip-10-0-132-129 reason/TerminationStoppedServing Server has stopped listening
Nov 28 13:23:32.461 I ns/openshift-marketplace pod/redhat-operators-ksb8s node/ reason/Created
Nov 28 13:23:32.463 I ns/openshift-marketplace pod/redhat-operators-ksb8s node/ip-10-0-173-104.us-east-2.compute.internal reason/Scheduled
Nov 28 13:23:34.000 I ns/openshift-marketplace pod/redhat-operators-ksb8s node/ip-10-0-173-104.us-east-2.compute.internal container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-operator-index:v4.8
Nov 28 13:23:34.000 I ns/openshift-marketplace pod/redhat-operators-ksb8s reason/AddedInterface Add eth0 [10.131.0.43/23] from openshift-sdn
#1597201585732063232 build-log.txt.gz (3 days ago)
Nov 28 13:32:24.757 W ns/openshift-apiserver pod/apiserver-6fd7956d75-9nwrd reason/FailedScheduling skip schedule deleting pod: openshift-apiserver/apiserver-6fd7956d75-9nwrd
Nov 28 13:32:24.780 I ns/openshift-apiserver pod/apiserver-6fd7956d75-9nwrd node/ reason/DeletedBeforeScheduling
Nov 28 13:32:24.804 W ns/openshift-apiserver pod/apiserver-f79b7f9d8-mgn2h reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 28 13:32:24.805 I ns/openshift-apiserver pod/apiserver-f79b7f9d8-mgn2h node/ reason/Created
Nov 28 13:32:28.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Nov 28 13:32:31.000 I ns/openshift-apiserver pod/apiserver-5f54dfccc5-d8wxd node/apiserver-5f54dfccc5-d8wxd reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 28 13:32:31.000 I ns/openshift-apiserver pod/apiserver-5f54dfccc5-d8wxd node/apiserver-5f54dfccc5-d8wxd reason/TerminationStoppedServing Server has stopped listening
Nov 28 13:32:35.000 I ns/openshift-machine-api machine/ci-op-c8cyh5z6-1d119-pklck-worker-us-east-2b-pvrpm reason/Update Updated Machine ci-op-c8cyh5z6-1d119-pklck-worker-us-east-2b-pvrpm (2 times)
Nov 28 13:32:35.000 I ns/openshift-machine-api machine/ci-op-c8cyh5z6-1d119-pklck-worker-us-east-2c-l6t8p reason/Update Updated Machine ci-op-c8cyh5z6-1d119-pklck-worker-us-east-2c-l6t8p (2 times)
Nov 28 13:32:39.000 I ns/openshift-machine-api machine/ci-op-c8cyh5z6-1d119-pklck-worker-us-east-2c-pcb6t reason/Update Updated Machine ci-op-c8cyh5z6-1d119-pklck-worker-us-east-2c-pcb6t (2 times)
Nov 28 13:32:40.000 W ns/openshift-apiserver pod/apiserver-5f54dfccc5-d8wxd node/ip-10-0-150-132.us-east-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.44:8443/healthz": dial tcp 10.129.0.44:8443: connect: connection refused\nbody: \n
#1597201585732063232 build-log.txt.gz (3 days ago)
Nov 28 13:33:59.032 I ns/openshift-marketplace pod/certified-operators-bnqkn node/ip-10-0-173-104.us-east-2.compute.internal reason/GracefulDelete duration/1s
Nov 28 13:34:00.858 I ns/openshift-marketplace pod/certified-operators-bnqkn node/ip-10-0-173-104.us-east-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 28 13:34:01.858 W ns/openshift-apiserver pod/apiserver-f79b7f9d8-6x8sm reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 28 13:34:01.872 I ns/openshift-marketplace pod/certified-operators-bnqkn node/ip-10-0-173-104.us-east-2.compute.internal reason/Deleted
Nov 28 13:34:07.000 I ns/openshift-apiserver pod/apiserver-5f54dfccc5-b98vq node/apiserver-5f54dfccc5-b98vq reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 28 13:34:07.000 I ns/openshift-apiserver pod/apiserver-5f54dfccc5-b98vq node/apiserver-5f54dfccc5-b98vq reason/TerminationStoppedServing Server has stopped listening
Nov 28 13:34:11.000 W ns/openshift-apiserver pod/apiserver-5f54dfccc5-b98vq node/ip-10-0-132-129.us-east-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.39:8443/healthz": dial tcp 10.128.0.39:8443: connect: connection refused\nbody: \n
Nov 28 13:34:11.000 W ns/openshift-apiserver pod/apiserver-5f54dfccc5-b98vq node/ip-10-0-132-129.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.39:8443/healthz": dial tcp 10.128.0.39:8443: connect: connection refused\nbody: \n
Nov 28 13:34:11.000 W ns/openshift-apiserver pod/apiserver-5f54dfccc5-b98vq node/ip-10-0-132-129.us-east-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.39:8443/healthz": dial tcp 10.128.0.39:8443: connect: connection refused
Nov 28 13:34:11.000 W ns/openshift-apiserver pod/apiserver-5f54dfccc5-b98vq node/ip-10-0-132-129.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.39:8443/healthz": dial tcp 10.128.0.39:8443: connect: connection refused
#1597201585732063232 build-log.txt.gz (3 days ago)
Nov 28 13:35:40.840 I ns/openshift-apiserver pod/apiserver-5f54dfccc5-wmgns node/ip-10-0-253-11.us-east-2.compute.internal reason/GracefulDelete duration/70s
Nov 28 13:35:40.927 I ns/openshift-apiserver pod/apiserver-f79b7f9d8-dwddp node/ reason/Created
Nov 28 13:35:40.930 W ns/openshift-apiserver pod/apiserver-f79b7f9d8-dwddp reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 28 13:35:42.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well")
Nov 28 13:35:42.177 W clusteroperator/openshift-apiserver condition/Progressing status/False reason/AsExpected changed: All is well
Nov 28 13:35:50.000 I ns/openshift-apiserver pod/apiserver-5f54dfccc5-wmgns node/apiserver-5f54dfccc5-wmgns reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 28 13:35:50.000 I ns/openshift-apiserver pod/apiserver-5f54dfccc5-wmgns node/apiserver-5f54dfccc5-wmgns reason/TerminationStoppedServing Server has stopped listening
Nov 28 13:35:53.000 W ns/openshift-apiserver pod/apiserver-5f54dfccc5-wmgns node/ip-10-0-253-11.us-east-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.35:8443/healthz": dial tcp 10.130.0.35:8443: connect: connection refused\nbody: \n
Nov 28 13:35:53.000 W ns/openshift-apiserver pod/apiserver-5f54dfccc5-wmgns node/ip-10-0-253-11.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.35:8443/healthz": dial tcp 10.130.0.35:8443: connect: connection refused\nbody: \n
Nov 28 13:35:53.000 W ns/openshift-apiserver pod/apiserver-5f54dfccc5-wmgns node/ip-10-0-253-11.us-east-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.35:8443/healthz": dial tcp 10.130.0.35:8443: connect: connection refused
Nov 28 13:35:53.000 W ns/openshift-apiserver pod/apiserver-5f54dfccc5-wmgns node/ip-10-0-253-11.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.35:8443/healthz": dial tcp 10.130.0.35:8443: connect: connection refused
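The ProbeError/Unhealthy pairs above all report the same failure mode: the kubelet's HTTPS probe against /healthz gets "connection refused" because the apiserver process has already stopped listening. A minimal Go sketch of what such a probe amounts to, assuming a placeholder URL copied from the events above (illustrative only, not kubelet source):

```go
// probe.go: rough equivalent of a kubelet HTTPS readiness/liveness probe.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{
		Timeout: time.Second,
		// Kubelet HTTPS probes do not verify the serving certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		// A closed listener surfaces as "connect: connection refused",
		// matching the ProbeError events above.
		return fmt.Errorf("probe error: %w", err)
	}
	defer resp.Body.Close()
	// Kubelet counts 2xx/3xx as success; anything else is Unhealthy.
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("https://10.130.0.35:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}
```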
periodic-ci-openshift-release-master-nightly-4.9-e2e-aws-upgrade (all) - 3 runs, 33% failed, 100% of failures match = 33% impact
#1598011350745878528 build-log.txt.gz (26 hours ago)
Nov 30 18:46:40.827 I ns/openshift-marketplace pod/certified-operators-t6vk2 node/ip-10-0-167-85.us-west-2.compute.internal container/registry-server reason/Ready
Nov 30 18:46:40.890 I ns/openshift-marketplace pod/certified-operators-t6vk2 node/ip-10-0-167-85.us-west-2.compute.internal reason/GracefulDelete duration/1s
Nov 30 18:46:42.000 I ns/openshift-marketplace pod/certified-operators-t6vk2 node/ip-10-0-167-85.us-west-2.compute.internal container/registry-server reason/Killing
Nov 30 18:46:43.786 I ns/openshift-marketplace pod/certified-operators-t6vk2 node/ip-10-0-167-85.us-west-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 30 18:46:43.912 I ns/openshift-marketplace pod/certified-operators-t6vk2 node/ip-10-0-167-85.us-west-2.compute.internal reason/Deleted
Nov 30 18:47:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-199-231.us-west-2.compute.internal node/ip-10-0-199-231 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 30 18:47:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-199-231.us-west-2.compute.internal node/ip-10-0-199-231 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 18:47:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-199-231.us-west-2.compute.internal node/ip-10-0-199-231 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 18:47:26.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-199-231.us-west-2.compute.internal node/ip-10-0-199-231 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 18:47:28.008 I ns/openshift-marketplace pod/redhat-operators-hn85d node/ip-10-0-167-85.us-west-2.compute.internal reason/Scheduled
Nov 30 18:47:28.028 I ns/openshift-marketplace pod/redhat-operators-hn85d node/ reason/Created
#1598011350745878528 build-log.txt.gz (26 hours ago)
Nov 30 18:48:44.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671 (15 times)
Nov 30 18:49:03.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671 (16 times)
Nov 30 18:50:03.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671 (17 times)
Nov 30 18:51:03.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671 (18 times)
Nov 30 18:52:03.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671 (19 times)
Nov 30 18:52:10.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-184-219.us-west-2.compute.internal node/ip-10-0-184-219 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 30 18:52:10.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-184-219.us-west-2.compute.internal node/ip-10-0-184-219 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 18:52:10.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-184-219.us-west-2.compute.internal node/ip-10-0-184-219 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 18:52:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-184-219.us-west-2.compute.internal node/ip-10-0-184-219 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 18:52:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-184-219.us-west-2.compute.internal node/ip-10-0-184-219.us-west-2.compute.internal container/setup reason/Pulling image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671
Nov 30 18:52:17.000 W ns/openshift-kube-apiserver endpoints/apiserver reason/FailedToUpdateEndpoint Failed to update endpoint openshift-kube-apiserver/apiserver: Operation cannot be fulfilled on endpoints "apiserver": the object has been modified; please apply your changes to the latest version and try again
#1598011350745878528 build-log.txt.gz (26 hours ago)
Nov 30 18:56:52.201 I ns/openshift-marketplace pod/certified-operators-whhkj node/ip-10-0-167-85.us-west-2.compute.internal reason/GracefulDelete duration/1s
Nov 30 18:56:54.000 I ns/openshift-marketplace pod/certified-operators-whhkj node/ip-10-0-167-85.us-west-2.compute.internal container/registry-server reason/Killing
Nov 30 18:56:55.172 I ns/openshift-marketplace pod/certified-operators-whhkj node/ip-10-0-167-85.us-west-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 30 18:56:55.173 I ns/openshift-marketplace pod/certified-operators-whhkj node/ip-10-0-167-85.us-west-2.compute.internal reason/Deleted
Nov 30 18:57:03.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671 (39 times)
Nov 30 18:57:07.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-180-30.us-west-2.compute.internal node/ip-10-0-180-30 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 30 18:57:07.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-180-30.us-west-2.compute.internal node/ip-10-0-180-30 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 18:57:07.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-180-30.us-west-2.compute.internal node/ip-10-0-180-30 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 18:57:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-180-30.us-west-2.compute.internal node/ip-10-0-180-30 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 18:57:12.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671 (40 times)
Nov 30 18:57:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-180-30.us-west-2.compute.internal node/ip-10-0-180-30.us-west-2.compute.internal container/setup reason/Pulling image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671
#1598011350745878528 build-log.txt.gz (26 hours ago)
Nov 30 19:07:49.000 I ns/openshift-marketplace pod/redhat-marketplace-58n8j reason/AddedInterface Add eth0 [10.128.2.42/23] from openshift-sdn
Nov 30 19:07:50.000 I ns/openshift-marketplace pod/redhat-marketplace-58n8j node/ip-10-0-167-85.us-west-2.compute.internal container/registry-server reason/Created
Nov 30 19:07:50.000 I ns/openshift-marketplace pod/redhat-marketplace-58n8j node/ip-10-0-167-85.us-west-2.compute.internal container/registry-server reason/Pulled duration/0.782s image/registry.redhat.io/redhat/redhat-marketplace-index:v4.9
Nov 30 19:07:50.000 I ns/openshift-marketplace pod/redhat-marketplace-58n8j node/ip-10-0-167-85.us-west-2.compute.internal container/registry-server reason/Started
Nov 30 19:07:51.581 I ns/openshift-marketplace pod/redhat-marketplace-58n8j node/ip-10-0-167-85.us-west-2.compute.internal container/registry-server reason/ContainerStart duration/2.00s
Nov 30 19:07:54.000 I ns/openshift-apiserver pod/apiserver-695768d8d-7vnr8 node/apiserver-695768d8d-7vnr8 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 19:07:54.000 I ns/openshift-apiserver pod/apiserver-695768d8d-7vnr8 node/apiserver-695768d8d-7vnr8 reason/TerminationStoppedServing Server has stopped listening
Nov 30 19:07:58.186 I ns/openshift-marketplace pod/certified-operators-5lxh9 node/ reason/Created
Nov 30 19:07:58.486 I ns/openshift-marketplace pod/certified-operators-5lxh9 node/ip-10-0-167-85.us-west-2.compute.internal reason/Scheduled
Nov 30 19:07:58.583 I ns/openshift-marketplace pod/redhat-marketplace-58n8j node/ip-10-0-167-85.us-west-2.compute.internal container/registry-server reason/Ready
Nov 30 19:07:58.603 I ns/openshift-marketplace pod/redhat-marketplace-58n8j node/ip-10-0-167-85.us-west-2.compute.internal reason/GracefulDelete duration/1s
#1598011350745878528 build-log.txt.gz (26 hours ago)
Nov 30 19:09:11.096 W ns/openshift-apiserver pod/apiserver-59fd9b4d9f-grh7j reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 19:09:11.096 I ns/openshift-apiserver pod/apiserver-59fd9b4d9f-trndj node/ip-10-0-180-30.us-west-2.compute.internal container/openshift-apiserver reason/Ready
Nov 30 19:09:11.097 I ns/openshift-apiserver pod/apiserver-695768d8d-ps86h node/ip-10-0-184-219.us-west-2.compute.internal reason/GracefulDelete duration/90s
Nov 30 19:09:11.156 I ns/openshift-apiserver pod/apiserver-59fd9b4d9f-grh7j node/ reason/Created
Nov 30 19:09:12.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 30 19:09:26.000 I ns/openshift-apiserver pod/apiserver-695768d8d-ps86h node/apiserver-695768d8d-ps86h reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 19:09:26.000 I ns/openshift-apiserver pod/apiserver-695768d8d-ps86h node/apiserver-695768d8d-ps86h reason/TerminationStoppedServing Server has stopped listening
Nov 30 19:09:27.000 W ns/openshift-network-diagnostics node/ip-10-0-167-85.us-west-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: openshift-apiserver-endpoint-ip-10-0-180-30: failed to establish a TCP connection to 10.130.0.26:8443: dial tcp 10.130.0.26:8443: connect: connection refused
Nov 30 19:10:11.670 - 15s   W ns/openshift-apiserver pod/apiserver-59fd9b4d9f-grh7j node/ pod has been pending longer than a minute
Nov 30 19:10:25.736 W ns/openshift-apiserver pod/apiserver-59fd9b4d9f-grh7j reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 19:10:26.000 I ns/openshift-machine-api deployment/machine-api-controllers reason/ScalingReplicaSet Scaled up replica set machine-api-controllers-787b5b4ff7 to 1
#1598011350745878528 build-log.txt.gz (26 hours ago)
Nov 30 19:10:48.366 I ns/openshift-apiserver pod/apiserver-695768d8d-hslsq node/ip-10-0-199-231.us-west-2.compute.internal reason/GracefulDelete duration/90s
Nov 30 19:10:48.366 I ns/openshift-apiserver pod/apiserver-59fd9b4d9f-t8gt4 node/ reason/Created
Nov 30 19:10:48.718 W ns/openshift-apiserver pod/apiserver-695768d8d-hslsq node/ip-10-0-199-231.us-west-2.compute.internal container/openshift-apiserver reason/NotReady
Nov 30 19:10:50.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-695768d8d-hslsq pod)",Progressing changed from True to False ("All is well")
Nov 30 19:10:50.189 W clusteroperator/openshift-apiserver condition/Progressing status/False reason/AsExpected changed: All is well
Nov 30 19:11:03.000 I ns/openshift-apiserver pod/apiserver-695768d8d-hslsq node/apiserver-695768d8d-hslsq reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 19:11:03.000 I ns/openshift-apiserver pod/apiserver-695768d8d-hslsq node/apiserver-695768d8d-hslsq reason/TerminationStoppedServing Server has stopped listening
Nov 30 19:11:48.671 - 14s   W ns/openshift-apiserver pod/apiserver-59fd9b4d9f-t8gt4 node/ pod has been pending longer than a minute
Nov 30 19:11:55.738 W ns/openshift-apiserver pod/apiserver-59fd9b4d9f-t8gt4 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 19:12:03.000 I ns/openshift-apiserver pod/apiserver-695768d8d-hslsq node/apiserver-695768d8d-hslsq reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 19:12:03.991 I ns/openshift-apiserver pod/apiserver-59fd9b4d9f-t8gt4 node/ip-10-0-199-231.us-west-2.compute.internal reason/Scheduled
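The recurring "0/6 nodes are available" messages above describe a rolling-update squeeze rather than a broken scheduler: the openshift-apiserver deployment is pinned to the three masters by its node selector, required pod anti-affinity allows one replica per master, and the surge replica cannot land until an old replica finishes its 90s graceful delete. A worked sketch of that arithmetic, with hypothetical node names (not scheduler code):

```go
// feasible.go: reproduce the scheduler's 0/6 breakdown for this deployment.
package main

import "fmt"

type node struct {
	name       string
	master     bool
	hasReplica bool // an openshift-apiserver pod already runs here
}

func feasible(nodes []node) (fit, selectorMiss, antiAffinityMiss int) {
	for _, n := range nodes {
		switch {
		case !n.master:
			selectorMiss++ // "didn't match Pod's node affinity/selector"
		case n.hasReplica:
			antiAffinityMiss++ // "didn't match pod anti-affinity rules"
		default:
			fit++
		}
	}
	return
}

func main() {
	nodes := []node{
		{"master-0", true, true}, {"master-1", true, true}, {"master-2", true, true},
		{"worker-a", false, false}, {"worker-b", false, false}, {"worker-c", false, false},
	}
	fit, sel, anti := feasible(nodes)
	fmt.Printf("%d/%d nodes are available: %d node(s) didn't match Pod's node affinity/selector, %d node(s) didn't match pod anti-affinity rules.\n",
		fit, len(nodes), sel, anti)
}
```

Once one old replica is deleted, hasReplica flips to false on that master and the pending pod schedules, which is why each FailedScheduling burst above resolves within a couple of minutes.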
pull-ci-openshift-machine-config-operator-release-4.9-e2e-agnostic-upgrade (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#1597920766882484224 build-log.txt.gz (32 hours ago)
Nov 30 13:05:07.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-1 node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 30 13:05:08.543 I ns/openshift-kube-apiserver pod/installer-9-ci-op-n0v8mvqz-45c24-mc9rr-master-1 node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 container/installer reason/ContainerExit code/0 cause/Completed
Nov 30 13:05:46.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.2222222222222223 over 5 minutes on "Azure"; disk metrics are: etcd-ci-op-n0v8mvqz-45c24-mc9rr-master-0=0.01871999999999991,etcd-ci-op-n0v8mvqz-45c24-mc9rr-master-1=0.0076293333333333265,etcd-ci-op-n0v8mvqz-45c24-mc9rr-master-2=0.007009029095210485. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 30 13:06:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-1 node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 13:06:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-1 node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 13:06:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-1 node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 13:06:19.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-1 node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
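The events above trace the kube-apiserver's delayed graceful termination: on SIGTERM it runs its pre-shutdown hooks, keeps serving for the configured shutdown delay (1m10s in these jobs), then closes its listener and drains in-flight requests. A minimal sketch of that lifecycle with Go's net/http, using hypothetical log markers that mirror the event reasons (not the apiserver implementation):

```go
// shutdown.go: delayed graceful termination in miniature.
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	srv := &http.Server{Addr: ":8443"}
	go srv.ListenAndServe()

	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGTERM)
	<-sig
	log.Println("ShutdownInitiated: becoming unready, but keeping serving")

	// Analogue of --shutdown-delay-duration: stay up so load balancers
	// and endpoints can drop this instance before it stops listening.
	time.Sleep(70 * time.Second)
	log.Println("AfterShutdownDelayDuration: minimal shutdown duration finished")

	// Shutdown closes the listener (HTTPServerStoppedListening), then
	// blocks until in-flight requests complete (InFlightRequestsDrained).
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("drain did not finish: %v", err)
	}
	log.Println("TerminationGracefulTerminationFinished: all pending requests processed")
}
```

During the sleep the process still answers on its port; once Shutdown closes the listener, probes start failing with connection refused, which is exactly the window in which the ProbeError warnings elsewhere in these logs fire.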
#1597920766882484224 build-log.txt.gz (32 hours ago)
Nov 30 13:08:25.956 I ns/openshift-marketplace pod/certified-operators-ddwvl node/ci-op-n0v8mvqz-45c24-mc9rr-worker-centralus3-8vq5x container/registry-server reason/Ready
Nov 30 13:08:25.957 I ns/openshift-marketplace pod/certified-operators-ddwvl node/ci-op-n0v8mvqz-45c24-mc9rr-worker-centralus3-8vq5x reason/GracefulDelete duration/1s
Nov 30 13:08:27.000 I ns/openshift-marketplace pod/certified-operators-ddwvl node/ci-op-n0v8mvqz-45c24-mc9rr-worker-centralus3-8vq5x container/registry-server reason/Killing
Nov 30 13:08:28.902 I ns/openshift-marketplace pod/certified-operators-ddwvl node/ci-op-n0v8mvqz-45c24-mc9rr-worker-centralus3-8vq5x container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 30 13:08:28.968 I ns/openshift-marketplace pod/certified-operators-ddwvl node/ci-op-n0v8mvqz-45c24-mc9rr-worker-centralus3-8vq5x reason/Deleted
Nov 30 13:08:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-0 node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 13:08:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-0 node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 13:08:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-0 node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 13:08:52.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-0 node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 13:08:55.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-0 node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 container/setup reason/Pulling image/registry.build05.ci.openshift.org/ci-op-n0v8mvqz/stable@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa
Nov 30 13:08:55.000 W ns/openshift-kube-apiserver endpoints/apiserver reason/FailedToUpdateEndpoint Failed to update endpoint openshift-kube-apiserver/apiserver: Operation cannot be fulfilled on endpoints "apiserver": the object has been modified; please apply your changes to the latest version and try again
#1597920766882484224 build-log.txt.gz (32 hours ago)
Nov 30 13:10:05.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build05.ci.openshift.org/ci-op-n0v8mvqz/stable-initial@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa,registry.build05.ci.openshift.org/ci-op-n0v8mvqz/stable@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (27 times)
Nov 30 13:10:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build05.ci.openshift.org/ci-op-n0v8mvqz/stable-initial@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa,registry.build05.ci.openshift.org/ci-op-n0v8mvqz/stable@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (28 times)
Nov 30 13:10:28.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-n0v8mvqz-45c24-mc9rr-master-0_4209db57-d8bb-433d-8788-83cbb91b0dbb became leader
Nov 30 13:10:57.000 W ns/openshift-network-diagnostics node/ci-op-n0v8mvqz-45c24-mc9rr-worker-centralus3-8vq5x reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ci-op-n0v8mvqz-45c24-mc9rr-master-0: failed to establish a TCP connection to 10.0.0.6:6443: dial tcp 10.0.0.6:6443: connect: connection refused
Nov 30 13:10:57.000 I ns/openshift-network-diagnostics node/ci-op-n0v8mvqz-45c24-mc9rr-worker-centralus3-8vq5x reason/ConnectivityRestored roles/worker Connectivity restored after 1m0.000097136s: kubernetes-apiserver-endpoint-ci-op-n0v8mvqz-45c24-mc9rr-master-0: tcp connection to 10.0.0.6:6443 succeeded
Nov 30 13:11:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-2 node/ci-op-n0v8mvqz-45c24-mc9rr-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 13:11:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-2 node/ci-op-n0v8mvqz-45c24-mc9rr-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 13:11:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-2 node/ci-op-n0v8mvqz-45c24-mc9rr-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 13:11:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build05.ci.openshift.org/ci-op-n0v8mvqz/stable-initial@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa,registry.build05.ci.openshift.org/ci-op-n0v8mvqz/stable@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (29 times)
Nov 30 13:11:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-2 node/ci-op-n0v8mvqz-45c24-mc9rr-master-2 container/kube-apiserver reason/Killing
Nov 30 13:11:19.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-n0v8mvqz-45c24-mc9rr-master-2 node/ci-op-n0v8mvqz-45c24-mc9rr-master-2 container/setup reason/Pulling image/registry.build05.ci.openshift.org/ci-op-n0v8mvqz/stable@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa
#1597920766882484224 build-log.txt.gz (32 hours ago)
Nov 30 13:19:51.999 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-c27rl node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 container/openshift-apiserver reason/NotReady
Nov 30 13:19:56.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-c27rl node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]informer-sync ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/max-in-flight-filter ok\n[+]poststarthook/image.openshift.io-apiserver-caches ok\n[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok\n[+]poststarthook/authorization.openshift.io-ensureopenshift-infra ok\n[+]poststarthook/project.openshift.io-projectcache ok\n[+]poststarthook/project.openshift.io-projectauthorizationcache ok\n[+]poststarthook/openshift.io-startinformers ok\n[+]poststarthook/openshift.io-restmapperupdater ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (2 times)
Nov 30 13:19:56.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-c27rl node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (2 times)
Nov 30 13:20:01.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-c27rl node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]informer-sync ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/max-in-flight-filter ok\n[+]poststarthook/image.openshift.io-apiserver-caches ok\n[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok\n[+]poststarthook/authorization.openshift.io-ensureopenshift-infra ok\n[+]poststarthook/project.openshift.io-projectcache ok\n[+]poststarthook/project.openshift.io-projectauthorizationcache ok\n[+]poststarthook/openshift.io-startinformers ok\n[+]poststarthook/openshift.io-restmapperupdater ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (3 times)
Nov 30 13:20:01.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-c27rl node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (3 times)
Nov 30 13:20:03.000 I ns/openshift-apiserver pod/apiserver-5dd54cc66f-c27rl node/apiserver-5dd54cc66f-c27rl reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 13:20:03.000 I ns/openshift-apiserver pod/apiserver-5dd54cc66f-c27rl node/apiserver-5dd54cc66f-c27rl reason/TerminationStoppedServing Server has stopped listening
Nov 30 13:20:06.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-c27rl node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 reason/ProbeError Readiness probe error: Get "https://10.128.0.64:8443/readyz": dial tcp 10.128.0.64:8443: connect: connection refused\nbody: \n
Nov 30 13:20:06.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-c27rl node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.128.0.64:8443/readyz": dial tcp 10.128.0.64:8443: connect: connection refused
Nov 30 13:20:11.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-c27rl node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 reason/ProbeError Readiness probe error: Get "https://10.128.0.64:8443/readyz": dial tcp 10.128.0.64:8443: connect: connection refused\nbody: \n (2 times)
Nov 30 13:20:11.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-c27rl node/ci-op-n0v8mvqz-45c24-mc9rr-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.128.0.64:8443/readyz": dial tcp 10.128.0.64:8443: connect: connection refused (2 times)
#1597920766882484224 build-log.txt.gz (32 hours ago)
Nov 30 13:21:19.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 30 13:21:22.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-d75qp node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]informer-sync ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/max-in-flight-filter ok\n[+]poststarthook/image.openshift.io-apiserver-caches ok\n[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok\n[+]poststarthook/authorization.openshift.io-ensureopenshift-infra ok\n[+]poststarthook/project.openshift.io-projectcache ok\n[+]poststarthook/project.openshift.io-projectauthorizationcache ok\n[+]poststarthook/openshift.io-startinformers ok\n[+]poststarthook/openshift.io-restmapperupdater ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (2 times)
Nov 30 13:21:22.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-d75qp node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (2 times)
Nov 30 13:21:27.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-d75qp node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]informer-sync ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/max-in-flight-filter ok\n[+]poststarthook/image.openshift.io-apiserver-caches ok\n[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok\n[+]poststarthook/authorization.openshift.io-ensureopenshift-infra ok\n[+]poststarthook/project.openshift.io-projectcache ok\n[+]poststarthook/project.openshift.io-projectauthorizationcache ok\n[+]poststarthook/openshift.io-startinformers ok\n[+]poststarthook/openshift.io-restmapperupdater ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (3 times)
Nov 30 13:21:27.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-d75qp node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (3 times)
Nov 30 13:21:30.000 I ns/openshift-apiserver pod/apiserver-5dd54cc66f-d75qp node/apiserver-5dd54cc66f-d75qp reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 13:21:30.000 I ns/openshift-apiserver pod/apiserver-5dd54cc66f-d75qp node/apiserver-5dd54cc66f-d75qp reason/TerminationStoppedServing Server has stopped listening
Nov 30 13:21:32.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-d75qp node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 reason/ProbeError Readiness probe error: Get "https://10.130.0.47:8443/readyz": dial tcp 10.130.0.47:8443: connect: connection refused\nbody: \n
Nov 30 13:21:32.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-d75qp node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 reason/Unhealthy Readiness probe failed: Get "https://10.130.0.47:8443/readyz": dial tcp 10.130.0.47:8443: connect: connection refused
Nov 30 13:21:37.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-d75qp node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 reason/ProbeError Readiness probe error: Get "https://10.130.0.47:8443/readyz": dial tcp 10.130.0.47:8443: connect: connection refused\nbody: \n (2 times)
Nov 30 13:21:37.000 W ns/openshift-apiserver pod/apiserver-5dd54cc66f-d75qp node/ci-op-n0v8mvqz-45c24-mc9rr-master-1 reason/Unhealthy Readiness probe failed: Get "https://10.130.0.47:8443/readyz": dial tcp 10.130.0.47:8443: connect: connection refused (2 times)
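The 500 probe bodies quoted above come from a composite readyz endpoint: each named check contributes a "[+]name ok" or "[-]name failed" line, and any failing check (here "shutdown", which starts failing as soon as termination begins) fails the whole probe until the listener closes and the failures turn into connection refused. A rough sketch of that aggregation pattern, with check names copied from the events and the wiring purely illustrative:

```go
// readyz.go: composite health checks producing the [+]/[-] probe body.
package main

import (
	"fmt"
	"net/http"
	"strings"
)

type check struct {
	name string
	fn   func() error
}

func readyz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var body strings.Builder
		failed := false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				// The real endpoint withholds reasons unless queried verbosely.
				fmt.Fprintf(&body, "[-]%s failed: reason withheld\n", c.name)
				failed = true
			} else {
				fmt.Fprintf(&body, "[+]%s ok\n", c.name)
			}
		}
		if failed {
			body.WriteString("readyz check failed\n")
			w.WriteHeader(http.StatusInternalServerError) // "statuscode: 500"
		}
		fmt.Fprint(w, body.String())
	}
}

func main() {
	shuttingDown := true // flips once SIGTERM is received
	checks := []check{
		{"ping", func() error { return nil }},
		{"log", func() error { return nil }},
		{"shutdown", func() error {
			if shuttingDown {
				return fmt.Errorf("process is shutting down")
			}
			return nil
		}},
	}
	http.Handle("/readyz", readyz(checks))
	http.ListenAndServe(":8443", nil)
}
```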
#1597512331221274624 build-log.txt.gz (2 days ago)
Nov 29 09:52:36.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-2 node/ci-op-1lm7t27x-45c24-9798n-master-2 reason/ShutdownInitiated Received signal to terminate, becoming unready, but keeping serving
Nov 29 09:52:36.000 I ns/openshift-kube-apiserver pod/installer-12-ci-op-1lm7t27x-45c24-9798n-master-2 reason/StaticPodInstallerCompleted Successfully installed revision 12
Nov 29 09:52:36.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-2 node/ci-op-1lm7t27x-45c24-9798n-master-2 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 09:52:37.218 I ns/openshift-kube-apiserver pod/installer-12-ci-op-1lm7t27x-45c24-9798n-master-2 node/ci-op-1lm7t27x-45c24-9798n-master-2 container/installer reason/ContainerExit code/0 cause/Completed
Nov 29 09:53:21.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.1103205225153614 over 5 minutes on "Azure"; disk metrics are: etcd-ci-op-1lm7t27x-45c24-9798n-master-1=0.035519999999999594,etcd-ci-op-1lm7t27x-45c24-9798n-master-2=0.010640000000000156,etcd-ci-op-1lm7t27x-45c24-9798n-master-0=NaN. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 29 09:53:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-2 node/ci-op-1lm7t27x-45c24-9798n-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 09:53:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-2 node/ci-op-1lm7t27x-45c24-9798n-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 09:53:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-2 node/ci-op-1lm7t27x-45c24-9798n-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 09:53:48.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-2 node/ci-op-1lm7t27x-45c24-9798n-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 09:53:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-2 node/ci-op-1lm7t27x-45c24-9798n-master-2 container/setup reason/Pulling image/registry.build05.ci.openshift.org/ci-op-1lm7t27x/stable@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa
Nov 29 09:53:56.788 W ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-2 node/ci-op-1lm7t27x-45c24-9798n-master-2 invariant violation (bug): static pod should not transition Running->Pending with same UID
#1597512331221274624 build-log.txt.gz (2 days ago)
Nov 29 09:56:06.453 I ns/openshift-marketplace pod/certified-operators-kwrjd node/ci-op-1lm7t27x-45c24-9798n-worker-westus-wwbb6 container/registry-server reason/Ready
Nov 29 09:56:06.520 I ns/openshift-marketplace pod/certified-operators-kwrjd node/ci-op-1lm7t27x-45c24-9798n-worker-westus-wwbb6 reason/GracefulDelete duration/1s
Nov 29 09:56:08.000 I ns/openshift-marketplace pod/certified-operators-kwrjd node/ci-op-1lm7t27x-45c24-9798n-worker-westus-wwbb6 container/registry-server reason/Killing
Nov 29 09:56:09.460 I ns/openshift-marketplace pod/certified-operators-kwrjd node/ci-op-1lm7t27x-45c24-9798n-worker-westus-wwbb6 container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 29 09:56:09.467 I ns/openshift-marketplace pod/certified-operators-kwrjd node/ci-op-1lm7t27x-45c24-9798n-worker-westus-wwbb6 reason/Deleted
Nov 29 09:56:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-1 node/ci-op-1lm7t27x-45c24-9798n-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 09:56:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-1 node/ci-op-1lm7t27x-45c24-9798n-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 09:56:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-1 node/ci-op-1lm7t27x-45c24-9798n-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 09:56:14.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-1 node/ci-op-1lm7t27x-45c24-9798n-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 09:56:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-1 node/ci-op-1lm7t27x-45c24-9798n-master-1 container/kube-apiserver reason/Killing
Nov 29 09:56:16.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build05.ci.openshift.org/ci-op-1lm7t27x/stable-initial@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa,registry.build05.ci.openshift.org/ci-op-1lm7t27x/stable@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (16 times)
#1597512331221274624 build-log.txt.gz (2 days ago)
Nov 29 09:58:04.781 I ns/openshift-marketplace pod/community-operators-hzqxc node/ci-op-1lm7t27x-45c24-9798n-worker-westus-wwbb6 container/registry-server reason/Ready
Nov 29 09:58:04.847 I ns/openshift-marketplace pod/community-operators-hzqxc node/ci-op-1lm7t27x-45c24-9798n-worker-westus-wwbb6 reason/GracefulDelete duration/1s
Nov 29 09:58:06.000 I ns/openshift-marketplace pod/community-operators-hzqxc node/ci-op-1lm7t27x-45c24-9798n-worker-westus-wwbb6 container/registry-server reason/Killing
Nov 29 09:58:07.732 I ns/openshift-marketplace pod/community-operators-hzqxc node/ci-op-1lm7t27x-45c24-9798n-worker-westus-wwbb6 container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 29 09:58:07.799 I ns/openshift-marketplace pod/community-operators-hzqxc node/ci-op-1lm7t27x-45c24-9798n-worker-westus-wwbb6 reason/Deleted
Nov 29 09:58:40.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-0 node/ci-op-1lm7t27x-45c24-9798n-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 09:58:40.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-0 node/ci-op-1lm7t27x-45c24-9798n-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 09:58:40.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-1lm7t27x-45c24-9798n-master-0 node/ci-op-1lm7t27x-45c24-9798n-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 09:58:40.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build05.ci.openshift.org/ci-op-1lm7t27x/stable-initial@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa,registry.build05.ci.openshift.org/ci-op-1lm7t27x/stable@sha256:7388845fb56b2ec12bfa82054ac548dedfad309d85b9b8f5dbb446af88fddffa (33 times)
Nov 29 09:58:46.719 I ns/openshift-marketplace pod/redhat-operators-vr7xr node/ci-op-1lm7t27x-45c24-9798n-worker-westus-wwbb6 reason/Scheduled
Nov 29 09:58:46.774 I ns/openshift-marketplace pod/redhat-operators-vr7xr node/ reason/Created
#1597512331221274624 build-log.txt.gz (2 days ago)
Nov 29 10:07:28.363 W ns/openshift-apiserver pod/apiserver-7bc84c9467-sbg42 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 29 10:07:28.369 I ns/openshift-marketplace pod/redhat-marketplace-tvfqg node/ci-op-1lm7t27x-45c24-9798n-worker-westus-wwbb6 container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 29 10:07:28.502 I ns/openshift-marketplace pod/redhat-marketplace-tvfqg node/ci-op-1lm7t27x-45c24-9798n-worker-westus-wwbb6 reason/Deleted
Nov 29 10:07:30.000 W ns/openshift-apiserver pod/apiserver-795cc69878-cn8hh node/ci-op-1lm7t27x-45c24-9798n-master-2 reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]informer-sync ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/max-in-flight-filter ok\n[+]poststarthook/image.openshift.io-apiserver-caches ok\n[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok\n[+]poststarthook/authorization.openshift.io-ensureopenshift-infra ok\n[+]poststarthook/project.openshift.io-projectcache ok\n[+]poststarthook/project.openshift.io-projectauthorizationcache ok\n[+]poststarthook/openshift.io-startinformers ok\n[+]poststarthook/openshift.io-restmapperupdater ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (3 times)
Nov 29 10:07:30.000 W ns/openshift-apiserver pod/apiserver-795cc69878-cn8hh node/ci-op-1lm7t27x-45c24-9798n-master-2 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (3 times)
Nov 29 10:07:32.000 I ns/openshift-apiserver pod/apiserver-795cc69878-cn8hh node/apiserver-795cc69878-cn8hh reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 10:07:32.000 I ns/openshift-apiserver pod/apiserver-795cc69878-cn8hh node/apiserver-795cc69878-cn8hh reason/TerminationStoppedServing Server has stopped listening
Nov 29 10:07:35.000 W ns/openshift-apiserver pod/apiserver-795cc69878-cn8hh node/ci-op-1lm7t27x-45c24-9798n-master-2 reason/ProbeError Readiness probe error: Get "https://10.130.0.38:8443/readyz": dial tcp 10.130.0.38:8443: connect: connection refused\nbody: \n
Nov 29 10:07:35.000 W ns/openshift-apiserver pod/apiserver-795cc69878-cn8hh node/ci-op-1lm7t27x-45c24-9798n-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.130.0.38:8443/readyz": dial tcp 10.130.0.38:8443: connect: connection refused
Nov 29 10:07:40.000 W ns/openshift-apiserver pod/apiserver-795cc69878-cn8hh node/ci-op-1lm7t27x-45c24-9798n-master-2 reason/ProbeError Readiness probe error: Get "https://10.130.0.38:8443/readyz": dial tcp 10.130.0.38:8443: connect: connection refused\nbody: \n (2 times)
Nov 29 10:07:40.000 W ns/openshift-apiserver pod/apiserver-795cc69878-cn8hh node/ci-op-1lm7t27x-45c24-9798n-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.130.0.38:8443/readyz": dial tcp 10.130.0.38:8443: connect: connection refused (2 times)
#1597512331221274624 build-log.txt.gz (2 days ago)
Nov 29 10:08:47.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 29 10:08:50.000 W ns/openshift-apiserver pod/apiserver-795cc69878-j79s4 node/ci-op-1lm7t27x-45c24-9798n-master-1 reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]informer-sync ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/max-in-flight-filter ok\n[+]poststarthook/image.openshift.io-apiserver-caches ok\n[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok\n[+]poststarthook/authorization.openshift.io-ensureopenshift-infra ok\n[+]poststarthook/project.openshift.io-projectcache ok\n[+]poststarthook/project.openshift.io-projectauthorizationcache ok\n[+]poststarthook/openshift.io-startinformers ok\n[+]poststarthook/openshift.io-restmapperupdater ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (2 times)
Nov 29 10:08:50.000 W ns/openshift-apiserver pod/apiserver-795cc69878-j79s4 node/ci-op-1lm7t27x-45c24-9798n-master-1 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (2 times)
Nov 29 10:08:55.000 W ns/openshift-apiserver pod/apiserver-795cc69878-j79s4 node/ci-op-1lm7t27x-45c24-9798n-master-1 reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]informer-sync ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/max-in-flight-filter ok\n[+]poststarthook/image.openshift.io-apiserver-caches ok\n[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok\n[+]poststarthook/authorization.openshift.io-ensureopenshift-infra ok\n[+]poststarthook/project.openshift.io-projectcache ok\n[+]poststarthook/project.openshift.io-projectauthorizationcache ok\n[+]poststarthook/openshift.io-startinformers ok\n[+]poststarthook/openshift.io-restmapperupdater ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (3 times)
Nov 29 10:08:55.000 W ns/openshift-apiserver pod/apiserver-795cc69878-j79s4 node/ci-op-1lm7t27x-45c24-9798n-master-1 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (3 times)
Nov 29 10:08:59.000 I ns/openshift-apiserver pod/apiserver-795cc69878-j79s4 node/apiserver-795cc69878-j79s4 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 10:08:59.000 I ns/openshift-apiserver pod/apiserver-795cc69878-j79s4 node/apiserver-795cc69878-j79s4 reason/TerminationStoppedServing Server has stopped listening
Nov 29 10:09:00.000 W ns/openshift-apiserver pod/apiserver-795cc69878-j79s4 node/ci-op-1lm7t27x-45c24-9798n-master-1 reason/ProbeError Readiness probe error: Get "https://10.128.0.62:8443/readyz": dial tcp 10.128.0.62:8443: connect: connection refused\nbody: \n
Nov 29 10:09:00.000 W ns/openshift-apiserver pod/apiserver-795cc69878-j79s4 node/ci-op-1lm7t27x-45c24-9798n-master-1 reason/Unhealthy Readiness probe failed: Get "https://10.128.0.62:8443/readyz": dial tcp 10.128.0.62:8443: connect: connection refused
Nov 29 10:09:05.000 W ns/openshift-apiserver pod/apiserver-795cc69878-j79s4 node/ci-op-1lm7t27x-45c24-9798n-master-1 reason/ProbeError Readiness probe error: Get "https://10.128.0.62:8443/readyz": dial tcp 10.128.0.62:8443: connect: connection refused\nbody: \n (2 times)
Nov 29 10:09:05.000 W ns/openshift-apiserver pod/apiserver-795cc69878-j79s4 node/ci-op-1lm7t27x-45c24-9798n-master-1 reason/Unhealthy Readiness probe failed: Get "https://10.128.0.62:8443/readyz": dial tcp 10.128.0.62:8443: connect: connection refused (2 times)
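The 500 responses above come from the apiserver's composite /readyz endpoint: every registered check contributes a [+] or [-] line to the probe body, and once shutdown begins the shutdown check fails, flipping the whole endpoint to 500 while the pod drains. A minimal Go sketch of that pattern follows; it is illustrative only (the real implementation is the healthz machinery in k8s.io/apiserver), with the check names and port taken from the probe output rather than from code.

package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

// One named readiness check; the real apiserver registers many
// (ping, log, informer-sync, poststarthook/..., shutdown).
type check struct {
	name string
	fn   func() error
}

func main() {
	var shuttingDown atomic.Bool // flipped when SIGTERM arrives

	checks := []check{
		{"ping", func() error { return nil }},
		{"shutdown", func() error {
			if shuttingDown.Load() {
				return fmt.Errorf("reason withheld")
			}
			return nil
		}},
	}

	http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		var body string
		failed := false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: %v\n", c.name, err)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			// The kubelet reports this as "HTTP probe failed with statuscode: 500".
			w.WriteHeader(http.StatusInternalServerError)
			body += "readyz check failed\n"
		}
		fmt.Fprint(w, body)
	})

	// Plain HTTP on 8443 for brevity; the real endpoint serves HTTPS.
	http.ListenAndServe(":8443", nil)
}

Once the process stops listening altogether, the same probe degrades from a 500 body to a plain "connection refused", which is exactly the progression the events record.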
#1597512331221274624build-log.txt.gz2 days ago
Nov 29 10:10:21.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-795cc69878-fpx9g pod)"
Nov 29 10:10:24.000 W ns/openshift-apiserver pod/apiserver-795cc69878-fpx9g node/ci-op-1lm7t27x-45c24-9798n-master-0 reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]informer-sync ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/max-in-flight-filter ok\n[+]poststarthook/image.openshift.io-apiserver-caches ok\n[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok\n[+]poststarthook/authorization.openshift.io-ensureopenshift-infra ok\n[+]poststarthook/project.openshift.io-projectcache ok\n[+]poststarthook/project.openshift.io-projectauthorizationcache ok\n[+]poststarthook/openshift.io-startinformers ok\n[+]poststarthook/openshift.io-restmapperupdater ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (2 times)
Nov 29 10:10:24.000 W ns/openshift-apiserver pod/apiserver-795cc69878-fpx9g node/ci-op-1lm7t27x-45c24-9798n-master-0 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (2 times)
Nov 29 10:10:29.000 W ns/openshift-apiserver pod/apiserver-795cc69878-fpx9g node/ci-op-1lm7t27x-45c24-9798n-master-0 reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]informer-sync ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/max-in-flight-filter ok\n[+]poststarthook/image.openshift.io-apiserver-caches ok\n[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok\n[+]poststarthook/authorization.openshift.io-ensureopenshift-infra ok\n[+]poststarthook/project.openshift.io-projectcache ok\n[+]poststarthook/project.openshift.io-projectauthorizationcache ok\n[+]poststarthook/openshift.io-startinformers ok\n[+]poststarthook/openshift.io-restmapperupdater ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (3 times)
Nov 29 10:10:29.000 W ns/openshift-apiserver pod/apiserver-795cc69878-fpx9g node/ci-op-1lm7t27x-45c24-9798n-master-0 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (3 times)
Nov 29 10:10:30.000 I ns/openshift-apiserver pod/apiserver-795cc69878-fpx9g node/apiserver-795cc69878-fpx9g reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 10:10:30.000 I ns/openshift-apiserver pod/apiserver-795cc69878-fpx9g node/apiserver-795cc69878-fpx9g reason/TerminationStoppedServing Server has stopped listening
Nov 29 10:10:34.000 W ns/openshift-apiserver pod/apiserver-795cc69878-fpx9g node/ci-op-1lm7t27x-45c24-9798n-master-0 reason/ProbeError Readiness probe error: Get "https://10.129.0.28:8443/readyz": dial tcp 10.129.0.28:8443: connect: connection refused\nbody: \n
Nov 29 10:10:34.000 W ns/openshift-apiserver pod/apiserver-795cc69878-fpx9g node/ci-op-1lm7t27x-45c24-9798n-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.129.0.28:8443/readyz": dial tcp 10.129.0.28:8443: connect: connection refused
Nov 29 10:10:39.000 W ns/openshift-apiserver pod/apiserver-795cc69878-fpx9g node/ci-op-1lm7t27x-45c24-9798n-master-0 reason/ProbeError Readiness probe error: Get "https://10.129.0.28:8443/readyz": dial tcp 10.129.0.28:8443: connect: connection refused\nbody: \n (2 times)
Nov 29 10:10:39.000 W ns/openshift-apiserver pod/apiserver-795cc69878-fpx9g node/ci-op-1lm7t27x-45c24-9798n-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.129.0.28:8443/readyz": dial tcp 10.129.0.28:8443: connect: connection refused (2 times)
periodic-ci-openshift-multiarch-master-nightly-4.10-upgrade-from-nightly-4.9-ocp-remote-libvirt-s390x (all) - 7 runs, 43% failed, 100% of failures match = 43% impact
#1597923573605863424build-log.txt.gz32 hours ago
Nov 30 12:46:15.055 I ns/openshift-marketplace pod/redhat-marketplace-cdc8k node/libvirt-s390x-2-0-708-vv2q6-worker-0-nq95p reason/GracefulDelete duration/1s
Nov 30 12:46:16.000 I ns/openshift-marketplace pod/redhat-marketplace-cdc8k node/libvirt-s390x-2-0-708-vv2q6-worker-0-nq95p container/registry-server reason/Killing
Nov 30 12:46:16.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d366cf4e0aaf7c3a7c9e355d74c2c093a3093bb6ff0bae1491111de5c542e7d,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fed075671de70c3297496faa6889a3440bd17c9c3c37f3ab7ff5d2dfa5af34c (17 times)
Nov 30 12:46:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-0-708-vv2q6-master-0 node/libvirt-s390x-2-0-708-vv2q6-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 12:46:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-0-708-vv2q6-master-0 node/libvirt-s390x-2-0-708-vv2q6-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 12:46:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-0-708-vv2q6-master-0 node/libvirt-s390x-2-0-708-vv2q6-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 12:46:18.000 - 1s    E disruption/openshift-api connection/new disruption/openshift-api connection/new stopped responding to GET requests over new connections: error running request: 429 Too Many Requests: The apiserver is shutting down, please try again later.\n
#1597923573605863424build-log.txt.gz32 hours ago
Nov 30 12:49:47.945 W clusteroperator/etcd condition/Progressing status/False reason/AsExpected changed: NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found
Nov 30 12:50:03.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02b6162051fbc1fdf1a079eb593b804f27b9323ca4193ab9966c483298df6f6b,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fdb73e9fbce913102080197f978dda757acf24f6d68fee7fe185ac56ba0efb (21 times)
Nov 30 12:50:09.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.2222222222222223 over 5 minutes on "Libvirt"; disk metrics are: etcd-libvirt-s390x-2-0-708-vv2q6-master-0=0.024939,etcd-libvirt-s390x-2-0-708-vv2q6-master-1=0.024346,etcd-libvirt-s390x-2-0-708-vv2q6-master-2=0.020527. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 30 12:50:09.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 2 of 3 members are available, libvirt-s390x-2-0-708-vv2q6-master-1 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available"
Nov 30 12:50:09.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, libvirt-s390x-2-0-708-vv2q6-master-1 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
Nov 30 12:50:14.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-0-708-vv2q6-master-2 node/libvirt-s390x-2-0-708-vv2q6-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 12:50:14.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-0-708-vv2q6-master-2 node/libvirt-s390x-2-0-708-vv2q6-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 12:50:14.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-0-708-vv2q6-master-2 node/libvirt-s390x-2-0-708-vv2q6-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 12:50:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-0-708-vv2q6-master-2 node/libvirt-s390x-2-0-708-vv2q6-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 12:50:17.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-libvirt-s390x-2-0-708-vv2q6-master-2 node/libvirt-s390x-2-0-708-vv2q6-master-2 reason/ProbeError Readiness probe error: Get "https://192.168.126.13:6443/healthz": dial tcp 192.168.126.13:6443: connect: connection refused\nbody: \n
Nov 30 12:50:17.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-libvirt-s390x-2-0-708-vv2q6-master-2 node/libvirt-s390x-2-0-708-vv2q6-master-2 reason/Unhealthy Readiness probe failed: Get "https://192.168.126.13:6443/healthz": dial tcp 192.168.126.13:6443: connect: connection refused
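The recurring termination sequence in these logs (become unready on SIGTERM, wait the minimal shutdown duration of 1m10s, stop listening, drain in-flight requests, report all pending requests processed) is the apiserver's standard graceful drain: readiness fails first so endpoint controllers and load balancers stop routing traffic before the listener actually closes. A sketch of the same ordering with a plain net/http server; the 1m10s delay matches the events, everything else is an assumption.

package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":6443"}
	go srv.ListenAndServe() // error handling omitted in this sketch

	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGTERM)
	<-sig
	log.Print("received signal to terminate, becoming unready, but keeping serving")

	// Keep serving while failing readiness, so traffic is routed away
	// before the listener closes.
	time.Sleep(70 * time.Second)
	log.Print("the minimal shutdown duration of 1m10s finished")

	// Close the listener (probes now see "connection refused") and wait
	// for in-flight requests to drain.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("drain did not complete: %v", err)
	}
	log.Print("all pending requests processed")
}

The guard pod's "connection refused" probe errors immediately after TerminationGracefulTerminationFinished are the expected tail of this sequence, not an independent failure.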
#1597923573605863424build-log.txt.gz32 hours ago
Nov 30 12:52:38.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02b6162051fbc1fdf1a079eb593b804f27b9323ca4193ab9966c483298df6f6b,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fdb73e9fbce913102080197f978dda757acf24f6d68fee7fe185ac56ba0efb (36 times)
Nov 30 12:52:42.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02b6162051fbc1fdf1a079eb593b804f27b9323ca4193ab9966c483298df6f6b,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fdb73e9fbce913102080197f978dda757acf24f6d68fee7fe185ac56ba0efb (37 times)
Nov 30 12:53:06.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02b6162051fbc1fdf1a079eb593b804f27b9323ca4193ab9966c483298df6f6b,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fdb73e9fbce913102080197f978dda757acf24f6d68fee7fe185ac56ba0efb (38 times)
Nov 30 12:53:09.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02b6162051fbc1fdf1a079eb593b804f27b9323ca4193ab9966c483298df6f6b,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fdb73e9fbce913102080197f978dda757acf24f6d68fee7fe185ac56ba0efb (39 times)
Nov 30 12:53:44.000 - 1s    E disruption/kube-api connection/new disruption/kube-api connection/new stopped responding to GET requests over new connections: error running request: 429 Too Many Requests: The apiserver is shutting down, please try again later.\n
Nov 30 12:53:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-0-708-vv2q6-master-1 node/libvirt-s390x-2-0-708-vv2q6-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 12:53:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-0-708-vv2q6-master-1 node/libvirt-s390x-2-0-708-vv2q6-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 12:53:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-0-708-vv2q6-master-1 node/libvirt-s390x-2-0-708-vv2q6-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 12:53:45.000 - 3444s I disruption/kube-api connection/new disruption/kube-api connection/new started responding to GET requests over new connections
Nov 30 12:53:45.911 W ns/kube-system openshifttest/kube-api reason/DisruptionBegan disruption/kube-api connection/new stopped responding to GET requests over new connections: error running request: 429 Too Many Requests: The apiserver is shutting down, please try again later.\n
Nov 30 12:53:46.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-libvirt-s390x-2-0-708-vv2q6-master-1 node/libvirt-s390x-2-0-708-vv2q6-master-1 reason/ProbeError Readiness probe error: Get "https://192.168.126.12:6443/healthz": dial tcp 192.168.126.12:6443: connect: connection refused\nbody: \n
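The disruption/kube-api connection/new events come from a poller that issues each GET over a fresh TCP connection, so it notices the moment the endpoint stops accepting connections instead of riding an established keep-alive; a 429 "The apiserver is shutting down" counts as disruption. The real monitor lives in the openshift/origin test framework; the following is a simplified stand-in, with the target URL and poll interval being assumptions.

package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			DisableKeepAlives: true, // force a new connection per request
			TLSClientConfig:   &tls.Config{InsecureSkipVerify: true},
		},
	}
	up := true
	for range time.Tick(time.Second) {
		resp, err := client.Get("https://10.0.0.5:6443/healthz") // assumed endpoint
		ok := err == nil && resp.StatusCode == http.StatusOK
		status := ""
		if err != nil {
			status = err.Error()
		} else {
			status = resp.Status // e.g. "429 Too Many Requests" during shutdown
			resp.Body.Close()
		}
		if ok != up {
			if ok {
				log.Print("started responding to GET requests over new connections")
			} else {
				log.Printf("stopped responding to GET requests over new connections: %s", status)
			}
			up = ok
		}
	}
}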
#1597923573605863424build-log.txt.gz32 hours ago
Nov 30 13:07:49.000 I ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator reason/ScalingReplicaSet Scaled down replica set csi-snapshot-controller-operator-584ff8ffc6 to 0
Nov 30 13:07:49.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-679c55cf55 to 1
Nov 30 13:07:49.000 I ns/openshift-oauth-apiserver replicaset/apiserver-679c55cf55 reason/SuccessfulCreate Created pod: apiserver-679c55cf55-kzr8h
Nov 30 13:07:49.000 I ns/openshift-oauth-apiserver replicaset/apiserver-5755c5c459 reason/SuccessfulDelete Deleted pod: apiserver-5755c5c459-m9k7z
Nov 30 13:07:49.000 I ns/openshift-cluster-storage-operator replicaset/csi-snapshot-controller-operator-584ff8ffc6 reason/SuccessfulDelete Deleted pod: csi-snapshot-controller-operator-584ff8ffc6-c5p65
Nov 30 13:07:49.000 I ns/default namespace/kube-system node/apiserver-5755c5c459-m9k7z reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 30 13:07:49.000 I ns/default namespace/kube-system node/apiserver-5755c5c459-m9k7z reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 30 13:07:49.000 I ns/default namespace/kube-system node/apiserver-5755c5c459-m9k7z reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 30 13:07:49.000 I ns/default namespace/kube-system node/apiserver-5755c5c459-m9k7z reason/TerminationStoppedServing Server has stopped listening
Nov 30 13:07:49.207 I ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-9d66f865c-lcdlq node/libvirt-s390x-2-0-708-vv2q6-master-1 container/csi-snapshot-controller-operator reason/ContainerStart duration/22.00s
Nov 30 13:07:49.207 I ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-9d66f865c-lcdlq node/libvirt-s390x-2-0-708-vv2q6-master-1 container/csi-snapshot-controller-operator reason/Ready
#1597923573605863424build-log.txt.gz32 hours ago
Nov 30 13:08:05.000 I ns/openshift-monitoring pod/cluster-monitoring-operator-88f564f88-qngqv node/libvirt-s390x-2-0-708-vv2q6-master-1 container/kube-rbac-proxy reason/Created
Nov 30 13:08:05.000 I ns/openshift-monitoring pod/cluster-monitoring-operator-88f564f88-qngqv node/libvirt-s390x-2-0-708-vv2q6-master-1 container/kube-rbac-proxy reason/Started
Nov 30 13:08:05.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation"
Nov 30 13:08:05.000 I ns/openshift-image-registry deployment/cluster-image-registry-operator reason/ScalingReplicaSet Scaled down replica set cluster-image-registry-operator-56798d9566 to 0
Nov 30 13:08:05.000 I ns/openshift-image-registry replicaset/cluster-image-registry-operator-56798d9566 reason/SuccessfulDelete Deleted pod: cluster-image-registry-operator-56798d9566-9ntj8
Nov 30 13:08:05.000 I ns/openshift-apiserver pod/apiserver-84574b6699-vfgkf node/apiserver-84574b6699-vfgkf reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 13:08:05.000 I ns/openshift-apiserver pod/apiserver-84574b6699-vfgkf node/apiserver-84574b6699-vfgkf reason/TerminationStoppedServing Server has stopped listening
Nov 30 13:08:05.523 I ns/openshift-image-registry pod/cluster-image-registry-operator-64885c67d5-twhtj node/libvirt-s390x-2-0-708-vv2q6-master-1 container/cluster-image-registry-operator reason/ContainerStart duration/36.00s
Nov 30 13:08:05.523 I ns/openshift-image-registry pod/cluster-image-registry-operator-64885c67d5-twhtj node/libvirt-s390x-2-0-708-vv2q6-master-1 container/cluster-image-registry-operator reason/Ready
Nov 30 13:08:05.571 I ns/openshift-image-registry pod/cluster-image-registry-operator-56798d9566-9ntj8 node/libvirt-s390x-2-0-708-vv2q6-master-2 reason/GracefulDelete duration/30s
Nov 30 13:08:07.000 I ns/openshift-ingress-operator pod/ingress-operator-68579d65fb-gf9n8 node/libvirt-s390x-2-0-708-vv2q6-master-2 container/ingress-operator reason/Created
#1597561148599701504build-log.txt.gz2 days ago
Nov 29 12:56:47.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 7; 1 nodes are at revision 8\nEtcdMembersAvailable: 2 of 3 members are available, libvirt-s390x-0-3-708-p924m-master-1 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 7; 1 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available"
Nov 29 12:56:47.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, libvirt-s390x-0-3-708-p924m-master-1 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"

Nov 29 12:56:48.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fed075671de70c3297496faa6889a3440bd17c9c3c37f3ab7ff5d2dfa5af34c,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94bec0b796a43bc6ae9a5143a367f22f65f828eb605223177ada98d42dffda62 (17 times)
Nov 29 12:56:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-0-3-708-p924m-master-0 node/libvirt-s390x-0-3-708-p924m-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 12:56:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-0-3-708-p924m-master-0 node/libvirt-s390x-0-3-708-p924m-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 12:56:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-0-3-708-p924m-master-0 node/libvirt-s390x-0-3-708-p924m-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 12:56:51.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-libvirt-s390x-0-3-708-p924m-master-0 node/libvirt-s390x-0-3-708-p924m-master-0 reason/ProbeError Readiness probe error: Get "https://192.168.3.11:6443/healthz": dial tcp 192.168.3.11:6443: connect: connection refused\nbody: \n
#1597561148599701504build-log.txt.gz2 days ago
Nov 29 12:59:46.000 W ns/openshift-etcd pod/etcd-quorum-guard-64759d85f4-6sjbz node/libvirt-s390x-0-3-708-p924m-master-0 reason/Unhealthy Readiness probe failed:  (13 times)
Nov 29 12:59:47.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.2222222222222223 over 5 minutes on "Libvirt"; disk metrics are: etcd-libvirt-s390x-0-3-708-p924m-master-0=0.013700,etcd-libvirt-s390x-0-3-708-p924m-master-1=0.014310,etcd-libvirt-s390x-0-3-708-p924m-master-2=0.013487. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 29 12:59:50.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/UnhealthyEtcdMember unhealthy members: libvirt-s390x-0-3-708-p924m-master-0 (4 times)
Nov 29 12:59:51.000 W ns/openshift-etcd pod/etcd-quorum-guard-64759d85f4-6sjbz node/libvirt-s390x-0-3-708-p924m-master-0 reason/Unhealthy Readiness probe failed:  (14 times)
Nov 29 12:59:52.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fdb73e9fbce913102080197f978dda757acf24f6d68fee7fe185ac56ba0efb,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48336746ee94aa1d9cc60b122c0afe15da9faab8ad2830aa6518889905050e71 (19 times)
Nov 29 12:59:55.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-0-3-708-p924m-master-1 node/libvirt-s390x-0-3-708-p924m-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 12:59:55.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-0-3-708-p924m-master-1 node/libvirt-s390x-0-3-708-p924m-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 12:59:55.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-0-3-708-p924m-master-1 node/libvirt-s390x-0-3-708-p924m-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 12:59:55.000 - 1s    E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: error running request: 429 Too Many Requests: The apiserver is shutting down, please try again later.\n
Nov 29 12:59:55.662 - 999ms E disruption/oauth-api connection/new disruption/oauth-api connection/new stopped responding to GET requests over new connections: error running request: 429 Too Many Requests: The apiserver is shutting down, please try again later.\n
Nov 29 12:59:56.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" (3 times)
#1597561148599701504build-log.txt.gz2 days ago
Nov 29 13:02:54.000 I ns/openshift-marketplace pod/redhat-marketplace-6crmv reason/AddedInterface Add eth0 [10.131.0.36/23] from ovn-kubernetes
Nov 29 13:02:55.000 I ns/openshift-marketplace pod/redhat-marketplace-6crmv node/libvirt-s390x-0-3-708-p924m-worker-0-68zpv container/registry-server reason/Created
Nov 29 13:02:55.000 I ns/openshift-marketplace pod/redhat-marketplace-6crmv node/libvirt-s390x-0-3-708-p924m-worker-0-68zpv container/registry-server reason/Pulled duration/0.970s image/registry.redhat.io/redhat/redhat-marketplace-index:v4.9
Nov 29 13:02:55.000 I ns/openshift-marketplace pod/redhat-marketplace-6crmv node/libvirt-s390x-0-3-708-p924m-worker-0-68zpv container/registry-server reason/Started
Nov 29 13:02:55.873 I ns/openshift-marketplace pod/redhat-marketplace-6crmv node/libvirt-s390x-0-3-708-p924m-worker-0-68zpv container/registry-server reason/ContainerStart duration/4.00s
Nov 29 13:02:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-0-3-708-p924m-master-2 node/libvirt-s390x-0-3-708-p924m-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 13:02:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-0-3-708-p924m-master-2 node/libvirt-s390x-0-3-708-p924m-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 13:02:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-0-3-708-p924m-master-2 node/libvirt-s390x-0-3-708-p924m-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 13:03:01.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-0-3-708-p924m-master-2 node/libvirt-s390x-0-3-708-p924m-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 13:03:02.260 I ns/openshift-marketplace pod/redhat-marketplace-6crmv node/libvirt-s390x-0-3-708-p924m-worker-0-68zpv container/registry-server reason/Ready
Nov 29 13:03:02.310 I ns/openshift-marketplace pod/redhat-marketplace-6crmv node/libvirt-s390x-0-3-708-p924m-worker-0-68zpv reason/GracefulDelete duration/1s
#1597561148599701504build-log.txt.gz2 days ago
Nov 29 13:21:30.000 I ns/openshift-cluster-storage-operator deployment/cluster-storage-operator reason/ScalingReplicaSet Scaled down replica set cluster-storage-operator-578b88f794 to 0
Nov 29 13:21:30.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-7f855bb8b4 to 1
Nov 29 13:21:30.000 I ns/openshift-oauth-apiserver replicaset/apiserver-7f855bb8b4 reason/SuccessfulCreate Created pod: apiserver-7f855bb8b4-fk5dk
Nov 29 13:21:30.000 I ns/openshift-oauth-apiserver replicaset/apiserver-595b4c5466 reason/SuccessfulDelete Deleted pod: apiserver-595b4c5466-8hmzs
Nov 29 13:21:30.000 I ns/openshift-cluster-storage-operator replicaset/cluster-storage-operator-578b88f794 reason/SuccessfulDelete Deleted pod: cluster-storage-operator-578b88f794-xntt6
Nov 29 13:21:30.000 I ns/default namespace/kube-system node/apiserver-595b4c5466-8hmzs reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 29 13:21:30.000 I ns/default namespace/kube-system node/apiserver-595b4c5466-8hmzs reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 13:21:30.000 I ns/default namespace/kube-system node/apiserver-595b4c5466-8hmzs reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 29 13:21:30.000 I ns/default namespace/kube-system node/apiserver-595b4c5466-8hmzs reason/TerminationStoppedServing Server has stopped listening
Nov 29 13:21:30.557 I ns/openshift-cluster-storage-operator pod/cluster-storage-operator-797f5485ff-fqkwj node/libvirt-s390x-0-3-708-p924m-master-1 container/cluster-storage-operator reason/ContainerStart duration/29.00s
Nov 29 13:21:30.557 I ns/openshift-cluster-storage-operator pod/cluster-storage-operator-797f5485ff-fqkwj node/libvirt-s390x-0-3-708-p924m-master-1 container/cluster-storage-operator reason/Ready
#1597561148599701504build-log.txt.gz2 days ago
Nov 29 13:21:31.000 I ns/openshift-cluster-storage-operator deployment/cluster-storage-operator reason/OperatorVersionChanged clusteroperator/storage version "operator" changed from "4.9.0-0.nightly-s390x-2022-11-22-044053" to "4.10.0-0.nightly-s390x-2022-11-29-115135"
Nov 29 13:21:31.000 I ns/openshift-cluster-storage-operator deployment/cluster-storage-operator reason/OperatorVersionChanged clusteroperator/storage version "operator" changed from "4.9.0-0.nightly-s390x-2022-11-22-044053" to "4.10.0-0.nightly-s390x-2022-11-29-115135" (2 times)
Nov 29 13:21:31.000 I ns/openshift-machine-api deployment/cluster-autoscaler-operator reason/ScalingReplicaSet Scaled down replica set cluster-autoscaler-operator-6bc98cd66c to 0
Nov 29 13:21:31.000 I ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator reason/ScalingReplicaSet Scaled down replica set csi-snapshot-controller-operator-5f8c7c796c to 0
Nov 29 13:21:31.000 I ns/openshift-cluster-storage-operator replicaset/csi-snapshot-controller-operator-5f8c7c796c reason/SuccessfulDelete Deleted pod: csi-snapshot-controller-operator-5f8c7c796c-888qf
Nov 29 13:21:31.000 I ns/openshift-apiserver pod/apiserver-5fdbdd9d8b-qs692 node/apiserver-5fdbdd9d8b-qs692 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 13:21:31.000 I ns/openshift-apiserver pod/apiserver-5fdbdd9d8b-qs692 node/apiserver-5fdbdd9d8b-qs692 reason/TerminationStoppedServing Server has stopped listening
Nov 29 13:21:31.360 I clusteroperator/storage versions: operator 4.9.0-0.nightly-s390x-2022-11-22-044053 -> 4.10.0-0.nightly-s390x-2022-11-29-115135
Nov 29 13:21:31.972 I ns/openshift-ingress-operator pod/ingress-operator-5dd6656947-jqkcl node/libvirt-s390x-0-3-708-p924m-master-2 container/ingress-operator reason/ContainerStart duration/29.00s
Nov 29 13:21:31.972 I ns/openshift-ingress-operator pod/ingress-operator-5dd6656947-jqkcl node/libvirt-s390x-0-3-708-p924m-master-2 container/kube-rbac-proxy reason/ContainerStart duration/30.00s
Nov 29 13:21:31.972 I ns/openshift-ingress-operator pod/ingress-operator-5dd6656947-jqkcl node/libvirt-s390x-0-3-708-p924m-master-2 container/kube-rbac-proxy reason/Ready
#1597198719889969152build-log.txt.gz3 days ago
Nov 28 12:50:17.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94bec0b796a43bc6ae9a5143a367f22f65f828eb605223177ada98d42dffda62,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dce9b517f3ffef97ba73bec2a7ca98082f341787731c539e714648137f9c86c9 (23 times)
Nov 28 12:50:17.000 W ns/openshift-etcd pod/etcd-quorum-guard-655d68865b-6r29m node/libvirt-s390x-2-3-708-kbrbh-master-0 reason/Unhealthy Readiness probe failed:  (9 times)
Nov 28 12:50:18.000 I ns/openshift-etcd pod/etcd-libvirt-s390x-2-3-708-kbrbh-master-0 node/libvirt-s390x-2-3-708-kbrbh-master-0 container/setup reason/Pulling image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dce9b517f3ffef97ba73bec2a7ca98082f341787731c539e714648137f9c86c9
Nov 28 12:50:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-3-708-kbrbh-master-2 node/libvirt-s390x-2-3-708-kbrbh-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 28 12:50:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-3-708-kbrbh-master-2 node/libvirt-s390x-2-3-708-kbrbh-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 28 12:50:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-3-708-kbrbh-master-2 node/libvirt-s390x-2-3-708-kbrbh-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 28 12:50:18.027 W ns/openshift-etcd pod/etcd-libvirt-s390x-2-3-708-kbrbh-master-0 node/libvirt-s390x-2-3-708-kbrbh-master-0 invariant violation (bug): static pod should not transition Running->Pending with same UID
#1597198719889969152build-log.txt.gz3 days ago
Nov 28 12:52:55.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, libvirt-s390x-2-3-708-kbrbh-master-2 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available"
Nov 28 12:52:55.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, libvirt-s390x-2-3-708-kbrbh-master-2 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
Nov 28 12:52:55.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, libvirt-s390x-2-3-708-kbrbh-master-2 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" (2 times)
Nov 28 12:53:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-3-708-kbrbh-master-1 node/libvirt-s390x-2-3-708-kbrbh-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 28 12:53:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-3-708-kbrbh-master-1 node/libvirt-s390x-2-3-708-kbrbh-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 28 12:53:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-3-708-kbrbh-master-1 node/libvirt-s390x-2-3-708-kbrbh-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 28 12:53:31.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-libvirt-s390x-2-3-708-kbrbh-master-1 node/libvirt-s390x-2-3-708-kbrbh-master-1 reason/ProbeError Readiness probe error: Get "https://192.168.3.12:6443/healthz": dial tcp 192.168.3.12:6443: connect: connection refused\nbody: \n
#1597198719889969152build-log.txt.gz3 days ago
Nov 28 12:55:27.069 I ns/openshift-kube-apiserver pod/installer-12-libvirt-s390x-2-3-708-kbrbh-master-0 node/libvirt-s390x-2-3-708-kbrbh-master-0 container/installer reason/ContainerExit code/0 cause/Completed
Nov 28 12:55:30.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48336746ee94aa1d9cc60b122c0afe15da9faab8ad2830aa6518889905050e71,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be4c7282024227d44b690afd36398490531b3454c3a480487985c05ffbb18118 (35 times)
Nov 28 12:55:33.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48336746ee94aa1d9cc60b122c0afe15da9faab8ad2830aa6518889905050e71,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be4c7282024227d44b690afd36398490531b3454c3a480487985c05ffbb18118 (36 times)
Nov 28 12:55:52.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48336746ee94aa1d9cc60b122c0afe15da9faab8ad2830aa6518889905050e71,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be4c7282024227d44b690afd36398490531b3454c3a480487985c05ffbb18118 (37 times)
Nov 28 12:55:55.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.2222222222222223 over 5 minutes on "Libvirt"; disk metrics are: etcd-libvirt-s390x-2-3-708-kbrbh-master-0=0.019400,etcd-libvirt-s390x-2-3-708-kbrbh-master-1=0.015678,etcd-libvirt-s390x-2-3-708-kbrbh-master-2=0.018583. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 28 12:56:35.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-3-708-kbrbh-master-0 node/libvirt-s390x-2-3-708-kbrbh-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 28 12:56:35.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-3-708-kbrbh-master-0 node/libvirt-s390x-2-3-708-kbrbh-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 28 12:56:35.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-3-708-kbrbh-master-0 node/libvirt-s390x-2-3-708-kbrbh-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 28 12:56:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-s390x-2-3-708-kbrbh-master-0 node/libvirt-s390x-2-3-708-kbrbh-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 28 12:56:40.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-libvirt-s390x-2-3-708-kbrbh-master-0 node/libvirt-s390x-2-3-708-kbrbh-master-0 reason/ProbeError Readiness probe error: Get "https://192.168.3.11:6443/healthz": dial tcp 192.168.3.11:6443: connect: connection refused\nbody: \n
Nov 28 12:56:40.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-libvirt-s390x-2-3-708-kbrbh-master-0 node/libvirt-s390x-2-3-708-kbrbh-master-0 reason/Unhealthy Readiness probe failed: Get "https://192.168.3.11:6443/healthz": dial tcp 192.168.3.11:6443: connect: connection refused
#1597198719889969152build-log.txt.gz3 days ago
Nov 28 13:12:35.000 I ns/openshift-monitoring deployment/cluster-monitoring-operator reason/ScalingReplicaSet Scaled down replica set cluster-monitoring-operator-67cbf75fd7 to 0
Nov 28 13:12:35.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-566c76d74d to 1
Nov 28 13:12:35.000 I ns/openshift-oauth-apiserver replicaset/apiserver-566c76d74d reason/SuccessfulCreate Created pod: apiserver-566c76d74d-whhct
Nov 28 13:12:35.000 I ns/openshift-oauth-apiserver replicaset/apiserver-6599f5fcd8 reason/SuccessfulDelete Deleted pod: apiserver-6599f5fcd8-jlfv7
Nov 28 13:12:35.000 I ns/openshift-monitoring replicaset/cluster-monitoring-operator-67cbf75fd7 reason/SuccessfulDelete Deleted pod: cluster-monitoring-operator-67cbf75fd7-lbcxr
Nov 28 13:12:35.000 I ns/default namespace/kube-system node/apiserver-6599f5fcd8-jlfv7 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 28 13:12:35.000 I ns/default namespace/kube-system node/apiserver-6599f5fcd8-jlfv7 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 28 13:12:35.000 I ns/default namespace/kube-system node/apiserver-6599f5fcd8-jlfv7 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 28 13:12:35.000 I ns/default namespace/kube-system node/apiserver-6599f5fcd8-jlfv7 reason/TerminationStoppedServing Server has stopped listening
Nov 28 13:12:35.225 I ns/openshift-oauth-apiserver pod/apiserver-6599f5fcd8-jlfv7 node/libvirt-s390x-2-3-708-kbrbh-master-1 reason/GracefulDelete duration/70s
Nov 28 13:12:35.229 I ns/openshift-monitoring pod/cluster-monitoring-operator-7975dc7577-7w8h4 node/libvirt-s390x-2-3-708-kbrbh-master-0 container/cluster-monitoring-operator reason/ContainerStart duration/17.00s
pull-ci-openshift-machine-config-operator-release-4.9-okd-e2e-upgrade (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#1597920767230611456build-log.txt.gz31 hours ago
Nov 30 12:55:30.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-1 node/ci-op-yjisic1x-61103-4rs6w-master-1 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 30 12:55:31.000 W ns/openshift-kube-apiserver pod/installer-10-ci-op-yjisic1x-61103-4rs6w-master-1 node/ci-op-yjisic1x-61103-4rs6w-master-1 reason/FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Nov 30 12:55:31.000 W ns/openshift-kube-apiserver pod/installer-10-ci-op-yjisic1x-61103-4rs6w-master-1 node/ci-op-yjisic1x-61103-4rs6w-master-1 reason/FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered (2 times)
Nov 30 12:55:31.124 I ns/openshift-kube-apiserver pod/installer-10-ci-op-yjisic1x-61103-4rs6w-master-1 node/ci-op-yjisic1x-61103-4rs6w-master-1 container/installer reason/ContainerExit code/0 cause/Completed
Nov 30 12:56:15.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.3155257539364826 over 5 minutes on "GCP"; disk metrics are: etcd-ci-op-yjisic1x-61103-4rs6w-master-2=0.04480000000000057,etcd-ci-op-yjisic1x-61103-4rs6w-master-0=0.05471999999999988,etcd-ci-op-yjisic1x-61103-4rs6w-master-1=0.013386666666666724. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 30 12:56:40.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-1 node/ci-op-yjisic1x-61103-4rs6w-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 12:56:40.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-1 node/ci-op-yjisic1x-61103-4rs6w-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 12:56:40.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-1 node/ci-op-yjisic1x-61103-4rs6w-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 12:56:42.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-1 node/ci-op-yjisic1x-61103-4rs6w-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 12:56:43.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-1 node/ci-op-yjisic1x-61103-4rs6w-master-1 container/kube-apiserver reason/Killing
Nov 30 12:56:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-1 node/ci-op-yjisic1x-61103-4rs6w-master-1 container/setup reason/Pulling image/registry.build04.ci.openshift.org/ci-op-yjisic1x/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18
#1597920767230611456build-log.txt.gz31 hours ago
Nov 30 12:57:45.000 W ns/openshift-kube-apiserver pod/installer-10-ci-op-yjisic1x-61103-4rs6w-master-2 node/ci-op-yjisic1x-61103-4rs6w-master-2 reason/FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered (2 times)
Nov 30 12:57:45.105 I ns/openshift-kube-apiserver pod/installer-10-ci-op-yjisic1x-61103-4rs6w-master-2 node/ci-op-yjisic1x-61103-4rs6w-master-2 container/installer reason/ContainerExit code/0 cause/Completed
Nov 30 12:57:47.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build04.ci.openshift.org/ci-op-yjisic1x/stable-initial@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.build04.ci.openshift.org/ci-op-yjisic1x/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (11 times)
Nov 30 12:58:09.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-yjisic1x-61103-4rs6w-master-0_694a97d0-e1e7-4418-b50c-45c7f41912df became leader
Nov 30 12:58:39.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build04.ci.openshift.org/ci-op-yjisic1x/stable-initial@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.build04.ci.openshift.org/ci-op-yjisic1x/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (12 times)
Nov 30 12:58:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-2 node/ci-op-yjisic1x-61103-4rs6w-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 12:58:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-2 node/ci-op-yjisic1x-61103-4rs6w-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 12:58:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-2 node/ci-op-yjisic1x-61103-4rs6w-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 12:58:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-2 node/ci-op-yjisic1x-61103-4rs6w-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 12:58:58.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-2 node/ci-op-yjisic1x-61103-4rs6w-master-2 container/kube-apiserver reason/Killing
Nov 30 12:59:00.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-2 node/ci-op-yjisic1x-61103-4rs6w-master-2 container/setup reason/Pulling image/registry.build04.ci.openshift.org/ci-op-yjisic1x/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18
#1597920767230611456build-log.txt.gz31 hours ago
Nov 30 13:00:18.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build04.ci.openshift.org/ci-op-yjisic1x/stable-initial@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.build04.ci.openshift.org/ci-op-yjisic1x/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (25 times)
Nov 30 13:00:38.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build04.ci.openshift.org/ci-op-yjisic1x/stable-initial@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.build04.ci.openshift.org/ci-op-yjisic1x/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (26 times)
Nov 30 13:00:44.000 W ns/openshift-network-diagnostics node/ci-op-yjisic1x-61103-4rs6w-worker-a-fp8bg reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ci-op-yjisic1x-61103-4rs6w-master-1: failed to establish a TCP connection to 10.0.0.5:6443: dial tcp 10.0.0.5:6443: connect: connection refused
Nov 30 13:00:44.000 I ns/openshift-network-diagnostics node/ci-op-yjisic1x-61103-4rs6w-worker-a-fp8bg reason/ConnectivityRestored roles/worker Connectivity restored after 59.988287458s: kubernetes-apiserver-endpoint-ci-op-yjisic1x-61103-4rs6w-master-1: tcp connection to 10.0.0.5:6443 succeeded
Nov 30 13:00:44.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-yjisic1x-61103-4rs6w-master-1_b1aa1cfb-3689-45f7-9255-f4a9cc65f30b became leader
Nov 30 13:01:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-0 node/ci-op-yjisic1x-61103-4rs6w-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 13:01:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-0 node/ci-op-yjisic1x-61103-4rs6w-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 13:01:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-0 node/ci-op-yjisic1x-61103-4rs6w-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 13:01:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-0 node/ci-op-yjisic1x-61103-4rs6w-master-0 container/kube-apiserver reason/Killing
Nov 30 13:01:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-0 node/ci-op-yjisic1x-61103-4rs6w-master-0 container/setup reason/Pulling image/registry.build04.ci.openshift.org/ci-op-yjisic1x/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18
Nov 30 13:01:37.337 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-yjisic1x-61103-4rs6w-master-0 node/ci-op-yjisic1x-61103-4rs6w-master-0 reason/ForceDelete mirrored/true
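The cert-regeneration-controller-lock lines show client-go leader election handing the lock between controller instances as each master's apiserver restarts. Each candidate runs something roughly like the sketch below; the lock name and namespace are taken from the event, while the timings and identity are assumptions, and this era of client-go still allowed ConfigMap-backed locks where current releases use Leases.

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	lock := &resourcelock.ConfigMapLock{
		ConfigMapMeta: metav1.ObjectMeta{
			Namespace: "openshift-kube-apiserver",
			Name:      "cert-regeneration-controller-lock",
		},
		Client:     client.CoreV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 137 * time.Second, // assumed; operators tune these
		RenewDeadline: 107 * time.Second,
		RetryPeriod:   26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Printf("%s became leader", id) },
			OnStoppedLeading: func() { log.Printf("%s lost the lock", id) },
		},
	})
}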
#1597920767230611456build-log.txt.gz31 hours ago
Nov 30 13:10:50.580 E ns/openshift-machine-api pod/machine-api-operator-7b66cc4646-lx8fk node/ci-op-yjisic1x-61103-4rs6w-master-1 container/machine-api-operator reason/ContainerExit code/2 cause/Error
Nov 30 13:10:50.643 I ns/openshift-machine-api pod/machine-api-operator-7b66cc4646-lx8fk node/ci-op-yjisic1x-61103-4rs6w-master-1 reason/Deleted
Nov 30 13:10:51.340 W ns/openshift-apiserver pod/apiserver-7cd688b99-sq92q reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 13:10:52.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Nov 30 13:11:04.000 I ns/openshift-apiserver pod/apiserver-7c4d575f58-k7fqn node/apiserver-7c4d575f58-k7fqn reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 13:11:04.000 I ns/openshift-apiserver pod/apiserver-7c4d575f58-k7fqn node/apiserver-7c4d575f58-k7fqn reason/TerminationStoppedServing Server has stopped listening
Nov 30 13:11:49.224 - 16s   W ns/openshift-apiserver pod/apiserver-7cd688b99-sq92q node/ pod has been pending longer than a minute
Nov 30 13:12:04.000 I ns/openshift-apiserver pod/apiserver-7c4d575f58-k7fqn node/apiserver-7c4d575f58-k7fqn reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 13:12:06.002 I ns/openshift-apiserver pod/apiserver-7c4d575f58-k7fqn node/ci-op-yjisic1x-61103-4rs6w-master-1 container/openshift-apiserver reason/ContainerExit code/0 cause/Completed
Nov 30 13:12:06.002 I ns/openshift-apiserver pod/apiserver-7c4d575f58-k7fqn node/ci-op-yjisic1x-61103-4rs6w-master-1 container/openshift-apiserver-check-endpoints reason/ContainerExit code/0 cause/Completed
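Note: pod apiserver-7c4d575f58-k7fqn above stops serving at 13:11:04 and reports TerminationGracefulTerminationFinished at 13:12:04, a 60s drain. A small sketch (same format assumptions as the earlier snippet) that computes that interval per pod:

    from datetime import datetime

    TS_FMT = "%b %d %H:%M:%S.%f"

    def drain_seconds(lines):
        # Map pod name -> seconds between TerminationStoppedServing and
        # TerminationGracefulTerminationFinished for that same pod.
        stopped, drained = {}, {}
        for line in lines:
            pod = next((f[4:] for f in line.split() if f.startswith("pod/")), None)
            if pod is None:
                continue
            ts = datetime.strptime(" ".join(line.split()[:3]), TS_FMT)
            if "reason/TerminationStoppedServing" in line:
                stopped[pod] = ts
            elif "reason/TerminationGracefulTerminationFinished" in line and pod in stopped:
                drained[pod] = (ts - stopped.pop(pod)).total_seconds()
        return drained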
#1597920767230611456 build-log.txt.gz (31 hours ago)
Nov 30 13:12:16.180 I ns/openshift-apiserver pod/apiserver-7cd688b99-sq92q node/ci-op-yjisic1x-61103-4rs6w-master-1 container/openshift-apiserver reason/Ready
Nov 30 13:12:16.274 I ns/openshift-apiserver pod/apiserver-7c4d575f58-2fhx8 node/ci-op-yjisic1x-61103-4rs6w-master-0 reason/GracefulDelete duration/90s
Nov 30 13:12:16.400 W ns/openshift-apiserver pod/apiserver-7cd688b99-sfjx7 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 13:12:16.402 I ns/openshift-apiserver pod/apiserver-7cd688b99-sfjx7 node/ reason/Created
Nov 30 13:12:17.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 30 13:12:31.000 I ns/openshift-apiserver pod/apiserver-7c4d575f58-2fhx8 node/apiserver-7c4d575f58-2fhx8 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 13:12:31.000 I ns/openshift-apiserver pod/apiserver-7c4d575f58-2fhx8 node/apiserver-7c4d575f58-2fhx8 reason/TerminationStoppedServing Server has stopped listening
Nov 30 13:13:16.224 - 16s   W ns/openshift-apiserver pod/apiserver-7cd688b99-sfjx7 node/ pod has been pending longer than a minute
Nov 30 13:13:24.155 W ns/openshift-apiserver pod/apiserver-7cd688b99-sfjx7 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 13:13:30.000 I clusteroperator/machine-api reason/Status upgrade Progressing towards operator: 4.9.0-0.okd.test-2022-11-30-115525-ci-op-yjisic1x-latest
Nov 30 13:13:30.949 W clusteroperator/machine-api condition/Progressing status/True reason/SyncingResources changed: Progressing towards operator: 4.9.0-0.okd.test-2022-11-30-115525-ci-op-yjisic1x-latest
#1597920767230611456 build-log.txt.gz (31 hours ago)
Nov 30 13:13:48.305 I ns/openshift-apiserver pod/apiserver-7c4d575f58-9tqwh node/ci-op-yjisic1x-61103-4rs6w-master-2 reason/GracefulDelete duration/90s
Nov 30 13:13:48.448 W ns/openshift-apiserver pod/apiserver-7cd688b99-mn2ts reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 13:13:48.457 I ns/openshift-apiserver pod/apiserver-7cd688b99-mn2ts node/ reason/Created
Nov 30 13:13:51.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well")
Nov 30 13:13:51.371 W clusteroperator/openshift-apiserver condition/Progressing status/False reason/AsExpected changed: All is well
Nov 30 13:14:03.000 I ns/openshift-apiserver pod/apiserver-7c4d575f58-9tqwh node/apiserver-7c4d575f58-9tqwh reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 13:14:03.000 I ns/openshift-apiserver pod/apiserver-7c4d575f58-9tqwh node/apiserver-7c4d575f58-9tqwh reason/TerminationStoppedServing Server has stopped listening
Nov 30 13:14:45.000 W ns/openshift-network-diagnostics node/ci-op-yjisic1x-61103-4rs6w-worker-a-fp8bg reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: openshift-apiserver-endpoint-ci-op-yjisic1x-61103-4rs6w-master-2: failed to establish a TCP connection to 10.130.0.59:8443: dial tcp 10.130.0.59:8443: connect: connection refused
Nov 30 13:14:48.224 - 15s   W ns/openshift-apiserver pod/apiserver-7cd688b99-mn2ts node/ pod has been pending longer than a minute
Nov 30 13:14:54.000 W ns/openshift-network-diagnostics node/ci-op-yjisic1x-61103-4rs6w-worker-a-fp8bg reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: openshift-apiserver-endpoint-ci-op-yjisic1x-61103-4rs6w-master-0: failed to establish a TCP connection to 10.128.0.32:8443: dial tcp 10.128.0.32:8443: connect: connection refused
Nov 30 13:14:54.000 W ns/openshift-network-diagnostics node/ci-op-yjisic1x-61103-4rs6w-worker-a-fp8bg reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: openshift-apiserver-endpoint-ci-op-yjisic1x-61103-4rs6w-master-1: failed to establish a TCP connection to 10.129.0.42:8443: dial tcp 10.129.0.42:8443: connect: connection refused
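Note: the recurring "0/6 nodes are available" message above is the scheduler's per-node filter tally: the 3 workers fail the control-plane node selector, and the 3 masters fail anti-affinity while the outgoing apiserver pod still occupies each of them. An illustrative reconstruction of that arithmetic (not the scheduler's actual code):

    from collections import Counter

    def failed_scheduling_message(nodes, fits_selector, fits_anti_affinity):
        # Tally one rejection reason per node, scheduler-style.
        reasons = Counter()
        for node in nodes:
            if not fits_selector(node):
                reasons["node(s) didn't match Pod's node affinity/selector"] += 1
            elif not fits_anti_affinity(node):
                reasons["node(s) didn't match pod anti-affinity rules"] += 1
        detail = ", ".join(f"{n} {r}" for r, n in reasons.items())
        return f"0/{len(nodes)} nodes are available: {detail}."

    print(failed_scheduling_message(
        ["worker-a", "worker-b", "worker-c", "master-0", "master-1", "master-2"],
        fits_selector=lambda n: n.startswith("master"),
        fits_anti_affinity=lambda n: not n.startswith("master"),
    ))
    # -> 0/6 nodes are available: 3 node(s) didn't match Pod's node
    #    affinity/selector, 3 node(s) didn't match pod anti-affinity rules.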
#1597512338376757248 build-log.txt.gz (2 days ago)
Nov 29 09:45:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-0 node/ci-op-dgc36w5i-61103-2m8lk-master-0 reason/ShutdownInitiated Received signal to terminate, becoming unready, but keeping serving
Nov 29 09:45:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-0 node/ci-op-dgc36w5i-61103-2m8lk-master-0 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 09:45:18.650 I ns/openshift-kube-apiserver pod/installer-10-ci-op-dgc36w5i-61103-2m8lk-master-0 node/ci-op-dgc36w5i-61103-2m8lk-master-0 container/installer reason/ContainerExit code/0 cause/Completed
Nov 29 09:45:19.000 W ns/openshift-kube-apiserver pod/installer-10-ci-op-dgc36w5i-61103-2m8lk-master-0 node/ci-op-dgc36w5i-61103-2m8lk-master-0 reason/FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered (2 times)
Nov 29 09:46:03.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.2222222222222223 over 5 minutes on "GCP"; disk metrics are: etcd-ci-op-dgc36w5i-61103-2m8lk-master-0=0.019359999999999853,etcd-ci-op-dgc36w5i-61103-2m8lk-master-2=0.004880000000000037,etcd-ci-op-dgc36w5i-61103-2m8lk-master-1=0.00793176470588236. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 29 09:46:28.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-0 node/ci-op-dgc36w5i-61103-2m8lk-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 09:46:28.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-0 node/ci-op-dgc36w5i-61103-2m8lk-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 09:46:28.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-0 node/ci-op-dgc36w5i-61103-2m8lk-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 09:46:30.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-0 node/ci-op-dgc36w5i-61103-2m8lk-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 09:46:36.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-0 node/ci-op-dgc36w5i-61103-2m8lk-master-0 container/setup reason/Pulling image/registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18
Nov 29 09:46:36.105 W ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-0 node/ci-op-dgc36w5i-61103-2m8lk-master-0 invariant violation (bug): static pod should not transition Running->Pending with same UID
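Note: the "invariant violation (bug)" line above comes from the CI monitor's pod-state checks rather than from the cluster itself. The invariant it enforces can be stated in a few lines; this is a sketch of the check, not the monitor's actual implementation:

    class StaticPodPhaseMonitor:
        # Remember the last observed phase per (pod, UID) and flag a static
        # pod that was seen Running and is later seen Pending under the
        # same UID, which should only happen if the pod was recreated.
        def __init__(self):
            self.last = {}

        def observe(self, pod, uid, phase):
            prev = self.last.get((pod, uid))
            self.last[(pod, uid)] = phase
            if prev == "Running" and phase == "Pending":
                return ("invariant violation (bug): static pod should not "
                        "transition Running->Pending with same UID")
            return None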
#1597512338376757248 build-log.txt.gz (2 days ago)
Nov 29 09:47:48.000 I ns/openshift-kube-apiserver pod/installer-10-ci-op-dgc36w5i-61103-2m8lk-master-1 reason/StaticPodInstallerCompleted Successfully installed revision 10
Nov 29 09:47:48.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-1 node/ci-op-dgc36w5i-61103-2m8lk-master-1 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 09:47:49.843 I ns/openshift-kube-apiserver pod/installer-10-ci-op-dgc36w5i-61103-2m8lk-master-1 node/ci-op-dgc36w5i-61103-2m8lk-master-1 container/installer reason/ContainerExit code/0 cause/Completed
Nov 29 09:47:51.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable-initial@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (12 times)
Nov 29 09:48:24.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable-initial@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (13 times)
Nov 29 09:48:58.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-1 node/ci-op-dgc36w5i-61103-2m8lk-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 09:48:58.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-1 node/ci-op-dgc36w5i-61103-2m8lk-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 09:48:58.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-1 node/ci-op-dgc36w5i-61103-2m8lk-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 09:49:00.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-1 node/ci-op-dgc36w5i-61103-2m8lk-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 09:49:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-1 node/ci-op-dgc36w5i-61103-2m8lk-master-1 container/setup reason/Pulling image/registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18
Nov 29 09:49:08.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable-initial@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (14 times)
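Note: the repeated MultipleVersions events above are the operator noticing, on each sync, that its operands are not yet all on one image during the rollout. The condition reduces to "more than one distinct image observed"; a simplified mimic of the event text (the real logic lives in the operator, this is only a sketch):

    def multiple_versions_event(images):
        # images: operand image pullspecs observed across the static pods.
        distinct = sorted(set(images))
        if len(distinct) > 1:
            return ("reason/MultipleVersions multiple versions found, "
                    "probably in transition: " + ",".join(distinct))
        return None  # single version: nothing to report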
#1597512338376757248 build-log.txt.gz (2 days ago)
Nov 29 09:50:16.446 I ns/openshift-kube-apiserver pod/installer-10-ci-op-dgc36w5i-61103-2m8lk-master-2 node/ci-op-dgc36w5i-61103-2m8lk-master-2 container/installer reason/ContainerExit code/0 cause/Completed
Nov 29 09:50:18.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable-initial@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (26 times)
Nov 29 09:50:24.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable-initial@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (27 times)
Nov 29 09:50:43.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-dgc36w5i-61103-2m8lk-master-1_099a3e9f-2de5-4bd9-8426-569178250936 became leader
Nov 29 09:51:24.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable-initial@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18,registry.build04.ci.openshift.org/ci-op-dgc36w5i/stable@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (28 times)
Nov 29 09:51:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-2 node/ci-op-dgc36w5i-61103-2m8lk-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 09:51:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-2 node/ci-op-dgc36w5i-61103-2m8lk-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 09:51:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-2 node/ci-op-dgc36w5i-61103-2m8lk-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 09:51:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-dgc36w5i-61103-2m8lk-master-2 node/ci-op-dgc36w5i-61103-2m8lk-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 09:51:31.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.9.0-0.okd.test-2022-11-29-085206-ci-op-dgc36w5i-latest"} {"kube-apiserver" "1.22.8"} {"operator" "4.9.0-0.okd.test-2022-11-29-084705-ci-op-dgc36w5i-initial"}] to [{"raw-internal" "4.9.0-0.okd.test-2022-11-29-085206-ci-op-dgc36w5i-latest"} {"kube-apiserver" "1.22.8"} {"operator" "4.9.0-0.okd.test-2022-11-29-085206-ci-op-dgc36w5i-latest"}]
Nov 29 09:51:31.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorVersionChanged clusteroperator/kube-apiserver version "operator" changed from "4.9.0-0.okd.test-2022-11-29-084705-ci-op-dgc36w5i-initial" to "4.9.0-0.okd.test-2022-11-29-085206-ci-op-dgc36w5i-latest"
#1597512338376757248 build-log.txt.gz (2 days ago)
Nov 29 10:00:06.875 I ns/openshift-apiserver pod/apiserver-84bc69d685-xc67r node/ reason/Created
Nov 29 10:00:07.000 W ns/openshift-operator-lifecycle-manager pod/collect-profiles-27828600--1-7vqk6 node/ci-op-dgc36w5i-61103-2m8lk-worker-a-mbctf reason/FailedMount MountVolume.SetUp failed for volume "config-volume" : object "openshift-operator-lifecycle-manager"/"collect-profiles-config" not registered (2 times)
Nov 29 10:00:07.000 W ns/openshift-operator-lifecycle-manager pod/collect-profiles-27828600--1-7vqk6 node/ci-op-dgc36w5i-61103-2m8lk-worker-a-mbctf reason/FailedMount MountVolume.SetUp failed for volume "kube-api-access-sbs4m" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] (2 times)
Nov 29 10:00:07.000 W ns/openshift-operator-lifecycle-manager pod/collect-profiles-27828600--1-7vqk6 node/ci-op-dgc36w5i-61103-2m8lk-worker-a-mbctf reason/FailedMount MountVolume.SetUp failed for volume "secret-volume" : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered (2 times)
Nov 29 10:00:10.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Nov 29 10:00:21.000 I ns/openshift-apiserver pod/apiserver-86c94bb6fb-5sjpj node/apiserver-86c94bb6fb-5sjpj reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 10:00:21.000 I ns/openshift-apiserver pod/apiserver-86c94bb6fb-5sjpj node/apiserver-86c94bb6fb-5sjpj reason/TerminationStoppedServing Server has stopped listening
Nov 29 10:01:06.031 - 16s   W ns/openshift-apiserver pod/apiserver-84bc69d685-xc67r node/ pod has been pending longer than a minute
Nov 29 10:01:21.000 I ns/openshift-apiserver pod/apiserver-86c94bb6fb-5sjpj node/apiserver-86c94bb6fb-5sjpj reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 10:01:22.969 I ns/openshift-apiserver pod/apiserver-86c94bb6fb-5sjpj node/ci-op-dgc36w5i-61103-2m8lk-master-0 container/openshift-apiserver reason/ContainerExit code/0 cause/Completed
Nov 29 10:01:22.969 I ns/openshift-apiserver pod/apiserver-86c94bb6fb-5sjpj node/ci-op-dgc36w5i-61103-2m8lk-master-0 container/openshift-apiserver-check-endpoints reason/ContainerExit code/0 cause/Completed
#1597512338376757248 build-log.txt.gz (2 days ago)
Nov 29 10:01:50.000 I ns/openshift-marketplace pod/community-operators-vckgl node/ci-op-dgc36w5i-61103-2m8lk-worker-a-mbctf container/registry-server reason/Killing
Nov 29 10:01:51.931 I ns/openshift-marketplace pod/community-operators-vckgl node/ci-op-dgc36w5i-61103-2m8lk-worker-a-mbctf container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 29 10:01:51.947 W ns/openshift-apiserver pod/apiserver-84bc69d685-kszsg reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 29 10:01:51.994 I ns/openshift-marketplace pod/community-operators-vckgl node/ci-op-dgc36w5i-61103-2m8lk-worker-a-mbctf reason/Deleted
Nov 29 10:01:53.000 I ns/openshift-apiserver pod/apiserver-86c94bb6fb-66rch node/apiserver-86c94bb6fb-66rch reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 10:01:53.000 I ns/openshift-apiserver pod/apiserver-86c94bb6fb-66rch node/apiserver-86c94bb6fb-66rch reason/TerminationStoppedServing Server has stopped listening
Nov 29 10:02:25.000 I ns/openshift-machine-api deployment/machine-api-controllers reason/ScalingReplicaSet Scaled up replica set machine-api-controllers-5d8d769567 to 1
Nov 29 10:02:25.000 I clusteroperator/machine-api reason/Status upgrade Progressing towards operator: 4.9.0-0.okd.test-2022-11-29-085206-ci-op-dgc36w5i-latest
Nov 29 10:02:25.000 I ns/openshift-machine-api replicaset/machine-api-controllers-5d8d769567 reason/SuccessfulCreate Created pod: machine-api-controllers-5d8d769567-vzxtj
Nov 29 10:02:25.501 W clusteroperator/machine-api condition/Progressing status/True reason/SyncingResources changed: Progressing towards operator: 4.9.0-0.okd.test-2022-11-29-085206-ci-op-dgc36w5i-latest
#1597512338376757248 build-log.txt.gz (2 days ago)
Nov 29 10:03:04.678 I ns/openshift-apiserver pod/apiserver-86c94bb6fb-7brz9 node/ci-op-dgc36w5i-61103-2m8lk-master-1 reason/GracefulDelete duration/90s
Nov 29 10:03:04.802 W ns/openshift-apiserver pod/apiserver-84bc69d685-wn95b reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 29 10:03:04.833 I ns/openshift-apiserver pod/apiserver-84bc69d685-wn95b node/ reason/Created
Nov 29 10:03:06.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well")
Nov 29 10:03:06.429 W clusteroperator/openshift-apiserver condition/Progressing status/False reason/AsExpected changed: All is well
Nov 29 10:03:19.000 I ns/openshift-apiserver pod/apiserver-86c94bb6fb-7brz9 node/apiserver-86c94bb6fb-7brz9 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 10:03:19.000 I ns/openshift-apiserver pod/apiserver-86c94bb6fb-7brz9 node/apiserver-86c94bb6fb-7brz9 reason/TerminationStoppedServing Server has stopped listening
Nov 29 10:04:04.031 - 16s   W ns/openshift-apiserver pod/apiserver-84bc69d685-wn95b node/ pod has been pending longer than a minute
Nov 29 10:04:19.000 I ns/openshift-apiserver pod/apiserver-86c94bb6fb-7brz9 node/apiserver-86c94bb6fb-7brz9 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 10:04:20.745 I ns/openshift-apiserver pod/apiserver-86c94bb6fb-7brz9 node/ci-op-dgc36w5i-61103-2m8lk-master-1 container/openshift-apiserver-check-endpoints reason/ContainerExit code/0 cause/Completed
Nov 29 10:04:20.745 I ns/openshift-apiserver pod/apiserver-86c94bb6fb-7brz9 node/ci-op-dgc36w5i-61103-2m8lk-master-1 container/openshift-apiserver reason/ContainerExit code/0 cause/Completed
periodic-ci-openshift-multiarch-master-nightly-4.10-upgrade-from-nightly-4.9-ocp-remote-libvirt-ppc64le (all) - 7 runs, 43% failed, 33% of failures match = 14% impact
#1597893356921294848 build-log.txt.gz (35 hours ago)
Nov 30 10:45:14.000 I ns/openshift-etcd pod/installer-7-libvirt-ppc64le-2-0-7-zg85v-master-2 reason/StaticPodInstallerCompleted Successfully installed revision 7
Nov 30 10:45:15.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4318983dbfff650a91de521440708479c250f5971791584c8b29db5ab353f22,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d14aa1ae5b481b9c7a0a33e979c9ce66906a15a5850ccd7aa13bf6209a071ba1 (19 times)
Nov 30 10:45:15.231 I ns/openshift-etcd pod/installer-7-libvirt-ppc64le-2-0-7-zg85v-master-2 node/libvirt-ppc64le-2-0-7-zg85v-master-2 container/installer reason/ContainerExit code/0 cause/Completed
Nov 30 10:45:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-ppc64le-2-0-7-zg85v-master-1 node/libvirt-ppc64le-2-0-7-zg85v-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 10:45:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-ppc64le-2-0-7-zg85v-master-1 node/libvirt-ppc64le-2-0-7-zg85v-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 10:45:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-ppc64le-2-0-7-zg85v-master-1 node/libvirt-ppc64le-2-0-7-zg85v-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 10:45:17.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4318983dbfff650a91de521440708479c250f5971791584c8b29db5ab353f22,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d14aa1ae5b481b9c7a0a33e979c9ce66906a15a5850ccd7aa13bf6209a071ba1 (20 times)
#1597893356921294848 build-log.txt.gz (35 hours ago)
Nov 30 10:48:18.937 I ns/openshift-etcd pod/etcd-libvirt-ppc64le-2-0-7-zg85v-master-0 node/libvirt-ppc64le-2-0-7-zg85v-master-0 container/etcd reason/Ready
Nov 30 10:48:19.493 I ns/openshift-etcd pod/etcd-quorum-guard-7cb99778b8-f94bf node/libvirt-ppc64le-2-0-7-zg85v-master-0 container/guard reason/Ready
Nov 30 10:48:20.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/NodeCurrentRevisionChanged Updated node "libvirt-ppc64le-2-0-7-zg85v-master-0" from revision 5 to 7 because static pod is ready
Nov 30 10:48:20.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 5; 2 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, libvirt-ppc64le-2-0-7-zg85v-master-0 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, libvirt-ppc64le-2-0-7-zg85v-master-0 is unhealthy"
Nov 30 10:48:20.252 W clusteroperator/etcd condition/Progressing status/False reason/AsExpected changed: NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found
Nov 30 10:48:23.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-ppc64le-2-0-7-zg85v-master-2 node/libvirt-ppc64le-2-0-7-zg85v-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 10:48:23.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-ppc64le-2-0-7-zg85v-master-2 node/libvirt-ppc64le-2-0-7-zg85v-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 10:48:23.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-ppc64le-2-0-7-zg85v-master-2 node/libvirt-ppc64le-2-0-7-zg85v-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 10:48:23.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/PodCreated Created Pod/revision-pruner-7-libvirt-ppc64le-2-0-7-zg85v-master-1 -n openshift-etcd because it was missing
Nov 30 10:48:23.047 I ns/openshift-etcd pod/revision-pruner-7-libvirt-ppc64le-2-0-7-zg85v-master-1 node/libvirt-ppc64le-2-0-7-zg85v-master-1 reason/Created
Nov 30 10:48:25.000 I ns/openshift-etcd pod/revision-pruner-7-libvirt-ppc64le-2-0-7-zg85v-master-1 reason/AddedInterface Add eth0 [10.128.0.79/23] from ovn-kubernetes
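Note: the etcd status flip above ("1 nodes are at revision 5; 2 nodes are at revision 7" becoming "3 nodes are at revision 7") is a direct summary of per-node current revisions. A sketch that rebuilds the message in the same grammar, including its "1 nodes" quirk:

    from collections import Counter

    def node_revision_summary(current_revisions):
        # current_revisions: node name -> installed static-pod revision.
        counts = Counter(current_revisions.values())
        return "; ".join(f"{n} nodes are at revision {rev}"
                         for rev, n in sorted(counts.items()))

    print(node_revision_summary({"master-0": 5, "master-1": 7, "master-2": 7}))
    # -> 1 nodes are at revision 5; 2 nodes are at revision 7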
#1597893356921294848 build-log.txt.gz (35 hours ago)
Nov 30 10:51:07.837 I ns/openshift-marketplace pod/community-operators-fkxhz node/libvirt-ppc64le-2-0-7-zg85v-worker-0-cvn9n reason/GracefulDelete duration/1s
Nov 30 10:51:09.000 I ns/openshift-marketplace pod/community-operators-fkxhz node/libvirt-ppc64le-2-0-7-zg85v-worker-0-cvn9n container/registry-server reason/Killing
Nov 30 10:51:10.707 I ns/openshift-marketplace pod/community-operators-fkxhz node/libvirt-ppc64le-2-0-7-zg85v-worker-0-cvn9n container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 30 10:51:10.726 I ns/openshift-marketplace pod/community-operators-fkxhz node/libvirt-ppc64le-2-0-7-zg85v-worker-0-cvn9n reason/Deleted
Nov 30 10:51:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-ppc64le-2-0-7-zg85v-master-0 node/libvirt-ppc64le-2-0-7-zg85v-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 10:51:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-ppc64le-2-0-7-zg85v-master-0 node/libvirt-ppc64le-2-0-7-zg85v-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 10:51:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-libvirt-ppc64le-2-0-7-zg85v-master-0 node/libvirt-ppc64le-2-0-7-zg85v-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 10:51:34.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-libvirt-ppc64le-2-0-7-zg85v-master-0 node/libvirt-ppc64le-2-0-7-zg85v-master-0 reason/ProbeError Readiness probe error: Get "https://192.168.126.11:6443/healthz": dial tcp 192.168.126.11:6443: connect: connection refused\nbody: \n
#1597893356921294848 build-log.txt.gz (35 hours ago)
Nov 30 11:07:06.000 I ns/openshift-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-6cd5d6cbd7 to 1
Nov 30 11:07:06.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-7fb6d4f9d4 to 1
Nov 30 11:07:06.000 I ns/openshift-oauth-apiserver replicaset/apiserver-7fb6d4f9d4 reason/SuccessfulCreate Created pod: apiserver-7fb6d4f9d4-mzjcb
Nov 30 11:07:06.000 I ns/openshift-oauth-apiserver replicaset/apiserver-6dcc7dc786 reason/SuccessfulDelete Deleted pod: apiserver-6dcc7dc786-qh5js
Nov 30 11:07:06.000 I ns/openshift-apiserver replicaset/apiserver-d7768b8fd reason/SuccessfulDelete Deleted pod: apiserver-d7768b8fd-tlnlm
Nov 30 11:07:06.000 I ns/default namespace/kube-system node/apiserver-6dcc7dc786-qh5js reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 30 11:07:06.000 I ns/default namespace/kube-system node/apiserver-6dcc7dc786-qh5js reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 30 11:07:06.000 I ns/default namespace/kube-system node/apiserver-6dcc7dc786-qh5js reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 30 11:07:06.000 I ns/default namespace/kube-system node/apiserver-6dcc7dc786-qh5js reason/TerminationStoppedServing Server has stopped listening
Nov 30 11:07:06.763 I ns/openshift-marketplace pod/redhat-marketplace-4nr7c node/libvirt-ppc64le-2-0-7-zg85v-worker-0-cvn9n container/registry-server reason/ContainerStart duration/4.00s
Nov 30 11:07:06.824 W ns/openshift-authentication pod/oauth-openshift-dc8d55d4c-rxlzf reason/FailedScheduling 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
periodic-ci-openshift-release-master-nightly-4.9-upgrade-from-stable-4.8-e2e-aws-upgrade (all) - 3 runs, 67% failed, 50% of failures match = 33% impact
#1597853624933814272 build-log.txt.gz (37 hours ago)
Nov 30 08:15:01.000 I ns/openshift-multus cronjob/ip-reconciler reason/SawCompletedJob Saw completed job: ip-reconciler-27829935, status: Complete
Nov 30 08:15:01.000 I ns/openshift-multus cronjob/ip-reconciler reason/SuccessfulDelete Deleted job ip-reconciler-27829935
Nov 30 08:15:01.521 I ns/openshift-multus pod/ip-reconciler-27829935-92w95 node/ip-10-0-242-173.ec2.internal container/whereabouts reason/ContainerExit code/0 cause/Completed
Nov 30 08:15:01.593 I ns/openshift-multus pod/ip-reconciler-27829935-92w95 node/ip-10-0-242-173.ec2.internal reason/DeletedAfterCompletion
Nov 30 08:16:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-177-37.ec2.internal node/ip-10-0-177-37 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 30 08:16:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-177-37.ec2.internal node/ip-10-0-177-37 reason/TerminationStoppedServing Server has stopped listening
Nov 30 08:16:58.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-177-37.ec2.internal node/ip-10-0-177-37.ec2.internal container/kube-controller-manager-recovery-controller reason/Created
Nov 30 08:16:58.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-177-37.ec2.internal node/ip-10-0-177-37.ec2.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5d8a040f711995c1d951872df4267fe592f395af664c5fe638ec023407f0f65
#1597853624933814272 build-log.txt.gz (37 hours ago)
Nov 30 08:19:42.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (13 times)
Nov 30 08:20:42.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (14 times)
Nov 30 08:21:45.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (15 times)
Nov 30 08:21:47.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (16 times)
Nov 30 08:22:19.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-231-13.ec2.internal node/ip-10-0-231-13 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 30 08:22:19.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-231-13.ec2.internal node/ip-10-0-231-13 reason/TerminationStoppedServing Server has stopped listening
Nov 30 08:22:42.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (17 times)
Nov 30 08:22:59.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-231-13.ec2.internal node/ip-10-0-231-13.ec2.internal container/kube-scheduler-recovery-controller reason/Created
Nov 30 08:22:59.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-231-13.ec2.internal node/ip-10-0-231-13.ec2.internal container/kube-scheduler-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcb22579324a93faddf49caa971153fea33cec03738e76490d6de7e39a01db59
Nov 30 08:22:59.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-231-13.ec2.internal node/ip-10-0-231-13.ec2.internal container/kube-scheduler-recovery-controller reason/Started
#1597853624933814272 build-log.txt.gz (37 hours ago)
Nov 30 08:24:48.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (32 times)
Nov 30 08:24:54.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ip-10-0-177-37_938b60fc-a309-4f55-89a4-9ff3bc8c4bb8 became leader
Nov 30 08:25:40.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (33 times)
Nov 30 08:26:42.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (34 times)
Nov 30 08:27:40.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (35 times)
Nov 30 08:28:13.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-18.ec2.internal node/ip-10-0-155-18 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Nov 30 08:28:13.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-18.ec2.internal node/ip-10-0-155-18 reason/TerminationStoppedServing Server has stopped listening
Nov 30 08:28:40.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25 (36 times)
Nov 30 08:28:50.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-18.ec2.internal node/ip-10-0-155-18.ec2.internal container/kube-controller-manager-recovery-controller reason/Created
Nov 30 08:28:50.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-18.ec2.internal node/ip-10-0-155-18.ec2.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5d8a040f711995c1d951872df4267fe592f395af664c5fe638ec023407f0f65
Nov 30 08:28:50.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-18.ec2.internal node/ip-10-0-155-18.ec2.internal container/kube-controller-manager-recovery-controller reason/Started
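Note: across the jobs in this report the advertised graceful-shutdown window varies: 1m10s for kube-apiserver in the 4.9/4.10 runs, 3m30s in this 4.8-to-4.9 AWS job, 15s or 10s for openshift-apiserver, and 0s on single-node. A sketch that tallies the windows straight out of the "minimal shutdown duration of ... finished" lines (the regex is an assumption about the line format shown here):

    import re
    from collections import Counter

    DUR = re.compile(r"ns/(\S+) .*minimal shutdown duration of (\S+) finished")

    def shutdown_windows(lines):
        # Count occurrences of each (namespace, duration) pair.
        tally = Counter()
        for line in lines:
            if m := DUR.search(line):
                tally[(m.group(1), m.group(2))] += 1
        return tally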
#1597853624933814272 build-log.txt.gz (37 hours ago)
Nov 30 08:36:06.262 I ns/openshift-apiserver pod/apiserver-5f5c6777fc-shbhd node/ reason/DeletedBeforeScheduling
Nov 30 08:36:06.278 W ns/openshift-apiserver pod/apiserver-7bf5f9f888-4t6dw reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 08:36:06.284 I ns/openshift-apiserver pod/apiserver-7bf5f9f888-4t6dw node/ reason/Created
Nov 30 08:36:09.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Nov 30 08:36:09.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7."
Nov 30 08:36:12.000 I ns/openshift-apiserver pod/apiserver-6d8c978784-lpf8h node/apiserver-6d8c978784-lpf8h reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 30 08:36:12.000 I ns/openshift-apiserver pod/apiserver-6d8c978784-lpf8h node/apiserver-6d8c978784-lpf8h reason/TerminationStoppedServing Server has stopped listening
Nov 30 08:36:18.000 W ns/openshift-apiserver pod/apiserver-6d8c978784-lpf8h node/ip-10-0-155-18.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.32:8443/healthz": dial tcp 10.129.0.32:8443: connect: connection refused\nbody: \n
Nov 30 08:36:18.000 W ns/openshift-apiserver pod/apiserver-6d8c978784-lpf8h node/ip-10-0-155-18.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.32:8443/healthz": dial tcp 10.129.0.32:8443: connect: connection refused\nbody: \n
Nov 30 08:36:18.000 W ns/openshift-apiserver pod/apiserver-6d8c978784-lpf8h node/ip-10-0-155-18.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.129.0.32:8443/healthz": dial tcp 10.129.0.32:8443: connect: connection refused
Nov 30 08:36:18.000 W ns/openshift-apiserver pod/apiserver-6d8c978784-lpf8h node/ip-10-0-155-18.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.129.0.32:8443/healthz": dial tcp 10.129.0.32:8443: connect: connection refused
#1597853624933814272 build-log.txt.gz (37 hours ago)
Nov 30 08:37:34.143 I ns/openshift-apiserver pod/apiserver-7bf5f9f888-6n7wr node/ reason/Created
Nov 30 08:37:35.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 30 08:37:36.000 I ns/openshift-machine-api machine/ci-op-ynfhd1c6-61161-dmjmm-worker-us-east-1a-n6vlc reason/Update Updated Machine ci-op-ynfhd1c6-61161-dmjmm-worker-us-east-1a-n6vlc (2 times)
Nov 30 08:37:36.000 I ns/openshift-machine-api machine/ci-op-ynfhd1c6-61161-dmjmm-worker-us-east-1f-7j5kf reason/Update Updated Machine ci-op-ynfhd1c6-61161-dmjmm-worker-us-east-1f-7j5kf (2 times)
Nov 30 08:37:36.000 I ns/openshift-machine-api machine/ci-op-ynfhd1c6-61161-dmjmm-worker-us-east-1f-g8wc9 reason/Update Updated Machine ci-op-ynfhd1c6-61161-dmjmm-worker-us-east-1f-g8wc9 (2 times)
Nov 30 08:37:44.000 I ns/openshift-apiserver pod/apiserver-6d8c978784-l7zzn node/apiserver-6d8c978784-l7zzn reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 30 08:37:44.000 I ns/openshift-apiserver pod/apiserver-6d8c978784-l7zzn node/apiserver-6d8c978784-l7zzn reason/TerminationStoppedServing Server has stopped listening
Nov 30 08:37:48.000 W ns/openshift-apiserver pod/apiserver-6d8c978784-l7zzn node/ip-10-0-177-37.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.37:8443/healthz": dial tcp 10.130.0.37:8443: connect: connection refused\nbody: \n
Nov 30 08:37:48.000 W ns/openshift-apiserver pod/apiserver-6d8c978784-l7zzn node/ip-10-0-177-37.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.37:8443/healthz": dial tcp 10.130.0.37:8443: connect: connection refused\nbody: \n
Nov 30 08:37:48.000 W ns/openshift-apiserver pod/apiserver-6d8c978784-l7zzn node/ip-10-0-177-37.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.37:8443/healthz": dial tcp 10.130.0.37:8443: connect: connection refused
Nov 30 08:37:48.000 W ns/openshift-apiserver pod/apiserver-6d8c978784-l7zzn node/ip-10-0-177-37.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.37:8443/healthz": dial tcp 10.130.0.37:8443: connect: connection refused
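Note: the ProbeError/Unhealthy warnings above land four seconds after the same pod's TerminationStoppedServing event, so they are an expected side effect of the rolling update rather than a new failure. One possible classification heuristic (an assumption of this write-up, not a check the monitor performs):

    def expected_probe_failure(event_ts, pod, stopped_serving, grace_seconds=120):
        # stopped_serving: pod -> datetime of its TerminationStoppedServing
        # event. A probe failure shortly after that point is expected.
        t0 = stopped_serving.get(pod)
        return t0 is not None and 0 <= (event_ts - t0).total_seconds() <= grace_seconds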
periodic-ci-openshift-release-master-nightly-4.9-e2e-aws-single-node-serial (all) - 4 runs, 100% failed, 25% of failures match = 25% impact
#1597853634157088768 build-log.txt.gz (38 hours ago)
Nov 30 08:36:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-174-18.us-east-2.compute.internal node/ip-10-0-174-18.us-east-2.compute.internal container/kube-apiserver-cert-regeneration-controller reason/Killing
Nov 30 08:36:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-174-18.us-east-2.compute.internal node/ip-10-0-174-18.us-east-2.compute.internal container/kube-apiserver-cert-syncer reason/Killing
Nov 30 08:36:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-174-18.us-east-2.compute.internal node/ip-10-0-174-18.us-east-2.compute.internal container/kube-apiserver-check-endpoints reason/Killing
Nov 30 08:36:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-174-18.us-east-2.compute.internal node/ip-10-0-174-18.us-east-2.compute.internal container/kube-apiserver-insecure-readyz reason/Killing
Nov 30 08:36:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-startup-monitor-ip-10-0-174-18.us-east-2.compute.internal node/ip-10-0-174-18.us-east-2.compute.internal container/startup-monitor reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1fbe47caec80501bcdeace53dc21338744a5ead6a2ff584a05f2ec47e6e455e
Nov 30 08:36:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-174-18.us-east-2.compute.internal node/ip-10-0-174-18 reason/AfterShutdownDelayDuration The minimal shutdown duration of 0s finished
Nov 30 08:36:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-174-18.us-east-2.compute.internal node/ip-10-0-174-18 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 08:36:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-174-18.us-east-2.compute.internal node/ip-10-0-174-18 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 08:36:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-174-18.us-east-2.compute.internal node/ip-10-0-174-18 reason/ShutdownInitiated Received signal to terminate, becoming unready, but keeping serving
Nov 30 08:36:56.000 I ns/openshift-kube-apiserver pod/installer-6-ip-10-0-174-18.us-east-2.compute.internal reason/StaticPodInstallerCompleted Successfully installed revision 6
Nov 30 08:36:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-174-18.us-east-2.compute.internal node/ip-10-0-174-18 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
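Note: on this single-node job the whole termination sequence collapses into one second (the shutdown delay is 0s), so the events above print in arbitrary order. Their logical order is still fixed; a sketch of a validator for it (the list is inferred from the event names in these logs, not taken from the apiserver source):

    LIFECYCLE = [
        "ShutdownInitiated",
        "TerminationPreShutdownHooksFinished",
        "AfterShutdownDelayDuration",
        "HTTPServerStoppedListening",
        "InFlightRequestsDrained",
        "TerminationGracefulTerminationFinished",
    ]

    def in_lifecycle_order(reasons):
        # reasons: event reasons for one pod, ordered by timestamp.
        idx = [LIFECYCLE.index(r) for r in reasons if r in LIFECYCLE]
        return all(a <= b for a, b in zip(idx, idx[1:]))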
release-openshift-origin-installer-e2e-aws-upgrade-4.6-to-4.7-to-4.8-to-4.9-ci (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1597729510998937600 build-log.txt.gz (44 hours ago)
Nov 29 23:49:40.982 I ns/openshift-kube-apiserver pod/installer-7-ip-10-0-148-206.us-east-2.compute.internal node/ip-10-0-148-206.us-east-2.compute.internal container/installer reason/Ready
Nov 29 23:49:48.000 I ns/openshift-kube-apiserver pod/installer-7-ip-10-0-148-206.us-east-2.compute.internal reason/StaticPodInstallerCompleted Successfully installed revision 7
Nov 29 23:49:49.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 3.75 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-148-206.us-east-2.compute.internal=0.005802284626169215,etcd-ip-10-0-153-55.us-east-2.compute.internal=0.0034180000000000035,etcd-ip-10-0-238-107.us-east-2.compute.internal=0.005433333333333408
Nov 29 23:49:49.010 I ns/openshift-kube-apiserver pod/installer-7-ip-10-0-148-206.us-east-2.compute.internal node/ip-10-0-148-206.us-east-2.compute.internal container/installer reason/ContainerExit code/0 cause/Completed
Nov 29 23:50:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-148-206.us-east-2.compute.internal node/ip-10-0-148-206 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 29 23:50:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-148-206.us-east-2.compute.internal node/ip-10-0-148-206 reason/TerminationStoppedServing Server has stopped listening
Nov 29 23:50:48.989 I ns/openshift-marketplace pod/redhat-marketplace-9c2jw node/ reason/Created
Nov 29 23:50:48.992 I ns/openshift-marketplace pod/redhat-marketplace-9c2jw node/ip-10-0-130-87.us-east-2.compute.internal reason/Scheduled
Nov 29 23:50:49.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 3.75 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-148-206.us-east-2.compute.internal=0.0058004399096763136,etcd-ip-10-0-153-55.us-east-2.compute.internal=0.003418000000000003,etcd-ip-10-0-238-107.us-east-2.compute.internal=0.005433333333333314
Nov 29 23:50:50.000 I ns/openshift-marketplace pod/redhat-marketplace-9c2jw reason/AddedInterface Add eth0 [10.129.2.17/23]
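Note: the EtcdLeaderChangeMetrics warning above ("increase of 3.75 over 5 minutes") reads like a Prometheus increase()-style query over the etcd leader-changes counter. A rough sketch of that computation over (timestamp, counter) samples, ignoring Prometheus's extrapolation details:

    def leader_change_increase(samples, window_seconds=300):
        # samples: list of (unix_ts, counter_value), oldest first.
        cutoff = samples[-1][0] - window_seconds
        recent = [(t, v) for t, v in samples if t >= cutoff]
        return recent[-1][1] - recent[0][1]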
#1597729510998937600 build-log.txt.gz (44 hours ago)
Nov 29 23:53:11.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c4694635d722c28365be1847e0d43825591dac19b8d8c8e03566bb85cd2ec0e,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (14 times)
Nov 29 23:53:17.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ip-10-0-148-206_4a7ec6c3-ab04-4fe4-ba94-bc257ab3932e became leader
Nov 29 23:53:17.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c4694635d722c28365be1847e0d43825591dac19b8d8c8e03566bb85cd2ec0e,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (15 times)
Nov 29 23:53:26.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c4694635d722c28365be1847e0d43825591dac19b8d8c8e03566bb85cd2ec0e,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (16 times)
Nov 29 23:53:57.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c4694635d722c28365be1847e0d43825591dac19b8d8c8e03566bb85cd2ec0e,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (17 times)
Nov 29 23:54:19.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-238-107.us-east-2.compute.internal node/ip-10-0-238-107 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 29 23:54:19.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-238-107.us-east-2.compute.internal node/ip-10-0-238-107 reason/TerminationStoppedServing Server has stopped listening
Nov 29 23:54:27.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c4694635d722c28365be1847e0d43825591dac19b8d8c8e03566bb85cd2ec0e,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (18 times)
Nov 29 23:54:57.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-238-107.us-east-2.compute.internal node/ip-10-0-238-107.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Created
Nov 29 23:54:57.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-238-107.us-east-2.compute.internal node/ip-10-0-238-107.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e81d3764b1326b5a932b4f2bbe33e38b8a6d88e2613c96f1559a095c15548da5
Nov 29 23:54:57.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-238-107.us-east-2.compute.internal node/ip-10-0-238-107.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Started
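Note: the cert-regeneration-controller-lock LeaderElection lines that recur through these logs (here at 23:53:17) mark leadership handoffs as apiservers restart during the rollout. A sketch that pulls them into a timeline (the regex is an assumption about the line format):

    import re

    LEADER = re.compile(r"configmap/(\S+) reason/LeaderElection (\S+) became leader")

    def leadership_timeline(lines):
        # Yield (timestamp_prefix, lock_name, new_holder) per election event.
        for line in lines:
            if m := LEADER.search(line):
                yield line.split(" I ")[0], m.group(1), m.group(2)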
#1597729510998937600 build-log.txt.gz (44 hours ago)
Nov 29 23:57:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c4694635d722c28365be1847e0d43825591dac19b8d8c8e03566bb85cd2ec0e,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (33 times)
Nov 29 23:57:15.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c4694635d722c28365be1847e0d43825591dac19b8d8c8e03566bb85cd2ec0e,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (34 times)
Nov 29 23:57:27.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c4694635d722c28365be1847e0d43825591dac19b8d8c8e03566bb85cd2ec0e,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (35 times)
Nov 29 23:57:59.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c4694635d722c28365be1847e0d43825591dac19b8d8c8e03566bb85cd2ec0e,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (36 times)
Nov 29 23:58:05.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c4694635d722c28365be1847e0d43825591dac19b8d8c8e03566bb85cd2ec0e,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (37 times)
Nov 29 23:58:15.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-55.us-east-2.compute.internal node/ip-10-0-153-55 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 29 23:58:15.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-55.us-east-2.compute.internal node/ip-10-0-153-55 reason/TerminationStoppedServing Server has stopped listening
Nov 29 23:58:27.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c4694635d722c28365be1847e0d43825591dac19b8d8c8e03566bb85cd2ec0e,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (38 times)
Nov 29 23:58:54.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-55.us-east-2.compute.internal node/ip-10-0-153-55.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Created
Nov 29 23:58:54.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-55.us-east-2.compute.internal node/ip-10-0-153-55.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e81d3764b1326b5a932b4f2bbe33e38b8a6d88e2613c96f1559a095c15548da5
Nov 29 23:58:54.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-55.us-east-2.compute.internal node/ip-10-0-153-55.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Started
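
The MultipleVersions events name the two operand images in transition as a comma-separated pair. A sketch (an assumed helper, not part of the operator) that pulls the "from" and "to" pullspecs out of one message:

    import re

    # Extract the two pullspecs from a MultipleVersions message; the
    # comma-separated old,new pair is the format seen in the events above.
    msg = ('multiple versions found, probably in transition: '
           'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c4694635d722c28365be1847e0d43825591dac19b8d8c8e03566bb85cd2ec0e,'
           'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1')
    m = re.search(r'in transition: ([^,\s]+),([^,\s]+)', msg)
    from_image, to_image = m.groups()
    print(from_image.rsplit('@', 1)[1][:19])  # sha256:7c4694635d72
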
#1597729510998937600 build-log.txt.gz (44 hours ago)
Nov 30 00:05:30.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.")
Nov 30 00:05:30.000 I ns/openshift-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-79bd4687b9 to 2
Nov 30 00:05:30.000 I ns/openshift-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-589444bfdd to 1
Nov 30 00:05:30.000 I ns/openshift-apiserver replicaset/apiserver-589444bfdd reason/SuccessfulCreate Created pod: apiserver-589444bfdd-wnqrw
Nov 30 00:05:30.000 I ns/openshift-apiserver replicaset/apiserver-79bd4687b9 reason/SuccessfulDelete Deleted pod: apiserver-79bd4687b9-5spb5
Nov 30 00:05:30.000 I ns/default namespace/kube-system node/apiserver-79bd4687b9-5spb5 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 30 00:05:30.000 I ns/default namespace/kube-system node/apiserver-79bd4687b9-5spb5 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 30 00:05:30.000 I ns/default namespace/kube-system node/apiserver-79bd4687b9-5spb5 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 30 00:05:30.000 I ns/default namespace/kube-system node/apiserver-79bd4687b9-5spb5 reason/TerminationStoppedServing Server has stopped listening
Nov 30 00:05:30.689 W clusteroperator/openshift-apiserver condition/Progressing status/True reason/APIServerDeployment_NewGeneration changed: APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.
Nov 30 00:05:30.689 - 12s   W clusteroperator/openshift-apiserver condition/Progressing status/True reason/APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.
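
The Progressing=True condition above is driven by the Deployment's status.observedGeneration lagging behind metadata.generation. A sketch of the same check with kubectl (cluster access assumed; not part of the job):

    import json
    import subprocess

    # Reproduce the "observed generation is N, desired generation is M"
    # comparison the operator reports, using the standard Deployment fields.
    out = subprocess.check_output([
        'kubectl', 'get', 'deployment', 'apiserver',
        '-n', 'openshift-apiserver', '-o', 'json'])
    dep = json.loads(out)
    desired = dep['metadata']['generation']
    observed = dep['status'].get('observedGeneration', 0)
    if observed < desired:
        print(f'progressing: observed generation is {observed}, '
              f'desired generation is {desired}')
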
#1597729510998937600 build-log.txt.gz (44 hours ago)
Nov 30 00:06:53.000 I ns/openshift-apiserver pod/apiserver-79bd4687b9-ctbqt node/ip-10-0-153-55.us-east-2.compute.internal container/openshift-apiserver-check-endpoints reason/Killing
Nov 30 00:06:53.000 I ns/openshift-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-79bd4687b9 to 1
Nov 30 00:06:53.000 I ns/openshift-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-589444bfdd to 2
Nov 30 00:06:53.000 I ns/openshift-apiserver replicaset/apiserver-589444bfdd reason/SuccessfulCreate Created pod: apiserver-589444bfdd-8lqtf
Nov 30 00:06:53.000 I ns/openshift-apiserver replicaset/apiserver-79bd4687b9 reason/SuccessfulDelete Deleted pod: apiserver-79bd4687b9-ctbqt
Nov 30 00:06:53.000 I ns/default namespace/kube-system node/apiserver-79bd4687b9-ctbqt reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 30 00:06:53.000 I ns/default namespace/kube-system node/apiserver-79bd4687b9-ctbqt reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 30 00:06:53.000 I ns/default namespace/kube-system node/apiserver-79bd4687b9-ctbqt reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 30 00:06:53.000 I ns/default namespace/kube-system node/apiserver-79bd4687b9-ctbqt reason/TerminationStoppedServing Server has stopped listening
Nov 30 00:06:53.000 W ns/openshift-apiserver pod/apiserver-79bd4687b9-ctbqt node/ip-10-0-153-55.us-east-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.30:8443/healthz": dial tcp 10.130.0.30:8443: connect: connection refused
Nov 30 00:06:53.529 I ns/openshift-apiserver pod/apiserver-589444bfdd-wnqrw node/ip-10-0-148-206.us-east-2.compute.internal container/openshift-apiserver reason/Ready
#1597729510998937600 build-log.txt.gz (44 hours ago)
Nov 30 00:08:05.000 I ns/openshift-apiserver pod/apiserver-79bd4687b9-rqr2f node/ip-10-0-238-107.us-east-2.compute.internal container/openshift-apiserver-check-endpoints reason/Killing
Nov 30 00:08:05.000 I ns/openshift-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-79bd4687b9 to 0
Nov 30 00:08:05.000 I ns/openshift-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-589444bfdd to 3
Nov 30 00:08:05.000 I ns/openshift-apiserver replicaset/apiserver-589444bfdd reason/SuccessfulCreate Created pod: apiserver-589444bfdd-p5c5s
Nov 30 00:08:05.000 I ns/openshift-apiserver replicaset/apiserver-79bd4687b9 reason/SuccessfulDelete Deleted pod: apiserver-79bd4687b9-rqr2f
Nov 30 00:08:05.000 I ns/default namespace/kube-system node/apiserver-79bd4687b9-rqr2f reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 30 00:08:05.000 I ns/default namespace/kube-system node/apiserver-79bd4687b9-rqr2f reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 30 00:08:05.000 I ns/default namespace/kube-system node/apiserver-79bd4687b9-rqr2f reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 30 00:08:05.000 I ns/default namespace/kube-system node/apiserver-79bd4687b9-rqr2f reason/TerminationStoppedServing Server has stopped listening
Nov 30 00:08:05.118 I ns/openshift-apiserver pod/apiserver-589444bfdd-8lqtf node/ip-10-0-153-55.us-east-2.compute.internal container/openshift-apiserver reason/Ready
Nov 30 00:08:05.166 I ns/openshift-apiserver pod/apiserver-79bd4687b9-rqr2f node/ip-10-0-238-107.us-east-2.compute.internal reason/GracefulDelete duration/70s
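
Each rolled apiserver pod emits the same ordered Termination* events (TerminationStart, TerminationPreShutdownHooksFinished, TerminationMinimalShutdownDurationFinished, TerminationStoppedServing). A sketch that groups them per pod; note that in these openshift-apiserver events the pod name rides in the node/ locator, as in the lines above:

    import re
    from collections import defaultdict

    # Rebuild each pod's shutdown timeline from its Termination* events.
    TERM = re.compile(r'^(\w+ \d+ [\d:.]+) .* node/(\S+) reason/(Termination\w+)')

    def shutdown_timelines(lines):
        timelines = defaultdict(list)
        for line in lines:
            m = TERM.match(line)
            if m:
                ts, pod_or_node, reason = m.groups()
                timelines[pod_or_node].append((ts, reason))
        return timelines
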
periodic-ci-openshift-release-master-nightly-4.9-e2e-metal-ipi-upgrade (all) - 3 runs, 33% failed, 100% of failures match = 33% impact
#1597853645901139968 build-log.txt.gz (36 hours ago)
Nov 30 08:48:14.000 I ns/openshift-kube-apiserver pod/installer-10-master-0 reason/StaticPodInstallerCompleted Successfully installed revision 10
Nov 30 08:48:14.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-0 node/master-0 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 30 08:48:15.450 I ns/openshift-kube-apiserver pod/installer-10-master-0 node/master-0 container/installer reason/ContainerExit code/0 cause/Completed
Nov 30 08:48:23.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection master-1_8feb4aaf-c6a0-46e7-b37f-1dd10957c01f became leader
Nov 30 08:49:08.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.084563805023799 over 5 minutes on "BareMetal"; disk metrics are: etcd-master-0=0.0019600000000000034,etcd-master-1=0.000999252336448598,etcd-master-2=0.00271000000000001. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 30 08:49:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-0 node/master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 08:49:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-0 node/master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 08:49:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-0 node/master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 08:49:26.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-0 node/master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 08:49:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-0 node/master-0 container/setup reason/Pulling image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671
Nov 30 08:49:39.492 W ns/openshift-kube-apiserver pod/kube-apiserver-master-0 node/master-0 invariant violation (bug): static pod should not transition Running->Pending with same UID
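
The "invariant violation (bug)" line above is the test framework flagging a static pod that went Running->Pending with the same UID. A grep-style sketch for surfacing those markers from a decompressed build log (path illustrative):

    # Surface invariant-violation markers from a decompressed build-log.txt.
    with open('build-log.txt', encoding='utf-8') as f:
        for n, line in enumerate(f, 1):
            if 'invariant violation' in line:
                print(f'{n}: {line.rstrip()}')
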
#1597853645901139968 build-log.txt.gz (36 hours ago)
Nov 30 08:50:53.042 I ns/openshift-kube-apiserver pod/installer-10-master-2 node/master-2 container/installer reason/ContainerExit code/0 cause/Completed
Nov 30 08:50:55.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671 (15 times)
Nov 30 08:51:18.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671 (16 times)
Nov 30 08:51:49.000 W ns/openshift-network-diagnostics node/worker-0 reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-master-0: failed to establish a TCP connection to 192.168.111.20:6443: dial tcp 192.168.111.20:6443: connect: connection refused
Nov 30 08:51:49.000 I ns/openshift-network-diagnostics node/worker-0 reason/ConnectivityRestored roles/worker Connectivity restored after 1m0.000469502s: kubernetes-apiserver-endpoint-master-0: tcp connection to 192.168.111.20:6443 succeeded
Nov 30 08:52:02.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-2 node/master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 08:52:02.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-2 node/master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 08:52:02.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-2 node/master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 08:52:04.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-2 node/master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 08:52:14.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-2 node/master-2 container/setup reason/Pulling image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671
Nov 30 08:52:14.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671 (17 times)
#1597853645901139968 build-log.txt.gz (36 hours ago)
Nov 30 08:53:28.383 I ns/openshift-kube-apiserver pod/installer-10-master-1 node/master-1 container/installer reason/ContainerExit code/0 cause/Completed
Nov 30 08:53:31.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671 (30 times)
Nov 30 08:53:33.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671 (31 times)
Nov 30 08:53:40.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection master-2_9314716f-fdc6-45f0-b631-f3a9beaa8947 became leader
Nov 30 08:54:18.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5f2a67e511600a82fd42c52acbdefec55f3720c642f220c819126968e6755671 (32 times)
Nov 30 08:54:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-1 node/master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 30 08:54:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-1 node/master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 30 08:54:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-1 node/master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 30 08:54:40.000 I ns/openshift-kube-apiserver pod/kube-apiserver-master-1 node/master-1 container/kube-apiserver reason/Killing
Nov 30 08:54:42.305 I ns/openshift-marketplace pod/community-operators-fcn99 node/worker-1 reason/Scheduled
Nov 30 08:54:42.308 I ns/openshift-marketplace pod/certified-operators-kmhpk node/worker-1 reason/Scheduled
#1597853645901139968 build-log.txt.gz (36 hours ago)
Nov 30 09:04:02.095 W ns/openshift-apiserver pod/apiserver-65d45d654b-jw6dx reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 09:04:02.120 I ns/openshift-machine-api pod/machine-api-operator-78c4c4664b-2qj7c node/master-2 container/kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
Nov 30 09:04:02.120 E ns/openshift-machine-api pod/machine-api-operator-78c4c4664b-2qj7c node/master-2 container/machine-api-operator reason/ContainerExit code/2 cause/Error
Nov 30 09:04:02.202 I ns/openshift-machine-api pod/machine-api-operator-78c4c4664b-2qj7c node/master-2 reason/Deleted
Nov 30 09:04:03.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Nov 30 09:04:14.000 I ns/openshift-apiserver pod/apiserver-5f946c4997-nz4cw node/apiserver-5f946c4997-nz4cw reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 09:04:14.000 I ns/openshift-apiserver pod/apiserver-5f946c4997-nz4cw node/apiserver-5f946c4997-nz4cw reason/TerminationStoppedServing Server has stopped listening
Nov 30 09:04:50.676 I ns/openshift-marketplace pod/certified-operators-7zpmn node/worker-1 reason/Scheduled
Nov 30 09:04:50.700 I ns/openshift-marketplace pod/certified-operators-7zpmn node/ reason/Created
Nov 30 09:04:52.000 I ns/openshift-marketplace pod/certified-operators-7zpmn node/worker-1 container/registry-server reason/Pulling image/registry.redhat.io/redhat/certified-operator-index:v4.9
Nov 30 09:04:52.000 I ns/openshift-marketplace pod/certified-operators-7zpmn reason/AddedInterface Add eth0 [10.129.2.25/23] from openshift-sdn
#1597853645901139968 build-log.txt.gz (36 hours ago)
Nov 30 09:05:31.937 W ns/openshift-apiserver pod/apiserver-65d45d654b-fn6s2 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 09:05:31.962 I ns/openshift-apiserver pod/apiserver-65d45d654b-jw6dx node/master-2 container/openshift-apiserver reason/Ready
Nov 30 09:05:32.042 I ns/openshift-apiserver pod/apiserver-5f946c4997-l8v7b node/master-1 reason/GracefulDelete duration/90s
Nov 30 09:05:32.043 I ns/openshift-apiserver pod/apiserver-65d45d654b-fn6s2 node/ reason/Created
Nov 30 09:05:33.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 30 09:05:46.000 I ns/openshift-apiserver pod/apiserver-5f946c4997-l8v7b node/apiserver-5f946c4997-l8v7b reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 09:05:46.000 I ns/openshift-apiserver pod/apiserver-5f946c4997-l8v7b node/apiserver-5f946c4997-l8v7b reason/TerminationStoppedServing Server has stopped listening
Nov 30 09:05:50.000 W ns/openshift-network-diagnostics node/worker-0 reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: openshift-apiserver-endpoint-master-1: failed to establish a TCP connection to 10.130.0.44:8443: dial tcp 10.130.0.44:8443: connect: connection refused
Nov 30 09:05:59.000 W ns/openshift-network-diagnostics node/worker-0 reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: openshift-apiserver-endpoint-master-2: failed to establish a TCP connection to 10.129.0.71:8443: dial tcp 10.129.0.71:8443: connect: connection refused
Nov 30 09:06:09.497 I ns/openshift-marketplace pod/community-operators-fxdpc node/worker-1 reason/Scheduled
Nov 30 09:06:09.523 I ns/openshift-marketplace pod/community-operators-fxdpc node/ reason/Created
#1597853645901139968 build-log.txt.gz (36 hours ago)
Nov 30 09:07:03.198 I ns/openshift-apiserver pod/apiserver-65d45d654b-fn6s2 node/master-1 container/openshift-apiserver reason/Ready
Nov 30 09:07:03.279 I ns/openshift-apiserver pod/apiserver-5f946c4997-xmlj6 node/master-0 reason/GracefulDelete duration/90s
Nov 30 09:07:03.279 I ns/openshift-apiserver pod/apiserver-65d45d654b-t2h7g node/ reason/Created
Nov 30 09:07:04.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well")
Nov 30 09:07:04.761 W clusteroperator/openshift-apiserver condition/Progressing status/False reason/AsExpected changed: All is well
Nov 30 09:07:18.000 I ns/openshift-apiserver pod/apiserver-5f946c4997-xmlj6 node/apiserver-5f946c4997-xmlj6 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 30 09:07:18.000 I ns/openshift-apiserver pod/apiserver-5f946c4997-xmlj6 node/apiserver-5f946c4997-xmlj6 reason/TerminationStoppedServing Server has stopped listening
Nov 30 09:08:03.560 - 16s   W ns/openshift-apiserver pod/apiserver-65d45d654b-t2h7g node/ pod has been pending longer than a minute
Nov 30 09:08:16.606 W ns/openshift-apiserver pod/apiserver-65d45d654b-t2h7g reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 30 09:08:18.000 I ns/openshift-apiserver pod/apiserver-5f946c4997-xmlj6 node/apiserver-5f946c4997-xmlj6 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 30 09:08:19.662 I ns/openshift-apiserver pod/apiserver-65d45d654b-t2h7g node/master-0 reason/Scheduled
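
The FailedScheduling messages above describe the expected squeeze during this rollout: of the six nodes, the three workers fail the pod's node selector, and the three masters each still host an old apiserver pod that the anti-affinity rule counts against, so the surge pod stays Pending until an old pod is deleted. A sketch that tallies those events per pod (build-log.txt is the decompressed log, path illustrative):

    import re
    from collections import Counter

    # Tally FailedScheduling events per openshift-apiserver pod.
    FAILED = re.compile(r'ns/openshift-apiserver pod/(\S+) reason/FailedScheduling')
    pending = Counter()
    with open('build-log.txt', encoding='utf-8') as f:
        for line in f:
            m = FAILED.search(line)
            if m:
                pending[m.group(1)] += 1
    print(pending.most_common(3))
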
periodic-ci-openshift-release-master-nightly-4.10-upgrade-from-stable-4.9-e2e-aws-upgrade (all) - 5 runs, 40% failed, 100% of failures match = 40% impact
#1597625143641772032 build-log.txt.gz (2 days ago)
Nov 29 17:02:12.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-131-49.us-west-2.compute.internal is unhealthy"
Nov 29 17:02:12.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-131-49.us-west-2.compute.internal is unhealthy"
Nov 29 17:02:12.457 - 59s   I alert/etcdMembersDown ns/openshift-etcd pod/etcd-ip-10-0-131-49.us-west-2.compute.internal ALERTS{alertname="etcdMembersDown", alertstate="pending", job="etcd", namespace="openshift-etcd", pod="etcd-ip-10-0-131-49.us-west-2.compute.internal", prometheus="openshift-monitoring/k8s", service="etcd", severity="critical"}
Nov 29 17:02:15.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7716740890ddfd97b8a08679a7b9b4e976bf31108136787707a22f5e00e7cf5 (41 times)
Nov 29 17:02:15.000 W ns/openshift-etcd pod/etcd-quorum-guard-5d7698f7f4-w65hg node/ip-10-0-131-49.us-west-2.compute.internal reason/Unhealthy Readiness probe failed:  (6 times)
Nov 29 17:02:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-233-214.us-west-2.compute.internal node/ip-10-0-233-214 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 29 17:02:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-233-214.us-west-2.compute.internal node/ip-10-0-233-214 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 17:02:17.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-233-214.us-west-2.compute.internal node/ip-10-0-233-214 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 17:02:18.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7716740890ddfd97b8a08679a7b9b4e976bf31108136787707a22f5e00e7cf5 (42 times)
Nov 29 17:02:19.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-233-214.us-west-2.compute.internal node/ip-10-0-233-214 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 17:02:20.000 W ns/openshift-etcd pod/etcd-quorum-guard-5d7698f7f4-w65hg node/ip-10-0-131-49.us-west-2.compute.internal reason/Unhealthy Readiness probe failed:  (7 times)
#1597625143641772032 build-log.txt.gz (2 days ago)
Nov 29 17:05:58.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a039743ceb1300478a5bc4181dbba2a87027ff9bc4213e030b72a9ac82002b2b (19 times)
Nov 29 17:07:01.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a039743ceb1300478a5bc4181dbba2a87027ff9bc4213e030b72a9ac82002b2b (20 times)
Nov 29 17:07:04.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a039743ceb1300478a5bc4181dbba2a87027ff9bc4213e030b72a9ac82002b2b (21 times)
Nov 29 17:07:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-161.us-west-2.compute.internal node/ip-10-0-172-161 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 29 17:07:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-161.us-west-2.compute.internal node/ip-10-0-172-161 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 17:07:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-161.us-west-2.compute.internal node/ip-10-0-172-161 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 17:07:51.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-161.us-west-2.compute.internal node/ip-10-0-172-161 reason/TerminationGracefulTerminationFinished All pending requests processed
#1597625143641772032 build-log.txt.gz (2 days ago)
Nov 29 17:10:47.457 - 59s   I alert/KubeContainerWaiting ns/openshift-marketplace pod/community-operators-895mt container/registry-server ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="registry-server", namespace="openshift-marketplace", pod="community-operators-895mt", prometheus="openshift-monitoring/k8s", severity="warning"}
Nov 29 17:10:47.457 - 59s   I alert/KubePodNotReady ns/openshift-marketplace pod/community-operators-895mt ALERTS{alertname="KubePodNotReady", alertstate="pending", namespace="openshift-marketplace", pod="community-operators-895mt", prometheus="openshift-monitoring/k8s", severity="warning"}
Nov 29 17:10:58.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a039743ceb1300478a5bc4181dbba2a87027ff9bc4213e030b72a9ac82002b2b (38 times)
Nov 29 17:11:58.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a039743ceb1300478a5bc4181dbba2a87027ff9bc4213e030b72a9ac82002b2b (39 times)
Nov 29 17:12:58.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a039743ceb1300478a5bc4181dbba2a87027ff9bc4213e030b72a9ac82002b2b (40 times)
Nov 29 17:13:03.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-49.us-west-2.compute.internal node/ip-10-0-131-49 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 29 17:13:03.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-49.us-west-2.compute.internal node/ip-10-0-131-49 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 17:13:03.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-49.us-west-2.compute.internal node/ip-10-0-131-49 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 17:13:05.000 W ns/openshift-network-diagnostics node/ip-10-0-172-223.us-west-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-131-49: failed to establish a TCP connection to 10.0.131.49:6443: dial tcp 10.0.131.49:6443: connect: connection refused
Nov 29 17:13:05.000 W ns/openshift-network-diagnostics node/ip-10-0-172-223.us-west-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-172-161: failed to establish a TCP connection to 10.0.172.161:6443: dial tcp 10.0.172.161:6443: connect: connection refused
Nov 29 17:13:05.000 W ns/openshift-network-diagnostics node/ip-10-0-172-223.us-west-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.173.222:443: dial tcp 172.30.173.222:443: connect: connection refused
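
Each ConnectivityOutageDetected event is later paired with a ConnectivityRestored event that reports the outage length as a Go-style duration ("1m0.000469502s"). A sketch converting that reported duration to seconds per probed target, rather than re-deriving it from timestamps:

    import re

    # Parse the Go-style duration in ConnectivityRestored messages.
    DUR = re.compile(r'Connectivity restored after (?:(\d+)m)?([\d.]+)s: ([^:]+):')

    def outage_seconds(msg):
        m = DUR.search(msg)
        if not m:
            return None
        minutes, seconds, target = m.groups()
        return target, int(minutes or 0) * 60 + float(seconds)

    print(outage_seconds('Connectivity restored after 1m0.000469502s: '
                         'kubernetes-apiserver-endpoint-master-0: tcp connection '
                         'to 192.168.111.20:6443 succeeded'))
    # ('kubernetes-apiserver-endpoint-master-0', 60.000469502)
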
#1597625143641772032 build-log.txt.gz (2 days ago)
Nov 29 17:28:00.000 I ns/openshift-ingress deployment/router-default reason/ScalingReplicaSet Scaled up replica set router-default-7bb786b78c to 1
Nov 29 17:28:00.000 I ns/openshift-ingress replicaset/router-default-7bb786b78c reason/SuccessfulCreate Created pod: router-default-7bb786b78c-5sfth
Nov 29 17:28:00.000 I ns/openshift-ingress-canary daemonset/ingress-canary reason/SuccessfulDelete Deleted pod: ingress-canary-k4tqv
Nov 29 17:28:00.000 I ns/openshift-marketplace replicaset/marketplace-operator-7c75f4849f reason/SuccessfulDelete Deleted pod: marketplace-operator-7c75f4849f-p4ctw
Nov 29 17:28:00.000 I ns/openshift-ingress replicaset/router-default-597b8b745c reason/SuccessfulDelete Deleted pod: router-default-597b8b745c-mrfff
Nov 29 17:28:00.000 I ns/openshift-apiserver pod/apiserver-7fffd5dc4f-mr7z2 node/apiserver-7fffd5dc4f-mr7z2 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 17:28:00.000 I ns/openshift-apiserver pod/apiserver-7fffd5dc4f-mr7z2 node/apiserver-7fffd5dc4f-mr7z2 reason/TerminationStoppedServing Server has stopped listening
Nov 29 17:28:00.000 W ns/openshift-marketplace pod/marketplace-operator-7c75f4849f-p4ctw node/ip-10-0-233-214.us-west-2.compute.internal reason/Unhealthy Liveness probe failed: Get "http://10.128.0.33:8080/healthz": dial tcp 10.128.0.33:8080: connect: connection refused (13 times)
Nov 29 17:28:00.000 W ns/openshift-marketplace pod/marketplace-operator-7c75f4849f-p4ctw node/ip-10-0-233-214.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: Get "http://10.128.0.33:8080/healthz": dial tcp 10.128.0.33:8080: connect: connection refused (32 times)
Nov 29 17:28:00.010 I clusteroperator/ingress versions: operator 4.9.52 -> 4.10.0-0.nightly-2022-11-29-161234, ingress-controller quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:855f98aae26cdd63a5376481556dd78665f1e4ad13a7cf9695d574d023abc921 -> quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe0778892d83f66b35b11f792748f1f848add781a0c47fd69c27c0523b2c720, canary-server quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:14e4eeb042c8b0c35d4538d7a30241dbae34f4cdc2b368d91309281df3653e48 -> quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58fb65444d0015feaf6da82a6bf421fd7e88611b1dd9d3df9680a42687d8935a
Nov 29 17:28:00.471 I ns/openshift-marketplace pod/marketplace-operator-57d747594c-xwtqm node/ip-10-0-233-214.us-west-2.compute.internal container/marketplace-operator reason/Ready
#1597625143641772032 build-log.txt.gz (2 days ago)
Nov 29 17:28:42.000 I ns/openshift-console deployment/downloads reason/ScalingReplicaSet Scaled down replica set downloads-74986c5f86 to 0
Nov 29 17:28:42.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-57dd7fd4c6 to 1
Nov 29 17:28:42.000 I ns/openshift-oauth-apiserver replicaset/apiserver-57dd7fd4c6 reason/SuccessfulCreate Created pod: apiserver-57dd7fd4c6-59t58
Nov 29 17:28:42.000 I ns/openshift-oauth-apiserver replicaset/apiserver-54794d7bb reason/SuccessfulDelete Deleted pod: apiserver-54794d7bb-6kjx5
Nov 29 17:28:42.000 I ns/openshift-console replicaset/downloads-74986c5f86 reason/SuccessfulDelete Deleted pod: downloads-74986c5f86-p5nhr
Nov 29 17:28:42.000 I ns/default namespace/kube-system node/apiserver-54794d7bb-6kjx5 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 29 17:28:42.000 I ns/default namespace/kube-system node/apiserver-54794d7bb-6kjx5 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 17:28:42.000 I ns/default namespace/kube-system node/apiserver-54794d7bb-6kjx5 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 29 17:28:42.000 I ns/default namespace/kube-system node/apiserver-54794d7bb-6kjx5 reason/TerminationStoppedServing Server has stopped listening
Nov 29 17:28:42.604 I ns/openshift-oauth-apiserver pod/apiserver-54794d7bb-6kjx5 node/ip-10-0-131-49.us-west-2.compute.internal reason/GracefulDelete duration/70s
Nov 29 17:28:42.675 W ns/openshift-oauth-apiserver pod/apiserver-57dd7fd4c6-59t58 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
#1596156590547800064 build-log.txt.gz (6 days ago)
Nov 25 15:50:48.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-180-59.us-east-2.compute.internal is unhealthy"
Nov 25 15:50:50.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0fca8dc7fa582d358e8083ff7b8a51dd92ecde0537e22aa587c8b583358e722 (39 times)
Nov 25 15:50:52.000 W ns/openshift-etcd pod/etcd-quorum-guard-56bd8cc5f8-v7bmq node/ip-10-0-180-59.us-east-2.compute.internal reason/Unhealthy Readiness probe failed:  (8 times)
Nov 25 15:50:52.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/UnhealthyEtcdMember unhealthy members: ip-10-0-180-59.us-east-2.compute.internal (2 times)
Nov 25 15:50:53.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0fca8dc7fa582d358e8083ff7b8a51dd92ecde0537e22aa587c8b583358e722 (40 times)
Nov 25 15:50:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-200-105.us-east-2.compute.internal node/ip-10-0-200-105 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 25 15:50:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-200-105.us-east-2.compute.internal node/ip-10-0-200-105 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 15:50:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-200-105.us-east-2.compute.internal node/ip-10-0-200-105 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 15:50:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-200-105.us-east-2.compute.internal node/ip-10-0-200-105 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 25 15:50:57.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-200-105.us-east-2.compute.internal node/ip-10-0-200-105.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.200.105:6443/healthz": dial tcp 10.0.200.105:6443: connect: connection refused\nbody: \n
Nov 25 15:50:57.000 W ns/openshift-etcd pod/etcd-quorum-guard-56bd8cc5f8-v7bmq node/ip-10-0-180-59.us-east-2.compute.internal reason/Unhealthy Readiness probe failed:  (9 times)
#1596156590547800064 build-log.txt.gz (6 days ago)
Nov 25 15:55:41.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2467d18855faaab4015f62229cdaa407613cededf90d496f2f29090c06e239b5,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa (20 times)
Nov 25 15:55:55.168 - 209s  I alert/APIRemovedInNextEUSReleaseInUse ns/openshift-kube-apiserver ALERTS{alertname="APIRemovedInNextEUSReleaseInUse", alertstate="pending", group="autoscaling", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", resource="horizontalpodautoscalers", severity="info", version="v2beta1"}
Nov 25 15:55:55.168 - 209s  I alert/APIRemovedInNextEUSReleaseInUse ns/openshift-kube-apiserver ALERTS{alertname="APIRemovedInNextEUSReleaseInUse", alertstate="pending", group="policy", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", resource="poddisruptionbudgets", severity="info", version="v1beta1"}
Nov 25 15:56:07.000 W ns/openshift-network-diagnostics node/ip-10-0-208-252.us-east-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-200-105: failed to establish a TCP connection to 10.0.200.105:6443: dial tcp 10.0.200.105:6443: connect: connection refused
Nov 25 15:56:07.000 I ns/openshift-network-diagnostics node/ip-10-0-208-252.us-east-2.compute.internal reason/ConnectivityRestored roles/worker Connectivity restored after 59.99935838s: kubernetes-apiserver-endpoint-ip-10-0-200-105: tcp connection to 10.0.200.105:6443 succeeded
Nov 25 15:56:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-32.us-east-2.compute.internal node/ip-10-0-169-32 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 25 15:56:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-32.us-east-2.compute.internal node/ip-10-0-169-32 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 15:56:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-32.us-east-2.compute.internal node/ip-10-0-169-32 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 15:56:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-32.us-east-2.compute.internal node/ip-10-0-169-32 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 25 15:56:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-32.us-east-2.compute.internal node/ip-10-0-169-32.us-east-2.compute.internal container/setup reason/Pulling image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2467d18855faaab4015f62229cdaa407613cededf90d496f2f29090c06e239b5
Nov 25 15:56:29.022 W ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-32.us-east-2.compute.internal node/ip-10-0-169-32.us-east-2.compute.internal invariant violation (bug): static pod should not transition Running->Pending with same UID
#1596156590547800064 build-log.txt.gz (6 days ago)
Nov 25 16:01:33.824 I ns/openshift-marketplace pod/redhat-marketplace-vqrhx node/ip-10-0-173-184.us-east-2.compute.internal container/registry-server reason/Ready
Nov 25 16:01:33.843 I ns/openshift-marketplace pod/redhat-marketplace-vqrhx node/ip-10-0-173-184.us-east-2.compute.internal reason/GracefulDelete duration/1s
Nov 25 16:01:35.000 I ns/openshift-marketplace pod/redhat-marketplace-vqrhx node/ip-10-0-173-184.us-east-2.compute.internal container/registry-server reason/Killing
Nov 25 16:01:36.811 I ns/openshift-marketplace pod/redhat-marketplace-vqrhx node/ip-10-0-173-184.us-east-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 25 16:01:36.812 I ns/openshift-marketplace pod/redhat-marketplace-vqrhx node/ip-10-0-173-184.us-east-2.compute.internal reason/Deleted
Nov 25 16:01:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-180-59.us-east-2.compute.internal node/ip-10-0-180-59 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 25 16:01:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-180-59.us-east-2.compute.internal node/ip-10-0-180-59 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 16:01:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-180-59.us-east-2.compute.internal node/ip-10-0-180-59 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 16:01:46.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-180-59.us-east-2.compute.internal node/ip-10-0-180-59.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.180.59:6443/healthz": dial tcp 10.0.180.59:6443: connect: connection refused\nbody: \n
Nov 25 16:01:46.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-180-59.us-east-2.compute.internal node/ip-10-0-180-59.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.0.180.59:6443/healthz": dial tcp 10.0.180.59:6443: connect: connection refused
Nov 25 16:01:47.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-180-59.us-east-2.compute.internal node/ip-10-0-180-59 reason/TerminationGracefulTerminationFinished All pending requests processed
#1596156590547800064 build-log.txt.gz (6 days ago)
Nov 25 16:18:26.000 I ns/openshift-monitoring pod/thanos-querier-57c8cdb669-cmzhs node/ip-10-0-156-20.us-east-2.compute.internal container/thanos-query reason/Killing
Nov 25 16:18:26.000 I ns/openshift-monitoring pod/thanos-querier-57c8cdb669-rjhb8 node/ip-10-0-208-252.us-east-2.compute.internal container/thanos-query reason/Killing
Nov 25 16:18:26.000 I ns/openshift-monitoring pod/grafana-c8fdffc9-4m2g6 reason/AddedInterface Add eth0 [10.128.2.18/23] from openshift-sdn
Nov 25 16:18:26.000 I ns/openshift-monitoring pod/prometheus-adapter-7cbc9c8bf8-vmsfh reason/AddedInterface Add eth0 [10.129.2.35/23] from openshift-sdn
Nov 25 16:18:26.000 I ns/openshift-ingress-canary daemonset/ingress-canary reason/SuccessfulCreate Created pod: ingress-canary-t8fzj
Nov 25 16:18:26.000 I ns/openshift-apiserver pod/apiserver-575c48c487-5jkwt node/apiserver-575c48c487-5jkwt reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 25 16:18:26.000 I ns/openshift-apiserver pod/apiserver-575c48c487-5jkwt node/apiserver-575c48c487-5jkwt reason/TerminationStoppedServing Server has stopped listening
Nov 25 16:18:26.143 I ns/openshift-monitoring pod/thanos-querier-57c8cdb669-cmzhs node/ip-10-0-156-20.us-east-2.compute.internal reason/GracefulDelete duration/120s
Nov 25 16:18:26.158 I ns/openshift-monitoring pod/thanos-querier-57c8cdb669-rjhb8 node/ip-10-0-208-252.us-east-2.compute.internal reason/GracefulDelete duration/120s
Nov 25 16:18:26.277 E ns/openshift-ingress-canary pod/ingress-canary-49cgn node/ip-10-0-156-20.us-east-2.compute.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Nov 25 16:18:26.305 I ns/openshift-ingress-canary pod/ingress-canary-49cgn node/ip-10-0-156-20.us-east-2.compute.internal reason/Deleted
#1596156590547800064 build-log.txt.gz (6 days ago)
Nov 25 16:19:07.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-795c5dfcfb to 1
Nov 25 16:19:07.000 I ns/openshift-oauth-apiserver replicaset/apiserver-795c5dfcfb reason/SuccessfulCreate Created pod: apiserver-795c5dfcfb-2jsrt
Nov 25 16:19:07.000 I ns/openshift-image-registry daemonset/node-ca reason/SuccessfulCreate Created pod: node-ca-4fmmh
Nov 25 16:19:07.000 I ns/openshift-oauth-apiserver replicaset/apiserver-c5f6f4976 reason/SuccessfulDelete Deleted pod: apiserver-c5f6f4976-d2zxl
Nov 25 16:19:07.000 I ns/openshift-console replicaset/console-5b6c74986f reason/SuccessfulDelete Deleted pod: console-5b6c74986f-fjx8m
Nov 25 16:19:07.000 I ns/default namespace/kube-system node/apiserver-c5f6f4976-d2zxl reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 25 16:19:07.000 I ns/default namespace/kube-system node/apiserver-c5f6f4976-d2zxl reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 25 16:19:07.000 I ns/default namespace/kube-system node/apiserver-c5f6f4976-d2zxl reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 25 16:19:07.000 I ns/default namespace/kube-system node/apiserver-c5f6f4976-d2zxl reason/TerminationStoppedServing Server has stopped listening
Nov 25 16:19:07.092 I ns/openshift-image-registry pod/node-ca-t6lm4 node/ip-10-0-173-184.us-east-2.compute.internal container/node-ca reason/ContainerExit code/0 cause/Completed
Nov 25 16:19:07.105 I ns/openshift-image-registry pod/node-ca-t6lm4 node/ip-10-0-173-184.us-east-2.compute.internal reason/Deleted
#1596156590547800064 build-log.txt.gz (6 days ago)
Nov 25 16:19:50.893 I ns/openshift-cluster-node-tuning-operator pod/tuned-cmh5v node/ip-10-0-208-252.us-east-2.compute.internal container/tuned reason/Ready
Nov 25 16:19:54.168 - 299s  I alert/RedhatOperatorsCatalogError node/10.128.0.86:8443 ns/openshift-operator-lifecycle-manager pod/catalog-operator-77f6d9686f-862bm container/catalog-operator ALERTS{alertname="RedhatOperatorsCatalogError", alertstate="pending", container="catalog-operator", endpoint="https-metrics", exported_namespace="openshift-marketplace", instance="10.128.0.86:8443", job="catalog-operator-metrics", name="redhat-operators", namespace="openshift-operator-lifecycle-manager", pod="catalog-operator-77f6d9686f-862bm", prometheus="openshift-monitoring/k8s", service="catalog-operator-metrics", severity="warning"}
Nov 25 16:19:56.168 - 299s  I alert/ThanosSidecarNoConnectionToStartedPrometheus node/10.131.0.41:10902 ns/openshift-monitoring pod/prometheus-k8s-1 container/kube-rbac-proxy-thanos ALERTS{alertname="ThanosSidecarNoConnectionToStartedPrometheus", alertstate="pending", container="kube-rbac-proxy-thanos", endpoint="thanos-proxy", instance="10.131.0.41:10902", job="prometheus-k8s-thanos-sidecar", namespace="openshift-monitoring", pod="prometheus-k8s-1", prometheus="openshift-monitoring/k8s", service="prometheus-k8s-thanos-sidecar", severity="warning"}
Nov 25 16:19:57.168 - 299s  I alert/PrometheusNotConnectedToAlertmanagers node/10.131.0.41:9091 ns/openshift-monitoring pod/prometheus-k8s-1 container/prometheus-proxy ALERTS{alertname="PrometheusNotConnectedToAlertmanagers", alertstate="pending", container="prometheus-proxy", endpoint="web", instance="10.131.0.41:9091", job="prometheus-k8s", namespace="openshift-monitoring", pod="prometheus-k8s-1", prometheus="openshift-monitoring/k8s", service="prometheus-k8s", severity="warning"}
Nov 25 16:19:59.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/OAuthAPIServerWaitForLatest the oauth-apiserver hasn't reported its version to be "4.10.0-0.nightly-2022-11-25-145653" yet, its current version is "4.9.52" (2 times)
Nov 25 16:20:01.000 I ns/openshift-apiserver pod/apiserver-575c48c487-zw4kf node/apiserver-575c48c487-zw4kf reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 25 16:20:01.000 I ns/openshift-apiserver pod/apiserver-575c48c487-zw4kf node/apiserver-575c48c487-zw4kf reason/TerminationStoppedServing Server has stopped listening
Nov 25 16:20:03.168 - 209s  I alert/KubeContainerWaiting ns/openshift-operator-lifecycle-manager pod/catalog-operator-77f6d9686f-862bm container/catalog-operator ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="catalog-operator", namespace="openshift-operator-lifecycle-manager", pod="catalog-operator-77f6d9686f-862bm", prometheus="openshift-monitoring/k8s", severity="warning"}
Nov 25 16:20:03.168 - 209s  I alert/KubeContainerWaiting ns/openshift-marketplace pod/marketplace-operator-749fdddffd-pfzt6 container/marketplace-operator ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="marketplace-operator", namespace="openshift-marketplace", pod="marketplace-operator-749fdddffd-pfzt6", prometheus="openshift-monitoring/k8s", severity="warning"}
Nov 25 16:20:03.168 - 209s  I alert/KubeContainerWaiting ns/openshift-operator-lifecycle-manager pod/olm-operator-5bfb96978d-mk4x9 container/olm-operator ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="olm-operator", namespace="openshift-operator-lifecycle-manager", pod="olm-operator-5bfb96978d-mk4x9", prometheus="openshift-monitoring/k8s", severity="warning"}
Nov 25 16:20:03.168 - 209s  I alert/KubeContainerWaiting ns/openshift-operator-lifecycle-manager pod/package-server-manager-9c74dd77c-bhwlz container/package-server-manager ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="package-server-manager", namespace="openshift-operator-lifecycle-manager", pod="package-server-manager-9c74dd77c-bhwlz", prometheus="openshift-monitoring/k8s", severity="warning"}
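
The ALERTS{...} selectors in these lines are ordinary Prometheus series, so the same pending alerts can be pulled from the in-cluster Prometheus over its HTTP API. A sketch with assumed values: PROM_URL and TOKEN are placeholders for the monitoring route and a bearer token, not anything taken from this log:

    import requests

    # Query the ALERTS series via the Prometheus HTTP API (/api/v1/query).
    PROM_URL = 'https://prometheus-k8s-openshift-monitoring.apps.example.com'  # assumed route
    TOKEN = '<bearer token with monitoring access>'  # placeholder

    resp = requests.get(
        f'{PROM_URL}/api/v1/query',
        params={'query': 'ALERTS{alertstate="pending", severity="warning"}'},
        headers={'Authorization': f'Bearer {TOKEN}'})
    for result in resp.json()['data']['result']:
        print(result['metric'].get('alertname'), result['metric'].get('namespace'))
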
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-aws-upgrade (all) - 10 runs, 30% failed, 67% of failures match = 20% impact
#1597521807814955008 build-log.txt.gz (2 days ago)
Nov 29 10:14:36.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-129-93.ec2.internal is unhealthy"
Nov 29 10:14:38.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (40 times)
Nov 29 10:14:40.000 W ns/openshift-etcd pod/etcd-quorum-guard-56659848c-f8qfb node/ip-10-0-129-93.ec2.internal reason/Unhealthy Readiness probe failed:  (6 times)
Nov 29 10:14:41.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (41 times)
Nov 29 10:14:42.708 - 89s   I alert/etcdMembersDown ns/openshift-etcd pod/etcd-ip-10-0-129-93.ec2.internal ALERTS{alertname="etcdMembersDown", alertstate="pending", job="etcd", namespace="openshift-etcd", pod="etcd-ip-10-0-129-93.ec2.internal", prometheus="openshift-monitoring/k8s", service="etcd", severity="critical"}
Nov 29 10:14:44.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-197-106.ec2.internal node/ip-10-0-197-106 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 29 10:14:44.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-197-106.ec2.internal node/ip-10-0-197-106 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 10:14:44.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-197-106.ec2.internal node/ip-10-0-197-106 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 10:14:45.000 W ns/openshift-etcd pod/etcd-quorum-guard-56659848c-f8qfb node/ip-10-0-129-93.ec2.internal reason/Unhealthy Readiness probe failed:  (7 times)
Nov 29 10:14:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-197-106.ec2.internal node/ip-10-0-197-106 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 10:14:48.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-197-106.ec2.internal node/ip-10-0-197-106.ec2.internal container/kube-apiserver reason/Killing
#1597521807814955008 build-log.txt.gz (2 days ago)
Nov 29 10:17:31.897 I ns/openshift-marketplace pod/redhat-marketplace-9np2x node/ip-10-0-202-211.ec2.internal reason/Deleted
Nov 29 10:18:21.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (17 times)
Nov 29 10:19:25.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (18 times)
Nov 29 10:19:28.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (19 times)
Nov 29 10:19:55.708 - 179s  I alert/APIRemovedInNextEUSReleaseInUse ns/openshift-kube-apiserver ALERTS{alertname="APIRemovedInNextEUSReleaseInUse", alertstate="pending", group="policy", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", resource="podsecuritypolicies", severity="info", version="v1beta1"}
Nov 29 10:19:58.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-93.ec2.internal node/ip-10-0-129-93 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 29 10:19:58.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-93.ec2.internal node/ip-10-0-129-93 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 10:19:58.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-93.ec2.internal node/ip-10-0-129-93 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 10:20:00.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-129-93.ec2.internal node/ip-10-0-129-93.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.0.129.93:6443/healthz": dial tcp 10.0.129.93:6443: connect: connection refused\nbody: \n
Nov 29 10:20:00.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-93.ec2.internal node/ip-10-0-129-93 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 29 10:20:00.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-129-93.ec2.internal node/ip-10-0-129-93.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.0.129.93:6443/healthz": dial tcp 10.0.129.93:6443: connect: connection refused
#1597521807814955008 build-log.txt.gz (2 days ago)
Nov 29 10:21:54.000 I ns/openshift-kube-apiserver lease/cert-regeneration-controller-lock reason/LeaderElection ip-10-0-129-93_5112e2f3-25fd-43c5-9f52-91a934f3615c became leader
Nov 29 10:21:54.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (34 times)
Nov 29 10:22:23.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (35 times)
Nov 29 10:23:21.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (36 times)
Nov 29 10:24:21.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-29-092248@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (37 times)
Nov 29 10:25:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-151-153.ec2.internal node/ip-10-0-151-153 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 29 10:25:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-151-153.ec2.internal node/ip-10-0-151-153 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 10:25:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-151-153.ec2.internal node/ip-10-0-151-153 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 10:25:20.000 W ns/openshift-network-diagnostics node/ip-10-0-202-211.ec2.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-151-153: failed to establish a TCP connection to 10.0.151.153:6443: dial tcp 10.0.151.153:6443: connect: connection refused
Nov 29 10:25:20.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-151-153.ec2.internal node/ip-10-0-151-153.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.0.151.153:6443/healthz": dial tcp 10.0.151.153:6443: connect: connection refused\nbody: \n
Nov 29 10:25:20.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-151-153.ec2.internal node/ip-10-0-151-153 reason/TerminationGracefulTerminationFinished All pending requests processed
#1597521807814955008 build-log.txt.gz (2 days ago)
Nov 29 10:41:30.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4."
Nov 29 10:41:30.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-579fb6d8cd to 2
Nov 29 10:41:30.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-79955b96fd to 1
Nov 29 10:41:30.000 I ns/openshift-oauth-apiserver replicaset/apiserver-79955b96fd reason/SuccessfulCreate Created pod: apiserver-79955b96fd-v6qpz
Nov 29 10:41:30.000 I ns/openshift-oauth-apiserver replicaset/apiserver-579fb6d8cd reason/SuccessfulDelete Deleted pod: apiserver-579fb6d8cd-ksq8n
Nov 29 10:41:30.000 I ns/default namespace/kube-system node/apiserver-579fb6d8cd-ksq8n reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 29 10:41:30.000 I ns/default namespace/kube-system node/apiserver-579fb6d8cd-ksq8n reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 10:41:30.000 I ns/default namespace/kube-system node/apiserver-579fb6d8cd-ksq8n reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 29 10:41:30.000 I ns/default namespace/kube-system node/apiserver-579fb6d8cd-ksq8n reason/TerminationStoppedServing Server has stopped listening
Nov 29 10:41:30.090 I ns/openshift-oauth-apiserver pod/apiserver-579fb6d8cd-ksq8n node/ip-10-0-151-153.ec2.internal reason/GracefulDelete duration/70s
Nov 29 10:41:30.155 W ns/openshift-oauth-apiserver pod/apiserver-79955b96fd-v6qpz reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
#1597521807814955008 build-log.txt.gz (2 days ago)
Nov 29 10:41:43.000 I ns/openshift-ingress deployment/router-default reason/ScalingReplicaSet Scaled up replica set router-default-c99788df to 1
Nov 29 10:41:43.000 I ns/openshift-ingress deployment/router-default reason/ScalingReplicaSet Scaled up replica set router-default-c99788df to 2
Nov 29 10:41:43.000 I ns/openshift-ingress replicaset/router-default-c99788df reason/SuccessfulCreate Created pod: router-default-c99788df-hm6hl
Nov 29 10:41:43.000 I ns/openshift-ingress replicaset/router-default-c99788df reason/SuccessfulCreate Created pod: router-default-c99788df-tm747
Nov 29 10:41:43.000 I ns/openshift-ingress replicaset/router-default-8cf78c868 reason/SuccessfulDelete Deleted pod: router-default-8cf78c868-xjlvj
Nov 29 10:41:43.000 I ns/openshift-apiserver pod/apiserver-7b57767468-qtlhc node/apiserver-7b57767468-qtlhc reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 10:41:43.000 I ns/openshift-apiserver pod/apiserver-7b57767468-qtlhc node/apiserver-7b57767468-qtlhc reason/TerminationStoppedServing Server has stopped listening
Nov 29 10:41:43.000 W ns/openshift-marketplace pod/marketplace-operator-db75bf555-q6dlh node/ip-10-0-197-106.ec2.internal reason/Unhealthy Liveness probe failed: Get "http://10.128.0.90:8080/healthz": dial tcp 10.128.0.90:8080: connect: connection refused
Nov 29 10:41:43.000 W ns/openshift-marketplace pod/marketplace-operator-db75bf555-q6dlh node/ip-10-0-197-106.ec2.internal reason/Unhealthy Readiness probe failed: Get "http://10.128.0.90:8080/healthz": dial tcp 10.128.0.90:8080: connect: connection refused
Nov 29 10:41:43.000 W ns/openshift-marketplace pod/marketplace-operator-db75bf555-q6dlh node/ip-10-0-197-106.ec2.internal reason/Unhealthy Readiness probe failed: Get "http://10.128.0.90:8080/healthz": dial tcp 10.128.0.90:8080: connect: connection refused (2 times)
Nov 29 10:41:43.148 I ns/openshift-ingress pod/router-default-8cf78c868-xjlvj node/ip-10-0-174-17.ec2.internal reason/GracefulDelete duration/3600s
#1597521807814955008 build-log.txt.gz (2 days ago)
Nov 29 10:42:41.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-579fb6d8cd to 1
Nov 29 10:42:41.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-69899f645f to 2
Nov 29 10:42:41.000 I ns/openshift-oauth-apiserver replicaset/apiserver-69899f645f reason/SuccessfulCreate Created pod: apiserver-69899f645f-p4skf
Nov 29 10:42:41.000 I ns/openshift-monitoring daemonset/node-exporter reason/SuccessfulCreate Created pod: node-exporter-b57c4
Nov 29 10:42:41.000 I ns/openshift-oauth-apiserver replicaset/apiserver-579fb6d8cd reason/SuccessfulDelete Deleted pod: apiserver-579fb6d8cd-8mcqt
Nov 29 10:42:41.000 I ns/default namespace/kube-system node/apiserver-579fb6d8cd-8mcqt reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 29 10:42:41.000 I ns/default namespace/kube-system node/apiserver-579fb6d8cd-8mcqt reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 10:42:41.000 I ns/default namespace/kube-system node/apiserver-579fb6d8cd-8mcqt reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 29 10:42:41.000 I ns/default namespace/kube-system node/apiserver-579fb6d8cd-8mcqt reason/TerminationStoppedServing Server has stopped listening
Nov 29 10:42:41.000 W ns/openshift-oauth-apiserver pod/apiserver-579fb6d8cd-8mcqt node/ip-10-0-129-93.ec2.internal reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (2 times)
Nov 29 10:42:41.234 I ns/openshift-monitoring pod/node-exporter-9rpxc node/ip-10-0-184-107.ec2.internal container/kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
#1597340758086520832 build-log.txt.gz (2 days ago)
Nov 28 22:10:13.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-28-212248@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (38 times)
Nov 28 22:10:13.000 W ns/openshift-etcd pod/etcd-quorum-guard-64b8877b7d-5ndr2 node/ip-10-0-175-15.us-east-2.compute.internal reason/Unhealthy Readiness probe failed:  (6 times)
Nov 28 22:10:16.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-28-212248@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (39 times)
Nov 28 22:10:16.285 - 59s   I alert/PodDisruptionBudgetAtLimit ns/openshift-etcd ALERTS{alertname="PodDisruptionBudgetAtLimit", alertstate="pending", namespace="openshift-etcd", poddisruptionbudget="etcd-quorum-guard", prometheus="openshift-monitoring/k8s", severity="warning"}
Nov 28 22:10:18.000 W ns/openshift-etcd pod/etcd-quorum-guard-64b8877b7d-5ndr2 node/ip-10-0-175-15.us-east-2.compute.internal reason/Unhealthy Readiness probe failed:  (7 times)
Nov 28 22:10:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-198-96.us-east-2.compute.internal node/ip-10-0-198-96 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 28 22:10:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-198-96.us-east-2.compute.internal node/ip-10-0-198-96 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 28 22:10:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-198-96.us-east-2.compute.internal node/ip-10-0-198-96 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 28 22:10:23.000 W ns/openshift-etcd pod/etcd-quorum-guard-64b8877b7d-5ndr2 node/ip-10-0-175-15.us-east-2.compute.internal reason/Unhealthy Readiness probe failed:  (8 times)
Nov 28 22:10:23.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/UnhealthyEtcdMember unhealthy members: ip-10-0-175-15.us-east-2.compute.internal (2 times)
Nov 28 22:10:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-198-96.us-east-2.compute.internal node/ip-10-0-198-96 reason/TerminationGracefulTerminationFinished All pending requests processed
#1597340758086520832 build-log.txt.gz (2 days ago)
Nov 28 22:15:04.798 I ns/openshift-operator-lifecycle-manager pod/collect-profiles-27827895--1-f9nbq node/ip-10-0-166-193.us-east-2.compute.internal container/collect-profiles reason/ContainerExit code/0 cause/Completed
Nov 28 22:15:05.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-28-212248@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (19 times)
Nov 28 22:15:06.000 I ns/openshift-operator-lifecycle-manager job/collect-profiles-27827895 reason/Completed Job completed
Nov 28 22:15:06.000 I ns/openshift-operator-lifecycle-manager cronjob/collect-profiles reason/SawCompletedJob Saw completed job: collect-profiles-27827895, status: Complete
Nov 28 22:15:07.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-28-212248@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (20 times)
Nov 28 22:15:36.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-162-168.us-east-2.compute.internal node/ip-10-0-162-168 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 28 22:15:36.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-162-168.us-east-2.compute.internal node/ip-10-0-162-168 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 28 22:15:36.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-162-168.us-east-2.compute.internal node/ip-10-0-162-168 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 28 22:15:38.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-162-168.us-east-2.compute.internal node/ip-10-0-162-168.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.0.162.168:6443/healthz": dial tcp 10.0.162.168:6443: connect: connection refused\nbody: \n
Nov 28 22:15:38.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-162-168.us-east-2.compute.internal node/ip-10-0-162-168 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 28 22:15:38.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-162-168.us-east-2.compute.internal node/ip-10-0-162-168.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.0.162.168:6443/healthz": dial tcp 10.0.162.168:6443: connect: connection refused
#1597340758086520832 build-log.txt.gz (2 days ago)
Nov 28 22:20:34.000 I ns/openshift-marketplace pod/redhat-operators-ww5kj node/ip-10-0-130-73.us-east-2.compute.internal reason/BackOff Back-off pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.9" (2 times)
Nov 28 22:20:34.000 W ns/openshift-marketplace pod/redhat-operators-ww5kj node/ip-10-0-130-73.us-east-2.compute.internal reason/Failed Error: ImagePullBackOff (2 times)
Nov 28 22:20:34.997 I ns/openshift-marketplace pod/redhat-operators-ww5kj node/ip-10-0-130-73.us-east-2.compute.internal container/registry-server reason/ContainerWait cause/ErrImagePull duration/37.00s rpc error: code = Unknown desc = reading signatures: Error reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=44bf9a794fbca6a5313f11840aa8d24cbb4a2dfa697f3570028dee282844ecfc/signature-5: status 503 (Service Unavailable)
Nov 28 22:20:48.000 I ns/openshift-marketplace pod/redhat-operators-ww5kj node/ip-10-0-130-73.us-east-2.compute.internal container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-operator-index:v4.9
Nov 28 22:20:48.997 I ns/openshift-marketplace pod/redhat-operators-ww5kj node/ip-10-0-130-73.us-east-2.compute.internal container/registry-server reason/ContainerWait cause/ImagePullBackOff duration/51.00s Back-off pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.9"
Nov 28 22:20:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-175-15.us-east-2.compute.internal node/ip-10-0-175-15 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Nov 28 22:20:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-175-15.us-east-2.compute.internal node/ip-10-0-175-15 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 28 22:20:50.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-175-15.us-east-2.compute.internal node/ip-10-0-175-15 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 28 22:20:51.000 W ns/openshift-marketplace pod/redhat-operators-ww5kj node/ip-10-0-130-73.us-east-2.compute.internal reason/Failed Error: ErrImagePull (3 times)
Nov 28 22:20:51.000 W ns/openshift-marketplace pod/redhat-operators-ww5kj node/ip-10-0-130-73.us-east-2.compute.internal reason/Failed Failed to pull image "registry.redhat.io/redhat/redhat-operator-index:v4.9": rpc error: code = Unknown desc = reading signatures: Error reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=44bf9a794fbca6a5313f11840aa8d24cbb4a2dfa697f3570028dee282844ecfc/signature-5: status 503 (Service Unavailable) (3 times)
Nov 28 22:20:52.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-175-15.us-east-2.compute.internal node/ip-10-0-175-15 reason/TerminationGracefulTerminationFinished All pending requests processed
#1597340758086520832 build-log.txt.gz (2 days ago)
Nov 28 22:35:12.000 I ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator reason/OperatorVersionChanged clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "4.9.52" to "4.10.0-0.ci-2022-11-28-212248"
Nov 28 22:35:12.000 I ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator reason/OperatorVersionChanged clusteroperator/csi-snapshot-controller version "operator" changed from "4.9.52" to "4.10.0-0.ci-2022-11-28-212248"
Nov 28 22:35:12.000 I ns/openshift-service-ca-operator deployment/service-ca-operator reason/ScalingReplicaSet Scaled up replica set service-ca-operator-8f6b886bf to 1
Nov 28 22:35:12.000 I ns/openshift-service-ca-operator replicaset/service-ca-operator-8f6b886bf reason/SuccessfulCreate Created pod: service-ca-operator-8f6b886bf-4gwld
Nov 28 22:35:12.000 I ns/openshift-monitoring daemonset/node-exporter reason/SuccessfulDelete Deleted pod: node-exporter-66mnx
Nov 28 22:35:12.000 I ns/openshift-apiserver pod/apiserver-586778d7fd-wtrc2 node/apiserver-586778d7fd-wtrc2 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 28 22:35:12.000 I ns/openshift-apiserver pod/apiserver-586778d7fd-wtrc2 node/apiserver-586778d7fd-wtrc2 reason/TerminationStoppedServing Server has stopped listening
Nov 28 22:35:12.490 I ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5ffcb669fd-8bnc6 node/ip-10-0-162-168.us-east-2.compute.internal container/snapshot-controller reason/ContainerStart duration/4.00s
Nov 28 22:35:12.490 I ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5ffcb669fd-8bnc6 node/ip-10-0-162-168.us-east-2.compute.internal container/snapshot-controller reason/Ready
Nov 28 22:35:12.508 I ns/openshift-monitoring pod/node-exporter-x6fds node/ip-10-0-162-168.us-east-2.compute.internal container/kube-rbac-proxy reason/ContainerStart duration/0.00s
Nov 28 22:35:12.508 I ns/openshift-monitoring pod/node-exporter-x6fds node/ip-10-0-162-168.us-east-2.compute.internal container/node-exporter reason/ContainerStart duration/0.00s
#1597340758086520832 build-log.txt.gz (2 days ago)
Nov 28 22:35:37.000 W ns/openshift-oauth-apiserver pod/apiserver-779d948998-jdqzn node/ip-10-0-175-15.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.22:8443/readyz": dial tcp 10.130.0.22:8443: connect: connection refused\nbody: \n
Nov 28 22:35:37.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-779d948998 to 2
Nov 28 22:35:37.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-d9fd6c544 to 1
Nov 28 22:35:37.000 I ns/openshift-oauth-apiserver replicaset/apiserver-d9fd6c544 reason/SuccessfulCreate Created pod: apiserver-d9fd6c544-46ktw
Nov 28 22:35:37.000 I ns/openshift-oauth-apiserver replicaset/apiserver-779d948998 reason/SuccessfulDelete Deleted pod: apiserver-779d948998-jdqzn
Nov 28 22:35:37.000 I ns/default namespace/kube-system node/apiserver-779d948998-jdqzn reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 28 22:35:37.000 I ns/default namespace/kube-system node/apiserver-779d948998-jdqzn reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 28 22:35:37.000 I ns/default namespace/kube-system node/apiserver-779d948998-jdqzn reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 28 22:35:37.000 I ns/default namespace/kube-system node/apiserver-779d948998-jdqzn reason/TerminationStoppedServing Server has stopped listening
Nov 28 22:35:37.000 W ns/openshift-oauth-apiserver pod/apiserver-779d948998-jdqzn node/ip-10-0-175-15.us-east-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.22:8443/healthz": dial tcp 10.130.0.22:8443: connect: connection refused
Nov 28 22:35:37.000 W ns/openshift-oauth-apiserver pod/apiserver-779d948998-jdqzn node/ip-10-0-175-15.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.22:8443/readyz": dial tcp 10.130.0.22:8443: connect: connection refused
#1597340758086520832 build-log.txt.gz (2 days ago)
Nov 28 22:36:51.000 I ns/openshift-oauth-apiserver pod/apiserver-664c9665f-p4l4g node/ip-10-0-175-15.us-east-2.compute.internal container/fix-audit-permissions reason/Pulled duration/2.093s image/registry.ci.openshift.org/ocp/4.10-2022-11-28-212248@sha256:4d5f82dd3038e12af679431a051e468f2e6eecd2a398a22ded9bf4cceba470b3
Nov 28 22:36:51.000 I ns/openshift-oauth-apiserver pod/apiserver-664c9665f-p4l4g node/ip-10-0-175-15.us-east-2.compute.internal container/fix-audit-permissions reason/Started
Nov 28 22:36:52.000 I ns/openshift-oauth-apiserver pod/apiserver-664c9665f-p4l4g node/ip-10-0-175-15.us-east-2.compute.internal container/oauth-apiserver reason/Created
Nov 28 22:36:52.000 I ns/openshift-oauth-apiserver pod/apiserver-664c9665f-p4l4g node/ip-10-0-175-15.us-east-2.compute.internal container/oauth-apiserver reason/Pulled image/registry.ci.openshift.org/ocp/4.10-2022-11-28-212248@sha256:4d5f82dd3038e12af679431a051e468f2e6eecd2a398a22ded9bf4cceba470b3
Nov 28 22:36:52.000 I ns/openshift-oauth-apiserver pod/apiserver-664c9665f-p4l4g node/ip-10-0-175-15.us-east-2.compute.internal container/oauth-apiserver reason/Started
Nov 28 22:36:52.000 I ns/openshift-apiserver pod/apiserver-586778d7fd-mbc4w node/apiserver-586778d7fd-mbc4w reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 28 22:36:52.000 I ns/openshift-apiserver pod/apiserver-586778d7fd-mbc4w node/apiserver-586778d7fd-mbc4w reason/TerminationStoppedServing Server has stopped listening
Nov 28 22:36:52.106 I ns/openshift-oauth-apiserver pod/apiserver-664c9665f-p4l4g node/ip-10-0-175-15.us-east-2.compute.internal container/fix-audit-permissions reason/ContainerExit code/0 cause/Completed
Nov 28 22:36:53.000 W ns/openshift-oauth-apiserver pod/apiserver-664c9665f-p4l4g node/ip-10-0-175-15.us-east-2.compute.internal reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]informer-sync ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/openshift.io-StartUserInformer ok\n[+]poststarthook/openshift.io-StartOAuthInformer ok\n[+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok\n[+]shutdown ok\nreadyz check failed\n\n
Nov 28 22:36:53.000 W ns/openshift-oauth-apiserver pod/apiserver-664c9665f-p4l4g node/ip-10-0-175-15.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500
Nov 28 22:36:53.116 I ns/openshift-oauth-apiserver pod/apiserver-664c9665f-p4l4g node/ip-10-0-175-15.us-east-2.compute.internal container/oauth-apiserver reason/ContainerStart duration/0.00s
pull-ci-openshift-machine-config-operator-release-4.9-e2e-aws-disruptive (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1597512331414212608 build-log.txt.gz (2 days ago)
Nov 29 09:29:53.000 I ns/openshift-cluster-storage-operator namespace/openshift-cluster-storage-operator reason/OperatorStatusChanged Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("CSISnapshotWebhookControllerProgressing: 1 out of 2 pods running")
Nov 29 09:29:53.000 I ns/openshift-cluster-storage-operator namespace/openshift-cluster-storage-operator reason/OperatorStatusChanged Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well")
Nov 29 09:29:53.000 I ns/openshift-service-ca-operator namespace/openshift-service-ca-operator reason/OperatorStatusChanged Status for clusteroperator/service-ca changed: Progressing changed from False to True ("Progressing: \nProgressing: service-ca does not have available replicas")
Nov 29 09:29:53.000 I ns/openshift-cluster-storage-operator namespace/openshift-cluster-storage-operator reason/OperatorStatusChanged Status for clusteroperator/storage changed: Progressing changed from False to True ("AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods")
Nov 29 09:29:53.000 I ns/openshift-oauth-apiserver replicaset/apiserver-7bd64d7bf9 reason/SuccessfulCreate Created pod: apiserver-7bd64d7bf9-xfjv5
Nov 29 09:29:53.000 I ns/default namespace/kube-system node/apiserver-7bd64d7bf9-dqfdn reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 29 09:29:53.000 I ns/default namespace/kube-system node/apiserver-7bd64d7bf9-dqfdn reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 09:29:53.000 I ns/default namespace/kube-system node/apiserver-7bd64d7bf9-dqfdn reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 29 09:29:53.000 I ns/default namespace/kube-system node/apiserver-7bd64d7bf9-dqfdn reason/TerminationStoppedServing Server has stopped listening
Nov 29 09:29:53.012 I ns/openshift-kube-apiserver pod/revision-pruner-9-ip-10-0-133-236.us-west-2.compute.internal node/ip-10-0-133-236.us-west-2.compute.internal reason/DeletedAfterCompletion
Nov 29 09:29:53.015 I ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7966d975b8-7l8r6 node/ reason/Created
#1597512331414212608 build-log.txt.gz (2 days ago)
Nov 29 09:30:07.000 I ns/openshift-etcd deployment/etcd-quorum-guard reason/ScalingReplicaSet Scaled down replica set etcd-quorum-guard-5b97dfcf4f to 0 (4 times)
Nov 29 09:30:07.000 I ns/openshift-etcd deployment/etcd-quorum-guard reason/ScalingReplicaSet Scaled up replica set etcd-quorum-guard-5b97dfcf4f to 3 (4 times)
Nov 29 09:30:07.000 I ns/openshift-etcd replicaset/etcd-quorum-guard-5b97dfcf4f reason/SuccessfulDelete (combined from similar events): Deleted pod: etcd-quorum-guard-5b97dfcf4f-24bl9
Nov 29 09:30:07.000 I ns/openshift-etcd replicaset/etcd-quorum-guard-5b97dfcf4f reason/SuccessfulDelete (combined from similar events): Deleted pod: etcd-quorum-guard-5b97dfcf4f-9x5f9 (3 times)
Nov 29 09:30:07.000 I ns/openshift-etcd replicaset/etcd-quorum-guard-5b97dfcf4f reason/SuccessfulDelete (combined from similar events): Deleted pod: etcd-quorum-guard-5b97dfcf4f-dlx64 (2 times)
Nov 29 09:30:07.000 I ns/openshift-apiserver pod/apiserver-98f479c48-bp84m node/apiserver-98f479c48-bp84m reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 09:30:07.000 I ns/openshift-apiserver pod/apiserver-98f479c48-bp84m node/apiserver-98f479c48-bp84m reason/TerminationStoppedServing Server has stopped listening
Nov 29 09:30:07.820 I ns/openshift-etcd pod/etcd-quorum-guard-5b97dfcf4f-9x5f9 node/ip-10-0-150-46.us-west-2.compute.internal reason/GracefulDelete duration/3s
Nov 29 09:30:07.840 I ns/openshift-etcd pod/etcd-quorum-guard-5b97dfcf4f-dlx64 node/ip-10-0-245-62.us-west-2.compute.internal reason/GracefulDelete duration/3s
Nov 29 09:30:07.840 I ns/openshift-etcd pod/etcd-quorum-guard-5b97dfcf4f-24bl9 node/ reason/DeletedBeforeScheduling
Nov 29 09:30:08.000 I ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7966d975b8-7l8r6 node/ip-10-0-150-46.us-west-2.compute.internal container/csi-resizer reason/Created
#1597512331414212608 build-log.txt.gz (2 days ago)
Nov 29 09:30:15.000 W ns/openshift-apiserver pod/apiserver-98f479c48-bp84m node/ip-10-0-133-236.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.46:8443/readyz": dial tcp 10.129.0.46:8443: connect: connection refused\nbody: \n (2 times)
Nov 29 09:30:15.000 W ns/openshift-operator-lifecycle-manager pod/packageserver-5d9f947475-p5ht8 node/ip-10-0-150-46.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.12:5443/healthz": dial tcp 10.130.0.12:5443: connect: connection refused\nbody: \n
Nov 29 09:30:15.000 W ns/openshift-apiserver pod/apiserver-98f479c48-bp84m node/ip-10-0-133-236.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.129.0.46:8443/readyz": dial tcp 10.129.0.46:8443: connect: connection refused (2 times)
Nov 29 09:30:15.000 W ns/openshift-operator-lifecycle-manager pod/packageserver-5d9f947475-p5ht8 node/ip-10-0-150-46.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.12:5443/healthz": dial tcp 10.130.0.12:5443: connect: connection refused
Nov 29 09:30:15.122 I ns/openshift-cloud-controller-manager-operator pod/cluster-cloud-controller-manager-operator-5c948888d-z9cs5 node/ip-10-0-150-46.us-west-2.compute.internal reason/Deleted
Nov 29 09:30:15.422 E ns/openshift-console-operator pod/console-operator-b86f5f95d-7gb92 node/ip-10-0-150-46.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error roller.\nI1129 09:30:13.194236       1 genericapiserver.go:398] [graceful-termination] RunPreShutdownHooks has completed\nI1129 09:30:13.194805       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-b86f5f95d-7gb92", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI1129 09:30:13.194334       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI1129 09:30:13.194358       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI1129 09:30:13.195101       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-b86f5f95d-7gb92", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI1129 09:30:13.195161       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-b86f5f95d-7gb92", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI1129 09:30:13.195218       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI1129 09:30:13.195276       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-b86f5f95d-7gb92", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI1129 09:30:13.195331       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nW1129 09:30:13.194391       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Nov 29 09:30:15.580 I ns/openshift-console-operator pod/console-operator-b86f5f95d-7gb92 node/ip-10-0-150-46.us-west-2.compute.internal reason/Deleted
Nov 29 09:30:15.616 I ns/openshift-etcd pod/revision-pruner-7-ip-10-0-150-46.us-west-2.compute.internal node/ip-10-0-150-46.us-west-2.compute.internal reason/Created
Nov 29 09:30:15.873 I ns/openshift-etcd pod/revision-pruner-7-ip-10-0-133-236.us-west-2.compute.internal node/ip-10-0-133-236.us-west-2.compute.internal container/pruner reason/ContainerExit code/0 cause/Completed
Nov 29 09:30:15.877 E ns/openshift-service-ca pod/service-ca-fc4796774-fxrwz node/ip-10-0-150-46.us-west-2.compute.internal container/service-ca-controller reason/ContainerExit code/1 cause/Error
Nov 29 09:30:15.910 I ns/openshift-service-ca pod/service-ca-fc4796774-fxrwz node/ip-10-0-150-46.us-west-2.compute.internal reason/Deleted
#1597512331414212608 build-log.txt.gz (2 days ago)
Nov 29 09:34:54.000 I ns/openshift-authentication-operator namespace/openshift-authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4."
Nov 29 09:34:54.000 I ns/openshift-kube-controller-manager-operator namespace/openshift-kube-controller-manager-operator reason/OperatorStatusChanged Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 9" to "NodeInstallerProgressing: 2 nodes are at revision 9",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 9" to "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 9"
Nov 29 09:34:54.000 W ns/openshift-oauth-apiserver pod/apiserver-7bd64d7bf9-dxx4s node/ip-10-0-150-46.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.26:8443/readyz": dial tcp 10.130.0.26:8443: connect: connection refused\nbody: \n
Nov 29 09:34:54.000 I ns/openshift-oauth-apiserver replicaset/apiserver-7bd64d7bf9 reason/SuccessfulCreate Created pod: apiserver-7bd64d7bf9-s95mc
Nov 29 09:34:54.000 I ns/openshift-apiserver replicaset/apiserver-98f479c48 reason/SuccessfulCreate Created pod: apiserver-98f479c48-d4qrn
Nov 29 09:34:54.000 I ns/default namespace/kube-system node/apiserver-7bd64d7bf9-dxx4s reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 29 09:34:54.000 I ns/default namespace/kube-system node/apiserver-7bd64d7bf9-dxx4s reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 09:34:54.000 I ns/openshift-apiserver pod/apiserver-98f479c48-q2mbj node/apiserver-98f479c48-q2mbj reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 09:34:54.000 I ns/openshift-apiserver pod/apiserver-98f479c48-q2mbj node/apiserver-98f479c48-q2mbj reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 29 09:34:54.000 I ns/default namespace/kube-system node/apiserver-7bd64d7bf9-dxx4s reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 29 09:34:54.000 I ns/default namespace/kube-system node/apiserver-7bd64d7bf9-dxx4s reason/TerminationStoppedServing Server has stopped listening
#1597512331414212608 build-log.txt.gz (2 days ago)
Nov 29 09:35:07.834 I ns/openshift-etcd pod/etcd-quorum-guard-5b97dfcf4f-p7hq2 node/ reason/Created
Nov 29 09:35:07.834 I ns/openshift-etcd pod/etcd-quorum-guard-5b97dfcf4f-vpqsl node/ reason/Created
Nov 29 09:35:08.000 I ns/openshift-etcd pod/etcd-quorum-guard-5b97dfcf4f-5nb68 node/ip-10-0-245-62.us-west-2.compute.internal container/guard reason/Killing
Nov 29 09:35:09.000 I ns/openshift-etcd pod/etcd-quorum-guard-5b97dfcf4f-wl6t7 node/ip-10-0-245-62.us-west-2.compute.internal container/guard reason/Pulled image/registry.build03.ci.openshift.org/ci-op-1lm7t27x/stable@sha256:c353c9f7a6f6705632c05b4568e67e758102d5a0a3653cb7b2746e9a7be1cd16
Nov 29 09:35:09.000 W ns/openshift-apiserver pod/apiserver-98f479c48-q2mbj node/ip-10-0-150-46.us-west-2.compute.internal reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]informer-sync ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/max-in-flight-filter ok\n[+]poststarthook/image.openshift.io-apiserver-caches ok\n[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok\n[+]poststarthook/authorization.openshift.io-ensureopenshift-infra ok\n[+]poststarthook/project.openshift.io-projectcache ok\n[+]poststarthook/project.openshift.io-projectauthorizationcache ok\n[+]poststarthook/openshift.io-startinformers ok\n[+]poststarthook/openshift.io-restmapperupdater ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (3 times)
Nov 29 09:35:09.000 I ns/openshift-apiserver pod/apiserver-98f479c48-q2mbj node/apiserver-98f479c48-q2mbj reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 29 09:35:09.000 I ns/openshift-apiserver pod/apiserver-98f479c48-q2mbj node/apiserver-98f479c48-q2mbj reason/TerminationStoppedServing Server has stopped listening
Nov 29 09:35:09.000 W ns/openshift-apiserver pod/apiserver-98f479c48-q2mbj node/ip-10-0-150-46.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (3 times)
Nov 29 09:35:09.539 W ns/openshift-etcd pod/etcd-quorum-guard-5b97dfcf4f-p7hq2 reason/FailedScheduling 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector.
Nov 29 09:35:09.539 W ns/openshift-etcd pod/etcd-quorum-guard-5b97dfcf4f-vpqsl reason/FailedScheduling 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector.
Nov 29 09:35:09.547 I ns/openshift-etcd pod/etcd-quorum-guard-5b97dfcf4f-5nb68 node/ip-10-0-245-62.us-west-2.compute.internal container/guard reason/ContainerExit code/0 cause/Completed
#1597512331414212608 build-log.txt.gz (2 days ago)
Nov 29 10:07:18.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-7bd64d7bf9 to 1 (2 times)
Nov 29 10:07:18.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-6768dd9c97 to 2
Nov 29 10:07:18.000 I ns/openshift-etcd pod/installer-10-ip-10-0-215-2.us-west-2.compute.internal reason/StaticPodInstallerCompleted Successfully installed revision 10
Nov 29 10:07:18.000 I ns/openshift-oauth-apiserver replicaset/apiserver-6768dd9c97 reason/SuccessfulCreate Created pod: apiserver-6768dd9c97-x56gd
Nov 29 10:07:18.000 I ns/openshift-oauth-apiserver replicaset/apiserver-7bd64d7bf9 reason/SuccessfulDelete Deleted pod: apiserver-7bd64d7bf9-ktw7w
Nov 29 10:07:18.000 I ns/default namespace/kube-system node/apiserver-7bd64d7bf9-ktw7w reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 29 10:07:18.000 I ns/default namespace/kube-system node/apiserver-7bd64d7bf9-ktw7w reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 29 10:07:18.000 I ns/default namespace/kube-system node/apiserver-7bd64d7bf9-ktw7w reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 29 10:07:18.000 I ns/default namespace/kube-system node/apiserver-7bd64d7bf9-ktw7w reason/TerminationStoppedServing Server has stopped listening
Nov 29 10:07:18.000 W ns/openshift-etcd pod/etcd-quorum-guard-5b97dfcf4f-szx8x node/ip-10-0-215-2.us-west-2.compute.internal reason/Unhealthy Readiness probe failed:  (9 times)
Nov 29 10:07:18.018 I ns/openshift-kube-controller-manager pod/installer-9-ip-10-0-215-2.us-west-2.compute.internal node/ip-10-0-215-2.us-west-2.compute.internal container/installer reason/Ready
rehearse-33937-periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-gcp-ovn-rt-upgrade (all) - 4 runs, 25% failed, 100% of failures match = 25% impact
#1597223437217042432 build-log.txt.gz (3 days ago)
Nov 28 14:38:34.000 I ns/openshift-marketplace pod/certified-operators-xk69w reason/AddedInterface Add eth0 [10.131.0.40/23] from ovn-kubernetes
Nov 28 14:38:34.872 I ns/openshift-marketplace pod/community-operators-vlrlp node/ci-op-4sppixf8-efac9-45zws-worker-c-h255r container/registry-server reason/ContainerStart duration/3.00s
Nov 28 14:38:35.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-ldrkx node/ci-op-4sppixf8-efac9-45zws-master-2 reason/Unhealthy Readiness probe failed:  (7 times)
Nov 28 14:38:35.454 - 89s   I alert/TargetDown ns/openshift-etcd ALERTS{alertname="TargetDown", alertstate="pending", job="etcd", namespace="openshift-etcd", prometheus="openshift-monitoring/k8s", service="etcd", severity="warning"}
Nov 28 14:38:35.910 I ns/openshift-marketplace pod/certified-operators-xk69w node/ci-op-4sppixf8-efac9-45zws-worker-c-h255r container/registry-server reason/ContainerStart duration/3.00s
Nov 28 14:38:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-4sppixf8-efac9-45zws-master-0 node/ci-op-4sppixf8-efac9-45zws-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 28 14:38:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-4sppixf8-efac9-45zws-master-0 node/ci-op-4sppixf8-efac9-45zws-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 28 14:38:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-4sppixf8-efac9-45zws-master-0 node/ci-op-4sppixf8-efac9-45zws-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 28 14:38:38.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-4sppixf8-efac9-45zws-master-0 node/ci-op-4sppixf8-efac9-45zws-master-0 reason/ProbeError Readiness probe error: Get "https://10.0.0.4:6443/healthz": dial tcp 10.0.0.4:6443: connect: connection refused\nbody: \n
Nov 28 14:38:38.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-4sppixf8-efac9-45zws-master-0 node/ci-op-4sppixf8-efac9-45zws-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.4:6443/healthz": dial tcp 10.0.0.4:6443: connect: connection refused
Nov 28 14:38:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-4sppixf8-efac9-45zws-master-0 node/ci-op-4sppixf8-efac9-45zws-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
#1597223437217042432 build-log.txt.gz (3 days ago)
Nov 28 14:41:16.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available"
Nov 28 14:41:16.454 - 59s   I alert/PodDisruptionBudgetAtLimit ns/openshift-etcd ALERTS{alertname="PodDisruptionBudgetAtLimit", alertstate="pending", namespace="openshift-etcd", poddisruptionbudget="etcd-quorum-guard", prometheus="openshift-monitoring/k8s", severity="warning"}
Nov 28 14:41:16.665 W clusteroperator/etcd condition/Progressing status/False reason/AsExpected changed: NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found
Nov 28 14:41:21.966 I ns/openshift-etcd pod/installer-2-ci-op-4sppixf8-efac9-45zws-master-1 node/ci-op-4sppixf8-efac9-45zws-master-1 reason/DeletedAfterCompletion
Nov 28 14:41:23.454 - 59s   I alert/KubeDeploymentReplicasMismatch ns/openshift-etcd container/kube-rbac-proxy-main ALERTS{alertname="KubeDeploymentReplicasMismatch", alertstate="pending", container="kube-rbac-proxy-main", deployment="etcd-quorum-guard", endpoint="https-main", job="kube-state-metrics", namespace="openshift-etcd", prometheus="openshift-monitoring/k8s", service="kube-state-metrics", severity="warning"}
Nov 28 14:41:42.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-4sppixf8-efac9-45zws-master-1 node/ci-op-4sppixf8-efac9-45zws-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 28 14:41:42.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-4sppixf8-efac9-45zws-master-1 node/ci-op-4sppixf8-efac9-45zws-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 28 14:41:42.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-4sppixf8-efac9-45zws-master-1 node/ci-op-4sppixf8-efac9-45zws-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 28 14:41:42.454 - 329s  I alert/etcdGRPCRequestsSlow node/10.0.0.4:9979 ns/openshift-etcd pod/etcd-ci-op-4sppixf8-efac9-45zws-master-0 ALERTS{alertname="etcdGRPCRequestsSlow", alertstate="pending", endpoint="etcd-metrics", grpc_method="MemberList", grpc_service="etcdserverpb.Cluster", instance="10.0.0.4:9979", job="etcd", namespace="openshift-etcd", pod="etcd-ci-op-4sppixf8-efac9-45zws-master-0", prometheus="openshift-monitoring/k8s", service="etcd", severity="critical"}
#1597223437217042432 build-log.txt.gz (3 days ago)
Nov 28 14:43:47.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (33 times)
Nov 28 14:43:50.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (34 times)
Nov 28 14:44:14.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (35 times)
Nov 28 14:44:53.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-4sppixf8-efac9-45zws-master-2 node/ci-op-4sppixf8-efac9-45zws-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 28 14:44:53.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-4sppixf8-efac9-45zws-master-2 node/ci-op-4sppixf8-efac9-45zws-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 28 14:44:53.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-4sppixf8-efac9-45zws-master-2 node/ci-op-4sppixf8-efac9-45zws-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 28 14:44:55.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-4sppixf8-efac9-45zws-master-2 node/ci-op-4sppixf8-efac9-45zws-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
#1597223437217042432 build-log.txt.gz (3 days ago)
Nov 28 15:01:48.000 I ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator reason/OperatorStatusChanged Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy csi-snapshot-controller pods\nCSISnapshotWebhookControllerProgressing: desired generation 2, current generation 1" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy csi-snapshot-controller pods\nCSISnapshotWebhookControllerProgressing: 1 out of 2 pods running" (7 times)
Nov 28 15:01:48.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-5566c9d94 to 2
Nov 28 15:01:48.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-57f6685d7b to 1
Nov 28 15:01:48.000 I ns/openshift-oauth-apiserver replicaset/apiserver-57f6685d7b reason/SuccessfulCreate Created pod: apiserver-57f6685d7b-nzg24
Nov 28 15:01:48.000 I ns/openshift-oauth-apiserver replicaset/apiserver-5566c9d94 reason/SuccessfulDelete Deleted pod: apiserver-5566c9d94-cq4bc
Nov 28 15:01:48.000 I ns/default namespace/kube-system node/apiserver-5566c9d94-cq4bc reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 28 15:01:48.000 I ns/default namespace/kube-system node/apiserver-5566c9d94-cq4bc reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 28 15:01:48.000 I ns/default namespace/kube-system node/apiserver-5566c9d94-cq4bc reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 28 15:01:48.000 I ns/default namespace/kube-system node/apiserver-5566c9d94-cq4bc reason/TerminationStoppedServing Server has stopped listening
Nov 28 15:01:48.316 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-7b89dcf465-xv946 node/ci-op-4sppixf8-efac9-45zws-master-1 container/snapshot-controller reason/ContainerExit code/2 cause/Error
Nov 28 15:01:48.351 I ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-7b89dcf465-xv946 node/ci-op-4sppixf8-efac9-45zws-master-1 reason/Deleted
release-openshift-ocp-installer-e2e-gcp-serial-4.9 (all) - 5 runs, 40% failed, 50% of failures match = 20% impact
#1597388535101394944 build-log.txt.gz 2 days ago
Nov 29 01:04:05.053 I ns/e2e-sched-pred-6054 pod/with-tolerations node/ci-op-16248cbl-5f249-k9xks-worker-c-n26d4 container/with-tolerations reason/Ready
Nov 29 01:04:06.854 I e2e-test/"[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching [Suite:openshift/conformance/serial] [Suite:k8s]" finishedStatus/Passed
Nov 29 01:04:06.854 I e2e-test/"[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for [Suite:openshift/conformance/serial] [Suite:k8s]" started
Nov 29 01:04:06.854 - 19s   I e2e-test/"[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for [Suite:openshift/conformance/serial] [Suite:k8s]" e2e test finished As "Passed"
Nov 29 01:04:07.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-16248cbl-5f249-k9xks-master-2 reason/CreatedSCCRanges created SCC ranges for e2e-sched-pred-4274 namespace
Nov 29 01:04:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-16248cbl-5f249-k9xks-master-0 node/ci-op-16248cbl-5f249-k9xks-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 29 01:04:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-16248cbl-5f249-k9xks-master-0 node/ci-op-16248cbl-5f249-k9xks-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 29 01:04:08.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-16248cbl-5f249-k9xks-master-0 node/ci-op-16248cbl-5f249-k9xks-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 29 01:04:08.393 I ns/e2e-sched-pred-4274 pod/without-label node/ci-op-16248cbl-5f249-k9xks-worker-c-n26d4 reason/Scheduled
Nov 29 01:04:08.397 I ns/e2e-sched-pred-4274 pod/without-label node/ reason/Created
Nov 29 01:04:10.000 I ns/e2e-sched-pred-4274 pod/without-label node/ci-op-16248cbl-5f249-k9xks-worker-c-n26d4 container/without-label reason/Created
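(The serial-job excerpt above interleaves operator events with per-test result lines in two shapes, finishedStatus/Passed and e2e test finished As "Passed". A hedged sketch for extracting test outcomes from such a log; both patterns are copied from the lines above and may not cover every variant:)

    import re

    # Two result shapes appear above:
    #   ... I e2e-test/"<name>" finishedStatus/<status>
    #   ... I e2e-test/"<name>" e2e test finished As "<status>"
    RESULT = re.compile(
        r'e2e-test/"(?P<name>[^"]+)"\s+'
        r'(?:finishedStatus/(?P<s1>\S+)|e2e test finished As "(?P<s2>[^"]+)")'
    )

    def test_results(lines):
        """Yield (test name, status) for every finished e2e test."""
        for line in lines:
            m = RESULT.search(line)
            if m:
                yield m.group("name"), m.group("s1") or m.group("s2")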
periodic-ci-shiftstack-shiftstack-ci-main-periodic-4.9-upgrade-from-stable-4.8-e2e-openstack-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1597059733510950912 build-log.txt.gz 3 days ago
Nov 28 04:17:54.000 I ns/openshift-kube-scheduler-operator deployment/openshift-kube-scheduler-operator reason/OperatorStatusChanged Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: etcdserver: request timed out"
Nov 28 04:17:54.000 I ns/default namespace/kube-system node/apiserver-8d885446c-xz8x9 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 28 04:17:55.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"v4.1.0/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): etcdserver: leader changed\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": etcdserver: leader changed" to "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"v4.1.0/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): etcdserver: leader changed\nKubeAPIServerStaticResourcesDegraded: "
periodic-ci-openshift-release-master-okd-4.9-upgrade-from-okd-4.8-e2e-upgrade-gcp (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1597072568106356736 build-log.txt.gz 3 days ago
Nov 28 04:46:34.154 I ns/openshift-marketplace pod/community-operators-5qkrk node/ci-op-9cpzqi4i-8caf8-jqfsr-worker-a-5bwt5 container/registry-server reason/Ready
Nov 28 04:46:34.155 I ns/openshift-marketplace pod/community-operators-5qkrk node/ci-op-9cpzqi4i-8caf8-jqfsr-worker-a-5bwt5 reason/GracefulDelete duration/1s
Nov 28 04:46:36.009 I ns/openshift-marketplace pod/community-operators-5qkrk node/ci-op-9cpzqi4i-8caf8-jqfsr-worker-a-5bwt5 container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 28 04:46:45.966 I ns/openshift-marketplace pod/community-operators-5qkrk node/ci-op-9cpzqi4i-8caf8-jqfsr-worker-a-5bwt5 reason/Deleted
Nov 28 04:47:23.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.0705719955137605 over 5 minutes on "GCP"; disk metrics are: etcd-ci-op-9cpzqi4i-8caf8-jqfsr-master-1=0.014260651957366284,etcd-ci-op-9cpzqi4i-8caf8-jqfsr-master-2=0.006577142857142858,etcd-ci-op-9cpzqi4i-8caf8-jqfsr-master-0=0.007907999999999998. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 28 04:47:32.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9cpzqi4i-8caf8-jqfsr-master-2 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-2 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 28 04:47:32.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9cpzqi4i-8caf8-jqfsr-master-2 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-2 reason/TerminationStoppedServing Server has stopped listening
Nov 28 04:48:32.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9cpzqi4i-8caf8-jqfsr-master-2 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 28 04:48:36.000 W ns/openshift-kube-apiserver endpoints/apiserver reason/FailedToUpdateEndpoint Failed to update endpoint openshift-kube-apiserver/apiserver: Operation cannot be fulfilled on endpoints "apiserver": the object has been modified; please apply your changes to the latest version and try again (5 times)
Nov 28 04:48:36.656 W ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9cpzqi4i-8caf8-jqfsr-master-2 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-2 invariant violation (bug): static pod should not transition Running->Pending with same UID
Nov 28 04:48:36.656 W ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9cpzqi4i-8caf8-jqfsr-master-2 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-2 container/kube-apiserver-cert-syncer reason/NotReady
#1597072568106356736 build-log.txt.gz 3 days ago
Nov 28 04:50:10.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/origin/4.8-2022-11-24-040045@sha256:0a2c3f015e4863aa6a4d7394037d61f61741e8bc3ea6b23fbea38301b986e632,registry.ci.openshift.org/origin/4.9-2022-11-23-035736@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (12 times)
Nov 28 04:50:14.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/origin/4.8-2022-11-24-040045@sha256:0a2c3f015e4863aa6a4d7394037d61f61741e8bc3ea6b23fbea38301b986e632,registry.ci.openshift.org/origin/4.9-2022-11-23-035736@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (13 times)
Nov 28 04:50:31.000 W ns/openshift-network-diagnostics node/ci-op-9cpzqi4i-8caf8-jqfsr-worker-a-5bwt5 reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ci-op-9cpzqi4i-8caf8-jqfsr-master-2: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused
Nov 28 04:50:31.000 I ns/openshift-network-diagnostics node/ci-op-9cpzqi4i-8caf8-jqfsr-worker-a-5bwt5 reason/ConnectivityRestored roles/worker Connectivity restored after 59.999995079s: kubernetes-apiserver-endpoint-ci-op-9cpzqi4i-8caf8-jqfsr-master-2: tcp connection to 10.0.0.4:6443 succeeded
Nov 28 04:51:13.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/origin/4.8-2022-11-24-040045@sha256:0a2c3f015e4863aa6a4d7394037d61f61741e8bc3ea6b23fbea38301b986e632,registry.ci.openshift.org/origin/4.9-2022-11-23-035736@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (14 times)
Nov 28 04:51:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9cpzqi4i-8caf8-jqfsr-master-1 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-1 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 28 04:51:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9cpzqi4i-8caf8-jqfsr-master-1 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-1 reason/TerminationStoppedServing Server has stopped listening
Nov 28 04:51:53.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op-9cpzqi4i-8caf8-jqfsr-master-1 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-1 container/kube-scheduler-recovery-controller reason/Created
Nov 28 04:51:53.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op-9cpzqi4i-8caf8-jqfsr-master-1 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-1 container/kube-scheduler-recovery-controller reason/Pulled image/registry.ci.openshift.org/origin/4.8-2022-11-24-040045@sha256:69271ac3d87bfe96ca5f6911a634305b03eb7f673c37f1d391174a9e0e24325c
Nov 28 04:51:53.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op-9cpzqi4i-8caf8-jqfsr-master-1 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-1 container/kube-scheduler-recovery-controller reason/Started
Nov 28 04:51:53.620 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op-9cpzqi4i-8caf8-jqfsr-master-1 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-1 container/kube-scheduler-recovery-controller reason/ContainerExit code/0 cause/Completed
#1597072568106356736 build-log.txt.gz 3 days ago
Nov 28 04:53:51.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/origin/4.8-2022-11-24-040045@sha256:0a2c3f015e4863aa6a4d7394037d61f61741e8bc3ea6b23fbea38301b986e632,registry.ci.openshift.org/origin/4.9-2022-11-23-035736@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (30 times)
Nov 28 04:54:00.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-9cpzqi4i-8caf8-jqfsr-master-1_80b97c81-b0d5-47ec-bd00-984bf7bf953f became leader
Nov 28 04:54:12.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/origin/4.8-2022-11-24-040045@sha256:0a2c3f015e4863aa6a4d7394037d61f61741e8bc3ea6b23fbea38301b986e632,registry.ci.openshift.org/origin/4.9-2022-11-23-035736@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (31 times)
Nov 28 04:54:57.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9cpzqi4i-8caf8-jqfsr-master-0 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-0 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 28 04:54:58.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-9cpzqi4i-8caf8-jqfsr-master-0 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-0 reason/TerminationStoppedServing Server has stopped listening
Nov 28 04:55:16.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/origin/4.8-2022-11-24-040045@sha256:0a2c3f015e4863aa6a4d7394037d61f61741e8bc3ea6b23fbea38301b986e632,registry.ci.openshift.org/origin/4.9-2022-11-23-035736@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (32 times)
Nov 28 04:55:18.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: registry.ci.openshift.org/origin/4.8-2022-11-24-040045@sha256:0a2c3f015e4863aa6a4d7394037d61f61741e8bc3ea6b23fbea38301b986e632,registry.ci.openshift.org/origin/4.9-2022-11-23-035736@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (33 times)
Nov 28 04:55:42.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op-9cpzqi4i-8caf8-jqfsr-master-0 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-0 container/kube-scheduler-recovery-controller reason/Created
Nov 28 04:55:42.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op-9cpzqi4i-8caf8-jqfsr-master-0 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-0 container/kube-scheduler-recovery-controller reason/Pulled image/registry.ci.openshift.org/origin/4.8-2022-11-24-040045@sha256:69271ac3d87bfe96ca5f6911a634305b03eb7f673c37f1d391174a9e0e24325c
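(The MultipleVersions lines above illustrate the monitor's throttling convention: a repeated event is printed once with an "(N times)" counter rather than N times. The same collapsing can be applied client-side to any run of identical lines when post-processing a log; a small sketch, where collapse is a hypothetical helper and not part of the CI tooling:)

    from itertools import groupby

    def collapse(lines):
        """Collapse runs of identical lines into one line with a count,
        mirroring the '(N times)' suffix the monitor itself uses."""
        for line, run in groupby(lines):
            n = sum(1 for _ in run)
            yield line if n == 1 else f"{line} ({n} times)"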
#1597072568106356736 build-log.txt.gz 3 days ago
Nov 28 05:04:38.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Nov 28 05:04:38.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6."
Nov 28 05:04:39.058 W ns/openshift-apiserver pod/apiserver-7c76f54d9c-njlx2 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 28 05:04:39.098 I ns/openshift-machine-api pod/machine-api-operator-77d9b5fcd7-pbqg9 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-1 reason/Deleted
Nov 28 05:04:41.000 I ns/openshift-apiserver pod/apiserver-678c7d55cd-ghp46 node/apiserver-678c7d55cd-ghp46 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 28 05:04:41.000 I ns/openshift-apiserver pod/apiserver-678c7d55cd-ghp46 node/apiserver-678c7d55cd-ghp46 reason/TerminationStoppedServing Server has stopped listening
Nov 28 05:04:44.000 W ns/openshift-apiserver pod/apiserver-678c7d55cd-ghp46 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-0 reason/ProbeError Liveness probe error: Get "https://10.128.0.26:8443/healthz": dial tcp 10.128.0.26:8443: connect: connection refused\nbody: \n
Nov 28 05:04:44.000 W ns/openshift-apiserver pod/apiserver-678c7d55cd-ghp46 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-0 reason/ProbeError Readiness probe error: Get "https://10.128.0.26:8443/healthz": dial tcp 10.128.0.26:8443: connect: connection refused\nbody: \n
Nov 28 05:04:44.000 W ns/openshift-apiserver pod/apiserver-678c7d55cd-ghp46 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-0 reason/Unhealthy Liveness probe failed: Get "https://10.128.0.26:8443/healthz": dial tcp 10.128.0.26:8443: connect: connection refused
Nov 28 05:04:44.000 W ns/openshift-apiserver pod/apiserver-678c7d55cd-ghp46 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.128.0.26:8443/healthz": dial tcp 10.128.0.26:8443: connect: connection refused
#1597072568106356736 build-log.txt.gz 3 days ago
Nov 28 05:06:21.563 I ns/openshift-apiserver pod/apiserver-7c76f54d9c-njlx2 node/ci-op-9cpzqi4i-8caf8-jqfsr-master-0 container/openshift-apiserver reason/Ready
Nov 28 05:06:21.658 I ns/openshift-apiserver pod/apiserver-678c7d55cd-thq9r node/ci-op-9cpzqi4i-8caf8-jqfsr-master-1 reason/GracefulDelete duration/70s
Nov 28 05:06:21.824 W ns/openshift-apiserver pod/apiserver-7c76f54d9c-hdfdt reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 28 05:06:21.839 I ns/openshift-apiserver pod/apiserver-7c76f54d9c-hdfdt node/ reason/Created
Nov 28 05:06:24.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 28 05:06:31.000 I ns/openshift-apiserver pod/apiserver-678c7d55cd-thq9r node/apiserver-678c7d55cd-thq9r reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 28 05:06:31.000 I ns/openshift-apiserver pod/apiserver-678c7d55cd-thq9r node/apiserver-678c7d55cd-thq9r reason/TerminationStoppedServing Server has stopped listening
Nov 28 05:06:32.000 W ns/openshift-apiserver pod/apiserver-678c7d55cd-thq9r node/ci-op-9cpzqi4i-8caf8-jqfsr-master-1 reason/ProbeError Liveness probe error: Get "https://10.129.0.36:8443/healthz": dial tcp 10.129.0.36:8443: connect: connection refused\nbody: \n
Nov 28 05:06:32.000 W ns/openshift-apiserver pod/apiserver-678c7d55cd-thq9r node/ci-op-9cpzqi4i-8caf8-jqfsr-master-1 reason/ProbeError Readiness probe error: Get "https://10.129.0.36:8443/healthz": dial tcp 10.129.0.36:8443: connect: connection refused\nbody: \n
Nov 28 05:06:32.000 W ns/openshift-apiserver pod/apiserver-678c7d55cd-thq9r node/ci-op-9cpzqi4i-8caf8-jqfsr-master-1 reason/Unhealthy Liveness probe failed: Get "https://10.129.0.36:8443/healthz": dial tcp 10.129.0.36:8443: connect: connection refused
Nov 28 05:06:32.000 W ns/openshift-apiserver pod/apiserver-678c7d55cd-thq9r node/ci-op-9cpzqi4i-8caf8-jqfsr-master-1 reason/Unhealthy Readiness probe failed: Get "https://10.129.0.36:8443/healthz": dial tcp 10.129.0.36:8443: connect: connection refused
#1597072568106356736 build-log.txt.gz 3 days ago
Nov 28 05:07:59.369 I ns/openshift-apiserver pod/apiserver-678c7d55cd-xfh8r node/ci-op-9cpzqi4i-8caf8-jqfsr-master-2 reason/GracefulDelete duration/70s
Nov 28 05:07:59.585 W ns/openshift-apiserver pod/apiserver-7c76f54d9c-nc4lt reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 28 05:07:59.594 I ns/openshift-apiserver pod/apiserver-7c76f54d9c-nc4lt node/ reason/Created
Nov 28 05:08:00.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well")
Nov 28 05:08:00.933 W clusteroperator/openshift-apiserver condition/Progressing status/False reason/AsExpected changed: All is well
Nov 28 05:08:09.000 I ns/openshift-apiserver pod/apiserver-678c7d55cd-xfh8r node/apiserver-678c7d55cd-xfh8r reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 28 05:08:09.000 I ns/openshift-apiserver pod/apiserver-678c7d55cd-xfh8r node/apiserver-678c7d55cd-xfh8r reason/TerminationStoppedServing Server has stopped listening
Nov 28 05:08:10.000 W ns/openshift-apiserver pod/apiserver-678c7d55cd-xfh8r node/ci-op-9cpzqi4i-8caf8-jqfsr-master-2 reason/ProbeError Liveness probe error: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused\nbody: \n
Nov 28 05:08:10.000 W ns/openshift-apiserver pod/apiserver-678c7d55cd-xfh8r node/ci-op-9cpzqi4i-8caf8-jqfsr-master-2 reason/ProbeError Readiness probe error: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused\nbody: \n
Nov 28 05:08:10.000 W ns/openshift-apiserver pod/apiserver-678c7d55cd-xfh8r node/ci-op-9cpzqi4i-8caf8-jqfsr-master-2 reason/Unhealthy Liveness probe failed: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused
Nov 28 05:08:10.000 W ns/openshift-apiserver pod/apiserver-678c7d55cd-xfh8r node/ci-op-9cpzqi4i-8caf8-jqfsr-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused
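(A recurring shape in the openshift-apiserver rollouts above: a pod gets GracefulDelete with duration/70s, finishes its 10s minimal shutdown, stops serving, and only then do the liveness/readiness probes start failing with connection refused, which is expected noise during a rolling update rather than an independent outage. A sketch that flags only probe failures with no preceding shutdown event for the same pod; the pairing heuristic is an assumption of this sketch, not something the CI tooling does:)

    import re
    from collections import defaultdict

    POD_EVENT = re.compile(r"pod/(?P<pod>\S+) .*?reason/(?P<reason>\S+)")

    def unexplained_probe_failures(lines):
        """Count ProbeError/Unhealthy events for pods that never logged
        TerminationStoppedServing first; those are the interesting ones."""
        stopped = set()
        suspect = defaultdict(int)
        for line in lines:
            m = POD_EVENT.search(line)
            if not m:
                continue
            pod, reason = m.group("pod"), m.group("reason")
            if reason == "TerminationStoppedServing":
                stopped.add(pod)
            elif reason in {"ProbeError", "Unhealthy"} and pod not in stopped:
                suspect[pod] += 1
        return dict(suspect)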
periodic-ci-openshift-release-master-ci-4.9-upgrade-from-stable-4.8-e2e-gcp-ovn-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1596966618657722368 build-log.txt.gz 3 days ago
Nov 27 21:37:03.000 I ns/openshift-kube-apiserver pod/installer-9-ci-op-zy9374vz-5dcfc-tklc5-master-2 reason/StaticPodInstallerCompleted Successfully installed revision 9
Nov 27 21:37:03.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-zy9374vz-5dcfc-tklc5-master-2 node/ci-op-zy9374vz-5dcfc-tklc5-master-2 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 27 21:37:03.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-zy9374vz-5dcfc-tklc5-master-2 node/ci-op-zy9374vz-5dcfc-tklc5-master-2 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 27 21:37:04.798 I ns/openshift-kube-apiserver pod/installer-9-ci-op-zy9374vz-5dcfc-tklc5-master-2 node/ci-op-zy9374vz-5dcfc-tklc5-master-2 container/installer reason/ContainerExit code/0 cause/Completed
Nov 27 21:37:29.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.1309920833644105 over 5 minutes on "GCP"; disk metrics are: etcd-ci-op-zy9374vz-5dcfc-tklc5-master-1=0.007433333333333351,etcd-ci-op-zy9374vz-5dcfc-tklc5-master-0=0.008516339581036349,etcd-ci-op-zy9374vz-5dcfc-tklc5-master-2=0.009711999999999993. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 27 21:38:13.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-zy9374vz-5dcfc-tklc5-master-2 node/ci-op-zy9374vz-5dcfc-tklc5-master-2 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 27 21:38:13.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-zy9374vz-5dcfc-tklc5-master-2 node/ci-op-zy9374vz-5dcfc-tklc5-master-2 reason/TerminationStoppedServing Server has stopped listening
Nov 27 21:38:52.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-zy9374vz-5dcfc-tklc5-master-2 node/ci-op-zy9374vz-5dcfc-tklc5-master-2 container/kube-controller-manager-recovery-controller reason/Created
Nov 27 21:38:52.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-zy9374vz-5dcfc-tklc5-master-2 node/ci-op-zy9374vz-5dcfc-tklc5-master-2 container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5d8a040f711995c1d951872df4267fe592f395af664c5fe638ec023407f0f65
Nov 27 21:38:52.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-zy9374vz-5dcfc-tklc5-master-2 node/ci-op-zy9374vz-5dcfc-tklc5-master-2 container/kube-controller-manager-recovery-controller reason/Started
Nov 27 21:38:52.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op-zy9374vz-5dcfc-tklc5-master-2 node/ci-op-zy9374vz-5dcfc-tklc5-master-2 container/kube-scheduler-recovery-controller reason/Created
#1596966618657722368 build-log.txt.gz 3 days ago
Nov 27 21:40:51.254 I ns/openshift-marketplace pod/redhat-operators-nzxvs node/ci-op-zy9374vz-5dcfc-tklc5-worker-a-7pb4b reason/GracefulDelete duration/1s
Nov 27 21:40:52.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-zy9374vz-5dcfc-tklc5-master-0_1b17411a-d0e1-4112-8612-a6f13d117d18 became leader
Nov 27 21:40:52.517 I ns/openshift-marketplace pod/redhat-operators-nzxvs node/ci-op-zy9374vz-5dcfc-tklc5-worker-a-7pb4b container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 27 21:40:59.946 I ns/openshift-marketplace pod/redhat-operators-nzxvs node/ci-op-zy9374vz-5dcfc-tklc5-worker-a-7pb4b reason/Deleted
Nov 27 21:41:30.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (15 times)
Nov 27 21:41:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-zy9374vz-5dcfc-tklc5-master-1 node/ci-op-zy9374vz-5dcfc-tklc5-master-1 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 27 21:41:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-zy9374vz-5dcfc-tklc5-master-1 node/ci-op-zy9374vz-5dcfc-tklc5-master-1 reason/TerminationStoppedServing Server has stopped listening
Nov 27 21:42:03.119 I ns/openshift-marketplace pod/community-operators-d67r5 node/ci-op-zy9374vz-5dcfc-tklc5-worker-a-7pb4b reason/Scheduled
Nov 27 21:42:03.122 I ns/openshift-marketplace pod/community-operators-d67r5 node/ reason/Created
Nov 27 21:42:05.000 I ns/openshift-marketplace pod/community-operators-d67r5 node/ci-op-zy9374vz-5dcfc-tklc5-worker-a-7pb4b container/registry-server reason/Pulling image/registry.redhat.io/redhat/community-operator-index:v4.8
Nov 27 21:42:05.000 I ns/openshift-marketplace pod/community-operators-d67r5 reason/AddedInterface Add eth0 [10.131.0.35/23] from ovn-kubernetes
#1596966618657722368 build-log.txt.gz 3 days ago
Nov 27 21:45:01.000 I ns/openshift-multus cronjob/ip-reconciler reason/SuccessfulDelete Deleted job ip-reconciler-27826425
Nov 27 21:45:01.770 I ns/openshift-multus pod/ip-reconciler-27826425-xzk92 node/ci-op-zy9374vz-5dcfc-tklc5-worker-c-vjxb9 container/whereabouts reason/ContainerExit code/0 cause/Completed
Nov 27 21:45:01.895 I ns/openshift-multus pod/ip-reconciler-27826425-xzk92 node/ci-op-zy9374vz-5dcfc-tklc5-worker-c-vjxb9 reason/DeletedAfterCompletion
Nov 27 21:45:33.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (32 times)
Nov 27 21:45:36.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (33 times)
Nov 27 21:45:42.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-zy9374vz-5dcfc-tklc5-master-0 node/ci-op-zy9374vz-5dcfc-tklc5-master-0 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 27 21:45:42.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-zy9374vz-5dcfc-tklc5-master-0 node/ci-op-zy9374vz-5dcfc-tklc5-master-0 reason/TerminationStoppedServing Server has stopped listening
Nov 27 21:46:32.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (34 times)
Nov 27 21:46:42.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-zy9374vz-5dcfc-tklc5-master-0 node/ci-op-zy9374vz-5dcfc-tklc5-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 27 21:46:47.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (35 times)
Nov 27 21:46:54.417 W ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-zy9374vz-5dcfc-tklc5-master-0 node/ci-op-zy9374vz-5dcfc-tklc5-master-0 invariant violation (bug): static pod should not transition Running->Pending with same UID
#1596966618657722368 build-log.txt.gz 3 days ago
Nov 27 21:52:21.050 I ns/openshift-machine-api pod/machine-api-operator-7fffdc9787-jmsj2 node/ci-op-zy9374vz-5dcfc-tklc5-master-2 container/kube-rbac-proxy reason/Ready
Nov 27 21:52:21.077 I ns/openshift-machine-api pod/machine-api-operator-58c455bf4f-mlpbb node/ci-op-zy9374vz-5dcfc-tklc5-master-1 reason/GracefulDelete duration/30s
Nov 27 21:52:22.175 I ns/openshift-machine-api pod/machine-api-operator-58c455bf4f-mlpbb node/ci-op-zy9374vz-5dcfc-tklc5-master-1 container/kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
Nov 27 21:52:22.175 E ns/openshift-machine-api pod/machine-api-operator-58c455bf4f-mlpbb node/ci-op-zy9374vz-5dcfc-tklc5-master-1 container/machine-api-operator reason/ContainerExit code/2 cause/Error
Nov 27 21:52:23.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Nov 27 21:52:25.000 I ns/openshift-apiserver pod/apiserver-6df8f9c97d-d5wdq node/apiserver-6df8f9c97d-d5wdq reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 27 21:52:25.000 I ns/openshift-apiserver pod/apiserver-6df8f9c97d-d5wdq node/apiserver-6df8f9c97d-d5wdq reason/TerminationStoppedServing Server has stopped listening
Nov 27 21:52:29.706 I ns/openshift-machine-api pod/machine-api-operator-58c455bf4f-mlpbb node/ci-op-zy9374vz-5dcfc-tklc5-master-1 reason/Deleted
Nov 27 21:52:29.894 W ns/openshift-apiserver pod/apiserver-6596cf5969-2bj2c reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 27 21:52:34.000 W ns/openshift-apiserver pod/apiserver-6df8f9c97d-d5wdq node/ci-op-zy9374vz-5dcfc-tklc5-master-2 reason/ProbeError Liveness probe error: Get "https://10.129.0.40:8443/healthz": dial tcp 10.129.0.40:8443: connect: connection refused\nbody: \n
#1596966618657722368 build-log.txt.gz 3 days ago
Nov 27 21:53:48.522 I ns/openshift-apiserver pod/apiserver-6596cf5969-2bj2c node/ci-op-zy9374vz-5dcfc-tklc5-master-2 container/openshift-apiserver reason/Ready
Nov 27 21:53:48.612 I ns/openshift-apiserver pod/apiserver-6df8f9c97d-tzpmx node/ci-op-zy9374vz-5dcfc-tklc5-master-1 reason/GracefulDelete duration/70s
Nov 27 21:53:48.717 W ns/openshift-apiserver pod/apiserver-6596cf5969-4pqs6 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 27 21:53:48.722 I ns/openshift-apiserver pod/apiserver-6596cf5969-4pqs6 node/ reason/Created
Nov 27 21:53:49.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 27 21:53:58.000 I ns/openshift-apiserver pod/apiserver-6df8f9c97d-tzpmx node/apiserver-6df8f9c97d-tzpmx reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 27 21:53:58.000 I ns/openshift-apiserver pod/apiserver-6df8f9c97d-tzpmx node/apiserver-6df8f9c97d-tzpmx reason/TerminationStoppedServing Server has stopped listening
Nov 27 21:54:01.000 W ns/openshift-apiserver pod/apiserver-6df8f9c97d-tzpmx node/ci-op-zy9374vz-5dcfc-tklc5-master-1 reason/ProbeError Liveness probe error: Get "https://10.130.0.36:8443/healthz": dial tcp 10.130.0.36:8443: connect: connection refused\nbody: \n
Nov 27 21:54:01.000 W ns/openshift-apiserver pod/apiserver-6df8f9c97d-tzpmx node/ci-op-zy9374vz-5dcfc-tklc5-master-1 reason/ProbeError Readiness probe error: Get "https://10.130.0.36:8443/healthz": dial tcp 10.130.0.36:8443: connect: connection refused\nbody: \n
Nov 27 21:54:01.000 W ns/openshift-apiserver pod/apiserver-6df8f9c97d-tzpmx node/ci-op-zy9374vz-5dcfc-tklc5-master-1 reason/Unhealthy Liveness probe failed: Get "https://10.130.0.36:8443/healthz": dial tcp 10.130.0.36:8443: connect: connection refused
Nov 27 21:54:01.000 W ns/openshift-apiserver pod/apiserver-6df8f9c97d-tzpmx node/ci-op-zy9374vz-5dcfc-tklc5-master-1 reason/Unhealthy Readiness probe failed: Get "https://10.130.0.36:8443/healthz": dial tcp 10.130.0.36:8443: connect: connection refused
#1596966618657722368 build-log.txt.gz 3 days ago
Nov 27 21:55:41.654 I ns/openshift-machine-api pod/machine-api-controllers-c5c75568c-rb8xb node/ci-op-zy9374vz-5dcfc-tklc5-master-2 container/kube-rbac-proxy-machine-mtrc reason/Ready
Nov 27 21:55:41.654 I ns/openshift-machine-api pod/machine-api-controllers-c5c75568c-rb8xb node/ci-op-zy9374vz-5dcfc-tklc5-master-2 container/kube-rbac-proxy-mhc-mtrc reason/Ready
Nov 27 21:55:41.654 I ns/openshift-machine-api pod/machine-api-controllers-c5c75568c-rb8xb node/ci-op-zy9374vz-5dcfc-tklc5-master-2 container/kube-rbac-proxy-machineset-mtrc reason/Ready
Nov 27 21:55:41.654 I ns/openshift-machine-api pod/machine-api-controllers-c5c75568c-rb8xb node/ci-op-zy9374vz-5dcfc-tklc5-master-2 container/nodelink-controller reason/Ready
Nov 27 21:55:41.676 I ns/openshift-machine-api pod/machine-api-controllers-c5c75568c-rb8xb node/ci-op-zy9374vz-5dcfc-tklc5-master-2 container/machineset-controller reason/Ready
Nov 27 21:55:44.000 I ns/openshift-apiserver pod/apiserver-6df8f9c97d-msxpz node/apiserver-6df8f9c97d-msxpz reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 27 21:55:44.000 I ns/openshift-apiserver pod/apiserver-6df8f9c97d-msxpz node/apiserver-6df8f9c97d-msxpz reason/TerminationStoppedServing Server has stopped listening
Nov 27 21:55:46.000 I ns/openshift-machine-api pod/machine-api-controllers-7855b59bbd-vbq6l node/ci-op-zy9374vz-5dcfc-tklc5-master-1 container/kube-rbac-proxy-machine-mtrc reason/Killing
Nov 27 21:55:46.000 I ns/openshift-machine-api pod/machine-api-controllers-7855b59bbd-vbq6l node/ci-op-zy9374vz-5dcfc-tklc5-master-1 container/kube-rbac-proxy-machineset-mtrc reason/Killing
Nov 27 21:55:46.000 I ns/openshift-machine-api pod/machine-api-controllers-7855b59bbd-vbq6l node/ci-op-zy9374vz-5dcfc-tklc5-master-1 container/kube-rbac-proxy-mhc-mtrc reason/Killing
Nov 27 21:55:46.000 I ns/openshift-machine-api pod/machine-api-controllers-7855b59bbd-vbq6l node/ci-op-zy9374vz-5dcfc-tklc5-master-1 container/machine-controller reason/Killing
periodic-ci-openshift-release-master-ci-4.9-upgrade-from-stable-4.8-from-stable-4.7-e2e-aws-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1596772336290238464 build-log.txt.gz 4 days ago
Nov 27 08:28:17.417 I ns/openshift-marketplace pod/redhat-operators-w49wv node/ip-10-0-180-255.ec2.internal reason/ForceDelete mirrored/false
Nov 27 08:28:17.424 I ns/openshift-marketplace pod/redhat-operators-w49wv node/ip-10-0-180-255.ec2.internal reason/Deleted
Nov 27 08:28:18.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ip-10-0-140-231_2da94c78-05e2-48a2-a42f-27a8bf8bb0e1 became leader
Nov 27 08:29:08.000 W ns/openshift-machine-api machineset/ci-op-8h4227fw-19b1a-m6glj-worker-us-east-1a reason/ReconcileError unknown instance type: m6a.xlarge (7 times)
Nov 27 08:29:08.000 W ns/openshift-machine-api machineset/ci-op-8h4227fw-19b1a-m6glj-worker-us-east-1c reason/ReconcileError unknown instance type: m6a.xlarge (7 times)
Nov 27 08:29:21.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-206-81.ec2.internal node/ip-10-0-206-81 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 27 08:29:21.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-206-81.ec2.internal node/ip-10-0-206-81 reason/TerminationStoppedServing Server has stopped listening
Nov 27 08:30:05.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-206-81.ec2.internal node/ip-10-0-206-81.ec2.internal container/kube-scheduler-recovery-controller reason/Created
Nov 27 08:30:05.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-206-81.ec2.internal node/ip-10-0-206-81.ec2.internal container/kube-scheduler-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fc129f11e04565322e9a71a8ab4607190640ac940c332db546453d308c7d81a
Nov 27 08:30:05.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-206-81.ec2.internal node/ip-10-0-206-81.ec2.internal container/kube-scheduler-recovery-controller reason/Started
Nov 27 08:30:05.463 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-206-81.ec2.internal node/ip-10-0-206-81.ec2.internal container/kube-scheduler-recovery-controller reason/ContainerExit code/0 cause/Completed
#1596772336290238464 build-log.txt.gz 4 days ago
Nov 27 08:32:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-143-245.ec2.internal node/ip-10-0-143-245 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 27 08:32:49.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-143-245.ec2.internal node/ip-10-0-143-245 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 27 08:32:50.467 I ns/openshift-kube-apiserver pod/installer-8-ip-10-0-143-245.ec2.internal node/ip-10-0-143-245.ec2.internal container/installer reason/ContainerExit code/0 cause/Completed
Nov 27 08:32:53.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (18 times)
Nov 27 08:33:13.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (19 times)
Nov 27 08:33:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-143-245.ec2.internal node/ip-10-0-143-245 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 27 08:33:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-143-245.ec2.internal node/ip-10-0-143-245 reason/TerminationStoppedServing Server has stopped listening
Nov 27 08:34:13.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (20 times)
Nov 27 08:34:26.000 W ns/openshift-network-diagnostics node/ip-10-0-222-44.ec2.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-206-81: failed to establish a TCP connection to 10.0.206.81:6443: dial tcp 10.0.206.81:6443: connect: connection refused
Nov 27 08:34:26.000 I ns/openshift-network-diagnostics node/ip-10-0-222-44.ec2.internal reason/ConnectivityRestored roles/worker Connectivity restored after 2m0.000000886s: kubernetes-apiserver-endpoint-ip-10-0-206-81: tcp connection to 10.0.206.81:6443 succeeded
Nov 27 08:34:34.000 W ns/openshift-network-diagnostics node/ip-10-0-222-44.ec2.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-143-245: failed to establish a TCP connection to 10.0.143.245:6443: dial tcp 10.0.143.245:6443: connect: connection refused
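(The network-diagnostics pairs above, ConnectivityOutageDetected followed by ConnectivityRestored "after <duration>", already carry the outage length in the restore message, in Go duration syntax. A sketch that totals outage time per probed endpoint, assuming only the message shape shown here:)

    import re

    # Go-style durations as they appear above, e.g. "59.999995079s", "2m0.000000886s".
    GO_DURATION = re.compile(r"(?:(?P<h>\d+)h)?(?:(?P<m>\d+)m)?(?:(?P<s>[\d.]+)s)?$")
    RESTORED = re.compile(r"ConnectivityRestored .* after (?P<dur>\S+): (?P<target>[^:]+):")

    def seconds(dur):
        """Convert a Go duration string to seconds."""
        g = GO_DURATION.match(dur)
        return 3600 * int(g["h"] or 0) + 60 * int(g["m"] or 0) + float(g["s"] or 0)

    def outage_totals(lines):
        """Sum restored-outage durations per probe target."""
        totals = {}
        for line in lines:
            m = RESTORED.search(line)
            if m:
                t = m.group("target")
                totals[t] = totals.get(t, 0.0) + seconds(m.group("dur"))
        return totals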
#1596772336290238464 build-log.txt.gz 4 days ago
Nov 27 08:37:34.000 I ns/openshift-marketplace pod/redhat-marketplace-mg8bg node/ip-10-0-180-255.ec2.internal container/registry-server reason/Killing
Nov 27 08:37:34.000 I ns/openshift-marketplace pod/redhat-marketplace-mg8bg node/ip-10-0-180-255.ec2.internal container/registry-server reason/Killing
Nov 27 08:37:34.636 I ns/openshift-marketplace pod/redhat-marketplace-mg8bg node/ip-10-0-180-255.ec2.internal container/registry-server reason/Ready
Nov 27 08:37:34.657 I ns/openshift-marketplace pod/redhat-marketplace-mg8bg node/ip-10-0-180-255.ec2.internal reason/ForceDelete mirrored/false
Nov 27 08:37:34.664 I ns/openshift-marketplace pod/redhat-marketplace-mg8bg node/ip-10-0-180-255.ec2.internal reason/Deleted
Nov 27 08:38:10.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-231.ec2.internal node/ip-10-0-140-231 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 27 08:38:10.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-231.ec2.internal node/ip-10-0-140-231 reason/TerminationStoppedServing Server has stopped listening
Nov 27 08:38:13.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceecb8dccd60fc2aaa9e26959e8967fa85a556ac44262f757286dd3f3be6aaa1 (41 times)
Nov 27 08:38:18.846 I ns/openshift-marketplace pod/redhat-operators-w4z6w node/ reason/Created
Nov 27 08:38:18.857 I ns/openshift-marketplace pod/redhat-operators-w4z6w node/ip-10-0-180-255.ec2.internal reason/Scheduled
Nov 27 08:38:20.000 I ns/openshift-marketplace pod/redhat-operators-w4z6w node/ip-10-0-180-255.ec2.internal container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-operator-index:v4.7
#1596772336290238464 build-log.txt.gz 4 days ago
Nov 27 08:44:14.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7."
Nov 27 08:44:14.000 I ns/openshift-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-d8c6f8fdf to 0
Nov 27 08:44:14.000 I ns/openshift-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-6c9444dbbb to 1
Nov 27 08:44:14.000 I ns/openshift-apiserver replicaset/apiserver-6c9444dbbb reason/SuccessfulCreate Created pod: apiserver-6c9444dbbb-m8qnh
Nov 27 08:44:14.000 I ns/openshift-apiserver replicaset/apiserver-d8c6f8fdf reason/SuccessfulDelete Deleted pod: apiserver-d8c6f8fdf-69css
Nov 27 08:44:14.000 I ns/default namespace/kube-system node/apiserver-6988f4b764-snsxf reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3s finished
Nov 27 08:44:14.000 I ns/default namespace/kube-system node/apiserver-6988f4b764-snsxf reason/TerminationStoppedServing Server has stopped listening
Nov 27 08:44:14.000 W ns/openshift-apiserver pod/apiserver-6988f4b764-snsxf node/ip-10-0-140-231.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.131.0.36:8443/healthz": dial tcp 10.131.0.36:8443: connect: connection refused
Nov 27 08:44:14.610 W ns/openshift-apiserver pod/apiserver-d8c6f8fdf-69css reason/FailedScheduling skip schedule deleting pod: openshift-apiserver/apiserver-d8c6f8fdf-69css
Nov 27 08:44:14.628 I ns/openshift-apiserver pod/apiserver-d8c6f8fdf-69css node/ reason/DeletedBeforeScheduling
Nov 27 08:44:14.658 I ns/openshift-apiserver pod/apiserver-6c9444dbbb-m8qnh node/ reason/Created
#1596772336290238464 build-log.txt.gz 4 days ago
Nov 27 08:45:34.568 I ns/openshift-apiserver pod/apiserver-6c9444dbbb-scvpp node/ reason/Created
Nov 27 08:45:34.568 W ns/openshift-apiserver pod/apiserver-6c9444dbbb-scvpp reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod affinity/anti-affinity rules, 3 node(s) didn't match pod anti-affinity rules.
Nov 27 08:45:35.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 27 08:45:36.343 W ns/openshift-apiserver pod/apiserver-6c9444dbbb-scvpp reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod affinity/anti-affinity rules, 3 node(s) didn't match pod anti-affinity rules.
Nov 27 08:45:37.000 I ns/default namespace/kube-system node/apiserver-6988f4b764-h6brz reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3s finished
Nov 27 08:45:37.000 I ns/default namespace/kube-system node/apiserver-6988f4b764-h6brz reason/TerminationStoppedServing Server has stopped listening
Nov 27 08:45:41.000 W ns/openshift-apiserver pod/apiserver-6988f4b764-h6brz node/ip-10-0-206-81.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.76:8443/healthz": dial tcp 10.128.0.76:8443: connect: connection refused
Nov 27 08:45:45.000 W ns/openshift-apiserver pod/apiserver-6988f4b764-h6brz node/ip-10-0-206-81.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.76:8443/healthz": dial tcp 10.128.0.76:8443: connect: connection refused
Nov 27 08:45:49.000 I ns/openshift-machine-api deployment/machine-api-controllers reason/ScalingReplicaSet Scaled up replica set machine-api-controllers-566d46457f to 1
Nov 27 08:45:49.000 I clusteroperator/machine-api reason/Status upgrade Progressing towards operator: 4.8.53
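The repeated FailedScheduling messages above ("0/6 nodes are available ...") are the surge replica of the new apiserver ReplicaSet waiting for a slot: a master-only node selector rules out the 3 workers, and hard pod anti-affinity rules out the 3 masters until an old-generation pod is deleted. A hypothetical pod-spec fragment of that shape (label keys and values are assumptions):

```python
# Shape of scheduling constraints that yields "0/6 nodes are available:
# 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't
# match pod anti-affinity rules" while a surge replica waits for a slot.
apiserver_scheduling = {
    "nodeSelector": {"node-role.kubernetes.io/master": ""},  # only the 3 masters qualify
    "affinity": {
        "podAntiAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": [{
                "labelSelector": {"matchLabels": {"apiserver": "true"}},  # label assumed
                "topologyKey": "kubernetes.io/hostname",  # at most one apiserver per node
            }]
        }
    },
}
```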
#1596772336290238464build-log.txt.gz4 days ago
Nov 27 08:47:03.744 W ns/openshift-apiserver pod/apiserver-6c9444dbbb-8gwfr reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod affinity/anti-affinity rules, 3 node(s) didn't match pod anti-affinity rules.
Nov 27 08:47:05.394 W ns/openshift-apiserver pod/apiserver-6c9444dbbb-8gwfr reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod affinity/anti-affinity rules, 3 node(s) didn't match pod anti-affinity rules.
Nov 27 08:47:06.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well")
Nov 27 08:47:06.000 I ns/default namespace/kube-system node/apiserver-6988f4b764-zzjsr reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3s finished
Nov 27 08:47:06.000 I ns/default namespace/kube-system node/apiserver-6988f4b764-zzjsr reason/TerminationStoppedServing Server has stopped listening
Nov 27 08:47:06.856 W clusteroperator/openshift-apiserver condition/Progressing status/False reason/AsExpected changed: All is well
Nov 27 08:47:13.000 W ns/openshift-apiserver pod/apiserver-6988f4b764-zzjsr node/ip-10-0-143-245.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.129.0.29:8443/healthz": dial tcp 10.129.0.29:8443: connect: connection refused
Nov 27 08:47:15.000 W ns/openshift-apiserver pod/apiserver-6988f4b764-zzjsr node/ip-10-0-143-245.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.129.0.29:8443/healthz": dial tcp 10.129.0.29:8443: connect: connection refused
Nov 27 08:47:23.000 W ns/openshift-apiserver pod/apiserver-6988f4b764-zzjsr node/ip-10-0-143-245.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.129.0.29:8443/healthz": dial tcp 10.129.0.29:8443: connect: connection refused (2 times)
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-azure-ovn-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1596734335342350336build-log.txt.gz4 days ago
Nov 27 06:03:18.739 I ns/openshift-etcd pod/installer-7-ci-op-02iwch05-238d8-7b9kr-master-2 node/ci-op-02iwch05-238d8-7b9kr-master-2 container/installer reason/ContainerExit code/0 cause/Completed
Nov 27 06:03:20.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b893c8477dc61e394cce75aa632c34681959ba89ea2cefe91779b5ac6d7a0ec2,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (17 times)
Nov 27 06:03:23.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-vwp9w node/ci-op-02iwch05-238d8-7b9kr-master-2 reason/Unhealthy Readiness probe failed:  (2 times)
Nov 27 06:03:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-02iwch05-238d8-7b9kr-master-2 node/ci-op-02iwch05-238d8-7b9kr-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 27 06:03:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-02iwch05-238d8-7b9kr-master-2 node/ci-op-02iwch05-238d8-7b9kr-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 27 06:03:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-02iwch05-238d8-7b9kr-master-2 node/ci-op-02iwch05-238d8-7b9kr-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 27 06:03:28.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-vwp9w node/ci-op-02iwch05-238d8-7b9kr-master-2 reason/Unhealthy Readiness probe failed:  (3 times)
#1596734335342350336build-log.txt.gz4 days ago
Nov 27 06:06:17.031 - 59s   I alert/KubeContainerWaiting ns/openshift-etcd pod/etcd-ci-op-02iwch05-238d8-7b9kr-master-0 container/etcdctl ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="etcdctl", namespace="openshift-etcd", pod="etcd-ci-op-02iwch05-238d8-7b9kr-master-0", prometheus="openshift-monitoring/k8s", severity="warning"}
Nov 27 06:06:17.031 - 59s   I alert/KubePodNotReady ns/openshift-etcd pod/etcd-ci-op-02iwch05-238d8-7b9kr-master-0 ALERTS{alertname="KubePodNotReady", alertstate="pending", namespace="openshift-etcd", pod="etcd-ci-op-02iwch05-238d8-7b9kr-master-0", prometheus="openshift-monitoring/k8s", severity="warning"}
Nov 27 06:06:17.031 - 59s   I alert/PodDisruptionBudgetAtLimit ns/openshift-etcd ALERTS{alertname="PodDisruptionBudgetAtLimit", alertstate="pending", namespace="openshift-etcd", poddisruptionbudget="etcd-quorum-guard", prometheus="openshift-monitoring/k8s", severity="warning"}
Nov 27 06:06:18.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-02iwch05-238d8-7b9kr-master-0 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-02iwch05-238d8-7b9kr-master-0 is unhealthy"
Nov 27 06:06:21.043 I ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-g2tkq node/ci-op-02iwch05-238d8-7b9kr-master-0 container/guard reason/Ready
Nov 27 06:06:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-02iwch05-238d8-7b9kr-master-0 node/ci-op-02iwch05-238d8-7b9kr-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 27 06:06:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-02iwch05-238d8-7b9kr-master-0 node/ci-op-02iwch05-238d8-7b9kr-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 27 06:06:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-02iwch05-238d8-7b9kr-master-0 node/ci-op-02iwch05-238d8-7b9kr-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 27 06:06:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-02iwch05-238d8-7b9kr-master-0 node/ci-op-02iwch05-238d8-7b9kr-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 27 06:06:25.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/NodeCurrentRevisionChanged Updated node "ci-op-02iwch05-238d8-7b9kr-master-0" from revision 6 to 7 because static pod is ready
Nov 27 06:06:25.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-02iwch05-238d8-7b9kr-master-0 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-02iwch05-238d8-7b9kr-master-0 is unhealthy"
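The PodDisruptionBudgetAtLimit alert in the chunk above is expected while one etcd member is unhealthy: a quorum-guard-style PDB tolerates only one unavailable replica, so a single not-ready guard pod puts the budget exactly at its limit. A sketch of such a PDB, with the selector label and maxUnavailable value assumed rather than taken from the cluster:

```python
# Sketch of a PodDisruptionBudget like the etcd-quorum-guard one behind the
# PodDisruptionBudgetAtLimit alert above: with 3 replicas and
# maxUnavailable=1 (assumed), one not-ready guard pod exhausts the budget.
quorum_guard_pdb = {
    "apiVersion": "policy/v1",
    "kind": "PodDisruptionBudget",
    "metadata": {"name": "etcd-quorum-guard", "namespace": "openshift-etcd"},
    "spec": {
        "maxUnavailable": 1,
        "selector": {"matchLabels": {"app": "etcd-quorum-guard"}},  # label assumed
    },
}
```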
#1596734335342350336build-log.txt.gz4 days ago
Nov 27 06:09:12.031 - 29s   I alert/etcdGRPCRequestsSlow node/10.0.0.7:9979 ns/openshift-etcd pod/etcd-ci-op-02iwch05-238d8-7b9kr-master-0 ALERTS{alertname="etcdGRPCRequestsSlow", alertstate="pending", endpoint="etcd-metrics", grpc_method="Range", grpc_service="etcdserverpb.KV", instance="10.0.0.7:9979", job="etcd", namespace="openshift-etcd", pod="etcd-ci-op-02iwch05-238d8-7b9kr-master-0", prometheus="openshift-monitoring/k8s", service="etcd", severity="critical"}
Nov 27 06:09:15.234 I ns/openshift-marketplace pod/certified-operators-m49nj node/ci-op-02iwch05-238d8-7b9kr-worker-centralus3-tp5qc reason/Scheduled
Nov 27 06:09:15.235 I ns/openshift-marketplace pod/certified-operators-m49nj node/ reason/Created
Nov 27 06:09:16.000 I ns/openshift-marketplace pod/certified-operators-m49nj node/ci-op-02iwch05-238d8-7b9kr-worker-centralus3-tp5qc container/registry-server reason/Pulling image/registry.redhat.io/redhat/certified-operator-index:v4.9
Nov 27 06:09:16.000 I ns/openshift-marketplace pod/certified-operators-m49nj reason/AddedInterface Add eth0 [10.129.2.23/23] from ovn-kubernetes
Nov 27 06:09:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-02iwch05-238d8-7b9kr-master-1 node/ci-op-02iwch05-238d8-7b9kr-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 27 06:09:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-02iwch05-238d8-7b9kr-master-1 node/ci-op-02iwch05-238d8-7b9kr-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 27 06:09:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-02iwch05-238d8-7b9kr-master-1 node/ci-op-02iwch05-238d8-7b9kr-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 27 06:09:16.611 I ns/openshift-marketplace pod/community-operators-cvgl9 node/ci-op-02iwch05-238d8-7b9kr-worker-centralus3-tp5qc container/registry-server reason/Ready
Nov 27 06:09:16.611 I ns/openshift-marketplace pod/community-operators-cvgl9 node/ci-op-02iwch05-238d8-7b9kr-worker-centralus3-tp5qc reason/GracefulDelete duration/1s
Nov 27 06:09:17.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-02iwch05-238d8-7b9kr-master-1 node/ci-op-02iwch05-238d8-7b9kr-master-1 reason/ProbeError Readiness probe error: Get "https://10.0.0.6:6443/healthz": dial tcp 10.0.0.6:6443: connect: connection refused\nbody: \n
#1596734335342350336build-log.txt.gz4 days ago
Nov 27 06:22:28.000 I ns/openshift-oauth-apiserver replicaset/apiserver-d8fdb4974 reason/SuccessfulCreate Created pod: apiserver-d8fdb4974-hr7w4
Nov 27 06:22:28.000 I ns/openshift-monitoring replicaset/grafana-784786f8b reason/SuccessfulCreate Created pod: grafana-784786f8b-kqspf
Nov 27 06:22:28.000 I ns/openshift-service-ca-operator replicaset/service-ca-operator-845ddcd969 reason/SuccessfulCreate Created pod: service-ca-operator-845ddcd969-9zz9s
Nov 27 06:22:28.000 I ns/openshift-oauth-apiserver replicaset/apiserver-5cd67bdcdd reason/SuccessfulDelete Deleted pod: apiserver-5cd67bdcdd-5x5vg
Nov 27 06:22:28.000 I ns/openshift-apiserver replicaset/apiserver-7f6b5d775d reason/SuccessfulDelete Deleted pod: apiserver-7f6b5d775d-965p6
Nov 27 06:22:28.000 I ns/default namespace/kube-system node/apiserver-5cd67bdcdd-5x5vg reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 27 06:22:28.000 I ns/default namespace/kube-system node/apiserver-5cd67bdcdd-5x5vg reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 27 06:22:28.000 I ns/default namespace/kube-system node/apiserver-5cd67bdcdd-5x5vg reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 27 06:22:28.000 I ns/default namespace/kube-system node/apiserver-5cd67bdcdd-5x5vg reason/TerminationStoppedServing Server has stopped listening
Nov 27 06:22:28.019 I ns/openshift-monitoring pod/openshift-state-metrics-749c85f87b-7cfzc node/ci-op-02iwch05-238d8-7b9kr-worker-centralus2-2sbcz container/kube-rbac-proxy-self reason/ContainerExit code/0 cause/Completed
Nov 27 06:22:28.019 I ns/openshift-monitoring pod/openshift-state-metrics-749c85f87b-7cfzc node/ci-op-02iwch05-238d8-7b9kr-worker-centralus2-2sbcz container/kube-rbac-proxy-main reason/ContainerExit code/0 cause/Completed
#1596734335342350336build-log.txt.gz4 days ago
Nov 27 06:22:39.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation"
Nov 27 06:22:39.000 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" (2 times)
Nov 27 06:22:39.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-d8fdb4974 to 0
Nov 27 06:22:39.000 I ns/openshift-operator-lifecycle-manager deployment/catalog-operator reason/ScalingReplicaSet Scaled down replica set catalog-operator-885c99fdf to 0
Nov 27 06:22:39.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-55c5ff557c to 1
Nov 27 06:22:39.000 I ns/openshift-apiserver pod/apiserver-7bdbc9c788-hx447 node/apiserver-7bdbc9c788-hx447 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 27 06:22:39.804 W clusteroperator/operator-lifecycle-manager-catalog condition/Progressing status/True changed: Deployed 0.19.0
Nov 27 06:22:39.804 I clusteroperator/operator-lifecycle-manager-catalog versions: operator 4.9.52 -> 4.10.0-0.ci-2022-11-25-123237, operator-lifecycle-manager 0.18.3 -> 0.19.0
Nov 27 06:22:39.804 - 305s  W clusteroperator/operator-lifecycle-manager-catalog condition/Progressing status/True reason/Deployed 0.19.0
Nov 27 06:22:39.840 I ns/openshift-ingress pod/router-default-76b54d7d65-nm2zh node/ci-op-02iwch05-238d8-7b9kr-worker-centralus2-2sbcz container/router reason/ContainerStart duration/14.00s
Nov 27 06:22:39.855 I clusteroperator/openshift-samples versions: operator 4.9.52 -> 4.10.0-0.ci-2022-11-25-123237
periodic-ci-shiftstack-shiftstack-ci-main-periodic-4.10-upgrade-from-stable-4.9-e2e-openstack-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1596624361022820352build-log.txt.gz4 days ago
Nov 26 23:13:35.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d676cb89158b637ee52ea222b76fac590c33ae677c026a3fadab60b842d5098,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:b891e1ea37e4e6d94a28b7f965df51e0fd540abf1ec0a77cb152ff4d68950fe2 (23 times)
Nov 26 23:13:35.491 - 59s   I alert/TargetDown ns/openshift-etcd ALERTS{alertname="TargetDown", alertstate="pending", job="etcd", namespace="openshift-etcd", prometheus="openshift-monitoring/k8s", service="etcd", severity="warning"}
Nov 26 23:13:37.000 W ns/openshift-etcd pod/etcd-quorum-guard-6b6fc478c4-2m5vf node/96rd5pmx-c88ed-7tbms-master-2 reason/Unhealthy Readiness probe failed:  (8 times)
Nov 26 23:13:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-96rd5pmx-c88ed-7tbms-master-1 node/96rd5pmx-c88ed-7tbms-master-1.novalocal reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 26 23:13:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-96rd5pmx-c88ed-7tbms-master-1 node/96rd5pmx-c88ed-7tbms-master-1.novalocal reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 26 23:13:40.000 I ns/openshift-kube-apiserver pod/kube-apiserver-96rd5pmx-c88ed-7tbms-master-1 node/96rd5pmx-c88ed-7tbms-master-1.novalocal reason/HTTPServerStoppedListening HTTP Server has stopped listening
#1596624361022820352build-log.txt.gz4 days ago
Nov 26 23:16:15.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3aa585cf9899f1653f3076984ac77396e27d6e3073d9ed0f07f6d75298bb3d78,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (16 times)
Nov 26 23:16:16.830 I ns/openshift-etcd pod/installer-2-96rd5pmx-c88ed-7tbms-master-0 node/96rd5pmx-c88ed-7tbms-master-0 reason/DeletedAfterCompletion
Nov 26 23:16:23.491 - 59s   I alert/KubeDeploymentReplicasMismatch ns/openshift-etcd container/kube-rbac-proxy-main ALERTS{alertname="KubeDeploymentReplicasMismatch", alertstate="pending", container="kube-rbac-proxy-main", deployment="etcd-quorum-guard", endpoint="https-main", job="kube-state-metrics", namespace="openshift-etcd", prometheus="openshift-monitoring/k8s", service="kube-state-metrics", severity="warning"}
Nov 26 23:16:42.491 - 299s  I alert/etcdGRPCRequestsSlow node/10.0.0.63:9979 ns/openshift-etcd pod/etcd-96rd5pmx-c88ed-7tbms-master-1 ALERTS{alertname="etcdGRPCRequestsSlow", alertstate="pending", endpoint="etcd-metrics", grpc_method="LeaseGrant", grpc_service="etcdserverpb.Lease", instance="10.0.0.63:9979", job="etcd", namespace="openshift-etcd", pod="etcd-96rd5pmx-c88ed-7tbms-master-1", prometheus="openshift-monitoring/k8s", service="etcd", severity="critical"}
Nov 26 23:16:44.000 I ns/openshift-kube-apiserver pod/kube-apiserver-96rd5pmx-c88ed-7tbms-master-0 node/96rd5pmx-c88ed-7tbms-master-0.novalocal reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 26 23:16:44.000 I ns/openshift-kube-apiserver pod/kube-apiserver-96rd5pmx-c88ed-7tbms-master-0 node/96rd5pmx-c88ed-7tbms-master-0.novalocal reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 26 23:16:44.000 I ns/openshift-kube-apiserver pod/kube-apiserver-96rd5pmx-c88ed-7tbms-master-0 node/96rd5pmx-c88ed-7tbms-master-0.novalocal reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 26 23:16:45.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-96rd5pmx-c88ed-7tbms-master-0 node/96rd5pmx-c88ed-7tbms-master-0 reason/ProbeError Readiness probe error: Get "https://10.0.2.235:6443/healthz": dial tcp 10.0.2.235:6443: connect: connection refused\nbody: \n
#1596624361022820352build-log.txt.gz4 days ago
Nov 26 23:18:50.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3aa585cf9899f1653f3076984ac77396e27d6e3073d9ed0f07f6d75298bb3d78,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (33 times)
Nov 26 23:18:58.000 I ns/openshift-kube-apiserver lease/cert-regeneration-controller-lock reason/LeaderElection 96rd5pmx-c88ed-7tbms-master-1.novalocal_f5554fb4-eb30-4518-a754-a8281516351e became leader
Nov 26 23:18:58.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection 96rd5pmx-c88ed-7tbms-master-1.novalocal_f5554fb4-eb30-4518-a754-a8281516351e became leader
Nov 26 23:19:13.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.22223045270538 over 5 minutes on "OpenStack"; disk metrics are: etcd-96rd5pmx-c88ed-7tbms-master-2=0.099876,etcd-96rd5pmx-c88ed-7tbms-master-1=0.123520,etcd-96rd5pmx-c88ed-7tbms-master-0=0.151680. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 26 23:19:14.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3aa585cf9899f1653f3076984ac77396e27d6e3073d9ed0f07f6d75298bb3d78,registry.ci.openshift.org/ocp/4.10-2022-11-25-123237@sha256:8cbfd92a07536e5bb6e78096f29f19a112472194d76f7f24088eff21530b67ea (34 times)
Nov 26 23:19:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-96rd5pmx-c88ed-7tbms-master-2 node/96rd5pmx-c88ed-7tbms-master-2.novalocal reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 26 23:19:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-96rd5pmx-c88ed-7tbms-master-2 node/96rd5pmx-c88ed-7tbms-master-2.novalocal reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 26 23:19:54.000 I ns/openshift-kube-apiserver pod/kube-apiserver-96rd5pmx-c88ed-7tbms-master-2 node/96rd5pmx-c88ed-7tbms-master-2.novalocal reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 26 23:19:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-96rd5pmx-c88ed-7tbms-master-2 node/96rd5pmx-c88ed-7tbms-master-2.novalocal reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 26 23:19:58.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-96rd5pmx-c88ed-7tbms-master-2 node/96rd5pmx-c88ed-7tbms-master-2 reason/ProbeError Readiness probe error: Get "https://10.0.0.213:6443/healthz": dial tcp 10.0.0.213:6443: connect: connection refused\nbody: \n
Nov 26 23:19:58.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-96rd5pmx-c88ed-7tbms-master-2 node/96rd5pmx-c88ed-7tbms-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.213:6443/healthz": dial tcp 10.0.0.213:6443: connect: connection refused
periodic-ci-shiftstack-shiftstack-ci-main-periodic-4.9-upgrade-from-stable-4.8-e2e-openstack-kuryr-upgrade (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1596345771127476224build-log.txt.gz5 days ago
Nov 26 04:26:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-n6v9wfgl-179bd-fthhc-master-1 node/n6v9wfgl-179bd-fthhc-master-1 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 26 04:26:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-n6v9wfgl-179bd-fthhc-master-1 node/n6v9wfgl-179bd-fthhc-master-1 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 26 04:26:46.323 I ns/openshift-kube-apiserver pod/installer-11-n6v9wfgl-179bd-fthhc-master-1 node/n6v9wfgl-179bd-fthhc-master-1 container/installer reason/ContainerExit code/0 cause/Completed
Nov 26 04:26:52.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection n6v9wfgl-179bd-fthhc-master-2_58fefcd7-4ad1-415e-a42d-e49681aabf60 became leader
Nov 26 04:27:36.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.2222222222222223 over 5 minutes on "OpenStack"; disk metrics are: etcd-n6v9wfgl-179bd-fthhc-master-1=0.006175999999999973,etcd-n6v9wfgl-179bd-fthhc-master-0=0.009509088000000068,etcd-n6v9wfgl-179bd-fthhc-master-2=0.01752000000000004. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Nov 26 04:27:55.000 I ns/openshift-kube-apiserver pod/kube-apiserver-n6v9wfgl-179bd-fthhc-master-1 node/n6v9wfgl-179bd-fthhc-master-1 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 26 04:27:55.000 I ns/openshift-kube-apiserver pod/kube-apiserver-n6v9wfgl-179bd-fthhc-master-1 node/n6v9wfgl-179bd-fthhc-master-1 reason/TerminationStoppedServing Server has stopped listening
Nov 26 04:28:35.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-n6v9wfgl-179bd-fthhc-master-1 node/n6v9wfgl-179bd-fthhc-master-1 container/kube-controller-manager-recovery-controller reason/Created
Nov 26 04:28:35.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-n6v9wfgl-179bd-fthhc-master-1 node/n6v9wfgl-179bd-fthhc-master-1 container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5d8a040f711995c1d951872df4267fe592f395af664c5fe638ec023407f0f65
Nov 26 04:28:35.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-n6v9wfgl-179bd-fthhc-master-1 node/n6v9wfgl-179bd-fthhc-master-1 container/kube-controller-manager-recovery-controller reason/Started
Nov 26 04:28:35.765 I ns/openshift-kube-controller-manager pod/kube-controller-manager-n6v9wfgl-179bd-fthhc-master-1 node/n6v9wfgl-179bd-fthhc-master-1 container/kube-controller-manager-recovery-controller reason/ContainerExit code/0 cause/Completed
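The EtcdLeaderChangeMetrics warning above reports an increase over a 5-minute window computed Prometheus-style, extrapolating the raw counter delta to the full window; that extrapolation is how a non-integer like 2.2222222222222223 arises (e.g. 2 observed leader changes across 270s of samples extrapolate to 2 * 300/270 ≈ 2.22). A quick sketch of that arithmetic with invented sample points:

```python
# Rough sketch of an increase-over-window computation like the one behind
# "Detected leader change increase of 2.22... over 5 minutes". The sample
# points are invented; Prometheus's real increase() also extrapolates to
# the window edges, which this mimics crudely.
def increase(samples, window_s=300.0):
    """samples: [(unix_ts, counter_value), ...] inside the window, oldest first."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    raw, covered = v1 - v0, t1 - t0
    return raw * (window_s / covered) if covered > 0 else 0.0

print(increase([(0, 5.0), (270, 7.0)]))  # 2 raw changes -> ~2.22 over 300s
```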
#1596345771127476224build-log.txt.gz5 days ago
Nov 26 04:30:19.000 I ns/openshift-kube-apiserver pod/kube-apiserver-n6v9wfgl-179bd-fthhc-master-0 node/n6v9wfgl-179bd-fthhc-master-0 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 26 04:30:20.872 I ns/openshift-kube-apiserver pod/installer-11-n6v9wfgl-179bd-fthhc-master-0 node/n6v9wfgl-179bd-fthhc-master-0 container/installer reason/ContainerExit code/0 cause/Completed
Nov 26 04:30:22.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (13 times)
Nov 26 04:30:24.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (14 times)
Nov 26 04:30:31.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (15 times)
Nov 26 04:31:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-n6v9wfgl-179bd-fthhc-master-0 node/n6v9wfgl-179bd-fthhc-master-0 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 26 04:31:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-n6v9wfgl-179bd-fthhc-master-0 node/n6v9wfgl-179bd-fthhc-master-0 reason/TerminationStoppedServing Server has stopped listening
Nov 26 04:31:31.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (16 times)
Nov 26 04:31:32.000 W ns/openshift-network-diagnostics node/n6v9wfgl-179bd-fthhc-worker-0-kcqjz reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-n6v9wfgl-179bd-fthhc-master-1: failed to establish a TCP connection to 10.0.0.244:6443: dial tcp 10.0.0.244:6443: connect: connection refused
Nov 26 04:31:32.000 W ns/openshift-network-diagnostics node/n6v9wfgl-179bd-fthhc-worker-0-kcqjz reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.79.71:443: dial tcp 172.30.79.71:443: connect: connection refused
Nov 26 04:31:32.000 I ns/openshift-network-diagnostics node/n6v9wfgl-179bd-fthhc-worker-0-kcqjz reason/ConnectivityRestored roles/worker Connectivity restored after 59.9994351s: kubernetes-apiserver-endpoint-n6v9wfgl-179bd-fthhc-master-1: tcp connection to 10.0.0.244:6443 succeeded
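The ConnectivityOutageDetected/ConnectivityRestored pair above brackets the kube-apiserver downtime per probe target; pairing the two event kinds per target recovers durations like the 59.9994351s reported. A toy pairing sketch over hand-written stand-ins for parsed events:

```python
# Toy pairing of ConnectivityOutageDetected/ConnectivityRestored events per
# target to recover durations like "restored after 59.9994351s"; the event
# tuples are invented stand-ins for parsed log lines.
def outage_durations(events):
    """events: [(unix_ts, 'outage'|'restored', target), ...] in time order."""
    open_outages, durations = {}, []
    for ts, kind, target in events:
        if kind == "outage":
            open_outages.setdefault(target, ts)
        elif kind == "restored" and target in open_outages:
            durations.append((target, ts - open_outages.pop(target)))
    return durations

print(outage_durations([
    (32.0, "outage", "kubernetes-apiserver-endpoint-master-1"),
    (92.0, "restored", "kubernetes-apiserver-endpoint-master-1"),
]))  # [('kubernetes-apiserver-endpoint-master-1', 60.0)]
```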
#1596345771127476224build-log.txt.gz5 days ago
Nov 26 04:34:07.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection n6v9wfgl-179bd-fthhc-master-0_cc5ff49c-ffb7-4356-9385-fa5b2d8bd76f became leader
Nov 26 04:34:07.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (32 times)
Nov 26 04:34:31.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:877214b8c6c2edd19ab06f00e01c18752eb32da16334dc84ff6e518d51dc6e25,registry.ci.openshift.org/ocp/4.9-2022-11-23-004235@sha256:de2be3cf4a7c08f0a249c66890dee2ed056ea5f6d3ae2afa1d5d2aac3b93ec18 (33 times)
Nov 26 04:35:10.000 I ns/openshift-kube-apiserver pod/kube-apiserver-n6v9wfgl-179bd-fthhc-master-2 node/n6v9wfgl-179bd-fthhc-master-2 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Nov 26 04:35:10.000 I ns/openshift-kube-apiserver pod/kube-apiserver-n6v9wfgl-179bd-fthhc-master-2 node/n6v9wfgl-179bd-fthhc-master-2 reason/TerminationStoppedServing Server has stopped listening
Nov 26 04:35:12.186 I ns/openshift-marketplace pod/community-operators-fl8m8 node/n6v9wfgl-179bd-fthhc-worker-0-l8rwc reason/Scheduled
Nov 26 04:35:12.200 I ns/openshift-marketplace pod/community-operators-fl8m8 node/ reason/Created
#1596345771127476224build-log.txt.gz5 days ago
Nov 26 04:42:23.061 I ns/openshift-machine-api pod/machine-api-operator-58c455bf4f-vl9c8 node/n6v9wfgl-179bd-fthhc-master-0 reason/GracefulDelete duration/30s
Nov 26 04:42:24.248 I ns/openshift-machine-api pod/machine-api-operator-58c455bf4f-vl9c8 node/n6v9wfgl-179bd-fthhc-master-0 container/kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
Nov 26 04:42:24.248 E ns/openshift-machine-api pod/machine-api-operator-58c455bf4f-vl9c8 node/n6v9wfgl-179bd-fthhc-master-0 container/machine-api-operator reason/ContainerExit code/2 cause/Error
Nov 26 04:42:25.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 7, desired generation is 8." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Nov 26 04:42:25.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 7, desired generation is 8.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 7, desired generation is 8."
Nov 26 04:42:27.000 I ns/openshift-apiserver pod/apiserver-f4d6b4c9d-hgzsn node/apiserver-f4d6b4c9d-hgzsn reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 26 04:42:27.000 I ns/openshift-apiserver pod/apiserver-f4d6b4c9d-hgzsn node/apiserver-f4d6b4c9d-hgzsn reason/TerminationStoppedServing Server has stopped listening
Nov 26 04:42:34.148 W ns/openshift-apiserver pod/apiserver-c85bb7d7f-fkzvd reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 26 04:42:34.233 I ns/openshift-machine-api pod/machine-api-operator-58c455bf4f-vl9c8 node/n6v9wfgl-179bd-fthhc-master-0 reason/Deleted
Nov 26 04:42:37.000 W ns/openshift-apiserver pod/apiserver-f4d6b4c9d-hgzsn node/n6v9wfgl-179bd-fthhc-master-2 reason/ProbeError Liveness probe error: Get "https://10.128.100.172:8443/healthz": dial tcp 10.128.100.172:8443: connect: connection refused\nbody: \n
Nov 26 04:42:37.000 W ns/openshift-apiserver pod/apiserver-f4d6b4c9d-hgzsn node/n6v9wfgl-179bd-fthhc-master-2 reason/ProbeError Readiness probe error: Get "https://10.128.100.172:8443/healthz": dial tcp 10.128.100.172:8443: connect: connection refused\nbody: \n
#1596345771127476224build-log.txt.gz5 days ago
Nov 26 04:43:50.229 I ns/openshift-apiserver pod/apiserver-c85bb7d7f-fkzvd node/n6v9wfgl-179bd-fthhc-master-2 container/openshift-apiserver reason/Ready
Nov 26 04:43:50.287 I ns/openshift-apiserver pod/apiserver-f4d6b4c9d-qntxc node/n6v9wfgl-179bd-fthhc-master-1 reason/GracefulDelete duration/70s
Nov 26 04:43:50.315 W ns/openshift-apiserver pod/apiserver-c85bb7d7f-n74lz reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Nov 26 04:43:50.334 I ns/openshift-apiserver pod/apiserver-c85bb7d7f-n74lz node/ reason/Created
Nov 26 04:43:51.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Nov 26 04:44:00.000 I ns/openshift-apiserver pod/apiserver-f4d6b4c9d-qntxc node/apiserver-f4d6b4c9d-qntxc reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Nov 26 04:44:00.000 I ns/openshift-apiserver pod/apiserver-f4d6b4c9d-qntxc node/apiserver-f4d6b4c9d-qntxc reason/TerminationStoppedServing Server has stopped listening
Nov 26 04:44:05.000 W ns/openshift-apiserver pod/apiserver-f4d6b4c9d-qntxc node/n6v9wfgl-179bd-fthhc-master-1 reason/ProbeError Liveness probe error: Get "https://10.128.101.157:8443/healthz": dial tcp 10.128.101.157:8443: connect: connection refused\nbody: \n
Nov 26 04:44:05.000 W ns/openshift-apiserver pod/apiserver-f4d6b4c9d-qntxc node/n6v9wfgl-179bd-fthhc-master-1 reason/ProbeError Readiness probe error: Get "https://10.128.101.157:8443/healthz": dial tcp 10.128.101.157:8443: connect: connection refused\nbody: \n
Nov 26 04:44:05.000 W ns/openshift-apiserver pod/apiserver-f4d6b4c9d-qntxc node/n6v9wfgl-179bd-fthhc-master-1 reason/Unhealthy Liveness probe failed: Get "https://10.128.101.157:8443/healthz": dial tcp 10.128.101.157:8443: connect: connection refused
Nov 26 04:44:05.000 W ns/openshift-apiserver pod/apiserver-f4d6b4c9d-qntxc node/n6v9wfgl-179bd-fthhc-master-1 reason/Unhealthy Readiness probe failed: Get "https://10.128.101.157:8443/healthz": dial tcp 10.128.101.157:8443: connect: connection refused
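Each job header in this report derives impact as failure rate times match rate; the metal-ipi job that follows, for example, reports 4 runs, 75% failed, 33% of failures matching, hence ~25% impact. A sketch of that arithmetic (the rounding convention is an assumption):

```python
# impact = (failed runs / total runs) * (matching failures / failed runs),
# e.g. the next job header: 4 runs, 3 failed (75%), 1 matching (33%) -> 25%.
def impact(runs, failed, matching_failures):
    failure_rate = failed / runs
    match_rate = matching_failures / failed if failed else 0.0
    return failure_rate * match_rate

print(f"{impact(4, 3, 1):.0%}")  # 25%
```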
periodic-ci-openshift-release-master-nightly-4.10-upgrade-from-stable-4.9-e2e-metal-ipi-bm-upgrade (all) - 4 runs, 75% failed, 33% of failures match = 25% impact
#1596156601897586688build-log.txt.gz6 days ago
Nov 25 16:45:32.179 I ns/openshift-marketplace pod/redhat-marketplace-th27l node/host6.cluster16.ocpci.eng.rdu2.redhat.com container/registry-server reason/ContainerExit code/0 cause/Completed
Nov 25 16:45:32.188 I ns/openshift-marketplace pod/redhat-marketplace-th27l node/host6.cluster16.ocpci.eng.rdu2.redhat.com reason/Deleted
Nov 25 16:45:33.000 I ns/openshift-marketplace pod/redhat-marketplace-th27l node/host6.cluster16.ocpci.eng.rdu2.redhat.com container/registry-server reason/Killing
Nov 25 16:45:34.000 I ns/openshift-kube-apiserver pod/kube-apiserver-host4.cluster16.ocpci.eng.rdu2.redhat.com node/host4.cluster16.ocpci.eng.rdu2.redhat.com reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 16:45:34.000 I ns/openshift-kube-apiserver pod/kube-apiserver-host4.cluster16.ocpci.eng.rdu2.redhat.com node/host4.cluster16.ocpci.eng.rdu2.redhat.com reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 16:45:34.000 I ns/openshift-kube-apiserver pod/kube-apiserver-host4.cluster16.ocpci.eng.rdu2.redhat.com node/host4.cluster16.ocpci.eng.rdu2.redhat.com reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
#1596156601897586688build-log.txt.gz6 days ago
Nov 25 16:48:33.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/NodeCurrentRevisionChanged Updated node "host2.cluster16.ocpci.eng.rdu2.redhat.com" from revision 6 to 7 because static pod is ready
Nov 25 16:48:33.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available"
Nov 25 16:48:33.142 W clusteroperator/etcd condition/Progressing status/False reason/AsExpected changed: NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found
Nov 25 16:48:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-host2.cluster16.ocpci.eng.rdu2.redhat.com node/host2.cluster16.ocpci.eng.rdu2.redhat.com reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 16:48:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-host2.cluster16.ocpci.eng.rdu2.redhat.com node/host2.cluster16.ocpci.eng.rdu2.redhat.com reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 16:48:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-host2.cluster16.ocpci.eng.rdu2.redhat.com node/host2.cluster16.ocpci.eng.rdu2.redhat.com reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 16:48:39.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-host2.cluster16.ocpci.eng.rdu2.redhat.com node/host2.cluster16.ocpci.eng.rdu2.redhat.com reason/ProbeError Readiness probe error: Get "https://10.10.129.131:6443/healthz": dial tcp 10.10.129.131:6443: connect: connection refused\nbody: \n
#1596156601897586688build-log.txt.gz6 days ago
Nov 25 16:50:42.000 W ns/openshift-network-diagnostics node/host6.cluster16.ocpci.eng.rdu2.redhat.com reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.33.252:443: dial tcp 172.30.33.252:443: connect: connection refused
Nov 25 16:50:42.000 I ns/openshift-network-diagnostics node/host6.cluster16.ocpci.eng.rdu2.redhat.com reason/ConnectivityRestored roles/worker Connectivity restored after 1m0.000034939s: kubernetes-apiserver-endpoint-host2: tcp connection to 10.10.129.131:6443 succeeded
Nov 25 16:50:42.000 I ns/openshift-network-diagnostics node/host6.cluster16.ocpci.eng.rdu2.redhat.com reason/ConnectivityRestored roles/worker Connectivity restored after 1m0.000298239s: kubernetes-apiserver-endpoint-host4: tcp connection to 10.10.129.133:6443 succeeded
Nov 25 16:50:42.000 I ns/openshift-network-diagnostics node/host6.cluster16.ocpci.eng.rdu2.redhat.com reason/ConnectivityRestored roles/worker Connectivity restored after 59.999249025s: kubernetes-apiserver-service-cluster: tcp connection to 172.30.33.252:443 succeeded
Nov 25 16:51:30.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2467d18855faaab4015f62229cdaa407613cededf90d496f2f29090c06e239b5,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa (34 times)
Nov 25 16:51:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-host3.cluster16.ocpci.eng.rdu2.redhat.com node/host3.cluster16.ocpci.eng.rdu2.redhat.com reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 16:51:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-host3.cluster16.ocpci.eng.rdu2.redhat.com node/host3.cluster16.ocpci.eng.rdu2.redhat.com reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 16:51:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-host3.cluster16.ocpci.eng.rdu2.redhat.com node/host3.cluster16.ocpci.eng.rdu2.redhat.com reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 16:51:35.000 I ns/openshift-kube-apiserver pod/kube-apiserver-host3.cluster16.ocpci.eng.rdu2.redhat.com node/host3.cluster16.ocpci.eng.rdu2.redhat.com reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 25 16:51:36.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-host3.cluster16.ocpci.eng.rdu2.redhat.com node/host3.cluster16.ocpci.eng.rdu2.redhat.com reason/ProbeError Readiness probe error: Get "https://10.10.129.132:6443/healthz": dial tcp 10.10.129.132:6443: connect: connection refused\nbody: \n
Nov 25 16:51:36.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-host3.cluster16.ocpci.eng.rdu2.redhat.com node/host3.cluster16.ocpci.eng.rdu2.redhat.com reason/Unhealthy Readiness probe failed: Get "https://10.10.129.132:6443/healthz": dial tcp 10.10.129.132:6443: connect: connection refused
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-gcp-ovn-upgrade (all) - 8 runs, 25% failed, 50% of failures match = 13% impact
#1596156629458358272build-log.txt.gz6 days ago
Nov 25 15:37:12.136 - 59s   I alert/etcdMembersDown ns/openshift-etcd pod/etcd-ci-op-b4qhzsdm-82914-dc726-master-2 ALERTS{alertname="etcdMembersDown", alertstate="pending", job="etcd", namespace="openshift-etcd", pod="etcd-ci-op-b4qhzsdm-82914-dc726-master-2", prometheus="openshift-monitoring/k8s", service="etcd", severity="critical"}
Nov 25 15:37:16.136 - 59s   I alert/PodDisruptionBudgetAtLimit ns/openshift-etcd ALERTS{alertname="PodDisruptionBudgetAtLimit", alertstate="pending", namespace="openshift-etcd", poddisruptionbudget="etcd-quorum-guard", prometheus="openshift-monitoring/k8s", severity="warning"}
Nov 25 15:37:17.000 W ns/openshift-etcd pod/etcd-quorum-guard-56bd8cc5f8-bsgpq node/ci-op-b4qhzsdm-82914-dc726-master-2 reason/Unhealthy Readiness probe failed:  (7 times)
Nov 25 15:37:20.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/UnhealthyEtcdMember unhealthy members: ci-op-b4qhzsdm-82914-dc726-master-2 (2 times)
Nov 25 15:37:22.000 W ns/openshift-etcd pod/etcd-quorum-guard-56bd8cc5f8-bsgpq node/ci-op-b4qhzsdm-82914-dc726-master-2 reason/Unhealthy Readiness probe failed:  (8 times)
Nov 25 15:37:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-b4qhzsdm-82914-dc726-master-0 node/ci-op-b4qhzsdm-82914-dc726-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 15:37:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-b4qhzsdm-82914-dc726-master-0 node/ci-op-b4qhzsdm-82914-dc726-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 15:37:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-b4qhzsdm-82914-dc726-master-0 node/ci-op-b4qhzsdm-82914-dc726-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 15:37:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-b4qhzsdm-82914-dc726-master-0 node/ci-op-b4qhzsdm-82914-dc726-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Nov 25 15:37:27.000 W ns/openshift-etcd pod/etcd-quorum-guard-56bd8cc5f8-bsgpq node/ci-op-b4qhzsdm-82914-dc726-master-2 reason/Unhealthy Readiness probe failed:  (9 times)
Nov 25 15:37:28.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-b4qhzsdm-82914-dc726-master-0 node/ci-op-b4qhzsdm-82914-dc726-master-0 reason/ProbeError Readiness probe error: Get "https://10.0.0.4:6443/healthz": dial tcp 10.0.0.4:6443: connect: connection refused\nbody: \n
#1596156629458358272build-log.txt.gz6 days ago
Nov 25 15:39:56.220 I ns/openshift-etcd pod/etcd-ci-op-b4qhzsdm-82914-dc726-master-1 node/ci-op-b4qhzsdm-82914-dc726-master-1 container/etcd reason/Ready
Nov 25 15:39:59.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/NodeCurrentRevisionChanged Updated node "ci-op-b4qhzsdm-82914-dc726-master-1" from revision 6 to 7 because static pod is ready
Nov 25 15:39:59.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available"
Nov 25 15:39:59.588 W clusteroperator/etcd condition/Progressing status/False reason/AsExpected changed: NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found
Nov 25 15:40:04.059 I ns/openshift-etcd pod/installer-2-ci-op-b4qhzsdm-82914-dc726-master-0 node/ci-op-b4qhzsdm-82914-dc726-master-0 reason/DeletedAfterCompletion
Nov 25 15:40:30.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-b4qhzsdm-82914-dc726-master-2 node/ci-op-b4qhzsdm-82914-dc726-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 15:40:31.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-b4qhzsdm-82914-dc726-master-2 node/ci-op-b4qhzsdm-82914-dc726-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 15:40:31.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-b4qhzsdm-82914-dc726-master-2 node/ci-op-b4qhzsdm-82914-dc726-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 15:40:31.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-b4qhzsdm-82914-dc726-master-2 node/ci-op-b4qhzsdm-82914-dc726-master-2 reason/ProbeError Readiness probe error: Get "https://10.0.0.3:6443/healthz": dial tcp 10.0.0.3:6443: connect: connection refused\nbody: \n
Nov 25 15:40:31.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-b4qhzsdm-82914-dc726-master-2 node/ci-op-b4qhzsdm-82914-dc726-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.3:6443/healthz": dial tcp 10.0.0.3:6443: connect: connection refused
Nov 25 15:40:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-b4qhzsdm-82914-dc726-master-2 node/ci-op-b4qhzsdm-82914-dc726-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
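The four-event run above (AfterShutdownDelayDuration, HTTPServerStoppedListening, InFlightRequestsDrained, then TerminationGracefulTerminationFinished) is the kube-apiserver's orderly termination during a static-pod rollout, and the guard pod's `connection refused` probe failures in the same window are the expected side effect: once the server closes its listener, nothing answers on :6443. A minimal Go sketch of what such a health check observes follows; the URL, timeout, and TLS handling are illustrative, not the guard's actual probe configuration:

```go
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net/http"
	"syscall"
	"time"
)

// probeHealthz performs one readiness-style check against an apiserver
// /healthz endpoint and classifies the failure mode. Endpoint and timeout
// are illustrative; the real guard pods use kubelet-driven HTTP probes.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed probe timeout
		Transport: &http.Transport{
			// The CI cluster's serving cert is not in our trust store here;
			// real probes verify against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		if errors.Is(err, syscall.ECONNREFUSED) {
			// Nothing is listening: the server has shut its socket,
			// matching the "connect: connection refused" events above.
			return fmt.Errorf("refused (server stopped listening): %w", err)
		}
		// Timeouts surface differently: the listener is up but slow,
		// matching "context deadline exceeded" probe errors.
		return fmt.Errorf("probe failed: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.0.5:6443/healthz"); err != nil {
		fmt.Println(err)
	}
}
```

The distinction matters when reading these logs: `connection refused` means the listener is gone, which is normal mid-rollout, while `context deadline exceeded` (seen later in this excerpt) means the listener is up but slow to answer.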
#1596156629458358272 build-log.txt.gz (6 days ago)
Nov 25 15:42:38.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2467d18855faaab4015f62229cdaa407613cededf90d496f2f29090c06e239b5,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa (32 times)
Nov 25 15:42:41.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2467d18855faaab4015f62229cdaa407613cededf90d496f2f29090c06e239b5,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa (33 times)
Nov 25 15:42:47.000 I ns/openshift-kube-apiserver lease/cert-regeneration-controller-lock reason/LeaderElection ci-op-b4qhzsdm-82914-dc726-master-2_24a09957-f886-415f-a770-801f185bc8c2 became leader
Nov 25 15:42:47.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ci-op-b4qhzsdm-82914-dc726-master-2_24a09957-f886-415f-a770-801f185bc8c2 became leader
Nov 25 15:42:52.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2467d18855faaab4015f62229cdaa407613cededf90d496f2f29090c06e239b5,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9a19915f1892dedacc3b4ab5ea582c180d1c402f544407643c737d59ddd0fa (34 times)
Nov 25 15:43:44.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-b4qhzsdm-82914-dc726-master-1 node/ci-op-b4qhzsdm-82914-dc726-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Nov 25 15:43:44.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-b4qhzsdm-82914-dc726-master-1 node/ci-op-b4qhzsdm-82914-dc726-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Nov 25 15:43:44.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-b4qhzsdm-82914-dc726-master-1 node/ci-op-b4qhzsdm-82914-dc726-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Nov 25 15:43:45.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-b4qhzsdm-82914-dc726-master-1 node/ci-op-b4qhzsdm-82914-dc726-master-1 reason/ProbeError Readiness probe error: Get "https://10.0.0.5:6443/healthz": dial tcp 10.0.0.5:6443: connect: connection refused\nbody: \n
Nov 25 15:43:45.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ci-op-b4qhzsdm-82914-dc726-master-1 node/ci-op-b4qhzsdm-82914-dc726-master-1 reason/Unhealthy Readiness probe failed: Get "https://10.0.0.5:6443/healthz": dial tcp 10.0.0.5:6443: connect: connection refused
Nov 25 15:43:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-b4qhzsdm-82914-dc726-master-1 node/ci-op-b4qhzsdm-82914-dc726-master-1 reason/TerminationGracefulTerminationFinished All pending requests processed
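The `became leader` pair a few lines up shows the cert-regeneration controller re-acquiring its lock on master-2 after the apiserver restart, with the same identity recorded against both a lease and a configmap named `cert-regeneration-controller-lock`. OpenShift controllers build on client-go's leader election for this; below is a minimal lease-based sketch, where the namespace, identity, and timing values are assumptions rather than the operator's real settings:

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	hostname, _ := os.Hostname()

	// Lease-based lock; names are taken from the events above, the
	// namespace is an assumption for this sketch.
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"openshift-kube-apiserver",
		"cert-regeneration-controller-lock",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: hostname},
	)
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 137 * time.Second, // timing values are illustrative
		RenewDeadline: 107 * time.Second,
		RetryPeriod:   26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting controller loops")
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership; shutting down")
			},
		},
	})
}
```

That the winner shows up on two resource kinds at once likely reflects a multilock (configmap plus lease), the pattern client-go offers for migrating between lock types without losing mutual exclusion.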
#1596156629458358272 build-log.txt.gz (6 days ago)
Nov 25 15:56:46.000 W ns/openshift-authentication-operator deployment/authentication-operator reason/FastControllerResync Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling
Nov 25 15:56:46.000 W ns/openshift-authentication-operator deployment/authentication-operator reason/FastControllerResync Controller "WebhookAuthenticatorCertApprover_OpenShiftAuthenticator" resync interval is set to 0s which might lead to client request throttling
Nov 25 15:56:46.000 W ns/openshift-authentication-operator deployment/authentication-operator reason/FastControllerResync Controller "WebhookAuthenticatorController" resync interval is set to 30s which might lead to client request throttling
Nov 25 15:56:46.000 W ns/openshift-authentication-operator deployment/authentication-operator reason/FastControllerResync Controller "WellKnownReadyController" resync interval is set to 30s which might lead to client request throttling
Nov 25 15:56:46.000 W ns/openshift-authentication-operator deployment/authentication-operator reason/FastControllerResync Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
Nov 25 15:56:46.000 I ns/openshift-apiserver pod/apiserver-5d8694fd5c-tfm2t node/apiserver-5d8694fd5c-tfm2t reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Nov 25 15:56:46.000 I ns/openshift-apiserver pod/apiserver-5d8694fd5c-tfm2t node/apiserver-5d8694fd5c-tfm2t reason/TerminationStoppedServing Server has stopped listening
Nov 25 15:56:46.967 I ns/openshift-monitoring pod/prometheus-operator-5685df9747-9ppsg node/ci-op-b4qhzsdm-82914-dc726-master-1 container/prometheus-operator reason/ContainerStart duration/6.00s
Nov 25 15:56:46.967 I ns/openshift-monitoring pod/prometheus-operator-5685df9747-9ppsg node/ci-op-b4qhzsdm-82914-dc726-master-1 container/kube-rbac-proxy reason/ContainerStart duration/7.00s
Nov 25 15:56:46.967 I ns/openshift-monitoring pod/prometheus-operator-5685df9747-9ppsg node/ci-op-b4qhzsdm-82914-dc726-master-1 container/kube-rbac-proxy reason/Ready
Nov 25 15:56:46.967 I ns/openshift-monitoring pod/prometheus-operator-5685df9747-9ppsg node/ci-op-b4qhzsdm-82914-dc726-master-1 container/prometheus-operator reason/Ready
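The `FastControllerResync` warnings above come from the operator's controller factory and flag controllers configured with very short resync intervals; each resync re-queues every cached object through the sync handler, and at 0s-30s intervals that can add enough API traffic to trip client-side throttling. In plain client-go terms the analogous knob is the `defaultResync` passed to a shared informer factory, where (unlike in these warnings) 0 disables periodic resync entirely. A sketch under those assumptions:

```go
package main

import (
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// defaultResync: how often cached objects are re-delivered to event
	// handlers even when nothing changed. In client-go, 0 disables the
	// periodic resync; short values replay the whole cache frequently,
	// which is the cost the warnings above are calling out.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)

	stop := make(chan struct{})
	defer close(stop)
	factory.Core().V1().ConfigMaps().Informer() // register before Start
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
}
```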
#1596156629458358272 build-log.txt.gz (6 days ago)
Nov 25 15:56:58.000 I ns/openshift-cluster-storage-operator deployment/csi-snapshot-webhook reason/ScalingReplicaSet Scaled up replica set csi-snapshot-webhook-8579d59b4d to 2
Nov 25 15:56:58.000 I ns/openshift-cluster-storage-operator replicaset/csi-snapshot-controller-7f8d96d74f reason/SuccessfulCreate Created pod: csi-snapshot-controller-7f8d96d74f-xz78r
Nov 25 15:56:58.000 I ns/openshift-cluster-storage-operator replicaset/csi-snapshot-webhook-8579d59b4d reason/SuccessfulCreate Created pod: csi-snapshot-webhook-8579d59b4d-xb62b
Nov 25 15:56:58.000 I ns/openshift-cluster-storage-operator replicaset/csi-snapshot-controller-7b89dcf465 reason/SuccessfulDelete Deleted pod: csi-snapshot-controller-7b89dcf465-c8rnq
Nov 25 15:56:58.000 I ns/openshift-cluster-storage-operator replicaset/csi-snapshot-webhook-8576896f76 reason/SuccessfulDelete Deleted pod: csi-snapshot-webhook-8576896f76-525rh
Nov 25 15:56:58.000 I ns/default namespace/kube-system node/apiserver-865bffb6b5-mw26s reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 25 15:56:58.000 I ns/default namespace/kube-system node/apiserver-865bffb6b5-mw26s reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 25 15:56:58.000 I ns/default namespace/kube-system node/apiserver-865bffb6b5-mw26s reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 25 15:56:58.000 I ns/default namespace/kube-system node/apiserver-865bffb6b5-mw26s reason/TerminationStoppedServing Server has stopped listening
Nov 25 15:56:58.000 W ns/openshift-marketplace pod/marketplace-operator-749fdddffd-q47gn node/ci-op-b4qhzsdm-82914-dc726-master-2 reason/Unhealthy Readiness probe failed: Get "http://10.129.0.71:8080/healthz": dial tcp 10.129.0.71:8080: connect: connection refused (2 times)
Nov 25 15:56:58.116 I ns/openshift-monitoring pod/thanos-querier-79b8f87dbd-frp7m node/ci-op-b4qhzsdm-82914-dc726-worker-a-tw9pz container/thanos-query reason/ContainerExit code/0 cause/Completed
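The `TerminationStart` message above ("becoming unready, but keeping serving") describes the lame-duck pattern the OpenShift API servers use: on SIGTERM the readiness endpoint starts failing so load balancers drain the pod, the server keeps answering requests through a minimal shutdown window (15s for the openshift-apiserver earlier in this excerpt, 0s here), and only then stops listening. A toy Go server demonstrating the same sequence; the port, delay, and handler are illustrative:

```go
package main

import (
	"context"
	"net/http"
	"os"
	"os/signal"
	"sync/atomic"
	"syscall"
	"time"
)

func main() {
	var ready atomic.Bool
	ready.Store(true)

	mux := http.NewServeMux()
	mux.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if ready.Load() {
			w.WriteHeader(http.StatusOK)
			return
		}
		// Unready, but the process is still serving other requests.
		w.WriteHeader(http.StatusServiceUnavailable)
	})
	srv := &http.Server{Addr: ":8443", Handler: mux}

	go func() {
		sig := make(chan os.Signal, 1)
		signal.Notify(sig, syscall.SIGTERM)
		<-sig
		// "becoming unready, but keeping serving": fail readiness first so
		// traffic drains, then stop listening after a minimal delay.
		ready.Store(false)
		time.Sleep(15 * time.Second) // cf. "minimal shutdown duration of 15s"
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		srv.Shutdown(ctx) // stop listening, drain in-flight requests
	}()

	srv.ListenAndServe()
}
```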
#1596156629458358272 build-log.txt.gz (6 days ago)
Nov 25 15:58:23.000 W ns/openshift-oauth-apiserver pod/apiserver-6fb8fc6d58-gpwx8 node/ci-op-b4qhzsdm-82914-dc726-master-2 reason/ProbeError Readiness probe error: Get "https://10.129.0.78:8443/readyz": context deadline exceeded\nbody: \n
Nov 25 15:58:23.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled down replica set apiserver-865bffb6b5 to 1
Nov 25 15:58:23.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-6fb8fc6d58 to 2
Nov 25 15:58:23.000 I ns/openshift-oauth-apiserver replicaset/apiserver-6fb8fc6d58 reason/SuccessfulCreate Created pod: apiserver-6fb8fc6d58-ss6gp
Nov 25 15:58:23.000 I ns/openshift-oauth-apiserver replicaset/apiserver-865bffb6b5 reason/SuccessfulDelete Deleted pod: apiserver-865bffb6b5-nsmq5
Nov 25 15:58:23.000 I ns/default namespace/kube-system node/apiserver-865bffb6b5-nsmq5 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Nov 25 15:58:23.000 I ns/default namespace/kube-system node/apiserver-865bffb6b5-nsmq5 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Nov 25 15:58:23.000 I ns/default namespace/kube-system node/apiserver-865bffb6b5-nsmq5 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Nov 25 15:58:23.000 I ns/default namespace/kube-system node/apiserver-865bffb6b5-nsmq5 reason/TerminationStoppedServing Server has stopped listening
Nov 25 15:58:23.000 W ns/openshift-oauth-apiserver pod/apiserver-6fb8fc6d58-gpwx8 node/ci-op-b4qhzsdm-82914-dc726-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.129.0.78:8443/readyz": context deadline exceeded
Nov 25 15:58:23.303 I ns/openshift-oauth-apiserver pod/apiserver-6fb8fc6d58-gpwx8 node/ci-op-b4qhzsdm-82914-dc726-master-2 container/oauth-apiserver reason/Ready
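The oauth-apiserver block above is a standard surge-free rolling update: the Deployment scales the old ReplicaSet down to 1 and the new one up to 2, and the step completes only once the new pod's readiness probe (briefly failing with `context deadline exceeded`) reports Ready. That pairing is consistent with maxSurge=0 and maxUnavailable=1, a sensible choice when replicas are pinned one per master node, though the operator's actual values are not shown in this log. A sketch of the relevant Deployment strategy fields in Go:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromInt(1)
	maxSurge := intstr.FromInt(0)

	// With 3 desired replicas, maxSurge=0 forbids extra pods, so each step
	// first scales the old ReplicaSet down by one and then scales the new
	// one up: the "scaled down ... to 1" / "scaled up ... to 2" pairing in
	// the events above. The next step waits for the new pod to go Ready.
	strategy := appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			MaxUnavailable: &maxUnavailable,
			MaxSurge:       &maxSurge,
		},
	}
	fmt.Printf("%+v\n", strategy)
}
```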

Found in 2.57% of runs (20.26% of failures) across 2412 total runs and 185 jobs (12.69% failed)