Job:
#OCPBUGS-32517 issue, 40 hours ago: Missing worker nodes on metal (Verified)
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[12603]: Unpause all baremetal hosts
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[18264]: E0422 05:33:53.630867   18264 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[18264]: E0422 05:33:53.631351   18264 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused

... 4 lines not shown

#OCPBUGS-27755 issue, 9 days ago: openshift-kube-apiserver down and is not being restarted (New)
Issue 15736514: openshift-kube-apiserver down and is not being restarted
Description: Description of problem:
 {code:none}
 SNO cluster; this is the second time that this issue has happened. 
 
 Errors like the following are reported:
 
 ~~~
 failed to fetch token: Post "https://api-int.<cluster>:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp <ip>:6443: connect: connection refused
 ~~~
 
 Checking the pod logs, the kube-apiserver pod is terminated and is not being restarted:
 
 ~~~
 2024-01-13T09:41:40.931716166Z I0113 09:41:40.931584       1 main.go:213] Received signal terminated. Forwarding to sub-process "hyperkube".
 ~~~{code}
 Version-Release number of selected component (if applicable):
 {code:none}
    4.13.13 {code}
 How reproducible:
 {code:none}
     Not reproducible but has happened twice{code}
 Steps to Reproduce:
 {code:none}
     1.
     2.
     3.
     {code}
 Actual results:
 {code:none}
     API is not available and kube-apiserver is not being restarted{code}
 Expected results:
 {code:none}
     We would expect to see kube-apiserver restarts{code}
 Additional info:
 {code:none}
    {code}
Status: New
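The expected behaviour in this issue (an exited kube-apiserver should be restarted) can be sketched as a small check. This is a hedged, stubbed sketch, not the reporter's tooling: the container state is hard-coded, whereas on a real node it would come from something like `crictl ps -a --name kube-apiserver` (illustrative usage, not taken from the bug).

```shell
#!/usr/bin/env bash
# Hedged sketch: given a kube-apiserver container state, report whether a
# restart is overdue. The state is a stub; on the node it would be read from
# the container runtime (e.g. via crictl -- hypothetical, not from the bug).
needs_restart() {
  case "$1" in
    CONTAINER_EXITED) return 0 ;;   # terminated, as in the log above
    *) return 1 ;;                  # running or unknown: leave it alone
  esac
}

state=CONTAINER_EXITED              # stub: the state this bug describes
if needs_restart "$state"; then
  echo "kube-apiserver exited and was not restarted"
fi
```

The sketch only encodes the expectation from "Expected results"; whether kubelet should have restarted the static pod here is exactly what the bug is about.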
#OCPBUGS-30631 issue, 2 weeks ago: SNO (RT kernel) sosreport crashes the SNO node (CLOSED)
Issue 15865131: SNO (RT kernel) sosreport crashes the SNO node
Description: Description of problem:
 {code:none}
 sosreport collection causes SNO XR11 node crash.
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 - RHOCP    : 4.12.30
 - kernel   : 4.18.0-372.69.1.rt7.227.el8_6.x86_64
 - platform : x86_64{code}
 How reproducible:
 {code:none}
 sh-4.4# chrt -rr 99 toolbox
 .toolboxrc file detected, overriding defaults...
 Checking if there is a newer version of ocpdalmirror.xxx.yyy:8443/rhel8/support-tools-zzz-feb available...
 Container 'toolbox-root' already exists. Trying to start...
 (To remove the container and start with a fresh toolbox, run: sudo podman rm 'toolbox-root')
 toolbox-root
 Container started successfully. To exit, type 'exit'.
 [root@node /]# which sos
 /usr/sbin/sos
 logger: socket /dev/log: No such file or directory
 [root@node /]# taskset -c 29-31,61-63 sos report --batch -n networking,kernel,processor -k crio.all=on -k crio.logs=on -k podman.all=on -kpodman.logs=on
 
 sosreport (version 4.5.6)
 
 This command will collect diagnostic and configuration information from
 this Red Hat CoreOS system.
 
 An archive containing the collected information will be generated in
 /host/var/tmp/sos.c09e4f7z and may be provided to a Red Hat support
 representative.
 
 Any information provided to Red Hat will be treated in accordance with
 the published support policies at:
 
         Distribution Website : https://www.redhat.com/
         Commercial Support   : https://access.redhat.com/
 
 The generated archive may contain data considered sensitive and its
 content should be reviewed by the originating organization before being
 passed to any third party.
 
 No changes will be made to system configuration.
 
 
  Setting up archive ...
  Setting up plugins ...
 [plugin:auditd] Could not open conf file /etc/audit/auditd.conf: [Errno 2] No such file or directory: '/etc/audit/auditd.conf'
 caught exception in plugin method "system.setup()"
 writing traceback to sos_logs/system-plugin-errors.txt
 [plugin:systemd] skipped command 'resolvectl status': required services missing: systemd-resolved.
 [plugin:systemd] skipped command 'resolvectl statistics': required services missing: systemd-resolved.
  Running plugins. Please wait ...
 
   Starting 1/91  alternatives    [Running: alternatives]
   Starting 2/91  atomichost      [Running: alternatives atomichost]
   Starting 3/91  auditd          [Running: alternatives atomichost auditd]
   Starting 4/91  block           [Running: alternatives atomichost auditd block]
   Starting 5/91  boot            [Running: alternatives auditd block boot]
   Starting 6/91  cgroups         [Running: auditd block boot cgroups]
   Starting 7/91  chrony          [Running: auditd block cgroups chrony]
   Starting 8/91  cifs            [Running: auditd block cgroups cifs]
   Starting 9/91  conntrack       [Running: auditd block cgroups conntrack]
   Starting 10/91 console         [Running: block cgroups conntrack console]
   Starting 11/91 container_log   [Running: block cgroups conntrack container_log]
   Starting 12/91 containers_common [Running: block cgroups conntrack containers_common]
   Starting 13/91 crio            [Running: block cgroups conntrack crio]
   Starting 14/91 crypto          [Running: cgroups conntrack crio crypto]
   Starting 15/91 date            [Running: cgroups conntrack crio date]
   Starting 16/91 dbus            [Running: cgroups conntrack crio dbus]
   Starting 17/91 devicemapper    [Running: cgroups conntrack crio devicemapper]
   Starting 18/91 devices         [Running: cgroups conntrack crio devices]
   Starting 19/91 dracut          [Running: cgroups conntrack crio dracut]
   Starting 20/91 ebpf            [Running: cgroups conntrack crio ebpf]
   Starting 21/91 etcd            [Running: cgroups crio ebpf etcd]
   Starting 22/91 filesys         [Running: cgroups crio ebpf filesys]
   Starting 23/91 firewall_tables [Running: cgroups crio filesys firewall_tables]
   Starting 24/91 fwupd           [Running: cgroups crio filesys fwupd]
   Starting 25/91 gluster         [Running: cgroups crio filesys gluster]
   Starting 26/91 grub2           [Running: cgroups crio filesys grub2]
   Starting 27/91 gssproxy        [Running: cgroups crio grub2 gssproxy]
   Starting 28/91 hardware        [Running: cgroups crio grub2 hardware]
   Starting 29/91 host            [Running: cgroups crio hardware host]
   Starting 30/91 hts             [Running: cgroups crio hardware hts]
   Starting 31/91 i18n            [Running: cgroups crio hardware i18n]
   Starting 32/91 iscsi           [Running: cgroups crio hardware iscsi]
   Starting 33/91 jars            [Running: cgroups crio hardware jars]
   Starting 34/91 kdump           [Running: cgroups crio hardware kdump]
   Starting 35/91 kernelrt        [Running: cgroups crio hardware kernelrt]
   Starting 36/91 keyutils        [Running: cgroups crio hardware keyutils]
   Starting 37/91 krb5            [Running: cgroups crio hardware krb5]
   Starting 38/91 kvm             [Running: cgroups crio hardware kvm]
   Starting 39/91 ldap            [Running: cgroups crio kvm ldap]
   Starting 40/91 libraries       [Running: cgroups crio kvm libraries]
   Starting 41/91 libvirt         [Running: cgroups crio kvm libvirt]
   Starting 42/91 login           [Running: cgroups crio kvm login]
   Starting 43/91 logrotate       [Running: cgroups crio kvm logrotate]
   Starting 44/91 logs            [Running: cgroups crio kvm logs]
   Starting 45/91 lvm2            [Running: cgroups crio logs lvm2]
   Starting 46/91 md              [Running: cgroups crio logs md]
   Starting 47/91 memory          [Running: cgroups crio logs memory]
   Starting 48/91 microshift_ovn  [Running: cgroups crio logs microshift_ovn]
   Starting 49/91 multipath       [Running: cgroups crio logs multipath]
   Starting 50/91 networkmanager  [Running: cgroups crio logs networkmanager]
 
 Removing debug pod ...
 error: unable to delete the debug pod "ransno1ransnomavdallabcom-debug": Delete "https://api.ransno.mavdallab.com:6443/api/v1/namespaces/openshift-debug-mt82m/pods/ransno1ransnomavdallabcom-debug": dial tcp 10.71.136.144:6443: connect: connection refused
 {code}
 Steps to Reproduce:
 {code:none}
 Launch a debug pod, run the procedure above, and it crashes the node{code}
 Actual results:
 {code:none}
 Node crash{code}
 Expected results:
 {code:none}
 Node does not crash{code}
 Additional info:
 {code:none}
 We have two vmcores on the associated SFDC ticket.
 This system uses an RT kernel.
 It uses an out-of-tree ice driver 1.13.7 (probably from 22 Dec 2023).
 
 [  103.681608] ice: module unloaded
 [  103.830535] ice: loading out-of-tree module taints kernel.
 [  103.831106] ice: module verification failed: signature and/or required key missing - tainting kernel
 [  103.841005] ice: Intel(R) Ethernet Connection E800 Series Linux Driver - version 1.13.7
 [  103.841017] ice: Copyright (C) 2018-2023 Intel Corporation
 
 
 With the following kernel command line:
 
 Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/vmlinuz-4.18.0-372.69.1.rt7.227.el8_6.x86_64 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/0 root=UUID=3e8bda80-5cf4-4c46-b139-4c84cb006354 rw rootflags=prjquota boot=UUID=1d0512c2-3f92-42c5-b26d-709ff9350b81 intel_iommu=on iommu=pt firmware_class.path=/var/lib/firmware skew_tick=1 nohz=on rcu_nocbs=3-31,35-63 tuned.non_isolcpus=00000007,00000007 systemd.cpu_affinity=0,1,2,32,33,34 intel_iommu=on iommu=pt isolcpus=managed_irq,3-31,35-63 nohz_full=3-31,35-63 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off rcutree.kthread_prio=11 default_hugepagesz=1G rcupdate.rcu_normal_after_boot=0 efi=runtime module_blacklist=irdma intel_pstate=passive intel_idle.max_cstate=0 crashkernel=256M
 
 
 
 vmcore1 shows an issue with the ice driver:
 
 crash vmcore tmp/vmlinux
 
 
       KERNEL: tmp/vmlinux  [TAINTED]
     DUMPFILE: vmcore  [PARTIAL DUMP]
         CPUS: 64
         DATE: Thu Mar  7 17:16:57 CET 2024
       UPTIME: 02:44:28
 LOAD AVERAGE: 24.97, 25.47, 25.46
        TASKS: 5324
     NODENAME: aaa.bbb.ccc
      RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64
      VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023
      MACHINE: x86_64  (1500 Mhz)
       MEMORY: 127.3 GB
        PANIC: "Kernel panic - not syncing:"
          PID: 693
      COMMAND: "khungtaskd"
         TASK: ff4d1890260d4000  [THREAD_INFO: ff4d1890260d4000]
          CPU: 0
        STATE: TASK_RUNNING (PANIC)
 
 crash> ps|grep sos                                                                                                                                                                                                                                                                                                           
   449071  363440  31  ff4d189005f68000  IN   0.2  506428 314484  sos                                                                                                                                                                                                                                                         
   451043  363440  63  ff4d188943a9c000  IN   0.2  506428 314484  sos                                                                                                                                                                                                                                                         
   494099  363440  29  ff4d187f941f4000  UN   0.2  506428 314484  sos     
 
 [ 8457.517696] ------------[ cut here ]------------
 [ 8457.517698] NETDEV WATCHDOG: ens3f1 (ice): transmit queue 35 timed out
 [ 8457.517711] WARNING: CPU: 33 PID: 349 at net/sched/sch_generic.c:472 dev_watchdog+0x270/0x300
 [ 8457.517718] Modules linked in: binfmt_misc macvlan pci_pf_stub iavf vfio_pci vfio_virqfd vfio_iommu_type1 vfio vhost_net vhost vhost_iotlb tap tun xt_addrtype nf_conntrack_netlink ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_nat xt_CT tcp_diag inet_diag ip6t_MASQUERADE xt_mark ice(OE) xt_conntrack ipt_MASQUERADE nft_counter xt_comment nft_compat veth nft_chain_nat nf_tables overlay bridge 8021q garp mrp stp llc nfnetlink_cttimeout nfnetlink openvswitch nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ext4 mbcache jbd2 intel_rapl_msr iTCO_wdt iTCO_vendor_support dell_smbios wmi_bmof dell_wmi_descriptor dcdbas kvm_intel kvm irqbypass intel_rapl_common i10nm_edac nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp rapl ipmi_ssif intel_cstate intel_uncore dm_thin_pool pcspkr isst_if_mbox_pci dm_persistent_data dm_bio_prison dm_bufio isst_if_mmio isst_if_common mei_me i2c_i801 joydev mei intel_pmt wmi acpi_ipmi ipmi_si acpi_power_meter sctp ip6_udp_tunnel
 [ 8457.517770]  udp_tunnel ip_tables xfs libcrc32c i40e sd_mod t10_pi sg bnxt_re ib_uverbs ib_core crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel bnxt_en ahci libahci libata dm_multipath dm_mirror dm_region_hash dm_log dm_mod ipmi_devintf ipmi_msghandler fuse [last unloaded: ice]
 [ 8457.517784] Red Hat flags: eBPF/rawtrace
 [ 8457.517787] CPU: 33 PID: 349 Comm: ktimers/33 Kdump: loaded Tainted: G           OE    --------- -  - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1
 [ 8457.517789] Hardware name: Dell Inc. PowerEdge XR11/0P2RNT, BIOS 1.12.1 09/13/2023
 [ 8457.517790] RIP: 0010:dev_watchdog+0x270/0x300
 [ 8457.517793] Code: 17 00 e9 f0 fe ff ff 4c 89 e7 c6 05 c6 03 34 01 01 e8 14 43 fa ff 89 d9 4c 89 e6 48 c7 c7 90 37 98 9a 48 89 c2 e8 1d be 88 ff <0f> 0b eb ad 65 8b 05 05 13 fb 65 89 c0 48 0f a3 05 1b ab 36 01 73
 [ 8457.517795] RSP: 0018:ff7aeb55c73c7d78 EFLAGS: 00010286
 [ 8457.517797] RAX: 0000000000000000 RBX: 0000000000000023 RCX: 0000000000000001
 [ 8457.517798] RDX: 0000000000000000 RSI: ffffffff9a908557 RDI: 00000000ffffffff
 [ 8457.517799] RBP: 0000000000000021 R08: ffffffff9ae6b3a0 R09: 00080000000000ff
 [ 8457.517800] R10: 000000006443a462 R11: 0000000000000036 R12: ff4d187f4d1f4000
 [ 8457.517801] R13: ff4d187f4d20df00 R14: ff4d187f4d1f44a0 R15: 0000000000000080
 [ 8457.517803] FS:  0000000000000000(0000) GS:ff4d18967a040000(0000) knlGS:0000000000000000
 [ 8457.517804] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 [ 8457.517805] CR2: 00007fc47c649974 CR3: 00000019a441a005 CR4: 0000000000771ea0
 [ 8457.517806] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 [ 8457.517807] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 [ 8457.517808] PKRU: 55555554
 [ 8457.517810] Call Trace:
 [ 8457.517813]  ? test_ti_thread_flag.constprop.50+0x10/0x10
 [ 8457.517816]  ? test_ti_thread_flag.constprop.50+0x10/0x10
 [ 8457.517818]  call_timer_fn+0x32/0x1d0
 [ 8457.517822]  ? test_ti_thread_flag.constprop.50+0x10/0x10
 [ 8457.517825]  run_timer_softirq+0x1fc/0x640
 [ 8457.517828]  ? _raw_spin_unlock_irq+0x1d/0x60
 [ 8457.517833]  ? finish_task_switch+0xea/0x320
 [ 8457.517836]  ? __switch_to+0x10c/0x4d0
 [ 8457.517840]  __do_softirq+0xa5/0x33f
 [ 8457.517844]  run_timersd+0x61/0xb0
 [ 8457.517848]  smpboot_thread_fn+0x1c1/0x2b0
 [ 8457.517851]  ? smpboot_register_percpu_thread_cpumask+0x140/0x140
 [ 8457.517853]  kthread+0x151/0x170
 [ 8457.517856]  ? set_kthread_struct+0x50/0x50
 [ 8457.517858]  ret_from_fork+0x1f/0x40
 [ 8457.517861] ---[ end trace 0000000000000002 ]---
 [ 8458.520445] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0x99, HW_HEAD: 0x14, NTU: 0x15, INT: 0x0
 [ 8458.520451] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35
 [ 8506.139246] ice 0000:8a:00.1: PTP reset successful
 [ 8506.437047] ice 0000:8a:00.1: VSI rebuilt. VSI index 0, type ICE_VSI_PF
 [ 8506.445482] ice 0000:8a:00.1: VSI rebuilt. VSI index 1, type ICE_VSI_CTRL
 [ 8540.459707] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0xe3, HW_HEAD: 0xe7, NTU: 0xe8, INT: 0x0
 [ 8540.459714] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35
 [ 8563.891356] ice 0000:8a:00.1: PTP reset successful
 
 The second vmcore on the same node shows an issue with the SSD drive:
 
 $ crash vmcore-2 tmp/vmlinux
 
       KERNEL: tmp/vmlinux  [TAINTED]
     DUMPFILE: vmcore-2  [PARTIAL DUMP]
         CPUS: 64
         DATE: Thu Mar  7 14:29:31 CET 2024
       UPTIME: 1 days, 07:19:52
 LOAD AVERAGE: 25.55, 26.42, 28.30
        TASKS: 5409
     NODENAME: aaa.bbb.ccc
      RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64
      VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023
      MACHINE: x86_64  (1500 Mhz)
       MEMORY: 127.3 GB
        PANIC: "Kernel panic - not syncing:"
          PID: 696
      COMMAND: "khungtaskd"
         TASK: ff2b35ed48d30000  [THREAD_INFO: ff2b35ed48d30000]
          CPU: 34
        STATE: TASK_RUNNING (PANIC)
 
 crash> ps |grep sos
   719784  718369  62  ff2b35ff00830000  IN   0.4 1215636 563388  sos
   721740  718369  61  ff2b3605579f8000  IN   0.4 1215636 563388  sos
   721742  718369  63  ff2b35fa5eb9c000  IN   0.4 1215636 563388  sos
   721744  718369  30  ff2b3603367fc000  IN   0.4 1215636 563388  sos
   721746  718369  29  ff2b360557944000  IN   0.4 1215636 563388  sos
   743356  718369  62  ff2b36042c8e0000  IN   0.4 1215636 563388  sos
   743818  718369  29  ff2b35f6186d0000  IN   0.4 1215636 563388  sos
   748518  718369  61  ff2b3602cfb84000  IN   0.4 1215636 563388  sos
   748884  718369  62  ff2b360713418000  UN   0.4 1215636 563388  sos
 
 crash> dmesg
 
 [111871.309883] ata3.00: exception Emask 0x0 SAct 0x3ff8 SErr 0x0 action 0x6 frozen
 [111871.309889] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309891] ata3.00: cmd 61/40:18:28:47:4b/00:00:00:00:00/40 tag 3 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309895] ata3.00: status: { DRDY }
 [111871.309897] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309904] ata3.00: cmd 61/40:20:68:47:4b/00:00:00:00:00/40 tag 4 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309908] ata3.00: status: { DRDY }
 [111871.309909] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309910] ata3.00: cmd 61/40:28:a8:47:4b/00:00:00:00:00/40 tag 5 ncq dma 32768 out
                          res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309913] ata3.00: status: { DRDY }
 [111871.309914] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309915] ata3.00: cmd 61/40:30:e8:47:4b/00:00:00:00:00/40 tag 6 ncq dma 32768 out
                          res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309918] ata3.00: status: { DRDY }
 [111871.309919] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309919] ata3.00: cmd 61/70:38:48:37:2b/00:00:1c:00:00/40 tag 7 ncq dma 57344 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309922] ata3.00: status: { DRDY }
 [111871.309923] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309924] ata3.00: cmd 61/20:40:78:29:0c/00:00:19:00:00/40 tag 8 ncq dma 16384 out
                          res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309927] ata3.00: status: { DRDY }
 [111871.309928] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309929] ata3.00: cmd 61/08:48:08:0c:c0/00:00:1c:00:00/40 tag 9 ncq dma 4096 out
                          res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309932] ata3.00: status: { DRDY }
 [111871.309933] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309934] ata3.00: cmd 61/40:50:28:48:4b/00:00:00:00:00/40 tag 10 ncq dma 32768 out
                          res 40/00:01:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309937] ata3.00: status: { DRDY }
 [111871.309938] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309939] ata3.00: cmd 61/40:58:68:48:4b/00:00:00:00:00/40 tag 11 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309942] ata3.00: status: { DRDY }
 [111871.309943] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309944] ata3.00: cmd 61/40:60:a8:48:4b/00:00:00:00:00/40 tag 12 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309946] ata3.00: status: { DRDY }
 [111871.309947] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309948] ata3.00: cmd 61/40:68:e8:48:4b/00:00:00:00:00/40 tag 13 ncq dma 32768 out
                          res 40/00:01:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309951] ata3.00: status: { DRDY }
 [111871.309953] ata3: hard resetting link
 ...
 ...
 ...
 [112789.787310] INFO: task sos:748884 blocked for more than 600 seconds.                                                                                                                                                                                                                                                     
 [112789.787314]       Tainted: G           OE    --------- -  - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1                                                                                                                                                                                                                      
 [112789.787316] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.                                                                                                                                                                                                                                    
 [112789.787316] task:sos             state:D stack:    0 pid:748884 ppid:718369 flags:0x00084080                                                                                                                                                                                                                             
 [112789.787320] Call Trace:                                                                                                                                                                                                                                                                                                  
 [112789.787323]  __schedule+0x37b/0x8e0                                                                                                                                                                                                                                                                                      
 [112789.787330]  schedule+0x6c/0x120                                                                                                                                                                                                                                                                                         
 [112789.787333]  schedule_timeout+0x2b7/0x410                                                                                                                                                                                                                                                                                
 [112789.787336]  ? enqueue_entity+0x130/0x790                                                                                                                                                                                                                                                                                
 [112789.787340]  wait_for_completion+0x84/0xf0                                                                                                                                                                                                                                                                               
 [112789.787343]  flush_work+0x120/0x1d0                                                                                                                                                                                                                                                                                      
 [112789.787347]  ? flush_workqueue_prep_pwqs+0x130/0x130                                                                                                                                                                                                                                                                     
 [112789.787350]  schedule_on_each_cpu+0xa7/0xe0                                                                                                                                                                                                                                                                              
 [112789.787353]  vmstat_refresh+0x22/0xa0                                                                                                                                                                                                                                                                                    
 [112789.787357]  proc_sys_call_handler+0x174/0x1d0                                                                                                                                                                                                                                                                           
 [112789.787361]  vfs_read+0x91/0x150                                                                                                                                                                                                                                                                                         
 [112789.787364]  ksys_read+0x52/0xc0                                                                                                                                                                                                                                                                                         
 [112789.787366]  do_syscall_64+0x87/0x1b0                                                                                                                                                                                                                                                                                    
 [112789.787369]  entry_SYSCALL_64_after_hwframe+0x61/0xc6                                                                                                                                                                                                                                                                    
 [112789.787372] RIP: 0033:0x7f2dca8c2ab4                                                                                                                                                                                                                                                                                     
 [112789.787378] Code: Unable to access opcode bytes at RIP 0x7f2dca8c2a8a.                                                                                                                                                                                                                                                   
 [112789.787378] RSP: 002b:00007f2dbbffc5e0 EFLAGS: 00000246 ORIG_RAX: 0000000000000000                                                                                                                                                                                                                                       
 [112789.787380] RAX: ffffffffffffffda RBX: 0000000000000008 RCX: 00007f2dca8c2ab4                                                                                                                                                                                                                                            
 [112789.787382] RDX: 0000000000004000 RSI: 00007f2db402b5a0 RDI: 0000000000000008                                                                                                                                                                                                                                            
 [112789.787383] RBP: 00007f2db402b5a0 R08: 0000000000000000 R09: 00007f2dcace27bb                                                                                                                                                                                                                                            
 [112789.787383] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000004000                                                                                                                                                                                                                                            
 [112789.787384] R13: 0000000000000008 R14: 00007f2db402b5a0 R15: 00007f2da4001a90                                                                                                                                                                                                                                            
 [112789.787418] NMI backtrace for cpu 34    {code}
Status: CLOSED
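One detail worth noting in the issue above: the reproduction pins `sos report` to CPUs 29-31,61-63 via `taskset`, and the quoted kernel command line places those CPUs inside the isolated set (`isolcpus=managed_irq,3-31,35-63`), while housekeeping is confined to `systemd.cpu_affinity=0,1,2,32,33,34`. A small sketch, using an abbreviated copy of that command line, of how one might pull out the isolation parameters before choosing CPUs for heavyweight diagnostics (on a live node the string would come from /proc/cmdline):

```shell
#!/usr/bin/env bash
# Hedged sketch: extract CPU-isolation parameters from a kernel command line.
# The string below is abbreviated from the one in the report; on a live node
# it would be read with: cat /proc/cmdline
cmdline='skew_tick=1 nohz=on rcu_nocbs=3-31,35-63 systemd.cpu_affinity=0,1,2,32,33,34 isolcpus=managed_irq,3-31,35-63 nohz_full=3-31,35-63'

# Print only the parameters that define isolated vs housekeeping CPUs
for kv in $cmdline; do
  case "$kv" in
    isolcpus=*|nohz_full=*|systemd.cpu_affinity=*) echo "$kv" ;;
  esac
done
```

Pinning diagnostics to the housekeeping CPUs (0-2,32-34 here) rather than the isolated set would at least avoid competing with latency-sensitive work on an RT node; whether that would have prevented this particular crash is not established by the report.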
#OCPBUGS-33157 issue, 40 hours ago: IPv6 metal-ipi jobs: master-bmh-update losing access to API (Verified)
Issue 15978085: IPv6 metal-ipi jobs: master-bmh-update losing access to API
Description: The last 4 IPv6 jobs are failing on the same error
 
 https://prow.ci.openshift.org/job-history/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-ipv6
 master-bmh-update.log loses access to the API when trying to get/update the BMH details
 
 https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-ipv6/1785492737169035264
 
 
 
 {noformat}
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[4663]: Waiting for 3 masters to become provisioned
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.531242   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.531808   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.533281   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.533630   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.535180   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: The connection to the server api-int.ostest.test.metalkube.org:6443 was refused - did you specify the right host or port?
 {noformat}
Status: Verified
{noformat}
May 01 02:49:40 localhost.localdomain master-bmh-update.sh[12448]: E0501 02:49:40.429468   12448 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
{noformat}
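The failure mode in these excerpts is the script calling the API while the apiserver on the node is still (re)starting, so the very first "connection refused" aborts the step. A minimal sketch of a retry wrapper that tolerates a temporarily refused connection (assumption: this is illustrative only, not the actual master-bmh-update.sh logic):

```shell
# Hypothetical sketch, not the real master-bmh-update.sh: retry a command
# until it succeeds instead of aborting on the first "connection refused".
retry() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      return 1            # apiserver never came back; give up
    fi
    sleep 1               # the real unit might back off exponentially
  done
}

# usage (hypothetical): retry 30 oc get bmh -n openshift-machine-api
```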
#OCPBUGS-32375 issue 10 days ago Unsuccessful cluster installation with 4.15 nightlies on s390x using ABI CLOSED
Issue 15945005: Unsuccessful cluster installation with 4.15 nightlies on s390x using ABI
Description: When using the latest s390x release builds from the 4.15 nightly stream for an Agent-Based Installation of SNO on IBM Z KVM, the installation fails at the end while watching cluster operators, even though the DNS and HAProxy configurations are correct: the same setup works with the 4.15.x stable release image builds.
 
 Below is the error encountered multiple times when the "release:s390x-latest" image is used while booting the cluster. That image is injected at boot through OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE, while the installer binary is fetched from the latest stable builds at [https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/latest/], which is around version 4.15.x.
 
 *release-image:*
 {code:java}
 registry.build01.ci.openshift.org/ci-op-cdkdqnqn/release@sha256:c6eb4affa5c44d2ad220d7064e92270a30df5f26d221e35664f4d5547a835617
 {code}
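The flow described above can be sketched as follows; this is a hedged illustration (the output directory and binary path are assumptions, only the override variable and pullspec come from the report), not a reproduction recipe:

```shell
# Illustrative only -- directory and binary paths are assumptions.
# The installer binary comes from the stable mirror, while the payload
# is pinned to the CI release image via the override variable.
export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE="registry.build01.ci.openshift.org/ci-op-cdkdqnqn/release@sha256:c6eb4affa5c44d2ad220d7064e92270a30df5f26d221e35664f4d5547a835617"
./openshift-install agent create image --dir /root/agent-sno --log-level debug
```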
 
 *PROW CI Build :* [https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_release/47965/rehearse-47965-periodic-ci-openshift-multiarch-master-nightly-4.15-e2e-agent-ibmz-sno/1780162365824700416] 
 
 *Error:* 
 {code:java}
 '/root/agent-sno/openshift-install wait-for install-complete --dir /root/agent-sno/ --log-level debug'
 Warning: Permanently added '128.168.142.71' (ED25519) to the list of known hosts.
 level=debug msg=OpenShift Installer 4.15.8
 level=debug msg=Built from commit f4f5d0ee0f7591fd9ddf03ac337c804608102919
 level=debug msg=Loading Install Config...
 level=debug msg=  Loading SSH Key...
 level=debug msg=  Loading Base Domain...
 level=debug msg=    Loading Platform...
 level=debug msg=  Loading Cluster Name...
 level=debug msg=    Loading Base Domain...
 level=debug msg=    Loading Platform...
 level=debug msg=  Loading Pull Secret...
 level=debug msg=  Loading Platform...
 level=debug msg=Loading Agent Config...
 level=debug msg=Using Agent Config loaded from state file
 level=warning msg=An agent configuration was detected but this command is not the agent wait-for command
 level=info msg=Waiting up to 40m0s (until 10:15AM UTC) for the cluster at https://api.agent-sno.abi-ci.com:6443 to initialize...
 W0416 09:35:51.793770    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:35:51.793827    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:35:53.127917    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:35:53.127946    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:35:54.760896    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:35:54.761058    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:36:00.790136    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:36:00.790175    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 [... the same Warning/Error reflector pair repeats every 8-60 s from 09:36:08 through 10:03:31, all "dial tcp 10.244.64.4:6443: connect: connection refused" ...]
 W0416 10:04:21.566895    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:04:21.566931    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:04:52.754047    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:04:52.754221    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:05:24.673675    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:05:24.673724    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:06:17.608482    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:06:17.608598    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:06:58.215116    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:06:58.215262    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:07:46.578262    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:07:46.578392    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:08:18.239710    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:08:18.239830    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:09:06.947178    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:09:06.947239    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:10:00.261401    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:10:00.261486    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:10:59.363041    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:10:59.363113    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:11:32.205551    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:11:32.205612    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:12:24.956052    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:12:24.956147    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:12:55.353860    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:12:55.354004    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:13:39.223095    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:13:39.223170    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:14:25.018278    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:14:25.018404    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 10:15:17.227351    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:15:17.227424    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 level=error msg=Attempted to gather ClusterOperator status after wait failure: listing ClusterOperator objects: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 10.244.64.4:6443: connect: connection refused
 level=error msg=Cluster initialization failed because one or more operators are not functioning properly.
 level=error msg=The cluster should be accessible for troubleshooting as detailed in the documentation linked below,
 level=error msg=https://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html
 level=error msg=The 'wait-for install-complete' subcommand can then be used to continue the installation
 level=error msg=failed to initialize the cluster: timed out waiting for the condition
 {"component":"entrypoint","error":"wrapped process failed: exit status 6","file":"k8s.io/test-infra/prow/entrypoint/run.go:84","func":"k8s.io/test-infra/prow/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-04-16T10:15:51Z"}
 error: failed to execute wrapped command: exit status 6 {code}
Status: CLOSED
#OCPBUGS-31763issue10 days agogcp install cluster creation fails after 30-40 minutes New
Issue 15921939: gcp install cluster creation fails after 30-40 minutes
Description: Component Readiness has found a potential regression in install should succeed: overall.  I see this on various different platforms, but I started digging into GCP failures.  No installer log bundle is created, which seriously hinders my ability to dig further.
 
 Bootstrap succeeds, and then 30 minutes after waiting for cluster creation, it dies.
 
 From [https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-gcp-sdn-serial/1775871000018161664]
 
 search.ci tells me this affects nearly 10% of jobs on GCP:
 
 [https://search.dptools.openshift.org/?search=Attempted+to+gather+ClusterOperator+status+after+installation+failure%3A+listing+ClusterOperator+objects.*connection+refused&maxAge=168h&context=1&type=bug%2Bissue%2Bjunit&name=.*4.16.*gcp.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job]
 
  
 {code:java}
 time="2024-04-04T13:27:50Z" level=info msg="Waiting up to 40m0s (until 2:07PM UTC) for the cluster at https://api.ci-op-n3pv5pn3-4e5f3.XXXXXXXXXXXXXXXXXXXXXX:6443 to initialize..."
 time="2024-04-04T14:07:50Z" level=error msg="Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get \"https://api.ci-op-n3pv5pn3-4e5f3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/config.openshift.io/v1/clusteroperators\": dial tcp 35.238.130.20:6443: connect: connection refused"
 time="2024-04-04T14:07:50Z" level=error msg="Cluster initialization failed because one or more operators are not functioning properly.\nThe cluster should be accessible for troubleshooting as detailed in the documentation linked below,\nhttps://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html\nThe 'wait-for install-complete' subcommand can then be used to continue the installation"
 time="2024-04-04T14:07:50Z" level=error msg="failed to initialize the cluster: timed out waiting for the condition" {code}
  
 
 Probability of significant regression: 99.44%
 
 Sample (being evaluated) Release: 4.16
 Start Time: 2024-03-29T00:00:00Z
 End Time: 2024-04-04T23:59:59Z
 Success Rate: 68.75%
 Successes: 11
 Failures: 5
 Flakes: 0
 
 Base (historical) Release: 4.15
 Start Time: 2024-02-01T00:00:00Z
 End Time: 2024-02-28T23:59:59Z
 Success Rate: 96.30%
 Successes: 52
 Failures: 2
 Flakes: 0
 
 View the test details report at [https://sippy.dptools.openshift.org/sippy-ng/component_readiness/test_details?arch=amd64&arch=amd64&baseEndTime=2024-02-28%2023%3A59%3A59&baseRelease=4.15&baseStartTime=2024-02-01%2000%3A00%3A00&capability=Other&component=Installer%20%2F%20openshift-installer&confidence=95&environment=sdn%20upgrade-micro%20amd64%20gcp%20standard&excludeArches=arm64%2Cheterogeneous%2Cppc64le%2Cs390x&excludeClouds=openstack%2Cibmcloud%2Clibvirt%2Covirt%2Cunknown&excludeVariants=hypershift%2Cosd%2Cmicroshift%2Ctechpreview%2Csingle-node%2Cassisted%2Ccompact&groupBy=cloud%2Carch%2Cnetwork&ignoreDisruption=true&ignoreMissing=false&minFail=3&network=sdn&network=sdn&pity=5&platform=gcp&platform=gcp&sampleEndTime=2024-04-04%2023%3A59%3A59&sampleRelease=4.16&sampleStartTime=2024-03-29%2000%3A00%3A00&testId=cluster%20install%3A0cb1bb27e418491b1ffdacab58c5c8c0&testName=install%20should%20succeed%3A%20overall&upgrade=upgrade-micro&upgrade=upgrade-micro&variant=standard&variant=standard]
Status: New
#OCPBUGS-17183issue2 days ago[BUG] Assisted installer fails to create bond with active backup for single node installation New
Issue 15401516: [BUG] Assisted installer fails to create bond with active backup for single node installation
Description: Description of problem:
 {code:none}
 The assisted installer always fails to create a bond in active-backup mode using the nmstate YAML below; the errors are: 
 
 ~~~ 
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Unable to reach API_URL's https endpoint at https://xx.xx.32.40:6443/version
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Checking validity of <hostname> of type API_INT_URL 
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Successfully resolved API_INT_URL <hostname> 
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Unable to reach API_INT_URL's https endpoint at https://xx.xx.32.40:6443/version
 Jul 26 07:12:23 <hostname> bootkube.sh[12960]: Still waiting for the Kubernetes API: Get "https://localhost:6443/readyz": dial tcp [::1]:6443: connect: connection refused
 Jul 26 07:15:15 <hostname> bootkube.sh[15706]: The connection to the server <hostname>:6443 was refused - did you specify the right host or port? 
 Jul 26 07:15:15 <hostname> bootkube.sh[15706]: The connection to the server <hostname>:6443 was refused - did you specify the right host or port? 
  ~~~ 
 
 Here, <hostname> is the actual hostname of the node. 
 
 Adding sosreport and nmstate yaml file here : https://drive.google.com/drive/u/0/folders/19dNzKUPIMmnUls2pT_stuJxr2Dxdi5eb{code}
 Version-Release number of selected component (if applicable):
 {code:none}
 4.12 
 Dell 16g Poweredge R660{code}
 How reproducible:
 {code:none}
 Always at customer side{code}
 Steps to Reproduce:
 {code:none}
 1. Open Assisted installer UI (console.redhat.com -> assisted installer) 
 2. Add the network configuration below for host1:
 
 -----------
 interfaces:
 - name: bond99
   type: bond
   state: up
   ipv4:
     address:
     - ip: xx.xx.32.40
       prefix-length: 24
     enabled: true
   link-aggregation:
     mode: active-backup
     options:
       miimon: '140'
     port:
     - eno12399
     - eno12409
 dns-resolver:
   config:
     search:
     - xxxx
     server:
     - xx.xx.xx.xx
 routes:
   config:
     - destination: 0.0.0.0/0
       metric: 150
       next-hop-address: xx.xx.xx.xx
       next-hop-interface: bond99
       table-id: 254    
 -----------
 
 3. Enter the MAC addresses of the interfaces in the corresponding fields. 
 4. Generate the ISO and boot the node. The node cannot be reached via ping or ssh. This happens every time and is reproducible.
 5. As there was no way to check what was happening on the node (ssh was not working), we reset the root password and could see that the IP address was present on the bond, yet ping/ssh still did not work.
 6. After multiple reboots, the customer was able to ssh/ping and provided a sosreport; the journal logs in it show the errors quoted above.  
  {code}
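 The Additional info below notes that the same install succeeds when the bond uses round-robin mode. A minimal sketch of the link-aggregation stanza from step 2 switched to round-robin (balance-rr is the kernel/nmstate name for that mode; the miimon value and port names are carried over from the report) would be:

 ```yaml
 # Hypothetical variant of the reported bond config: only the mode changes.
 # balance-rr is the nmstate/kernel bonding name for round-robin.
 link-aggregation:
   mode: balance-rr
   options:
     miimon: '140'
   port:
   - eno12399
   - eno12409
 ```

 Per the reporter, this round-robin variant installs successfully on 4.12, which narrows the failure to active-backup handling specifically.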
 Actual results:
 {code:none}
 Fails to install. Seems there is some issue with networking.{code}
 Expected results:
 {code:none}
 Able to proceed with installation without above mentioned issues{code}
 Additional info:
 {code:none}
 - The installation works with round-robin bond mode in 4.12. 
 - The installation also works with active-backup in 4.10. 
 - An active-backup bond in 4.12 fails.{code}
Status: New
#OCPBUGS-32091issue4 weeks agoCAPI-Installer leaks processes during unsuccessful installs MODIFIED
ERROR Attempted to gather debug logs after installation failure: failed to create SSH client: ssh: handshake failed: ssh: disconnect, reason 2: Too many authentication failures
ERROR Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get "https://api.gpei-0515.qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.134.9.157:6443: connect: connection refused
ERROR Bootstrap failed to complete: Get "https://api.gpei-0515.qe.devcluster.openshift.com:6443/version": dial tcp 18.222.8.23:6443: connect: connection refused

... 1 lines not shown

periodic-ci-openshift-multiarch-master-nightly-4.15-upgrade-from-stable-4.14-ocp-e2e-aws-ovn-heterogeneous-upgrade (all) - 196 runs, 11% failed, 762% of failures match = 82% impact
#1791942997986775040junitAbout an hour ago
May 18 23:03:14.329 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-51-129.us-west-1.compute.internal" not ready since 2024-05-18 23:03:13 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 23:03:47.503 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-51-129.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 23:03:37.449198       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 23:03:37.449528       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716073417 cert, and key in /tmp/serving-cert-3408908902/serving-signer.crt, /tmp/serving-cert-3408908902/serving-signer.key\nStaticPodsDegraded: I0518 23:03:38.110380       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 23:03:38.126127       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-51-129.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 23:03:38.126247       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 23:03:38.140668       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3408908902/tls.crt::/tmp/serving-cert-3408908902/tls.key"\nStaticPodsDegraded: F0518 23:03:38.405427       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 23:09:26.991 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-77-87.us-west-1.compute.internal" not ready since 2024-05-18 23:07:26 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791920616039780352junit3 hours ago
May 18 21:41:24.826 - 8s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-0-183.us-east-2.compute.internal" not ready since 2024-05-18 21:41:13 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 21:41:33.278 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-0-183.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 21:41:25.790595       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 21:41:25.790972       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716068485 cert, and key in /tmp/serving-cert-2156789065/serving-signer.crt, /tmp/serving-cert-2156789065/serving-signer.key\nStaticPodsDegraded: I0518 21:41:26.103339       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 21:41:26.121285       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-0-183.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 21:41:26.121403       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 21:41:26.146119       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2156789065/tls.crt::/tmp/serving-cert-2156789065/tls.key"\nStaticPodsDegraded: F0518 21:41:26.394224       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1791905650578558976junit4 hours ago
May 18 20:27:40.494 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-51-233.us-west-1.compute.internal" not ready since 2024-05-18 20:25:40 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 20:28:10.392 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-51-233.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 20:28:01.431959       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 20:28:01.432336       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716064081 cert, and key in /tmp/serving-cert-4011770785/serving-signer.crt, /tmp/serving-cert-4011770785/serving-signer.key\nStaticPodsDegraded: I0518 20:28:01.907120       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 20:28:01.916400       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-51-233.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 20:28:01.916541       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 20:28:01.928789       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4011770785/tls.crt::/tmp/serving-cert-4011770785/tls.key"\nStaticPodsDegraded: F0518 20:28:02.239593       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 18 20:33:47.504 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-75-242.us-west-1.compute.internal" not ready since 2024-05-18 20:31:47 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791905650578558976junit4 hours ago
May 18 20:40:15.582 - 16s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-106-106.us-west-1.compute.internal" not ready since 2024-05-18 20:39:56 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 20:40:32.477 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-106-106.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 20:40:23.649971       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 20:40:23.650169       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716064823 cert, and key in /tmp/serving-cert-1811220386/serving-signer.crt, /tmp/serving-cert-1811220386/serving-signer.key\nStaticPodsDegraded: I0518 20:40:24.065967       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 20:40:24.067530       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-106-106.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 20:40:24.067672       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 20:40:24.068362       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1811220386/tls.crt::/tmp/serving-cert-1811220386/tls.key"\nStaticPodsDegraded: F0518 20:40:24.322555       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1791950738889379840 junit 55 minutes ago
May 18 23:38:02.521 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-58-21.us-west-1.compute.internal" not ready since 2024-05-18 23:37:51 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 23:38:16.772 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-58-21.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 23:38:06.794737       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 23:38:06.794999       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716075486 cert, and key in /tmp/serving-cert-3507528311/serving-signer.crt, /tmp/serving-cert-3507528311/serving-signer.key\nStaticPodsDegraded: I0518 23:38:07.374071       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 23:38:07.389248       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-58-21.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 23:38:07.389354       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 23:38:07.403012       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3507528311/tls.crt::/tmp/serving-cert-3507528311/tls.key"\nStaticPodsDegraded: F0518 23:38:07.816922       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 18 23:44:23.532 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-90-118.us-west-1.compute.internal" not ready since 2024-05-18 23:44:15 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791890533308698624 junit 5 hours ago
May 18 19:41:59.129 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-49-137.us-west-2.compute.internal" not ready since 2024-05-18 19:41:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 19:42:12.942 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-49-137.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 19:42:07.879286       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 19:42:07.892441       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716061327 cert, and key in /tmp/serving-cert-384084712/serving-signer.crt, /tmp/serving-cert-384084712/serving-signer.key\nStaticPodsDegraded: I0518 19:42:08.318982       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 19:42:08.334164       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-49-137.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 19:42:08.334331       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 19:42:08.355905       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-384084712/tls.crt::/tmp/serving-cert-384084712/tls.key"\nStaticPodsDegraded: F0518 19:42:08.681087       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 18 19:48:09.141 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-138.us-west-2.compute.internal" not ready since 2024-05-18 19:47:57 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791890533308698624 junit 5 hours ago
May 18 19:54:17.423 - 19s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-76-197.us-west-2.compute.internal" not ready since 2024-05-18 19:54:02 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 19:54:36.693 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-76-197.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 19:54:26.828638       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 19:54:26.829030       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716062066 cert, and key in /tmp/serving-cert-3745505787/serving-signer.crt, /tmp/serving-cert-3745505787/serving-signer.key\nStaticPodsDegraded: I0518 19:54:27.386647       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 19:54:27.392402       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-76-197.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 19:54:27.392560       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 19:54:27.409906       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3745505787/tls.crt::/tmp/serving-cert-3745505787/tls.key"\nStaticPodsDegraded: F0518 19:54:27.686093       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1791936008485343232 junit 2 hours ago
May 18 22:33:56.612 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-30.ec2.internal" not ready since 2024-05-18 22:31:56 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 22:34:28.590 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-100-30.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 22:34:19.465138       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 22:34:19.468795       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716071659 cert, and key in /tmp/serving-cert-894628068/serving-signer.crt, /tmp/serving-cert-894628068/serving-signer.key\nStaticPodsDegraded: I0518 22:34:19.787069       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 22:34:19.798357       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-30.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 22:34:19.798518       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 22:34:19.811121       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-894628068/tls.crt::/tmp/serving-cert-894628068/tls.key"\nStaticPodsDegraded: F0518 22:34:20.003776       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 18 22:40:07.093 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-64-153.ec2.internal" not ready since 2024-05-18 22:39:54 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791867683315126272 junit 7 hours ago
May 18 18:07:08.654 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-116-38.us-west-2.compute.internal" not ready since 2024-05-18 18:06:57 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 18:07:21.984 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-116-38.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 18:07:12.121704       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 18:07:12.122264       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716055632 cert, and key in /tmp/serving-cert-448925304/serving-signer.crt, /tmp/serving-cert-448925304/serving-signer.key\nStaticPodsDegraded: I0518 18:07:12.688867       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 18:07:12.701568       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-116-38.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 18:07:12.701666       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 18:07:12.719658       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-448925304/tls.crt::/tmp/serving-cert-448925304/tls.key"\nStaticPodsDegraded: F0518 18:07:13.044792       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 18:12:50.303 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-119-108.us-west-2.compute.internal" not ready since 2024-05-18 18:10:50 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791912983476047872 junit 4 hours ago
May 18 21:03:41.231 - 16s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-11-83.us-west-1.compute.internal" not ready since 2024-05-18 21:03:34 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 21:03:57.829 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-11-83.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 21:03:48.073575       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 21:03:48.073950       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716066228 cert, and key in /tmp/serving-cert-2012026414/serving-signer.crt, /tmp/serving-cert-2012026414/serving-signer.key\nStaticPodsDegraded: I0518 21:03:48.485263       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 21:03:48.494883       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-11-83.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 21:03:48.495068       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 21:03:48.507103       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2012026414/tls.crt::/tmp/serving-cert-2012026414/tls.key"\nStaticPodsDegraded: F0518 21:03:48.705313       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 21:09:52.169 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-74-231.us-west-1.compute.internal" not ready since 2024-05-18 21:09:36 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791912983476047872 junit 4 hours ago
May 18 21:16:03.382 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-12-187.us-west-1.compute.internal" not ready since 2024-05-18 21:15:52 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 21:16:18.397 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-12-187.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 21:16:07.167189       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 21:16:07.167654       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716066967 cert, and key in /tmp/serving-cert-1852525625/serving-signer.crt, /tmp/serving-cert-1852525625/serving-signer.key\nStaticPodsDegraded: I0518 21:16:07.655746       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 21:16:07.676849       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-12-187.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 21:16:07.676947       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 21:16:07.703468       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1852525625/tls.crt::/tmp/serving-cert-1852525625/tls.key"\nStaticPodsDegraded: F0518 21:16:07.887183       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1791852722463444992 junit 7 hours ago
May 18 17:05:29.566 - 22s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-43-84.ec2.internal" not ready since 2024-05-18 17:03:29 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 17:05:52.311 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-43-84.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 17:05:43.112889       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 17:05:43.114335       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716051943 cert, and key in /tmp/serving-cert-3644573898/serving-signer.crt, /tmp/serving-cert-3644573898/serving-signer.key\nStaticPodsDegraded: I0518 17:05:43.535773       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 17:05:43.547126       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-43-84.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 17:05:43.547254       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 17:05:43.569560       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3644573898/tls.crt::/tmp/serving-cert-3644573898/tls.key"\nStaticPodsDegraded: F0518 17:05:44.230958       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 17:11:22.398 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-53-105.ec2.internal" not ready since 2024-05-18 17:10:59 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791898435188690944 junit 4 hours ago
May 18 20:08:48.208 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-22-238.us-west-2.compute.internal" not ready since 2024-05-18 20:08:39 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 20:09:04.189 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-22-238.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 20:08:53.822606       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 20:08:53.823161       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716062933 cert, and key in /tmp/serving-cert-3703413004/serving-signer.crt, /tmp/serving-cert-3703413004/serving-signer.key\nStaticPodsDegraded: I0518 20:08:54.131366       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 20:08:54.142680       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-22-238.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 20:08:54.142812       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 20:08:54.166060       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3703413004/tls.crt::/tmp/serving-cert-3703413004/tls.key"\nStaticPodsDegraded: F0518 20:08:54.479248       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 20:14:54.631 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-92-127.us-west-2.compute.internal" not ready since 2024-05-18 20:14:43 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791898435188690944 junit 4 hours ago
May 18 20:20:47.004 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-88-166.us-west-2.compute.internal" not ready since 2024-05-18 20:18:46 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 20:21:19.440 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-88-166.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 20:21:08.681247       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 20:21:08.681528       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716063668 cert, and key in /tmp/serving-cert-945564556/serving-signer.crt, /tmp/serving-cert-945564556/serving-signer.key\nStaticPodsDegraded: I0518 20:21:09.241555       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 20:21:09.266168       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-88-166.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 20:21:09.266286       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 20:21:09.286233       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-945564556/tls.crt::/tmp/serving-cert-945564556/tls.key"\nStaticPodsDegraded: F0518 20:21:09.468869       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1791875418542837760 junit 6 hours ago
May 18 18:34:36.452 - 43s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-77-104.us-east-2.compute.internal" not ready since 2024-05-18 18:32:36 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 18:35:20.106 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-77-104.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 18:35:10.488363       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 18:35:10.488666       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716057310 cert, and key in /tmp/serving-cert-658077141/serving-signer.crt, /tmp/serving-cert-658077141/serving-signer.key\nStaticPodsDegraded: I0518 18:35:10.812183       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 18:35:10.829975       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-77-104.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 18:35:10.830167       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 18:35:10.848803       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-658077141/tls.crt::/tmp/serving-cert-658077141/tls.key"\nStaticPodsDegraded: F0518 18:35:11.046425       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 18:41:13.409 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-17-128.us-east-2.compute.internal" not ready since 2024-05-18 18:40:59 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791829933408915456 junit 9 hours ago
May 18 15:35:08.212 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-43-212.us-east-2.compute.internal" not ready since 2024-05-18 15:33:08 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 15:35:39.796 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-43-212.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 15:35:29.863497       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 15:35:29.863842       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716046529 cert, and key in /tmp/serving-cert-902421706/serving-signer.crt, /tmp/serving-cert-902421706/serving-signer.key\nStaticPodsDegraded: I0518 15:35:30.355273       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 15:35:30.366168       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-43-212.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 15:35:30.366277       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 15:35:30.377748       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-902421706/tls.crt::/tmp/serving-cert-902421706/tls.key"\nStaticPodsDegraded: F0518 15:35:30.577702       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 15:41:10.743 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-13-227.us-east-2.compute.internal" not ready since 2024-05-18 15:41:10 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791829933408915456 junit 9 hours ago
May 18 15:47:16.094 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-125-138.us-east-2.compute.internal" not ready since 2024-05-18 15:47:08 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 15:47:30.783 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-125-138.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 15:47:20.608764       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 15:47:20.609194       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716047240 cert, and key in /tmp/serving-cert-3870507128/serving-signer.crt, /tmp/serving-cert-3870507128/serving-signer.key\nStaticPodsDegraded: I0518 15:47:21.497293       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 15:47:21.510500       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-125-138.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 15:47:21.510672       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 15:47:21.526190       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3870507128/tls.crt::/tmp/serving-cert-3870507128/tls.key"\nStaticPodsDegraded: F0518 15:47:21.684448       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1791846880477450240 junit 8 hours ago
May 18 16:35:07.584 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-54-180.us-east-2.compute.internal" not ready since 2024-05-18 16:33:07 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 16:35:41.558 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-54-180.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 16:35:29.754503       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 16:35:29.754869       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716050129 cert, and key in /tmp/serving-cert-211705028/serving-signer.crt, /tmp/serving-cert-211705028/serving-signer.key\nStaticPodsDegraded: I0518 16:35:30.441409       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 16:35:30.456977       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-54-180.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 16:35:30.457059       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 16:35:30.483780       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-211705028/tls.crt::/tmp/serving-cert-211705028/tls.key"\nStaticPodsDegraded: F0518 16:35:30.578396       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 18 16:41:15.063 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-74-221.us-east-2.compute.internal" not ready since 2024-05-18 16:41:05 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791846880477450240 junit 8 hours ago
May 18 16:47:00.241 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-90-250.us-east-2.compute.internal" not ready since 2024-05-18 16:46:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 16:47:14.448 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-90-250.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 16:47:03.283617       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 16:47:03.284027       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716050823 cert, and key in /tmp/serving-cert-2708874670/serving-signer.crt, /tmp/serving-cert-2708874670/serving-signer.key\nStaticPodsDegraded: I0518 16:47:03.806328       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 16:47:03.826501       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-90-250.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 16:47:03.826601       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 16:47:03.854364       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2708874670/tls.crt::/tmp/serving-cert-2708874670/tls.key"\nStaticPodsDegraded: F0518 16:47:03.963555       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1791823722030567424 junit 9 hours ago
May 18 15:25:38.189 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-57-180.us-west-2.compute.internal" not ready since 2024-05-18 15:25:18 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 15:25:53.046 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-57-180.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 15:25:43.673081       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 15:25:43.673306       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716045943 cert, and key in /tmp/serving-cert-3923951901/serving-signer.crt, /tmp/serving-cert-3923951901/serving-signer.key\nStaticPodsDegraded: I0518 15:25:44.462931       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 15:25:44.486289       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-57-180.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 15:25:44.486462       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 15:25:44.512100       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3923951901/tls.crt::/tmp/serving-cert-3923951901/tls.key"\nStaticPodsDegraded: F0518 15:25:44.801461       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1791772333891915776 junit 13 hours ago
May 18 11:45:22.103 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-105-176.us-east-2.compute.internal" not ready since 2024-05-18 11:45:10 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 11:45:28.936 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-105-176.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 11:45:21.682819       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 11:45:21.683107       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716032721 cert, and key in /tmp/serving-cert-2025929145/serving-signer.crt, /tmp/serving-cert-2025929145/serving-signer.key\nStaticPodsDegraded: I0518 11:45:22.245868       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 11:45:22.257917       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-176.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 11:45:22.258076       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 11:45:22.285105       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2025929145/tls.crt::/tmp/serving-cert-2025929145/tls.key"\nStaticPodsDegraded: F0518 11:45:22.442222       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 18 11:50:45.828 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-86-98.us-east-2.compute.internal" not ready since 2024-05-18 11:50:40 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791679262827220992 junit 19 hours ago
May 18 05:40:11.267 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-50-55.us-west-1.compute.internal" not ready since 2024-05-18 05:38:11 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 05:40:43.489 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-50-55.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 05:40:33.255005       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 05:40:33.255388       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716010833 cert, and key in /tmp/serving-cert-351657478/serving-signer.crt, /tmp/serving-cert-351657478/serving-signer.key\nStaticPodsDegraded: I0518 05:40:33.943466       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 05:40:33.954938       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-50-55.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 05:40:33.955054       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 05:40:33.974670       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-351657478/tls.crt::/tmp/serving-cert-351657478/tls.key"\nStaticPodsDegraded: F0518 05:40:34.192678       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 18 05:46:13.283 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-103-159.us-west-1.compute.internal" not ready since 2024-05-18 05:44:13 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791815366490460160 junit 10 hours ago
May 18 14:36:35.750 - 30s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-91-4.us-west-1.compute.internal" not ready since 2024-05-18 14:34:35 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 14:37:06.555 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-91-4.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 14:36:56.467595       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 14:36:56.468559       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716043016 cert, and key in /tmp/serving-cert-1863888054/serving-signer.crt, /tmp/serving-cert-1863888054/serving-signer.key\nStaticPodsDegraded: I0518 14:36:57.336451       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 14:36:57.344413       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-91-4.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 14:36:57.344522       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 14:36:57.357855       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1863888054/tls.crt::/tmp/serving-cert-1863888054/tls.key"\nStaticPodsDegraded: F0518 14:36:57.568949       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 18 14:42:59.241 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-89-159.us-west-1.compute.internal" not ready since 2024-05-18 14:42:37 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791807315138056192 junit 10 hours ago
May 18 14:08:36.431 - 11s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-29-195.us-west-1.compute.internal" not ready since 2024-05-18 14:08:23 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 14:08:48.061 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-29-195.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 14:08:38.908395       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 14:08:38.909128       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716041318 cert, and key in /tmp/serving-cert-2295729553/serving-signer.crt, /tmp/serving-cert-2295729553/serving-signer.key\nStaticPodsDegraded: I0518 14:08:39.504627       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 14:08:39.527928       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-29-195.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 14:08:39.528065       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 14:08:39.550455       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2295729553/tls.crt::/tmp/serving-cert-2295729553/tls.key"\nStaticPodsDegraded: F0518 14:08:39.958301       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 14:14:36.266 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-74-166.us-west-1.compute.internal" not ready since 2024-05-18 14:14:27 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791656453874913280 junit 20 hours ago
May 18 04:05:20.046 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-43-208.us-west-1.compute.internal" not ready since 2024-05-18 04:03:19 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 04:05:54.077 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-43-208.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 04:05:44.559405       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 04:05:44.559801       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716005144 cert, and key in /tmp/serving-cert-3988948920/serving-signer.crt, /tmp/serving-cert-3988948920/serving-signer.key\nStaticPodsDegraded: I0518 04:05:45.165164       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 04:05:45.174990       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-43-208.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 04:05:45.175102       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 04:05:45.190988       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3988948920/tls.crt::/tmp/serving-cert-3988948920/tls.key"\nStaticPodsDegraded: F0518 04:05:45.476978       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 04:11:41.201 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-20-230.us-west-1.compute.internal" not ready since 2024-05-18 04:09:41 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791664091270483968 junit 20 hours ago
May 18 04:37:41.280 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-45-187.ec2.internal" not ready since 2024-05-18 04:37:39 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 04:38:09.572 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-45-187.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 04:38:01.961490       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 04:38:01.961702       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716007081 cert, and key in /tmp/serving-cert-4141632507/serving-signer.crt, /tmp/serving-cert-4141632507/serving-signer.key\nStaticPodsDegraded: I0518 04:38:02.454856       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 04:38:02.463165       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-45-187.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 04:38:02.463284       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 04:38:02.471635       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4141632507/tls.crt::/tmp/serving-cert-4141632507/tls.key"\nStaticPodsDegraded: F0518 04:38:02.681760       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 04:43:23.053 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-65-239.ec2.internal" not ready since 2024-05-18 04:41:23 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791664091270483968 junit 20 hours ago
May 18 04:49:41.094 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-119-12.ec2.internal" not ready since 2024-05-18 04:49:22 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 04:49:51.850 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-119-12.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 04:49:44.696305       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 04:49:44.696751       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716007784 cert, and key in /tmp/serving-cert-721216952/serving-signer.crt, /tmp/serving-cert-721216952/serving-signer.key\nStaticPodsDegraded: I0518 04:49:45.068390       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 04:49:45.082519       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-119-12.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 04:49:45.082709       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 04:49:45.093740       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-721216952/tls.crt::/tmp/serving-cert-721216952/tls.key"\nStaticPodsDegraded: F0518 04:49:45.456199       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1791671581823471616 junit 20 hours ago
May 18 05:11:16.597 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-4-29.us-east-2.compute.internal" not ready since 2024-05-18 05:09:16 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 05:11:50.795 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-4-29.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 05:11:39.454671       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 05:11:39.455315       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716009099 cert, and key in /tmp/serving-cert-1771246231/serving-signer.crt, /tmp/serving-cert-1771246231/serving-signer.key\nStaticPodsDegraded: I0518 05:11:40.155097       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 05:11:40.176439       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-29.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 05:11:40.176550       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 05:11:40.202554       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1771246231/tls.crt::/tmp/serving-cert-1771246231/tls.key"\nStaticPodsDegraded: F0518 05:11:40.483681       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 05:17:41.859 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-90-177.us-east-2.compute.internal" not ready since 2024-05-18 05:17:20 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791649487408599040 junit 21 hours ago
May 18 03:34:31.377 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-124-56.us-east-2.compute.internal" not ready since 2024-05-18 03:34:11 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 03:34:44.832 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-124-56.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 03:34:34.437724       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 03:34:34.438073       1 crypto.go:601] Generating new CA for check-endpoints-signer@1716003274 cert, and key in /tmp/serving-cert-278829254/serving-signer.crt, /tmp/serving-cert-278829254/serving-signer.key\nStaticPodsDegraded: I0518 03:34:35.081515       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 03:34:35.097408       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-124-56.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 03:34:35.097647       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 03:34:35.128272       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-278829254/tls.crt::/tmp/serving-cert-278829254/tls.key"\nStaticPodsDegraded: F0518 03:34:35.357971       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 03:40:31.154 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-70-171.us-east-2.compute.internal" not ready since 2024-05-18 03:40:25 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791626298704007168 junit 23 hours ago
May 18 02:08:41.540 - 41s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-34-146.us-east-2.compute.internal" not ready since 2024-05-18 02:06:40 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 02:09:22.894 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-34-146.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 02:09:11.195850       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 02:09:11.196179       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715998151 cert, and key in /tmp/serving-cert-1483240692/serving-signer.crt, /tmp/serving-cert-1483240692/serving-signer.key\nStaticPodsDegraded: I0518 02:09:12.049792       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 02:09:12.064956       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-34-146.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 02:09:12.065074       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 02:09:12.090932       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1483240692/tls.crt::/tmp/serving-cert-1483240692/tls.key"\nStaticPodsDegraded: F0518 02:09:12.413045       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 18 02:14:55.457 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-91-241.us-east-2.compute.internal" not ready since 2024-05-18 02:14:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791611130955698176 junit 24 hours ago
May 18 00:58:20.702 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-28-220.ec2.internal" not ready since 2024-05-18 00:56:20 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 00:58:50.273 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-28-220.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 00:58:39.692407       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 00:58:39.692672       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715993919 cert, and key in /tmp/serving-cert-650201576/serving-signer.crt, /tmp/serving-cert-650201576/serving-signer.key\nStaticPodsDegraded: I0518 00:58:40.227219       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 00:58:40.249297       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-28-220.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 00:58:40.249434       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 00:58:40.277043       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-650201576/tls.crt::/tmp/serving-cert-650201576/tls.key"\nStaticPodsDegraded: F0518 00:58:40.546765       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 01:04:24.352 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-34-171.ec2.internal" not ready since 2024-05-18 01:04:14 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791596209689858048 junit 24 hours ago
May 18 00:08:40.270 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-67-205.us-west-1.compute.internal" not ready since 2024-05-18 00:06:40 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 18 00:09:17.239 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-67-205.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 00:09:09.031139       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 00:09:09.031510       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715990949 cert, and key in /tmp/serving-cert-4108890764/serving-signer.crt, /tmp/serving-cert-4108890764/serving-signer.key\nStaticPodsDegraded: I0518 00:09:09.400223       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 00:09:09.401804       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-67-205.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 00:09:09.401921       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0518 00:09:09.402573       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4108890764/tls.crt::/tmp/serving-cert-4108890764/tls.key"\nStaticPodsDegraded: F0518 00:09:09.643474       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 18 00:15:16.209 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-46-142.us-west-1.compute.internal" not ready since 2024-05-18 00:15:07 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791581012438814720 junit 25 hours ago
May 17 23:07:13.666 - 35s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-4-91.us-west-2.compute.internal" not ready since 2024-05-17 23:05:13 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 23:07:49.257 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-4-91.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 23:07:39.403627       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 23:07:39.403904       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715987259 cert, and key in /tmp/serving-cert-3970466664/serving-signer.crt, /tmp/serving-cert-3970466664/serving-signer.key\nStaticPodsDegraded: I0517 23:07:40.004254       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 23:07:40.013484       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-91.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 23:07:40.013577       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 23:07:40.030829       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3970466664/tls.crt::/tmp/serving-cert-3970466664/tls.key"\nStaticPodsDegraded: F0517 23:07:40.164524       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 23:13:49.654 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-105-90.us-west-2.compute.internal" not ready since 2024-05-17 23:13:29 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791550496364826624 junit 27 hours ago
May 17 21:07:12.225 - 35s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-59-18.us-east-2.compute.internal" not ready since 2024-05-17 21:05:12 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 21:07:47.502 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-59-18.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 21:07:38.191961       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 21:07:38.204126       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715980058 cert, and key in /tmp/serving-cert-3519591561/serving-signer.crt, /tmp/serving-cert-3519591561/serving-signer.key\nStaticPodsDegraded: I0517 21:07:38.936161       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 21:07:38.949195       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-59-18.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 21:07:38.949314       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 21:07:38.961733       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3519591561/tls.crt::/tmp/serving-cert-3519591561/tls.key"\nStaticPodsDegraded: F0517 21:07:39.333613       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 21:13:19.232 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-93-226.us-east-2.compute.internal" not ready since 2024-05-17 21:12:57 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791558472202981376 junit 27 hours ago
May 17 21:41:42.165 - 93s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-84-97.ec2.internal" not ready since 2024-05-17 21:39:42 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 21:43:15.390 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-84-97.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 21:43:07.191600       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 21:43:07.191913       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715982187 cert, and key in /tmp/serving-cert-920246013/serving-signer.crt, /tmp/serving-cert-920246013/serving-signer.key\nStaticPodsDegraded: I0517 21:43:07.667337       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 21:43:07.674005       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-84-97.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 21:43:07.674084       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 21:43:07.683547       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-920246013/tls.crt::/tmp/serving-cert-920246013/tls.key"\nStaticPodsDegraded: F0517 21:43:07.906392       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 17 21:48:53.720 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-40-23.ec2.internal" not ready since 2024-05-17 21:48:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791536611360509952 junit 28 hours ago
May 17 20:07:35.202 - 30s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-118.us-east-2.compute.internal" not ready since 2024-05-17 20:05:35 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 20:08:06.172 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-100-118.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 20:07:55.406442       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 20:07:55.407091       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715976475 cert, and key in /tmp/serving-cert-971674571/serving-signer.crt, /tmp/serving-cert-971674571/serving-signer.key\nStaticPodsDegraded: I0517 20:07:55.978916       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 20:07:55.994813       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-118.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 20:07:55.994973       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 20:07:56.009569       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-971674571/tls.crt::/tmp/serving-cert-971674571/tls.key"\nStaticPodsDegraded: F0517 20:07:56.301846       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 20:13:27.184 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-91-90.us-east-2.compute.internal" not ready since 2024-05-17 20:11:27 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791566300112228352 junit 26 hours ago
May 17 22:12:32.043 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-68-21.us-east-2.compute.internal" not ready since 2024-05-17 22:12:25 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 22:12:46.288 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-68-21.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 22:12:37.314973       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 22:12:37.315389       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715983957 cert, and key in /tmp/serving-cert-4104966216/serving-signer.crt, /tmp/serving-cert-4104966216/serving-signer.key\nStaticPodsDegraded: I0517 22:12:37.966611       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 22:12:37.986185       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-68-21.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 22:12:37.986305       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 22:12:37.996968       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4104966216/tls.crt::/tmp/serving-cert-4104966216/tls.key"\nStaticPodsDegraded: F0517 22:12:38.262887       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 22:17:54.945 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-87-115.us-east-2.compute.internal" not ready since 2024-05-17 22:15:54 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791529475280736256 junit 29 hours ago
May 17 19:43:08.054 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-55-3.us-east-2.compute.internal" not ready since 2024-05-17 19:42:57 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 19:43:20.365 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-55-3.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 19:43:09.906882       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 19:43:09.907369       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715974989 cert, and key in /tmp/serving-cert-1118868947/serving-signer.crt, /tmp/serving-cert-1118868947/serving-signer.key\nStaticPodsDegraded: I0517 19:43:10.530410       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 19:43:10.541205       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-55-3.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 19:43:10.541394       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 19:43:10.549568       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1118868947/tls.crt::/tmp/serving-cert-1118868947/tls.key"\nStaticPodsDegraded: F0517 19:43:10.955248       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 17 19:48:30.052 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-115-202.us-east-2.compute.internal" not ready since 2024-05-17 19:46:30 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791573615787905024 junit 26 hours ago
May 17 22:50:57.061 - 17s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-39-62.us-west-2.compute.internal" not ready since 2024-05-17 22:50:41 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 22:51:14.734 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-39-62.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 22:51:06.206603       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 22:51:06.206963       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715986266 cert, and key in /tmp/serving-cert-279272602/serving-signer.crt, /tmp/serving-cert-279272602/serving-signer.key\nStaticPodsDegraded: I0517 22:51:06.860428       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 22:51:06.870296       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-39-62.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 22:51:06.870427       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 22:51:06.892413       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-279272602/tls.crt::/tmp/serving-cert-279272602/tls.key"\nStaticPodsDegraded: F0517 22:51:07.114612       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1791543362625474560 junit 28 hours ago
May 17 20:41:07.163 - 17s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-121-159.us-west-2.compute.internal" not ready since 2024-05-17 20:40:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 20:41:24.818 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-121-159.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 20:41:13.489592       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 20:41:13.489960       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715978473 cert, and key in /tmp/serving-cert-2432749720/serving-signer.crt, /tmp/serving-cert-2432749720/serving-signer.key\nStaticPodsDegraded: I0517 20:41:14.183718       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 20:41:14.198809       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-159.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 20:41:14.198905       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 20:41:14.225749       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2432749720/tls.crt::/tmp/serving-cert-2432749720/tls.key"\nStaticPodsDegraded: F0517 20:41:14.395153       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 20:47:20.482 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-10-232.us-west-2.compute.internal" not ready since 2024-05-17 20:45:20 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791514242608795648 junit 30 hours ago
May 17 18:45:35.276 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-97-156.us-east-2.compute.internal" not ready since 2024-05-17 18:45:25 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 18:45:49.198 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-97-156.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 18:45:37.721450       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 18:45:37.721805       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715971537 cert, and key in /tmp/serving-cert-651240943/serving-signer.crt, /tmp/serving-cert-651240943/serving-signer.key\nStaticPodsDegraded: I0517 18:45:38.328982       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 18:45:38.340245       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-97-156.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 18:45:38.340363       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 18:45:38.363319       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-651240943/tls.crt::/tmp/serving-cert-651240943/tls.key"\nStaticPodsDegraded: F0517 18:45:38.834314       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1791521355259187200 junit 29 hours ago
May 17 19:22:27.323 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-54-77.ec2.internal" not ready since 2024-05-17 19:20:27 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 19:22:58.731 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-54-77.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 19:22:50.878482       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 19:22:50.878903       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715973770 cert, and key in /tmp/serving-cert-3616326665/serving-signer.crt, /tmp/serving-cert-3616326665/serving-signer.key\nStaticPodsDegraded: I0517 19:22:51.390921       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 19:22:51.410081       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-54-77.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 19:22:51.410203       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 19:22:51.427243       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3616326665/tls.crt::/tmp/serving-cert-3616326665/tls.key"\nStaticPodsDegraded: F0517 19:22:51.712412       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1791505920052695040 junit 30 hours ago
May 17 18:12:17.051 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-27-231.us-west-2.compute.internal" not ready since 2024-05-17 18:10:17 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 18:12:53.722 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-27-231.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 18:12:43.348874       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 18:12:43.349163       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715969563 cert, and key in /tmp/serving-cert-2282377212/serving-signer.crt, /tmp/serving-cert-2282377212/serving-signer.key\nStaticPodsDegraded: I0517 18:12:43.829838       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 18:12:43.848648       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-27-231.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 18:12:43.848782       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 18:12:43.862371       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2282377212/tls.crt::/tmp/serving-cert-2282377212/tls.key"\nStaticPodsDegraded: F0517 18:12:44.079996       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 18:18:49.134 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-94-41.us-west-2.compute.internal" not ready since 2024-05-17 18:16:49 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791498208136925184 junit 31 hours ago
May 17 17:30:08.020 - 30s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-15-5.ec2.internal" not ready since 2024-05-17 17:28:08 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 17:30:38.838 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-15-5.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 17:30:30.287103       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 17:30:30.299524       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715967030 cert, and key in /tmp/serving-cert-1236226126/serving-signer.crt, /tmp/serving-cert-1236226126/serving-signer.key\nStaticPodsDegraded: I0517 17:30:30.809825       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 17:30:30.818055       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-15-5.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 17:30:30.818175       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 17:30:30.829954       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1236226126/tls.crt::/tmp/serving-cert-1236226126/tls.key"\nStaticPodsDegraded: F0517 17:30:30.966696       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 17 17:36:10.021 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-94-164.ec2.internal" not ready since 2024-05-17 17:35:56 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791483607634677760 junit 32 hours ago
May 17 16:31:04.878 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-109-207.us-east-2.compute.internal" not ready since 2024-05-17 16:29:04 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 16:31:39.586 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-109-207.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 16:31:29.041112       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 16:31:29.041462       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715963489 cert, and key in /tmp/serving-cert-3331080421/serving-signer.crt, /tmp/serving-cert-3331080421/serving-signer.key\nStaticPodsDegraded: I0517 16:31:29.658279       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 16:31:29.678734       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-109-207.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 16:31:29.678901       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 16:31:29.696253       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3331080421/tls.crt::/tmp/serving-cert-3331080421/tls.key"\nStaticPodsDegraded: F0517 16:31:29.918926       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 16:37:17.132 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-110-164.us-east-2.compute.internal" not ready since 2024-05-17 16:37:06 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791460444620197888 junit 33 hours ago
May 17 15:36:41.348 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-114-32.us-west-2.compute.internal" not ready since 2024-05-17 15:34:41 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 15:37:17.628 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-114-32.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 15:37:06.002907       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 15:37:06.003212       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715960226 cert, and key in /tmp/serving-cert-1095516368/serving-signer.crt, /tmp/serving-cert-1095516368/serving-signer.key\nStaticPodsDegraded: I0517 15:37:06.620314       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 15:37:06.640032       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-114-32.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 15:37:06.640151       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 15:37:06.665741       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1095516368/tls.crt::/tmp/serving-cert-1095516368/tls.key"\nStaticPodsDegraded: F0517 15:37:07.029942       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1791467552321310720 junit 33 hours ago
May 17 15:42:48.216 - 35s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-2-140.us-west-2.compute.internal" not ready since 2024-05-17 15:40:48 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 15:43:23.455 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-2-140.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 15:43:12.907962       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 15:43:12.914614       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715960592 cert, and key in /tmp/serving-cert-1108094721/serving-signer.crt, /tmp/serving-cert-1108094721/serving-signer.key\nStaticPodsDegraded: I0517 15:43:13.265330       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 15:43:13.279528       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-2-140.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 15:43:13.279673       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 15:43:13.295377       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1108094721/tls.crt::/tmp/serving-cert-1108094721/tls.key"\nStaticPodsDegraded: F0517 15:43:13.452052       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 15:49:10.227 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-113-36.us-west-2.compute.internal" not ready since 2024-05-17 15:47:10 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791437542441095168 junit 35 hours ago
May 17 13:47:38.594 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-60-8.ec2.internal" not ready since 2024-05-17 13:47:29 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 13:47:49.461 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-60-8.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 13:47:41.134107       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 13:47:41.134450       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715953661 cert, and key in /tmp/serving-cert-4294126889/serving-signer.crt, /tmp/serving-cert-4294126889/serving-signer.key\nStaticPodsDegraded: I0517 13:47:41.462861       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 13:47:41.477194       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-60-8.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 13:47:41.477311       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 13:47:41.500142       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4294126889/tls.crt::/tmp/serving-cert-4294126889/tls.key"\nStaticPodsDegraded: F0517 13:47:41.871720       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1791445945477500928 junit 34 hours ago
May 17 14:31:42.273 - 37s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-20-254.us-west-2.compute.internal" not ready since 2024-05-17 14:29:42 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 14:32:19.795 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-20-254.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 14:32:09.275982       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 14:32:09.276217       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715956329 cert, and key in /tmp/serving-cert-1926451590/serving-signer.crt, /tmp/serving-cert-1926451590/serving-signer.key\nStaticPodsDegraded: I0517 14:32:09.951419       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 14:32:09.966008       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-20-254.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 14:32:09.966114       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 14:32:10.005059       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1926451590/tls.crt::/tmp/serving-cert-1926451590/tls.key"\nStaticPodsDegraded: F0517 14:32:10.286654       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 14:38:32.265 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-68-83.us-west-2.compute.internal" not ready since 2024-05-17 14:38:24 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791445945477500928 junit 34 hours ago
May 17 14:44:20.362 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-117-209.us-west-2.compute.internal" not ready since 2024-05-17 14:42:20 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 14:44:53.471 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-117-209.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 14:44:44.665161       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 14:44:44.665493       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715957084 cert, and key in /tmp/serving-cert-570054641/serving-signer.crt, /tmp/serving-cert-570054641/serving-signer.key\nStaticPodsDegraded: I0517 14:44:45.207062       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 14:44:45.234178       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-117-209.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 14:44:45.234340       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 14:44:45.258101       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-570054641/tls.crt::/tmp/serving-cert-570054641/tls.key"\nStaticPodsDegraded: F0517 14:44:45.705341       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1791430547529011200 junit 35 hours ago
May 17 13:13:45.506 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-117-152.us-west-1.compute.internal" not ready since 2024-05-17 13:11:45 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 13:14:18.240 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-117-152.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 13:14:08.158605       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 13:14:08.159372       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715951648 cert, and key in /tmp/serving-cert-3476686713/serving-signer.crt, /tmp/serving-cert-3476686713/serving-signer.key\nStaticPodsDegraded: I0517 13:14:08.615727       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 13:14:08.626682       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-117-152.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 13:14:08.626799       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 13:14:08.648261       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3476686713/tls.crt::/tmp/serving-cert-3476686713/tls.key"\nStaticPodsDegraded: F0517 13:14:09.099204       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 13:20:06.484 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-23-245.us-west-1.compute.internal" not ready since 2024-05-17 13:19:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791423619398635520 junit 36 hours ago
May 17 12:37:52.732 - 37s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-52-198.us-east-2.compute.internal" not ready since 2024-05-17 12:35:52 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 12:38:29.961 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-52-198.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 12:38:19.908947       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 12:38:19.909236       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715949499 cert, and key in /tmp/serving-cert-3613510667/serving-signer.crt, /tmp/serving-cert-3613510667/serving-signer.key\nStaticPodsDegraded: I0517 12:38:20.529477       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 12:38:20.549345       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-52-198.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 12:38:20.549450       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 12:38:20.574692       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3613510667/tls.crt::/tmp/serving-cert-3613510667/tls.key"\nStaticPodsDegraded: F0517 12:38:20.758680       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 12:44:08.430 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-124-75.us-east-2.compute.internal" not ready since 2024-05-17 12:43:55 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791415349623656448 junit 36 hours ago
May 17 12:14:41.377 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-83-210.ec2.internal" not ready since 2024-05-17 12:12:41 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 12:15:12.899 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-83-210.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 12:15:03.168645       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 12:15:03.169075       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715948103 cert, and key in /tmp/serving-cert-1251238727/serving-signer.crt, /tmp/serving-cert-1251238727/serving-signer.key\nStaticPodsDegraded: I0517 12:15:03.557451       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 12:15:03.565588       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-83-210.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 12:15:03.565685       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 12:15:03.580137       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1251238727/tls.crt::/tmp/serving-cert-1251238727/tls.key"\nStaticPodsDegraded: F0517 12:15:03.863689       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 12:20:44.036 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-1-80.ec2.internal" not ready since 2024-05-17 12:20:39 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791407544313319424 junit 37 hours ago
May 17 11:39:26.850 - 11s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-3-62.us-west-1.compute.internal" not ready since 2024-05-17 11:39:14 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 11:39:38.639 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-3-62.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 11:39:28.980683       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 11:39:28.981066       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715945968 cert, and key in /tmp/serving-cert-831720704/serving-signer.crt, /tmp/serving-cert-831720704/serving-signer.key\nStaticPodsDegraded: I0517 11:39:29.422726       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 11:39:29.438386       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-3-62.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 11:39:29.438584       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 11:39:29.458925       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-831720704/tls.crt::/tmp/serving-cert-831720704/tls.key"\nStaticPodsDegraded: F0517 11:39:29.826299       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 11:45:14.876 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-85-243.us-west-1.compute.internal" not ready since 2024-05-17 11:43:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791385125884268544 junit 38 hours ago
May 17 10:09:31.235 - 27s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-28-246.ec2.internal" not ready since 2024-05-17 10:07:31 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 10:09:58.796 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-28-246.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 10:09:54.236336       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 10:09:54.236742       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715940594 cert, and key in /tmp/serving-cert-3504069281/serving-signer.crt, /tmp/serving-cert-3504069281/serving-signer.key\nStaticPodsDegraded: I0517 10:09:54.432278       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 10:09:54.443543       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-28-246.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 10:09:54.443670       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 10:09:54.459922       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3504069281/tls.crt::/tmp/serving-cert-3504069281/tls.key"\nStaticPodsDegraded: F0517 10:09:54.740271       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 10:15:39.607 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-113-51.ec2.internal" not ready since 2024-05-17 10:15:27 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791392951809609728 junit 38 hours ago
May 17 10:41:52.129 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-125-114.us-east-2.compute.internal" not ready since 2024-05-17 10:41:31 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 10:42:04.686 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-125-114.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 10:41:54.650378       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 10:41:54.650744       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715942514 cert, and key in /tmp/serving-cert-3595292739/serving-signer.crt, /tmp/serving-cert-3595292739/serving-signer.key\nStaticPodsDegraded: I0517 10:41:55.159862       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 10:41:55.172171       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-125-114.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 10:41:55.172299       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 10:41:55.191874       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3595292739/tls.crt::/tmp/serving-cert-3595292739/tls.key"\nStaticPodsDegraded: F0517 10:41:55.427838       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 10:47:41.485 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-112-59.us-east-2.compute.internal" not ready since 2024-05-17 10:47:33 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791400699293077504 junit 37 hours ago
May 17 11:17:08.248 - 7s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-90-221.us-west-2.compute.internal" not ready since 2024-05-17 11:16:57 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 11:17:15.904 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-90-221.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 11:17:11.028409       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 11:17:11.028661       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715944631 cert, and key in /tmp/serving-cert-2992090857/serving-signer.crt, /tmp/serving-cert-2992090857/serving-signer.key\nStaticPodsDegraded: I0517 11:17:11.523171       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 11:17:11.545959       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-90-221.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 11:17:11.546091       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 11:17:11.578246       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2992090857/tls.crt::/tmp/serving-cert-2992090857/tls.key"\nStaticPodsDegraded: F0517 11:17:11.790740       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 17 11:23:08.228 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-35-160.us-west-2.compute.internal" not ready since 2024-05-17 11:22:49 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1791354205470986240 junit 41 hours ago
May 17 07:59:57.550 - 37s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-57-221.us-west-1.compute.internal" not ready since 2024-05-17 07:57:57 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 08:00:35.018 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-57-221.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 08:00:25.554743       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 08:00:25.555013       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715932825 cert, and key in /tmp/serving-cert-4141558370/serving-signer.crt, /tmp/serving-cert-4141558370/serving-signer.key\nStaticPodsDegraded: I0517 08:00:26.244027       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 08:00:26.255702       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-57-221.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 08:00:26.255847       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 08:00:26.277573       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4141558370/tls.crt::/tmp/serving-cert-4141558370/tls.key"\nStaticPodsDegraded: F0517 08:00:26.858395       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 17 08:05:49.493 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-65-135.us-west-1.compute.internal" not ready since 2024-05-17 08:03:49 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791369825377849344 junit 40 hours ago
May 17 09:05:17.579 - 11s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-98-234.us-east-2.compute.internal" not ready since 2024-05-17 09:04:56 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 09:05:28.619 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-98-234.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 09:05:18.977538       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 09:05:18.977806       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715936718 cert, and key in /tmp/serving-cert-970812404/serving-signer.crt, /tmp/serving-cert-970812404/serving-signer.key\nStaticPodsDegraded: I0517 09:05:19.498470       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 09:05:19.511974       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-98-234.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 09:05:19.512105       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 09:05:19.528632       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-970812404/tls.crt::/tmp/serving-cert-970812404/tls.key"\nStaticPodsDegraded: F0517 09:05:19.658716       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 17 09:11:07.118 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-42-218.us-east-2.compute.internal" not ready since 2024-05-17 09:10:58 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791361851326468096 junit 40 hours ago
May 17 08:46:03.081 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-32-239.us-west-2.compute.internal" not ready since 2024-05-17 08:45:55 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 08:46:18.158 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-32-239.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 08:46:08.662482       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 08:46:08.662969       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715935568 cert, and key in /tmp/serving-cert-653699960/serving-signer.crt, /tmp/serving-cert-653699960/serving-signer.key\nStaticPodsDegraded: I0517 08:46:09.128232       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 08:46:09.140150       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-32-239.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 08:46:09.140284       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 08:46:09.153963       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-653699960/tls.crt::/tmp/serving-cert-653699960/tls.key"\nStaticPodsDegraded: F0517 08:46:09.513329       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1791345478957273088 junit 41 hours ago
May 17 07:23:13.622 - 78s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-6-2.us-east-2.compute.internal" not ready since 2024-05-17 07:21:13 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 07:24:32.552 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-6-2.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 07:24:22.929389       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 07:24:22.929711       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715930662 cert, and key in /tmp/serving-cert-1633647144/serving-signer.crt, /tmp/serving-cert-1633647144/serving-signer.key\nStaticPodsDegraded: I0517 07:24:23.327639       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 07:24:23.345591       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-6-2.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 07:24:23.345709       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 07:24:23.362119       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1633647144/tls.crt::/tmp/serving-cert-1633647144/tls.key"\nStaticPodsDegraded: F0517 07:24:23.630688       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 17 07:30:01.668 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-11-213.us-east-2.compute.internal" not ready since 2024-05-17 07:29:51 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1791200824492298240 junit 2 days ago
May 16 22:27:33.218 - 18s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-18-206.us-west-2.compute.internal" not ready since 2024-05-16 22:27:26 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 16 22:27:51.286 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-18-206.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0516 22:27:42.324591       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0516 22:27:42.325058       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715898462 cert, and key in /tmp/serving-cert-1470530637/serving-signer.crt, /tmp/serving-cert-1470530637/serving-signer.key\nStaticPodsDegraded: I0516 22:27:42.935109       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0516 22:27:42.953946       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-18-206.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0516 22:27:42.954182       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0516 22:27:42.974129       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1470530637/tls.crt::/tmp/serving-cert-1470530637/tls.key"\nStaticPodsDegraded: F0516 22:27:43.333529       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1791309428360744960 junit 44 hours ago
May 17 05:01:21.361 - 20s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-60-205.us-west-1.compute.internal" not ready since 2024-05-17 05:01:15 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 05:01:41.955 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-60-205.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 05:01:31.567643       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 05:01:31.567993       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715922091 cert, and key in /tmp/serving-cert-2717638750/serving-signer.crt, /tmp/serving-cert-2717638750/serving-signer.key\nStaticPodsDegraded: I0517 05:01:32.393961       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 05:01:32.396164       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-60-205.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 05:01:32.396308       1 builder.go:299] check-endpoints version 4.15.0-202405161507.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0517 05:01:32.396906       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2717638750/tls.crt::/tmp/serving-cert-2717638750/tls.key"\nStaticPodsDegraded: F0517 05:01:32.773628       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 17 05:07:14.390 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-64-254.us-west-1.compute.internal" not ready since 2024-05-17 05:05:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790758382110511104 junit 3 days ago
May 15 16:31:53.460 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-95-29.us-west-2.compute.internal" not ready since 2024-05-15 16:31:43 +0000 UTC because KubeletNotReady ([PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 16:32:07.479 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-95-29.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 16:31:58.211618       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 16:31:58.212110       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715790718 cert, and key in /tmp/serving-cert-2057696882/serving-signer.crt, /tmp/serving-cert-2057696882/serving-signer.key\nStaticPodsDegraded: I0515 16:31:58.827160       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 16:31:58.836923       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-95-29.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 16:31:58.837008       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 16:31:58.852404       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2057696882/tls.crt::/tmp/serving-cert-2057696882/tls.key"\nStaticPodsDegraded: F0515 16:31:59.003614       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 16:38:28.557 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-105-30.us-west-2.compute.internal" not ready since 2024-05-15 16:38:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790758382110511104junit3 days ago
May 15 16:44:37.097 - 19s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-14-218.us-west-2.compute.internal" not ready since 2024-05-15 16:44:19 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 16:44:56.216 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-14-218.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 16:44:44.870550       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 16:44:44.870927       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715791484 cert, and key in /tmp/serving-cert-2554311497/serving-signer.crt, /tmp/serving-cert-2554311497/serving-signer.key\nStaticPodsDegraded: I0515 16:44:45.340022       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 16:44:45.352829       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-14-218.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 16:44:45.352940       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 16:44:45.371484       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2554311497/tls.crt::/tmp/serving-cert-2554311497/tls.key"\nStaticPodsDegraded: F0515 16:44:45.545940       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1790750900587859968junit3 days ago
May 15 16:02:32.729 - 16s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-32-44.us-west-2.compute.internal" not ready since 2024-05-15 16:02:13 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 16:02:48.989 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-32-44.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 16:02:38.571883       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 16:02:38.572231       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715788958 cert, and key in /tmp/serving-cert-140331806/serving-signer.crt, /tmp/serving-cert-140331806/serving-signer.key\nStaticPodsDegraded: I0515 16:02:39.173261       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 16:02:39.185328       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-32-44.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 16:02:39.185442       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 16:02:39.199481       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-140331806/tls.crt::/tmp/serving-cert-140331806/tls.key"\nStaticPodsDegraded: F0515 16:02:39.580159       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 15 16:08:29.740 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-115-61.us-west-2.compute.internal" not ready since 2024-05-15 16:06:29 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790750900587859968junit3 days ago
May 15 16:15:18.523 - 17s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-109-238.us-west-2.compute.internal" not ready since 2024-05-15 16:15:00 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 16:15:35.853 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-109-238.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 16:15:25.781576       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 16:15:25.782115       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715789725 cert, and key in /tmp/serving-cert-3084684551/serving-signer.crt, /tmp/serving-cert-3084684551/serving-signer.key\nStaticPodsDegraded: I0515 16:15:26.369006       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 16:15:26.386594       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-109-238.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 16:15:26.386803       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 16:15:26.414286       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3084684551/tls.crt::/tmp/serving-cert-3084684551/tls.key"\nStaticPodsDegraded: F0515 16:15:26.525962       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1790767083957522432junit3 days ago
May 15 17:23:29.707 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-108-115.ec2.internal" not ready since 2024-05-15 17:23:19 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 17:23:36.588 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-108-115.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 17:23:31.095015       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 17:23:31.095358       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715793811 cert, and key in /tmp/serving-cert-1885955715/serving-signer.crt, /tmp/serving-cert-1885955715/serving-signer.key\nStaticPodsDegraded: I0515 17:23:31.665674       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 17:23:31.671976       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-115.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 17:23:31.672130       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 17:23:31.687034       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1885955715/tls.crt::/tmp/serving-cert-1885955715/tls.key"\nStaticPodsDegraded: F0515 17:23:31.910626       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1790728680117374976junit3 days ago
May 15 14:33:18.399 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-119-151.us-east-2.compute.internal" not ready since 2024-05-15 14:31:18 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 14:33:50.562 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-119-151.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 14:33:41.077152       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 14:33:41.077715       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715783621 cert, and key in /tmp/serving-cert-644537201/serving-signer.crt, /tmp/serving-cert-644537201/serving-signer.key\nStaticPodsDegraded: I0515 14:33:41.569329       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 14:33:41.579229       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-119-151.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 14:33:41.579399       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 14:33:41.589576       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-644537201/tls.crt::/tmp/serving-cert-644537201/tls.key"\nStaticPodsDegraded: F0515 14:33:41.899961       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 14:39:45.256 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-123-106.us-east-2.compute.internal" not ready since 2024-05-15 14:39:21 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790728680117374976junit3 days ago
May 15 14:39:56.206 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-123-106.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-123-106.us-east-2.compute.internal_openshift-kube-apiserver(10a33d79496a78c250fa5489320744f5) (exception: Degraded=False is the happy case)
May 15 14:45:33.206 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-52-209.us-east-2.compute.internal" not ready since 2024-05-15 14:45:18 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-52-209.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 14:45:29.712990       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 14:45:29.713203       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715784329 cert, and key in /tmp/serving-cert-2063431399/serving-signer.crt, /tmp/serving-cert-2063431399/serving-signer.key\nStaticPodsDegraded: I0515 14:45:30.296031       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 14:45:30.306426       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-52-209.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 14:45:30.306575       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 14:45:30.317822       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2063431399/tls.crt::/tmp/serving-cert-2063431399/tls.key"\nStaticPodsDegraded: F0515 14:45:30.531444       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 14:45:33.206 - 5s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-52-209.us-east-2.compute.internal" not ready since 2024-05-15 14:45:18 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-52-209.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 14:45:29.712990       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 14:45:29.713203       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715784329 cert, and key in /tmp/serving-cert-2063431399/serving-signer.crt, /tmp/serving-cert-2063431399/serving-signer.key\nStaticPodsDegraded: I0515 14:45:30.296031       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 14:45:30.306426       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-52-209.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 14:45:30.306575       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 14:45:30.317822       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2063431399/tls.crt::/tmp/serving-cert-2063431399/tls.key"\nStaticPodsDegraded: F0515 14:45:30.531444       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 1 line not shown

#1790721727387406336junit3 days ago
May 15 14:11:24.360 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-4-253.ec2.internal" not ready since 2024-05-15 14:11:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 14:11:33.580 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-4-253.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 14:11:29.306666       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 14:11:29.307011       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715782289 cert, and key in /tmp/serving-cert-2792449602/serving-signer.crt, /tmp/serving-cert-2792449602/serving-signer.key\nStaticPodsDegraded: I0515 14:11:29.833284       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 14:11:29.842749       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-253.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 14:11:29.842912       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 14:11:29.854902       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2792449602/tls.crt::/tmp/serving-cert-2792449602/tls.key"\nStaticPodsDegraded: F0515 14:11:30.304651       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 14:16:46.499 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-62-160.ec2.internal" not ready since 2024-05-15 14:14:46 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790706852581871616junit3 days ago
May 15 13:08:04.012 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-32-215.us-east-2.compute.internal" not ready since 2024-05-15 13:07:52 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 13:08:14.633 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-32-215.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 13:08:04.326910       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 13:08:04.327305       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715778484 cert, and key in /tmp/serving-cert-2122280962/serving-signer.crt, /tmp/serving-cert-2122280962/serving-signer.key\nStaticPodsDegraded: I0515 13:08:04.894929       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 13:08:04.906180       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-32-215.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 13:08:04.906279       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 13:08:04.920674       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2122280962/tls.crt::/tmp/serving-cert-2122280962/tls.key"\nStaticPodsDegraded: F0515 13:08:05.221864       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 13:14:04.340 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-65-213.us-east-2.compute.internal" not ready since 2024-05-15 13:13:55 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790706852581871616junit3 days ago
May 15 13:19:54.351 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-65-94.us-east-2.compute.internal" not ready since 2024-05-15 13:19:45 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 13:20:07.798 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-65-94.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 13:19:58.115621       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 13:19:58.115994       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715779198 cert, and key in /tmp/serving-cert-2025937517/serving-signer.crt, /tmp/serving-cert-2025937517/serving-signer.key\nStaticPodsDegraded: I0515 13:19:58.591511       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 13:19:58.605556       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-65-94.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 13:19:58.605728       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 13:19:58.623617       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2025937517/tls.crt::/tmp/serving-cert-2025937517/tls.key"\nStaticPodsDegraded: F0515 13:19:58.758223       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1790713722461753344junit3 days ago
May 15 13:44:00.398 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-29-62.us-west-1.compute.internal" not ready since 2024-05-15 13:43:58 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 13:44:33.531 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-29-62.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 13:44:23.200446       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 13:44:23.200668       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715780663 cert, and key in /tmp/serving-cert-1540378001/serving-signer.crt, /tmp/serving-cert-1540378001/serving-signer.key\nStaticPodsDegraded: I0515 13:44:23.404038       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 13:44:23.405848       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-29-62.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 13:44:23.406046       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 13:44:23.406881       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1540378001/tls.crt::/tmp/serving-cert-1540378001/tls.key"\nStaticPodsDegraded: W0515 13:44:26.282448       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nStaticPodsDegraded: F0515 13:44:26.282553       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 13:50:31.928 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-122-38.us-west-1.compute.internal" not ready since 2024-05-15 13:50:21 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790699729328279552 junit 3 days ago
May 15 12:54:55.438 - 40s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-94-189.us-west-2.compute.internal" not ready since 2024-05-15 12:52:55 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 12:55:35.539 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-94-189.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 12:55:25.405828       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 12:55:25.420829       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715777725 cert, and key in /tmp/serving-cert-3275367798/serving-signer.crt, /tmp/serving-cert-3275367798/serving-signer.key\nStaticPodsDegraded: I0515 12:55:25.869462       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 12:55:25.878410       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-94-189.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 12:55:25.878537       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 12:55:25.890429       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3275367798/tls.crt::/tmp/serving-cert-3275367798/tls.key"\nStaticPodsDegraded: F0515 12:55:26.058931       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 13:01:17.437 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-102-87.us-west-2.compute.internal" not ready since 2024-05-15 12:59:17 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790699729328279552 junit 3 days ago
May 15 13:08:01.462 - 18s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-13-201.us-west-2.compute.internal" not ready since 2024-05-15 13:07:45 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 13:08:20.403 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-13-201.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 13:08:09.907980       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 13:08:09.908370       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715778489 cert, and key in /tmp/serving-cert-1425285820/serving-signer.crt, /tmp/serving-cert-1425285820/serving-signer.key\nStaticPodsDegraded: I0515 13:08:10.626624       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 13:08:10.634779       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-13-201.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 13:08:10.634870       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 13:08:10.647690       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1425285820/tls.crt::/tmp/serving-cert-1425285820/tls.key"\nStaticPodsDegraded: F0515 13:08:10.946191       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1790674986390786048 junit 3 days ago
May 15 11:03:28.833 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-92-104.ec2.internal" not ready since 2024-05-15 11:03:26 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 11:03:58.406 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-92-104.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 11:03:49.106048       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 11:03:49.106350       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715771029 cert, and key in /tmp/serving-cert-2310838111/serving-signer.crt, /tmp/serving-cert-2310838111/serving-signer.key\nStaticPodsDegraded: I0515 11:03:49.539986       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 11:03:49.556117       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-92-104.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 11:03:49.556230       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 11:03:49.568222       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2310838111/tls.crt::/tmp/serving-cert-2310838111/tls.key"\nStaticPodsDegraded: F0515 11:03:49.740453       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 11:09:35.281 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-11-69.ec2.internal" not ready since 2024-05-15 11:09:25 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790690957620940800 junit 3 days ago
May 15 12:07:44.990 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-4-171.us-east-2.compute.internal" not ready since 2024-05-15 12:05:44 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 12:08:17.091 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-4-171.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 12:08:07.758830       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 12:08:07.759285       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715774887 cert, and key in /tmp/serving-cert-2985359283/serving-signer.crt, /tmp/serving-cert-2985359283/serving-signer.key\nStaticPodsDegraded: I0515 12:08:08.272224       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 12:08:08.289719       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-171.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 12:08:08.289868       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 12:08:08.304340       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2985359283/tls.crt::/tmp/serving-cert-2985359283/tls.key"\nStaticPodsDegraded: F0515 12:08:08.610539       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 12:13:26.036 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-29-237.us-east-2.compute.internal" not ready since 2024-05-15 12:11:26 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790682968025468928 junit 3 days ago
May 15 11:41:33.422 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-201.us-west-1.compute.internal" not ready since 2024-05-15 11:39:33 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 11:42:10.329 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-100-201.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 11:42:01.768471       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 11:42:01.776217       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715773321 cert, and key in /tmp/serving-cert-4096349545/serving-signer.crt, /tmp/serving-cert-4096349545/serving-signer.key\nStaticPodsDegraded: I0515 11:42:02.112164       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 11:42:02.125414       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-201.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 11:42:02.125524       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 11:42:02.139234       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4096349545/tls.crt::/tmp/serving-cert-4096349545/tls.key"\nStaticPodsDegraded: F0515 11:42:02.326565       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 11:48:30.112 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-22-133.us-west-1.compute.internal" not ready since 2024-05-15 11:48:16 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790645511091392512 junit 3 days ago
May 15 09:08:55.642 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-13-209.us-east-2.compute.internal" not ready since 2024-05-15 09:08:31 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 09:09:05.149 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-13-209.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 09:08:55.172314       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 09:08:55.172722       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715764135 cert, and key in /tmp/serving-cert-560156994/serving-signer.crt, /tmp/serving-cert-560156994/serving-signer.key\nStaticPodsDegraded: I0515 09:08:55.860139       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 09:08:55.873848       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-13-209.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 09:08:55.873997       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 09:08:55.886028       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-560156994/tls.crt::/tmp/serving-cert-560156994/tls.key"\nStaticPodsDegraded: F0515 09:08:56.096182       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 09:14:27.653 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-36-204.us-east-2.compute.internal" not ready since 2024-05-15 09:12:27 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790652666896977920 junit 3 days ago
May 15 09:30:15.977 - 22s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-57-238.ec2.internal" not ready since 2024-05-15 09:30:10 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 09:30:38.914 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-57-238.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 09:30:31.775087       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 09:30:31.778952       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715765431 cert, and key in /tmp/serving-cert-3967743367/serving-signer.crt, /tmp/serving-cert-3967743367/serving-signer.key\nStaticPodsDegraded: I0515 09:30:32.095990       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 09:30:32.108491       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-57-238.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 09:30:32.108601       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 09:30:32.121426       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3967743367/tls.crt::/tmp/serving-cert-3967743367/tls.key"\nStaticPodsDegraded: F0515 09:30:32.478104       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 09:35:57.017 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-67-201.ec2.internal" not ready since 2024-05-15 09:33:57 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790667424572379136 junit 3 days ago
May 15 10:41:39.094 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-81-18.us-west-2.compute.internal" not ready since 2024-05-15 10:41:32 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 10:41:54.267 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-81-18.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 10:41:45.927346       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 10:41:45.927702       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715769705 cert, and key in /tmp/serving-cert-46844928/serving-signer.crt, /tmp/serving-cert-46844928/serving-signer.key\nStaticPodsDegraded: I0515 10:41:46.257367       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 10:41:46.271522       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-81-18.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 10:41:46.271709       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 10:41:46.284447       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-46844928/tls.crt::/tmp/serving-cert-46844928/tls.key"\nStaticPodsDegraded: F0515 10:41:46.425288       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 10:47:26.249 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-33-191.us-west-2.compute.internal" not ready since 2024-05-15 10:45:26 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790660407866691584 junit 3 days ago
May 15 10:01:07.630 - 22s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-19-151.us-east-2.compute.internal" not ready since 2024-05-15 10:00:57 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 10:01:30.259 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-19-151.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 10:01:20.681039       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 10:01:20.681523       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715767280 cert, and key in /tmp/serving-cert-128054141/serving-signer.crt, /tmp/serving-cert-128054141/serving-signer.key\nStaticPodsDegraded: I0515 10:01:21.210431       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 10:01:21.221848       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-19-151.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 10:01:21.221957       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 10:01:21.235851       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-128054141/tls.crt::/tmp/serving-cert-128054141/tls.key"\nStaticPodsDegraded: F0515 10:01:21.745335       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 10:06:58.532 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-105-248.us-east-2.compute.internal" not ready since 2024-05-15 10:06:49 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790637232269299712 junit 3 days ago
May 15 08:27:50.433 - 23s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-19-83.ec2.internal" not ready since 2024-05-15 08:27:44 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 08:28:13.686 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-19-83.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 08:28:06.650108       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 08:28:06.650435       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715761686 cert, and key in /tmp/serving-cert-2594028768/serving-signer.crt, /tmp/serving-cert-2594028768/serving-signer.key\nStaticPodsDegraded: I0515 08:28:06.987870       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 08:28:07.011011       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-19-83.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 08:28:07.011150       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 08:28:07.038185       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2594028768/tls.crt::/tmp/serving-cert-2594028768/tls.key"\nStaticPodsDegraded: F0515 08:28:07.251838       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 08:33:48.435 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-70-43.ec2.internal" not ready since 2024-05-15 08:31:48 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790629460714721280 junit 3 days ago
May 15 08:06:47.059 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-118-70.us-east-2.compute.internal" not ready since 2024-05-15 08:06:28 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 08:07:02.932 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-118-70.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 08:06:51.294799       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 08:06:51.295007       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715760411 cert, and key in /tmp/serving-cert-2998678961/serving-signer.crt, /tmp/serving-cert-2998678961/serving-signer.key\nStaticPodsDegraded: I0515 08:06:51.864107       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 08:06:51.885608       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-70.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 08:06:51.885800       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0515 08:06:51.913457       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2998678961/tls.crt::/tmp/serving-cert-2998678961/tls.key"\nStaticPodsDegraded: F0515 08:06:52.385208       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 08:12:33.394 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-41-238.us-east-2.compute.internal" not ready since 2024-05-15 08:12:15 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790395835360481280 junit 4 days ago
May 14 16:40:54.916 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-64-245.us-east-2.compute.internal" not ready since 2024-05-14 16:40:50 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 16:41:24.818 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-64-245.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 16:41:13.334525       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 16:41:13.335265       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715704873 cert, and key in /tmp/serving-cert-1394152193/serving-signer.crt, /tmp/serving-cert-1394152193/serving-signer.key\nStaticPodsDegraded: I0514 16:41:13.989955       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 16:41:14.007795       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-64-245.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 16:41:14.007890       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 16:41:14.024864       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1394152193/tls.crt::/tmp/serving-cert-1394152193/tls.key"\nStaticPodsDegraded: F0514 16:41:14.224440       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 16:47:01.513 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-57-161.us-east-2.compute.internal" not ready since 2024-05-14 16:46:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790358662947016704junit4 days ago
May 14 14:12:37.005 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-10-161.us-west-2.compute.internal" not ready since 2024-05-14 14:10:36 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 14:13:06.308 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-10-161.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 14:12:57.480949       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 14:12:57.481313       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715695977 cert, and key in /tmp/serving-cert-503720792/serving-signer.crt, /tmp/serving-cert-503720792/serving-signer.key\nStaticPodsDegraded: I0514 14:12:57.829723       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 14:12:57.837377       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-161.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 14:12:57.837524       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 14:12:57.853246       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-503720792/tls.crt::/tmp/serving-cert-503720792/tls.key"\nStaticPodsDegraded: F0514 14:12:58.028705       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 14 14:18:49.026 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-94-95.us-west-2.compute.internal" not ready since 2024-05-14 14:16:48 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790365598958489600junit4 days ago
May 14 14:31:22.723 - 27s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-50-195.ec2.internal" not ready since 2024-05-14 14:29:22 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 14:31:49.985 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-50-195.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 14:31:42.454703       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 14:31:42.455072       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715697102 cert, and key in /tmp/serving-cert-1288881136/serving-signer.crt, /tmp/serving-cert-1288881136/serving-signer.key\nStaticPodsDegraded: I0514 14:31:42.871547       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 14:31:42.882518       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-50-195.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 14:31:42.882714       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 14:31:42.898988       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1288881136/tls.crt::/tmp/serving-cert-1288881136/tls.key"\nStaticPodsDegraded: F0514 14:31:43.144750       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 14 14:37:40.205 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-110-145.ec2.internal" not ready since 2024-05-14 14:37:31 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790336378949603328junit4 days ago
May 14 12:50:26.149 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-53-61.us-east-2.compute.internal" not ready since 2024-05-14 12:48:26 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 12:50:54.747 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-53-61.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 12:50:50.223590       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 12:50:50.223938       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715691050 cert, and key in /tmp/serving-cert-2557384088/serving-signer.crt, /tmp/serving-cert-2557384088/serving-signer.key\nStaticPodsDegraded: I0514 12:50:50.615802       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 12:50:50.624954       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-53-61.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 12:50:50.625089       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 12:50:50.635002       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2557384088/tls.crt::/tmp/serving-cert-2557384088/tls.key"\nStaticPodsDegraded: F0514 12:50:50.893179       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 12:56:46.703 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-69-239.us-east-2.compute.internal" not ready since 2024-05-14 12:56:36 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790351791045480448junit4 days ago
May 14 13:57:31.405 - 16s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-123-62.us-west-1.compute.internal" not ready since 2024-05-14 13:57:27 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 13:57:48.220 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-123-62.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 13:57:41.837744       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 13:57:41.846199       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715695061 cert, and key in /tmp/serving-cert-3775246790/serving-signer.crt, /tmp/serving-cert-3775246790/serving-signer.key\nStaticPodsDegraded: I0514 13:57:42.532937       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 13:57:42.545448       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-123-62.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 13:57:42.545576       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 13:57:42.559592       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3775246790/tls.crt::/tmp/serving-cert-3775246790/tls.key"\nStaticPodsDegraded: F0514 13:57:42.921371       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1790344153524080640junit4 days ago
May 14 13:15:19.046 - 8s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-99-80.ec2.internal" not ready since 2024-05-14 13:15:00 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 13:15:28.040 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-99-80.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 13:15:22.913603       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 13:15:22.913886       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715692522 cert, and key in /tmp/serving-cert-2132746897/serving-signer.crt, /tmp/serving-cert-2132746897/serving-signer.key\nStaticPodsDegraded: I0514 13:15:23.239164       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 13:15:23.252729       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-99-80.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 13:15:23.252880       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 13:15:23.268580       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2132746897/tls.crt::/tmp/serving-cert-2132746897/tls.key"\nStaticPodsDegraded: F0514 13:15:23.596527       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1790328783010533376junit4 days ago
May 14 12:29:38.683 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-36-46.us-west-2.compute.internal" not ready since 2024-05-14 12:27:38 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 12:30:13.676 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-36-46.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 12:30:05.252504       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 12:30:05.252997       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715689805 cert, and key in /tmp/serving-cert-1140661639/serving-signer.crt, /tmp/serving-cert-1140661639/serving-signer.key\nStaticPodsDegraded: I0514 12:30:05.927333       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 12:30:05.944521       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-36-46.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 12:30:05.944644       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 12:30:05.961893       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1140661639/tls.crt::/tmp/serving-cert-1140661639/tls.key"\nStaticPodsDegraded: F0514 12:30:06.245015       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 12:36:06.664 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-127-101.us-west-2.compute.internal" not ready since 2024-05-14 12:34:06 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790320911837040640junit4 days ago
May 14 11:47:46.083 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-37-107.us-east-2.compute.internal" not ready since 2024-05-14 11:47:38 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 11:48:01.494 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-37-107.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 11:47:51.276569       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 11:47:51.276992       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715687271 cert, and key in /tmp/serving-cert-1385933371/serving-signer.crt, /tmp/serving-cert-1385933371/serving-signer.key\nStaticPodsDegraded: I0514 11:47:51.889160       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 11:47:51.898539       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-37-107.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 11:47:51.898626       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 11:47:51.913079       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1385933371/tls.crt::/tmp/serving-cert-1385933371/tls.key"\nStaticPodsDegraded: F0514 11:47:52.107509       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 11:53:28.999 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-125-152.us-east-2.compute.internal" not ready since 2024-05-14 11:51:28 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790305946036080640junit4 days ago
May 14 10:38:18.258 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-76-130.us-east-2.compute.internal" not ready since 2024-05-14 10:36:18 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 10:38:48.085 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-76-130.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 10:38:41.422745       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 10:38:41.423125       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715683121 cert, and key in /tmp/serving-cert-2406128615/serving-signer.crt, /tmp/serving-cert-2406128615/serving-signer.key\nStaticPodsDegraded: I0514 10:38:41.887152       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 10:38:41.893445       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-76-130.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 10:38:41.893538       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 10:38:41.904562       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2406128615/tls.crt::/tmp/serving-cert-2406128615/tls.key"\nStaticPodsDegraded: F0514 10:38:42.118273       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 14 10:44:32.114 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-74-29.us-east-2.compute.internal" not ready since 2024-05-14 10:44:10 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790282672535244800junit4 days ago
May 14 08:59:35.233 - 37s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-15-140.us-west-1.compute.internal" not ready since 2024-05-14 08:57:35 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 09:00:13.018 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-15-140.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 09:00:04.908873       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 09:00:04.909046       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715677204 cert, and key in /tmp/serving-cert-717213238/serving-signer.crt, /tmp/serving-cert-717213238/serving-signer.key\nStaticPodsDegraded: I0514 09:00:05.114705       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 09:00:05.115991       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-15-140.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 09:00:05.116088       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 09:00:05.116683       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-717213238/tls.crt::/tmp/serving-cert-717213238/tls.key"\nStaticPodsDegraded: F0514 09:00:05.289416       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 09:05:54.249 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-107-205.us-west-1.compute.internal" not ready since 2024-05-14 09:03:54 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790274587674546176junit4 days ago
May 14 08:35:55.203 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-53-27.us-west-2.compute.internal" not ready since 2024-05-14 08:33:55 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 08:36:31.999 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-53-27.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 08:36:21.085641       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 08:36:21.085872       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715675781 cert, and key in /tmp/serving-cert-373818813/serving-signer.crt, /tmp/serving-cert-373818813/serving-signer.key\nStaticPodsDegraded: I0514 08:36:21.818596       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 08:36:21.835005       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-53-27.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 08:36:21.835140       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 08:36:21.861732       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-373818813/tls.crt::/tmp/serving-cert-373818813/tls.key"\nStaticPodsDegraded: F0514 08:36:22.223862       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 08:43:53.168 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-25-169.us-west-2.compute.internal" not ready since 2024-05-14 08:43:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790267251975262208junit4 days ago
May 14 08:17:19.529 - 41s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-14-50.us-west-1.compute.internal" not ready since 2024-05-14 08:15:19 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 08:18:01.099 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-14-50.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 08:17:49.072455       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 08:17:49.072741       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715674669 cert, and key in /tmp/serving-cert-4073185080/serving-signer.crt, /tmp/serving-cert-4073185080/serving-signer.key\nStaticPodsDegraded: I0514 08:17:49.820969       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 08:17:49.837606       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-14-50.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 08:17:49.837953       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 08:17:49.861438       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4073185080/tls.crt::/tmp/serving-cert-4073185080/tls.key"\nStaticPodsDegraded: F0514 08:17:50.152177       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1790252009782251520 junit 4 days ago
May 14 07:01:42.156 - 20s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-35-70.us-west-1.compute.internal" not ready since 2024-05-14 07:01:39 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 07:02:02.991 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-35-70.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 07:01:53.547352       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 07:01:53.547783       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715670113 cert, and key in /tmp/serving-cert-3638608898/serving-signer.crt, /tmp/serving-cert-3638608898/serving-signer.key\nStaticPodsDegraded: I0514 07:01:53.856103       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 07:01:53.867578       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-35-70.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 07:01:53.867675       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 07:01:53.885470       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3638608898/tls.crt::/tmp/serving-cert-3638608898/tls.key"\nStaticPodsDegraded: F0514 07:01:54.272633       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 14 07:07:56.076 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-84-35.us-west-1.compute.internal" not ready since 2024-05-14 07:07:38 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790252009782251520 junit 4 days ago
May 14 07:14:07.614 - 11s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-37-171.us-west-1.compute.internal" not ready since 2024-05-14 07:13:53 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 07:14:18.891 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-37-171.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 07:14:08.422477       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 07:14:08.422798       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715670848 cert, and key in /tmp/serving-cert-4089817841/serving-signer.crt, /tmp/serving-cert-4089817841/serving-signer.key\nStaticPodsDegraded: I0514 07:14:08.898069       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 07:14:08.911524       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-37-171.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 07:14:08.911647       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 07:14:08.933121       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4089817841/tls.crt::/tmp/serving-cert-4089817841/tls.key"\nStaticPodsDegraded: F0514 07:14:09.234270       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1790244606177185792 junit 4 days ago
May 14 06:40:40.220 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-66-176.us-east-2.compute.internal" not ready since 2024-05-14 06:40:32 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 06:40:56.070 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-66-176.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 06:40:45.238288       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 06:40:45.238485       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715668845 cert, and key in /tmp/serving-cert-3051916226/serving-signer.crt, /tmp/serving-cert-3051916226/serving-signer.key\nStaticPodsDegraded: I0514 06:40:45.965431       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 06:40:45.982570       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-66-176.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 06:40:45.982651       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 06:40:46.004634       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3051916226/tls.crt::/tmp/serving-cert-3051916226/tls.key"\nStaticPodsDegraded: F0514 06:40:46.298099       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1790237029259284480 junit 4 days ago
May 14 06:15:31.302 - 37s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-122-72.us-west-2.compute.internal" not ready since 2024-05-14 06:13:31 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 06:16:09.149 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-122-72.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 06:15:59.175398       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 06:15:59.175754       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715667359 cert, and key in /tmp/serving-cert-2570184405/serving-signer.crt, /tmp/serving-cert-2570184405/serving-signer.key\nStaticPodsDegraded: I0514 06:15:59.700667       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 06:15:59.722245       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-122-72.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 06:15:59.722399       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 06:15:59.734262       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2570184405/tls.crt::/tmp/serving-cert-2570184405/tls.key"\nStaticPodsDegraded: F0514 06:15:59.931940       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 06:22:01.272 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-32-71.us-west-2.compute.internal" not ready since 2024-05-14 06:21:53 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790229783515238400 junit 4 days ago
May 14 05:35:45.296 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-42-16.us-east-2.compute.internal" not ready since 2024-05-14 05:35:38 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 05:36:00.062 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-42-16.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 05:35:50.184921       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 05:35:50.185268       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715664950 cert, and key in /tmp/serving-cert-3074162720/serving-signer.crt, /tmp/serving-cert-3074162720/serving-signer.key\nStaticPodsDegraded: I0514 05:35:50.990792       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 05:35:50.998471       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-42-16.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 05:35:50.998684       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 05:35:51.009757       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3074162720/tls.crt::/tmp/serving-cert-3074162720/tls.key"\nStaticPodsDegraded: F0514 05:35:51.280596       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 05:41:23.333 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-85-46.us-east-2.compute.internal" not ready since 2024-05-14 05:39:23 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790199182972162048 junit 4 days ago
May 14 03:38:52.121 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-123-247.us-west-1.compute.internal" not ready since 2024-05-14 03:36:52 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 03:39:23.199 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-123-247.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 03:39:14.094791       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 03:39:14.095235       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715657954 cert, and key in /tmp/serving-cert-357204600/serving-signer.crt, /tmp/serving-cert-357204600/serving-signer.key\nStaticPodsDegraded: I0514 03:39:14.775396       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 03:39:14.787500       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-123-247.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 03:39:14.787613       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 03:39:14.799067       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-357204600/tls.crt::/tmp/serving-cert-357204600/tls.key"\nStaticPodsDegraded: F0514 03:39:15.185594       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 03:45:10.141 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-81-106.us-west-1.compute.internal" not ready since 2024-05-14 03:43:10 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790161312605540352 junit 5 days ago
May 14 00:58:46.374 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-50-76.us-west-2.compute.internal" not ready since 2024-05-14 00:56:46 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 00:59:22.905 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-50-76.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 00:59:12.484691       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 00:59:12.487512       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715648352 cert, and key in /tmp/serving-cert-583393377/serving-signer.crt, /tmp/serving-cert-583393377/serving-signer.key\nStaticPodsDegraded: I0514 00:59:13.001003       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 00:59:13.018460       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-50-76.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 00:59:13.018602       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 00:59:13.053064       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-583393377/tls.crt::/tmp/serving-cert-583393377/tls.key"\nStaticPodsDegraded: F0514 00:59:13.344028       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 01:05:14.166 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-124-64.us-west-2.compute.internal" not ready since 2024-05-14 01:05:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790192758506393600 junit 4 days ago
May 14 03:08:29.569 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-40-168.us-east-2.compute.internal" not ready since 2024-05-14 03:06:29 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 03:08:57.936 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-40-168.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 03:08:52.349818       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 03:08:52.350125       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715656132 cert, and key in /tmp/serving-cert-481909122/serving-signer.crt, /tmp/serving-cert-481909122/serving-signer.key\nStaticPodsDegraded: I0514 03:08:52.894667       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 03:08:52.905191       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-40-168.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 03:08:52.905334       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 03:08:52.923617       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-481909122/tls.crt::/tmp/serving-cert-481909122/tls.key"\nStaticPodsDegraded: F0514 03:08:53.193443       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 14 03:14:43.335 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-79-128.us-east-2.compute.internal" not ready since 2024-05-14 03:14:21 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790184152289513472 junit 4 days ago
May 14 02:38:33.919 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-71-29.us-west-2.compute.internal" not ready since 2024-05-14 02:36:33 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 02:39:02.568 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-71-29.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 02:38:52.691892       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 02:38:52.692834       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715654332 cert, and key in /tmp/serving-cert-295347824/serving-signer.crt, /tmp/serving-cert-295347824/serving-signer.key\nStaticPodsDegraded: I0514 02:38:53.167965       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 02:38:53.175403       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-71-29.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 02:38:53.175559       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 02:38:53.188716       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-295347824/tls.crt::/tmp/serving-cert-295347824/tls.key"\nStaticPodsDegraded: F0514 02:38:53.375221       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 14 02:45:14.135 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-13-10.us-west-2.compute.internal" not ready since 2024-05-14 02:45:06 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790146680616652800 junit 5 days ago
May 14 00:07:17.868 - 38s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-27-54.us-west-2.compute.internal" not ready since 2024-05-14 00:05:17 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 00:07:56.206 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-27-54.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 00:07:45.726520       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 00:07:45.726872       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715645265 cert, and key in /tmp/serving-cert-851749845/serving-signer.crt, /tmp/serving-cert-851749845/serving-signer.key\nStaticPodsDegraded: I0514 00:07:46.438150       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 00:07:46.449945       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-27-54.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 00:07:46.450117       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 00:07:46.471068       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-851749845/tls.crt::/tmp/serving-cert-851749845/tls.key"\nStaticPodsDegraded: F0514 00:07:46.847706       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 00:14:14.390 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-79-172.us-west-2.compute.internal" not ready since 2024-05-14 00:14:05 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790136762740248576 junit 5 days ago
May 13 23:21:22.641 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-78-136.us-east-2.compute.internal" not ready since 2024-05-13 23:19:22 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 13 23:21:56.464 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-78-136.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0513 23:21:46.349451       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0513 23:21:46.349894       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715642506 cert, and key in /tmp/serving-cert-2210504703/serving-signer.crt, /tmp/serving-cert-2210504703/serving-signer.key\nStaticPodsDegraded: I0513 23:21:46.977255       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0513 23:21:46.991906       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-78-136.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0513 23:21:46.992154       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0513 23:21:47.024268       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2210504703/tls.crt::/tmp/serving-cert-2210504703/tls.key"\nStaticPodsDegraded: F0513 23:21:47.488290       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 13 23:27:24.657 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-115-99.us-east-2.compute.internal" not ready since 2024-05-13 23:25:24 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790153909302464512 junit 5 days ago
May 14 00:40:40.121 - 19s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-121-155.us-west-2.compute.internal" not ready since 2024-05-14 00:40:24 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 00:40:59.544 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-121-155.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 00:40:51.578671       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 00:40:51.579291       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715647251 cert, and key in /tmp/serving-cert-1650930304/serving-signer.crt, /tmp/serving-cert-1650930304/serving-signer.key\nStaticPodsDegraded: I0514 00:40:51.920156       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 00:40:51.922307       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-155.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 00:40:51.922443       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0514 00:40:51.923068       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1650930304/tls.crt::/tmp/serving-cert-1650930304/tls.key"\nStaticPodsDegraded: F0514 00:40:52.185363       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 00:47:10.183 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-88-253.us-west-2.compute.internal" not ready since 2024-05-14 00:46:59 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790113090386268160 junit 5 days ago
May 13 22:01:37.666 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-82-158.us-west-2.compute.internal" not ready since 2024-05-13 21:59:37 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 13 22:02:07.280 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-82-158.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0513 22:01:59.248722       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0513 22:01:59.249064       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715637719 cert, and key in /tmp/serving-cert-2490896206/serving-signer.crt, /tmp/serving-cert-2490896206/serving-signer.key\nStaticPodsDegraded: I0513 22:01:59.667789       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0513 22:01:59.677479       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-82-158.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0513 22:01:59.677579       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0513 22:01:59.692512       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2490896206/tls.crt::/tmp/serving-cert-2490896206/tls.key"\nStaticPodsDegraded: F0513 22:01:59.964991       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 13 22:08:26.390 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-44-49.us-west-2.compute.internal" not ready since 2024-05-13 22:08:16 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1788215544592207872 junit 10 days ago
May 08 16:04:11.923 - 37s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-52-180.us-east-2.compute.internal" not ready since 2024-05-08 16:02:11 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 08 16:04:48.931 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-52-180.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0508 16:04:37.881895       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0508 16:04:37.889636       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715184277 cert, and key in /tmp/serving-cert-4277142639/serving-signer.crt, /tmp/serving-cert-4277142639/serving-signer.key\nStaticPodsDegraded: I0508 16:04:38.440101       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0508 16:04:38.449950       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-52-180.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0508 16:04:38.450103       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0508 16:04:38.468466       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4277142639/tls.crt::/tmp/serving-cert-4277142639/tls.key"\nStaticPodsDegraded: F0508 16:04:38.788201       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 08 16:10:38.385 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-109-60.us-east-2.compute.internal" not ready since 2024-05-08 16:10:29 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1789332475470680064 junit 7 days ago
May 11 18:11:42.290 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-59-151.us-east-2.compute.internal" not ready since 2024-05-11 18:11:33 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 11 18:11:57.519 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-59-151.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0511 18:11:48.431787       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0511 18:11:48.432013       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715451108 cert, and key in /tmp/serving-cert-438573697/serving-signer.crt, /tmp/serving-cert-438573697/serving-signer.key\nStaticPodsDegraded: I0511 18:11:48.725645       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0511 18:11:48.727408       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-59-151.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0511 18:11:48.727528       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0511 18:11:48.728369       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-438573697/tls.crt::/tmp/serving-cert-438573697/tls.key"\nStaticPodsDegraded: F0511 18:11:48.906015       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 11 18:17:29.230 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-90-104.us-east-2.compute.internal" not ready since 2024-05-11 18:15:29 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1788184125128052736 junit 10 days ago
May 08 14:02:59.183 - 26s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-9-42.us-east-2.compute.internal" not ready since 2024-05-08 14:00:59 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 08 14:03:25.881 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-9-42.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0508 14:03:15.632374       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0508 14:03:15.632665       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715176995 cert, and key in /tmp/serving-cert-3622551620/serving-signer.crt, /tmp/serving-cert-3622551620/serving-signer.key\nStaticPodsDegraded: I0508 14:03:16.261758       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0508 14:03:16.270573       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-9-42.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0508 14:03:16.272488       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0508 14:03:16.284638       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3622551620/tls.crt::/tmp/serving-cert-3622551620/tls.key"\nStaticPodsDegraded: F0508 14:03:16.473261       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 08 14:10:40.180 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-88-247.us-east-2.compute.internal" not ready since 2024-05-08 14:10:31 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1789302884760817664 junit 7 days ago
May 11 16:07:18.341 - 5s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-95-243.ec2.internal" not ready since 2024-05-11 16:07:06 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 11 16:07:24.255 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-95-243.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0511 16:07:18.521580       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0511 16:07:18.521915       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715443638 cert, and key in /tmp/serving-cert-285143227/serving-signer.crt, /tmp/serving-cert-285143227/serving-signer.key\nStaticPodsDegraded: I0511 16:07:19.067257       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0511 16:07:19.077983       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-95-243.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0511 16:07:19.078116       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0511 16:07:19.089649       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-285143227/tls.crt::/tmp/serving-cert-285143227/tls.key"\nStaticPodsDegraded: F0511 16:07:19.310581       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 11 16:13:05.539 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-29-186.ec2.internal" not ready since 2024-05-11 16:13:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1788133112580935680 junit 10 days ago
May 08 10:38:48.567 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-58-43.us-east-2.compute.internal" not ready since 2024-05-08 10:38:40 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 08 10:39:03.735 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-58-43.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0508 10:38:53.072337       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0508 10:38:53.072602       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715164733 cert, and key in /tmp/serving-cert-1774621796/serving-signer.crt, /tmp/serving-cert-1774621796/serving-signer.key\nStaticPodsDegraded: I0508 10:38:53.446222       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0508 10:38:53.453594       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-58-43.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0508 10:38:53.453772       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0508 10:38:53.465799       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1774621796/tls.crt::/tmp/serving-cert-1774621796/tls.key"\nStaticPodsDegraded: F0508 10:38:53.726274       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 08 10:44:38.056 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-64-225.us-east-2.compute.internal" not ready since 2024-05-08 10:44:28 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1788139877959733248 junit 10 days ago
May 08 11:04:39.053 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-38-96.ec2.internal" not ready since 2024-05-08 11:04:37 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 08 11:05:07.332 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-38-96.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0508 11:04:59.217419       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0508 11:04:59.222628       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715166299 cert, and key in /tmp/serving-cert-935305739/serving-signer.crt, /tmp/serving-cert-935305739/serving-signer.key\nStaticPodsDegraded: I0508 11:04:59.585522       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0508 11:04:59.603925       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-38-96.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0508 11:04:59.604021       1 builder.go:299] check-endpoints version 4.15.0-202405070637.p0.gcb537c7.assembly.stream.el9-cb537c7-cb537c7a834b1e67210ee4d3620d9e94d402e960\nStaticPodsDegraded: I0508 11:04:59.631093       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-935305739/tls.crt::/tmp/serving-cert-935305739/tls.key"\nStaticPodsDegraded: F0508 11:04:59.857779       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 08 11:10:54.482 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-46-209.ec2.internal" not ready since 2024-05-08 11:10:45 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787509881381588992 junit 12 days ago
May 06 17:27:16.384 - 37s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-15-172.us-west-1.compute.internal" not ready since 2024-05-06 17:25:16 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 17:27:54.227 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-15-172.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 17:27:44.753559       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 17:27:44.753771       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715016464 cert, and key in /tmp/serving-cert-1648984766/serving-signer.crt, /tmp/serving-cert-1648984766/serving-signer.key\nStaticPodsDegraded: I0506 17:27:45.079364       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 17:27:45.097416       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-15-172.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 17:27:45.097535       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 17:27:45.120192       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1648984766/tls.crt::/tmp/serving-cert-1648984766/tls.key"\nStaticPodsDegraded: W0506 17:27:47.873802       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.\nUsually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nStaticPodsDegraded: F0506 17:27:47.873891       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 06 17:33:48.391 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-78-6.us-west-1.compute.internal" not ready since 2024-05-06 17:33:30 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787607575940829184 junit 12 days ago
May 06 23:54:28.741 - 36s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-2-52.ec2.internal" not ready since 2024-05-06 23:52:28 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 23:55:04.828 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-2-52.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 23:54:54.605761       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 23:54:54.606060       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715039694 cert, and key in /tmp/serving-cert-1278669773/serving-signer.crt, /tmp/serving-cert-1278669773/serving-signer.key\nStaticPodsDegraded: I0506 23:54:55.390535       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 23:54:55.400890       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-2-52.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 23:54:55.401053       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 23:54:55.414306       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1278669773/tls.crt::/tmp/serving-cert-1278669773/tls.key"\nStaticPodsDegraded: F0506 23:54:55.668177       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 07 00:00:39.745 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-106-184.ec2.internal" not ready since 2024-05-07 00:00:31 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787533036926013440 junit 12 days ago
May 06 19:08:01.923 - 25s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-7-114.us-west-1.compute.internal" not ready since 2024-05-06 19:07:51 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 19:08:27.333 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-7-114.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 19:08:19.476896       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 19:08:19.477096       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715022499 cert, and key in /tmp/serving-cert-1180573974/serving-signer.crt, /tmp/serving-cert-1180573974/serving-signer.key\nStaticPodsDegraded: I0506 19:08:20.076050       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 19:08:20.077523       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-7-114.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 19:08:20.077629       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 19:08:20.078252       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1180573974/tls.crt::/tmp/serving-cert-1180573974/tls.key"\nStaticPodsDegraded: F0506 19:08:20.202886       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1787496446417702912 junit 12 days ago
May 06 16:44:39.366 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-17-71.ec2.internal" not ready since 2024-05-06 16:44:33 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 16:44:53.220 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-17-71.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 16:44:43.680987       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 16:44:43.681360       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715013883 cert, and key in /tmp/serving-cert-1150806203/serving-signer.crt, /tmp/serving-cert-1150806203/serving-signer.key\nStaticPodsDegraded: I0506 16:44:44.037154       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 16:44:44.047729       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-17-71.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 16:44:44.047836       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 16:44:44.063965       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1150806203/tls.crt::/tmp/serving-cert-1150806203/tls.key"\nStaticPodsDegraded: F0506 16:44:44.269588       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1787421298251534336junit12 days ago
May 06 11:33:07.049 - 39s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-84-68.us-east-2.compute.internal" not ready since 2024-05-06 11:31:07 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 11:33:46.411 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-84-68.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 11:33:36.697460       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 11:33:36.698040       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714995216 cert, and key in /tmp/serving-cert-1212254177/serving-signer.crt, /tmp/serving-cert-1212254177/serving-signer.key\nStaticPodsDegraded: I0506 11:33:37.093528       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 11:33:37.104918       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-84-68.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 11:33:37.105047       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 11:33:37.119480       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1212254177/tls.crt::/tmp/serving-cert-1212254177/tls.key"\nStaticPodsDegraded: F0506 11:33:37.719675       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 11:39:36.885 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-17-82.us-east-2.compute.internal" not ready since 2024-05-06 11:39:28 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787399574218870784junit12 days ago
May 06 10:06:55.721 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-44-27.us-west-1.compute.internal" not ready since 2024-05-06 10:06:34 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 10:07:11.231 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-44-27.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 10:07:01.722852       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 10:07:01.723081       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714990021 cert, and key in /tmp/serving-cert-1679022585/serving-signer.crt, /tmp/serving-cert-1679022585/serving-signer.key\nStaticPodsDegraded: I0506 10:07:01.981849       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 10:07:01.983159       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-44-27.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 10:07:01.983270       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 10:07:01.983849       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1679022585/tls.crt::/tmp/serving-cert-1679022585/tls.key"\nStaticPodsDegraded: F0506 10:07:02.172771       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 06 10:13:19.238 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-69-136.us-west-1.compute.internal" not ready since 2024-05-06 10:13:02 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787410915205844992junit12 days ago
May 06 11:00:09.148 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-53-24.us-west-2.compute.internal" not ready since 2024-05-06 10:59:58 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 11:00:23.425 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-53-24.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 11:00:13.240300       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 11:00:13.240631       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714993213 cert, and key in /tmp/serving-cert-1180452635/serving-signer.crt, /tmp/serving-cert-1180452635/serving-signer.key\nStaticPodsDegraded: I0506 11:00:14.024181       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 11:00:14.037160       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-53-24.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 11:00:14.037263       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 11:00:14.064374       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1180452635/tls.crt::/tmp/serving-cert-1180452635/tls.key"\nStaticPodsDegraded: F0506 11:00:14.380674       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 06 11:06:28.149 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-214.us-west-2.compute.internal" not ready since 2024-05-06 11:06:18 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787384432567521280junit12 days ago
May 06 09:03:23.808 - 35s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-49-18.us-east-2.compute.internal" not ready since 2024-05-06 09:01:23 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 09:03:59.509 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-49-18.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 09:03:48.753855       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 09:03:48.754209       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714986228 cert, and key in /tmp/serving-cert-759112939/serving-signer.crt, /tmp/serving-cert-759112939/serving-signer.key\nStaticPodsDegraded: I0506 09:03:49.267411       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 09:03:49.293296       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-49-18.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 09:03:49.293414       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 09:03:49.322465       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-759112939/tls.crt::/tmp/serving-cert-759112939/tls.key"\nStaticPodsDegraded: F0506 09:03:49.504803       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 09:09:34.063 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-119-13.us-east-2.compute.internal" not ready since 2024-05-06 09:09:26 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787362558940811264junit12 days ago
May 06 07:42:22.936 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-46-23.us-west-1.compute.internal" not ready since 2024-05-06 07:42:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 07:42:37.901 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-46-23.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 07:42:27.750403       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 07:42:27.750704       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714981347 cert, and key in /tmp/serving-cert-3843431160/serving-signer.crt, /tmp/serving-cert-3843431160/serving-signer.key\nStaticPodsDegraded: I0506 07:42:28.338050       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 07:42:28.354889       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-46-23.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 07:42:28.355002       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 07:42:28.376868       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3843431160/tls.crt::/tmp/serving-cert-3843431160/tls.key"\nStaticPodsDegraded: F0506 07:42:28.576720       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 06 07:48:04.944 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-117-128.us-west-1.compute.internal" not ready since 2024-05-06 07:46:04 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787392133854924800junit12 days ago
May 06 09:40:25.943 - 6s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-22-117.ec2.internal" not ready since 2024-05-06 09:40:14 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 09:40:32.725 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-22-117.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 09:40:25.550200       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 09:40:25.550526       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714988425 cert, and key in /tmp/serving-cert-2259244783/serving-signer.crt, /tmp/serving-cert-2259244783/serving-signer.key\nStaticPodsDegraded: I0506 09:40:25.952101       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 09:40:25.963023       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-22-117.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 09:40:25.963145       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 09:40:25.975033       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2259244783/tls.crt::/tmp/serving-cert-2259244783/tls.key"\nStaticPodsDegraded: F0506 09:40:26.178427       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 09:45:57.951 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-67-71.ec2.internal" not ready since 2024-05-06 09:43:57 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787355700356190208junit12 days ago
May 06 07:14:21.572 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-57-250.us-west-2.compute.internal" not ready since 2024-05-06 07:12:21 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 07:14:50.728 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-57-250.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 07:14:42.760459       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 07:14:42.761193       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714979682 cert, and key in /tmp/serving-cert-2223169849/serving-signer.crt, /tmp/serving-cert-2223169849/serving-signer.key\nStaticPodsDegraded: I0506 07:14:43.257176       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 07:14:43.270260       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-57-250.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 07:14:43.270362       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 07:14:43.295048       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2223169849/tls.crt::/tmp/serving-cert-2223169849/tls.key"\nStaticPodsDegraded: F0506 07:14:43.467618       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 07:20:34.537 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-90-213.us-west-2.compute.internal" not ready since 2024-05-06 07:18:34 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787355700356190208junit12 days ago
May 06 07:27:10.281 - 18s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-40-16.us-west-2.compute.internal" not ready since 2024-05-06 07:26:53 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 07:27:28.843 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-40-16.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 07:27:20.526159       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 07:27:20.526404       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714980440 cert, and key in /tmp/serving-cert-1664207665/serving-signer.crt, /tmp/serving-cert-1664207665/serving-signer.key\nStaticPodsDegraded: I0506 07:27:20.829242       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 07:27:20.830715       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-40-16.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 07:27:20.830840       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 07:27:20.831435       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1664207665/tls.crt::/tmp/serving-cert-1664207665/tls.key"\nStaticPodsDegraded: F0506 07:27:21.057916       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1787380038065197056junit12 days ago
May 06 08:54:37.163 - 39s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-1-255.us-west-1.compute.internal" not ready since 2024-05-06 08:52:37 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 08:55:17.053 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-1-255.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 08:55:08.096197       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 08:55:08.096425       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714985708 cert, and key in /tmp/serving-cert-1965269642/serving-signer.crt, /tmp/serving-cert-1965269642/serving-signer.key\nStaticPodsDegraded: I0506 08:55:08.318397       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 08:55:08.319954       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-1-255.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 08:55:08.320084       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 08:55:08.320727       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1965269642/tls.crt::/tmp/serving-cert-1965269642/tls.key"\nStaticPodsDegraded: F0506 08:55:08.645565       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 09:00:45.790 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-118-152.us-west-1.compute.internal" not ready since 2024-05-06 08:58:45 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787338179741749248junit12 days ago
May 06 05:59:53.425 - 5s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-36-23.us-east-2.compute.internal" not ready since 2024-05-06 05:59:41 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 05:59:58.783 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-36-23.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 05:59:52.554017       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 05:59:52.554258       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714975192 cert, and key in /tmp/serving-cert-2914571179/serving-signer.crt, /tmp/serving-cert-2914571179/serving-signer.key\nStaticPodsDegraded: I0506 05:59:52.797176       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 05:59:52.816658       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-36-23.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 05:59:52.816787       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 05:59:52.843554       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2914571179/tls.crt::/tmp/serving-cert-2914571179/tls.key"\nStaticPodsDegraded: F0506 05:59:53.387051       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 06 06:05:35.730 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-126-81.us-east-2.compute.internal" not ready since 2024-05-06 06:03:35 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787338179741749248junit12 days ago
May 06 06:11:42.313 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-41-181.us-east-2.compute.internal" not ready since 2024-05-06 06:11:33 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 06:11:57.373 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-41-181.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 06:11:45.775840       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 06:11:45.776627       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714975905 cert, and key in /tmp/serving-cert-2375779798/serving-signer.crt, /tmp/serving-cert-2375779798/serving-signer.key\nStaticPodsDegraded: I0506 06:11:46.285560       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 06:11:46.305571       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-41-181.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 06:11:46.305725       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 06:11:46.333996       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2375779798/tls.crt::/tmp/serving-cert-2375779798/tls.key"\nStaticPodsDegraded: F0506 06:11:46.530450       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1787349245808873472junit12 days ago
May 06 06:45:51.032 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-42-89.ec2.internal" not ready since 2024-05-06 06:45:43 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 06:46:00.432 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-42-89.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 06:45:55.632345       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 06:45:55.632769       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714977955 cert, and key in /tmp/serving-cert-553862195/serving-signer.crt, /tmp/serving-cert-553862195/serving-signer.key\nStaticPodsDegraded: I0506 06:45:56.271778       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 06:45:56.281037       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-42-89.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 06:45:56.281136       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 06:45:56.293250       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-553862195/tls.crt::/tmp/serving-cert-553862195/tls.key"\nStaticPodsDegraded: F0506 06:45:56.468400       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 06:51:41.950 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-83-234.ec2.internal" not ready since 2024-05-06 06:51:22 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787331809504137216junit12 days ago
May 06 05:49:01.120 - 39s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-97-217.us-west-2.compute.internal" not ready since 2024-05-06 05:47:01 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 05:49:41.013 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-97-217.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 05:49:31.461170       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 05:49:31.473887       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714974571 cert, and key in /tmp/serving-cert-3424240765/serving-signer.crt, /tmp/serving-cert-3424240765/serving-signer.key\nStaticPodsDegraded: I0506 05:49:31.944980       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 05:49:31.961470       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-97-217.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 05:49:31.961608       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 05:49:31.974393       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3424240765/tls.crt::/tmp/serving-cert-3424240765/tls.key"\nStaticPodsDegraded: F0506 05:49:32.257013       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 06 05:55:55.552 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-69-79.us-west-2.compute.internal" not ready since 2024-05-06 05:55:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787302237362458624junit12 days ago
May 06 03:34:44.429 - 41s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-36-26.ec2.internal" not ready since 2024-05-06 03:32:44 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 03:35:25.851 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-36-26.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 03:35:14.415285       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 03:35:14.416807       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714966514 cert, and key in /tmp/serving-cert-2269562611/serving-signer.crt, /tmp/serving-cert-2269562611/serving-signer.key\nStaticPodsDegraded: I0506 03:35:15.356898       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 03:35:15.369074       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-36-26.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 03:35:15.369204       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 03:35:15.389581       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2269562611/tls.crt::/tmp/serving-cert-2269562611/tls.key"\nStaticPodsDegraded: F0506 03:35:15.551342       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 06 03:40:56.513 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-115-99.ec2.internal" not ready since 2024-05-06 03:40:46 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787302237362458624junit12 days ago
May 06 03:46:18.716 - 32s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-86-89.ec2.internal" not ready since 2024-05-06 03:44:18 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 03:46:50.802 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-86-89.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 03:46:41.134157       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 03:46:41.134594       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714967201 cert, and key in /tmp/serving-cert-1092401529/serving-signer.crt, /tmp/serving-cert-1092401529/serving-signer.key\nStaticPodsDegraded: I0506 03:46:41.429449       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 03:46:41.442418       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-86-89.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 03:46:41.442545       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 03:46:41.453540       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1092401529/tls.crt::/tmp/serving-cert-1092401529/tls.key"\nStaticPodsDegraded: F0506 03:46:41.891796       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1787296407078572032junit12 days ago
May 06 03:31:14.579 - 38s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-25-10.us-west-1.compute.internal" not ready since 2024-05-06 03:29:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 03:31:52.701 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-25-10.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 03:31:43.462668       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 03:31:43.462880       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714966303 cert, and key in /tmp/serving-cert-1712730033/serving-signer.crt, /tmp/serving-cert-1712730033/serving-signer.key\nStaticPodsDegraded: I0506 03:31:43.965561       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 03:31:43.978414       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-25-10.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 03:31:43.978521       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 03:31:43.998028       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1712730033/tls.crt::/tmp/serving-cert-1712730033/tls.key"\nStaticPodsDegraded: W0506 03:31:46.412743       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nStaticPodsDegraded: F0506 03:31:46.412782       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:check-endpoints" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1787285253077864448junit12 days ago
May 06 02:33:59.261 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-12-84.ec2.internal" not ready since 2024-05-06 02:33:49 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 02:34:08.670 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-12-84.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 02:34:01.042900       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 02:34:01.043258       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714962841 cert, and key in /tmp/serving-cert-3393136824/serving-signer.crt, /tmp/serving-cert-3393136824/serving-signer.key\nStaticPodsDegraded: I0506 02:34:01.348193       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 02:34:01.357390       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-12-84.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 02:34:01.357590       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 02:34:01.367294       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3393136824/tls.crt::/tmp/serving-cert-3393136824/tls.key"\nStaticPodsDegraded: F0506 02:34:01.751589       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 02:39:37.733 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-80-221.ec2.internal" not ready since 2024-05-06 02:37:37 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787272238374850560junit12 days ago
May 06 01:56:13.760 - 40s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-3-204.us-west-1.compute.internal" not ready since 2024-05-06 01:54:13 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 01:56:54.121 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-3-204.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 01:56:43.832892       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 01:56:43.833285       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714960603 cert, and key in /tmp/serving-cert-885815004/serving-signer.crt, /tmp/serving-cert-885815004/serving-signer.key\nStaticPodsDegraded: I0506 01:56:44.512217       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 01:56:44.520182       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-3-204.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 01:56:44.520293       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 01:56:44.539057       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-885815004/tls.crt::/tmp/serving-cert-885815004/tls.key"\nStaticPodsDegraded: F0506 01:56:44.941832       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 02:02:45.303 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-127-194.us-west-1.compute.internal" not ready since 2024-05-06 02:02:34 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787248190194454528junit13 days ago
May 06 00:15:18.357 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-31-162.us-west-1.compute.internal" not ready since 2024-05-06 00:15:16 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 00:15:52.494 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-31-162.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 00:15:41.028497       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 00:15:41.028974       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714954541 cert, and key in /tmp/serving-cert-3048762867/serving-signer.crt, /tmp/serving-cert-3048762867/serving-signer.key\nStaticPodsDegraded: I0506 00:15:41.740793       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 00:15:41.754079       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-31-162.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 00:15:41.754253       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 00:15:41.775042       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3048762867/tls.crt::/tmp/serving-cert-3048762867/tls.key"\nStaticPodsDegraded: F0506 00:15:42.060562       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 06 00:21:39.539 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-126-128.us-west-1.compute.internal" not ready since 2024-05-06 00:21:28 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787265062235279360junit12 days ago
May 06 01:36:23.427 - 11s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-118-16.us-east-2.compute.internal" not ready since 2024-05-06 01:36:02 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 01:36:35.263 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-118-16.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 01:36:25.725007       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 01:36:25.730318       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714959385 cert, and key in /tmp/serving-cert-399131777/serving-signer.crt, /tmp/serving-cert-399131777/serving-signer.key\nStaticPodsDegraded: I0506 01:36:26.144189       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 01:36:26.159843       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-16.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 01:36:26.159981       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 01:36:26.182401       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-399131777/tls.crt::/tmp/serving-cert-399131777/tls.key"\nStaticPodsDegraded: F0506 01:36:26.427403       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1787254673158180864junit13 days ago
May 06 00:38:09.339 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-101-205.ec2.internal" not ready since 2024-05-06 00:38:03 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 00:38:24.614 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-101-205.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 00:38:15.217436       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 00:38:15.217892       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714955895 cert, and key in /tmp/serving-cert-1953178759/serving-signer.crt, /tmp/serving-cert-1953178759/serving-signer.key\nStaticPodsDegraded: I0506 00:38:15.754057       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 00:38:15.770128       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-205.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 00:38:15.770283       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0506 00:38:15.783373       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1953178759/tls.crt::/tmp/serving-cert-1953178759/tls.key"\nStaticPodsDegraded: F0506 00:38:16.133574       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 00:43:55.786 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-23-169.ec2.internal" not ready since 2024-05-06 00:43:45 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787232359616090112junit13 days ago
May 05 23:09:44.142 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-91-131.us-west-2.compute.internal" not ready since 2024-05-05 23:09:23 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 23:09:59.568 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-91-131.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 23:09:48.212667       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 23:09:48.212939       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714950588 cert, and key in /tmp/serving-cert-1099629169/serving-signer.crt, /tmp/serving-cert-1099629169/serving-signer.key\nStaticPodsDegraded: I0505 23:09:48.779055       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 23:09:48.805749       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-91-131.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 23:09:48.805868       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 23:09:48.831532       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1099629169/tls.crt::/tmp/serving-cert-1099629169/tls.key"\nStaticPodsDegraded: F0505 23:09:49.067972       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 23:15:28.003 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-50-163.us-west-2.compute.internal" not ready since 2024-05-05 23:13:27 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787242763465527296junit13 days ago
May 05 23:39:38.041 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-2-161.us-east-2.compute.internal" not ready since 2024-05-05 23:37:38 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 23:40:06.719 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-2-161.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 23:39:58.657174       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 23:39:58.657499       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714952398 cert, and key in /tmp/serving-cert-1797010024/serving-signer.crt, /tmp/serving-cert-1797010024/serving-signer.key\nStaticPodsDegraded: I0505 23:39:59.156291       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 23:39:59.168384       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-2-161.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 23:39:59.168509       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 23:39:59.176863       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1797010024/tls.crt::/tmp/serving-cert-1797010024/tls.key"\nStaticPodsDegraded: F0505 23:39:59.431483       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 23:45:13.045 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-78-209.us-east-2.compute.internal" not ready since 2024-05-05 23:43:13 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787219732907167744junit13 days ago
May 05 22:15:34.406 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-64-35.ec2.internal" not ready since 2024-05-05 22:13:34 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 22:16:03.819 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-64-35.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 22:15:54.014725       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 22:15:54.015114       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714947354 cert, and key in /tmp/serving-cert-3412618347/serving-signer.crt, /tmp/serving-cert-3412618347/serving-signer.key\nStaticPodsDegraded: I0505 22:15:54.802524       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 22:15:54.812735       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-64-35.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 22:15:54.812935       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 22:15:54.831381       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3412618347/tls.crt::/tmp/serving-cert-3412618347/tls.key"\nStaticPodsDegraded: F0505 22:15:55.184011       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 22:21:51.048 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-92-233.ec2.internal" not ready since 2024-05-05 22:21:34 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787225309121089536 junit (13 days ago)
May 05 22:39:11.844 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-33-98.us-west-1.compute.internal" not ready since 2024-05-05 22:37:11 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 22:39:41.741 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-33-98.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 22:39:33.769220       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 22:39:33.769621       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714948773 cert, and key in /tmp/serving-cert-1231102339/serving-signer.crt, /tmp/serving-cert-1231102339/serving-signer.key\nStaticPodsDegraded: I0505 22:39:34.260795       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 22:39:34.267470       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-33-98.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 22:39:34.267616       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 22:39:34.289988       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1231102339/tls.crt::/tmp/serving-cert-1231102339/tls.key"\nStaticPodsDegraded: F0505 22:39:34.499322       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 22:45:31.288 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-126-225.us-west-1.compute.internal" not ready since 2024-05-05 22:45:23 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787195580246659072 junit (13 days ago)
May 05 20:40:35.448 - 38s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-117-253.us-west-1.compute.internal" not ready since 2024-05-05 20:38:35 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 20:41:14.035 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-117-253.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 20:41:04.436499       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 20:41:04.436777       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714941664 cert, and key in /tmp/serving-cert-3232417258/serving-signer.crt, /tmp/serving-cert-3232417258/serving-signer.key\nStaticPodsDegraded: I0505 20:41:05.086177       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 20:41:05.107215       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-117-253.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 20:41:05.107336       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 20:41:05.141075       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3232417258/tls.crt::/tmp/serving-cert-3232417258/tls.key"\nStaticPodsDegraded: F0505 20:41:05.387637       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 20:47:14.366 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-66-11.us-west-1.compute.internal" not ready since 2024-05-05 20:47:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787212801870139392 junit (13 days ago)
May 05 21:53:53.449 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-72-90.us-west-2.compute.internal" not ready since 2024-05-05 21:51:53 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 21:54:24.647 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-72-90.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 21:54:18.772120       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 21:54:18.777623       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714946058 cert, and key in /tmp/serving-cert-810479970/serving-signer.crt, /tmp/serving-cert-810479970/serving-signer.key\nStaticPodsDegraded: I0505 21:54:19.326309       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 21:54:19.334859       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-72-90.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 21:54:19.334937       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 21:54:19.352696       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-810479970/tls.crt::/tmp/serving-cert-810479970/tls.key"\nStaticPodsDegraded: F0505 21:54:19.612213       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 22:00:28.578 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-55-46.us-west-2.compute.internal" not ready since 2024-05-05 22:00:20 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787182398123806720 junit (13 days ago)
May 05 19:42:51.686 - 8s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-5-39.us-east-2.compute.internal" not ready since 2024-05-05 19:42:37 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 19:43:00.118 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-5-39.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 19:42:50.323095       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 19:42:50.327389       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714938170 cert, and key in /tmp/serving-cert-2899348713/serving-signer.crt, /tmp/serving-cert-2899348713/serving-signer.key\nStaticPodsDegraded: I0505 19:42:50.832668       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 19:42:50.840141       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-5-39.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 19:42:50.840246       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 19:42:50.863907       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2899348713/tls.crt::/tmp/serving-cert-2899348713/tls.key"\nStaticPodsDegraded: F0505 19:42:51.159637       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 19:48:09.155 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-79-176.us-east-2.compute.internal" not ready since 2024-05-05 19:46:09 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787165327050674176 junit (13 days ago)
May 05 18:35:17.054 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-50-184.us-west-2.compute.internal" not ready since 2024-05-05 18:33:17 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 18:35:50.863 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-50-184.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 18:35:41.055159       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 18:35:41.055520       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714934141 cert, and key in /tmp/serving-cert-4091054075/serving-signer.crt, /tmp/serving-cert-4091054075/serving-signer.key\nStaticPodsDegraded: I0505 18:35:41.401026       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 18:35:41.411942       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-50-184.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 18:35:41.412079       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 18:35:41.428496       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4091054075/tls.crt::/tmp/serving-cert-4091054075/tls.key"\nStaticPodsDegraded: F0505 18:35:41.741423       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 18:41:19.058 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-75-247.us-west-2.compute.internal" not ready since 2024-05-05 18:39:19 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787150062963396608 junit (13 days ago)
May 05 17:39:53.634 - 5s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-16-227.ec2.internal" not ready since 2024-05-05 17:39:32 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 17:39:59.101 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-16-227.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 17:39:54.366475       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 17:39:54.366700       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714930794 cert, and key in /tmp/serving-cert-1688514057/serving-signer.crt, /tmp/serving-cert-1688514057/serving-signer.key\nStaticPodsDegraded: I0505 17:39:54.641706       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 17:39:54.672535       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-16-227.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 17:39:54.672663       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 17:39:54.693204       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1688514057/tls.crt::/tmp/serving-cert-1688514057/tls.key"\nStaticPodsDegraded: F0505 17:39:54.913458       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 17:45:15.978 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-95-33.ec2.internal" not ready since 2024-05-05 17:43:15 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787144371733270528 junit (13 days ago)
May 05 17:26:25.516 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-73-14.us-west-2.compute.internal" not ready since 2024-05-05 17:26:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 17:26:40.722 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-73-14.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 17:26:31.669420       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 17:26:31.669851       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714929991 cert, and key in /tmp/serving-cert-2141435131/serving-signer.crt, /tmp/serving-cert-2141435131/serving-signer.key\nStaticPodsDegraded: I0505 17:26:32.079711       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 17:26:32.093485       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-73-14.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 17:26:32.093639       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 17:26:32.108535       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2141435131/tls.crt::/tmp/serving-cert-2141435131/tls.key"\nStaticPodsDegraded: F0505 17:26:32.408289       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 17:32:45.331 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-7-118.us-west-2.compute.internal" not ready since 2024-05-05 17:32:32 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787136219348471808 junit (13 days ago)
May 05 16:44:42.616 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-10-15.us-west-1.compute.internal" not ready since 2024-05-05 16:42:42 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 16:45:16.684 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-10-15.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 16:45:08.580992       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 16:45:08.581271       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714927508 cert, and key in /tmp/serving-cert-747552829/serving-signer.crt, /tmp/serving-cert-747552829/serving-signer.key\nStaticPodsDegraded: I0505 16:45:09.000418       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 16:45:09.001846       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-15.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 16:45:09.001943       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 16:45:09.002574       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-747552829/tls.crt::/tmp/serving-cert-747552829/tls.key"\nStaticPodsDegraded: F0505 16:45:09.234602       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 16:50:44.434 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-70-9.us-west-1.compute.internal" not ready since 2024-05-05 16:48:44 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787120243873681408 junit (13 days ago)
May 05 15:36:07.167 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-20-39.us-east-2.compute.internal" not ready since 2024-05-05 15:35:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 15:36:22.614 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-20-39.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 15:36:11.082415       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 15:36:11.082665       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714923371 cert, and key in /tmp/serving-cert-967008381/serving-signer.crt, /tmp/serving-cert-967008381/serving-signer.key\nStaticPodsDegraded: I0505 15:36:11.880022       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 15:36:11.896589       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-20-39.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 15:36:11.896680       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 15:36:11.915994       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-967008381/tls.crt::/tmp/serving-cert-967008381/tls.key"\nStaticPodsDegraded: F0505 15:36:12.111280       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 15:41:55.154 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-27-63.us-east-2.compute.internal" not ready since 2024-05-05 15:41:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787128266474131456 junit (13 days ago)
May 05 16:12:21.176 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-63-184.us-west-2.compute.internal" not ready since 2024-05-05 16:10:21 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 16:12:56.012 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-63-184.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 16:12:44.714064       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 16:12:44.733234       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714925564 cert, and key in /tmp/serving-cert-1920112426/serving-signer.crt, /tmp/serving-cert-1920112426/serving-signer.key\nStaticPodsDegraded: I0505 16:12:45.633242       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 16:12:45.639954       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-63-184.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 16:12:45.640246       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 16:12:45.658054       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1920112426/tls.crt::/tmp/serving-cert-1920112426/tls.key"\nStaticPodsDegraded: F0505 16:12:46.118061       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 16:19:06.189 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-118-217.us-west-2.compute.internal" not ready since 2024-05-05 16:18:58 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787113276828553216 junit (13 days ago)
May 05 15:11:15.684 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-12.us-east-2.compute.internal" not ready since 2024-05-05 15:11:06 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 15:11:29.191 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-100-12.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 15:11:18.776247       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 15:11:18.776557       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714921878 cert, and key in /tmp/serving-cert-2913412825/serving-signer.crt, /tmp/serving-cert-2913412825/serving-signer.key\nStaticPodsDegraded: I0505 15:11:19.294405       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 15:11:19.310660       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-100-12.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 15:11:19.310841       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 15:11:19.336467       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2913412825/tls.crt::/tmp/serving-cert-2913412825/tls.key"\nStaticPodsDegraded: F0505 15:11:19.447185       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 15:17:14.657 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-76-219.us-east-2.compute.internal" not ready since 2024-05-05 15:17:03 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787088455839256576 junit (13 days ago)
May 05 13:25:21.279 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-5-224.us-east-2.compute.internal" not ready since 2024-05-05 13:25:01 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 13:25:35.747 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-5-224.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 13:25:24.923211       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 13:25:24.923782       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714915524 cert, and key in /tmp/serving-cert-754730570/serving-signer.crt, /tmp/serving-cert-754730570/serving-signer.key\nStaticPodsDegraded: I0505 13:25:25.382049       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 13:25:25.404114       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-5-224.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 13:25:25.404207       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 13:25:25.434016       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-754730570/tls.crt::/tmp/serving-cert-754730570/tls.key"\nStaticPodsDegraded: F0505 13:25:25.625042       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 13:30:48.258 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-81-55.us-east-2.compute.internal" not ready since 2024-05-05 13:28:48 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787100087722184704 junit (13 days ago)
May 05 14:22:55.367 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-18-251.us-east-2.compute.internal" not ready since 2024-05-05 14:22:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 14:23:08.095 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-18-251.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 14:22:56.773441       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 14:22:56.773665       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714918976 cert, and key in /tmp/serving-cert-3049503599/serving-signer.crt, /tmp/serving-cert-3049503599/serving-signer.key\nStaticPodsDegraded: I0505 14:22:57.134929       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 14:22:57.145892       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-18-251.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 14:22:57.146032       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 14:22:57.166646       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3049503599/tls.crt::/tmp/serving-cert-3049503599/tls.key"\nStaticPodsDegraded: F0505 14:22:57.445230       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1787070146305593344 junit (13 days ago)
May 05 12:14:03.287 - 35s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-121-246.ec2.internal" not ready since 2024-05-05 12:12:03 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 12:14:38.851 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-121-246.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 12:14:33.980571       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 12:14:33.980831       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714911273 cert, and key in /tmp/serving-cert-3705383074/serving-signer.crt, /tmp/serving-cert-3705383074/serving-signer.key\nStaticPodsDegraded: I0505 12:14:34.666891       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 12:14:34.681976       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-246.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 12:14:34.682195       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 12:14:34.696544       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3705383074/tls.crt::/tmp/serving-cert-3705383074/tls.key"\nStaticPodsDegraded: F0505 12:14:34.829907       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 12:20:00.296 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-126-224.ec2.internal" not ready since 2024-05-05 12:18:00 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787070146305593344 junit (13 days ago)
May 05 12:26:08.164 - 8s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-10-21.ec2.internal" not ready since 2024-05-05 12:25:56 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 12:26:16.448 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-10-21.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 12:26:07.807619       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 12:26:07.808079       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714911967 cert, and key in /tmp/serving-cert-1150267964/serving-signer.crt, /tmp/serving-cert-1150267964/serving-signer.key\nStaticPodsDegraded: I0505 12:26:08.236296       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 12:26:08.253093       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-21.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 12:26:08.253192       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 12:26:08.263490       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1150267964/tls.crt::/tmp/serving-cert-1150267964/tls.key"\nStaticPodsDegraded: F0505 12:26:08.582386       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1787076307796889600 junit (13 days ago)
May 05 12:54:11.726 - 18s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-13-113.us-west-2.compute.internal" not ready since 2024-05-05 12:54:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 12:54:29.933 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-13-113.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 12:54:18.259981       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 12:54:18.282650       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714913658 cert, and key in /tmp/serving-cert-25656613/serving-signer.crt, /tmp/serving-cert-25656613/serving-signer.key\nStaticPodsDegraded: I0505 12:54:18.868491       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 12:54:18.885769       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-13-113.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 12:54:18.885932       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 12:54:18.912796       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-25656613/tls.crt::/tmp/serving-cert-25656613/tls.key"\nStaticPodsDegraded: F0505 12:54:19.587175       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 13:00:09.711 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-73-160.us-west-2.compute.internal" not ready since 2024-05-05 12:58:09 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787058270322561024 junit (13 days ago)
May 05 11:50:41.796 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-80-110.us-west-2.compute.internal" not ready since 2024-05-05 11:50:33 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 11:50:57.362 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-80-110.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 11:50:48.475245       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 11:50:48.488287       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714909848 cert, and key in /tmp/serving-cert-3676366566/serving-signer.crt, /tmp/serving-cert-3676366566/serving-signer.key\nStaticPodsDegraded: I0505 11:50:49.035891       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 11:50:49.050136       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-80-110.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 11:50:49.050242       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 11:50:49.075437       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3676366566/tls.crt::/tmp/serving-cert-3676366566/tls.key"\nStaticPodsDegraded: F0505 11:50:49.400428       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1787052301026856960 junit (13 days ago)
May 05 11:02:19.030 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-7-238.us-east-2.compute.internal" not ready since 2024-05-05 11:00:19 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 11:02:53.690 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-7-238.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 11:02:42.601437       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 11:02:42.601733       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714906962 cert, and key in /tmp/serving-cert-1361713871/serving-signer.crt, /tmp/serving-cert-1361713871/serving-signer.key\nStaticPodsDegraded: I0505 11:02:43.205415       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 11:02:43.225109       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-7-238.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 11:02:43.225236       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 11:02:43.255265       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1361713871/tls.crt::/tmp/serving-cert-1361713871/tls.key"\nStaticPodsDegraded: F0505 11:02:43.456810       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 11:08:29.040 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-104-199.us-east-2.compute.internal" not ready since 2024-05-05 11:08:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787043523183251456 junit (13 days ago)
May 05 10:29:00.422 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-67-127.us-east-2.compute.internal" not ready since 2024-05-05 10:28:52 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 10:29:15.984 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-67-127.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 10:29:05.244383       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 10:29:05.245110       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714904945 cert, and key in /tmp/serving-cert-1199164762/serving-signer.crt, /tmp/serving-cert-1199164762/serving-signer.key\nStaticPodsDegraded: I0505 10:29:05.772327       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 10:29:05.790211       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-67-127.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 10:29:05.790497       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 10:29:05.812362       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1199164762/tls.crt::/tmp/serving-cert-1199164762/tls.key"\nStaticPodsDegraded: F0505 10:29:06.409801       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 10:34:51.418 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-90-0.us-east-2.compute.internal" not ready since 2024-05-05 10:34:41 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787043523183251456 junit (13 days ago)
May 05 10:40:28.116 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-13-4.us-east-2.compute.internal" not ready since 2024-05-05 10:40:25 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 10:40:59.826 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-13-4.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 10:40:48.267009       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 10:40:48.267559       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714905648 cert, and key in /tmp/serving-cert-2687489556/serving-signer.crt, /tmp/serving-cert-2687489556/serving-signer.key\nStaticPodsDegraded: I0505 10:40:48.869676       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 10:40:48.898291       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-13-4.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 10:40:48.898483       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 10:40:48.915231       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2687489556/tls.crt::/tmp/serving-cert-2687489556/tls.key"\nStaticPodsDegraded: F0505 10:40:49.210708       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1787035853759975424 junit (13 days ago)
May 05 10:14:44.158 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-30-43.us-west-1.compute.internal" not ready since 2024-05-05 10:14:36 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 10:14:56.210 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-30-43.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 10:14:48.713179       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 10:14:48.713612       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714904088 cert, and key in /tmp/serving-cert-2385741830/serving-signer.crt, /tmp/serving-cert-2385741830/serving-signer.key\nStaticPodsDegraded: I0505 10:14:49.288701       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 10:14:49.302895       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-30-43.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 10:14:49.303095       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 10:14:49.318356       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2385741830/tls.crt::/tmp/serving-cert-2385741830/tls.key"\nStaticPodsDegraded: F0505 10:14:49.882719       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 10:21:00.515 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-70-254.us-west-1.compute.internal" not ready since 2024-05-05 10:20:39 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787028178733109248 junit (13 days ago)
May 05 09:31:38.224 - 27s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-98-45.ec2.internal" not ready since 2024-05-05 09:29:38 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 09:32:06.023 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-98-45.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 09:31:57.123123       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 09:31:57.123358       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714901517 cert, and key in /tmp/serving-cert-3672191826/serving-signer.crt, /tmp/serving-cert-3672191826/serving-signer.key\nStaticPodsDegraded: I0505 09:31:57.544329       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 09:31:57.572689       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-98-45.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 09:31:57.572819       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 09:31:57.591435       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3672191826/tls.crt::/tmp/serving-cert-3672191826/tls.key"\nStaticPodsDegraded: F0505 09:31:58.001528       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 09:37:36.942 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-97-165.ec2.internal" not ready since 2024-05-05 09:37:26 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787006243223638016 junit (13 days ago)
May 05 07:59:51.730 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-2-71.us-east-2.compute.internal" not ready since 2024-05-05 07:57:51 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 08:00:26.414 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-2-71.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 08:00:16.191369       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 08:00:16.191668       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714896016 cert, and key in /tmp/serving-cert-1780563666/serving-signer.crt, /tmp/serving-cert-1780563666/serving-signer.key\nStaticPodsDegraded: I0505 08:00:16.539266       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 08:00:16.550530       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-2-71.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 08:00:16.550621       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 08:00:16.564285       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1780563666/tls.crt::/tmp/serving-cert-1780563666/tls.key"\nStaticPodsDegraded: F0505 08:00:16.767394       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 08:06:02.730 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-115-171.us-east-2.compute.internal" not ready since 2024-05-05 08:05:53 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787013293919965184 junit 13 days ago
May 05 08:24:29.434 - 26s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-29-160.ec2.internal" not ready since 2024-05-05 08:22:29 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 08:24:55.789 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-29-160.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 08:24:46.241740       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 08:24:46.242115       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714897486 cert, and key in /tmp/serving-cert-3091069581/serving-signer.crt, /tmp/serving-cert-3091069581/serving-signer.key\nStaticPodsDegraded: I0505 08:24:46.752518       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 08:24:46.764617       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-29-160.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 08:24:46.764735       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 08:24:46.780057       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3091069581/tls.crt::/tmp/serving-cert-3091069581/tls.key"\nStaticPodsDegraded: F0505 08:24:46.947926       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 08:30:16.422 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-80-215.ec2.internal" not ready since 2024-05-05 08:28:16 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787021028761800704 junit 13 days ago
May 05 09:07:16.557 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-51-50.us-east-2.compute.internal" not ready since 2024-05-05 09:07:10 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 09:07:45.537 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-51-50.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 09:07:34.081948       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 09:07:34.082277       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714900054 cert, and key in /tmp/serving-cert-4145117364/serving-signer.crt, /tmp/serving-cert-4145117364/serving-signer.key\nStaticPodsDegraded: I0505 09:07:34.922521       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 09:07:34.940799       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-51-50.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 09:07:34.940908       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 09:07:34.955926       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4145117364/tls.crt::/tmp/serving-cert-4145117364/tls.key"\nStaticPodsDegraded: F0505 09:07:35.204560       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 09:13:03.538 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-70-120.us-east-2.compute.internal" not ready since 2024-05-05 09:11:03 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786975475973754880 junit 13 days ago
May 05 05:54:35.740 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-12-190.us-east-2.compute.internal" not ready since 2024-05-05 05:52:35 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 05:55:09.521 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-12-190.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 05:55:05.180265       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 05:55:05.180474       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714888505 cert, and key in /tmp/serving-cert-1009420703/serving-signer.crt, /tmp/serving-cert-1009420703/serving-signer.key\nStaticPodsDegraded: I0505 05:55:05.466581       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 05:55:05.468135       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-12-190.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 05:55:05.468248       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 05:55:05.468876       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1009420703/tls.crt::/tmp/serving-cert-1009420703/tls.key"\nStaticPodsDegraded: F0505 05:55:05.711567       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 06:00:43.283 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-165.us-east-2.compute.internal" not ready since 2024-05-05 06:00:34 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786986436382167040 junit 13 days ago
May 05 07:05:14.554 - 35s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-67-30.us-west-2.compute.internal" not ready since 2024-05-05 07:03:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 07:05:49.603 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-67-30.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 07:05:41.328513       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 07:05:41.328827       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714892741 cert, and key in /tmp/serving-cert-2603356672/serving-signer.crt, /tmp/serving-cert-2603356672/serving-signer.key\nStaticPodsDegraded: I0505 07:05:41.724575       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 07:05:41.733299       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-67-30.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 07:05:41.733432       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 07:05:41.744681       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2603356672/tls.crt::/tmp/serving-cert-2603356672/tls.key"\nStaticPodsDegraded: F0505 07:05:42.065598       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 07:11:54.287 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-13-105.us-west-2.compute.internal" not ready since 2024-05-05 07:11:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786992006690508800 junit 13 days ago
May 05 07:20:11.433 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-4-210.us-west-1.compute.internal" not ready since 2024-05-05 07:19:46 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 07:20:22.203 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-4-210.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 07:20:11.152463       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 07:20:11.152770       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714893611 cert, and key in /tmp/serving-cert-2608014496/serving-signer.crt, /tmp/serving-cert-2608014496/serving-signer.key\nStaticPodsDegraded: I0505 07:20:11.831748       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 07:20:11.846157       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-210.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 07:20:11.846291       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 07:20:11.869754       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2608014496/tls.crt::/tmp/serving-cert-2608014496/tls.key"\nStaticPodsDegraded: F0505 07:20:12.236438       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1786962216533102592 junit 13 days ago
May 05 05:12:07.152 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-88-24.us-east-2.compute.internal" not ready since 2024-05-05 05:11:59 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 05:12:22.160 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-88-24.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 05:12:11.742448       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 05:12:11.743774       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714885931 cert, and key in /tmp/serving-cert-529335087/serving-signer.crt, /tmp/serving-cert-529335087/serving-signer.key\nStaticPodsDegraded: I0505 05:12:12.272662       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 05:12:12.282395       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-88-24.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 05:12:12.282553       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 05:12:12.295390       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-529335087/tls.crt::/tmp/serving-cert-529335087/tls.key"\nStaticPodsDegraded: F0505 05:12:12.465121       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 05:18:07.602 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-93-145.us-east-2.compute.internal" not ready since 2024-05-05 05:17:56 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786968497847275520 junit 13 days ago
May 05 05:35:40.160 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-77-97.us-east-2.compute.internal" not ready since 2024-05-05 05:33:40 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 05:36:14.819 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-77-97.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 05:36:04.209565       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 05:36:04.209916       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714887364 cert, and key in /tmp/serving-cert-1852846521/serving-signer.crt, /tmp/serving-cert-1852846521/serving-signer.key\nStaticPodsDegraded: I0505 05:36:05.104841       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 05:36:05.120978       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-77-97.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 05:36:05.121086       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 05:36:05.139677       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1852846521/tls.crt::/tmp/serving-cert-1852846521/tls.key"\nStaticPodsDegraded: F0505 05:36:05.411558       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 05:41:45.143 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-66-239.us-east-2.compute.internal" not ready since 2024-05-05 05:41:38 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786957116825669632 junit 13 days ago
May 05 04:47:02.803 - 16s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-84-32.us-east-2.compute.internal" not ready since 2024-05-05 04:46:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 04:47:19.127 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-84-32.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 04:47:07.561491       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 04:47:07.566082       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714884427 cert, and key in /tmp/serving-cert-3719459597/serving-signer.crt, /tmp/serving-cert-3719459597/serving-signer.key\nStaticPodsDegraded: I0505 04:47:07.962456       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 04:47:07.981912       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-84-32.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 04:47:07.982039       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 04:47:08.007430       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3719459597/tls.crt::/tmp/serving-cert-3719459597/tls.key"\nStaticPodsDegraded: F0505 04:47:08.266477       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 04:52:52.259 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-31-215.us-east-2.compute.internal" not ready since 2024-05-05 04:52:31 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1786945400087777280 junit 13 days ago
May 05 04:03:29.162 - 13s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-69-27.ec2.internal" not ready since 2024-05-05 04:03:20 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 04:03:42.306 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-69-27.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 04:03:31.851434       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 04:03:31.853649       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714881811 cert, and key in /tmp/serving-cert-2215928167/serving-signer.crt, /tmp/serving-cert-2215928167/serving-signer.key\nStaticPodsDegraded: I0505 04:03:32.330447       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 04:03:32.343160       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-69-27.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 04:03:32.343529       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 04:03:32.365686       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2215928167/tls.crt::/tmp/serving-cert-2215928167/tls.key"\nStaticPodsDegraded: F0505 04:03:32.560771       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 04:09:25.585 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-81-46.ec2.internal" not ready since 2024-05-05 04:09:13 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786931355007848448 junit 13 days ago
May 05 03:07:35.420 - 509ms E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-2-159.ec2.internal" not ready since 2024-05-05 03:07:19 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 03:07:35.929 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-2-159.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 03:07:31.021355       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 03:07:31.021962       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714878451 cert, and key in /tmp/serving-cert-3507944250/serving-signer.crt, /tmp/serving-cert-3507944250/serving-signer.key\nStaticPodsDegraded: I0505 03:07:31.709693       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 03:07:31.720223       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-2-159.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 03:07:31.720358       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 03:07:31.731853       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3507944250/tls.crt::/tmp/serving-cert-3507944250/tls.key"\nStaticPodsDegraded: F0505 03:07:31.984855       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 05 03:12:56.947 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-67-33.ec2.internal" not ready since 2024-05-05 03:10:56 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786931355007848448 junit 13 days ago
May 05 03:21:10.044 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-51-112.ec2.internal" not ready since 2024-05-05 03:21:01 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 03:21:22.145 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-51-112.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 03:21:13.062891       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0505 03:21:13.063366       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714879273 cert, and key in /tmp/serving-cert-3970771440/serving-signer.crt, /tmp/serving-cert-3970771440/serving-signer.key\nStaticPodsDegraded: I0505 03:21:13.714150       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0505 03:21:13.723495       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-51-112.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0505 03:21:13.723616       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762\nStaticPodsDegraded: I0505 03:21:13.743011       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3970771440/tls.crt::/tmp/serving-cert-3970771440/tls.key"\nStaticPodsDegraded: F0505 03:21:13.942583       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1786914881031639040 junit 13 days ago
May 05 02:20:07.576 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-99-36.us-west-2.compute.internal" not ready since 2024-05-05 02:18:07 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 02:20:42.525 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-99-36.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 02:20:34.015951       1 cmd.go:245] Using insecure, self-signed certificates
StaticPodsDegraded: I0505 02:20:34.016217       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714875634 cert, and key in /tmp/serving-cert-3106697434/serving-signer.crt, /tmp/serving-cert-3106697434/serving-signer.key
StaticPodsDegraded: I0505 02:20:34.357970       1 observer_polling.go:159] Starting file observer
StaticPodsDegraded: W0505 02:20:34.359631       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-99-36.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: I0505 02:20:34.359814       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762
StaticPodsDegraded: I0505 02:20:34.360483       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3106697434/tls.crt::/tmp/serving-cert-3106697434/tls.key"
StaticPodsDegraded: F0505 02:20:34.661424       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: 
NodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 02:26:51.095 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-90-110.us-west-2.compute.internal" not ready since 2024-05-05 02:26:42 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1786892185476534272 junit (2 weeks ago)
May 05 00:41:04.655 - 37s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-121-38.us-west-1.compute.internal" not ready since 2024-05-05 00:39:04 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 00:41:41.661 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-121-38.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 00:41:33.388942       1 cmd.go:245] Using insecure, self-signed certificates
StaticPodsDegraded: I0505 00:41:33.389131       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714869693 cert, and key in /tmp/serving-cert-197322703/serving-signer.crt, /tmp/serving-cert-197322703/serving-signer.key
StaticPodsDegraded: I0505 00:41:33.736667       1 observer_polling.go:159] Starting file observer
StaticPodsDegraded: W0505 00:41:33.738152       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-38.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: I0505 00:41:33.738266       1 builder.go:299] check-endpoints version 4.15.0-202405032206.p0.gf5c5a60.assembly.stream.el9-f5c5a60-f5c5a609fa5e318379ceb86d346122814dc81762
StaticPodsDegraded: I0505 00:41:33.738970       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-197322703/tls.crt::/tmp/serving-cert-197322703/tls.key"
StaticPodsDegraded: F0505 00:41:34.087030       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: 
NodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 00:47:31.882 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-36-93.us-west-1.compute.internal" not ready since 2024-05-05 00:47:21 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

Found in 81.63% of runs (761.90% of failures) across 196 total runs and 1 job (10.71% failed)
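The clusteroperator condition events above share one layout: a timestamp, an optional duration, a severity letter (E or W), then `clusteroperator/<name> condition/<cond> reason/<reason> status/<bool>` followed by free-form message text. As a minimal sketch of how such lines could be picked apart when triaging these search results (the helper and regex below are hypothetical, not part of the CI tooling):

```python
import re

# Hypothetical parser for condition lines of the form shown above, e.g.
# "May 05 02:26:51.095 E clusteroperator/kube-apiserver condition/Degraded
#  reason/NodeController_MasterNodesReady status/True ..."
LINE_RE = re.compile(
    r"(?P<ts>\w+ \d+ [\d:.]+)"    # timestamp, e.g. "May 05 02:26:51.095"
    r"(?: - \S+)?"                # optional duration, e.g. " - 34s"
    r"\s+(?P<level>[EW])\s+"      # severity: E (error) or W (warning)
    r"clusteroperator/(?P<operator>\S+)\s+"
    r"condition/(?P<condition>\S+)\s+"
    r"reason/(?P<reason>\S+)\s+"
    r"status/(?P<status>\S+)"
)

def parse_condition_line(line):
    """Return the structured fields of a condition event, or None."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

sample = ("May 05 02:26:51.095 E clusteroperator/kube-apiserver "
          "condition/Degraded reason/NodeController_MasterNodesReady "
          "status/True NodeControllerDegraded: The master nodes not ready")
print(parse_condition_line(sample))
```

Grouping the parsed records by `reason` would separate the NodeController_MasterNodesReady blips from the check-endpoints container terminations seen in the runs above.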